CN115752442B - Monocular vision-based auxiliary inertial positioning method - Google Patents

Monocular vision-based auxiliary inertial positioning method

Info

Publication number
CN115752442B
CN115752442B (application CN202211560882.8A)
Authority
CN
China
Prior art keywords
inertial
reference image
time
calculating
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211560882.8A
Other languages
Chinese (zh)
Other versions
CN115752442A (en)
Inventor
于振华
赵渊
李坤
丁国良
曹中心
孟利平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunlai Intelligent Equipment Wuxi Co ltd
Original Assignee
Yunlai Intelligent Equipment Wuxi Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yunlai Intelligent Equipment Wuxi Co ltd
Priority to CN202211560882.8A
Publication of CN115752442A
Application granted
Publication of CN115752442B
Legal status: Active
Anticipated expiration


Abstract

The invention discloses a monocular vision-based auxiliary inertial positioning method, which belongs to the technical field of navigation and positioning. A monocular vision system acquires the world coordinates of markers and uses them to correct the positioning error of the inertial system, thereby improving positioning accuracy. Because the vision system corrects the inertial positioning error only intermittently, the inertial system and the vision system do not need to be fused in real time, so the cost and computational requirements are very low. This greatly facilitates embedded implementation, popularization and application of the algorithm, effectively reduces the cost and computational demand of combined positioning, and guarantees positioning accuracy to a certain extent.

Description

Monocular vision-based auxiliary inertial positioning method
Technical Field
The invention relates to the technical field of navigation positioning, in particular to a monocular vision-based auxiliary inertial positioning method.
Background
With the rapid development of artificial intelligence and perception technologies, new-generation intelligent carriers such as unmanned aerial vehicles, unmanned ground vehicles and mobile robots are widely applied in fields such as battlefield reconnaissance, precision strike and logistics distribution. The essence of autonomous navigation for an intelligent carrier is to reach a specified destination safely without human intervention, and navigation and positioning is a critical issue in doing so. Inertial navigation, based on Newtonian mechanics, uses accelerometers and gyroscopes to sense the acceleration and angular velocity of the carrier and computes its navigation parameters by dead reckoning. It offers high autonomy, high concealment and high short-term precision. However, its error diverges gradually: the longer the system runs, the larger the error becomes. In contrast, visual navigation uses a monocular or binocular camera to acquire environmental information and extracts position information through image processing and positioning algorithms to complete the navigation task, with the advantage that its errors do not accumulate over time. However, purely visual navigation has unavoidable inherent drawbacks: it depends on the texture features of the scene, is susceptible to lighting conditions, and has difficulty handling fast rotational motion. Combining the inertial system with the vision system means that, on the one hand, the vision system can correct the accumulated error of the inertial system and, on the other hand, the inertial system can compensate for the limited real-time performance of the vision system. Visual-inertial integrated navigation has therefore gradually become a research hotspot in the field of autonomous navigation.
At present, there are two main schemes for model-based visual-inertial integrated navigation: one fuses inertial and visual information using filtering techniques; the other fuses them using nonlinear iterative optimization. Model-based visual-inertial integrated navigation requires input data with a high signal-to-noise ratio, and the overall performance of an algorithm depends not only on its underlying principle but also on the rationality and accuracy of its parameters. Researchers have therefore developed a series of deep-learning-based visual-inertial integrated navigation techniques, the most straightforward idea being to replace individual modules of traditional algorithms with deep neural networks. Whether based on filtering, optimization or deep learning, visual-inertial combination algorithms place high demands on cost and computational power.
Disclosure of Invention
In view of the shortcomings of the prior art, the invention aims to provide a monocular vision-based auxiliary inertial positioning method that solves the following technical problem: existing combined inertial and visual positioning schemes are costly and computationally demanding.
The aim of the invention can be achieved by the following technical scheme:
a monocular vision-based auxiliary inertial positioning method comprises the following steps:
step one: defining the local geographic coordinate system as the navigation frame, denoted the n frame, whose x, y and z axes point east, north and up (opposite to the direction of gravity), respectively; defining the inertial system coordinate frame as the body frame, denoted the b frame, whose x, y and z axes form a right-handed coordinate system;
step two: letting reference image 1 and reference image 2 be two frames acquired by the monocular vision system at times t1 and t2, respectively; first extracting SIFT features for image registration, and applying the RANSAC algorithm to reduce the mismatch rate;
step three: calculating the motion relationship between reference image 1 and reference image 2 by epipolar geometry;
step four: calculating depth information, i.e. the distances from the points in reference image 1 and reference image 2 to the camera, by triangulation;
step five: extracting the edges of the marker in reference image 2 with the Canny operator and calculating the pixel coordinates of the marker centroid;
step six: calculating the attitude and position of the inertial system at time t2;
step seven: calculating the attitude of the camera at time t2 from the attitude of the inertial system at time t2 and the mounting relationship between the monocular camera and the inertial system;
step eight: calculating the world coordinates of the camera at time t2 from the attitude of the camera at time t2, the depth information corresponding to the pixel coordinates of the marker centroid in reference image 2, and the world coordinates of the marker centroid;
step nine: taking the world coordinates of the camera at time t2 as the reference, calculating the world coordinates of the inertial system at time t2 from the mounting relationship between the monocular camera and the inertial system;
step ten: taking the world coordinates of the inertial system calculated in step nine as a correction value, and correcting the position of the inertial system at the corresponding time;
step eleven: subsequent position updates take the position of step ten as the reference, and subsequent corrections repeat steps two to ten.
According to a further aspect of the present invention, the motion relationship in the third step is a rotation matrix and a translation vector.
According to a further scheme of the invention, the inertial system consists of a three-axis MEMS gyroscope, a three-axis MEMS accelerometer and a three-axis magnetometer; the axes of the three-axis MEMS gyroscope, the three-axis MEMS accelerometer and the three-axis magnetometer point in the same directions.
Compared with the prior art, the invention has the beneficial effects that:
the monocular vision system acquires the world coordinates of the markers to be used for correcting the positioning errors of the inertial system, so that the positioning accuracy is improved, the inertial system and the vision system do not need to be integrated and calculated in real time, the cost and the calculation capability are very low, the embedded acceleration and popularization and application of the algorithm are very facilitated, the combined positioning cost and the calculation force requirement are effectively reduced, and the positioning accuracy is ensured to a certain extent.
Drawings
Fig. 1 is a flow chart of a monocular vision-based aided inertial positioning method according to the present invention.
Detailed Description
The invention will be described in further detail below with reference to the drawings and specific embodiments. The embodiments are presented for purposes of illustration and description and are not intended to be exhaustive or to limit the invention to the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, and to enable others of ordinary skill in the art to understand the invention in its various embodiments and with the various modifications suited to the particular use contemplated.
Referring to fig. 1, the invention discloses a monocular vision-based auxiliary inertial positioning method, which comprises the following steps:
step one: defining the local geographic coordinate system as the navigation frame, denoted the n frame, whose x, y and z axes point east, north and up (opposite to the direction of gravity), respectively; defining the inertial system coordinate frame as the body frame, denoted the b frame, whose x, y and z axes form a right-handed coordinate system; the inertial system consists of a three-axis MEMS gyroscope, a three-axis MEMS accelerometer and a three-axis magnetometer, whose axes point in the same directions;
step two: letting reference image 1 and reference image 2 be two frames acquired by the monocular vision system at times t1 and t2, respectively, where t1 < t2; first extracting the SIFT features of the two images for image registration, and applying the RANSAC algorithm to reduce the mismatch rate (an illustrative sketch of this step follows the step list);
step three: calculating the motion relationship between reference image 1 and reference image 2, namely a rotation matrix and a translation vector, by epipolar geometry;
step four: calculating depth information, i.e. the distances from the points in reference image 1 and reference image 2 to the camera imaging plane, by triangulation (steps three and four are illustrated together in a sketch below);
step five: extracting the edges of the marker in reference image 2 with the Canny operator and calculating the pixel coordinates of the marker centroid;
step six: calculating the attitude and position of the inertial system at time t2 (a strapdown sketch of this step is given below);
step seven: calculating the attitude of the camera at time t2 from the attitude of the inertial system at time t2 and the mounting relationship between the monocular camera and the inertial system;
step eight: calculating the world coordinates of the camera at time t2 from the attitude of the camera at time t2, the depth information corresponding to the pixel coordinates of the marker centroid in reference image 2, and the world coordinates of the marker centroid;
step nine: taking the world coordinates of the camera at time t2 as the reference, calculating the world coordinates of the inertial system at time t2 from the mounting relationship between the monocular camera and the inertial system;
step ten: taking the world coordinates of the inertial system calculated in step nine as a correction value, and correcting the position of the inertial system at the corresponding time (steps five and seven to ten are illustrated in the last sketch below);
step eleven: subsequent position updates take the position of step ten as the reference, and subsequent corrections repeat steps two to ten; intermittently correcting the positioning error of the inertial system with the position information from the monocular vision system further improves the positioning accuracy.
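As an illustration of step two, the following is a minimal Python sketch (assuming OpenCV with SIFT support and NumPy) of SIFT-based registration of reference image 1 and reference image 2 with RANSAC rejection of mismatches. The function name, the Lowe ratio threshold of 0.75 and the RANSAC reprojection threshold of 1 pixel are illustrative assumptions, not values prescribed by the invention.
import cv2
import numpy as np

def register_images(img1_gray, img2_gray):
    """Step two: match SIFT keypoints between the two frames, keep RANSAC inliers."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1_gray, None)
    kp2, des2 = sift.detectAndCompute(img2_gray, None)

    # Nearest-neighbour matching with Lowe's ratio test to prune weak matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn_matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in knn_matches if m.distance < 0.75 * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # RANSAC on the fundamental matrix removes the remaining mismatches.
    _, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    keep = inlier_mask.ravel().astype(bool)
    return pts1[keep], pts2[keep]
The retained point pairs pts1 and pts2 are the inputs to the epipolar-geometry and triangulation sketch given below.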
Specifically, in step one, MEMS stands for Micro-Electro-Mechanical Systems.
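Continuing the illustration, steps three and four can be realised with the essential matrix and triangulation, as in the sketch below. Calibrated camera intrinsics K are assumed known; pts1 and pts2 are the inlier pixel coordinates from the previous sketch. Note that with a monocular camera the recovered translation, and therefore the triangulated depth, is determined only up to scale unless an external reference fixes it; the sketch leaves that scaling to the caller.
import cv2
import numpy as np

def motion_and_depth(pts1, pts2, K):
    """Steps three and four: rotation matrix, translation vector and point depths."""
    # Epipolar geometry: essential matrix, then decomposition into R and t
    # (pose of the camera at t2 relative to t1, with t of unit norm).
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulation with projection matrices P1 = K[I|0] and P2 = K[R|t].
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    pts3d = (pts4d[:3] / pts4d[3]).T              # points in the camera frame at t1

    depth1 = pts3d[:, 2]                          # depth seen from reference image 1
    depth2 = (R @ pts3d.T + t).T[:, 2]            # depth seen from reference image 2
    return R, t, pts3d, depth1, depth2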
Specifically, in step two, SIFT stands for Scale-Invariant Feature Transform, and RANSAC for Random Sample Consensus.
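Step six, the propagation of the inertial attitude and position between t1 and t2, can be sketched as a simple strapdown update driven by the MEMS gyroscope and accelerometer, expressed in the east-north-up navigation frame of step one. The sketch below is a minimal first-order mechanisation under simplifying assumptions (nominal gravity of 9.81 m/s², no coning/sculling compensation, no bias estimation); it is an illustration, not the invention's prescribed mechanisation.
import numpy as np

GRAVITY_N = np.array([0.0, 0.0, -9.81])   # nominal gravity in the n frame (east-north-up)

def quat_mult(q, r):
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_to_rot(q):
    w, x, y, z = q
    return np.array([[1-2*(y*y+z*z), 2*(x*y-w*z),   2*(x*z+w*y)],
                     [2*(x*y+w*z),   1-2*(x*x+z*z), 2*(y*z-w*x)],
                     [2*(x*z-w*y),   2*(y*z+w*x),   1-2*(x*x+y*y)]])

def strapdown_step(q_nb, vel_n, pos_n, gyro_b, accel_b, dt):
    """One IMU update; q_nb rotates b-frame vectors into the n frame."""
    # Attitude: integrate the body angular rate as a small rotation quaternion.
    dtheta = gyro_b * dt
    dq = np.concatenate([[1.0], 0.5 * dtheta])
    q_nb = quat_mult(q_nb, dq)
    q_nb /= np.linalg.norm(q_nb)

    # Velocity and position: rotate the specific force to the n frame, add gravity.
    acc_n = quat_to_rot(q_nb) @ accel_b + GRAVITY_N
    vel_n = vel_n + acc_n * dt
    pos_n = pos_n + vel_n * dt
    return q_nb, vel_n, pos_n
Calling strapdown_step at the IMU sample rate from the last corrected state yields the attitude and position of the inertial system at time t2 used in steps seven to ten.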
The invention uses the monocular vision system to obtain the world coordinates of the markers and thereby correct the positioning error of the inertial system, improving positioning accuracy; at the same time, the inertial system and the vision system do not need to be fused in real time, so the cost and computational requirements are very low, which greatly facilitates embedded implementation, popularization and application of the algorithm and effectively reduces the cost and computational demand of combined positioning.
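Finally, steps five and seven to ten can be illustrated as follows: extract the marker centroid in reference image 2 with the Canny operator, back-project it using its triangulated depth and the camera attitude derived from the inertial attitude and the camera/IMU mounting, and use the known world coordinates of the marker centroid to recover the world coordinates of the camera and then of the inertial system. The names R_bc and t_bc for the mounting rotation and lever arm, the Canny thresholds, and the assumption that the image (or region of interest) contains only the marker are illustrative, not specified by the invention.
import cv2
import numpy as np

def marker_centroid(img2_gray, low=50, high=150):
    """Step five: Canny edges of the marker region, then the centroid pixel (u, v)."""
    edges = cv2.Canny(img2_gray, low, high)
    m = cv2.moments(edges, binaryImage=True)
    return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

def correct_inertial_position(R_nb, R_bc, t_bc, K, uv, depth, marker_world):
    # Step seven: camera attitude in the n frame from the inertial attitude
    # R_nb and the camera-to-body mounting rotation R_bc.
    R_nc = R_nb @ R_bc

    # Step eight: the marker centroid pixel uv with its depth along the optical
    # axis gives the marker position in the camera frame; subtracting its rotated
    # value from the known marker world coordinates gives the camera position.
    ray_c = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    p_marker_cam = depth * ray_c / ray_c[2]
    p_cam_world = marker_world - R_nc @ p_marker_cam

    # Steps nine and ten: world coordinates of the inertial system via the lever
    # arm t_bc (camera origin expressed in the b frame); this value replaces the
    # drifted inertial position at time t2, and the subsequent dead reckoning of
    # step eleven restarts from it.
    p_imu_world = p_cam_world - R_nb @ t_bc
    return p_cam_world, p_imu_world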
In the description of the present invention, it should be noted that, unless explicitly specified and limited otherwise, terms such as "mounted" and "connected" are to be construed broadly: a connection may be fixed, detachable or integral; mechanical or electrical; direct, indirect through an intermediate medium, or a communication between two elements. The specific meaning of these terms in the present invention will be understood by those of ordinary skill in the art on a case-by-case basis.
The foregoing describes one embodiment of the present invention in detail, but the description is only a preferred embodiment of the present invention and should not be construed as limiting the scope of the invention. All equivalent changes and modifications within the scope of the present invention are intended to be covered by the present invention.

Claims (1)

1. A monocular vision-based auxiliary inertial positioning method, characterized by comprising the following steps:
step one: defining the local geographic coordinate system as the navigation frame, denoted the n frame, whose x, y and z axes point east, north and up (opposite to the direction of gravity), respectively; defining the inertial system coordinate frame as the body frame, denoted the b frame, whose x, y and z axes form a right-handed coordinate system;
step two: letting reference image 1 and reference image 2 be two frames acquired by the monocular vision system at times t1 and t2, respectively; first extracting the SIFT features of the two images for image registration, and applying the RANSAC algorithm to reduce the mismatch rate;
step three: calculating a motion relationship between reference image 1 and reference image 2 by epipolar geometry;
step four: calculating depth information, i.e. the distances from the points in reference image 1 and reference image 2 to the camera imaging plane, by triangulation;
step five: extracting the edges of the marker in reference image 2 with the Canny operator and calculating the pixel coordinates of the marker centroid;
step six: calculating the attitude and position of the inertial system at time t2;
step seven: calculating the attitude of the camera at time t2 from the attitude of the inertial system at time t2 and the mounting relationship between the monocular camera and the inertial system;
step eight: calculating the world coordinates of the camera at time t2 from the attitude of the camera at time t2, the depth information corresponding to the pixel coordinates of the marker centroid in reference image 2, and the world coordinates of the marker centroid;
step nine: taking the world coordinates of the camera at time t2 as the reference, calculating the world coordinates of the inertial system at time t2 from the mounting relationship between the monocular camera and the inertial system;
step ten: taking the world coordinates of the inertial system calculated in step nine as a correction value, and correcting the position of the inertial system at the corresponding time;
step eleven: subsequent position updates take the position of step ten as the reference, and subsequent corrections repeat steps two to ten;
wherein the motion relationship in step three is a rotation matrix and a translation vector;
the inertial system consists of a three-axis MEMS gyroscope, a three-axis MEMS accelerometer and a three-axis magnetometer; the axes of the three-axis MEMS gyroscope, the three-axis MEMS accelerometer and the three-axis magnetometer point in the same directions.
CN202211560882.8A 2022-12-07 2022-12-07 Monocular vision-based auxiliary inertial positioning method Active CN115752442B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211560882.8A CN115752442B (en) 2022-12-07 2022-12-07 Monocular vision-based auxiliary inertial positioning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211560882.8A CN115752442B (en) 2022-12-07 2022-12-07 Monocular vision-based auxiliary inertial positioning method

Publications (2)

Publication Number Publication Date
CN115752442A CN115752442A (en) 2023-03-07
CN115752442B (en) 2024-03-12

Family

ID=85343859

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211560882.8A Active CN115752442B (en) 2022-12-07 2022-12-07 Monocular vision-based auxiliary inertial positioning method

Country Status (1)

Country Link
CN (1) CN115752442B (en)


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9766074B2 (en) * 2008-03-28 2017-09-19 Regents Of The University Of Minnesota Vision-aided inertial navigation
US9068847B2 (en) * 2009-04-22 2015-06-30 Honeywell International Inc. System and method for collaborative navigation
US9607401B2 (en) * 2013-05-08 2017-03-28 Regents Of The University Of Minnesota Constrained key frame localization and mapping for vision-aided inertial navigation
CN108765498B (en) * 2018-05-30 2019-08-23 百度在线网络技术(北京)有限公司 Monocular vision tracking, device and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101598556A (en) * 2009-07-15 2009-12-09 北京航空航天大学 Unmanned plane vision/inertia integrated navigation method under a kind of circumstances not known
CN102162738A (en) * 2010-12-08 2011-08-24 中国科学院自动化研究所 Calibration method of camera and inertial sensor integrated positioning and attitude determining system
CN102435188A (en) * 2011-09-15 2012-05-02 南京航空航天大学 Monocular vision/inertia autonomous navigation method for indoor environment
WO2020087846A1 (en) * 2018-10-31 2020-05-07 东南大学 Navigation method based on iteratively extended kalman filter fusion inertia and monocular vision
CN110702107A (en) * 2019-10-22 2020-01-17 北京维盛泰科科技有限公司 Monocular vision inertial combination positioning navigation method
CN112229424A (en) * 2020-11-16 2021-01-15 浙江商汤科技开发有限公司 Parameter calibration method and device for visual inertial system, electronic equipment and medium
CN114136315A (en) * 2021-11-30 2022-03-04 山东天星北斗信息科技有限公司 Monocular vision-based auxiliary inertial integrated navigation method and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
An autonomous landing navigation algorithm for UAVs based on inertial/visual information fusion; 刘畅 et al.; 《导航定位与授时》; Vol. 3, No. 6; pp. 6-11 *
A vision-aided inertial attitude determination algorithm with multi-time-scale fusion; 张琳, 曾成, 王羿帆; 《现代电子技术》; No. 12; full text *
Research on key technologies of inertial/visual integrated navigation; 吴禹彤; 《中国优秀硕士学位论文全文数据库 信息科技辑》; No. 2; I136-1491 *

Also Published As

Publication number Publication date
CN115752442A (en) 2023-03-07

Similar Documents

Publication Publication Date Title
CN109029433B (en) Method for calibrating external parameters and time sequence based on vision and inertial navigation fusion SLAM on mobile platform
CN111024066B (en) Unmanned aerial vehicle vision-inertia fusion indoor positioning method
CN109579843B (en) Multi-robot cooperative positioning and fusion image building method under air-ground multi-view angles
CN108303099B (en) Autonomous navigation method in unmanned plane room based on 3D vision SLAM
CN110044354B (en) Binocular vision indoor positioning and mapping method and device
CN109945858B (en) Multi-sensing fusion positioning method for low-speed parking driving scene
CN109307508B (en) Panoramic inertial navigation SLAM method based on multiple key frames
CN109211241B (en) Unmanned aerial vehicle autonomous positioning method based on visual SLAM
CN106679648B (en) Visual inertia combination SLAM method based on genetic algorithm
CN110068335B (en) Unmanned aerial vehicle cluster real-time positioning method and system under GPS rejection environment
CN110533719B (en) Augmented reality positioning method and device based on environment visual feature point identification technology
CN109579825A (en) Robot positioning system and method based on binocular vision and convolutional neural networks
Bao et al. Vision-based horizon extraction for micro air vehicle flight control
CN115272596A (en) Multi-sensor fusion SLAM method oriented to monotonous texture-free large scene
CN114001733A (en) Map-based consistency efficient visual inertial positioning algorithm
CN106556395A (en) A kind of air navigation aid of the single camera vision system based on quaternary number
Bazin et al. UAV attitude estimation by vanishing points in catadioptric images
Unicomb et al. Distance function based 6dof localization for unmanned aerial vehicles in gps denied environments
CN117152249A (en) Multi-unmanned aerial vehicle collaborative mapping and perception method and system based on semantic consistency
Xian et al. Fusing stereo camera and low-cost inertial measurement unit for autonomous navigation in a tightly-coupled approach
Fang et al. A motion tracking method by combining the IMU and camera in mobile devices
CN115752442B (en) Monocular vision-based auxiliary inertial positioning method
CN112444245A (en) Insect-imitated vision integrated navigation method based on polarized light, optical flow vector and binocular vision sensor
CN114485648B (en) Navigation positioning method based on bionic compound eye inertial system
Atsuzawa et al. Robot navigation in outdoor environments using odometry and convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Country or region after: China
Address after: Room 1008, building 3, 311 Yanxin Road, Huishan Economic Development Zone, Wuxi City, Jiangsu Province, 214000
Applicant after: Yunlai Intelligent Equipment (Wuxi) Co.,Ltd.
Address before: Room 1008, Building 3, No. 311, Yanxin Road, Huishan Economic Development Zone, Wuxi, Jiangsu 214000
Applicant before: WUXI A-CARRIER ROBOT CO.,LTD.
Country or region before: China
GR01 Patent grant