CN113790728A - Loosely-coupled multi-sensor fusion positioning algorithm based on visual odometer - Google Patents

Loosely-coupled multi-sensor fusion positioning algorithm based on visual odometer

Info

Publication number
CN113790728A
CN113790728A (Application CN202111148825.4A)
Authority
CN
China
Prior art keywords
pose
mobile terminal
visual odometer
odometer
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111148825.4A
Other languages
Chinese (zh)
Inventor
陈颖聪
关伟鹏
梁婉琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Foshan Nanhai Guangdong Technology University CNC Equipment Cooperative Innovation Institute
Foshan Guangdong University CNC Equipment Technology Development Co. Ltd
Original Assignee
Foshan Nanhai Guangdong Technology University CNC Equipment Cooperative Innovation Institute
Foshan Guangdong University CNC Equipment Technology Development Co. Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Foshan Nanhai Guangdong Technology University CNC Equipment Cooperative Innovation Institute, Foshan Guangdong University CNC Equipment Technology Development Co. Ltd filed Critical Foshan Nanhai Guangdong Technology University CNC Equipment Cooperative Innovation Institute
Priority to CN202111148825.4A priority Critical patent/CN113790728A/en
Publication of CN113790728A publication Critical patent/CN113790728A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C22/00 Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 Systems determining position data of a target
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/50 Systems of measurement based on relative movement of target
    • G01S17/58 Velocity or trajectory determination systems; Sense-of-movement determination systems
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/87 Combinations of systems using electromagnetic waves other than radio waves

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Automation & Control Theory (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a loosely-coupled multi-sensor fusion positioning algorithm based on a visual odometer, the system comprising a visual odometer, a rolling-shutter-effect (RSE) camera that supplies the images for the visual odometer, and a lidar. The lidar compensates the accumulated error; the visual odometer is unaffected by wheel slip under special conditions in harsh environments (such as uneven road surfaces, water surfaces or deserts); and VLP measurements provide high-precision pose initialization and pose calibration. High-precision positioning is achieved through state prediction from the visual odometer and measurement updates from the lidar scanner and VLP. The invention reduces the drift of the traditional wheel odometer while relaxing the required number of observable LEDs down to zero. By fusing visible light positioning, lidar and visual odometry, the positioning method lets the mobile terminal maximize its perception of the environment, obtain sufficient measurements, and achieve more accurate indoor positioning and navigation.

Description

Loosely-coupled multi-sensor fusion positioning algorithm based on visual odometer
Technical Field
The invention relates to the technical field of intelligent equipment, in particular to a loosely-coupled multi-sensor fusion positioning algorithm based on a visual odometer.
Background
Unlike a traditional odometer, a visual odometer does not rely on devices such as encoder discs; it can compute mileage using only consecutive image frames taken by a camera, which is very convenient and widely applicable. Compared with a wheel odometer, the visual odometer has the advantage of being immune to wheel slip in harsh environments such as uneven road surfaces, water surfaces or deserts. When a vehicle turns, the turning radii of the left and right wheels differ, which increases the error of the wheel odometer, whereas the visual odometer can provide more accurate trajectory estimation, with a relative-pose error in the range of 0.1% to 2% (2012 data). The visual odometer is important, and sometimes indispensable, in special situations such as environments where a wheel odometer cannot be used (e.g., unmanned aerial vehicles) or where GPS fails (e.g., underwater, outer space).
Visible light communication (VLC) technology largely fills the need for commercial indoor positioning thanks to its high precision, low cost and ease of implementation. At the same time, VLC produces no radio-frequency interference, so visible light positioning performs well in environments where RF radiation is strictly limited (such as hospitals and nuclear power stations). Visible light positioning (VLP) modulates the LEDs so that they flicker at different frequencies and transmit their pose information over the air. The receiving device on the mobile terminal captures the VLP information, processes the captured picture with image-processing techniques, demodulates the pose information of the LED, and computes the pose of the mobile terminal using geometric principles.
Camera-based VLP systems employ modulated LED lights mounted at known poses (e.g., on the ceiling) as artificial landmarks, associating each LED with its ID and coordinates through rolling-shutter-effect (RSE) based optical camera communication (OCC) measurements. If the number of lights required by VLP is to be reduced, the captured LED is no longer treated as a point but as an image whose geometric features are used to determine the receiver orientation and pose; this approach, however, requires an additional marker to be placed on the LED.
SLAM techniques build and update a map of an unknown environment while the mobile terminal simultaneously keeps track of its own pose: an environment model (map) is constructed at the same time as the pose of the terminal moving inside it is estimated. SLAM can be classified into two categories according to the sensor used: vision-based and lidar-based. Compared with other range-measuring devices, a lidar sensor measures its surroundings over a larger scanning angle and with high angular resolution, and it is invariant to illumination. High reliability and accuracy make lidar sensors a popular choice for pose estimation.
To address these problems, the invention discloses a loosely-coupled multi-sensor fusion positioning algorithm based on a visual odometer: a positioning algorithm with small positioning error, high precision, real-time correction, high reliability and adaptability to complex scenes.
Disclosure of Invention
The invention aims to provide a loosely-coupled multi-sensor fusion positioning algorithm based on a visual odometer, so as to solve the technical problems described in the background. To achieve this aim, the invention provides the following technical scheme: a loosely-coupled multi-sensor fusion positioning algorithm based on visual odometry, comprising:
Step S1: obtaining the mobile-terminal observation pose S_t(X_t, Y_t, Z_t, θ_t) calculated by SLO-VLP, where the yaw angle θ_t is corrected by a lidar matcher;
Step S2: taking the mobile-terminal pose calculated by SLO-VLP in step S1 as the initial pose of the particles in the adaptive Monte Carlo localization algorithm, and denoting the pose estimated by the adaptive Monte Carlo localization algorithm as (X_i, Y_i);
Step S3: obtaining the pose P_t of the mobile terminal estimated by the ORB-feature-based visual odometer;
Step S4: using the observation poses obtained in steps S1 and S2 as the observation values of the Kalman filtering algorithm to correct the mobile-terminal pose P_t predicted by the visual odometer in step S3.
Preferably, in step S1 the corrected mobile-terminal observation pose S_t(X_t, Y_t, Z_t, θ_t) calculated by SLO-VLP is obtained as follows:
Step S11: modulating the LED light to transmit the LED lamp ID and obtaining the LED lamp pose information corresponding to that ID;
Step S12: acquiring the pose (X_i, Y_i, Z_i) of an LED lamp in the world coordinate system and calculating the pose P_s(X_s, Y_s, Z_s) of the image center of the camera on the mobile terminal in the world coordinate system;
Step S13: calculating the position P_t(X_t, Y_t, Z_t) of the mobile terminal through the coordinate transformation between the coordinate system of the camera on the mobile terminal and the coordinate system of the mobile terminal;
Step S14: obtaining the estimated yaw angle γ_odom from the visual odometer on the mobile terminal and the relative direction transformation γ_LIDAR→Map detected by the lidar matching unit, from which the corrected yaw angle θ_t = γ = γ_odom + γ_LIDAR→Map is obtained; together with the P obtained in step S12, this yields the SLO-VLP pose S_t(X_t, Y_t, Z_t, θ_t) of the mobile terminal.
Preferably, in step S2 the pose (X_i, Y_i) obtained by the adaptive Monte Carlo localization algorithm is determined as follows:
Step S21: the SLO-VLP pose S_t obtained in step S1 is used as the initial pose of the filter particles of the adaptive Monte Carlo localization algorithm;
Step S22: the pose information obtained by the visual odometer sensor and the lidar data are input to the adaptive Monte Carlo localization localizer, which outputs the pose (X_i, Y_i) of the mobile terminal in the map coordinate system.
Preferably, in step S3 the pose of the mobile terminal estimated by the ORB-feature-based visual odometer is obtained as follows:
Step S31: feature detection: detecting interest points in the pictures taken by the camera and extracting keypoints;
Step S32: computing the descriptors of the keypoints obtained in step S31 and matching feature points based on these descriptors;
Step S33: estimating the essential matrix E by the eight-point method from the epipolar geometric constraint, and recovering the rotation matrix R and the translation t by singular value decomposition (SVD) of E;
Step S34: estimating the relative motion of the camera from the R and t obtained in step S33, and thereby estimating the current pose of the camera;
Step S35: calculating the pose of the mobile terminal from the TF transformation between the camera coordinate system and the mobile-terminal coordinate system.
Preferably, in step S4 the observation poses obtained in steps S1 and S2 are used as the observation values of the Kalman filtering algorithm to correct the mobile-terminal pose predicted by the visual odometer in step S3, as follows:
the pose observation Z_t is formed from the observation pose calculated by SLO-VLP and the observation pose from the lidar, and Kalman filtering is applied to the predicted pose obtained in step S3 to correct it, yielding the final pose P_t of the mobile terminal.
Compared with the prior art, the invention has the beneficial effects that:
1. The pose error of the mobile terminal estimated by the visual odometer is significantly reduced; the method can be applied in special environments where a wheel odometer cannot work, and the positioning precision is markedly improved.
2. The invention provides a reliable pose observation for the laser SLAM technique, so that the accumulated errors of sensors such as the visual odometer and the lidar can be corrected, a more accurate pose is obtained, and the robustness of the algorithm is enhanced.
3. Through the fusion of multiple sensors, the invention compensates for the shortcomings of the individual sensors and provides a more reliable pose estimate. It delivers accurate and reliable positioning to the terminal when LEDs are scarce or faulty; the robustness of the algorithm means the mobile terminal is not only suited to venues with complete lighting installations but can also provide high-precision positioning when information sources are scarce, laying a solid foundation for path planning and autonomous navigation.
Drawings
FIG. 1 is a general block diagram illustrating a visual odometer-based loosely-coupled multi-sensor fusion positioning algorithm of the present invention;
FIG. 2 is a schematic representation of the transformation between the world, camera and image coordinate systems;
FIG. 3 is a flow chart of the acquisition of the pose of the mobile terminal by the visual odometer.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 2, and fig. 3, the present invention provides a loosely coupled multi-sensor fusion positioning algorithm based on a visual odometer, the method includes:
Step S1: obtaining the mobile-terminal observation pose (X_t, Y_t, Z_t, θ_t), whose heading angle θ_t is corrected by a lidar matcher (AMCL).
Step S11: acquiring the pose (X) of a certain LED lamp body in a world coordinate systemi,Yi,Zi) And calculating the pose P (X) of the image center of the terminal camera in the world coordinate systems,Ys,Zs). The method comprises the following steps:
step S111: and modulating the LED light for transmitting the ID of the LED lamp body and obtaining the pose information of the LED lamp body corresponding to the ID of the lamp body. Modulating the LED light means that a light emitting chip of the LED light flickers according to a certain frequency, wherein the flickering frequency corresponds to a modulation signal; and in the modulation process, each LED lamp body is modulated and distributed with an ID code, and simultaneously the pose information of the LED lamp body is modulated.
Step S112: acquiring a photosensitive region ROI of a certain LED lamp body captured by a mobile terminal, identifying the identity (LED-ID) of the photosensitive region ROI, and retrieving the 3D pose P of an LED from a registered light database (a prefabricated LED landmark map)LED(Xi,Yi,Zi)。
Step S113: p acquired by step S112LED(Xi, Yi, Zi) can calculate the pose of the LED lamp body in the camera coordinate system as (x)i,yi) The method comprises the following steps:
Figure BDA0003286409340000061
λ is the height difference along the z-axis between the mobile terminal and the LED lamp. It is obtained from the proportional relation between the diameter of the LED on the image plane and the actual diameter of the LED lamp, using similar triangles and the imaging principle, i.e.
λ = D·f / (d_pixel · P_d)
where D is the diameter of the LED, d_pixel is the pixel distance (the diameter of the LED image in pixels), P_d is the conversion factor from pixel distance to physical distance, and f is the focal length of the camera.
The focal length f, the pixel sizes dx and dy, and the image-center pixel (u_0, v_0) constitute the intrinsic matrix K;
t is the translation vector, equivalent to the position of the mobile terminal in the three-dimensional world coordinate system;
R is the rotation matrix from the world coordinate system to the camera coordinate system [the projection equation combining K, R and t is given as an image in the original].
When only the two-dimensional plane is considered, the angles α and β are treated as constants, while γ is obtained from the odometer sensor.
Step S114: p obtained in step S112LED(Xi, Yi, Zi), one can calculate:
Zs=Zi-λ (2)
step S115: the method for calculating the pose of the camera center in the world coordinate system comprises the following steps:
by
Figure BDA0003286409340000064
Obtaining Ps [ Xs, Ys, Zs, gamma ]]K is the sum of f (focal length), dx/dy (pixel distance), u0/v0(image center pixel) of the feature matrix.
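For illustration of steps S113–S115, a minimal Python sketch of the position computation from one LED is given below. It assumes zero roll and pitch, square pixels, a known yaw γ, and sign conventions chosen here for readability; the function and its parameters are illustrative, not the patent's implementation.

```python
import math

def camera_world_position(led_world, led_pixel, principal_point,
                          d_pixel, D, f, pixel_size, gamma):
    """Estimate the camera-center pose P_s = (X_s, Y_s, Z_s, gamma) from one LED.

    led_world       : (X_i, Y_i, Z_i), LED pose in the world frame (from the landmark map)
    led_pixel       : (u, v), pixel coordinates of the LED image center
    principal_point : (u0, v0), image-center pixel
    d_pixel         : LED image diameter in pixels
    D               : physical LED diameter (m)
    f               : focal length (m)
    pixel_size      : P_d, metres per pixel on the sensor
    gamma           : yaw of the terminal (from the odometer, lidar-corrected)

    Assumes zero roll and pitch (optical axis vertical); signs are illustrative only.
    """
    X_i, Y_i, Z_i = led_world
    u, v = led_pixel
    u0, v0 = principal_point

    # Step S113: height difference by similar triangles (image diameter vs. true diameter).
    lam = D * f / (d_pixel * pixel_size)

    # Step S114: camera height in the world frame.
    Z_s = Z_i - lam

    # Step S115 (planar case): back-project the pixel offset to a metric offset at range lam,
    # then rotate by the yaw gamma into the world frame.
    x_cam = (u - u0) * pixel_size * lam / f
    y_cam = (v - v0) * pixel_size * lam / f
    X_s = X_i - (math.cos(gamma) * x_cam - math.sin(gamma) * y_cam)
    Y_s = Y_i - (math.sin(gamma) * x_cam + math.cos(gamma) * y_cam)
    return X_s, Y_s, Z_s, gamma
```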
Step S115: the pose St of the mobile terminal in the world coordinate system can be obtained by the following method:
Figure BDA0003286409340000071
wherein r isx,ry,rzAnd tx,ty,tzAre coefficients of rotation and translation of the coordinate transformation from the camera coordinate system to the base _ link coordinate system of the mobile terminal.
Step S12: the yaw angle γ used by SLO-VLP, taken from the odometer, is corrected through the TF coordinate relation by the lidar matcher (AMCL), as follows:
the lidar obtains the relative pose transformation between two instants by matching the current scan against the previous scan, and corrects the heading of the mobile terminal obtained from the odometer sensor according to the coordinate transformation of the heading γ, specifically:
θ_t = γ = γ_odom + γ_LIDAR→Map
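A direct realization of this yaw correction, with wrapping of the result to (−π, π] added here as an implementation assumption, might look like:

```python
import math

def corrected_yaw(gamma_odom: float, gamma_lidar_to_map: float) -> float:
    """theta_t = gamma_odom + gamma_LIDAR->Map, wrapped to (-pi, pi].
    The wrapping is an assumption of this sketch, not stated in the patent."""
    theta = gamma_odom + gamma_lidar_to_map
    return math.atan2(math.sin(theta), math.cos(theta))
```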
step S2: and taking the SLO-VLP pose obtained in the step S1 as an initialization pose of an adaptive Monte Carlo positioning Algorithm (AMCL) particle to obtain a pose (Xi, Yi) estimated by the AMCL. The method comprises the following steps:
step S21: the SLO-VLP posture St obtained in the step S1 is used as an initialization pose of AMCL filter particles;
step S22: inputting pose information and laser radar data acquired by the odometer sensor into the AMCL positioner, and outputting the pose of the terminal on a map;
step S3: and acquiring the pose of the mobile terminal matched by the visual odometer based on the ORB characteristics.
Step S31: and (3) feature detection: the method comprises the following steps of detecting interest points from pictures shot by a camera and extracting key points: the FAST (features From accessed Segment test) algorithm is used to detect the key points. This definition detects a circle of pixel values around a candidate keypoint based on the gray-scale values of the image around the keypoint, and if there are enough pixel points in the area around the candidate point to have a sufficiently large difference from the gray-scale value of the candidate point, the candidate point is considered as a keypoint.
Step S32: the descriptors of the keypoints obtained in step S31 are calculated, and feature point matching is performed based on the descriptors, as follows:
step S321: computing a BRIEF descriptor from the keypoints obtained by step S31:
the moment method is used to determine the direction of the FAST feature points. That is, the centroid of the feature point in the radius range with r is calculated through the moment, the coordinate of the feature point is connected with the centroid, and the included angle between the straight line and the abscissa axis is obtained, namely the direction of the feature point. The specific method comprises the following steps:
moment definition: m ispq=∑x,yxpyqI(x,y)
Centroid definition:
Figure BDA0003286409340000081
then the direction of vector OC is found, while if the range of x, y is kept at [ -r, r]When the feature point is taken as the origin of coordinates, the direction angle (i.e. the direction of the FAST feature point) obtained by taking the feature point as the radius of the neighborhood of the feature point is: θ ═ arctan (m)01,m10)。
Then randomly selecting N point pairs from the periphery of the key point p (circle with radius r), and for two points in each point pair, if the gray value of the former point is greater than that of the latter point, taking 1, otherwise taking 0, and thus calculating to obtain the descriptor of the key point p.
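A small numpy sketch of the intensity-centroid orientation described above in step S321 follows; border handling is simplified and the patch radius is an assumed parameter of this sketch.

```python
import numpy as np

def keypoint_orientation(img, kp, r=15):
    """Orientation of a FAST keypoint by the intensity-centroid (moment) method.

    m_pq = sum over the patch of x^p * y^q * I(x, y), with the keypoint as origin
    and x, y in [-r, r]; theta = atan2(m_01, m_10). Assumes the keypoint lies at
    least r pixels away from the image border.
    """
    x0, y0 = int(kp[0]), int(kp[1])
    patch = img[y0 - r:y0 + r + 1, x0 - r:x0 + r + 1].astype(np.float64)
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    m10 = np.sum(xs * patch)   # first-order moment in x
    m01 = np.sum(ys * patch)   # first-order moment in y
    return np.arctan2(m01, m10)
```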
Step S322: matching BRIEF descriptors in both images using Hamming distance.
Step S33: from epipolar geometric constraints, assume p2The feature points p of the camera pose motion R (rotation), t (translation) and the previous frame1Matched feature points, s1,s2Is the depth of a spatial point, P is the coordinate of a certain point in the world coordinate system, K is an internal reference, and K is known under a calibrated camera. Then there are:
s1p1=KP (1)
s2p2=K(RP+t) (2)
when using homogeneous coordinates, a vector will equal itself multiplied by an arbitrary non-zero constant. This is typically used to express a projective relationship. For example, s1p1 and p1 are in a projective relationship, which are equal in the sense of homogeneous coordinates. We call this equality relationship equal in the scale sense, written as:
Figure BDA0003286409340000082
then equation (1) (2) can be written as:
Figure BDA0003286409340000091
get x14=K-1p1,x2=K-1p2,x1,x2Is the coordinate on the normalization plane of two pixel points, and is obtained by substituting the formula:
Figure BDA0003286409340000092
both sides multiply t ^ (equivalent to both sides simultaneously add t), and both sides multiply x2 T
Simplifying to obtain: x is the number of2 Tt^Rx1=0
Let E ^ t ^ R, E be called intrinsic matrix
The essential matrix E is estimated according to the eight-point method, and the rotation and translation vectors R and t are recovered from E by Singular Value Decomposition (SVD).
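Step S33 can be realized with OpenCV's essential-matrix routines. The sketch below continues from the matches of the previous sketch and assumes a known intrinsic matrix K with illustrative values; RANSAC is used around the point-based solver as an implementation choice, not something the patent specifies.

```python
import numpy as np
import cv2

# Pixel coordinates of the matched keypoints from the previous sketch.
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

K = np.array([[700.0, 0.0, 320.0],   # illustrative intrinsics (fx, fy, cx, cy)
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

# Estimate E (x2^T t^ R x1 = 0) and recover R, t by SVD-based decomposition.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                  prob=0.999, threshold=1.0)
_, R, t, inliers = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
```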
Step S34: estimating the relative motion of the camera by the R, t obtained in the step S33, and calculating the pose of the camera:
Figure BDA0003286409340000093
step S35: the pose of the mobile terminal can be calculated according to the TF transformation relation between the camera coordinate system and the mobile terminal coordinate system (base _ link)
Figure BDA0003286409340000094
Figure BDA0003286409340000095
Wherein r isx,ry,rzAnd tx,ty,tzAre coefficients of rotation and translation of the coordinate transformation from the camera coordinate system to the base _ link coordinate system of the mobile terminal.
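For steps S34–S35, the following numpy sketch accumulates the relative motion (R, t) into the running camera pose and maps it into the terminal (base_link) frame through a fixed camera-to-base_link extrinsic; the use of 4×4 homogeneous matrices, the composition direction, and the extrinsic values are assumptions of this sketch.

```python
import numpy as np

def to_homogeneous(R, t):
    """4x4 homogeneous transform from rotation matrix R and translation vector t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(t).ravel()
    return T

def compose_camera_pose(T_world_cam, R_rel, t_rel):
    """Step S34: fold the incremental motion (R_rel, t_rel) into the running
    world<-camera pose. (R_rel, t_rel) is taken, as in step S33, to map points
    of the previous frame into the current frame, hence the inverse."""
    return T_world_cam @ np.linalg.inv(to_homogeneous(R_rel, t_rel))

def terminal_pose(T_world_cam, T_cam_base):
    """Step S35: pose of the mobile terminal (base_link) in the world frame,
    via the fixed camera->base_link extrinsic (the TF relation)."""
    return T_world_cam @ T_cam_base

# Illustrative usage: identity start, one visual-odometer step, arbitrary extrinsic.
T_world_cam = np.eye(4)
T_cam_base = to_homogeneous(np.eye(3), [0.10, 0.0, -0.20])
T_world_cam = compose_camera_pose(T_world_cam, np.eye(3), [0.05, 0.0, 0.0])
print(terminal_pose(T_world_cam, T_cam_base))
```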
Step S4: taking the observation pose obtained in the steps S1 and S2 as an input value of an EKF algorithmTo correct the mobile terminal pose estimate calculated by the visual odometer obtained in step S3
Figure BDA0003286409340000096
The method comprises the following steps:
step S41: calculating kalman gain K ═ Pt|t-1Ht T(HtPt|t-1Ht T+Rt)-1 (3)
Pt|t-1=FPt-1FTWherein
Figure BDA0003286409340000101
Wherein HtIs an observation matrix, RtIs the error covariance matrix of the observed noise and assumes that the observed noise follows a normal distribution.
Step S42: updating the pose vector and the covariance matrix by using the Kalman gain K obtained in the step S41, which specifically comprises the following steps:
x̂_t = x̂_{t|t−1} + K (Z_t − H_t x̂_{t|t−1})   (4)
P_t = P_{t|t−1} − K H_t P_{t|t−1}   (5)
In the above, x̂_t is the estimate of the true state value x_t, and x̂_{t|t−1} is the prediction of the state at time t based on time t−1. Assuming that both the input noise and the observation noise follow normal distributions, Q is the covariance of the input noise, R is the covariance of the observation noise, and P_t is the state covariance matrix.
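Equations (3)–(5) of steps S41–S42 amount to a standard Kalman measurement update; a self-contained numpy sketch follows, with an (x, y, θ) state and all matrices chosen purely for illustration.

```python
import numpy as np

def kalman_update(x_pred, P_pred, z, H, R_obs):
    """One measurement update of steps S41-S42.

    x_pred : predicted state (from the visual odometer), shape (n,)
    P_pred : predicted covariance P_{t|t-1}, shape (n, n)
    z      : observation Z_t built from the SLO-VLP and AMCL poses
    H      : observation matrix H_t
    R_obs  : observation-noise covariance R_t
    """
    S = H @ P_pred @ H.T + R_obs
    K = P_pred @ H.T @ np.linalg.inv(S)     # equation (3): Kalman gain
    x = x_pred + K @ (z - H @ x_pred)       # equation (4): state update
    P = P_pred - K @ H @ P_pred             # equation (5): covariance update
    return x, P

# Illustrative 2D example: state (x, y, theta), fully observed pose measurement.
x_pred = np.array([1.00, 2.00, 0.10])
P_pred = np.diag([0.05, 0.05, 0.02])
H = np.eye(3)
R_obs = np.diag([0.02, 0.02, 0.01])
z = np.array([1.05, 1.96, 0.12])
x_est, P_est = kalman_update(x_pred, P_pred, z, H, R_obs)
```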
Thus, the pose predicted by the visual-odometer prediction equation is updated with the fused multi-sensor observations through the Kalman filtering algorithm, correcting the accumulated error of the visual odometer.

Claims (5)

1. A loosely-coupled multi-sensor fusion positioning algorithm based on visual odometry, comprising:
Step S1: obtaining the mobile-terminal observation pose S_t(X_t, Y_t, Z_t, θ_t) calculated by SLO-VLP, where the yaw angle θ_t is corrected by a lidar matcher;
Step S2: taking the mobile-terminal pose calculated by SLO-VLP in step S1 as the initial pose of the particles in the adaptive Monte Carlo localization algorithm, and denoting the pose estimated by the adaptive Monte Carlo localization algorithm as (X_i, Y_i);
Step S3: obtaining the pose P_t of the mobile terminal estimated by the ORB-feature-based visual odometer;
Step S4: using the observation poses obtained in steps S1 and S2 as the observation values of the Kalman filtering algorithm to correct the mobile-terminal pose P_t predicted by the visual odometer in step S3.
2. The visual-odometer-based loosely-coupled multi-sensor fusion positioning algorithm of claim 1, wherein in step S1 the corrected mobile-terminal observation pose S_t(X_t, Y_t, Z_t, θ_t) calculated by SLO-VLP is obtained as follows:
Step S11: modulating the LED light to transmit the LED lamp ID and obtaining the LED lamp pose information corresponding to that ID;
Step S12: acquiring the pose (X_i, Y_i, Z_i) of an LED lamp in the world coordinate system and calculating the pose P_s(X_s, Y_s, Z_s) of the image center of the camera on the mobile terminal in the world coordinate system;
Step S13: calculating the position P_t(X_t, Y_t, Z_t) of the mobile terminal through the coordinate transformation between the coordinate system of the camera on the mobile terminal and the coordinate system of the mobile terminal;
Step S14: obtaining the estimated yaw angle γ_odom from the visual odometer on the mobile terminal and the relative direction transformation γ_LIDAR→Map detected by the lidar matching unit, from which the corrected yaw angle θ_t = γ = γ_odom + γ_LIDAR→Map is obtained; together with the P obtained in step S12, this yields the SLO-VLP pose S_t(X_t, Y_t, Z_t, θ_t) of the mobile terminal.
3. The visual-odometer-based loosely-coupled multi-sensor fusion positioning algorithm of claim 1, wherein in step S2 the pose (X_i, Y_i) obtained by the adaptive Monte Carlo localization algorithm is determined as follows:
Step S21: the SLO-VLP pose S_t obtained in claim 2 is used as the initial pose of the filter particles of the adaptive Monte Carlo localization algorithm;
Step S22: the pose information obtained by the visual odometer sensor and the lidar data are input to the adaptive Monte Carlo localization localizer, which outputs the pose (X_i, Y_i) of the mobile terminal in the map coordinate system.
4. The visual-odometer-based loosely-coupled multi-sensor fusion positioning algorithm of claim 1, wherein in step S3 the pose of the mobile terminal estimated by the ORB-feature-based visual odometer is obtained as follows:
Step S31: feature detection: detecting interest points in the pictures taken by the camera and extracting keypoints;
Step S32: computing the descriptors of the keypoints obtained in step S31 and matching feature points based on these descriptors;
Step S33: estimating the essential matrix E by the eight-point method from the epipolar geometric constraint, and recovering the rotation matrix R and the translation t by singular value decomposition (SVD) of E;
Step S34: estimating the relative motion of the camera from the R and t obtained in step S33, and thereby estimating the current pose of the camera;
Step S35: calculating the pose of the mobile terminal from the TF transformation between the camera coordinate system and the mobile-terminal coordinate system.
5. The visual-odometer-based loosely-coupled multi-sensor fusion positioning algorithm of claim 1, wherein in step S4 the observation poses obtained in steps S1 and S2 are used as the observation values of the Kalman filtering algorithm to correct the mobile-terminal pose predicted by the visual odometer in step S3, as follows:
the pose observation Z_t is formed from the observation pose calculated by SLO-VLP and the observation pose from the lidar, and Kalman filtering is applied to the predicted pose obtained in step S3 to correct it, yielding the final pose P_t of the mobile terminal.
CN202111148825.4A 2021-09-29 2021-09-29 Loosely-coupled multi-sensor fusion positioning algorithm based on visual odometer Pending CN113790728A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111148825.4A CN113790728A (en) 2021-09-29 2021-09-29 Loosely-coupled multi-sensor fusion positioning algorithm based on visual odometer

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111148825.4A CN113790728A (en) 2021-09-29 2021-09-29 Loosely-coupled multi-sensor fusion positioning algorithm based on visual odometer

Publications (1)

Publication Number Publication Date
CN113790728A true CN113790728A (en) 2021-12-14

Family

ID=78877507

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111148825.4A Pending CN113790728A (en) 2021-09-29 2021-09-29 Loosely-coupled multi-sensor fusion positioning algorithm based on visual odometer

Country Status (1)

Country Link
CN (1) CN113790728A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114370871A (en) * 2022-01-13 2022-04-19 华南理工大学 Close coupling optimization method for visible light positioning and laser radar inertial odometer

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106170729A (en) * 2013-03-25 2016-11-30 英特尔公司 For the method and apparatus with the head-mounted display of multiple emergent pupil
CN106646366A (en) * 2016-12-05 2017-05-10 深圳市国华光电科技有限公司 Visible light positioning method and system based on particle filter algorithm and intelligent equipment
CN108507561A (en) * 2018-03-05 2018-09-07 华南理工大学 A kind of VLC based on mobile terminal and IMU fusion and positioning methods
CN108680160A (en) * 2018-03-30 2018-10-19 深圳清创新科技有限公司 Indoor positioning, air navigation aid, device, storage medium and computer equipment
US20190101377A1 (en) * 2017-09-29 2019-04-04 Abl Ip Holding Llc Light fixture commissioning using depth sensing device
WO2019126332A1 (en) * 2017-12-19 2019-06-27 Carnegie Mellon University Intelligent cleaning robot
CN110243358A (en) * 2019-04-29 2019-09-17 武汉理工大学 The unmanned vehicle indoor and outdoor localization method and system of multi-source fusion
CN110320497A (en) * 2019-06-04 2019-10-11 华南理工大学 Particle filter fusion and positioning method based on VLC and IMU
CN110595466A (en) * 2019-09-18 2019-12-20 电子科技大学 Lightweight inertial-assisted visual odometer implementation method based on deep learning
US20200132461A1 (en) * 2016-12-21 2020-04-30 Blue Vision Labs UK Limited Localisation of mobile device using image and non-image sensor data in server processing
US10740729B1 (en) * 2019-09-12 2020-08-11 GM Cruise Holdings, LLC Real-time visualization of autonomous vehicle behavior in mobile applications
CN112129297A (en) * 2020-09-25 2020-12-25 重庆大学 Self-adaptive correction indoor positioning method for multi-sensor information fusion
CN112254729A (en) * 2020-10-09 2021-01-22 北京理工大学 Mobile robot positioning method based on multi-sensor fusion
CN112729311A (en) * 2020-12-25 2021-04-30 湖南航天机电设备与特种材料研究所 Sampling method and sampling system of inertial navigation system
CN112734839A (en) * 2020-12-31 2021-04-30 浙江大学 Monocular vision SLAM initialization method for improving robustness
CN112731335A (en) * 2020-12-20 2021-04-30 大连理工大学人工智能大连研究院 Multi-unmanned aerial vehicle cooperative positioning method based on whole-region laser scanning
CN112747750A (en) * 2020-12-30 2021-05-04 电子科技大学 Positioning method based on fusion of monocular vision odometer and IMU (inertial measurement Unit)
CN113108771A (en) * 2021-03-05 2021-07-13 华南理工大学 Movement pose estimation method based on closed-loop direct sparse visual odometer
US11069082B1 (en) * 2015-08-23 2021-07-20 AI Incorporated Remote distance estimation system and method

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106170729A (en) * 2013-03-25 2016-11-30 英特尔公司 For the method and apparatus with the head-mounted display of multiple emergent pupil
US11069082B1 (en) * 2015-08-23 2021-07-20 AI Incorporated Remote distance estimation system and method
CN106646366A (en) * 2016-12-05 2017-05-10 深圳市国华光电科技有限公司 Visible light positioning method and system based on particle filter algorithm and intelligent equipment
US20200132461A1 (en) * 2016-12-21 2020-04-30 Blue Vision Labs UK Limited Localisation of mobile device using image and non-image sensor data in server processing
US20190101377A1 (en) * 2017-09-29 2019-04-04 Abl Ip Holding Llc Light fixture commissioning using depth sensing device
WO2019126332A1 (en) * 2017-12-19 2019-06-27 Carnegie Mellon University Intelligent cleaning robot
CN108507561A (en) * 2018-03-05 2018-09-07 华南理工大学 A kind of VLC based on mobile terminal and IMU fusion and positioning methods
CN108680160A (en) * 2018-03-30 2018-10-19 深圳清创新科技有限公司 Indoor positioning, air navigation aid, device, storage medium and computer equipment
CN110243358A (en) * 2019-04-29 2019-09-17 武汉理工大学 The unmanned vehicle indoor and outdoor localization method and system of multi-source fusion
CN110320497A (en) * 2019-06-04 2019-10-11 华南理工大学 Particle filter fusion and positioning method based on VLC and IMU
US10740729B1 (en) * 2019-09-12 2020-08-11 GM Cruise Holdings, LLC Real-time visualization of autonomous vehicle behavior in mobile applications
CN110595466A (en) * 2019-09-18 2019-12-20 电子科技大学 Lightweight inertial-assisted visual odometer implementation method based on deep learning
CN112129297A (en) * 2020-09-25 2020-12-25 重庆大学 Self-adaptive correction indoor positioning method for multi-sensor information fusion
CN112254729A (en) * 2020-10-09 2021-01-22 北京理工大学 Mobile robot positioning method based on multi-sensor fusion
CN112731335A (en) * 2020-12-20 2021-04-30 大连理工大学人工智能大连研究院 Multi-unmanned aerial vehicle cooperative positioning method based on whole-region laser scanning
CN112729311A (en) * 2020-12-25 2021-04-30 湖南航天机电设备与特种材料研究所 Sampling method and sampling system of inertial navigation system
CN112747750A (en) * 2020-12-30 2021-05-04 电子科技大学 Positioning method based on fusion of monocular vision odometer and IMU (inertial measurement Unit)
CN112734839A (en) * 2020-12-31 2021-04-30 浙江大学 Monocular vision SLAM initialization method for improving robustness
CN113108771A (en) * 2021-03-05 2021-07-13 华南理工大学 Movement pose estimation method based on closed-loop direct sparse visual odometer

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
LINYI HUANG et al.: "Single LED positioning scheme based on angle sensors in robotics", Applied Optics, 20 July 2021 (2021-07-20), pages 6275-6287 *
关伟鹏: "Research on high-precision indoor visible light positioning algorithms based on image sensors", China Masters' Theses Full-text Database, Information Science and Technology, no. 1, 15 January 2020 (2020-01-15), pages 136-868 *
沈奇翔: "Research on a visible-light-based personnel positioning system for underground coal mines", China Masters' Theses Full-text Database, Engineering Science and Technology I, no. 2, 15 February 2020 (2020-02-15), pages 021-181 *
王振: "Vision navigation and control system of a wheeled robot based on ARM/DSP", China Masters' Theses Full-text Database, Information Science and Technology, no. 8, 15 August 2015 (2015-08-15), pages 138-1176 *
靳东: "Research on laser-vision fusion SLAM and navigation of mobile robots in complex indoor environments", China Masters' Theses Full-text Database, Information Science and Technology, no. 1, 15 January 2021 (2021-01-15), pages 136-1120 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114370871A (en) * 2022-01-13 2022-04-19 华南理工大学 Close coupling optimization method for visible light positioning and laser radar inertial odometer

Similar Documents

Publication Publication Date Title
Heng et al. Project autovision: Localization and 3d scene perception for an autonomous vehicle with a multi-camera system
US9989969B2 (en) Visual localization within LIDAR maps
Zhang et al. Visual-lidar odometry and mapping: Low-drift, robust, and fast
US20180253108A1 (en) Mobile robot system and method for generating map data using straight lines extracted from visual images
KR20190087266A (en) Apparatus and method for updating high definition map for autonomous driving
Guan et al. Robot localization and navigation using visible light positioning and SLAM fusion
Pink et al. Visual features for vehicle localization and ego-motion estimation
WO2012043045A1 (en) Image processing device and image capturing device using same
CN112396656B (en) Outdoor mobile robot pose estimation method based on fusion of vision and laser radar
CN116222543B (en) Multi-sensor fusion map construction method and system for robot environment perception
Tao et al. Automated processing of mobile mapping image sequences
CN114413958A (en) Monocular vision distance and speed measurement method of unmanned logistics vehicle
CN114370871A (en) Close coupling optimization method for visible light positioning and laser radar inertial odometer
CN113790728A (en) Loosely-coupled multi-sensor fusion positioning algorithm based on visual odometer
CN111833443A (en) Landmark position reconstruction in autonomous machine applications
CN113971697A (en) Air-ground cooperative vehicle positioning and orienting method
CN112862818A (en) Underground parking lot vehicle positioning method combining inertial sensor and multi-fisheye camera
Hoang et al. Combining edge and one-point ransac algorithm to estimate visual odometry
Mishra et al. Localization of a smart infrastructure fisheye camera in a prior map for autonomous vehicles
Zheng et al. Localization method based on multi-QR codes for mobile robots
Park et al. Localization of an unmanned ground vehicle based on hybrid 3D registration of 360 degree range data and DSM
CN113686340A (en) EKF-based loosely-coupled multi-sensor fusion positioning method and system
CN112001970A (en) Monocular vision odometer method based on point-line characteristics
Song et al. A survey: Stereo based navigation for mobile binocular robots
CN113838140B (en) Monocular video pedestrian three-dimensional positioning method based on three-dimensional map assistance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination