CN114323002B - AGV positioning navigation method based on binocular vision, IMU and UWB fusion - Google Patents


Info

Publication number
CN114323002B
Authority
CN (China)
Prior art keywords
uwb
imu
agv
data
integration
Legal status
Active
Application number
CN202111611402.1A
Other languages
Chinese (zh)
Other versions
CN114323002A
Inventors
卢建刚
余天义
陈金水
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Application filed by Zhejiang University ZJU
Priority to CN202111611402.1A
Publication of CN114323002A
Application granted
Publication of CN114323002B


Landscapes

  • Navigation (AREA)

Abstract

The invention provides an AGV positioning and navigation method based on the fusion of binocular vision, an IMU and UWB. A UWB base station is deployed at a fixed point in the space, and a UWB positioning tag, a binocular camera and an IMU are carried on the AGV. The distance between the UWB base station and the UWB positioning tag is calculated from the propagation time of the UWB signal between them, while binocular camera data and the accelerometer and gyroscope data of the IMU are collected. At the same time, the pixel coordinates of visual feature points are obtained with an image feature extraction and tracking algorithm, and inertial pre-integration is carried out between two adjacent image frames from the accelerometer and gyroscope data of the IMU. A target optimization function is then established as the sum of the UWB ranging residual, the binocular vision reprojection residual and the inertial pre-integration residual, and solved with a nonlinear optimization method, thereby realizing the AGV position solution. By fusing the three sensors, the invention calculates the positioning information of the AGV effectively and accurately, avoids both the accumulated error of purely visual positioning and the error of the IMU, significantly reduces cost, and is convenient to use.

Description

AGV positioning navigation method based on binocular vision, IMU and UWB fusion
Technical Field
The invention belongs to the technical field of AGV indoor positioning navigation, and particularly relates to an AGV positioning navigation method based on binocular vision, IMU and UWB fusion.
Background
AGVs (Automated Guided Vehicles) are key core equipment for intelligent logistics and are widely used in many fields such as machining, automobiles, military industry, home appliance production, microelectronics manufacturing and e-commerce warehousing. Because enterprises need AGVs to reach specific positions accurately during engineering and manufacturing, high requirements are placed on the accuracy of real-time AGV positioning and navigation.
Currently, AGV positioning and navigation is classified into fixed-path guidance and free guidance. Existing fixed-path guidance mainly relies on magnetic stripes, ribbons and similar markers; the guide route is fixed and inflexible, so it is difficult to adapt to the diversified demands that industrial sites place on intelligent logistics. Existing free guidance mainly relies on lidar sensors, whose positioning accuracy generally meets industrial requirements but whose high cost restricts popularization and application in industry. Purely visual-inertial navigation based on the fusion of binocular vision and an IMU (Inertial Measurement Unit) has the advantage of low cost, but it drifts easily and accumulates large errors, so its positioning accuracy sometimes fails to meet industrial requirements, which limits its application.
Disclosure of Invention
The invention aims to provide an AGV positioning and navigation method based on the fusion of binocular vision, an IMU and UWB (Ultra-Wide Band) that solves the problems described in the background. To this end, the invention adopts the following technical scheme:
an AGV positioning navigation method based on binocular vision, IMU and UWB fusion is characterized in that:
selecting a fixed point to deploy a UWB base station in an indoor environment, and carrying a binocular camera, an IMU and a UWB positioning tag on an AGV; the method comprises the following steps:
step 1, during the movement of an AGV, a UWB positioning tag sends pulse data packets composed of ultra-wideband pulse signals to a UWB base station, and the propagation time $T$ of the signal between the UWB base station and the UWB positioning tag is calculated, so that the distance between the UWB positioning tag and the UWB base station at time $k$ is obtained as $d(k) = c \cdot T$, where $c$ is the speed of light;
step 2, acquiring the binocular vision sensor image frame data $C(k)$ at time $k$ and the accelerometer data $\hat{a}(k)$ and gyroscope data $\hat{\omega}(k)$ in the IMU;
Step 3, according to the binocular vision sensor image frame data C (k), an image feature extraction and tracking algorithm is applied, and pixel coordinates of left-eye camera image frame vision feature points projected into the corresponding right-eye camera image frames are calculated;
step 4, performing inertial pre-integration between two adjacent image frames on the accelerometer data $\hat{a}(k)$ and gyroscope data $\hat{\omega}(k)$ acquired from the IMU;
step 5, establishing a target optimization function based on the sum of the UWB ranging residual, the binocular vision reprojection residual and the inertial pre-integration residual, and solving the established target optimization function by adopting a nonlinear optimization method, thereby solving the position of the AGV;
and 6, outputting the data of the AGV position, and returning to the step 1.
When adopting the above technical scheme, the invention can also adopt or combine the following technical schemes:

As a preferable technical scheme of the invention: in the step 1, the propagation time of the signal between the UWB base station and the UWB positioning tag is obtained as follows: the UWB base station transmits a request pulse signal at time $T_{a1}$ of its own time stamp, and the UWB positioning tag receives the signal at time $T_{b1}$ of its own time stamp; the UWB positioning tag then transmits a response signal at time $T_{b2}$, and the UWB base station receives it at time $T_{a2}$ of its own time stamp. From this, the propagation time of the pulse signal between the UWB base station and the UWB positioning tag is calculated as

$$T = 0.5\,(T_{a2} - T_{a1}) - 0.5\,(T_{b2} - T_{b1})$$

As a preferable technical scheme of the invention: in the step 2, the accelerometer data $\hat{a}(k)$ and the gyroscope data $\hat{\omega}(k)$ are modeled as:

$$\hat{a}(k) = a(k) + b_{a}(k) + R_{w}^{k}\, g^{w} + n_{a}$$
$$\hat{\omega}(k) = \omega(k) + b_{\omega}(k) + n_{\omega}$$

wherein $a(k)$ and $\omega(k)$ are the ideal values measured by the accelerometer and gyroscope at time $k$, $b_{a}(k)$ and $b_{\omega}(k)$ are the bias values of the accelerometer and gyroscope respectively, $n_{a}$ and $n_{\omega}$ are the measurement noise of the accelerometer and gyroscope, $g^{w}$ is the gravitational acceleration in the world coordinate system, and $R_{w}^{k}$ is the rotation matrix from the world coordinate system to the inertial coordinate system at time $k$.

As a preferable technical scheme of the invention: in the step 3, the image feature extraction and tracking algorithm is specifically as follows: Shi-Tomasi corner points are extracted from each acquired image frame as the visual feature points to be tracked; for the visual feature points of the current left-eye image frame, the KLT sparse optical flow method is used to obtain the corresponding feature points in the right-eye image; reverse optical flow tracking is then applied to the feature points obtained in the right-eye image to recover the corresponding visual feature points in the left-eye image frame; finally, a feature point is accepted as qualified only when the pixel distance between the point recovered by the reverse optical flow and the original feature point of the current left-eye image frame is smaller than a threshold; the accepted feature points are stored as the qualified feature points of the left-eye image frame, and the corresponding tracked points are stored as the qualified feature points of the right-eye image frame.

As a preferable technical scheme of the invention: in the step 4, the specific process of inertial pre-integration between the two adjacent image frames is as follows: for the IMU measurement data at instant $i$ between the image frames $b_{k}$ and $b_{k+1}$, including the accelerometer data $\hat{a}(i)$ and gyroscope data $\hat{\omega}(i)$, inertial pre-integration is performed to obtain the pre-integration terms $\alpha_{i+1}^{b_{k}}$, $\beta_{i+1}^{b_{k}}$ and $\gamma_{i+1}^{b_{k}}$:

$$\alpha_{i+1}^{b_{k}} = \alpha_{i}^{b_{k}} + \beta_{i}^{b_{k}}\,\delta t + \tfrac{1}{2}\, R(\gamma_{i}^{b_{k}})\,\big(\hat{a}(i) - b_{a}(i)\big)\,\delta t^{2}$$
$$\beta_{i+1}^{b_{k}} = \beta_{i}^{b_{k}} + R(\gamma_{i}^{b_{k}})\,\big(\hat{a}(i) - b_{a}(i)\big)\,\delta t$$
$$\gamma_{i+1}^{b_{k}} = \gamma_{i}^{b_{k}} \otimes \begin{bmatrix} 1 \\ \tfrac{1}{2}\,\big(\hat{\omega}(i) - b_{\omega}(i)\big)\,\delta t \end{bmatrix}$$

wherein $\alpha$ is the pre-integration of position, $\beta$ is the pre-integration of velocity, $\gamma$ is the pre-integration of rotation, $\delta t$ is the interval between instants $i+1$ and $i$, the rotation is expressed in quaternion form, $R(\gamma)$ denotes the conversion of the quaternion into a rotation matrix, and $\otimes$ is quaternion multiplication.

As a preferable technical scheme of the invention: in the step 5, the UWB ranging residual is the error between the measured distance value and the estimated distance value; the binocular vision reprojection residual is the pixel distance between a visual feature point of the left eye at instant $i$ projected into the image frame of the right eye at instant $j$ and the corresponding visual feature point obtained in the right eye at instant $j$; the inertial pre-integration residual is the error between the predicted pre-integration between two adjacent image frames and the inertial pre-integration between the two adjacent image frames calculated in the step 4; the predicted pre-integration between the two adjacent image frames is obtained by integrating the IMU accelerometer data $\hat{a}(k)$ and gyroscope data $\hat{\omega}(k)$ over time.
The invention provides an AGV positioning and navigation method based on the fusion of binocular vision, an IMU and UWB. By adopting the technical scheme of fusing binocular vision, the IMU and UWB, the positioning information of the AGV can be calculated effectively and accurately, the accumulated error of purely visual positioning and the error of the IMU measuring instrument can be avoided, the cost can be effectively reduced, and the method is practical and convenient to use.
Drawings
FIG. 1 is a schematic diagram of sensor information fusion according to the present invention.
FIG. 2 is a flow chart of the present invention.
FIG. 3 is a comparative graph of the results of the present invention.
Detailed Description
As shown in fig. 1 and 2, a fixed point is selected to deploy a UWB base station in an indoor environment, and a UWB positioning tag, a binocular camera and an IMU measuring unit are mounted at the center of mass of the AGV. The picture resolution of the binocular camera is set to 640 x 480 with a picture acquisition frequency of 30 Hz, the message publishing frequency of the UWB positioning tag is 100 Hz, and the publishing frequency of the IMU measuring unit is 200 Hz. The AGV positioning and navigation method based on binocular vision, IMU and UWB fusion provided by the invention comprises the following steps:
Step 1: during the movement of the AGV, the UWB positioning tag sends pulse data packets composed of UWB pulse signals to the UWB base station, and the propagation time $T$ of the signal between the UWB base station and the UWB positioning tag is calculated, so that the distance between the UWB positioning tag and the UWB base station at time $k$ is $d(k) = c \cdot T$, where $c$ is the speed of light. The time difference is solved as follows: the UWB base station transmits a request pulse signal at time $T_{a1}$ of its own time stamp, and the UWB positioning tag receives the signal at time $T_{b1}$ of its own time stamp; the UWB positioning tag then transmits a response signal at time $T_{b2}$, and the UWB base station receives it at time $T_{a2}$ of its own time stamp. From this, the propagation time of the pulse signal between the UWB base station and the UWB positioning tag is calculated as $T = 0.5\,(T_{a2} - T_{a1}) - 0.5\,(T_{b2} - T_{b1})$.
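This timestamp exchange is a single-sided two-way ranging scheme. Below is a minimal sketch of the computation, assuming timestamps already extracted from the UWB packets and expressed in seconds; the function and constant names are illustrative, not taken from the patent:

```python
SPEED_OF_LIGHT = 299_792_458.0  # c, in m/s

def propagation_time(t_a1, t_a2, t_b1, t_b2):
    """T = 0.5*(T_a2 - T_a1) - 0.5*(T_b2 - T_b1): half of the base
    station's round-trip time minus half of the tag's response delay."""
    return 0.5 * (t_a2 - t_a1) - 0.5 * (t_b2 - t_b1)

def uwb_distance(t_a1, t_a2, t_b1, t_b2):
    """d(k) = c * T for one request/response exchange at time k."""
    return SPEED_OF_LIGHT * propagation_time(t_a1, t_a2, t_b1, t_b2)
```

For example, uwb_distance(0.0, 66.7e-9, 10.0e-9, 43.4e-9) returns roughly 5 m: the tag held the packet for 33.4 ns, leaving about 16.65 ns of one-way flight time.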
Step 2: acquire the binocular vision sensor image frame data $C(k)$ at time $k$, as well as the accelerometer data $\hat{a}(k)$ and gyroscope data $\hat{\omega}(k)$ in the IMU. The acquired accelerometer data $\hat{a}(k)$ and gyroscope data $\hat{\omega}(k)$ can be expressed as:

$$\hat{a}(k) = a(k) + b_{a}(k) + R_{w}^{k}\, g^{w} + n_{a}$$
$$\hat{\omega}(k) = \omega(k) + b_{\omega}(k) + n_{\omega}$$

wherein $a(k)$ and $\omega(k)$ are respectively the ideal values measured by the accelerometer and gyroscope at time $k$, $b_{a}(k)$ and $b_{\omega}(k)$ are the bias values of the accelerometer and gyroscope respectively, $n_{a}$ and $n_{\omega}$ are the measurement noise of the accelerometer and gyroscope, $g^{w}$ is the gravitational acceleration in the world coordinate system, and $R_{w}^{k}$ is the rotation matrix from the world coordinate system to the inertial coordinate system at time $k$;
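For illustration, raw IMU readings consistent with this measurement model can be synthesized as follows; the noise standard deviations are assumed placeholder values, not parameters given by the patent:

```python
import numpy as np

def imu_measurement(a_ideal, w_ideal, b_a, b_w, R_wk, g_w,
                    sigma_a=0.02, sigma_w=0.001, rng=None):
    """a_hat = a + b_a + R_w^k g^w + n_a ;  w_hat = w + b_w + n_w."""
    rng = rng or np.random.default_rng()
    a_hat = a_ideal + b_a + R_wk @ g_w + rng.normal(0.0, sigma_a, 3)
    w_hat = w_ideal + b_w + rng.normal(0.0, sigma_w, 3)
    return a_hat, w_hat
```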
step 3: according to binocular vision sensor image frame data C (k), an image feature extraction and tracking algorithm is applied, and pixel coordinates of left-eye camera image frame vision feature points projected into corresponding right-eye camera image frames are calculated, wherein the image feature extraction and tracking algorithm specifically comprises the following steps: extracting Shi-Tomasi corner points for each acquired frame of picture, and extracting and tracking visual feature points; for the visual feature points of the current image frame obtained by the left eye, tracking by using a KLT sparse optical flow method to obtain corresponding feature points on the right eye image, and then carrying out reverse optical flow tracking on the feature points obtained by optical flow tracking of the right eye image to obtain the corresponding visual feature points in the left eye image frame; and finally, when the pixel point distances between the feature points tracked by the left-eye reverse optical flow and the qualified feature points of the image frame at the current time of the left are all smaller than a threshold value, determining the final qualified feature points, then storing the feature points of the image frame of the left eye as the qualified feature points of the image frame of the left eye, and simultaneously storing the feature points of the image frame of the right eye as the qualified feature points of the image frame of the right eye.
Step 4: perform inertial pre-integration between two adjacent image frames on the accelerometer data $\hat{a}(k)$ and gyroscope data $\hat{\omega}(k)$ acquired from the IMU. For the IMU measurement data at instant $i$ between the image frames $b_{k}$ and $b_{k+1}$, including the accelerometer data $\hat{a}(i)$ and gyroscope data $\hat{\omega}(i)$, inertial pre-integration is performed to obtain the pre-integration terms $\alpha_{i+1}^{b_{k}}$, $\beta_{i+1}^{b_{k}}$ and $\gamma_{i+1}^{b_{k}}$:

$$\alpha_{i+1}^{b_{k}} = \alpha_{i}^{b_{k}} + \beta_{i}^{b_{k}}\,\delta t + \tfrac{1}{2}\, R(\gamma_{i}^{b_{k}})\,\big(\hat{a}(i) - b_{a}(i)\big)\,\delta t^{2}$$
$$\beta_{i+1}^{b_{k}} = \beta_{i}^{b_{k}} + R(\gamma_{i}^{b_{k}})\,\big(\hat{a}(i) - b_{a}(i)\big)\,\delta t$$
$$\gamma_{i+1}^{b_{k}} = \gamma_{i}^{b_{k}} \otimes \begin{bmatrix} 1 \\ \tfrac{1}{2}\,\big(\hat{\omega}(i) - b_{\omega}(i)\big)\,\delta t \end{bmatrix}$$

wherein $\alpha$ is the pre-integration of position, $\beta$ is the pre-integration of velocity, $\gamma$ is the pre-integration of rotation, $\delta t$ is the interval between instants $i+1$ and $i$, the rotation is expressed in quaternion form, $R(\gamma)$ denotes the conversion of the quaternion into a rotation matrix, and $\otimes$ is quaternion multiplication;
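The recursion can be sketched as plain Euler integration over the raw samples collected between two frames; the [w, x, y, z] quaternion convention and the helper names are assumptions of this sketch:

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product of quaternions in [w, x, y, z] order."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_to_rot(q):
    """R(gamma): rotation matrix from a unit quaternion."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def preintegrate(samples, b_a, b_w):
    """Accumulate alpha (position), beta (velocity) and gamma (rotation)
    over (a_hat, w_hat, dt) samples between frames b_k and b_k+1."""
    alpha, beta = np.zeros(3), np.zeros(3)
    gamma = np.array([1.0, 0.0, 0.0, 0.0])  # identity rotation
    for a_hat, w_hat, dt in samples:
        R = quat_to_rot(gamma)
        a = a_hat - b_a  # bias-corrected specific force
        alpha = alpha + beta * dt + 0.5 * R @ a * dt**2
        beta = beta + R @ a * dt
        dq = np.concatenate([[1.0], 0.5 * (w_hat - b_w) * dt])
        gamma = quat_mult(gamma, dq)
        gamma = gamma / np.linalg.norm(gamma)  # keep unit norm
    return alpha, beta, gamma
```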
step 5: and establishing an objective optimization function based on the sum of UWB ranging residual errors, binocular vision re-projection residual errors and inertia pre-integration residual errors.
a) Establish the optimization variable $\chi$:

$$\chi = \left[\, x_{0},\, x_{1},\, \ldots,\, x_{n},\, l_{0},\, l_{1},\, \ldots,\, l_{m},\, p_{uwb}^{w} \,\right]$$
$$x_{k} = \left[\, p_{b_{k}}^{w},\, v_{b_{k}}^{w},\, q_{b_{k}}^{w},\, b_{a},\, b_{\omega} \,\right],\quad k \in [0, n]$$

wherein $n$ is the index of the last image frame; $p_{b_{k}}^{w}$, $v_{b_{k}}^{w}$, $q_{b_{k}}^{w}$, $b_{a}$ and $b_{\omega}$ are respectively the position, velocity, rotation quaternion, accelerometer bias and gyroscope bias of the AGV of the $k$-th frame in the world coordinate system; $l_{m}$ denotes the inverse depth of the $m$-th visual feature point, a conversion value from the feature point's space coordinates to pixel coordinates; and $p_{uwb}^{w}$ denotes the coordinates of the UWB base station in the world coordinate system.
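As a sketch of how $\chi$ can be flattened into a single vector for a nonlinear solver (the per-frame layout below is an assumption of this sketch, not prescribed by the patent):

```python
import numpy as np

def pack_state(frame_states, inverse_depths, p_uwb):
    """Stack x_k = [p(3), v(3), q(4), b_a(3), b_w(3)] for k = 0..n,
    then the inverse depths l_0..l_m, then the UWB anchor position."""
    blocks = [np.concatenate([s["p"], s["v"], s["q"], s["b_a"], s["b_w"]])
              for s in frame_states]
    return np.concatenate(blocks + [np.asarray(inverse_depths),
                                    np.asarray(p_uwb)])
```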
b) Establish the target optimization function as the sum of the three residual families, of the form:

$$\min_{\chi}\left\{ \sum \left\| r_{uwb} \right\|^{2} + \sum \left\| r_{cam} \right\|^{2} + \sum \left\| r_{imu} \right\|^{2} \right\}$$

The UWB ranging residual $r_{uwb}$ is the error between the measured distance value and the estimated distance value; the binocular vision reprojection residual $r_{cam}$ is the pixel distance between a visual feature point of the left eye at instant $i$ projected into the image frame of the right eye at instant $j$ and the corresponding visual feature point obtained in the right eye at instant $j$; the inertial pre-integration residual $r_{imu}$ is the error between the predicted pre-integration between two adjacent image frames and the inertial pre-integration between the two adjacent image frames; the predicted pre-integration between two adjacent image frames is obtained by integrating the IMU accelerometer data $\hat{a}(k)$ and gyroscope data $\hat{\omega}(k)$ over time. In addition, a target optimization function containing only the binocular vision reprojection residual and the inertial pre-integration residual, without the UWB ranging residual term, is established as a comparison.
c) Solve each of the established optimization functions iteratively by the Levenberg-Marquardt method, stopping the iteration when the result converges, thereby realizing the AGV position solution.
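A sketch of step c) with SciPy's Levenberg-Marquardt solver is shown below; unpack_state, uwb_residuals, reprojection_residuals and preintegration_residuals are hypothetical helpers that evaluate the residual families of b) against the packed state, and chi0 is an initial guess:

```python
import numpy as np
from scipy.optimize import least_squares

def total_residuals(chi_flat):
    """Concatenate the three residual families of the target function."""
    chi = unpack_state(chi_flat)  # hypothetical inverse of pack_state
    return np.concatenate([uwb_residuals(chi),
                           reprojection_residuals(chi),
                           preintegration_residuals(chi)])

# method="lm" runs Levenberg-Marquardt and stops at convergence.
result = least_squares(total_residuals, chi0, method="lm")
chi_opt = unpack_state(result.x)  # contains the AGV positions p_bk^w
```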
Step 6: output the position information of the AGV for comparison, and return to step 1.
The actual operation results in fig. 3 show that after the AGV moves for a long time, the purely visual-inertial navigation mode based on the fusion of binocular vision and the IMU exhibits drift errors in which the start and end points are obviously misaligned, and the root mean square error of the positioning reaches 0.240 m; with the navigation mode based on the fusion of binocular vision, IMU and UWB, the positioning accuracy is obviously improved and the root mean square error of the positioning is greatly reduced to 0.092 m.
Table 1 Root mean square error comparison of positioning effect

Navigation mode | Root mean square error (m)
Binocular vision + IMU | 0.240
Binocular vision + IMU + UWB | 0.092
The above detailed description is intended to illustrate rather than limit the invention, and the invention is not restricted to the particular embodiments disclosed; any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the invention shall fall within the protection scope defined by the appended claims.

Claims (6)

1. An AGV positioning navigation method based on binocular vision, IMU and UWB fusion is characterized in that:
selecting a fixed point to deploy a UWB base station in an indoor environment, and carrying a binocular camera, an IMU and a UWB positioning tag on an AGV; the method comprises the following steps:
step 1, during the movement of an AGV, a UWB positioning tag sends pulse data packets composed of ultra-wideband pulse signals to a UWB base station, and the propagation time $T$ of the signal between the UWB base station and the UWB positioning tag is calculated, so that the distance between the UWB positioning tag and the UWB base station at time $k$ is obtained as $d(k) = c \cdot T$, where $c$ is the speed of light;
step 2, acquiring the binocular vision sensor image frame data $C(k)$ at time $k$ and the accelerometer data $\hat{a}(k)$ and gyroscope data $\hat{\omega}(k)$ in the IMU;
Step 3, according to the binocular vision sensor image frame data C (k), an image feature extraction and tracking algorithm is applied, and pixel coordinates of left-eye camera image frame vision feature points projected into the corresponding right-eye camera image frames are calculated;
step 4, performing inertial pre-integration between two adjacent image frames on the accelerometer data $\hat{a}(k)$ and gyroscope data $\hat{\omega}(k)$ acquired from the IMU;
step 5, establishing a target optimization function based on the sum of the UWB ranging residual, the binocular vision reprojection residual and the inertial pre-integration residual, and solving the established target optimization function by adopting a nonlinear optimization method, thereby solving the position of the AGV;
and 6, outputting the data of the AGV position, and returning to the step 1.
2. The AGV positioning navigation method based on binocular vision, IMU and UWB fusion according to claim 1, wherein: in the step 1, the propagation time of the signal between the UWB base station and the UWB positioning tag is obtained as follows: the UWB base station transmits a request pulse signal at time $T_{a1}$ of its own time stamp, and the UWB positioning tag receives the signal at time $T_{b1}$ of its own time stamp; the UWB positioning tag then transmits a response signal at time $T_{b2}$, and the UWB base station receives it at time $T_{a2}$ of its own time stamp; from this, the propagation time of the pulse signal between the UWB base station and the UWB positioning tag is calculated as $T = 0.5\,(T_{a2} - T_{a1}) - 0.5\,(T_{b2} - T_{b1})$.
3. The binocular vision, IMU and UWB fusion based AGV positioning navigation method according to claim 1, wherein in the step 2, the accelerometer data $\hat{a}(k)$ and the gyroscope data $\hat{\omega}(k)$ are modeled as:

$$\hat{a}(k) = a(k) + b_{a}(k) + R_{w}^{k}\, g^{w} + n_{a}$$
$$\hat{\omega}(k) = \omega(k) + b_{\omega}(k) + n_{\omega}$$

wherein $a(k)$ and $\omega(k)$ are the ideal values measured by the accelerometer and gyroscope at time $k$, $b_{a}(k)$ and $b_{\omega}(k)$ are the bias values of the accelerometer and gyroscope respectively, $n_{a}$ and $n_{\omega}$ are the measurement noise of the accelerometer and gyroscope, $g^{w}$ is the gravitational acceleration in the world coordinate system, and $R_{w}^{k}$ is the rotation matrix from the world coordinate system to the inertial coordinate system at time $k$.
4. The AGV positioning and navigation method according to claim 1, wherein in step 3, the image feature extraction and tracking algorithm is specifically as follows: Shi-Tomasi corner points are extracted from each acquired image frame as the visual feature points to be tracked; for the visual feature points of the current left-eye image frame, the KLT sparse optical flow method is used to obtain the corresponding feature points in the right-eye image; reverse optical flow tracking is then applied to the feature points obtained in the right-eye image to recover the corresponding visual feature points in the left-eye image frame; finally, a feature point is accepted as qualified only when the pixel distance between the point recovered by the reverse optical flow and the original feature point of the current left-eye image frame is smaller than a threshold; the accepted feature points are stored as the qualified feature points of the left-eye image frame, and the corresponding tracked points are stored as the qualified feature points of the right-eye image frame.
5. The AGV positioning and navigation method based on binocular vision, IMU and UWB fusion according to claim 1, wherein in the step 4, the specific process of inertial pre-integration between the two adjacent image frames is: for the IMU measurement data at instant $i$ between the image frames $b_{k}$ and $b_{k+1}$, including the accelerometer data $\hat{a}(i)$ and gyroscope data $\hat{\omega}(i)$, inertial pre-integration is performed to obtain the pre-integration terms $\alpha_{i+1}^{b_{k}}$, $\beta_{i+1}^{b_{k}}$ and $\gamma_{i+1}^{b_{k}}$:

$$\alpha_{i+1}^{b_{k}} = \alpha_{i}^{b_{k}} + \beta_{i}^{b_{k}}\,\delta t + \tfrac{1}{2}\, R(\gamma_{i}^{b_{k}})\,\big(\hat{a}(i) - b_{a}(i)\big)\,\delta t^{2}$$
$$\beta_{i+1}^{b_{k}} = \beta_{i}^{b_{k}} + R(\gamma_{i}^{b_{k}})\,\big(\hat{a}(i) - b_{a}(i)\big)\,\delta t$$
$$\gamma_{i+1}^{b_{k}} = \gamma_{i}^{b_{k}} \otimes \begin{bmatrix} 1 \\ \tfrac{1}{2}\,\big(\hat{\omega}(i) - b_{\omega}(i)\big)\,\delta t \end{bmatrix}$$

wherein $\alpha$ is the pre-integration of position, $\beta$ is the pre-integration of velocity, $\gamma$ is the pre-integration of rotation, $\delta t$ is the interval between instants $i+1$ and $i$, the rotation is expressed in quaternion form, $R(\gamma)$ denotes the conversion of the quaternion into a rotation matrix, and $\otimes$ is quaternion multiplication.
6. The AGV positioning and navigation method based on binocular vision, IMU and UWB fusion according to claims 1 and 5, wherein in said step 5, the UWB ranging residual is the error between the measured distance value and the estimated distance value; the binocular vision reprojection residual is the pixel distance between a visual feature point of the left eye at instant $i$ projected into the image frame of the right eye at instant $j$ and the corresponding visual feature point obtained in the right eye at instant $j$; the inertial pre-integration residual is the error between the predicted pre-integration between two adjacent image frames and the inertial pre-integration between the two adjacent image frames calculated in claim 5; the predicted pre-integration between the two adjacent image frames is obtained by integrating the IMU accelerometer data $\hat{a}(k)$ and gyroscope data $\hat{\omega}(k)$ over time.
CN202111611402.1A 2021-12-27 2021-12-27 AGV positioning navigation method based on binocular vision, IMU and UWB fusion Active CN114323002B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111611402.1A CN114323002B (en) 2021-12-27 2021-12-27 AGV positioning navigation method based on binocular vision, IMU and UWB fusion


Publications (2)

Publication Number Publication Date
CN114323002A CN114323002A (en) 2022-04-12
CN114323002B true CN114323002B (en) 2023-12-08

Family

ID=81013768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111611402.1A Active CN114323002B (en) 2021-12-27 2021-12-27 AGV positioning navigation method based on binocular vision, IMU and UWB fusion

Country Status (1)

Country Link
CN (1) CN114323002B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110261823B (en) * 2019-05-24 2022-08-05 南京航空航天大学 Visible light indoor communication positioning method and system based on single LED lamp

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108012325A (en) * 2017-10-30 2018-05-08 上海神添实业有限公司 A kind of navigation locating method based on UWB and binocular vision
CN111156997A (en) * 2020-03-02 2020-05-15 南京航空航天大学 Vision/inertia combined navigation method based on camera internal parameter online calibration
CN113140040A (en) * 2021-04-26 2021-07-20 北京天地玛珂电液控制系统有限公司 Multi-sensor fusion coal mine underground space positioning and mapping method and device
CN113124856A (en) * 2021-05-21 2021-07-16 天津大学 Visual inertia tight coupling odometer based on UWB online anchor point and metering method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on positioning and control of heavy-load AGVs based on Mecanum wheels; 方玉发 et al.; Proceedings of the 2018 China Automation Congress (CAC2018); full text *
Research on multi-camera visual positioning for cooperatively guided automated transport vehicles; 曾群生; 苏成悦; 程天明; 张洪钊; 黄佩铎; Computer Engineering and Applications (11); full text *

Also Published As

Publication number Publication date
CN114323002A (en) 2022-04-12

Similar Documents

Publication Publication Date Title
CN110243358B (en) Multi-source fusion unmanned vehicle indoor and outdoor positioning method and system
CN111024066B (en) Unmanned aerial vehicle vision-inertia fusion indoor positioning method
CN109100730B (en) Multi-vehicle cooperative rapid map building method
CN113311411B (en) Laser radar point cloud motion distortion correction method for mobile robot
CN110044354A (en) A kind of binocular vision indoor positioning and build drawing method and device
JP2022019642A (en) Positioning method and device based upon multi-sensor combination
CN110207714B (en) Method for determining vehicle pose, vehicle-mounted system and vehicle
CN109341706A (en) A kind of production method of the multiple features fusion map towards pilotless automobile
CN110533719B (en) Augmented reality positioning method and device based on environment visual feature point identification technology
CN111121754A (en) Mobile robot positioning navigation method and device, mobile robot and storage medium
US12008173B2 (en) Multi-sensor handle controller hybrid tracking method and device
CN112212852B (en) Positioning method, mobile device and storage medium
CN112254729A (en) Mobile robot positioning method based on multi-sensor fusion
CN110887486B (en) Unmanned aerial vehicle visual navigation positioning method based on laser line assistance
CN114612348B (en) Laser point cloud motion distortion correction method and device, electronic equipment and storage medium
CN113218407A (en) Map generation method and device based on fusion of VIO and satellite navigation system
CN113758488A (en) Indoor positioning method and equipment based on UWB and VIO
CN111721305B (en) Positioning method and apparatus, autonomous vehicle, electronic device, and storage medium
CN111121755B (en) Multi-sensor fusion positioning method, device, equipment and storage medium
CN114812573A (en) Monocular visual feature fusion-based vehicle positioning method and readable storage medium
CN114323002B (en) AGV positioning navigation method based on binocular vision, IMU and UWB fusion
CN112284381B (en) Visual inertia real-time initialization alignment method and system
CN111402702A (en) Map construction method, device and system
CN116907469A (en) Synchronous positioning and mapping method and system for multi-mode data combined optimization
CN115540854A (en) Active positioning method, equipment and medium based on UWB assistance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant