CN113758488A - Indoor positioning method and equipment based on UWB and VIO - Google Patents
Indoor positioning method and equipment based on UWB and VIO
- Publication number
- CN113758488A (application CN202111134020.4A)
- Authority
- CN
- China
- Prior art keywords
- uwb
- vio
- positioning
- positioning result
- algorithm
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
- G01C21/1652—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
- G01C21/1656—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Navigation (AREA)
- Numerical Control (AREA)
Abstract
The invention relates to an indoor positioning method and equipment based on UWB and VIO, wherein the positioning method comprises the following steps: obtaining a UWB positioning result by adopting a DS-TWR algorithm and a Kalman filtering algorithm; obtaining a VIO positioning result by adopting an S-MSCKF algorithm; carrying out time synchronization processing on the UWB positioning result and the VIO positioning result; and performing data fusion on the time-synchronized UWB and VIO positioning results by using an ES-EKF algorithm to obtain the optimal position estimate of the robot. Compared with the prior art, the invention achieves flexible and accurate positioning in complex indoor environments.
Description
Technical Field
The invention belongs to the field of SLAM and indoor positioning, relates to an indoor positioning method and equipment, and particularly relates to an indoor positioning method and equipment based on UWB and VIO.
Background
In recent years, indoor mobile robots, typified by service robots, have rapidly entered daily life. How to position flexibly and accurately in a complex indoor environment, and how to model that environment comprehensively, are the first problems a mobile robot must solve before it can complete indoor tasks.
UWB (ultra-wideband) technology is a carrier-less communication technology with low power consumption and a high data rate, which transmits data through high-frequency non-sinusoidal narrow pulses; UWB systems therefore occupy a wide frequency band, and because the transmitted signal pulses are extremely short, the transmitted power spectral density is also low. Compared with wireless technologies such as WLAN and Bluetooth, positioning systems based on UWB technology have low cost, strong penetration capability, insensitivity to channel fading, high timestamp precision and high ranging and positioning precision, and are suitable for high-speed wireless data communication and wireless positioning in indoor places. For example, Prorok et al., in "Accurate indoor localization with ultra-wideband using spatial models and collaboration", first proposed a positioning model based on TDOA (Time Difference of Arrival), which is simpler and more scalable in UWB base station deployment; however, its positioning accuracy needs improvement and the robustness of its positioning results is not strong. Chinese patent publication No. CN107566065A discloses a UWB positioning technique based on the TOF algorithm, which responds quickly, but the TOF algorithm itself is easily disturbed by the environment, causing large errors in the positioning result.
UWB positioning alone thus still has deficiencies such as limited accuracy and large errors in the positioning result, and needs improvement. VIO (visual-inertial odometry) is an algorithm that fuses camera and IMU data to realize SLAM; according to the fusion framework, it is divided into tight coupling and loose coupling. In loose coupling, visual motion estimation and inertial navigation motion estimation are two independent modules whose output results are fused; in tight coupling, one set of variables is estimated jointly from the raw data of both sensors, so the sensor noises also influence each other. Tightly coupled algorithms are more complex, but they make full use of the sensor data and can achieve better results. Chinese patent publication No. CN110487267A discloses a positioning method based on loose coupling of UWB and VIO (visual inertial odometry); the algorithm is a loosely coupled framework that fuses UWB information, visual information and inertial data by Kalman filtering. However, due to the loose coupling, this method does not perform well in terms of positioning error.
Disclosure of Invention
The present invention aims to overcome the defects of the prior art and provide an indoor positioning method and device based on UWB and VIO, which can flexibly and accurately position in a complex indoor environment.
The purpose of the invention can be realized by the following technical scheme:
the invention provides an indoor positioning method based on UWB and VIO, which comprises the following steps:
obtaining a UWB positioning result by adopting a DS-TWR algorithm and a Kalman filtering algorithm;
obtaining a VIO positioning result by adopting an S-MSCKF algorithm;
carrying out time synchronization processing on the UWB positioning result and the VIO positioning result;
and performing data fusion on the UWB positioning result after time synchronization and the VIO positioning result by using an ES-EKF algorithm to obtain the optimal position estimation of the robot.
Further, the UWB positioning result is obtained by:
performing base station label ranging by adopting a DS-TWR algorithm to obtain ranging data;
and smoothing the ranging data by adopting Kalman filtering to obtain the optimal coordinate estimation as a UWB positioning result.
Further, the VIO positioning result is obtained by the following steps:
detecting characteristic points in the image by adopting an S-MSCKF front end, tracking the extracted characteristic points, and performing rejection processing;
and performing extended Kalman filtering by adopting the S-MSCKF rear end according to the characteristic point coordinates transmitted from the front end and IMU data so as to obtain the optimal estimation value of the current pose of the camera as a VIO positioning result.
Further, the rejection processing comprises rejecting binocular mismatched points or mismatched points between consecutive frames.
Further, the data fusion specifically includes:
an independent coordinate system used in the UWB positioning process is used as a world coordinate system for VIO positioning, and the position and the speed of the mobile robot are used as system state vectors;
predicting the error state and covariance at the moment k by taking the VIO positioning result as the nominal value of the system state vector;
updating the error state and the covariance by taking a positioning information difference value of a UWB positioning result and a VIO positioning result as a system observation vector;
and obtaining the optimal estimation value of the real state at the k moment according to the nominal value and the updated error state, and using the optimal estimation value as the optimal position estimation of the robot.
The invention also provides an indoor positioning device based on UWB and VIO, comprising:
the UWB positioning module adopts a DS-TWR algorithm and a Kalman filtering algorithm to obtain a UWB positioning result;
the VIO positioning module obtains a VIO positioning result by adopting an S-MSCKF algorithm;
and the fusion module is used for carrying out time synchronization processing on the UWB positioning result and the VIO positioning result, and carrying out data fusion on the UWB positioning result and the VIO positioning result after time synchronization by using an ES-EKF algorithm to obtain the optimal position estimation of the robot.
Further, the UWB positioning module includes a plurality of UWB base stations, a UWB tag attached to the mobile robot, and a UWB result acquiring unit configured to:
performing base station label ranging by adopting a DS-TWR algorithm to obtain ranging data;
and smoothing the ranging data by adopting Kalman filtering to obtain the optimal coordinate estimation as a UWB positioning result.
Further, the VIO positioning module comprises a camera, an IMU, and a VIO result acquisition unit configured to:
detecting characteristic points in an image obtained by a camera by adopting an S-MSCKF front end, tracking the extracted characteristic points, and performing rejection processing;
and performing extended Kalman filtering by adopting the S-MSCKF rear end according to the characteristic point coordinates transmitted from the front end and IMU data so as to obtain the optimal estimation value of the current pose of the camera as a VIO positioning result.
Further, the rejection processing comprises rejecting binocular mismatched points or mismatched points between consecutive frames.
Further, the fusion module is configured to:
an independent coordinate system used in the UWB positioning process is used as a world coordinate system for VIO positioning, and the position and the speed of the mobile robot are used as system state vectors;
predicting the error state and covariance at the moment k by taking the VIO positioning result as the nominal value of the system state vector;
updating the error state and the covariance by taking a positioning information difference value of a UWB positioning result and a VIO positioning result as a system observation vector;
and obtaining the optimal estimation value of the real state at the k moment according to the nominal value and the updated error state, and using the optimal estimation value as the optimal position estimation of the robot.
Compared with the prior art, the invention has the following beneficial effects:
1. the UWB positioning and VIO positioning results are subjected to data fusion, which overcomes the defect that UWB positioning is easily affected by non-line-of-sight errors and cannot provide stable positioning information; meanwhile, it solves the problems that VIO positioning errors accumulate over time so that accurate positioning information cannot be provided, and that VIO positioning is greatly limited by lighting conditions and cannot work in dark environments. The invention thus provides long-term, stable and accurate positioning results and effectively improves the positioning accuracy and robustness of the mobile robot.
2. The invention realizes the UWB positioning function by utilizing the DS-TWR algorithm, and has high ranging precision and strong robustness.
3. The final experimental results prove that, in an indoor environment with obstacles, the method and device reduce the maximum positioning error by about 4.4 percent and the root mean square error by about 6.3 percent, overcoming the limitations of the single UWB and VIO positioning methods.
4. The invention applies UWB positioning and VIO positioning technologies and fuses their positioning data through the ES-EKF algorithm to provide a positioning result closer to the real trajectory, which can serve the future field of indoor mobile robots.
Drawings
FIG. 1 is a general schematic frame diagram of the present invention;
FIG. 2 is a UWB positioning schematic;
FIG. 3 is a schematic diagram of the DS-TWR algorithm of the present invention;
FIG. 4 is a comparison chart of UWB ranging data before and after Kalman filtering, wherein (4a) is the overall comparison, (4b) is an enlarged view of region 1 in (4a), and (4c) is an enlarged view of region 2 in (4a);
FIG. 5 is a UWB positioning flow diagram;
FIG. 6 is a front-end visual flow chart of the S-MSCKF algorithm;
FIG. 7 is a flow chart of the S-MSCKF algorithm visual front end initialization first frame;
FIG. 8 is a flow chart of the S-MSCKF algorithm visual front-end tracking feature points;
FIG. 9 is a flow chart of the S-MSCKF algorithm back-end filtering;
FIG. 10 is a flow chart of the ES-EKF algorithm;
FIG. 11 is a comparison chart of experimental positioning results of a small field without obstacles;
fig. 12 is a comparison graph of experimental positioning results of a large field with obstacles.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
Example 1
Referring to fig. 1, the present embodiment provides an indoor positioning method based on UWB and VIO, including: obtaining a UWB positioning result by adopting a DS-TWR algorithm and a Kalman filtering algorithm; obtaining a VIO positioning result by adopting an S-MSCKF algorithm; carrying out time synchronization processing on the UWB positioning result and the VIO positioning result; and performing data fusion on the UWB positioning result after time synchronization and the VIO positioning result by using an ES-EKF algorithm to obtain the optimal position estimation of the robot.
1. UWB positioning
UWB positioning is an active positioning mode. On the premise that the coordinates of the UWB base stations are known, the positioning process mainly comprises two steps: ranging between the base stations and the tag, and solving the tag coordinates. A schematic diagram of the UWB positioning section is shown in fig. 2. After the base stations and the tag are deployed, the system measures the base-station-to-tag distances with the DS-TWR algorithm, smooths the distances with a Kalman filtering algorithm, and then obtains the coordinates of the tag in the current coordinate system through the solving equations.
(1) Base station tag ranging
In this embodiment, base-station-to-tag ranging is implemented with the DS-TWR (Double-Sided Two-Way Ranging) algorithm, a wireless-signal ranging algorithm that performs two signal exchanges between a UWB base station and a tag and establishes equations from them to solve for the base-station-to-tag distance. The main advantages of this algorithm are high ranging precision and strong robustness.
The principle of the DS-TWR algorithm is shown in fig. 3: device A actively transmits data and records a transmit timestamp; device B records a receive timestamp when it receives the data; after a reply delay T_reply, device B transmits data and records a transmit timestamp; when device A receives this data, it records a receive timestamp. A second exchange is then performed on this basis, from which the time of flight T_prop of the UWB signal between device A and device B is calculated. Multiplying T_prop by the speed of light c gives the base-station-to-tag distance.
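For reference, writing T_round1 and T_round2 for the two measured round-trip times and T_reply1 and T_reply2 for the two reply delays, the time of flight in double-sided two-way ranging is commonly estimated as

$$\hat{T}_{prop} = \frac{T_{round1}\,T_{round2} - T_{reply1}\,T_{reply2}}{T_{round1} + T_{round2} + T_{reply1} + T_{reply2}}, \qquad \hat{d} = c\,\hat{T}_{prop},$$

a form whose two exchanges largely cancel the clock-drift error that limits single-sided TWR.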
(2) Tag coordinate solution
Considering that the raw UWB ranging data contain many outlier points, which would affect the subsequent coordinate solution, Kalman filtering is used in this embodiment to smooth the ranging data. Kalman filtering is a data processing method that takes the system observations as the filter input and the estimate of the system state as the filter output, and performs optimal estimation using the statistical characteristics of the system noise and observation noise. For nonlinear systems, the extended form of Kalman filtering is the Extended Kalman Filter (EKF); and according to the choice of system state, Kalman filtering also derives the Error-State Kalman Filter (ESKF).
Let x_k denote the true coordinates of the tag and z_k the tag coordinates observed through UWB; the state transition matrix A_k and the observation matrix C_k are both identity matrices, and the system input u_k is a zero matrix. The coordinate state and covariance of the tag at the current time k can therefore be predicted from the optimal estimates at time k−1 through formula (1):

$$\hat{x}_{k|k-1} = A_k \hat{x}_{k-1|k-1} + u_k, \qquad P_{k|k-1} = A_k P_{k-1|k-1} A_k^T + Q \tag{1}$$

On this basis, the Kalman gain K_k is calculated according to formula (2), and the tag coordinate state and covariance are updated according to formula (3) to obtain the optimal estimates:

$$K_k = P_{k|k-1} C_k^T \left( C_k P_{k|k-1} C_k^T + R \right)^{-1} \tag{2}$$

$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k \left( z_k - C_k \hat{x}_{k|k-1} \right), \qquad P_{k|k} = \left( I - K_k C_k \right) P_{k|k-1} \tag{3}$$

where I is the identity matrix, Q the process noise covariance and R the observation noise covariance.
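As an illustration, a minimal Python sketch of this smoothing step (the noise variances and function names here are assumptions for illustration, not values from the text):

```python
import numpy as np

def kalman_smooth_ranges(ranges, q=1e-3, r=0.05):
    """Smooth a 1-D sequence of UWB range measurements with the
    identity-model Kalman filter of formulas (1)-(3): A = C = 1, u = 0.
    q, r: assumed process/observation noise variances."""
    x = float(ranges[0])  # initial state estimate
    p = 1.0               # initial covariance
    out = []
    for z in ranges:
        # formula (1): prediction with A = 1, u = 0
        x_pred, p_pred = x, p + q
        # formula (2): Kalman gain with C = 1
        k = p_pred / (p_pred + r)
        # formula (3): update of state and covariance
        x = x_pred + k * (z - x_pred)
        p = (1.0 - k) * p_pred
        out.append(x)
    return np.array(out)

# usage:
# smoothed = kalman_smooth_ranges(np.array([3.02, 3.11, 2.95, 3.80, 3.04]))
```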
The results after Kalman filtering are shown in fig. 4.
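Concerning the solving equations mentioned above: with smoothed distances d_i to three base stations at known coordinates (x_i, y_i), the tag coordinates follow from the standard trilateration system (a reconstruction of the usual form, since the text does not reproduce the equations):

$$(x - x_i)^2 + (y - y_i)^2 = d_i^2, \qquad i = 1, 2, 3.$$

Subtracting the third equation from the first two eliminates the quadratic terms, leaving a linear system from which (x, y) is obtained by least squares.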
2. VIO positioning
Considering that the time synchronization requirement of the data fusion is strict, this embodiment adopts the binocular version of the MSCKF algorithm, namely the S-MSCKF algorithm, to perform VIO indoor positioning based on image data and IMU data; the algorithm is computationally efficient and fast, and is suitable for running on embedded platforms with limited computing resources.
The S-MSCKF algorithm is divided into a front-end vision part and a back-end filtering part. The vision front end mainly detects feature points in the image and delivers data such as the feature-point coordinates in the camera coordinate system to the back end; in this process the extracted feature points are tracked, and binocular mismatched points and mismatched points between consecutive frames are rejected. The main function of the S-MSCKF back end is to perform extended Kalman filtering on the IMU state using the feature-point coordinates passed from the vision front end, so as to obtain the optimal estimate of the current camera pose.
(1) Front end vision
The algorithm flow of the front-end vision is shown in fig. 6, which includes:
Firstly, initializing the first frame: after the binocular camera obtains the first frame, FAST feature points are detected in the left image, and the left-eye feature points are then projected onto the right image according to the calibration parameters between the left and right cameras for binocular feature-point matching. Processing such as image-pyramid LK optical flow tracking and epipolar geometric constraints is then carried out to reject feature points that fail tracking or fall outside the image. Finally, the feature points are partitioned into grid cells and the FAST feature-point information is output. This process is illustrated in fig. 7.
Secondly, tracking the feature points: as the camera moves, successive frames are received, and the feature points extracted in the previous frame must be tracked into the next frame while new feature points are extracted. This requires continuously culling left-right mismatched feature points and feature points beyond the image boundary, using epipolar geometric constraints, LK optical flow tracking and two-point RANSAC (see the sketch after this list). This process is illustrated in fig. 8.
Thirdly, adding new feature points: if only the original feature points were tracked, some features would disappear and others would accumulate errors, so new features must continually be added to ensure that the program can keep running.
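By way of illustration only, a minimal Python/OpenCV sketch of the frame-to-frame tracking and mismatch culling described above (the embodiment uses the S-MSCKF front end itself; the fundamental-matrix RANSAC check below stands in for its two-point RANSAC, and the window size and thresholds are assumptions):

```python
import cv2
import numpy as np

def track_features(prev_img, next_img, prev_pts):
    """Track points from prev_img into next_img with pyramidal LK optical
    flow, then reject mismatches with a RANSAC epipolar-geometry check.
    prev_pts: float32 array of shape (N, 1, 2)."""
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_img, next_img, prev_pts, None,
        winSize=(21, 21), maxLevel=3)        # image-pyramid LK tracking
    status = status.ravel().astype(bool)
    h, w = next_img.shape[:2]
    # drop points that failed tracking or left the image boundary
    inside = ((next_pts[:, 0, 0] >= 0) & (next_pts[:, 0, 0] < w) &
              (next_pts[:, 0, 1] >= 0) & (next_pts[:, 0, 1] < h))
    keep = status & inside
    p0, p1 = prev_pts[keep], next_pts[keep]
    if len(p0) >= 8:
        # epipolar-constraint outlier rejection via RANSAC
        _, mask = cv2.findFundamentalMat(p0, p1, cv2.FM_RANSAC, 1.0, 0.99)
        if mask is not None:
            inliers = mask.ravel().astype(bool)
            p0, p1 = p0[inliers], p1[inliers]
    return p0, p1
```

The surviving point pairs are what a front end of this kind passes on as feature observations; new FAST corners would then be detected to refill grid cells left empty.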
(2) Back-end filtering
The back-end filtering algorithm is shown in fig. 9: the back end receives the front-end image feature-point information by subscribing to the relevant topics, and initializes related quantities such as gravity and bias. It then processes the IMU data to construct the state matrix F and the input matrix G of the differential equation system, and solves for the state transition matrix Φ. State prediction, state augmentation and state update are then carried out in sequence, and historical camera states are removed to obtain the current camera state. Finally, the camera pose information topic is published.
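As a sketch of the propagation step only — forming Φ from the state matrix F and propagating the covariance with the input matrix G — one standard continuous-to-discrete approximation is the following (the contents of F, G and the continuous noise covariance Q_c are placeholders, not the embodiment's):

```python
import numpy as np
from scipy.linalg import expm

def propagate(P, F, G, Q_c, dt):
    """One covariance-propagation step of an EKF back end.
    Phi = expm(F * dt) discretizes the error-state dynamics; the process
    noise is accumulated with a first-order approximation."""
    Phi = expm(F * dt)                       # state transition matrix
    Q_d = Phi @ G @ Q_c @ G.T @ Phi.T * dt   # discretized noise covariance
    return Phi, Phi @ P @ Phi.T + Q_d
```

The matrix exponential is one standard way to obtain Φ; implementations often use a truncated series instead for speed.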
3. Data fusion
In order to obtain accurate and reliable positioning data over a long period, the positioning data of UWB and VIO are fused. The main information fusion methods include the weighted average method, the Kalman filtering method, Bayesian inference and neural network algorithms. The weighted average method is simple and intuitive, but the optimal weights are difficult to obtain and computing them takes a large amount of time; in Bayesian inference for multi-sensor fusion, the information is described as probability distributions, prior probabilities and likelihood functions are required, and the analysis and computation are complex; neural network algorithms train and adjust the network weights from input data samples, but they require large amounts of data and have poor real-time performance.
Considering the nonlinear factors, and in order to avoid possible problems such as a singular covariance matrix and gimbal lock, the method adopts the ES-EKF algorithm, suited to nonlinear problems, to perform the information fusion.
In the combined positioning, the independent coordinate system used in the UWB positioning process is taken as the world coordinate system of the VIO positioning. The position and velocity of the mobile robot are then taken as the system state vector; the state vector at time k is

$$X_k = \begin{bmatrix} x_k & y_k & \dot{x}_k & \dot{y}_k \end{bmatrix}^T,$$

where (x_k, y_k) are the position coordinates of the combined system in the two-dimensional plane at time k, and $\dot{x}_k$ and $\dot{y}_k$ are the velocities of the combined system in the x and y directions at time k. Denoting the nominal value of the state vector at time k by $\hat{X}_k$ and its error by $\delta X_k$, the error-state model of the system is:

$$\delta X_k = F\,\delta X_{k-1} + B\,a_{k-1}, \qquad
F = \begin{bmatrix} 1 & 0 & t & 0 \\ 0 & 1 & 0 & t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad
B = \begin{bmatrix} t^2/2 & 0 \\ 0 & t^2/2 \\ t & 0 \\ 0 & t \end{bmatrix} \tag{4}$$

where t is the sampling interval of the combined system and $a_{k-1} = [a_{x,k-1},\, a_{y,k-1}]^T$ holds the accelerations of the combined system in the x and y directions at time k−1 (the constant-velocity forms of F and B are reconstructed here from these definitions). The ES-EKF algorithm flow is shown in fig. 10. The VIO positioning data received by the combined system are used as the state-vector nominal value $\hat{X}_k$, and the error state is then updated. From equation (4), the prediction of the error state and covariance at time k is given by formula (5):

$$\delta X_{k|k-1} = F\,\delta X_{k-1|k-1}, \qquad P_{k|k-1} = F P_{k-1} F^T + Q \tag{5}$$

where P_{k-1} is the optimal estimate of the error-state covariance at time k−1 and Q is the noise covariance. Since the state vector of the combined system only tracks the position and velocity of the system, the acceleration is used as the random noise of the system.
The difference between the positioning information obtained by UWB and by VIO is used as the system observation vector Y_k, so the observation equation of the combined system is

$$Y_k = H\,\delta X_k + V_k \tag{6}$$

where V_k is the observation noise, and Y_k and H take the values (H reconstructed as the position-selection matrix implied by the state definition)

$$Y_k = \begin{bmatrix} x_k^{UWB} - x_k^{VIO} \\ y_k^{UWB} - y_k^{VIO} \end{bmatrix}, \qquad
H = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}.$$

The error state and covariance matrix can then be updated, including calculating the Kalman gain K_k, updating the error state $\delta X_{k|k}$, and updating the covariance matrix P_{k|k}, as in formula (7):

$$K_k = P_{k|k-1} H^T \left( H P_{k|k-1} H^T + R \right)^{-1}, \qquad
\delta X_{k|k} = \delta X_{k|k-1} + K_k \left( Y_k - H\,\delta X_{k|k-1} \right), \qquad
P_{k|k} = \left( I - K_k H \right) P_{k|k-1} \tag{7}$$

where R is the observation noise covariance.
The optimal estimate of the true state at time k is then obtained by adding the updated error state to the nominal value. A minimal Python sketch of this fusion process, consistent with equations (4)–(7), is as follows (the noise values and variable names are illustrative assumptions rather than the original pseudo code):
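```python
import numpy as np

def es_ekf_fuse(vio_xy, uwb_xy, dt, q_acc=0.5, r_obs=0.05):
    """Fuse time-synchronized VIO and UWB positions with the ES-EKF of
    equations (4)-(7). vio_xy, uwb_xy: (N, 2) arrays of positions.
    q_acc, r_obs: assumed acceleration/observation noise variances.
    Returns the (N, 2) optimal position estimates."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                         # constant-velocity model
    B = np.array([[dt**2 / 2, 0.], [0., dt**2 / 2],
                  [dt, 0.], [0., dt]])             # acceleration input matrix
    Q = B @ (q_acc * np.eye(2)) @ B.T              # acceleration as process noise
    H = np.array([[1., 0., 0., 0.],
                  [0., 1., 0., 0.]])               # observe the position error only
    R = r_obs * np.eye(2)

    dx = np.zeros(4)                               # error state
    P = np.eye(4)                                  # error covariance
    fused = []
    for vio, uwb in zip(vio_xy, uwb_xy):
        # (5): predict error state and covariance
        dx = F @ dx
        P = F @ P @ F.T + Q
        # (6): observation is the UWB-minus-VIO position difference
        y = uwb - vio
        # (7): Kalman gain, error-state update, covariance update
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        dx = dx + K @ (y - H @ dx)
        P = (np.eye(4) - K @ H) @ P
        # true-state estimate = nominal (VIO) value + position error
        fused.append(vio + dx[:2])
    return np.array(fused)
```

Filtering only the small error around the VIO nominal trajectory is what keeps the model of equation (4) linear even though the underlying motion is not.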
In order to verify the effectiveness of the method provided by the invention, positioning experiments were carried out on a small field and on a large field: no obstacle exists in the small-field scene, while in the large-field positioning process an indoor pillar acts as an obstacle causing interference. All experiments were conducted at around 14:00, when the illumination environment is good, which facilitates the experiments. The experiments use three base stations with known coordinates and one tag to be positioned. The three UWB base stations are placed so that the pairwise distances between them are as equal as possible, to ensure the accuracy of UWB positioning. The indoor mobile robot moves along a known fixed trajectory in near-uniform motion.
(1) Obstacle-free small-field positioning experiment:
In the obstacle-free small-field positioning experiment, the robot's path contains no obstacles. Taking the data of one positioning experiment as an example, the comparison of the three positioning methods is shown in fig. 11. In the figure, line segment a represents the actual travel trajectory of the mobile robot; line segment b represents the UWB-based positioning trajectory; line segment c represents the VIO-based positioning trajectory; and line segment d represents the positioning trajectory after ES-EKF fusion of the positioning data obtained by the two methods. The averages over 10 experiments are shown in table 1, where MAX denotes the maximum error of the positioning result relative to the true result and RMSE denotes the root mean square error.
TABLE 1 Experimental location error statistical table for small field without obstacles
It can be seen that in a relatively empty indoor setting, positioning relying on UWB alone can reach centimeter-level accuracy, while the VIO positioning method gives satisfactory results in the initial stage; as positioning proceeds, however, accumulated error causes the latter part of the trajectory to drift, making the positioning error excessive. In this case the accuracy of the combined positioning result is inferior to the UWB-only method but superior to the VIO-only method.
(2) Large-site positioning experiment with obstacles:
In the obstacle-laden large-field positioning experiment, the robot's path includes an indoor pillar as an obstacle, which causes certain interference to UWB positioning. The comparison of the three positioning methods is shown in fig. 12. In this case the UWB positioning accuracy suffers large jumps and errors, as indicated by the dotted oval region. The positioning result of the visual-inertial odometry is smooth overall and satisfactory in the initial stage; however, accumulated error causes the positioning result to drift in the latter half of the trajectory, and the result is not ideal at corners, where the camera scene changes greatly. In this case, compared with the two single-sensor positioning methods, the positioning result after ES-EKF fusion has higher accuracy and stronger reliability, and is closer to the actual trajectory. Both the maximum error and the root mean square error relative to the actual trajectory are improved, as shown in table 2.
TABLE 2 statistical table of experimental positioning errors in large field with obstacles
The method adopts the ES-EKF algorithm to fuse UWB positioning data and VIO positioning data; taking a ground mobile robot as the test object, a UWB/VIO hybrid positioning system was constructed on the Ubuntu system and the ROS platform. Two sets of comparison experiments prove that the method can improve the positioning accuracy and robustness of a mobile robot in an indoor environment containing obstacles, showing the practical application prospects of the algorithm.
Example 2
The embodiment provides an indoor positioning device based on UWB and VIO, which comprises a UWB positioning module, a VIO positioning module and a fusion module, wherein the UWB positioning module adopts a DS-TWR algorithm and a Kalman filtering algorithm to obtain a UWB positioning result; the VIO positioning module obtains a VIO positioning result by adopting an S-MSCKF algorithm; and the fusion module performs time synchronization processing on the UWB positioning result and the VIO positioning result, and performs data fusion on the UWB positioning result and the VIO positioning result after time synchronization by using an ES-EKF algorithm to obtain the optimal position estimation of the robot.
In this embodiment, the UWB positioning module mainly includes three UWB base stations, a UWB tag attached to the robot, and a UWB result acquisition unit.
In this embodiment, the VIO positioning module comprises a camera, an IMU and a VIO result acquisition unit, and mainly fuses visual odometry information with inertial measurement data to obtain the current coordinates. The hardware of the VIO positioning module is an Intel RealSense T265 camera, which integrates a pair of wide-field binocular fisheye lenses (OV9282) with a circular field of view of about 165 degrees, a hardware-synchronized Bosch BMI055 IMU (200 Hz gyroscope and 62.5 Hz accelerometer), and a powerful Intel Movidius Myriad 2 VPU (Vision Processing Unit), which allows the VIO positioning module to run on an embedded processor.
An IMU (Inertial Measurement Unit) mainly consists of a gyroscope, an accelerometer and a magnetometer. The gyroscope measures the angular velocity about each axis, the accelerometer measures the acceleration in the x, y and z directions, and the magnetometer measures the surrounding magnetic field. The primary task of the IMU is to fuse the data of these three sensors to obtain more accurate attitude information.
The specific process of implementing positioning by the indoor positioning device based on UWB and VIO in this embodiment is as described in the positioning method in embodiment 1.
The foregoing is a detailed description of preferred embodiments of the invention. It should be understood that numerous modifications and variations can be devised by those skilled in the art in light of the present teachings without departing from the inventive concept. Therefore, technical solutions that those skilled in the art can obtain through logical analysis, reasoning or limited experiments based on the prior art and the concept of the present invention shall fall within the scope of protection defined by the claims.
Claims (10)
1. An indoor positioning method based on UWB and VIO is characterized by comprising the following steps:
obtaining a UWB positioning result by adopting a DS-TWR algorithm and a Kalman filtering algorithm;
obtaining a VIO positioning result by adopting an S-MSCKF algorithm;
carrying out time synchronization processing on the UWB positioning result and the VIO positioning result;
and performing data fusion on the UWB positioning result after time synchronization and the VIO positioning result by using an ES-EKF algorithm to obtain the optimal position estimation of the robot.
2. The UWB and VIO based indoor positioning method of claim 1, wherein the UWB positioning result is obtained by the following steps:
performing base station label ranging by adopting a DS-TWR algorithm to obtain ranging data;
and smoothing the ranging data by adopting Kalman filtering to obtain the optimal coordinate estimation as a UWB positioning result.
3. The UWB and VIO based indoor positioning method of claim 1, wherein the VIO positioning result is obtained by the following steps:
detecting characteristic points in the image by adopting an S-MSCKF front end, tracking the extracted characteristic points, and performing rejection processing;
and performing extended Kalman filtering by adopting the S-MSCKF rear end according to the characteristic point coordinates transmitted from the front end and IMU data so as to obtain the optimal estimation value of the current pose of the camera as a VIO positioning result.
4. The UWB and VIO based indoor positioning method of claim 3, wherein the rejection processing comprises rejecting binocular mismatched points or mismatched points between consecutive frames.
5. The UWB and VIO based indoor positioning method of claim 1, wherein the data fusion is specifically:
an independent coordinate system used in the UWB positioning process is used as a world coordinate system for VIO positioning, and the position and the speed of the mobile robot are used as system state vectors;
predicting the error state and covariance at the moment k by taking the VIO positioning result as the nominal value of the system state vector;
updating the error state and the covariance by taking a positioning information difference value of a UWB positioning result and a VIO positioning result as a system observation vector;
and obtaining the optimal estimation value of the real state at the k moment according to the nominal value and the updated error state, and using the optimal estimation value as the optimal position estimation of the robot.
6. An indoor positioning device based on UWB and VIO, characterized by comprising:
the UWB positioning module adopts a DS-TWR algorithm and a Kalman filtering algorithm to obtain a UWB positioning result;
the VIO positioning module obtains a VIO positioning result by adopting an S-MSCKF algorithm;
and the fusion module is used for carrying out time synchronization processing on the UWB positioning result and the VIO positioning result, and carrying out data fusion on the UWB positioning result and the VIO positioning result after time synchronization by using an ES-EKF algorithm to obtain the optimal position estimation of the robot.
7. The UWB and VIO based indoor positioning device of claim 6, wherein the UWB positioning module comprises a plurality of UWB base stations, a UWB tag attached to the mobile robot, and a UWB result obtaining unit configured to:
performing base station label ranging by adopting a DS-TWR algorithm to obtain ranging data;
and smoothing the ranging data by adopting Kalman filtering to obtain the optimal coordinate estimation as a UWB positioning result.
8. The UWB and VIO based indoor positioning apparatus of claim 6 wherein the VIO positioning module comprises a camera, an IMU and a VIO result acquisition unit configured to:
detecting characteristic points in an image obtained by a camera by adopting an S-MSCKF front end, tracking the extracted characteristic points, and performing rejection processing;
and performing extended Kalman filtering by adopting the S-MSCKF rear end according to the characteristic point coordinates transmitted from the front end and IMU data so as to obtain the optimal estimation value of the current pose of the camera as a VIO positioning result.
9. The UWB and VIO based indoor positioning apparatus of claim 8, wherein the rejection processing comprises rejecting binocular mismatched points or mismatched points between consecutive frames.
10. The UWB and VIO based indoor positioning device of claim 6 wherein the fusion module is configured to:
an independent coordinate system used in the UWB positioning process is used as a world coordinate system for VIO positioning, and the position and the speed of the mobile robot are used as system state vectors;
predicting the error state and covariance at the moment k by taking the VIO positioning result as the nominal value of the system state vector;
updating the error state and the covariance by taking a positioning information difference value of a UWB positioning result and a VIO positioning result as a system observation vector;
and obtaining the optimal estimation value of the real state at the k moment according to the nominal value and the updated error state, and using the optimal estimation value as the optimal position estimation of the robot.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111134020.4A CN113758488B (en) | 2021-09-27 | 2021-09-27 | Indoor positioning method and equipment based on UWB and VIO |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111134020.4A CN113758488B (en) | 2021-09-27 | 2021-09-27 | Indoor positioning method and equipment based on UWB and VIO |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113758488A | 2021-12-07
CN113758488B | 2023-08-29
Family
ID=78797670
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111134020.4A Active CN113758488B (en) | 2021-09-27 | 2021-09-27 | Indoor positioning method and equipment based on UWB and VIO |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113758488B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114485623A (en) * | 2022-02-16 | 2022-05-13 | 东南大学 | Camera-IMU-UWB fusion accurate positioning method for focusing distance |
CN115711616A (en) * | 2022-06-09 | 2023-02-24 | 同济大学 | Indoor and outdoor unmanned aerial vehicle penetrating smooth positioning method and device |
CN116222556A (en) * | 2023-01-13 | 2023-06-06 | 浙江大学 | Indoor positioning method and system based on multi-source sensor fusion |
WO2023173729A1 (en) * | 2022-03-16 | 2023-09-21 | 中国矿业大学 | Accurate positioning method and system based on multi-source information fusion and for monorail crane in underground coal mine |
CN118310523A (en) * | 2024-04-01 | 2024-07-09 | 广东经纬天地科技有限公司 | Indoor positioning method, system, equipment and storage medium |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170227656A1 (en) * | 2016-02-09 | 2017-08-10 | Qualcomm Incorporated | Mobile platform positioning using satellite positioning system and visual-inertial odometry |
CN107389063A (en) * | 2017-07-26 | 2017-11-24 | 重庆邮电大学 | The indoor fusion and positioning method of high accuracy based on GSM/MEMS fusions |
CN110487267A (en) * | 2019-07-10 | 2019-11-22 | 湖南交工智能技术有限公司 | A kind of UAV Navigation System and method based on VIO&UWB pine combination |
CN110315540A (en) * | 2019-07-15 | 2019-10-11 | 南京航空航天大学 | One kind being based on the tightly coupled robot localization method and system of UWB and binocular VO |
CN112378396A (en) * | 2020-10-29 | 2021-02-19 | 江苏集萃未来城市应用技术研究所有限公司 | Hybrid high-precision indoor positioning method based on robust LM visual inertial odometer and UWB |
CN113074739A (en) * | 2021-04-09 | 2021-07-06 | 重庆邮电大学 | UWB/INS fusion positioning method based on dynamic robust volume Kalman |
Non-Patent Citations (2)
Title |
---|
Wang Jiaxin; Li Guilin; Cao Haidong: "Research on an indoor positioning algorithm fusing UWB and IMU technologies", Microcontrollers & Embedded Systems, no. 08 *
Jin Guo; Zhu Qingzhi: "Improvement and simulation of a Kalman filter localization algorithm for mobile robots", Ordnance Industry Automation, no. 04 *
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114485623A (en) * | 2022-02-16 | 2022-05-13 | 东南大学 | Camera-IMU-UWB fusion accurate positioning method for focusing distance |
CN114485623B (en) * | 2022-02-16 | 2024-02-23 | 东南大学 | Focusing distance camera-IMU-UWB fusion accurate positioning method |
WO2023173729A1 (en) * | 2022-03-16 | 2023-09-21 | 中国矿业大学 | Accurate positioning method and system based on multi-source information fusion and for monorail crane in underground coal mine |
CN115711616A (en) * | 2022-06-09 | 2023-02-24 | 同济大学 | Indoor and outdoor unmanned aerial vehicle penetrating smooth positioning method and device |
CN116222556A (en) * | 2023-01-13 | 2023-06-06 | 浙江大学 | Indoor positioning method and system based on multi-source sensor fusion |
CN116222556B (en) * | 2023-01-13 | 2024-03-26 | 浙江大学 | Indoor positioning method and system based on multi-source sensor fusion |
CN118310523A (en) * | 2024-04-01 | 2024-07-09 | 广东经纬天地科技有限公司 | Indoor positioning method, system, equipment and storage medium |
CN118310523B (en) * | 2024-04-01 | 2024-09-10 | 广东经纬天地科技有限公司 | Indoor positioning method, system, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN113758488B (en) | 2023-08-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113758488B (en) | Indoor positioning method and equipment based on UWB and VIO | |
US20240202938A1 (en) | Fault-tolerance to provide robust tracking for autonomous and non-autonomous positional awareness | |
US10390003B1 (en) | Visual-inertial positional awareness for autonomous and non-autonomous device | |
US10192113B1 (en) | Quadocular sensor design in autonomous platforms | |
Zhao et al. | A robust laser-inertial odometry and mapping method for large-scale highway environments | |
US10496104B1 (en) | Positional awareness with quadocular sensor in autonomous platforms | |
US11527084B2 (en) | Method and system for generating a bird's eye view bounding box associated with an object | |
EP3447448B1 (en) | Fault-tolerance to provide robust tracking for autonomous and non-autonomous positional awareness | |
CN114474061B (en) | Cloud service-based multi-sensor fusion positioning navigation system and method for robot | |
CN109341694A (en) | A kind of autonomous positioning air navigation aid of mobile sniffing robot | |
Liu et al. | A Vision‐Based Target Detection, Tracking, and Positioning Algorithm for Unmanned Aerial Vehicle | |
CN109855621A (en) | A kind of composed chamber's one skilled in the art's navigation system and method based on UWB and SINS | |
CN112212852B (en) | Positioning method, mobile device and storage medium | |
CN114459467B (en) | VI-SLAM-based target positioning method in unknown rescue environment | |
US11774983B1 (en) | Autonomous platform guidance systems with unknown environment mapping | |
CN112529962A (en) | Indoor space key positioning technical method based on visual algorithm | |
Perez-Grau et al. | Long-term aerial robot localization based on visual odometry and radio-based ranging | |
Gao et al. | Localization of mobile robot based on multi-sensor fusion | |
CN117451054A (en) | Unmanned aerial vehicle high-precision indoor positioning method based on monocular camera, IMU and UWB multi-sensor fusion | |
CN112945233A (en) | Global drift-free autonomous robot simultaneous positioning and map building method | |
Yusefi et al. | A Generalizable D-VIO and Its Fusion with GNSS/IMU for Improved Autonomous Vehicle Localization | |
CN114323002B (en) | AGV positioning navigation method based on binocular vision, IMU and UWB fusion | |
Liu et al. | Vision-inertial collaborative localization of multi-agents with remote interaction | |
CN117928527B (en) | Visual inertial positioning method based on pedestrian motion feature optimization | |
Merino et al. | Person Tracking in Urban Scenarios by Robots Cooperating with Ubiquitous Sensors |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |