CN116772831A - Mobile robot navigation and positioning system and method based on multi-sensor fusion - Google Patents

Mobile robot navigation and positioning system and method based on multi-sensor fusion

Info

Publication number
CN116772831A
Authority
CN
China
Prior art keywords
tag
data
positioning
value
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310450309.XA
Other languages
Chinese (zh)
Inventor
刘健
张琦
黄达健
赖飞平
雷远进
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Pengkun Zhike Technology Co ltd
Original Assignee
Shenzhen Pengkun Zhike Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Pengkun Zhike Technology Co ltd filed Critical Shenzhen Pengkun Zhike Technology Co ltd
Priority to CN202310450309.XA priority Critical patent/CN116772831A/en
Publication of CN116772831A publication Critical patent/CN116772831A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C21/1652 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C22/00 Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 Services making use of location information
    • H04W4/024 Guidance services
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W64/00 Locating users or terminals or network equipment for network management purposes, e.g. mobility management
    • H04W64/003 Locating users or terminals or network equipment for network management purposes, e.g. mobility management locating network equipment

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention belongs to the field of robot navigation and positioning and discloses a mobile robot navigation and positioning system based on multi-sensor fusion, comprising: at least 4 UWB base stations installed in a given manner within the mobile robot work area; a combined tag device composed of two UWB positioning tags arranged on a connection assembly at a set distance interval and a gyroscope for measuring the rotational angular velocity of the connection assembly; and a main control device for receiving the tag data, the gyroscope data and the mobile robot odometer data, performing real-time fusion processing on the received data, and outputting the positioning data and heading angle data of the current tag device. Through data fusion of the odometer, the gyroscope and the plurality of UWB tags, the invention solves problems such as the large positioning error and the lack of heading angle information of a UWB positioning system, and can provide more accurate navigation and positioning information for various indoor and outdoor mobile robots such as automatic driving forklifts, stacker trucks, tractors and AGV carts. A corresponding method is also disclosed.

Description

Mobile robot navigation and positioning system and method based on multi-sensor fusion
Technical Field
The invention belongs to the field of robot navigation and positioning, and particularly relates to a mobile robot navigation and positioning system and method based on multi-sensor fusion.
Background
The two main navigation and positioning approaches currently used in the mobile robot field are two-dimensional code navigation and laser SLAM navigation. Two-dimensional code navigation is essentially visual navigation: a camera captures images of two-dimensional codes laid on the ground, and an image recognition algorithm identifies the code and calculates the vehicle pose. Its advantages include wide applicability, high positioning accuracy and low algorithm complexity; its disadvantages include troublesome installation and deployment, the inability to change the navigation route freely, and high later maintenance cost of the codes. Laser SLAM has the advantages of high accuracy, high heading angle accuracy and convenient deployment, but it is expensive; in addition, the laser radar has strict requirements on the usage scene and ground flatness, and requires enough fixed objects with reflective characteristics around it, so it cannot adapt to wide-open scenes or scenes containing many glass walls or production equipment that is moved frequently. As more and more application scenarios undergo intelligent retrofitting, the above navigation and positioning schemes cannot meet the navigation and positioning requirements of mobile robots, and a new solution is needed. UWB (Ultra-Wideband) positioning technology is a scheme with great potential and has been a hot research topic in recent years; UWB positioning systems based on the TDOA (Time Difference Of Arrival) principle are widely used for positioning personnel and materials in various places. However, because of poor positioning accuracy caused by NLOS (Non-Line-Of-Sight) interference and the lack of heading angle information, UWB positioning systems are still difficult to apply to mobile robot navigation and positioning.
At present, there are studies that fuse other sensors on the basis of UWB positioning, but most of them are theoretical, and most of the few published patents fail to consider the practical requirements of robot navigation and positioning, making application and deployment difficult. The invention patent application with publication number CN112833876A and application number CN202011625879.0 discloses a multi-robot cooperative positioning method fusing an odometer and UWB; it adopts a graph optimization algorithm of relatively high complexity and requires the cooperation of multiple robots, so its practical deployment and application conditions are demanding. Another UWB-IMU combined indoor positioning method, based on a sliding-window factor graph, uses IMU pre-integration data to perform graph optimization on the UWB positioning data and can reduce NLOS interference to a certain extent; however, because it lacks a global heading angle tracking and correction mechanism, the integration of gyroscope errors causes large accumulated errors in the reference attitude, and the IMU coordinate-frame data lose reference value.
The current mainstream technical schemes for UWB positioning fall into two categories. One is based on TDOA (Time Difference Of Arrival): the tag only needs to transmit a signal once, the system obtains the time differences of the signal arriving at each base station, and the tag position can then be calculated by combining the coordinates of the base stations. The other is based on TWR (Two-Way Ranging): through multiple communications between the tag and a base station, the time of flight (TOF) of the electromagnetic wave between them is obtained and multiplied by the speed of light to obtain the distance between the base station and the tag. Under line-of-sight propagation, the tag-to-base-station distance error measured in this way is generally about 10 cm, and once the distances from the tag to at least 3 base stations are obtained, 2D planar positioning with an accuracy of about 10 cm can be achieved through the trilateration principle. Under non-line-of-sight (NLOS) propagation (e.g. refraction, reflection or penetration of objects), however, the ranging values deviate considerably and the positioning error becomes large. One difficulty a UWB positioning algorithm therefore needs to solve is filtering out non-line-of-sight propagation interference; most currently disclosed algorithms filter the computed positioning result to reduce the positioning error caused by non-line-of-sight propagation, and few algorithms directly filter the raw TWR ranging data. The invention provides a gradient outlier detection and filtering algorithm based on a sliding window, which can effectively detect and remove data whose ranging error exceeds a certain threshold.
Disclosure of the Invention
In order to solve the above technical problems, the invention provides a mobile robot navigation and positioning system and method based on multi-sensor fusion, which solve problems such as the large positioning error and the lack of heading angle information of a UWB positioning system through data fusion of an odometer, a gyroscope and a plurality of UWB tags, and can provide more accurate navigation and positioning information for various indoor and outdoor mobile robots such as automatic driving forklifts, stacker trucks, tractors and AGV carts. The final purpose is to provide the mobile robot with accurate positioning information and heading angle information. The positioning information is the spatial position of the robot in the map, usually expressed as spatial coordinates; the heading angle information is the orientation of the robot, usually expressed as the angle between the positive X axis of the robot body and the X axis of the map. The heading angle is necessary information for path navigation and motion control of a mobile robot.
In order to achieve the above object, the present invention provides a mobile robot navigation and positioning system based on multi-sensor fusion, comprising:
a UWB base station system comprising at least 4 UWB base stations installed in a mobile robot work area;
the combined tag device comprises at least two UWB tags and a gyroscope, wherein the UWB tags are arranged on the connecting component at intervals, the gyroscope is used for measuring the rotation angular velocity of the connecting component, and the gyroscope is arranged at the center of the connecting component;
the main control device comprises a tag interface, a gyroscope interface and an odometer interface and is used for receiving the combined tag device data, the gyroscope data and the mobile robot odometer data, carrying out real-time fusion processing on the received data and outputting the current combined tag device positioning data and heading angle data;
the mobile robot odometer is connected with the main control device through an odometer interface.
Preferably, the UWB base stations are divided into different positioning areas, including indoor areas, outdoor areas, or areas spanning indoor and outdoor.
Preferably, the installation distance interval between the UWB tags is set according to the size of the robot body; the gyroscope may be an inertial navigation device comprising a gyroscope and an accelerometer.
The second aspect of the invention provides a mobile robot navigation and positioning method based on multi-sensor fusion, comprising the following steps:
S1: construct a sliding window filter to detect and filter the NLOS-contaminated distance data from the tag to each base station, obtaining filtered tag-to-base-station distance data; the specific steps are as follows:
S11: obtain the distance data from the tag to each base station in real time through the tag, and store the data into the corresponding array according to base station ID. Assuming there are m base stations and the sliding window size is set to n, m distance arrays of length n are obtained: D_1, D_2, ..., D_m, where D = D(1), D(2), ..., D(n);
S12: record one of the arrays as D_i, and calculate the distance gradient absolute value array d_i by the following formula:
d_i(n) = |D_i(n) - D_i(n-1)|
Calculate the standard deviation of d_i and compare it with the set threshold. If it is smaller than the set standard-deviation threshold, the distance array D_i contains no large value jumps, the influence of NLOS noise is small, and the corresponding distance data keep their original values; if it is larger than the set standard-deviation threshold, the distance array D_i contains large value jumps and is strongly affected by NLOS, and the data with large gradient changes most likely contain NLOS noise and need to be found and removed, as in S13;
S13: traverse the gradient array d_i whose standard deviation is larger than the set standard-deviation threshold. If the j-th value d_i(j) is smaller than the set gradient threshold, keep the original distance value, i.e. FD_i(j) = D_i(j); if d_i(j) is larger than the set gradient threshold, discard that distance value and replace it with the previous value, i.e. FD_i(j) = D_i(j-1);
S14: repeat operations S12 and S13 for each distance array D_1, D_2, ..., D_m to obtain tag-to-base-station distance data FD_1, FD_2, ..., FD_m in which NLOS noise is largely filtered out.
Preferably, the invention proposes an improved least-squares trilateration algorithm. S2: divide the filtered distance data into several combinations of 3 base stations each, calculate several groups of positioning data using the least-squares method, calculate the confidence of each group of positioning data, and compute the final tag positioning data as a confidence-weighted average. The specific implementation steps are as follows:
S21: according to a permutation-and-combination algorithm, divide the m base stations of the current positioning area into k groups of 3 base stations each, and calculate the coordinates of each group with the least-squares algorithm, the general least-squares solution being:
X = (A^T A)^(-1) A^T b
where X is the matrix to be solved; in a 2D planar trilateration system its specific form is X = [x, y];
where A is a parameter matrix; in a 2D planar trilateration system the specific form of A is as follows:
where x_1, y_1, x_2, y_2, x_3, y_3 in the above are the x-axis and y-axis coordinates of the 3 base stations, respectively;
wherein b is a parameter matrix, and in the 2D plane trilateral positioning system, the specific form of b is as follows:
where x_1, y_1, x_2, y_2, x_3, y_3 are the x-axis and y-axis coordinates of the 3 base stations, and s_1, s_2, s_3 are the horizontal distances from the tag to the 3 base stations, which can be calculated from the tag installation height and the tag-to-base-station distance information using the following formula:
s = sqrt(d^2 - (z - h)^2)
where d is the straight-line distance from the tag to the base station with the specified ID, taken from the filtered distance data FD_1, FD_2, ..., FD_m output by S1; for example, for ID = 1, taking the latest data as the measurement result, the tag-to-base-station distance is d = FD_1(n); z is the z-axis coordinate of the current base station, and h is the mounting height of the tag;
S22: according to the calculation method given in S21, substitute the specific values of the parameters into the corresponding formulas to calculate k groups of positioning data: p_1, p_2, ..., p_k. A common algorithm directly averages this group of data as the final positioning result; this is simple, but cannot effectively remove positioning data containing NLOS noise, so the resulting positioning accuracy is low. The invention therefore evaluates and weights each result as follows:
1. First, calculate the sum of the squares of the differences between the distances from the calculated positioning point to each base station and the distances measured by the tag:
sum = Σ_{i=1..3} ( sqrt((x_p - x_i)^2 + (y_p - y_i)^2) - s_i )^2
where x_p, y_p is the positioning data of one of the base station combinations, x_i, y_i are the x-axis and y-axis coordinates of the i-th base station, i ∈ {1, 2, 3}, and s_i is the horizontal distance from the tag to the i-th base station. In theory, the more accurate the distance measurements, the more accurate the least-squares positioning result and the smaller this sum; the larger the measured distance errors, the larger the positioning error. To facilitate later calculation and visual display, the sum is transformed to obtain the confidence c of the positioning result using the following formula:
where c is the confidence of the positioning result, whose value lies in the range (0, 1); the higher the confidence, the smaller the likely positioning error;
2. According to the method given in step 1, calculate the confidence of each positioning data in the positioning result array p_1, p_2, ..., p_k to obtain the corresponding confidence array c_1, c_2, ..., c_k. Traverse the array, remove the data whose confidence is smaller than the set threshold, and keep the data whose confidence is larger than the set threshold, obtaining a new confidence array c_1, c_2, ..., c_n and the corresponding positioning result array p_1, p_2, ..., p_n. When n = 0, the confidence of every positioning result is smaller than the set threshold and the positioning is directly judged to have failed; when 1 ≤ n ≤ k, the positioning result array is weighted-averaged according to the confidences to obtain the final positioning result p_mean[x, y]:
p_mean = ( Σ_{i=1..n} c_i · p_i ) / ( Σ_{i=1..n} c_i )
S23: perform operations S1, S21 and S22 on tag A and tag B respectively to obtain the positioning data of the two tags: p_tag1[x, y] and p_tag2[x, y];
Preferably, S3: calculate the heading angle of the combined tag device from the two tag positioning data, and couple the tag heading angle data with the gyroscope angular velocity using a Kalman filtering algorithm to obtain fused heading angle data; the specific steps are as follows:
S31: calculate the angle of the vector from tag B to tag A from the coordinate information of tag A and tag B in the map, and then calculate the heading angle of the robot from the installation angle of the tag connection assembly relative to the X axis of the robot body. Because the tag positioning data jitter within a certain error range and the magnitude of the tag baseline vector is limited, the heading angle calculated in this way contains a large amount of high-frequency noise and cannot be used directly for robot navigation; viewed over the global map, however, its long-term accumulated error approaches 0, so it can be used as the observation input of a Kalman filter to calibrate the accumulated error produced by gyroscope integration. The specific calculation formula is as follows:
where yaw_mes is the observed heading angle output of the vehicle body, (x_tag1, y_tag1) and (x_tag2, y_tag2) are the positioning data p_tag1[x, y] and p_tag2[x, y] of tag A and tag B respectively, and fix_angle is the installation angle of the tag connection assembly relative to the X axis of the robot body;
S32: with the heading angle yaw_mes calculated from the two-tag positioning as the measurement input and the gyroscope angular velocity ω as the prediction input, construct a Kalman filter equation set for heading angle fusion. In this system the state transition matrix, control matrix and measurement matrix are all 1×1 identity matrices; substituting them into the standard Kalman filter equations gives the simplified Kalman filter equation set:
X(k|k-1) = X(k-1|k-1) + U(k)
The above is the Kalman state prediction equation, where X is the fused heading angle yaw_fusion, X(k|k-1) is the a priori (predicted) estimate of the current state, i.e. the predicted current heading angle, X(k-1|k-1) is the optimal estimate of the previous state, and U(k) is the control quantity of the system; in this system U(k) = ω·Δt, where ω is the angular velocity value returned by the gyroscope measurement and Δt is the update time interval of the Kalman filter;
P(k|k-1) = P(k-1|k-1) + Q
The above is the Kalman covariance prediction equation, where P(k|k-1) is the a priori covariance estimate of the current state, P(k-1|k-1) is the optimal covariance estimate of the previous state, and Q is the process noise covariance of the system; in this system Q is the gyroscope angular velocity measurement noise;
K(k) = P(k|k-1) / (P(k|k-1) + R)
The above is the Kalman gain equation, where K(k) is the Kalman gain of the system at time k, P(k|k-1) is the a priori covariance estimate from the prediction step, and R is the measurement noise covariance of the system; in this system R is the measurement noise of the tag heading angle yaw_mes;
X(k|k) = X(k|k-1) + K(k)·(Z(k) - X(k|k-1))
The above is the Kalman optimal state estimation equation, where X(k|k) is the current optimal state estimate of the system, i.e. the current fused heading angle yaw_fusion, X(k|k-1) is the a priori state estimate from the prediction step, K(k) is the Kalman gain of the system, and Z(k) is the measurement of the system; in this system Z(k) = yaw_mes;
P(k|k) = (1 - K(k))·P(k|k-1)
The above is the Kalman a posteriori covariance estimation equation, where P(k|k) is the a posteriori estimate covariance of the system, K(k) is the Kalman gain of the system, and P(k|k-1) is the a priori covariance estimate from the prediction step;
S33: repeat S31 and S32 to continuously iterate and update the Kalman filter, obtaining the real-time fused heading angle yaw_fusion.
Preferably, S4: take the mean of the two tag positioning data as the positioning data of the combined tag device, and then couple the tag positioning data, the odometer data and the heading angle data using a first-order low-pass filtering algorithm to obtain the fused positioning data p_fusion; the specific steps are as follows:
S41: take the mean of the two tag positioning data as the positioning data p_tag of the combined tag device, namely:
p_tag = (p_tag1 + p_tag2) / 2
S42: preprocess the return data of the odometer so that its input data format is the instantaneous movement velocity in the vehicle body frame at the current moment, and convert it to the center of the tag connection assembly to obtain the instantaneous movement velocity v_tag^body of the combined tag device relative to the vehicle body at the current moment. Considering only two-dimensional planar motion, the Euler angles of the vehicle body relative to the map coordinate system, (yaw_fusion, 0, 0), are obtained from the fused heading angle yaw_fusion; for ease of computation they are converted into an attitude quaternion q, which is used to transform the coordinates of v_tag^body and obtain the movement velocity of the combined tag device in the map coordinate system:
v_tag^map = q · v_tag^body · q^(-1)
where q is the attitude quaternion and q^(-1) is the inverse of the quaternion;
S43: use a first-order low-pass filtering algorithm to couple the tag positioning data p_tag with the tag movement velocity calculated from the odometer; the calculation formula is as follows:
where p_fusion(t) is the fused positioning data at the current moment, p_fusion(t-1) is the fused positioning data at the previous moment, p_tag(t) is the tag positioning data at the current moment, p_tag(t-1) is the tag positioning data at the previous moment, v_tag^map is the movement velocity of the combined tag device in the map coordinate system at the current moment, Δt is the time interval between moment t and moment t-1, and w is the odometer data weighting coefficient;
According to the above technical scheme, by cyclically executing S1, S2, S3 and S4, the real-time positioning data p_fusion[x, y] and heading angle data yaw_fusion of the positioning device are obtained; the two can be combined into the fused pose data pose_fusion[x, y, yaw], which can provide positioning and navigation support for various indoor and outdoor mobile robots.
Compared with the prior art, the invention has at least the following beneficial effects:
the method provided by the invention can effectively remove TWR distance data and positioning data containing NLOS errors in the UWB positioning system, effectively couple tag positioning data, gyroscope data and mileage data by adopting algorithms such as Kalman filtering, first-order low-pass filtering and the like, ensure the continuity of data output and improve the reliability and robustness of the navigation positioning system while improving the positioning and course angle precision; positioning data p output by the system fusion [x,y]And heading angle data yaw fusion Can be combined into fusion pose data phase fusion [x,y,yaw]The data can provide positioning navigation support for various indoor and outdoor mobile robots.
The foregoing is only an overview of the technical solution of the present invention. In order to enable those skilled in the art to understand the technical means of the present invention more clearly, the present invention is further described below with reference to the accompanying drawings.
Drawings
FIG. 1 is a block diagram of a mobile robot navigation and positioning system based on multi-sensor fusion;
FIG. 2 is a flow chart of a mobile robot navigation and positioning method based on multi-sensor fusion;
FIG. 3 is a visual view of the positioning results of a mobile robot navigation positioning system based on multi-sensor fusion;
in the figure: 1. a UWB base station system; 11. a UWB base station; 2. a combination tag device; 21. a label A; 22. a label B; 23. a gyroscope; 3. and a master control device.
Detailed Description
Referring to FIG. 1, the present embodiment provides a mobile robot navigation and positioning system based on multi-sensor fusion, including:
a UWB base station system 1, used for performing TWR ranging in cooperation with the tags; the system comprises at least 4 UWB base stations 11 installed in a given manner within the mobile robot work area. The number and installation positions of the UWB base stations 11 can be set flexibly according to the actual application scenario, but the following basic principles should be followed:
1) The maximum effective transmission distance of a UWB base station 11 is typically around 50 m to 100 m; it should be ensured that the tag can receive signals from at least 4 UWB base stations 11 at any location in the operating area;
2) The UWB base stations 11 should be arranged as close as possible to a rectangle with an aspect ratio of 1:1, and the maximum aspect ratio should not exceed 3:1;
3) The UWB base stations 11 should be installed as far away from metal reflecting surfaces as possible; if this cannot be achieved because of site constraints, a wave-absorbing material can be attached to the reflecting surface;
4) The distance between the mounting point of a UWB base station 11 and any wall corner should be more than 1.0 m, and the distance to a wall surface more than 0.5 m;
5) The UWB base stations 11 should be installed in the same plane as far as possible, and the distance between this plane and the working plane of the tags should be more than 1.0 m;
a combined tag device 2, composed of a tag A 21, a tag B 22 and a gyroscope 23, wherein the tag A 21 and the tag B 22 are installed at the left and right ends of the connection assembly at a distance interval of about 0.8 m, and the gyroscope 23 is installed at the center of the connection assembly; the connection assembly can be a rigid strip of material with length, width and height of about 820 x 30 x 3 mm, the actual size being determined according to factors such as the size of the robot body and the tag distance interval, while ensuring that the connection is stable and reliable;
a main control device 3, used for receiving the data of the combined tag device 2, the data of the gyroscope 23 and the mobile robot odometer data, performing real-time fusion processing on the received data, and outputting the positioning data and heading angle data of the current combined tag device 2. On the one hand, the main control device 3 should contain a sufficient number of suitable hardware connection ports that can be used to connect sensors or data input interfaces such as the UWB tags, the gyroscope 23 and the vehicle-mounted odometer and to complete data interaction; on the other hand, the main control device 3 should have sufficient data storage, data cache and computing resources to meet the operation requirements of the navigation and positioning method provided by the invention. The parameters of the embedded main control device used in this embodiment are as follows:
1. Hardware interfaces: UART x4, CAN x2, IO x8
CPU: ARM Cortex-A7 core, operating frequency 900 MHz, 128 KB L2 cache
RAM: 16-bit LP-DDR2, DDR3/DDR3L, 1 GB
ROM: 16-bit parallel NOR FLASH/PSRAM, 16 GB
Referring to FIG. 2, this embodiment provides a mobile robot navigation and positioning method based on multi-sensor fusion, which includes the following steps:
S1: construct a sliding window filter to detect and filter the NLOS-contaminated distance data from the tag to each UWB base station 11, obtaining filtered tag-to-base-station distance data; the specific steps are as follows:
S11: obtain the distance data from the tag to each UWB base station 11 in real time through the tag, and store the data into the corresponding array according to the UWB base station 11 ID. Assuming there are m UWB base stations 11 and the sliding window size is set to n, m distance arrays of length n are obtained: D_1, D_2, ..., D_m, where D = D(1), D(2), ..., D(n).
The tag data return frequency used in this embodiment is 20 Hz, and the corresponding sliding window size n is set to 20;
S12: record one of the arrays as D_i, and calculate the distance gradient absolute value array d_i by the following formula:
d_i(n) = |D_i(n) - D_i(n-1)|
Calculate the standard deviation of d_i and compare it with the set threshold. If it is smaller than the set standard-deviation threshold, the distance array D_i contains no large value jumps, the influence of NLOS noise is small, and the corresponding distance data keep their original values; if it is larger than the set standard-deviation threshold, the distance array D_i contains large value jumps and is strongly affected by NLOS, and the data with large gradient changes most likely contain NLOS noise and need to be found and removed, as in S13;
S13: set the standard deviation threshold sd_threshold = 0.2 m and the gradient threshold gradient_threshold = 0.3 m. The smaller these threshold parameters, the more easily data containing NLOS errors are removed, but some valid data may be removed together with the NLOS data, leaving too little usable data; the parameters therefore need to be set reasonably according to the actual test conditions. Traverse the gradient array d_i whose standard deviation is greater than sd_threshold: if the j-th value d_i(j) is smaller than gradient_threshold, keep the original distance value, i.e. FD_i(j) = D_i(j); if d_i(j) is larger than gradient_threshold, discard that distance value and replace it with the previous value, i.e. FD_i(j) = D_i(j-1);
S14: repeat operations S12 and S13 for each distance array D_1, D_2, ..., D_m to obtain tag-to-UWB-base-station distance data FD_1, FD_2, ..., FD_m in which NLOS noise is largely filtered out.
Further, the invention provides an improved least-squares trilateration algorithm. S2: divide the filtered distance data into several combinations of 3 base stations each, calculate several groups of positioning data using the least-squares method, calculate the confidence of each group of positioning data, and compute the final tag positioning data as a confidence-weighted average. The specific implementation steps are as follows:
S21: according to a permutation-and-combination algorithm, divide the m UWB base stations 11 of the current positioning area into k groups of 3 base stations each, and calculate the coordinates of each group with the least-squares algorithm, the general least-squares solution being:
X = (A^T A)^(-1) A^T b
where X is the matrix to be solved; in a 2D planar trilateration system its specific form is X = [x, y];
where A is a parameter matrix; in a 2D planar trilateration system the specific form of A is as follows:
where x_1, y_1, x_2, y_2, x_3, y_3 in the above are the x-axis and y-axis coordinates of the 3 UWB base stations 11, respectively;
wherein b is a parameter matrix, and in the 2D plane trilateral positioning system, the specific form of b is as follows:
where x_1, y_1, x_2, y_2, x_3, y_3 are the x-axis and y-axis coordinates of the 3 UWB base stations 11, and s_1, s_2, s_3 are the horizontal distances from the tag to the 3 UWB base stations 11, which are calculated from the tag installation height and the tag-to-base-station distance information using the following formula:
s = sqrt(d^2 - (z - h)^2)
where d is the straight-line distance from the tag to the UWB base station 11 with the specified ID, taken from the filtered distance data FD_1, FD_2, ..., FD_m output by S1; for example, for ID = 1, taking the latest data as the measurement result, the tag-to-base-station distance is d = FD_1(n); z is the z-axis coordinate of the current UWB base station 11, and h is the mounting height of the tag;
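A minimal Python sketch of the per-group solution of S21 follows. Since the concrete A and b matrices are not reproduced in this text, the sketch uses the common linearization obtained by subtracting pairs of circle equations, which fits the same X = (A^T A)^(-1) A^T b form; the anchor coordinates and ranges in the usage example are assumptions.

```python
import numpy as np

def horizontal_distance(d: float, z: float, h: float) -> float:
    """Project a slant range d onto the tag working plane, given the base station
    z coordinate and the tag mounting height h: s = sqrt(d^2 - (z - h)^2)."""
    return float(np.sqrt(max(d * d - (z - h) ** 2, 0.0)))

def trilaterate(anchors: np.ndarray, s: np.ndarray) -> np.ndarray:
    """Least-squares 2D position X = (A^T A)^-1 A^T b for one group of 3 anchors."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    A = 2.0 * np.array([[x2 - x1, y2 - y1],
                        [x3 - x1, y3 - y1]])
    b = np.array([s[0]**2 - s[1]**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  s[0]**2 - s[2]**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(A.T @ A, A.T @ b)   # [x, y] of the tag

if __name__ == "__main__":
    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])    # assumed anchor layout
    slant = [6.0, 7.4833, 7.4833]                                  # latest filtered ranges FD_i(n)
    s = np.array([horizontal_distance(d, z=3.0, h=1.0) for d in slant])
    print(trilaterate(anchors, s))                                 # approximately [4.0, 4.0]
```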
S22: according to the calculation method given in S21, substitute the specific values of the parameters into the corresponding formulas to calculate k groups of positioning data: p_1, p_2, ..., p_k. A common algorithm directly averages this group of data as the final positioning result; this is simple, but cannot effectively remove positioning data containing NLOS noise, so the resulting positioning accuracy is low. The invention therefore evaluates and weights each result as follows:
1. First, calculate the sum of the squares of the differences between the distances from the calculated positioning point to each UWB base station 11 and the distances measured by the tag:
sum = Σ_{i=1..3} ( sqrt((x_p - x_i)^2 + (y_p - y_i)^2) - s_i )^2
where x_p, y_p is the positioning data of one of the base station combinations, x_i, y_i are the x-axis and y-axis coordinates of the i-th UWB base station 11, i ∈ {1, 2, 3}, and s_i is the horizontal distance from the tag to the i-th UWB base station 11. In theory, the more accurate the distance measurements, the more accurate the least-squares positioning result and the smaller this sum; the larger the measured distance errors, the larger the positioning error. To facilitate later calculation and visual display, the sum is transformed to obtain the confidence c of the positioning result using the following formula:
where c is the confidence of the positioning result, whose value lies in the range (0, 1); the higher the confidence, the smaller the likely positioning error;
2. According to the method given in step 1, calculate the confidence of each positioning data in the positioning result array p_1, p_2, ..., p_k to obtain the corresponding confidence array c_1, c_2, ..., c_k. Traverse the array, remove the data whose confidence is smaller than the set threshold, and keep the data whose confidence is larger than the set threshold, obtaining a new confidence array c_1, c_2, ..., c_n and the corresponding positioning result array p_1, p_2, ..., p_n. When n = 0, the confidence of every positioning result is smaller than the set threshold and the positioning is directly judged to have failed; when 1 ≤ n ≤ k, the positioning result array is weighted-averaged according to the confidences to obtain the final positioning result p_mean[x, y]:
p_mean = ( Σ_{i=1..n} c_i · p_i ) / ( Σ_{i=1..n} c_i )
the positioning confidence threshold set in the embodiment is 0.4, the smaller the confidence threshold is, the more severe the positioning result is, the higher the probability of removing positioning data containing NLOS errors is, but the probability of tag positioning failure is correspondingly increased, of course, the invention fuses the odometer data, the tag positioning failure in a short period of time does not influence the fused positioning output of the system, but still the accumulated error of the odometer is required to be corrected by the tag positioning data, and the parameter cannot be set too small to ensure that enough effective tag positioning data can be obtained;
S23: perform operations S1, S21 and S22 on tag A and tag B respectively to obtain the positioning data of the two tags: p_tag1[x, y] and p_tag2[x, y];
Further, S3: calculate the heading angle of the combined tag device 2 from the two tag positioning data, and couple the tag heading angle data with the angular velocity of the gyroscope 23 using a Kalman filtering algorithm to obtain fused heading angle data; the specific steps are as follows:
S31: calculate the angle of the vector from tag B to tag A from the coordinate information of tag A and tag B in the map, and then calculate the heading angle of the robot from the installation angle of the tag connection assembly relative to the X axis of the robot body. Because the tag positioning data jitter within a certain error range and the magnitude of the tag baseline vector is limited, the heading angle calculated in this way contains a large amount of high-frequency noise and cannot be used directly for robot navigation; viewed over the global map, however, its long-term accumulated error approaches 0, so it can be used as the observation input of a Kalman filter to calibrate the accumulated error produced by integration of the gyroscope 23. The specific calculation formula is as follows:
where yaw_mes is the observed heading angle output of the vehicle body, (x_tag1, y_tag1) and (x_tag2, y_tag2) are the positioning data p_tag1[x, y] and p_tag2[x, y] of tag A and tag B respectively, and fix_angle is the installation angle of the tag connection assembly relative to the X axis of the robot body;
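The yaw_mes formula itself is not reproduced in this text, so the short Python sketch below is only an assumed reading of S31: the atan2 of the tag-B-to-tag-A vector corrected by fix_angle, with the sign and direction conventions being assumptions.

```python
import math

def heading_from_tags(p_tag1, p_tag2, fix_angle: float) -> float:
    """Observed vehicle heading yaw_mes from the two tag positions (assumed convention)."""
    dx = p_tag1[0] - p_tag2[0]          # vector from tag B to tag A in the map frame
    dy = p_tag1[1] - p_tag2[1]
    yaw_mes = math.atan2(dy, dx) - fix_angle
    return math.atan2(math.sin(yaw_mes), math.cos(yaw_mes))   # normalize to (-pi, pi]
```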
S32: with the heading angle yaw_mes calculated from the two-tag positioning as the measurement input and the angular velocity ω of the gyroscope 23 as the prediction input, construct a Kalman filter equation set for heading angle fusion. In this system the state transition matrix, control matrix and measurement matrix are all 1×1 identity matrices; substituting them into the standard Kalman filter equations gives the simplified Kalman filter equation set:
X(k|k-1) = X(k-1|k-1) + U(k)
The above is the Kalman state prediction equation, where X is the fused heading angle yaw_fusion, X(k|k-1) is the a priori (predicted) estimate of the current state, i.e. the predicted current heading angle, X(k-1|k-1) is the optimal estimate of the previous state, and U(k) is the control quantity of the system; in this system U(k) = ω·Δt, where ω is the angular velocity value returned by the measurement of the gyroscope 23 and Δt is the update time interval of the Kalman filter;
P(k|k-1) = P(k-1|k-1) + Q
The above is the Kalman covariance prediction equation, where P(k|k-1) is the a priori covariance estimate of the current state, P(k-1|k-1) is the optimal covariance estimate of the previous state, and Q is the process noise covariance of the system; in this system Q is the angular velocity measurement noise of the gyroscope 23, set to 0.0001;
K(k) = P(k|k-1) / (P(k|k-1) + R)
The above is the Kalman gain equation, where K(k) is the Kalman gain of the system at time k, P(k|k-1) is the a priori covariance estimate from the prediction step, and R is the measurement noise covariance of the system; in this system R is the measurement noise of the tag heading angle yaw_mes, set to 10.0;
X(k|k) = X(k|k-1) + K(k)·(Z(k) - X(k|k-1))
The above is the Kalman optimal state estimation equation, where X(k|k) is the current optimal state estimate of the system, i.e. the current fused heading angle yaw_fusion, X(k|k-1) is the a priori state estimate from the prediction step, K(k) is the Kalman gain of the system, and Z(k) is the measurement of the system; in this system Z(k) = yaw_mes;
P(k|k) = (1 - K(k))·P(k|k-1)
The above is the Kalman a posteriori covariance estimation equation, where P(k|k) is the a posteriori estimate covariance of the system, K(k) is the Kalman gain of the system, and P(k|k-1) is the a priori covariance estimate from the prediction step;
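Before the iteration step S33 below, the simplified scalar filter of S32 can be summarized in a short Python sketch (an illustration, not the patent's reference code), using Q = 0.0001 and R = 10.0 as set in this embodiment; calling update() at every cycle performs the repeated iteration described in S33, and the class and variable names are assumptions.

```python
class HeadingKalman:
    """Scalar Kalman filter fusing the gyroscope rate with the tag heading yaw_mes."""

    def __init__(self, q: float = 0.0001, r: float = 10.0):
        self.x = 0.0   # fused heading yaw_fusion
        self.p = 1.0   # estimate covariance
        self.q = q     # gyroscope process noise Q
        self.r = r     # tag heading measurement noise R

    def update(self, omega: float, dt: float, yaw_mes: float) -> float:
        x_pred = self.x + omega * dt              # X(k|k-1) = X(k-1|k-1) + U(k), U(k) = omega * dt
        p_pred = self.p + self.q                  # P(k|k-1) = P(k-1|k-1) + Q
        k = p_pred / (p_pred + self.r)            # K(k) = P(k|k-1) / (P(k|k-1) + R)
        self.x = x_pred + k * (yaw_mes - x_pred)  # X(k|k)
        self.p = (1.0 - k) * p_pred               # P(k|k)
        return self.x                             # current yaw_fusion
```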
S33: repeat S31 and S32 to continuously iterate and update the Kalman filter, obtaining the real-time fused heading angle yaw_fusion.
Further, S4: take the mean of the two tag positioning data as the positioning data of the combined tag device 2, and then couple the tag positioning data, the odometer data and the heading angle data using a first-order low-pass filtering algorithm to obtain the fused positioning data p_fusion; the specific steps are as follows:
S41: take the mean of the two tag positioning data as the positioning data p_tag of the combined tag device 2, namely:
p_tag = (p_tag1 + p_tag2) / 2
S42: preprocess the return data of the odometer so that its input data format is the instantaneous movement velocity in the vehicle body frame at the current moment, and convert it to the center of the tag connection assembly to obtain the instantaneous movement velocity v_tag^body of the combined tag device 2 relative to the vehicle body at the current moment. Considering only two-dimensional planar motion, the Euler angles of the vehicle body relative to the map coordinate system, (yaw_fusion, 0, 0), are obtained from the fused heading angle yaw_fusion; for ease of computation they are converted into an attitude quaternion q, which is used to transform the coordinates of v_tag^body and obtain the movement velocity of the combined tag device 2 in the map coordinate system:
v_tag^map = q · v_tag^body · q^(-1)
where q is the attitude quaternion and q^(-1) is the inverse of the quaternion;
S43: use a first-order low-pass filtering algorithm to couple the tag positioning data p_tag with the tag movement velocity calculated from the odometer; the calculation formula is as follows:
where p_fusion(t) is the fused positioning data at the current moment, p_fusion(t-1) is the fused positioning data at the previous moment, p_tag(t) is the tag positioning data at the current moment, p_tag(t-1) is the tag positioning data at the previous moment, v_tag^map is the movement velocity of the combined tag device 2 in the map coordinate system at the current moment, Δt is the time interval between moment t and moment t-1, and w is the odometer data weighting coefficient, whose range is [0, 1]. The larger w is set, the larger the influence of the odometer data on the fused positioning data and the smaller the influence of the tag positioning data; its specific value is set according to the accuracy of the odometer and the application scenario, a typical value being 0.1;
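As an illustration of S42 and S43, the Python sketch below first rotates the planar body-frame velocity of the tag device into the map frame (scipy's Rotation is used as a convenient stand-in for the quaternion product q·v·q^(-1)), and then blends the odometer prediction with the UWB tag position. Since the exact S43 equation is not reproduced in this text, the complementary form below (odometer term weighted by w, tag term weighted by 1 - w, with w = 0.1 as the typical value given above) is an assumption consistent with the described behaviour.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def body_velocity_to_map(v_body_xy, yaw_fusion: float) -> np.ndarray:
    """S42: rotate the planar body-frame velocity (vx, vy) of the tag device into the map frame."""
    q = Rotation.from_euler("zyx", [yaw_fusion, 0.0, 0.0])     # attitude from Euler angles (yaw_fusion, 0, 0)
    v_map = q.apply(np.array([v_body_xy[0], v_body_xy[1], 0.0]))
    return v_map[:2]

def fuse_position(p_fusion_prev: np.ndarray, p_tag: np.ndarray,
                  v_tag_map: np.ndarray, dt: float, w: float = 0.1) -> np.ndarray:
    """S43 (assumed form): blend the odometer-predicted position with the UWB tag position."""
    p_odom = p_fusion_prev + v_tag_map * dt      # dead-reckoned prediction from the odometer
    return w * p_odom + (1.0 - w) * p_tag        # fused p_fusion(t)
```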
According to the above technical scheme, by cyclically executing S1, S2, S3 and S4, the real-time positioning data p_fusion[x, y] and heading angle data yaw_fusion of the positioning device are obtained.
Experiments prove that the method provided by the invention can effectively remove the TWR distance data and positioning data containing NLOS errors in the UWB positioning system, and effectively couples the tag positioning data, the data of the gyroscope 23 and the odometer data using algorithms such as Kalman filtering and first-order low-pass filtering, improving positioning and heading angle accuracy while ensuring the continuity of data output and improving the reliability and robustness of the navigation and positioning system. The positioning data p_fusion[x, y] and heading angle data yaw_fusion output by the system can be combined into the fused pose data pose_fusion[x, y, yaw], which can provide positioning and navigation support for various indoor and outdoor mobile robots. The operating effect of this embodiment is shown in FIG. 3.
The above description covers only the preferred embodiments of the present invention and is not intended to limit the present invention. It should be noted that those skilled in the art can make several improvements and modifications without departing from the technical principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A mobile robotic navigation positioning system based on multi-sensor fusion, comprising:
a UWB base station system comprising at least 4 UWB base stations installed in a mobile robot work area;
the combined tag device comprises at least two UWB tags and a gyroscope, wherein the UWB tags are arranged on the connecting component at intervals, the gyroscope is used for measuring the rotation angular velocity of the connecting component, and the gyroscope is arranged at the center of the connecting component;
the main control device comprises a tag interface, a gyroscope interface and an odometer interface and is used for receiving the tag device data, the gyroscope data and the mobile robot odometer data, carrying out real-time fusion processing on the received data and outputting the current tag device positioning data and heading angle data;
the mobile robot odometer is connected with the main control device through an odometer interface.
2. A mobile robot navigation and positioning system based on multi-sensor fusion according to claim 1, wherein:
the UWB base stations are divided into different positioning areas including indoor, outdoor or across indoor and outdoor.
3. A mobile robot navigation and positioning system based on multi-sensor fusion according to claim 1, wherein:
the installation distance interval between the UWB labels is set according to the size of the robot body; the gyroscope is inertial navigation equipment comprising a gyroscope and an accelerometer.
4. A mobile robot navigation and positioning method based on multi-sensor fusion, using the system as claimed in any one of claims 1-3, comprising the steps of:
S1: constructing a sliding window filter to detect and filter the NLOS-contaminated distance data from the tag to each base station, obtaining filtered tag-to-base-station distance data;
S2: dividing the distance data obtained in S1 into a plurality of combinations of 3 base stations each, solving a plurality of groups of positioning data by using a least-squares trilateration algorithm, solving the tag positioning data by confidence weighting, and obtaining the positioning data of a tag A and a tag B respectively by using this algorithm;
S3: calculating the heading angle of the tag device according to the positioning data of the tag A and the tag B, and coupling the tag heading angle data with the gyroscope angular velocity by using a Kalman filtering algorithm to obtain fused heading angle data;
S4: taking the average value of the positioning data of the tag A and the tag B as the positioning data of the combined tag device, and coupling the tag positioning data, the odometer data and the heading angle data by using a first-order low-pass filtering algorithm to obtain the fused positioning data p_fusion.
5. The mobile robot navigation and positioning method based on multi-sensor fusion according to claim 4, wherein the step S1 comprises the following steps:
S11: the distance data from the tag to each base station are obtained in real time through the tag and stored into corresponding arrays according to base station ID; if there are m base stations and the sliding window size is set to n, m distance arrays of length n are obtained: D_1, D_2, ..., D_m, where D = D(1), D(2), ..., D(n);
S12: one of the arrays is recorded as D_i, and the distance gradient absolute value array d_i is calculated by the following formula:
d_i(n) = |D_i(n) - D_i(n-1)|
Calculate the standard deviation of d_i and compare it with the set threshold; if it is smaller than the set standard-deviation threshold, the distance array D_i contains no large value jumps, the influence of NLOS noise is small, and the corresponding distance data keep their original values; if it is larger than the set standard-deviation threshold, the distance array D_i contains large value jumps and is strongly affected by NLOS, and the data with large gradient changes most likely contain NLOS noise and need to be found and removed, as in S13;
S13: traversing the gradient array d_i whose standard deviation is greater than the set standard-deviation threshold; if the j-th value d_i(j) is smaller than the set gradient threshold, the original distance value is kept, i.e. FD_i(j) = D_i(j); if d_i(j) is larger than the set gradient threshold, that distance value is discarded and replaced with the previous value, i.e. FD_i(j) = D_i(j-1);
S14: repeating operations S12 and S13 for each distance array D_1, D_2, ..., D_m to obtain the tag-to-base-station distance data FD_1, FD_2, ..., FD_m in which NLOS noise is largely filtered out.
6. The mobile robot navigation and positioning method based on multi-sensor fusion according to claim 5, wherein the step S2 comprises the following steps:
S21, dividing the m base stations of the current positioning area into k groups of 3 base stations each according to a permutation-and-combination algorithm, and respectively calculating the coordinates of each group by using the least-squares algorithm;
S22, substituting the specific values of the parameters into the corresponding formulas according to the calculation method given in S21 to calculate k groups of positioning data: p_1, p_2, ..., p_k; calculating the confidence of each positioning result by using a least-squares result evaluation algorithm, removing the data with low confidence, and outputting the weighted average of the remaining data according to their confidences as the final positioning result;
S23, performing operations S1, S21 and S22 on the tag A and the tag B respectively to obtain the positioning data of the tag A and the tag B: p_tag1[x, y] and p_tag2[x, y].
7. The mobile robot navigation and positioning method based on multi-sensor fusion according to claim 6, wherein the step S3 comprises the following steps:
S31: calculating the angle of the vector from tag B to tag A according to the coordinate information of the tag A and the tag B in the map, and calculating the heading angle of the robot according to the installation angle of the tag connection assembly relative to the X axis of the robot body, the specific calculation formula being as follows:
where yaw_mes is the observed heading angle output of the vehicle body, (x_tag1, y_tag1) and (x_tag2, y_tag2) are the positioning data p_tag1[x, y] and p_tag2[x, y] of the tag A and the tag B respectively, and fix_angle is the installation angle of the tag connection assembly relative to the X axis of the robot body;
S32: with the heading angle yaw_mes calculated from the two-tag positioning as the measurement input and the gyroscope angular velocity ω as the prediction input, a Kalman filter equation set is constructed for heading angle fusion; in this system the state transition matrix, control matrix and measurement matrix are all 1×1 identity matrices, and substituting them into the standard Kalman filter equations gives the simplified Kalman filter equation set:
X(k|k-1)=X(k-1|k-1)+U(k)
The above formula is the Kalman state prediction equation, where X is the fused heading angle yaw_fusion, X(k|k-1) is the predicted (prior) estimate of the current state, i.e. the predicted heading angle, X(k-1|k-1) is the optimal estimate from the previous step, and U(k) is the control quantity of the system; in this system U(k) = ω×Δt, where ω is the angular velocity returned by the gyroscope and Δt is the update interval of the Kalman filter;
P(k|k-1)=P(k-1|k-1)+Q
The above formula is the Kalman covariance prediction equation, where P(k|k-1) is the prior covariance estimate of the current step, P(k-1|k-1) is the optimal covariance estimate from the previous step, and Q is the process noise covariance of the system, here the gyroscope angular-velocity measurement noise;
K(k)=P(k|k-1)/(P(k|k-1)+R)

The above formula is the Kalman gain equation, where K(k) is the Kalman gain of the system at time k, P(k|k-1) is the prior covariance estimate of the current step, and R is the measurement noise covariance of the system, here the measurement noise of the tag heading angle yaw_mes;
X(k|k)=X(k|k-1)+K(k)(Z(k)-X(k|k-1))

The above formula is the optimal state estimation equation of the Kalman system, where X(k|k) is the current optimal state estimate, i.e. the current fused heading angle yaw_fusion, X(k|k-1) is the predicted (prior) state estimate of the current step, K(k) is the Kalman gain of the system, and Z(k) is the measurement of the system; in this system Z(k) = yaw_mes;
P(k|k)=(1-K(k))P(k|k-1)
The above formula is the posterior covariance update equation of the Kalman system, where P(k|k) is the posterior covariance estimate of the system, K(k) is the Kalman gain of the system, and P(k|k-1) is the prior covariance estimate predicted in the previous equation;
s33: repeating steps S31 and S32 to continuously update the Kalman filter and obtain the real-time fused heading angle yaw_fusion.
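With all matrices reduced to 1×1 as stated in s32, the whole filter collapses to a scalar recursion; the sketch below assumes placeholder noise values Q and R and ignores angle wrapping of the innovation for brevity.

```python
class HeadingKalman:
    """Scalar Kalman filter of s32: gyro-rate prediction, two-tag yaw measurement."""
    def __init__(self, yaw0=0.0, P0=1.0, Q=1e-4, R=1e-2):
        self.x = yaw0   # fused heading yaw_fusion
        self.P = P0
        self.Q = Q      # process noise (gyro angular-velocity noise), placeholder value
        self.R = R      # measurement noise of yaw_mes, placeholder value

    def update(self, omega, yaw_mes, dt):
        # prediction: X(k|k-1) = X(k-1|k-1) + omega*dt,  P(k|k-1) = P(k-1|k-1) + Q
        x_pred = self.x + omega * dt
        P_pred = self.P + self.Q
        # correction: K = P/(P+R), X(k|k) = X(k|k-1) + K*(Z(k) - X(k|k-1))
        K = P_pred / (P_pred + self.R)
        self.x = x_pred + K * (yaw_mes - x_pred)
        self.P = (1.0 - K) * P_pred
        return self.x
```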
8. The mobile robot navigation and positioning method based on multi-sensor fusion according to claim 7, wherein the measured heading angle yaw_mes may also be obtained from 3, 4 or more UWB tags, or from a single UWB tag equipped with an array antenna.
9. The mobile robot navigation and positioning method based on multi-sensor fusion according to claim 8, wherein the step S4 comprises the following steps:
s41: taking the average value of the two tag positioning results as the positioning data p_tag of the combined tag device, namely p_tag = (p_tag1 + p_tag2)/2;
s42: preprocessing the odometer return data so that the odometry input is the instantaneous movement speed of the vehicle body relative to the previous moment, and converting it to the center of the tag connection assembly to obtain the instantaneous movement speed of the combined tag device relative to the previous moment; considering only two-dimensional planar motion, the Euler angle of the vehicle body relative to the map coordinate system, (yaw_fusion, 0, 0), is obtained from the fused heading angle yaw_fusion and, for ease of operation, converted into an attitude quaternion q; the quaternion is then used to transform the speed coordinates, yielding the movement speed of the combined tag device in the map coordinate system, with the calculation formula as follows:
In the above formula, q is the attitude quaternion and q^(-1) is the inverse of the quaternion;
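Because only the yaw component is non-zero, the quaternion transform of s42 reduces to a planar rotation; a small sketch, with the lever-arm conversion from the vehicle body to the tag-assembly center omitted:

```python
import math

def body_to_map_velocity(vx, vy, yaw_fusion):
    """s42 sketch: rotate the combined-tag velocity into the map frame."""
    q_w = math.cos(yaw_fusion / 2.0)   # attitude quaternion (w, 0, 0, z) from yaw only
    q_z = math.sin(yaw_fusion / 2.0)
    # q * v * q^-1 for a pure-yaw quaternion is the ordinary 2-D rotation:
    c = 1.0 - 2.0 * q_z * q_z          # cos(yaw_fusion)
    s = 2.0 * q_w * q_z                # sin(yaw_fusion)
    return c * vx - s * vy, s * vx + c * vy
```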
s43: coupling the tag positioning data p_tag with the tag movement speed calculated from the odometry by means of a first-order low-pass filtering algorithm, with the calculation formula as follows:
In the above formula, p_fusion(t) is the fused positioning data at the current moment, p_fusion(t-1) is the fused positioning data at the previous moment, p_tag(t) is the tag positioning data at the current moment, p_tag(t-1) is the tag positioning data at the previous moment, the speed term is the movement speed of the combined tag device in the map coordinate system at the current moment, Δt is the time interval between moment t and moment t-1, and w is the weight coefficient of the odometry data.
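The exact first-order low-pass expression is not reproduced in this text; one common complementary form consistent with the listed symbols, assumed here, blends the odometry prediction with the tag fix:

```python
def fuse_position(p_fusion_prev, p_tag, v_map, dt, w=0.9):
    """s43 sketch, assuming p_fusion(t) = w*(p_fusion(t-1) + v*dt) + (1-w)*p_tag(t)."""
    x = w * (p_fusion_prev[0] + v_map[0] * dt) + (1.0 - w) * p_tag[0]
    y = w * (p_fusion_prev[1] + v_map[1] * dt) + (1.0 - w) * p_tag[1]
    return x, y
```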
10. The mobile robot navigation and positioning method based on multi-sensor fusion according to claim 9, wherein the odometer data sources include a single steering wheel, two-wheel differential, Mecanum wheel or Ackermann structure.
CN202310450309.XA 2023-04-25 2023-04-25 Mobile robot navigation and positioning system and method based on multi-sensor fusion Pending CN116772831A (en)

Publications (1)

Publication Number Publication Date
CN116772831A (en) 2023-09-19

Family

ID=88008874



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination