CN113706612A - Underground coal mine vehicle positioning method fusing UWB and monocular vision SLAM - Google Patents


Info

Publication number
CN113706612A
CN113706612A · Application CN202111259920.1A · Granted as CN113706612B
Authority
CN
China
Prior art keywords
uwb
vehicle
positioning
slam
monocular vision
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111259920.1A
Other languages
Chinese (zh)
Other versions
CN113706612B (en)
Inventor
邹盛
周李兵
沈科
于政乾
王天宇
王芳
赵叶鑫
王国庆
季亮
陈珂
Current Assignee
Tiandi Changzhou Automation Co Ltd
Changzhou Research Institute of China Coal Technology and Engineering Group Corp
Original Assignee
Tiandi Changzhou Automation Co Ltd
Changzhou Research Institute of China Coal Technology and Engineering Group Corp
Priority date
Filing date
Publication date
Application filed by Tiandi Changzhou Automation Co Ltd, Changzhou Research Institute of China Coal Technology and Engineering Group Corp filed Critical Tiandi Changzhou Automation Co Ltd
Priority to CN202111259920.1A priority Critical patent/CN113706612B/en
Publication of CN113706612A publication Critical patent/CN113706612A/en
Application granted granted Critical
Publication of CN113706612B publication Critical patent/CN113706612B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30 Services specially adapted for particular environments, situations or purposes
    • H04W4/40 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention relates to the technical field of autonomous driving of vehicles in coal mines, and in particular to an underground vehicle positioning method fusing UWB and monocular visual SLAM. The method acquires an image from the vehicle's forward-facing camera; loads an underground target-detection model and performs object detection on the image; loads a global map built with ORB-SLAM; performs feature matching between the ROI of the same target in a map keyframe and in the current camera image; determines the vehicle pose from the successfully matched frame and reads the vehicle's position in the map; unifies both position estimates into the UWB coordinate system and obtains position and velocity updates; and updates the vehicle motion and observation quantities via EKF (extended Kalman filtering), finally yielding the underground vehicle position after fusion of UWB and visual SLAM. The method achieves high-precision positioning of vehicles underground.

Description

Underground coal mine vehicle positioning method fusing UWB and monocular vision SLAM
Technical Field
The invention relates to the technical field of autonomous driving of vehicles in coal mines, and in particular to a method for positioning a vehicle in an underground coal mine by fusing UWB and monocular visual SLAM.
Background
In closed, confined spaces similar to underground coal mines, unmanned technology for autonomous vehicle operation is relatively mature. For example, in closed environments such as docks, airports, stations and industrial parks, special-purpose vehicles (ferry vehicles, street sweepers and the like) already operate safely and efficiently without human involvement. In mining, autonomous haul trucks have been operating in open-pit mines abroad for many years; however, the autonomous-vehicle positioning solutions for both surface sites and open-pit mines rely on GNSS (Global Navigation Satellite System) signals.
Compared with an open-pit mine, the underground coal mine environment has no GNSS signal, suffers from the "gallery effect" of visual SLAM (simultaneous localization and mapping), and offers only low UWB (ultra-wideband) dynamic positioning accuracy. The "gallery effect" refers to the tendency of SLAM to mismatch and lose tracking in long, straight, wall-bounded roadways. Surface autonomous-driving positioning schemes therefore cannot be applied directly underground, and a simple, efficient positioning method tailored to underground conditions is needed.
Underground autonomous driving is still in its infancy, and positioning of underground autonomous vehicles mostly relies on indoor positioning technologies such as Bluetooth, UWB, RFID (radio-frequency identification), infrared, ultrasound and ZigBee (a wireless network protocol). UWB is currently the most widely used of these and has also been applied to positioning vehicles and personnel in coal mines: positioning base stations are installed underground, and a positioning terminal on the vehicle measures the distances from the vehicle to the base stations, from which the vehicle's underground position is computed. UWB positioning typically uses one of three measurement modes: angle of arrival (AOA), received signal strength (RSS), and time of arrival (TOA) or time difference of arrival (TDOA). High-precision UWB positioning is generally based on time measurements. With the TDOA method, only clock synchronization among the positioning base stations (the reference nodes) needs to be considered; synchronization between the mobile station and the reference nodes is unnecessary, which makes the method easy to implement at low equipment cost. Vision-based simultaneous localization and mapping (SLAM) has developed rapidly in recent years; visual SLAM research falls into three broad categories: monocular, binocular (or multi-camera), and RGB-D. Most visual SLAM systems work by tracking keypoints across successive camera frames, triangulating their 3D positions, and using this information to estimate the camera's own pose.
Mainstream SLAM algorithms include EKF-based (extended Kalman filter) SLAM, particle-filter-based SLAM, graph-optimization-based SLAM and the like. Compared with other SLAM methods, ORB-SLAM combined with an EKF offers higher positioning accuracy and better real-time performance.
UWB is an active positioning method; although its accuracy is relatively high, large errors arise when the signal is disturbed. Monocular ORB-SLAM is a passive positioning method; tracking easily fails when the environment lacks distinctive features or the vehicle moves fast, causing the system to break down, and because it is a relative positioning method its estimate diverges over time. From this analysis, no single positioning method can deliver robust, high-precision results in a complex indoor environment.
Disclosure of Invention
The present invention is directed to solving at least one of the problems of the prior art.
Therefore, the invention provides an underground coal mine vehicle positioning method fusing UWB and monocular visual SLAM, which achieves high-precision positioning of vehicles underground.
The method for positioning a vehicle in an underground coal mine by fusing UWB and monocular visual SLAM provided by an embodiment of the invention comprises the following steps:
step 1, acquiring an image from the vehicle's forward-facing camera;
step 2, loading an underground target-detection model and performing object detection on the image;
step 3, loading the global map built with ORB-SLAM;
step 4, performing feature matching between the ROI of the same target in a map keyframe and in the current forward-facing camera image;
step 5, determining the vehicle pose from the successfully matched frame and reading the vehicle's position in the map;
step 6, unifying the vehicle pose determined by ORB-SLAM and the underground vehicle coordinates determined by UWB into the UWB coordinate system, and obtaining the vehicle's underground x- and y-coordinate position and velocity updates;
step 7, updating the vehicle motion and observation quantities via EKF (extended Kalman filtering), finally yielding the underground vehicle position after fusion of UWB and visual SLAM.
By combining monocular ORB-SLAM with UWB positioning, the invention lets monocular ORB-SLAM reduce the influence of UWB non-line-of-sight errors, while adding UWB improves positioning reliability and guarantees accuracy; the combined positioning method finally achieves high-precision indoor positioning, improving both the accuracy and the timeliness of underground vehicle positioning.
Further specifically, in the above technical solution, in step 6 the UWB coordinate system is obtained as follows:
step 6.1, maintaining time synchronization of the UWB base stations via a TDOA wireless network;
step 6.2, resolving the UWB positioning information with the Chan algorithm;
step 6.3, optimizing the Chan solution with median-average filtering, finally yielding the vehicle's underground UWB position.
Further specifically, in the above technical solution, in step 6.2, raw UWB positioning data are obtained from the vehicle-mounted UWB positioning tag card and refined into UWB positioning information by solving and optimization, as follows:
step 6.2.1, maintaining time synchronization: TDOA requires either clock synchronization among the positioning base stations or knowledge of their mutual timing offsets. With wireless-network-based synchronization, any base station broadcasts a ranging message; the other base stations receive it and record the measured time difference T. The actual time difference T′ is computed from the relative positions of the base stations and the propagation speed of electromagnetic waves, giving each base station's timing offset ΔT = T − T′. Because of clock drift, synchronization must be repeated continuously during positioning;
step 6.2.2, the vehicle-mounted UWB positioning tag card emits a UWB signal; all base stations within range receive it, the message flight-time differences between the vehicle tag card and each base station are obtained by the TDOA method, and the corresponding distance differences between the tag card and the base stations are computed.
Further specifically, in the above technical solution, monocular visual SLAM is used to obtain a pose estimate: after the vehicle's forward-facing camera captures an image, the ORB-SLAM algorithm solves for the camera pose, yielding the vehicle position information.
Further specifically, in the above technical solution, a target-detection model trained on a detection data set built for underground coal-mine targets performs object detection on the image and returns the bounding-box region of each target.
Further specifically, in the above technical solution, the bounding-box region of the same target in the current camera frame and in a map keyframe is taken as the image-matching ROI, and ORB feature matching is performed on this ROI to obtain the matched images' ORB features, formed by oriented FAST keypoints and rotated BRIEF descriptors.
Further specifically, in the above technical solution, for each ORB descriptor in the previous frame and each ORB descriptor in the next frame, the Hamming distance is computed by a brute-force matching algorithm; points whose distance lies within a set threshold are taken as the best matches between the two frames, giving the number Q of matched feature points.
Further specifically, in the above technical solution, a threshold U on the number of matched feature points is set and compared with Q. If Q ≥ U, matching succeeds; ORB features of adjacent frames are tracked during vehicle motion to determine the current camera pose, and the target vehicle's position is read from the visual map according to the determined pose, completing vehicle positioning.
Further specifically, in the above technical solution, in step 6.2.2, when there are more than three positioning base stations, the Chan algorithm applies weighted least squares twice.
Further specifically, in the above technical solution, data fusion is performed: in the combined monocular ORB-SLAM/UWB method, the independent coordinate system used in UWB positioning serves as the global coordinate system, the position computed by monocular ORB-SLAM is transformed into the UWB coordinate system, and the UWB and monocular ORB-SLAM data are fused to obtain the fused vehicle position and velocity update.
Drawings
To illustrate the embodiments of the invention or the prior-art solutions more clearly, the drawings used in their description are briefly introduced below. The drawings obviously show only some of the embodiments described in this application; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of the distance of a vehicle locating terminal from a base station.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects solved by the present invention more clearly apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, the method for positioning the vehicle under the coal mine with the fusion of the UWB and the monocular vision SLAM of the present invention specifically includes the following steps:
step 1, acquiring a front-view camera image of a vehicle;
step 2, loading an underground target detection model for image target detection;
step 3, loading the global map made by ORB-SLAM;
step 4, performing feature matching on the same target ROI area in the map key frame and the current forward-looking camera image;
step 5, determining the pose of the vehicle through the successfully matched frame, and reading the position of the vehicle in a map;
step 6, unifying the vehicle pose determined by ORB-SLAM and the underground vehicle coordinates determined by UWB into the UWB coordinate system, and obtaining the vehicle's underground x- and y-coordinate position and velocity updates; the pose determined by ORB-SLAM includes the vehicle's accelerations in the x and y directions, and the UWB-determined coordinates are the vehicle's underground x and y coordinates;
in the 6 th step, the UWB coordinate system is acquired as follows:
step 6.1, maintaining time synchronization of UWB base stations by adopting TDOA (time difference of arrival) wireless network, wherein TDOA is a method for positioning by using time difference of arrival, and is also called hyperbolic positioning;
step 6.2, resolving UWB positioning information through a Chan algorithm; in the 6.2 th step, the UWB positioning information is obtained by obtaining original UWB positioning data using a vehicle-mounted UWB positioning tag card, and obtaining UWB positioning information by resolving and optimizing, and the specific steps are as follows: step 6.2.1, keeping time synchronization: TDOA needs to keep clock synchronization of positioning base stations or obtain positioning time difference among the positioning base stations, any positioning base station broadcasts and sends a ranging message by adopting a wireless network-based synchronization mode, other positioning base stations receive the ranging message, a test time difference T is counted, an actual time difference T 'is calculated according to the relative position of the base stations and the propagation speed of electromagnetic waves, and the positioning time difference delta T of each base station is obtained, wherein delta T = T-T', and the time synchronization needs to be continuously carried out in the positioning process due to clock drift; and 6.2.2, the vehicle-mounted UWB positioning tag card sends out a UWB signal once, all base stations in the tag positioning distance receive the wireless signal, the message flight time difference between the vehicle tag card and each base station is obtained through a TDOA method, and the distance difference between the tag card and the base station is calculated. In the 6.2.2 step, when more than 3 positioning base stations exist, a Chan algorithm is adopted to obtain a better calculation result by fully utilizing redundant data through a twice weighted least square method (WLS) under the environment that noise obeys Gaussian distribution of zero mean value, and higher positioning accuracy is obtained.
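As an illustration of steps 6.2.1 and 6.2.2, the sketch below (the function name, the choice of station S1 as reference, and the clean-timing assumption are ours, not the patent's) corrects each base station's measured arrival time by its synchronization offset ΔT and converts the resulting TDOAs into distance differences:

```python
# Speed of light in m/s; UWB pulses propagate at essentially this speed.
C = 299_792_458.0

def range_differences(arrival_times, offsets):
    """Apply each station's sync offset (the delta-T of step 6.2.1), then turn
    the TDOAs relative to reference station S1 into distance differences R_{i,1}."""
    corrected = [t - dt for t, dt in zip(arrival_times, offsets)]
    t1 = corrected[0]                       # station S1 is the reference
    return [C * (ti - t1) for ti in corrected[1:]]
```

With synchronized clocks (all offsets zero), a signal arriving 100 ns later at S2 than at S1 corresponds to a path roughly 30 m longer.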
Referring to fig. 2, the distance from the vehicle positioning terminal M to the base station S1 is R1, the distance from the vehicle positioning terminal M to the base station S2 is R2, the distance from the vehicle positioning terminal M to the base station S3 is R3, and the distance from the vehicle positioning terminal M to the base station Sn is Rn, where n is a positive integer greater than or equal to 1.
The specific steps of the Chan algorithm are as follows:
assuming that a vehicle UWB positioning terminal M (x, y) is a position to be estimated for vehicle positioning, coordinates of a S1 base station, coordinates of a S2 base station, coordinates of a S3 base station … … Sn base station are known, the coordinates are Si (xi, yi), i =1, 2, …, n, wherein n is a positive integer greater than or equal to 1, and then the distance between M and xi is:
Figure 584262DEST_PATH_IMAGE001
(1)
wherein R represents a vehicle-to-base distance; riIndicating vehicle to SiBase station distance; (xi, yi) represents coordinates of the Si base station; (x, y) represents the coordinates of the vehicle in the underground, which are obtained by the vehicle positioning terminal;
based on S1, the difference between the positioning distances M to Si (i ≠ 1) and M to S1 is:
Figure 387133DEST_PATH_IMAGE002
(2)
wherein c represents a radio wave propagation speed;
Figure 525990DEST_PATH_IMAGE003
represents the time-of-flight difference from M to Si (i ≠ 1); riIndicating vehicle location terminals M to SiBase station distance; r1Indicating the distance of the vehicle to the base station of S1.
From equations (1) and (2) the following linear relations are obtained:

\( x_{i,1}\,x + y_{i,1}\,y + R_{i,1} R_1 = \tfrac{1}{2}\left(K_i - K_1 - R_{i,1}^2\right) \)   (3)

\( K_i = x_i^2 + y_i^2 \)   (4)

\( x_{i,1} = x_i - x_1 \)   (5)

\( y_{i,1} = y_i - y_1 \)   (6)

where \( x_{i,1} \) is the difference between the abscissas of base stations \( S_i \) and \( S_1 \); \( y_{i,1} \) is the difference between their ordinates; \( K_i \) is formed from the coordinates of base station \( S_i \); and \( K_1 \) is the case i = 1 of equation (4). The base-station coordinates \( x_i, y_i \) are all known constants, while \( x \), \( y \) and \( R_1 \) are unknown variables; the system of linear equations (3)–(6) can therefore be solved for unique coordinate values.
When there are more than three positioning base stations, the TDOA measurements yield more nonlinear equations than unknown variables. The weighted least-squares (WLS) method then makes full use of the redundant data to obtain the optimal position estimate for vehicle positioning: the initial nonlinear TDOA equations are first converted into a linear system, an initial solution is obtained by WLS, and a second WLS estimation is performed using the first-pass coordinate estimate and auxiliary variable as known constraints, yielding the improved estimated vehicle position M(x, y).
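As a minimal sketch of the first linearization pass, assuming noiseless range differences (so a single least-squares solve of equation (3) suffices and the second weighted pass is omitted), the solver could look like this; the name `chan_first_stage` is ours, not from the patent:

```python
import numpy as np

def chan_first_stage(anchors, range_diffs):
    """Solve eq. (3) for z = [x, y, R1] by least squares.
    anchors: list of (xi, yi) base-station coordinates, S1 first.
    range_diffs: R_{i,1} values for i = 2..n, in the same order."""
    x1, y1 = anchors[0]
    k1 = x1**2 + y1**2
    A, b = [], []
    for (xi, yi), ri1 in zip(anchors[1:], range_diffs):
        ki = xi**2 + yi**2
        A.append([xi - x1, yi - y1, ri1])         # coefficients of x, y, R1
        b.append(0.5 * (ki - k1 - ri1**2))        # right-hand side of eq. (3)
    z, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return z[0], z[1]                             # estimated vehicle (x, y)
```

With four anchors and exact range differences the linear system is consistent, so the least-squares solution recovers the true position directly.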
Step 6.3: the Chan solution is optimized with median-average filtering to obtain the vehicle's final underground UWB position; this further refines the Chan position estimate and achieves fast, convenient and accurate vehicle positioning.
Assume the Chan algorithm yields a sequence of vehicle positioning coordinates of length p:

\( \{ M_1, M_2, \ldots, M_p \} \)   (7)

where \( M_j \) denotes the j-th UWB vehicle positioning coordinate resolved by the Chan algorithm (the coordinates of the vehicle positioning terminal to be evaluated), and p is a positive integer greater than 2. Removing the maximum and minimum values and taking the arithmetic mean of the remaining p − 2 samples gives the optimal estimate of the vehicle position:

\( \hat{M} = \frac{1}{p-2}\left( \sum_{j=1}^{p} M_j - M_{\max} - M_{\min} \right) \)   (8)

where \( \hat{M} \) denotes the vehicle's final underground coordinate value obtained via UWB.
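Equation (8) can be sketched as a small helper, applied to each coordinate axis separately (per-axis trimming is our simplifying assumption; the patent does not spell out how the 2-D samples are ordered):

```python
def median_average_filter(values):
    """Median-average filter per eq. (8): drop the single largest and smallest
    samples, then return the arithmetic mean of the remaining p - 2 values."""
    if len(values) <= 2:
        raise ValueError("need p > 2 samples")
    trimmed = sorted(values)[1:-1]          # remove the min and the max
    return sum(trimmed) / len(trimmed)
```

For the x-coordinate samples [10.0, 10.2, 9.8, 15.0, 5.0], the outliers 15.0 and 5.0 are discarded and the estimate is 10.0.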
Step 7: the vehicle motion and observation quantities are updated via EKF (extended Kalman filtering), finally yielding the underground vehicle position after fusion of UWB and visual SLAM.
A pose estimate is obtained with monocular visual SLAM: the vehicle's forward-facing camera captures images, and the ORB-SLAM algorithm solves for the camera pose, giving the vehicle position information. A target-detection model, trained on a detection data set built for underground coal-mine targets (pedestrians, various vehicles such as skip cars, man-riding cars, bulldozers …, traffic lights, signboards and the like), performs object detection on the image and returns each target's bounding-box region.
The bounding-box region of the same target in the current camera frame and in a map keyframe is taken as the image-matching ROI, and ORB feature matching is performed on this ROI to obtain the matched images' ORB features, formed by oriented FAST keypoints and rotated BRIEF descriptors. For each ORB descriptor in the previous frame and each ORB descriptor in the next frame, the Hamming distance is computed by a brute-force matching algorithm; points whose distance lies within a set threshold are taken as the best matches between the two frames, giving the number Q of matched feature points, where Q is a positive integer greater than or equal to 1.
A threshold U on the number of matched feature points is set and compared with Q. If Q ≥ U, matching succeeds; ORB features of adjacent frames are tracked during vehicle motion to determine the current camera pose, and the target vehicle's position is read from the visual map according to the determined pose, completing vehicle positioning as (x_SLAM, y_SLAM).
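The brute-force Hamming matching and the Q ≥ U test can be sketched as follows; real ORB descriptors are 256-bit binary strings, whereas the toy descriptors here are small integers, and all function names are ours:

```python
def hamming(d1, d2):
    """Hamming distance between two binary descriptors stored as integers."""
    return bin(d1 ^ d2).count("1")

def count_matches(prev_desc, curr_desc, max_dist):
    """For each descriptor of the previous frame, find its nearest descriptor
    in the next frame; count it as a best match if the distance is within
    max_dist. Returns Q, the number of matched feature points."""
    q = 0
    for d in prev_desc:
        if min(hamming(d, c) for c in curr_desc) <= max_dist:
            q += 1
    return q

def matching_succeeds(q, u):
    """Matching succeeds when Q >= U (threshold on matched feature points)."""
    return q >= u
```

In a real pipeline the descriptors would come from an ORB extractor and the matcher would additionally enforce cross-checking, but the thresholding logic is the same.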
Data fusion is then performed. In the combined monocular ORB-SLAM/UWB method, the independent coordinate system used in the UWB positioning process serves as the global coordinate system; the position computed by monocular ORB-SLAM is converted into the UWB coordinate system by a spatial transformation, and the UWB and monocular ORB-SLAM data are fused to obtain the fused vehicle position and velocity update. The fused vehicle position and velocity model is:
\( x_k = x_{k-1} + v_{x,k-1}\,t + \tfrac{1}{2} a_{x,k-1} t^2 \)
\( y_k = y_{k-1} + v_{y,k-1}\,t + \tfrac{1}{2} a_{y,k-1} t^2 \)
\( v_{x,k} = v_{x,k-1} + a_{x,k-1}\,t \)
\( v_{y,k} = v_{y,k-1} + a_{y,k-1}\,t \)   (9)

where t is the sampling interval of the vehicle system; \( (x_k, y_k) \) is the plane position of the vehicle at time k and \( (x_{k-1}, y_{k-1}) \) that at time k−1; \( v_{x,k} \) and \( v_{y,k} \) are the system's velocities in the x and y directions at time k, and \( v_{x,k-1} \), \( v_{y,k-1} \) those at time k−1; \( a_{x,k-1} \) and \( a_{y,k-1} \) are the system's accelerations in the x and y directions at time k−1.
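One step of the constant-acceleration model (9) can be sketched directly; the function name is ours:

```python
def propagate(state, accel, t):
    """One step of the constant-acceleration motion model of eq. (9).
    state = (x, y, vx, vy); accel = (ax, ay); t = sampling interval."""
    x, y, vx, vy = state
    ax, ay = accel
    return (x + vx * t + 0.5 * ax * t * t,
            y + vy * t + 0.5 * ay * t * t,
            vx + ax * t,
            vy + ay * t)
```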
Taking the position and velocity errors of the vehicle system as the motion state vector of the combined monocular ORB-SLAM/UWB system, equation (9) gives the motion equation:

\( X_k = F X_{k-1} + B a_{k-1} + W_{k-1} \)   (10)

where subscript k denotes time k and subscript k−1 the instant immediately preceding it;

\( X_k = [x_k,\; y_k,\; v_{x,k},\; v_{y,k}]^T \)   (11)

\( F = \begin{bmatrix} 1 & 0 & t & 0 \\ 0 & 1 & 0 & t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \)   (12)

\( B = \begin{bmatrix} t^2/2 & 0 \\ 0 & t^2/2 \\ t & 0 \\ 0 & t \end{bmatrix} \)   (13)

with \( a_{k-1} = [a_{x,k-1},\; a_{y,k-1}]^T \) the acceleration input and \( W_{k-1} \) the process noise.
The difference between the monocular ORB-SLAM position and the UWB-resolved position in the unified coordinate system is used as the observation information of the combined system, giving the observation equation:

\( Z_k = H X_k + V_k \)   (14)

where \( V_k \) is the observation noise, \( Z_k \) is the error value between the two position coordinates, and H is the observation-equation coefficient matrix:

\( Z_k = \begin{bmatrix} x_{SLAM,k} - x_{UWB,k} \\ y_{SLAM,k} - y_{UWB,k} \end{bmatrix} \)   (15)

where \( (x_{SLAM,k}, y_{SLAM,k}) \) is the vehicle positioning output of ORB-SLAM at time k and \( (x_{UWB,k}, y_{UWB,k}) \) the vehicle positioning output of UWB at time k;

\( H = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix} \)   (16)

\( V_k = [v_{x,k},\; v_{y,k}]^T \)   (17)

where \( v_{x,k} \) is the observation-noise error of the positioning abscissa x at time k, and \( v_{y,k} \) that of the positioning ordinate y at time k.
The motion state and observation information are updated by EKF (extended Kalman filtering) to obtain the fused vehicle positioning information. The motion-state update comprises the motion-state prediction and the error-covariance prediction:

\( \hat{X}_{k|k-1} = F \hat{X}_{k-1}, \qquad P_{k|k-1} = F P_{k-1} F^T + Q \)   (18)

where \( \hat{X}_{k|k-1} \) is the prediction of the vehicle motion state vector at time k made from time k−1; \( \hat{X}_{k-1} \) is the state estimate at time k−1; \( P_{k|k-1} \) is the prediction of the vehicle motion-state error covariance at time k, and \( P_{k-1} \) the error-covariance matrix at time k−1 (the covariance measures the joint error of two variables); F is the state transition matrix and \( F^T \) its transpose; Q is the process-noise covariance matrix.
The observation update includes computing a Kalman gain
Figure 900712DEST_PATH_IMAGE053
Vehicle motion state vector at time k
Figure 198969DEST_PATH_IMAGE054
Sum error covariance
Figure 773039DEST_PATH_IMAGE055
The specific equation is as follows:
Figure 142840DEST_PATH_IMAGE056
(19)
wherein I represents an identity matrix;
Figure 971119DEST_PATH_IMAGE057
representing an observed noise covariance matrix;
Figure 955124DEST_PATH_IMAGE054
represents the motion state vector predictor at the k-th time, i.e., the best estimate value at that time to be output.
By executing the above steps cyclically, real-time positioning information of the vehicle is obtained.
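As a concrete illustration, the cyclic prediction and update described above can be sketched as a minimal linear Kalman loop in Python; the state layout [x, y, vx, vy], the sampling period, and every matrix value below are illustrative assumptions rather than parameters of this method:

```python
import numpy as np

# Minimal sketch of the predict/update loop described above. The state is
# [x, y, vx, vy] under a constant-velocity model; dt, Q and R are assumed
# values for illustration only.
dt = 0.1                                    # sample period (s), assumed
F = np.array([[1, 0, dt, 0],                # state transition matrix
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],                 # only position is observed
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-3                        # process-noise covariance
R = np.eye(2) * 5e-2                        # observation-noise covariance

def kf_step(x_est, P, z):
    """One cycle: predict the motion state, then correct it with observation z."""
    # Motion state update: state prediction and error-covariance prediction
    x_pred = F @ x_est
    P_pred = F @ P @ F.T + Q
    # Observation update: Kalman gain, corrected state, corrected covariance
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

# usage: one observation in the unified UWB frame pulls the estimate toward it
x1, P1 = kf_step(np.zeros(4), np.eye(4), np.array([1.0, 2.0]))
```

Running the step in a loop over successive unified UWB/SLAM observations reproduces the cyclic execution described above.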
The underground coal mine vehicle positioning method fusing UWB and monocular vision SLAM combines target detection with the ORB-SLAM algorithm and applies the UWB and monocular vision SLAM fusion positioning method to underground coal mines; the monocular camera it uses costs less than a binocular camera or a depth camera. The underground roadway environment exhibits a gallery effect and contains many similar features, so a conventional visual SLAM algorithm produces a large number of mismatches that degrade positioning accuracy; by first detecting the similar targets with a target detection algorithm and then matching and positioning within the target areas, the matching accuracy, and with it the positioning accuracy, is effectively improved. For the high-precision positioning required by automatic driving under a coal mine, relying on UWB or SLAM alone gives low dynamic positioning accuracy; the invention combines the advantages of both in a fusion positioning method that improves dynamic positioning accuracy and can meet the positioning requirements of automatic driving.
The above description is only a preferred embodiment of the present invention, and the scope of the present invention is not limited thereto. Any equivalent substitution or modification that a person skilled in the art could readily conceive within the technical scope disclosed herein, according to the technical solutions and inventive concept of the present invention, shall fall within the protection scope of the present invention.

Claims (10)

1. An underground coal mine vehicle positioning method fusing UWB and monocular vision SLAM, characterized by comprising the following specific steps:
step 1, acquiring an image from the vehicle's forward-looking camera;
step 2, loading the underground target detection model and performing target detection on the image;
step 3, loading the global map built by ORB-SLAM;
step 4, performing feature matching between the ROI areas of the same target in the map key frame and in the current forward-looking camera image;
step 5, determining the vehicle pose from the successfully matched frame and reading the vehicle position in the map;
step 6, unifying the vehicle pose position information determined by ORB-SLAM and the underground vehicle coordinate position information determined by UWB into the UWB coordinate system, and acquiring the coordinate position and velocity updates of the vehicle in the underground x and y directions;
step 7, updating the vehicle motion quantity and observation quantity through the EKF (extended Kalman filter), finally obtaining the underground vehicle positioning fused from UWB and visual SLAM.
2. The underground coal mine vehicle positioning method fusing UWB and monocular vision SLAM as claimed in claim 1, characterized in that in step 6 the UWB positioning is acquired as follows:
step 6.1, keeping the UWB base stations time-synchronized over a TDOA wireless network;
step 6.2, resolving the UWB positioning information through the Chan algorithm;
step 6.3, optimizing the Chan calculation result by median average filtering to finally obtain the underground UWB positioning of the vehicle.
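The median average filtering of step 6.3 can be sketched as follows; the window contents, trim count, and function name are illustrative assumptions:

```python
# Median average filter sketch: sort a window of UWB fixes, drop the extreme
# values, and average the remainder, suppressing multipath spikes.
def median_average(samples, trim=1):
    """Discard the `trim` smallest and largest values, average the rest."""
    s = sorted(samples)
    kept = s[trim:len(s) - trim]
    return sum(kept) / len(kept)

# usage: one window of Chan-algorithm x-coordinates with a multipath outlier
window = [10.1, 10.0, 10.2, 35.0, 9.9]
print(median_average(window))   # ≈ 10.1, the 35.0 spike is discarded
```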
3. The underground coal mine vehicle positioning method fusing UWB and monocular vision SLAM as claimed in claim 2, characterized in that in step 6.2 the UWB positioning information is obtained by acquiring raw UWB positioning data with a vehicle-mounted UWB positioning tag card and then resolving and optimizing it, with the following specific steps:
step 6.2.1, keeping time synchronization: TDOA requires the positioning base stations to keep their clocks synchronized, or the positioning time differences between them to be known; using a wireless-network-based synchronization mode, any positioning base station broadcasts a ranging message and the other positioning base stations receive it, the measured time difference T is recorded, the actual time difference T' is calculated from the relative positions of the base stations and the propagation speed of electromagnetic waves, and the positioning time difference of each base station is obtained as ΔT = T - T'; because of clock drift, time synchronization must be performed continuously during positioning;
step 6.2.2, the vehicle-mounted UWB positioning tag card sends out a UWB signal; all base stations within the tag's positioning range receive the wireless signal, the message flight-time difference between the vehicle tag card and each base station is obtained by the TDOA method, and the distance difference between the tag card and each base station is calculated.
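The timing arithmetic of steps 6.2.1 and 6.2.2 can be sketched as below; the function names and example figures are assumptions for illustration:

```python
# TDOA bookkeeping sketch: the synchronization offset ΔT = T - T' of step
# 6.2.1, and the conversion of an arrival-time difference into a distance
# difference as in step 6.2.2.
C = 299_792_458.0  # propagation speed of electromagnetic waves (m/s)

def clock_offset(measured_dt, station_distance_m):
    """ΔT = T - T': measured difference minus the geometry-implied flight time."""
    return measured_dt - station_distance_m / C

def range_difference(t_arrival_a, t_arrival_b):
    """Distance difference (tag to A) - (tag to B) from the arrival-time gap."""
    return (t_arrival_a - t_arrival_b) * C

# usage: a 100 ns arrival gap corresponds to roughly 30 m of range difference
print(range_difference(200e-9, 100e-9))   # ≈ 29.98
```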
4. The underground coal mine vehicle positioning method fusing UWB and monocular vision SLAM as claimed in claim 1, characterized in that a pose estimation value is acquired using monocular vision SLAM: images are acquired by the vehicle's forward-looking camera, and the camera pose estimate is solved by the ORB-SLAM algorithm to obtain the vehicle position information.
5. The underground coal mine vehicle positioning method fusing UWB and monocular vision SLAM as claimed in claim 4, characterized in that target detection is performed on the image using a target detection model trained on a target detection data set made for underground coal mine targets, and the bounding box area of the target is obtained.
6. The underground coal mine vehicle positioning method fusing UWB and monocular vision SLAM as claimed in claim 5, characterized in that the bounding box areas of the same target in the image frame acquired by the current camera and in the key frame image of the map are taken as the image-matching ROI areas, and ORB feature matching is performed on these ROI areas to obtain the ORB features of the matched images, each composed of an Oriented FAST key point and a BRIEF descriptor.
7. The underground coal mine vehicle positioning method fusing UWB and monocular vision SLAM as claimed in claim 6, characterized in that for each ORB feature descriptor in the previous frame image and each ORB feature descriptor in the next frame image, the Hamming distance between the descriptors is calculated by a brute-force matching algorithm; the points whose distance falls within a set threshold range are taken as the best matching points of the two frames, giving the number Q of matched feature points between the two frame images.
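The brute-force Hamming matching of claim 7 can be sketched as follows; descriptors are modeled as plain integers (a 256-bit BRIEF descriptor packs into one Python int), and the threshold value is an assumption:

```python
# Brute-force descriptor matching: for each previous-frame descriptor, take
# the next-frame descriptor with the smallest Hamming distance and keep the
# pair only if the distance is within the threshold.
def hamming(d1: int, d2: int) -> int:
    return bin(d1 ^ d2).count("1")

def brute_force_match(prev_desc, next_desc, max_dist=50):
    """Return (i, j) pairs of best matches whose distance is within max_dist."""
    matches = []
    for i, d1 in enumerate(prev_desc):
        j, dist = min(((j, hamming(d1, d2)) for j, d2 in enumerate(next_desc)),
                      key=lambda t: t[1])
        if dist <= max_dist:
            matches.append((i, j))
    return matches

# usage: Q = len(matches) is what claim 8 compares against the threshold U
prev = [0b1111_0000, 0b1010_1010]
nxt = [0b1010_1011, 0b1111_0001]
print(brute_force_match(prev, nxt))   # → [(0, 1), (1, 0)]
```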
8. The underground coal mine vehicle positioning method fusing UWB and monocular vision SLAM as claimed in claim 7, characterized in that a threshold U is set for the number of matched feature points, and the obtained number Q of matched feature points is compared with U; if Q ≥ U the matching is successful, the ORB features of adjacent frames are tracked during vehicle motion to determine the current camera pose, and the position of the target vehicle is read in the visual map accordingly, completing the vehicle positioning.
9. The underground coal mine vehicle positioning method fusing UWB and monocular vision SLAM as claimed in claim 3, characterized in that in step 6.2.2, when more than 3 base stations participate in positioning, the Chan algorithm is used to perform weighted least squares twice.
10. The underground coal mine vehicle positioning method fusing UWB and monocular vision SLAM as claimed in claim 1, characterized in that data fusion is performed: in the combined monocular vision ORB-SLAM and UWB method, the independent coordinate system used in the UWB positioning process serves as the global coordinate system, the position information calculated by monocular vision ORB-SLAM is converted into the UWB coordinate system through a spatial transformation, and the UWB and monocular vision ORB-SLAM data are fused to obtain the fused vehicle position and velocity updates.
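The spatial transformation of claim 10 can be sketched as a planar rigid transform with an optional scale (monocular SLAM recovers position only up to scale); the angle, translation, and scale below are hypothetical calibration values, not taken from the method:

```python
import math

# Convert a position from the ORB-SLAM map frame to the UWB global frame:
# apply the calibrated scale, rotate by theta, then translate by t.
def slam_to_uwb(p_slam, theta, t, scale=1.0):
    x, y = p_slam
    c, s = math.cos(theta), math.sin(theta)
    return (scale * (c * x - s * y) + t[0],
            scale * (s * x + c * y) + t[1])

# usage: a map rotated 90 degrees whose origin sits at UWB point (5, 2)
print(slam_to_uwb((1.0, 0.0), math.pi / 2, (5.0, 2.0)))   # ≈ (5.0, 3.0)
```

In practice theta, t, and scale would be calibrated by aligning matched UWB and SLAM trajectory points before fusion.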
CN202111259920.1A 2021-10-28 2021-10-28 Underground coal mine vehicle positioning method fusing UWB and monocular vision SLAM Active CN113706612B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111259920.1A CN113706612B (en) 2021-10-28 2021-10-28 Underground coal mine vehicle positioning method fusing UWB and monocular vision SLAM


Publications (2)

Publication Number Publication Date
CN113706612A true CN113706612A (en) 2021-11-26
CN113706612B CN113706612B (en) 2022-02-11

Family

ID=78647227

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111259920.1A Active CN113706612B (en) 2021-10-28 2021-10-28 Underground coal mine vehicle positioning method fusing UWB and monocular vision SLAM

Country Status (1)

Country Link
CN (1) CN113706612B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114264297A (en) * 2021-12-01 2022-04-01 清华大学 Positioning and mapping method and system for UWB and visual SLAM fusion algorithm
CN114353782A (en) * 2022-01-11 2022-04-15 华北理工大学 Underground positioning method and underground positioning device based on Baseline-RFMDR
CN114758525A (en) * 2022-03-17 2022-07-15 煤炭科学技术研究院有限公司 Traffic control system for underground coal mine roadway
CN115542245A (en) * 2022-12-01 2022-12-30 广东师大维智信息科技有限公司 UWB-based pose determination method and device
CN117459898A (en) * 2023-12-22 2024-01-26 浙江深寻科技有限公司 Emergency positioning communication method and system
CN117590858A (en) * 2024-01-19 2024-02-23 潍坊现代农业山东省实验室 Greenhouse unmanned vehicle navigation method and greenhouse unmanned vehicle navigation system

Citations (3)

Publication number Priority date Publication date Assignee Title
CN109631855A (en) * 2019-01-25 2019-04-16 西安电子科技大学 High-precision vehicle positioning method based on ORB-SLAM
CN111766561A (en) * 2020-04-24 2020-10-13 天津大学 Unmanned aerial vehicle positioning method based on UWB technology
CN113038364A (en) * 2021-02-25 2021-06-25 杨亦非 Underground two-dimensional positioning method based on combination of TDOA and DS _ TWR of UWB technology


Non-Patent Citations (1)

Title
乔智等 (QIAO Zhi et al.): "一种单目视觉/UWB组合的室内定位方法" [An indoor positioning method combining monocular vision and UWB], 《导航定位学报》 [Journal of Navigation and Positioning] *

Cited By (8)

Publication number Priority date Publication date Assignee Title
CN114264297A (en) * 2021-12-01 2022-04-01 清华大学 Positioning and mapping method and system for UWB and visual SLAM fusion algorithm
CN114353782A (en) * 2022-01-11 2022-04-15 华北理工大学 Underground positioning method and underground positioning device based on Baseline-RFMDR
CN114758525A (en) * 2022-03-17 2022-07-15 煤炭科学技术研究院有限公司 Traffic control system for underground coal mine roadway
CN114758525B (en) * 2022-03-17 2024-03-22 煤炭科学技术研究院有限公司 Traffic control system for underground coal mine tunnel
CN115542245A (en) * 2022-12-01 2022-12-30 广东师大维智信息科技有限公司 UWB-based pose determination method and device
CN117459898A (en) * 2023-12-22 2024-01-26 浙江深寻科技有限公司 Emergency positioning communication method and system
CN117590858A (en) * 2024-01-19 2024-02-23 潍坊现代农业山东省实验室 Greenhouse unmanned vehicle navigation method and greenhouse unmanned vehicle navigation system
CN117590858B (en) * 2024-01-19 2024-04-16 潍坊现代农业山东省实验室 Greenhouse unmanned vehicle navigation method and greenhouse unmanned vehicle navigation system

Also Published As

Publication number Publication date
CN113706612B (en) 2022-02-11

Similar Documents

Publication Publication Date Title
CN113706612B (en) Underground coal mine vehicle positioning method fusing UWB and monocular vision SLAM
El-Sheimy et al. Indoor navigation: State of the art and future trends
Badino et al. Visual topometric localization
CN103561462B (en) Indoor positioning system and method totally based on smart mobile terminal platform
CN109164411B (en) Personnel positioning method based on multi-data fusion
CN103995250B (en) Radio-frequency (RF) tag trajectory track method
CN109974694B (en) Indoor pedestrian 3D positioning method based on UWB/IMU/barometer
Retscher et al. Ubiquitous positioning technologies for modern intelligent navigation systems
CN114222240A (en) Multi-source fusion positioning method based on particle filtering
Gustafsson et al. Navigation and tracking of road-bound vehicles
Wang et al. UGV‐UAV robust cooperative positioning algorithm with object detection
CN111308420A (en) Indoor non-line-of-sight positioning method based on acoustic signal time delay estimation and arrival frequency
Aggarwal GPS-based localization of autonomous vehicles
Shan et al. A Survey of Vehicle Localization: Performance Analysis and Challenges
CN112556689B (en) Positioning method integrating accelerometer and ultra-wideband ranging
Shin et al. Received signal strength-based robust positioning system in corridor environment
Chen et al. Multi-level scene modeling and matching for smartphone-based indoor localization
Ta Smartphone-based indoor positioning using Wi-Fi, inertial sensors and Bluetooth
Song et al. RFID/in-vehicle sensors-integrated vehicle positioning strategy utilising LSSVM and federated UKF in a tunnel
Almansoub et al. Multi-scale vehicle localization in underground parking lots by integration of dead reckoning, Wi-Fi and vision
Zhou et al. A case study of cross-floor localization system using hybrid wireless sensing
Xu et al. Doppler‐shifted frequency measurement based positioning for roadside‐vehicle communication systems
Burgess et al. Geometric constraint model and mobility graphs for building utilization intelligence
Khamooshi Cooperative vehicle perception and localization using infrastructure-based sensor nodes
Hyun et al. Pose-graph-based uwb slam with nlos factor estimation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant