CN117406721A - UWB-based mobile robot and navigation and following method thereof - Google Patents


Info

Publication number
CN117406721A
CN117406721A (application CN202311299256.2A)
Authority
CN
China
Prior art keywords
mobile robot
following
uwb positioning
node
value
Prior art date
Legal status
Pending
Application number
CN202311299256.2A
Other languages
Chinese (zh)
Inventor
黄家才
王徐寅
颜爱忠
宋成法
薛源
孙丁斌
汤文俊
汪涛
梅贤慧
张杨桂
Current Assignee
Sinoso Science And Technology Inc
Nanjing Institute of Technology
Original Assignee
Sinoso Science And Technology Inc
Nanjing Institute of Technology
Priority date
Filing date
Publication date
Application filed by Sinoso Science And Technology Inc and Nanjing Institute of Technology
Priority to CN202311299256.2A
Publication of CN117406721A

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a UWB-based mobile robot and a navigation and following method thereof. The upper computer receives the sensor data, performs navigation and following path planning, and sends the corresponding motion instructions to the lower computer; the lower computer receives the motion instructions of the upper computer and controls the robot to move; the UWB positioning device communicates with the external UWB positioning devices for accurate positioning and following coordination of the robot in a large indoor environment, enabling indoor inspection and related operations; the laser radar is used for obstacle avoidance during navigation and movement; the inertial sensor is used for determining the heading of the mobile robot and assisting positioning. According to the invention, path planning can be performed on a pre-established indoor map, obstacles are avoided during movement, and the robots work cooperatively, effectively improving working efficiency.

Description

UWB-based mobile robot and navigation and following method thereof
Technical Field
The invention relates to a mobile robot based on UWB and a navigation and following method thereof, belonging to the technical field of intelligent equipment.
Background
Mobile robots are currently used in many fields and, in particular, have been widely applied to factory and warehouse inspection.
In the prior art, laser-radar SLAM usually suffers from mapping drift when building a map of a large-scale scene such as a factory warehouse, and cannot position accurately in such a large environment. Because laser-radar positioning accumulates error, its accuracy degrades severely during long-term operation in a large indoor environment, so a robot cannot autonomously complete the inspection task of a large-scale scene such as a factory warehouse.
Disclosure of Invention
The purpose is as follows: in order to overcome the defects in the prior art, the invention provides a UWB-based mobile robot and a navigation and following method thereof. The mobile robot determines its accurate position through a preset environment map and UWB positioning devices, plans the moving path of its operation according to the environment map and its start and end positions, can effectively avoid obstacles in the environment, completes inspection operations autonomously, and saves human resources.
The technical scheme is as follows: in order to solve the technical problems, the invention adopts the following technical scheme:
in a first aspect, a UWB-based mobile robot, comprising: a robot body and an external UWB positioning device.
The robot body includes: an upper computer, a lower computer, a UWB positioning device, a laser radar, an inertial sensor and a device driver arranged in the robot body. The lower computer, the UWB positioning device, the laser radar and the inertial sensor are all connected with the upper computer. The lower computer is connected with the device driver. The UWB positioning device communicates with the external UWB positioning device.
In a second aspect, a method for navigating and following a UWB-based mobile robot includes the steps of:
step 1, loading an environment map of a preset working space, and planning the working path information of the mobile robot in the environment map according to positioning information acquired in real time.
Step 2, when obstacle information is detected, planning the operation path information of the mobile robot to avoid the obstacle according to the obstacle information.
Preferably, the method further comprises:
Step 3, planning the following operation path information of the mobile robot according to the distance and the angle of the following target.
Preferably, the step 1 specifically includes:
step 1.1, abstracting an environment map of the working space into a grid, wherein each node of the grid represents a discrete position, determining a starting point and a target point of the mobile robot, and mapping the starting point and the target point to the nearest node.
Step 1.2, creating an Open List for storing the nodes to be expanded, and a Closed List for storing the nodes that have already been expanded. Euclidean distance is used as a heuristic function to estimate the cost from the current node to the target node.
Step 1.3, putting the starting point into the Open List, setting the length of the path actually travelled as the g value (initialized to 0), and setting the estimated distance from the starting point to the target point calculated by the heuristic function as the h value.
Step 1.4, in each cycle, taking the value of g plus the value of h as the value of f, selecting the node with the minimum value of f from the Open List for expansion, and moving the expanded node from the Open List to the Closed List.
Step 1.5, for each neighbor node of the node, if the neighbor node is in the Closed List, skipping this neighbor node. If the neighboring node is not in the Open List, it is added to the Open List, and g, h, and f values of the neighboring node are calculated. If the neighboring node is already in the Open List, the g value of the path through the current node to the neighboring node is calculated. If the g value of the path is smaller, updating the parent node of the adjacent node as the current node, and updating the g value and the f value.
Step 1.6, when the target node enters the Closed List or the Open List is empty, the algorithm is terminated, and the shortest path node set from the starting point to the target point is output.
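Steps 1.1-1.6 can be sketched as a short routine. This is an illustrative implementation on a 2-D occupancy grid, not the patent's own code: the grid encoding, 8-connected moves and function names are assumptions.

```python
import heapq
import math

def astar(grid, start, goal):
    """A* over a 2-D grid (0 = free, 1 = obstacle), Euclidean heuristic.

    Mirrors steps 1.1-1.6: an open list (priority queue keyed on f = g + h),
    a closed set, and parent links for path reconstruction.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda n: math.dist(n, goal)          # Euclidean heuristic (h value)
    open_list = [(h(start), 0.0, start)]      # entries are (f, g, node)
    g_score = {start: 0.0}
    parent = {start: None}
    closed = set()

    while open_list:                          # empty open list => no path
        f, g, node = heapq.heappop(open_list)
        if node in closed:
            continue                          # stale duplicate entry
        closed.add(node)
        if node == goal:                      # goal expanded: rebuild path
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        x, y = node
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1),
                       (1, 1), (1, -1), (-1, 1), (-1, -1)):
            nx, ny = x + dx, y + dy
            nbr = (nx, ny)
            if not (0 <= nx < rows and 0 <= ny < cols) or grid[nx][ny] or nbr in closed:
                continue
            ng = g + math.hypot(dx, dy)       # g value via the current node
            if ng < g_score.get(nbr, float("inf")):
                g_score[nbr] = ng             # better path: update parent, g, f
                parent[nbr] = node
                heapq.heappush(open_list, (ng + h(nbr), ng, nbr))
    return None
```

On an empty 4x4 grid, `astar(grid, (0, 0), (3, 3))` returns the diagonal path of four nodes.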
As a preferred solution, the method for obtaining the g value specifically includes:
acquiring the distance d between the UWB positioning device of the mobile robot and the external UWB positioning device in the current time step;
inputting the coordinates of the external UWB positioning device and the distance d into a position calculation model, and solving for the coordinates of the UWB positioning device of the mobile robot in the current time step;
acquiring the speed and moving direction of the mobile robot in the current time step from the inertial sensor of the mobile robot;
taking the coordinates, speed and moving direction of the mobile robot in the current time step as a state vector, and estimating the state vector of the mobile robot in the next time step by an extended Kalman filtering method;
calculating the path length the mobile robot has actually travelled from the state vectors of the current and next time steps.
Preferably, the distance d is calculated as follows:

d = c * (T_round1 * T_round2 - T_reply1 * T_reply2) / (T_round1 + T_round2 + T_reply1 + T_reply2)

where c is the approximate velocity of the electromagnetic wave in vacuum; T_round1 is the time difference between the UWB positioning device transmitting a signal to the external UWB positioning device and confirming its reception; T_round2 is the time difference between the external UWB positioning device responding after receiving the signal of the UWB positioning device and confirming the reception; T_reply1 is the time difference between the UWB positioning device receiving the return signal of the external UWB positioning device and returning a termination signal to the external UWB positioning device with confirmed reception; T_reply2 is the time difference between the external UWB positioning device returning a signal to the UWB positioning device and receiving the termination signal.
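Given the four timing quantities above, the double-sided two-way ranging distance can be computed directly. A minimal sketch (the function name is illustrative, and the speed-of-light constant is the vacuum value the text assumes):

```python
C = 299_792_458.0  # approximate speed of light in vacuum, m/s

def ds_twr_distance(t_round1, t_reply1, t_round2, t_reply2, c=C):
    """Double-sided two-way-ranging distance, per the formula above.

    All times in seconds. The product/sum form cancels most of the clock
    offset between the robot's tag and the external anchor.
    """
    tof = (t_round1 * t_round2 - t_reply1 * t_reply2) / (
        t_round1 + t_round2 + t_reply1 + t_reply2
    )
    return c * tof
```

For a true one-way flight time of 10 ns and millisecond-scale reply delays, the recovered distance is about 3 m regardless of the reply-delay asymmetry.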
Preferably, the position calculation model is:

(x_0 - x_1)^2 + (y_0 - y_1)^2 = d^2

where x_0, y_0 are the x-axis and y-axis coordinate values of the UWB positioning device of the mobile robot and x_1, y_1 are the x-axis and y-axis coordinate values of the external UWB positioning device; with distances to several external UWB positioning devices, the corresponding equations are solved jointly for x_0, y_0.
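With distances to three external devices of known coordinates, the position calculation model can be solved in closed form. A minimal 2-D sketch, assuming exactly three non-collinear anchors (the function name is illustrative):

```python
def solve_position_2d(anchors, dists):
    """2-D position from distances to three known anchors.

    Subtracting the first circle equation (x - x_i)^2 + (y - y_i)^2 = d_i^2
    from the other two eliminates the quadratic terms, leaving a 2x2 linear
    system solved here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    # Linearized system: a*x + b*y = e and c*x + f*y = g
    a, b = 2 * (x2 - x1), 2 * (y2 - y1)
    e = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    c, f = 2 * (x3 - x1), 2 * (y3 - y1)
    g = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a * f - b * c          # non-zero when the anchors are not collinear
    return ((e * f - b * g) / det, (a * g - e * c) / det)
```

With anchors at (0, 0), (20, 0) and (0, 20) and noise-free distances to the point (5, 7), the solver recovers (5, 7) exactly.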
The method of taking the coordinates, speed and moving direction of the mobile robot at the current time step as the state vector and estimating the state vector of the next time step by the extended Kalman filtering method specifically comprises the following steps:
step 1.1.1, initializing a state vector x_k= [ x, y, vx, vy ], initializing a state covariance matrix p_k, representing uncertainty of state estimation, and initializing a system noise covariance matrix q_k, which is used for representing noise in a state transition process. x represents the coordinates of the robot body in the x-axis, y represents the coordinates of the robot body in the y-axis, vx represents the speed of the robot body in the x-axis direction, vy represents the speed of the robot body in the y-axis direction.
Step 1.1.2, obtaining a measured value uwb_data_k of the UWB positioning device, and obtaining a measured value imu_data_k of the inertial sensor, wherein the measured value imu_data_k is a measured value of acceleration and angular velocity.
Step 1.1.3, calculating a prediction state vector x_pred_k and a prediction state covariance matrix p_pred_k, wherein the calculation formula is as follows:
x_pred_k = f(x_k-1, imu_data_k)
P_pred_k = F_k * P_k-1 * (F_k)^T + Q_k
where f is the state transition equation, the superscript T denotes the matrix transpose, x_k-1 is the state vector of the previous time step, and P_k-1 is the state covariance matrix of the previous time step; F_k is the Jacobian matrix of the state transition equation at the current time step.
Step 1.1.4, calculating a measurement residual error y_k, wherein the calculation formula is as follows:
y_k = uwb_data_k - h(x_pred_k)
where h is a function of mapping the state vector x_pred_k to the UWB measurement space for estimating UWB measurements.
Step 1.1.5, calculating a Kalman gain K_k according to a measurement noise covariance matrix R_k, wherein the calculation formula is as follows:
S_k = H_k * P_pred_k * (H_k)^T + R_k
K_k = P_pred_k * (H_k)^T * (S_k)^-1
where H_k is the Jacobian matrix of the measurement model function h(x_pred_k) with respect to the state vector x_pred_k, and R_k is the measurement noise covariance matrix of the current time step.
Step 1.1.6, updating the state covariance matrix p_pred_k by using the kalman gain k_k to obtain an updated state covariance matrix p_corrected_k:
P_corrected_k=(I-K_k*H_k)*P_pred_k
Wherein I represents an identity matrix.
Step 1.1.7, update the state vector x_pred_k by using the kalman gain k_k to obtain an updated state vector x_corrected_k, and the calculation formula is as follows:
x_corrected_k=x_pred_k+K_k*y_k
Step 1.1.8, substituting x_corrected_k for x_k-1 and P_corrected_k for P_k-1, repeating steps 1.1.3-1.1.7, and outputting the x_corrected_k of the next time step to obtain the position and speed of the mobile robot.
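Steps 1.1.1-1.1.8 can be sketched as one predict/update cycle. Note two assumptions in this sketch: the IMU-driven state transition equation f is replaced by a constant-velocity model (so F_k and H_k are constant matrices), and the noise parameters q, r are illustrative.

```python
import numpy as np

def ekf_step(x_k, P_k, uwb_xy, dt, q=0.01, r=0.05):
    """One predict + update cycle of steps 1.1.3-1.1.7.

    State x_k = [x, y, vx, vy]. h(x) simply picks out (x, y), the quantity
    the UWB positioning devices measure.
    """
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1.0]])       # constant-velocity transition (assumed)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0.0]])       # measurement model: position only
    Q = q * np.eye(4)                    # system noise covariance Q_k
    R = r * np.eye(2)                    # measurement noise covariance R_k

    # Prediction (step 1.1.3)
    x_pred = F @ x_k
    P_pred = F @ P_k @ F.T + Q
    # Measurement residual (step 1.1.4)
    y_k = np.asarray(uwb_xy) - H @ x_pred
    # Kalman gain (step 1.1.5)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    # Covariance and state updates (steps 1.1.6-1.1.7)
    x_corr = x_pred + K @ y_k
    P_corr = (np.eye(4) - K @ H) @ P_pred
    return x_corr, P_corr
```

Calling `ekf_step` in a loop, feeding each output back in as `x_k, P_k` (step 1.1.8), fuses the UWB fixes with the motion model.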
Preferably, the step 2 specifically includes:
step 2.1, setting a starting point state as (x_start, y_start, θ_start, v_start), wherein (x_start, y_start) is a position coordinate of the mobile robot when the mobile robot detects an obstacle, θ_start is an orientation angle of the mobile robot at the moment, and v_start is a speed of the mobile robot at the moment. A target point is defined, the state of which is denoted (x_gold, y_gold). Setting TEB parameters: including the time resolution dt, and the limits of speed and acceleration v_max and a_max.
Step 2.2, defining a time window [ t_start, t_end ], and calculating the time step number N= (t_end-t_start)/dt according to the time window and the time resolution.
Step 2.3, for each time step t_i, calculating the possible trajectory point (x_i, y_i), velocity v_i and orientation angle θ_i using a kinematic model:

x_i = x_(i-1) + v_(i-1) * cos(θ_(i-1)) * dt
y_i = y_(i-1) + v_(i-1) * sin(θ_(i-1)) * dt
θ_i = θ_(i-1) + (v_(i-1) / L) * tan(δ) * dt

where δ is the steering angle of the mobile robot and L is the wheelbase of the mobile robot body. x_(i-1), y_(i-1), θ_(i-1), v_(i-1) are the position coordinates, orientation angle and velocity of the previous time step.
Step 2.4, constructing a cost function, wherein the calculation formula is as follows:
J_total(x_i,y_i,v_i)=J_time(x_i,y_i,v_i)+J_obstacle(x_i,y_i,v_i)+J_smooth(x_i,y_i,v_i)
wherein:
J_time(x_i, y_i, v_i) = w_time * (t_goal - t_end)
where w_time is the time weight, t_goal is the target arrival time, and t_end is the trajectory end time.
Wherein:
J_smooth(x_i, y_i, v_i) = w_smooth * (k_1*(x_i - x_i_prev)^2 + k_2*(y_i - y_i_prev)^2 + k_3*(v_i - v_i_prev)^2)
where w_smooth is the smoothness weight, k_1, k_2, k_3 are smoothness coefficients, and x_i_prev, y_i_prev, v_i_prev are the coordinate values and speed of the trajectory point of the previous time step.
Wherein: j_obstacle (x_i, y_i, v_i) = Σ [ w_obstacle f (d) ]
Wherein: w_obstacle is the weight of the obstacle avoidance cost, and is used for controlling obstacle avoidance. The f (d) function is a gaussian function, i.e. f (d) =exp (- (d) 2 )/(2*sigma 2 ) Where sigma is a parameter that controls the width of the gaussian function. At smaller sigma, the cost increases rapidly closer to the obstacle, encouraging the trajectory to move away from the obstacle.
The cost function J_total is minimized using a least squares method, yielding the trajectory points (x_i, y_i) and velocities v_i for the N time steps.
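Steps 2.3-2.4 can be illustrated with a short rollout of the kinematic model and an evaluation of the obstacle term. The function names and the weight values are illustrative assumptions, not the patent's parameters.

```python
import math

def rollout(x0, y0, theta0, v, delta, L, dt, n):
    """Forward-simulate the step-2.3 kinematic model for n time steps.

    delta is the steering angle, L the wheelbase; speed v is held constant
    in this sketch.
    """
    pts = [(x0, y0, theta0)]
    x, y, th = x0, y0, theta0
    for _ in range(n):
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        th += (v / L) * math.tan(delta) * dt
        pts.append((x, y, th))
    return pts

def obstacle_cost(traj, obstacles, w_obstacle=5.0, sigma=0.5):
    """Step-2.4 obstacle term: sum of w_obstacle * exp(-d^2 / (2*sigma^2))."""
    cost = 0.0
    for x, y, _ in traj:
        for ox, oy in obstacles:
            d = math.hypot(x - ox, y - oy)
            cost += w_obstacle * math.exp(-d * d / (2 * sigma * sigma))
    return cost
```

A candidate trajectory that passes near an obstacle scores a higher J_obstacle than one that stays clear, which is what steers the optimizer toward collision-free paths.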
Preferably, the step 3 specifically includes:
and 3.1, obtaining the linear distance between the UWB positioning device of the mobile robot and the following target containing the external UWB positioning device and the angle between the mobile robot and the following target.
And 3.2, setting a minimum following distance, a maximum following distance and a following angle.
And 3.3, when the distance between the mobile robot and the following target is smaller than the minimum following distance, controlling the mobile robot to move to the right rear until the following distance is larger than the minimum following distance, when the distance between the mobile robot and the following target is larger than the minimum following distance and smaller than the maximum following distance, controlling the mobile robot to keep a standby state in situ, and when the distance between the mobile robot and the following target is larger than the maximum following distance, controlling the mobile robot to advance to the following target until the following distance is between the minimum following distance and the maximum following distance.
And 3.4, when the body central axis angle of the mobile robot and the following target body central axis are larger than the following angle positive value, controlling the mobile robot to rotate leftwards until the body central axis angle of the mobile robot and the following target body central axis are smaller than the following angle positive value, and when the body central axis angle of the mobile robot and the following target body central axis are smaller than the following angle negative value, controlling the mobile robot to rotate rightwards until the body central axis angle of the mobile robot and the following target body central axis are larger than the following angle negative value.
Preferably, the minimum following distance is set to 1 m, the maximum following distance to 3 m, and the following angle to -15° to 15°.
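The distance and angle rules of steps 3.3-3.4 amount to a simple banded controller. A minimal sketch with the preferred 1 m / 3 m / ±15° thresholds; the function and command names are illustrative.

```python
def follow_command(distance, angle, d_min=1.0, d_max=3.0, ang_max=15.0):
    """Decision rule of steps 3.3-3.4; returns (linear, turn) commands.

    distance in metres; angle in degrees, positive meaning the target lies
    to the left of the robot's central axis (sign convention assumed).
    """
    if distance < d_min:
        linear = "backward"          # too close: back straight up
    elif distance > d_max:
        linear = "forward"           # too far: advance toward the target
    else:
        linear = "hold"              # inside the band: stand by in place
    if angle > ang_max:
        turn = "left"                # target beyond the positive angle limit
    elif angle < -ang_max:
        turn = "right"               # target beyond the negative angle limit
    else:
        turn = "none"
    return linear, turn
```

The dead band between the two distance thresholds keeps the robot from oscillating when the target makes small movements.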
The beneficial effects are that: according to the UWB-based mobile robot and its navigation and following method, the mobile robot determines its accurate position through the cooperation of the preset environment map, the UWB positioning device and the inertial sensor, plans the moving path of its operation according to the environment map and its start and end positions, and senses, by means of the laser radar, obstacles ahead that are about to fall into the expansion range of the mobile robot during movement, so that obstacles in the environment are effectively avoided, the inspection operation is completed fully autonomously, and human resources are saved.
Moreover, since the mobile robot can follow other targets according to the positioning information of the UWB devices, it obtains the positions of the other following targets through the UWB positioning device, and the upper computer determines the moving route of the mobile robot, realizing multi-machine cooperative work, saving time and improving working efficiency.
Drawings
Fig. 1 is a schematic view of a mobile robot in an example of the present invention.
Fig. 2 is a flow chart of mobile robot navigation in an example of the invention.
Fig. 3 is a schematic view of a mobile robot obstacle avoidance in an example of the invention.
Fig. 4 is a flow chart of a mobile robot following in an example of the invention.
Fig. 5 is a schematic diagram of a mobile robot following in an example of the invention.
Detailed Description
The invention will be further described with reference to the accompanying drawings.
Example 1:
as shown in fig. 1, the present embodiment describes a mobile robot, which includes a robot body and an external UWB positioning device 106 separated from the robot body, wherein the robot body includes an upper computer 100, a lower computer 101, a UWB positioning device 102, a laser radar 103, an inertial sensor 104, and a device driver 105, which are disposed in the robot body. The lower computer 101, the UWB positioning device 102, the lidar 103, and the inertial sensor (IMU) 104 are all connected to the upper computer 100, including but not limited to electrical and/or communication connections.
The UWB positioning device communicates with the external UWB positioning devices to determine the position of the robot body in the preset environment map, and transmits the position information of the robot body to the upper computer. The upper computer plans the moving route of the robot body from the positioning information provided by the UWB positioning device, resolves the movement instruction of the robot body, and transmits it to the lower computer. The lower computer receives the movement instruction of the upper computer and controls the robot body to move within the environment map. The laser radar scans the position information of obstacles while the robot body moves and transmits it to the upper computer, which adds the obstacle information to the environment map and updates the moving route of the robot body according to the updated map. The inertial sensor determines the movement direction of the robot body and, together with the UWB positioning device, realizes co-location of the robot body in the environment map.
The laser radar is arranged at the front of the robot body and senses, during movement, obstacles ahead that are about to fall into the expansion range of the robot body, marking their position information on the environment map; according to this position information, the upper computer makes the robot body optimize its moving path before touching such an obstacle, so as to avoid it.
The laser radar is arranged lower than the height of the obstacles, at the front of the robot body and without occlusion.
The lower computer is connected with the equipment drive of the robot main body, and the equipment drive is used for realizing movement of the robot main body.
Wherein the external UWB positioning device is installed at a corresponding location of the operating environment.
The corresponding positions of the working environment are four vertexes of a square area with the side length of 20 m.
Wherein the laser radar is a 2-dimensional or 3-dimensional laser radar.
Example 2:
as shown in fig. 2, the present embodiment describes a mobile robot following and navigating method, including the steps of:
and drawing an environment map of the working space of the mobile robot in advance, and planning a working path of the mobile robot by the upper computer according to the starting position and the ending position of the mobile robot and the environment map.
The UWB positioning device is communicated with the external UWB positioning device and cooperates with the inertial sensor to realize real-time positioning of the mobile robot, and the upper computer updates the position of the mobile robot in the environment map through the real-time position information of the mobile robot.
The laser radar senses an obstacle which is in front of the mobile robot and is about to fall into the expansion range of the mobile robot in the moving process of the mobile robot, marks the position information of the obstacle which is about to fall into the expansion range of the mobile robot on the environment map, and updates the environment map.
The mobile robot avoids obstacles that are about to fall into its expansion range and operates along the working path planned by the upper computer.
According to the positioning information of the UWB devices, the mobile robot can determine, through the upper computer, the distance and angle between itself and a following target, so as to follow other targets.
The mobile robot obtains the positions of other following targets through the UWB positioning equipment, and judges the moving route of the mobile robot through the upper computer 100.
The mobile robot keeps its distance from the following target within a certain range, namely within ±30° of the central axis of the following target's body and at a distance larger than the body length of the following target.
The following target is not limited to other mobile robots; it only needs to carry a UWB positioning device.
Furthermore, since existing mobile robots cannot be accurately positioned over long periods in a large indoor working area using laser-radar SLAM, the positioning of the mobile robot body in the large indoor working area is realized by combining absolute and relative positioning. Positioning with the UWB positioning device 102 is an absolute positioning mode: the UWB positioning device 102 is installed at the center of the mobile robot body, and the external UWB positioning devices 106 are installed in the large indoor environment at any three of the four vertices of a square area with a side length of 20 m, covering the large indoor environment. The UWB positioning device 102 communicates wirelessly with the external UWB positioning devices 106, and the distance between the UWB positioning device 102 and each external UWB positioning device 106 is measured using a double-sided two-way time-of-flight method, the ranging formula being shown in equation (1):

d = c * (T_round1 * T_round2 - T_reply1 * T_reply2) / (T_round1 + T_round2 + T_reply1 + T_reply2)  (1)

where c is the approximate velocity of the electromagnetic wave in vacuum; T_round1 is the time difference between the UWB positioning device 102 sending a signal to the external UWB positioning device 106 and confirming its reception; T_round2 is the time difference between the external UWB positioning device 106 responding after receiving the signal of the UWB positioning device 102 and confirming the reception; T_reply1 is the time difference between the UWB positioning device 102 receiving the return signal of the external UWB positioning device 106 and returning a termination signal to the external UWB positioning device 106 with confirmed reception; T_reply2 is the time difference between the external UWB positioning device 106 returning a signal to the UWB positioning device 102 and receiving the termination signal.

The UWB positioning device 102 on the robot body measures the distances to the three external UWB positioning devices 106, which are stored in the upper computer 100 as d_1, d_2, d_3. This embodiment uses a three-point positioning method to obtain the position in the large indoor working area. Since the specific position coordinates of the external UWB positioning devices 106 are already determined, i.e. the positions of the three external UWB positioning devices 106 participating in positioning are A_1(x_1, y_1, z_1), A_2(x_2, y_2, z_2) and A_3(x_3, y_3, z_3), the position T_0(x_0, y_0, z_0) of the robot body can be obtained, where z_0 is fixed because the height of the robot body is relatively fixed. The specific position calculation is shown in equation (2):

(x_0 - x_i)^2 + (y_0 - y_i)^2 + (z_0 - z_i)^2 = d_i^2,  i = 1, 2, 3  (2)
the inertial sensor 104 is used to measure acceleration and angular velocity of the robot body, and calculate the speed and position of the robot body by integrating successive acceleration and angular velocity measurements. The inertial sensor 104 integrates once from the measured value of acceleration to obtain a velocity, integrates again to obtain a position, and integrates from the measured value of angular velocity of the gyroscope to determine the direction of the robot body.
The inertial sensor 104 is susceptible to error accumulation during long-time movement, resulting in positioning inaccuracy, and the invention adopts an extended Kalman filtering mode to fuse absolute positioning information of the UWB positioning device 102 with relative positioning information of the inertial sensor 104.
Firstly, the initialization of the extended Kalman filter is completed in the upper computer. Because the body height of the mobile robot is fixed, the state vector x_k = [x, y, vx, vy] is initialized, the state covariance matrix P_k is initialized to represent the uncertainty of the state estimate, and the system noise covariance matrix Q_k is initialized to represent the noise of the state transition process. x and y are the coordinates of the robot body on the x- and y-axes, and vx and vy its speeds along the x- and y-directions.
The second step is data acquisition, which is to acquire a measured value uwb_data_k of the UWB positioning device 102, and acquire a measured value imu_data_k of the inertial sensor 104, which is a measured value of acceleration and angular velocity.
The third step is the prediction step: the state vector x_pred_k and the state covariance matrix P_pred_k are predicted to estimate the state at the next time step. The state vector is propagated through the state transition equation f, taking into account the measurement data from the inertial sensor 104, with velocity and position estimated from the integrated acceleration and angular velocity. The predicted state covariance matrix P_pred_k accounts for the uncertainty of the state transition; it is updated through the Jacobian matrix F_k of the state transition equation and the system noise covariance matrix Q_k. The specific formulas are as follows:
x_pred_k =f(x_k-1, imu_data_k) (3)
P_pred_k = F_k * P_k-1 * (F_k)^T + Q_k (4)
wherein f is a state transition equation, T represents a transposed matrix, x_k-1 represents a state vector of a previous time step, and P_k-1 represents a state covariance matrix of the previous time step; f_k represents the jacobian of the state transition equation for the current time step.
The fourth step is the measurement update step: when the upper computer 100 receives the measured value uwb_data_k of the UWB positioning device 102, a measurement update is performed. The measurement residual y_k, i.e. the difference between the predicted state and the actual measurement, is calculated:
y_k = uwb_data_k - h(x_pred_k) (5)
Where h is a function of mapping the state vector x_pred_k to the UWB measurement space for estimating UWB measurements.
The upper computer 100 calculates the measurement noise covariance matrix R_k, which describes the uncertainty of the UWB positioning device 102 measurements and is determined by the characteristics of the UWB positioning device, and then calculates the Kalman gain K_k:

S_k = H_k * P_pred_k * (H_k)^T + R_k
K_k = P_pred_k * (H_k)^T * (S_k)^-1  (6)

where H_k is the Jacobian matrix of the measurement model function h(x_pred_k) with respect to the state vector x_pred_k, and R_k is the measurement noise covariance matrix of the current time step.
The state vector x_pred_k is updated using the kalman gain k_k, resulting in an updated state vector x_corrected_k:
x_corrected_k = x_pred_k + K_k * y_k (7)
the state covariance matrix p_pred_k is then updated using the kalman gain k_k, resulting in an updated state covariance matrix p_corrected_k:
P_corrected_k = (I- K_k * H_k) * P_pred_k (8)
wherein I represents an identity matrix.
And fifthly, outputting a result, wherein the upper computer 100 uses the corrected state vector x_corrected_k as an estimation result of the position and the speed when running the extended Kalman filtering algorithm.
The sixth step is iteration: the upper computer 100 returns to the third (prediction) step, using x_corrected_k as the new state vector and P_corrected_k as the new state covariance matrix, and then performs the prediction and measurement update of the next time step. Each iteration depends on the result of the previous step, ensuring that the state estimate remains accurate as new measurement data are continuously received, thereby obtaining an accurate real-time position of the mobile robot.
Further, the upper computer 100, having obtained an accurate map of the working environment of the mobile robot and the real-time position of the mobile robot within it, sets the working path of the mobile robot in the indoor large environment; the upper computer 100 acquires the real-time position of the mobile robot and plans its working path using the A* algorithm.
The A* algorithm is a heuristic search algorithm for finding the shortest path from a start point to a target point. It operates on a graph or grid by combining the path length actually travelled so far (the g value) and a heuristic estimate of the distance to the target (the h value) into a composite estimate (the f value), and then selecting the node with the smallest f value for expansion.
The upper computer 100 prepares a map and nodes, i.e., abstracts the map of the working environment of the mobile robot into a graph or grid, wherein each node represents a discrete location, determines the starting point and the target point of the mobile robot, and maps them to the nearest node.
The upper computer 100 then initializes a data structure to create an Open List (Open List) for storing the nodes to be expanded, and a Closed List (Closed List) for storing the nodes that have been expanded. The Euclidean distance is used as a heuristic function to estimate the cost from the current node to the target node.
The host computer 100 then initializes the starting point, places the starting point in the open list, and sets the g value to 0 and the h value to the estimated distance from the starting point to the target point calculated by the heuristic function.
In each cycle, the upper computer 100 selects the node with the smallest f value (i.e., the smallest g value plus h value) from the open list for expansion, and moves the selected node from the open list to the closed list. For each neighbour of that node: if the neighbour is in the closed list, it is skipped; if it is not in the open list, it is added to the open list and its g, h and f values are calculated; if it is already in the open list, the g value of the path through the current node to the neighbour is calculated, and if that g value is smaller, the neighbour's parent is updated to the current node and its g and f values are updated. Finally the termination condition is checked: the algorithm terminates when the target node enters the closed list or the open list is empty (no expandable nodes remain).
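The expansion loop described above can be sketched as follows; the grid, the unit step cost, and the start/goal coordinates are illustrative choices, not from the patent:

```python
import heapq, math

def a_star(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 marks an obstacle.
    g = cost actually travelled, h = Euclidean heuristic, f = g + h."""
    rows, cols = len(grid), len(grid[0])
    h = lambda n: math.dist(n, goal)            # heuristic: Euclidean distance
    open_list = [(h(start), 0.0, start)]        # heap of (f, g, node)
    g_best = {start: 0.0}
    came_from = {start: None}                   # parent pointers
    closed = set()
    while open_list:                            # terminate when open list is empty
        f, g, node = heapq.heappop(open_list)
        if node in closed:
            continue
        closed.add(node)                        # move node to the closed list
        if node == goal:                        # goal entered the closed list
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nb[0] < rows and 0 <= nb[1] < cols
                    and grid[nb[0]][nb[1]] == 0 and nb not in closed):
                ng = g + 1.0                    # g of the path through the current node
                if ng < g_best.get(nb, float("inf")):
                    g_best[nb] = ng             # better path: update parent and g
                    came_from[nb] = node
                    heapq.heappush(open_list, (ng + h(nb), ng, nb))
    return None                                 # open list empty, no path

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (2, 0))
```

Instead of re-sorting the open list, the sketch pushes a node again whenever a better g value is found and discards stale heap entries on pop, which is the conventional heap-based formulation of the same update rule.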
Further, the starting point and end point of each distinct task segment of the mobile robot in the indoor large scene are treated as an independent start/goal pair, and the upper computer 100 plans each segment separately to obtain the optimal path for the whole operation process.
As shown in fig. 3, further, the lidar 103 senses obstacles in front of the mobile robot while it moves. The upper computer 100 defines a circular inflation (expansion) zone centred on the mobile robot structure, whose radius is larger than the width of the robot body, ensuring that the mobile robot does not collide when it encounters and avoids an obstacle. When the lidar 103 scans an obstacle falling within this inflation zone, i.e. an obstacle that would block the mobile robot's normal operation, its position information is transmitted to the upper computer 100, which marks it on the environment map as a temporary obstacle and updates the planned moving path so that the local path of the mobile robot avoids the obstacle.
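The inflation-zone test implied above reduces to a point-in-circle check; the body width and safety margin values below are assumptions for illustration:

```python
import math

def in_inflation_zone(robot_xy, obstacle_xy, body_width=0.5, margin=0.2):
    """Return True if an obstacle point lies inside the circular inflation
    zone centred on the robot; radius exceeds the body width, per the text."""
    radius = body_width + margin
    return math.dist(robot_xy, obstacle_xy) <= radius

# Only obstacle points inside the zone are marked on the map as temporary obstacles.
hits = [p for p in [(0.3, 0.2), (2.0, 2.0)] if in_inflation_zone((0.0, 0.0), p)]
```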
The TEB (Timed Elastic Band) algorithm uses a time elastic band (i.e., a time window) to solve the path planning problem on different time scales, thereby balancing safety and speed.
The upper computer 100 first determines the map and trajectory input: when an obstacle in front of the mobile robot is about to fall into its inflation range, the position of the mobile robot at that moment is taken as the starting point of local path planning and a target point past the obstacle is taken as the ending point; the speed and attitude of the mobile robot at that moment are obtained, and a time elastic band (time window) is then set, i.e. the different positions the mobile robot can reach within a period of time.
First, the host computer 100 initializes the TEB-related parameters and sets the starting-point state to (x_start, y_start, θ_start, v_start), where (x_start, y_start) is the position coordinate of the mobile robot when it detects the obstacle, θ_start is the orientation angle of the mobile robot at that moment, and v_start is its speed at that moment. It then defines a target point whose state is denoted (x_goal, y_goal), and sets the TEB parameters, including the time resolution dt and the speed and acceleration limits v_max and a_max.
Subsequently, the host computer 100 discretizes the path: it defines a time window [t_start, t_end], calculates the number of time steps N = (t_end - t_start)/dt from the time window and the time resolution, and computes the trajectory points using a kinematic model. For each time step t_i, the possible trajectory point (x_i, y_i), orientation angle θ_i and velocity v_i are calculated by the host computer 100 as follows:
θ_i = θ_(i-1) + (v_(i-1)/L) * tan(δ) * dt
x_i = x_(i-1) + v_(i-1) * cos(θ_(i-1)) * dt
y_i = y_(i-1) + v_(i-1) * sin(θ_(i-1)) * dt (9)
where δ is the steering angle of the mobile robot and L is the wheelbase of the mobile robot body.
The host computer 100 then constructs a cost function J(x, y, v) over the trajectory points (x_i, y_i) and velocities v_i. It includes a target arrival time term that encourages the trajectory to reach the target point within a specified time, which may be expressed as J_time(x_i, y_i, v_i) = w_time * (t_goal - t_end), where w_time is the time weight, t_goal is the target arrival time, and t_end is the trajectory end time. It also includes an obstacle avoidance term that penalizes trajectories passing through obstacles, computed from the map information and obstacle distances, and a smoothness term that encourages smooth trajectories, which may be expressed as:
J_smooth(x_i, y_i, v_i) = w_smooth * (k_1*(x_i - x_prev)^2 + k_2*(y_i - y_prev)^2 + k_3*(v_i - v_prev)^2) (10)
where w_smooth is the smoothness weight, k_1, k_2, k_3 are smoothness coefficients, and x_prev, y_prev, v_prev are the trajectory point and velocity of the previous time step.
J_obstacle(x_i,y_i,v_i)=∑[w_obstacle*f(d)] (11)
where w_obstacle is the weight of the obstacle avoidance cost, used to control obstacle avoidance, and f(d) is a Gaussian function, i.e. f(d) = exp(-d^2/(2*sigma^2)), where sigma is a parameter controlling the width of the Gaussian function. For smaller sigma, the cost increases rapidly as the trajectory approaches an obstacle, encouraging the trajectory to move away from the obstacle.
The sum of the cost functions can be expressed as:
J_total(x_i, y_i, v_i) = J_time(x_i, y_i, v_i) + J_obstacle(x_i, y_i, v_i) + J_smooth(x_i, y_i, v_i) (12)
the upper computer 100 uses a least square method to minimize the cost function j_total in the optimization process.
The optimization variables are the trajectory points (x_i, y_i) and the velocity v_i, and constraints can be used during the optimization to ensure that the velocity and acceleration are within the allowed range, as well as the smoothness of the path.
The host computer 100 then selects appropriate path points from the optimized trajectory and issues instructions to the lower computer 101 to drive the mobile robot along them; during this motion the inflation radius of the mobile robot does not come into contact with obstacles. During execution, the path planned by the host computer 100 may be adjusted in real time according to environmental changes to ensure safety and adaptability; if dynamic obstacles exist, the TEB algorithm can take their positions and velocities into account in the optimization to avoid collisions.
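Under the cost terms just described, a highly simplified sketch of the TEB-style evaluation might look like the following. The weights, the distance-to-goal stand-in for the arrival-time term, and the crude one-parameter search standing in for the least-squares optimisation over all trajectory points are all assumptions for illustration:

```python
import math

# Illustrative weights and parameters; none of these values come from the patent.
W_TIME, W_OBST, W_SMOOTH, SIGMA = 1.0, 5.0, 0.1, 0.5
L_WHEELBASE, DT = 0.3, 0.2

def rollout(state, delta, v, n):
    """Roll a bicycle model forward n time steps at fixed steering angle delta."""
    x, y, th = state
    pts = []
    for _ in range(n):
        th += v / L_WHEELBASE * math.tan(delta) * DT
        x += v * math.cos(th) * DT
        y += v * math.sin(th) * DT
        pts.append((x, y, th))
    return pts

def j_total(pts, goal, obstacles):
    """J_total = J_time + J_obstacle + J_smooth for one candidate trajectory."""
    # Arrival term: remaining distance to the goal stands in for (t_goal - t_end).
    j_time = W_TIME * math.dist(pts[-1][:2], goal)
    # Obstacle term: Gaussian cost f(d) = exp(-d^2 / (2*sigma^2)), as in eq. (11).
    j_obst = sum(W_OBST * math.exp(-math.dist(p[:2], o) ** 2 / (2 * SIGMA ** 2))
                 for p in pts for o in obstacles)
    # Smoothness term: penalise large heading changes between consecutive steps.
    j_smooth = sum(W_SMOOTH * (pts[i][2] - pts[i - 1][2]) ** 2
                   for i in range(1, len(pts)))
    return j_time + j_obst + j_smooth

# A one-parameter search over steering angles stands in for the least-squares
# optimisation over all trajectory points and velocities.
goal, obstacles = (2.0, 0.5), [(1.0, 0.0)]
best_cost, best_delta = min(
    (j_total(rollout((0.0, 0.0, 0.0), d, 1.0, 10), goal, obstacles), d)
    for d in (-0.4, -0.2, 0.0, 0.2, 0.4))
```

With an obstacle directly ahead, the straight trajectory (delta = 0) accumulates large Gaussian obstacle cost, so the search selects a non-zero steering angle that curves around it.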
As shown in fig. 4 and 5, further, the mobile robot follows other targets according to the positioning information of the UWB positioning device 102. The UWB positioning device 102 installed on the mobile robot communicates wirelessly with the external UWB positioning device 106 installed on the following target; the straight-line distance between the UWB positioning device 102 and the external UWB positioning device 106 and the angle between the mobile robot and the following target are obtained by a two-way time-of-flight measurement method, and these data are sent to the upper computer 100 through a TTL serial port. The minimum following distance set by the upper computer 100 needs to be greater than the body length of the following target, and the maximum following distance must remain within the effective communication range of the UWB positioning device 102; for example, the following distance may be 1 m-3 m, with a corresponding following angle of -15° to 15°. When the distance between the mobile robot and the following target is less than 1 m, the upper computer 100 issues an instruction to the lower computer 101 to move the mobile robot straight backwards until the following distance is greater than 1 m; when the distance is greater than 1 m and less than 3 m, the upper computer 100 instructs the lower computer 101 to keep the mobile robot on standby in place; and when the distance is greater than 3 m, the upper computer 100 instructs the lower computer 101 to move the mobile robot forwards towards the following target until the following distance is between 1 m and 3 m.
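The two-way time-of-flight ranging referred to above can be illustrated with the standard double-sided exchange (the same formula appears in claim 6); the timestamp values below are synthetic:

```python
# Double-sided two-way ranging: time of flight
#   t_f = (T_round1*T_round2 - T_reply1*T_reply2)
#         / (T_round1 + T_round2 + T_reply1 + T_reply2),   d = c * t_f
C = 299792458.0  # approximate speed of electromagnetic waves in vacuum, m/s

def ds_twr_distance(t_round1, t_reply1, t_round2, t_reply2):
    """Distance between the two UWB devices from the four measured intervals."""
    tof = (t_round1 * t_round2 - t_reply1 * t_reply2) / (
        t_round1 + t_round2 + t_reply1 + t_reply2)
    return C * tof

# With a true one-way flight time of 10 ns (about 3 m) and 1 ms reply delays,
# each round interval is (2 * flight time + reply delay):
tof_true = 10e-9
d = ds_twr_distance(2 * tof_true + 1e-3, 1e-3, 2 * tof_true + 1e-3, 1e-3)
```

The combination of two rounds cancels the (much larger) reply delays, which is why clock offsets between the two devices contribute only a small residual error.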
When the angle between the central axis of the mobile robot body and that of the following target body is greater than 15°, the upper computer 100 instructs the lower computer 101 to rotate the mobile robot to the left until the angle is less than 15°; when the angle is less than -15°, the upper computer 100 instructs the lower computer 101 to rotate the mobile robot to the right until the angle is greater than -15°.
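The distance and angle thresholds above amount to a small decision rule. The command names, and the choice to give angle corrections priority over distance corrections, are assumptions for illustration (the text does not specify an ordering):

```python
MIN_DIST, MAX_DIST = 1.0, 3.0   # example following distances from the text, metres
MAX_ANGLE = 15.0                # example following angle bound, degrees

def follow_command(distance, angle):
    """Map a UWB distance/angle reading to a motion command for the lower
    computer, mirroring the thresholds described above."""
    if angle > MAX_ANGLE:       # target too far to the left of the body axis
        return "rotate_left"
    if angle < -MAX_ANGLE:      # target too far to the right
        return "rotate_right"
    if distance < MIN_DIST:     # too close: back straight up
        return "move_backward"
    if distance > MAX_DIST:     # too far: advance towards the target
        return "move_forward"
    return "standby"            # within the following band: hold position

cmds = [follow_command(d, a)
        for d, a in [(0.5, 0), (2.0, 0), (4.0, 0), (2.0, 20), (2.0, -20)]]
```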
The invention utilizes the characteristics of UWB technology to realize accurate following and navigation of a target by the mobile robot, thereby efficiently completing tasks in an indoor large environment. The mobile robot comprises two small computers, an upper computer and a lower computer, together with UWB positioning equipment, a two-dimensional laser radar and an inertial sensor. The upper computer receives the sensor data, performs navigation and following path planning, and sends corresponding motion instructions to the lower computer; the lower computer receives the motion instructions of the upper computer and controls the robot's movement; the UWB positioning device communicates with the external UWB positioning device for accurate positioning and following coordination of the robot in an indoor large environment, realizing indoor inspection and related operations; the laser radar is used for avoiding obstacles during navigation and movement; the inertial sensor is used for judging the orientation of the mobile robot and assisting positioning. The mobile robot can plan paths according to the pre-established indoor map, avoid obstacles while moving, and operate cooperatively with multiple robots, effectively improving operation efficiency.
The foregoing is only a preferred embodiment of the invention, it being noted that: it will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the principles of the present invention, and such modifications and adaptations are intended to be comprehended within the scope of the invention.

Claims (10)

1. A UWB-based mobile robot, characterized by: comprising the following steps: a robot body and an external UWB positioning device;
the robot body includes: the upper computer, the lower computer, the UWB positioning equipment, the laser radar, the inertial sensor and the equipment drive are arranged in the robot main body; the lower computer, the UWB positioning equipment, the laser radar and the inertial sensor are all connected with the upper computer; the lower computer is in driving connection with the equipment; the UWB positioning device communicates with an external UWB positioning device.
2. A method for navigating and following a mobile robot based on UWB, characterized by: the method comprises the following steps:
step 1, loading an environment map of a preset operation space, and planning operation path information of a mobile robot in the environment map according to positioning information acquired in real time;
and 2, when the obstacle information is detected, planning the operation path information of the mobile robot avoiding the obstacle according to the obstacle information.
3. The navigation and follow-up method according to claim 2, characterized in that: further comprises: step 3 of the method, in which the step 3,
and 3, planning the following operation path information of the mobile robot according to the distance and the angle of the following target.
4. The navigation and follow-up method according to claim 2, wherein: the step 1 specifically includes:
step 1.1, abstracting an environment map of a working space into a grid, wherein each node of the grid represents a discrete position, determining a starting point and a target point of the mobile robot, and mapping the starting point and the target point to the nearest node;
step 1.2, creating an Open List for storing the nodes to be expanded, and a Closed List for storing the nodes which have already been expanded; using Euclidean distance as heuristic function to estimate cost from current node to target node;
step 1.3, putting the starting point into an Open List, setting the length of a path which actually passes through as a g value and setting the g value as 0, and setting the estimated distance from the starting point to the target point calculated by a heuristic function as an h value;
step 1.4, in each cycle, adding a value of h to a value of g as an f value, selecting a node with the minimum f value from an Open List for expansion, and moving the expanded node from the Open List to a Closed List;
step 1.5, for each neighboring node of the node, if the neighboring node is in the Closed List, skipping the neighboring node; if the adjacent node is not in the Open List, adding the adjacent node to the Open List, and calculating g value, h value and f value of the adjacent node; if the neighboring node is already in the Open List, calculating g value of a path through the current node to the neighboring node; if the g value of the path is smaller, updating the father node of the adjacent node as the current node, and updating the g value and the f value;
Step 1.6, when the target node enters the Closed List or the Open List is empty, the algorithm is terminated, and the shortest path node set from the starting point to the target point is output.
5. The navigation and follow-up method of claim 4, wherein: the g value obtaining method specifically comprises the following steps:
acquiring the distance d between UWB positioning equipment of the mobile robot and external UWB positioning equipment in the current time step;
inputting the coordinates of the external UWB positioning equipment and the distance d into a position calculation model, and solving to obtain the coordinates of the UWB positioning equipment of the mobile robot in the current time step;
acquiring the speed and the moving direction of the mobile robot in the current time step according to the inertial sensor of the mobile robot;
taking the coordinates, the speed and the moving direction of the mobile robot in the current time step as state vectors, and acquiring the state vectors of the mobile robot estimated in the next time step by adopting an extended Kalman filtering method;
and calculating the path length which the mobile robot actually has walked according to the current time step mobile robot state vector and the next time step mobile robot state vector.
6. The navigation and follow-up method of claim 5, wherein: the distance d has the following calculation formula:
d = c * (T_round1 * T_round2 - T_reply1 * T_reply2) / (T_round1 + T_round2 + T_reply1 + T_reply2)
where c is the approximate speed of electromagnetic waves in vacuum; T_round1 is the time difference between the UWB positioning device sending a signal to the external UWB positioning device and confirming its reception by the external UWB positioning device; T_round2 is the time difference between the external UWB positioning device receiving the signal of the UWB positioning device, responding to the UWB positioning device, and confirming reception; T_reply1 is the time difference between the UWB positioning device receiving the return signal of the external UWB positioning device and returning a termination signal to the external UWB positioning device, confirmed received by the external UWB positioning device; and T_reply2 is the time difference between the external UWB positioning device returning the signal and receiving the termination signal.
7. The navigation and follow-up method of claim 5, wherein: the calculation formula of the position calculation model is as follows:
d = sqrt((x_0 - x_1)^2 + (y_0 - y_1)^2)
where x_0, y_0 are the x-axis and y-axis coordinate values of the UWB positioning device of the mobile robot, and x_1, y_1 are the x-axis and y-axis coordinate values of the external UWB positioning device.
8. The navigation and follow-up method of claim 5, wherein: the method for obtaining the state vector of the mobile robot estimated in the next time step by taking the coordinates, the speed and the moving direction of the mobile robot in the current time step as the state vector and adopting an extended Kalman filtering method specifically comprises the following steps:
Step 1.1.1, initializing a state vector x_k = [x, y, vx, vy], a state covariance matrix P_k, and a system noise covariance matrix Q_k; x represents the coordinate of the robot body on the x-axis, y represents the coordinate of the robot body on the y-axis, vx represents the speed of the robot body in the x-axis direction, and vy represents the speed of the robot body in the y-axis direction;
step 1.1.2, obtaining a measured value uwb_data_k of the UWB positioning equipment and obtaining a measured value imu_data_k of the inertial sensor;
step 1.1.3, calculating a prediction state vector x_pred_k and a prediction state covariance matrix p_pred_k, wherein the calculation formula is as follows:
x_pred_k=f(x_k-1,imu_data_k)
P_pred_k=F_k*P_k-1*(F_k)^T+Q_k
wherein f is a state transition equation, T represents a transposed matrix, x_k-1 represents a state vector of a previous time step, and P_k-1 represents a state covariance matrix of the previous time step; f_k represents a jacobian matrix of a state transition equation of the current time step;
step 1.1.4, calculating a measurement residual error y_k, wherein the calculation formula is as follows:
y_k=data_uwb_k-h(x_pred_k)
wherein h is used to estimate UWB measurements;
step 1.1.5, calculating a Kalman gain K_k according to a measurement noise covariance matrix R_k, wherein the calculation formula is as follows:
S_k=H_k*P_pred_k*(H_k)^T+R_k
K_k=P_pred_k*(H_k)^T*(S_k)^(-1)
wherein H_k is a jacobian matrix of a measurement model function H (x_pred_k) to a state vector x_pred_k, and R_k represents a measurement noise covariance matrix of a current time step;
Step 1.1.6, updating the state covariance matrix p_pred_k by using the kalman gain k_k to obtain an updated state covariance matrix p_corrected_k:
P_corrected_k=(I-K_k*H_k)*P_pred_k
wherein I represents an identity matrix;
step 1.1.7, update the state vector x_pred_k by using the kalman gain k_k to obtain an updated state vector x_corrected_k, and the calculation formula is as follows:
x_corrected_k=x_pred_k+K_k*y_k
step 1.1.8, taking x_corrected_k as the new x_k-1 and P_corrected_k as the new P_k-1, repeating steps 1.1.3-1.1.7, and outputting x_corrected_k of the next time step to obtain the position and speed of the mobile robot.
9. The navigation and follow-up method according to claim 2, characterized in that: the step 2 specifically includes:
step 2.1, setting a starting point state as (x_start, y_start, theta_start, v_start), wherein (x_start, y_start) is the position coordinate of the mobile robot when the mobile robot detects an obstacle, theta_start is the orientation angle of the mobile robot at that moment, and v_start is the speed of the mobile robot at that moment; defining a target point, the state of which is expressed as (x_goal, y_goal); setting the TEB parameters, including the time resolution dt and the speed and acceleration limits v_max and a_max;
step 2.2, defining a time window [ t_start, t_end ], and calculating the time step number N= (t_end-t_start)/dt according to the time window and the time resolution;
Step 2.3, for each time step t_i, calculating the possible trajectory point (x_i, y_i), velocity v_i and orientation angle θ_i using a kinematic model, whose calculation formula is as follows:
θ_i = θ_(i-1) + (v_(i-1)/L)*tan(δ)*dt
x_i = x_(i-1) + v_(i-1)*cos(θ_(i-1))*dt
y_i = y_(i-1) + v_(i-1)*sin(θ_(i-1))*dt
wherein δ is the steering angle of the mobile robot, and L is the wheelbase of the mobile robot body; x_(i-1), y_(i-1), θ_(i-1), v_(i-1) represent the position coordinates, orientation angle and velocity, respectively, of the previous time step;
step 2.4, constructing a cost function, wherein the calculation formula is as follows:
J_total(x_i,y_i,v_i)=J_time(x_i,y_i,v_i)+J_obstacle(x_i,y_i,v_i)+J_smooth(x_i,y_i,v_i)
wherein:
J_time(x_i,y_i,v_i)=w_time*(t_goal-t_end)
wherein: w_time is the time weight, t_goal is the target arrival time, and t_end is the trajectory end time;
wherein:
J_smooth(x_i,y_i,v_i)=w_smooth*(k_1*(x_i-x_i_prev)^2+k_2*(y_i-y_i_prev)^2+k_3*(v_i-v_i_prev)^2)
where w_smooth is a smoothness weight, k_1, k_2, k_3 are smoothness coefficients, x_i_prev, y_i_prev, v_i_prev are coordinate values and speeds of the track points of the previous time step;
wherein: j_obstacle (x_i, y_i, v_i) = Σ [ w_obstacle f (d) ]
Wherein: w_obstacle is the weight of the obstacle avoidance cost; f (d) is a Gaussian function;
the cost function j_total is minimized using a least squares method, resulting in N time step trace points (x_i, y_i) and a velocity v_i.
10. A navigation and follow-up method according to claim 3, characterized in that said step 3 comprises in particular:
step 3.1, obtaining the linear distance between the UWB positioning equipment of the mobile robot and the following target containing the external UWB positioning equipment, and the angle between the mobile robot and the following target;
Step 3.2, setting a minimum following distance, a maximum following distance and a following angle;
3.3, when the distance between the mobile robot and the following target is smaller than the minimum following distance, controlling the mobile robot to move to the right rear until the following distance is larger than the minimum following distance, when the distance between the mobile robot and the following target is larger than the minimum following distance and smaller than the maximum following distance, controlling the mobile robot to keep a standby state in situ, and when the distance between the mobile robot and the following target is larger than the maximum following distance, controlling the mobile robot to advance to the following target until the following distance is between the minimum following distance and the maximum following distance;
and 3.4, when the body central axis angle of the mobile robot and the following target body central axis are larger than the following angle positive value, controlling the mobile robot to rotate leftwards until the body central axis angle of the mobile robot and the following target body central axis are smaller than the following angle positive value, and when the body central axis angle of the mobile robot and the following target body central axis are smaller than the following angle negative value, controlling the mobile robot to rotate rightwards until the body central axis angle of the mobile robot and the following target body central axis are larger than the following angle negative value.
Application CN202311299256.2A (priority date 2023-10-08, filing date 2023-10-08): UWB-based mobile robot and navigation and following method thereof. Status: Pending; published as CN117406721A.

Publications (1)

Publication Number: CN117406721A; Publication Date: 2024-01-16
