CN115218891B - Autonomous positioning and navigation method for mobile robot - Google Patents

Autonomous positioning and navigation method for mobile robot

Info

Publication number
CN115218891B
CN115218891B (application CN202211067758.8A)
Authority
CN
China
Prior art keywords
mobile robot
positioning
coordinate system
data
laser radar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211067758.8A
Other languages
Chinese (zh)
Other versions
CN115218891A (en)
Inventor
李军民
杨文龙
龚志冬
宋昌林
龙驹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xihua University
Original Assignee
Xihua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xihua University
Priority claimed from application CN202211067758.8A
Publication of CN115218891A
Application granted
Publication of CN115218891B
Legal status: Active

Classifications

    • G01C21/005 Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles, with means for defining a desired trajectory
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles, using optical position detecting means
    • G05D1/0234 Control of position or course in two dimensions specially adapted to land vehicles, using optical markers or beacons
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles, using a radar
    • G05D1/0268 Control of position or course in two dimensions specially adapted to land vehicles, using internal positioning means
    • G05D1/027 Control of position or course in two dimensions specially adapted to land vehicles, using internal positioning means comprising inertial navigation means, e.g. azimuth detector
    • G05D1/0272 Control of position or course in two dimensions specially adapted to land vehicles, using internal positioning means comprising means for registering the travel distance, e.g. revolutions of wheels

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention relates to an autonomous positioning and navigation method for a mobile robot, which comprises the following steps: constructing a grid map based on the Gmapping algorithm by combining data of a laser radar and data of a wheel speed meter; fusing data of an inertial measurement unit and data of the wheel speed meter based on the AMCL algorithm to obtain an AMCL positioning result; optimizing the AMCL positioning result through scan matching with the laser radar data to realize global positioning navigation; and performing error correction on the current pose information of the mobile robot according to vision-assisted positioning so as to obtain the accurate position of a target station and realize local positioning navigation. The invention combines global positioning navigation with local positioning navigation and overcomes the large positioning deviation of the mobile robot at the target station.

Description

Autonomous positioning and navigation method for mobile robot
Technical Field
The invention relates to the technical field of measurement navigation, in particular to an autonomous positioning and navigation method for a mobile robot.
Background
Robot technology cannot be ignored on the road of scientific and technological development. With continuous social progress and economic development, the demand for robots that can take over high-intensity, high-risk and tedious work keeps rising. As can be seen from the existing use of robots for transportation, most of them are mobile robots. According to the different requirements of the working environment, a robot may use wheels, tracks or the currently popular quadruped platforms to adapt to different environments, and different mechanical structures can be mounted on the mobile robot to complete the corresponding functions.
During transportation, a mobile robot can construct a map and localize itself at the same time, so map construction and autonomous positioning are generally studied together. This is called simultaneous localization and mapping, also known as SLAM technology; SLAM is widely applied to automatic driving and is the basis for making mobile robots intelligent.
However, the conventional Adaptive Monte Carlo Localization (AMCL) algorithm is limited in localization precision, and localization at the target point is neither accurate nor stable enough. On the market, sensors such as ultra-wideband (UWB) modules and depth cameras are mostly fused to increase accuracy, but their drawbacks are obvious: the cost is high, the data are noisy, and processing depth-camera data in particular places a heavy burden on the CPU, which runs against the original goal of low cost and low computing power.
Disclosure of Invention
The invention aims to provide a low-cost autonomous positioning and navigation method that enables a mobile robot to position and navigate quickly and accurately on its own.
In order to achieve the above object, the embodiments of the present invention provide the following technical solutions:
An autonomous positioning and navigation method for a mobile robot, the mobile robot comprising a chassis core controller and a positioning navigation core controller, wherein the chassis core controller is connected with an inertial measurement unit and a wheel speed meter, and the positioning navigation core controller is connected with a laser radar and a camera;
the autonomous positioning navigation method comprises the following steps:
s1, constructing a grid map based on a Gmapping algorithm by combining data of a laser radar and data of a wheel speed meter, and simultaneously establishing a map coordinate system corresponding to the grid map; receiving target station information issued by a remote control host, and analyzing the position of the target station information in a map coordinate system;
s2, fusing the data of the inertia measurement unit and the data of the wheel speed meter based on an AMCL algorithm to obtain an AMCL positioning result; scanning, matching and optimizing the AMCL positioning result by using data of the laser radar to obtain pose information of the mobile robot, and calculating a path plan from the current position of the mobile robot to a target station to realize global positioning navigation;
s3, after the mobile robot reaches a target station according to the path plan, acquiring two-dimensional code information arranged on the target station through a camera, and fusing data of the laser radar and data of the camera to obtain vision auxiliary positioning; and performing error correction on the current pose information of the mobile robot according to vision-assisted positioning so as to obtain the accurate position of the target station and realize local positioning navigation.
The step of constructing the grid map based on the Gmapping algorithm by combining the data of the laser radar and the data of the wheel speed meter comprises the following steps:
the Gmapping algorithm is based on the RBPF particle filter and performs localization and mapping at the same time; according to the Bayesian rule, the joint posterior is factorized as:
p(x_{1:t}, m | z_{1:t}, u_{1:t-1}) = p(m | x_{1:t}, z_{1:t}) * p(x_{1:t} | z_{1:t}, u_{1:t-1})
where p(x_{1:t}, m | z_{1:t}, u_{1:t-1}) represents the estimated pose of the mobile robot, i.e. the posterior estimate at the current moment; x_{1:t} represents the trajectory; m represents the probability that a laser radar scan end point lies on the grid map; u_{1:t-1} are the velocity observations of the mobile robot; z_{1:t} are the point cloud observations of the laser radar; p(x_{1:t} | z_{1:t}, u_{1:t-1}) represents the estimated trajectory of the mobile robot, i.e. the posterior trajectory at the current moment; and p(m | x_{1:t}, z_{1:t}) represents the estimate of the grid-map probability under each observation, i.e. constructing the map with the trajectory of the mobile robot and the sensor data (data of the laser radar and data of the wheel speed meter) known.
Particle depletion is mitigated by reducing the number of resampling operations, and whether to resample is decided by evaluating the dispersion of all particle weights with the following formula:
N_eff = 1 / Σ_{i=1}^{N} (w_i)^2
where i denotes the index of a sampled particle, N denotes the total number of particles, w_i represents the weight of particle i, and N_eff represents the dispersion of all particle weights; when N_eff is smaller than a set threshold, resampling is performed, otherwise resampling is not needed.
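As an illustration of this resampling criterion, the following minimal Python sketch computes N_eff from a set of particle weights and applies the threshold test; the default threshold of N/2 is an assumed value commonly used in practice, not one fixed by this method.

```python
import numpy as np

def effective_particle_number(weights):
    """N_eff = 1 / sum_i(w_i^2) for normalized particle weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # make sure the weights are normalized
    return 1.0 / np.sum(w ** 2)

def needs_resampling(weights, threshold=None):
    """Resample only when N_eff drops below the threshold (default N/2, an assumed value)."""
    n = len(weights)
    if threshold is None:
        threshold = n / 2.0
    return effective_particle_number(weights) < threshold

# Example: weights concentrated on a few particles -> low N_eff -> resample.
uniform = [0.25, 0.25, 0.25, 0.25]
skewed  = [0.97, 0.01, 0.01, 0.01]
print(effective_particle_number(uniform))  # 4.0
print(effective_particle_number(skewed))   # ~1.06
print(needs_resampling(skewed))            # True
```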
Before the AMCL positioning result is optimized through scan matching with the data of the laser radar, the distortion of the laser radar point cloud data must be corrected, specifically as follows:
the measurement time of each end point within the current scanning period is calculated; the horizontal angle of a laser beam end point is proportional to its measurement time:
T_p = (θ_p / 360°) * (T_n - T_o)
where the scanning period is 360 degrees, T_p is the measurement time of the current end point relative to the first end point, θ_p is the angle of the current end point relative to the first end point, T_n is the timestamp of the current frame of point cloud data, and T_o is the timestamp of the previous frame of point cloud data;
the point cloud data are converted to the radar coordinate system:
X = D_p * cos α,  Y = D_p * sin α
where (X, Y) are the coordinates of an end point in the radar coordinate system, D_p is the depth information measured by the laser radar, namely the distance from the origin of the laser beam to the end point, and α is the rotation angle of the laser beam with respect to the X axis of the radar coordinate system;
the angle proportion of the current end point relative to the end scanning point is calculated:
pro = (α_e - α) / (α_e - α_s)
where pro is the distortion degree of the current end point relative to the end scanning point; α is the rotation angle of the current end point with respect to the X axis of the radar coordinate system, α_s denotes the start scanning point, and α_e denotes the end scanning point;
the conversion relation from the current end point to the radar coordinate system is:
p_c = Slerp(pro, Q) * p_o + pro * d_t
where p_c denotes the corrected end point, whose coordinates in the radar coordinate system are expressed by (X, Y); p_o is the laser radar point cloud data before correction; Slerp(pro, Q) is the spherical linear interpolation of a quaternion; pro is the angle proportion of the current end point relative to the end scanning point; Q is the quaternion generated by the rotation of the mobile robot during the laser radar scan; and d_t is the displacement vector generated by the mobile robot during the laser radar scan.
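The correction described above can be sketched as follows. This is a minimal illustration rather than the claimed implementation: it assumes one frame stores the beam angles and depths in scan order and uses SciPy's quaternion Slerp for the spherical interpolation; the function and variable names are illustrative.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def deskew_scan(angles_deg, depths, q_motion, d_t):
    """Correct one frame of lidar end points for robot motion during the sweep.

    angles_deg : beam angles in the radar frame, in scan order (degrees)
    depths     : measured distances D_p for each beam (m)
    q_motion   : quaternion [x, y, z, w] of the rotation the robot made during the sweep
    d_t        : displacement vector [dx, dy, dz] the robot made during the sweep (m)
    """
    angles = np.radians(np.asarray(angles_deg, dtype=float))
    depths = np.asarray(depths, dtype=float)
    # End points in the radar coordinate system: X = D_p*cos(a), Y = D_p*sin(a)
    pts = np.stack([depths * np.cos(angles),
                    depths * np.sin(angles),
                    np.zeros_like(depths)], axis=1)

    # Angle proportion of each end point relative to the end scanning point:
    # pro -> 0 near the end of the sweep, so points scanned late are barely corrected.
    a_start, a_end = angles[0], angles[-1]
    pro = (a_end - angles) / (a_end - a_start)

    # Slerp between "no rotation" and the full sweep rotation, then add pro * d_t.
    key_rots = Rotation.from_quat([[0.0, 0.0, 0.0, 1.0], list(q_motion)])
    slerp = Slerp([0.0, 1.0], key_rots)
    corrected = slerp(pro).apply(pts) + pro[:, None] * np.asarray(d_t, dtype=float)
    return corrected

# Example: one sweep while the robot creeps 2 cm forward and yaws 1 degree.
angles = np.linspace(0.0, 359.0, 360)
depths = np.full(360, 2.0)
q = Rotation.from_euler("z", 1.0, degrees=True).as_quat()
print(deskew_scan(angles, depths, q, [0.02, 0.0, 0.0])[:2])
```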
The step of fusing the data of the inertial measurement unit and the data of the wheel speed meter based on the AMCL algorithm to obtain the AMCL positioning result comprises the following steps:
in the robot coordinate system, the pose of the first wheel of the mobile robot is obtained:
[equation image]
where Ovx and Ovy respectively represent the moving speeds of the mobile robot in the X-axis and Y-axis directions in the robot coordinate system; ω represents the angular velocity of the mobile robot rotating about the Z axis; Ov1 is the speed of the first wheel; Ov1X and Ov1Y are the components of Ov1 in the X-axis and Y-axis directions, respectively; d1 is 1/2 of the wheelbase of the mobile chassis, and d2 is 1/2 of the track width of the mobile chassis;
the angular velocities of the four wheels are:
[equation image]
where ω1, ω2, ω3 and ω4 are the angular velocities of the four wheels, respectively;
the kinematic equation of the mobile robot in the robot coordinate system is:
[equation image]
using L1 to L4 to represent the mileage covered by the four wheels at time t and R the wheel radius, we obtain:
[equation image]
the rotation angle θ of the mobile robot about the Z axis is:
[equation image]
the total mileage L and the rotation angle θ of the mobile robot are unified as the function u(t), and the model of the wheel speed meter is:
x(t) = x(t-1) + u(t)
where x(t) is the wheel speed meter model at time t; x(t-1) is the wheel speed meter model at time t-1; and u(t) is the function unifying the total mileage L and the rotation angle θ of the mobile robot.
Fusing the data of the inertial measurement unit and the data of the wheel speed meter based on the AMCL algorithm to obtain the AMCL positioning result requires converting the pose of the mobile robot to the map coordinate system, specifically:
the transformation matrix T that projects the pose of the mobile robot from the robot coordinate system to the map coordinate system is:
T = [ cos θ   -sin θ   dx
      sin θ    cos θ   dy
        0        0      1 ]
the process of changing the coordinates of the mobile robot in the map coordinate system from (x1, y1) to (x2, y2) is:
[x2, y2, 1]^T = T * [x1, y1, 1]^T
when the mobile robot moves from the first position to the n-th position in the map coordinate system, the pose of the mobile robot is:
x_n = x_1 + Σ_{i=2}^{n} l_i * cos θ_i
y_n = y_1 + Σ_{i=2}^{n} l_i * sin θ_i
where (x_n, y_n) are the coordinates of the mobile robot at the n-th position and (x_1, y_1) the coordinates at the 1st position; i denotes the i-th position, l_i denotes the displacement when reaching the i-th position, and θ_i denotes the rotation angle when reaching the i-th position.
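A minimal sketch of this dead-reckoning chain is given below. It assumes each motion step is reported as an increment (dx, dy, dθ) in the robot coordinate system, which matches the role of the transformation matrix T described above; the function names are illustrative rather than taken from the patent.

```python
import numpy as np

def transform_matrix(theta, dx, dy):
    """Homogeneous transform T that maps robot-frame increments into the map frame."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, dx],
                     [s,  c, dy],
                     [0,  0, 1.0]])

def dead_reckon(start_pose, increments):
    """Accumulate (dx, dy, dtheta) increments, each expressed in the robot frame."""
    x, y, theta = start_pose
    for dx, dy, dtheta in increments:
        # rotate the robot-frame increment into the map frame, then translate
        p = transform_matrix(theta, x, y) @ np.array([dx, dy, 1.0])
        x, y = p[0], p[1]
        theta += dtheta
    return x, y, theta

# Example: drive 1 m forward, turn 90 degrees, drive 1 m forward again.
print(dead_reckon((0.0, 0.0, 0.0),
                  [(1.0, 0.0, np.pi / 2), (1.0, 0.0, 0.0)]))  # ~(1.0, 1.0, pi/2)
```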
The step of performing error correction on the current pose information of the mobile robot according to the vision-assisted positioning so as to obtain the accurate position of the target station comprises the following steps:
using the Scan-Match method, the particles to which AMCL has converged are taken as the initial value of the scan match, and the Gauss-Newton method is used to iteratively align the scan end points with the grid map:
ξ = ξ' + Δξ
Δξ* = argmin_{Δξ} Σ_i [ 1 - M(S_i(ξ' + Δξ)) ]^2
where ξ represents the real pose of the mobile robot; ξ' represents the estimated pose of the mobile robot output by AMCL; Δξ represents the deviation between the real pose and the estimated pose; S_i(ξ') denotes the end-point coordinates of the radar laser scan when the mobile robot is at ξ' in the map coordinate system; and M(S_i(ξ')) represents the probability of the scanned end-point coordinates on the grid map;
the coordinates of the laser radar scan end points in the map coordinate system are:
[equation image]
where (x_m, y_m) represent the position coordinates of the mobile robot in the map coordinate system; (l_x, l_y) represent the offset of the origin of the radar coordinate system relative to the origin of the robot coordinate system; D represents the point cloud depth information; θ represents the rotation angle of the mobile robot in the map coordinate system; and β represents the angle of the origin of the radar coordinate system relative to the origin of the robot coordinate system;
the optimal solution of the pose ξ' is obtained through multiple iterations so that the sum of the squared residuals of the original model ξ is minimal, as follows:
ξ' = argmin_ξ Σ_i [ 1 - M(S_i(ξ)) ]^2
Using the bilinear interpolation method, interpolation is first performed in the Y direction:
f(R1) = ((y2 - y) / (y2 - y1)) * f(P1) + ((y - y1) / (y2 - y1)) * f(P3)
f(R2) = ((y2 - y) / (y2 - y1)) * f(P2) + ((y - y1) / (y2 - y1)) * f(P4)
then interpolation is performed in the X direction to obtain the probability of point P on the grid map:
M(P) = ((x2 - x) / (x2 - x1)) * f(R1) + ((x - x1) / (x2 - x1)) * f(R2)
The grid map consists of square grid cells of equal area, each with four vertices, where (x1, y1) are the coordinates of the lower-left vertex of a cell and (x2, y2) the coordinates of the upper-right vertex; x and y are the coordinates of any point P inside the cell; f(R1) and f(R2) represent the interpolation results in the Y direction; f(P1), f(P2), f(P3) and f(P4) represent the values at the four vertices of the cell, with P1 and P2 on the lower edge and P3 and P4 on the upper edge; and M(P) represents the probability of point P on the grid map.
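The bilinear lookup M(P) and the residuals 1 - M(S_i(ξ)) of the scan-matching objective can be sketched as follows. The occupancy grid is assumed to be stored as a 2-D array of probabilities with a fixed resolution; this storage layout and the helper names are assumptions for illustration only.

```python
import numpy as np

def map_probability(grid, resolution, px, py):
    """Bilinear interpolation of the grid-map probability at a metric point (px, py).

    grid       : 2-D array, grid[iy, ix] is the occupancy probability of a cell
    resolution : cell size in metres (e.g. 0.05 for the 5 cm grid used here)
    """
    gx, gy = px / resolution, py / resolution          # continuous cell coordinates
    x1, y1 = int(np.floor(gx)), int(np.floor(gy))      # lower-left vertex
    x2, y2 = x1 + 1, y1 + 1                            # upper-right vertex
    tx, ty = gx - x1, gy - y1                          # position inside the cell, in [0, 1)

    f11, f21 = grid[y1, x1], grid[y1, x2]              # lower-edge vertices
    f12, f22 = grid[y2, x1], grid[y2, x2]              # upper-edge vertices

    r1 = (1.0 - ty) * f11 + ty * f12                   # interpolate in Y at x = x1
    r2 = (1.0 - ty) * f21 + ty * f22                   # interpolate in Y at x = x2
    return (1.0 - tx) * r1 + tx * r2                   # then interpolate in X

def scan_match_residuals(grid, resolution, endpoints):
    """Residuals 1 - M(S_i) used in the Gauss-Newton scan-matching objective."""
    return np.array([1.0 - map_probability(grid, resolution, x, y)
                     for x, y in endpoints])

# Example: a 3x3 patch with a single high-probability cell.
grid = np.array([[0.1, 0.1, 0.1],
                 [0.1, 0.9, 0.1],
                 [0.1, 0.1, 0.1]])
print(map_probability(grid, 0.05, 0.055, 0.055))        # between 0.1 and 0.9
print(scan_match_residuals(grid, 0.05, [(0.05, 0.05)]))  # [0.1] at the occupied cell
```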
After the step of performing error correction on the current pose information of the mobile robot according to the vision-assisted positioning to obtain the accurate position of the target station, the method further comprises: starting three PID threads to control the two-dimensional-code tracking action, specifically:
abstracting the tracking motion of the mobile robot into three closed-loop controls which are a yaw correction ring, a distance keeping ring and a transverse deviation ring respectively;
the output quantity of the Yaw correction ring is the rotating speed of a Yaw axis, the output quantity of the distance keeping ring is the speed of the mobile robot in the X-axis direction under a map coordinate system, and the output quantity of the transverse deviation ring is the speed of the mobile robot in the Y-axis direction under the map coordinate system;
packaging the speed in the X-axis direction, the speed in the Y-axis direction and the rotating speed of the Yaw axis into the same frame of data and then sending the same frame of data to the mobile robot, so that the mobile robot can perform local positioning navigation according to each frame of data.
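A minimal sketch of the three closed loops is shown below. The gains, the control period and the way the tag observation is expressed (yaw error, distance error, lateral error) are illustrative assumptions; only the structure, three PID outputs packed into one (vx, vy, wz) frame, follows the text.

```python
from dataclasses import dataclass, field

@dataclass
class Pid:
    kp: float
    ki: float = 0.0
    kd: float = 0.0
    _integral: float = field(default=0.0, repr=False)
    _prev_err: float = field(default=0.0, repr=False)

    def step(self, error, dt):
        self._integral += error * dt
        derivative = (error - self._prev_err) / dt if dt > 0 else 0.0
        self._prev_err = error
        return self.kp * error + self.ki * self._integral + self.kd * derivative

# Three closed loops: yaw correction, distance keeping, lateral deviation.
yaw_loop  = Pid(kp=1.2, kd=0.05)   # output: rotation speed about the Yaw axis
dist_loop = Pid(kp=0.8, ki=0.02)   # output: speed along the X axis
side_loop = Pid(kp=0.8, ki=0.02)   # output: speed along the Y axis

def track_qr_code(yaw_err, dist_err, side_err, dt=0.02):
    """Pack the three loop outputs into one command frame (vx, vy, wz)."""
    wz = yaw_loop.step(yaw_err, dt)
    vx = dist_loop.step(dist_err, dt)
    vy = side_loop.step(side_err, dt)
    return vx, vy, wz   # sent to the chassis as a single frame of data

# Example: the tag is 0.30 m too far, 0.05 m to the side, 2 degrees rotated.
print(track_qr_code(yaw_err=0.035, dist_err=0.30, side_err=0.05))
```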
Compared with the prior art, the invention has the beneficial effects that:
1. The grid map is constructed with the Gmapping algorithm, and global positioning navigation is realized with the Adaptive Monte Carlo Localization (AMCL) algorithm; local positioning navigation is corrected with vision-assisted positioning, and autonomous path planning is realized based on DWA. To improve the robustness of positioning, the sensor data are optimized and a mathematical model of each sensor is established; on this basis, a multi-sensor fusion scheme is proposed to improve positioning accuracy, laying the groundwork for the subsequent multi-sensor data fusion.
2. To meet the requirement of efficient and accurate positioning, the adaptive Monte Carlo localization result is refined with the aid of extended Kalman filtering and scan-matching optimization.
3. In order to meet the high-precision local positioning and navigation requirements of the mobile robot at a target station, the point cloud data of the laser radar is mapped into the camera image by calibrating external parameters of the camera and the laser radar, so that the image of the monocular camera has depth information in a specific area. And then, the position of the two-dimensional code is identified to carry out accurate and quick positioning, and the global positioning navigation and the local positioning navigation are combined, so that the defect of large positioning deviation of the mobile robot at a target station is overcome.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a schematic structural diagram of an autonomous positioning and navigation system of a mobile robot according to the present invention;
FIG. 2 is a schematic diagram of a mobile robot according to the present invention;
FIG. 3 is a schematic flow chart of the Gmapping algorithm of the present invention;
FIG. 4 is a schematic view of the flow of global positioning navigation and local positioning navigation according to the present invention;
FIG. 5 is a comparison graph of the point cloud data correction result of the present invention, where a in FIG. 5 is the lidar point cloud data before correction, and b in FIG. 5 is the lidar point cloud data after correction;
FIG. 6 is a schematic view of a mobile chassis model of the mobile robot of the present invention;
FIG. 7 is a schematic representation of several consecutive poses of the present invention in a robot motion process under a map coordinate system;
FIG. 8 is a schematic view of a multi-sensor fusion model of the present invention;
FIG. 9 is a block diagram of a specific framework for sensor data fusion according to the present invention;
fig. 10 is a schematic diagram illustrating alignment effects of point cloud data according to the present invention, where a in fig. 10 is the point cloud data before alignment, and b in fig. 10 is the point cloud data after alignment;
FIG. 11 is a schematic diagram of the PID thread controlling the mobile robot to perform local positioning navigation according to the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures. Also, in the description of the present invention, the terms "first", "second", and the like are used for distinguishing between descriptions and not necessarily for describing a relative importance or implying any actual relationship or order between such entities or operations.
Referring to fig. 1, the autonomous positioning navigation system includes a mobile robot and a remote control host. The remote control host can be a computer running Ubuntu 18.04 and the Melodic version of the ROS system; it communicates with the mobile robot over WIFI within the same local area network, is mainly used for remotely controlling the mobile robot, and shares files through the NFS service to improve program development efficiency.
The mobile robot should meet several requirements: (1) dynamic environment information such as high-precision pose information (comprising position information and attitude information) can be obtained in real time; (2) it can work stably and efficiently for a long time without human intervention; (3) it can avoid obstacles in time and does not endanger the safety of people or property; (4) it can be controlled wirelessly by the remote control host; (5) it can automatically identify the target station and execute the corresponding tasks.
The working environment of the mobile robot in this scheme is mainly indoors, where the ground is mostly flat and the obstacles are mostly material racks, walls and moving people. The mobile chassis of the mobile robot therefore needs considerable flexibility in narrow spaces, such as turning in place and translating. Compared with the classical two-wheel differential drive, mecanum wheels provide an additional degree of freedom in the Y direction, allow the mobile robot to move flexibly in narrow aisles, and can quickly correct its own pose to reach the target pose, so this scheme selects mecanum wheels as the driving mode of the mobile chassis.
According to the performance requirements of the mobile robot, referring to fig. 2, the core of the mobile robot consists of a real-time control unit and a positioning navigation unit. The real-time control unit is governed by the chassis core controller and is mainly responsible for controlling the mobile chassis of the mobile robot, driving the motors and other tasks with strict real-time requirements. The positioning navigation unit is governed by the positioning navigation core controller and is mainly responsible for computation-heavy tasks such as image processing and pose estimation.
Referring to fig. 2, the mobile robot includes a real-time control unit and a positioning navigation unit, the real-time control unit includes a chassis core controller, the chassis core controller is connected to an inertia measurement unit, a wheel speed meter and a motor, the positioning navigation unit includes a positioning navigation core controller, and the positioning navigation core controller is connected to a laser radar and a camera.
The data processed by the chassis core controller are relatively simple: it executes the corresponding motion according to the instructions of the remote control host. This process requires driving the motors at high frequency and reading the wheel speed meter in real time to regulate the motor speed, so a controller with strong real-time capability is needed. This embodiment uses the STM32F103RCT6 microcontroller as the chassis core controller; it has a main frequency of 72 MHz, 64 pins and fairly rich peripherals, and together with the FreeRTOS multitasking system it can process multiple tasks in real time, which meets the engineering requirements.
Compared with the chassis core controller, the data processed by the positioning navigation core controller are more complex; for example, high-bandwidth data such as images need to be processed in real time. However, for safety the moving speed of the mobile robot indoors does not exceed 2.1 m/s, so the frequency of positioning and issuing navigation points does not need to be very high. Considering cost and power consumption, this scheme selects an SoC chip with an ARM architecture. Because positioning navigation and image processing involve multi-thread, multi-task cooperation, and the more cores an SoC chip has the stronger its multitasking capacity, this embodiment uses a high-performance multi-core SoC chip, the RK3399, to control the positioning navigation unit of the mobile robot. This SoC has two A72 big cores and four A53 little cores, supports 4 GB of LPDDR4 RAM, and has a main frequency of 2.0 GHz.
To move autonomously, the mobile robot must be capable of autonomous positioning and navigation. This scheme builds the map based on the Gmapping algorithm, uses the data of the inertial measurement unit, the wheel speed meter and the laser radar, and realizes global navigation positioning through the Adaptive Monte Carlo Localization (AMCL) algorithm, so that the mobile robot moves from its current position to a position near the target station. Then the data of the inertial measurement unit, the wheel speed meter and the laser radar are fused with the two-dimensional code information acquired by the camera for vision-assisted positioning, and local navigation positioning is realized with the dynamic window algorithm (DWA), so that the pose of the mobile robot near the target station is corrected and optimized to the accurate position, completing the positioning at the target station.
The invention is realized through the following technical scheme. The autonomous positioning navigation method of the mobile robot must accomplish: map construction, autonomous positioning and navigation (path planning), where map construction and autonomous positioning can be performed simultaneously. Map construction is the process by which the mobile robot turns an unknown environment into a known one, i.e. the step of recognizing the environment; autonomous positioning means that the mobile robot scans the surrounding environment in the known environment and matches it against a relative or absolute coordinate system to obtain its pose in that coordinate system, and the quality of the positioning result directly affects the navigation effect; navigation means that the mobile robot autonomously plans an optimal path to reach the target station according to the issued target station information.
Specifically, the autonomous positioning and navigation method for the mobile robot comprises the following steps:
and S1, constructing a grid map based on the Gmapping algorithm by combining data of the laser radar and data of the wheel speed meter, and simultaneously formulating a map coordinate system corresponding to the grid map. And receiving target station information issued by the remote control host, and analyzing the position of the target station information in a map coordinate system.
This scheme adopts the Gmapping algorithm as the map construction scheme of the mobile robot. Gmapping is one of the most common open-source algorithms for simultaneous localization and mapping (SLAM); it can build an indoor map in real time, requires little computation when building a small-scene map, and offers high accuracy. The ROS system of the remote control host provides a support package for this open-source algorithm and exposes a complete calling interface to developers. The Gmapping algorithm does not depend on a laser radar with a high refresh rate, but additionally uses the wheel speed meter data of the mobile robot. Referring to fig. 3, the data of the laser radar and the data of the wheel speed meter are input into the Gmapping algorithm to obtain the indoor grid map.
Gmapping is based on the RBPF particle filter and can effectively solve the problem of simultaneous localization and mapping. According to the Bayesian rule:
p(x_{1:t}, m | z_{1:t}, u_{1:t-1}) = p(m | x_{1:t}, z_{1:t}) * p(x_{1:t} | z_{1:t}, u_{1:t-1})
where p(x_{1:t}, m | z_{1:t}, u_{1:t-1}) represents the estimated pose of the mobile robot, i.e. the posterior estimate at the current moment; x_{1:t} represents the trajectory; m represents the probability that a laser radar scan end point lies on the grid map; u_{1:t-1} are the velocity observations of the mobile robot; z_{1:t} are the point cloud observations of the laser radar; p(x_{1:t} | z_{1:t}, u_{1:t-1}) represents the estimated trajectory of the mobile robot, i.e. the posterior trajectory at the current moment; and p(m | x_{1:t}, z_{1:t}) represents constructing the map with the trajectory of the mobile robot and the sensor data (data of the laser radar and data of the wheel speed meter) known.
Therefore, the RBPF particle filter decomposes localization and mapping so that the two can be carried out simultaneously. Its drawbacks are the large number of particles used and the frequent resampling, which cause a large computational load and memory consumption, particle degeneracy and a loss of particle diversity.
The matching accuracy of the laser radar is much higher than that of the wheel speed meter and its matching variance is much smaller. If the RBPF particle filter adopts laser radar matching as the proposal distribution, a relatively concentrated distribution is obtained, the sampling range can be limited to a relatively small area, and the number of particles is reduced. Particle depletion can also be mitigated by reducing the number of resampling operations, and whether to resample is decided by directly evaluating the dispersion of all particle weights according to the following formula:
N_eff = 1 / Σ_{i=1}^{N} (w_i)^2
where i denotes the index of a sampled particle, N denotes the total number of particles, w_i represents the weight of particle i, and N_eff indicates the dispersion of all particle weights.
When N_eff is smaller than a certain threshold, the particle weights have become too dispersed and resampling is performed; otherwise no resampling is done. However, the RAM occupation and particle iteration efficiency also need to be considered when constructing the grid map. For example, if the grid resolution of a map of a 100 m x 100 m space is 1 cm and each cell occupies one byte, the map carried by each particle is about 100 MB and 100 particles occupy roughly 10 GB of RAM, which is a very large memory footprint. Since the usable memory of the RK3399 processor adopted by this system is only 3.8 GB, the map precision must not be set too high, otherwise memory overflow and eventually a system crash would result. Therefore, to save memory and improve particle iteration efficiency, the grid resolution of the constructed map is set to 5 cm.
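The memory estimate above can be reproduced with a few lines, using the figures given in the text (1 byte per cell over a 100 m x 100 m area):

```python
def map_memory_mb(side_m=100.0, resolution_m=0.01, bytes_per_cell=1):
    """Memory of the map carried by one particle, in MB."""
    cells = (side_m / resolution_m) ** 2
    return cells * bytes_per_cell / 1e6

print(map_memory_mb(resolution_m=0.01))        # 100.0 MB per particle at 1 cm
print(map_memory_mb(resolution_m=0.01) * 100)  # 10000 MB, i.e. ~10 GB for 100 particles
print(map_memory_mb(resolution_m=0.05))        # 4.0 MB per particle at the chosen 5 cm
```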
Before the path of the mobile robot is planned, the remote control host sends pose information of a target station on a grid map to the mobile robot through WIFI, and then the mobile robot plans an optimal path according to the current pose of the mobile robot and the pose of the target station, so that the mobile robot moves to the position near the target station from the current position.
Step S2: fusing the data of the inertia measurement unit and the data of the wheel speed meter based on an AMCL algorithm to obtain an AMCL positioning result; and scanning, matching and optimizing the AMCL positioning result by using the data of the laser radar to obtain the pose information of the mobile robot, and calculating the path planning from the current position of the mobile robot to a target station to realize global positioning navigation.
Referring to fig. 4, autonomous positioning and navigation complement each other: navigation takes place during positioning and positioning takes place during navigation, so positioning and navigation are described together in this step. They comprise global positioning navigation and local positioning navigation; this step realizes global positioning navigation, and step S3 realizes local positioning navigation.
All of the positioning navigation starts from fusing the data of the inertial measurement unit and the wheel speed meter; a positioning result is then obtained with the Adaptive Monte Carlo Localization (AMCL) algorithm, and the data of the laser radar are used to optimize this positioning result to obtain the current pose information of the mobile robot and the path plan from the current position to the target station, so that the mobile robot reaches the vicinity of the target station according to the path plan and global positioning navigation is realized.
The wheel speed meter, the inertia measurement unit and the laser radar are used as sensors for positioning the mobile robot, and different sensors have different data types and can be converted into parameters required by positioning after processing. The data of a plurality of sensors have different confidence degrees, and in order to obtain higher positioning accuracy and robustness after fusion, a reasonable fusion positioning scheme needs to be designed according to the selected sensor data. It should be noted that, for convenience of description, the inertial measurement unit, the wheel speed meter, the lidar and the camera in fig. 2 are collectively referred to as a sensor, and data of the inertial measurement unit, the wheel speed meter, the lidar and the camera is referred to as sensor data.
When the positioning is carried out in an indoor environment, a plurality of adverse factors exist, the requirement on positioning accuracy is high, the single sensor data is difficult to meet the positioning requirement of the indoor mobile robot due to the limitation of accuracy, positioning range stability and the like, and a good positioning effect can be achieved only by fusing a plurality of sensor data. The inertial measurement unit and the wheel speed meter are used as auxiliary positioning sensors, the relative pose of the mobile robot can be obtained through integration, the real-time performance is good, but the error generated at the previous moment can be accumulated to the next integration, so that the accumulated error exists, the long-time working precision cannot be guaranteed, and the inertial measurement unit and the wheel speed meter are required to be matched with radar laser for use.
To achieve better sensor data fusion, preprocessing of the sensor data is essential. The laser radar used in this scheme obtains data based on the triangulation ranging method: according to the optical characteristics of the laser beam, the beam is reflected after hitting an object, the reflected beam returns to the linear CCD of the laser radar and forms a reflection angle, and the distance between the laser radar and the measured object, i.e. the depth information, is then calculated by the similar-triangle theorem.
To acquire one frame of point cloud data, the laser probe of the radar must rotate through 360 degrees; while the laser radar is rotating, the motion of the mobile robot distorts the point cloud data to a certain extent, and the degree of distortion is proportional to the motion speed of the mobile robot. Large distortion reduces the robustness of the system and affects positioning accuracy, so the smaller the distortion, the better the positioning and real-time path planning of the mobile robot.
The point cloud data are distorted for the following reason: when the laser radar is stationary relative to an obstacle, the measured relative distance between the obstacle and the laser radar is not distorted; but the radar laser probe must scan a full circle (360 degrees) to output one complete frame of point cloud data, so while it rotates the origin of the laser beam keeps changing and the point cloud data become distorted. Here, since the laser beam emitted by the laser radar is reflected when it reaches an object, the point from which the laser beam is emitted is called the origin and the point where the laser beam lands is called the end point.
Therefore, the distortion of the point cloud data needs to be corrected. First, the origin of the laser radar rotates through 0-360 degrees in sequence; the measurement time of each frame of point cloud data can be calculated from the received point cloud timestamps, and the timestamp of each end-point measurement can then be calculated from the proportion of the rotation angle it occupies. The specific steps are as follows:
(1) Calculate the measurement time of each end point within the current scanning period (360 degrees); the horizontal angle of a laser beam end point is proportional to its measurement time:
T_p = (θ_p / 360°) * (T_n - T_o)
where T_p is the measurement time of the current end point relative to the first end point, θ_p is the angle of the current end point relative to the first end point, T_n is the timestamp of the current frame of point cloud data, and T_o is the timestamp of the previous frame of point cloud data.
For example, since the scanning period is 360 degrees, a frame of point cloud data is generated only when a full period has been scanned; if one end point is generated per degree, a frame contains 360 end points. If the current end point is the 30th end point, T_p is the measurement time of the 30th end point, θ_p is the angle of the 30th end point relative to the 1st end point, i.e. 30 degrees, and T_n - T_o is the time of scanning one cycle, i.e. the time required to scan 360 degrees.
(2) Convert the point cloud data to the radar coordinate system:
X = D_p * cos α,  Y = D_p * sin α
where (X, Y) are the coordinates of an end point in the radar coordinate system, D_p is the depth measured by the laser radar, i.e. the distance from the origin of the laser beam to the end point, and α is the rotation angle of the laser beam with respect to the X axis of the radar coordinate system.
(3) Calculate the proportion of the measurement time of the current end point within the whole scanning period; for the laser radar, the angle proportion is equivalent to the time proportion, and the angle proportion relative to the end scanning point is:
pro = (α_e - α) / (α_e - α_s)
where pro is the distortion degree of the current end point relative to the end scanning point, i.e. the closer to the end scanning point, the smaller the distortion; α is the rotation angle of the current end point with respect to the X axis of the radar coordinate system, used because the laser radar scan does not necessarily start from 0 degrees; α_s denotes the start scanning point and α_e denotes the end scanning point.
The end scanning point is the end point at 360 degrees. The end points at 0 degrees (the start scanning point) and at 360 degrees (the end scanning point) are the closest to each other, so the closer a laser beam is to 360 degrees, the smaller the distortion it produces; a beam around 180 degrees is measured far from the end scanning point, so its distortion is large. However, in the radar coordinate system, if the Y axis is preset as 0 degrees, a frame of point cloud data may not start scanning exactly from the Y axis, so a start scanning point α_s is introduced; in practice α_s is not necessarily equal to 0 degrees or 360 degrees.
(4) Calculate the motion of the current end point relative to the end scanning point; the conversion relation from the current end point to the radar coordinate system is:
p_c = Slerp(pro, Q) * p_o + pro * d_t
where p_c denotes the corrected end point, whose coordinates in the radar coordinate system are expressed by (X, Y); p_o is the laser radar point cloud data before correction; Slerp(pro, Q) is the spherical linear interpolation of a quaternion; pro is the angle proportion of the current end point relative to the end scanning point, i.e. the rotation proportion; Q is the quaternion generated by the rotation of the mobile robot during the laser radar scan; and d_t is the displacement vector generated by the mobile robot during the laser radar scan.
In the above scheme, it is still assumed that a frame of point cloud data generated by scanning a laser radar in one period (360 degrees) has 360 end points, and the above calculation needs to be performed once for each end point, so that the frame of point cloud data can be corrected. As shown in fig. 5, a in fig. 5 indicates lidar point cloud data before correction, and b in fig. 5 indicates lidar point cloud data after correction.
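The worked example for the 30th end point in step (1) can be checked numerically; the 10 Hz frame rate (0.1 s period) assumed below is only for illustration:

```python
def endpoint_time_offset(angle_deg, t_curr, t_prev):
    """T_p = (angle / 360) * (T_n - T_o): measurement time of an end point within the sweep."""
    return (angle_deg / 360.0) * (t_curr - t_prev)

# 30th end point of a frame, assuming a 10 Hz lidar (frame period 0.1 s):
print(endpoint_time_offset(30.0, t_curr=0.1, t_prev=0.0))  # 0.00833... s after the first end point
```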
Fig. 6 shows a schematic diagram of the mobile chassis model of the mobile robot. The radius of the mecanum wheels is R. A Cartesian coordinate system, i.e. the robot coordinate system, is established at the center of the mobile chassis; its Z axis is perpendicular to the plane of the mobile chassis, and the angular velocity of the mobile robot rotating about the Z axis is ω. The moving speeds of the mobile robot in the X-axis and Y-axis directions in the robot coordinate system are denoted Ovx and Ovy, respectively. Ov1, Ov2, Ov3 and Ov4 are the speeds of the four wheels, and ω1, ω2, ω3, ω4 are their angular velocities; Ov1X and Ov1Y are the components of Ov1 in the X-axis and Y-axis directions; d1 is 1/2 of the wheelbase of the mobile chassis, d2 is 1/2 of the track width of the mobile chassis, and a is the rotation angle of the wheel.
The kinematic models of the four wheels are essentially the same. Taking the first wheel as an example, Ov1 can be represented by its components Ov1X and Ov1Y in the X-axis and Y-axis directions and the rotation angle a of the wheel:
[equation image]
where ω1 is the angular velocity of the first wheel. The above formula takes the wheel center as the origin, whereas the general calculation takes the center of the mobile chassis as the origin of the robot coordinate system (point O in fig. 6), so the wheel speed expressed in the robot coordinate system is:
[equation image]
From the above equation, the angular velocity of the first wheel can be solved:
[equation image]
The same applies to the second, third and fourth wheels; with d denoting the sum of d1 and d2, the angular velocities of the four wheels of the mobile robot can be expressed as:
[equation image]
where J is the Jacobian matrix; because the matrix has full rank, multiplying both sides by the transpose of J gives:
[equation image]
Rearranging the above yields the final kinematic equation of the mobile robot:
[equation image]
the data of the wheel speed meter can be collected at a given frequency by a chassis core controller of the mobile robot, wherein N is used 1 ~N 4 Indicating the motor speed (equivalent to the motor speed) of the four wheels collected by the wheel speed meter
Figure 953967DEST_PATH_IMAGE022
Figure 185229DEST_PATH_IMAGE025
) R is the radius of the wheel of the mobile chassis, and L is used 1 ~L 4 Representing the mileage covered by four wheels at time t, one can then obtain:
Figure 366811DEST_PATH_IMAGE064
angle of rotation of mobile robot around Z axis
Figure 290905DEST_PATH_IMAGE028
Can be expressed as:
Figure 925149DEST_PATH_IMAGE029
in the wheel speed meter model, the input quantities of the whole system are the total mileage L and the rotation angle of the mobile robot
Figure 225680DEST_PATH_IMAGE028
The total mileage L and the rotation angle
Figure 996190DEST_PATH_IMAGE028
Unifying as a function u (t), the wheel speed meter model can be expressed as:
Figure 622343DEST_PATH_IMAGE030
wherein x (t) is a wheel speed meter model at the time t; x (t-1) is a wheel speed meter model at the t-1 moment; u (t) is the total mileage L and the rotation angle of the mobile robot
Figure 478304DEST_PATH_IMAGE028
A unified function.
In this scheme the mobile chassis, owing to the characteristics of the mecanum wheels, has only two motion modes, translation and rotation, where the rotation is an in-place rotation without a radius, so the wheel speed meter also has only these two corresponding modes. Because the distance and angle generated by translation and rotation are unified in one model, the motion models of the two modes are consistent.
When the mobile robot is in a state of translational or rotational motion, its pose information is described by x, y and θ. The translational or rotational motion model of the mobile robot can be expressed as:
[equation image]
and the corresponding wheel speed meter model can be expressed as:
[equation image]
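Because the exact wheel numbering and sign convention of the chassis are not reproduced here, the following sketch assumes one common mecanum layout (wheel 1 front-left, 2 front-right, 3 rear-left, 4 rear-right, with d = d1 + d2); it illustrates how the chassis velocities and the accumulated mileage and rotation angle are recovered from the four wheel angular velocities, and is not the patent's exact matrix.

```python
import numpy as np

# Assumed numbering: 1 front-left, 2 front-right, 3 rear-left, 4 rear-right.
# Assumed inverse kinematics: omega_wheel = (1/R) * J @ [vx, vy, wz], with d = d1 + d2.
def mecanum_forward_kinematics(wheel_omegas, R, d1, d2):
    """Recover chassis velocities (vx, vy, wz) from the four wheel angular velocities."""
    d = d1 + d2
    J = np.array([[1.0, -1.0, -d],
                  [1.0,  1.0,  d],
                  [1.0,  1.0, -d],
                  [1.0, -1.0,  d]])
    # Least-squares pseudo-inverse of J (J has full column rank).
    vx, vy, wz = R * np.linalg.pinv(J) @ np.asarray(wheel_omegas, dtype=float)
    return vx, vy, wz

def wheel_odometry_step(x_prev, wheel_omegas, dt, R=0.05, d1=0.15, d2=0.20):
    """One update of the wheel speed meter model: accumulate mileage and rotation angle."""
    vx, vy, wz = mecanum_forward_kinematics(wheel_omegas, R, d1, d2)
    mileage, theta = x_prev
    mileage += np.hypot(vx, vy) * dt     # total mileage L
    theta   += wz * dt                   # rotation angle about the Z axis
    return mileage, theta

# Example: all wheels at 10 rad/s -> pure forward motion, no rotation.
print(mecanum_forward_kinematics([10, 10, 10, 10], R=0.05, d1=0.15, d2=0.20))  # (0.5, 0, 0)
print(wheel_odometry_step((0.0, 0.0), [10, 10, 10, 10], dt=0.1))               # (0.05, 0.0)
```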
the position of the mobile robot is expressed in the form of coordinates, so that the motion of the mobile robot can be regarded as rigid motion, the transformation of the coordinates can be regarded as the combination of translation and rotation, and several coordinate systems are needed to better describe the motion process, namely a map coordinate system and a robot coordinate system. The map coordinate system is fixed relative to the room, and can be regarded as a reference coordinate system, for example, a certain point on the ground in the room is taken as the origin of the map coordinate system, and the plane where the X axis and the Y axis are located is the ground plane. The robot coordinate system is a plane where the moving chassis is located is taken as the X-axis and the Y-axis of the robot coordinate system as shown in fig. 6, and the center point O is the origin of the robot coordinate system.
According to the kinematic model analysis of the mobile chassis, the difference between two poses of the mobile robot during motion can be obtained as dx, dy, and dθ. However, dx, dy, and dθ are all expressed in the robot coordinate system; in the actual calculation, these vectors must be mapped into the map coordinate system before being solved, since otherwise the pose of the mobile robot in its own coordinate system stays unchanged no matter how the robot moves. It should be noted that, because the front side of the mobile robot (i.e., the direction the camera probe faces) does not move up and down, only translation or rotation of the mobile robot produces an angular change about the Z axis; the robot coordinate system is therefore not represented with a Z-axis coordinate but with the rotation angle θ. After conversion from the robot coordinate system to the map coordinate system, the pose information of the mobile robot is represented by x, y, and θ.
The chassis core controller, built around an STM32, acquires the angular velocities of the four wheels of the mobile chassis, and the motion of the mobile chassis is then obtained from the kinematic model; it is represented by vx, vy, and the angular velocity ω (the velocities along the Ovx and Ovy axes and the rotation rate), which makes it convenient to integrate vx, vy, and ω into the required mileage information. Dead reckoning depends heavily on the sensors of the mobile robot: the pose of the mobile robot is estimated from the changes in speed and angle. In FIG. 7, p1, p2, p3, and p4 are several consecutive poses of the mobile robot in the map coordinate system, and the motion trail of the mobile robot is obtained by connecting p1, p2, p3, p4, ..., pn. Simple geometric calculation shows that the corresponding angles marked in FIG. 7 are equal, i.e.:

(equation image in the original)
Here pn-1 to pn denotes the pose change of the mobile robot over a very small time Δt; it is magnified in fig. 7 for easy viewing. The transformation matrix T that projects the pose of the mobile robot from the robot coordinate system into the map coordinate system can be expressed as:

T = [ cos(dθ)  −sin(dθ)  dx ]
    [ sin(dθ)   cos(dθ)  dy ]
    [    0         0      1 ]

The transformation matrix T transforms the coordinates of a point; dx and dy are the increments along the X and Y axes of the robot coordinate system and are treated as vectors in the calculation. When the mobile robot moves from p1 to p2, its coordinates in the map coordinate system change from the coordinates of p1 to the coordinates of p2; this process is:

p2 = T·p1
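The following short sketch applies such a transform in code, assuming the standard 2-D homogeneous form of T reconstructed above; the numbers and variable names are illustrative only.

import numpy as np

def make_transform(dx, dy, dtheta):
    """Homogeneous 2-D transform combining a rotation by dtheta with a
    translation (dx, dy), matching the layout of T given above."""
    c, s = np.cos(dtheta), np.sin(dtheta)
    return np.array([[c, -s, dx],
                     [s,  c, dy],
                     [0.0, 0.0, 1.0]])

# Moving from p1 to p2 in the map frame: p2 = T @ p1 in homogeneous coordinates.
p1 = np.array([1.0, 2.0, 1.0])                      # (x1, y1, 1)
T = make_transform(0.10, 0.02, np.deg2rad(5.0))
p2 = T @ p1                                          # (x2, y2, 1)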
In the map coordinate system, from time p1 to time pn, assuming the displacement of the mobile robot is l, the motion of the mobile robot can be regarded as an accumulation process: starting from the initial time, the displacement l and the rotation angle θ are accumulated at each step. When position pn is reached, the pose of the mobile robot can be represented as:

x_n = x_1 + Σ_{i=1}^{n} l_i·cos(θ_i)

y_n = y_1 + Σ_{i=1}^{n} l_i·sin(θ_i)

where (x_n, y_n) are the coordinates of the mobile robot at the nth position and (x_1, y_1) the coordinates at the 1st position; i denotes the ith position, l_i the displacement when reaching the ith position, and θ_i the rotation angle when reaching the ith position.
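A minimal sketch of this accumulation, following the summations above; the function name and the example step values are illustrative.

import math

def dead_reckon(x1, y1, steps):
    """Accumulate the pose by dead reckoning.

    steps: sequence of (l_i, theta_i) pairs -- the displacement and the
    rotation angle recorded when the i-th position is reached.
    Returns (x_n, y_n) after all steps.
    """
    x, y = x1, y1
    for l_i, theta_i in steps:
        x += l_i * math.cos(theta_i)
        y += l_i * math.sin(theta_i)
    return x, y

# Three small steps with a slowly increasing heading.
print(dead_reckon(0.0, 0.0, [(0.1, 0.00), (0.1, 0.05), (0.1, 0.10)]))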
As can be seen from the above formulas, dead reckoning is an integral summation of displacement and angle: each current pose and the sensor data are fed through the kinematic model to calculate the pose at the next moment. However, this is an ideal situation that holds only when the wheel speed meter data are truly reliable. No matter how expensive and precise a sensor is, it inevitably has errors, and because dead reckoning accumulates each pose, the errors accumulate as well, so accumulated errors must be considered in the calculation, as shown in the following formula:

(equation image in the original: the accumulation formulas above with additive error terms)

where the additional terms denote the error produced by external noise acting on the mobile robot. Errors can be classified into systematic errors and random errors. Systematic errors can be avoided by calibrating the wheel speed meter; they are errors that can be quantified, such as changes in wheel diameter or in the friction coefficient. Random errors, however, cannot be modeled and quantified; for example, when the mobile robot spins its wheels due to slipping, the wheel speed meter faithfully records the motor speed, but if the robot body does not move correspondingly, the pose estimate is strongly affected. Random errors therefore cannot be removed; they can only be estimated by other means so as to stay as close to the true value as possible, for example by fusing the wheel speed meter with an inertial measurement unit, which greatly improves its estimation accuracy.
Each single sensor has its own defects; for example, the rotation angle estimated by the wheel speed meter deviates considerably from the actual value. How to effectively fuse the positioning data is therefore the key to improving the overall positioning accuracy and stability of the mobile robot. In sensor data fusion, complementarity between sensors improves the positioning performance of the system, while redundant data between sensors improve its robustness, as the multi-sensor fusion model in fig. 8 makes clear.
According to this sensor fusion model, the specific framework of the sensor data fusion in this embodiment is shown in fig. 9 (where IMU denotes the data collected by the inertial measurement unit). The inertial measurement unit and the wheel speed meter provide a real-time, high-rate positioning result, but one that diverges continuously, so the lidar-based SLAM positioning result must be fused in. Lidar-based SLAM has high precision and does not diverge, but its real-time performance is poor, which limits real-time path planning; it therefore needs to be combined with a positioning system of high real-time performance. The two form a complementary relationship: the inertial measurement unit and the wheel speed meter handle short-term positioning, and the laser radar then corrects the errors of the real-time positioning so as to realize global positioning navigation.
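For orientation only, the complementary structure of fig. 9 can be sketched as a high-rate prediction step corrected by occasional lidar fixes; the class name, the blending gain, and the simple blending rule are illustrative assumptions and stand in for the EKF/AMCL machinery actually described below.

class ComplementaryLocalizer:
    """Toy sketch: fast IMU/wheel-odometry updates between slower,
    drift-free lidar fixes that pull the estimate back."""

    def __init__(self, pose0):
        self.pose = list(pose0)            # [x, y, theta]

    def predict(self, dx, dy, dtheta):
        # High-frequency step from the IMU and wheel speed meter (drifts).
        self.pose[0] += dx
        self.pose[1] += dy
        self.pose[2] += dtheta

    def correct(self, lidar_pose, gain=0.8):
        # Low-frequency lidar/SLAM fix removes the accumulated drift.
        self.pose = [p + gain * (z - p) for p, z in zip(self.pose, lidar_pose)]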
S3, after the mobile robot reaches the target station according to the path plan, two-dimensional code information arranged at the target station is acquired through the camera, and the camera data and the laser radar data are fused to obtain vision-assisted positioning; the current pose information of the mobile robot is then error-corrected according to the vision-assisted positioning so as to obtain the accurate position of the target station and realize local positioning navigation.
Although positioning accuracy improves markedly once the inertial measurement unit and wheel speed meter handle positioning and the laser radar corrects the real-time positioning error, the mobile robot must perform grasping at the target station, and the stroke of the mechanical arm is limited, which places higher demands on positioning accuracy at the target station. Most mature schemes for positioning and obstacle avoidance at a target station currently use a laser radar as the main sensor, but the positioning frequency of the laser radar is low. To compensate for this, the present scheme fuses an inertial measurement unit (IMU) and a wheel speed meter to assist positioning, improving both positioning accuracy and speed. However, the actual handling task demands very high positioning accuracy at the target station, the error of commonly used laser radar positioning is difficult to keep within 5 cm, and cost must also be considered; the scheme therefore adds a low-cost camera, preferably a monocular camera, to provide vision-assisted positioning as an accuracy remedy. An ordinary camera has no depth information (i.e., no distance information between the camera probe and an obstacle), so the point cloud data of the laser radar are mapped into the image collected by the camera; the image then carries depth information, which is acquired during local positioning navigation.
Combining the mathematical models of the various sensor positioning given in step S2, an extended Kalman filter algorithm is applied to positioning based on the inertial measurement unit and the wheel speed meter, and an adaptive Monte Carlo localization algorithm (AMCL) is applied to positioning based on the inertial measurement unit, the wheel speed meter, and the laser radar. Starting from the particles after AMCL convergence, a Scan-Match method is used to optimize the alignment of the laser radar point cloud data with the grid map and further reduce the positioning error, so that the mobile robot can accurately reach the target station and complete precise actions.
When the mobile robot starts local positioning navigation, AMCL has no definite initial pose, and when the robot works in an environment with complex geometric features the rate at which the laser radar scan end points align with obstacles is low; a certain error therefore exists between the estimated pose output by AMCL and the real pose of the mobile robot, and positioning accuracy is difficult to guarantee even after the particles converge.
In this scheme, the particles after AMCL convergence are used as the initial value for Scan-Match; a Gauss-Newton method then iteratively aligns the scan end points with the grid map, the Scan-Match estimation yields a highly accurate pose, and the optimized pose is finally output.
Because the resolution of the constructed grid map is coarse while the point cloud data must produce accurate positioning in the map coordinate system, the point cloud data scanned by the laser radar are aligned with the grid map through scan-matching optimization; the lidar positioning problem can thus be treated as a scan-matching optimization problem, expressed as follows:
ξ = ξ_0 + Δξ

ξ* = argmin_ξ Σ_{i=1}^{n} [1 − M(S_i(ξ))]²

where ξ denotes the real pose of the mobile robot; ξ_0 denotes the estimated pose of the mobile robot output by AMCL; Δξ denotes the deviation between the real pose and the estimated pose; S_i(ξ_0) denotes the end-point coordinates of the radar laser scan in the map coordinate system when the mobile robot is at the pose ξ_0; and M(S_i(ξ_0)) denotes the probability of the scanned end-point coordinates on the grid map.
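Under the symbols introduced above, the scan-matching objective could be evaluated as in the sketch below; the lidar is assumed to sit at the robot origin, and grid_probability stands in for the map lookup M.

import math

def scan_match_cost(pose, beams, grid_probability):
    """Sum of squared residuals S = sum_i (1 - M(S_i(pose)))^2.

    pose:             (x, y, theta) candidate robot pose in the map frame.
    beams:            iterable of (depth, beam_angle) lidar returns.
    grid_probability: callable (x, y) -> occupancy probability M.
    """
    x, y, theta = pose
    cost = 0.0
    for depth, angle in beams:
        ex = x + depth * math.cos(theta + angle)    # end point S_i(pose)
        ey = y + depth * math.sin(theta + angle)
        r = 1.0 - grid_probability(ex, ey)
        cost += r * r
    return cost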
The estimated pose ξ_0 output by AMCL is the pose of the mobile robot obtained in step S2 by fusing the inertial measurement unit, the wheel speed meter, and the laser radar, i.e., the pose with which the mobile robot arrives near the target station once global positioning navigation is finished. Because there is an error between the pose ξ actually reached by the mobile robot and the estimated pose ξ_0 output by AMCL, the deviation Δξ must also be taken into account during local positioning navigation.
The coordinates of the laser radar scan end point in the map coordinate system are:

(equation image in the original: the scan end point expressed from the quantities listed below)

where the quantities involved are the position coordinates of the mobile robot in the map coordinate system; the distance of the origin of the radar coordinate system relative to the origin of the robot coordinate system; the depth information of the point cloud data; the rotation angle θ of the mobile robot in the map coordinate system; and the angle of the origin of the radar coordinate system relative to the origin of the robot coordinate system.
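Since the exact expression is shown only as an image, the sketch below uses the standard composition map ← robot ← lidar for one beam; the parameter names (lidar offset, beam angle) are illustrative.

import math

def scan_endpoint_in_map(robot_pose, lidar_offset, depth, beam_angle):
    """Project one lidar return into the map frame.

    robot_pose:   (x, y, theta) of the robot in the map frame.
    lidar_offset: (lx, ly, phi) pose of the lidar origin in the robot frame.
    depth:        measured range of the beam (point cloud depth).
    beam_angle:   beam angle in the lidar frame.
    """
    x, y, theta = robot_pose
    lx, ly, phi = lidar_offset
    # Lidar origin expressed in the map frame.
    ox = x + lx * math.cos(theta) - ly * math.sin(theta)
    oy = y + lx * math.sin(theta) + ly * math.cos(theta)
    # Beam end point in the map frame.
    a = theta + phi + beam_angle
    return ox + depth * math.cos(a), oy + depth * math.sin(a)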
The goal of the Scan-Match method is, given the initial value ξ_0, to estimate the deviation Δξ by Gauss-Newton iteration so that the radar coordinate system and the grid map become aligned. Through multiple iterations the optimal pose solution ξ* is obtained, minimizing the sum of squared residuals of the original model, so that:

S = Σ_{i=1}^{n} [1 − M(S_i(ξ))]²
Requiring the sum of squared residuals to be minimal, i.e., setting the partial derivative of S with respect to the pose equal to 0, gives:

(equation images in the original: the first-order condition ∂S/∂ξ = 0 expanded using the gradient of M and the Jacobian of the scan end points, from which the Gauss-Newton update for Δξ follows)

where M represents the probability of the end-point coordinates of the lidar scan on the grid map.
Since the grid map is discrete rather than continuous, the derivative of the grid map at the point where the laser beam lands is obtained by bilinear interpolation. In the present embodiment the bilinear interpolation first interpolates in the Y direction of the map coordinate system and then interpolates in the X direction on the Y-direction results.
First, interpolation is carried out in the Y direction:

(equation images in the original: f(R1) and f(R2) obtained by interpolating the vertex values along Y)

Then, interpolation in the X direction gives the probability of point P on the grid map:

(equation image in the original: M(P) obtained by interpolating f(R1) and f(R2) along X)
The grid map consists of several square grids of equal area, each with four vertices, where (x0, y0) denotes the coordinates of the lower-left vertex of any grid, (x1, y1) denotes the coordinates of its upper-right vertex, and x and y denote the coordinates of an arbitrary point P within the grid; f(R1) and f(R2) denote the interpolation results in the Y direction; f(P1), f(P2), f(P3), and f(P4) denote the values at the four vertices of the grid; M(P) denotes the probability of point P within the grid; and P_1–P_4 denote the probabilities at the four vertices. Since the length and width of every grid are equal and constant, the following relationship holds in the calculation:

(equation image in the original: the grid width x1 − x0 equals the grid height y1 − y0 and is constant)
Therefore M(P) can be expressed as:

(equation image in the original)

The partial derivatives of the grid map at the point where the laser beam lands can then be calculated through the bilinear interpolation, with the expression:

(equation image in the original: ∂M/∂x and ∂M/∂y expressed from the vertex probabilities)
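The interpolation and its partial derivatives can be written compactly as below; the labelling of the four vertex probabilities is an assumption, since the patent's own labelling of P1–P4 appears only in the equation images.

def bilinear_probability(px, py, cell, values):
    """Bilinear interpolation of the occupancy probability at point P,
    interpolating along Y first and then along X, and returning the
    analytic partial derivatives as well.

    cell:   (x0, y0, x1, y1) lower-left and upper-right vertex coordinates.
    values: (p00, p01, p10, p11) probabilities at the vertices
            (x0, y0), (x0, y1), (x1, y0), (x1, y1).
    """
    x0, y0, x1, y1 = cell
    p00, p01, p10, p11 = values
    tx = (px - x0) / (x1 - x0)
    ty = (py - y0) / (y1 - y0)
    f_r1 = p00 * (1 - ty) + p01 * ty            # Y interpolation, left edge
    f_r2 = p10 * (1 - ty) + p11 * ty            # Y interpolation, right edge
    m = f_r1 * (1 - tx) + f_r2 * tx             # X interpolation -> M(P)
    dm_dx = (f_r2 - f_r1) / (x1 - x0)
    dm_dy = ((p01 - p00) * (1 - tx) + (p11 - p10) * tx) / (y1 - y0)
    return m, dm_dx, dm_dy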
the Scan-Match method optimizes the objective function by adopting Newton Gaussian iteration, and takes the output of AMCL as the initial value of the objective function
Figure 260117DEST_PATH_IMAGE041
Aligning the scanning end point of the laser radar with the edge of the obstacle through layer-by-layer iteration, and obtaining the deviation by the optimized point cloud data alignment effect as shown in fig. 10, wherein a in fig. 10 is the point cloud data before alignment, and b in fig. 10 is the point cloud data after alignment
Figure 46808DEST_PATH_IMAGE042
To make the residual sum of squares S reach the minimum value and finally obtain the optimal solution
Figure 296524DEST_PATH_IMAGE092
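A compact numerical stand-in for this refinement is sketched below: it performs Gauss-Newton steps on the residuals r_i = 1 − M(S_i(ξ)), using a finite-difference Jacobian in place of the analytic map gradient from the bilinear interpolation; the names and the stopping threshold are illustrative.

import numpy as np

def refine_pose(pose0, beams, grid_probability, iters=10, eps=1e-4):
    """Refine the AMCL pose estimate by Gauss-Newton on the scan residuals."""
    def residuals(p):
        x, y, th = p
        return np.array([1.0 - grid_probability(x + d * np.cos(th + a),
                                                 y + d * np.sin(th + a))
                         for d, a in beams])

    pose = np.asarray(pose0, dtype=float)
    for _ in range(iters):
        r = residuals(pose)
        J = np.empty((len(r), 3))               # finite-difference Jacobian
        for k in range(3):
            step = np.zeros(3)
            step[k] = eps
            J[:, k] = (residuals(pose + step) - r) / eps
        dp, *_ = np.linalg.lstsq(J, -r, rcond=None)   # Gauss-Newton step
        pose += dp
        if np.linalg.norm(dp) < 1e-6:
            break
    return pose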
DWA is adopted as the path planner for local positioning navigation of the mobile robot. To better understand how DWA operates, its steps are listed below (a small sketch of the scoring step follows the list):

(1) Establish a kinematic model.

When establishing the kinematic model, DWA assumes by default that the mobile robot moves uniformly over a very short time and builds the kinematic model on this uniform motion for the subsequent trajectory prediction.

(2) Sample velocities.

In velocity sampling, DWA establishes the velocity constraints of the mobile robot and the minimum-distance constraint to the obstacles.

(3) Score and select.

In the scoring process, the number of obstacles on each simulated path of the mobile robot, the travel time, and other factors are used as scoring criteria, and the path with the highest score is selected as the local planning result.
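The scoring step could look like the sketch below; the two weights are illustrative, since the patent only names the criteria (obstacle count, time, and so on).

def dwa_choose(samples, simulate, count_obstacles, travel_time):
    """Pick the best velocity sample in the DWA scoring step.

    samples:         candidate (v, w) pairs already satisfying the velocity
                     and obstacle-distance constraints.
    simulate:        (v, w) -> short predicted trajectory under the
                     uniform-motion assumption.
    count_obstacles: trajectory -> number of obstacles near the trajectory.
    travel_time:     trajectory -> time needed along the trajectory.
    """
    best, best_score = None, float("-inf")
    for v, w in samples:
        traj = simulate(v, w)
        score = -1.0 * count_obstacles(traj) - 0.1 * travel_time(traj)
        if score > best_score:
            best, best_score = (v, w), score
    return best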
The positioning accuracy is limited by the resolution of the grid map: it lies within about ±10 cm, and the maximum rotation-angle error reaches ±0.5 rad. The mobile robot needs higher local accuracy for the subsequent actual grasping task, which demands a higher-precision approach. To further improve the accuracy of local positioning navigation, two-dimensional-code positioning has been used to solve high-accuracy local positioning, for example the Data Matrix positioning technique (a two-dimensional-code encoding scheme). However, Data Matrix positioning has limitations: without depth information it requires the mobile robot to keep moving to acquire the correction scale of the two-dimensional code, it depends on high computing power and a good initial calibration of the code, its positioning frequency is only 5 Hz, and the correction flexibility of the system is poor. Other research adopts a depth camera to obtain target depth information, but depth cameras are expensive, their data are noisy, and the point cloud volume of an image is too large, which is unfavorable for the real-time requirements of an embedded platform. This scheme therefore uses the mobile robot's existing laser radar and fuses the camera with the laser radar in a low-cost, efficient way to realize vision-assisted positioning, fusing the image obtained by the camera with the depth information obtained by the laser radar.
Vision-assisted positioning depends on an easily recognized target. A two-dimensional code has distinctive features, can contain detailed information about the target station, and is extremely cheap to lay out. The QR code is preferred because it can store abundant information, although other two-dimensional codes may be chosen; a two-dimensional code mark of 10 cm × 10 cm is selected as the recognition target. In this scheme the two-dimensional code is pasted vertically at the target station by hand and its position in the map coordinate system is calibrated. When the mobile robot arrives near the target station, the two-dimensional code is automatically identified and tracked, and the relative pose of the mobile robot is calculated to correct the positioning error of AMCL (the weight of particles close to the target station is appropriately increased), thereby improving the accuracy of local positioning navigation.
The two-dimensional code is recognized with a deep learning framework: the code mark is first detected and cropped, the cropped image is processed, and OpenCV is then used to read the information on the code, which greatly improves the recognition rate. After the code information has been read, the coordinates of the two-dimensional code are converted from the radar coordinate system into the camera coordinate system, so that the depth information of the plane containing the code can be obtained, the relative pose between the mobile robot and the code can be computed efficiently, and tracking of the code becomes possible.
It should be noted here that the camera coordinate system is a two-dimensional coordinate system whose origin is the center point of the image captured by the camera. Since the image captured directly by the camera has no depth information (i.e., no distance information between the camera probe and an obstacle), the depth information obtained by the laser radar is projected into camera coordinates so that the camera image carries depth information. The two-dimensional code is identified and tracked through the camera, so when the camera photographs the two-dimensional code, the code has no depth information in camera coordinates and the depth information in radar coordinates must be brought in; the radar coordinate system is therefore converted into the camera coordinate system.
In the deep learning framework, a YOLO-Fastest model is trained and an NCNN inference framework is deployed. The steps for training the YOLO-Fastest model are as follows:

(1) Collect the two-dimensional code dataset.

To train the YOLO-Fastest model a dataset must be established: the camera is fixed on the mobile robot, started from the remote control host, and the image information published by the camera is recorded with rosbag. The two-dimensional codes are photographed from different angles, the recorded image information is finally sent to the remote control host, and some representative images are selected and stored to form the dataset.

(2) Label the two-dimensional code.

The two-dimensional code marks in the images are framed and annotated with the Label Image annotation tool, a description file is generated for each image, and the prepared dataset is divided into a training set and a test set at a ratio of 9:1.

(3) Configure the parameters of the YOLO-Fastest model and train it with Darknet.
The weights and parameters of the trained YOLO-Fastest model are then converted into the formats supported by NCNN using the NCNN tooling. Exploiting NCNN's multi-core parallelism, the NCNN inference threads are pinned to two A53 little cores, which improves inference efficiency while limiting CPU resource consumption. Finally, after image detection is complete, the detection information is published to the node on the remote control host.
Alternatively, a DNN inference framework can be adopted. The two inference frameworks are compared in terms of inference speed and resource consumption, the aim being to select the more efficient framework that still meets the real-time requirement. The experiment has two steps:

First step: DNN and NCNN run inference on the same model; their frame rates are recorded and compared, a higher frame rate meaning better real-time performance.

Second step: DNN and NCNN run inference on the same model; their CPU resource usage is recorded and compared, and the inference efficiency is finally calculated, with lower CPU consumption meaning lower computational cost.

For the frame rate test of the first step, each inference framework was run continuously for 10 minutes and its frame rate data exported and analyzed. This comparison targets real-time performance: DNN inference is faster than NCNN, but only by about 3 frames. The average frame rate of NCNN is still 18.4, which ensures the real-time requirement, and its frame rate fluctuates less, so its inference speed is more stable; both frameworks therefore meet the real-time requirement.

Next, the CPU resource occupation of the two frameworks was compared: the CPU resources consumed by DNN are far higher than those consumed by NCNN. It was calculated that NCNN consumes only 43.7% of the CPU resources of DNN, while its inference speed corresponds to 86.4% of DNN's.

Finally, the ratio of inference speed to CPU resource consumption was calculated for each framework to obtain its inference efficiency: the average efficiency of DNN is only 26.625, while that of NCNN reaches 52.285. The NCNN inference framework is therefore more efficient on the RK3399; it occupies only two A53 little cores, leaving the remaining four cores of the RK3399 for the services of the remote control host with almost no impact on the real-time performance of its positioning services. This embodiment therefore selects NCNN as the inference framework for vision-assisted positioning.
When the camera detects the two-dimensional code, the distance cannot be determined because there is no depth information, nor can it be determined whether the camera probe on the mobile robot is perpendicular to the plane of the two-dimensional code; the point cloud data of the laser radar therefore need to be fused in. To project an end point obtained in the radar coordinate system into the camera coordinate system, the transformation from the radar coordinate system to the camera coordinate system must be calibrated; an open-source joint calibration method for the laser radar and the camera is adopted, implemented on the basis of a single-line laser beam of the laser radar and the external parameters of the camera.
A calibration plate of 50 cm × 50 cm is used; it is rotated and translated freely at a distance of 0.3–1.3 m from the camera so that the end points of the laser beam fall on the plate, and images of the calibration plate in different poses together with the laser radar point cloud data at the same moments are recorded with rosbag. This step is called online calibration; it yields the pose information of a series of calibration-plate placements relative to the camera coordinate system, and the quality of this pose information determines the result of the subsequent calibration.
With the series of pose information obtained from online calibration, the likely direction of the straight line projected by the laser radar during translation can be estimated in the offline calibration from the poses of the calibration plate; a similar translation line is then searched for within the specified range of the laser radar, the point cloud data forming that line are extracted, and the external parameter data of the camera and the laser radar are finally obtained from the extracted point cloud data and the calibration-plate poses. The calibration result is stored in a result parameter file so that subsequent programs can read it. Through the external parameter matrix and the camera intrinsic parameters, the laser radar point cloud data can be projected onto the image, converting the radar coordinate system into the camera coordinate system.
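With the extrinsic matrix and the camera intrinsics in hand, the projection can be sketched as below using the standard pinhole model; the matrix names T_cam_lidar and K are illustrative.

import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project lidar points into the camera image.

    points_lidar: (N, 3) points in the radar coordinate system.
    T_cam_lidar:  4x4 extrinsic matrix taking radar coordinates to camera
                  coordinates (from the joint calibration described above).
    K:            3x3 camera intrinsic matrix.
    Returns (N, 2) pixel coordinates and (N,) depths in the camera frame.
    """
    pts = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    cam = (T_cam_lidar @ pts.T).T[:, :3]        # radar frame -> camera frame
    depth = cam[:, 2]
    uv = (K @ cam.T).T
    uv = uv[:, :2] / depth[:, None]             # perspective division
    return uv, depth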
The coordinates of the region containing the two-dimensional code are obtained through image recognition, and the pose of the two-dimensional code relative to the camera coordinate system is obtained through the laser radar mapping. To ensure accuracy and robustness, the points on the plane of the two-dimensional code must be fitted to a straight line and noise points removed: since the two-dimensional code is attached vertically to the target station, it appears as a horizontal line segment in a top view, and the camera probe, being perpendicular to the ground, also appears as a line segment in a top view; the connecting line from the midpoint of the code's segment to the midpoint of the camera's segment should be perpendicular to both, so the line must be fitted. The specific steps are:
step d1: in the camera coordinate system, let the equation of the straight line be

y = k·x + b

where x is the distance of the laser radar point cloud data in the X-axis direction, y is the depth information of the radar point cloud data, k is the slope of the straight line, and b is its intercept;

step d2: let the known point set be (x_1, y_1), ..., (x_n, y_n);

step d3: minimize the sum of squared residuals of the error equation f:

f(k, b) = Σ_{i=1}^{n} (y_i − k·x_i − b)²

By the extreme value theorem, the extremum is attained when the first derivatives of the error equation are equal to 0; therefore the first derivatives with respect to k and b are taken and set to zero, and k and b are solved so that the error equation f takes its minimum value:

∂f/∂k = −2·Σ_{i=1}^{n} x_i·(y_i − k·x_i − b) = 0

∂f/∂b = −2·Σ_{i=1}^{n} (y_i − k·x_i − b) = 0
To simplify the calculation, shorthand symbols are introduced for the sums involved (their definitions are given as images in the original); solving the two equations then yields:

k = (n·Σ x_i·y_i − Σ x_i·Σ y_i) / (n·Σ x_i² − (Σ x_i)²)

b = (Σ y_i − k·Σ x_i) / n
Therefore, the distance from the camera of the mobile robot to the plane of the two-dimensional code can be obtained from:

d = |b| / √(k² + 1)

where d is the distance, i.e., the depth information.
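A small sketch of the fit and the distance computation, following the least-squares solution above; the sample points are illustrative.

import math

def fit_line_and_distance(points):
    """Fit y = k*x + b to the lidar points on the code's plane by least
    squares (x: lateral offset, y: depth) and return k, b and the
    perpendicular distance d from the camera origin to the line."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    k = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - k * sx) / n
    d = abs(b) / math.sqrt(k * k + 1.0)
    return k, b, d

# A nearly fronto-parallel plane about 0.8 m ahead of the camera.
print(fit_line_and_distance([(-0.2, 0.81), (0.0, 0.80), (0.2, 0.79)]))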
The yaw angle of the two-dimensional code plane relative to the mobile robot is reflected by the slope k of the fitted line: when the camera of the mobile robot is perpendicular to the code plane, k approaches 0, so the closer k is to 0 the better, and the larger its magnitude the worse. To improve efficiency and simplify the computation, the quality of the robot's yaw is judged only by evaluating k itself, and the relative yaw angle is not explicitly calculated from k. The angle of the robot camera relative to the code plane and the distance of the camera from the code can both be obtained from the fitted line, which makes tracking of the code's pose possible.
Local positioning navigation must be considered along three dimensions: the distance between the camera and the two-dimensional code, whether the two-dimensional code lies at the center of the camera image, and whether the camera of the mobile robot is perpendicular to the two-dimensional code mark; all three have been quantified above.
In summary, referring to fig. 11, three PID threads are opened in the remote control host to control the two-dimensional code tracking operation. In this scheme the tracking motion of the mobile robot is abstracted into three closed-loop controls: a yaw correction loop, a distance keeping loop, and a transverse deviation loop. The feedback quantity of the Yaw correction loop is the k value of the target straight line, its output is the rotation speed of the Yaw axis, and the robot camera becomes perpendicular to the two-dimensional code plane as k approaches 0. The feedback of the distance keeping loop is obtained from the distance formula from the mobile robot camera to the two-dimensional code plane, and its output is the speed of the mobile robot along the X axis of the map coordinate system. The feedback of the transverse deviation loop is the position of the center of the two-dimensional code in the camera coordinate system, and its output is the speed of the mobile robot along the Y axis of the map coordinate system.
Because three different PID threads are opened, three corrections are generated, and if speed information were sent once per correction the data would overwrite one another and the tracking effect would be poor. Three flag bits are therefore used to wait for data synchronization, and the speed along the X axis, the speed along the Y axis, and the rotation speed of the Yaw axis are packed into the same frame of data before transmission, which improves transmission efficiency and avoids mutual overwriting of data. The mobile robot performs local positioning navigation according to each frame of data, so that it can accurately reach the target station and carry out precise actions.
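The three loops and the single command frame can be sketched as below; the PID gains, the setpoints, and the dictionary frame format are illustrative, and the actual flag-bit synchronization between threads is not shown.

from dataclasses import dataclass

@dataclass
class PID:
    kp: float
    ki: float = 0.0
    kd: float = 0.0
    _i: float = 0.0
    _prev: float = 0.0

    def step(self, error, dt):
        self._i += error * dt
        d = (error - self._prev) / dt if dt > 0 else 0.0
        self._prev = error
        return self.kp * error + self.ki * self._i + self.kd * d

def tracking_command(k, distance, center_offset, target_distance, dt,
                     yaw_pid, dist_pid, lateral_pid):
    """Combine the three loop outputs into one command frame so that the
    X speed, Y speed and Yaw rate are sent together instead of
    overwriting one another."""
    yaw_rate = yaw_pid.step(0.0 - k, dt)                 # drive k toward 0
    vx = dist_pid.step(distance - target_distance, dt)   # keep the distance
    vy = lateral_pid.step(0.0 - center_offset, dt)       # center the code
    return {"vx": vx, "vy": vy, "yaw_rate": yaw_rate}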
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (3)

1. An autonomous positioning and navigation method of a mobile robot is characterized in that:
the mobile robot comprises a chassis core controller and a positioning navigation core controller, wherein the chassis core controller is connected with an inertia measuring unit and a wheel speed meter, and the positioning navigation core controller is connected with a laser radar and a camera;
the autonomous positioning navigation method comprises the following steps:
s1, constructing a grid map based on a Gmapping algorithm by combining data of a laser radar and data of a wheel speed meter, and simultaneously formulating a map coordinate system corresponding to the grid map; receiving target station information issued by a remote control host, and analyzing the position of the target station information in a map coordinate system;
s2, fusing the data of the inertia measurement unit and the data of the wheel speed meter based on an AMCL algorithm to obtain an AMCL positioning result; scanning, matching and optimizing the AMCL positioning result by using data of the laser radar to obtain position and attitude information of the mobile robot, and calculating a path plan from the current position of the mobile robot to a target station to realize global positioning navigation;
before scanning, matching and optimizing the AMCL positioning result by using the data of the laser radar, the distortion generated by the point cloud data of the laser radar needs to be corrected, and the method specifically comprises the following steps:
calculating the measuring time of each end point in the current scanning period, where the ratio of the horizontal angle of the laser beam end point to the full scan equals the ratio of its measuring time to the scan duration:

T_p = (α / 360°)·(T_n − T_o)

wherein the scanning period is 360 degrees, T_p is the measuring time of the current end point relative to the first end point, α is the angle of the current end point relative to the first end point, T_n is the timestamp of the current frame of point cloud data, and T_o is the timestamp of the previous frame of point cloud data;

converting the point cloud data to the radar coordinate system:

X = D_p·cos(φ),  Y = D_p·sin(φ)

wherein (X, Y) are the coordinates of an end point in the radar coordinate system, and D_p is the depth information measured by the laser radar, that is, the distance from the origin of the laser beam to the end point;

calculating the angle proportion of the current end point relative to the termination scanning point:

pro = (φ − φ_s) / (φ_e − φ_s)

wherein pro is the distortion degree of the current end point relative to the termination scanning point, φ is the rotation angle of the current end point with respect to the X axis of the radar coordinate system, φ_s denotes the starting scanning point, and φ_e denotes the termination scanning point;

the conversion relation of the current end point to the radar coordinate system is:

(equation image in the original: the corrected end point obtained from p_o through the quaternion spherical interpolation Slerp(pro, Q) and the displacement vector D_t)

wherein the corrected end point has coordinates (X, Y) in the radar coordinate system; p_o is the laser radar point cloud data before correction; Slerp(pro, Q) is the spherical interpolation of the quaternion; pro is the angle proportion of the current end point relative to the termination scanning point; Q is the quaternion generated by the rotation of the mobile robot during the laser radar scan; and D_t is the displacement vector generated by the mobile robot during the laser radar scan;
s3, after the mobile robot reaches a target station according to the path plan, acquiring two-dimensional code information arranged on the target station through a camera, and fusing the data of the laser radar and the data of the camera to obtain vision auxiliary positioning; performing error correction on the current pose information of the mobile robot according to vision-assisted positioning so as to obtain the accurate position of a target station and realize local positioning navigation;
recognizing the two-dimensional code by using a deep learning framework: the two-dimensional code identifier is recognized and cropped, the picture of the two-dimensional code identifier is processed, and the information on the two-dimensional code is read by using OpenCV; after the two-dimensional code information is read, the coordinates of the two-dimensional code are converted from the radar coordinate system into the camera coordinate system, so that the depth information of the plane where the two-dimensional code is located can be obtained and the relative pose of the mobile robot and the two-dimensional code can be efficiently calculated;
in a deep learning framework, training a YOLO-Fastest model and deploying an NCNN inference framework, wherein the training of the YOLO-Fastest model comprises the following steps:
(1) Collecting a two-dimensional code dataset;
establishing a data set for training a YOLO-Fastest model, fixing a camera on a mobile robot, starting the camera by using a remote control host, and recording image information issued by the camera by using Rosbag; shooting the two-dimensional codes from different angles, finally sending the recorded image information to a remote control host, and selecting partial images to be stored to form a data set;
(2) Calibrating the two-dimension code;
performing frame selection and labeling on the two-dimensional code identification in the images by using the Label Image labeling tool, generating a description file for each image, and dividing the prepared data set into a training set and a test set at a ratio of 9:1;

(3) Configuring the parameters of the YOLO-Fastest model, and training the YOLO-Fastest model by using Darknet;
the NCNN inference framework deployed may also be a DNN inference framework.
2. The autonomous positioning and navigation method of a mobile robot according to claim 1, characterized in that: the step of using the data of the laser radar to carry out scanning matching optimization on the AMCL positioning result comprises the following steps:
using a Scan-Match method, taking the particles converged by the AMCL as the initial value of the Scan-Match, and adopting a Gauss-Newton method to iteratively align the end points with the grid map:

ξ = ξ_0 + Δξ

ξ* = argmin_ξ Σ_{i=1}^{n} [1 − M(S_i(ξ))]²

wherein ξ represents the real pose of the mobile robot; ξ_0 represents the estimated pose of the mobile robot output by the AMCL; Δξ represents the deviation between the real pose and the estimated pose; S_i(ξ_0) represents the end-point coordinates of the radar laser scan in the map coordinate system when the mobile robot is at the pose ξ_0; and M(S_i(ξ_0)) represents the probability of the scanned end-point coordinates on the grid map;
the coordinates of the end point of the laser radar scan in the map coordinate system are:

(equation image in the original: the scan end point expressed from the quantities listed below)

wherein the quantities involved are the position coordinates of the mobile robot in the map coordinate system; the distance of the origin of the radar coordinate system relative to the origin of the robot coordinate system; the depth information of the point cloud data; the rotation angle of the mobile robot in the map coordinate system; and the angle of the origin of the radar coordinate system relative to the origin of the robot coordinate system;
obtaining through multiple iterations an optimal pose solution ξ* that minimizes the sum of squared residuals of the original model, as follows:

S = Σ_{i=1}^{n} [1 − M(S_i(ξ))]²
using a bilinear interpolation method, first interpolating in the Y direction:

(equation images in the original: f(R1) and f(R2) obtained by interpolating the vertex values along Y)

then interpolating in the X direction to obtain the probability of point P on the grid map:

(equation image in the original: M(P) obtained by interpolating f(R1) and f(R2) along X)
since the grid map has several square grids of equal area, each grid having four vertices, wherein (x0, y0) represents the coordinates of the lower-left vertex of any grid, (x1, y1) represents the coordinates of the upper-right vertex of any grid, and x and y represent the coordinates of any point P within the grid; f(R1) and f(R2) represent the interpolation results in the Y direction, f(P1), f(P2), f(P3), and f(P4) represent the values at the four vertices of the grid, and M(P) represents the probability that point P is within the grid.
3. The autonomous positioning and navigation method for mobile robots according to claim 1, characterized in that: after the step of performing error correction on the current pose information of the mobile robot according to vision-assisted positioning so as to obtain the accurate position of the target station, the method further comprises the following steps: the method comprises the following steps of starting three PID threads to control the tracking action of the two-dimensional code, specifically:
abstracting the tracking motion of the mobile robot into three closed-loop controls, namely a yaw correction loop, a distance keeping loop, and a transverse deviation loop;

the output quantity of the Yaw correction loop is the rotation speed of the Yaw axis, the output quantity of the distance keeping loop is the speed of the mobile robot in the X-axis direction in the map coordinate system, and the output quantity of the transverse deviation loop is the speed of the mobile robot in the Y-axis direction in the map coordinate system;
packaging the speed in the X-axis direction, the speed in the Y-axis direction and the rotating speed of the Yaw axis into the same frame of data and then sending the same frame of data to the mobile robot, so that the mobile robot can perform local positioning navigation according to each frame of data.