CN113096190A - Omnidirectional mobile robot navigation method based on visual map building - Google Patents
- Publication number: CN113096190A
- Application number: CN202110328801.0A
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/85 — Stereo camera calibration
- G01C21/20 — Instruments for performing navigational calculations
- G05D1/0214 — Trajectory definition in accordance with safety or protection criteria, e.g. avoiding hazardous areas
- G05D1/0221 — Trajectory definition involving a learning process
- G05D1/0223 — Trajectory definition involving speed control of the vehicle
- G05D1/0231 — Position control using optical position detecting means
- G05D1/0236 — Optical markers or beacons in combination with a laser
- G05D1/024 — Obstacle or wall sensors in combination with a laser
- G05D1/0251 — Extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
- G05D1/0276 — Using signals provided by a source external to the vehicle
- G06F16/29 — Geographical information databases
- G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06V20/10 — Terrestrial scenes
Abstract
The invention belongs to the technical field of computer vision and mobile robot path planning, and discloses an omnidirectional mobile robot navigation method based on visual map building, comprising the following steps: (1) building an omnidirectional mobile robot hardware platform; (2) building an omnidirectional mobile robot software system; (3) performing recognition training on the working environment based on YOLOv5; (4) building a visual navigation map based on ORB_SLAM3; (5) performing navigation and path optimization in the working area. The invention has the following advantages: a vision sensor is used for visual map building and navigation, reducing hardware cost compared with a laser sensor, while the omnidirectional movement mode of Mecanum wheels makes navigation of the omnidirectional mobile robot convenient.
Description
Technical Field
The invention relates to an omnidirectional mobile robot navigation method based on visual mapping, and belongs to the technical field of computer vision and mobile robot path planning.
Background
In recent years, high and new technologies represented by computer vision, artificial intelligence, and SLAM have grown rapidly, and robotics has developed quickly with the improving performance of hardware such as sensors and processors. At present, industrial robots account for up to 80% of the global market, but with changing demand and advancing technology, the share of service-oriented mobile robots will grow greatly; inspection robots, floor-sweeping robots, and security robots are currently the mainstream applications of mobile robots and have already entered people's daily lives. However, with the continuous upgrading of industrial structures and the continuous change of people's demands, new challenges are posed to the intelligence and autonomy of mobile robots. Mobile robot visual SLAM and navigation is one of the research directions of robot control and computer vision; the research content of an omnidirectional mobile robot navigation system based on visual map building includes map building, path planning, and autonomous positioning.
At present, most existing mobile robot navigation systems use a lidar sensor to build maps and navigate; lidar is expensive and cannot be widely popularized, while map building and navigation based on vision sensors is cheap but considerably less accurate than laser sensing. In addition, mobile robots guided by traditional rails such as magnetic tracks and two-dimensional codes have a small working range and a fixed working scene, waste space resources, and require the guide tracks or codes to be re-laid whenever the working scene changes; mobile robots with conventional four-wheel and other non-omnidirectional drive structures also have difficulty working in narrow spaces. Meanwhile, traditional path planning algorithms for mobile robot navigation, such as A* and Dijkstra, have high time complexity; path planning based on the sampling RRT algorithm is not optimal; and planning based on reinforcement learning and similar algorithms requires a large amount of advance training and strong hardware support. In view of these three aspects, it is necessary to design a new omnidirectional mobile robot mapping and navigation method, so that the mobile robot can construct a map of guaranteed accuracy at lower cost in its working environment, together with a path planning method that lets the mobile robot complete its work tasks more efficiently and smoothly.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention aims to provide an omnidirectional mobile robot navigation method based on visual map building. The method maps the working environment using omnidirectional Mecanum-wheel movement and vision, uses deep learning to recognize the working environment and assist in positioning the omnidirectional mobile robot, and navigates with the built map and a corresponding path planning algorithm, thereby alleviating the problems of high navigation cost and single working scenes of current mobile robots and enabling the robot to complete its work tasks safely and stably.
In order to achieve the above purpose and solve the problems existing in the prior art, the invention adopts the technical scheme that: an omnidirectional mobile robot navigation method based on visual mapping is characterized by comprising the following steps:
step 1, constructing an omnidirectional mobile robot hardware platform: adopting omnidirectional Mecanum wheels driven by motors; installing a depth camera at the front end of the top layer of the omnidirectional mobile robot as a vision sensor for image acquisition; and installing an industrial personal computer and a lower computer on the bottom layer of the omnidirectional mobile robot, together with a power supply and a display device;
step 2, building an omnidirectional mobile robot software system, specifically comprising the following substeps:
(a) calibrating the internal parameters of the depth camera: preparing a checkerboard calibration board, recording a depth camera data set with an image recording tool, reducing the image acquisition frequency to 4 Hz, and calibrating the internal parameters of the depth camera with a calibration tool; the internal parameter matrix K is described by formula (1),

        | f_x   0    u_0 |
    K = |  0   f_y   v_0 |        (1)
        |  0    0     1  |

where f_x represents the focal length in pixels along the x-axis direction of the depth camera, f_y represents the focal length in pixels along the y-axis direction, u_0 represents the offset in pixels of the optical axis from the image center along the x-axis direction, and v_0 represents the offset in pixels of the optical axis from the image center along the y-axis direction;
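The pinhole model behind formula (1) can be inverted to recover a camera-frame point from a pixel and its depth reading. A minimal Python sketch (the values of f_x, f_y, u_0, v_0 below are illustrative placeholders, not the calibrated values of the patent's camera):

```python
import numpy as np

# Illustrative intrinsic parameters (placeholders, not calibrated values)
fx, fy = 615.0, 615.0   # focal lengths in pixels
u0, v0 = 320.0, 240.0   # principal point in pixels

K = np.array([[fx, 0.0, u0],
              [0.0, fy, v0],
              [0.0, 0.0, 1.0]])

def pixel_to_camera(u, v, depth):
    """Back-project pixel (u, v) with depth z (metres) into the camera frame."""
    x = (u - u0) * depth / fx
    y = (v - v0) * depth / fy
    return np.array([x, y, depth])

# A pixel at the principal point lies on the optical axis.
p = pixel_to_camera(320.0, 240.0, 1.5)
```

This is the same back-projection used later in step 4(c) to locate a bounding-box centroid in the camera frame from its image coordinates and depth value.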
(b) establishing a world coordinate system in the omnidirectional mobile robot operating system with the center of the second layer of the omnidirectional mobile robot as the origin, starting the depth camera and subscribing to the nodes it publishes, and calibrating the external parameters of the depth camera, i.e. determining the external reference rotation matrix R and external reference translation matrix T from the depth camera coordinate system to the world coordinate system, described by formula (2),

    P_w = R · P_0 + T,  i.e.  (x_w, y_w, z_w)^T = R · (x_0, y_0, z_0)^T + T        (2)

where P_0 is the depth camera coordinate system, P_w is the world coordinate system, R is the external reference rotation matrix, T is the external reference translation matrix, (x_0, y_0, z_0) are the three-dimensional coordinates of an object in the camera coordinate system, and (x_w, y_w, z_w) are its three-dimensional coordinates in the world coordinate system;
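Formula (2) is a single rotation plus translation. A sketch with assumed extrinsics (a 90° yaw and a 0.2 m vertical offset, chosen only for illustration, not the patent's calibrated values):

```python
import numpy as np

# Assumed extrinsics for illustration: camera rotated 90 degrees about z
# and mounted 0.2 m above the world origin.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])   # rotation camera -> world
T = np.array([0.0, 0.0, 0.2])                          # translation camera -> world

def camera_to_world(p_cam):
    """Apply P_w = R * P_0 + T to a point expressed in the camera frame."""
    return R @ p_cam + T

p_w = camera_to_world(np.array([1.0, 0.0, 0.0]))
```

A point one metre along the camera's x-axis lands one metre along the world y-axis, 0.2 m up, as the assumed yaw and offset dictate.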
(c) determining the control direction of each motor according to the kinematics of the Mecanum wheels and writing the corresponding direction and speed control functions, controlling the omnidirectional mobile robot to move in ten directions: forward, backward, left, right, the four diagonal directions (left-forward, right-forward, left-backward, right-backward), and left and right rotation; the kinematic decomposition of the Mecanum wheels is described by formula (3),

    v_w1 = v_tx - v_ty - ω(a + b)/2
    v_w2 = v_tx + v_ty + ω(a + b)/2        (3)
    v_w3 = v_tx + v_ty - ω(a + b)/2
    v_w4 = v_tx - v_ty + ω(a + b)/2

where v_w1, v_w2, v_w3, v_w4 are the speeds of the 4 Mecanum wheels, v_tx is the x-axis motion speed, v_ty is the y-axis motion speed, ω is the rotational angular speed, a is the width of the omnidirectional mobile robot, and b is its length;
step 3, performing recognition training on the working environment based on YOLOv5, specifically comprising the following substeps:
(a) starting the depth camera node to acquire images of the working environment, extracting one frame every 0.1 s, and making a data set with an image recording tool; marking door and emergency exit features in the data set with a picture annotation tool to obtain corresponding information files comprising the label name and position information; and dividing the data set into a training set, a test set, and a verification set in proportions of 90%, 5%, and 5%, respectively;
(b) performing the relevant configuration of the YOLOv5 algorithm: modifying the training names to the corresponding feature label names, selecting a training network, modifying the anchors in the training network to match the number of labeled features, and modifying the hyper-parameters in the training network; the number of filters is described by formula (4),

    filters = 3 × (classes + 5)        (4)

where classes is the number of labeled classes and filters is the filter count among the hyper-parameters;
(c) training and testing YOLOv5: selecting the official pre-training weights, then training for 50000 iterations to reduce the loss function below 0.3; testing the trained weights on the verification set to ensure that the detection success rate for each detection target is above 98%, otherwise continuing weight optimization; and finally packaging the trained weights and the detection algorithm into an ROS node in preparation for the next step;
step 4, constructing a visual navigation map based on ORB_SLAM3: driving the omnidirectional mobile robot through the working environment to build the visual navigation map, specifically comprising the following substeps:
(a) sequentially starting the omnidirectional mobile robot driving node, the keyboard control node, the depth camera start node, and the ORB_SLAM3 sparse point cloud mapping node; moving the robot by keyboard through the whole working area so that the depth camera scans the entire area. The ORB_SLAM3 node subscribes to the RGB image and depth image published by the depth camera, selects key frames to build a sparse point cloud map, and at the same time records the depth camera poses and stores them in a trajectory map file. The number of feature points selected in the key frames is increased to convert the sparse point cloud map into a semi-dense point cloud map, and noise points of the semi-dense point cloud map are removed by voxel filtering and pass-through filtering in the point cloud library;
(b) starting the octree service and creating an octree object with its resolution set to 0.05; reading the information in the trajectory map file built by ORB_SLAM3, including the frame numbers, positions, and pose quaternions of the key frames; converting and splicing the feature points in the key frames into an octree map, adding color information according to height to facilitate observation and to judge whether the result is accurate; and setting the retained height range of the octree map to 0.1–2.0 m, filtering out irrelevant ground and ceiling information;
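The 0.1–2.0 m height window amounts to a pass-through filter on point height before octree insertion. A NumPy sketch of that filtering step (in practice the point cloud library filters mentioned above would be used):

```python
import numpy as np

def height_filter(points, z_min=0.1, z_max=2.0):
    """Pass-through filter: keep points whose height lies in [z_min, z_max],
    discarding ground and ceiling returns before building the octree map.
    `points` is an (N, 3) array of x, y, z coordinates in metres."""
    z = points[:, 2]
    return points[(z >= z_min) & (z <= z_max)]

cloud = np.array([[0.0, 0.0, 0.05],   # ground noise  -> removed
                  [1.0, 2.0, 1.20],   # wall point    -> kept
                  [0.5, 0.5, 2.60]])  # ceiling point -> removed
filtered = height_filter(cloud)
```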
(c) starting the YOLOv5 ROS detection node to detect door and emergency exit features in the working environment. After a specific feature is detected, the detection algorithm marks it with a rectangular bounding box and calculates the corresponding centroid position from the box's four vertices; the RGB image acquired by the depth camera at that moment is aligned with the depth map to obtain the depth value at the corresponding centroid position. From the depth camera internal reference matrix K obtained in step 2, the spatial position of the box centroid in the depth camera coordinate system is obtained from its image coordinates; from the external reference translation matrix T and rotation matrix R obtained in step 2, its position in the world coordinate system is obtained. The octree map is then converted into a two-dimensional grid map with the octree service map conversion package, the position coordinates and target information are mapped into the two-dimensional grid map, and the information is published in ROS for navigation and path planning;
step 5, performing navigation and path optimization in the working area, specifically comprising the following substeps:
(a) setting a number of navigation points for the omnidirectional mobile robot in the built two-dimensional grid map according to the principle of maximum coverage of the effective area, so that the robot can navigate continuously in the working environment; during navigation, a new navigation task is sent only after the robot has reached the next target point from the current target point, and not otherwise. The navigation speed is controlled by dynamic parameter feedback: the corresponding PID (proportion-integration-differentiation) parameters and the robot's navigation speed are set at the dynamic parameter configuration client, with different navigation speeds for different road sections;
(b) after the navigation points have been selected, the RRT algorithm is first applied between navigation points for preliminary path planning; once planning is finished, the path is optimized at turns using the Bresenham circle-drawing algorithm. First it is judged whether the path is at a turn; the judgment condition is described by formula (5),

    K_1 = (y_2 - y_1)/(x_2 - x_1),  |K_1 - K_2| > 0.45        (5)

where (x_1, y_1) are the coordinates of the current node in the actual map, (x_2, y_2) are the coordinates of the next node, K_1 is the slope between the current node and the next node, and K_2 is the slope between the current node and the previous node; if the slope difference is greater than 0.45, the current position is judged to be a turn. A coordinate system is then established with (x_1, y_2) as the origin; taking the origin as the circle center, (x_0, y_0) = (0, |y_1 - y_2|) as the starting point, and r = |y_1 - y_2| as the radius, the path is rounded into an arc with the Bresenham circle-drawing algorithm, described by formula (6),

    d = 3 - 2r        (6)

where d is the initial value of the decision parameter and r is the radius of the arc. If d < 0 the update of formula (7) is applied, otherwise the update of formula (8) is applied, yielding the updated point (x_i, y_j):

    d = d + 4x_i + 6,  x_{i+1} = x_i + 1,  y_{j+1} = y_j        (7)

    d = d + 4(x_i - y_j) + 10,  x_{i+1} = x_i + 1,  y_{j+1} = y_j - 1        (8)

where (x_i, y_j) are the current node coordinates, (x_{i+1}, y_{j+1}) are the updated node coordinates, and i and j are counting indices. If x_i < y_j, the point (x_i, y_j) is added to the path and this step is repeated until x_i = y_j, at which point a 1/8 arc path has been generated; the second 1/8 arc is then obtained from the symmetry of the circle, giving a complete 1/4 arc path and completing the path drawing at the turn;
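The arc generation of formulas (6)–(8) is the classic Bresenham/midpoint circle octant. A sketch that steps while x_i < y_j exactly as described above:

```python
def bresenham_eighth_arc(r):
    """Generate a 1/8 circle arc of integer radius r with the Bresenham
    decision parameter initialized to d = 3 - 2r, as in step 5(b)."""
    x, y = 0, r
    d = 3 - 2 * r
    arc = [(x, y)]
    while x < y:
        if d < 0:
            d = d + 4 * x + 6          # midpoint inside the circle: keep y
        else:
            d = d + 4 * (x - y) + 10   # midpoint outside: step y down
            y -= 1
        x += 1
        arc.append((x, y))
    return arc
```

Mirroring each generated (x, y) to (y, x) yields the second eighth of the circle and hence the complete quarter arc used to round the turn.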
(c) after path planning is finished, converting the depth information of the depth camera into two-dimensional data with an image conversion toolkit, giving the initial position of the omnidirectional mobile robot, and positioning with the Monte Carlo localization method; during movement, the YOLOv5 target detection thread is started at the same time. When a specific target is detected, its position in the map is calculated and compared with the historical position obtained in step 4; whether the position of the omnidirectional mobile robot needs to be corrected is judged by the condition described in formula (9),

    |x_n - x_p| + |y_n - y_p| > 0.2        (9)

where (x_n, y_n) are the detected position coordinates of the current target and (x_p, y_p) are the historical position coordinates of the target. When the judgment condition is satisfied, the position of the omnidirectional mobile robot is corrected as described by formula (10),

    x_t' = x_t + (x_p - x_n),  y_t' = y_t + (y_p - y_n)        (10)

where (x_t, y_t) is the current position of the omnidirectional mobile robot. To avoid repeated correction calculations, after a target has been identified and the position correction completed, the detection thread is restarted for re-correction only after the omnidirectional mobile robot has moved for 3 seconds.
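The correction test of formula (9) and the update of formula (10) can be sketched together as one small helper (the function name and tuple interface are illustrative, not part of the patent):

```python
def correct_pose(robot_xy, target_detected, target_historical, threshold=0.2):
    """Step 5(c) correction sketch: if the detected target position deviates
    from its historical (mapped) position by more than the L1 threshold,
    shift the robot's pose estimate by the discrepancy (formulas (9), (10))."""
    xn, yn = target_detected
    xp, yp = target_historical
    if abs(xn - xp) + abs(yn - yp) > threshold:
        xt, yt = robot_xy
        return (xt + (xp - xn), yt + (yp - yn)), True
    return robot_xy, False

# A detected door 0.3 m off from its mapped position triggers a correction.
pose, corrected = correct_pose((1.0, 1.0), (2.3, 4.0), (2.0, 4.0))
```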
The beneficial effects of the invention are as follows. The omnidirectional mobile robot navigation method based on visual map building comprises the following steps: (1) building an omnidirectional mobile robot hardware platform; (2) building an omnidirectional mobile robot software system; (3) performing recognition training on the working environment based on YOLOv5; (4) building a visual navigation map based on ORB_SLAM3; (5) performing navigation and path optimization in the working area. Compared with the prior art, the invention has the following advantages. First, a vision sensor is used for visual map building and navigation, which reduces hardware cost compared with a laser sensor, while the omnidirectional movement mode of Mecanum wheels makes navigation convenient for the omnidirectional mobile robot. Second, deep learning is used to recognize specific targets in the working environment and to assist in positioning the omnidirectional mobile robot, so that it can carry out navigation tasks more stably. Third, the sampling-based RRT path planning algorithm is fused with the Bresenham circle-drawing algorithm, so that the omnidirectional mobile robot can pass through turns more quickly.
Drawings
FIG. 1 is a flow chart of the method steps of the present invention.
Fig. 2 is an assembly model diagram of the omnidirectional mobile robot.
Fig. 3 is a diagram of a real object of the omnidirectional mobile robot.
Fig. 4 is a schematic diagram of coordinate system conversion.
Fig. 5 is a diagram of the effect of object recognition.
In the figure: (a) is a door identification effect diagram, and (b) is an emergency exit identification effect diagram.
Fig. 6 is a work area sparse point cloud map.
In the figure: (a) is a feature point extraction map, and (b) is a sparse point cloud map.
FIG. 7 is a work area semi-dense point cloud map.
FIG. 8 is a work area octree map.
FIG. 9 is a two-dimensional grid map of a work area.
Fig. 10 is a schematic map of the object mapping.
Fig. 11 is a path planning effect diagram.
Detailed Description
The invention will be further explained with reference to the drawings.
As shown in fig. 1, the omnidirectional mobile robot navigation method based on visual mapping includes the following steps:
step 1, building a hardware platform of the omnidirectional mobile robot, whose assembly model diagram is shown in fig. 2: adopting omnidirectional Mecanum wheels driven by motors; installing a depth camera at the front end of the top layer of the omnidirectional mobile robot as a vision sensor for image acquisition; and installing an industrial personal computer and a lower computer on the bottom layer, together with a power supply and a display device. The finished omnidirectional mobile robot is shown in fig. 3.
step 2, building an omnidirectional mobile robot software system, specifically comprising the following substeps:
(a) calibrating the internal parameters of the depth camera: preparing a checkerboard calibration board, recording a depth camera data set with an image recording tool, reducing the image acquisition frequency to 4 Hz, and calibrating the internal parameters of the depth camera with a calibration tool; the internal parameter matrix K is described by formula (1),

        | f_x   0    u_0 |
    K = |  0   f_y   v_0 |        (1)
        |  0    0     1  |

where f_x represents the focal length in pixels along the x-axis direction of the depth camera, f_y represents the focal length in pixels along the y-axis direction, u_0 represents the offset in pixels of the optical axis from the image center along the x-axis direction, and v_0 represents the offset in pixels of the optical axis from the image center along the y-axis direction;
(b) establishing a world coordinate system in the omnidirectional mobile robot operating system with the center of the second layer of the omnidirectional mobile robot as the origin: start the depth camera and subscribe to the nodes it publishes, calibrate the external parameters of the depth camera, and determine the extrinsic rotation matrix R and extrinsic translation matrix T relating the depth camera coordinate system to the world coordinate system, as described by formula (2),
where P0 is the depth camera coordinate system, Pw the world coordinate system, R the extrinsic rotation matrix, T the extrinsic translation matrix, (x0, y0, z0) the three-dimensional coordinates of an object in the camera coordinate system, and (xw, yw, zw) its three-dimensional coordinates in the world coordinate system. The relative pose between the depth camera coordinate system and the world coordinate system is shown in fig. 4.
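A minimal sketch of the extrinsic transform of formula (2), assuming the common convention P0 = R·Pw + T; the R and T below are illustrative placeholders, not the calibrated extrinsics:

```python
import numpy as np

# Illustrative extrinsics: camera yawed 90 degrees about z, offset 0.2 m in x
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])  # extrinsic rotation
T = np.array([0.2, 0.0, 0.0])                         # extrinsic translation

def world_to_camera(p_w):
    """Formula (2) under the assumed convention: P0 = R @ Pw + T."""
    return R @ np.asarray(p_w) + T

def camera_to_world(p_0):
    """Inverse mapping: Pw = R.T @ (P0 - T), since R is orthonormal."""
    return R.T @ (np.asarray(p_0) - T)

p0 = world_to_camera([1.0, 0.0, 0.5])
pw = camera_to_world(p0)   # round-trips back to the world point
```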
(c) determining the control direction of each motor according to the kinematics of the Mecanum wheels and writing the corresponding direction and speed control functions, so that the omnidirectional mobile robot can be driven in ten directions: forward, backward, left, right, the four diagonals, and clockwise and counterclockwise rotation; the kinematic decomposition of the Mecanum wheels is described by formula (3),
where vw1, vw2, vw3, vw4 are the speeds of the four Mecanum wheels, vtx is the motion speed along the x-axis, vty the motion speed along the y-axis, ω the rotational angular speed, a the width of the omnidirectional mobile robot, and b its length;
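The kinematic decomposition can be sketched as below; since formula (3) itself is not reproduced in the text, the sign convention and the (a + b)/2 lever arm are one common choice and the patent's own may differ:

```python
def mecanum_wheel_speeds(vtx, vty, omega, a, b):
    """Inverse kinematics of a Mecanum platform (formula (3), sketched).

    vtx, vty : platform speeds along x (forward) and y (lateral), m/s
    omega    : rotational angular speed, rad/s
    a, b     : robot width and length, m
    The signs follow one common convention; it is an assumption here.
    """
    k = (a + b) / 2.0            # effective lever arm of each wheel
    vw1 = vtx - vty - k * omega  # front-left wheel
    vw2 = vtx + vty + k * omega  # front-right wheel
    vw3 = vtx + vty - k * omega  # rear-left wheel
    vw4 = vtx - vty + k * omega  # rear-right wheel
    return vw1, vw2, vw3, vw4

speeds = mecanum_wheel_speeds(0.5, 0.0, 0.0, a=0.4, b=0.6)  # pure forward
spin = mecanum_wheel_speeds(0.0, 0.0, 1.0, a=0.4, b=0.6)    # pure rotation
```

Pure forward motion drives all four wheels at the same speed, while pure rotation drives the two sides in opposite directions, which is the property the ten-direction control functions exploit.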
step 3, performing recognition training on the working environment based on YOLOv5, specifically comprising the following substeps:
(a) starting the depth camera node to acquire images of the working environment, extracting one frame every 0.1 s; making a data set with an image recording tool and marking the door and safety-exit features in the data set with a picture annotation tool to obtain the corresponding information files containing the label name and position information; dividing the data set into a training set, a test set, and a verification set in proportions of 90%, 5%, and 5% respectively;
(b) configuring the YOLOv5 algorithm: change the training names to the corresponding feature label names, select a training network, set the anchors in the training network to match the number of labeled features, and modify the hyper-parameters of the training network; the number of filters is described by formula (4),
filter=3*(classes+5) (4)
where classes is the number of labeled classes and filter is the filter-count parameter among the hyper-parameters;
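Formula (4) can be checked directly; for the two labeled classes here (door and safety exit) it gives 21 filters:

```python
def yolo_filters(classes):
    # Formula (4): 3 anchors per scale, each predicting
    # (x, y, w, h, objectness) plus one score per class
    return 3 * (classes + 5)

filters = yolo_filters(2)  # door and safety-exit classes
```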
(c) training and testing YOLOv5: select the official pre-training weights, then train for 50000 iterations to bring the loss function below 0.3; test the trained weights on the verification set to ensure that the detection success rate for each detection target is above 98%, otherwise continue optimizing the weights; finally, package the trained weights and the detection algorithm into ROS nodes to prepare for the next step. The target recognition results in the working area are shown in FIG. 5.
Step 4, constructing a visual navigation map based on ORB_SLAM3: drive the omnidirectional mobile robot through the working environment and build the visual navigation map, specifically comprising the following substeps:
(a) start, in order, the omnidirectional mobile robot driving node, the keyboard control node, the depth camera node, and the ORB_SLAM3 sparse point cloud mapping node, and drive the robot by keyboard through the whole working area so that the depth camera scans all of it. The ORB_SLAM3 node subscribes to the RGB images and depth maps published by the depth camera, selects key frames to build a sparse point cloud map, and records the depth camera pose at the same time, storing it in a trajectory map file; the sparse point cloud map of the working area is shown in figure 6. Increasing the number of feature points selected per key frame converts the sparse point cloud map into a semi-dense point cloud map, whose noise points are removed by voxel filtering and pass-through filtering from the point cloud library; the semi-dense point cloud map of the working area is shown in figure 7.
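The voxel and pass-through filtering can be sketched over a plain NumPy array; the patent uses the point cloud library's own filters, so this stand-in only illustrates the idea:

```python
import numpy as np

def passthrough_filter(points, axis=2, lo=0.1, hi=2.0):
    """Keep points whose coordinate along `axis` lies in [lo, hi]."""
    m = (points[:, axis] >= lo) & (points[:, axis] <= hi)
    return points[m]

def voxel_filter(points, voxel=0.05):
    """Replace all points falling in the same voxel by their centroid."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = np.asarray(inv).ravel()
    counts = np.bincount(inv).astype(float)
    out = np.zeros((inv.max() + 1, points.shape[1]))
    for d in range(points.shape[1]):
        out[:, d] = np.bincount(inv, weights=points[:, d]) / counts
    return out

cloud = np.array([[0.00, 0.0, 0.05],   # too low -> dropped by pass-through
                  [1.00, 1.0, 1.00],
                  [1.01, 1.0, 1.00],   # same voxel as the previous point
                  [2.00, 2.0, 2.50]])  # too high -> dropped by pass-through
dense = voxel_filter(passthrough_filter(cloud))  # one centroid survives
```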
(b) start the octree service and create an octree object with its resolution set to 0.05; read the information in the trajectory map file built by ORB_SLAM3, including the frame numbers, positions, and attitude quaternions of the key frames, and convert and splice the feature points of the key frames into an octree map. Color information is added according to height to ease observation and to judge whether the result is accurate, and the octree map height is limited to 0.1-2.0 m to filter out irrelevant ground and ceiling information. The octree map of the working area is shown in FIG. 8.
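A dictionary of voxel keys can stand in for the octree to illustrate the 0.05 m resolution, the 0.1-2.0 m height band, and the height-based coloring; the real implementation would use the octree service itself:

```python
RESOLUTION = 0.05          # octree leaf size, metres
Z_MIN, Z_MAX = 0.1, 2.0    # height band keeping walls, dropping floor/ceiling

def height_color(z):
    """Map height to an RGB colour for easier visual inspection."""
    t = (z - Z_MIN) / (Z_MAX - Z_MIN)   # 0 at bottom of band, 1 at top
    return (int(255 * t), 0, int(255 * (1 - t)))

def insert_points(occupied, points):
    """Insert points into a voxel-key dict standing in for the octree."""
    for x, y, z in points:
        if Z_MIN <= z <= Z_MAX:         # filter floor and ceiling points
            key = (int(x // RESOLUTION), int(y // RESOLUTION),
                   int(z // RESOLUTION))
            occupied[key] = height_color(z)
    return occupied

octo = insert_points({}, [(1.0, 1.0, 0.05),   # floor point -> rejected
                          (1.0, 1.0, 1.00),   # wall point  -> kept
                          (1.0, 1.0, 2.60)])  # ceiling     -> rejected
```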
(c) start the YOLOv5 ROS detection node to detect the door and safety-exit features in the working environment. When a specific feature is detected, the detection algorithm frames it with a rectangular box and computes the corresponding centroid from the four vertexes of the box; the RGB image acquired by the depth camera at that moment is aligned with the depth map to obtain the depth value at the corresponding centroid position. From the depth camera internal parameter matrix K obtained in step 2, the spatial position of the box centroid in the depth camera coordinate system is recovered from its image coordinates, and from the extrinsic translation matrix T and rotation matrix R obtained in step 2, its position in the world coordinate system is obtained. The octree map is then converted into a two-dimensional grid map with the octree service map conversion package, the position coordinates and target information are mapped into the two-dimensional grid map, and the information is published in ROS for navigation and path planning. The two-dimensional grid map of the working area is shown in fig. 9, and the target map in fig. 10.
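The back-projection chain of this substep, from bounding box to world coordinates, can be sketched as below; K, R, and T are placeholders (not the calibrated values), and the P0 = R·Pw + T convention for formula (2) is assumed:

```python
import numpy as np

# Placeholder calibration values for steps 2(a)-(b); not the patent's numbers
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                  # extrinsic rotation (identity for the sketch)
T = np.array([0.0, 0.0, 0.0])  # extrinsic translation

def bbox_centroid(x1, y1, x2, y2):
    """Centroid of the detection rectangle from its opposite vertexes."""
    return (x1 + x2) / 2.0, (y1 + y2) / 2.0

def pixel_to_world(u, v, depth):
    """Back-project pixel (u, v) at `depth` metres, then map to the world.

    Assumes the formula-(2) convention P0 = R @ Pw + T, hence
    Pw = R.T @ (P0 - T).
    """
    p_cam = depth * np.linalg.inv(K) @ np.array([u, v, 1.0])
    return R.T @ (p_cam - T)

u, v = bbox_centroid(300, 200, 340, 280)  # an example detected door box
p_world = pixel_to_world(u, v, depth=2.0)
```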
And 5, performing navigation and path optimization in the working area, and specifically comprising the following substeps:
(a) set several navigation points for the omnidirectional mobile robot in the built two-dimensional grid map according to the principle of maximum coverage of the effective area, so that the robot can navigate continuously in the working environment; during navigation a new navigation task is sent only after the robot has reached the next target point from the current one, and not otherwise. The navigation speed is controlled by dynamic parameter feedback: the corresponding PID (proportion-integration-differentiation) parameters and the robot's navigation speed are set at the dynamic parameter configuration client, with different navigation speeds on different road sections;
(b) after the navigation points have been selected, first run the RRT algorithm between them for preliminary path planning, then optimize the path at turns with the Bresenham circle-drawing algorithm. First judge whether the path is at a turn; the judgment condition is described by formula (5),
where (x1, y1) are the coordinates of the current node in the actual map, (x2, y2) those of the next node, K1 is the slope between the current node and the next node, and K2 the slope between the current node and the previous node; if the difference between the slopes is greater than 0.45, the current position is judged to be a turn. A coordinate system is then established with (x1, y2) as origin and circle center, (x0, y0) = (0, |y1 - y2|) as the starting point, and r = |y1 - y2| as the radius, and the Bresenham circle-drawing algorithm is applied to round the path into an arc, described by formula (6),
d=3-2r (6)
where d is the initial value of the decision parameter and r the radius of the arc. If d < 0, the update of formula (7) is applied, otherwise that of formula (8), yielding the updated (xi, yj); if xi < yj, (xi, yj) is added to the path and this step is repeated until xi = yj, completing a 1/8 arc path. The other 1/8 arc is then obtained from the symmetry of the circle, giving a complete 1/4 arc path and finishing the path drawing at the turn. The path planning effect is shown in fig. 11.
where (xi, yj) are the current node coordinates, (xi+1, yj+1) the updated node coordinates, and i, j are counting indices;
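The arc construction can be sketched with d = 3 - 2r as the initial decision parameter (formula (6)) and the standard Bresenham circle updates standing in for the unreproduced formulas (7) and (8):

```python
def bresenham_eighth_arc(r):
    """Generate the first 1/8 of a circle of integer radius r.

    d = 3 - 2r is the initial decision parameter (formula (6)); the two
    update rules below are the standard Bresenham ones, used here in
    place of the patent's unreproduced formulas (7) and (8).
    """
    xi, yj = 0, r
    d = 3 - 2 * r
    points = [(xi, yj)]
    while xi < yj:
        if d < 0:                        # midpoint inside circle: keep yj
            d += 4 * xi + 6
        else:                            # midpoint outside: step yj inward
            d += 4 * (xi - yj) + 10
            yj -= 1
        xi += 1
        points.append((xi, yj))
    return points

def quarter_arc(r):
    """Mirror the 1/8 arc across the diagonal to get the full 1/4 arc."""
    eighth = bresenham_eighth_arc(r)
    mirrored = [(y, x) for x, y in reversed(eighth) if (y, x) not in eighth]
    return eighth + mirrored

arc = quarter_arc(5)  # quarter-circle of radius 5 grid cells
```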
(c) after path planning is finished, convert the depth information of the depth camera into two-dimensional data with an image conversion toolkit, give the initial position of the omnidirectional mobile robot, and localize it with the Monte Carlo localization method. During motion the YOLOv5 target detection thread runs at the same time; when a specific target is detected, its position in the map is calculated and compared with the historical position obtained in step 3 to judge whether the position of the omnidirectional mobile robot needs correction; the judgment condition is described by formula (9),
|xn-xp|+|yn-yp|>0.2 (9)
where (xn, yn) are the detected position coordinates of the current target and (xp, yp) the historical position coordinates of the target. When the judgment condition is satisfied, the position of the omnidirectional mobile robot is corrected, as described by formula (10),
where (xt, yt) is the current position of the omnidirectional mobile robot. To avoid repeated correction calculations, once a target has been recognized and the position of the omnidirectional mobile robot corrected, the robot moves for 3 seconds before the detection thread is restarted for the next correction.
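The correction gate of formula (9), together with a stand-in for the unreproduced formula (10) (shifting the robot pose by the landmark error, which is an assumption, not the patent's equation), can be sketched as:

```python
CORRECTION_GATE = 0.2   # metres, the threshold of formula (9)

def needs_correction(detected, historical):
    """Formula (9): Manhattan distance between detected and stored landmark."""
    (xn, yn), (xp, yp) = detected, historical
    return abs(xn - xp) + abs(yn - yp) > CORRECTION_GATE

def correct_pose(robot_xy, detected, historical):
    """Shift the robot pose by the landmark error.

    Formula (10) is not reproduced in the text; subtracting the
    detected-minus-historical offset is an assumed stand-in for it.
    """
    xt, yt = robot_xy
    dx = detected[0] - historical[0]
    dy = detected[1] - historical[1]
    return (xt - dx, yt - dy)

flag = needs_correction((3.3, 1.0), (3.0, 1.0))    # 0.3 m error > 0.2 m gate
flag2 = needs_correction((3.1, 1.0), (3.0, 1.0))   # 0.1 m error, no action
pose = correct_pose((1.0, 2.0), (3.3, 1.0), (3.0, 1.0))
```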
Claims (1)
1. An omnidirectional mobile robot navigation method based on visual mapping is characterized by comprising the following steps:
step 1, constructing an omnidirectional mobile robot hardware platform, adopting omnidirectional Mecanum wheels and matching with a motor for driving, installing a depth camera at the front end of the top layer of the omnidirectional mobile robot as a visual sensor for image acquisition, installing an industrial personal computer and a lower computer at the bottom layer of the omnidirectional mobile robot, and matching with a power supply and a display device;
step 2, building an omnidirectional mobile robot software system, comprising the following substeps:
(a) calibrating internal parameters of a depth camera, preparing a checkerboard calibration board, recording a depth camera data set by using an image recording tool, reducing the image acquisition frequency to 4Hz, calibrating the internal parameters of the depth camera by using a calibration tool, describing an internal parameter matrix K by using a formula (1),
where fx is the focal length in pixels along the x-axis of the depth camera, fy the focal length in pixels along the y-axis, u0 the offset in pixels of the optical axis from the image center along the x-axis, and v0 the offset in pixels along the y-axis;
(b) establishing a world coordinate system in the omnidirectional mobile robot operating system with the center of the second layer of the omnidirectional mobile robot as the origin, starting the depth camera and subscribing to the nodes it publishes, calibrating the external parameters of the depth camera, and determining the extrinsic rotation matrix R and extrinsic translation matrix T relating the depth camera coordinate system to the world coordinate system, as described by formula (2),
where P0 is the depth camera coordinate system, Pw the world coordinate system, R the extrinsic rotation matrix, T the extrinsic translation matrix, (x0, y0, z0) the three-dimensional coordinates of an object in the camera coordinate system, and (xw, yw, zw) its three-dimensional coordinates in the world coordinate system;
(c) determining the control direction of each motor according to the kinematics of the Mecanum wheels and writing the corresponding direction and speed control functions, so that the omnidirectional mobile robot can be driven in ten directions: forward, backward, left, right, the four diagonals, and clockwise and counterclockwise rotation; the kinematic decomposition of the Mecanum wheels is described by formula (3),
where vw1, vw2, vw3, vw4 are the speeds of the four Mecanum wheels, vtx is the motion speed along the x-axis, vty the motion speed along the y-axis, ω the rotational angular speed, a the width of the omnidirectional mobile robot, and b its length;
step 3, performing recognition training on the working environment based on YOLOv5, specifically comprising the following substeps:
(a) starting the depth camera node to acquire images of the working environment, extracting one frame every 0.1 s; making a data set with an image recording tool and marking the door and safety-exit features in the data set with a picture annotation tool to obtain the corresponding information files containing the label name and position information; dividing the data set into a training set, a test set, and a verification set in proportions of 90%, 5%, and 5% respectively;
(b) configuring the YOLOv5 algorithm: changing the training names to the corresponding feature label names, selecting a training network, setting the anchors in the training network to match the number of labeled features, and modifying the hyper-parameters of the training network; the number of filters is described by formula (4),
filter=3*(classes+5) (4)
where classes is the number of labeled classes and filter is the filter-count parameter among the hyper-parameters;
(c) training and testing YOLOv5: selecting the official pre-training weights, then training for 50000 iterations to bring the loss function below 0.3; testing the trained weights on the verification set to ensure that the detection success rate for each detection target is above 98%, otherwise continuing to optimize the weights; finally, packaging the trained weights and the detection algorithm into ROS nodes to prepare for the next step;
step 4, constructing a visual navigation map based on ORB _ SLAM3, driving the omnidirectional mobile robot to move in a working environment, and constructing the visual navigation map, wherein the visual navigation map specifically comprises the following substeps:
(a) starting, in order, the omnidirectional mobile robot driving node, the keyboard control node, the depth camera node, and the ORB_SLAM3 sparse point cloud mapping node, and driving the robot by keyboard through the whole working area so that the depth camera scans all of it; the ORB_SLAM3 node subscribes to the RGB images and depth maps published by the depth camera, selects key frames to build a sparse point cloud map, and records the depth camera pose at the same time, storing it in a trajectory map file; increasing the number of feature points selected per key frame converts the sparse point cloud map into a semi-dense point cloud map, whose noise points are removed by voxel filtering and pass-through filtering from the point cloud library;
(b) starting the octree service, creating an octree object with its resolution set to 0.05, reading the information in the trajectory map file built by ORB_SLAM3, including the frame numbers, positions, and attitude quaternions of the key frames, and converting and splicing the feature points of the key frames into an octree map; color information is added according to height to ease observation and to judge whether the result is accurate, and the octree map height is limited to 0.1-2.0 m to filter out irrelevant ground and ceiling information;
(c) starting the YOLOv5 ROS detection node to detect the door and safety-exit features in the working environment; when a specific feature is detected, the detection algorithm frames it with a rectangular box and computes the corresponding centroid from the four vertexes of the box, and the RGB image acquired by the depth camera at that moment is aligned with the depth map to obtain the depth value at the corresponding centroid position; from the depth camera internal parameter matrix K obtained in step 2, the spatial position of the box centroid in the depth camera coordinate system is recovered from its image coordinates, and from the extrinsic translation matrix T and rotation matrix R obtained in step 2, its position in the world coordinate system is obtained; the octree map is then converted into a two-dimensional grid map with the octree service map conversion package, the position coordinates and target information are mapped into the two-dimensional grid map, and the information is published in ROS for navigation and path planning;
and 5, performing navigation and path optimization in the working area, and specifically comprising the following substeps:
(a) setting several navigation points for the omnidirectional mobile robot in the built two-dimensional grid map according to the principle of maximum coverage of the effective area, so that the robot can navigate continuously in the working environment; during navigation a new navigation task is sent only after the robot has reached the next target point from the current one, and not otherwise; the navigation speed is controlled by dynamic parameter feedback, setting the corresponding PID (proportion-integration-differentiation) parameters and the robot's navigation speed at the dynamic parameter configuration client, with different navigation speeds on different road sections;
(b) after the navigation points have been selected, first running the RRT algorithm between them for preliminary path planning, then optimizing the path at turns with the Bresenham circle-drawing algorithm; first judge whether the path is at a turn, the judgment condition being described by formula (5),
where (x1, y1) are the coordinates of the current node in the actual map, (x2, y2) those of the next node, K1 is the slope between the current node and the next node, and K2 the slope between the current node and the previous node; if the difference between the slopes is greater than 0.45, the current position is judged to be a turn; a coordinate system is then established with (x1, y2) as origin and circle center, (x0, y0) = (0, |y1 - y2|) as the starting point, and r = |y1 - y2| as the radius, and the Bresenham circle-drawing algorithm is applied to round the path into an arc, described by formula (6),
d=3-2r (6)
where d is the initial value of the decision parameter and r the radius of the arc; if d < 0, the update of formula (7) is applied, otherwise that of formula (8), yielding the updated (xi, yj); if xi < yj, (xi, yj) is added to the path and this step is repeated until xi = yj, completing a 1/8 arc path; the other 1/8 arc is then obtained from the symmetry of the circle, giving a complete 1/4 arc path and finishing the path drawing at the turn;
where (xi, yj) are the current node coordinates, (xi+1, yj+1) the updated node coordinates, and i, j are counting indices;
(c) after path planning is finished, converting the depth information of the depth camera into two-dimensional data with an image conversion toolkit, giving the initial position of the omnidirectional mobile robot, and localizing it with the Monte Carlo localization method; during motion the YOLOv5 target detection thread runs at the same time, and when a specific target is detected, its position in the map is calculated and compared with the historical position obtained in step 3 to judge whether the position of the omnidirectional mobile robot needs correction, the judgment condition being described by formula (9),
|xn-xp|+|yn-yp|>0.2 (9)
where (xn, yn) are the detected position coordinates of the current target and (xp, yp) the historical position coordinates of the target; when the condition is satisfied, the position of the omnidirectional mobile robot is corrected, as described by formula (10),
where (xt, yt) is the current position of the omnidirectional mobile robot; to avoid repeated correction calculations, once a target has been recognized and the position of the omnidirectional mobile robot corrected, the robot moves for 3 seconds before the detection thread is restarted for the next correction.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110328801.0A CN113096190B (en) | 2021-03-27 | 2021-03-27 | Omnidirectional mobile robot navigation method based on visual mapping |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113096190A true CN113096190A (en) | 2021-07-09 |
CN113096190B CN113096190B (en) | 2024-01-05 |
Family
ID=76670167
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110328801.0A Active CN113096190B (en) | 2021-03-27 | 2021-03-27 | Omnidirectional mobile robot navigation method based on visual mapping |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113096190B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114089753A (en) * | 2021-11-11 | 2022-02-25 | 江苏科技大学 | Night astronomical assistant observation method based on wheeled robot |
CN114594768A (en) * | 2022-03-03 | 2022-06-07 | 安徽大学 | Mobile robot navigation decision-making method based on visual feature map reconstruction |
CN117234216A (en) * | 2023-11-10 | 2023-12-15 | 武汉大学 | Robot deep reinforcement learning motion planning method and computer readable medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109682381A (en) * | 2019-02-22 | 2019-04-26 | 山东大学 | Big visual field scene perception method, system, medium and equipment based on omnidirectional vision |
CN110501017A (en) * | 2019-08-12 | 2019-11-26 | 华南理工大学 | A kind of Mobile Robotics Navigation based on ORB_SLAM2 ground drawing generating method |
CN111637892A (en) * | 2020-05-29 | 2020-09-08 | 湖南工业大学 | Mobile robot positioning method based on combination of vision and inertial navigation |
Also Published As
Publication number | Publication date |
---|---|
CN113096190B (en) | 2024-01-05 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||