CN111399505B - Mobile robot obstacle avoidance method based on neural network - Google Patents

Mobile robot obstacle avoidance method based on neural network

Info

Publication number
CN111399505B
CN111399505B (application CN202010173908.8A)
Authority
CN
China
Prior art keywords
robot
obstacle avoidance
obstacle
neural network
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010173908.8A
Other languages
Chinese (zh)
Other versions
CN111399505A (en)
Inventor
朱威
汤如
巫浩奇
龙德
何德峰
郑雅羽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN202010173908.8A
Publication of CN111399505A
Application granted
Publication of CN111399505B
Legal status: Active
Anticipated expiration

Classifications

    • G05D1/024: Control of position or course in two dimensions specially adapted to land vehicles, using optical obstacle or wall sensors in combination with a laser
    • G05D1/0214: Control of position or course in two dimensions specially adapted to land vehicles, with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0221: Control of position or course in two dimensions specially adapted to land vehicles, with means for defining a desired trajectory involving a learning process
    • G05D1/0223: Control of position or course in two dimensions specially adapted to land vehicles, with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0246: Control of position or course in two dimensions specially adapted to land vehicles, using optical position detecting means, using a video camera in combination with image processing means
    • G05D1/0255: Control of position or course in two dimensions specially adapted to land vehicles, using acoustic signals, e.g. ultrasonic signals
    • G05D1/0257: Control of position or course in two dimensions specially adapted to land vehicles, using a radar
    • G06F18/2415: Classification techniques relating to the classification model, based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06N3/045: Computing arrangements based on biological models; neural networks; combinations of networks
    • G06N3/08: Computing arrangements based on biological models; neural networks; learning methods
    • G06V10/267: Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V20/10: Scenes; scene-specific elements; terrestrial scenes
    • Y02T10/40: Climate change mitigation technologies related to transportation; engine management systems

Abstract

The invention relates to a mobile robot obstacle avoidance method based on a neural network, which comprises the following steps: (1) determining obstacle avoidance parameters according to the size and the driving mode of the robot; (2) inputting a depth image, preprocessing it, and segmenting a foreground region; (3) building an end-to-end obstacle avoidance neural network; (4) constructing a data set and training the obstacle avoidance neural network; (5) collecting a depth image and applying the same preprocessing as in step (2) to obtain a foreground region; (6) if a large obstacle exists in the foreground region, performing large-obstacle avoidance processing and then proceeding to step (8), otherwise proceeding to step (7); (7) inputting the foreground-region image into the obstacle avoidance neural network, which outputs the steering angle and moving speed of the robot; (8) obstacle avoidance is completed. The invention performs mobile robot obstacle avoidance with a depth image and a convolutional neural network, requires no manual feature extraction or parameter tuning, and can avoid obstacles accurately in complex outdoor scenes.

Description

Mobile robot obstacle avoidance method based on neural network
Technical Field
The invention belongs to the field of robots, and particularly relates to a mobile robot obstacle avoidance method based on a neural network.
Background
Robots can take over heavy and tedious physical labor from humans, and robotics draws on knowledge from many disciplines such as mechanics, electronics, sensors, computing and artificial intelligence. Robots come in many types and forms; the mainstream forms on the market are mobile robots and robotic arms. Mobile robots are widely used in factories, shopping malls, warehouses and similar settings, and can also serve as mobile platforms for other robots: a wheeled robot with a robotic arm can assist with tasks such as opening doors and delivering tea, and a wheeled robot with a multi-axis gimbal can perform tasks such as target tracking and inspection. In these two typical applications, the robotic arm or gimbal and the mobile robot complement each other. The most important capability of a mobile robot is its mobility, which to a large extent determines its performance; reaching a given position quickly, accurately and safely is therefore of great significance. During movement, obstacle avoidance is the most important function: it not only avoids contact between the robot and other objects but also largely protects the robot from damage. Obstacle avoidance should be timely and accurate, and should be precise enough to minimize the impact on autonomous navigation while guaranteeing timeliness.
Current obstacle avoidance techniques take various forms: the environment can be perceived through vision or through ranging sensors such as ultrasonic, laser, radar and TOF devices, and the robot's motion is adjusted after information such as the distance, size, shape and position of the obstacle has been acquired, so that the robot does not collide with obstacles while moving. Zhang Wuyang et al. proposed a deep-learning-based obstacle avoidance method for unmanned aerial vehicles (see "Monocular vision obstacle avoidance method for quadrotor UAVs based on deep learning", Journal of Computer Applications, 2019, 39(04): 1001-1005). The method uses the Faster R-CNN framework to select the detected target, computes the bounding-box size and then estimates the distance between the obstacle and the UAV with similar triangles. Although obstacle avoidance can be achieved, the error reaches ±0.5 m, and the UAV must fly at low speed to ensure real-time operation.
The patent application CN201910454798.X discloses a laser and vision fusion obstacle avoidance method. It mainly uses a YOLO object detection network to detect specific obstacles, but it does not avoid obstacles directly from the visual information of the obstacle; the core obstacle avoidance function relies on the Depthimage_to_laser package of the robot operating system ROS, whose principle is to convert depth image data into laser data and fuse it with the lidar data to realize robot obstacle avoidance. In this approach the obstacle avoidance itself has no direct relation to deep learning: deep learning is only used to extract the pixel positions of obstacles in the RGB image, while the core obstacle avoidance relies on the package that converts the depth map into radar data, which is then passed to the navigation package. Although the obstacle avoidance effect is better than that of a single laser sensor, the drawback is that YOLO can only box obstacles already present in the data set and cannot detect unknown obstacles in complex scenes. The patent application CN201910388213.9 discloses an automatic driving control method based on deep learning, which obtains the vehicle pose with a combined GPS and IMU scheme and evaluates the current road condition with three indicators, namely the maximum continuous road surface length, the road surface gradient and the farthest reachable distance on the road surface, to obtain a locally optimal path. The GPS positioning allows the vehicle to handle crossroads and roundabouts quickly, but the detection of transitions between different road surfaces and surface materials is not ideal and easily causes misrecognition.
Disclosure of Invention
In order to solve the problem of possible collisions during robot movement, the invention provides a mobile robot obstacle avoidance method based on a neural network, which comprises the following six parts: determination of robot obstacle avoidance parameters, preprocessing of the input image, design of the end-to-end obstacle avoidance neural network, data set production and network model training, avoidance of large obstacles, and obstacle avoidance neural network model inference. The specific method is as follows:
(1) Determining obstacle avoidance parameters according to the size and the driving mode of the robot;
Obstacle avoidance parameters are determined according to the size and type of the robot. The distance Dist between the robot and an obstacle is defined as the minimum distance from the circumscribed circle of the robot's projection onto the horizontal plane to the circumscribed circle of the obstacle's projection onto the horizontal plane; Dist can be calculated from the depth map after the depth camera has been calibrated. According to the distance between the obstacle and the robot, the invention divides the space into three types of region:
(1) Safe region: Dist is greater than or equal to TH_dmax. In this case there is no obstacle in front of the robot, or the obstacle is beyond the range of the depth camera, and the robot can pass normally. TH_dmax has a value range of [5, 15] meters.
(2) Obstacle region: the region where Dist is greater than TH_dmin and less than TH_dmax. TH_dmin has a value range of [0.5, 2] meters.
(3) Collision region: Dist is less than or equal to TH_dmin. A collision is very likely, and the robot stops moving immediately.
The steering structure of the robot is an Ackermann steering structure. The steering angle of the robot is defined as the angle between the line through the centers of the two rear (driving) wheels and the line through the centers of the two front (steering) wheels; if these two lines are parallel, the steering angle is 0°. The moving speed of the robot is defined as the linear speed of the rear wheels.
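As a minimal illustration of this three-region division, the sketch below maps a distance reading Dist to a region label; the concrete threshold values and the function name are assumptions chosen within the ranges stated above, not values fixed by the patent.

```python
# Illustrative three-region division; the thresholds are example values inside
# the stated ranges (TH_dmax in [5, 15] m, TH_dmin in [0.5, 2] m).
TH_DMAX = 10.0   # meters: boundary between the obstacle region and the safe region
TH_DMIN = 1.0    # meters: boundary between the collision region and the obstacle region

def classify_region(dist):
    """Map the robot-to-obstacle distance Dist to one of the three regions."""
    if dist >= TH_DMAX:
        return "safe"        # no obstacle within range: pass normally
    if dist > TH_DMIN:
        return "obstacle"    # obstacle avoidance behavior is required
    return "collision"       # stop moving immediately
```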
(2) Inputting a depth image, preprocessing the depth image, and dividing a foreground region;
the working scene of the outdoor mobile robot has high uncertainty, part of background parts in the depth image acquired by the camera can influence the obstacle avoidance of the mobile robot, and if the depth image is directly input into the neural network, the convolution layer can extract the characteristics of the background, so that the obstacle needs to be segmented out first. Therefore, the invention firstly zooms the input depth image, extracts the foreground region and then uses the foreground region for subsequent processing, and comprises the following specific steps:
(2-1) image scaling
Because depth images from different depth cameras have different sizes, the image is first cropped to a suitable proportion and then scaled, which reduces the time of the subsequent traversal and convolution operations.
Taking the center of the input depth image as the reference point, the image is cropped in the four directions up, down, left and right around this point to obtain an image of size Pix × Pix, where Pix ranges over [219, 424]. The cropped image is then scaled to 219 × 219 while keeping the aspect ratio at 1. The scaled image is called the original image; it is copied to obtain a backup image identical to the original.
(2-2) binarizing the backup map
Because the difference between the foreground and background gray values in a depth map changes with the scene (the difference can be very large in open outdoor settings and very small indoors or in corridors), it is difficult to segment the foreground with a fixed threshold. The invention therefore uses the Otsu (OTSU) method for segmentation. In the image segmented by the Otsu method, the foreground is set to white and the background to black. An image area formed by adjacent foreground pixels with the same pixel value in the binarized image is called a connected block.
(2-3) extraction of original foreground region
Each white connected block in the segmented backup image is counted; the difference between the maximum and minimum abscissa of its boundary is taken as the block width, and the difference between the maximum and minimum ordinate of its boundary as the block height. Connected blocks whose width and height are smaller than the threshold TH_wh are rejected and set to black; TH_wh has a value range of [2, 5].
(2-4) The white area remaining in the backup image after the small connected blocks have been removed is used as a mask to extract the region of interest from the original image: the corresponding foreground region of the depth image is retained and the background region is set to black.
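As an illustration of steps (2-1) to (2-4), a possible OpenCV rendering of the preprocessing is sketched below. The function name, the choices Pix = 424 and TH_wh = 3, and the use of cv2.connectedComponentsWithStats for the connected-block statistics are assumptions for illustration, not details fixed by the patent.

```python
import cv2
import numpy as np

def preprocess_depth(depth, pix=424, th_wh=3):
    """Crop, scale, binarize (Otsu) and mask a depth image as in step (2).

    depth: single-channel depth image; pix and th_wh are example choices
    within the ranges given in the text (Pix in [219, 424], TH_wh in [2, 5]).
    Returns the masked foreground image and the binary mask, both 219 x 219.
    """
    h, w = depth.shape
    cy, cx = h // 2, w // 2
    half = pix // 2
    crop = depth[cy - half:cy + half, cx - half:cx + half]          # center crop Pix x Pix
    orig = cv2.resize(crop, (219, 219), interpolation=cv2.INTER_NEAREST)
    backup = orig.copy()

    # Otsu binarization on the backup: foreground white, background black
    u8 = cv2.normalize(backup, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, binary = cv2.threshold(u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Reject connected blocks whose width and height are both below TH_wh
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    mask = np.zeros_like(binary)
    for i in range(1, n):                                           # label 0 is the background
        bw, bh = stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]
        if bw < th_wh and bh < th_wh:
            continue                                                # small block: stays black
        mask[labels == i] = 255

    # Keep the foreground of the original image, set the background to black
    foreground = np.where(mask == 255, orig, 0)
    return foreground, mask
```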
(3) End-to-end obstacle avoidance neural network construction
The obstacle avoidance neural network is modified on the basis of the deep convolutional neural network AlexNet. AlexNet is a seminal deep convolutional neural network in the image field; it contains eight layers in total, the first five being convolutional layers and the last three fully connected layers. The first two convolutional layers and the fifth convolutional layer are each followed by a pooling layer; the other two convolutional layers have no pooling. The obstacle avoidance neural network modifies the AlexNet network as follows:
(3-1) modification of convolutional and pooling layers
In the convolution layer, since the effect of the extracted features of the 11×11 convolution kernels of the original network is not ideal, the convolution filter size of the first layer is set to 3×3, the step size is 2, the number of convolution kernels is increased from 96 to 256, and the input picture is changed from 227×227×3 to 219×219×1.
A pooling layer placed after a convolutional layer effectively reduces the size of the feature matrix and the number of parameters in the final fully connected layers; pooling also speeds up computation and helps prevent overfitting. The first pooling layer is removed to prevent premature loss of image detail, which would make it difficult for the obstacle avoidance network to identify small obstacles. Since the convolution kernel size has already been reduced, the stride of the pooling layers in the second and fifth layers is set to 1 and the window size to 2 × 2 in order to keep overlapping pooling, which preserves the generalization of the model while speeding up computation.
(3-2) deletion of LRN layer and addition of normalization layer
The LRN layer in AlexNet simulates the "lateral inhibition" function of neurobiology: it normalizes the data without changing its size or dimensions in order to increase the generalization ability of the model. In practice the LRN layer brings no significant improvement and only increases the amount of computation, so all LRN layers are deleted to simplify the model.
Because the LRN layer does not deliver a good performance-to-cost ratio, a Batch Normalization layer is added after the pooling operation of the fifth layer (the last convolutional layer). This normalizes each feature within each batch during the training phase and allows the model to use a higher learning rate lr, which reduces training time.
(3-3) substitution of an activation function
Each of the first five layers applies an activation function after its convolution filtering and pooling operations. Activation functions such as Sigmoid and Tanh are computationally expensive and suffer from gradient vanishing, and the LRN layers have already been removed; therefore, to preserve the generalization ability of the model, the invention replaces the ReLU activation function of AlexNet with Swish. Swish is an activation function proposed by Google Brain that alleviates the problem of ReLU weights that can no longer be updated. Replacing the activation function increases training time but improves model generalization.
(3-4) replacement of full connection layer
The invention replaces the last three fully connected layers of AlexNet with global average pooling (Global Average Pooling, GAP), which reduces model inference time and improves the predictive performance of the model.
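A possible PyTorch rendering of the modified network described in (3-1) to (3-4) is sketched below. The first convolution (3 × 3, stride 2, 256 kernels, single-channel 219 × 219 input), the 2 × 2 stride-1 pooling in the second and fifth layers, the Batch Normalization after the last convolution, the Swish activations and the GAP replacing the fully connected layers follow the text; the channel widths of the middle layers and the split of the GAP output into seven angle levels and three speed gears are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class Swish(nn.Module):
    """Swish activation: x * sigmoid(x)."""
    def forward(self, x):
        return x * torch.sigmoid(x)

class ObstacleAvoidanceNet(nn.Module):
    """AlexNet-derived obstacle avoidance network following (3-1) to (3-4).

    Input: a 1 x 219 x 219 depth foreground image.  Output: logits for the
    seven steering-angle levels and the three speed gears; splitting the GAP
    output into two heads and the middle channel widths are assumed details.
    """
    def __init__(self, n_angle=7, n_speed=3):
        super().__init__()
        act = Swish()
        self.features = nn.Sequential(
            # layer 1: 3x3 convolution, stride 2, 256 kernels, pooling removed
            nn.Conv2d(1, 256, kernel_size=3, stride=2), act,
            # layer 2: convolution + overlapping 2x2 pooling with stride 1
            nn.Conv2d(256, 256, kernel_size=3, padding=1), act,
            nn.MaxPool2d(kernel_size=2, stride=1),
            # layers 3 and 4: plain convolutions (channel widths assumed)
            nn.Conv2d(256, 384, kernel_size=3, padding=1), act,
            nn.Conv2d(384, 384, kernel_size=3, padding=1), act,
            # layer 5: convolution + overlapping pooling + Batch Normalization
            nn.Conv2d(384, n_angle + n_speed, kernel_size=3, padding=1), act,
            nn.MaxPool2d(kernel_size=2, stride=1),
            nn.BatchNorm2d(n_angle + n_speed),
        )
        self.gap = nn.AdaptiveAvgPool2d(1)        # GAP replaces the fully connected layers
        self.n_angle = n_angle

    def forward(self, x):
        out = self.gap(self.features(x)).flatten(1)                # (N, n_angle + n_speed)
        return out[:, :self.n_angle], out[:, self.n_angle:]        # angle logits, speed logits
```

With this reading, the GAP output is used directly as the classification output, matching the description in (3-4) and in the embodiment below.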
(4) Constructing a data set, and training the obstacle avoidance neural network;
(4-1) acquisition of data
The mainstream robot obstacle avoidance methods at present rely on radar, ultrasound and binocular vision, so there is little research on end-to-end obstacle avoidance with depth images and no unified public data set; a data set therefore has to be made in-house. The data set production process of the invention is as follows: the robot is placed in an outdoor environment, obstacles of different heights and volumes are placed in front of it, and the robot is driven around the obstacles with a remote controller while the depth images of the obstacles, the steering angle of the front wheels and the driving speed of the rear wheels are recorded under various conditions and saved locally with timestamps. The types of obstacles collected should be as varied as possible, their positions and poses should differ as much as possible, and the scene should also change randomly. The invention only collects data in the safe region and the obstacle region of step (1), not in the collision region.
(4-2) Processing of small steering angles
After data acquisition is completed, the data are counted. A small steering angle is defined as one whose absolute value is smaller than θ, where θ ranges over [5°, 10°]. Samples in which the robot encounters an obstacle with only a small steering angle are nonlinearly amplified to ensure the training effect. The nonlinear function is the normal distribution N(0, 100), i.e. mean 0 and standard deviation 10; after this Gaussian perturbation the sample steering angles are spread over [-19.6°, 19.6°].
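One possible reading of this amplification step is sketched below: obstacle-encounter samples with a small steering angle are duplicated with their angles perturbed by draws from N(0, 100), so the augmented angles spread over roughly [-19.6°, 19.6°] (about ±1.96 standard deviations). The number of copies and the clipping bound are illustrative assumptions.

```python
import random

def augment_small_angle(samples, theta=9.0, sigma=10.0, copies=3):
    """Amplify obstacle-encounter samples whose |steering angle| < theta.

    samples: iterable of (image, angle_deg, speed) tuples.  Each small-angle
    sample is duplicated `copies` times with its angle perturbed by N(0, sigma^2);
    theta = 9 degrees matches the embodiment, while copies and the clipping
    bound are illustrative choices.
    """
    augmented = list(samples)
    for image, angle, speed in samples:
        if abs(angle) < theta:
            for _ in range(copies):
                jitter = random.gauss(0.0, sigma)
                new_angle = max(-19.6, min(19.6, angle + jitter))
                augmented.append((image, new_angle, speed))
    return augmented
```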
(4-3) labeling of datasets
The control command is paired with the depth map at the same moment. For a rear-wheel-drive, front-wheel-steering robot, the label is defined as the steering angle and the moving speed. For a robot with a two-wheel differential drive structure, the steering angle is defined by the speed difference between the left and right wheels: if the two wheel speeds are exactly equal the steering angle is 0; if the left wheel is faster than the right wheel it is a right turn, and otherwise a left turn. The steering angle is defined as 0 when the robot moves straight (forward or backward), negative when it turns left and positive when it turns right.
The steering angles in the labels are divided into seven integer levels, namely [-30°, -20°, -10°, 0°, 10°, 20°, 30°]; the steering angle of every sample is assigned to one of these levels by rounding toward zero. The moving speed of the robot in the label is divided into three gears: full speed, half speed and stop.
After the label of each frame has been determined, samples whose steering angle is 0° but whose moving speed is not full speed are manually removed, so that the model does not decelerate when going straight during inference.
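A sketch of the label quantization just described (seven angle levels obtained by rounding toward zero, three speed gears) is given below; the speed thresholds used to separate stop, half speed and full speed are assumptions, since the text does not specify them.

```python
ANGLE_LEVELS = [-30, -20, -10, 0, 10, 20, 30]   # degrees
SPEED_LEVELS = ["stop", "half", "full"]

def quantize_angle(angle_deg):
    """Round the steering angle toward zero onto the seven 10-degree levels."""
    level = int(angle_deg / 10.0) * 10          # int() truncates toward zero
    level = max(-30, min(30, level))
    return ANGLE_LEVELS.index(level)

def quantize_speed(speed, full_speed):
    """Map the recorded speed to stop / half / full speed (thresholds assumed)."""
    if speed <= 0.05 * full_speed:
        return SPEED_LEVELS.index("stop")
    if speed <= 0.75 * full_speed:
        return SPEED_LEVELS.index("half")
    return SPEED_LEVELS.index("full")
```

A sample whose quantized angle is 0 but whose quantized speed is not full speed would then be discarded, as described above.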
(4-4) obstacle avoidance neural network model training
The training algorithm is stochastic gradient descent with a momentum of 0.9. The batch size of the neural network is reduced from 256 to 32. To prevent the model from getting stuck at local minima and failing to converge with this smaller batch size, the initial learning rate lr is raised from 0.01 to 0.02, which is equivalent to increasing the step size so that the model can escape local minima.
During training, lr = 0.02 is used for n_1 rounds, i.e. n_1 epochs; then lr = 0.002 for n_2 epochs; then lr = 0.0002 for n_3 epochs; finally lr is reduced to 0.00002. Training ends when the accuracy on the validation set hardly improves any more, the improvement threshold being set by a person skilled in the art; if the validation accuracy is still increasing, the number of epochs can be increased appropriately. n_1 ranges over [1, 90], n_2 over [91, 100], and n_3 over [80, 100]; the values of n depend on the training samples and the batch size. Each learning-rate setting is kept for at least 10 epochs before a judgment is made, and the learning rate is changed when the validation accuracy no longer increases from epoch to epoch.
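A sketch of this staged training schedule is given below, assuming the network of step (3), a DataLoader that yields (image, angle label, speed label) batches of size 32, and cross-entropy losses on the two outputs; the validation-accuracy check used to stop a stage is an illustrative reading of the criterion described above.

```python
import torch
import torch.nn as nn

def evaluate(model, loader):
    """Validation accuracy: both the angle level and the speed gear must match."""
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for images, angle_labels, speed_labels in loader:
            angle_logits, speed_logits = model(images)
            ok = (angle_logits.argmax(1) == angle_labels) & (speed_logits.argmax(1) == speed_labels)
            correct += ok.sum().item()
            total += ok.numel()
    return correct / max(total, 1)

def train(model, train_loader, val_loader,
          stages=((0.02, 90), (0.002, 100), (0.0002, 100), (0.00002, 10))):
    """Staged SGD training with momentum 0.9; (lr, max epochs) pairs follow the text.

    Each stage runs at least 10 epochs and ends early once the validation
    accuracy stops improving, which is one reading of the criterion above.
    """
    criterion = nn.CrossEntropyLoss()
    best_acc = 0.0
    for lr, max_epochs in stages:
        optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
        for epoch in range(max_epochs):
            model.train()
            for images, angle_labels, speed_labels in train_loader:
                optimizer.zero_grad()
                angle_logits, speed_logits = model(images)
                loss = criterion(angle_logits, angle_labels) + criterion(speed_logits, speed_labels)
                loss.backward()
                optimizer.step()
            acc = evaluate(model, val_loader)
            if epoch + 1 >= 10 and acc <= best_acc:
                break                  # accuracy no longer increases: move to the next lr
            best_acc = max(best_acc, acc)
```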
(5) Collecting a depth image, and applying the same preprocessing as in step (2) to obtain a foreground region;
(6) If a large obstacle exists in the foreground region, performing large-obstacle avoidance processing and then proceeding to step (8), otherwise proceeding to step (7);
(6-1) discrimination of Large obstacle
According to the robot collision parameters determined in step (1), within the collision region, if the pixel width h of the foreground segmented in step (2) is larger than 219 - TH_width, where 219 is the image width fed into the obstacle avoidance network and TH_width is a parameter with a value range of [1, 50], the obstacle occupies a large part of the camera's field of view. In this case it is difficult for the obstacle avoidance network to infer a valid obstacle avoidance command, so the obstacle avoidance procedure of step (6-2) must be used on its own for the large obstacle instead of network inference; otherwise the large-obstacle criterion is not met, step (6-2) is skipped, and the method proceeds to step (7).
(6-2) obstacle avoidance Path planning for Large obstacle
Some obstacles are far larger than the robot and cannot be avoided quickly, so a detour method is used. Let the starting point and the target point be q_start and q_goal respectively. At the initial moment i = 0, the intersection of the straight line from q_start to q_goal with the obstacle is taken as the impact point q_L. The robot first moves around the obstacle until it returns to q_L. The point on the obstacle boundary closest to the target is then determined and the robot moves to it; this point is called the departure point q_H. From q_H the robot again moves in a straight line toward the target. If q_goal can be reached, obstacle avoidance is complete; otherwise the impact point q_L and departure point q_H are updated until the target point q_goal is reached. Although the efficiency of this algorithm is low, it guarantees that the robot can reach any reachable target. While the robot is planning the avoidance path around a large obstacle, step (7) is not executed.
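The detour described here behaves like a Bug1-style strategy. The sketch below computes the impact point q_L and the departure point q_H for an obstacle whose boundary is given as a set of sampled points; the boundary sampling and the nearest-point computations are illustrative simplifications, and the actual boundary-following motion of the robot is not modeled.

```python
import numpy as np

def plan_large_obstacle_detour(q_start, q_goal, boundary):
    """Bug1-style detour sketch: compute the impact point q_L and departure point q_H.

    boundary: (N, 2) array of points sampled along the obstacle boundary.
    The robot is assumed to circle the obstacle from q_L back to q_L, then
    travel to q_H and resume the straight line towards q_goal.
    """
    q_start = np.asarray(q_start, dtype=float)
    q_goal = np.asarray(q_goal, dtype=float)
    boundary = np.asarray(boundary, dtype=float)

    # Impact point q_L: boundary point closest to the start-goal line segment
    d = q_goal - q_start
    t = np.clip(((boundary - q_start) @ d) / (d @ d), 0.0, 1.0)
    closest_on_segment = q_start + t[:, None] * d
    q_L = boundary[np.argmin(np.linalg.norm(boundary - closest_on_segment, axis=1))]

    # Departure point q_H: boundary point closest to the target; the robot
    # reaches it after circling the obstacle once and returning to q_L
    q_H = boundary[np.argmin(np.linalg.norm(boundary - q_goal, axis=1))]

    # From q_H the robot drives straight towards q_goal again; if it is blocked
    # once more, the same procedure repeats with updated q_L and q_H.
    return q_L, q_H
```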
(7) Inputting an image of a foreground area into the obstacle avoidance neural network, and outputting a steering angle and a moving speed of the robot;
A depth image is acquired with the depth camera and preprocessed as in step (2). When no large obstacle exists in the foreground region, the processed depth image is fed into the obstacle avoidance neural network trained in step (4) for inference, which outputs the steering angle and the moving speed. The inferred steering angle is one of the seven angle levels of step (4-3), and the moving speed is one of the three gears.
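Putting steps (5) to (7) together, one online decision step might look like the following sketch. It reuses preprocess_depth, classify_region and the network from the earlier sketches; the foreground-width measurement from the mask and the gear ordering are assumptions for illustration.

```python
import torch

ANGLE_LEVELS = [-30, -20, -10, 0, 10, 20, 30]    # degrees, as in step (4-3)
SPEED_LEVELS = ["stop", "half", "full"]           # same ordering as the labelling sketch

def infer_command(model, depth_frame, dist, th_width=10):
    """One online decision step: large-obstacle check, then network inference.

    depth_frame: raw depth image; dist: robot-to-obstacle distance from step (1).
    Returns "large_obstacle" when, inside the collision region, the foreground
    is wider than 219 - TH_width pixels; otherwise (angle level, speed gear).
    """
    foreground, mask = preprocess_depth(depth_frame)          # step (2)/(5)
    cols = mask.any(axis=0).nonzero()[0]
    width = 0 if cols.size == 0 else int(cols[-1] - cols[0] + 1)
    if classify_region(dist) == "collision" and width > 219 - th_width:
        return "large_obstacle"        # hand over to the detour planner of step (6-2)

    x = torch.from_numpy(foreground).float().unsqueeze(0).unsqueeze(0)  # 1 x 1 x 219 x 219
    model.eval()
    with torch.no_grad():
        angle_logits, speed_logits = model(x)                  # step (7)
    return ANGLE_LEVELS[angle_logits.argmax(1).item()], SPEED_LEVELS[speed_logits.argmax(1).item()]
```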
(8) Obstacle avoidance is completed.
Compared with the prior art, the invention has the following beneficial effects. Compared with using a depth camera directly as a distance sensor, the neural network method of this patent is more robust, requires no manual feature extraction or parameter tuning, and can avoid obstacles accurately in complex outdoor scenes. It also overcomes the limitation that a 2D lidar can only scan obstacles in a two-dimensional plane, so autonomous obstacle avoidance is possible in areas that the radar cannot scan, and changes of road surface material are still recognized correctly. For large obstacles the avoidance path detours around the obstacle and the optimal departure point is selected; this lengthens the obstacle avoidance time but guarantees the obstacle avoidance performance. The method is only loosely coupled to the lidar or depth camera data, so the robot still does not collide when one of the sensors fails.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is a schematic diagram of a path plan for obstacle avoidance of a large obstacle, with the central solid line portion showing the large obstacle.
Detailed Description
The invention is described in detail below with reference to an embodiment and the figures, but the invention is not limited thereto. The mobile robot obstacle avoidance method is implemented on the robot operating system ROS. The robot platform is a self-built four-wheel mobile robot with rear-wheel drive and an Ackermann front-wheel steering mechanism. The rear-wheel drive motor is a 24 V BLDC (brushless DC) motor with a continuous maximum torque of 3 N·m and a maximum speed of 469 rpm. The front-wheel steering uses a magnetically encoded 380 kg·cm bus servo whose position can be read back.
The depth camera is an Intel RealSense D435i, and the on-board computer uses an Intel i7-6700HQ, 16 GB RAM and an NVIDIA GTX 970 (4 GB GDDR); the operating system is Ubuntu 16.04 with ROS Kinetic, and the deep learning framework is PyTorch.
As shown in fig. 1, a mobile robot obstacle avoidance method based on a neural network mainly includes the following steps:
(1) Determining obstacle avoidance parameters according to the size and the driving mode of the robot;
(2) Inputting a depth image, preprocessing the depth image, and dividing a foreground region;
(3) Building an end-to-end obstacle avoidance neural network;
(4) Constructing a data set, and training the obstacle avoidance neural network;
(5) Collecting a depth image, and applying the same preprocessing as in step 2 to obtain a foreground region;
(6) If a large obstacle exists in the foreground region, performing large-obstacle avoidance processing and then proceeding to step 8, otherwise proceeding to step 7;
(7) Inputting an image of a foreground area into the obstacle avoidance neural network, and outputting a steering angle and a moving speed of the robot;
(8) Obstacle avoidance is completed.
The step (1) specifically comprises:
Obstacle avoidance parameters are determined according to the size and type of the robot. The distance Dist between the robot and an obstacle is defined as the minimum distance from the circumscribed circle of the robot's projection onto the horizontal plane to the circumscribed circle of the obstacle's projection onto the horizontal plane; Dist can be calculated from the depth map after the depth camera has been calibrated. According to the distance between the obstacle and the robot, the space is divided into three types of region:
(1) Safe region: Dist is greater than or equal to TH_dmax. In this case there is no obstacle in front of the robot, or the obstacle is beyond the range of the depth camera, and the robot can pass normally. TH_dmax is 10 meters.
(2) Obstacle region: the region where Dist is greater than TH_dmin and less than TH_dmax. TH_dmin is 1 meter.
(3) Collision region: Dist is less than or equal to TH_dmin. A collision is very likely, and the robot stops moving immediately.
The steering structure of the robot is an Ackermann steering structure. The steering angle of the robot is defined as the angle between the line through the centers of the two rear (driving) wheels and the line through the centers of the two front (steering) wheels; if these two lines are parallel, the steering angle is 0°. The moving speed of the robot is defined as the linear speed of the rear wheels.
The step (2) specifically comprises:
(2-1) Taking the center of the input depth image as the reference point, the image is cropped in the four directions up, down, left and right around this point to obtain an image of size Pix × Pix, where Pix is 424. The cropped image is scaled to 219 × 219 while keeping the aspect ratio at 1. The scaled image is called the original image; it is copied to obtain a backup image identical to the original.
(2-2) segmentation was performed using the Otsu method, and the image segmented by the Otsu method was set to white in the foreground portion and black in the background portion.
(2-3) extraction of original foreground region
Each white connected block in the segmented backup image is counted; the difference between the maximum and minimum abscissa of its boundary is taken as the block width, and the difference between the maximum and minimum ordinate of its boundary as the block height. Connected blocks whose width and height are smaller than the threshold TH_wh are rejected and set to black; TH_wh is 3. The white area remaining in the backup image after the small connected blocks have been removed is used as a mask to extract the region of interest from the original image: the corresponding foreground region of the depth image is retained and the background region is set to black.
The step (3) specifically comprises:
(3-1) the convolution filter size of the first layer of the AlexNet network is set to 3×3, the step size is 2, the number of convolution kernels is increased from 96 to 256, and the input picture is changed from 227×227×3 to 219×219×1.
The pooling of the first layer is removed. Because the convolution kernel size has been reduced, the pooling layer stride is set to 1 and the window size to 2 × 2 in order to keep overlapping pooling, which preserves the generalization of the model while speeding up computation.
(3-2) Because the LRN layers do not deliver a good performance-to-cost ratio, all LRN layers are deleted and Batch Normalization is used instead; this normalizes each feature within each batch during the training phase and allows the model to use a higher learning rate lr to reduce training time.
(3-3) The ReLU activation function of AlexNet is replaced with Swish. Swish is an activation function proposed by Google Brain that alleviates the problem of ReLU weights that can no longer be updated. Replacing the activation function increases training time but improves model generalization.
(3-4) Global average pooling (GAP) is used instead of the last three fully connected layers of AlexNet. This modification reduces model inference time and improves the predictive performance of the model. After GAP replaces the fully connected layers, the softmax classifier is deleted and the output of GAP is used directly as the classification output; the final output values are the inferred steering angle and moving speed of the robot.
The step (4) specifically comprises:
(4-1) data set acquisition
The robot is placed in an outdoor environment, obstacles of different heights and volumes are placed in front of it, and the robot is driven around the obstacles with a remote controller while the depth images of the obstacles, the steering angle of the front wheels and the driving speed of the rear wheels are recorded under various conditions and saved locally with timestamps. The types of obstacles collected should be as varied as possible, their positions and poses should differ as much as possible, and the scene should also change randomly. Data are only collected in the safe region and the obstacle region of step (1), not in the collision region. The image data and the steering and forward commands sent by the remote controller at the same moment are recorded with the rosbag record tool in ROS and saved as a bag file.
(4-2) Processing of small steering angles
After data acquisition is completed, the data are counted. A small steering angle is defined as one whose absolute value is smaller than θ, where θ is 9°.
Samples in which the robot encounters an obstacle with only a small steering angle are nonlinearly amplified to ensure the training effect. The nonlinear function is the normal distribution N(0, 100), i.e. mean 0 and standard deviation 10; after this Gaussian perturbation the sample steering angles are spread over [-19.6°, 19.6°].
(4-3) labeling of datasets
The control command is paired with the depth map at the same moment, and the label is defined as the steering angle and the moving speed. The steering angle is defined as 0 when the robot moves straight, negative when it turns left and positive when it turns right. The steering angles in the labels are divided into seven integer levels, namely [-30°, -20°, -10°, 0°, 10°, 20°, 30°], and the steering angle of every sample is assigned to one of these levels by rounding toward zero. The moving speed of the robot in the label is divided into three gears: full speed, half speed and stop. After the label of each frame has been determined, samples whose steering angle is 0° but whose moving speed is not full speed are manually removed, so that the model does not decelerate when going straight during inference.
(4-4) obstacle avoidance network training
The training algorithm is stochastic gradient descent with a momentum of 0.9. The batch size of the neural network is reduced from 256 to 32. The initial learning rate lr is raised from 0.01 to 0.02, which is equivalent to increasing the step size so that the model can escape local minima. During training, lr = 0.02 is used for n_1 rounds, i.e. n_1 epochs; then lr = 0.002 for n_2 epochs; then lr = 0.0002 for n_3 epochs; finally lr is reduced to 0.00002. Training ends if the validation accuracy hardly improves any more; if the validation accuracy is still increasing, the number of epochs is increased appropriately. n_1 is 90, n_2 is 100 and n_3 is 100. Each learning-rate setting is kept for at least 10 epochs before a judgment is made, and the learning rate is changed when the validation accuracy no longer increases from epoch to epoch.
The step (5) specifically comprises:
Collecting a depth image, and applying the same preprocessing as in step 2 to obtain a foreground region;
the step (6) specifically comprises:
If a large obstacle exists in the foreground region, performing large-obstacle avoidance processing and then proceeding to step 8, otherwise proceeding to step 7;
wherein:
(6-1) discrimination of large obstacle;
According to the robot collision parameters determined in step (1), within the collision region, the pixel width h of the foreground segmented in step (2) is compared with 219 - TH_width, where 219 is the image width fed into the obstacle avoidance network and TH_width is a threshold parameter set to 10.
When the width h is greater than 219 - 10 = 209, the obstacle avoidance procedure of step (6-2) must be used on its own instead of obstacle avoidance network inference; if the large-obstacle criterion is not met, step (6-2) is skipped and the method proceeds to step (7).
(6-2) obstacle avoidance path planning of the large obstacle, namely obstacle avoidance treatment of the large obstacle;
As shown in FIG. 2, let the starting point and the target point be q_start and q_goal. At the initial moment i = 0, the intersection of the straight line from q_start to q_goal with the obstacle is taken as the impact point q_L. The robot first moves around the obstacle until it returns to q_L. The point on the obstacle boundary closest to the target is then determined and the robot moves to it; this point is called the departure point q_H. From q_H the robot again moves in a straight line toward the target. If q_goal can be reached, obstacle avoidance is complete; otherwise the impact point q_L and departure point q_H are updated until the target point q_goal is reached. Although the efficiency of this algorithm is low, it guarantees that the robot can reach any reachable target. While the robot is planning the avoidance path around the large obstacle, step (7) is no longer executed, and the obstacle avoidance task of the robot is completed.
The step (7) specifically comprises:
A depth image is acquired with the depth camera and preprocessed as in step (2); the image of the foreground region is fed into the obstacle avoidance neural network trained in step (4) for inference, which outputs the steering angle and the moving speed of the robot. The inferred steering angle is one of the seven angle levels of step (4-3), and the moving speed is one of the three gears. The inferred values are sent to the steering motor and the drive motor to complete the obstacle avoidance task of the robot.
And (8) completing obstacle avoidance.

Claims (9)

1. A mobile robot obstacle avoidance method based on a neural network is characterized in that: the method comprises the following steps:
step 1: determining obstacle avoidance parameters according to the size and the driving mode of the robot;
step 2: inputting a depth image, preprocessing the depth image, and dividing a foreground region;
step 3: and building an end-to-end obstacle avoidance neural network which is an improved AlexNet network:
the convolution filter of the first layer of the improved AlexNet network is set to be 3 multiplied by 3, the step length is 2, and the number of convolution kernels is 256; the pooling layer in the first layer is deleted;
the second layer and the fifth layer of the improved AlexNet network are reserved, the step length of the pooling layer in the second layer and the fifth layer is 1, and the sliding window is 2 multiplied by 2;
all LRN layer deletions of the modified AlexNet network;
adding a normalization layer for unified normalization processing after the fifth layer of the improved AlexNet network;
ReLU activation functions in the improved AlexNet network are replaced by Swish;
deleting the full connection layer and the softmax classifier of the last three layers of the improved AlexNet network, adding global average pooling GAP, taking the output of the GAP as classification output, and outputting the steering motor angle of the robot and the moving speed of the robot;
step 4: constructing a data set, and training the obstacle avoidance neural network;
step 5: collecting a depth image, and applying the same preprocessing as in step 2 to obtain a foreground region;
step 6: if a large obstacle exists in the foreground region, performing large-obstacle avoidance processing and then proceeding to step 8, otherwise proceeding to step 7;
step 7: inputting an image of a foreground area into the obstacle avoidance neural network, and outputting a steering angle and a moving speed of the robot;
step 8: obstacle avoidance is completed.
2. The mobile robot obstacle avoidance method based on neural network of claim 1, wherein: in the step 1, the obstacle avoidance parameters include definition of a distance Dist between the robot and the obstacle and an area divided based on the distance Dist; and determining the steering angle gear of the robot according to the steering structure of the robot.
3. The mobile robot obstacle avoidance method based on neural network of claim 2, wherein: dividing a safety area, an obstacle area and a collision area based on a distance Dist between the robot and the obstacle:
the region where the distance Dist between the robot and the obstacle is greater than or equal to TH_dmax is the safety area;
the region where the distance Dist between the robot and the obstacle is greater than TH_dmin and less than TH_dmax is the obstacle area;
the region where the distance Dist between the robot and the obstacle is less than or equal to TH_dmin is the collision area;
0 < TH_dmin < TH_dmax.
4. the mobile robot obstacle avoidance method based on neural network of claim 1, wherein: in the step 2, the pretreatment includes the following steps:
Step 2.1: taking the center point of the depth image as the reference point, cropping an image of size Pix × Pix around the reference point, scaling it to a preset size with a fixed aspect ratio, and copying the original image to obtain a backup image; Pix ∈ [219, 424];
step 2.2: binarizing the backup image, wherein the foreground is white and the background is black after the binarization processing;
Step 2.3: counting each white connected block in the binarized backup image, taking the difference between the maximum and minimum abscissa of its boundary as the block width and the difference between the maximum and minimum ordinate of its boundary as the block height, and setting white connected blocks whose width and height are smaller than the threshold TH_wh to black; TH_wh has a value range of [2, 5];
Step 2.4: and extracting the region of interest of the original image by taking the white region of the processed backup image as a mask, reserving the region corresponding to the mask as a foreground region, and setting the background region outside the foreground region as black.
5. The mobile robot obstacle avoidance method based on a neural network of claim 4, wherein: in the step 2.2, binarization processing is performed on the backup image by the Otsu method.
6. The mobile robot obstacle avoidance method based on neural network of claim 1, wherein: the step 4 comprises the following steps:
step 4.1: placing the robot outdoors, placing a plurality of obstacles with different heights and volumes in front of the robot, controlling the robot through a remote controller to move and avoid the obstacles, recording the depth images of the obstacles, the steering angle of the robot and the moving speed of the robot under different conditions, and storing them locally according to timestamps;
step 4.2: after data acquisition is completed, counting the data, defining a small steering angle as one whose absolute value is smaller than θ, and applying nonlinear amplification to the small-angle data samples collected when encountering an obstacle; θ ∈ [5°, 10°];
step 4.3: pairing a control instruction of the remote controller with the depth image at the corresponding moment, and defining the label as the steering angle and the moving speed; after the label of each frame of image is determined, manually eliminating samples whose steering angle is 0° and whose moving speed is not full speed;
step 4.4: training the obstacle avoidance neural network by stochastic gradient descent with a momentum of 0.9; the batch size of the neural network is 32 and the initial learning rate lr is 0.02; training n_1 rounds, then n_2 rounds with lr of 0.002, then n_3 rounds with lr of 0.0002;
finally, lr is reduced to 0.00002; training ends if the improvement of the validation set accuracy is smaller than a threshold, otherwise the number of training rounds is increased.
7. The mobile robot obstacle avoidance method based on neural network of claim 6, wherein: in the step 4.2, the nonlinear function of the nonlinear amplification is the normal distribution N(0, 100) with mean 0 and standard deviation 10, and after the Gaussian perturbation the data sample angles are spread over [-19.6°, 19.6°].
8. A mobile robot obstacle avoidance method based on neural network as claimed in claim 3, wherein: the step 6 comprises the following steps:
step 6.1: in the collision area, if the pixel width of the segmented foreground area is larger than a preset value h, performing the next step, otherwise, performing the step 7;
Step 6.2: let the starting point and the target point be q_start and q_goal respectively; let the intersection of the straight line from q_start to q_goal with the obstacle be the impact point q_L;
Step 6.3: the robot first moves around the obstacle until it returns to q_L; the departure point q_H closest to the target point on the obstacle boundary is determined and the robot moves to q_H; from q_H the robot drives toward the target point along a straight line again;
Step 6.4: if q_goal is reached, step 8 is performed; otherwise the impact point q_L is updated and the method returns to step 6.3.
9. The mobile robot obstacle avoidance method based on neural network of claim 8, wherein: in the step 6.1, the preset value h is the difference between the image width input to the obstacle avoidance network and the parameter TH_width; TH_width has a value range of [1, 50].
CN202010173908.8A 2020-03-13 2020-03-13 Mobile robot obstacle avoidance method based on neural network Active CN111399505B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010173908.8A CN111399505B (en) 2020-03-13 2020-03-13 Mobile robot obstacle avoidance method based on neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010173908.8A CN111399505B (en) 2020-03-13 2020-03-13 Mobile robot obstacle avoidance method based on neural network

Publications (2)

Publication Number Publication Date
CN111399505A CN111399505A (en) 2020-07-10
CN111399505B true CN111399505B (en) 2023-06-30

Family

ID=71428731

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010173908.8A Active CN111399505B (en) 2020-03-13 2020-03-13 Mobile robot obstacle avoidance method based on neural network

Country Status (1)

Country Link
CN (1) CN111399505B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111831010A (en) * 2020-07-15 2020-10-27 武汉大学 Unmanned aerial vehicle obstacle avoidance flight method based on digital space slice
CN111975769A (en) * 2020-07-16 2020-11-24 华南理工大学 Mobile robot obstacle avoidance method based on meta-learning
CN112034847B (en) * 2020-08-13 2021-04-13 广州仿真机器人有限公司 Obstacle avoidance method and device of split type simulation robot with double walking modes
TWI756844B (en) * 2020-09-25 2022-03-01 財團法人工業技術研究院 Automated guided vehicle navigation device and method thereof
CN112363513A (en) * 2020-11-25 2021-02-12 珠海市一微半导体有限公司 Obstacle classification and obstacle avoidance control method based on depth information
TWI757999B (en) * 2020-12-04 2022-03-11 國立陽明交通大學 Real-time obstacle avoidance system, real-time obstacle avoidance method and unmanned vehicle with real-time obstacle avoidance function
CN112720465A (en) * 2020-12-15 2021-04-30 大国重器自动化设备(山东)股份有限公司 Control method of artificial intelligent disinfection robot
CN113514544A (en) * 2020-12-29 2021-10-19 大连理工大学 Mobile robot pavement material identification method based on sound characteristics
CN113419555B (en) * 2021-05-20 2022-07-19 北京航空航天大学 Monocular vision-based low-power-consumption autonomous obstacle avoidance method and system for unmanned aerial vehicle
CN113485326A (en) * 2021-06-28 2021-10-08 南京深一科技有限公司 Autonomous mobile robot based on visual navigation
CN113721618A (en) * 2021-08-30 2021-11-30 中科新松有限公司 Plane determination method, device, equipment and storage medium
CN114296443B (en) * 2021-11-24 2023-09-12 贵州理工学院 Unmanned modularized combine harvester
CN114115282B (en) * 2021-11-30 2024-01-19 中国矿业大学 Unmanned device of mine auxiliary transportation robot and application method thereof

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103455034A (en) * 2013-09-16 2013-12-18 苏州大学张家港工业技术研究院 Avoidance path planning method based on closest distance vector field histogram
WO2017091008A1 (en) * 2015-11-26 2017-06-01 삼성전자주식회사 Mobile robot and control method therefor
CN107506711A (en) * 2017-08-15 2017-12-22 江苏科技大学 Binocular vision obstacle detection system and method based on convolutional neural networks
WO2019199027A1 (en) * 2018-04-09 2019-10-17 엘지전자 주식회사 Robot cleaner
CN108648161A (en) * 2018-05-16 2018-10-12 江苏科技大学 The binocular vision obstacle detection system and method for asymmetric nuclear convolutional neural networks
CN109947093A (en) * 2019-01-24 2019-06-28 广东工业大学 A kind of intelligent barrier avoiding algorithm based on binocular vision
CN110262487A (en) * 2019-06-12 2019-09-20 深圳前海达闼云端智能科技有限公司 A kind of obstacle detection method, terminal and computer readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhang Tianyi. Research on obstacle avoidance technology for mobile robots based on stereo vision (基于立体视觉的移动机器人避障技术研究). China Master's Theses Full-text Database, Information Science and Technology, 2019-02-15, full text. *

Also Published As

Publication number Publication date
CN111399505A (en) 2020-07-10

Similar Documents

Publication Publication Date Title
CN111399505B (en) Mobile robot obstacle avoidance method based on neural network
US11691648B2 (en) Drivable surface identification techniques
US11427225B2 (en) All mover priors
US11790668B2 (en) Automated road edge boundary detection
DeSouza et al. Vision for mobile robot navigation: A survey
WO2018055378A1 (en) Autonomous route determination
CN107092264A (en) Towards the service robot autonomous navigation and automatic recharging method of bank's hall environment
CN114842438A (en) Terrain detection method, system and readable storage medium for autonomous driving vehicle
Matsushita et al. On-line road boundary modeling with multiple sensory features, flexible road model, and particle filter
Hua et al. Small obstacle avoidance based on RGB-D semantic segmentation
CN112477533B (en) Dual-purpose transport robot of facility agriculture rail
CN110389587A (en) A kind of robot path planning's new method of target point dynamic change
CN116576857A (en) Multi-obstacle prediction navigation obstacle avoidance method based on single-line laser radar
CN113031597A (en) Autonomous obstacle avoidance method based on deep learning and stereoscopic vision
CN115223039A (en) Robot semi-autonomous control method and system for complex environment
Gajjar et al. A comprehensive study on lane detecting autonomous car using computer vision
Zhou et al. Automated process for incorporating drivable path into real-time semantic segmentation
Mutz et al. Following the leader using a tracking system based on pre-trained deep neural networks
WO2023155903A1 (en) Systems and methods for generating road surface semantic segmentation map from sequence of point clouds
Wang et al. DRR-LIO: A dynamic-region-removal-based LiDAR inertial odometry in dynamic environments
US20220377973A1 (en) Method and apparatus for modeling an environment proximate an autonomous system
Yildiz et al. CNN based sensor fusion method for real-time autonomous robotics systems
Rekhawar et al. Deep learning based detection, segmentation and vision based pose estimation of staircase
Andersen et al. Vision assisted laser scanner navigation for autonomous robots
Buckeridge et al. Autonomous social robot navigation in unknown urban environments using semantic segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant