CN111399505A - Mobile robot obstacle avoidance method based on neural network - Google Patents

Mobile robot obstacle avoidance method based on neural network

Info

Publication number
CN111399505A
Authority
CN
China
Prior art keywords
robot
obstacle avoidance
obstacle
neural network
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010173908.8A
Other languages
Chinese (zh)
Other versions
CN111399505B (en)
Inventor
朱威
汤如
巫浩奇
龙德
何德峰
郑雅羽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN202010173908.8A priority Critical patent/CN111399505B/en
Publication of CN111399505A publication Critical patent/CN111399505A/en
Application granted granted Critical
Publication of CN111399505B publication Critical patent/CN111399505B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0223Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0255Control of position or course in two dimensions specially adapted to land vehicles using acoustic signals, e.g. ultra-sonic signals
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0257Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention relates to a mobile robot obstacle avoidance method based on a neural network, comprising the following steps: (1) determining obstacle avoidance parameters according to the size and driving mode of the robot; (2) inputting a depth image, preprocessing it, and segmenting the foreground region; (3) constructing an end-to-end obstacle avoidance neural network; (4) constructing a data set and training the obstacle avoidance neural network; (5) collecting a depth image and applying the same preprocessing as in step (2) to obtain the foreground region; (6) if a large obstacle exists in the foreground region, performing obstacle avoidance processing for the large obstacle and proceeding to step (8); otherwise proceeding to step (7); (7) inputting the foreground-region image into the obstacle avoidance neural network and outputting the steering angle and moving speed of the robot; (8) obstacle avoidance is completed. The invention uses the depth image and a convolutional neural network for mobile robot obstacle avoidance, requires no manual feature extraction or parameter setting, and can avoid obstacles accurately in complex outdoor scenes.

Description

Mobile robot obstacle avoidance method based on neural network
Technical Field
The invention belongs to the field of robots, and particularly relates to a mobile robot obstacle avoidance method based on a neural network.
Background
Robots can replace humans in heavy and tedious physical labor, and draw on knowledge from multiple disciplines such as mechanics, electronics, sensors, computing and artificial intelligence. Robots come in many types and forms; the mainstream forms on the market today are mobile robots and robotic arms. Mobile robots are already active in factories, shopping malls, warehouses and similar settings, and sometimes serve as mobile platforms for other robots: for example, a wheeled robot with a manipulator assists in tasks such as opening doors or carrying tea and water, while a wheeled robot with a multi-axis gimbal performs tasks such as target tracking and inspection. In these two typical applications, the manipulator or gimbal and the mobile robot work in a complementary fashion. For a mobile robot, mobility is paramount: it determines the robot's performance to a large extent, and it is very important that the robot can reach a given position quickly and accurately under safe conditions. During movement, obstacle avoidance is the most important function; it prevents contact between the robot and other objects and largely protects the robot from damage. Obstacle avoidance should be both timely and accurate, maintaining high accuracy while guaranteeing timeliness, so as to minimize the impact on autonomous navigation.
Existing obstacle avoidance technology has multiple implementation means: the environment can be sensed through vision or through ultrasonic, laser, radar, TOF and other sensors; after acquiring information such as the distance, size, shape and position of an obstacle, the robot's motion is adjusted to prevent collisions during movement. Sun et al. propose a deep-learning-based obstacle avoidance method for unmanned aerial vehicles (see "Monocular vision obstacle avoidance method for quadrotor UAVs based on deep learning", Journal of Computer Applications, 2019, 39(04): 1001-1005). The method uses a Faster R-CNN network to draw a bounding box around the detected target, computes the box size, and then estimates the distance between the obstacle and the UAV using similar triangles. Although obstacle avoidance is achieved, the error reaches ±0.5 m, and the UAV must fly at low speed to maintain real-time performance.
The patent with application number CN201910454798.X discloses an obstacle avoidance method that fuses laser and vision. It mainly uses a YOLO object detection network to detect specific obstacles, but does not avoid obstacles directly from their visual information. The core obstacle avoidance function uses the depthimage_to_laserscan package in the robot operating system ROS: depth image data is converted into laser scan data, which is then fused with lidar data to realize robot obstacle avoidance. In that method the robot's obstacle avoidance has no direct relation to deep learning; deep learning is only used to extract the pixel position of the obstacle in the RGB image, while the core obstacle avoidance relies on the package that converts the depth image into laser data, which is then fed into the navigation package.
Disclosure of Invention
In order to solve the problem of possible collisions during robot movement, the invention provides a mobile robot obstacle avoidance method based on a neural network, comprising the following six parts: determination of robot obstacle avoidance parameters, input image preprocessing, end-to-end obstacle avoidance neural network design, data set production and network model training, obstacle avoidance for large obstacles, and obstacle avoidance neural network model inference. The method specifically comprises the following steps:
(1) determining obstacle avoidance parameters according to the size and the driving mode of the robot;
and determining obstacle avoidance parameters according to the size and the type of the robot. The distance Dist between the robot and the obstacle is defined as the minimum distance from a circumscribed circle obtained by projection of the robot to a horizontal plane to a circumscribed circle obtained by projection of the obstacle to the horizontal plane, and the distance Dist can be obtained by calculation according to a depth map after calibration of a depth camera. The invention is divided into three types of areas according to the distance between the barrier and the robot:
① secure region Dist is greater than or equal to THdmax. At the moment, no obstacle exists in front of the robot or the robot can normally pass through an area which cannot be detected by the depth camera. THdmaxHas a value range of [5,15 ]]In meters.
② obstacle region Dist is greater than THdminAnd the distance is less than THdmaxThe area of (a). THdminHas a value range of [0.5,2 ]]In meters.
③ Collision zone Dist is equal to or less than THdminThe robot is most likely to collide at meter time, at which point the robot immediately stops moving.
The steering structure of the robot is an Ackerman steering structure. The steering angle of the robot is defined as the included angle between the straight line of the two circle centers of the rear wheel (driving wheel) and the straight line of the two circle centers of the front wheel (steering wheel). If the straight lines of the front wheel and the rear wheel are parallel, the rotation angle is 0 degree. The moving speed of the robot is defined as the linear speed of the rear wheel.
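As an aside for the practitioner, the three-region test can be written directly from these definitions. The following Python sketch is illustrative only; the function name is hypothetical and the concrete threshold values are assumptions chosen inside the stated ranges:

    # Hypothetical sketch of the three-region classification in step (1).
    TH_DMIN = 1.0   # meters, chosen inside [0.5, 2]
    TH_DMAX = 10.0  # meters, chosen inside [5, 15]

    def classify_region(dist: float) -> str:
        """Map the robot-obstacle distance Dist to one of the three regions."""
        if dist >= TH_DMAX:
            return "safe"       # no obstacle, or obstacle out of camera range
        if dist > TH_DMIN:
            return "obstacle"   # obstacle region: avoidance behavior applies
        return "collision"      # robot stops moving immediately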
(2) Inputting a depth image, preprocessing the depth image, and segmenting a foreground region;
the working scene of the outdoor mobile robot has high uncertainty, part of background parts in a depth image acquired by a camera can influence the obstacle avoidance of the mobile robot, and if the depth image is directly input into a neural network, the convolution layer can possibly extract the characteristics of the background, so that the obstacle needs to be segmented out first. Therefore, the invention firstly zooms the input depth image, extracts the foreground area of the input depth image and then uses the foreground area for subsequent processing, and the specific steps are as follows:
(2-1) image scaling
Because different depth cameras output depth images of different sizes, the image is cropped to a suitable proportion and then scaled, reducing the time of subsequent traversal and convolution operations.
Taking the center of the input depth image as the reference point, an image of size Pix × Pix is cropped out in the four directions of up, down, left and right, where Pix has a value range of [219,424]. The cropped image is then scaled, keeping the aspect ratio at 1, down to 219 × 219. The scaled image is treated as the original image, and it is copied to obtain a backup image identical to the original.
(2-2) Binarizing the backup image
Because the difference between the foreground and background gray values in the depth map changes with the scene (it is very large in open outdoor settings and very small indoors, in corridors and similar places), it is difficult to segment the foreground with a fixed threshold. The invention therefore uses the Otsu method (OTSU) for segmentation. In the segmented image, the foreground part is set to white and the background part to black. An image area composed of foreground pixels with the same pixel value in adjacent positions in the binary image is called a connected block.
(2-3) extracting the foreground area of the original image
Each white connected block in the segmented backup image is counted; the difference between the maximum and minimum boundary abscissa is taken as the block width, and the difference between the maximum and minimum boundary ordinate as the block height. White blocks whose width and height are smaller than the threshold TH_wh are eliminated by setting them to black; TH_wh has a value range of [2,5].
(2-4) The white area left in the backup image after removing the small connected blocks is used as a mask to extract the region of interest of the original image: the foreground region of the depth image is retained, and the background region is set to black.
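For illustration, the preprocessing of steps (2-1) to (2-4) could be sketched in Python with OpenCV as follows. This is a reconstruction under the description above, not code from the patent; the helper name and the 8-bit normalization before Otsu thresholding are assumptions:

    # Illustrative sketch of steps (2-1)-(2-4), assuming OpenCV and NumPy.
    import cv2
    import numpy as np

    PIX = 424      # crop size, chosen inside [219, 424]
    TH_WH = 3      # connected-block size threshold, inside [2, 5]

    def preprocess(depth: np.ndarray) -> np.ndarray:
        h, w = depth.shape
        cy, cx = h // 2, w // 2
        half = PIX // 2
        crop = depth[cy - half:cy + half, cx - half:cx + half]   # center crop, Pix x Pix
        orig = cv2.resize(crop, (219, 219))                      # aspect ratio kept at 1
        backup = orig.copy()

        # (2-2) Otsu binarization: foreground white, background black.
        # Scaling to 8 bits first is an assumption (Otsu needs uint8 input).
        gray = cv2.normalize(backup, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

        # (2-3) Remove connected blocks whose width and height fall below TH_wh.
        n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
        for i in range(1, n):
            if (stats[i, cv2.CC_STAT_WIDTH] < TH_WH
                    and stats[i, cv2.CC_STAT_HEIGHT] < TH_WH):
                binary[labels == i] = 0

        # (2-4) Use the remaining white area as a mask on the original image.
        fg = orig.copy()
        fg[binary == 0] = 0
        return fg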
(3) End-to-end obstacle avoidance neural network construction
The obstacle avoidance neural network provided by the invention is obtained by modifying the deep convolutional neural network AlexNet. AlexNet is a pioneering deep convolutional neural network in the image field; it comprises eight layers, of which the first five are convolutional layers and the last three are fully connected layers. The first two convolutional layers and the fifth convolutional layer are followed by pooling layers; the other two convolutional layers have no pooling. The obstacle avoidance neural network is modified from the AlexNet network as follows:
(3-1) modification of convolutional and pooling layers
In the convolutional layers, since the feature-extraction effect of the original network's 11 × 11 convolution kernel is not ideal, the convolution filter size of the first layer is set to 3 × 3 with a stride of 2, the number of convolution kernels is increased from 96 to 256, and the input picture is modified from 227 × 227 × 3 to 219 × 219 × 1.
The pooling layer very effectively reduces the size of the matrix after a convolutional layer, thereby reducing the parameters in the final fully connected layers; pooling speeds up computation and also helps prevent overfitting.
(3-2) Deletion of the LRN layer and addition of a normalization layer
In AlexNet, the LRN layer simulates the lateral inhibition function found in neurobiology, normalizing the data without changing its size or dimensionality in order to improve the generalization ability of the model.
Since the LRN layer does not deliver a better performance-to-power ratio, it is removed, and a Batch Normalization layer is added after the pooling operation of the fifth layer (the last convolutional layer). This is equivalent to normalizing each feature within each batch during the training phase, allowing the model to use a higher learning rate lr during training and thus reducing training time.
(3-3) Replacement of the activation function
The activation functions of the first five layers are executed after convolution filtering and pooling. Because activation functions such as Sigmoid and tanh are computationally expensive and suffer from gradient vanishing, the ReLU activation function of AlexNet is replaced by Swish.
(3-4) Replacement of the fully connected layers
The invention uses Global Average Pooling (GAP) to replace the last three fully connected layers of AlexNet, which reduces model inference time and increases the prediction performance of the model.
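A compact PyTorch sketch of a network along these lines is given below. It is illustrative only: the channel widths of layers two to five are carried over from AlexNet as an assumption, and the 1 × 1 convolution head feeding the GAP is likewise assumed, since the patent states only that the GAP output serves directly as the classification output (seven steering gears and three speed gears, per step (4-3)):

    # Illustrative PyTorch sketch of the modified AlexNet described above.
    import torch
    import torch.nn as nn

    class ObstacleAvoidanceNet(nn.Module):
        def __init__(self, n_angle=7, n_speed=3):
            super().__init__()
            act = nn.SiLU()  # Swish activation: x * sigmoid(x)
            self.features = nn.Sequential(
                nn.Conv2d(1, 256, kernel_size=3, stride=2), act,   # modified first layer
                nn.MaxPool2d(kernel_size=2, stride=1),             # overlapping pooling
                nn.Conv2d(256, 256, kernel_size=5, padding=2), act,
                nn.MaxPool2d(kernel_size=2, stride=1),
                nn.Conv2d(256, 384, kernel_size=3, padding=1), act,
                nn.Conv2d(384, 384, kernel_size=3, padding=1), act,
                nn.Conv2d(384, 256, kernel_size=3, padding=1), act,
                nn.MaxPool2d(kernel_size=2, stride=1),
                nn.BatchNorm2d(256),                               # replaces the LRN layers
            )
            # Assumed head: 1x1 conv to (n_angle + n_speed) maps, then GAP
            # instead of the three fully connected layers.
            self.head = nn.Conv2d(256, n_angle + n_speed, kernel_size=1)
            self.gap = nn.AdaptiveAvgPool2d(1)
            self.n_angle = n_angle

        def forward(self, x):                                      # x: (B, 1, 219, 219)
            z = self.gap(self.head(self.features(x))).flatten(1)
            return z[:, :self.n_angle], z[:, self.n_angle:]        # angle, speed outputs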
(4) Constructing a data set, and training the obstacle avoidance neural network;
(4-1) acquisition of data
The current mainstream robot obstacle avoidance methods use radar, ultrasound and binocular vision; end-to-end obstacle avoidance research using depth images is scarce, and there is no public data set with a unified standard, so a data set has to be made from scratch. The data set production process of the invention is as follows: the robot is placed in an outdoor environment, obstacles of different heights and volumes are placed in front of it, and the robot is driven by remote control to avoid them; the depth images of the obstacles under different conditions, the steering angle of the front wheels and the driving speed of the rear wheels are recorded and stored locally with timestamps. The types of obstacles during acquisition are as varied as possible; the positions and poses of the obstacles differ as much as possible; the scene also changes randomly. The invention only collects data in the safe region and the obstacle region of step (1), not in the collision region.
(4-2) Processing of small steering angles
After data acquisition, the data are counted; samples whose steering-angle absolute value is smaller than θ are defined as small-angle samples, where θ has a value range of [5°,10°]. Small-angle samples recorded when encountering an obstacle are augmented nonlinearly to ensure the training effect. The nonlinear function chosen is the normal distribution N(0,100), with mean 0 and standard deviation 10; after this Gaussian perturbation, the steering angles of such samples expand to [-19.6°,19.6°].
(4-3) Labeling of the data set
The control command is matched with the depth map at the current moment. For a robot with rear-wheel drive and front-wheel steering, the label is defined as the steering angle and the moving speed. For a robot with a two-wheel differential drive structure, the steering angle is defined by the speed difference between the left and right wheels: if the two wheel speeds are identical the steering angle is 0; if the left wheel is faster than the right it is a right turn; if the right wheel is faster than the left it is a left turn. The steering angle is defined as 0 when the robot moves straight (forward or backward), negative when turning left, and positive when turning right.
The steering angle in the label is divided into seven integer gears, [-30°, -20°, -10°, 0°, 10°, 20°, 30°]; the steering angle of every sample is sorted into one gear by rounding toward zero. The moving speed of the robot in the label is divided into three gears: full speed, half speed and stop.
After the label of each frame is determined, samples with a steering angle of 0° but a moving speed below full speed are manually removed, to prevent the model from decelerating when inferring straight-line motion.
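For illustration, the round-toward-zero gear assignment might look like this (hypothetical helper; only the seven gears, the rounding rule and the three speed gears come from the text):

    # Illustrative sketch of the label discretization in step (4-3).
    ANGLE_GEARS = [-30, -20, -10, 0, 10, 20, 30]   # degrees; negative = left turn
    SPEED_GEARS = ["full", "half", "stop"]

    def quantize_angle(angle_deg: float) -> int:
        """Round toward zero onto the 10-degree grid, then clamp to +/-30."""
        gear = int(angle_deg / 10.0) * 10          # int() truncates toward zero
        return max(-30, min(30, gear))

    assert quantize_angle(-19.6) == -10
    assert quantize_angle(27.0) == 20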
(4-4) obstacle avoidance neural network model training
The training algorithm is stochastic gradient descent with a momentum of 0.9. The batch size of the neural network is adjusted from 256 to 32. To prevent the model from encountering too many local minima and converging poorly after the batch size is reduced, the initial learning rate lr is raised from 0.01 to 0.02, which corresponds to increasing the step size so that the model can cross local minimum points.
During training, n1 rounds (epochs) are trained with lr = 0.02, then n2 epochs with lr = 0.002, then n3 epochs with lr = 0.0002; finally lr is reduced to 0.00002. If the accuracy on the verification set barely improves, training can end (the improvement threshold is set by a person skilled in the art); if the verification accuracy is still increasing, the number of epochs can be increased appropriately. n1 has a value range of [1,90]; n2 has a value range of [91,100]; n3 has a value range of [80,100]. The values of n are related to the training samples and the batch size: each time lr is changed, the judgment is made after 10 epochs have passed, and the learning rate is changed once the verification accuracy no longer increases from epoch to epoch.
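A condensed PyTorch sketch of this staged schedule follows. It is illustrative: the per-stage epoch counts are example values inside the stated ranges, the cross-entropy loss is an assumption (the patent removes the softmax classifier, and cross_entropy applies log-softmax internally), and model, train_loader and evaluate are assumed placeholders:

    # Illustrative sketch of the staged learning-rate schedule in step (4-4).
    import torch
    import torch.nn.functional as F

    optimizer = torch.optim.SGD(model.parameters(), lr=0.02, momentum=0.9)
    stages = [(0.02, 90), (0.002, 100), (0.0002, 100), (0.00002, 10)]  # (lr, epochs)

    for lr, max_epochs in stages:
        for group in optimizer.param_groups:
            group["lr"] = lr
        for epoch in range(max_epochs):
            for images, (angle_y, speed_y) in train_loader:   # batch size 32
                optimizer.zero_grad()
                angle_logits, speed_logits = model(images)
                loss = (F.cross_entropy(angle_logits, angle_y)
                        + F.cross_entropy(speed_logits, speed_y))
                loss.backward()
                optimizer.step()
            # After 10 epochs at a given lr, move to the next stage once
            # validation accuracy stops increasing; evaluate() is assumed to
            # return True while accuracy is still improving.
            if epoch >= 9 and not evaluate(model):
                break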
(5) Collecting a depth image, and applying the same preprocessing as in step (2) to obtain a foreground region;
(6) If a large obstacle exists in the foreground region, performing large-obstacle avoidance processing and then performing step (8); otherwise performing step (7);
(6-1) Discrimination of large obstacles
According to the robot collision parameters determined in step (1), in the collision region, the pixel width h of the foreground segmented in step (2) is compared with 219 - TH_width, where 219 is the image width input to the obstacle avoidance network and TH_width is a parameter with value range [1,50]. If the width h is greater than 219 - TH_width, the obstacle occupies a large part of the camera's field of view; in this case the obstacle avoidance network can hardly infer a valid obstacle avoidance command, and the robot must use the large-obstacle procedure of step (6-2) in place of network inference. Otherwise the large-obstacle criterion is not met, step (6-2) is skipped, and the method proceeds to step (7).
(6-2) Obstacle avoidance path planning for large obstacles
For obstacles whose volume far exceeds the robot, the robot cannot avoid them quickly and instead adopts a detour method. Let the starting point and the target point be q_start and q_goal respectively. At the initial time i = 0, the point where the path from q_start meets the obstacle is the impact point q_L. The robot first moves around the obstacle until it returns to point q_L. It then determines the point on the obstacle's perimeter closest to the target and moves to this point, called the departure point q_H. From q_H the robot again travels in a straight line toward the target. If q_goal can be reached, obstacle avoidance is complete; otherwise the impact point q_L and the departure point q_H are updated until the target point q_goal is reached. Although the efficiency of this algorithm is low, it guarantees that the robot can reach any reachable target. The robot does not execute step (7) while planning the obstacle avoidance path around a large obstacle.
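This detour strategy follows the classic Bug1 scheme, and can be sketched as below; every motion and sensing primitive here is a hypothetical placeholder, not an interface from the patent:

    # Illustrative Bug1-style sketch of step (6-2).
    def avoid_large_obstacle(robot, q_goal):
        while True:
            robot.move_toward(q_goal)                  # straight line toward the target
            if robot.position() == q_goal:
                return True                            # q_goal reached: avoidance done
            q_l = robot.position()                     # blocked: impact point q_L
            perimeter = robot.follow_boundary(q_l)     # circle the obstacle back to q_L,
                                                       # recording perimeter points
            q_h = min(perimeter,
                      key=lambda p: p.distance_to(q_goal))   # departure point q_H
            robot.move_along_boundary_to(q_h)          # then leave toward q_goal again;
                                                       # q_L and q_H update next iteration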
(7) Inputting the image of the foreground area into the obstacle avoidance neural network, and outputting the steering angle and the moving speed of the robot;
and (3) acquiring a depth image by using a depth camera, inputting the depth image into the obstacle avoidance neural network trained in the step (4) for reasoning when no large obstacle exists in the foreground region after the input depth image preprocessing in the step (2), and outputting a steering angle and a moving speed. And (4) the steering angle of the robot obtained by inference is one of seven angle grades in the step (4-3), and the moving speed is one of three grades.
(8) Obstacle avoidance is completed.
Compared with the prior art, the invention has the following beneficial effects. Compared with using a depth camera directly as a distance sensor, the neural network method of this patent has higher robustness, needs no manual feature extraction or parameter setting, and can avoid obstacles accurately in complex outdoor scenes. It also overcomes the drawback that a 2D lidar can only scan obstacles in a two-dimensional plane, achieving autonomous obstacle avoidance in areas the radar cannot scan, and it still works correctly when the road surface material changes. For the obstacle avoidance path around a large obstacle, a detour method is chosen: the robot circles the obstacle and then selects the optimal departure point to avoid it. Although this lengthens the obstacle avoidance time, it guarantees obstacle avoidance performance; the coupling to lidar or depth camera data is small, so the robot still avoids collision when one sensor fails.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is a schematic diagram of obstacle avoidance path planning for a large obstacle; the solid central region represents the large obstacle.
Detailed Description
The mobile robot obstacle avoidance method is implemented on the robot operating system (ROS) platform. The robot platform is a self-made four-wheel mobile robot with rear-wheel drive and an Ackermann front-wheel steering mechanism. The rear-wheel drive motor is a 24V BLDC (brushless DC) motor with a continuous torque of 3 N·m and a maximum speed of 469 rpm; the front-wheel steering uses a magnetically encoded 380 kg·cm bus servo whose position can be read back.
The depth camera is an Intel RealSense D435i. The computer on the robot uses an Intel i7-6700HQ, 16GB RAM and an NVIDIA GTX 970 (4GB GDDR5); the operating system is Ubuntu 16.04 + ROS Kinetic; the deep learning framework is PyTorch.
As shown in fig. 1, a mobile robot obstacle avoidance method based on a neural network mainly includes the following steps:
(1) determining obstacle avoidance parameters according to the size and the driving mode of the robot;
(2) inputting a depth image, preprocessing the depth image, and segmenting a foreground region;
(3) constructing an end-to-end obstacle avoidance neural network;
(4) constructing a data set, and training the obstacle avoidance neural network;
(5) collecting a depth image, and applying the same preprocessing as in step 2 to obtain a foreground region;
(6) if a large obstacle exists in the foreground area, performing large obstacle avoiding processing, and then performing step 8, otherwise, performing step 7;
(7) inputting the image of the foreground area into the obstacle avoidance neural network, and outputting the steering angle and the moving speed of the robot;
(8) obstacle avoidance is completed.
The step (1) specifically comprises the following steps:
and determining obstacle avoidance parameters according to the size and the type of the robot. The distance Dist between the robot and the obstacle is defined as the minimum distance from a circumscribed circle obtained by projection of the robot to a horizontal plane to a circumscribed circle obtained by projection of the obstacle to the horizontal plane, and the distance Dist can be obtained by calculation according to a depth map after calibration of a depth camera. The invention is divided into three types of areas according to the distance between the barrier and the robot:
① secure region Dist is greater than or equal to THdmax. At the moment, no obstacle exists in front of the robot or the robot can normally pass through an area which cannot be detected by the depth camera. THdmaxIs 10 meters.
② obstacle region Dist is greater than THdminAnd the distance is less than THdmaxThe area of (a). THdminIs 1 meter.
③ Collision zone Dist is equal to or less than THdminThe robot is most likely to collide at meter time, at which point the robot immediately stops moving.
The steering structure of the robot is an Ackerman steering structure. The steering angle of the robot is defined as the included angle between the straight line of the two circle centers of the rear wheel (driving wheel) and the straight line of the two circle centers of the front wheel (steering wheel). If the straight lines of the front wheel and the rear wheel are parallel, the rotation angle is 0 degree. The moving speed of the robot is defined as the linear speed of the rear wheel.
The step (2) specifically comprises the following steps:
(2-1) Taking the center of the input depth image as the reference point, an image of size Pix × Pix is cropped out in the four directions of up, down, left and right, where Pix is 424. The cropped image is scaled, keeping the aspect ratio at 1, down to 219 × 219. The scaled image is treated as the original image, and it is copied to obtain a backup image identical to the original.
(2-2) The Otsu method (OTSU) is used for segmentation; in the segmented image the foreground part is white and the background part is black.
(2-3) extracting the foreground area of the original image
Each white connected block in the segmented backup image is counted; the difference between the maximum and minimum boundary abscissa is taken as the block width, and the difference between the maximum and minimum boundary ordinate as the block height. White blocks whose width and height are smaller than the threshold TH_wh are set to black; TH_wh is 3. The white area left in the backup image after removing the small connected blocks is used as a mask to extract the region of interest of the original image: the foreground region of the depth image is retained, and the background region is set to black.
The step (3) specifically comprises the following steps:
(3-1) The convolution filter size of the first layer of the AlexNet network is set to 3 × 3 with a stride of 2; the number of convolution kernels is increased from 96 to 256; and the input picture is modified from 227 × 227 × 3 to 219 × 219 × 1.
Since the convolution kernel size has been reduced, to maintain overlapping pooling the pooling-layer stride is adjusted to 1 and the window size to 2 × 2, ensuring the generalization ability of the model while speeding up computation.
(3-2) Since the LRN layer does not deliver a better performance-to-power ratio, all LRN layers are deleted and Batch Normalization is used instead, which is equivalent to normalizing each feature within each batch during the training phase, allowing the model to use a higher learning rate lr during training and reducing training time.
(3-3) The activation function ReLU in the AlexNet network is replaced by Swish, a newer activation function proposed by Google Brain that solves the problem of some weights not being updatable under ReLU.
(3-4) Global average pooling (GAP) is used instead of AlexNet's last three fully connected layers. This modification reduces model inference time and increases the model's prediction performance. After GAP replaces the fully connected layers, the softmax classifier is deleted; the output of the GAP is used directly as the classification output, and the final output values are the inferred steering angle and moving speed of the robot.
The step (4) specifically comprises the following steps:
(4-1) data set Collection
The robot is placed in an outdoor environment, obstacles of different heights and volumes are placed in front of it, and the robot is driven by remote control to avoid them; the depth images of the obstacles under different conditions, the steering angle of the front wheels and the driving speed of the rear wheels are recorded and stored locally with timestamps. The types of obstacles during acquisition are as varied as possible; the positions and poses of the obstacles differ as much as possible; the scene also changes randomly. The invention only collects data in the safe region and the obstacle region of step (1), not in the collision region. Using the rosbag record tool in ROS, the image data and the steering and forward commands sent by the remote control at the same moment are recorded and saved as a bag file.
(4-2) treatment of Small corners
After data acquisition, the data are counted; samples whose steering-angle absolute value is smaller than θ are defined as small-angle samples, where θ is 9°.
Small-angle samples recorded when encountering an obstacle are augmented nonlinearly to ensure the training effect. The nonlinear function chosen is the normal distribution N(0,100), with mean 0 and standard deviation 10; after this Gaussian perturbation, the steering angles of such samples expand to [-19.6°,19.6°].
(4-3) labeling of data set
The control command is matched with the depth map at the current moment, and the label is defined as the steering angle and the moving speed. The steering angle is defined as 0 when the robot moves straight, negative when turning left, and positive when turning right. The steering angle in the label is divided into seven integer gears, [-30°, -20°, -10°, 0°, 10°, 20°, 30°]; the steering angle of every sample is sorted into one gear by rounding toward zero. The moving speed of the robot in the label is divided into three gears: full speed, half speed and stop. After the label of each frame is determined, samples with a steering angle of 0° but a moving speed below full speed are manually removed, to prevent the model from decelerating when inferring straight-line motion.
(4-4) obstacle avoidance network training
The training algorithm is stochastic gradient descent with a momentum of 0.9. The batch size of the neural network is adjusted from 256 to 32. The initial learning rate lr is adjusted from 0.01 to 0.02, which corresponds to increasing the step size so that the model can cross local minimum points. During training, n1 epochs are trained with lr = 0.02, then n2 epochs with lr = 0.002, then n3 epochs with lr = 0.0002; finally lr is reduced to 0.00002. If the accuracy on the verification set barely improves, training ends; if it is still increasing, the number of epochs and training times is increased. n1 is 90; n2 is 100; n3 is 100. Each time lr is changed, the judgment is made after 10 epochs have passed, and the learning rate is changed once the verification accuracy no longer increases from epoch to epoch.
The step (5) specifically comprises the following steps:
Collecting a depth image, and applying the same preprocessing as in step 2 to obtain a foreground region;
the step (6) specifically comprises the following steps:
If a large obstacle exists in the foreground region, performing large-obstacle avoidance processing and then performing step 8; otherwise performing step 7;
wherein the content of the first and second substances,
(6-1) distinguishing a large obstacle;
According to the robot collision parameters determined in step (1), in the collision region, the pixel width h of the foreground segmented in step (2) is compared with 219 - TH_width, where 219 is the image width input to the obstacle avoidance network and TH_width is a threshold parameter with value 10.
When the width h is greater than 219 - 10 = 209, the obstacle avoidance procedure of step (6-2) must be used alone in place of obstacle avoidance network inference; if the large-obstacle criterion is not met, step (6-2) is skipped and the method proceeds to step (7).
(6-2) planning an obstacle avoidance path of the large obstacle, namely, carrying out obstacle avoidance processing on the large obstacle;
As shown in FIG. 2, let the starting point and the target point be q_start and q_goal respectively. At the initial time i = 0, the point where the path from q_start meets the obstacle is the impact point q_L. The robot first moves around the obstacle until it returns to point q_L. It then determines the point on the obstacle's perimeter closest to the target and moves to this point, called the departure point q_H. From q_H the robot again travels in a straight line toward the target. If q_goal can be reached, obstacle avoidance is complete; otherwise the impact point q_L and the departure point q_H are updated until the target point q_goal is reached. Although the efficiency of this algorithm is low, it guarantees that the robot can reach any reachable target. While planning the obstacle avoidance path around the large obstacle, the robot no longer executes step (7), and the robot obstacle avoidance task is completed by the detour.
The step (7) specifically comprises:
and (3) acquiring a depth image by using a depth camera, inputting the depth image in the step (2) for preprocessing, inputting the image in the foreground region into the obstacle avoidance neural network trained in the step (4) for reasoning, and outputting the rotation angle and the moving speed of the robot. And (4) the rotation angle of the robot obtained by inference is one of seven angle grades in the step (4-3), and the moving speed is one of three grades. And assigning the inference value to a steering motor and a driving motor to complete the robot obstacle avoidance task.
(8) Obstacle avoidance is completed.

Claims (10)

1. A mobile robot obstacle avoidance method based on a neural network is characterized in that: the method comprises the following steps:
Step 1: determining obstacle avoidance parameters according to the size and the driving mode of the robot;
Step 2: inputting a depth image, preprocessing the depth image, and segmenting a foreground region;
Step 3: constructing an end-to-end obstacle avoidance neural network;
Step 4: constructing a data set, and training the obstacle avoidance neural network;
Step 5: collecting a depth image, and applying the same preprocessing as in step 2 to obtain a foreground region;
Step 6: if a large obstacle exists in the foreground region, performing large-obstacle avoidance processing and then performing step 8; otherwise performing step 7;
Step 7: inputting the image of the foreground region into the obstacle avoidance neural network, and outputting the steering angle and the moving speed of the robot;
Step 8: obstacle avoidance is completed.
2. The obstacle avoidance method for the mobile robot based on the neural network as claimed in claim 1, wherein: in step 1, the obstacle avoidance parameters include the definition of the distance Dist between the robot and the obstacle and the regions divided based on Dist; and the steering angle gears of the robot are determined according to the steering structure of the robot.
3. The obstacle avoidance method for the mobile robot based on the neural network as claimed in claim 2, wherein: a safety region, an obstacle region and a collision region are divided based on the distance Dist between the robot and the obstacle:
the area where Dist between the robot and the obstacle is greater than or equal to TH_dmax is the safety region;
the area where Dist between the robot and the obstacle is greater than TH_dmin and less than TH_dmax is the obstacle region;
the area where Dist between the robot and the obstacle is less than or equal to TH_dmin is the collision region; 0 < TH_dmin < TH_dmax.
4. The obstacle avoidance method for the mobile robot based on the neural network as claimed in claim 1, wherein: in step 2, the preprocessing comprises the following steps:
Step 2.1: taking the central point of the depth image as the reference point, cutting outwards from the reference point to obtain a Pix × Pix image; reducing the image to a preset size at a fixed aspect ratio, and copying the original image to obtain a backup image, wherein Pix ∈ [219,424];
Step 2.2: binarizing the backup image, wherein after binarization the foreground is white and the background is black;
Step 2.3: counting each white connected block in the binarized backup image, taking the difference between the maximum and minimum boundary abscissa as the block width and the difference between the maximum and minimum boundary ordinate as the block height, and setting to black the white connected blocks whose width and height are smaller than the threshold TH_wh; TH_wh has a value range of [2,5];
Step 2.4: and taking the white area of the processed backup image as a mask, extracting the interested area of the original image, reserving the area corresponding to the mask as a foreground area, and setting the background area outside the foreground area as black.
5. The obstacle avoidance method for the mobile robot based on the neural network as claimed in claim 4, wherein: in the step 2.2, the backup image is binarized by the Otsu method.
6. The obstacle avoidance method for the mobile robot based on the neural network as claimed in claim 1, wherein: in the step 3, the end-to-end obstacle avoidance neural network is an improved AlexNet network:
the convolution filter of the first layer of the improved AlexNet network is set to 3 × 3, with a stride of 2 and 256 convolution kernels;
the pooling layers of the second and fifth layers of the improved AlexNet network are retained, with a stride of 1 and a 2 × 2 sliding window;
all LRN layers of the improved AlexNet network are deleted;
a normalization layer performing unified normalization is added after the fifth layer of the improved AlexNet network;
the improved AlexNet network replaces the activation function of the AlexNet network with Swish;
the last three fully connected layers and the softmax classifier of the improved AlexNet network are deleted, global average pooling (GAP) is added, and the output of the GAP is taken as the classification output, outputting the steering motor angle and the moving speed of the robot.
7. The obstacle avoidance method for the mobile robot based on the neural network as claimed in claim 1, wherein: the step 4 comprises the following steps:
step 4.1: placing the robot outdoors, placing a plurality of obstacles with different heights and volumes in front of the robot, controlling the robot to move to avoid the obstacles through a remote controller, recording depth images of the obstacles, steering angles of the robot and moving speed of the robot under different conditions, and storing the depth images, the steering angles and the moving speed of the robot to the local according to timestamps;
Step 4.2: after data acquisition is finished, counting the data, defining a small steering angle as one whose absolute value is smaller than θ, and applying nonlinear augmentation to the data samples with small steering angles recorded when meeting an obstacle, wherein θ ∈ [5°,10°];
Step 4.3: matching the control instruction of the remote controller with the depth image at the corresponding moment, and defining the label as the steering angle and the moving speed; after the label of each frame image is determined, manually rejecting samples whose steering angle is 0° but whose moving speed is not full speed;
Step 4.4: training the obstacle avoidance neural network by stochastic gradient descent with a momentum of 0.9; the batch size of the neural network is 32 and the initial learning rate lr is 0.02; training n1 rounds with lr = 0.02, then n2 rounds with lr = 0.002, then n3 rounds with lr = 0.0002;
finally reducing lr to 0.00002; if the improvement in verification-set accuracy is smaller than a threshold, ending the training; otherwise increasing the number of training rounds.
8. The obstacle avoidance method for the mobile robot based on the neural network as claimed in claim 7, wherein: in step 4.2, the nonlinear function of the nonlinear augmentation is the normal distribution N(0,100) with mean 0 and standard deviation 10, and after the Gaussian perturbation the steering angles of the data samples are augmented to [-19.6°,19.6°].
9. The obstacle avoidance method for the mobile robot based on the neural network as claimed in claim 3, wherein: the step 6 comprises the following steps:
Step 6.1: in the collision region, if the pixel width of the segmented foreground region is greater than a preset value, proceeding to the next step; otherwise proceeding to step 7;
Step 6.2: let the starting point and the target point be q_start and q_goal respectively; let the point where the path from q_start meets the obstacle be the impact point q_L;
Step 6.3: the robot first moves around the obstacle until it returns to point q_L; the departure point q_H, the point on the obstacle perimeter nearest to the target point, is determined, and the robot moves to q_H; from q_H the robot again travels in a straight line toward the target point;
Step 6.4: if q_goal is reached, performing step 8; otherwise updating the impact point q_L and returning to step 6.3.
10. The obstacle avoidance method for the mobile robot based on the neural network as claimed in claim 9, wherein: in step 6.1, the preset value is the difference between the image width input to the obstacle avoidance network and the parameter TH_width; TH_width has a value range of [1,50].
CN202010173908.8A 2020-03-13 2020-03-13 Mobile robot obstacle avoidance method based on neural network Active CN111399505B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010173908.8A CN111399505B (en) 2020-03-13 2020-03-13 Mobile robot obstacle avoidance method based on neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010173908.8A CN111399505B (en) 2020-03-13 2020-03-13 Mobile robot obstacle avoidance method based on neural network

Publications (2)

Publication Number Publication Date
CN111399505A true CN111399505A (en) 2020-07-10
CN111399505B CN111399505B (en) 2023-06-30

Family

ID=71428731

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010173908.8A Active CN111399505B (en) 2020-03-13 2020-03-13 Mobile robot obstacle avoidance method based on neural network

Country Status (1)

Country Link
CN (1) CN111399505B (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111831010A (en) * 2020-07-15 2020-10-27 武汉大学 Unmanned aerial vehicle obstacle avoidance flight method based on digital space slice
CN111975769A (en) * 2020-07-16 2020-11-24 华南理工大学 Mobile robot obstacle avoidance method based on meta-learning
CN112034847A (en) * 2020-08-13 2020-12-04 广州仿真机器人有限公司 Obstacle avoidance method and device of split type simulation robot with double walking modes
CN112363513A (en) * 2020-11-25 2021-02-12 珠海市一微半导体有限公司 Obstacle classification and obstacle avoidance control method based on depth information
CN112720465A (en) * 2020-12-15 2021-04-30 大国重器自动化设备(山东)股份有限公司 Control method of artificial intelligent disinfection robot
CN113419555A (en) * 2021-05-20 2021-09-21 北京航空航天大学 Monocular vision-based low-power-consumption autonomous obstacle avoidance method and system for unmanned aerial vehicle
CN113485326A (en) * 2021-06-28 2021-10-08 南京深一科技有限公司 Autonomous mobile robot based on visual navigation
CN113514544A (en) * 2020-12-29 2021-10-19 大连理工大学 Mobile robot pavement material identification method based on sound characteristics
CN113721618A (en) * 2021-08-30 2021-11-30 中科新松有限公司 Plane determination method, device, equipment and storage medium
TWI756844B (en) * 2020-09-25 2022-03-01 財團法人工業技術研究院 Automated guided vehicle navigation device and method thereof
CN114115282A (en) * 2021-11-30 2022-03-01 中国矿业大学 Unmanned device of mine auxiliary transportation robot and using method thereof
TWI757999B (en) * 2020-12-04 2022-03-11 國立陽明交通大學 Real-time obstacle avoidance system, real-time obstacle avoidance method and unmanned vehicle with real-time obstacle avoidance function
CN114296443A (en) * 2021-11-24 2022-04-08 贵州理工学院 Unmanned modular combine harvester

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103455034A (en) * 2013-09-16 2013-12-18 苏州大学张家港工业技术研究院 Avoidance path planning method based on closest distance vector field histogram
WO2017091008A1 (en) * 2015-11-26 2017-06-01 Samsung Electronics Co., Ltd. Mobile robot and control method therefor
CN107506711A (en) * 2017-08-15 2017-12-22 江苏科技大学 Binocular vision obstacle detection system and method based on convolutional neural networks
CN108648161A (en) * 2018-05-16 2018-10-12 江苏科技大学 The binocular vision obstacle detection system and method for asymmetric nuclear convolutional neural networks
CN109947093A (en) * 2019-01-24 2019-06-28 广东工业大学 A kind of intelligent barrier avoiding algorithm based on binocular vision
CN110262487A (en) * 2019-06-12 2019-09-20 深圳前海达闼云端智能科技有限公司 A kind of obstacle detection method, terminal and computer readable storage medium
WO2019199027A1 (en) * 2018-04-09 2019-10-17 LG Electronics Inc. Robot cleaner

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103455034A (en) * 2013-09-16 2013-12-18 苏州大学张家港工业技术研究院 Avoidance path planning method based on closest distance vector field histogram
WO2017091008A1 (en) * 2015-11-26 2017-06-01 Samsung Electronics Co., Ltd. Mobile robot and control method therefor
CN107506711A (en) * 2017-08-15 2017-12-22 江苏科技大学 Binocular vision obstacle detection system and method based on convolutional neural networks
WO2019199027A1 (en) * 2018-04-09 2019-10-17 LG Electronics Inc. Robot cleaner
CN108648161A (en) * 2018-05-16 2018-10-12 江苏科技大学 The binocular vision obstacle detection system and method for asymmetric nuclear convolutional neural networks
CN109947093A (en) * 2019-01-24 2019-06-28 广东工业大学 A kind of intelligent barrier avoiding algorithm based on binocular vision
CN110262487A (en) * 2019-06-12 2019-09-20 深圳前海达闼云端智能科技有限公司 A kind of obstacle detection method, terminal and computer readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Tianyi (张天翼): "Research on Obstacle Avoidance Technology for Mobile Robots Based on Stereo Vision", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111831010A (en) * 2020-07-15 2020-10-27 武汉大学 Unmanned aerial vehicle obstacle avoidance flight method based on digital space slice
CN111975769A (en) * 2020-07-16 2020-11-24 华南理工大学 Mobile robot obstacle avoidance method based on meta-learning
CN112034847A (en) * 2020-08-13 2020-12-04 广州仿真机器人有限公司 Obstacle avoidance method and device of split type simulation robot with double walking modes
CN112034847B (en) * 2020-08-13 2021-04-13 广州仿真机器人有限公司 Obstacle avoidance method and device of split type simulation robot with double walking modes
TWI756844B (en) * 2020-09-25 2022-03-01 財團法人工業技術研究院 Automated guided vehicle navigation device and method thereof
US11636612B2 (en) 2020-09-25 2023-04-25 Industrial Technology Research Institute Automated guided vehicle navigation device and method thereof
CN112363513A (en) * 2020-11-25 2021-02-12 珠海市一微半导体有限公司 Obstacle classification and obstacle avoidance control method based on depth information
TWI757999B (en) * 2020-12-04 2022-03-11 國立陽明交通大學 Real-time obstacle avoidance system, real-time obstacle avoidance method and unmanned vehicle with real-time obstacle avoidance function
CN112720465A (en) * 2020-12-15 2021-04-30 大国重器自动化设备(山东)股份有限公司 Control method of artificial intelligent disinfection robot
CN113514544A (en) * 2020-12-29 2021-10-19 大连理工大学 Mobile robot pavement material identification method based on sound characteristics
CN113419555A (en) * 2021-05-20 2021-09-21 北京航空航天大学 Monocular vision-based low-power-consumption autonomous obstacle avoidance method and system for unmanned aerial vehicle
CN113419555B (en) * 2021-05-20 2022-07-19 北京航空航天大学 Monocular vision-based low-power-consumption autonomous obstacle avoidance method and system for unmanned aerial vehicle
CN113485326A (en) * 2021-06-28 2021-10-08 南京深一科技有限公司 Autonomous mobile robot based on visual navigation
CN113721618A (en) * 2021-08-30 2021-11-30 中科新松有限公司 Plane determination method, device, equipment and storage medium
CN114296443A (en) * 2021-11-24 2022-04-08 贵州理工学院 Unmanned modular combine harvester
CN114296443B (en) * 2021-11-24 2023-09-12 贵州理工学院 Unmanned modularized combine harvester
CN114115282A (en) * 2021-11-30 2022-03-01 中国矿业大学 Unmanned device of mine auxiliary transportation robot and using method thereof
CN114115282B (en) * 2021-11-30 2024-01-19 中国矿业大学 Unmanned device of mine auxiliary transportation robot and application method thereof

Also Published As

Publication number Publication date
CN111399505B (en) 2023-06-30

Similar Documents

Publication Publication Date Title
CN111399505B (en) Mobile robot obstacle avoidance method based on neural network
DeSouza et al. Vision for mobile robot navigation: A survey
Dong et al. Real-time avoidance strategy of dynamic obstacles via half model-free detection and tracking with 2d lidar for mobile robots
CN113176585B (en) Pavement anomaly detection method based on three-dimensional laser radar
EP3516582A1 (en) Autonomous route determination
CN111360780A (en) Garbage picking robot based on visual semantic SLAM
Kim et al. End-to-end deep learning for autonomous navigation of mobile robot
CN114474061B (en) Cloud service-based multi-sensor fusion positioning navigation system and method for robot
Hua et al. Small obstacle avoidance based on RGB-D semantic segmentation
EP4058984A1 (en) Geometry-aware instance segmentation in stereo image capture processes
Ouyang et al. A cgans-based scene reconstruction model using lidar point cloud
Protasov et al. Cnn-based omnidirectional object detection for hermesbot autonomous delivery robot with preliminary frame classification
CN109960278B (en) LGMD-based bionic obstacle avoidance control system and method for unmanned aerial vehicle
You et al. End-to-end deep learning for reverse driving trajectory of autonomous bulldozer
Gajjar et al. A comprehensive study on lane detecting autonomous car using computer vision
WO2023155903A1 (en) Systems and methods for generating road surface semantic segmentation map from sequence of point clouds
Zhou et al. Automated process for incorporating drivable path into real-time semantic segmentation
Hu et al. Robot-assisted mobile scanning for automated 3D reconstruction and point cloud semantic segmentation of building interiors
Mutz et al. Following the leader using a tracking system based on pre-trained deep neural networks
Zhao et al. Improving Autonomous Vehicle Visual Perception by Fusing Human Gaze and Machine Vision
Wang et al. DRR-LIO: A dynamic-region-removal-based LiDAR inertial odometry in dynamic environments
US20220377973A1 (en) Method and apparatus for modeling an environment proximate an autonomous system
CN116189150A (en) Monocular 3D target detection method, device, equipment and medium based on fusion output
Yildiz et al. CNN based sensor fusion method for real-time autonomous robotics systems
Rekhawar et al. Deep learning based detection, segmentation and vision based pose estimation of staircase

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant