CN112902981B - Robot navigation method and device - Google Patents

Robot navigation method and device Download PDF

Info

Publication number
CN112902981B
CN112902981B (application CN202110116337.9A)
Authority
CN
China
Prior art keywords
lane line
region
image
line information
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110116337.9A
Other languages
Chinese (zh)
Other versions
CN112902981A (en)
Inventor
秦家虎 (Qin Jiahu)
张展鹏 (Zhang Zhanpeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN202110116337.9A
Publication of CN112902981A
Application granted
Publication of CN112902981B
Legal status: Active (current)

Links

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34Route searching; Route guidance
    • G01C21/3407Route searching; Route guidance specially adapted for specific applications
    • G01C21/3415Dynamic re-routing, e.g. recalculating the route when the user deviates from calculated route or after detecting real-time traffic data or accidents
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a robot navigation method and device. The robot navigation method comprises the following steps: processing a driving area image of the robot and determining a region-of-interest image within it; determining obstacle position information and center lane line information in the region-of-interest image, wherein the center lane line information represents the robot's original driving path; updating the center lane line information according to the obstacle position information to obtain updated center lane line information; and controlling the robot to navigate autonomously according to the updated center lane line information. The robot navigation method provided by the embodiments of the invention improves calculation speed, reduces consumption of computing resources, and addresses the unsatisfactory processing results and poor accuracy of prior-art methods in extreme cases such as severely skewed lane orientation, interfering markings in the middle of the road, and tree shade on both sides of the road.

Description

Robot navigation method and device
Technical Field
The invention belongs to the technical field of robot navigation, and particularly relates to a robot navigation method and device.
Background
Over the past decade, emerging fields such as mobile robots and unmanned driving have developed rapidly. Alongside revolutionary advances in information science and computer science, more and more new algorithms have emerged, pushing mobile robotics forward step by step. Among all problems related to mobile robots, autonomous navigation and autonomous obstacle avoidance are fundamental research problems: only with an accurate autonomous navigation algorithm can a mobile robot truly gain "autonomous awareness" and complete intelligent work on that basis. Autonomous navigation of a mobile robot specifically means that, through a designed algorithm, the robot can move freely and avoid obstacles autonomously in an unknown environment, completing the journey from a starting point to a target point. In practical applications the problem is very broad: robots face different working environments, each environment differs, and so does the best-fitting algorithm. Indoor environments tend to be complex and irregular, making traditional image analysis algorithms hard to apply. Current research therefore focuses on solving robot navigation in real road scenes; at the same time, research on autonomous navigation in real road scenes helps the transition toward the unmanned-driving field and has greater generality.
Existing autonomous navigation research falls mainly into two schools. The first is based on traditional image-feature analysis: in a real road scene, lane lines are more regular than other, cluttered objects and can generally be fitted with a quadratic or cubic equation, so the typical processing flow is to analyze image features to extract the relevant region of interest (RoI) and the rough lane line shape first, and then fit the lane lines with such functions. The second is the deep-learning approach that has become popular in recent years. Each method has advantages and disadvantages: in general, image-feature methods have lower time cost but poorer accuracy, while deep-learning methods require more time and computation. Combined with the laboratory platform, the goal is the best performance under fixed computing conditions. Therefore, given the poor accuracy of traditional algorithms and the relatively large computational resource consumption of deep-learning methods, how to save computing resources while guaranteeing calculation accuracy is an urgent problem. In addition, existing algorithms handle extreme cases poorly, such as severely skewed lane orientation, interfering markings in the middle of the road, and tree shade on both sides of the road, with unsatisfactory processing results and poor accuracy.
Disclosure of Invention
(I) Technical problem to be solved
In view of the above, the present invention provides a robot navigation method and apparatus to at least partially solve the problems in the prior art.
(II) Technical scheme
A method of robot navigation, comprising:
processing the driving area image of the robot, and determining a region-of-interest image in the driving area image;
determining obstacle position information and center lane line information in the region-of-interest image, wherein the center lane line information is used for representing an original driving path of the robot;
updating the center lane line information according to the obstacle position information to obtain updated center lane line information; and
controlling the robot to navigate autonomously according to the updated center lane line information.
According to an embodiment of the present invention, updating the center lane line information according to the obstacle position information to obtain updated center lane line information includes:
marking an obstacle in the region of interest image;
determining edge lines of obstacles in the image of the region of interest to determine a non-travelable region; and
updating the center lane line information according to the non-drivable area to obtain updated center lane line information.
According to an embodiment of the present invention, controlling autonomous navigation of a robot according to updated center lane line information includes:
calculating the deviation between the current position of the robot and the updated central lane line; and
calculating the traveling speed and the rotation angle of the robot according to the deviation.
According to an embodiment of the present invention, processing the driving area image of the robot and determining the region-of-interest image in the driving area image includes:
acquiring an image feature extraction model of a region of interest;
inputting a driving region image of the robot into a region-of-interest image feature extraction model, and outputting normalized region-of-interest image features, wherein the normalized region-of-interest image features comprise normalized center point coordinates of the region-of-interest image, and normalized length and normalized width of the region-of-interest image;
converting the normalized region of interest image features into region of interest image features, wherein the region of interest image features include center point coordinates of the region of interest image, and length and width of the region of interest image; and
selecting the region-of-interest image from the driving area image of the robot according to the region-of-interest image features.
According to an embodiment of the present invention, acquiring a region-of-interest image feature extraction model includes:
obtaining a skeleton network model;
adding a full-connection layer into the skeleton network model to obtain an initial region-of-interest image feature extraction model;
training an initial region of interest image feature extraction model to obtain a trained region of interest image feature extraction model.
According to an embodiment of the present invention, determining obstacle position information in a region-of-interest image includes:
obtaining an obstacle detection model;
inputting the region-of-interest image into the obstacle detection model, and outputting the obstacle position information.
According to an embodiment of the present invention, determining obstacle position information in a region-of-interest image includes:
obtaining an obstacle detection model;
adjusting the network structure of the obstacle detection model;
training the adjusted obstacle detection model to obtain a trained obstacle detection model;
inputting the region-of-interest image into the trained obstacle detection model, and outputting the obstacle position information.
According to an embodiment of the present invention, determining center lane line information in a region of interest image includes:
extracting edge characteristics of the region-of-interest image by using an edge detection algorithm to obtain a rough lane line contour;
performing color feature extraction on the region-of-interest image by using a color extraction algorithm to obtain a rough segmentation map, performing irrelevant noise filtering on the rough segmentation map to obtain a fine segmentation map, and performing edge feature extraction on the fine segmentation map by using an edge detection algorithm to obtain a fine lane line contour;
combining the rough lane line contour and the fine lane line contour to obtain a lane line contour;
obtaining the center lane line information using the lane line contour.
According to an embodiment of the present invention, obtaining the center lane line information using the lane line contour includes:
clustering the lane line contours to obtain a left lane line contour and a right lane line contour marked by pixel points respectively;
respectively performing curve fitting on the left lane line contour and the right lane line contour to acquire left lane line information and right lane line information;
obtaining the center lane line information using the left lane line information and the right lane line information.
A robotic navigation device comprising:
the processing module is used for processing the driving area image of the robot and determining a region-of-interest image in the driving area image;
the system comprises a determining module, a detecting module and a judging module, wherein the determining module is used for determining obstacle position information and center lane line information in an interested area image, wherein the center lane line information is used for representing an original driving path of the robot;
the acquisition module is used for updating the center lane line information according to the obstacle position information to obtain updated center lane line information; and
the control module is used for controlling the autonomous navigation of the robot according to the updated center lane line information.
(III) Beneficial effects
According to the robot navigation method provided by the embodiments of the invention, the region-of-interest image within the driving area image is used as the primary processing object, which reduces the computational cost of subsequent steps, improves calculation speed, and reduces consumption of computing resources. In addition, the method determines obstacle position information and center lane line information in the region-of-interest image, realizing lane line detection and obstacle detection at the same time; by updating the center lane line to avoid obstacles and controlling the robot's autonomous navigation according to the updated center lane line information, it addresses the unsatisfactory processing results and poor accuracy of prior-art methods in extreme cases such as severely skewed lane orientation, interfering markings in the middle of the road, and tree shade on both sides of the road. The robot navigation method of the embodiments also takes into account the resource constraints of practical engineering applications: it aims to detect the mobile robot's drivable region accurately under limited computing resources while keeping the algorithm real-time, with a view toward later migration to the autonomous-driving field, and therefore has high application and commercial value.
Drawings
Fig. 1 is a flowchart of a robot navigation method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a normalized region of interest image and the region of interest image in a driving region image according to an embodiment of the present invention.
Fig. 3 is a network configuration diagram of a Yolov3 network employed by an obstacle detection model according to an embodiment of the invention.
Fig. 4 is a flowchart of determining center lane line information in a region of interest image according to an embodiment of the present invention.
Fig. 5 is a block diagram of a structure of a robot navigation device according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the present invention clearer, the invention is further described in detail below with reference to specific embodiments and the accompanying drawings.
Given that traditional algorithms suffer from poor calculation accuracy while deep-learning methods consume relatively large computing resources, it was found in the course of developing the invention that combining the two achieves the best cost-performance ratio: computing resources can be saved while calculation accuracy is guaranteed.
In view of the foregoing, an embodiment of the present invention provides a robot navigation method. Fig. 1 is a flowchart of a mobile-robot autonomous navigation and obstacle avoidance method based on deep learning and traditional image features. Specifically, the method includes the following operations S101 to S104:
in operation S101: processing the driving area image of the robot, and determining a region-of-interest image in the driving area image;
in operation S102: determining obstacle position information and center lane line information in the region-of-interest image, wherein the center lane line information is used for representing an original driving path of the robot;
in operation S103: updating the center lane line information according to the obstacle position information to obtain updated center lane line information; and
in operation S104: controlling the robot to navigate autonomously according to the updated center lane line information.
In the course of developing the invention, the selection of the region of interest was found to be critical: road scenery irrelevant to detection can be screened out in advance. In addition, the method determines the obstacle position information and the center lane line information in the region-of-interest image, realizing lane line detection and obstacle detection at the same time; by updating the center lane line to avoid obstacles and controlling the robot's autonomous navigation according to the updated center lane line information, it addresses the unsatisfactory processing results and poor accuracy of prior-art methods in extreme cases such as severely skewed lane orientation, interfering markings in the middle of the road, and tree shade on both sides of the road.
According to an embodiment of the present invention, in operation S101, processing the driving area image of the robot and determining the region-of-interest image in the driving area image includes operations S201 to S204:
in operation S201: acquiring an image feature extraction model of a region of interest; specifically, acquiring the region-of-interest image feature extraction model includes:
(1) Obtaining a skeleton (backbone) network model. Since the advent of the classical image classification network AlexNet, deep learning has progressed rapidly; by the time of ResNet, image classification accuracy was already on par with humans. Image classification networks, however, are useful beyond classification: more importantly, they provide an effective feature extraction method that, after a limited number of convolutions, yields image features that can be shared by other image tasks. In the embodiment of the invention, ResNet is selected as the skeleton network model for its higher accuracy; its skip connections ensure that convolutional information from low feature layers is passed on to higher convolutional layers, guaranteeing the effectiveness of feature extraction.
(2) Adding a fully-connected layer to the skeleton network model (ResNet) to obtain an initial region-of-interest image feature extraction model. The selection of the region of interest is crucial: road scenery irrelevant to detection can be screened out in advance, reducing the computational cost of subsequent steps. The features of the region of interest (RoI) are therefore chosen as the output of the model. Since the skeleton structure of the network only extracts image features, a further compression through the fully-connected layer is required to produce the expected result.
(3) After the initial region-of-interest image feature extraction model is established by the above operations, it needs to be trained to obtain the trained region-of-interest image feature extraction model. Specifically, since the ResNet network already performs well on classification datasets, its parameters are not retrained; only the single-layer fully-connected network needs to be trained, and the MSE (mean squared error) loss function can be used during training.
According to the embodiment of the invention, an existing skeleton network model is selected as the base model and a fully-connected layer is added to it for adjustment and training, yielding the region-of-interest image feature extraction model. Only one small network (the fully-connected layer) needs to be trained, so training is fast and computing resources are saved. At the same time, adopting a mature network as the skeleton model draws fully on established results in the field and guarantees calculation accuracy. The approach thus combines the advantages of deep learning with those of traditional image-feature models, pursuing the best effect under limited computing resources.
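As a concrete illustration, the following is a minimal training sketch of such a model, assuming a PyTorch environment. ResNet-18 is chosen here only for illustration (the patent does not specify the depth), and the layer sizes, sigmoid output squashing and optimizer settings are illustrative assumptions, not values from the patent.

    import torch
    import torch.nn as nn
    import torchvision.models as models

    class RoIFeatureExtractor(nn.Module):
        def __init__(self):
            super().__init__()
            backbone = models.resnet18(weights="IMAGENET1K_V1")
            # Drop the classification head, keep the convolutional feature extractor.
            self.backbone = nn.Sequential(*list(backbone.children())[:-1])
            for p in self.backbone.parameters():
                p.requires_grad = False          # backbone parameters are not retrained
            # Single fully-connected layer compressing the features to the four
            # normalized ROI outputs (center x, center y, length, width).
            self.fc = nn.Linear(512, 4)

        def forward(self, x):
            feats = self.backbone(x).flatten(1)  # (B, 512)
            return torch.sigmoid(self.fc(feats)) # keep all outputs in (0, 1)

    model = RoIFeatureExtractor()
    criterion = nn.MSELoss()                     # MSE loss, as stated above
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # only the FC layer is trained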
In operation S202: inputting the driving area image of the robot into the region-of-interest image feature extraction model and outputting normalized region-of-interest image features. To keep the model effective, the data are normalized before each convolutional layer, specifically to the range (0, 1), and the convolutional network is designed and optimized on the normalized data. The output of the model is therefore also normalized: the normalized center point coordinates of the region-of-interest image, and its normalized length and normalized width.
In operation S203: converting the normalized region-of-interest image features into region-of-interest image features, which include the center point coordinates of the region-of-interest image and its length and width. Because operation S202 outputs normalized features, they must be converted into actual image features.
The following formulas are used for the conversion:

    x_center = w × x_predict        (1)
    y_center = h × y_predict        (2)

wherein x_center and y_center represent the center point coordinates of the region-of-interest image; x_predict and y_predict represent its normalized center point coordinates; h_RoI and W_RoI represent the width and length of the region-of-interest image; h and w represent the width and length of the driving area image; h_predict and w_predict represent the normalized width and length of the region-of-interest image; and P_h and P_w are preset hyperparameters.
This can be further understood with reference to Fig. 2, a schematic diagram of the normalized region-of-interest image and the region-of-interest image within a driving area image according to an embodiment of the invention. As shown in Fig. 2, the estimated ROI area is the normalized region-of-interest image framed according to the normalized center point coordinates and the normalized length and width, and the actual ROI area is the (actual) region-of-interest image framed according to the actual center point coordinates, length and width.
In operation S204: selecting the region-of-interest image from the driving area image of the robot according to the region-of-interest image features, i.e. cropping the actual ROI region framed in Fig. 2.
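The conversion from normalized to actual ROI features and the subsequent crop could look like the hedged sketch below. The de-normalization of the ROI length and width through the hyperparameters P_w and P_h is an assumption inferred from the variable list above; the patent only spells out equations (1) and (2) for the center point.

    import numpy as np

    def crop_roi(frame, pred, P_h=1.0, P_w=1.0):
        """frame: HxWx3 driving area image; pred: (x_p, y_p, w_p, h_p) in (0, 1)."""
        h, w = frame.shape[:2]
        x_p, y_p, w_p, h_p = pred
        x_center = w * x_p                 # equation (1)
        y_center = h * y_p                 # equation (2)
        roi_w = P_w * w * w_p              # assumed de-normalization of the ROI length
        roi_h = P_h * h * h_p              # assumed de-normalization of the ROI width
        x0 = int(max(x_center - roi_w / 2, 0))
        y0 = int(max(y_center - roi_h / 2, 0))
        x1 = int(min(x_center + roi_w / 2, w))
        y1 = int(min(y_center + roi_h / 2, h))
        return frame[y0:y1, x0:x1]         # actual ROI region cropped from the frame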
According to an embodiment of the present invention, in operation S102, determining the obstacle position information in the region-of-interest image includes:
acquiring an obstacle detection model, inputting the region-of-interest image into the obstacle detection model, and outputting the obstacle position information, specifically the position coordinates of the obstacle.
Alongside the huge leaps in image classification, target detection networks have also matured and made considerable progress, developing mainly into two-stage detection networks represented by the RCNN series and single-stage detection networks represented by YOLO, SSD and similar networks. YOLOv3, the latest third version, made a great leap in both speed and accuracy, so the YOLOv3 network is selected as the obstacle detection model in the embodiment of the invention. Fig. 3 is a network structure diagram of the YOLOv3 network adopted by the obstacle detection model according to the embodiment of the invention; the network structure can be seen by referring to Fig. 3. The YOLOv3 network is trained on the VOC or COCO dataset and can recognize a wide range of common objects, so the trained YOLOv3 network is used to detect obstacles in the region-of-interest image; the detected obstacles are then affine-transformed onto the driving area image and labeled there, yielding the obstacle position information.
It should be noted that, for better detection results, the size of the image fed to the obstacle detection model must be fixed; before the region-of-interest image is input into the obstacle detection model it is therefore resized to a preset size with an appropriate function.
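A minimal inference sketch, assuming OpenCV's DNN module and the standard Darknet YOLOv3 release files; the file names, the 416x416 input size and the confidence threshold are assumptions about the local setup rather than values from the patent, and non-maximum suppression is omitted for brevity.

    import cv2

    net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
    layer_names = net.getUnconnectedOutLayersNames()

    def detect_obstacles(roi_image, conf_thresh=0.5):
        # YOLOv3 expects a fixed square input, so the ROI image is resized first.
        blob = cv2.dnn.blobFromImage(roi_image, 1 / 255.0, (416, 416),
                                     swapRB=True, crop=False)
        net.setInput(blob)
        outputs = net.forward(layer_names)
        h, w = roi_image.shape[:2]
        boxes = []
        for out in outputs:
            for det in out:
                scores = det[5:]                  # per-class confidences
                if scores.max() > conf_thresh:
                    cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                    boxes.append((int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)))
        return boxes  # obstacle position information in ROI-image coordinates (x, y, w, h)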
The pre-trained YOLOv3 network can detect far more object classes than this application needs, so to save computation the network can be simplified on the basis of the original YOLOv3, and a small network covering only the obstacles of interest is trained, considering that the obstacles that mainly need to be avoided on the road are objects such as cars and trucks. This improves running speed, reduces computational cost, and better meets real-time requirements. According to an embodiment of the present invention, optionally, after the network structure of the selected obstacle detection model is fine-tuned and trained, the trained model is used to obtain the obstacle position information. In this case, in operation S102, determining the obstacle position information in the region-of-interest image includes:
obtaining an obstacle detection model;
adjusting the network structure of the obstacle detection model;
training the adjusted obstacle detection model to obtain a trained obstacle detection model;
inputting the region-of-interest image into the trained obstacle detection model, and outputting the obstacle position information.
Fig. 4 is a flowchart of determining center lane line information in a region of interest image according to an embodiment of the present invention.
According to an embodiment of the present invention, in operation S102, determining the center lane line information in the region-of-interest image includes:
(1) Extracting edge features from the region-of-interest image with an edge detection algorithm to obtain a rough lane line contour. The object of this processing is the region-of-interest image selected from the driving area image of the robot in operation S204, i.e. the actual ROI region framed in Fig. 2, which helps remove noisy regions and improves calculation accuracy. Optionally, the edge detection algorithm may use the Sobel or Canny edge detection operator. This step is likely to introduce additional noise, so the rough lane line contour serves only as a preliminary detection.
(2) Performing color feature extraction on the region-of-interest image with a color extraction algorithm to obtain a rough segmentation map. The color extraction algorithm analyzes the color of the lane line region, converts the color space, and extracts the lane region by color, finally producing a binary image in which white pixels represent the lane region and black pixels represent the background. This operation may still pick up noise regions of similar color, so further denoising is needed.
Therefore, irrelevant-noise filtering is applied to the rough segmentation map to obtain a fine segmentation map. Noise arises mainly in two cases: black noise inside a white area, and white noise inside a black area. Both cases can be handled in the same way, using the idea of erosion and dilation to eliminate noise based on the surrounding neighborhood. A concrete implementation is: if the x pixels above and below a pixel are all white (or all black), the pixel is set to that same color. Specifically, the image is shifted up by x pixels and combined with the original image by a logical operation, the image is shifted down by x pixels and combined with the original image, and finally the two results are combined to suppress the noise. Through this denoising operation, a relatively accurate binary image that separates the lane region from the background is obtained, namely the fine segmentation map.
Then, an edge detection algorithm (the Sobel or Canny edge detection operator) is used to extract edge features from the fine segmentation map and obtain a fine lane line contour.
(3) Merging the rough lane line contour and the fine lane line contour to obtain the lane line contour. This is done because edge feature extraction on the fine segmentation map may introduce new noise; only edge points regarded as lane line contour points in both contour images are finally marked as the lane line contour.
(4) Clustering the lane line contour to obtain a left lane line contour and a right lane line contour, each marked by pixel points. The pixel-level lane line contour alone cannot meet the subsequent fitting requirements, so it is clustered into the left and right lane line contours.
(5) Performing curve fitting on the clustered pixel points to obtain the final result, i.e. fitting the left lane line contour and the right lane line contour separately to obtain the left lane line information and the right lane line information as left and right lane line function curves.
(6) Obtaining the center lane line information from the left lane line information and the right lane line information. Specifically, based on the two lane line function curves obtained above, the middle line of the two lane lines is fitted: a center function curve is obtained by averaging the two curves and is labeled in the driving area image as the center line of the road, i.e. the center lane line information. Ideally, the mobile robot should travel forward along this center lane line; it is the best curve for keeping the heading from drifting, and even when the lane orientation is severely skewed it ensures that the robot does not leave the driving path.
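As a concrete illustration of steps (1) to (6), the following is a minimal sketch assuming OpenCV and NumPy. The white-ish lane color range, the use of morphological opening and closing as a stand-in for the shift-and-combine denoising in step (2), the splitting of edge pixels at their mean column as a simplification of the clustering in step (4), and the quadratic fit are all illustrative assumptions.

    import cv2
    import numpy as np

    def lane_line_contour(roi_bgr):
        gray = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2GRAY)
        rough = cv2.Canny(gray, 50, 150)                      # (1) rough lane line contour

        # (2) colour extraction: assumed white-ish lane region in HSV -> binary segmentation map
        hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
        seg = cv2.inRange(hsv, (0, 0, 180), (180, 40, 255))
        kernel = np.ones((5, 5), np.uint8)
        seg = cv2.morphologyEx(seg, cv2.MORPH_OPEN, kernel)   # remove white specks in black areas
        seg = cv2.morphologyEx(seg, cv2.MORPH_CLOSE, kernel)  # fill black holes in white areas
        fine = cv2.Canny(seg, 50, 150)                        # fine lane line contour

        # (3) keep only the edge points present in both contour images
        return cv2.bitwise_and(rough, fine)

    def fit_center_lane(contour_img, degree=2):
        # (4) simplistic left/right clustering: split edge pixels at the mean column
        ys, xs = np.nonzero(contour_img)
        split = xs.mean()
        left, right = xs < split, xs >= split
        # (5) fit each side with a polynomial x = f(y); assumes both sides are non-empty
        left_fit = np.polyfit(ys[left], xs[left], degree)
        right_fit = np.polyfit(ys[right], xs[right], degree)
        # (6) the center lane line is the average of the two fitted curves
        return left_fit, right_fit, (left_fit + right_fit) / 2.0

The returned coefficient arrays describe curves of the form x = f(y), which is convenient because the lane lines run roughly vertically in the image.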
According to an embodiment of the present invention, in operation S103, updating the center lane line information according to the obstacle position information to obtain updated center lane line information includes:
first, an obstacle is marked in the region of interest image according to the obstacle position coordinates detected in operation S102. Then, determining edge lines of obstacles in the region-of-interest image to determine a non-travelable region; since the influence of the obstacle on the lane center line needs to be sufficiently considered due to the presence of the obstacle, the obstacle coordinate center is marked as a non-drivable area by default. And finally, updating the central lane line information according to the non-drivable area to obtain updated central lane line information. This operation is a specific method of performing obstacle avoidance update on the center lane line to obtain updated center lane line information, see the method of "obtaining center lane line information using left lane line information and right lane line information" in step (6) of the detailed description section of "determine center lane line information in region of interest image" in operation S102.
According to an embodiment of the present invention, in operation S104, controlling autonomous navigation of the robot according to the updated center lane line information includes:
calculating the deviation between the current position of the robot and the updated center lane line, and then having the control part calculate the traveling speed and rotation angle of the robot from this deviation, thereby controlling the autonomous navigation of the mobile robot.
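A minimal proportional-control sketch of this step; the gain, the base speed and the use of a pixel-space deviation are illustrative assumptions, not values from the patent.

    import numpy as np

    def compute_command(center_fit, robot_x, robot_y, k_ang=0.005, v_max=0.5):
        lane_x = np.polyval(center_fit, robot_y)   # center lane position at the robot's image row
        deviation = lane_x - robot_x               # lateral deviation in pixels
        angle = k_ang * deviation                  # rotation angle, proportional term
        speed = v_max / (1.0 + abs(angle))         # slow down when far off-center
        return speed, angle

In practice robot_x and robot_y would be a fixed reference pixel, for example the bottom center of the driving area image.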
An embodiment of the present invention also provides a robot navigation device. Fig. 5 is a block diagram of a robot navigation device 500 according to an embodiment of the present invention; the robot navigation device 500 may be used to implement the method described with reference to Fig. 1. As shown in Fig. 5, the robot navigation device 500 includes a processing module 501, a determining module 502, an obtaining module 503 and a control module 504. Specifically:
the processing module 501 is configured to process a driving area image of the robot, and determine an area image of interest in the driving area image;
a determining module 502, configured to determine obstacle location information and center lane line information in the region of interest image, where the center lane line information is used to characterize an original travel path of the robot;
an obtaining module 503, configured to update center lane line information according to the obstacle position information, and obtain updated center lane line information; and
the control module 504, configured to control the autonomous navigation of the robot according to the updated center lane line information.
According to the embodiment of the invention, the processing module 501 determines the region-of-interest image in the driving area image and uses it as the primary processing object, which reduces the computational cost of subsequent steps, improves calculation speed, and reduces consumption of computing resources. The determining module 502 determines the obstacle position information and the center lane line information in the region-of-interest image, realizing lane line detection and obstacle detection at the same time. The obtaining module 503 updates the center lane line to avoid the obstacle, and the control module 504 controls the robot's autonomous navigation according to the updated center lane line information. This addresses the unsatisfactory processing results and poor accuracy of prior-art methods in extreme cases such as severely skewed lane orientation, interfering markings in the middle of the road, and tree shade on both sides of the road.
It should be noted that, in the embodiments of the present disclosure, the robot navigation device corresponds to the robot navigation method; for details of the device, refer to the description of the method, which is not repeated here.
Any number of the modules, sub-modules, units, or sub-units according to embodiments of the present disclosure, or at least part of their functionality, may be implemented in one module. Any one or more of the modules, sub-modules, units, or sub-units according to embodiments of the present disclosure may be split into multiple modules for implementation. Any one or more of the modules, sub-modules, units, or sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a field-programmable gate array (FPGA), a programmable logic array (PLA), a system on chip, a system on a substrate, a system in a package, or an application-specific integrated circuit (ASIC), or by hardware or firmware in any other reasonable way of integrating or packaging a circuit, or by any one of, or a suitable combination of, software, hardware, and firmware. Alternatively, one or more of the modules, sub-modules, units, or sub-units according to embodiments of the present disclosure may be at least partially implemented as computer program modules that, when executed, perform the corresponding functions.
For example, any of the processing module 501, the determining module 502, the obtaining module 503, and the control module 504 may be combined and implemented in one module/unit/sub-unit, or any one of them may be split into multiple modules/units/sub-units. Alternatively, at least part of the functionality of one or more of these modules/units/sub-units may be combined with at least part of the functionality of other modules/units/sub-units and implemented in one module/unit/sub-unit. According to embodiments of the present disclosure, at least one of the processing module 501, the determining module 502, the obtaining module 503, and the control module 504 may be implemented at least in part as a hardware circuit, such as a field-programmable gate array (FPGA), a programmable logic array (PLA), a system on chip, a system on a substrate, a system in a package, or an application-specific integrated circuit (ASIC), or by hardware or firmware in any other reasonable way of integrating or packaging a circuit, or by any one of, or a suitable combination of, software, hardware, and firmware. Alternatively, at least one of the processing module 501, the determining module 502, the obtaining module 503, and the control module 504 may be at least partially implemented as a computer program module that, when executed, performs the corresponding functions.
The foregoing description of the embodiments is provided to illustrate the general principles of the invention and is not meant to limit it; any modifications, equivalent substitutions and improvements made within the spirit and principles of the invention shall fall within its scope of protection.

Claims (6)

1. A method of robot navigation, comprising:
processing a driving area image of the robot, and determining a region-of-interest image in the driving area image;
determining obstacle position information and center lane line information in the region-of-interest image, wherein the center lane line information is used for representing an original driving path of the robot;
updating the center lane line information according to the obstacle position information to obtain updated center lane line information; and
controlling the robot to autonomously navigate according to the updated central lane line information;
wherein, processing the driving area image of the robot, determining the interested area image in the driving area image comprises: acquiring an image feature extraction model of a region of interest; inputting a driving region image of the robot into the region-of-interest image feature extraction model, and outputting normalized region-of-interest image features, wherein the normalized region-of-interest image features comprise normalized center point coordinates of the region-of-interest image, and normalized length and normalized width of the region-of-interest image; converting the normalized region of interest image features into region of interest image features, wherein the region of interest image features include center point coordinates of the region of interest image, and a length and a width of the region of interest image; selecting the region of interest image from the running region image of the robot according to the region of interest image characteristics;
the obtaining the image feature extraction model of the region of interest comprises the following steps: obtaining a skeleton network model; adding a full-connection layer into the skeleton network model to obtain an initial region-of-interest image feature extraction model; training the initial region-of-interest image feature extraction model to obtain a trained region-of-interest image feature extraction model;
wherein determining obstacle position information in the region of interest image comprises: obtaining an obstacle detection model; inputting the region of interest image into an obstacle detection model, and outputting the obstacle position information;
wherein determining center lane line information in the region of interest image comprises: extracting edge characteristics of the region-of-interest image by using an edge detection algorithm to obtain a rough lane line contour; performing color feature extraction on the region-of-interest image by using a color extraction algorithm to obtain a rough segmentation map, performing irrelevant noise filtering on the rough segmentation map to obtain a fine segmentation map, and performing edge feature extraction on the fine segmentation map by using an edge detection algorithm to obtain a fine lane line contour; combining the rough lane line contour and the fine lane line contour to obtain a lane line contour; and obtaining the center lane line information by using the lane line contour.
2. The method of claim 1, wherein updating the center lane line information according to the obstacle position information to obtain updated center lane line information comprises:
labeling the obstacle in the region of interest image;
determining an edge line of the obstacle in the region of interest image to determine a non-travelable region; and
updating the center lane line information according to the non-drivable area to obtain updated center lane line information.
3. The method of claim 1, wherein controlling the robotic autonomous navigation in accordance with the updated center lane line information comprises:
calculating the deviation between the current position of the robot and the updated center lane line; and
calculating the traveling speed and the rotation angle of the robot according to the deviation.
4. The method of claim 1, wherein determining obstacle location information in the region of interest image comprises:
obtaining an obstacle detection model;
adjusting the network structure of the obstacle detection model;
training the adjusted obstacle detection model to obtain a trained obstacle detection model;
inputting the region of interest image into the trained obstacle detection model, and outputting the obstacle position information.
5. The method of claim 1, wherein obtaining the center lane line information by using the lane line contour comprises:
clustering the lane line contours to obtain a left lane line contour and a right lane line contour marked by pixel points respectively;
respectively performing curve fitting on the left lane line contour and the right lane line contour to acquire left lane line information and right lane line information;
obtaining the center lane line information by using the left lane line information and the right lane line information.
6. A robotic navigation device based on the method of any one of claims 1-5, comprising:
the processing module is used for processing the driving area image of the robot and determining a region-of-interest image in the driving area image;
the determining module is used for determining obstacle position information and center lane line information in the region-of-interest image, wherein the center lane line information is used for representing an original running path of the robot;
the acquisition module is used for updating the central lane line information according to the obstacle position information to obtain updated central lane line information; and
the control module is used for controlling the autonomous navigation of the robot according to the updated center lane line information.
CN202110116337.9A 2021-01-26 2021-01-26 Robot navigation method and device Active CN112902981B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110116337.9A CN112902981B (en) 2021-01-26 2021-01-26 Robot navigation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110116337.9A CN112902981B (en) 2021-01-26 2021-01-26 Robot navigation method and device

Publications (2)

Publication Number Publication Date
CN112902981A CN112902981A (en) 2021-06-04
CN112902981B (en) 2024-01-09

Family

ID=76119402

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110116337.9A Active CN112902981B (en) 2021-01-26 2021-01-26 Robot navigation method and device

Country Status (1)

Country Link
CN (1) CN112902981B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113269130A (en) * 2021-06-11 2021-08-17 国电瑞源(西安)智能研究院有限公司 Visual path searching method based on artificial neural network
CN115082898A (en) * 2022-07-04 2022-09-20 小米汽车科技有限公司 Obstacle detection method, obstacle detection device, vehicle, and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150146374A (en) * 2014-06-20 2015-12-31 주식회사 세인전장 System for lane recognition using environmental information and method thereof
CN107860391A (en) * 2017-02-13 2018-03-30 问众智能信息科技(北京)有限公司 Automobile accurate navigation method and device
CN110962847A (en) * 2019-11-26 2020-04-07 清华大学苏州汽车研究院(吴江) Lane centering auxiliary self-adaptive cruise trajectory planning method and system
CN111178253A (en) * 2019-12-27 2020-05-19 深圳佑驾创新科技有限公司 Visual perception method and device for automatic driving, computer equipment and storage medium
CN111666921A (en) * 2020-06-30 2020-09-15 腾讯科技(深圳)有限公司 Vehicle control method, apparatus, computer device, and computer-readable storage medium
WO2020199593A1 (en) * 2019-04-04 2020-10-08 平安科技(深圳)有限公司 Image segmentation model training method and apparatus, image segmentation method and apparatus, and device and medium
CN112214022A (en) * 2016-10-11 2021-01-12 御眼视觉技术有限公司 Navigating a vehicle based on detected obstacles

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Region-of-interest adaptive segmentation algorithm for vehicle and pedestrian detection; 张文影 (Zhang Wenying); 李礼夫 (Li Lifu); Science Technology and Engineering (Issue 05); 1967-1972 *

Also Published As

Publication number Publication date
CN112902981A (en) 2021-06-04

Similar Documents

Publication Publication Date Title
CN109784333B (en) Three-dimensional target detection method and system based on point cloud weighted channel characteristics
Kluge Extracting road curvature and orientation from image edge points without perceptual grouping into features
Yan et al. A method of lane edge detection based on Canny algorithm
CN105809184B (en) Method for real-time vehicle identification and tracking and parking space occupation judgment suitable for gas station
CN112902981B (en) Robot navigation method and device
WO2015010451A1 (en) Method for road detection from one image
CN106919902B (en) Vehicle identification and track tracking method based on CNN
CN111860439A (en) Unmanned aerial vehicle inspection image defect detection method, system and equipment
US11900676B2 (en) Method and apparatus for detecting target in video, computing device, and storage medium
CN110705342A (en) Lane line segmentation detection method and device
CN109685827B (en) Target detection and tracking method based on DSP
CN111738033B (en) Vehicle driving information determination method and device based on plane segmentation and vehicle-mounted terminal
Ma et al. Crlf: Automatic calibration and refinement based on line feature for lidar and camera in road scenes
CN112927303A (en) Lane line-based automatic driving vehicle-mounted camera pose estimation method and system
Farag et al. An advanced road-lanes finding scheme for self-driving cars
Liu et al. Towards industrial scenario lane detection: Vision-based agv navigation methods
CN113516853B (en) Multi-lane traffic flow detection method for complex monitoring scene
CN107437071B (en) Robot autonomous inspection method based on double yellow line detection
CN113110443B (en) Robot tracking and positioning method based on camera
CN112801021B (en) Method and system for detecting lane line based on multi-level semantic information
Chen et al. Vision‐based autonomous land vehicle guidance in outdoor road environments using combined line and road following techniques
CN113191281A (en) ORB feature extraction method based on region of interest and adaptive radius
CN113221739A (en) Monocular vision-based vehicle distance measuring method
CN112069924A (en) Lane line detection method, lane line detection device and computer-readable storage medium
Valente et al. Real-time method for general road segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant