CN112902981A - Robot navigation method and device - Google Patents

Robot navigation method and device

Info

Publication number
CN112902981A
CN112902981A
Authority
CN
China
Prior art keywords
lane line
region
image
interest
line information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110116337.9A
Other languages
Chinese (zh)
Other versions
CN112902981B (en)
Inventor
秦家虎
张展鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN202110116337.9A priority Critical patent/CN112902981B/en
Publication of CN112902981A publication Critical patent/CN112902981A/en
Application granted granted Critical
Publication of CN112902981B publication Critical patent/CN112902981B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34Route searching; Route guidance
    • G01C21/3407Route searching; Route guidance specially adapted for specific applications
    • G01C21/3415Dynamic re-routing, e.g. recalculating the route when the user deviates from calculated route or after detecting real-time traffic data or accidents
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Abstract

The invention discloses a robot navigation method and device, wherein the robot navigation method comprises the following steps: processing a driving area image of the robot, and determining a region-of-interest image in the driving area image; determining obstacle position information and center lane line information in the region-of-interest image, wherein the center lane line information is used for representing an original driving path of the robot; updating the center lane line information according to the obstacle position information to obtain updated center lane line information; and controlling the robot to navigate autonomously according to the updated center lane line information. The robot navigation method provided by the embodiment of the invention increases the calculation speed, reduces the consumption of computing resources, and solves the problems of the prior art, namely unsatisfactory processing results and poor calculation accuracy under extreme conditions such as severe lane orientation deviation, interference from markings in the middle of the road, and tree shadows on both sides of the road.

Description

Robot navigation method and device
Technical Field
The invention belongs to the technical field of robot navigation, and particularly relates to a robot navigation method and device.
Background
Over the past decade and more, emerging fields such as mobile robots and unmanned driving have developed at an unprecedented pace. At the same time, with revolutionary technological advances in information science and computer science, more and more new algorithms have emerged to help mobile robots take the next step forward. Among all topics related to mobile robots, autonomous navigation and autonomous obstacle avoidance are cornerstone research topics: only when a reasonably accurate autonomous navigation algorithm is realized can a mobile robot truly possess 'autonomous awareness' and, on that basis, complete intelligent work. Autonomous navigation of a mobile robot specifically means that, through a designed algorithm, the mobile robot can move freely and avoid obstacles autonomously in an unknown environment, completing the movement from a starting point to a target point. In practical applications the problem is very broad: the robot faces different working environments, each working environment is different, and the best-fitting algorithm differs accordingly. Indoor environments tend to be more complex and irregular, making traditional image analysis algorithms difficult to apply, so current research focuses on solving the robot navigation problem in real road scenes. Moreover, autonomous navigation research in real road scenes facilitates migration to the field of unmanned driving and has greater universality.
Existing research on autonomous navigation falls mainly into two schools. One is based on traditional image feature analysis: in a real road scene, lane lines are more universal than other cluttered objects and can usually be fitted with a quadratic or cubic equation, so the general processing flow is to analyze image features to extract a relevant region of interest (RoI) and a rough lane line shape, and then fit them with a function. The other is the deep learning approach, which has become popular in recent years. Each method has advantages and disadvantages: generally, methods using image features have a lower time cost but poorer accuracy, while deep learning methods require high time and computation costs and, in combination with a laboratory platform, pursue the best effect under a fixed computational budget. Therefore, given the poor calculation accuracy of traditional algorithms and the large computing-resource consumption of deep learning methods, how to save computing resources while ensuring calculation accuracy is a problem to be solved urgently. In addition, existing algorithms have unsatisfactory processing results and poor calculation accuracy under extreme conditions, such as severe lane orientation deviation, interference from markings in the middle of the road, and tree shadows on both sides of the road.
Disclosure of Invention
(I) Problem to be solved
In view of the above, the present invention provides a robot navigation method and apparatus to at least partially solve the problems in the prior art.
(II) Technical solution
A robot navigation method, comprising:
processing the driving area image of the robot, and determining a region-of-interest image in the driving area image;
determining obstacle position information and center lane line information in the region-of-interest image, wherein the center lane line information is used for representing an original driving path of the robot;
updating the central lane line information according to the position information of the obstacle to obtain updated central lane line information; and
controlling the robot to navigate autonomously according to the updated central lane line information.
According to the embodiment of the present invention, updating the center lane line information according to the obstacle position information, and obtaining the updated center lane line information includes:
marking an obstacle in the region-of-interest image;
determining an edge line of the obstacle in the region-of-interest image to determine a non-drivable area; and
updating the central lane line information according to the non-drivable area to obtain the updated central lane line information.
According to an embodiment of the present invention, controlling the autonomous navigation of the robot according to the updated center lane line information includes:
calculating the deviation between the current position of the robot and the updated central lane line; and
calculating the traveling speed and the rotation angle of the robot according to the deviation.
According to an embodiment of the present invention, processing a travel area image of a robot, determining a region-of-interest image in the travel area image includes:
acquiring an image feature extraction model of a region of interest;
inputting the driving area image of the robot into the region-of-interest image feature extraction model, and outputting normalized region-of-interest image features, wherein the normalized region-of-interest image features comprise normalized center point coordinates of the region-of-interest image, and a normalized length and a normalized width of the region-of-interest image;
converting the normalized region-of-interest image features into region-of-interest image features, wherein the region-of-interest image features comprise the center point coordinates of the region-of-interest image, and the length and width of the region-of-interest image; and
framing the region-of-interest image in the driving area image of the robot according to the region-of-interest image features.
According to the embodiment of the invention, the obtaining of the region-of-interest image feature extraction model comprises the following steps:
obtaining a skeleton network model;
adding a full connection layer in a skeleton network model to obtain an initial region-of-interest image feature extraction model;
training the initial region-of-interest image feature extraction model to obtain the trained region-of-interest image feature extraction model.
According to an embodiment of the present invention, determining obstacle position information in the region-of-interest image includes:
obtaining an obstacle detection model;
inputting the region-of-interest image into the obstacle detection model, and outputting the obstacle position information.
According to an embodiment of the present invention, determining obstacle position information in the region-of-interest image includes:
obtaining an obstacle detection model;
adjusting a network structure of the obstacle detection model;
training the adjusted obstacle detection model to obtain a trained obstacle detection model;
inputting the region-of-interest image into the trained obstacle detection model, and outputting the obstacle position information.
According to an embodiment of the present invention, determining center lane line information in the region-of-interest image includes:
performing edge feature extraction on the image of the region of interest by using an edge detection algorithm to obtain a rough lane line profile;
carrying out color feature extraction on the image of the region of interest by using a color extraction algorithm to obtain a rough segmentation map, carrying out irrelevant noise filtering on the rough segmentation map to obtain a fine segmentation map, and carrying out edge feature extraction on the fine segmentation map by using an edge detection algorithm to obtain a fine lane line profile;
merging the rough lane line profile and the fine lane line profile to obtain the lane line profile;
obtaining the central lane line information by using the lane line profile.
According to an embodiment of the present invention, obtaining the center lane line information using the lane line profile includes:
clustering the lane line contours to obtain a left lane line contour and a right lane line contour which are respectively marked by pixel points;
respectively performing curve fitting on the left lane line outline and the right lane line outline to acquire left lane line information and right lane line information;
obtaining the central lane line information by using the left lane line information and the right lane line information.
A robotic navigation device, comprising:
the processing module is used for processing the driving area image of the robot and determining a region-of-interest image in the driving area image;
the determining module is used for determining obstacle position information and central lane line information in the region-of-interest image, wherein the central lane line information is used for representing an original driving path of the robot;
the acquisition module is used for updating the central lane line information according to the obstacle position information to obtain the updated central lane line information; and
the control module is used for controlling the autonomous navigation of the robot according to the updated central lane line information.
(III) Advantageous effects
According to the robot navigation method provided by the embodiment of the invention, the region-of-interest image in the driving area image is used as the original processing object, so that the calculation overhead of subsequent steps is reduced, the calculation speed is increased, and the consumption of computing resources is reduced. In addition, in the method, the obstacle position information and the center lane line information in the region-of-interest image are determined, so that lane line detection and obstacle detection are realized at the same time; the center lane line is updated for obstacle avoidance, and the autonomous navigation of the robot is controlled according to the updated center lane line information. This solves the problems of the prior art, namely unsatisfactory processing results and poor calculation accuracy under extreme conditions such as severe lane orientation deviation, interference from markings in the middle of the road, and tree shadows on both sides of the road. The robot navigation method provided by the embodiment of the invention also takes into account the computational constraints of practical engineering applications, strives to detect the drivable area of the mobile robot more accurately under those constraints, considers subsequent migration to the field of automatic driving, guarantees the real-time performance of the algorithm, and has high application value and commercial value.
Drawings
Fig. 1 is a flowchart of a robot navigation method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a normalized region-of-interest image and a region-of-interest image in a driving area image according to an embodiment of the present invention.
Fig. 3 is a network structure diagram of the Yolov3 network employed by the obstacle detection model according to the embodiment of the present invention.
Fig. 4 is a flow chart for determining center lane line information in a region of interest image in accordance with an embodiment of the present invention.
Fig. 5 is a block diagram of a robot navigation device according to an embodiment of the present invention.
Detailed Description
In order that the objects, technical solutions and advantages of the present invention will become more apparent, the present invention will be further described in detail with reference to the accompanying drawings in conjunction with the following specific embodiments.
In view of the poor calculation accuracy of traditional algorithms and the high computing-resource consumption of deep learning methods, it was found in the course of implementing the invention that a method combining the traditional algorithm with the deep learning method can achieve the best cost-effectiveness, saving computing resources while ensuring calculation accuracy.
In view of this, an embodiment of the present invention provides a robot navigation method, and fig. 1 is a flowchart illustrating a mobile robot autonomous navigation obstacle avoidance method based on deep learning and traditional image features. Specifically, the method includes the following operations S101 to S104:
in operation S101: processing the driving area image of the robot, and determining a region-of-interest image in the driving area image;
in operation S102: determining obstacle position information and center lane line information in the region-of-interest image, wherein the center lane line information is used for representing an original driving path of the robot;
in operation S103: updating the central lane line information according to the position information of the obstacle to obtain updated central lane line information; and
in operation S104: controlling the robot to navigate autonomously according to the updated central lane line information.
The robot navigation method provided by the embodiment of the invention uses the region-of-interest image in the driving area image as the original processing object, thereby reducing the calculation overhead of subsequent steps, increasing the calculation speed and reducing the consumption of computing resources. In addition, in the method, the obstacle position information and the center lane line information in the region-of-interest image are determined, so that lane line detection and obstacle detection are realized at the same time; the center lane line is updated for obstacle avoidance, and the autonomous navigation of the robot is controlled according to the updated center lane line information. This solves the problems of the prior art, namely unsatisfactory processing results and poor calculation accuracy under extreme conditions such as severe lane orientation deviation, interference from markings in the middle of the road, and tree shadows on both sides of the road.
Wherein, according to an embodiment of the present invention, in operation S101, processing the travel area image of the robot, and determining the region-of-interest image in the travel area image includes operations S201 to S204:
in operation S201: acquiring an image feature extraction model of a region of interest; specifically, the obtaining of the region-of-interest image feature extraction model includes:
(1) A skeleton network model is obtained. Since the classic image classification network AlexNet appeared, deep learning has developed rapidly, and with ResNet the accuracy of image classification became comparable to that of humans. Image classification networks are not only applicable to image classification; more importantly, they provide an effective feature extraction method: after a finite number of convolutions, image-related features can be extracted and shared by other image tasks. In the embodiment of the invention, ResNet, which has higher accuracy, is selected as the skeleton network model; its skip-connection structure ensures that the convolution information of low feature layers can be transmitted to higher convolution layers, guaranteeing the effectiveness of feature extraction.
(2) A fully connected layer is added to the skeleton network model ResNet to obtain an initial region-of-interest image feature extraction model. The selection of the region of interest is crucial, because road-scene content irrelevant to detection can be screened out in advance, reducing the calculation cost of subsequent steps; the feature map of the RoI is therefore chosen as the output of the skeleton network model. Since the skeleton structure of the skeleton network model is only used to extract image features, further compression is needed to obtain the expected result. In the embodiment of the invention, a fully connected layer is added to the skeleton network model to convert the output feature map into the center point coordinates of the region-of-interest image and the length-width pair of the region-of-interest image; that is, the conversion of the skeleton network features into the predetermined output values is done by the added fully connected layer. The center point coordinates and the length-width pair of the region-of-interest image represent the position and size of the region-of-interest image within the driving area image.
(3) After the initial region-of-interest image feature extraction model is established through the above operations, it needs to be trained to obtain the trained region-of-interest image feature extraction model. Specifically, because the ResNet network has already achieved good results on classification data sets, its network parameters are not retrained; only the single-layer fully connected network is trained, and the MSE (mean square error) loss function can be selected as the loss function during training.
According to the embodiment of the invention, an existing skeleton network model is selected as the base model, and the region-of-interest image feature extraction model is obtained after a fully connected layer is added to the skeleton network model and the model is adjusted and trained. Because only a small network (namely the fully connected layer) needs to be trained, training is fast and computing resources are saved. Meanwhile, adopting a mature network as the skeleton network model makes full use of the results already achieved in related fields, guarantees calculation accuracy, fully combines the advantages of deep learning and traditional image feature models, and pursues the best effect under a limited computational budget.
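By way of a non-limiting illustration, the following sketch shows one possible realization of such a region-of-interest image feature extraction model in PyTorch: a pretrained, frozen ResNet backbone whose classification layer is replaced by a single trainable fully connected layer regressing the four normalized RoI values, trained with an MSE loss. The choice of ResNet-18, the Sigmoid output and the names used are assumptions of this example rather than details specified by the embodiment.

```python
# Hypothetical sketch: ResNet backbone plus a single fully connected layer
# regressing normalized RoI values (cx, cy, w, h); only the new layer is trained.
import torch
import torch.nn as nn
from torchvision import models

class RoiFeatureExtractor(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        for p in backbone.parameters():          # freeze the pretrained weights
            p.requires_grad = False
        in_features = backbone.fc.in_features
        backbone.fc = nn.Identity()              # keep only the feature extractor
        self.backbone = backbone
        self.head = nn.Sequential(               # the single trainable layer
            nn.Linear(in_features, 4),
            nn.Sigmoid(),                        # outputs normalized to (0, 1)
        )

    def forward(self, x):
        return self.head(self.backbone(x))

model = RoiFeatureExtractor()
criterion = nn.MSELoss()                          # MSE loss, as in the description
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)

# one illustrative training step on a dummy batch
images = torch.randn(8, 3, 224, 224)              # driving area images
targets = torch.rand(8, 4)                        # normalized (cx, cy, w, h) labels
loss = criterion(model(images), targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```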
In operation S202: the driving area image of the robot is input into the region-of-interest image feature extraction model, and normalized region-of-interest image features are output. In order to keep the region-of-interest image feature extraction model effective, the data are normalized before each convolution layer; specifically, all data are normalized to (0, 1), and the convolutional network is designed and optimized for the normalized data. Accordingly, the output of the region-of-interest image feature extraction model is also normalized region-of-interest image features, which comprise the normalized center point coordinates of the region-of-interest image and the normalized length and normalized width of the region-of-interest image.
In operation S203: and converting the normalized region-of-interest image features into region-of-interest image features, wherein the region-of-interest image features comprise the center point coordinates of the region-of-interest image, and the length and width of the region-of-interest image. Since the normalized feature is output in operation S202, it is necessary to convert the normalized region-of-interest image feature into a region-of-interest image feature (actual image feature).
When converting, the following formulas are adopted:

x_center = w × x_predict (1)

y_center = h × y_predict (2)

(Formulas (3) and (4), which compute the width h_RoI and the length W_RoI of the region-of-interest image from h, w, h_predict, W_predict and the hyper-parameters P_h and P_w, appear as figures in the original text.)

where x_center and y_center denote the center point coordinates of the region-of-interest image, x_predict and y_predict denote the normalized center point coordinates of the region-of-interest image, h_RoI denotes the width of the region-of-interest image, W_RoI denotes the length of the region-of-interest image, h denotes the width of the driving area image, w denotes the length of the driving area image, h_predict denotes the normalized width of the region-of-interest image, W_predict denotes the normalized length of the region-of-interest image, and P_h and P_w are preset hyper-parameters.
This can be further understood with reference to fig. 2, which is a schematic diagram of a normalized region-of-interest image and a region-of-interest image in a driving area image according to an embodiment of the present invention. As shown in fig. 2, the estimated ROI region is the normalized ROI image framed according to the normalized center point coordinates and the normalized length and width of the ROI image, and the actual ROI region is the (actual) ROI image framed according to the center point coordinates and the length and width of the ROI image.
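A small helper along the lines of formulas (1) and (2) is sketched below for illustration. Because formulas (3) and (4) are not restated in this text, the width and length conversion shown here, which simply scales the normalized size by the image size and the hyper-parameters P_w and P_h, is an assumed placeholder rather than the formula of the embodiment.

```python
# Illustrative conversion of normalized RoI features to pixel coordinates.
# Equations (1) and (2) follow the description; the size conversion is an
# assumed stand-in for equations (3) and (4), which are not reproduced here.
def roi_to_pixels(x_pred, y_pred, w_pred, h_pred, w_img, h_img, p_w=1.0, p_h=1.0):
    x_center = w_img * x_pred                 # equation (1)
    y_center = h_img * y_pred                 # equation (2)
    w_roi = p_w * w_img * w_pred              # assumed form of equation (3)
    h_roi = p_h * h_img * h_pred              # assumed form of equation (4)
    return x_center, y_center, w_roi, h_roi

# e.g. frame the RoI in a 640 x 480 driving area image
cx, cy, w_roi, h_roi = roi_to_pixels(0.5, 0.6, 0.8, 0.4, 640, 480)
left, top = int(cx - w_roi / 2), int(cy - h_roi / 2)
right, bottom = int(cx + w_roi / 2), int(cy + h_roi / 2)
```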
In operation S204: the region-of-interest image is framed in the driving area image of the robot according to the region-of-interest image features, namely the actual ROI region framed in fig. 2.
According to an embodiment of the present invention, in operation S102: determining obstacle position information in the region-of-interest image includes:
the method comprises the steps of obtaining an obstacle detection model, inputting an image of a region of interest into the obstacle detection model, and outputting obstacle position information, specifically, outputting position coordinates of an obstacle.
While great technical leaps have been made in the field of image classification, the related technology of target detection networks has also matured and image detection has developed considerably, mainly into two-stage detection networks represented by the RCNN series and single-stage detection networks represented by networks such as Yolo and SSD. Yolov3, the third version of Yolo, achieves great leaps in both speed and accuracy, so the Yolov3 network is selected as the obstacle detection model in the embodiment of the invention. Fig. 3 is a network structure diagram of the Yolov3 network adopted by the obstacle detection model according to the embodiment of the invention, and the network structure of the Yolov3 network can be understood with reference to fig. 3. The Yolov3 network is trained on the VOC or COCO data sets and can identify more than 1000 common objects; therefore, the trained Yolov3 network is used to detect obstacles in the region-of-interest image, common obstacles can be detected, the detected obstacles are affine-transformed back to the driving area image, and the driving area image is labeled to obtain the obstacle position information.
It should be noted that, in order to obtain a better detection effect, the size of the image input into the obstacle detection model needs to be fixed; therefore, before the region-of-interest image is input into the obstacle detection model, it needs to be resized to a preset size using a suitable function and then input into the obstacle detection model.
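By way of a non-limiting illustration, the following sketch shows one common way to run a pretrained Yolov3 network on the resized region-of-interest image using OpenCV's DNN module; the configuration and weight file names, the 416 x 416 input size and the confidence threshold are assumptions of this example, not values specified by the embodiment.

```python
# Illustrative obstacle detection with a pretrained YOLOv3 model (OpenCV DNN).
# File names, input size and thresholds are assumed for the example.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
out_names = net.getUnconnectedOutLayersNames()

def detect_obstacles(roi_image, conf_threshold=0.5):
    h, w = roi_image.shape[:2]
    # resize to the fixed input size expected by the network
    blob = cv2.dnn.blobFromImage(roi_image, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    boxes = []
    for output in net.forward(out_names):
        for det in output:                      # det = [cx, cy, bw, bh, obj, classes...]
            scores = det[5:]
            if det[4] * scores.max() > conf_threshold:
                cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
                boxes.append((int(cx - bw / 2), int(cy - bh / 2),
                              int(bw), int(bh)))  # (x, y, width, height) in the RoI
    return boxes
```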
The pretrained Yolov3 network can detect more than 1000 objects, which is redundant for the application scenario of the embodiment of the invention. Therefore, in order to save calculation cost, the network can be simplified on the basis of the original Yolov3 network: considering that the obstacles mainly to be avoided on the road are objects such as cars and trucks, a small network containing only the obstacles of interest is trained. This improves the running speed, reduces the calculation cost, and better meets the real-time requirement. According to the embodiment of the present invention, optionally, after the network structure of the selected obstacle detection model is fine-tuned and trained, the trained obstacle detection model is used to obtain the obstacle position information. In this case, in operation S102, determining the obstacle position information in the region-of-interest image includes:
obtaining an obstacle detection model;
adjusting a network structure of the obstacle detection model;
training the adjusted obstacle detection model to obtain a trained obstacle detection model;
inputting the region-of-interest image into the trained obstacle detection model, and outputting the obstacle position information.
Fig. 4 is a flow chart for determining center lane line information in a region of interest image in accordance with an embodiment of the present invention.
According to an embodiment of the present invention, in operation S102: determining center lane line information in the region of interest image includes:
(1) Edge feature extraction is performed on the region-of-interest image by using an edge detection algorithm to obtain a rough lane line profile. The processing target here is the region-of-interest image framed in the driving area image of the robot according to the region-of-interest image features in operation S204, that is, the actual ROI region framed in fig. 2, which helps eliminate noise regions and improve calculation accuracy. Optionally, the edge detection algorithm may adopt a Sobel edge detection operator or a Canny edge detection operator to perform edge feature extraction on the region-of-interest image and obtain the rough lane line profile.
(2) Color feature extraction is performed on the region-of-interest image by using a color extraction algorithm to obtain a rough segmentation map. The color extraction algorithm generates a binary image in order to extract the lane area: the color of the lane line area is analyzed, the color space is transformed, and the lane area is extracted by color, finally yielding a binary image in which white pixels represent the lane area and black pixels represent the background. This operation may still extract noise regions with similar colors, so further denoising is needed.
Therefore, the rough segmentation map is filtered to remove irrelevant noise and obtain a fine segmentation map. The noise mainly arises in two situations: black noise appearing in a white area, and white noise appearing in a black area. The two situations can be handled as one problem by drawing on the idea of erosion and dilation and removing noise based on the surrounding neighbourhood. The specific algorithm is: if the pixels x units above and below a given pixel are both white, or both black, the pixel is set to that same color. Concretely, the image is shifted up by x units and combined with the original image, the image is shifted down by x units and combined with the original image, and the two results are then combined with the original image. After this denoising operation, a more accurate binary image distinguishing the drivable area from the background, namely the fine segmentation map, is obtained.
Then, edge feature extraction is performed on the fine segmentation map by using an edge detection algorithm, adopting a Sobel edge detection operator or a Canny edge detection operator, to obtain a fine lane line profile.
(3) The rough lane line profile and the fine lane line profile are merged to obtain the lane line profile. This is done mainly because new noise may be introduced when edge feature extraction is performed on the fine segmentation map; therefore the rough lane line profile and the fine lane line profile are merged, that is, the edge points regarded as lane line profile points on the two profile images are finally marked as the lane line profile.
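To make steps (1) to (3) concrete, a minimal OpenCV/NumPy sketch of the lane line profile extraction is given below, assuming an HSV color threshold for the lane color, a vertical neighbour-based denoising with a shift of x rows, and Canny edge detection; the threshold values and the exact form of the denoising combination are assumptions, since the embodiment leaves them unspecified.

```python
# Illustrative lane line profile extraction for the region-of-interest image.
# HSV thresholds, the shift distance x and the denoise combination are assumed.
import cv2
import numpy as np

def lane_contours(roi_bgr, x=5):
    # rough profile: edge detection directly on the RoI image
    gray = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2GRAY)
    rough_edges = cv2.Canny(gray, 50, 150)

    # rough segmentation: transform the color space and threshold the lane color
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
    rough_seg = cv2.inRange(hsv, (0, 0, 180), (180, 60, 255))   # assumed whitish lanes

    # denoise: a pixel follows its neighbours x rows above and below
    up = np.roll(rough_seg, -x, axis=0)
    down = np.roll(rough_seg, x, axis=0)
    fine_seg = (up & down) | (rough_seg & (up | down))           # one interpretation

    # fine profile: edge detection on the fine segmentation map
    fine_edges = cv2.Canny(fine_seg, 50, 150)

    # merge the rough and fine lane line profiles
    return rough_edges | fine_edges
```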
(4) Clustering the lane line contours to obtain a left lane line contour and a right lane line contour which are respectively marked by pixel points; the lane line profile marked by the pixel points still cannot meet the subsequent fitting requirement, so that the lane line profiles need to be clustered to obtain a left lane line profile and a right lane line profile.
(5) Curve fitting is performed on the pixels clustered by the above operation to obtain the final result: curve fitting is performed on the left lane line contour and the right lane line contour respectively to obtain the left lane line information and the right lane line information, namely a left lane line function curve and a right lane line function curve.
(6) The center lane line information is obtained by using the left lane line information and the right lane line information. Specifically, based on the two lane line function curves obtained by the above operation, the middle line of the two lane lines is fitted, that is, a center function curve is obtained by averaging the two function curves, and the corresponding labeling is performed in the driving area image. This center function curve is labeled as the center line of the road (i.e., the center lane line information is obtained). Ideally, the mobile robot should travel forward along this center lane line, which is the best curve for ensuring that the heading does not deviate; even in the case of a severely skewed lane orientation, it ensures that the robot does not deviate from the driving route.
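A compact sketch of steps (4) to (6) is given below for illustration, assuming a simple two-cluster split of the contour pixels (KMeans on the horizontal coordinate) and quadratic polynomial fitting; the clustering method and the polynomial degree are assumptions consistent with, but not mandated by, the description above.

```python
# Illustrative steps (4) to (6): cluster contour pixels into left/right lane lines,
# fit each with a quadratic curve, and average the fits to get the center lane line.
import numpy as np
from sklearn.cluster import KMeans

def center_lane_line(lane_contour):
    ys, xs = np.nonzero(lane_contour)                  # contour pixel coordinates
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(xs.reshape(-1, 1))
    # the cluster with the smaller mean x is taken as the left lane line
    left = labels == (0 if xs[labels == 0].mean() < xs[labels == 1].mean() else 1)
    left_fit = np.polyfit(ys[left], xs[left], 2)       # fit x = f(y), quadratic
    right_fit = np.polyfit(ys[~left], xs[~left], 2)
    center_fit = (left_fit + right_fit) / 2.0          # average of the two curves
    return left_fit, right_fit, center_fit

# e.g. evaluate the center line at every image row:
# xs_center = np.polyval(center_fit, np.arange(roi_height))
```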
According to an embodiment of the present invention, in operation S103, updating the center lane line information according to the obstacle position information, and obtaining the updated center lane line information includes:
first, an obstacle is marked in the region-of-interest image according to the obstacle position coordinates detected in operation S102. Then, determining the edge line of the obstacle in the interested area image to determine the non-driving area; due to the presence of the obstacle, the influence of the obstacle on the lane center line needs to be sufficiently considered, and therefore the obstacle coordinate center is marked as a no-drive area by default. And finally, updating the central lane line information according to the non-driving area to obtain the updated central lane line information. This operation is to perform obstacle avoidance update on the center lane line, and a specific method of obtaining updated center lane line information is described in the method of "obtaining center lane line information using left lane line information and right lane line information" in the step (6) of the detailed description section of "determining center lane line information in region of interest" in operation S102.
According to an embodiment of the present invention, the controlling the autonomous navigation of the robot according to the updated center lane line information in operation S104 includes:
The deviation between the current position of the robot and the updated center lane line is calculated, and the traveling speed and the rotation angle of the robot are calculated from the deviation by the controller, thereby controlling the autonomous navigation of the mobile robot.
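For illustration only, a minimal proportional-style controller consistent with this description is sketched below; the gains, the normalization and the specific control law are assumptions, since the embodiment does not specify the controller.

```python
# Illustrative controller: speed and rotation angle from the lateral deviation
# between the robot position and the updated center lane line (gains assumed).
def compute_command(deviation_px, image_width, v_max=0.5, k_turn=1.0):
    error = deviation_px / (image_width / 2.0)      # normalize deviation to [-1, 1]
    angle = k_turn * error                          # rotate toward the center line
    speed = v_max * (1.0 - min(abs(error), 1.0))    # slow down when far off-center
    return speed, angle

speed, angle = compute_command(deviation_px=40, image_width=640)
```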
Fig. 5 is a block diagram illustrating a robot navigation apparatus 500 according to an embodiment of the present invention, where the robot navigation apparatus 500 may be used to implement the method described with reference to fig. 1. As shown in fig. 5, the robot navigation device 500 includes: a processing module 501, a determining module 502, an obtaining module 503 and a control module 504. Specifically, the method comprises the following steps:
the processing module 501 is configured to process a driving area image of the robot and determine an area-of-interest image in the driving area image;
a determining module 502, configured to determine obstacle position information and center lane line information in the region of interest image, where the center lane line information is used to represent an original driving path of the robot;
the obtaining module 503 is configured to update the center lane line information according to the obstacle position information, so as to obtain updated center lane line information; and
and the control module 504 is configured to control autonomous navigation of the robot according to the updated center lane line information.
According to the embodiment of the invention, the processing module 501 determines the region-of-interest image in the driving area image, and using the region-of-interest image as the original processing object reduces the calculation overhead of subsequent steps, increases the calculation speed and reduces the consumption of computing resources. The obstacle position information and the center lane line information in the region-of-interest image are determined by the determining module 502, so that lane line detection and obstacle detection are realized at the same time; the center lane line is updated for obstacle avoidance by the acquisition module 503, and the autonomous navigation of the robot is controlled according to the updated center lane line information, which solves the problems of the prior art, namely unsatisfactory processing results and poor calculation accuracy under extreme conditions such as severe lane orientation deviation, interference from markings in the middle of the road, and tree shadows on both sides of the road.
It should be noted that the robot navigation device portion in the embodiment of the present disclosure corresponds to the robot navigation method portion in the embodiment of the present disclosure, and the description of the robot navigation device portion specifically refers to the robot navigation method portion, which is not described herein again.
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
For example, any plurality of the processing module 501, the determining module 502, the obtaining module 503 and the controlling module 504 may be combined and implemented in one module/unit/sub-unit, or any one of the modules/units/sub-units may be split into a plurality of modules/units/sub-units. Alternatively, at least part of the functionality of one or more of these modules/units/sub-units may be combined with at least part of the functionality of other modules/units/sub-units and implemented in one module/unit/sub-unit. According to an embodiment of the present disclosure, at least one of the processing module 501, the determining module 502, the obtaining module 503 and the controlling module 504 may be implemented at least partially as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or implemented by any one of three implementations of software, hardware and firmware, or any suitable combination of any of them. Alternatively, at least one of the processing module 501, the determining module 502, the obtaining module 503, the controlling module 504 may be at least partly implemented as a computer program module, which when executed may perform a corresponding function.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention and are not intended to limit the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A robot navigation method, comprising:
processing a driving area image of the robot, and determining a region-of-interest image in the driving area image;
determining obstacle position information and central lane line information in the region-of-interest image, wherein the central lane line information is used for representing an original driving path of the robot;
updating the central lane line information according to the position information of the obstacle to obtain updated central lane line information; and
controlling the robot to navigate autonomously according to the updated central lane line information.
2. The method of claim 1, wherein updating the center lane line information according to the obstacle position information, and obtaining updated center lane line information comprises:
marking the obstacle in the region of interest image;
determining an edge line of the obstacle in the region-of-interest image to determine a non-drivable area; and
updating the central lane line information according to the non-drivable area to obtain the updated central lane line information.
3. The method of claim 1, wherein controlling the robotic autonomous navigation according to the updated center lane line information comprises:
calculating the deviation between the current position of the robot and the updated central lane line; and
calculating the traveling speed and the rotation angle of the robot according to the deviation.
4. The method of claim 1, wherein processing the driving area image of the robot and determining the region-of-interest image in the driving area image comprises:
acquiring an image feature extraction model of a region of interest;
inputting the driving area image of the robot into the region-of-interest image feature extraction model, and outputting normalized region-of-interest image features, wherein the normalized region-of-interest image features comprise normalized center point coordinates of the region-of-interest image, and normalized length and normalized width of the region-of-interest image;
converting the normalized region-of-interest image features into region-of-interest image features, wherein the region-of-interest image features include center point coordinates of the region-of-interest image, and a length and a width of the region-of-interest image; and
framing the region-of-interest image in the driving area image of the robot according to the region-of-interest image features.
5. The method of claim 4, wherein said acquiring a region of interest image feature extraction model comprises:
obtaining a skeleton network model;
adding a full connection layer in the skeleton network model to obtain an initial region-of-interest image feature extraction model;
training the initial region-of-interest image feature extraction model to obtain the trained region-of-interest image feature extraction model.
6. The method of claim 1, wherein determining obstacle location information in the region of interest image comprises:
obtaining an obstacle detection model;
inputting the region-of-interest image into the obstacle detection model, and outputting the obstacle position information.
7. The method of claim 1, wherein determining obstacle location information in the region of interest image comprises:
obtaining an obstacle detection model;
adjusting a network structure of the obstacle detection model;
training the adjusted obstacle detection model to obtain a trained obstacle detection model;
inputting the region-of-interest image into the trained obstacle detection model, and outputting the obstacle position information.
8. The method of claim 1, wherein determining center lane line information in the region of interest image comprises:
performing edge feature extraction on the image of the region of interest by using an edge detection algorithm to obtain a rough lane line profile;
carrying out color feature extraction on the image of the region of interest by using a color extraction algorithm to obtain a rough segmentation map, carrying out irrelevant noise filtering on the rough segmentation map to obtain a fine segmentation map, and carrying out edge feature extraction on the fine segmentation map by using an edge detection algorithm to obtain a fine lane line profile;
merging the rough lane line profile and the fine lane line profile to obtain a lane line profile;
obtaining the central lane line information by using the lane line profile.
9. The method of claim 8, wherein obtaining the center lane line information by using the lane line profile comprises:
clustering the lane line profiles to obtain a left lane line profile and a right lane line profile which are respectively marked by pixel points;
respectively performing curve fitting on the left lane line profile and the right lane line profile to obtain left lane line information and right lane line information;
obtaining the central lane line information by using the left lane line information and the right lane line information.
10. A robotic navigation device, comprising:
the processing module is used for processing a driving area image of the robot and determining a region-of-interest image in the driving area image;
the determination module is used for determining obstacle position information and central lane line information in the region-of-interest image, wherein the central lane line information is used for representing an original driving path of the robot;
the acquisition module is used for updating the central lane line information according to the obstacle position information to obtain updated central lane line information; and
the control module is used for controlling the autonomous navigation of the robot according to the updated central lane line information.
CN202110116337.9A 2021-01-26 2021-01-26 Robot navigation method and device Active CN112902981B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110116337.9A CN112902981B (en) 2021-01-26 2021-01-26 Robot navigation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110116337.9A CN112902981B (en) 2021-01-26 2021-01-26 Robot navigation method and device

Publications (2)

Publication Number Publication Date
CN112902981A (en) 2021-06-04
CN112902981B CN112902981B (en) 2024-01-09

Family

ID=76119402

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110116337.9A Active CN112902981B (en) 2021-01-26 2021-01-26 Robot navigation method and device

Country Status (1)

Country Link
CN (1) CN112902981B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113269130A (en) * 2021-06-11 2021-08-17 国电瑞源(西安)智能研究院有限公司 Visual path searching method based on artificial neural network
CN115082898A (en) * 2022-07-04 2022-09-20 小米汽车科技有限公司 Obstacle detection method, obstacle detection device, vehicle, and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150146374A (en) * 2014-06-20 2015-12-31 주식회사 세인전장 System for lane recognition using environmental information and method thereof
CN112214022A (en) * 2016-10-11 2021-01-12 御眼视觉技术有限公司 Navigating a vehicle based on detected obstacles
CN107860391A (en) * 2017-02-13 2018-03-30 问众智能信息科技(北京)有限公司 Automobile accurate navigation method and device
WO2020199593A1 (en) * 2019-04-04 2020-10-08 平安科技(深圳)有限公司 Image segmentation model training method and apparatus, image segmentation method and apparatus, and device and medium
CN110962847A (en) * 2019-11-26 2020-04-07 清华大学苏州汽车研究院(吴江) Lane centering auxiliary self-adaptive cruise trajectory planning method and system
CN111178253A (en) * 2019-12-27 2020-05-19 深圳佑驾创新科技有限公司 Visual perception method and device for automatic driving, computer equipment and storage medium
CN111666921A (en) * 2020-06-30 2020-09-15 腾讯科技(深圳)有限公司 Vehicle control method, apparatus, computer device, and computer-readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张文影; 李礼夫: "Adaptive segmentation algorithm of region of interest for vehicle and pedestrian detection" (针对车辆与行人检测的感兴趣区域自适应分割算法), Science Technology and Engineering, no. 05, pages 1967-1972 *


Also Published As

Publication number Publication date
CN112902981B (en) 2024-01-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant