CN110263713B - Lane line detection method, lane line detection device, electronic device, and storage medium - Google Patents
- Publication number
- CN110263713B (application CN201910536130.XA)
- Authority
- CN
- China
- Prior art keywords
- lane
- lane line
- image
- boundary point
- grid
- Prior art date
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The application provides a lane line detection method and apparatus, an electronic device, and a storage medium. The method comprises: inputting an acquired image to be detected into a preset target detection model, and acquiring first detection information, second detection information, and third detection information for each grid in the image; performing non-maximum suppression on the first and second detection information of each grid respectively, to acquire the position of each lane line boundary point, the position of each lane center point, and the corresponding lane width in the image; and determining the lane lines in the image according to the positions of the lane line boundary points, the positions of the lane center points and the corresponding lane widths, and the directions and positions of the lane line boundary points. The lane line detection method thereby reduces fitting error and improves detection accuracy while keeping computational complexity low and runtime short.
Description
Technical Field
The present disclosure relates to the field of computer application technologies, and in particular, to a lane line detection method and apparatus, an electronic device, and a storage medium.
Background
In autonomous driving scenarios, lane lines are important static semantic information, and their curvature and direction in particular are significant inputs to driving decisions.
In the related art, most lane line detection methods first detect lane line candidates or boundary points, fit a lane line to them, and then compute the curvature at each point from the fitting equation to obtain the curvature and direction of the lane line. However, the fitting step introduces fitting errors that reduce the accuracy of the detected lane line direction, and an additional algorithm must be designed after detection to compute the direction information, so these methods have high computational complexity and long runtimes.
Disclosure of Invention
The lane line detection method and apparatus, electronic device, and storage medium provided herein address the problems of the lane line detection methods in the related art: low accuracy in detecting the lane line direction, high algorithmic complexity, and long runtime.
An embodiment of one aspect of the present application provides a lane line detection method, including: acquiring an image to be detected; inputting the image into a preset target detection model, and acquiring first detection information, second detection information, and third detection information of each grid in the image, wherein the first detection information includes a lane line boundary point lateral deviation and a lane line boundary point score; the second detection information includes, for each prediction frame, a lane center point lateral deviation, a lane center point score, and a prediction frame width adjustment value; and the third detection information includes the angle between the horizontal direction and the line connecting the lane line boundary point in the grid with the lane line boundary point in the adjacent grid above; performing non-maximum suppression on the first detection information of each grid to acquire the position of each lane line boundary point in the image; performing non-maximum suppression on the second detection information of each grid to acquire the position of each lane center point in the image and the corresponding lane width; and determining the lane lines in the image according to the positions of the lane line boundary points, the positions of the lane center points and the corresponding lane widths, and the directions and positions of the lane line boundary points.
An embodiment of another aspect of the present application provides a lane line detection apparatus, including: an acquisition module, configured to acquire an image to be detected; an input module, configured to input the image into a preset target detection model and acquire first detection information, second detection information, and third detection information of each grid in the image, where the first detection information includes a lane line boundary point lateral deviation and a lane line boundary point score; the second detection information includes, for each prediction frame, a lane center point lateral deviation, a lane center point score, and a prediction frame width adjustment value; and the third detection information includes the angle between the horizontal direction and the line connecting the lane line boundary point in the grid with the lane line boundary point in the adjacent grid above; a first processing module, configured to perform non-maximum suppression on the first detection information of each grid and acquire the position of each lane line boundary point in the image; a second processing module, configured to perform non-maximum suppression on the second detection information of each grid and acquire the position of each lane center point in the image and the corresponding lane width; and a determining module, configured to determine the lane lines in the image according to the positions of the lane line boundary points, the positions of the lane center points and the corresponding lane widths, and the directions and positions of the lane line boundary points.
An embodiment of another aspect of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the lane line detection method described above.
In another aspect, the present application provides a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the lane line detection method described above.
In yet another aspect, the present application provides a computer program which, when executed by a processor, implements the lane line detection method according to the embodiments of the present application.
With the lane line detection method and apparatus, electronic device, computer-readable storage medium, and computer program provided in the embodiments of the present application, an acquired image to be detected is input into a preset target detection model to acquire the first, second, and third detection information of each grid in the image; non-maximum suppression is performed on the first and second detection information of each grid respectively, to acquire the position of each lane line boundary point, the position of each lane center point, and the corresponding lane width in the image; and the lane lines in the image are then determined from the positions of the lane line boundary points, the positions of the lane center points and the corresponding lane widths, and the directions and positions of the lane line boundary points. Because the image to be detected is divided into grids and the trained target detection model detects the lane line boundary points, their directions, the lane center points, and the lane widths in each grid, the lane lines can be determined directly from these detections: the direction information of the lane lines is obtained directly during detection, fitting error is reduced, detection accuracy is improved, and the computation is both simple and fast.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flow chart of a lane line detection method according to an embodiment of the present disclosure;
FIG. 2 is a schematic view of an angle between a horizontal direction and a line connecting a lane line boundary point in a grid and a lane line boundary point in an adjacent grid above;
fig. 3 is a schematic flowchart of another lane line detection method according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a lane line detection apparatus according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements throughout. The embodiments described below with reference to the drawings are exemplary, are intended to explain the present application, and should not be construed as limiting it.
To address the problems of the lane line detection methods in the related art, namely low lane line direction detection accuracy, high algorithmic complexity, and long runtime, the embodiments of the present application provide a lane line detection method.
In the lane line detection method provided by the embodiments of the present application, an acquired image to be detected is input into a preset target detection model to acquire the first, second, and third detection information of each grid in the image; non-maximum suppression is performed on the first and second detection information of each grid respectively, to acquire the position of each lane line boundary point, the position of each lane center point, and the corresponding lane width in the image; and the lane lines in the image are then determined from the positions of the lane line boundary points, the positions of the lane center points and the corresponding lane widths, and the directions and positions of the lane line boundary points. Because the image is divided into grids and the trained target detection model detects the lane line boundary points, their directions, the lane center points, and the lane widths in each grid, the direction information of the lane lines is obtained directly during detection, fitting error is reduced, detection accuracy is improved, and the computation is both simple and fast.
The following describes in detail a lane line detection method, apparatus, electronic device, storage medium, and computer program provided by the present application with reference to the drawings.
Fig. 1 is a schematic flow chart of a lane line detection method according to an embodiment of the present disclosure.
As shown in fig. 1, the lane line detection method includes the following steps:
It should be noted that the lane line detection method of the embodiments of the present application may be executed by the lane line detection apparatus provided in the present application. In practice, the method can be applied in the field of autonomous driving to provide road condition information to autonomous vehicles, so the lane line detection apparatus may be configured in any vehicle to execute the lane line detection method provided herein.
Step 101: acquiring an image to be detected.
In the embodiments of the present application, how the image to be detected is acquired may depend on the specific application scenario. For example, when the lane line detection apparatus is applied to an autonomous vehicle, road imagery of the scene ahead captured by a camera in the vehicle may be used as the image to be detected. Specifically, the lane line detection apparatus may establish a direct communication connection with the camera to acquire real-time images, or the camera may store captured images in a storage device of the vehicle, from which the lane line detection apparatus may then acquire the image to be detected.
Step 102: inputting the image into a preset target detection model, and acquiring first detection information, second detection information, and third detection information of each grid in the image.
The preset target detection model may be a pre-trained one-stage object detection model, such as a YOLO v2 model ("You Only Look Once: Unified, Real-Time Object Detection", version 2) or a Single Shot MultiBox Detector (SSD) model, but is not limited thereto.
The lane line boundary point lateral deviation refers to the lateral deviation between the lane line boundary point and the upper left corner coordinate of the grid where the lane line boundary point is located; the lane line boundary point score is the confidence of the lane line boundary point, and can reflect the reliability of the predicted lane line boundary point.
The prediction frames are defined in the preset target detection model; each has a certain size and position, is not directly tied to the image to be detected or to any grid in it, and serves as a tool for performing target detection on the image. In practice, the number, initial sizes, and positions of the prediction frames may be preset according to actual needs such as the required prediction accuracy and acceptable computational complexity, which is not limited in the embodiments of the present application; for example, the number of prediction frames may be 5.
The lane center point lateral deviation refers to the lateral deviation between the lane center point and the upper-left-corner coordinates of the grid in which it lies; the lane center point score is the confidence of the lane center point corresponding to the prediction frame, reflecting how reliable that predicted center point is; and the prediction frame width adjustment value is used to adjust the width of the prediction frame to obtain its current width.
Preferably, since the target detection model in the embodiments of the present application is used to detect the lane center point and the lane width, the prediction frame may be defined as a line segment with a certain position and width, so that the detection information only needs to include a width adjustment value for the prediction frame.
The angle between the horizontal direction and the line connecting the lane line boundary point in a grid with the lane line boundary point in the adjacent grid above provides the direction information of the lane line.
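For concreteness, the detection information described above can be pictured as a set of per-grid output tensors. The following is a minimal sketch; the tensor layout, grid size, and anchor count are illustrative assumptions rather than values fixed by this description:

```python
import numpy as np

# Assumed illustrative layout: a 1920x640 image divided into 16x16 grids
# gives a 40x120 feature map (rows x cols). Per grid cell the model regresses:
#   first detection info:  boundary-point lateral offset dx_b, boundary score s_b
#   second detection info: per prediction frame (anchor), center offset dx_c,
#                          center score s_c, width adjustment dw
#   third detection info:  normalized angle theta in [0, 1]
GRID = 16
ROWS, COLS = 640 // GRID, 1920 // GRID   # 40, 120
NUM_ANCHORS = 5                          # e.g., 5 prediction frames

first_info = np.zeros((ROWS, COLS, 2))               # [dx_b, s_b]
second_info = np.zeros((ROWS, COLS, NUM_ANCHORS, 3)) # [dx_c, s_c, dw]
third_info = np.zeros((ROWS, COLS, 1))               # [theta]
```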
As a possible implementation, to ensure training stability, the angle may be normalized to between 0 and 1. Fig. 2 is a schematic diagram of the angle between the horizontal direction and the line connecting a lane line boundary point in a grid with the lane line boundary point in the adjacent grid above, where θ is that angle. Assuming the coordinates of a lane line boundary point are (x0, y0) and the coordinates of the lane line boundary point in the adjacent grid above are (x1, y1), the angle θ between the horizontal direction and the line connecting the two points can be expressed as θ = arctan((y0 - y1) / (x1 - x0)) in image coordinates (where y increases downward), and can then be mapped onto [0, 1] for training.
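During label generation, the angle for the third detection information could be computed and normalized as in the sketch below; the fold onto [0, pi) and the division by pi are one plausible scheme for mapping the angle to between 0 and 1:

```python
import math

def boundary_angle(p, p_above):
    """Angle between the horizontal direction and the line from boundary
    point p = (x0, y0) to the boundary point p_above = (x1, y1) in the
    adjacent grid above, normalized to [0, 1).

    Image coordinates: y grows downward, so (y0 - y1) > 0 means
    p_above lies above p.
    """
    (x0, y0), (x1, y1) = p, p_above
    theta = math.atan2(y0 - y1, x1 - x0)   # in (-pi, pi]
    if theta < 0:                          # fold onto [0, pi): a line's
        theta += math.pi                   # direction is defined modulo pi
    return theta / math.pi                 # normalize to [0, 1)

# Example: the point one grid up and one grid left -> 135 degrees -> 0.75
print(boundary_angle((32, 32), (16, 16)))  # 0.75
```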
in the embodiment of the application, an image to be detected may be firstly divided into a plurality of grids, the image to be detected is input into a preset target detection model, and a feature map of the image is obtained through a convolution portion of the preset target detection model, wherein each point in the feature map corresponds to one grid in the image. And then, according to the obtained feature map and the image to be detected, acquiring first detection information, second detection information and third detection information of each grid in the image through a regression part of a preset target detection model.
It should be noted that each grid in the image is used to predict targets centered within it. In practice, the grid size can be preset according to actual needs, which is not limited in the embodiments of the present application. For example, for an image to be detected of 1920 x 640 pixels divided into grids of 16 x 16 pixels, the resulting feature map has a size of 120 x 40.
Step 103: performing non-maximum suppression on the first detection information of each grid to acquire the position of each lane line boundary point in the image.
In the embodiments of the present application, when lane line boundary points are predicted, the prediction accuracy may differ from grid to grid, so the lane line boundary point lateral deviation in the first detection information of some grids may have a large error. Therefore, the grids whose first detection information contains an accurate lane line boundary point lateral deviation can be selected from the first detection information of all grids, and the position of each lane line boundary point in the image can then be determined from the lateral deviations of those selected grids.
Specifically, the grids whose first detection information contains a more accurate lane line boundary point lateral deviation can be determined by performing non-maximum suppression on the first detection information of each grid. That is, in a possible implementation form of the embodiments of the present application, step 103 may include:
for each row of grids in the image, selecting, at intervals of a preset step length, the grids whose lane line boundary point score is greater than a first threshold as target grids;
and for each target grid, determining the position of a lane line boundary point in the image according to the lateral deviation of the lane line boundary point corresponding to the target grid and the coordinates of the target grid.
In the embodiment of the application, the lane line boundary point score in the first detection information may reflect the accuracy of the lateral deviation prediction of the lane line boundary point in the first detection information, so that the target grid may be determined according to the lane line boundary point score corresponding to the first detection information of each grid.
Specifically, the larger the lane line boundary point score in the first detection information, the more accurate the corresponding lane line boundary point lateral deviation; therefore, in each row of grids, at every preset step length, the grid whose lane line boundary point score is greater than the first threshold is determined as a target grid.
For example, suppose the image to be detected is 1920 x 640 pixels and each grid is 16 x 16 pixels, so the image contains 120 x 40 grids, and suppose the preset step length is 160 pixels. Then, in each row of grids, every 160 pixels (that is, every 10 grids) it is determined whether those 10 grids contain a grid whose lane line boundary point score is greater than the first threshold; if so, that grid is determined to be a target grid.
It should be noted that the above examples are only illustrative and should not be construed as limiting the present application. In actual use, the step length and the first threshold value may be preset according to actual needs, which is not limited in the embodiment of the present application.
In the embodiment of the application, after the target grids are determined, the position of the lane line boundary point corresponding to each target grid may be determined according to the lateral deviation of the lane line boundary point corresponding to the first detection information of each target grid and the coordinates of the target grids, so as to determine all lane line boundary points in the image, that is, one lane line boundary point in the image corresponding to each target grid.
Specifically, the lane line boundary point lateral deviation is the difference between the abscissa of the lane line boundary point and the abscissa of the upper left corner of the grid to which it belongs. The coordinates of the upper left corner of the target grid can therefore be determined first, and the coordinates of the lane line boundary point corresponding to the target grid, that is, the position of one lane line boundary point in the image, can then be determined from the lane line boundary point lateral deviation in the first detection information of the target grid.
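One reading of this per-row selection and decoding, as a sketch (function and variable names are illustrative; a step of 10 grids corresponds to the 160-pixel example above):

```python
import numpy as np

def decode_boundary_points(first_info, grid=16, step_grids=10, score_thresh=0.5):
    """Non-maximum suppression over the first detection information.

    first_info: array of shape (rows, cols, 2) holding, per grid,
    [lateral offset dx_b in pixels, boundary-point score s_b].
    Returns a list of (x, y) boundary-point positions in the image.
    """
    rows, cols = first_info.shape[:2]
    points = []
    for r in range(rows):
        for c0 in range(0, cols, step_grids):       # one window per step
            window = first_info[r, c0:c0 + step_grids]
            best = int(np.argmax(window[:, 1]))     # highest score in window
            if window[best, 1] > score_thresh:      # keep only confident grids
                c = c0 + best
                x = c * grid + window[best, 0]      # grid top-left x + offset
                y = r * grid                        # grid top-left y
                points.append((x, y))
    return points
```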
Step 104: performing non-maximum suppression on the second detection information of each grid to acquire the position of each lane center point in the image and the corresponding lane width.
In the embodiments of the present application, a plurality of prediction frames are preset to detect the target (namely, the lane center point) in each grid of the image, so as to ensure the accuracy of lane line detection. Because the prediction frames have different sizes, the accuracy of the second detection information differs between prediction frames. The most accurate prediction frame for each grid can therefore be determined from the second detection information of that grid, and the position of the lane center point in each grid and the corresponding lane width, that is, the position of each lane center point in the image and its lane width, can then be determined from the lane center point lateral deviation and the prediction frame width adjustment value of that most accurate prediction frame.
Specifically, non-maximum suppression may be performed on the second detection information of each grid to determine the most accurate prediction frame for each grid, and thereby the position of each lane center point in the image and the corresponding lane width. That is, in a possible implementation form of the embodiments of the present application, step 104 may include:
for each grid in the image, determining a prediction frame with the maximum score of the center point of the corresponding lane in the grid as an optimal prediction frame corresponding to the grid;
for each row of grids, selecting, at intervals of a preset step length, the optimal prediction frames whose lane center point score is greater than a second threshold as target prediction frames;
and for each target prediction frame, determining the position of a lane center point in the image and the corresponding lane width according to the lane center point lateral deviation and the prediction frame width adjustment value corresponding to the target prediction frame.
In the embodiments of the present application, the lane center point score corresponding to a prediction frame reflects how accurately that prediction frame locates the lane center point, so the optimal prediction frame for each grid can be determined from the lane center point scores of the prediction frames of that grid. Specifically, the larger the lane center point score of a prediction frame, the more accurate its lane center point lateral deviation; the prediction frame with the largest lane center point score in each grid is therefore determined to be the optimal prediction frame for that grid.
After the optimal prediction frame corresponding to each grid in the image is determined, the target prediction frames for each row of grids can be selected from the optimal prediction frames of that row at the preset step length. Specifically, at every preset step length, an optimal prediction frame whose lane center point score is greater than the second threshold is determined to be a target prediction frame.
For example, suppose again that the image to be detected is 1920 x 640 pixels, each grid is 16 x 16 pixels (so the image contains 120 x 40 grids), and the preset step length is 160 pixels. Then, in each row of grids, every 160 pixels (that is, every 10 grids) it is determined whether the optimal prediction frames of those 10 grids include one whose lane center point score is greater than the second threshold; if so, that optimal prediction frame is determined to be a target prediction frame.
It should be noted that the above examples are only illustrative and should not be construed as limiting the present application. In actual use, the step length and the second threshold value may be preset according to actual needs, which is not limited in the embodiment of the present application.
In the embodiments of the present application, after the target prediction frames are determined, the position of the lane center point and the corresponding lane width for each target prediction frame may be determined from its lane center point lateral deviation and prediction frame width adjustment value, thereby determining the positions of all lane center points in the image and their corresponding lane widths, with each target prediction frame corresponding to one lane center point in the image.
Specifically, the position of the lane center point in the image and the corresponding lane width may be determined from the lane center point lateral deviation and the prediction frame width adjustment value of the target prediction frame as follows:
for each target prediction frame, determining the position of a lane center point in the image according to the lane center point lateral deviation corresponding to the target prediction frame and the coordinates of the grid to which the target prediction frame belongs;
and determining the lane width corresponding to the lane center point according to the prediction frame width adjustment value corresponding to the target prediction frame and the width of the target prediction frame.
In the embodiments of the present application, the lane center point lateral deviation corresponding to a prediction frame is the difference between the abscissa of the lane center point and the abscissa of the upper left corner of the grid to which the prediction frame belongs. The coordinates of that upper left corner can therefore be determined from the position of the target prediction frame, and the coordinates of the lane center point corresponding to the target prediction frame, that is, the position of one lane center point in the image, can then be determined from the lane center point lateral deviation.
It should be noted that when the preset target detection model is trained, the lane width in the training data may be used as the width of the prediction frame, so that at detection time the current width of the prediction frame can be taken as the lane width. The lane width corresponding to the lane center point of a target prediction frame can therefore be determined from the prediction frame width adjustment value and the width of the target prediction frame. Specifically, the sum of the width of the target prediction frame and its prediction frame width adjustment value may be taken as the lane width corresponding to the lane center point of that target prediction frame.
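The same style of sketch for step 104, combining the per-grid choice of optimal prediction frame with the per-row selection and the width decoding (the names and the argmax-per-window selection are illustrative assumptions):

```python
import numpy as np

def decode_lane_centers(second_info, anchor_widths, grid=16,
                        step_grids=10, score_thresh=0.5):
    """Non-maximum suppression over the second detection information.

    second_info: array of shape (rows, cols, num_anchors, 3) holding
    [center offset dx_c, center score s_c, width adjustment dw] per
    prediction frame; anchor_widths: base width of each prediction frame.
    Returns a list of (x, y, lane_width) tuples.
    """
    rows, cols = second_info.shape[:2]
    centers = []
    for r in range(rows):
        # Per grid, keep the prediction frame with the highest center score.
        best_anchor = np.argmax(second_info[r, :, :, 1], axis=1)
        best = second_info[r, np.arange(cols), best_anchor]      # (cols, 3)
        for c0 in range(0, cols, step_grids):
            window = best[c0:c0 + step_grids]
            k = int(np.argmax(window[:, 1]))
            if window[k, 1] > score_thresh:
                c = c0 + k
                x = c * grid + window[k, 0]                      # top-left x + offset
                y = r * grid
                # Lane width = base prediction-frame width + adjustment value.
                width = anchor_widths[best_anchor[c]] + window[k, 2]
                centers.append((x, y, width))
    return centers
```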
Step 105: determining the lane lines in the image according to the positions of the lane line boundary points, the positions of the lane center points and the corresponding lane widths, and the directions and positions of the lane line boundary points.
In the embodiments of the present application, after the position of each lane line boundary point, the position of each lane center point and its corresponding lane width, and the direction of each lane line boundary point have been determined, the points belonging to each lane line can be determined from this information, and thereby the lane lines in the image.
Specifically, the points on a lane line may first be determined from the positions of the lane line boundary points, then supplemented from the positions of the lane center points and the corresponding lane widths, and curve fitting may finally be performed using the lane line boundary points and their directions to obtain the lane line in the image. That is, in a possible implementation form of the embodiments of the present application, step 105 may include:
for each preset area at every preset step length in each row of grids, judging whether a lane line boundary point exists in the preset area;
if a lane line boundary point exists, taking that lane line boundary point as a point on the lane line;
if no lane line boundary point exists but an inferred boundary point determined from a lane center point and lane width does, taking the inferred boundary point as a point on the lane line;
for each lane line boundary point on a lane line in the image, determining the direction associated with that boundary point's position as the direction of the lane line boundary point;
and performing curve fitting according to each lane line boundary point on the lane lines in the image and the corresponding direction to obtain the lane lines in the image.
The preset areas have fixed positions and sizes; all preset areas are of the same size, and the preset areas within each row of grids are evenly spaced, that is, the difference between the abscissas of the upper left corners of two adjacent preset areas in a row is the preset step length.
For example, if the grids in the image are 16 x 16 pixels, the preset step length is 160 pixels, and each preset area is 16 x 80 pixels, then the gap between adjacent preset areas in each row of grids is 80 pixels.
It should be noted that the above examples are only illustrative and should not be construed as limiting the present application. In practical use, the step length and the specific size of the preset area can be preset according to actual needs, which is not limited in the embodiment of the application.
As a possible implementation, it may be determined row by row whether the grids contain points on a lane line. For each row of grids, whether each preset area (one per preset step length) contains a lane line boundary point can be determined from the positions of the lane line boundary points; if it does, the existing lane line boundary point can be taken as a point on the lane line. If it does not, the inferred boundary points corresponding to each lane center point are determined from the lane center points and lane widths; whether each preset area contains an inferred boundary point is then determined from the positions of the inferred boundary points, and any inferred boundary point present is taken as a point on the lane line.
Specifically, a lane usually has a left lane line and a right lane line, so the lane lines on both sides can be determined from the position of each lane center point and its corresponding lane width. That is, in a possible implementation form of the embodiments of the present application, determining the inferred boundary points may include:
for each lane center point in the image, subtracting half of the corresponding lane width from the horizontal coordinate of the lane center point position to obtain the position of the left lane line inferred boundary point corresponding to that lane center point;
and adding half of the corresponding lane width to the horizontal coordinate of the lane center point position to obtain the position of the right lane line inferred boundary point corresponding to that lane center point.
It can be understood that each lane center point in the image lies half a lane width from each of the corresponding lane lines: a point with the same vertical coordinate as the lane center point, and whose horizontal coordinate differs from that of the lane center point by half the lane width, lies on a lane line corresponding to that lane center point.
In the embodiments of the present application, half of the corresponding lane width can be subtracted from the horizontal coordinate of the lane center point position to obtain the horizontal coordinate of the left lane line inferred boundary point, with the vertical coordinate of the lane center point position used as its vertical coordinate, thereby determining the position of the left lane line inferred boundary point corresponding to the lane center point. Correspondingly, half of the lane width can be added to the horizontal coordinate of the lane center point position to obtain the horizontal coordinate of the right lane line inferred boundary point, again with the vertical coordinate of the lane center point position as its vertical coordinate, thereby determining the position of the right lane line inferred boundary point.
It can be understood that a lane line is the curve connecting the points on it. Once the positions of the points on each lane line have been determined, the direction of each lane line boundary point can be determined from its position and the third detection information of the grids, that is, the angle predicted for the grid to which the boundary point belongs is taken as the direction of that boundary point. Curve fitting is then performed on the positions and directions of the lane line boundary points to obtain the lane line in the image.
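A sketch of how the inferred boundary points could be derived from the lane center points and used to supplement the detected boundary points before fitting (the helper names and the region membership test are illustrative):

```python
def inferred_boundary_points(center):
    """Split a detected lane center (x, y, lane_width) into the inferred
    left and right lane line boundary points on the same row."""
    x, y, width = center
    return (x - width / 2.0, y), (x + width / 2.0, y)

def collect_lane_points(regions, boundary_points, centers, region_contains):
    """For each preset region, prefer a detected boundary point; otherwise
    fall back to inferred boundary points derived from the lane centers.
    region_contains(region, p) is assumed to test point membership."""
    points = []
    for region in regions:  # preset areas, one per step along each row
        detected = [p for p in boundary_points if region_contains(region, p)]
        if detected:
            points.extend(detected)
            continue
        for c in centers:
            left, right = inferred_boundary_points(c)
            points.extend(p for p in (left, right)
                          if region_contains(region, p))
    return points
```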
With the lane line detection method provided by the embodiments of the present application, an acquired image to be detected is input into a preset target detection model to acquire the first, second, and third detection information of each grid in the image; non-maximum suppression is performed on the first and second detection information of each grid respectively, to acquire the position of each lane line boundary point, the position of each lane center point, and the corresponding lane width in the image; and the lane lines in the image are then determined from this information. Because the image is divided into grids and the trained target detection model detects the lane line boundary points, their directions, the lane center points, and the lane widths in each grid, the direction information of the lane lines is obtained directly during detection, fitting error is reduced, detection accuracy is improved, and the computation is both simple and fast.
In a possible implementation form of the present application, the preset target detection model may be obtained by training on a large amount of training data, with its performance continuously optimized through a loss function until it meets the requirements of the practical application.
The following further describes the lane line detection method provided in the embodiment of the present application with reference to fig. 3.
Fig. 3 is a schematic flow chart of another lane line detection method according to an embodiment of the present disclosure.
As shown in fig. 3, the lane line detection method includes the following steps:
Step 201: acquiring training data.
The training data may include a large amount of image data together with annotation information for each image. Note that the image data and annotations in the training data depend on the intended use of the target detection model. For example, if the target detection model were used for face detection, the training data would include many images containing faces along with face annotations. Since the target detection model of the embodiments of the present application is used for lane line detection and must predict the positions of lane line boundary points, the positions of lane center points, the lane widths, and the angles between the horizontal direction and the lines connecting adjacent lane line boundary points, the training data may include a large number of images containing lane lines, annotated with the positions of the true lane line boundary points, the angles between the horizontal direction and the lines connecting adjacent true boundary points, the positions of the true lane center points, and the true lane widths corresponding to those center points.
It should be noted that, to ensure the accuracy of the final target detection model, the training data must reach a certain scale; the number of images it contains can therefore be required to exceed a preset number. In practice, this number can be set according to actual needs, which is not limited in the embodiments of the present application.
In the embodiments of the present application, the training data can be acquired in various ways: images containing lane lines may be collected from the network, or image data captured in a real application scenario (for example, an autonomous driving scenario) may be used. After acquisition, the images are annotated with the position of each true lane line boundary point, the angle between the horizontal direction and the line connecting each pair of adjacent true boundary points, the position of each true lane center point, and the corresponding true lane width.
Step 202: training an initial target detection model using the training data until the loss function of the target detection model meets a preset condition.
In the embodiments of the present application, the initial target detection model may be trained with the training data as follows. The image data in the training data are input into the initial target detection model in turn to obtain the first, second, and third detection information for each image. The current value of the loss function is then computed from the detection information of each grid in each image and from the annotated positions of the true lane line boundary points, the angles between the horizontal direction and the lines connecting adjacent true boundary points, the positions of the true lane center points, and the corresponding true lane widths. If the current value of the loss function meets the preset condition, the model's current performance is deemed adequate and training is complete; otherwise, the model parameters are optimized and training continues on the optimized model until the loss function meets the preset condition.
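The iterative procedure just described is an optimize-until-the-condition-is-met loop. A minimal sketch, assuming a PyTorch-style model and optimizer; compute_loss stands in for the six-part loss discussed below:

```python
def train(model, optimizer, training_data, compute_loss,
          loss_thresh=0.01, max_epochs=100):
    """Train until the loss meets the preset condition (here: the mean
    loss falls below a threshold), optimizing parameters otherwise."""
    for epoch in range(max_epochs):
        total = 0.0
        for image, labels in training_data:
            preds = model(image)             # first/second/third detection info
            loss = compute_loss(preds, labels)
            optimizer.zero_grad()
            loss.backward()                  # optimize model parameters
            optimizer.step()
            total += float(loss)
        if total / len(training_data) < loss_thresh:  # preset condition met
            break
    return model
```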
It should be noted that the smaller the value of the loss function, the closer the first, second, and third detection information output by the target detection model are to the true boundary point positions, connecting-line angles, center point positions, and lane widths, that is, the better the model performs. The preset condition may therefore be that the value of the loss function is smaller than a preset threshold. In practice, the preset condition can be set according to actual needs, which is not limited in the embodiments of the present application.
Preferably, when the target detection model is trained, six quantities may be regressed: the lane center point lateral deviation, the lane center point score, the lane width, the lane line boundary point lateral deviation, the lane line boundary point score, and the angle between the horizontal direction and the line connecting adjacent lane line boundary points. The loss function of the target detection model may accordingly be divided into six parts that penalize the losses of these six quantities separately, further improving the accuracy of the final model. Optionally, an L2 norm loss may be used to regress the lane center point lateral deviation, the lane center point score, the lane line boundary point lateral deviation, and the connecting-line angle; a smooth L1 loss may be used to regress the lane width; and a cross-entropy loss may be used to regress the lane line boundary point score. In practice, the loss function for each part may be chosen according to actual needs, which is not limited in the embodiments of the present application.
It should be noted that when the loss function is divided into six parts, training may be considered complete either when each of the six parts meets its preset condition or when the sum of the six parts meets a preset condition; the embodiments of the present application do not limit this.
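A sketch of the six-part loss under the preferred choices above; the dictionary keys, equal weighting, and reductions are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def compute_loss(pred, target):
    """Six-part loss. `pred` and `target` are dicts of per-grid tensors;
    the keys are illustrative names for the six regressed quantities."""
    l2 = lambda a, b: F.mse_loss(a, b)                     # L2 norm loss
    loss = (
        l2(pred["center_dx"], target["center_dx"])         # lane center offset
        + l2(pred["center_score"], target["center_score"]) # lane center score
        + F.smooth_l1_loss(pred["lane_width"],             # lane width
                           target["lane_width"])
        + l2(pred["boundary_dx"], target["boundary_dx"])   # boundary offset
        + F.binary_cross_entropy(pred["boundary_score"],   # boundary score,
                                 target["boundary_score"]) # assumed in [0, 1]
        + l2(pred["angle"], target["angle"])               # connecting-line angle
    )
    return loss
```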
Step 203: inputting the image to be detected into the preset target detection model, and acquiring the first detection information, second detection information, and third detection information of each grid in the image.
In the embodiments of the present application, the preset target detection model may include a convolution part and a regression part. The image to be detected is input into the model, and a feature map of the image is obtained through the convolution part, with each point in the feature map corresponding to one grid in the image. Then, from the feature map and the image to be detected, the first, second, and third detection information of each grid are acquired through the regression part.
Furthermore, the target detection model of the embodiments of the present application can combine shallow and deep features of the image to extract more effective structural features, improving the accuracy of the model. In a possible implementation form, the convolution part is configured to obtain bottom-layer features of the image at different depths and to perform dimensionality reduction, deconvolution, and joint convolution operations on them to obtain the feature map corresponding to the image, the feature map containing one feature point for each grid in the image;
and the regression part is configured to acquire the first, second, and third detection information of each grid by combining the image with the corresponding feature map.
It should be noted that the neural network used in the target detection model may include multiple convolution layers, so convolution operations of different depths can be applied to the image to obtain bottom-layer features of different depths; features at different depths have feature maps of different sizes. For example, the feature map of the bottom-layer feature conv5_5 is 1/32 the size of the image, that of conv6_5 is 1/64, and that of conv7_5 is 1/128.
After the bottom-layer features of different depths are obtained, dimensionality reduction can be applied to each of them, for example by a convolution with a 1 x 1 kernel. Deconvolution operations of different depths are then applied to the reduced features so that they all reach the same size, namely the size matching the number of grids in the image. For example, if the grids are 16 x 16 pixels, the feature maps produced by the deconvolution operations are all 1/16 the size of the image. Finally, a joint convolution operation is applied to the deconvolved feature maps to obtain the feature map corresponding to the image, in which each feature point corresponds to one grid.
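A sketch of this fusion, assuming PyTorch and illustrative channel counts: 1 x 1 convolutions reduce each depth's channels, strided deconvolutions bring the 1/32, 1/64, and 1/128 maps up to the 1/16 grid resolution, and a joint convolution merges them:

```python
import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    """Fuse backbone features at 1/32, 1/64 and 1/128 resolution into one
    1/16-resolution feature map (one point per 16x16 grid)."""
    def __init__(self, c5=512, c6=512, c7=512, mid=128, out=256):
        super().__init__()
        self.reduce5 = nn.Conv2d(c5, mid, kernel_size=1)   # dim reduction
        self.reduce6 = nn.Conv2d(c6, mid, kernel_size=1)
        self.reduce7 = nn.Conv2d(c7, mid, kernel_size=1)
        # Deconvolutions of different depths: x2, x4, x8 upsampling to 1/16.
        self.up5 = nn.ConvTranspose2d(mid, mid, kernel_size=2, stride=2)
        self.up6 = nn.ConvTranspose2d(mid, mid, kernel_size=4, stride=4)
        self.up7 = nn.ConvTranspose2d(mid, mid, kernel_size=8, stride=8)
        self.joint = nn.Conv2d(3 * mid, out, kernel_size=3, padding=1)

    def forward(self, conv5_5, conv6_5, conv7_5):
        f5 = self.up5(self.reduce5(conv5_5))   # 1/32 -> 1/16
        f6 = self.up6(self.reduce6(conv6_5))   # 1/64 -> 1/16
        f7 = self.up7(self.reduce7(conv7_5))   # 1/128 -> 1/16
        return self.joint(torch.cat([f5, f6, f7], dim=1))  # joint convolution
```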
It should be noted that the regression part of the target detection model likewise includes multiple regression layers: some obtain the first detection information of each grid, some the second, and some the third.
Step 204: performing non-maximum suppression on the first detection information of each grid to acquire the position of each lane line boundary point in the image.
Step 205: performing non-maximum suppression on the second detection information of each grid to acquire the position of each lane center point in the image and the corresponding lane width.
Step 206: determining the lane lines in the image according to the positions of the lane line boundary points, the positions of the lane center points and the corresponding lane widths, and the directions and positions of the lane line boundary points.
For the detailed implementation and principles of steps 204 to 206, refer to the detailed description of the embodiments above, which is not repeated here.
With the lane line detection method provided by the embodiments of the present application, an initial target detection model is trained with the acquired training data until its loss function meets the preset condition; the acquired image to be detected is input into the trained target detection model to acquire the first, second, and third detection information of each grid in the image; non-maximum suppression is performed on the first and second detection information of each grid respectively, to acquire the position of each lane line boundary point, the position of each lane center point, and the corresponding lane width; and the lane lines in the image are then determined from the positions of the lane line boundary points, the positions of the lane center points and the corresponding lane widths, and the directions and positions of the lane line boundary points. Training the initial model on a large amount of data and using the trained model to detect the lane line boundary points, their directions, the lane center points, and the lane widths in each grid thus improves lane line detection accuracy with low computational complexity and short runtime, and further optimizes the performance of the target detection model.
In order to realize the above embodiment, the present application further provides a lane line detection device.
Fig. 4 is a schematic structural diagram of a lane line detection device according to an embodiment of the present application.
As shown in fig. 4, the lane line detection device 30 includes:
an obtaining module 31, configured to obtain an image to be detected;
an input module 32, configured to input the image into a preset target detection model, and obtain first detection information, second detection information, and third detection information of each grid in the image, where the first detection information includes: lane line boundary point lateral deviation and lane line boundary point score; the second detection information includes: lane center point lateral deviation, lane center point fraction and a prediction frame width adjustment value corresponding to each prediction frame; the third detection information includes: the included angle between the connecting line of the boundary point of the lane line in the grid and the boundary point of the lane line in the adjacent grid above and the horizontal direction;
a first processing module 33, configured to perform non-maximum suppression processing on the first detection information of each grid, and obtain the position of each lane line boundary point in the image;
the second processing module 34 is configured to perform non-maximum suppression processing on the second detection information of each grid, and acquire a position of a center point of each lane in the image and a corresponding lane width;
the determining module 35 is configured to determine a lane line in the image according to the position of each lane line boundary point in the image, the position of each lane center point, the corresponding lane width, the direction of each lane line boundary point, and the position.
In practical use, the lane line detection device provided in the embodiment of the present application may be configured in any electronic device to execute the lane line detection method.
The lane line detection device provided by the embodiments of the present application inputs the acquired image to be detected into a preset target detection model to obtain the first detection information, the second detection information and the third detection information of each grid in the image, performs non-maximum suppression processing on the first detection information and the second detection information of each grid respectively to obtain the position of each lane line boundary point, the position of each lane center point and the corresponding lane width in the image, and then determines the lane lines in the image according to the position of each lane line boundary point, the position of each lane center point, the corresponding lane width, and the direction and position of each lane line boundary point. Since the image to be detected is divided into a plurality of grids and the trained target detection model detects the lane line boundary points, boundary point directions, lane center points and lane widths in each grid, the lane lines can be determined from these detections, the direction information of the lane lines is obtained directly during detection, the fitting error is reduced, the lane line detection precision is improved, the computational complexity is low, and the processing time is short.
In a possible implementation form of the present application, the first processing module 33 is specifically configured to:
for each row of grids in the image, selecting, at intervals of a preset step length, the grids whose corresponding lane line boundary point score is greater than a first threshold as target grids;
and for each target grid, determining the position of a lane line boundary point in the image according to the lateral deviation of the lane line boundary point corresponding to the target grid and the coordinates of the target grid.
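For illustration, the following Python sketch decodes boundary points in the way this module describes. The array names, the 16-pixel grid size, the 0.5 threshold and the anchoring of a grid's position at its upper-left corner are assumptions made for the example, not values fixed by the application.

```python
import numpy as np

def decode_boundary_points(scores, lateral_offsets, grid_size=16,
                           score_thresh=0.5, step=1):
    """Per row of grids, at intervals of a preset step length, keep grids
    whose lane line boundary point score exceeds the first threshold, and
    map each kept grid's lateral deviation back to image coordinates."""
    points = []
    rows, cols = scores.shape
    for row in range(0, rows, step):              # every preset step length
        for col in range(cols):
            if scores[row, col] > score_thresh:   # target grid
                # position = grid upper-left corner + predicted lateral deviation
                x = col * grid_size + lateral_offsets[row, col]
                y = row * grid_size
                points.append((x, y))
    return points

# usage with dummy per-grid predictions on a 40x40 grid
scores = np.random.rand(40, 40)
offsets = np.random.rand(40, 40) * 16
boundary_points = decode_boundary_points(scores, offsets)
```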
In a possible implementation form of the present application, the second processing module 34 is specifically configured to:
for each grid in the image, determining the prediction frame with the highest lane center point score in the grid as the optimal prediction frame corresponding to the grid;
for each row of grids, selecting, at intervals of the preset step length, the optimal prediction frames whose lane center point score is greater than a second threshold as target prediction frames;
and for each target prediction frame, determining the position of a lane center point in the image and the corresponding lane width according to the lane center point lateral deviation and the prediction frame width adjustment value corresponding to the target prediction frame.
Further, in another possible implementation form of the present application, the second processing module 34 is further configured to:
for each target prediction frame, determining the position of a lane center point in the image according to the lane center point lateral deviation corresponding to the target prediction frame and the coordinates of the grid to which the target prediction frame belongs;
and determining the lane width corresponding to the lane center point according to the prediction frame width adjustment value corresponding to the target prediction frame and the width of the target prediction frame.
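A matching sketch for this module is below. The per-grid tensor layout, the anchor widths and the exponential width decoding are assumptions for illustration; the application only states that the lane width is derived from the width adjustment value and the width of the prediction frame.

```python
import numpy as np

def decode_lane_centers(center_scores, center_offsets, width_adjust,
                        anchor_widths, grid_size=16, score_thresh=0.5, step=1):
    """Keep the highest-scoring prediction frame per grid, then, per row of
    grids at intervals of a preset step length, select frames above the
    second threshold and decode the lane center position and lane width."""
    centers = []
    rows, cols, _ = center_scores.shape
    best = center_scores.argmax(axis=2)           # optimal prediction frame per grid
    for row in range(0, rows, step):
        for col in range(cols):
            b = best[row, col]
            if center_scores[row, col, b] > score_thresh:   # target prediction frame
                cx = col * grid_size + center_offsets[row, col, b]
                cy = row * grid_size
                # assumed YOLO-style decode: anchor width scaled by the adjustment
                w = anchor_widths[b] * np.exp(width_adjust[row, col, b])
                centers.append((cx, cy, w))
    return centers
```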
In a possible implementation form of the present application, the determining module 35 is specifically configured to:
for the preset area of each row of grids, at intervals of the preset step length, determining whether a lane line boundary point exists in the preset area;
if a lane line boundary point exists, taking the lane line boundary point as a point on the lane line;
if no lane line boundary point exists but an estimated boundary point determined from a lane center point and the corresponding lane width exists, taking the estimated boundary point as a point on the lane line;
for each lane line boundary point on a lane line in the image, determining the direction matched with the position of the lane line boundary point as the direction of the lane line boundary point;
and performing curve fitting according to each lane line boundary point on the lane lines in the image and the corresponding direction to obtain the lane lines in the image.
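The point-selection and fitting logic might be sketched as follows, consuming the outputs of the two decoders sketched above. Treating the estimated boundary point as the lane center minus half the lane width, and fitting a plain polynomial x = f(y) while omitting the boundary-point direction constraints, are simplifying assumptions rather than the application's exact formulation.

```python
import numpy as np

def fit_lane_line(boundary_pts, centers, grid_size=16, degree=3):
    """Walk the image row by row: prefer a detected boundary point in each
    preset area, otherwise fall back to an estimated boundary point derived
    from a lane center point and the lane width, then curve-fit the points."""
    if not boundary_pts and not centers:
        return None
    all_y = [p[1] for p in boundary_pts] + [c[1] for c in centers]
    pts = []
    for row_y in range(0, int(max(all_y)) + grid_size, grid_size):
        in_area = [p for p in boundary_pts if abs(p[1] - row_y) < grid_size / 2]
        if in_area:
            pts.extend(in_area)                             # detected points win
        else:
            pts.extend((cx - w / 2.0, cy) for cx, cy, w in centers
                       if abs(cy - row_y) < grid_size / 2)  # estimated points
    pts = np.asarray(pts, dtype=np.float64)
    if len(pts) <= degree:
        return None
    return np.polyfit(pts[:, 1], pts[:, 0], degree)         # coefficients of x = f(y)
```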
In one possible implementation form of the present application, the target detection model includes: a convolution part and a regression part;
the convolution part is used to extract underlying features of the image at different depths, and to perform dimensionality reduction, deconvolution and joint convolution operations on the underlying features at different depths to obtain a feature map corresponding to the image, where the feature map includes a feature point corresponding to each grid in the image;
the regression part is used to obtain the first detection information, the second detection information and the third detection information of each grid by combining the image and the corresponding feature map.
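A minimal PyTorch sketch of this two-part structure is given below; the channel counts, strides, number of prediction frames and the flat output head are illustrative assumptions, not the application's architecture.

```python
import torch
import torch.nn as nn

class LaneDetectionNet(nn.Module):
    """Convolution part + regression part, per the description above."""
    def __init__(self, num_frames=3):
        super().__init__()
        # convolution part: underlying features at different depths
        self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.Conv2d(64, 128, 3, 2, 1), nn.ReLU())
        # dimensionality reduction (1x1 conv) and deconvolution to a common scale
        self.reduce3 = nn.Conv2d(128, 64, 1)
        self.up3 = nn.ConvTranspose2d(64, 64, 2, stride=2)
        self.reduce2 = nn.Conv2d(64, 64, 1)
        # joint convolution over the fused features -> one feature point per grid
        self.fuse = nn.Conv2d(128, 64, 3, padding=1)
        # regression part: 2 channels for the first detection information,
        # 3 per prediction frame for the second, and 1 for the angle (third)
        self.head = nn.Conv2d(64, 2 + 3 * num_frames + 1, 1)

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        fused = torch.cat([self.reduce2(f2), self.up3(self.reduce3(f3))], dim=1)
        return self.head(self.fuse(fused))

# a 640x640 input yields a 160x160 map here, i.e. one feature point per grid
out = LaneDetectionNet()(torch.zeros(1, 3, 640, 640))
print(out.shape)   # torch.Size([1, 12, 160, 160])
```

In this sketch the grid resolution is simply the stride of the fused feature map; the actual backbone, fusion depth and head layout would be chosen to trade accuracy against the low computational complexity the application emphasizes.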
Further, in another possible implementation form of the present application, the lane line detecting device 30 further includes: a training module;
correspondingly, the obtaining module 31 is further configured to obtain training data, where the training data includes: more than a preset number of images, the position of each real lane line boundary point in the images, the angle between each line connecting adjacent real lane line boundary points and the horizontal direction, the position of each real lane center point and the corresponding real lane width;
the training module is specifically configured to train an initial target detection model with the training data until the loss function of the target detection model meets a preset condition, where the loss function is determined according to the position of each real lane line boundary point in the image, the angle between each line connecting adjacent real lane line boundary points and the horizontal direction, the position of each real lane center point and the corresponding real lane width, together with the first detection information, the second detection information and the third detection information of each grid in the image.
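The loss described here might be composed as in the sketch below; the squared-error and cross-entropy terms, the dictionary layout of predictions and targets, and the weights are assumptions about one reasonable instantiation, not the application's exact loss.

```python
import torch
import torch.nn.functional as F

def lane_loss(pred, target, w_pos=1.0, w_score=1.0, w_angle=1.0):
    """Combine per-grid position, score and angle terms against the real
    lane line boundary points, boundary-point angles, lane center points
    and lane widths carried by the training data."""
    return (
        w_pos * F.mse_loss(pred["boundary_offset"], target["boundary_offset"])
        + w_score * F.binary_cross_entropy_with_logits(
            pred["boundary_score"], target["boundary_mask"])
        + w_pos * F.mse_loss(pred["center_offset"], target["center_offset"])
        + w_pos * F.mse_loss(pred["width_adjust"], target["width_adjust"])
        + w_score * F.binary_cross_entropy_with_logits(
            pred["center_score"], target["center_mask"])
        + w_angle * F.mse_loss(pred["angle"], target["angle"])
    )

# training would iterate optimizer steps until this loss meets the preset condition
```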
It should be noted that the explanation of the embodiment of the lane line detection method shown in fig. 1 and 3 is also applicable to the lane line detection device 30 of this embodiment, and is not repeated here.
The lane line detection device provided by the embodiments of the present application trains an initial target detection model with the acquired training data until the loss function of the target detection model meets a preset condition, inputs the acquired image to be detected into the preset target detection model to obtain the first detection information, the second detection information and the third detection information of each grid in the image, performs non-maximum suppression processing on the first detection information and the second detection information of each grid respectively to obtain the position of each lane line boundary point, the position of each lane center point and the corresponding lane width in the image, and then determines the lane lines in the image according to the position of each lane line boundary point, the position of each lane center point, the corresponding lane width, and the direction and position of each lane line boundary point. Because the initial target detection model is trained on a large amount of training data and the trained model detects the lane line boundary points, boundary point directions, lane center points and lane widths for every grid in the image, lane line detection precision is improved with low computational complexity and short processing time, and the performance of the target detection model is further optimized.
In order to implement the above embodiments, the present application further provides an electronic device.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
As shown in fig. 5, the electronic device 200 includes:
a memory 210, a processor 220, and a bus 230 connecting different components (including the memory 210 and the processor 220), wherein the memory 210 stores a computer program, and when the processor 220 executes the program, the lane line detection method according to the embodiments of the present application is implemented.
A program/utility 280 having a set (at least one) of program modules 270 may be stored in, for example, the memory 210. Such program modules 270 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment. The program modules 270 generally carry out the functions and/or methodologies of the embodiments described herein.
The processor 220 performs various functional applications and data processing by running the programs stored in the memory 210.
It should be noted that, for the implementation process and the technical principle of the electronic device of this embodiment, reference is made to the foregoing explanation of the lane line detection method in the embodiment of the present application, and details are not described here again.
The electronic device provided by the embodiments of the present application may execute the lane line detection method described above: the acquired image to be detected is input into a preset target detection model to obtain the first detection information, the second detection information and the third detection information of each grid in the image; non-maximum suppression processing is performed on the first detection information and the second detection information of each grid respectively to obtain the position of each lane line boundary point, the position of each lane center point and the corresponding lane width in the image; and the lane lines in the image are then determined according to the position of each lane line boundary point, the position of each lane center point, the corresponding lane width, and the direction and position of each lane line boundary point. Since the image is divided into a plurality of grids and the trained model detects the lane line boundary points, boundary point directions, lane center points and lane widths in each grid, the direction information of the lane lines is obtained directly during detection, the fitting error is reduced, the detection precision is improved, the computational complexity is low, and the processing time is short.
In order to implement the above embodiments, the present application also proposes a computer-readable storage medium.
The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the lane line detection method according to the embodiments of the present application.
In order to implement the foregoing embodiments, a further embodiment of the present application provides a computer program which, when executed by a processor, implements the lane line detection method according to the embodiments of the present application.
In an alternative implementation, the embodiments may be implemented in any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the consumer electronic device, partly on the consumer electronic device, as a stand-alone software package, partly on the consumer electronic device and partly on a remote electronic device, or entirely on the remote electronic device or server. In the case of a remote electronic device, the remote electronic device may be connected to the consumer electronic device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external electronic device (for example, through the Internet using an Internet service provider).
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
Claims (17)
1. A lane line detection method is characterized by comprising the following steps:
acquiring an image to be detected;
inputting the image into a preset target detection model, and acquiring first detection information, second detection information and third detection information of each grid in the image, wherein the first detection information includes: a lane line boundary point lateral deviation and a lane line boundary point score; the second detection information includes: a lane center point lateral deviation, a lane center point score and a prediction frame width adjustment value corresponding to each prediction frame; the lane line boundary point lateral deviation refers to the lateral deviation between the lane line boundary point and the upper-left corner coordinate of the grid where the lane line boundary point is located, and the lane center point lateral deviation refers to the lateral deviation between the lane center point and the upper-left corner coordinate of the grid where the lane center point is located; and the third detection information includes: the angle between the horizontal direction and the line connecting the lane line boundary point in the grid with the lane line boundary point in the adjacent grid above;
carrying out non-maximum suppression processing on the first detection information of each grid to obtain the position of each lane line boundary point in the image;
carrying out non-maximum suppression processing on the second detection information of each grid to obtain the position of the center point of each lane in the image and the corresponding lane width;
and determining the lane lines in the image according to the position of each lane line boundary point in the image, the position of each lane center point, the corresponding lane width, and the direction and position of each lane line boundary point.
2. The method according to claim 1, wherein the performing non-maximum suppression processing on the first detection information of each grid to obtain the position of each lane line boundary point in the image comprises:
for each row of grids in the image, selecting, at intervals of a preset step length, the grids whose corresponding lane line boundary point score is greater than a first threshold as target grids;
and for each target grid, determining the position of a lane line boundary point in the image according to the lateral deviation of the lane line boundary point corresponding to the target grid and the coordinates of the target grid.
3. The method according to claim 1, wherein the performing non-maximum suppression processing on the second detection information of each mesh to obtain the position of each lane center point and the corresponding lane width in the image comprises:
for each grid in the image, determining the prediction frame with the highest lane center point score in the grid as the optimal prediction frame corresponding to the grid;
for each row of grids, selecting, at intervals of the preset step length, the optimal prediction frames whose lane center point score is greater than a second threshold as target prediction frames;
and for each target prediction frame, determining the position of a lane center point in the image and the corresponding lane width according to the lane center point lateral deviation and the prediction frame width adjustment value corresponding to the target prediction frame.
4. The method of claim 3, wherein the determining, for each target prediction frame, the position of a lane center point and the corresponding lane width in the image according to the lane center point lateral deviation and the prediction frame width adjustment value corresponding to the target prediction frame comprises:
for each target prediction frame, determining the position of a lane center point in the image according to the lane center point lateral deviation corresponding to the target prediction frame and the coordinates of the grid to which the target prediction frame belongs;
and determining the lane width corresponding to the lane center point according to the prediction frame width adjustment value corresponding to the target prediction frame and the width of the target prediction frame.
5. The method of claim 1, wherein determining the lane lines in the image according to the positions of the boundary points of the respective lane lines, the positions of the center points of the respective lanes, and the corresponding lane widths comprises:
for the preset area of each row of grids, at intervals of the preset step length, determining whether a lane line boundary point exists in the preset area;
if a lane line boundary point exists, taking the lane line boundary point as a point on the lane line;
if no lane line boundary point exists but an estimated boundary point determined from a lane center point and the corresponding lane width exists, taking the estimated boundary point as a point on the lane line;
for each lane line boundary point on a lane line in the image, determining the direction matched with the position of the lane line boundary point as the direction of the lane line boundary point;
and performing curve fitting according to each lane line boundary point on the lane lines in the image and the corresponding direction to obtain the lane lines in the image.
6. The method of claim 1, wherein the target detection model comprises: a convolution part and a regression part;
the convolution part is used for extracting underlying features of the image at different depths, and performing dimensionality reduction, deconvolution and joint convolution operations on the underlying features at different depths to obtain a feature map corresponding to the image, wherein the feature map includes a feature point corresponding to each grid in the image;
the regression part is used for acquiring the first detection information, the second detection information and the third detection information of each grid by combining the image and the corresponding feature map.
7. The method according to claim 1, wherein before inputting the image into a preset target detection model and obtaining the first detection information, the second detection information and the third detection information of each grid in the image, the method further comprises:
obtaining training data, the training data comprising: more than a preset number of images, the position of each real lane line boundary point in the images, the angle between each line connecting adjacent real lane line boundary points and the horizontal direction, the position of each real lane center point and the corresponding real lane width;
training an initial target detection model by using the training data until a loss function of the target detection model meets a preset condition, wherein the loss function is determined according to the position of each real lane line boundary point in the image, the angle between each line connecting adjacent real lane line boundary points and the horizontal direction, the position of each real lane center point and the corresponding real lane width, and the first detection information, the second detection information and the third detection information of each grid in the image.
8. A lane line detection apparatus, comprising:
the acquisition module is used for acquiring an image to be detected;
an input module, configured to input the image into a preset target detection model, and acquire first detection information, second detection information and third detection information of each grid in the image, wherein the first detection information includes: a lane line boundary point lateral deviation and a lane line boundary point score; the second detection information includes: a lane center point lateral deviation, a lane center point score and a prediction frame width adjustment value corresponding to each prediction frame; the lane line boundary point lateral deviation refers to the lateral deviation between the lane line boundary point and the upper-left corner coordinate of the grid where the lane line boundary point is located, and the lane center point lateral deviation refers to the lateral deviation between the lane center point and the upper-left corner coordinate of the grid where the lane center point is located; and the third detection information includes: the angle between the horizontal direction and the line connecting the lane line boundary point in the grid with the lane line boundary point in the adjacent grid above;
the first processing module is used for carrying out non-maximum suppression processing on the first detection information of each grid to obtain the position of each lane line boundary point in the image;
the second processing module is used for carrying out non-maximum value suppression processing on the second detection information of each grid to obtain the position of the center point of each lane in the image and the corresponding lane width;
and the determining module is used for determining the lane lines in the image according to the position of each lane line boundary point in the image, the position of each lane center point, the corresponding lane width, and the direction and position of each lane line boundary point.
9. The apparatus of claim 8, wherein the first processing module is specifically configured to:
for each row of grids in the image, selecting, at intervals of a preset step length, the grids whose corresponding lane line boundary point score is greater than a first threshold as target grids;
and for each target grid, determining the position of a lane line boundary point in the image according to the lateral deviation of the lane line boundary point corresponding to the target grid and the coordinates of the target grid.
10. The apparatus of claim 8, wherein the second processing module is specifically configured to:
for each grid in the image, determining the prediction frame with the highest lane center point score in the grid as the optimal prediction frame corresponding to the grid;
for each row of grids, selecting, at intervals of the preset step length, the optimal prediction frames whose lane center point score is greater than a second threshold as target prediction frames;
and for each target prediction frame, determining the position of a lane center point in the image and the corresponding lane width according to the lane center point lateral deviation and the prediction frame width adjustment value corresponding to the target prediction frame.
11. The apparatus of claim 10, wherein the second processing module is specifically configured to:
for each target prediction frame, determining the position of a lane center point in the image according to the lane center point lateral deviation corresponding to the target prediction frame and the coordinates of the grid to which the target prediction frame belongs;
and determining the lane width corresponding to the lane center point according to the prediction frame width adjustment value corresponding to the target prediction frame and the width of the target prediction frame.
12. The apparatus of claim 8, wherein the determining module is specifically configured to:
for the preset area of each row of grids, at intervals of the preset step length, determining whether a lane line boundary point exists in the preset area;
if a lane line boundary point exists, taking the lane line boundary point as a point on the lane line;
if no lane line boundary point exists but an estimated boundary point determined from a lane center point and the corresponding lane width exists, taking the estimated boundary point as a point on the lane line;
for each lane line boundary point on a lane line in the image, determining the direction matched with the position of the lane line boundary point as the direction of the lane line boundary point;
and performing curve fitting according to each lane line boundary point on the lane lines in the image and the corresponding direction to obtain the lane lines in the image.
13. The apparatus of claim 8, wherein the target detection model comprises: a convolution part and a regression part;
the convolution part is used for extracting underlying features of the image at different depths, and performing dimensionality reduction, deconvolution and joint convolution operations on the underlying features at different depths to obtain a feature map corresponding to the image, wherein the feature map includes a feature point corresponding to each grid in the image;
the regression part is used for acquiring the first detection information, the second detection information and the third detection information of each grid by combining the image and the corresponding feature map.
14. The apparatus of claim 8, further comprising: a training module;
the obtaining module is further configured to obtain training data, where the training data includes: more than a preset number of images, the position of each real lane line boundary point in the images, the angle between each line connecting adjacent real lane line boundary points and the horizontal direction, the position of each real lane center point and the corresponding real lane width;
the training module is used for training an initial target detection model by using the training data until a loss function of the target detection model meets a preset condition, wherein the loss function is determined according to the position of each real lane line boundary point in the image, the angle between each line connecting adjacent real lane line boundary points and the horizontal direction, the position of each real lane center point and the corresponding real lane width, and the first detection information, the second detection information and the third detection information of each grid in the image.
15. An electronic device, comprising:
a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the lane line detection method according to any one of claims 1 to 7 when executing the program.
16. A computer-readable storage medium on which a computer program is stored, the program, when being executed by a processor, implementing the lane line detection method according to any one of claims 1 to 7.
17. A computer program product, wherein the lane line detection method according to any one of claims 1-7 is implemented when instructions in the computer program product are executed by a processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910536130.XA | 2019-06-20 | 2019-06-20 | Lane line detection method, lane line detection device, electronic device, and storage medium
Publications (2)
Publication Number | Publication Date |
---|---|
CN110263713A CN110263713A (en) | 2019-09-20 |
CN110263713B (en) | 2021-08-10
Family
ID=67919749
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910536130.XA (Active) | Lane line detection method, lane line detection device, electronic device, and storage medium | 2019-06-20 | 2019-06-20
Country Status (1)
Country | Link |
---|---|
CN | CN110263713B (en)
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113327456A (en) * | 2020-02-28 | 2021-08-31 | 华为技术有限公司 | Lane structure detection method and device |
CN111460073B (en) * | 2020-04-01 | 2023-10-20 | 北京百度网讯科技有限公司 | Lane line detection method, device, equipment and storage medium |
CN111860155B (en) * | 2020-06-12 | 2022-04-29 | 华为技术有限公司 | Lane line detection method and related equipment |
CN112132109B (en) * | 2020-10-10 | 2024-09-06 | 阿波罗智联(北京)科技有限公司 | Lane line processing and lane positioning method, device, equipment and storage medium |
CN112229412B (en) * | 2020-10-21 | 2024-01-30 | 腾讯科技(深圳)有限公司 | Lane positioning method and device, storage medium and server |
CN112721926B (en) * | 2021-02-25 | 2023-05-09 | 深圳市科莱德电子有限公司 | Automatic driving automobile lane keeping control method and system based on block chain |
CN113721602B (en) * | 2021-07-28 | 2024-04-05 | 广州小鹏汽车科技有限公司 | Reference line processing method, device, equipment and storage medium |
CN114495063B (en) * | 2022-01-26 | 2024-09-10 | 深圳力维智联技术有限公司 | Lane departure degree detection method and readable storage medium |
CN115049995B (en) * | 2022-02-22 | 2023-07-04 | 阿波罗智能技术(北京)有限公司 | Lane line detection method and device, electronic equipment and storage medium |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103617412B (en) * | 2013-10-31 | 2017-01-18 | 电子科技大学 | Real-time lane line detection method |
JP6220327B2 (en) * | 2014-07-23 | 2017-10-25 | 株式会社Soken | Traveling lane marking recognition device, traveling lane marking recognition program |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107909007A (en) * | 2017-10-27 | 2018-04-13 | 上海识加电子科技有限公司 | Method for detecting lane lines and device |
CN109829351A (en) * | 2017-11-23 | 2019-05-31 | 华为技术有限公司 | Detection method, device and the computer readable storage medium of lane information |
CN109740469A (en) * | 2018-12-24 | 2019-05-10 | 百度在线网络技术(北京)有限公司 | Method for detecting lane lines, device, computer equipment and storage medium |
Non-Patent Citations (2)
Title |
---|
Richard Matthaei et al.; "Robust Grid-Based Road Detection for ADAS and Autonomous Vehicles in Urban Environments"; Proceedings of the 16th International Conference on Information Fusion; 2013-10-21; pp. 938-944 *
Jiang Libiao et al.; "Lane Line Detection in Complex Scenes Based on an Instance Segmentation Method"; Machine Design and Manufacturing Engineering; May 2019; Vol. 48, No. 5; pp. 113-118 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110263713B (en) | Lane line detection method, lane line detection device, electronic device, and storage medium | |
CN110232368B (en) | Lane line detection method, lane line detection device, electronic device, and storage medium | |
US11210534B2 (en) | Method for position detection, device, and storage medium | |
CN110276293B (en) | Lane line detection method, lane line detection device, electronic device, and storage medium | |
CN110263714B (en) | Lane line detection method, lane line detection device, electronic device, and storage medium | |
CN109242903B (en) | Three-dimensional data generation method, device, equipment and storage medium | |
US11113836B2 (en) | Object detection method, device, apparatus and computer-readable storage medium | |
CN109188457B (en) | Object detection frame generation method, device, equipment, storage medium and vehicle | |
US11763575B2 (en) | Object detection for distorted images | |
US10282623B1 (en) | Depth perception sensor data processing | |
CN109558854B (en) | Obstacle sensing method and device, electronic equipment and storage medium | |
US11688078B2 (en) | Video object detection | |
CN112200131A (en) | Vision-based vehicle collision detection method, intelligent terminal and storage medium | |
CN112613424A (en) | Rail obstacle detection method, rail obstacle detection device, electronic apparatus, and storage medium | |
CN113619606B (en) | Obstacle determination method, device, equipment and storage medium | |
US20210350142A1 (en) | In-train positioning and indoor positioning | |
CN114445697A (en) | Target detection method and device, electronic equipment and storage medium | |
CN113570622A (en) | Obstacle determination method and device, electronic equipment and storage medium | |
CN109885392B (en) | Method and device for allocating vehicle-mounted computing resources | |
CN116343169A (en) | Path planning method, target object motion control device and electronic equipment | |
CN109188419A (en) | Detection method, device, computer equipment and the storage medium of barrier speed | |
CN115311634A (en) | Lane line tracking method, medium and equipment based on template matching | |
CN113869163B (en) | Target tracking method and device, electronic equipment and storage medium | |
CN111832368B (en) | Training method, training device and application of drivable area detection model | |
CN114115293A (en) | Robot obstacle avoidance method, device, equipment and storage medium |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |