Automatic lane line extraction method
Technical Field
The invention relates to the technical field of automatic driving high-precision map making, in particular to an automatic lane line extraction method.
Background
Lane lines are the left and right boundary lines that define a lane's extent. The lane range is generally delimited by the lane lines painted on the ground, which fall roughly into five types: single dashed line, single solid line, double dashed line, double solid line, and mixed dashed-solid line. Lane lines ensure that a vehicle travels in the correct lane and provide a safety guarantee for vehicle operation; they are the most important road elements in high-precision map making for automatic driving. However, extraction methods in the prior art suffer from low precision and a low automation rate.
Disclosure of Invention
In order to solve the above technical problems, the present invention provides an automatic lane line extraction method, comprising the following steps:
S1, acquiring image data of the current frame, and extracting pixel coordinates of a lane line from the image based on a deep learning lane line detection model LaneATT;
S2, acquiring lidar data of the current frame, selecting several rings that are close to the vehicle and scan the ground, projecting the point cloud into the pixel coordinate system of the image to calculate the intersection points between the rings and the lane lines, and back-projecting the intersection points into the lidar three-dimensional coordinate system through a line segment intersection algorithm;
S3, converting the extracted three-dimensional lane points into the map coordinate system based on the pose information at the current moment;
S4, performing distance clustering on all lane points extracted within an area using the DBSCAN algorithm, and filtering out noise points to obtain preliminary lane line data;
and S5, merging, supplementing, and breaking the plurality of discrete lane lines on the same road section using a transverse extension algorithm and a longitudinal distance judgment method, and finally outputting complete lane line data.
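The distance clustering of step S4 can be sketched with a minimal pure-Python DBSCAN (illustrative only; the disclosure does not fix an implementation, and the `eps` and `min_pts` values here are assumptions):

```python
import math

def dbscan(points, eps=0.5, min_pts=3):
    """Minimal DBSCAN over 2D points; returns one label per point (-1 = noise)."""
    labels = [None] * len(points)

    def neighbors(i):
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1                 # tentatively noise
            continue
        labels[i] = cluster                # new core point: start a cluster
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster        # noise reached from a core: border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nbrs_j = neighbors(j)
            if len(nbrs_j) >= min_pts:     # expand only from core points
                seeds.extend(nbrs_j)
        cluster += 1
    return labels
```

In practice a library implementation (e.g. scikit-learn's `DBSCAN`) with a spatial index would be used instead of this O(n²) neighbor search.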
The technical scheme of the invention is further defined as follows:
Further, the line segment intersection algorithm in step S2 includes the following steps:
A1, calculating the intersection point p0 of the lane line and the lidar scanning line in the pixel coordinate system;
A2, recording the point cloud coordinates p1 and p2 immediately before and after the intersection point;
A3, recording the proportionality coefficient I of the distance from the intersection point to the point cloud points;
A4, linearly interpolating the three-dimensional coordinate p of the intersection point from the proportionality coefficient I and the two lidar points p1 and p2 before and after the intersection point.
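Steps A1 to A4 can be sketched as follows (a minimal 2D geometry sketch; the function names are illustrative and the fractional position `t` plays the role of the coefficient I):

```python
def segment_intersection(a1, a2, b1, b2):
    """A1-A3: 2D intersection of segments a1-a2 (lane line) and b1-b2
    (scan ring segment). Returns (point, t), where t is the fractional
    position along b1-b2, or None if the segments do not cross."""
    ax, ay = a2[0] - a1[0], a2[1] - a1[1]
    bx, by = b2[0] - b1[0], b2[1] - b1[1]
    denom = ax * by - ay * bx          # 2D cross product of the directions
    if denom == 0:                     # parallel segments: no intersection
        return None
    dx, dy = b1[0] - a1[0], b1[1] - a1[1]
    s = (dx * by - dy * bx) / denom    # position along a1-a2
    t = (dx * ay - dy * ax) / denom    # position along b1-b2
    if 0 <= s <= 1 and 0 <= t <= 1:
        return ((b1[0] + t * bx, b1[1] + t * by), t)
    return None

def interpolate_3d(p1_3d, p2_3d, t):
    """A4: linearly interpolate the 3D intersection point between the two
    lidar points using the coefficient t obtained in image space."""
    return tuple(a + t * (b - a) for a, b in zip(p1_3d, p2_3d))
```

The same coefficient t measured between the projected pixels p1 and p2 is reused to interpolate between their original 3D lidar coordinates, which is what lets a 2D detection be lifted back to 3D.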
In the aforementioned method for automatically extracting a lane line, the conversion in step S3 is configured to calculate the pose information of the current moment through linear interpolation, convert the point cloud coordinate system into the baseline coordinate system of the vehicle through a matrix transformation, and then convert the baseline coordinate system into the map coordinate system.
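A simplified sketch of this conversion, assuming planar (x, y, yaw) poses for brevity (a real pipeline would interpolate full 6-DoF poses, typically with quaternions, and chain 4x4 homogeneous matrices; all names here are illustrative):

```python
import math

def interpolate_pose(t, t0, pose0, t1, pose1):
    """Linearly interpolate an (x, y, yaw) pose between two timestamped poses."""
    a = (t - t0) / (t1 - t0)
    x = pose0[0] + a * (pose1[0] - pose0[0])
    y = pose0[1] + a * (pose1[1] - pose0[1])
    # wrap the yaw difference into [-pi, pi) before interpolating
    dyaw = (pose1[2] - pose0[2] + math.pi) % (2 * math.pi) - math.pi
    return (x, y, pose0[2] + a * dyaw)

def lidar_to_map(pt, lidar_to_base, base_to_map):
    """Apply two chained 2D rigid transforms (x, y, yaw) to a point:
    lidar frame -> vehicle baseline frame -> map frame."""
    for (tx, ty, yaw) in (lidar_to_base, base_to_map):
        c, s = math.cos(yaw), math.sin(yaw)
        pt = (c * pt[0] - s * pt[1] + tx, s * pt[0] + c * pt[1] + ty)
    return pt
```

`lidar_to_base` is the fixed extrinsic calibration of the sensor, while `base_to_map` is the interpolated vehicle pose at the lidar timestamp.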
In the aforementioned method for automatically extracting a lane line, the noise filtering in step S4 includes the following steps:
B1, filtering out short lane lines through a distance threshold;
B2, sequentially calculating the included angle between each point and its adjacent points before and after, excluding the head and tail points;
and B3, filtering out the outlier points through an angle threshold to obtain preliminary lane line data.
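Steps B2 and B3 can be sketched as follows (illustrative only; the disclosure does not specify the angle threshold, so the value below is an assumption):

```python
import math

def filter_bumps(points, angle_thresh_deg=150.0):
    """B2-B3: drop interior points whose turning angle is too sharp.
    A point is kept when the angle prev-point-next is close to 180 degrees,
    i.e. the polyline is nearly straight there. Head and tail are always kept."""
    if len(points) < 3:
        return list(points)
    kept = [points[0]]
    for prev, cur, nxt in zip(points, points[1:], points[2:]):
        v1 = (prev[0] - cur[0], prev[1] - cur[1])
        v2 = (nxt[0] - cur[0], nxt[1] - cur[1])
        n1, n2 = math.hypot(*v1), math.hypot(*v2)
        if n1 == 0 or n2 == 0:
            continue                        # duplicate point: drop it
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        ang = math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))
        if ang >= angle_thresh_deg:         # nearly straight: keep the point
            kept.append(cur)
    kept.append(points[-1])
    return kept
```

A collinear run of points yields 180-degree angles and passes unchanged, while a point jutting sideways produces a sharp angle and is removed.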
In the aforementioned method for automatically extracting a lane line, the lane line merging in step S5 is configured to calculate a slope from the head and tail points of a lane line; extend the head and tail points outward by a certain transverse distance dx along the slope; then calculate the longitudinal distance dy from the extended head and tail points to the other lane lines, and when the longitudinal distance is smaller than a threshold, consider the two lane lines close and connect them end to end.
In the aforementioned method for automatically extracting a lane line, the lane line breaking in step S5 is configured to read the extracted stop line data, traverse the lane lines to check whether each intersects a stop line, and if so, break off the portion of the lane line that extends beyond the stop line.
In the aforementioned method for automatically extracting a lane line, the lane line supplementing in step S5 is configured to read the extracted stop line data and traverse the lane lines to check whether each intersects a stop line; if a lane line does not intersect, it is extended transversely by a certain distance and the intersection test is repeated; if it now intersects the stop line, the lane line is extended to the intersection point.
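The breaking and supplementing rules can be illustrated with a deliberately simplified one-dimensional sketch, assuming a vertical stop line x = stop_x and a lane running along increasing x (a real implementation would use general 2D segment intersection against arbitrary stop line geometry):

```python
def trim_or_extend(lane, stop_x, dx=3.0):
    """Break: drop lane points past the stop line when the lane crosses it.
    Supplement: if the lane ends short of the stop line but reaches it within
    a transverse extension of dx, extend the lane to the stop line."""
    crossed = [p for p in lane if p[0] <= stop_x]
    if len(crossed) < len(lane):                 # lane intersects: break it off
        return crossed
    (x0, y0), (x1, y1) = lane[-2], lane[-1]
    if x1 < stop_x <= x1 + dx and x1 != x0:      # within reach: extend to stop line
        slope = (y1 - y0) / (x1 - x0)
        return lane + [(stop_x, y1 + slope * (stop_x - x1))]
    return lane                                  # too far: leave unchanged
```

The same predicate (does the lane intersect the stop line?) thus drives both operations: trimming when true, extension when it becomes true after the trial extension.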
The invention has the beneficial effects that:
The method can extract lane line information from the image data, lidar data, and pose data acquired by an unmanned vehicle, for use in producing high-precision map data.
The two-dimensional lane pixel points extracted from the image can be converted into the three-dimensional map coordinate system by means of the lidar data, with high conversion precision; moreover, fusing and post-processing the extracted multi-frame data greatly improves the completeness of the lane lines and raises the automation rate of lane line extraction.
Drawings
FIG. 1 is a schematic diagram of the two-dimensional to three-dimensional conversion of lane points according to the present invention;
FIG. 2 is a schematic view of lane line merging according to the present invention;
FIG. 3 is a schematic view of lane line supplementing and breaking according to the present invention.
Detailed Description
The automatic lane line extraction method provided by this embodiment, illustrated in FIGS. 1 to 3, comprises the following steps:
S1, acquiring image data of the current frame, and extracting pixel coordinates of a lane line from the image based on a deep learning lane line detection model LaneATT;
S2, acquiring lidar data of the current frame, selecting several rings that are close to the vehicle and scan the ground, and converting the point cloud coordinates into the pixel coordinate system of the image through extrinsic orientation parameters calibrated in advance; as shown in FIG. 1, calculating the intersection point p0 of the lane line and the lidar scanning line in the pixel coordinate system through a line segment intersection algorithm; recording the point cloud coordinates p1 and p2 immediately before and after the intersection point; recording the proportionality coefficient I of the distance from the intersection point to the point cloud points; and linearly interpolating the three-dimensional coordinate p of the intersection point from the proportionality coefficient I and the two lidar points p1 and p2;
S3, acquiring the nearest pose information according to the timestamp of the current lidar data, calculating the pose information of the current moment through linear interpolation, converting the point cloud coordinate system into the baseline coordinate system of the vehicle through a matrix transformation, and then converting the baseline coordinate system into the map coordinate system;
S4, performing distance clustering on all lane points extracted within an area using the DBSCAN algorithm, filtering out short lane lines from the clustered result through a distance threshold, sequentially calculating the included angle between each point and its adjacent points before and after (excluding the head and tail points), and filtering out the outlier points through an angle threshold, thereby obtaining preliminary lane line data;
S5, for clustered lane lines on the same road section that are discontinuous, the slope of each lane line is calculated from its head and tail points, and the head and tail points are each extended outward by a certain transverse distance dx along the slope; as shown in FIG. 2, the longitudinal distance dy from the extended head and tail points to the other lane lines is calculated, and if the longitudinal distance is smaller than a threshold, the two lane lines are considered very close and are connected end to end; the extracted stop line data is then read, and each lane line is traversed to check whether it intersects a stop line; if so, the portion of the lane line beyond the stop line is broken off; as shown in FIG. 3, otherwise the lane line is extended transversely by a certain distance and the intersection test is repeated, and if the lane line now intersects the stop line it is extended to the intersection point; complete lane line data is finally output through these merging, supplementing, and breaking operations.
The method can extract lane line information from the image data, lidar data, and pose data acquired by an unmanned vehicle, for use in producing high-precision map data.
First, the pixel coordinates of the lane lines are extracted from the two-dimensional image based on the deep learning model LaneATT. Next, the lidar data at the same moment are converted into the image coordinate system, several rings are selected to intersect with the lane lines, the laser points before and after each intersection point are obtained, and the three-dimensional coordinate of the intersection point is interpolated proportionally as the lane point coordinate; the map coordinates of the lane points are then calculated from the pose data.
The DBSCAN algorithm is used to perform distance clustering on all lane points within an area, and noise points are filtered by distance and angle thresholds to obtain preliminary lane line data.
Finally, a transverse extension algorithm and longitudinal distance judgment are applied to the plurality of discrete lane lines on the same road section to merge, supplement, and break the lane lines, and complete lane line data is finally output.
The two-dimensional lane pixel points extracted from the image can be converted into the three-dimensional map coordinate system by means of the lidar data, with high conversion precision; moreover, fusing and post-processing the extracted multi-frame data greatly improves the completeness of the lane lines and raises the automation rate of lane line extraction.
In addition to the above embodiments, the present invention may have other embodiments. All technical solutions formed by adopting equivalent substitutions or equivalent transformations fall within the protection scope of the claims of the present invention.