CN114494617A - Automatic lane line extraction method - Google Patents


Info

Publication number
CN114494617A
CN114494617A (application CN202111620243.1A)
Authority
CN
China
Prior art keywords
lane
lines
points
lane line
line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111620243.1A
Other languages
Chinese (zh)
Inventor
王伟 (Wang Wei)
牛群遥 (Niu Qunyao)
王劲 (Wang Jin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianyi Transportation Technology Co ltd
Zhongzhixing Suzhou Technology Co ltd
Original Assignee
Zhongzhixing Suzhou Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2021-12-27
Filing date: 2021-12-27
Publication date: 2022-05-13
Application filed by Zhongzhixing Suzhou Technology Co ltd
Priority to CN202111620243.1A
Publication of CN114494617A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/08 Projecting images onto non-planar surfaces, e.g. geodetic screens
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256 Lane; Road marking

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Remote Sensing (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Graphics (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an automatic lane line extraction method, relating to the technical field of high-precision map making for automatic driving. The method first extracts the pixel coordinates of lane lines from a two-dimensional image; it then converts these pixels into lane point coordinates, calculates the map coordinates of the lane points from the pose data, performs distance clustering on all lane points in an area with the DBSCAN algorithm, and filters noise to obtain preliminary lane line data; finally, the lane lines are merged, supplemented, aligned and broken, and the complete lane line data is output. Lane line information is extracted from the image data, lidar data and pose data acquired by an unmanned vehicle and is used to produce high-precision map data. With the aid of the lidar data, the two-dimensional lane pixels extracted from the images can be resolved into the three-dimensional map coordinate system with high conversion accuracy; moreover, fusing and post-processing the multi-frame extraction results greatly improves the completeness of the lane lines and raises the automation rate of lane line extraction.

Description

Automatic lane line extraction method
Technical Field
The invention relates to the technical field of high-precision map making for automatic driving, and in particular to an automatic lane line extraction method.
Background
Lane lines are the left and right boundary lines that determine the extent of a lane. The lane range is generally delimited by lane lines painted on the ground, which fall roughly into five types: single dashed lines, single solid lines, double dashed lines, double solid lines, and combined dashed-solid lines. Lane lines ensure that vehicles travel in the correct lane and provide a safety guarantee for driving, and they are the most important road element in high-precision map making for automatic driving; however, the extraction methods in the prior art suffer from low precision and a low automation rate.
Disclosure of Invention
In order to solve the above technical problems, the present invention provides an automatic lane line extraction method, comprising the following steps:
S1, acquiring the image data of the current frame and extracting the pixel coordinates of the lane lines from the image with the deep-learning lane line detection model LaneATT;
S2, acquiring the lidar data of the current frame, screening out several rings that are close to the vehicle and can scan the ground, projecting the point cloud into the pixel coordinate system of the image to calculate the intersection points of the rings and the lane lines, and back-projecting the intersection points into the three-dimensional lidar coordinate system with a line segment intersection algorithm;
S3, converting the extracted three-dimensional lane points into the map coordinate system based on the pose information at the current moment;
S4, performing distance clustering on all lane points extracted in an area with the DBSCAN algorithm and filtering noise points to obtain preliminary lane line data;
S5, merging, supplementing and breaking the discrete lane lines on the same road section with a lateral extension algorithm and a longitudinal distance criterion, and finally outputting complete lane line data.
The technical scheme of the invention is further defined as follows:
Further, the line segment intersection algorithm in step S2 comprises the following steps:
A1, calculating the intersection point p0 of the lane line and the lidar scan line in the pixel coordinate system;
A2, recording the point cloud coordinates p1 and p2 immediately before and after the intersection point;
A3, recording the proportionality coefficient I, i.e. the ratio in which the intersection point divides the image distance between p1 and p2;
A4, linearly interpolating the three-dimensional coordinate p of the intersection point from the coefficient I and the two lidar points p1 and p2 before and after it, as sketched below.
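A minimal sketch of this interpolation, in Python with numpy (the patent publishes no code, so the function and variable names below are illustrative assumptions):

```python
import numpy as np

def interpolate_intersection_3d(p0_px, p1_px, p2_px, p1_3d, p2_3d):
    """Recover the 3D coordinate of the intersection point p0 (steps A1-A4).

    p0_px        -- pixel coordinates of the lane line / scan line intersection
    p1_px, p2_px -- pixel coordinates of the projected lidar points bracketing p0
    p1_3d, p2_3d -- 3D lidar coordinates of those same two points
    """
    p0_px, p1_px, p2_px = (np.asarray(v, dtype=float) for v in (p0_px, p1_px, p2_px))
    # A3: proportionality coefficient I -- where p0 lies between p1 and p2
    # in the image (0 at p1, 1 at p2).
    denom = np.linalg.norm(p2_px - p1_px)
    coeff = np.linalg.norm(p0_px - p1_px) / denom if denom > 0.0 else 0.0
    # A4: linearly interpolate the 3D coordinate with the same coefficient.
    p1_3d, p2_3d = np.asarray(p1_3d, dtype=float), np.asarray(p2_3d, dtype=float)
    return p1_3d + coeff * (p2_3d - p1_3d)
```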
In the aforementioned method for automatically extracting a lane line, the conversion in step S3 is configured to calculate the pose information of the current moment by linear interpolation, convert the point cloud coordinates into the baseline coordinate system of the vehicle by matrix transformation, and then convert the baseline coordinate system into the map coordinate system.
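The patent describes the S3 conversion only at this level of detail. Purely as an illustration, the transform chain might be sketched as follows, assuming 4x4 homogeneous transformation matrices and interpolating only the translation part of the pose:

```python
import numpy as np

def interpolate_translation(t, t0, t1, trans0, trans1):
    """Linearly interpolate the pose translation at timestamp t from the two
    bracketing pose records at t0 and t1 (rotation interpolation, e.g. slerp,
    is omitted from this sketch)."""
    a = (t - t0) / (t1 - t0)
    return (1.0 - a) * np.asarray(trans0, dtype=float) + a * np.asarray(trans1, dtype=float)

def lidar_point_to_map(p_lidar, T_base_from_lidar, T_map_from_base):
    """Transform one 3D lane point from the lidar frame to the map frame.

    T_base_from_lidar -- extrinsic calibration: lidar frame -> vehicle baseline frame
    T_map_from_base   -- interpolated vehicle pose: baseline frame -> map frame
    Both are assumed to be 4x4 homogeneous transformation matrices.
    """
    p_h = np.append(np.asarray(p_lidar, dtype=float), 1.0)  # homogeneous point
    return (T_map_from_base @ T_base_from_lidar @ p_h)[:3]
```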
In the aforementioned method for automatically extracting a lane line, the noise filtering in step S4 comprises the following steps (a code sketch follows the list):
B1, filtering out short lane lines with a distance threshold;
B2, sequentially calculating the included angle between each point and its two neighbours, the first and last points excluded because they lack a neighbour on one side;
B3, filtering out the outlying points with an angle threshold to obtain the preliminary lane line data.
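A minimal sketch of B1-B3 in Python with numpy; the threshold values are illustrative assumptions, not figures from the patent:

```python
import numpy as np

def filter_lane_noise(lines, min_length=2.0, max_angle_deg=45.0):
    """Apply the B1-B3 noise filtering to clustered lane lines.

    lines -- list of (N, 2) arrays of ordered lane points in map coordinates
    """
    kept = []
    for pts in lines:
        pts = np.asarray(pts, dtype=float)
        # B1: drop lane lines shorter than the distance threshold.
        if np.linalg.norm(np.diff(pts, axis=0), axis=1).sum() < min_length:
            continue
        # B2/B3: drop interior points whose turn angle exceeds the angle
        # threshold (the first and last points lack a neighbour on one side
        # and are therefore kept).
        keep = np.ones(len(pts), dtype=bool)
        for k in range(1, len(pts) - 1):
            v1, v2 = pts[k] - pts[k - 1], pts[k + 1] - pts[k]
            cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12)
            angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
            keep[k] = angle <= max_angle_deg
        kept.append(pts[keep])
    return kept
```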
In the aforementioned method for automatically extracting a lane line, the merging of lane lines in step S5 is configured to calculate a slope from the head and tail points of a lane line; to extend the head and tail points outward along that slope by a certain lateral distance dx; and then to calculate the longitudinal distance dy from the extended head and tail points to the other lane lines. When the longitudinal distance is smaller than a threshold, the two lane lines are considered close enough and are connected end to end.
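A sketch of this merging test under the same assumptions (the values of dx and the dy threshold are illustrative, not from the patent):

```python
import numpy as np

def should_merge(line_a, line_b, dx=5.0, dy_threshold=0.5):
    """Test whether the tail of line_a should be connected to the head of
    line_b according to the S5 merging rule.

    line_a, line_b -- (N, 2) arrays of ordered lane points
    """
    line_a = np.asarray(line_a, dtype=float)
    line_b = np.asarray(line_b, dtype=float)
    head, tail = line_a[0], line_a[-1]
    # Slope (direction) from the head and tail points of line_a.
    direction = (tail - head) / np.linalg.norm(tail - head)
    # Extend the tail outward by the lateral distance dx along that slope.
    extended_tail = tail + dx * direction
    # Longitudinal distance dy from the extended point to the head of line_b;
    # a full implementation would also test the head of line_a symmetrically.
    dy = np.linalg.norm(line_b[0] - extended_tail)
    return dy < dy_threshold
```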
In the aforementioned method for automatically extracting a lane line, the breaking of lane lines in step S5 is configured to read the extracted stop line data, traverse the lane lines to test whether each intersects a stop line, and, if so, break off the part of the lane line that extends beyond the stop line.
In the aforementioned method for automatically extracting a lane line, the supplementing of lane lines in step S5 is configured to read the extracted stop line data and traverse the lane lines: if a lane line does not intersect a stop line, it is extended laterally by a certain distance and the intersection test is repeated; if the extended line now intersects the stop line, the lane line is extended exactly to the intersection point.
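A combined sketch of the breaking and supplementing rules, assuming Python with shapely (the extension distance is an illustrative assumption, and only the tail end is extended here):

```python
from shapely.geometry import LineString, Point
from shapely.ops import split

def align_to_stop_line(lane_pts, stop_pts, extend_dist=10.0):
    """Break or supplement one lane line against one stop line (step S5).

    lane_pts, stop_pts -- ordered lists of (x, y) tuples in map coordinates
    Returns the adjusted lane line as a list of (x, y) tuples.
    """
    lane, stop = LineString(lane_pts), LineString(stop_pts)
    if not lane.intersects(stop):
        # Supplementing: extend the last segment of the lane line and retest.
        (x1, y1), (x2, y2) = lane_pts[-2], lane_pts[-1]
        norm = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        far = (x2 + extend_dist * (x2 - x1) / norm,
               y2 + extend_dist * (y2 - y1) / norm)
        lane = LineString(list(lane_pts) + [far])
        if not lane.intersects(stop):
            return list(lane_pts)  # no stop line within reach; leave unchanged
    # Breaking: split at the stop line and keep only the piece containing the
    # start of the lane line, i.e. trim (or extend) exactly to the intersection.
    start = Point(lane_pts[0])
    for piece in split(lane, stop).geoms:
        if piece.distance(start) < 1e-9:
            return list(piece.coords)
    return list(lane_pts)
```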
The invention has the beneficial effects that:
the method can extract lane line information according to image data, laser radar data and pose data acquired by the unmanned vehicle, and is used for production of high-precision map data.
The two-dimensional lane pixel points extracted from the image can be resolved to a three-dimensional map coordinate system by means of laser radar data, and the conversion precision is high; and moreover, the extracted multi-frame data is fused and post-processed, so that the integrity of the lane line can be greatly increased, and the automation rate of lane line extraction is improved.
Drawings
FIG. 1 is a schematic diagram of the two-dimensional to three-dimensional conversion of lane points in accordance with the present invention;
FIG. 2 is a schematic view of lane line merging according to the present invention;
FIG. 3 is a schematic view of lane line alignment and breaking according to the present invention.
Detailed Description
The automatic lane line extraction method provided by this embodiment is illustrated in FIGS. 1 to 3 and comprises the following steps:
S1, acquiring the image data of the current frame and extracting the pixel coordinates of the lane lines from the image with the deep-learning lane line detection model LaneATT;
S2, acquiring the lidar data of the current frame, screening out several rings that are close to the vehicle and can scan the ground, and converting the point cloud coordinates into the pixel coordinate system of the image through the pre-calibrated exterior orientation elements; as shown in FIG. 1, calculating the intersection point p0 of the lane line and the lidar scan line in the pixel coordinate system with the line segment intersection algorithm, recording the point cloud coordinates p1 and p2 immediately before and after the intersection point, recording the proportionality coefficient I that describes where the intersection point lies between them, and linearly interpolating the three-dimensional coordinate p of the intersection point from the coefficient I and the two lidar points p1 and p2;
S3, acquiring the pose records closest in time to the timestamp of the current lidar data, calculating the pose at the current moment by linear interpolation, converting the point cloud coordinates into the baseline coordinate system of the vehicle by matrix transformation, and then converting the baseline coordinate system into the map coordinate system;
S4, performing distance clustering on all lane points extracted in an area with the DBSCAN algorithm, filtering short lane lines out of the clustering result with a distance threshold, sequentially calculating the included angle between each point and its two neighbours (the first and last points excluded), and filtering out the outlying points with an angle threshold to obtain the preliminary lane line data;
S5, for the discrete lane lines on the same road section (for example, lines broken mid-section or stopping short of the section range), calculating the slope of each clustered lane line from its head and tail points and extending the head and tail outward along that slope by a certain distance dx (the lateral distance); as shown in FIG. 2, calculating the distance dy (the longitudinal distance) from the extended head and tail points to the other lane lines, and connecting two lane lines end to end when the longitudinal distance is smaller than the threshold, since they are then considered very close; then reading the extracted stop line data and traversing the lane lines: if a lane line intersects a stop line, breaking off the part of the lane line beyond the stop line; otherwise, as shown in FIG. 3, extending the lane line laterally by a certain distance and testing again, and extending the lane line to the intersection point if it now intersects; through these merging, supplementing and breaking operations, the complete lane line data is finally output.
The method extracts lane line information from the image data, lidar data and pose data acquired by an unmanned vehicle and uses it to produce high-precision map data.
First, the pixel coordinates of the lane lines are extracted from the two-dimensional image with the deep-learning model LaneATT. Next, the lidar data of the same moment is converted into the image coordinate system, several rings are selected to intersect the lane lines, the laser points before and after each intersection are obtained, and the three-dimensional coordinate of each intersection is interpolated proportionally to give the lane point coordinate; the map coordinates of the lane points are then calculated from the pose data.
All lane points in an area are then clustered by distance with the DBSCAN algorithm, and noise points are filtered by distance and angle to obtain the preliminary lane line data.
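A minimal sketch of this clustering step, assuming Python with scikit-learn (eps and min_samples are illustrative values, not parameters published in the patent):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_lane_points(points, eps=0.5, min_samples=5):
    """Group all lane points in a region into candidate lane lines with
    distance-based DBSCAN clustering; points labelled -1 are noise and
    are dropped.

    points -- (N, 2) or (N, 3) array of lane point coordinates in the map frame
    """
    points = np.asarray(points, dtype=float)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    return [points[labels == k] for k in sorted(set(labels)) if k != -1]
```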
Finally, the lateral extension algorithm and the longitudinal distance criterion are applied to the discrete lane lines on the same road section to merge, supplement and break the lane lines, and the complete lane line data is output.
With the aid of the lidar data, the two-dimensional lane pixels extracted from the images can be resolved into the three-dimensional map coordinate system with high conversion accuracy; moreover, fusing and post-processing the multi-frame extraction results greatly improves the completeness of the lane lines and raises the automation rate of lane line extraction.
In addition to the above embodiments, the present invention may have other embodiments. All technical solutions formed by adopting equivalent substitutions or equivalent transformations fall within the protection scope of the claims of the present invention.

Claims (7)

1. An automatic lane line extraction method, characterized by comprising the following steps:
S1, acquiring the image data of the current frame and extracting the pixel coordinates of the lane lines from the image with the deep-learning lane line detection model LaneATT;
S2, acquiring the lidar data of the current frame, screening out several rings that are close to the vehicle and can scan the ground, projecting the point cloud into the pixel coordinate system of the image to calculate the intersection points of the rings and the lane lines, and back-projecting the intersection points into the three-dimensional lidar coordinate system with a line segment intersection algorithm;
S3, converting the extracted three-dimensional lane points into the map coordinate system based on the pose information at the current moment;
S4, performing distance clustering on all lane points extracted in an area with the DBSCAN algorithm and filtering noise points to obtain preliminary lane line data;
S5, merging, supplementing and breaking the discrete lane lines on the same road section with a lateral extension algorithm and a longitudinal distance criterion, and finally outputting complete lane line data.
2. The automatic lane line extraction method according to claim 1, characterized in that the line segment intersection algorithm in step S2 comprises the following steps:
A1, calculating the intersection point p0 of the lane line and the lidar scan line in the pixel coordinate system;
A2, recording the point cloud coordinates p1 and p2 immediately before and after the intersection point;
A3, recording the proportionality coefficient I, i.e. the ratio in which the intersection point divides the image distance between p1 and p2;
A4, linearly interpolating the three-dimensional coordinate p of the intersection point from the coefficient I and the two lidar points p1 and p2 before and after it.
3. The automatic lane line extraction method according to claim 1, characterized in that the conversion in step S3 is configured to calculate the pose information of the current moment by linear interpolation, convert the point cloud coordinates into the baseline coordinate system of the vehicle by matrix transformation, and then convert the baseline coordinate system into the map coordinate system.
4. The automatic lane line extraction method according to claim 1, characterized in that the noise filtering in step S4 comprises the following steps:
B1, filtering out short lane lines with a distance threshold;
B2, sequentially calculating the included angle between each point and its two neighbours, the first and last points excluded;
B3, filtering out the outlying points with an angle threshold to obtain the preliminary lane line data.
5. The automatic lane line extraction method according to claim 1, characterized in that the merging of lane lines in step S5 is configured to calculate a slope from the head and tail points of a lane line; to extend the head and tail points outward along that slope by a certain lateral distance dx; and then to calculate the longitudinal distance dy from the extended head and tail points to the other lane lines, the two lane lines being considered close and connected end to end when the longitudinal distance is smaller than a threshold.
6. The automatic lane line extraction method according to claim 1, characterized in that the breaking of lane lines in step S5 is configured to read the extracted stop line data, traverse the lane lines to test whether each intersects a stop line, and, if so, break off the part of the lane line that extends beyond the stop line.
7. The automatic lane line extraction method according to claim 1, characterized in that the supplementing of lane lines in step S5 is configured to read the extracted stop line data and traverse the lane lines: if a lane line does not intersect a stop line, the lane line is extended laterally by a certain distance and the intersection test is repeated, and if it now intersects the stop line, the lane line is extended to the intersection point.
CN202111620243.1A 2021-12-27 2021-12-27 Automatic lane line extraction method Pending CN114494617A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111620243.1A CN114494617A (en) 2021-12-27 2021-12-27 Automatic lane line extraction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111620243.1A CN114494617A (en) 2021-12-27 2021-12-27 Automatic lane line extraction method

Publications (1)

Publication Number Publication Date
CN114494617A (en) 2022-05-13

Family

ID=81495830

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111620243.1A Pending CN114494617A (en) 2021-12-27 2021-12-27 Automatic lane line extraction method

Country Status (1)

Country Link
CN (1) CN114494617A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115294293A (en) * 2022-10-08 2022-11-04 速度时空信息科技股份有限公司 Method for automatically compiling high-precision map road reference lines based on low-altitude aerial photography results
CN115294293B (en) * 2022-10-08 2023-03-24 速度时空信息科技股份有限公司 Method for automatically compiling high-precision map road reference line based on low-altitude aerial photography result
CN116129392A (en) * 2023-04-17 2023-05-16 北京集度科技有限公司 Method, equipment and storage medium for identifying lane line transverse integrity
CN117723043A (en) * 2023-12-13 2024-03-19 酷哇科技有限公司 Method and system for generating double yellow lane lines in high-precision map road

Similar Documents

Publication Publication Date Title
CN114494617A (en) Automatic lane line extraction method
CN110705577B (en) Laser point cloud lane line extraction method
CN111563412B Rapid lane line detection method based on parameter space voting and Bézier fitting
KR101968349B1 (en) Method for detecting lane boundary by visual information
CN108960011B (en) Partially-shielded citrus fruit image identification method
CN111209780A (en) Lane line attribute detection method and device, electronic device and readable storage medium
CN113341396B (en) Robot and charging pile identification method and device thereof
CN112967283A (en) Target identification method, system, equipment and storage medium based on binocular camera
CN111308499A (en) Obstacle detection method based on multi-line laser radar
CN111295666A (en) Lane line detection method, device, control equipment and storage medium
CN104751119A (en) Rapid detecting and tracking method for pedestrians based on information fusion
CN111860321B (en) Obstacle recognition method and system
CN114743181A (en) Road vehicle target detection method and system, electronic device and storage medium
CN111353446A (en) Lane line detection method and system
CN107977649B (en) Obstacle identification method and device and terminal
CN115511834A (en) Battery cell alignment degree detection method, controller, detection system and storage medium
CN115599119A (en) Unmanned aerial vehicle keeps away barrier system
KR101998584B1 (en) Lane detection apparatus and lane detection method
CN113989765A (en) Detection method and detection device for rail obstacle and readable storage medium
WO2022142827A1 (en) Road occupancy information determination method and apparatus
CN114842166A (en) Negative obstacle detection method, system, medium, and apparatus applied to structured road
CN114170596A (en) Posture recognition method and device, electronic equipment, engineering machinery and storage medium
CN103065137B (en) A kind of license plate character recognition method
CN110135382B (en) Human body detection method and device
CN112529011A (en) Target detection method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230426

Address after: 215100 station 601-b11, Tiancheng information building, No. 88, nantiancheng Road, Xiangcheng District, Suzhou City, Jiangsu Province (cluster registration)

Applicant after: Zhongzhixing (Suzhou) Technology Co.,Ltd.

Applicant after: Tianyi Transportation Technology Co.,Ltd.

Address before: 215100 station 601-b11, Tiancheng information building, No. 88, nantiancheng Road, Xiangcheng District, Suzhou City, Jiangsu Province (cluster registration)

Applicant before: Zhongzhixing (Suzhou) Technology Co.,Ltd.
