CN115346183A - Lane line detection method, terminal and storage medium

Lane line detection method, terminal and storage medium

Info

Publication number
CN115346183A
Authority
CN
China
Prior art keywords
point cloud
lane line
point
sliding window
line segment
Prior art date
Legal status
Pending
Application number
CN202210963847.4A
Other languages
Chinese (zh)
Inventor
王文爽 (Wang Wenshuang)
张灿 (Zhang Can)
潘奇 (Pan Qi)
赵天坤 (Zhao Tiankun)
Current Assignee
Hozon New Energy Automobile Co Ltd
Original Assignee
Hozon New Energy Automobile Co Ltd
Priority date
Filing date
Publication date
Application filed by Hozon New Energy Automobile Co Ltd
Priority to CN202210963847.4A
Publication of CN115346183A
Legal status: Pending

Classifications

    • G06V 20/588 Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road (under G06V 20/56: context or environment of the image exterior to a vehicle, using sensors mounted on the vehicle)
    • G06V 10/48 Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation
    • G06V 10/762 Recognition or understanding using pattern recognition or machine learning, using clustering, e.g. of similar faces in social networks
    • G06V 10/764 Recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V 10/82 Recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06V 20/70 Labelling scene content, e.g. deriving syntactic or semantic representations
    (All classifications fall under G Physics; G06 Computing; G06V Image or video recognition or understanding.)

Abstract

The application relates to a lane line detection method, a terminal and a storage medium, wherein the lane line detection method comprises the following steps: acquiring lane line point cloud data, and filtering the lane line point cloud according to the lane line point cloud data to determine a target point cloud; rasterizing the target point cloud to obtain a grid map and a line segment set of the target point cloud; clustering the target point cloud according to the attributes of the cells in the grid map and the line segment set to obtain point cloud data of the same lane line; and fitting the point cloud data of the same lane line and outputting the lane line. Original point cloud data collected by the vehicle-mounted laser radar are classified through a deep learning network to obtain the lane line point cloud, which is further filtered to determine the target point cloud, improving the detection accuracy of the lane line point cloud. In addition, the target point cloud is clustered in a sliding-window manner according to the cell attributes and the line segment set of the target point cloud grid map, improving both the accuracy and the efficiency of lane line detection.

Description

Lane line detection method, terminal and storage medium
Technical Field
The application belongs to the technical field of intelligent driving, and particularly relates to a lane line detection method, a lane line detection terminal and a storage medium.
Background
In the technical field of intelligent driving, lane line detection, as a key link of the environment sensing system, needs to provide stable and reliable support for subsequent path planning and decision control. Existing lane line detection methods can be divided, according to the sensor used, into methods based on camera image data and methods based on laser radar point cloud data.
Lane line detection methods based on camera image data can be further divided into those based on traditional algorithms and those based on deep learning algorithms. Regardless of the algorithm, however, methods based on camera image data are easily affected by environmental factors such as illumination and weather, and detection accuracy is low under poor lighting conditions or in severe weather.
Lane line detection methods based on laser radar point cloud data can likewise be divided into those based on traditional algorithms and those based on deep learning algorithms. The general flow of a traditional-algorithm method is: first screen lane line points out of the original point cloud data according to filtering conditions such as the reflection intensity value, then cluster the lane line points, then fit a straight line or curve to the clustering result, and output the lane line.
Such a method places high demands on the selection of the reflection intensity threshold or the thresholds of other indexes; in scenes with complex road conditions, some non-lane-line points, such as road-surface markings and sidewalks, may be misjudged as lane line points, so detection accuracy is low. Lane line detection methods based on deep learning algorithms are few, large data sets for training are lacking, and their detection accuracy is also low.
Disclosure of Invention
In view of the above technical problems, the present application provides a lane line detection method, a terminal, and a storage medium, so as to improve the accuracy and efficiency of lane line detection.
The application provides a lane line detection method, which comprises the following steps: acquiring lane line point cloud data, and filtering the lane line point cloud according to the lane line point cloud data to determine a target point cloud; rasterizing the target point cloud to obtain a grid map and a line segment set of the target point cloud; clustering the target point cloud according to the attributes of the cells in the grid map and the line segment set to obtain point cloud data of the same lane line; and fitting the point cloud data of the same lane line and outputting the lane line.
In one embodiment, acquiring lane line point cloud data comprises: acquiring original point cloud data acquired by a vehicle-mounted laser radar; and classifying the original point cloud through a deep learning network according to the original point cloud data to obtain the lane line point cloud data.
In one embodiment, filtering the lane line point cloud according to the lane line point cloud data includes at least one of: if the distance between any point in the lane line point cloud and the vehicle does not meet a first preset range, removing the corresponding point from the lane line point cloud; if the reflection intensity value of any point in the lane line point cloud does not meet a second preset range, removing the corresponding point from the lane line point cloud; and if the distance from any point in the lane line point cloud to the ground does not meet a third preset range, removing the corresponding point from the lane line point cloud.
In one embodiment, obtaining a set of line segments of the target point cloud comprises: acquiring a binary aerial view of the target point cloud according to the distribution condition of the target point cloud in each cell in the grid map; and carrying out Hough transformation on the binary aerial view of the target point cloud to obtain a line segment set of the target point cloud.
In one embodiment, the attributes of the cells include: the number of point clouds in the cell; the direction of the cell; and the clustering status of the point clouds within the cell.
In one embodiment, clustering the target point cloud according to the attributes of the cells in the grid map and the line segment sets includes: establishing a sliding window with a preset size on the grid map by taking the starting point of the line segment in the line segment set as the initial central point of the sliding window; carrying out point cloud cluster analysis on the cells in the sliding window one by one according to a preset sequence: when the number of point clouds in the current cell in the sliding window is larger than zero and the point clouds in the current cell are not clustered, acquiring a first included angle between the direction of the current cell and the direction of a line segment where the center point of the sliding window is located; if the first included angle is smaller than a first threshold value, the point clouds in the current cell and the point clouds on the line segment where the center point of the sliding window is located are gathered into a class; if the first included angle is larger than or equal to the first threshold value, continuously performing the point cloud cluster analysis on the next cell in the sliding window; after the point cloud cluster analysis of all the cells in the sliding window is finished, controlling the sliding window to move to the next position by a preset step length, and then continuing to execute the step of performing the point cloud cluster analysis on the cells in the sliding window one by one according to a preset sequence at the next position until the sliding window moves to the outside of the grid map.
In one embodiment, controlling the sliding window to move to the next position in preset steps comprises: acquiring the position of the current central point of the sliding window; when the current center point of the sliding window is located in the intersection area of the current line segment where the current center point of the sliding window is located and the next line segment, acquiring a second included angle between the direction of the next line segment and the direction of the current line segment; if the second included angle is smaller than a second threshold value, calculating the position of the next central point of the sliding window according to the position of the current central point of the sliding window, the preset step length and the function equation of the next line segment; and if the second included angle is larger than or equal to the second threshold, calculating the position of the next central point of the sliding window according to the position of the current central point of the sliding window, the preset step length and a function equation of the current line segment.
In an embodiment, the step of fitting the point cloud data of the same lane line and outputting a lane line includes: fitting a cubic polynomial to the point cloud data of the same lane line by the least squares method to determine a function equation of the lane line; and outputting the lane line according to the function equation of the lane line.
The application also provides a terminal, which comprises a memory, a processor and a computer program which is stored in the memory and can run on the processor, wherein the processor realizes the steps of the lane line detection method when executing the computer program.
The present application further provides a storage medium storing a computer program that, when executed by a processor, implements the steps of the lane line detection method described above.
According to the lane line detection method, the terminal and the storage medium, original point cloud data collected by the vehicle-mounted laser radar are classified through a deep learning network to obtain the lane line point cloud, which is further filtered to determine the target point cloud, improving the detection accuracy of the lane line point cloud; in addition, the target point cloud is clustered in a sliding-window manner according to the cell attributes and the line segment set of the target point cloud grid map, improving both the accuracy and the efficiency of lane line detection.
Drawings
Fig. 1 is a schematic flowchart of a lane line detection method according to an embodiment of the present application;
Fig. 2 is a schematic diagram of a grid map provided in an embodiment of the present application;
Fig. 3 is a schematic diagram of a straight line detection result provided in an embodiment of the present application;
Fig. 4 is a schematic diagram of a point cloud clustering result provided in an embodiment of the present application;
Fig. 5 is a schematic diagram of the sliding window movement according to an embodiment of the present application;
Fig. 6 is a schematic diagram of lane line fitting provided in an embodiment of the present application;
Fig. 7 is a specific flowchart of point cloud clustering according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of a terminal according to the second embodiment of the present application.
Detailed Description
The technical solution of the present application is further described in detail with reference to the drawings and specific embodiments. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, "and/or" includes any and all combinations of one or more of the associated listed items.
Fig. 1 is a schematic flow chart of a lane line detection method according to an embodiment of the present application. As shown in fig. 1, the lane line detection method of the present application may include the steps of:
step S101: acquiring lane line point cloud data, and filtering the lane line point cloud according to the lane line point cloud data to determine a target point cloud;
in one embodiment, acquiring lane line point cloud data comprises:
acquiring original point cloud data acquired by a vehicle-mounted laser radar;
and classifying the original point cloud through a deep learning network according to the original point cloud data to obtain the lane line point cloud data.
Optionally, a laser radar three-dimensional coordinate system is established with the center of the vehicle's rear axle as the coordinate origin: the X-axis points in the vehicle driving direction; the Y-axis is perpendicular to the driving direction and parallel to the road surface; the Z-axis is perpendicular to the road surface and points upward; the units of the X, Y and Z axes are meters. The lane line point cloud data comprises the coordinates of the points in the lane line point cloud under the laser radar coordinate system, the reflection intensity value of each point, and the distance from each point to the ground, where the distance from a point to the ground is solved from the point's X, Y and Z coordinates in the laser radar coordinate system combined with a ground equation. The deep learning network is a semantic segmentation network.
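As an illustration of the distance-to-ground computation above, the following is a minimal sketch assuming the ground equation takes the common plane form a·x + b·y + c·z + d = 0 (the patent does not state the form); the function name and array layout are hypothetical.

```python
import numpy as np

def point_to_ground_distance(points, plane):
    """Distance from each point to the ground plane.

    points: (N, 3) array of X, Y, Z coordinates in the laser radar frame (meters).
    plane:  (a, b, c, d) coefficients of the assumed ground equation
            a*x + b*y + c*z + d = 0.
    """
    a, b, c, d = plane
    # |a*x + b*y + c*z + d| / ||(a, b, c)|| is the point-plane distance.
    return np.abs(points @ np.array([a, b, c]) + d) / np.linalg.norm([a, b, c])
```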
In one embodiment, the filtering of the lane line point cloud from the lane line point cloud data includes at least one of:
if the distance between any point in the point cloud of the lane lines and the vehicle does not meet the first preset range, removing the corresponding point from the point cloud of the lane lines;
if the reflection intensity value of any point in the lane line point cloud does not meet the second preset range, removing the corresponding point from the lane line point cloud;
and if the distance from any point in the point cloud of the lane line to the ground does not meet the third preset range, removing the corresponding point from the point cloud of the lane line.
Optionally, the distance between any point in the lane line point cloud and the vehicle is the distance between that point and the origin of the laser radar coordinate system. The first preset range comprises a range (X_min, X_max) in the X-axis direction, such as (0, 40), and a range (Y_min, Y_max) in the Y-axis direction, such as (-20, 20), in meters; that is, points of the lane line point cloud whose X-axis coordinate falls within (0, 40) and whose Y-axis coordinate falls within (-20, 20) are retained. The second preset range is [30, 70]; that is, points of the lane line point cloud whose reflection intensity falls within [30, 70] are retained. The third preset range is [0, a], such as a = 10, in centimeters; that is, points of the lane line point cloud within 10 cm of the ground are retained.
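A sketch of the three filtering conditions using the example ranges above; the function signature and array layout are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def filter_lane_points(xyz, intensity, ground_dist,
                       x_range=(0.0, 40.0), y_range=(-20.0, 20.0),
                       intensity_range=(30.0, 70.0), max_ground_dist=0.10):
    """Return a boolean mask of points satisfying all three preset ranges.

    xyz:         (N, 3) coordinates in the laser radar frame (meters)
    intensity:   (N,) reflection intensity values
    ground_dist: (N,) distances to the ground (meters); 10 cm = 0.10 m
    """
    x, y = xyz[:, 0], xyz[:, 1]
    keep = (x > x_range[0]) & (x < x_range[1])
    keep &= (y > y_range[0]) & (y < y_range[1])
    keep &= (intensity >= intensity_range[0]) & (intensity <= intensity_range[1])
    keep &= (ground_dist >= 0.0) & (ground_dist <= max_ground_dist)
    return keep

# Usage: target_cloud = lane_cloud[filter_lane_points(xyz, intensity, ground_dist)]
```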
Step S102: rasterizing the target point cloud to obtain a grid map and a line segment set of the target point cloud;
in one embodiment, obtaining a set of line segments of a target point cloud comprises:
acquiring a binary aerial view of the target point cloud according to the distribution condition of the target point cloud in each cell in the grid map;
and carrying out Hough transformation on the binary aerial view of the target point cloud to obtain a line segment set of the target point cloud.
Optionally, the grid size is M × N, such as 100 × 100; that is, the coordinate range corresponding to the first preset range is divided into M parts in the X-axis direction and N parts in the Y-axis direction to obtain M × N cells. Each point in the point cloud is traversed, the index of the cell to which the point belongs is calculated from the point's X and Y coordinates, and the point is added to the corresponding cell; a grid schematic diagram of the target point cloud is shown in FIG. 2, where i and j are the cell index values. Meanwhile, each cell is checked for the presence of point clouds to obtain a single-channel binary bird's-eye view (BEV); straight line detection is then performed on the BEV image through the Hough transform to obtain a series of line segments as the line segment set of the target point cloud. The straight line detection result is shown in FIG. 3.
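A sketch of the rasterization, binary BEV and straight line detection steps, using NumPy and OpenCV's probabilistic Hough transform; the Hough parameters below are illustrative guesses, as the patent does not specify them.

```python
import numpy as np
import cv2

def rasterize_and_detect(xyz, x_range=(0.0, 40.0), y_range=(-20.0, 20.0),
                         M=100, N=100):
    """Bin the target point cloud into an M x N grid, mark occupied cells in a
    binary bird's-eye-view image, and extract line segments with HoughLinesP."""
    i = ((xyz[:, 0] - x_range[0]) / (x_range[1] - x_range[0]) * M).astype(int)
    j = ((xyz[:, 1] - y_range[0]) / (y_range[1] - y_range[0]) * N).astype(int)
    i = np.clip(i, 0, M - 1)
    j = np.clip(j, 0, N - 1)

    bev = np.zeros((M, N), dtype=np.uint8)
    bev[i, j] = 255  # a cell is set if at least one point falls into it

    lines = cv2.HoughLinesP(bev, rho=1, theta=np.pi / 180, threshold=10,
                            minLineLength=5, maxLineGap=3)
    segments = [] if lines is None else lines[:, 0, :]  # rows of (x1, y1, x2, y2)
    return bev, segments
```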
Step S103: clustering the target point cloud according to the attributes of the cells in the grid map and the line segment set to obtain point cloud data of the same lane line;
Optionally, the attributes of the cells include: the point cloud data within the cell; the direction of the cell; and the clustering status of the point clouds within the cell. The point cloud data within a cell comprises the number of points in the cell and the coordinates of those points under the laser radar coordinate system.
The direction of a cell is the direction of the line segment passing through the cell; when multiple line segments pass through the same cell, the direction of the longest line segment is taken as the direction of the cell. Optionally, the direction of a line segment is the direction from its starting point to its ending point, the starting point being the endpoint of the segment closest to the vehicle.
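A sketch of assigning each cell the direction of the longest segment that crosses it. Segments are assumed to already be in grid coordinates, and the sampling-based traversal below is a stand-in for any line rasterization (e.g. Bresenham).

```python
import numpy as np

def assign_cell_directions(segments, M, N):
    """Per cell, keep the unit direction (start -> end) of the longest segment
    passing through it; cells crossed by no segment keep direction None."""
    direction = [[None] * N for _ in range(M)]
    best_len = np.zeros((M, N))
    for x1, y1, x2, y2 in segments:
        length = float(np.hypot(x2 - x1, y2 - y1))
        if length == 0.0:
            continue
        unit = np.array([x2 - x1, y2 - y1]) / length
        # Sample the segment densely and mark every cell it touches.
        for t in np.linspace(0.0, 1.0, 2 * int(length) + 2):
            i, j = int(x1 + t * (x2 - x1)), int(y1 + t * (y2 - y1))
            if 0 <= i < M and 0 <= j < N and length > best_len[i, j]:
                best_len[i, j] = length
                direction[i][j] = unit
    return direction
```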
In one embodiment, clustering the target point cloud according to the attributes of the cells in the grid map and the line segment set includes:
establishing a sliding window with a preset size on the grid graph by taking the starting point of the line segment in the line segment set as the initial central point of the sliding window;
carrying out point cloud cluster analysis on the cells in the sliding window one by one according to a preset sequence:
when the number of point clouds in a current cell in the sliding window is larger than zero and the point clouds in the current cell are not clustered, acquiring a first included angle between the direction of the current cell and the direction of a line segment where a center point of the sliding window is located;
if the first included angle is smaller than a first threshold value, point clouds in the current cell and point clouds on the line segment where the center point of the sliding window is located are gathered into a class;
if the first included angle is larger than or equal to the first threshold value, continuing to perform point cloud cluster analysis on the next cell in the sliding window;
after the point cloud cluster analysis of all the cells in the sliding window is finished, controlling the sliding window to move to the next position according to the preset step length, and then continuing to perform the step of performing the point cloud cluster analysis on the cells in the sliding window one by one according to the preset sequence at the next position until the sliding window moves to the outside of the grid map.
Optionally, the first threshold is 30 degrees, and the sliding window size is h × w, i.e., the sliding window covers h × w cells. It is worth mentioning that the size of the sliding window can be adjusted according to the distance between the center point of the sliding window and the vehicle: when that distance is less than or equal to a first preset distance, or greater than or equal to a second preset distance, a sliding window of a first preset size, such as 10 × 10, is used; when that distance is between the first preset distance and the second preset distance, a sliding window of a second preset size, such as 6 × 6, is used. The first preset distance is smaller than the second preset distance, and the first preset size is larger than the second preset size. Because the point cloud is densely distributed close to the vehicle, such as within 10 meters, and sparsely and dispersedly distributed far from the vehicle, such as beyond 30 meters, a large sliding window is used close to or far from the vehicle, improving the clustering effect. A schematic diagram of the point cloud clustering result is shown in FIG. 4.
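The size selection reduces to a small rule; in this sketch the 10 m / 30 m preset distances are taken from the density example above, and the names are hypothetical.

```python
def window_size(center_to_vehicle, d1=10.0, d2=30.0, large=10, small=6):
    """Large window near (<= d1 meters) or far (>= d2 meters) from the vehicle,
    small window in between; sizes are in cells (e.g. 10x10 vs 6x6)."""
    return large if (center_to_vehicle <= d1 or center_to_vehicle >= d2) else small
```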
In one embodiment, controlling the sliding window to move to the next position in preset steps comprises:
acquiring the position of the current central point of the sliding window;
when the current center point of the sliding window is located in the intersection area of the current line segment where the current center point of the sliding window is located and the next line segment, acquiring a second included angle between the direction of the next line segment and the direction of the current line segment;
if the second included angle is smaller than a second threshold value, calculating the position of the next central point of the sliding window according to the position of the current central point of the sliding window, a preset step length and a function equation of the next line segment;
if the second included angle is larger than or equal to a second threshold value, calculating the position of the next central point of the sliding window according to the position of the current central point of the sliding window, a preset step length and a function equation of the current line segment;
and when the current central point of the sliding window is not in the intersection area of the current line segment and the next line segment, calculating the position of the next central point of the sliding window according to the position of the current central point of the sliding window, the preset step length and the function equation of the current line segment.
Optionally, the intersection area is the region in which the distance, along the direction of the current line segment, between the current center point of the sliding window and the intersection point of the two line segments (the current line segment and the next line segment) is less than or equal to the moving step; the second threshold is 30 degrees.
As shown in FIG. 5, an image coordinate system is established with the upper left corner of the BEV image as the coordinate origin. In this image coordinate system, the function equation of the current line segment is x = B1, the function equation of the next line segment is y = A2·x + B2, and the moving step of the sliding window is r.
If the current center point A(x1, y1) of the sliding window is in the intersection area of the current line segment and the next line segment, and the second included angle is smaller than the second threshold, the coordinates (x2, y2) of the next center point B of the sliding window are calculated from the following system of equations:

(x2 - x1)² + (y2 - y1)² = r²
y2 = A2·x2 + B2

If the current center point A(x1, y1) of the sliding window is in the intersection area of the current line segment and the next line segment, and the second included angle is greater than or equal to the second threshold, the coordinates (x3, y3) of the next center point C of the sliding window are calculated from the following system of equations:

(x3 - x1)² + (y3 - y1)² = r²
x3 = B1
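A sketch solving the two systems above for the next window center; choosing which quadratic root moves the window "forward" along the next segment is an assumption (the patent resolves it implicitly via FIG. 5).

```python
import math

def next_center(x1, y1, r, A2, B2, B1, follow_next, forward=1.0):
    """Next center at distance r from A(x1, y1), constrained to a segment.

    Current segment: x = B1 (vertical in image coordinates, as in FIG. 5).
    Next segment:    y = A2*x + B2.
    """
    if not follow_next:
        # Case C: stay on the current segment, advance r along it.
        return B1, y1 + forward * r

    # Case B: intersect circle (x-x1)^2 + (y-y1)^2 = r^2 with y = A2*x + B2.
    a = 1.0 + A2 * A2
    b = 2.0 * (A2 * (B2 - y1) - x1)
    c = x1 * x1 + (B2 - y1) ** 2 - r * r
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None  # the next segment passes farther than one step from A
    roots = [(-b + s * math.sqrt(disc)) / (2.0 * a) for s in (1.0, -1.0)]
    # Assumed rule: take the root whose y advances in the direction of travel.
    x2 = max(roots, key=lambda x: forward * (A2 * x + B2))
    return x2, A2 * x2 + B2
```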
Step S104: and fitting the point cloud data of the same lane line and outputting the lane line.
Optionally, the point cloud data of the same lane line includes the coordinates of the points of the same lane line in the laser radar coordinate system.
In one embodiment, step S104 includes:
fitting a cubic polynomial to the point cloud data of the same lane line by adopting a least square method to determine a function equation of the lane line;
and outputting the lane line according to the function equation of the lane line.
Optionally, the cubic polynomial is f(x) = a0 + a1·x + a2·x² + a3·x³. The X- and Y-axis coordinates of the points of the same lane line in the laser radar coordinate system are substituted into this formula, and the coefficients a0, a1, a2 and a3 of the cubic polynomial are determined by minimizing the sum of squared errors, which yields the final lane line function equation from which the lane line is output. A schematic diagram of lane line fitting is shown in FIG. 6, in which -1, 0, 1 and 2 are lane line numbers.
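A minimal least-squares cubic fit matching this step, using NumPy (polyfit returns coefficients from the highest degree down):

```python
import numpy as np

def fit_lane_line(xy):
    """Fit y = a0 + a1*x + a2*x^2 + a3*x^3 to one clustered lane line by least
    squares; xy is an (N, 2) array of X, Y laser-radar-frame coordinates."""
    coeffs = np.polyfit(xy[:, 0], xy[:, 1], deg=3)  # [a3, a2, a1, a0]
    return np.poly1d(coeffs)

# Usage: f = fit_lane_line(cluster_xy); sample f over the cluster's x-range
# to draw the output lane line.
```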
Fig. 7 is a schematic view of a specific process of point cloud clustering according to an embodiment of the present disclosure. As shown in fig. 7, the point cloud clustering process of the lane line detection method of the present application may include the following steps:
step S201: acquiring the position of the current central point of the sliding window;
step S202: judging whether the current central point of the sliding window is located in the image range of the grid map;
if the current central point of the sliding window is not in the image range of the grid image, ending the process;
if the current center point of the sliding window is within the image range of the grid map, step S203 is executed: selecting a cell for carrying out point cloud clustering analysis in the sliding window according to a preset sequence;
step S204: judging whether the number of point clouds in the current cell in the sliding window is larger than zero or not, and the point clouds in the current cell are not clustered;
if the number of the point clouds in the current cell is equal to zero and/or the point clouds in the current cell are clustered, returning to execute the step S203, and performing point cloud clustering analysis on the next cell in the sliding window;
if the number of the point clouds in the current cell is greater than zero and the point clouds in the current cell are not clustered, step S205 is executed: judging whether an included angle between the direction of the current cell and the direction of a line segment where the current center point of the sliding window is located is smaller than a first threshold value or not;
if the included angle between the direction of the current cell and the direction of the line segment where the current center point of the sliding window is located is smaller than the first threshold, step S206 is executed: the point clouds in the current cell and the point clouds on the line segment where the center point of the sliding window is located are gathered into a class;
if the included angle between the direction of the current cell and the direction of the line segment where the current center point of the sliding window is located is greater than or equal to the first threshold, step S207 is executed: judging whether the point cloud cluster analysis of all the cells in the sliding window is finished or not;
if the point cloud cluster analysis of all the cells in the sliding window is not finished, returning to execute the step S203, and performing the point cloud cluster analysis on the next cell in the sliding window;
if the point cloud cluster analysis of all the cells in the sliding window is completed, step S208 is executed: judging whether the current central point of the sliding window is positioned in the intersection area of the current line segment and the next line segment;
if the current center point of the sliding window is not in the intersection area of the current line segment and the next line segment, step S209 is executed: calculating the position of the next central point of the sliding window according to the position of the current central point of the sliding window, a preset step length and a function equation of the current line segment;
if the current center point of the sliding window is in the intersection area of the current line segment and the next line segment, step S210 is executed: judging whether the included angle between the direction of the current line segment and the direction of the next line segment is smaller than a second threshold value or not;
if the included angle between the current line segment and the next line segment is smaller than the second threshold, step S211 is executed: calculating the position of the next central point of the sliding window according to the position of the current central point of the sliding window, a preset step length and a function equation of the next line segment;
if the included angle between the current line segment and the next line segment is greater than or equal to the second threshold, executing step S209;
step S212: controlling the sliding window to move from the current central point to the next central point by a preset step length; and then returns to perform step S202.
According to the lane line detection method provided by the embodiment of the application, original point cloud data collected by the vehicle-mounted laser radar are classified through a deep learning network to obtain the lane line point cloud, which is further filtered to determine the target point cloud, improving the detection accuracy of the lane line point cloud. In addition, in a sliding-window manner, each line segment in the line segment set and each cell in the sliding window are traversed according to the cell attributes and the line segment set of the target point cloud grid map to cluster the target point cloud, so that dashed-line segments or scattered points belonging to the same lane line can be clustered into the same point set, ensuring the completeness of lane line detection and improving its accuracy and efficiency.
Fig. 8 is a schematic structural diagram of a terminal according to the second embodiment of the present application. The terminal of the application includes: a processor 110, a memory 111 and a computer program 112 stored in said memory 111 and executable on said processor 110. The processor 110, when executing the computer program 112, implements the steps in the above described lane line detection method embodiments.
The terminal may include, but is not limited to, a processor 110, a memory 111. Those skilled in the art will appreciate that fig. 8 is merely an example of a terminal and is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or different components, e.g., the terminal may also include input-output devices, network access devices, buses, etc.
The processor 110 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 111 may be an internal storage unit of the terminal, such as a hard disk or memory of the terminal. The memory 111 may also be an external storage device of the terminal, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card provided on the terminal. Further, the memory 111 may also include both an internal storage unit and an external storage device of the terminal. The memory 111 is used to store the computer program and other programs and data required by the terminal, and may also be used to temporarily store data that has been output or is to be output.
The present application further provides a storage medium having a computer program stored thereon, which, when executed by a processor, implements the steps of the lane marking detection method as described above.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
As used herein, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, which includes not only the elements listed but also other elements not expressly listed.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A lane line detection method is characterized by comprising the following steps:
acquiring lane line point cloud data, and filtering the lane line point cloud according to the lane line point cloud data to determine a target point cloud;
rasterizing the target point cloud to obtain a grid map and a line segment set of the target point cloud;
clustering the target point cloud according to the attributes of the cells in the grid map and the line segment set to obtain point cloud data of the same lane line;
and fitting the point cloud data of the same lane line and outputting the lane line.
2. The lane line detection method of claim 1, wherein obtaining lane line point cloud data comprises:
acquiring original point cloud data acquired by a vehicle-mounted laser radar;
and classifying the original point cloud through a deep learning network according to the original point cloud data to obtain the lane line point cloud data.
3. The lane line detection method of claim 1, wherein filtering the lane line point cloud based on the lane line point cloud data comprises at least one of:
if the distance between any point in the lane line point cloud and the vehicle does not meet a first preset range, removing the corresponding point from the lane line point cloud;
if the reflection intensity value of any point in the lane line point cloud does not meet a second preset range, removing the corresponding point from the lane line point cloud;
and if the distance from any point in the lane line point cloud to the ground does not meet a third preset range, removing the corresponding point from the lane line point cloud.
4. The lane line detection method of claim 1, wherein obtaining the set of line segments of the target point cloud comprises:
acquiring a binary aerial view of the target point cloud according to the distribution condition of the target point cloud in each cell in the grid map;
and carrying out Hough transformation on the binary aerial view of the target point cloud to obtain a line segment set of the target point cloud.
5. The lane line detection method of claim 1, wherein the attributes of the cells comprise:
the number of point clouds in the cell;
the direction of the cell;
clustering conditions of point clouds within the cells.
6. The lane line detection method of claim 5, wherein clustering the target point cloud according to the attributes of the cells in the grid map and the line segment set comprises:
establishing a sliding window with a preset size on the grid graph by taking the starting point of the line segment in the line segment set as the initial central point of the sliding window;
carrying out point cloud cluster analysis on the cells in the sliding window one by one according to a preset sequence:
when the number of point clouds in a current cell in the sliding window is larger than zero and the point clouds in the current cell are not clustered, acquiring a first included angle between the direction of the current cell and the direction of a line segment where the center point of the sliding window is located;
if the first included angle is smaller than a first threshold value, the point clouds in the current cell and the point clouds on the line segment where the center point of the sliding window is located are gathered into a class;
if the first included angle is larger than or equal to the first threshold value, continuously performing the point cloud cluster analysis on the next cell in the sliding window;
after the point cloud cluster analysis of all the cells in the sliding window is finished, controlling the sliding window to move to the next position by a preset step length, and then continuously executing the step of carrying out the point cloud cluster analysis on the cells in the sliding window one by one according to a preset sequence at the next position until the sliding window moves to the outside of the grid map.
7. The lane line detection method of claim 6, wherein controlling the sliding window to move to a next position by a preset step size comprises:
acquiring the position of the current central point of the sliding window;
when the current center point of the sliding window is located in the intersection area of the current line segment where the current center point of the sliding window is located and the next line segment, acquiring a second included angle between the direction of the next line segment and the direction of the current line segment;
if the second included angle is smaller than a second threshold value, calculating the position of the next central point of the sliding window according to the position of the current central point of the sliding window, the preset step length and the function equation of the next line segment;
and if the second included angle is larger than or equal to the second threshold, calculating the position of the next central point of the sliding window according to the position of the current central point of the sliding window, the preset step length and a function equation of the current line segment.
8. The method of claim 1, wherein the step of fitting the point cloud data of the same lane line and outputting the lane line comprises:
fitting a cubic polynomial to the point cloud data of the same lane line by the least squares method to determine a function equation of the lane line;
and outputting the lane line according to the function equation of the lane line.
9. A terminal, characterized in that the terminal comprises a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the lane line detection method according to any one of claims 1 to 8 when executing the computer program.
10. A storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the lane marking detection method according to any one of claims 1 to 8.
CN202210963847.4A 2022-08-11 2022-08-11 Lane line detection method, terminal and storage medium Pending CN115346183A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210963847.4A CN115346183A (en) 2022-08-11 2022-08-11 Lane line detection method, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210963847.4A CN115346183A (en) 2022-08-11 2022-08-11 Lane line detection method, terminal and storage medium

Publications (1)

Publication Number Publication Date
CN115346183A true CN115346183A (en) 2022-11-15

Family

ID=83952572

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210963847.4A Pending CN115346183A (en) 2022-08-11 2022-08-11 Lane line detection method, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN115346183A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116012805A (en) * 2023-03-24 2023-04-25 深圳佑驾创新科技有限公司 Object perception method, apparatus, computer device, storage medium, and program product
CN116012805B (en) * 2023-03-24 2023-08-29 深圳佑驾创新科技有限公司 Target perception method, device, computer equipment and storage medium
CN116304142A (en) * 2023-05-12 2023-06-23 智道网联科技(北京)有限公司 Point cloud data acquisition method, device, equipment and storage medium
CN116304142B (en) * 2023-05-12 2023-08-08 智道网联科技(北京)有限公司 Point cloud data acquisition method, device, equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 314500 988 Tong Tong Road, Wu Tong Street, Tongxiang, Jiaxing, Zhejiang

Applicant after: United New Energy Automobile Co.,Ltd.

Address before: 314500 988 Tong Tong Road, Wu Tong Street, Tongxiang, Jiaxing, Zhejiang

Applicant before: Hezhong New Energy Vehicle Co.,Ltd.