CN110705577B - Laser point cloud lane line extraction method - Google Patents

Laser point cloud lane line extraction method

Info

Publication number: CN110705577B (application CN201910936338.0A)
Authority: CN (China)
Prior art keywords: point cloud, projection, points, lane line, intensity
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN110705577A
Inventors: Hui Nian (惠念), Zheng Xiaohui (郑小辉), Xiong Ji (熊迹), Liu Fen (刘奋)
Original and current assignee: Heading Data Intelligence Co Ltd
Priority/filing date: 2019-09-29 (CN201910936338.0A)
Publications: CN110705577A (application) on 2020-01-17; CN110705577B (grant) on 2022-06-07

Classifications

    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06T7/11: Region-based segmentation
    • G06T7/12: Edge-based segmentation
    • G06T2207/10028: Range image; depth image; 3D point clouds
    • G06T2207/20081: Training; learning
    • G06T2207/20084: Artificial neural networks [ANN]
    • G06T2207/30256: Lane; road marking
(All under G PHYSICS, G06 COMPUTING: G06V image or video recognition or understanding, G06T image data processing or generation.)

Abstract

The invention provides a laser point cloud lane line extraction method, which comprises the following steps: S1, generating a point cloud projection image and labeled samples from the point cloud combined with historical lane line extraction results; S2, training a deep learning segmentation network on the labeled samples to obtain a trained segmentation model; S3, generating a point cloud projection image from the point cloud combined with the trajectory, and storing the correspondence between point cloud and projection as an intermediate result; S4, running inference with the trained segmentation model on the point cloud projection image obtained in S3 to obtain a binary Mask image; S5, performing contour extraction, skeleton extraction and width calculation on the Mask image using opencv; and S6, back-calculating the three-dimensional point cloud coordinates corresponding to the lane line image coordinates based on the result of S5 and the intermediate result of S3. According to the embodiment of the invention, when the road surfacing materials differ, the lane lines are worn, or the acquisition equipment changes, the model can be retrained to adapt to these adverse conditions.

Description

Laser point cloud lane line extraction method
Technical Field
The invention relates to the technical field of traffic, in particular to a laser point cloud lane line extraction method.
Background
Owing to the high positional accuracy of laser point clouds, they have been used increasingly in recent years to collect high-accuracy map elements, including lane lines, arrows, signboards, traffic lights and the like. Among these, the lane line is the core component of the road network, and various automatic lane line extraction methods have been explored and put into practice. The conventional method filters on point cloud reflection intensity; its principle is to exploit the difference between the lane line marking material and the road surfacing material, the reflection intensity of lane line points generally being far higher than that of road surface points.
The reflection intensity threshold is commonly computed with statistical methods such as the Otsu method; other methods convert the point cloud into an image and extract lane line edges with image edge detection and the Hough transform. Both approaches share one key point: distinguishing the lane line from the road surface by a reflection intensity threshold, so the threshold setting directly affects the final extraction accuracy. In the real world, however, road surface materials vary, lane line markings are often worn, and in some areas high-reflection-intensity noise on the road surface near the lane line disturbs the threshold calculation. In addition, the reflection intensity distributions of point clouds collected by lidar devices of different manufacturers and models differ, so threshold-based lane line extraction is difficult to adapt quickly to different areas and different acquisition devices.
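For comparison, a minimal sketch of this traditional intensity-threshold baseline, assuming for illustration that the reflection intensities have already been scaled to 8-bit values (the array contents below are placeholder data):

    import cv2
    import numpy as np

    # intensities: per-point reflection intensity scaled to 0..255 (uint8).
    intensities = np.random.randint(0, 256, size=(100000, 1), dtype=np.uint8)  # placeholder

    # Otsu chooses the threshold separating "road surface" from "lane marking"
    # by minimizing the intra-class intensity variance.
    thresh, _ = cv2.threshold(intensities, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    lane_candidate = intensities.ravel() >= thresh  # boolean mask over the points

As the following paragraph notes, any such global threshold is fragile once surfacing materials, wear, or sensors change.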
In the prior art, a lane line is generally segmented from the road surface either directly with a point cloud reflection intensity threshold, or by converting the point cloud into an image and applying conventional image edge detection; in both cases the binarization threshold directly determines the final extraction accuracy, and the threshold calculation does not generalize well when road surfacing materials differ, lane lines are worn, or acquisition equipment changes. Moreover, these methods can only extract the long solid lines and short dashed lines on the road surface, which makes constructing the subsequent road network considerably harder.
Disclosure of Invention
To solve the above problems, embodiments of the present invention provide a laser point cloud lane line extraction method that overcomes or at least partially solves the above problems.
According to a first aspect of embodiments of the present invention, there is provided a laser point cloud lane line extraction method, including: S1, generating a point cloud projection image and labeled samples from the point cloud combined with historical lane line extraction results; S2, training a deep learning segmentation network on the labeled samples to obtain a trained segmentation model; S3, generating a point cloud projection image from the point cloud combined with the trajectory, and storing the correspondence between point cloud and projection as an intermediate result; S4, running inference with the trained segmentation model on the point cloud projection image obtained in S3 to obtain a binary Mask image; S5, performing contour extraction, skeleton extraction and width calculation on the Mask image using opencv; and S6, back-calculating the three-dimensional point cloud coordinates corresponding to the lane line image coordinates based on the result of S5 and the intermediate result of S3.
Wherein, S1 specifically includes: S11, point cloud slicing: a1, filtering the point cloud by z value with reference to the trajectory data, keeping the point cloud data below the trajectory elevation; a2, cutting the road into blocks of a set distance along the advancing direction, the road direction being calculated from the collected trajectory data; S12, projecting the point cloud into an image: b1, rotating the point cloud sliced in S11 to the road advancing direction; b2, calculating the three-dimensional bounding box of each rotated point cloud block, taking the minimum point (min_x, min_y) as the origin and translating all points of the block into this local coordinate system; b3, calculating the maximum reflection intensity max_intensity and minimum reflection intensity min_intensity of the point cloud block; b4, projecting the point cloud into a single-channel image of a first set size at a pixel resolution of resolution; S13, saving the projection image and the point cloud: dividing the image of the first set size into a plurality of images of a second set size stored in JPG format, storing the sliced point cloud in LAZ format, and recording the correspondence between points (x, y, z) and rows r and columns c; S14, searching existing historical lane line extraction results using the position information of the point cloud data, calculating the r and c corresponding to the three-dimensional x, y, z of the historical data, filling the area inside the lane line white and all other areas black to form a single-channel binary Mask image; the image of the first set size is divided into a plurality of images of the second set size in the same manner as in S13.
The r and c corresponding to the three-dimensional x, y, z of the historical data are calculated as follows:
r = 4096 - 1 - (y - min_y) / resolution
c = 4096 - 1 - (x - min_x) / resolution
pix = (intensity - min_intensity) / (max_intensity - min_intensity)
where intensity is the reflection intensity of point (x, y, z), r is the corresponding row, c is the corresponding column, and pix is the pixel value.
After recording the correspondence between points (x, y, z) and r, c in S13, the method further includes: naming the JPG and LAZ files in the form original point cloud name + point cloud slice sequence number + image segmentation sequence number.
If no historical data exists in S14, the projection image of S12 is labeled manually and the Mask image is then generated from the manual labels; the labeling information is also drawn directly on the point cloud projection image for screening sample data.
Wherein, S6 specifically includes: S61, restoring the pixel coordinates of the second set size provided by S5 to the first set size according to the tile ranges used at division time; S62, judging, within the projection image of the first set size, whether the included angle between different skeletons is smaller than a set angle; if so, connecting them; S63, deleting skeletons whose length is smaller than a set value; S64, clustering the points on the skeleton by position, aggregating points within a set number of pixels into the same point using that number of pixels as the radius; S65, reading the LAZ file of the sliced point cloud corresponding to the projection image to obtain the xyz coordinate set of the point cloud and the row r and column c corresponding to each point (x, y, z); S66, restoring the pixel coordinates of the left and right contour lines represented by each skeleton and width in the form x - 0.5*width and x + 0.5*width, and exploring each pixel in neighborhood order to find the closest valid x, y, z point; S67, performing RANSAC random-sampling fitting on the left and right contour lines respectively and removing outlier points; a circle is fitted first, and if its radius exceeds 1000 m a straight line is fitted instead; the fitted segment is compared with the segment before fitting, and if the head or tail is shortened by 3 m, RANSAC random-sampling fitting is performed on the shortened part; S68, normalizing the points on the left and right contour lines to a 1-to-1 correspondence: taking the left contour line as reference, detecting whether the head and tail of the right contour line align with it and, if not, filling the missing part by vertical projection; aligning the left contour line likewise with the right contour line as reference; and traversing the points of the left contour line, finding the closest point on the right contour line, calculating the center point of the two, and recomputing the two points leftward and rightward from the center point, the width and the direction.
After S6, the method further includes: S7: setting a fault tolerance mechanism to support partial re-production of the data.
Wherein, S7 specifically includes: S71: when generating the projection image, checking its storage directory; if the current file already exists, skipping, otherwise regenerating the projection image; S72: in the new data production phase, saving the projection image data to a local file and the inferred mask to a local file; the skeleton and width calculated from the mask are saved to a local file; the three-dimensional lane line coordinates obtained after back-calculation into the point cloud are saved to a local file; and while executing S3-S6 in sequence, reading the local file saved by the previous step.
According to a second aspect of the embodiments of the present invention, there is provided an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement the laser point cloud lane line extraction method as provided in any one of the various possible implementations of the first aspect.
According to a third aspect of embodiments of the present invention, there is provided a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a laser point cloud lane line extraction method as provided in any one of the various possible implementations of the first aspect.
According to the laser point cloud lane line extraction method provided by the embodiment of the invention, the extracted geometric boundary of the lane line consists of the left and right contour lines of the marking painted on the ground, and the extraction precision reaches within 1/3 of the lane line width. When road surfacing materials differ, lane lines are worn, or acquisition equipment changes, these adverse conditions can be met quickly by supplementing samples and retraining the model, without causing existing functionality to regress. Building the subsequent road network on the results of the embodiment of the invention is simpler and more stable than prior network construction methods based on the geometric positions of solid and dashed lines.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from these without inventive effort.
Fig. 1 is a schematic flow chart of a laser point cloud lane line extraction method according to an embodiment of the present invention;
FIG. 2 is a projection diagram of the point cloud provided by the embodiment of the present invention after rotating to the road direction;
fig. 3 is a labeled diagram after the same projection of historical achievement data provided by the embodiment of the present invention;
FIG. 4 is a projection diagram of a historical outcome data overlay point cloud provided by an embodiment of the present invention;
fig. 5 is a lane line extracted from the laser point cloud in combination with the deep learning according to the embodiment of the present invention;
FIG. 6 is a schematic diagram of a lane line extracted from a laser point cloud in conjunction with deep learning according to another embodiment of the present invention;
fig. 7 is a schematic physical structure diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments, but not all embodiments, of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In recent years, methods based on deep learning have shown clear improvements over purely traditional methods on many image and speech tasks. The embodiment of the invention therefore combines deep learning with traditional computation on images and point clouds to extract lane lines. With the deep learning approach, the desired extraction effect is given to the model in labeled form: results in which the dashed segments have already been connected are used as labels, so connected long lines are obtained directly at inference time. This reduces the error rate of relying purely on distance and angle calculations when the subsequent road network is constructed.
Referring to fig. 1 to 6, an embodiment of the present invention provides a method for extracting a laser point cloud lane line, including the following steps:
S1: generating a point cloud projection image and labeled samples from historical lane line extraction results combined with the point cloud;
specifically, step S1 may include the following steps:
s11: point cloud slicing:
a1, filtering the point cloud by z value (elevation) with reference to the trajectory data, keeping the point cloud data below the trajectory elevation; a2, cutting the road into 20 m blocks along the advancing direction, the road direction being calculated from the collected trajectory data;
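A minimal sketch of this slicing step, assuming numpy arrays and a SciPy KD-tree for the nearest-trajectory lookup (array layouts and names are illustrative):

    import numpy as np
    from scipy.spatial import cKDTree

    def slice_point_cloud(points, trajectory, block_len=20.0):
        """points: (N,4) x,y,z,intensity; trajectory: (M,3) collected track points."""
        tree = cKDTree(trajectory[:, :2])
        _, nearest = tree.query(points[:, :2])        # nearest trajectory point per point
        # a1: keep only points at or below the trajectory elevation
        points = points[points[:, 2] <= trajectory[nearest, 2]]
        # a2: cut into ~20 m blocks along the driving direction (arc length of the track)
        seg = np.linalg.norm(np.diff(trajectory[:, :2], axis=0), axis=1)
        arc = np.concatenate([[0.0], np.cumsum(seg)])
        _, nearest = tree.query(points[:, :2])        # recompute after filtering
        block_id = (arc[nearest] // block_len).astype(int)
        return [points[block_id == b] for b in np.unique(block_id)]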
S12: projecting the point cloud into an image: b1, rotating the point cloud sliced in S11 to the advancing direction of the road; after rotation the lateral and longitudinal directions of the point cloud essentially coincide with the left-right and forward directions of the road, which simplifies the subsequent back-projection from image coordinates to the point cloud. b2, calculating the three-dimensional bounding box of each rotated point cloud block, taking the minimum point (min_x, min_y) as the origin and, keeping the x and y axes unchanged, translating all points of the block into this local coordinate system. b3, calculating the maximum reflection intensity max_intensity and minimum reflection intensity min_intensity of the block. b4, projecting the point cloud into a single-channel 4096 x 4096 image (the first set size) at a pixel resolution of resolution (set to 1 cm to guarantee extraction accuracy). For a point (x, y, z) with reflection intensity intensity, the corresponding row r, column c and pixel value pix are calculated as follows:
r = 4096 - 1 - (y - min_y) / resolution
c = 4096 - 1 - (x - min_x) / resolution
pix = (intensity - min_intensity) / (max_intensity - min_intensity)
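A sketch of steps b1-b4, assuming the rotation angle `heading` (in radians) is taken from the local trajectory direction; names and the clipping of out-of-range pixels are illustrative:

    import numpy as np

    def project_block(points, heading, resolution=0.01, size=4096):
        """points: (N,4) x,y,z,intensity for one slice; returns image and metadata."""
        # b1: rotate by -heading so the road's forward direction is axis-aligned
        c, s = np.cos(-heading), np.sin(-heading)
        xy = points[:, :2] @ np.array([[c, s], [-s, c]])
        # b2: translate the bounding-box minimum (min_x, min_y) to the origin
        min_x, min_y = xy.min(axis=0)
        xy = xy - [min_x, min_y]
        # b3: intensity range of this block
        inten = points[:, 3]
        min_i, max_i = inten.min(), inten.max()
        # b4: rasterize at 1 cm/pixel using the row/column formulas from the text
        r = (size - 1 - xy[:, 1] / resolution).astype(int)
        col = (size - 1 - xy[:, 0] / resolution).astype(int)
        img = np.zeros((size, size), dtype=np.float32)
        ok = (r >= 0) & (r < size) & (col >= 0) & (col < size)
        img[r[ok], col[ok]] = (inten[ok] - min_i) / max(max_i - min_i, 1e-6)
        return img, (min_x, min_y, min_i, max_i)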
S13: saving the projection image and the point cloud: (1) the 4096 x 4096 image is divided into four 2048 x 2048 images (the second set size) to reduce the resources required for training and inference of the deep learning model; the tiles are stored in JPG format, the sliced point cloud is stored in LAZ format, and the correspondence between points (x, y, z) and r, c is recorded. (2) The JPG and LAZ files are named in the form original point cloud name + point cloud slice sequence number + image segmentation sequence number, e.g. PointCloud_1_1, which enables fast retrieval of the image-to-point-cloud relation in the various subsequent post-processing steps.
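A sketch of the tiling and naming scheme; the JPG writing uses OpenCV, while writing the LAZ slice would use a library such as laspy (an assumption about tooling, since the patent only fixes the formats):

    import cv2

    def save_tiles(img, cloud_name, slice_no):
        """Split a 4096x4096 float projection into four 2048x2048 JPG tiles."""
        tile = 2048
        for k, (r0, c0) in enumerate([(0, 0), (0, tile), (tile, 0), (tile, tile)]):
            # naming: <original point cloud name>_<slice number>_<tile number>.jpg
            name = f"{cloud_name}_{slice_no}_{k}.jpg"
            cv2.imwrite(name, (img[r0:r0 + tile, c0:c0 + tile] * 255).astype("uint8"))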
S14: searching the existing historical lane line extraction results using the position information of the point cloud data. The r and c corresponding to the three-dimensional x, y, z of the historical data are calculated with the formulas of S12, the area inside the lane line is filled white and all other areas black, forming a single-channel binary Mask image. The 4096 x 4096 image is divided into four 2048 x 2048 images in the same manner as in S13. If no historical data exists, manual labeling can be used instead: the Mask image is generated after manually labeling the projection image of S12. To ease inspection, the labeling information is also drawn directly on the point cloud projection image for screening sample data.
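A sketch of Mask generation from the historical lane line polygons, assuming each polygon's vertices were already converted to (r, c) with the formulas of S12:

    import cv2
    import numpy as np

    def make_mask(polygons_rc, size=4096):
        """polygons_rc: list of (K,2) arrays of (row, col) lane line outline vertices."""
        mask = np.zeros((size, size), dtype=np.uint8)
        for poly in polygons_rc:
            pts = np.ascontiguousarray(poly[:, ::-1], dtype=np.int32)  # fillPoly wants (x=col, y=row)
            cv2.fillPoly(mask, [pts], 255)   # lane line interior filled white
        return mask                          # all other areas remain black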
S2: performing deep learning segmentation network model training using the samples generated in S1;
Specifically, in this step the point cloud projection images and Mask images serve as samples; the samples are screened first and abnormal samples removed. Then 20 percent are randomly drawn as the verification set and the remaining 80 percent form the training set. Since the method only extracts the geometric boundary of the lane line, a suitable segmentation network model is selected and trained until convergence.
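A minimal sketch of this 80/20 split after sample screening (names are illustrative):

    import random

    def split_samples(pairs, val_ratio=0.2, seed=0):
        """pairs: list of (projection_path, mask_path) with abnormal samples removed."""
        pairs = pairs[:]                       # do not mutate the caller's list
        random.Random(seed).shuffle(pairs)
        n_val = int(len(pairs) * val_ratio)
        return pairs[n_val:], pairs[:n_val]    # training set (80%), verification set (20%)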
S3: generating a point cloud projection image by combining the trajectory, and storing the correspondence between the point cloud and the projection;
Specifically, step S3 is implemented like step S1: the point cloud is sliced as in S11, projected as in S12, and stored as in S13.
S4: using the segmentation model trained in S2 to run inference on the projection image of S3, obtaining a binary Mask image;
Specifically, in this step the model trained in S2 is loaded and inference is run on the point cloud projection images produced in S3, yielding the Mask of the inference result.
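The patent does not name a specific segmentation network; a sketch of the inference call, assuming a PyTorch model that outputs per-pixel logits for the 2048 x 2048 tiles:

    import cv2
    import numpy as np
    import torch

    def infer_mask(model, jpg_path, device="cuda"):
        """Run the trained segmentation model on one projection tile; return a binary Mask."""
        img = cv2.imread(jpg_path, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
        x = torch.from_numpy(img)[None, None].to(device)   # shape 1x1x2048x2048
        with torch.no_grad():
            prob = torch.sigmoid(model(x))[0, 0]
        return (prob > 0.5).to(torch.uint8).mul(255).cpu().numpy()  # 0/255 Mask image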
S5: performing contour extraction, skeleton extraction and width calculation on the Mask image of S4 using opencv;
Specifically, in this step contour extraction, skeleton extraction and width calculation are performed on the Mask image of S4 using opencv. The extracted geometric boundary of a lane line consists of the left and right contour lines of the marking painted on the ground. Because the contour extracted directly from the Mask is a polygon, separating left and right contour lines from it requires removing the closing parts at both ends of the polygon, which proved unstable in practice. The embodiment of the invention therefore expresses the left and right contour lines of the lane line as a skeleton line plus a width.
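A sketch of this skeleton-plus-width representation; cv2.ximgproc.thinning requires the opencv-contrib build, and reading the width off a distance transform is one plausible realization of the width calculation (an assumption, since the patent does not spell out the operator):

    import cv2
    import numpy as np

    def skeleton_and_width(mask):
        """mask: binary uint8 Mask (lane markings white). Returns contours, skeleton points, widths."""
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        skeleton = cv2.ximgproc.thinning(mask)            # 1-pixel centerline per marking
        dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
        rs, cs = np.nonzero(skeleton)
        widths = 2.0 * dist[rs, cs]                       # full stroke width in pixels
        return contours, np.column_stack([rs, cs]), widths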
S6: and reversely calculating the three-dimensional point cloud coordinates corresponding to the lane line image coordinates based on the result of the S5 and combined with the intermediate result of the S3.
Specifically, step S6 may specifically include the following steps:
S61: restoring the 2048 x 2048 pixel coordinates provided by S5 to 4096 x 4096 according to the tile ranges used at division time;
S62: within the 4096 x 4096 projection image, judging whether the included angle between different skeletons is small (set to 10°) and, if so, connecting them. This handles inference results in which a single lane line is broken into several segments;
S63: deleting short skeletons (length threshold set to 500 pixels). Since each point cloud projection image covers about 20 m, skeletons of small length are mostly false extractions of arrows, characters, guardrails and similar noise, and are filtered out;
S64: clustering the points on the skeleton by position, aggregating points within 10 pixels into the same point using 10 pixels as the radius. Otherwise, when the point cloud is back-calculated from the correspondence, some very close pixel points end up staggered out of order;
S65: reading the LAZ file of the sliced point cloud corresponding to the projection image, obtaining the xyz coordinate set of the point cloud and the row r and column c corresponding to each xyz point;
S66: restoring the pixel coordinates of the left and right contour lines represented by each skeleton + width in the form x - 0.5*width and x + 0.5*width. Each pixel is explored in the 8-neighborhood order of Table 1 below to find the closest valid x, y, z point. At this point the three-dimensional xyz coordinates corresponding to the pixel coordinates have been obtained.
Table 1: 8-neighborhood search order (· marks the current pixel)
3 2 4
0 · 1
6 5 7
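A sketch of the lookup over Table 1's order, widening the probe distance ring by ring until a pixel with a recorded (x, y, z) is found; the `pixel_to_xyz` mapping name is hypothetical:

    # Offsets in the order of Table 1: 0=left, 1=right, 2=up, 3=up-left,
    # 4=up-right, 5=down, 6=down-left, 7=down-right (row, col convention).
    ORDER = [(0, -1), (0, 1), (-1, 0), (-1, -1), (-1, 1), (1, 0), (1, -1), (1, 1)]

    def nearest_valid_xyz(r, c, pixel_to_xyz, max_radius=10):
        """pixel_to_xyz: dict mapping (row, col) -> (x, y, z) recorded in S13."""
        if (r, c) in pixel_to_xyz:
            return pixel_to_xyz[(r, c)]
        for d in range(1, max_radius + 1):       # widen the search distance
            for dr, dc in ORDER:                 # probe neighbors in the Table 1 order
                p = (r + dr * d, c + dc * d)
                if p in pixel_to_xyz:
                    return pixel_to_xyz[p]
        return None                              # no valid 3D point nearby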
S67: performing RANSAC random-sampling fitting on the left and right contour lines respectively and removing outlier points. A circle is fitted first; if the fitted radius exceeds 1000 m, a straight line is fitted instead. The fitted segment is compared with the segment before fitting, and if the head or tail has been shortened by 3 m, RANSAC random-sampling fitting is performed again on the shortened part. This realizes piecewise random fitting of a lane line into at most three segments within the 20 m range.
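A compact sketch of the circle-first RANSAC fit; the three-point circle construction and the 1000 m radius fallback follow the text, while the iteration count and inlier tolerance are illustrative:

    import numpy as np

    def circle_from_3pts(p1, p2, p3):
        """Center and radius of the circle through three 2D points (None if collinear)."""
        (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
        a = np.array([[x2 - x1, y2 - y1], [x3 - x1, y3 - y1]], dtype=float)
        b = 0.5 * np.array([x2**2 - x1**2 + y2**2 - y1**2,
                            x3**2 - x1**2 + y3**2 - y1**2])
        if abs(np.linalg.det(a)) < 1e-9:
            return None
        cx, cy = np.linalg.solve(a, b)
        return (cx, cy), float(np.hypot(x1 - cx, y1 - cy))

    def ransac_circle(points, iters=200, tol=0.05, seed=0):
        """points: (N,2) contour points in metres. Returns inlier mask and fitted radius."""
        rng = np.random.default_rng(seed)
        best, best_radius = np.zeros(len(points), bool), np.inf
        for _ in range(iters):
            i, j, k = rng.choice(len(points), 3, replace=False)
            fit = circle_from_3pts(points[i], points[j], points[k])
            if fit is None:
                continue
            (cx, cy), radius = fit
            err = np.abs(np.hypot(points[:, 0] - cx, points[:, 1] - cy) - radius)
            inliers = err < tol
            if inliers.sum() > best.sum():
                best, best_radius = inliers, radius
        return best, best_radius   # if best_radius > 1000 m, fit a straight line instead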
S68: normalizing the points on the left and right contour lines to a 1-to-1 correspondence: taking the left contour line as reference, detecting whether the head and tail of the right contour line align with it and, if not, filling the missing part by vertical projection; then aligning the left contour line with the right contour line as reference. Finally, the points of the left contour line are traversed, the closest point on the right contour line is found, the center point of the two is calculated, and the two points are recomputed leftward and rightward from the center point + width + direction.
S7: setting a fault tolerance mechanism to support partial re-production of the data.
Specifically, the step may further include the steps of:
S71: fault tolerance in the sample generation stage: when generating the projection image, its storage directory is checked; if the current file already exists it is skipped, otherwise the projection image is regenerated;
S72: in the new data production phase, the projection image data is saved to a local file and the inferred mask is saved to a local file. The skeleton + width computed from the mask are saved to a local file. The three-dimensional lane line coordinates obtained after back-calculation into the point cloud are saved to a local file. While executing S3-S6 in sequence, each step reads the local file saved by the previous step, so the pipeline can be run end-to-end as a whole or resumed step by step.
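A minimal sketch of the skip-if-exists check behind S71/S72 (the path handling and callback are illustrative):

    import os

    def run_stage(out_path, producer):
        """Re-run a pipeline stage only if its local output file is missing."""
        if os.path.exists(out_path):       # current file exists -> skip this stage
            return out_path
        producer(out_path)                 # otherwise regenerate and save locally
        return out_path

Each of S3-S6 then reads the file written by the previous stage, so the whole flow can resume from any step after a failure.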
Steps S1 to S7 above realize laser point cloud lane line extraction combined with deep learning.
The geometric boundary of the lane line extracted by the embodiment of the invention consists of the left and right contour lines of the marking painted on the ground, and the extraction precision reaches within 1/3 of the lane line width. When road surfacing materials differ, lane lines are worn, or acquisition equipment changes, these adverse conditions can be met quickly by supplementing samples and retraining the model, without causing existing functionality to regress. Building the subsequent road network on the results of the embodiment of the invention is simpler and more stable than prior network construction methods based on the geometric positions of solid and dashed lines.
An embodiment of the present invention provides an electronic device, as shown in fig. 7, the electronic device includes: a processor (processor)501, a communication Interface (Communications Interface)502, a memory (memory)503, and a communication bus 504, wherein the processor 501, the communication Interface 502, and the memory 503 are configured to communicate with each other via the communication bus 504. The processor 501 may call a computer program on the memory 503 and may be run on the processor 501 to execute the laser point cloud lane line extraction method provided by the above embodiments, for example, including: s1, combining the point clouds, extracting results of the historical lane lines, and generating a point cloud projection drawing and a labeling sample; s2, carrying out deep learning segmentation network model training by using the labeled sample to obtain a trained segmentation model; s3, generating a point cloud projection diagram by combining the tracks, storing the corresponding relation between the point cloud and the projection, and obtaining an intermediate result; s4, reasoning the point cloud projection image obtained in S3 by using the trained segmentation model to obtain a binary Mask image; s5, performing contour extraction, skeleton extraction and width calculation on the Mask image by using opencv; and S6, reversely calculating the three-dimensional point cloud coordinates corresponding to the lane line image coordinates based on the result of S5 and the intermediate result of S3.
In addition, the logic instructions in the memory 503 may be implemented in the form of software functional units and stored in a computer readable storage medium when the logic instructions are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
An embodiment of the present invention further provides a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program is implemented to, when executed by a processor, perform the laser point cloud lane line extraction method provided in the foregoing embodiments, for example, the method includes: s1, combining the point clouds, extracting results of the historical lane lines, and generating a point cloud projection drawing and a labeling sample; s2, carrying out deep learning segmentation network model training by using the labeled sample to obtain a trained segmentation model; s3, generating a point cloud projection diagram by combining the tracks, storing the corresponding relation between the point cloud and the projection, and obtaining an intermediate result; s4, reasoning the point cloud projection image obtained in S3 by using the trained segmentation model to obtain a binary Mask image; s5, performing contour extraction, skeleton extraction and width calculation on the Mask image by using opencv; and S6, reversely calculating the three-dimensional point cloud coordinates corresponding to the lane line image coordinates based on the result of S5 and the intermediate result of S3.
The above-described embodiments of the electronic device and the like are merely illustrative, and units illustrated as separate components may or may not be physically separate, and components displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute the various embodiments or some parts of the methods of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (9)

1. A laser point cloud lane line extraction method comprises the following steps:
S1, generating a point cloud projection image and labeled samples from the point cloud combined with historical lane line extraction results;
S2, training a deep learning segmentation network on the labeled samples to obtain a trained segmentation model;
S3, generating a point cloud projection image from the point cloud combined with the trajectory, and storing the correspondence between point cloud and projection as an intermediate result;
S4, running inference with the trained segmentation model on the point cloud projection image obtained in S3 to obtain a binary Mask image;
S5, performing contour extraction, skeleton extraction and width calculation on the Mask image using opencv;
S6, back-calculating the three-dimensional point cloud coordinates corresponding to the lane line image coordinates based on the result of S5 and the intermediate result of S3; wherein, S6 specifically includes:
S61, restoring the pixel coordinates of the second set size provided by S5 to the first set size according to the tile ranges used at division time;
S62, judging, within the projection image of the first set size, whether the included angle between different skeletons is smaller than a set angle; if so, connecting them;
S63, deleting skeletons whose length is smaller than a set value;
S64, clustering the points on the skeleton by position, aggregating points within a set number of pixels into the same point using that number of pixels as the radius;
S65, reading the LAZ file of the sliced point cloud corresponding to the projection image to obtain the xyz coordinate set of the point cloud and the row r and column c corresponding to each point (x, y, z);
S66, restoring the pixel coordinates of the left and right contour lines represented by each skeleton and width in the form x - 0.5*width and x + 0.5*width, and exploring each pixel in neighborhood order to find the closest valid x, y, z point;
S67, performing RANSAC random-sampling fitting on the left and right contour lines respectively and removing outlier points; a circle is fitted first, and if its radius exceeds 1000 m a straight line is fitted instead; the fitted segment is compared with the segment before fitting, and if the head or tail is shortened by 3 m, RANSAC random-sampling fitting is performed on the shortened part;
S68, normalizing the points on the left and right contour lines to a 1-to-1 correspondence: taking the left contour line as reference, detecting whether the head and tail of the right contour line align with it and, if not, filling the missing part by vertical projection; aligning the left contour line likewise with the right contour line as reference; and traversing the points of the left contour line, finding the closest point on the right contour line, calculating the center point of the two, and recomputing the two points leftward and rightward from the center point, the width and the direction.
2. The method according to claim 1, wherein the S1 specifically includes:
s11, point cloud slicing:
a1, filtering the point cloud by z value with reference to the trajectory data, keeping the point cloud data below the trajectory elevation;
a2, cutting the road into blocks of a set distance along the advancing direction, the road advancing direction being calculated from the collected trajectory data;
s12, projecting the point cloud into an image:
b1, rotating the point cloud obtained after slicing S11 to the road advancing direction;
b2, calculating the three-dimensional bounding box of each rotated point cloud block, taking the minimum point (min_x, min_y) as the origin and translating all points of the block into this local coordinate system;
b3, calculating the maximum reflection intensity max_intensity and minimum reflection intensity min_intensity of the point cloud block;
b4, projecting the point cloud into a single-channel image with a first set size according to the pixel resolution of resolution;
S13, saving the projection image and the point cloud: dividing the image of the first set size into a plurality of images of the second set size stored in JPG format, storing the sliced point cloud in LAZ format, and recording the correspondence between points (x, y, z) and rows r and columns c;
S14, searching existing historical lane line extraction results using the position information of the point cloud data; calculating the r and c corresponding to the three-dimensional x, y, z of the historical data, filling the area inside the lane line white and all other areas black to form a single-channel binary Mask image; the image of the first set size is divided into a plurality of images of the second set size in the same manner as in S13.
3. The method according to claim 2, wherein r, c corresponding to the three-dimensional x, y, z of the history data is calculated by:
r = 4096 - 1 - (y - min_y) / resolution
c = 4096 - 1 - (x - min_x) / resolution
pix = (intensity - min_intensity) / (max_intensity - min_intensity)
where intensity is the reflection intensity of point (x, y, z), r is the corresponding row, c is the corresponding column, and pix is the pixel value.
4. The method according to claim 2, further comprising, after recording the correspondence between the points (x, y, z) and r, c in S13:
the JPG and LAZ files are named in the form of original point cloud name, point cloud slice sequence number and image segmentation sequence number.
5. The method according to claim 2, wherein in S14, if no historical data exists, manual labeling is used: the Mask image is generated after manually labeling the projection image of S12; and the labeling information is drawn directly on the point cloud projection image for screening sample data.
6. The method of claim 1, further comprising, after S6:
s7: and setting a fault tolerance mechanism to support data part reproduction.
7. The method according to claim 6, wherein the S7 specifically includes:
s71: when generating the projection graph, checking a storage directory of the projection graph, if the current file exists, skipping, and otherwise, regenerating the projection graph;
s72: in the new data production phase, the data of the projection drawing is stored in a local file, and the inferred mask is stored in the local file; the skeleton and the width calculated by the mask are saved in a local file; combining the three-dimensional coordinates of the lane lines after the point cloud reverse calculation, and storing the three-dimensional coordinates into a local file; and in the process of sequentially executing S3-S6, reading the local file saved in the previous step.
8. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the steps of the laser point cloud lane line extraction method of any one of claims 1 to 7.
9. A non-transitory computer readable storage medium having stored thereon a computer program, wherein the computer program when executed by a processor implements the steps of the laser point cloud lane line extraction method of any one of claims 1 to 7.
CN201910936338.0A 2019-09-29 2019-09-29 Laser point cloud lane line extraction method Active CN110705577B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910936338.0A CN110705577B (en) 2019-09-29 2019-09-29 Laser point cloud lane line extraction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910936338.0A CN110705577B (en) 2019-09-29 2019-09-29 Laser point cloud lane line extraction method

Publications (2)

Publication Number Publication Date
CN110705577A CN110705577A (en) 2020-01-17
CN110705577B true CN110705577B (en) 2022-06-07

Family

Family ID: 69196807

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910936338.0A Active CN110705577B (en) 2019-09-29 2019-09-29 Laser point cloud lane line extraction method

Country Status (1)

Country Link
CN (1) CN110705577B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113160396B (en) * 2020-01-23 2024-03-29 宝马股份公司 Method for establishing map model, method for generating real-time map and map system
CN111397578B (en) * 2020-03-11 2022-01-25 中煤航测遥感集团有限公司 Method and device for acquiring elevation of pipeline welded junction and storage medium
CN111160328B (en) * 2020-04-03 2023-07-07 速度科技股份有限公司 Automatic extraction method of traffic marking based on semantic segmentation technology
CN111696059B (en) * 2020-05-28 2022-04-29 武汉中海庭数据技术有限公司 Lane line smooth connection processing method and device
CN112131947A (en) * 2020-08-21 2020-12-25 河北鼎联科技有限公司 Road indication line extraction method and device
CN112740225B (en) * 2020-09-30 2022-05-13 华为技术有限公司 Method and device for determining road surface elements
CN112434582A (en) * 2020-11-14 2021-03-02 武汉中海庭数据技术有限公司 Lane line color identification method and system, electronic device and storage medium
CN112700405B (en) * 2020-12-11 2023-07-07 深圳市道通科技股份有限公司 Brake disc wear degree measuring method, device and equipment
CN112907746A (en) * 2021-03-25 2021-06-04 上海商汤临港智能科技有限公司 Method and device for generating electronic map, electronic equipment and storage medium
CN113269897B (en) * 2021-07-19 2021-11-09 深圳市信润富联数字科技有限公司 Method, device and equipment for acquiring surface point cloud and storage medium
CN113807193A (en) * 2021-08-23 2021-12-17 武汉中海庭数据技术有限公司 Method and system for automatically extracting virtual line segments of traffic roads in laser point cloud
CN113609632B (en) * 2021-10-08 2021-12-21 天津云圣智能科技有限责任公司 Method and device for determining power line compensation point and server
CN116485636B (en) * 2023-04-27 2023-10-20 武汉纵横天地空间信息技术有限公司 Point cloud elevation imaging method, system and readable storage medium


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10670725B2 (en) * 2017-07-25 2020-06-02 Waymo Llc Determining yaw error from map data, lasers, and cameras
US10984257B2 (en) * 2017-12-13 2021-04-20 Luminar Holdco, Llc Training multiple neural networks of a vehicle perception component based on sensor settings

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104766058A (en) * 2015-03-31 2015-07-08 百度在线网络技术(北京)有限公司 Method and device for obtaining lane line
CN107330380A (en) * 2017-06-14 2017-11-07 千寻位置网络有限公司 Lane line based on unmanned plane image is automatically extracted and recognition methods
CN108470159A (en) * 2018-03-09 2018-08-31 腾讯科技(深圳)有限公司 Lane line data processing method, device, computer equipment and storage medium
CN108985247A (en) * 2018-07-26 2018-12-11 北方工业大学 Multispectral image urban road identification method
CN109993099A (en) * 2019-03-27 2019-07-09 西安航空职业技术学院 A kind of lane line drawing recognition methods based on machine vision
CN110111414A (en) * 2019-04-10 2019-08-09 北京建筑大学 A kind of orthography generation method based on three-dimensional laser point cloud

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Road marking detection using LIDAR reflective intensity data and its application to vehicle localization; Alberto Hata, Denis Wolf; 17th International IEEE Conference on Intelligent Transportation Systems (ITSC); 2014-11-20; full text *
Lane line recognition system algorithm design and FPGA implementation; Jing Gu, Song Zhenwei, Wang Zheng; Journal of Harbin University of Science and Technology; 2013-12-31; full text *

Also Published As

Publication number Publication date
CN110705577A (en) 2020-01-17

Similar Documents

Publication Publication Date Title
CN110705577B (en) Laser point cloud lane line extraction method
KR102325033B1 (en) Learning method, learning device for detecting object using edge image and testing method, testing device using the same
CN110634291B (en) High-precision map topology automatic construction method and system based on crowdsourcing data
CN110148196B (en) Image processing method and device and related equipment
CN110084095B (en) Lane line detection method, lane line detection apparatus, and computer storage medium
CN108564874B (en) Ground mark extraction method, model training method, device and storage medium
CN110008809B (en) Method and device for acquiring form data and server
CN110569699B (en) Method and device for carrying out target sampling on picture
CN111695486B (en) High-precision direction signboard target extraction method based on point cloud
WO2017041396A1 (en) Driving lane data processing method, device, storage medium and apparatus
CN110598541B (en) Method and equipment for extracting road edge information
CN107424166B (en) Point cloud segmentation method and device
CN112652015B (en) BIM-based pavement disease marking method and device
CN110188778B (en) Residential area element outline regularization method based on image extraction result
WO2021155558A1 (en) Road marking identification method, map generation method and related product
US11488402B2 (en) Method and system for segmenting touching text lines in image of uchen-script Tibetan historical document
CN113240623A (en) Pavement disease detection method and device
CN111696059B (en) Lane line smooth connection processing method and device
CN110135382B (en) Human body detection method and device
CN110598581B (en) Optical music score recognition method based on convolutional neural network
CN112033419A (en) Method, electronic device, and medium for detecting automatic port driving lane line
CN110728735A (en) Road-level topological layer construction method and system
KR101910256B1 (en) Lane Detection Method and System for Camera-based Road Curvature Estimation
CN115170657A (en) Steel rail identification method and device
CN114495049A (en) Method and device for identifying lane line

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant