CN111783722A - Lane line extraction method of laser point cloud and electronic equipment - Google Patents


Info

Publication number
CN111783722A
Authority
CN
China
Prior art keywords
grid
lane line
point cloud
ground
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010671388.3A
Other languages
Chinese (zh)
Other versions
CN111783722B (en)
Inventor
Liu Li (刘立)
Ding Yafen (丁亚芬)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hubei Ecarx Technology Co Ltd
Original Assignee
Hubei Ecarx Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hubei Ecarx Technology Co Ltd
Priority to CN202010671388.3A
Publication of CN111783722A
Application granted
Publication of CN111783722B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a lane line extraction method for laser point clouds and an electronic device. The method comprises the following steps: acquiring a laser point cloud covering a lane; removing non-ground points from the laser point cloud based on the elevation values of the points in the cloud to obtain a ground point cloud; converting the ground point cloud into a binary image based on gray values; performing region growing clustering on the binary image to generate at least one clustering region; performing principal component analysis on each clustering region to obtain its shape descriptors, and extracting the ground points used for generating lane lines from each clustering region based on these descriptors; and fitting the extracted ground points to generate the lane lines for producing a high-precision map. The method automatically and accurately extracts lane lines from massive point cloud data, with high processing efficiency and high extraction precision.

Description

Lane line extraction method of laser point cloud and electronic equipment
Technical Field
The invention relates to the technical field of high-precision maps, and in particular to a lane line extraction method for laser point clouds and an electronic device.
Background
At present, traditional road-level maps can no longer meet the requirements of advanced driving assistance systems, and lane-level high-precision maps are widely used in such systems because they provide richer road information that aids vehicle positioning, navigation and decision making. Lane lines are a core component of a lane-level high-precision map, and their correct extraction is a precondition for the accuracy of such a map.
In the prior art, lane lines are mainly extracted by the following methods. First, manual editing and identification: lane lines are drawn by hand on point clouds or images, which is inefficient, costly, offers no guarantee of accuracy, and cannot be done in batches. Second, image-based extraction: this typically uses threshold segmentation in the HSI (Hue-Saturation-Intensity) color space, but because road surfaces are complex and lane lines wear down, worn lane lines are hard to distinguish from the road surface, and vehicles occluding the road also affect the accuracy and integrity of the extracted lines. Third, point-cloud-based extraction: this distinguishes lane lines from the road surface using point cloud attributes such as echo reflectivity (gray value), and fits the extracted points by least squares to achieve automatic extraction. However, it is susceptible to noise; for example, other high-reflectivity markings on the ground (such as speed limit signs and characters) may be misclassified as lane lines, lowering the extraction precision. Fourth, extraction based on deep learning neural networks: a deep learning neural network is built from lane line models identified by their features and used to extract lane lines. However, the accuracy of lane lines extracted this way is strongly correlated with the quality of the models used to train the network; obtaining highly accurate lane lines requires a large number of lane line models for training, so the training cost and load are high.
Disclosure of Invention
In view of the above problems, the present invention provides a lane line extraction method for laser point clouds and an electronic device that overcome, or at least partially solve, those problems.
One object of the invention is to provide a lane line extraction method that can process massive data with high extraction precision.
A further object of the invention is to further improve the accuracy and integrity of the extracted lane lines.
Particularly, according to an aspect of an embodiment of the present invention, there is provided a lane line extraction method of a laser point cloud, including:
acquiring a laser point cloud comprising a lane;
removing non-ground points from the laser point cloud based on the elevation values of all the points in the laser point cloud to obtain ground point cloud;
converting the ground point cloud into a binary image based on a gray value;
performing region growing clustering on the binary image to generate at least one clustering region;
performing principal component analysis on each clustering region to obtain a shape descriptor of each clustering region, and extracting ground points for generating lane lines from the ground points of each clustering region based on the shape descriptor;
and fitting the ground points for generating the lane line to generate the lane line so as to manufacture a high-precision map.
Optionally, the performing principal component analysis on each of the clustering regions to obtain a shape descriptor of each of the clustering regions, and extracting ground points used for generating a lane line from the ground points of each of the clustering regions based on the shape descriptor includes:
performing principal component analysis on the ground points contained in each clustering region to obtain a characteristic value of the clustering region;
calculating a shape descriptor of the clustering region according to the characteristic value, wherein the shape descriptor comprises a linear descriptor, a plane descriptor and a spherical descriptor;
judging whether the linear descriptor of the clustering region is larger than the plane descriptor and the spherical descriptor of the clustering region;
and if so, determining the ground points contained in the clustering area as the ground points for generating the lane lines.
Optionally, the removing non-ground points from the laser point cloud based on the elevation values of the points in the laser point cloud to obtain a ground point cloud includes:
projecting the laser point cloud to an XOY plane of a WGS84 space rectangular coordinate system;
establishing a grid comprising the laser point cloud in the XOY plane;
and dividing the grid into a plurality of grids according to the set grid size, and removing non-ground points from the laser point cloud through multi-scale moving surface filtering based on the elevation values of all points in each grid to obtain the ground point cloud.
Optionally, the dividing the grid into a plurality of grids according to the set grid size, and based on the elevation values of each point in each grid, removing non-ground points from the laser point cloud through multi-scale moving surface filtering to obtain a ground point cloud includes:
step S11, taking the set minimum grid size as the current grid size;
step S12, dividing the grid into a plurality of grids according to the current grid size;
step S13, determining a grid where each point in the laser point cloud is located according to the X coordinate and the Y coordinate of the laser point cloud and the current grid size;
s14, establishing a moving surface equation according to the X coordinate, the Y coordinate and the Z coordinate of a plurality of points in each grid, and calculating the elevation value of each grid according to the moving surface equation;
s15, taking the Z coordinates of each point in each grid as the elevation value of each point, determining the point with the elevation value larger than that of the grid in each grid as a non-ground point, and removing the non-ground point;
step S16, judging whether the current grid size is smaller than the set maximum grid size;
step S17, if yes, updating the current grid size by the set increasing step size;
repeating the steps S12-S17 until the current grid size is not less than the set maximum grid size.
Optionally, the converting the ground point cloud into a binary image based on a gray value includes:
projecting the ground point cloud to an XOY plane of a WGS84 space rectangular coordinate system;
establishing a grid comprising the ground point cloud on the XOY plane, and segmenting the grid into a plurality of grids according to a specified grid size;
acquiring the gray value of each ground point in each grid, and calculating the average value of the gray values of each ground point in each grid as the gray value of each grid;
comparing the gray value of each grid with a preset gray threshold value;
if the gray value of the grid is smaller than the preset gray threshold value, the binarization gray value of the grid is made to be equal to 0, otherwise, the binarization gray value of the grid is made to be equal to 1, and the binarization gray value of each grid is obtained;
and generating a binary image according to the binary gray value of each grid.
Optionally, the performing region growing clustering on the binary image to generate at least one clustering region includes:
taking the grid with the binarization gray value of 1 in the binary image as a seed grid;
selecting one grid from the seed grids as an initial grid, traversing and searching all the seed grids by a region growing method, and clustering the searched seed grids; when the seed grids are clustered, clustering all the seed grids positioned in the eight neighborhoods of the seed grids into the same category for any seed grid so as to classify all the seed grids into at least one category;
and forming a clustering region by using the seed grids of the same category to obtain at least one clustering region.
Optionally, the fitting the ground points for generating a lane line to generate a lane line includes:
dividing the WGS84 space rectangular coordinate system into a plurality of voxels according to the specified voxel size;
assigning the ground points for generating lane lines to each of the voxels according to coordinates of the ground points for generating lane lines;
obtaining Euclidean distance between each ground point for generating the lane line and the center of the voxel where the ground point is located;
selecting the ground point with the minimum Euclidean distance in each voxel as a voxel characteristic point;
calculating the difference value between the maximum X coordinate and the minimum X coordinate among the voxel characteristic points to obtain a first difference value, and calculating the difference value between the maximum Y coordinate and the minimum Y coordinate among the voxel characteristic points to obtain a second difference value;
comparing the first difference value with the second difference value, and selecting the coordinate axis direction corresponding to the larger difference value as the main direction;
dividing the voxel characteristic points into a plurality of groups according to a first preset length along the main direction;
and performing fitting calculation on the voxel characteristic points in each group to obtain fitting parameters, and generating a lane line according to the fitting parameters.
Optionally, after the ground points for generating the lane line are fitted to generate the lane line, the method further includes:
calculating the Euclidean distance between two opposite end points of every two adjacent lane lines along the main direction;
judging whether the Euclidean distance is smaller than a second preset length or not;
if so, obtaining point coordinates between the two adjacent lane lines through interpolation according to the coordinates of the two opposite end points of the two adjacent lane lines;
and connecting the two adjacent lane lines according to the point coordinates.
Optionally, after the ground points for generating the lane line are fitted to generate the lane line, the method further includes:
and smoothing each lane line by a weighted moving average algorithm.
Another aspect of the present invention provides an electronic device, comprising:
a processor; and
a memory storing computer program code;
the computer program code, when executed by the processor, causes the electronic device to perform a lane line extraction method of a laser point cloud according to any of the above.
According to the lane line extraction method for laser point clouds provided by the embodiments of the invention, ground points are extracted from the laser point cloud based on the elevation values of its points, which reduces the misidentification of highly reflective ground-object regions as lane lines; region growing on a binary image generated from gray values facilitates locating, tracking and clustering the lane lines; shape descriptors of the clustering regions are obtained through principal component analysis and used to eliminate non-lane-line clusters, so that the ground points used to generate lane lines are extracted and other high-reflectivity markings on the ground are conveniently excluded; finally, the extracted ground points are fitted to generate the lane lines. The method thus automatically and accurately extracts lane lines from massive point cloud data, with high processing efficiency and high extraction precision.
Further, after the lane lines are obtained by fitting, it can be determined whether any lane lines were left unextracted; if so, they are recovered by interpolation, further improving the accuracy and integrity of the lane lines extracted from the laser point cloud.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
The above and other objects, advantages and features of the present invention will become more apparent to those skilled in the art from the following detailed description of specific embodiments thereof, taken in conjunction with the accompanying drawings.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a schematic flow chart illustrating a lane line extraction method of a laser point cloud according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating the steps of removing non-ground points from a laser point cloud by multi-scale moving surface filtering to obtain a ground point cloud according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating the effect of removing non-ground points from a laser point cloud to obtain a ground point cloud through multi-scale moving surface filtering according to an embodiment of the present invention;
FIG. 4 shows a flowchart of the steps of converting a ground point cloud to a grayscale value based binary image according to an embodiment of the invention;
FIG. 5 is a diagram illustrating one embodiment of region growing clustering of binary images to generate at least one clustered region;
FIG. 6 is a schematic flow chart showing the steps of analyzing the principal components of each clustering region to obtain a shape descriptor of each clustering region, and extracting ground points for generating a lane line from the ground points of each clustering region based on the shape descriptor according to an embodiment of the present invention;
FIG. 7 is a flowchart illustrating the steps of fitting ground points used to generate a lane line in accordance with one embodiment of the present invention;
FIG. 8 illustrates a schematic diagram of the effect of assigning ground points used to generate a lane line to each voxel, according to an embodiment of the invention;
fig. 9 is a flowchart illustrating a step of finding a lane line that is not extracted after a lane line is generated by fitting ground points for generating a lane line according to an embodiment of the present invention;
fig. 10 is a schematic diagram illustrating a scenario of determining an unextracted lane line according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The embodiment of the invention provides a lane line extraction method of laser point cloud. Fig. 1 is a schematic flowchart illustrating a method for extracting a lane line by using a laser point cloud according to an embodiment of the present invention. Referring to fig. 1, the method may include at least the following steps S102 to S112.
Step S102, laser point cloud including a lane is obtained.
And step S104, removing non-ground points from the laser point cloud based on the elevation values of all the points in the laser point cloud to obtain ground point cloud.
And step S106, converting the ground point cloud into a binary image based on a gray value.
And step S108, performing region growing clustering on the binary image to generate at least one clustering region.
And step S110, performing principal component analysis on each clustering region to obtain a shape descriptor of each clustering region, and extracting ground points for generating lane lines from the ground points of each clustering region based on the shape descriptor.
And step S112, fitting the ground points for generating the lane line to generate the lane line so as to manufacture the high-precision map.
According to the lane line extraction method for laser point clouds provided by the embodiments of the invention, ground points are extracted from the laser point cloud based on the elevation values of its points, which reduces the misidentification of highly reflective ground-object regions as lane lines; region growing on a binary image generated from gray values facilitates locating, tracking and clustering the lane lines; shape descriptors of the clustering regions are obtained through principal component analysis and used to eliminate non-lane-line clusters, so that the ground points used to generate lane lines are extracted and other high-reflectivity markings on the ground are conveniently excluded; finally, the extracted ground points are fitted to generate the lane lines. The method thus automatically and accurately extracts lane lines from massive point cloud data, with high processing efficiency and high extraction precision.
In the above step S102, the acquired laser point cloud may be an original point cloud acquired by a laser radar.
In step S104 above, the acquired laser point cloud may be projected onto the XOY plane of the WGS84 space rectangular coordinate system, and a grid containing the projected laser point cloud is created in that plane. The grid is then divided into a plurality of grids according to the set grid size, and non-ground points are removed from the laser point cloud by multi-scale moving surface filtering based on the elevation values of the points in each grid, yielding the ground point cloud. The grids referred to herein are squares of equal size.
The WGS84 space rectangular coordinate system is a right-handed coordinate system whose origin is the earth's centroid, whose Z axis points from the centroid to the Conventional Terrestrial Pole (CTP) defined by BIH 1984.0 (the Bureau International de l'Heure), whose X axis points to the intersection of the BIH 1984.0 zero meridian plane and the CTP equator, and whose Y axis is perpendicular to both the Z axis and the X axis. The XOY plane is the plane determined by the earth's centroid, the X axis and the Y axis.
Each point in the laser point cloud includes three-dimensional coordinates, i.e., an X coordinate, a Y coordinate, and a Z coordinate. And projecting the laser point cloud to an XOY plane of a WGS84 space rectangular coordinate system according to the X coordinate and the Y coordinate of each point, wherein the Z coordinate of each point is the elevation value of the point.
In order to reduce the false recognition of the highlight areas of the ground objects as the lane lines and improve the extraction precision of the lane lines, the grid is divided into a plurality of grids according to the set grid size, and non-ground points are removed from the laser point cloud through multi-scale moving surface filtering based on the elevation values of all the points in all the grids to obtain the ground point cloud. Fig. 2 is a schematic flow chart illustrating a step of removing non-ground points from a laser point cloud to obtain a ground point cloud through multi-scale moving surface filtering according to an embodiment of the present invention. Referring to fig. 2, this step may include the following steps S11 to S17.
In step S11, the set minimum grid size is used as the current grid size.
The set minimum grid size may be calculated according to the following equation (1):
CSize_min = f(ρ)  (1)  [the original equation image is not reproducible]

In formula (1), CSize_min represents the set minimum grid size and ρ represents the density of the laser point cloud; the minimum grid size is thus chosen as a function of the point density.
In step S12, the mesh is segmented into a plurality of grids according to the current grid size.
And step S13, determining the grids where the points in the laser point cloud are located according to the X coordinate and the Y coordinate of the laser point cloud and the current grid size.
In this step, the row number Row_i and the column number Col_i of each point i of the laser point cloud within the grid of the XOY plane are calculated according to the following formula (2); according to Row_i and Col_i it can be determined that point i lies in the grid at row Row_i, column Col_i:

Row_i = floor((Y_i - Y_min) / CSize_current), Col_i = floor((X_i - X_min) / CSize_current)  (2)

In formula (2), X_i and Y_i are the X and Y coordinates of point i, X_min and Y_min are the minimum X coordinate and the minimum Y coordinate over all points in the laser point cloud, CSize_current denotes the current grid size, and floor rounds its input down to the nearest integer.
And step S14, establishing a moving surface equation according to the X coordinate, the Y coordinate and the Z coordinate of a plurality of points in each grid, and calculating the elevation value of each grid according to the moving surface equation.
The relationship between the moving surface equation and the elevation value of each grid is shown in the following formula (3):

Z_(Row_i, Col_i) = f(X, Y)  (3)

In formula (3), Z_(Row_i, Col_i) denotes the elevation value of the grid whose row and column numbers are Row_i and Col_i, and f(X, Y) is the moving surface equation of the grid whose row and column numbers are Row_i and Col_i.
Specifically, for each grid, a moving surface equation shown in the following formula (4) is established with n points in the grid:
Z = f(X, Y) = aX² + bXY + cY² + dX + eY + g  (4)
In formula (4), X, Y and Z represent the X, Y and Z coordinates of each of the n points in the grid. a, b, c, d, e and g are the coefficients of the moving surface equation and can be determined from the three-dimensional coordinates (i.e., the X, Y and Z coordinates) of the n points in the grid, for example by least squares; since there are six coefficients, at least six points are required.
After determining each coefficient in the equation f (X, Y) of the moving curved surface, the X coordinate and the Y coordinate of the center of the grid can be substituted into the formula f (X, Y), and the elevation value of the center of the grid is obtained as the elevation value of the grid.
And step S15, taking the Z coordinates of each point in each grid as the elevation value of each point, determining the point with the elevation value larger than that of the grid in each grid as a non-ground point, and removing the non-ground point.
And traversing each point in each grid, and removing the points with the elevation values larger than that of the grid as non-ground points (also called ground object points) to avoid the high-brightness areas of the ground objects from being wrongly identified as lane lines.
In step S16, it is determined whether the current grid size is smaller than the set maximum grid size. If yes, go to step S17.
The set maximum grid size may be determined according to actual conditions such as the range size of the grid, the density of the laser point cloud, and the like in practical applications, which is not specifically limited in the present invention.
Step S17, the current grid size is updated with the set growth step size. Then, returning to step S12, the loop repeatedly executes steps S12 to S17.
In a specific implementation, the current grid size may be updated according to the following equation (5):
CSize_current = CSize_current + CSize_step  (5)

In formula (5), CSize_step is the set growth step size; the updated grid size is assigned back to CSize_current, thereby updating the current grid size.
The set increasing step length can be determined according to actual conditions such as the range size of the grid, the density of the laser point cloud and the like in practical application, and the method is not particularly limited in this respect.
In addition, when it is determined in step S16 that the current grid size is not smaller than the set maximum grid size (e.g., the current grid size is equal to or larger than the set maximum grid size), the loop ends and the method proceeds to step S106.
The following illustrates, with reference to fig. 3, the effect of removing non-ground points from the laser point cloud by multi-scale moving surface filtering in step S104. As shown in fig. 3, the original laser point cloud shown in 3(a) is projected onto the XOY plane, the grid is divided into a plurality of grids according to the current grid size, and the projected laser point cloud is distributed into the grids as shown in 3(b). As shown in 3(c), a grid (e.g., the grid at row 1, column 1) is selected, and a moving surface, shown in 3(d), is fitted to a plurality of points in that grid. Thereafter, as shown in 3(e), the points in the grid whose elevation is greater than the elevation of the surface (i.e., the grid elevation found from the moving surface equation) are eliminated as non-ground points. Each grid in 3(c) is traversed and its non-ground points removed, giving the processed laser point cloud shown in 3(f). The current grid size is then updated by the set growth step to obtain a new current grid size, the grid is divided into a plurality of new grids accordingly, and the processed laser point cloud is distributed into the new grids as shown in 3(g). Steps 3(c) to 3(g) are repeated, eliminating non-ground points in the new grids, until the new current grid size reaches the set maximum grid size, finally yielding the ground point cloud shown in 3(h).
Extracting ground points from the laser point cloud by multi-scale moving surface filtering based on the elevation values of its points reduces the misidentification of highly reflective ground-object regions as lane lines and improves the extraction precision of the lane lines.
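To make the procedure concrete, the following Python sketch implements steps S11 to S17 as described above. It is illustrative only: NumPy, the least-squares solver, the function names and the row/column convention are all assumptions, since the patent prescribes no implementation, and the minimum grid size of formula (1) is left to the caller.

```python
import numpy as np

def grid_indices(xy, mins, cell_size):
    # Formula (2): column index from X, row index from Y (a convention assumed here).
    cols = np.floor((xy[:, 0] - mins[0]) / cell_size).astype(int)
    rows = np.floor((xy[:, 1] - mins[1]) / cell_size).astype(int)
    return rows, cols

def grid_elevation(cell_points, centre_xy):
    # Formula (4): fit Z = aX^2 + bXY + cY^2 + dX + eY + g by least squares
    # (at least 6 points needed), then evaluate at the grid centre (formula (3)).
    X, Y, Z = cell_points[:, 0], cell_points[:, 1], cell_points[:, 2]
    A = np.column_stack([X**2, X * Y, Y**2, X, Y, np.ones_like(X)])
    coeffs, *_ = np.linalg.lstsq(A, Z, rcond=None)
    cx, cy = centre_xy
    return np.array([cx**2, cx * cy, cy**2, cx, cy, 1.0]) @ coeffs

def remove_non_ground(points, cs_min, cs_max, cs_step):
    # Steps S11-S17: filter at successively larger grid sizes.
    cs = cs_min                                             # step S11
    while True:
        mins = points[:, :2].min(axis=0)
        rows, cols = grid_indices(points[:, :2], mins, cs)  # steps S12-S13
        keep = np.ones(len(points), dtype=bool)
        cells = {}
        for i, key in enumerate(zip(rows, cols)):
            cells.setdefault(key, []).append(i)
        for (r, c), idx in cells.items():
            if len(idx) < 6:                                # too few points for the quadric
                continue
            centre = mins + (np.array([c, r]) + 0.5) * cs
            z_cell = grid_elevation(points[idx], centre)    # step S14
            sel = np.asarray(idx)
            keep[sel[points[sel, 2] > z_cell]] = False      # step S15: drop non-ground points
        points = points[keep]
        if cs >= cs_max:                                    # step S16
            return points                                   # ground point cloud
        cs += cs_step                                       # step S17, formula (5)
```

In practice a small tolerance above the fitted surface is usually kept so that ground points perturbed by noise are not discarded; the strict comparison above follows the text literally.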
In step S106 above, the obtained ground point cloud is converted into a binary image based on the gray-scale value. FIG. 4 shows a flowchart of the steps of converting the ground point cloud into a grayscale value-based binary image according to an embodiment of the invention. Referring to fig. 4, the step of converting the ground point cloud into a binary image based on gray values may include the following steps S402 to S412.
Step S402, projecting the ground point cloud to an XOY plane of a WGS84 space rectangular coordinate system.
Step S404, establishing a grid comprising ground point clouds on an XOY plane, and dividing the grid into a plurality of grids according to the specified grid size.
After the ground point cloud is projected onto the XOY plane and the established grid is segmented, the mapping relationship between points and grids can be established according to the following formula (6), and the grid in which each point of the ground point cloud lies is determined from this mapping:

Row_j = floor((Y_j - Y'_min) / CSize), Col_j = floor((X_j - X'_min) / CSize)  (6)

In formula (6), Row_j and Col_j are the row number and the column number of ground point j within the grid of the XOY plane; according to Row_j and Col_j it can be determined that ground point j lies in the grid at row Row_j, column Col_j. X_j and Y_j are the X and Y coordinates of ground point j. X'_min and Y'_min are the minimum X coordinate and the minimum Y coordinate over all points in the ground point cloud. CSize is the grid size, which in practice can be chosen according to the density and extent of the ground point cloud.
In step S406, the gray values of the respective ground points in the respective grids are obtained, and the average value of the gray values of the respective ground points in the respective grids is calculated as the gray value of each grid.
The gray value of each grid is calculated according to the following formula (7):
I_(Row_j, Col_j) = (Σ I_t) / N  (7)

In formula (7), I_(Row_j, Col_j) is the gray value of the grid at row Row_j, column Col_j, Σ I_t is the sum of the gray values of all ground points in that grid, and N is the number of ground points in the grid.
In addition, if there is no ground point in a certain grid, the grayscale value of the grid is set to 0.
Step S408, comparing the gray value of each grid with a preset gray threshold.
The preset gray threshold can be set through engineering debugging in practical application.
Step S410, if the gray value of the grid is smaller than the preset gray threshold, making the binary gray value of the grid equal to 0, otherwise, making the binary gray value of the grid equal to 1, and obtaining the binary gray value of each grid.
In step S412, a binary image is generated from the binarized gradation value of each grid.
Because lane lines reflect strongly and the road surface reflects weakly, comparing the gray value of each grid with a preset gray threshold converts the gray image of the ground point cloud into a binary image, in which the lane lines can be identified conveniently and quickly.
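As a continuation of the sketch above (same assumptions; in addition, the column holding the gray value and the threshold value are illustrative), steps S402 to S412 reduce to a rasterisation followed by a threshold:

```python
import numpy as np

def to_binary_image(ground, cell_size, gray_threshold):
    # Steps S402-S412: average the gray values per grid, then binarise.
    xy, gray = ground[:, :2], ground[:, 3]        # assumes column 3 holds the gray value
    mins = xy.min(axis=0)
    cols = np.floor((xy[:, 0] - mins[0]) / cell_size).astype(int)
    rows = np.floor((xy[:, 1] - mins[1]) / cell_size).astype(int)
    img_sum = np.zeros((rows.max() + 1, cols.max() + 1))
    img_cnt = np.zeros_like(img_sum)
    np.add.at(img_sum, (rows, cols), gray)        # formula (7): sum of gray values per grid
    np.add.at(img_cnt, (rows, cols), 1)
    mean = np.divide(img_sum, img_cnt, out=np.zeros_like(img_sum), where=img_cnt > 0)
    return (mean >= gray_threshold).astype(np.uint8)   # grids without points stay 0
```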
In step S108, at least one cluster region is generated by performing region growing clustering on the binary image. Specifically, a grid of which the binarized gradation value is 1 in the binary image may be used as the seed grid. And selecting one grid from the seed grids as an initial grid, traversing and searching all the seed grids by a region growing method, and clustering the searched seed grids. When the seed grids are clustered, all the seed grids in the eight neighborhoods of the seed grids are clustered into the same category for any seed grid so as to classify all the seed grids into at least one category. And forming a clustering region by using the seed grids of the same category to obtain at least one clustering region.
Step S108 will be described in detail with reference to the specific example shown in fig. 5. In this specific example, the depth search-based region growing clustering is performed on the binary image shown in (5 a), and the specific steps are as follows:
(1) The grids whose binarized gray value is 1 in the binary image shown in 5(a) are used as seed grids. As shown in 5(b), one of the seed grids is selected as the initial starting grid; specifically, the grids may be scanned row by row and column by column starting from the first row, so that the first seed grid found becomes the initial starting grid. Thereafter, as shown in 5(c), the eight-neighborhood of the starting grid is searched for seed grids (i.e., grids whose binarized gray value is 1). If one is found, the found seed grid is considered to belong to the same category as the starting grid, as shown in 5(d). If only one seed grid is found in the eight-neighborhood of the starting grid, that seed grid becomes the new starting grid, and the eight-neighborhood search shown in 5(c) continues until a search failure condition is met. The search failure condition may be one of the following: none of the grids in the eight-neighborhood of the starting grid has a binarized gray value of 1 (i.e., no seed grid exists there), the boundary of the binary image has been reached, or all grids in the eight-neighborhood with a binarized gray value of 1 have already been processed. If several seed grids are found in the eight-neighborhood of the starting grid, one of them is selected as the new starting grid and the eight-neighborhood search of 5(c) continues until a search failure condition is met; the search then returns to the earlier starting grid, selects an unprocessed seed grid among those found as the new starting grid, and continues in the same way until all seed grids in the eight-neighborhood of that starting grid have been processed. If a seed grid that has already been processed is encountered in some direction of the eight-neighborhood, it is skipped and the search proceeds in the other directions. This is repeated until the eight-neighborhood of every seed grid reached in this search contains no unprocessed grid with a binarized gray value of 1, at which point the stop condition is met and the search ends. The seed grid used as the initial starting grid and all seed grids found in this search are labeled as the first category (for example, with the identification value 1), giving the first clustering region, as shown in 5(e).
(2) It is checked whether the binary image still contains seed grids that have not been searched (i.e., grids whose binarized gray value is 1). If so, one of the unsearched seed grids is used as the initial starting grid of the next search, as shown in 5(f).
(3) The next search is carried out in the manner described in step (1) until it stops. The seed grids found in this search are labeled as the second category (for example, with the identification value 2), giving the second clustering region.
(4) Steps (2) and (3) are repeated until all seed grids in the binary image have been searched (i.e., all seed grids have been traversed); clustering then ends, yielding at least one clustering region. In this example, region growing clustering produces the three clustering regions (the first, second and third clustering regions) shown in 5(g).
By selecting lane line seed grids and growing regions on the binary image generated from gray values, the lane lines become easy to locate, track and cluster.
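The eight-neighborhood growth just described is, in effect, 8-connected component labeling. A minimal sketch follows (breadth-first rather than the depth-first search of the example; the resulting clusters are the same, only the visiting order differs):

```python
from collections import deque
import numpy as np

def region_grow(binary):
    # Step S108: cluster seed grids (value 1) over their eight-neighborhoods.
    labels = np.zeros_like(binary, dtype=int)
    n_rows, n_cols = binary.shape
    current = 0
    for r0 in range(n_rows):
        for c0 in range(n_cols):
            if binary[r0, c0] == 1 and labels[r0, c0] == 0:
                current += 1                       # new category / clustering region
                labels[r0, c0] = current
                queue = deque([(r0, c0)])
                while queue:
                    r, c = queue.popleft()
                    for dr in (-1, 0, 1):          # eight-neighborhood search
                        for dc in (-1, 0, 1):
                            rr, cc = r + dr, c + dc
                            if (0 <= rr < n_rows and 0 <= cc < n_cols
                                    and binary[rr, cc] == 1 and labels[rr, cc] == 0):
                                labels[rr, cc] = current
                                queue.append((rr, cc))
    return labels, current                         # identification values and cluster count
```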
In step S110, the principal component analysis is performed on each clustering region, the shape descriptor of each clustering region is obtained through calculation, and the clustering regions not belonging to the lane line are removed according to the shape descriptor, so as to obtain the ground points for generating the lane line.
Principal Component Analysis (PCA) is a statistical method that transforms a set of variables that may have a correlation into a set of linearly uncorrelated variables by orthogonal transformation, and the transformed set of variables is called Principal components.
Fig. 6 is a flowchart illustrating a step of performing principal component analysis on each clustering region to obtain a shape descriptor of each clustering region, and extracting ground points for generating a lane line from the ground points of each clustering region based on the shape descriptor according to an embodiment of the present invention. Referring to fig. 6, step S110 may be further implemented as the following steps S602 to S608.
Step S602, performing principal component analysis on the ground points included in each clustering region to obtain a feature value of each clustering region.
Specifically, grids included in each clustering region are determined according to the identification values of the grids, and then ground points included in each clustering region are obtained according to the mapping relation between the ground points and the grids.
Taking the clustering region shown in fig. 5 as an example, for the first clustering region, for example, all grids with identification values of 1 are found, and the ground points in all grids with identification values of 1 are obtained according to the mapping relationship between the ground points and the grids, and are taken as the ground points included in the first clustering region.
After the ground points contained in each clustering region are obtained, principal component analysis is performed on them to obtain the characteristic values [λ1, λ2, λ3] of each clustering region.
Step S604, calculating the shape descriptor of each clustering region according to the characteristic value, wherein the shape descriptor comprises a linear descriptor, a plane descriptor and a spherical descriptor.
Specifically, the shape descriptor is calculated according to the following formula (8):
D_1D = (λ1 - λ2) / λ1, D_2D = (λ2 - λ3) / λ1, D_3D = λ3 / λ1  (8)

In formula (8), λ1, λ2 and λ3 (with λ1 ≥ λ2 ≥ λ3) are the characteristic values of the clustering region. D_1D is the linear descriptor of the clustering region, indicating that the shape of the region is linear. D_2D is the plane descriptor of the clustering region, indicating that the shape is a two-dimensional plane. D_3D is the spherical descriptor of the clustering region, indicating that the shape is a three-dimensional sphere.
Step S606, determine whether the linear descriptor of each clustering region is larger than the plane descriptor and the spherical descriptor of the clustering region. If yes, go to step S608.
Step S608, determining the ground points included in the cluster region as the ground points for generating the lane lines.
Since a lane line is an elongated rectangle, for a clustering region belonging to a lane line the linear descriptor should be the largest of the shape descriptors computed from the region's characteristic values (i.e., larger than its plane and spherical descriptors). Therefore, for each clustering region, the maximum V = max(D_1D, D_2D, D_3D) of its linear, plane and spherical descriptors is found. If the maximum is the region's linear descriptor, the region is determined to belong to a lane line, and the ground points it contains can be used to subsequently generate the lane line. If the maximum is not the linear descriptor, for example if it is the plane descriptor, the region is determined not to belong to a lane line, and the ground points in all its grids are removed and not used for subsequently generating the lane line.
By eliminating non-lane line clusters (i.e. cluster regions not belonging to lane lines) based on the shape descriptors and extracting ground points for generating the lane lines, other highlighted marks (such as characters, left turn marks, deceleration marks and the like) on the ground are conveniently eliminated, and the extraction precision of the lane lines is improved.
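The following sketch applies steps S602 to S608 to one clustering region. The descriptor formulas are the standard PCA dimensionality features and are assumed to correspond to formula (8); the eigenvalues are taken from the covariance of the region's ground points:

```python
import numpy as np

def is_lane_cluster(points_xyz):
    # Steps S602-S608: PCA on the cluster's ground points, then keep the
    # cluster only if the linear descriptor dominates.
    lam = np.sort(np.linalg.eigvalsh(np.cov(points_xyz.T)))[::-1]  # λ1 >= λ2 >= λ3
    d_1d = (lam[0] - lam[1]) / lam[0]   # linear descriptor
    d_2d = (lam[1] - lam[2]) / lam[0]   # plane descriptor
    d_3d = lam[2] / lam[0]              # spherical descriptor
    return d_1d > d_2d and d_1d > d_3d
```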
In step S112, the obtained ground points for generating the lane lines (hereinafter simply referred to as lane line points) are fitted to generate the lane lines. Specifically, the fitting parameters may be obtained by fitting and calculating lane line points through RANSAC (Random Sample Consensus), and the extracted lane lines may be obtained according to the fitting parameters.
Because some lane lines are long, fitting all of a lane line's points at once easily produces a large fitting error, and the error grows gradually along the direction of vehicle travel. To address this, a piecewise fitting method may be adopted for the lane line points.
In a preferred embodiment, as shown in fig. 7, the step of fitting the ground points for generating the lane line to generate the lane line may further include the following steps S702 to S716.
In step S702, the WGS84 space rectangular coordinate system is divided into a plurality of voxels according to a predetermined voxel size.
The specified voxel size may include the length-width dimension of the voxel (the length and width of the voxel being equal) and the height dimension of the voxel.
In step S704, the ground points for generating the lane lines are assigned to each voxel in accordance with the coordinates of the ground points for generating the lane lines.
Referring to fig. 8, the lane line points may be assigned to each voxel according to the following equation (9):
Row_m = floor((Y_k - Y''_min) / VSize), Col_m = floor((X_k - X''_min) / VSize), H_m = floor((Z_k - Z''_min) / VSize_z)  (9)

In formula (9), Row_m, Col_m and H_m are the row number, column number and layer number (i.e., the index along the height direction) of the voxel to which lane line point k is assigned. X_k, Y_k and Z_k are the X, Y and Z coordinates of lane line point k. X''_min, Y''_min and Z''_min are the minimum X, Y and Z coordinates over all lane line points. VSize is the horizontal dimension of the voxel (the length and width of a voxel being equal in this example), and VSize_z is the height dimension of the voxel.
Step S706, obtaining the euclidean distance between the ground point for generating each lane line and the center of the voxel in which the ground point is located.
Step S708, selecting the ground point with the minimum Euclidean distance in each voxel as the voxel characteristic point.
And selecting the lane line point with the minimum Euclidean distance from the center of each voxel as a voxel characteristic point to represent all the lane line points in the voxel and participate in subsequent fitting operation, wherein other lane line points except the voxel characteristic point do not participate in fitting, so that the fitting operation amount can be greatly reduced on the premise of ensuring the fitting accuracy.
If there is no lane line point in a certain voxel, the voxel does not participate in the calculation of the present step.
Step S710, calculating the difference between the maximum X coordinate and the minimum X coordinate among the voxel feature points to obtain a first difference, and calculating the difference between the maximum Y coordinate and the minimum Y coordinate among the voxel feature points to obtain a second difference.
Step S712, the magnitude of the first difference and the magnitude of the second difference are compared, and the coordinate axis direction corresponding to the larger difference between the first difference and the second difference is selected as the main direction.
Specifically, if the first difference (between the X coordinates) is larger, the lane line points are spread over a greater range along the X axis, so the X-axis direction is taken as the main direction. If the second difference (between the Y coordinates) is larger, the points are spread over a greater range along the Y axis, so the Y-axis direction is taken as the main direction.
Step S714, dividing the voxel feature points into a plurality of groups according to a first preset length along the principal direction.
The first preset length may be set according to the actual application requirement, and may be set to 50m, for example.
For example, if the principal direction is the X-axis direction, the voxel feature points within each first preset length are divided into one group along the X-axis direction, thereby obtaining a plurality of groups of voxel feature points.
And step S716, performing fitting calculation on the voxel characteristic points in each group to obtain fitting parameters, and generating a lane line according to the fitting parameters.
In this step, RANSAC fitting calculation can be performed on each group of voxel characteristic points to obtain optimal fitting parameters, and then a lane line is generated according to the obtained optimal fitting parameters.
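Steps S702 to S716 can be sketched as follows. The voxel sizes, the RANSAC iteration count and the inlier tolerance are illustrative assumptions; the 50 m segment length follows the example in the text, and the RANSAC here is a minimal hand-rolled two-point line fit rather than any particular library routine:

```python
import numpy as np

def voxel_feature_points(pts, vsize, vsize_z):
    # Steps S702-S708: assign points to voxels (formula (9)) and keep, per
    # voxel, the point nearest the voxel centre.
    mins = pts.min(axis=0)
    scale = np.array([vsize, vsize, vsize_z])
    idx = np.floor((pts - mins) / scale).astype(int)
    best = {}
    for p, key in zip(pts, map(tuple, idx)):
        centre = mins + (np.array(key) + 0.5) * scale
        d = np.linalg.norm(p - centre)            # Euclidean distance to voxel centre
        if key not in best or d < best[key][0]:
            best[key] = (d, p)
    return np.array([p for _, p in best.values()])

def fit_lane_segments(feat, seg_len=50.0, n_iter=100, tol=0.15, seed=0):
    # Steps S710-S716: choose the main direction by extent, slice the
    # feature points into seg_len groups, RANSAC-fit a line per group.
    rng = np.random.default_rng(seed)
    spans = np.ptp(feat[:, :2], axis=0)           # first and second differences
    main = int(spans[1] > spans[0])               # 0 -> X axis, 1 -> Y axis
    other = 1 - main
    fits = []
    lo, hi = feat[:, main].min(), feat[:, main].max()
    while lo <= hi:
        m = (feat[:, main] >= lo) & (feat[:, main] < lo + seg_len)
        t, u = feat[m, main], feat[m, other]
        if len(t) >= 2:
            best, best_inl = None, -1
            for _ in range(n_iter):               # hand-rolled RANSAC loop
                i, j = rng.choice(len(t), size=2, replace=False)
                if t[i] == t[j]:
                    continue
                a = (u[j] - u[i]) / (t[j] - t[i])
                b = u[i] - a * t[i]
                inl = int(np.sum(np.abs(u - (a * t + b)) < tol))
                if inl > best_inl:
                    best, best_inl = (a, b), inl
            if best is not None:
                fits.append((lo, main, best))     # fitting parameters per group
        lo += seg_len
    return fits
```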
In some cases, certain portions of a lane line may not be extracted (i.e., the lane line is missing) due to vehicle occlusion or lane line wear, affecting the accuracy and integrity of the lane line. To this end, in one embodiment, after the ground points used to generate the lane lines are fitted to generate the lane lines, lane lines that were not extracted can also be recovered. Fig. 9 is a flowchart illustrating the step of finding unextracted lane lines after the lane lines are generated by fitting the ground points used for generating them. Referring to fig. 9, this step may include the following steps S902 to S908.
Step S902, calculating the euclidean distance between two opposite end points of each two adjacent lane lines along the main direction.
Step S904, determining whether the euclidean distance is smaller than a second preset length. If yes, go on to step S906.
The second preset length may be set according to the actual application requirement, for example, the second preset length may be set to 20 m.
Step S906, obtaining point coordinates between the two adjacent lane lines by interpolation according to the coordinates of the two opposite end points of the two adjacent lane lines.
Step S908 is performed to connect the two adjacent lane lines according to the point coordinates obtained by interpolation.
How to find the unextracted lane line is specifically described below by way of example with reference to the scene diagram of fig. 10.
In fig. 10, the arrows indicate the direction in which the lane lines are fitted (i.e., the main direction), and L1, L2, L3 and L4 are the lane lines obtained after fitting (referred to here as initial lane lines). First, as shown in 10(a), the adjacent lane lines L1 and L2 are found along the main direction, and the Euclidean distance between their facing end points (points A and B in fig. 10) is calculated. It is determined whether this distance is less than the second preset length (e.g., 20 m). Since the distance between points A and B is found to be smaller than the second preset length, indicating that a missing lane line (i.e., one not extracted) lies between point A of L1 and point B of L2, point coordinates between A and B are calculated by interpolation from the coordinates of A and B. Lane lines L1 and L2 are then connected according to the interpolated point coordinates, merging them into one; the result, after the missing stretch between L1 and L2 is filled in, is the lane line L' shown in 10(b). Next, the adjacent lane lines L3 and L4 are found along the main direction, and the Euclidean distance between their facing end points (points C and D in fig. 10) is calculated. Since the distance between C and D is found to be greater than the second preset length, L3 and L4 are independent lane lines with no missing lane line between them, so no interpolation is needed between point C of L3 and point D of L4. Finally, the lane lines shown in 10(c) are obtained.
In this embodiment, after the lane lines are obtained by fitting, it can be determined whether any lane line was left unextracted, for example because of vehicle occlusion or lane line wear; if so, the missing lane line is obtained by interpolation, further improving the accuracy and integrity of the lane lines extracted from the laser point cloud.
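A sketch of the interpolation in steps S902 to S908 for one pair of facing end points; the 20 m threshold follows the example in the text, while the interpolated point spacing is an assumption:

```python
import numpy as np

def bridge_gap(end_a, end_b, max_gap=20.0, spacing=0.5):
    # Steps S902-S908: if the facing end points of two adjacent lane lines
    # are closer than max_gap, interpolate points between them.
    a, b = np.asarray(end_a, float), np.asarray(end_b, float)
    gap = np.linalg.norm(b - a)                 # Euclidean distance between end points
    if gap >= max_gap:
        return None                             # independent lane lines, no bridging
    n = max(int(gap / spacing), 2)
    ts = np.linspace(0.0, 1.0, n + 1)[1:-1]
    return a + ts[:, None] * (b - a)            # point coordinates connecting the two lines
```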
In one embodiment, after the ground points used for generating the lane lines are fitted to generate the lane lines, each lane line may be smoothed by a weighted moving average algorithm so that the extracted lane lines are more accurate. In particular, the fitted lane lines may not be smooth because piecewise fitting is used, and smoothing each fitted lane line with a weighted moving average algorithm further improves their accuracy.
Specifically, the lane line generated by fitting may be smoothed by a weighted moving average mathematical model shown by the following equations (10) and (11):
P'_j = Σ_{i=-w}^{w} C_i · P_{j+i}  (10)

Σ_{i=-w}^{w} C_i = 1  (11)

In formulas (10) and (11), P'_j is the three-dimensional coordinate of point P_j of the lane line after smoothing. w is half the size of the sliding window used in the weighted moving average, rounded to an integer. i is the index of the points within the sliding window, ranging from -w to w. C_i is the weight corresponding to each point within the sliding window; the weights sum to 1. P_{j+i} is the three-dimensional coordinate of the (j+i)-th point before smoothing. j is the index of point P_j.
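A sketch of the smoothing defined by formulas (10) and (11). Since the patent's original equation image for the weights does not survive, triangular weights normalised to sum to 1 are assumed for C_i:

```python
import numpy as np

def smooth_lane_line(points, w=2):
    # Formula (10): P'_j = sum over i in [-w, w] of C_i * P_{j+i}.
    c = np.array([w + 1 - abs(i) for i in range(-w, w + 1)], dtype=float)
    c /= c.sum()                                # formula (11): weights sum to 1
    out = points.astype(float)
    for j in range(w, len(points) - w):
        out[j] = c @ points[j - w: j + w + 1]   # weighted moving average over 2w+1 points
    return out                                  # end points without a full window are kept as-is
```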
Based on the same inventive concept, the embodiment of the invention also provides electronic equipment. The electronic device includes:
a processor; and
a memory storing computer program code;
when executed by a processor, the computer program code causes the electronic device to perform the method for lane line extraction of a laser point cloud as described in any one or a combination of the above embodiments.
According to any one or a combination of multiple optional embodiments, the embodiment of the present invention can achieve the following advantages:
According to the lane line extraction method of the laser point cloud provided by the embodiments of the present invention, ground points are extracted from the laser point cloud based on the elevation values of the points in the laser point cloud, which reduces the misidentification of high-brightness regions of ground objects as lane lines; region growing is performed on a binary image generated from gray values, which facilitates locating, tracking, and clustering the lane lines; a shape descriptor of each clustering region is obtained through principal component analysis, non-lane-line clusters are eliminated based on the shape descriptors, and the ground points used for generating the lane lines are extracted, which conveniently excludes other high-brightness markings on the ground; finally, the extracted ground points used for generating the lane lines are fitted to generate the lane lines. The method thus automatically and accurately extracts lane lines from massive point cloud data, with high processing efficiency and high extraction precision.
Further, after the lane lines are obtained through fitting, it can be determined whether any lane line has not been extracted; if so, the missing lane line is obtained through interpolation, further improving the accuracy and integrity of the lane lines extracted from the laser point cloud.
It is clear to those skilled in the art that the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and for the sake of brevity, further description is omitted here.
In addition, the functional units in the embodiments of the present invention may be physically independent of each other, two or more functional units may be integrated together, or all the functional units may be integrated in one processing unit. The integrated functional units may be implemented in the form of hardware, or in the form of software or firmware.
Those of ordinary skill in the art will understand that the integrated functional units, if implemented in software and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions that, when executed, cause a computing device (e.g., a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), and a magnetic or optical disk.
Alternatively, all or part of the steps of the foregoing method embodiments may be implemented by program instructions executed by related hardware (such as a computing device, e.g., a personal computer, a server, or a network device). The program instructions may be stored in a computer-readable storage medium; when they are executed by a processor of the computing device, the computing device executes all or part of the steps of the methods according to the embodiments of the present invention.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments can be modified or some or all of the technical features can be equivalently replaced within the spirit and principle of the present invention; such modifications or substitutions do not depart from the scope of the present invention.

Claims (10)

1. A method for extracting lane lines of laser point clouds is characterized by comprising the following steps:
acquiring a laser point cloud comprising a lane;
removing non-ground points from the laser point cloud based on the elevation values of all the points in the laser point cloud to obtain ground point cloud;
converting the ground point cloud into a binary image based on a gray value;
performing region growing clustering on the binary image to generate at least one clustering region;
performing principal component analysis on each clustering region to obtain a shape descriptor of each clustering region, and extracting ground points for generating lane lines from the ground points of each clustering region based on the shape descriptor;
and fitting the ground points for generating the lane line to generate the lane line so as to manufacture a high-precision map.
2. The lane line extraction method according to claim 1, wherein the performing principal component analysis on each of the clustering regions to obtain a shape descriptor of each of the clustering regions, and extracting ground points for generating a lane line from the ground points of each of the clustering regions based on the shape descriptor comprises:
performing principal component analysis on the ground points contained in each clustering region to obtain eigenvalues of the clustering region;
calculating a shape descriptor of the clustering region according to the eigenvalues, wherein the shape descriptor comprises a linear descriptor, a planar descriptor, and a spherical descriptor;
judging whether the linear descriptor of the clustering region is larger than both the planar descriptor and the spherical descriptor of the clustering region;
and if so, determining the ground points contained in the clustering area as the ground points for generating the lane lines.
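For illustration, the PCA-based screening of claim 2 may be sketched as follows in Python. The eigenvalue-based formulas for the linear, planar, and spherical descriptors below (with λ1 ≥ λ2 ≥ λ3) are the commonly used definitions and are an assumption here, since the claim does not spell them out.

    import numpy as np

    def is_lane_line_cluster(points):
        """Decide whether one clustering region is lane-line-shaped:
        compute the eigenvalues of the covariance of its ground points
        and check that the linear descriptor dominates."""
        centered = points - points.mean(axis=0)
        cov = centered.T @ centered / len(points)             # 3x3 covariance matrix
        l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]   # λ1 >= λ2 >= λ3
        l1 = max(l1, 1e-12)                                   # guard degenerate clusters
        linear = (l1 - l2) / l1                               # elongated, line-like
        planar = (l2 - l3) / l1                               # flat, plane-like
        spherical = l3 / l1                                   # isotropic, sphere-like
        # Lane line markings are elongated, so the linear descriptor
        # should exceed both the planar and the spherical descriptors.
        return linear > planar and linear > spherical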
3. The lane line extraction method of claim 1, wherein the removing non-ground points from the laser point cloud based on elevation values of each point in the laser point cloud to obtain a ground point cloud comprises:
projecting the laser point cloud to an XOY plane of a WGS84 space rectangular coordinate system;
establishing a grid comprising the laser point cloud in the XOY plane;
and dividing the grid into a plurality of grids according to the set grid size, and removing non-ground points from the laser point cloud through multi-scale moving surface filtering based on the elevation values of all points in each grid to obtain the ground point cloud.
4. The lane line extraction method according to claim 3, wherein the dividing the grid into a plurality of grids according to a set grid size, and removing non-ground points from the laser point cloud by multi-scale moving surface filtering based on elevation values of points in each grid to obtain a ground point cloud comprises:
step S11, taking the set minimum grid size as the current grid size;
step S12, dividing the grid into a plurality of grids according to the current grid size;
step S13, determining a grid where each point in the laser point cloud is located according to the X coordinate and the Y coordinate of the laser point cloud and the current grid size;
s14, establishing a moving surface equation according to the X coordinate, the Y coordinate and the Z coordinate of a plurality of points in each grid, and calculating the elevation value of each grid according to the moving surface equation;
s15, taking the Z coordinates of each point in each grid as the elevation value of each point, determining the point with the elevation value larger than that of the grid in each grid as a non-ground point, and removing the non-ground point;
step S16, judging whether the current grid size is smaller than the set maximum grid size;
step S17, if yes, updating the current grid size by the set increasing step size;
repeating the steps S12-S17 until the current grid size is not less than the set maximum grid size.
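A minimal Python sketch of steps S11-S17 follows, assuming a first-order moving surface (a least-squares plane) per grid and hypothetical default grid sizes; the claim fixes neither the surface order nor the concrete sizes.

    import numpy as np

    def filter_non_ground(points, min_size=0.5, max_size=4.0, step=0.5):
        """Multi-scale moving surface filtering (steps S11-S17): grid the
        XY plane at increasing grid sizes, fit a moving surface per grid,
        and remove points above the grid elevation value."""
        size = min_size                                  # step S11
        while True:
            x, y = points[:, 0], points[:, 1]            # steps S12-S13: grid keys
            keys = np.stack([x // size, y // size], axis=1)
            keep = np.ones(len(points), dtype=bool)
            for key in np.unique(keys, axis=0):
                idx = np.where((keys == key).all(axis=1))[0]
                cell = points[idx]
                if len(cell) < 3:
                    continue                             # too few points to fit
                # Step S14: moving surface z = a + b*x + c*y by least squares;
                # its mean height over the cell is the grid elevation value.
                A = np.c_[np.ones(len(cell)), cell[:, 0], cell[:, 1]]
                coef, *_ = np.linalg.lstsq(A, cell[:, 2], rcond=None)
                grid_elev = (A @ coef).mean()
                # Step S15: points with elevation above the grid elevation are
                # non-ground (a small tolerance may be added in practice).
                keep[idx[cell[:, 2] > grid_elev]] = False
            points = points[keep]
            if size >= max_size:                         # step S16
                return points
            size += step                                 # step S17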
5. The lane line extraction method of claim 1, wherein the converting the ground point cloud into a binary image based on gray values comprises:
projecting the ground point cloud to an XOY plane of a WGS84 space rectangular coordinate system;
establishing a grid comprising the ground point cloud on the XOY plane, and segmenting the grid into a plurality of grids according to a specified grid size;
acquiring the gray value of each ground point in each grid, and calculating the average value of the gray values of each ground point in each grid as the gray value of each grid;
comparing the gray value of each grid with a preset gray threshold value;
if the gray value of the grid is smaller than the preset gray threshold value, the binarization gray value of the grid is made to be equal to 0, otherwise, the binarization gray value of the grid is made to be equal to 1, and the binarization gray value of each grid is obtained;
and generating a binary image according to the binary gray value of each grid.
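Sketched in Python, with the ground point cloud given as rows of (x, y, z, intensity) and the intensity taken as the gray value; the 0.1 m grid size and the gray threshold of 90 are hypothetical defaults, since the claim only requires a specified grid size and a preset gray threshold.

    import numpy as np

    def ground_to_binary_image(ground, grid_size=0.1, gray_threshold=90.0):
        """Convert the ground point cloud into a binary image: average the
        gray values per grid and threshold them to 0/1 (claim 5)."""
        x, y, gray = ground[:, 0], ground[:, 1], ground[:, 3]
        col = ((x - x.min()) // grid_size).astype(int)
        row = ((y - y.min()) // grid_size).astype(int)
        shape = (row.max() + 1, col.max() + 1)
        total = np.zeros(shape)
        count = np.zeros(shape)
        np.add.at(total, (row, col), gray)               # sum of gray values per grid
        np.add.at(count, (row, col), 1)                  # number of points per grid
        mean_gray = np.divide(total, count,
                              out=np.zeros(shape), where=count > 0)
        # Grids below the threshold become 0, the rest become 1.
        return (mean_gray >= gray_threshold).astype(np.uint8)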
6. The lane line extraction method according to claim 5, wherein the performing region growing clustering on the binary image to generate at least one clustering region comprises:
taking the grid with the binarization gray value of 1 in the binary image as a seed grid;
selecting one grid from the seed grids as an initial grid, traversing and searching all the seed grids by a region growing method, and clustering the searched seed grids; when the seed grids are clustered, clustering all the seed grids positioned in the eight neighborhoods of the seed grids into the same category for any seed grid so as to classify all the seed grids into at least one category;
and forming a clustering region by using the seed grids of the same category to obtain at least one clustering region.
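As a sketch, the region growing of claim 6 produces the same grouping as 8-connected component labeling of the seed grids, which SciPy's ndimage.label performs directly; growing from one initial seed grid and traversing all reachable seeds is equivalent to labeling each connected component.

    import numpy as np
    from scipy import ndimage

    def cluster_seed_grids(binary_image):
        """Group seed grids (binarized gray value 1) into clustering regions:
        seeds touching within an eight-neighborhood share a label."""
        eight_neighborhood = np.ones((3, 3), dtype=int)  # 8-connectivity kernel
        labels, n_regions = ndimage.label(binary_image,
                                          structure=eight_neighborhood)
        # One array of (row, col) grid indices per clustering region.
        return [np.argwhere(labels == k) for k in range(1, n_regions + 1)]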
7. The lane line extraction method according to claim 1, wherein the fitting the ground points for generating the lane line to generate the lane line includes:
dividing the WGS84 space rectangular coordinate system into a plurality of voxels according to the specified voxel size;
assigning the ground points for generating lane lines to each of the voxels according to coordinates of the ground points for generating lane lines;
obtaining Euclidean distance between each ground point for generating the lane line and the center of the voxel where the ground point is located;
selecting the ground point with the minimum Euclidean distance in each voxel as a voxel characteristic point;
calculating a difference value between the maximum X coordinate and the minimum X coordinate in the voxel characteristic point to obtain a first difference value, and calculating a difference value between the maximum Y coordinate and the minimum Y coordinate in the voxel characteristic point to obtain a second difference value;
comparing the first difference value with the second difference value, and selecting the coordinate axis direction corresponding to the larger difference value as the main direction;
dividing the voxel characteristic points into a plurality of groups according to a first preset length along the main direction;
and performing fitting calculation on the voxel characteristic points in each group to obtain fitting parameters, and generating a lane line according to the fitting parameters.
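A Python sketch of the fitting steps of claim 7, with the fit projected onto the XY plane; the 0.2 m voxel size, the 10 m first preset length, and the degree-1 least-squares fit per group are assumed choices, as the claim only calls for a specified voxel size, a first preset length, and a "fitting calculation".

    import numpy as np

    def fit_lane_line(points, voxel=0.2, first_preset_length=10.0):
        """Fit one lane line: pick a feature point per voxel, choose the
        main direction (X or Y) by coordinate extent, group the feature
        points along it, and fit each group (claim 7)."""
        keys = np.floor(points / voxel).astype(int)      # voxel index per point
        centers = (keys + 0.5) * voxel                   # voxel centers
        dist = np.linalg.norm(points - centers, axis=1)  # distance to own center
        feats = []
        for key in np.unique(keys, axis=0):
            idx = np.where((keys == key).all(axis=1))[0]
            feats.append(points[idx[np.argmin(dist[idx])]])  # voxel feature point
        feats = np.array(feats)
        # Main direction: the axis with the larger extent (the first vs.
        # second difference in the claim).
        extent = feats[:, :2].max(axis=0) - feats[:, :2].min(axis=0)
        axis = int(np.argmax(extent))
        other = 1 - axis
        feats = feats[np.argsort(feats[:, axis])]
        # Group along the main direction and fit each group with a line
        # (z is ignored in this 2-D sketch).
        group_id = (feats[:, axis] - feats[0, axis]) // first_preset_length
        segments = []
        for g in np.unique(group_id):
            seg = feats[group_id == g]
            if len(seg) >= 2:
                coef = np.polyfit(seg[:, axis], seg[:, other], 1)  # fitting parameters
                segments.append((coef, seg[:, axis].min(), seg[:, axis].max()))
        return segments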
8. The lane line extraction method according to claim 7, further comprising, after generating a lane line by fitting the ground points for generating a lane line:
calculating the Euclidean distance between two opposite end points of every two adjacent lane lines along the main direction;
judging whether the Euclidean distance is smaller than a second preset length or not;
if so, obtaining point coordinates between the two adjacent lane lines through interpolation according to the coordinates of the two opposite end points of the two adjacent lane lines;
and connecting the two adjacent lane lines according to the point coordinates.
9. The lane line extraction method according to claim 1, further comprising, after generating a lane line by fitting the ground points for generating a lane line:
and smoothing each lane line by a weighted moving average algorithm.
10. An electronic device, comprising:
a processor; and
a memory storing computer program code;
the computer program code, when executed by the processor, causes the electronic device to perform the method of lane line extraction of a laser point cloud of any of claims 1-9.
CN202010671388.3A 2020-07-13 2020-07-13 Lane line extraction method of laser point cloud and electronic equipment Active CN111783722B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010671388.3A CN111783722B (en) 2020-07-13 2020-07-13 Lane line extraction method of laser point cloud and electronic equipment

Publications (2)

Publication Number Publication Date
CN111783722A true CN111783722A (en) 2020-10-16
CN111783722B CN111783722B (en) 2021-07-06

Family

ID=72767130

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010671388.3A Active CN111783722B (en) 2020-07-13 2020-07-13 Lane line extraction method of laser point cloud and electronic equipment

Country Status (1)

Country Link
CN (1) CN111783722B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104197897A (en) * 2014-04-25 2014-12-10 厦门大学 Urban road marker automatic sorting method based on vehicle-mounted laser scanning point cloud
CN104050473A (en) * 2014-05-20 2014-09-17 中国人民解放军理工大学 Road data extraction method based on rectangular neighborhood analysis
US20160154999A1 (en) * 2014-12-02 2016-06-02 Nokia Technologies Oy Objection recognition in a 3d scene
CN106056614A (en) * 2016-06-03 2016-10-26 武汉大学 Building segmentation and contour line extraction method of ground laser point cloud data
CN108062517A (en) * 2017-12-04 2018-05-22 武汉大学 Unstructured road boundary line extraction method based on vehicle-mounted laser point cloud
CN111026150A (en) * 2019-11-25 2020-04-17 国家电网有限公司 System and method for pre-warning geological disasters of power transmission line by using unmanned aerial vehicle

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI Binbing et al.: "Uncertainty of gully sediment budgets based on laser point cloud data", Transactions of the Chinese Society of Agricultural Engineering *
ZHANG Zhiwei et al.: "Research on fitting method of road horizontal alignment based on LIDAR data", Journal of Highway and Transportation Research and Development *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112330604A (en) * 2020-10-19 2021-02-05 香港理工大学深圳研究院 Method for generating vectorized road model from point cloud data
CN112767429A (en) * 2021-01-18 2021-05-07 南京理工大学 Ground-snow surface point cloud rapid segmentation method
CN112767429B (en) * 2021-01-18 2022-11-01 南京理工大学 Ground-snow surface point cloud rapid segmentation method
CN115797896A (en) * 2023-01-30 2023-03-14 智道网联科技(北京)有限公司 Lane line clustering method, lane line clustering apparatus, and computer-readable storage medium

Also Published As

Publication number Publication date
CN111783722B (en) 2021-07-06

Similar Documents

Publication Publication Date Title
CN111783722B (en) Lane line extraction method of laser point cloud and electronic equipment
CN110334762B (en) Feature matching method based on quad tree combined with ORB and SIFT
CN107133966B (en) Three-dimensional sonar image background segmentation method based on sampling consistency algorithm
CN111783721B (en) Lane line extraction method of laser point cloud and electronic equipment
CN109034065B (en) Indoor scene object extraction method based on point cloud
CN114492619B (en) Point cloud data set construction method and device based on statistics and concave-convex performance
CN115372989A (en) Laser radar-based long-distance real-time positioning system and method for cross-country automatic trolley
CN115049700A (en) Target detection method and device
CN110807781A (en) Point cloud simplification method capable of retaining details and boundary features
Yogeswaran et al. 3d surface analysis for automated detection of deformations on automotive body panels
Lin et al. CNN-based classification for point cloud object with bearing angle image
CN112364881B (en) Advanced sampling consistency image matching method
CN116109601A (en) Real-time target detection method based on three-dimensional laser radar point cloud
CN114463396A (en) Point cloud registration method using plane shape and topological graph voting
CN112070787B (en) Aviation three-dimensional point cloud plane segmentation method based on opponent reasoning theory
CN113723425A (en) Airplane model identification method and device, storage medium and equipment
CN115147433A (en) Point cloud registration method
Omidalizarandi et al. Segmentation and classification of point clouds from dense aerial image matching
CN112734816A (en) Heterogeneous image registration method based on CSS-Delaunay
CN115661218A (en) Laser point cloud registration method and system based on virtual super point
CN112907574B (en) Landing point searching method, device and system of aircraft and storage medium
CN112884026B (en) Image identification-assisted power transmission line laser LiDAR point cloud classification method
Shui et al. Automatic planar shape segmentation from indoor point clouds
CN112712062A (en) Monocular three-dimensional object detection method and device based on decoupling truncated object
CN117710603B (en) Unmanned aerial vehicle image three-dimensional building modeling method under constraint of linear geometry

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant