CN114387293A - Road edge detection method and device, electronic equipment and vehicle - Google Patents

Road edge detection method and device, electronic equipment and vehicle

Info

Publication number
CN114387293A
Authority
CN
China
Prior art keywords
point cloud
cloud data
curb
point
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210033492.9A
Other languages
Chinese (zh)
Inventor
陈颉颢
严征
胡小波
冯洪亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LeiShen Intelligent System Co Ltd
Original Assignee
LeiShen Intelligent System Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LeiShen Intelligent System Co Ltd
Publication of CN114387293A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Abstract

Embodiments of the present application provide a road edge detection method and apparatus, an electronic device and a vehicle. The method includes: segmenting original point cloud data collected by a laser radar according to a preset segmentation threshold to obtain local point cloud data; sampling the local point cloud data according to a preset three-dimensional voxel grid size to obtain sampled point cloud data; classifying the three-dimensional points in the sampled point cloud data through a point cloud convolutional neural network model to determine curb three-dimensional points; acquiring a curb marker facade according to the curb three-dimensional points; and determining curb position information according to the curb marker facade. In this way, the sampled point cloud data is obtained by segmenting and sampling the original point cloud data, the curb marker facade is determined from the sampled point cloud data through the point cloud convolutional neural network model, and the curb position information is determined according to the position information of the curb marker, which improves the stability and accuracy of the road edge detection result.

Description

Road edge detection method and device, electronic equipment and vehicle
Technical Field
The application relates to the technical field of automatic driving, in particular to a road edge detection method and device, electronic equipment and a vehicle.
Background
With the continuous development of artificial intelligence technology and modern sensor technology, automatic driving technology based on environment perception has emerged. To realize automatic, safe driving of an automobile, accurate road edge detection is a key enabling technology. When the vehicle deviates from its driving lane during travel, for example when parking or avoiding an obstacle, the detected curb position information is an important basis for path planning by the automatic driving system.
In the prior art, the main technical scheme for road edge detection is as follows: first, the coordinates of the area where the road surface is located are detected from a three-dimensional point cloud or an RGB image, and then the road edge is located by extracting the boundary coordinates of the road surface area. This scheme is suitable for open scenes with little interference, but in complex road scenes it is easily disturbed by other objects, such as vehicles travelling on the road, resulting in poor road edge detection.
Disclosure of Invention
In order to solve the technical problem, embodiments of the present application provide a road edge detection method and apparatus, an electronic device, and a vehicle.
In a first aspect, an embodiment of the present application provides a road edge detection method, where the method includes:
segmenting original point cloud data acquired by a laser radar according to a preset segmentation threshold value to obtain local point cloud data;
sampling the local point cloud data according to a preset three-dimensional voxel grid size to obtain sampled point cloud data;
classifying the three-dimensional points in the sampled point cloud data through a point cloud convolutional neural network model to determine curb three-dimensional points;
acquiring a curb marker facade according to the curb three-dimensional points;
and determining curb position information according to the curb marker facade.
In a second aspect, an embodiment of the present application provides a road edge detection apparatus, including:
the segmentation module is used for segmenting original point cloud data acquired by the laser radar according to a preset segmentation threshold value to obtain local point cloud data;
the sampling module is used for sampling the local point cloud data according to the preset three-dimensional voxel grid size to obtain sampled point cloud data;
the classification module is used for classifying the three-dimensional points in the sampled point cloud data through a point cloud convolutional neural network model to determine curb three-dimensional points;
the acquisition module is used for acquiring a curb marker facade according to the curb three-dimensional point;
and the determining module is used for determining the curb position information according to the curb marker facade.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a memory and a processor, where the memory is used to store a computer program that, when executed by the processor, performs the road edge detection method provided in the first aspect.
In a fourth aspect, an embodiment of the present application provides a vehicle, which includes a vehicle body, and further includes a laser radar mounted on the vehicle body and the electronic device provided in the third aspect; the laser radar is used for acquiring original point cloud data.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program runs on a processor, the computer program performs the road edge detection method provided in the first aspect.
According to the road edge detection method and apparatus, the electronic device and the vehicle provided by the embodiments of the present application, original point cloud data collected by the laser radar is segmented according to the preset segmentation threshold to obtain local point cloud data; the local point cloud data is sampled according to the preset three-dimensional voxel grid size to obtain sampled point cloud data; the three-dimensional points in the sampled point cloud data are classified through a point cloud convolutional neural network model to determine curb three-dimensional points; a curb marker facade is acquired according to the curb three-dimensional points; and curb position information is determined according to the curb marker facade. In this way, the sampled point cloud data is obtained by segmenting and sampling the original point cloud data, the curb marker facade is determined from the sampled point cloud data through the point cloud convolutional neural network model, and the curb position information is determined according to the position information of the curb marker. This noticeably avoids mistaking the boundary line between the road surface and a moving vehicle for the road edge, and improves the stability and accuracy of the road edge detection result.
Drawings
In order to more clearly explain the technical solutions of the present application, the drawings needed to be used in the embodiments are briefly introduced below, and it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope of protection of the present application. Like components are numbered similarly in the various figures.
Fig. 1 is a schematic flow chart illustrating a road edge detection method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of raw three-dimensional point cloud data provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of local three-dimensional point cloud data provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of sampled point cloud data provided by an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating an edge detection effect provided by an embodiment of the present application;
fig. 6 shows a schematic structural diagram of a road edge detection apparatus provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments.
The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
Hereinafter, the terms "including", "having", and their derivatives, as used in various embodiments of the present application, are intended only to indicate particular features, numbers, steps, operations, elements, components, or combinations of the foregoing, and should not be construed as excluding the presence of, or the possibility of adding, one or more other features, numbers, steps, operations, elements, components, or combinations of the foregoing.
Furthermore, the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the various embodiments of the present application belong. The terms (such as those defined in commonly used dictionaries) should be interpreted as having a meaning that is consistent with their contextual meaning in the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein in various embodiments.
Example 1
The embodiment of the disclosure provides a road edge detection method.
Specifically, referring to fig. 1, the road edge detection method includes:
and S101, segmenting original point cloud data acquired by the laser radar according to a preset segmentation threshold value to obtain local point cloud data.
In this embodiment, the road edge detection method may be applied to an electronic device installed on a vehicle. The vehicle may also be fitted with a laser radar, through which original point cloud data can be obtained. The original point cloud data includes a plurality of three-dimensional points, each of which represents position information in the three-dimensional coordinate system of the laser radar, expressed by coordinate values in a first direction, a second direction and a third direction. For example, a three-dimensional coordinate system may be established with the laser radar as the coordinate origin, with axes oriented as follows: the X axis is parallel to the ground and points to the right side of the radar, the Y axis is parallel to the ground and points to the front of the radar, and the Z axis passes through the centre of mass of the radar and points upward; other axis orientations are also possible and are not limited here. Each three-dimensional point of the original point cloud data may thus be represented as (x, y, z), denoting a position in the three-dimensional coordinate system of the laser radar.
Referring to fig. 2, fig. 2 is a schematic diagram of the original point cloud data. The original point cloud data includes a large number of three-dimensional points and contains relatively rich ground information and road edge information; the three-dimensional points may also include interference points, so denoising processing can be performed on them.
It should be added that the laser radar scans the surrounding environment to obtain the original three-dimensional point cloud data, which contains a large number of three-dimensional points and rich environment information. However, if road edge detection were performed directly on the original point cloud data, the interfering three-dimensional points that may exist in it could affect the calculation accuracy, and the large number of three-dimensional points would lead to a heavy subsequent calculation load and consume considerable computing resources. In this embodiment, the original point cloud data acquired by the laser radar is segmented according to the preset segmentation threshold to obtain local point cloud data, which denoises and pre-processes the original point cloud data, reduces interference and reduces the number of three-dimensional points, thereby saving computing resources and improving calculation accuracy.
It should be noted that, due to the limitation of the configuration parameters of the laser radar, its effective scanning distance is limited; the distance the laser radar can effectively scan may be referred to as the farthest scanning distance. The distribution range of the effective three-dimensional points of the original point cloud data along each coordinate axis can be estimated from the farthest scanning distance of the laser radar and the road width. For example, a first direction preset threshold and a third direction preset threshold may be set to screen the original three-dimensional points, remove interfering three-dimensional points, and form local point cloud data from the remaining effective three-dimensional points. If the axes of the laser radar three-dimensional coordinate system are oriented such that the X axis is parallel to the ground and points to the right side of the radar, the Y axis is parallel to the ground and points to the front of the radar, and the Z axis passes through the centre of mass of the radar and points upward, the first direction preset threshold may be an X-axis direction preset threshold, and the third direction preset threshold may be a Z-axis direction preset threshold.
In this embodiment, a three-dimensional point set including road surface information and curb information is separated from the original point cloud, and the three-dimensional point set including the road surface information and the curb information is used as input data for subsequent processing. By screening the original point cloud data, the data amount required to be processed by a subsequent point cloud convolution neural network model is reduced, and the overall detection speed is improved.
In one embodiment, step S101 includes the following steps:
determining a first target three-dimensional point of which the absolute value of a first direction numerical value in the original point cloud data is smaller than or equal to a first direction preset threshold;
determining a second target three-dimensional point of which the third direction numerical value is greater than or equal to a third direction preset threshold value in the original point cloud data;
and deleting the first target three-dimensional point and the second target three-dimensional point in the original point cloud data to obtain the local point cloud data.
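As an illustration only, the following is a minimal Python sketch (NumPy) of the screening step described above, taking the first direction as the X axis and the third direction as the Z axis as in the later description; the function name and the thresholds wx and zh are assumptions and not part of this application:

    import numpy as np

    def segment_local_points(points, wx, zh):
        # points: (M, 3) array of lidar three-dimensional points [x, y, z]
        # wx: first direction (X-axis) preset threshold; zh: third direction (Z-axis) preset threshold
        x, z = points[:, 0], points[:, 2]
        first_target = np.abs(x) <= wx      # first target points as defined in the step above
        second_target = z >= zh             # second target points as defined in the step above
        keep = ~(first_target | second_target)
        return points[keep]                 # remaining points form the local point cloud data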
Referring to fig. 3, fig. 3 is a schematic diagram of the local point cloud data, that is, the point cloud data remaining after the ground point cloud data and invalid interference points are removed from the original point cloud data.
In one embodiment, the first direction preset threshold is determined according to the maximum number of lanes of the road, the maximum lane width and a preset detection range safety factor;
and the third direction preset threshold value is determined according to the distance between the laser radar and the ground, the maximum scanning distance of the laser radar and the maximum longitudinal gradient of the road.
In an embodiment, if the first direction preset threshold is an X-axis direction preset threshold, and the third direction preset threshold is a Z-axis direction preset threshold, the X-axis direction preset threshold may be calculated according to the following formula 1:
Equation 1: WX = (NW × WL × A) / 2;
where WX represents the preset threshold in the X-axis direction, NW represents the maximum number of lanes of the road, WL represents the maximum lane width, and A represents a detection range safety factor taking a value in the range [1.5, 4].
In addition, the preset threshold value in the Z-axis direction may be calculated according to the following formula 2:
Equation 2: ZH = ZL + YL × C;
wherein ZH represents a preset threshold value in the Z-axis direction, ZL represents the distance between the center of mass of the laser radar and the ground, YL represents the farthest scanning distance of the laser radar in the Y-axis direction, and C represents the maximum longitudinal gradient of the road.
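For illustration, Equations 1 and 2 might be evaluated as follows; the function names and example values are assumptions, not taken from this application:

    def x_axis_threshold(nw, wl, a):
        # Equation 1: WX = (NW x WL x A) / 2, with the safety factor A taken from [1.5, 4]
        return nw * wl * a / 2.0

    def z_axis_threshold(zl, yl, c):
        # Equation 2: ZH = ZL + YL x C
        return zl + yl * c

    # Illustrative values: 4 lanes of 3.5 m with safety factor 2; lidar 1.8 m above the
    # ground, 120 m farthest Y-axis scanning distance, 6% maximum longitudinal gradient.
    wx = x_axis_threshold(4, 3.5, 2.0)       # 14.0
    zh = z_axis_threshold(1.8, 120.0, 0.06)  # 9.0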
For example, if the X-axis direction preset threshold is WX, the step of determining a first target three-dimensional point of which the absolute value of the first direction value is smaller than or equal to the first direction preset threshold may be understood as: determining a first target three-dimensional point whose first direction value is greater than or equal to -WX and smaller than or equal to WX, that is, a three-dimensional point whose X-axis coordinate lies within the interval [-WX, WX].
And S102, sampling the local point cloud data according to a preset three-dimensional voxel grid size to obtain sampled point cloud data.
In one embodiment, step S102 may include the following steps:
segmenting the local point cloud data according to the size of a preset three-dimensional voxel grid to obtain a plurality of three-dimensional voxel grids;
randomly sampling a preset number of sampling three-dimensional points from all three-dimensional points of each three-dimensional voxel grid, and taking the sampling three-dimensional points of the three-dimensional voxel grids as the sampling point cloud data.
Referring to fig. 4, fig. 4 is a schematic diagram of the sampled point cloud data. The process of acquiring the sampled point cloud data is as follows: the local point cloud data shown in fig. 3 is divided into a plurality of three-dimensional voxel grids according to the preset three-dimensional voxel grid size; N sampling three-dimensional points are randomly sampled from the three-dimensional points of each voxel grid, and all the sampled three-dimensional points form the sampled point cloud data. The number of three-dimensional points in the sampled point cloud data of fig. 4 is smaller than that in the local point cloud data of fig. 3, which further reduces the amount of calculation and saves computing resources.
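A minimal sketch of this voxel-grid random sampling, assuming a NumPy point array; the grid indexing scheme and the parameter names are illustrative assumptions:

    import numpy as np

    def voxel_random_sample(points, voxel_size, n_per_voxel, seed=0):
        # Divide the local point cloud into three-dimensional voxel grids of the given size
        # and randomly keep up to n_per_voxel points from each occupied grid cell.
        rng = np.random.default_rng(seed)
        cells = np.floor(points / np.asarray(voxel_size)).astype(np.int64)   # voxel index per point
        _, inverse = np.unique(cells, axis=0, return_inverse=True)
        sampled = []
        for cell_id in np.unique(inverse):
            idx = np.flatnonzero(inverse == cell_id)
            take = min(n_per_voxel, idx.size)
            sampled.append(points[rng.choice(idx, size=take, replace=False)])
        return np.vstack(sampled)                                            # sampled point cloud data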
In one embodiment, the preset voxel grid size may be determined according to a maximum edge detection error in the first direction, a maximum edge detection error in the second direction, and a minimum curb height in the third direction.
For example, the cubic voxel grid size of [ dx/2, dy/2, dz/4] is preset, where dx is the maximum edge detection error allowed by the road edge in the use scene in the X-axis direction, dy is the maximum edge detection error allowed by the road edge in the use scene in the Y-axis direction, and dz is the lowest road edge height recognizable by the lidar in the use scene, and the lowest road edge height cannot be less than 2 times of the distance accuracy of the lidar. In one embodiment, the local point cloud data is divided into a plurality of voxel grids according to the cube voxel grid size of [ dx/2, dy/2, dz/4], and a corresponding three-dimensional point is determined for each voxel grid.
In an embodiment, the position information of the center of gravity point of each voxel grid may be calculated according to a plurality of three-dimensional points of each voxel grid, the center of gravity point may be used as an adjusted three-dimensional point of each voxel grid, and the plurality of adjusted three-dimensional points may form the sampling point cloud data.
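For the centre-of-gravity variant and the [dx/2, dy/2, dz/4] grid-size rule mentioned above, a small sketch under the same assumptions (names are illustrative):

    import numpy as np

    def voxel_size_from_precision(dx, dy, dz):
        # Grid size chosen as [dx/2, dy/2, dz/4] from the allowed detection errors and
        # the lowest recognizable curb height, as described above.
        return (dx / 2.0, dy / 2.0, dz / 4.0)

    def voxel_centroid_sample(points, voxel_size):
        # Replace the points of each occupied voxel grid by their centre of gravity (mean position).
        cells = np.floor(points / np.asarray(voxel_size)).astype(np.int64)
        _, inverse = np.unique(cells, axis=0, return_inverse=True)
        return np.vstack([points[inverse == c].mean(axis=0) for c in np.unique(inverse)])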
In this embodiment, the point cloud convolutional neural network model can automatically generate a corresponding preset three-dimensional voxel grid size according to the edge detection precision set by the user, and downsample the input point cloud data based on this preset voxel grid size, so as to reduce the density of the point cloud data. The higher the edge detection precision set by the user, the smaller the preset three-dimensional voxel grid size automatically generated by the point cloud convolutional neural network model, the larger the point cloud density, and the more accurate the detected road edge position information, but also the larger the required storage space and the larger the calculation amount of the subsequent point cloud convolutional neural network model. When multiple frames of sampled point cloud data are detected simultaneously, after the multi-frame sampled point cloud data are compressed according to the preset three-dimensional voxel grid size, a fixed number of points is randomly sampled from the compressed point cloud data, so that each frame of point cloud contains the same number of points. For example, the compressed point cloud data is randomly sampled to N points, where N is an integer multiple of 1024.
In this way, the two indicators of detection precision and detection speed of the road edge detection scheme can be effectively balanced: the required detection precision is ensured while the point cloud density is reduced as far as possible, thereby improving the detection speed.
And S103, classifying the three-dimensional points in the sampled point cloud data through the point cloud convolutional neural network model to determine curb three-dimensional points.
The point cloud convolutional neural network model is trained on historically collected point clouds together with manually annotated category labels for each point, which improves classification accuracy. The point cloud convolutional neural network model supports a single-frame detection mode and a multi-frame detection mode.
In one embodiment, step S103 includes the following steps:
determining the curb point probability, the road surface point probability and other point probabilities of all three-dimensional points of the sampled point cloud data through the point cloud convolution neural network model;
and for each three-dimensional point, determining the maximum of the road surface point probability, the curb point probability and the other point probability, and determining the three-dimensional points whose maximum probability value is the curb point probability as curb three-dimensional points.
For example, if the detection result of the three-dimensional point a is [ 70%, 20%, 10% ], the probability that the three-dimensional point a is a curb point is 70%, the probability that the three-dimensional point a is a road surface is 20%, and the probability that the three-dimensional point a is another area is 10%, the three-dimensional point a is classified into a curb point category.
In one embodiment, determining the curb point probability, the road surface point probability, and the other point probability of each three-dimensional point of the sampled point cloud data through the point cloud convolutional neural network model may include:
when the point cloud convolution neural network model is in a single-frame detection mode, converting single-frame sampling point cloud data into a first matrix, wherein the first matrix comprises N rows, and each row of the first matrix represents coordinate information of each three-dimensional point of the sampling point cloud data;
and inputting the first matrix into the point cloud convolution neural network model, and calculating the first matrix through the point cloud convolution neural network model to obtain a second matrix, wherein the second matrix comprises N rows, each row of the second matrix represents the class probability of each three-dimensional point of the sampled point cloud data, and the class probability comprises the road surface point probability, the curb point probability and other point probabilities.
Specifically, the processing procedure of the point cloud data in the single-frame detection mode is as follows: the sampled point cloud data is formatted into a first matrix with N rows and 3 columns, where N is the number of three-dimensional points contained in the sampled point cloud data, and the ith row in the first matrix represents the coordinates [xi, yi, zi] of the input ith three-dimensional point. The first matrix is input into the point cloud convolutional neural network model, which calculates the input sampled point cloud data to obtain a second matrix with N rows and 3 columns, where the ith row in the second matrix represents the detection result [Pi0, Pi1, Pi2] of the input ith three-dimensional point, Pi0 being the probability that the input ith three-dimensional point belongs to the road surface, Pi1 the probability that it belongs to the curb, and Pi2 the probability that it belongs to other areas. The class with the highest probability in the detection result corresponding to each point is the class to which that point belongs.
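To make the single-frame data flow concrete, a sketch in which the object named "model" stands in for the trained point cloud convolutional neural network; its call signature is an assumption, since the application does not specify an interface:

    import numpy as np

    def classify_single_frame(points, model):
        # points: sampled point cloud of one frame, shape (N, 3), rows [xi, yi, zi]
        first_matrix = np.asarray(points, dtype=np.float32)
        second_matrix = model(first_matrix)            # shape (N, 3): rows [Pi0, Pi1, Pi2]
        labels = np.argmax(second_matrix, axis=1)      # 0 = road surface, 1 = curb, 2 = other
        return first_matrix[labels == 1]               # curb three-dimensional points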
In another embodiment, determining the curb point probability, the road surface point probability, and the other point probability of each three-dimensional point of the sampled point cloud data by the point cloud convolutional neural network model may include the following steps:
when the point cloud convolution neural network model is in a multi-frame detection mode, converting multi-frame sampling point cloud data into a third matrix, wherein the third matrix comprises a plurality of sub-matrices, and each row of each sub-matrix of the third matrix represents coordinate information of each three-dimensional point of each frame of sampling point cloud data;
and inputting the third matrix into the point cloud convolution neural network model, and calculating the third matrix through the point cloud convolution neural network model to obtain a fourth matrix, wherein the fourth matrix comprises a plurality of sub-matrices, each row of each sub-matrix of the fourth matrix represents the class probability of each three-dimensional point of each frame of sampling point cloud data, and the class probability comprises the road surface point probability, the curb point probability and other point probabilities.
Specifically, the processing of the sampled point cloud data in multi-frame detection is as follows: the multiple frames of sampled point cloud data to be identified are formatted into a third matrix of B x N x 3, which is equivalent to B sub-matrices of N x 3. The formatted third matrix of B x N x 3 is input into the point cloud convolutional neural network model, which calculates the input and outputs a fourth matrix of B x N x 3, equivalent to B sub-matrices of N x 3, where B is the number of frames of sampled point cloud data input into the point cloud convolutional neural network model and N is the number of three-dimensional points contained in one frame of sampled point cloud data. B may take any positive integer value j, j = 1, 2, 3, ….
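Similarly, a sketch of the multi-frame mode, stacking B frames into a B x N x 3 array before a single forward pass (again with an assumed model interface):

    import numpy as np

    def classify_multi_frame(frames, model):
        # frames: list of B sampled frames, each of shape (N, 3)
        third_matrix = np.stack([np.asarray(f, dtype=np.float32) for f in frames])  # B x N x 3
        fourth_matrix = model(third_matrix)                                         # B x N x 3
        labels = np.argmax(fourth_matrix, axis=2)            # class index per frame and per point
        return [third_matrix[b][labels[b] == 1] for b in range(len(frames))]        # curb points per frame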
And S104, acquiring a curb marker facade according to the curb three-dimensional points.
In this embodiment, step S104 may include the following steps:
generating at least one geometric plane according to the curb three-dimensional points;
when a target geometric plane with the shape matched with a preset marker facade exists in at least one geometric plane, determining the target geometric plane as the curb marker facade.
In one embodiment, the reference curb has protrusions such as road separation guardrails, road piles and curbstones, and these protrusions can be used as markers whose facade forms are obtained in advance. After each three-dimensional point is classified, it can be determined whether the point is a curb point, a road surface point or another point. A plurality of geometric planes are therefore generated from all the curb points, and a target geometric plane that matches the preset marker facade is determined as the curb marker facade. Since the curb marker facade may be the facade of a curb protrusion, detecting the curb marker facade amounts to detecting the curb.
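The application does not fix a particular plane-generation algorithm; as one possible sketch, a simple RANSAC-style search for a near-vertical plane among the curb points, where the verticality test and the inlier thresholds are illustrative assumptions:

    import numpy as np

    def find_marker_facade(curb_points, n_iter=200, dist_thr=0.05, min_inliers=50, seed=0):
        # curb_points: (M, 3) array of curb three-dimensional points, M >= 3.
        rng = np.random.default_rng(seed)
        best = None
        for _ in range(n_iter):
            p1, p2, p3 = curb_points[rng.choice(len(curb_points), 3, replace=False)]
            normal = np.cross(p2 - p1, p3 - p1)
            norm = np.linalg.norm(normal)
            if norm < 1e-6:
                continue                       # degenerate sample, points nearly collinear
            normal = normal / norm
            if abs(normal[2]) > 0.2:
                continue                       # plane not close enough to vertical, not a facade
            dist = np.abs((curb_points - p1) @ normal)
            inliers = curb_points[dist < dist_thr]
            if len(inliers) >= min_inliers and (best is None or len(inliers) > len(best)):
                best = inliers                 # candidate curb marker facade points
        return best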
Referring to fig. 5, fig. 5 is a schematic diagram of the curb detection effect. Fig. 5 includes a curb marker facade 501, which may be the facade of a raised object such as a road separation guardrail, a road pile or a curbstone.
And S105, determining curb position information according to the curb marker vertical face.
In this embodiment, step S105 may include the steps of:
determining real position information of the vertical face of the curb marker according to the three-dimensional points corresponding to the vertical face of the curb marker;
and taking the real position information of the vertical surface of the curb marker as the curb position information.
In this embodiment, since the curb marker facade is the facade of a curb protrusion, it can be determined that a curb has been detected. The real-world position information of the curb marker facade can be obtained by conversion from the three-dimensional points of the curb marker facade in the laser radar coordinate system, and the curb position information can then be determined based on this real-world position information.
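A sketch of this conversion as a rigid transform from the lidar coordinate system to a world frame; the rotation and translation (the lidar extrinsic pose) are assumed inputs, since the application only states that the conversion is based on the laser radar coordinate system:

    import numpy as np

    def facade_to_world(facade_points, rotation, translation):
        # facade_points: (K, 3) points of the curb marker facade in the lidar frame;
        # rotation (3, 3) and translation (3,) describe the lidar pose in the world frame.
        world_points = facade_points @ rotation.T + translation   # rigid transform of each point
        curb_position = world_points.mean(axis=0)                 # one representative curb position
        return world_points, curb_position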
According to the road edge detection method provided by this embodiment, original point cloud data collected by the laser radar is segmented according to the preset segmentation threshold to obtain local point cloud data; the local point cloud data is sampled according to the preset three-dimensional voxel grid size to obtain sampled point cloud data; the three-dimensional points in the sampled point cloud data are classified through the point cloud convolutional neural network model to determine curb three-dimensional points; a curb marker facade is acquired according to the curb three-dimensional points; and curb position information is determined according to the curb marker facade. In this way, the sampled point cloud data is obtained by segmenting and sampling the original point cloud data, the curb marker facade is determined from the sampled point cloud data through the point cloud convolutional neural network model, and the curb position information is determined according to the position information of the curb marker. This noticeably avoids mistaking the boundary line between the road surface and a moving vehicle for the road edge, and improves the stability and accuracy of the road edge detection result.
Example 2
In addition, the embodiment of the disclosure provides a road edge detection device.
Specifically, as shown in fig. 6, the road edge detection device 600 includes:
the segmentation module 601 is configured to segment original point cloud data acquired by the laser radar according to a preset segmentation threshold to obtain local point cloud data;
a sampling module 602, configured to sample the local point cloud data according to a preset voxel grid size to obtain sampled point cloud data;
the classification module 603 is configured to classify the three-dimensional points in the sampled point cloud data through a point cloud convolutional neural network model to determine curb three-dimensional points;
an obtaining module 604, configured to obtain a curb marker facade according to the curb three-dimensional point;
a determining module 605, configured to determine curb position information according to the curb landmark facade.
In this embodiment, the segmentation module 601 is further configured to determine a first target three-dimensional point, where an absolute value of a first direction value in the original point cloud data is less than or equal to a preset threshold in the first direction;
determining a second target three-dimensional point of which the third direction numerical value is greater than or equal to a third direction preset threshold value in the original point cloud data;
and deleting the first target three-dimensional point and the second target three-dimensional point in the original point cloud data to obtain the local point cloud data.
In this embodiment, the first direction preset threshold is determined according to the maximum number of lanes of the road, the maximum width of the road, and a preset detection range safety factor; and the third direction preset threshold value is determined according to the distance between the laser radar and the ground, the maximum scanning distance of the laser radar and the maximum longitudinal gradient of the road.
In this embodiment, the sampling module 602 is further configured to segment the local point cloud data according to a preset voxel grid size to obtain a plurality of voxel grids;
randomly sampling a preset number of sampling three-dimensional points from all three-dimensional points of each three-dimensional voxel grid, and taking the sampling three-dimensional points of the three-dimensional voxel grids as the sampling point cloud data.
In this embodiment, the classification module 603 is further configured to determine, through the point cloud convolution neural network model, a curb point probability, a road surface point probability, and other point probabilities of the three-dimensional points of the sampled point cloud data;
and determine, for each three-dimensional point, the maximum of the road surface point probability, the curb point probability and the other point probability, and determine the three-dimensional points whose maximum probability value is the curb point probability as curb three-dimensional points.
In this embodiment, the classifying module 603 is further configured to, when the point cloud convolutional neural network model is in a single-frame detection mode, convert a single-frame sampling point cloud data into a first matrix, where the first matrix includes N rows, and each row of the first matrix represents coordinate information of each three-dimensional point of the sampling point cloud data;
and inputting the first matrix into the point cloud convolution neural network model, and calculating the first matrix through the point cloud convolution neural network model to obtain a second matrix, wherein the second matrix comprises N rows, each row of the second matrix represents the class probability of each three-dimensional point of the sampled point cloud data, and the class probability comprises the road surface point probability, the curb point probability and other point probabilities.
In this embodiment, the classifying module 603 is further configured to, when the point cloud convolutional neural network model is in a multi-frame detection mode, convert the multi-frame sampled point cloud data into a third matrix, where the third matrix includes a plurality of sub-matrices, and each row of each sub-matrix of the third matrix represents coordinate information of each three-dimensional point of each frame of sampled point cloud data;
and inputting the third matrix into the point cloud convolution neural network model, and calculating the third matrix through the point cloud convolution neural network model to obtain a fourth matrix, wherein the fourth matrix comprises a plurality of sub-matrices, each row of each sub-matrix of the fourth matrix represents the class probability of each three-dimensional point of each frame of sampling point cloud data, and the class probability comprises the road surface point probability, the curb point probability and other point probabilities.
In this embodiment, the obtaining module 604 is further configured to generate at least one geometric plane according to the curb three-dimensional point;
when a target geometric plane with the shape matched with a preset marker facade exists in at least one geometric plane, determining the target geometric plane as the curb marker facade.
In this embodiment, the determining module 605 is further configured to determine real position information of the curb marker vertical surface according to the three-dimensional point corresponding to the curb marker vertical surface;
and taking the real position information of the vertical surface of the curb marker as the curb position information.
The road edge detection apparatus provided by this embodiment segments the original point cloud data collected by the laser radar according to the preset segmentation threshold to obtain local point cloud data; samples the local point cloud data according to the preset three-dimensional voxel grid size to obtain sampled point cloud data; classifies the three-dimensional points in the sampled point cloud data through the point cloud convolutional neural network model to determine curb three-dimensional points; acquires a curb marker facade according to the curb three-dimensional points; and determines curb position information according to the curb marker facade. In this way, the sampled point cloud data is obtained by segmenting and sampling the original point cloud data, the curb marker facade is determined from the sampled point cloud data through the point cloud convolutional neural network model, and the curb position information is determined according to the position information of the curb marker. This noticeably avoids mistaking the boundary line between the road surface and a moving vehicle for the road edge, and improves the stability and accuracy of the road edge detection result.
Example 3
Furthermore, an embodiment of the present disclosure provides an electronic device, including a memory and a processor, where the memory stores a computer program, and the computer program executes the road edge detection method provided in embodiment 1 when the processor runs.
The electronic device provided in this embodiment may perform the road edge detection method provided in embodiment 1, and details are not described herein in order to avoid repetition.
Example 4
In addition, the embodiment of the disclosure provides a vehicle, which comprises a vehicle body, a laser radar arranged on the vehicle body and the electronic equipment provided by the embodiment 3; the laser radar is used for acquiring original point cloud data.
The vehicle provided in this embodiment may perform the road edge detection method provided in embodiment 1, and details are not repeated herein to avoid repetition.
Example 5
Furthermore, embodiments of the present disclosure provide a computer-readable storage medium storing a computer program that, when run on a processor, performs the road edge detection method provided in embodiment 1.
In this embodiment, a computer-readable storage medium may perform the road edge detection method provided in embodiment 1, and is not described herein again to avoid repetition.
In this embodiment, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (13)

1. A road edge detection method, the method comprising:
segmenting original point cloud data acquired by a laser radar according to a preset segmentation threshold value to obtain local point cloud data;
sampling the local point cloud data according to a preset three-dimensional voxel grid size to obtain sampled point cloud data;
classifying the three-dimensional points in the sampled point cloud data through a point cloud convolutional neural network model to determine curb three-dimensional points;
acquiring a curb marker facade according to the curb three-dimensional points;
and determining curb position information according to the curb marker facade.
2. The method according to claim 1, wherein the preset segmentation threshold includes a first direction preset threshold and a third direction preset threshold, and the segmenting the raw point cloud data collected by the lidar according to the preset segmentation threshold to obtain local point cloud data includes:
determining a first target three-dimensional point of which the absolute value of a first direction numerical value in the original point cloud data is smaller than or equal to a first direction preset threshold;
determining a second target three-dimensional point of which the third direction numerical value is greater than or equal to a third direction preset threshold value in the original point cloud data;
and deleting the first target three-dimensional point and the second target three-dimensional point in the original point cloud data to obtain the local point cloud data.
3. The method according to claim 2, wherein the first direction preset threshold is determined according to the maximum lane number of the road, the maximum width of the road and a preset detection range safety factor; and the third direction preset threshold value is determined according to the distance between the laser radar and the ground, the maximum scanning distance of the laser radar and the maximum longitudinal gradient of the road.
4. The method according to claim 1, wherein the sampling the local point cloud data according to a preset voxel grid size to obtain sampled point cloud data comprises:
segmenting the local point cloud data according to the size of a preset three-dimensional voxel grid to obtain a plurality of three-dimensional voxel grids;
randomly sampling a preset number of sampling three-dimensional points from all three-dimensional points of each three-dimensional voxel grid, and taking the sampling three-dimensional points of the three-dimensional voxel grids as the sampling point cloud data.
5. The method of claim 1, wherein the step of classifying the three-dimensional points in the sampled point cloud data through the point cloud convolutional neural network model to determine the three-dimensional points of the curb comprises the steps of:
determining the curb point probability, the road surface point probability and other point probabilities of all three-dimensional points of the sampled point cloud data through the point cloud convolution neural network model;
and for each three-dimensional point, determining the maximum of the road surface point probability, the curb point probability and the other point probability, and determining the three-dimensional points whose maximum probability value is the curb point probability as curb three-dimensional points.
6. The method of claim 5, wherein said determining, from said point cloud convolutional neural network model, a curb point probability, a pavement point probability, and other point probabilities for each three-dimensional point of said sampled point cloud data comprises:
when the point cloud convolution neural network model is in a single-frame detection mode, converting single-frame sampling point cloud data into a first matrix, wherein the first matrix comprises N rows, and each row of the first matrix represents coordinate information of each three-dimensional point of the sampling point cloud data;
and inputting the first matrix into the point cloud convolution neural network model, and calculating the first matrix through the point cloud convolution neural network model to obtain a second matrix, wherein the second matrix comprises N rows, each row of the second matrix represents the class probability of each three-dimensional point of the sampled point cloud data, and the class probability comprises the road surface point probability, the curb point probability and other point probabilities.
7. The method of claim 5, wherein said determining, from said point cloud convolutional neural network model, a curb point probability, a pavement point probability, and other point probabilities for each three-dimensional point of said sampled point cloud data comprises:
when the point cloud convolution neural network model is in a multi-frame detection mode, converting multi-frame sampling point cloud data into a third matrix, wherein the third matrix comprises a plurality of sub-matrices, and each row of each sub-matrix of the third matrix represents coordinate information of each three-dimensional point of each frame of sampling point cloud data;
and inputting the third matrix into the point cloud convolution neural network model, and calculating the third matrix through the point cloud convolution neural network model to obtain a fourth matrix, wherein the fourth matrix comprises a plurality of sub-matrices, each row of each sub-matrix of the fourth matrix represents the class probability of each three-dimensional point of each frame of sampling point cloud data, and the class probability comprises the road surface point probability, the curb point probability and other point probabilities.
8. The method of claim 1, wherein obtaining a curb marker facade from the curb three-dimensional points comprises:
generating at least one geometric plane according to the curb three-dimensional points;
when a target geometric plane with the shape matched with a preset marker facade exists in at least one geometric plane, determining the target geometric plane as the curb marker facade.
9. The method according to claim 1, wherein said determining curb location information from the curb marker facade comprises:
determining real position information of the vertical face of the curb marker according to the three-dimensional points corresponding to the vertical face of the curb marker;
and taking the real position information of the vertical surface of the curb marker as the curb position information.
10. A road edge detection apparatus, characterized in that the apparatus comprises:
the segmentation module is used for segmenting original point cloud data acquired by the laser radar according to a preset segmentation threshold value to obtain local point cloud data;
the sampling module is used for sampling the local point cloud data according to the preset three-dimensional voxel grid size to obtain sampled point cloud data;
the classification module is used for classifying the three-dimensional points in the sampled point cloud data through a point cloud convolutional neural network model to determine curb three-dimensional points;
the acquisition module is used for acquiring a curb marker facade according to the curb three-dimensional point;
and the determining module is used for determining the curb position information according to the curb marker facade.
11. An electronic device, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, performs the road edge detection method of any one of claims 1 to 9.
12. A vehicle comprising a vehicle body, characterized by further comprising a lidar and the electronic device of claim 11 mounted on the vehicle body; the laser radar is used for acquiring original point cloud data.
13. A computer-readable storage medium, characterized in that it stores a computer program which, when run on a processor, performs the road edge detection method of any one of claims 1 to 9.
CN202210033492.9A 2021-12-31 2022-01-12 Road edge detection method and device, electronic equipment and vehicle Pending CN114387293A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2021116730517 2021-12-31
CN202111673051 2021-12-31

Publications (1)

Publication Number Publication Date
CN114387293A true CN114387293A (en) 2022-04-22

Family

ID=81202247

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210033492.9A Pending CN114387293A (en) 2021-12-31 2022-01-12 Road edge detection method and device, electronic equipment and vehicle

Country Status (1)

Country Link
CN (1) CN114387293A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116520289A (en) * 2023-07-04 2023-08-01 东莞市新通电子设备有限公司 Intelligent control method and related device for hardware machining process
CN116520289B (en) * 2023-07-04 2023-09-01 东莞市新通电子设备有限公司 Intelligent control method and related device for hardware machining process

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination