WO2021227797A1 - Road boundary detection method and device, computer equipment, and storage medium


Info

Publication number
WO2021227797A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
point cloud
gray value
point
segmentation threshold
Prior art date
Application number
PCT/CN2021/088583
Other languages
English (en)
French (fr)
Inventor
罗哲
肖振宇
李琛
周旋
Original Assignee
长沙智能驾驶研究院有限公司
Priority date
Filing date
Publication date
Application filed by 长沙智能驾驶研究院有限公司
Publication of WO2021227797A1

Classifications

    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/06 Topological mapping of higher dimensional structures onto lower dimensional surfaces
    • G06T3/067 Reshaping or unfolding 3D tree structures onto 2D planes
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Definitions

  • This application relates to the field of intelligent driving technology, and in particular to a road boundary detection method, device, computer equipment and storage medium.
  • The intelligent electric sweeper is driven by electric power and does not pollute the environment while driving. It is smaller than an ordinary car and can clean parks, streets, and many places that large sweepers cannot reach. It can also use vehicle-mounted sensors to detect and track boundaries such as curbs, guardrails, and flower beds, so as to realize automatic edge cleaning.
  • A road boundary detection method includes:
  • acquiring point cloud data collected by a detection device for a target road, the point cloud data including the point cloud position and reflection intensity of each point cloud;
  • projecting the point cloud data into a two-dimensional image to obtain a first image, where the first image contains an image point corresponding to each point cloud, and the image gray value of the image point corresponding to each point cloud is determined according to the point cloud position and reflection intensity of that point cloud;
  • determining a target segmentation threshold of the first image according to each image gray value; and
  • performing segmentation processing on the first image according to the target segmentation threshold to obtain a second image, and determining a road boundary based on the second image.
  • a road boundary detection device includes:
  • An acquisition module configured to acquire point cloud data collected by the detection device for the target road, the point cloud data including the point cloud position and reflection intensity of each point cloud;
  • a projection module configured to project the point cloud data into a two-dimensional image to obtain a first image, where the first image contains an image point corresponding to each point cloud, and the image gray value of each image point is determined according to the point cloud position and reflection intensity of the corresponding point cloud;
  • a determining module configured to determine the target segmentation threshold of the first image according to the gray value of each image point; and
  • the processing module is configured to perform segmentation processing on the first image according to the target segmentation threshold to obtain a second image, and determine a road boundary based on the second image.
  • A computer device includes a memory and a processor, the memory storing a computer program, and the processor implementing the following steps when executing the computer program:
  • acquiring point cloud data collected by a detection device for a target road, the point cloud data including the point cloud position and reflection intensity of each point cloud;
  • projecting the point cloud data into a two-dimensional image to obtain a first image, where the first image contains an image point corresponding to each point cloud, and the image gray value of the image point corresponding to each point cloud is determined according to the point cloud position and reflection intensity of that point cloud;
  • determining a target segmentation threshold of the first image according to each image gray value; and
  • performing segmentation processing on the first image according to the target segmentation threshold to obtain a second image, and determining a road boundary based on the second image.
  • A computer-readable storage medium has a computer program stored thereon, and when the computer program is executed by a processor, the following steps are implemented:
  • acquiring point cloud data collected by a detection device for a target road, the point cloud data including the point cloud position and reflection intensity of each point cloud;
  • projecting the point cloud data into a two-dimensional image to obtain a first image, where the first image contains an image point corresponding to each point cloud, and the image gray value of the image point corresponding to each point cloud is determined according to the point cloud position and reflection intensity of that point cloud;
  • determining a target segmentation threshold of the first image according to each image gray value; and
  • performing segmentation processing on the first image according to the target segmentation threshold to obtain a second image, and determining a road boundary based on the second image.
  • In the above road boundary detection method, device, computer equipment, and storage medium, the point cloud data collected by the detection device for the target road is obtained, the point cloud data including the point cloud position and reflection intensity of each point cloud; the point cloud data is projected into a two-dimensional image to obtain a first image, where the first image contains the image point corresponding to each point cloud and the image gray value of each image point is determined according to the point cloud position and reflection intensity of the corresponding point cloud; the target segmentation threshold of the first image is determined according to each image gray value; and the first image is segmented according to the target segmentation threshold to obtain a second image, from which the road boundary is determined.
  • In this way, the position information and reflection intensity information of the point cloud are converted into image gray values and reflected in the first image.
  • The resulting image gray values can accurately distinguish the image points corresponding to the road point cloud from those corresponding to the road boundary point cloud. Therefore, when the first image is segmented according to the target segmentation threshold determined from the image gray values, the segmented second image restores the road boundary information more accurately and improves the boundary detection effect.
  • Fig. 1 is a schematic flowchart of a road boundary detection method in an embodiment.
  • Fig. 2 is a schematic diagram of a first image in an embodiment.
  • Fig. 3 is a schematic diagram of a second image in an embodiment.
  • FIG. 4 is a schematic flowchart of the step of determining the target segmentation threshold of the first image according to the gray value of each image in an embodiment.
  • Figure 5 is a schematic diagram of an image gray value distribution in an embodiment.
  • Fig. 6 is a schematic flowchart of the step of determining the road boundary based on the second image in an embodiment.
  • Fig. 7 is a schematic diagram of a road boundary curve in an embodiment.
  • Fig. 8 is a schematic flowchart of a road boundary detection method in another embodiment.
  • Fig. 9 is a structural block diagram of a road boundary detection device in an embodiment.
  • Fig. 10 is a diagram of the internal structure of a computer device in an embodiment.
  • Fig. 11 is a diagram of the internal structure of a computer device in an embodiment.
  • the road boundary detection method provided in this application can be applied to a vehicle intelligent driving system.
  • the vehicle intelligent driving system includes an industrial computer and detection equipment (such as lidar).
  • The detection equipment can be installed on the vehicle and collects the corresponding point cloud data as the vehicle travels on the target road.
  • The industrial computer obtains the point cloud data collected by the detection device, processes the point cloud data to determine the road boundary, and can further control the vehicle to move along the road boundary.
  • a road boundary detection method is provided.
  • the method is applied to an industrial computer as an example for description, including the following steps S102 to S108.
  • S102 Obtain point cloud data collected by the detection device for the target road, where the point cloud data includes the point cloud position and reflection intensity of each point cloud.
  • the detection equipment can use lidar.
  • The working principle of lidar is to transmit a detection signal (a laser beam) toward a target and compare the received signal reflected from the target with the transmitted signal; after appropriate processing, relevant information about the target can be obtained, such as distance, azimuth, height, reflection intensity, and other parameters, so that the target can be detected, tracked, and identified.
  • the lidar is installed on a vehicle driving on a target road. As the vehicle moves, point cloud data on the target road is collected.
  • the point cloud data includes the point cloud position and reflection intensity of each point cloud.
  • The point cloud position can be the coordinate position of the point cloud in the lidar coordinate system.
  • In one embodiment, the lidar is installed on top of the vehicle and tilted downward at a certain angle (for example, 15 degrees) toward the front. The horizontal scanning range is close to 100 degrees and the vertical scanning angle is close to 40 degrees, so the scanning range can cover most of the area in front of the vehicle.
  • First, the external parameters of the lidar are calibrated to obtain the calibration parameters from the lidar to the vehicle body.
  • After the industrial computer obtains the original point cloud data collected by the lidar, it performs coordinate conversion on the original point cloud data; specifically, the point cloud data in the lidar coordinate system is converted into the vehicle body coordinate system. The conversion formula can be:
  • P_C = R · P_L + T
  • where P_L represents the point cloud coordinate in the lidar coordinate system (x-axis forward, y-axis to the left, z-axis upward), P_C represents the point cloud coordinate in the vehicle body coordinate system (x-axis toward the front of the car, y-axis toward the front left, z-axis directly above the car), R is the 3×3 rotation matrix, and T is the translation vector. R and T can be calibrated according to actual conditions.
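  • As a sketch, the coordinate conversion P_C = R · P_L + T can be implemented as follows; the rotation matrix and translation vector used here are placeholder calibration values, not the ones from an actual vehicle.

```python
import numpy as np

def lidar_to_body(points_lidar, R, T):
    """Convert N x 3 lidar-frame points into the vehicle body frame: P_C = R @ P_L + T."""
    return points_lidar @ R.T + T

# Placeholder calibration: identity rotation, lidar mounted 1.2 m above
# and 0.5 m behind the body origin (illustrative values only).
R = np.eye(3)
T = np.array([-0.5, 0.0, 1.2])

points = np.array([[10.0, 1.0, -1.0]])  # one lidar return, in meters
body_points = lidar_to_body(points, R, T)
```

In practice, R and T come from the extrinsic calibration of the lidar to the vehicle body mentioned above.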
  • S104 Project the point cloud data into a two-dimensional image to obtain a first image.
  • The first image contains an image point corresponding to each point cloud, and the image gray value of the image point corresponding to each point cloud is determined according to the point cloud position and reflection intensity of that point cloud.
  • the point cloud data is projected into a two-dimensional image, so that all the information contained in the point cloud is converted to the image, and the point cloud data can be processed more conveniently in the image.
  • Each point cloud corresponds to an image point in the image, and the position information and reflection intensity information of each point cloud are converted into the position information of the corresponding image point and the image gray value.
  • In the image coordinate system, the origin of the coordinates is at the upper left of the image, the w coordinate axis represents the width of the image, and the h coordinate axis represents the height of the image.
  • The conversion formula between the point cloud coordinates in the image coordinate system and those in the vehicle body coordinate system uses the following quantities: w′ and h′ represent the coordinate values of the point cloud in the image coordinate system, and x and y represent the longitudinal and lateral values of the point cloud in the vehicle body coordinate system, in meters (m), while the image coordinate values are in centimeters (cm).
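  • A minimal sketch of the body-to-image projection, assuming a 10 m × 10 m window ahead of the vehicle at 1 pixel = 1 cm with the origin at the upper left; the exact window size and offsets are assumptions, since the text does not reproduce the formula itself.

```python
H, W = 1000, 1000  # image height and width in pixels (1 px = 1 cm, assumed)

def body_to_image(x, y):
    """Map body-frame meters (x forward, y left) to image pixels (h', w')."""
    h = H - int(round(x * 100))       # distant points end up near the top of the image
    w = W // 2 - int(round(y * 100))  # points left of the car map left of the image center
    return h, w

h_px, w_px = body_to_image(5.0, 0.0)  # a point 5 m straight ahead of the car
```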
  • The height and reflection intensity of the point cloud are respectively mapped into the pixel gray value space (0–255), so that the height and reflection intensity of the point cloud are converted into the image gray value of the corresponding image point in the image.
  • FIG. 2 shows a schematic diagram of the first image in an embodiment.
  • The first image contains image points corresponding to the road surface point cloud (hereinafter referred to as road image points) and image points corresponding to the road boundary point cloud (hereinafter referred to as boundary image points).
  • S106 Determine a target segmentation threshold of the first image according to the gray value of each image.
  • the target segmentation threshold is used to segment the road image points and boundary image points in the first image.
  • The road surface point cloud and the boundary point cloud have different point cloud positions and reflection intensities, so their corresponding image points have different image gray values. From the image gray values in the first image, one gray value that distinguishes the road image points from the boundary image points is determined and used as the target segmentation threshold.
  • S108 Perform segmentation processing on the first image according to the target segmentation threshold to obtain a second image, and determine the road boundary based on the second image.
  • the image points in the first image can be divided into two types, one type corresponds to road image points, and the other type corresponds to boundary image points.
  • Specifically, the first image can be binarized: the gray value of the segmented road image points is set to 255 (white), and the gray value of the segmented boundary image points is set to 0 (black).
  • FIG. 3 shows a schematic diagram of the second image in an embodiment. Compared with the first image shown in FIG. 2, the road image points are removed from the second image and the boundary image points are retained.
  • the position information and reflection intensity information of the point cloud are converted into the image gray value and embodied in the first image.
  • The resulting image gray value can be used to accurately distinguish the road surface point cloud from the road boundary point cloud. Therefore, the first image is segmented according to the target segmentation threshold determined from the image gray values, and the segmented second image can more accurately restore the road boundary information and improve the boundary detection effect.
  • In one embodiment, the point cloud position includes the point cloud height, and the method for determining the image gray value of the image point corresponding to each point cloud includes: determining the image gray value of the image point corresponding to each point cloud according to the point cloud height of each point cloud with its corresponding first conversion factor and first distribution weight, and the reflection intensity of each point cloud with its corresponding second conversion factor and second distribution weight.
  • the first conversion factor and the second conversion factor are respectively used to convert the height of the point cloud and the reflection intensity into a gray value, so that the converted value is between 0 and 255.
  • The first distribution weight and the second distribution weight respectively represent the weights of the gray values converted from the point cloud height and the reflection intensity, and the first distribution weight and the second distribution weight add up to 1.
  • the final image gray value of the image point corresponding to the point cloud is determined by the gray value converted from the height of the point cloud and the reflection intensity and its weight.
  • In one embodiment, the step of determining the image gray value of the image point corresponding to each point cloud may include the following steps: the point cloud height and reflection intensity of each point cloud are multiplied by the first conversion factor and the second conversion factor respectively to obtain the first converted gray value and the second converted gray value of the point cloud; the first converted gray value and the second converted gray value of each point cloud are multiplied by the first distribution weight and the second distribution weight respectively to obtain the first weighted gray value and the second weighted gray value of the point cloud; and the first weighted gray value and the second weighted gray value of each point cloud are added to determine the image gray value of the image point corresponding to the point cloud.
  • P = m·(j·z) + n·(k·i)
  • where z and i represent the point cloud height and reflection intensity, j and k represent the first conversion factor and the second conversion factor, j·z and k·i respectively represent the first converted gray value and the second converted gray value (each between 0 and 255), m and n respectively represent the first distribution weight and the second distribution weight, and P represents the image gray value, 0 ≤ P ≤ 255.
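  • The gray value formula P = m·(j·z) + n·(k·i) can be sketched as follows; the conversion factors and weights are illustrative, since the text leaves their concrete values to calibration.

```python
def image_gray_value(z, i, j, k, m, n):
    """P = m*(j*z) + n*(k*i), clamped into the 0..255 gray value space."""
    p = m * (j * z) + n * (k * i)
    return max(0.0, min(255.0, p))

# Illustrative factors: heights up to ~2 m scale into 0..255 via j,
# intensities already in 0..255 pass through k = 1; m + n must equal 1.
j, k = 127.5, 1.0
m, n = 0.5, 0.5

p = image_gray_value(z=1.0, i=100.0, j=j, k=k, m=m, n=n)
```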
  • In this embodiment, the point cloud height and reflection intensity are converted into gray values through their corresponding conversion factors, and the image gray value of the image point corresponding to the point cloud is jointly determined from these converted gray values and their corresponding distribution weights.
  • the determined image gray value can be used to accurately distinguish the road image point and the boundary image point, which helps to improve the boundary detection effect.
  • In one embodiment, before the step of determining the target segmentation threshold of the first image according to each image gray value, the method further includes: performing block processing on the first image according to the position of each image point in the first image to obtain at least two block images. The step of determining the target segmentation threshold then includes: for each block image, determining the block segmentation threshold of that block image according to the image gray values of the image points in it, the target segmentation threshold including each block segmentation threshold. Segmenting the first image according to the target segmentation threshold to obtain the second image then includes: performing segmentation processing on each block image according to its block segmentation threshold to obtain the second image.
  • First, the first image is divided into blocks. Specifically, the first image can be divided into two parts, that is, two block images: the lower part represents the point cloud image closer to the vehicle body (0–5 m) in front, and the upper part represents the point cloud image directly in front of the vehicle but farther (5–10 m) from the vehicle body.
  • For each block image, the following processing is performed: the block segmentation threshold of the block image is determined according to the image gray values of the image points in the block image, and segmentation processing is then performed on the block image according to that block segmentation threshold. After each block image is segmented, the segmented block images together form the second image.
  • image block processing is not limited to upper and lower blocks, and other block processing can also be performed according to actual conditions.
  • left and right blocks can be considered.
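  • A sketch of the block-wise segmentation described above, splitting the first image into an upper (far) and lower (near) block and thresholding each block separately; the per-block threshold function and the convention that boundary points fall at or above the threshold are assumptions for illustration.

```python
import numpy as np

def segment_by_blocks(img, threshold_fn):
    """Threshold the upper and lower halves of the image independently,
    then stitch the two segmented blocks into one second image."""
    half = img.shape[0] // 2
    out = np.empty_like(img)
    for block in (slice(0, half), slice(half, img.shape[0])):
        t = threshold_fn(img[block])
        # assumed convention: boundary points (>= t) -> 0 (black), road points -> 255 (white)
        out[block] = np.where(img[block] >= t, 0, 255)
    return out

img = np.array([[10, 10, 200, 200],
                [10, 10, 200, 200],
                [40, 40, 230, 230],
                [40, 40, 230, 230]], dtype=np.uint8)
seg = segment_by_blocks(img, threshold_fn=lambda b: b.mean())
```

Here a simple mean threshold stands in for the block segmentation threshold; any of the threshold methods described below can be plugged in instead.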
  • the step of determining the target segmentation threshold of the first image according to the gray value of each image may specifically include the following steps S1062 to S1066.
  • S1062 Obtain a gray value distribution map according to the gray value of each image and the number of corresponding image points.
  • the first coordinate of the gray value distribution map represents the gray value of the image, and the second coordinate represents the number of image points.
  • Histogram statistics can be performed on the image gray values of all image points in the first image to obtain a gray value distribution map; the range of the image gray values is 0–255, that is, it contains 256 values.
  • The first coordinate can be divided into 26 parts: each of the first 25 parts contains 10 gray values (0–9, 10–19, ..., 240–249), and the 26th part contains 6 gray values (250–255).
  • FIG. 5 shows a schematic diagram of an image gray value distribution in an embodiment, where the abscissa is the first coordinate, representing the image gray value, and the ordinate is the second coordinate, representing the number of image points corresponding to the image gray value.
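  • The 26-part binning of the 0–255 gray range described above can be sketched as:

```python
def gray_histogram_26(gray_values):
    """26 bins over 0..255: the first 25 bins hold 10 gray values each
    (0-9, 10-19, ..., 240-249) and the 26th bin holds the 6 values 250-255."""
    bins = [0] * 26
    for g in gray_values:
        bins[min(int(g) // 10, 25)] += 1
    return bins

hist = gray_histogram_26([0, 9, 10, 250, 255])
```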
  • S1064 Detect wave crests in the second coordinate direction in the gray value distribution map, and determine the first wave crest and the second wave crest with the largest second coordinate value among the wave crests.
  • The gray value distribution map may have multiple wave crests in the direction of the second coordinate (that is, the ordinate). It can be understood that the more image points share close image gray values, the more likely those image points are to form a wave crest in the ordinate direction. If more than two wave crests are detected in the ordinate direction of the gray value distribution map, the ordinate value of each wave crest is obtained and the two wave crests with the largest ordinate values are selected; that is, the ordinate values are sorted from large to small, and the two wave crests corresponding to the top two ordinate values are regarded as the first wave crest and the second wave crest respectively. As shown in Fig. 5, there are two wave crests in the ordinate direction; the higher one is regarded as the first wave crest and the lower one as the second wave crest.
  • In some scenes, the road boundary is relatively obvious, and the image gray values of the boundary image points and the road image points differ considerably.
  • In other scenes, the road boundary is not obvious: because the image gray value is related to the point cloud height, the gray values of road image points and of boundary image points whose height above the road surface is low are relatively close and hard to distinguish. In these two different situations, different segmentation threshold determination methods are used to determine the target segmentation threshold of the first image.
  • By identifying the situation and selecting the corresponding segmentation threshold determination method to determine the target segmentation threshold of the first image, the segmentation of road image points and boundary image points is improved, so that the road boundary information is restored more accurately.
  • In one of these situations, the image gray value that maximizes the between-class variance is determined from the image gray values and used as the target segmentation threshold of the first image.
  • That is, the maximum between-class variance method (Otsu) can be used to calculate the target segmentation threshold of the first image.
  • The foreground (that is, the boundary, which can refer to all objects other than the road surface) and the background (that is, the road surface) are separated by a threshold denoted as T.
  • The ratio of foreground image points to all image points is denoted as w0 and their average gray value as u0; the ratio of background image points is denoted as w1 and their average gray value as u1.
  • The total average gray value of the image is denoted as μ, the between-class variance as g, the total number of image points as W*H, and the number of image points with an image gray value less than the threshold T as N0.
  • The quantities are related as follows: w0 = N0/(W*H), w1 = 1 − w0, μ = w0·u0 + w1·u1, and g = w0·(u0 − μ)² + w1·(u1 − μ)² = w0·w1·(u0 − u1)². The threshold T that maximizes g is selected.
  • In this embodiment, the target segmentation threshold of the first image is determined by maximizing the between-class variance. Since the between-class variance represents the degree to which the foreground and background gray values deviate from the overall average gray value, a greater deviation indicates a better segmentation effect; therefore the threshold with the maximum between-class variance is selected as the target segmentation threshold. In most scenes, this achieves a good segmentation of road image points and boundary image points.
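  • A minimal implementation of the maximum between-class variance (Otsu) threshold using the notation above (w0, w1, u0, u1, g); it assumes the foreground is the side of the histogram below the candidate threshold T.

```python
import numpy as np

def otsu_threshold(gray_values):
    """Return the threshold T that maximizes g = w0*w1*(u0 - u1)^2."""
    hist = np.bincount(np.asarray(gray_values, dtype=np.int64), minlength=256)
    total = hist.sum()
    levels = np.arange(256)
    best_t, best_g = 0, -1.0
    for t in range(1, 256):
        n0 = hist[:t].sum()             # image points with gray value < T
        n1 = total - n0
        if n0 == 0 or n1 == 0:
            continue
        w0, w1 = n0 / total, n1 / total
        u0 = (hist[:t] * levels[:t]).sum() / n0
        u1 = (hist[t:] * levels[t:]).sum() / n1
        g = w0 * w1 * (u0 - u1) ** 2    # between-class variance
        if g > best_g:
            best_g, best_t = g, t
    return best_t

# Two well-separated gray value clusters: the threshold falls between them.
t = otsu_threshold([10] * 50 + [200] * 50)
```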
  • In the other situation, when the gray value distribution map contains at least one other wave crest in addition to the first and second wave crests, the wave crest adjacent to the first wave crest is selected from the at least one other wave crest as the third wave crest, and the target segmentation threshold of the first image is determined according to the image gray value corresponding to the smallest second coordinate value between the first wave crest and the third wave crest.
  • Here, the first wave crest and the second wave crest are the wave crests with the largest second coordinate values, and the third wave crest is located between the first wave crest and the second wave crest in the first coordinate direction, adjacent to the first wave crest.
  • the image gray value corresponding to the smallest second coordinate value between the first wave crest and the third wave crest is determined as the target segmentation threshold of the first image.
  • In some cases, the smallest second coordinate value between the first wave crest and the third wave crest corresponds to a gray value interval containing more than one gray value; in that case, all the gray values in the interval can be summed and divided by the number of gray values in the interval, and the resulting average gray value of the interval is used as the target segmentation threshold.
  • In this embodiment, the target segmentation threshold of the first image is determined by the image gray value corresponding to the smallest second coordinate value between the first wave crest and the third wave crest. Because the image gray value is related to the point cloud height, the image gray values of boundary image points whose height above the road surface is low are close to those of road image points. Road image points are the most numerous and correspond to the first wave crest, while the third wave crest, adjacent to the first wave crest, corresponds to boundary image points with a low height above the road surface. Selecting the image gray value corresponding to the smallest second coordinate value between the first wave crest and the third wave crest as the target segmentation threshold therefore improves the segmentation effect in this situation, so that the road boundary information is restored more accurately.
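  • The valley-based threshold can be sketched as follows: find the histogram peaks, take the peak adjacent to the main peak as the third wave crest, and return the average gray value of the lowest bin (the valley) between them. The 3-point peak test and the bin width are simplifications of the scheme described above.

```python
def valley_threshold(bins, bin_width=10):
    """Threshold = average gray value of the lowest bin (the valley)
    between the main peak and its adjacent peak."""
    peaks = [i for i in range(1, len(bins) - 1)
             if bins[i] > bins[i - 1] and bins[i] >= bins[i + 1]]
    main = max(peaks, key=lambda i: bins[i])           # first wave crest
    others = [p for p in peaks if p != main]
    third = min(others, key=lambda p: abs(p - main))   # crest adjacent to the main one
    lo, hi = sorted((main, third))
    valley = min(range(lo, hi + 1), key=lambda i: bins[i])
    return valley * bin_width + (bin_width - 1) / 2    # mean gray value of the valley bin

# Hypothetical 26-bin histogram: main peak in bin 2, adjacent peak in bin 5.
bins = [0, 2, 8, 3, 1, 4, 1, 0, 0, 0] + [0] * 16
t = valley_threshold(bins)
```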
  • the step of determining the road boundary based on the second image may specifically include the following steps S602 to S608.
  • the boundary image points in the second image include not only the image points on the edge of the road, but also the image points on the boundary such as guardrails and flower beds.
  • For the smart sweeper, what matters most is the contour of the road edge, so that the sweeper can clean along the edge; therefore, image points that do not belong to the road edge can be filtered out and only the contour image points of the road edge retained.
  • When the sweeper drives along the border, it drives along either the left border or the right border. According to driving habits, driving on the right is the default, so the left border contour image points in the second image can be filtered out and the right border contour image points (that is, the target boundary) retained.
  • the second image is preprocessed.
  • The preprocessing can include removal of discrete points and erosion and dilation. When the resolution of the lidar is relatively low, the image points in the projected image are relatively discrete; occasional false detection points of the lidar can be filtered out through preprocessing while the integrity of the boundary is ensured.
  • The image is traversed from left to right, the image point closest to the vehicle body is regarded as the required boundary contour image point, and the image point on the left side of the boundary contour image point is selected as the final boundary contour image point.
  • The boundary contour image points extracted from the second image may have parts of the boundary missing, so the boundary contour formed from them is discontinuous, and different boundary contours are also discontinuous from one another.
  • the Hermite interpolation method can be used to interpolate between the two end points of each break of the boundary contour.
  • Hermite interpolation follows the principle that a curve is determined once the coordinates of its two end points and the tangents at those end points are known: the coordinates of each end point are obtained, the tangent at each end point is calculated from the coordinates of that end point and its neighboring points, and the connecting curve at each break is then obtained.
  • S606 Perform interpolation between the two end points of each disconnection of the boundary contour according to each connection curve to obtain an interpolated boundary contour image point.
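  • A sketch of cubic Hermite interpolation across one break: given the two end points and their tangents, points on the connecting curve are evaluated with the standard Hermite basis. The tangents here are placeholders; in the method above they would be estimated from each end point and its neighboring contour points.

```python
def hermite_point(p0, p1, m0, m1, t):
    """Point on the cubic Hermite curve through p0, p1 with end tangents m0, m1, t in [0, 1]."""
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return tuple(h00 * a + h10 * b + h01 * c + h11 * d
                 for a, b, c, d in zip(p0, m0, p1, m1))

# Fill a break between two boundary contour end points (placeholder tangents).
p0, p1 = (0.0, 0.0), (10.0, 5.0)
m0, m1 = (10.0, 0.0), (10.0, 0.0)
filled = [hermite_point(p0, p1, m0, m1, t / 4) for t in range(5)]
```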
  • S608 Perform curve fitting on the interpolated boundary contour image points to obtain a road boundary curve, and determine the road boundary based on the road boundary curve.
  • A B-spline curve is used to fit the interpolated boundary contour image points.
  • B-spline curve fitting can produce a road boundary curve that is closer to the real road boundary, so as to meet the high-precision requirement of the sweeper for edge cleaning.
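  • A uniform cubic B-spline evaluation, sketched with numpy and hypothetical control points; a production implementation would typically fit the spline to the interpolated contour points (for example by least squares) rather than use them directly as control points.

```python
import numpy as np

def cubic_bspline(ctrl, samples_per_segment=50):
    """Evaluate a uniform cubic B-spline defined by N x 2 control points."""
    ctrl = np.asarray(ctrl, dtype=float)
    # basis matrix of the uniform cubic B-spline
    M = np.array([[-1, 3, -3, 1],
                  [3, -6, 3, 0],
                  [-3, 0, 3, 0],
                  [1, 4, 1, 0]]) / 6.0
    out = []
    for seg in range(len(ctrl) - 3):
        g = ctrl[seg:seg + 4]  # four control points per curve segment
        for t in np.linspace(0.0, 1.0, samples_per_segment, endpoint=False):
            tv = np.array([t**3, t**2, t, 1.0])
            out.append(tv @ M @ g)
    return np.array(out)

# Hypothetical interpolated boundary contour points used as control points.
ctrl = [(0.0, 0.0), (1.0, 1.0), (2.0, 1.5), (3.0, 2.5), (4.0, 3.0)]
curve = cubic_bspline(ctrl)
```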
  • FIG. 7 shows a schematic diagram of a road boundary curve in an embodiment; the final road boundary curve is the curve pointed to by the arrow. After the final road boundary curve is obtained, the coordinates of each boundary point on the road boundary curve are converted from the image coordinate system to the vehicle body coordinate system according to the conversion formula given above, so as to control the sweeper to clean along the road boundary curve.
  • the obtained road boundary curve can reflect road boundary information more completely and truly, and improve the boundary detection effect.
  • the method further includes the following steps: filtering the road boundary curve to obtain the filtered road boundary curve, and determining the road boundary based on the filtered road boundary curve.
  • the Kalman filter (KF) method can be used to filter the road boundary curve. Considering that the sweeper moves at a relatively low speed during work, it is suitable to adopt a uniform motion model. Therefore, the Kalman filter design process is as follows:
  • The formulas of the Kalman filter prediction part are:
  • x′ = F·x + f (1)
  • P′ = F·P·Fᵀ + Q (2)
  • where x represents the state vector, F represents the state transition matrix and Fᵀ its transpose, f denotes external influences, P denotes the state covariance matrix, Q represents the process noise matrix, and P′ represents the state covariance matrix after the prediction update.
  • The formulas of the Kalman filter measurement part are:
  • y = z − H·x′ (3)
  • S = H·P′·Hᵀ + R (4)
  • K = P′·Hᵀ·S⁻¹ (5)
  • x = x′ + K·y (6)
  • P = (I − K·H)·P′ (7)
  • Formulas (3), (6), and (7) are observation formulas, and formulas (4) and (5) are used to calculate the Kalman gain K. Here z is the boundary point observation value; H represents the measurement matrix, which mainly converts the state vector space into the measurement space; R represents the measurement noise matrix, reflecting the difference between the measured value and the true value; S represents a temporary variable that simplifies the formulas; and I represents the identity matrix with the same dimension as the state vector.
  • the filtering method is not limited to the above-mentioned Kalman filter (KF); for example, the extended Kalman filter (EKF), the unscented Kalman filter (UKF), and other filtering methods can also be used to filter the road boundary curve.
  • KF: Kalman filter
  • EKF: extended Kalman filter
  • UKF: unscented Kalman filter
  • a road boundary detection method is provided.
  • the method is applied to an industrial computer as an example for description, including the following steps S801 to S816.
  • S802 Perform coordinate conversion on the point cloud data, from the laser radar coordinate system to the vehicle body coordinate system.
  • S803 Project the point cloud data converted to the vehicle body coordinate system into a two-dimensional image to obtain a first image. The first image contains an image point corresponding to each point cloud, so that the point cloud coordinates are converted from the vehicle body coordinate system to the image coordinate system; the image gray value of each image point is determined according to the point cloud height and reflection intensity of the corresponding point cloud.
  • S804 Perform block processing on the first image according to the position of each image point in the first image to obtain at least two block images.
  • S805 For each block image, obtain a gray value distribution map according to the gray value of each image point in the block image and the number of corresponding image points; the first coordinate of the gray value distribution map represents the image gray value, and the second coordinate represents the number of image points.
  • S806 Detect wave crests in the second coordinate direction in the gray value distribution map, and determine the first wave crest and the second wave crest with the largest second coordinate value among the wave crests.
  • S807 Determine whether there are other wave crests between the first wave crest and the second wave crest; if yes, go to step S808, if not, go to step S809.
  • S808 From the other wave crests, select a wave crest adjacent to the first wave crest as the third wave crest (the second coordinate value of the first wave crest is greater than or equal to that of the second wave crest), and determine the segmentation threshold of the block image according to the image gray value corresponding to the smallest second coordinate value between the first wave crest and the third wave crest.
  • S809 Determine the maximum inter-class variance value according to the image gray values, as the block segmentation threshold of the block image.
  • S810 Segment each block image according to its block segmentation threshold to obtain a second image.
  • S813 Perform interpolation between the two end points of each disconnection of the boundary contour according to each connection curve to obtain an interpolated boundary contour image point.
  • S816 Convert the coordinates of each boundary point on the filtered road boundary curve from the image coordinate system to the vehicle body coordinate system, and determine the road boundary.
  • In this embodiment, the point cloud data is projected into a two-dimensional image, so that the point cloud height and reflection intensity are converted into image gray values, which provides important information for road boundary extraction.
  • Dividing the image into blocks and calculating a segmentation threshold for each block separately reduces the impact on the segmentation thresholds of point cloud fluctuations caused by uneven roads or vehicle motion; and by detecting the crests in the gray value distribution map and selecting the corresponding threshold determination method based on whether other crests exist between the first crest and the second crest, the segmentation of road image points and boundary image points is improved, thereby restoring road boundary information more accurately.
  • a road boundary detection device 900 which includes: an acquisition module 910, a projection module 920, a determination module 930, and a processing module 940, wherein:
  • the acquiring module 910 is configured to acquire the point cloud data collected by the detection device for the target road, and the point cloud data includes the point cloud position and reflection intensity of each point cloud.
  • the projection module 920 is used to project the point cloud data into a two-dimensional image to obtain a first image.
  • the first image contains an image point corresponding to each point cloud, and the image gray value of each image point is determined according to the point cloud position and reflection intensity of the corresponding point cloud.
  • the determining module 930 is configured to determine the target segmentation threshold of the first image according to the gray value of each image.
  • the processing module 940 is configured to perform segmentation processing on the first image according to the target segmentation threshold to obtain a second image, and determine the road boundary based on the second image.
  • the point cloud position includes the point cloud height
  • the projection module 920 further includes a gray value determining unit for determining the image gray value of the image point corresponding to each point cloud, according to the point cloud height of each point cloud with its corresponding first conversion factor and first distribution weight, and the reflection intensity of each point cloud with its corresponding second conversion factor and second distribution weight.
  • the gray value determination unit is specifically configured to: multiply the point cloud height and reflection intensity of each point cloud by the first conversion factor and the second conversion factor, respectively, to obtain the first conversion gray value of each point cloud. Degree value and second converted gray value; the first converted gray value and second converted gray value of each point cloud are respectively multiplied by the first distribution weight and the second distribution weight to obtain the value of each point cloud. The first weighted gray value and the second weighted gray value; the first weighted gray value and the second weighted gray value of each point cloud are added to determine the image gray value of the image point corresponding to each point cloud.
  • the determining module 930 further includes an image block unit, configured to perform block processing on the first image according to the position of each image point in the first image to obtain at least two block images.
  • the determining module 930 is further configured to determine, for each block image, the block segmentation threshold of the block image according to the image gray value of each image point in the block image; the target segmentation threshold includes each block segmentation threshold.
  • the processing module 940 is further configured to segment each block image according to its block segmentation threshold to obtain a second image.
  • the determining module 930 includes: a gray value distribution acquiring unit, a peak detecting unit, and a segmentation threshold determining unit, wherein:
  • the gray value distribution acquisition unit is used to obtain a gray value distribution map according to each image gray value and the number of corresponding image points; the first coordinate of the gray value distribution map represents the image gray value, and the second coordinate represents the number of image points.
  • the wave crest detection unit is used to detect wave crests in the second coordinate direction in the gray value distribution diagram, and determine the first wave crest and the second wave crest with the largest second coordinate value among the wave crests.
  • the segmentation threshold determination unit is configured to select a corresponding segmentation threshold determination method based on whether there are other peaks between the first peak and the second peak to determine the target segmentation threshold of the first image.
  • the segmentation threshold determination unit is specifically configured to determine, when there are no other peaks between the first peak and the second peak, the maximum inter-class variance value according to the image gray values as the target segmentation threshold of the first image.
  • the segmentation threshold determination unit is specifically configured to, when there is at least one other peak between the first peak and the second peak, select from the at least one other peak a peak adjacent to the first peak as the third peak; the second coordinate value of the first peak is greater than or equal to that of the second peak; and the target segmentation threshold of the first image is determined according to the image gray value corresponding to the smallest second coordinate value between the first peak and the third peak.
  • the processing module 940 further includes: an extraction unit, a connection curve determination unit, an interpolation unit, and a fitting unit, wherein:
  • the extracting unit is used to extract the boundary contour image points of the target side from the second image.
  • the connecting curve determination unit is used to obtain the position of the two end points of the boundary contour at each break when the boundary contour formed based on the boundary contour image points is discontinuous, and determine the boundary according to the position of each end point and the tangent at each end point The connection curve of the contour at each break.
  • the interpolation unit is used to perform interpolation between the two end points of each disconnection of the boundary contour according to each connection curve to obtain an image point of the boundary contour after interpolation.
  • the fitting unit is used to curve-fit the interpolated boundary contour image points to obtain the road boundary curve, and determine the road boundary based on the road boundary curve.
  • the processing module 940 further includes a filtering unit for filtering the road boundary curve to obtain the filtered road boundary curve, and determining the road boundary based on the filtered road boundary curve.
  • Each module in the above road boundary detection device can be implemented in whole or in part by software, hardware and a combination thereof.
  • the above-mentioned modules may be embedded in the form of hardware or independent of the processor in the computer equipment, or may be stored in the memory of the computer equipment in the form of software, so that the processor can call and execute the operations corresponding to the above-mentioned modules.
  • a computer device is provided.
  • the computer device may be a server, and its internal structure diagram may be as shown in FIG. 10.
  • the computer equipment includes a processor, a memory, and a network interface connected through a system bus.
  • the processor of the computer device is used to provide calculation and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, a computer program, and a database.
  • the internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection.
  • the computer program is executed by the processor to realize a road boundary detection method.
  • a computer device is provided.
  • the computer device may be a terminal, and its internal structure diagram may be as shown in FIG. 11.
  • the computer equipment includes a processor, a memory, a communication interface, a display screen and an input device connected through a system bus.
  • the processor of the computer device is used to provide calculation and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system and a computer program.
  • the internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium.
  • the communication interface of the computer device is used to communicate with an external terminal in a wired or wireless manner, and the wireless manner can be implemented through WIFI, an operator's network, NFC (near field communication) or other technologies.
  • the computer program is executed by the processor to realize a road boundary detection method.
  • the display screen of the computer equipment can be a liquid crystal display or an electronic ink display; the input device can be a touch layer covering the display screen, a button, trackball, or touchpad on the housing of the computer equipment, or an external keyboard, touchpad, or mouse.
  • FIG. 10 or FIG. 11 is only a block diagram of a part of the structure related to the solution of the present application, and does not constitute a limitation on the computer equipment to which the solution of the present application is applied.
  • the computer device may include more or fewer components than shown in the figures, or combine certain components, or have a different component arrangement.
  • a computer device is provided, including a memory and a processor, with a computer program stored in the memory; the processor implements the steps in the foregoing method embodiments when executing the computer program.
  • a computer-readable storage medium is provided, and a computer program is stored thereon, and the computer program is executed by a processor to implement the steps in the foregoing method embodiments.
  • Non-volatile memory may include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory, or optical storage.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM may be in various forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), etc.


Abstract

A road boundary detection method and apparatus, a computer device, and a storage medium. The method comprises: acquiring point cloud data collected by a detection device for a target road, the point cloud data including the point cloud position and reflection intensity of each point cloud (S102); projecting the point cloud data into a two-dimensional image to obtain a first image, the first image containing an image point corresponding to each point cloud, the image gray value of each image point being determined according to the point cloud position and reflection intensity of the corresponding point cloud (S104); determining a target segmentation threshold of the first image according to the image gray values (S106); and segmenting the first image according to the target segmentation threshold to obtain a second image, and determining the road boundary based on the second image (S108). The method can improve boundary detection accuracy.

Description

Road boundary detection method and apparatus, computer device, and storage medium

Technical Field

This application relates to the technical field of intelligent driving, and in particular to a road boundary detection method and apparatus, a computer device, and a storage medium.
Background

At present, urban road cleaning relies mainly on large numbers of sanitation workers sweeping by hand, which is inefficient and labor-intensive. To improve work efficiency and reduce labor intensity and labor cost, replacing manual sweeping with highly intelligent cleaning equipment has become the development trend of road cleaning.

Intelligent electric sweepers are electrically driven and cause no further pollution to the environment while operating. Being smaller than ordinary vehicles, they can clean parks, streets, alleys, and many places that large sweepers cannot reach, and can use on-board sensors to detect and track bounded areas such as curbs, guardrails, and flower beds, thereby achieving automatic edge-following cleaning.

However, current intelligent electric sweepers suffer from insufficiently accurate road boundary detection, which prevents precise edge-following cleaning.
Summary

In view of the above, it is necessary to address the above technical problem by providing a road boundary detection method and apparatus, a computer device, and a storage medium that can improve boundary detection accuracy.

A road boundary detection method, the method comprising:

acquiring point cloud data collected by a detection device for a target road, the point cloud data including the point cloud position and reflection intensity of each point cloud;

projecting the point cloud data into a two-dimensional image to obtain a first image, the first image containing an image point corresponding to each point cloud, the image gray value of each image point being determined according to the point cloud position and reflection intensity of the corresponding point cloud;

determining a target segmentation threshold of the first image according to the image gray values;

segmenting the first image according to the target segmentation threshold to obtain a second image, and determining the road boundary based on the second image.

A road boundary detection apparatus, the apparatus comprising:

an acquisition module for acquiring point cloud data collected by a detection device for a target road, the point cloud data including the point cloud position and reflection intensity of each point cloud;

a projection module for projecting the point cloud data into a two-dimensional image to obtain a first image, the first image containing an image point corresponding to each point cloud, the image gray value of each image point being determined according to the point cloud position and reflection intensity of the corresponding point cloud;

a determination module for determining a target segmentation threshold of the first image according to the image gray values;

a processing module for segmenting the first image according to the target segmentation threshold to obtain a second image, and determining the road boundary based on the second image.

A computer device, comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:

acquiring point cloud data collected by a detection device for a target road, the point cloud data including the point cloud position and reflection intensity of each point cloud;

projecting the point cloud data into a two-dimensional image to obtain a first image, the first image containing an image point corresponding to each point cloud, the image gray value of each image point being determined according to the point cloud position and reflection intensity of the corresponding point cloud;

determining a target segmentation threshold of the first image according to the image gray values;

segmenting the first image according to the target segmentation threshold to obtain a second image, and determining the road boundary based on the second image.

A computer-readable storage medium, on which a computer program is stored, the computer program implementing the following steps when executed by a processor:

acquiring point cloud data collected by a detection device for a target road, the point cloud data including the point cloud position and reflection intensity of each point cloud;

projecting the point cloud data into a two-dimensional image to obtain a first image, the first image containing an image point corresponding to each point cloud, the image gray value of each image point being determined according to the point cloud position and reflection intensity of the corresponding point cloud;

determining a target segmentation threshold of the first image according to the image gray values;

segmenting the first image according to the target segmentation threshold to obtain a second image, and determining the road boundary based on the second image.

With the above road boundary detection method and apparatus, computer device, and storage medium, point cloud data collected by a detection device for a target road is acquired, including the point cloud position and reflection intensity of each point cloud; the point cloud data is projected into a two-dimensional image to obtain a first image containing an image point for each point cloud, the image gray value of each image point being determined from the position and reflection intensity of the corresponding point cloud; a target segmentation threshold of the first image is determined from the image gray values; and the first image is segmented according to the target segmentation threshold to obtain a second image, from which the road boundary is determined. Accordingly, the position and reflection intensity information of the point clouds is converted into image gray values embodied in the first image; the resulting gray values can accurately distinguish the image points of road-surface point clouds from those of road-boundary point clouds. Segmenting the first image with a target segmentation threshold determined from these gray values therefore yields a second image that restores road boundary information more accurately and improves the boundary detection effect.
Brief Description of the Drawings

FIG. 1 is a schematic flowchart of a road boundary detection method in one embodiment.

FIG. 2 is a schematic diagram of a first image in one embodiment.

FIG. 3 is a schematic diagram of a second image in one embodiment.

FIG. 4 is a schematic flowchart of the step of determining the target segmentation threshold of the first image according to the image gray values in one embodiment.

FIG. 5 is a schematic diagram of an image gray value distribution in one embodiment.

FIG. 6 is a schematic flowchart of the step of determining the road boundary based on the second image in one embodiment.

FIG. 7 is a schematic diagram of a road boundary curve in one embodiment.

FIG. 8 is a schematic flowchart of a road boundary detection method in another embodiment.

FIG. 9 is a structural block diagram of a road boundary detection apparatus in one embodiment.

FIG. 10 is an internal structure diagram of a computer device in one embodiment.

FIG. 11 is an internal structure diagram of a computer device in one embodiment.
Detailed Description

To make the objectives, technical solutions, and advantages of this application clearer, the application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the application and not to limit it.

The road boundary detection method provided by this application can be applied in a vehicle intelligent driving system. The system includes an industrial computer and a detection device (for example, a lidar); the detection device can be mounted on the vehicle and collects point cloud data as the vehicle moves along the target road. The industrial computer acquires the point cloud data collected by the detection device, processes it to determine the road boundary, and may further control the vehicle to move along the road boundary.
In one embodiment, as shown in FIG. 1, a road boundary detection method is provided. Taking the method applied to an industrial computer as an example, it includes the following steps S102 to S108.

S102: Acquire point cloud data collected by a detection device for a target road, the point cloud data including the point cloud position and reflection intensity of each point cloud.

The detection device may be a lidar. A lidar works by emitting a detection signal (a laser beam) toward a target, comparing the received reflected signal with the emitted signal, and after appropriate processing obtaining information about the target such as distance, azimuth, height, and reflection intensity, so that the target can be detected, tracked, and identified. Specifically, the lidar is mounted on a vehicle driving on the target road and collects point cloud data of the road as the vehicle moves; the point cloud data includes the point cloud position and reflection intensity of each point cloud, where the point cloud position may be the coordinate position of the point cloud in the lidar coordinate system.

In one embodiment, the lidar is mounted on the vehicle roof, tilted downward toward the front at a certain angle (for example, 15 degrees), with a horizontal scan range of nearly 100 degrees and a vertical scan angle of nearly 40 degrees, so that the scan range covers most of the area directly ahead of the vehicle. After the lidar is installed, its extrinsic parameters are calibrated to obtain the lidar-to-body calibration parameters. After obtaining the raw point cloud data collected by the lidar, the industrial computer performs coordinate conversion on it, converting the point cloud data from the lidar coordinate system to the vehicle body coordinate system using the pre-calibrated lidar-to-body extrinsics. The conversion formula may be:

P_C = R · P_L + T

where P_L is a point cloud coordinate point in the lidar coordinate system (x axis forward, y axis left, z axis up), P_C is the corresponding point cloud coordinate point in the vehicle body coordinate system (x axis straight ahead of the vehicle, y axis to the left of the vehicle, z axis straight up), R is a 3×3 rotation matrix, and T is a translation vector. R and T can be calibrated according to actual conditions.
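The lidar-to-body conversion P_C = R · P_L + T described above can be sketched as follows. The rotation matrix and translation vector below are illustrative placeholders; in practice they come from extrinsic calibration.

```python
import numpy as np

def lidar_to_body(points_lidar, R, T):
    """Convert Nx3 lidar-frame points to the vehicle body frame: P_C = R * P_L + T."""
    return points_lidar @ R.T + T

# Illustrative calibration: identity rotation, lidar mounted 1.5 m above the body origin.
R = np.eye(3)
T = np.array([0.0, 0.0, 1.5])

points = np.array([[2.0, 0.5, -0.3]])   # one lidar point, in metres
body_points = lidar_to_body(points, R, T)
```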
S104: Project the point cloud data into a two-dimensional image to obtain a first image, the first image containing an image point corresponding to each point cloud, the image gray value of each image point being determined according to the point cloud position and reflection intensity of the corresponding point cloud.

Projecting the point cloud data into a two-dimensional image converts all the information contained in the point clouds into the image, where the point cloud data can be processed more conveniently. Each point cloud corresponds to one image point in the image; the position and reflection intensity information of each point cloud are converted into the position information and image gray value of the corresponding image point.

For example, consider a 1000×1000 single-channel grayscale image with the coordinate origin at the top left, the w axis representing the image width and the h axis representing the image height, and assume that image coordinate point (w = 500, h = 1000) represents the vehicle body coordinate origin in the image. The conversion between a point cloud's coordinates in the image coordinate system and in the body coordinate system is:

w′ = 500 − y·100;  h′ = 1000 − x·100

where w′ and h′ are the point cloud's coordinate values in the image coordinate system, and x and y are the point cloud's longitudinal and lateral values in the body coordinate system in metres (m), while image coordinate values are in centimetres (cm). Thus, the position information of the point cloud along the x and y axes of the body coordinate system is converted into the coordinate values of the corresponding image point. The position information along the z axis of the body coordinate system represents the point cloud height; the height and reflection intensity values of the point cloud are each mapped into the pixel gray value space (0–255), so that they are converted into the image gray value of the corresponding image point. Referring to FIG. 2, which shows a schematic diagram of the first image in one embodiment, the first image contains image points corresponding to road-surface point clouds (hereinafter road-surface image points) and image points corresponding to road-boundary point clouds (hereinafter boundary image points).
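The projection above (1 pixel = 1 cm, body origin at image point (500, 1000)) can be sketched as a small helper; the image size and origin follow the example in the text.

```python
IMG_W, IMG_H = 1000, 1000  # single-channel grayscale image, origin at top left

def project_to_image(x, y):
    """Map body-frame x (forward) and y (left), in metres, to image pixel (w', h').

    Implements w' = 500 - y*100 and h' = 1000 - x*100 from the text,
    where 100 converts metres to centimetres (1 px = 1 cm).
    """
    w = int(round(500 - y * 100))
    h = int(round(1000 - x * 100))
    return w, h

# A point 2 m ahead of the vehicle and 0.5 m to its right (y negative = right).
w, h = project_to_image(x=2.0, y=-0.5)
```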
S106: Determine a target segmentation threshold of the first image according to the image gray values.

The target segmentation threshold is used to separate the road-surface image points from the boundary image points in the first image. Because road-surface point clouds and boundary point clouds differ in point cloud position and reflection intensity, the image gray values of their corresponding image points also differ; from the image gray values in the first image, a gray value that can distinguish road-surface image points from boundary image points is determined as the target segmentation threshold.

S108: Segment the first image according to the target segmentation threshold to obtain a second image, and determine the road boundary based on the second image.

According to the target segmentation threshold, the image points of the first image can be divided into two classes, one corresponding to road-surface image points and the other to boundary image points. Specifically, the first image can be binarized: the gray value of the segmented road-surface image points is set to 255 (white) and that of the segmented boundary image points to 0 (black), yielding the second image. Referring to FIG. 3, which shows a schematic diagram of the second image in one embodiment, compared with the first image shown in FIG. 2, the road-surface image points have been removed from the second image and the boundary image points retained.

With the above road boundary detection method, the position and reflection intensity information of the point clouds is converted into image gray values embodied in the first image; the resulting gray values can accurately distinguish the image points of road-surface point clouds from those of road-boundary point clouds. Segmenting the first image with the target segmentation threshold determined from the image gray values therefore yields a second image that restores road boundary information more accurately and improves the boundary detection effect.
In one embodiment, the point cloud position includes the point cloud height, and the image gray value of the image point corresponding to each point cloud is determined as follows: the image gray value is determined according to each point cloud's height with its corresponding first conversion factor and first distribution weight, and each point cloud's reflection intensity with its corresponding second conversion factor and second distribution weight.

The first conversion factor and second conversion factor are used to convert the point cloud height and reflection intensity, respectively, into gray values between 0 and 255. The first distribution weight and second distribution weight represent the respective weights of the converted height and intensity gray values, and the two weights sum to 1. The gray values converted from the height and reflection intensity, together with their weights, jointly determine the final image gray value of the image point corresponding to the point cloud.

Specifically, this determination may include the following steps: multiplying each point cloud's height and reflection intensity by the first conversion factor and second conversion factor, respectively, to obtain a first converted gray value and a second converted gray value of each point cloud; multiplying the first and second converted gray values of each point cloud by the first distribution weight and second distribution weight, respectively, to obtain a first weighted gray value and a second weighted gray value of each point cloud; and adding the first weighted gray value and second weighted gray value of each point cloud to determine the image gray value of the corresponding image point.

The image gray value of the image point corresponding to each point cloud is computed as:

P = m·j·z + n·k·i

where z and i denote the point cloud height and reflection intensity, j and k denote the first and second conversion factors, j·z and k·i denote the first converted gray value (between 0 and 255) and second converted gray value (between 0 and 255), m and n denote the first and second distribution weights with m + n = 1, and P denotes the image gray value, 0 ≤ P ≤ 255. j, k, m, and n can be calibrated according to actual conditions.

In this embodiment, the height and reflection intensity information is converted into gray values through the corresponding conversion factors, and the converted gray values together with their distribution weights jointly determine the image gray value of the image point corresponding to each point cloud. The gray values determined in this way can accurately distinguish road-surface image points from boundary image points and help improve the boundary detection effect.
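The gray value formula P = m·j·z + n·k·i can be sketched as follows; the calibration values for j, k, m, n below are illustrative assumptions, not values given in the text.

```python
def image_gray_value(z, i, j, k, m, n):
    """P = m*(j*z) + n*(k*i), clamped into the pixel gray value space [0, 255].

    z: point cloud height, i: reflection intensity,
    j, k: first/second conversion factors, m, n: distribution weights (m + n = 1).
    """
    p = m * (j * z) + n * (k * i)
    return max(0.0, min(255.0, p))

# Illustrative calibration: heights up to 2.55 m, intensities already in [0, 255].
j, k = 100.0, 1.0
m, n = 0.6, 0.4
p = image_gray_value(z=0.15, i=120, j=j, k=k, m=m, n=n)  # a low curb point
```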
In one embodiment, before determining the target segmentation threshold of the first image according to the image gray values, the method further includes: performing block processing on the first image according to the position of each image point in the first image to obtain at least two block images. Determining the target segmentation threshold then includes: for each block image, determining a block segmentation threshold of the block image according to the image gray values of the image points in that block, the target segmentation threshold comprising the block segmentation thresholds. Segmenting the first image according to the target segmentation threshold to obtain the second image then includes: segmenting each corresponding block image according to its block segmentation threshold to obtain the second image.

When the vehicle encounters an uneven road surface while driving, it may bounce up and down or side to side, causing large fluctuations in point cloud height; point clouds near the vehicle body fluctuate less and those far from it fluctuate more, making it difficult to find a single segmentation threshold that accurately segments both the near and far parts of the point cloud. Therefore, before determining the target segmentation threshold of the first image, the first image is divided into blocks. Specifically, the first image can be divided into an upper part and a lower part, i.e., two block images, where the lower part represents the point cloud image directly ahead and close to the vehicle body (0–5 m) and the upper part represents the point cloud image directly ahead and farther from the vehicle body (5–10 m). After the block images are obtained, each block image is processed as follows: a block segmentation threshold is determined according to the image gray values of the image points in the block, and the block image is then segmented according to its block segmentation threshold. After each block image is segmented, the segmented block images together form the second image.

It should be noted that the block processing is not limited to upper/lower blocks; other partitions can be used according to the actual situation — for example, left/right blocks can be considered when the road surface tilts sideways.

In this embodiment, dividing the first image into blocks, determining the segmentation threshold of each block separately, and segmenting each block separately reduces the influence on the segmentation thresholds of point cloud fluctuations caused by uneven roads or vehicle motion, improving the segmentation accuracy of road-surface image points and boundary image points.
In one embodiment, as shown in FIG. 4, determining the target segmentation threshold of the first image according to the image gray values may specifically include the following steps S1062 to S1066.

S1062: Obtain a gray value distribution map according to the image gray values and the number of image points corresponding to each; the first coordinate of the gray value distribution map represents the image gray value, and the second coordinate represents the number of image points.

Specifically, a histogram of the image gray values of all image points in the first image can be computed to obtain the gray value distribution map; image gray values range from 0 to 255, i.e., 256 values. For example, the first coordinate can be divided into 26 bins, each of the first 25 containing 10 gray values (0–9, 10–19, …, 240–249) and the 26th containing 6 gray values (250–255). Referring to FIG. 5, which shows a schematic image gray value distribution in one embodiment, the abscissa is the first coordinate, representing the image gray value, and the ordinate is the second coordinate, representing the number of image points with that gray value.

S1064: Detect the peaks of the gray value distribution map along the second coordinate, and determine the first peak and second peak with the largest second-coordinate values among the peaks.

The gray value distribution map may contain multiple peaks along the second (ordinate) coordinate; intuitively, the more image points share similar gray values, the more likely those points form a peak along the ordinate. If more than two peaks are detected along the ordinate, the ordinate value of each peak is obtained and the two peaks with the largest ordinate values (i.e., the top two when the ordinate values are sorted in descending order) are taken as the first peak and the second peak, respectively. As shown in FIG. 5, there are two peaks along the ordinate; the higher one is taken as the first peak and the lower one as the second peak.

S1066: Based on whether other peaks exist between the first peak and the second peak, select the corresponding segmentation threshold determination method to determine the target segmentation threshold of the first image.

When no other peak exists between the first peak and the second peak, the road boundary can be considered distinct, with the boundary image points differing considerably in gray value from the road-surface image points. When other peaks exist between the first and second peaks, the road boundary can be considered indistinct: because the image gray value is related to the point cloud height, boundary image points low above the road surface have gray values close to those of road-surface image points and are hard to distinguish. In these two different cases, different segmentation threshold determination methods are used to determine the target segmentation threshold of the first image.

In this embodiment, by detecting the peaks of the gray value distribution map and selecting the corresponding threshold determination method based on whether other peaks exist between the first and second peaks, the segmentation of road-surface image points and boundary image points is improved, so that road boundary information is restored more accurately.
In one embodiment, when no other peak exists between the first peak and the second peak, the maximum inter-class variance value is determined according to the image gray values as the target segmentation threshold of the first image.

Specifically, the maximum inter-class variance method (Otsu's method) can be used to compute the target segmentation threshold of the first image. For the first image, denote the target segmentation threshold separating the foreground (the boundaries, i.e., the boundaries of all objects other than the road surface) from the background (the road surface) as T; the proportion of foreground image points in the whole image as w0, with mean gray value u0; the proportion of background image points as w1, with mean gray value u1; the overall mean gray value of the image as u; the inter-class variance as g; the total number of image points as W×H; the number of image points with gray value below the threshold T as N0 and above T as N1. Then:

w0 = N0/(W×H), w1 = N1/(W×H), w0 + w1 = 1, N0 + N1 = W×H,
u = u0·w0 + u1·w1,
g = w0·(u − u0)² + w1·(u − u1)² = w0·w1·(u0 − u1)².

The inter-class variance value is evaluated for every gray level, and the level giving the maximum inter-class variance value is taken as the target segmentation threshold T.

In this embodiment, the target segmentation threshold of the first image is determined by the maximum inter-class variance value. Since the inter-class variance measures how far the foreground and background gray values deviate from the overall mean gray value — and the larger the deviation, the better the segmentation — using the maximum inter-class variance value as the target segmentation threshold segments road-surface image points and boundary image points well in most scenes.
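Otsu's maximum inter-class variance criterion described above can be sketched as a direct search over all 256 gray levels, maximising g = w0·w1·(u0 − u1)².

```python
import numpy as np

def otsu_threshold(gray_values):
    """Return the gray level maximising the inter-class variance g = w0*w1*(u0-u1)^2."""
    hist = np.bincount(np.asarray(gray_values, dtype=np.int64), minlength=256).astype(float)
    total = hist.sum()
    best_t, best_g = 0, -1.0
    for t in range(256):
        w0 = hist[:t + 1].sum() / total      # proportion below or at threshold
        w1 = 1.0 - w0                        # proportion above threshold
        if w0 == 0.0 or w1 == 0.0:
            continue
        u0 = (hist[:t + 1] * np.arange(t + 1)).sum() / hist[:t + 1].sum()
        u1 = (hist[t + 1:] * np.arange(t + 1, 256)).sum() / hist[t + 1:].sum()
        g = w0 * w1 * (u0 - u1) ** 2
        if g > best_g:
            best_g, best_t = g, t
    return best_t

# Two well-separated gray value clusters, e.g. road surface (10) and boundary (200).
vals = [10] * 500 + [200] * 100
t = otsu_threshold(vals)
```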
In one embodiment, when at least one other peak exists between the first peak and the second peak, a peak adjacent to the first peak is selected from the at least one other peak as the third peak; the second-coordinate value of the first peak is greater than or equal to that of the second peak. The target segmentation threshold of the first image is then determined according to the image gray value corresponding to the minimum second-coordinate value between the first peak and the third peak.

The second-coordinate values of all peaks are sorted in descending order; the first peak corresponds to the largest value and the second peak to the second largest, while the third peak lies between the first and second peaks and is adjacent to the first peak. The image gray value corresponding to the minimum second-coordinate value between the first peak and the third peak is determined as the target segmentation threshold of the first image. For example, when the gray value distribution map is a histogram, the image gray value corresponding to that minimum is a gray value bin containing more than one gray value; all gray values in the bin can be summed and divided by the number of gray values in the bin to obtain the bin's mean gray value, which is used as the target segmentation threshold.

In this embodiment, the target segmentation threshold of the first image is determined by the image gray value corresponding to the minimum second-coordinate value between the first and third peaks. Since the image gray value is related to the point cloud height, boundary image points low above the road surface have gray values close to those of road-surface image points; road-surface image points are the most numerous and correspond to the first peak, while the third peak, adjacent to the first peak, corresponds to the boundary image points low above the road surface. Using the gray value at the minimum between the first and third peaks as the target segmentation threshold therefore improves the segmentation between road-surface image points and low boundary image points, so that road boundary information is restored more accurately.
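The peak-based rule above (step S808-style: find the tallest peak, its adjacent "third" peak, and take the valley between them) can be sketched as follows. The simple local-maximum peak detector and the bin width of 10 follow the histogram example in the text; everything else is an illustrative assumption.

```python
def find_peaks(counts):
    """Indices of simple local maxima of the histogram along the second coordinate."""
    return [i for i in range(1, len(counts) - 1)
            if counts[i] > counts[i - 1] and counts[i] > counts[i + 1]]

def peak_based_threshold(counts, bin_width=10):
    """Threshold from the valley between the first (tallest) peak and the adjacent peak.

    counts: histogram of image gray values (index = bin, value = number of image points).
    Returns the mean gray value of the minimum bin between the first peak and the
    'third' peak, or None when no other peak lies between the two tallest peaks
    (in which case Otsu's method would be used instead).
    """
    peaks = sorted(find_peaks(counts), key=lambda i: counts[i], reverse=True)
    p1, p2 = sorted(peaks[:2])                       # positions of the two tallest peaks
    between = [i for i in peaks[2:] if p1 < i < p2]  # other peaks between them
    if not between:
        return None                                  # fall back to Otsu's method
    first = peaks[0]                                 # tallest peak
    third = min(between, key=lambda i: abs(i - first))
    lo, hi = sorted((first, third))
    valley = min(range(lo, hi + 1), key=lambda i: counts[i])
    return valley * bin_width + (bin_width - 1) / 2.0  # mean gray value of the valley bin

# Tallest peak at bin 3 (road surface), second at bin 8, a third peak at bin 5.
counts = [0, 2, 5, 50, 8, 20, 4, 6, 30, 3, 0]
threshold = peak_based_threshold(counts)
```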
In one embodiment, as shown in FIG. 6, determining the road boundary based on the second image may specifically include the following steps S602 to S608.

S602: Extract the boundary contour image points on the target side from the second image.

Besides the image points of the road edge, the boundary image points in the second image may also include the image points of boundaries such as guardrails and flower beds. For intelligent sweeper applications, the contour of the road edge matters most, since it enables edge-following cleaning; therefore, image points not belonging to the road edge can be filtered out, keeping only the contour image points of the road edge. Furthermore, when driving along a boundary, the sweeper follows either the left edge or the right edge; according to driving convention it follows the right edge by default, so the boundary contour image points on the left side of the second image can be filtered out, keeping those on the right (i.e., target) side.

Specifically, before extracting the boundary contour image points of the second image, the second image is preprocessed; the preprocessing may include removing discrete points and erosion/dilation. This is because when the lidar resolution is relatively low, the image points projected into the image are scattered; preprocessing both filters the lidar's occasional false detections and preserves the integrity of the boundary. After the above preprocessing, an image-processing contour-finding method can be used to save all contours of the second image into an array sequence. Within the right boundary contour, the image points between the first image point found when traversing the image from left to right and the image point closest to the vehicle body are considered the required boundary contour image points, and among them those toward the left of the image are selected as the final boundary contour image points.
S604: When the boundary contour formed from the boundary contour image points is discontinuous, obtain the positions of the two end points of the boundary contour at each break, and determine the connection curve of the boundary contour at each break according to the positions of the end points and the tangents at the end points.

The boundary contour image points extracted from the second image may have missing segments, making the boundary contour formed from them discontinuous; in addition, if multiple boundary contours are included, the different contours are also discontinuous with respect to one another. Based on this, Hermite interpolation can be applied between the two end points at each break of the boundary contour: according to the Hermite principle that "a curve is determined by the coordinates of its two end points and the tangents at those end points", the coordinates of each end point are obtained, the tangent at each end point is computed from its coordinates and those of its neighbouring points, and the connection curve at each break is thereby obtained.

S606: Interpolate between the two end points at each break of the boundary contour according to each connection curve, to obtain the interpolated boundary contour image points.

After the connection curves of the boundary contour at the breaks are obtained, Hermite interpolation between the two end points at each break fills in the partially missing boundary contour and connects all the boundary contours, so that the interpolated boundary contour image points represent the boundary contour more completely.
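The Hermite interpolation above ("two end points plus their tangents determine a curve") can be sketched with the standard cubic Hermite basis; the tangent values in the example are an illustrative assumption, standing in for tangents estimated from neighbouring contour points.

```python
def hermite_interpolate(p0, p1, t0, t1, n=10):
    """Fill a contour break with n interior points of the cubic Hermite curve.

    p0, p1: (x, y) end points of the break; t0, t1: tangent vectors at the end points.
    Uses the standard Hermite basis functions h00, h10, h01, h11.
    """
    pts = []
    for k in range(1, n + 1):
        s = k / (n + 1)                      # interior parameter values in (0, 1)
        h00 = 2 * s**3 - 3 * s**2 + 1
        h10 = s**3 - 2 * s**2 + s
        h01 = -2 * s**3 + 3 * s**2
        h11 = s**3 - s**2
        x = h00 * p0[0] + h10 * t0[0] + h01 * p1[0] + h11 * t1[0]
        y = h00 * p0[1] + h10 * t0[1] + h01 * p1[1] + h11 * t1[1]
        pts.append((x, y))
    return pts

# Break between (0, 0) and (10, 0); horizontal tangents of matching magnitude,
# so the fill should be a straight, evenly spaced segment.
fill = hermite_interpolate((0.0, 0.0), (10.0, 0.0), (10.0, 0.0), (10.0, 0.0), n=4)
```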
S608: Perform curve fitting on the interpolated boundary contour image points to obtain a road boundary curve, and determine the road boundary based on the road boundary curve.

Specifically, a B-spline curve is used to fit the interpolated boundary contour image points. Compared with other fitting methods (such as least-squares fitting), B-spline fitting yields a road boundary curve closer to the real road boundary, thereby meeting the sweeper's high-precision requirement for edge-following cleaning. Referring to FIG. 7, which shows a schematic road boundary curve in one embodiment, the final road boundary curve is the curve indicated by the arrow. After the final road boundary curve is obtained, the coordinates of each boundary point on it are converted from the image coordinate system back to the body coordinate system according to the conversion formula above, so that the sweeper can be controlled to clean along the road boundary curve.

In this embodiment, by extracting, interpolating, and fitting the road boundary contour image points contained in the second image, the obtained road boundary curve reflects road boundary information more completely and truthfully, improving the boundary detection effect.
In one embodiment, after the road boundary curve is obtained, the method further includes the following steps: filtering the road boundary curve to obtain a filtered road boundary curve, and determining the road boundary based on the filtered road boundary curve.

When road boundary detection is applied to an intelligent sweeper, various unexpected situations may arise in the actual detection environment; for example, strong vibration of the sweeper while driving may cause large fluctuations in the road boundary curve. Filtering the road boundary curve therefore reduces its fluctuation.

Specifically, a Kalman filter (KF) can be used to filter the road boundary curve. Since the sweeper moves at a relatively low, uniform speed during work, a constant-velocity motion model is suitable, and the Kalman filter is designed as follows.

The formulas of the Kalman filter prediction part include:

x′ = F x + f    (1)
P′ = F P Fᵀ + Q    (2)

where x is the state vector, F the state transition matrix, Fᵀ its transpose, f the external influence, x′ the updated state vector, P the state covariance matrix, Q the process noise matrix, and P′ the state covariance matrix after the state update.

The state vector is set to x = [x, y, v_x, v_y]ᵀ, where x and y are the coordinates of a boundary point in the image coordinate system and v_x and v_y are its velocity components. Under the constant-velocity model, f = 0 and

F = [[1, 0, Δt, 0], [0, 1, 0, Δt], [0, 0, 1, 0], [0, 0, 0, 1]].

Since the boundary points correspond to point clouds returned by the lidar and are measured directly, their position information can be obtained fairly accurately with low uncertainty, whereas their velocity cannot be measured and has high uncertainty. P can therefore be set to a diagonal matrix with small variances for the position components and large variances for the velocity components. Q affects the whole system, but it is difficult to determine how large that effect is, so Q is set to the identity matrix.

The formulas of the Kalman filter measurement part include:

y = Z − H x′    (3)
S = H P′ Hᵀ + R    (4)
K = P′ Hᵀ S⁻¹    (5)
x = x′ + K y    (6)
P = (I − K H) P′    (7)

where formulas (3), (6), and (7) are the observation formulas, and formulas (4) and (5) are used to compute the Kalman gain K. The observation value of a boundary point is Z = [x, y]ᵀ; H is the measurement matrix, which mainly maps the state vector space into the measurement space, and from Z = H x it follows that

H = [[1, 0, 0, 0], [0, 1, 0, 0]].

R is the measurement noise matrix, representing the difference between the measured value and the true value; it can be set according to the lidar's ranging accuracy of 2 cm. S is a temporary variable that simplifies the formulas, and I is the identity matrix of the same dimension as the state vector. With all the variables above known, formulas (6) and (7) update the state vector x and the state covariance matrix P; prediction and measurement iterate continuously, yielding a predicted state vector close to the true value.

In this embodiment, filtering the road boundary curve reduces the fluctuation of the boundary curve and improves the boundary detection effect. It can be understood that the filtering method is not limited to the above Kalman filter (KF); for example, the extended Kalman filter (EKF), the unscented Kalman filter (UKF), and other filtering methods can also be used to filter the road boundary curve.
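The constant-velocity Kalman filter described above can be sketched as follows; F, H, and Q follow the text, while the numeric values chosen for P, R, and Δt are illustrative assumptions.

```python
import numpy as np

def kf_predict(x, P, F, Q, f=0.0):
    """Equations (1)-(2): x' = F x + f,  P' = F P F^T + Q."""
    return F @ x + f, F @ P @ F.T + Q

def kf_update(x_pred, P_pred, Z, H, R):
    """Equations (3)-(7): innovation, gain, and state/covariance update."""
    y = Z - H @ x_pred                       # (3) innovation
    S = H @ P_pred @ H.T + R                 # (4) innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # (5) Kalman gain
    x = x_pred + K @ y                       # (6) state update
    P = (np.eye(len(x)) - K @ H) @ P_pred    # (7) covariance update
    return x, P

dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)    # constant-velocity model
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)    # only positions are observed
Q = np.eye(4)                                # process noise: identity, as in the text
P = np.diag([1.0, 1.0, 1000.0, 1000.0])      # confident position, uncertain velocity
R = np.eye(2) * 0.02**2                      # illustrative noise for ~2 cm ranging accuracy

x = np.array([100.0, 200.0, 0.0, 0.0])       # boundary point state [x, y, vx, vy]
x_pred, P_pred = kf_predict(x, P, F, Q)
x_new, P_new = kf_update(x_pred, P_pred, np.array([100.5, 200.5]), H, R)
```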
In one embodiment, as shown in FIG. 8, a road boundary detection method is provided. Taking the method applied to an industrial computer as an example, it includes the following steps S801 to S816.

S801: Acquire point cloud data collected by a lidar for a target road, the point cloud data including the point cloud position and reflection intensity of each point cloud.

S802: Perform coordinate conversion on the point cloud data, from the lidar coordinate system to the vehicle body coordinate system.

S803: Project the point cloud data converted to the body coordinate system into a two-dimensional image to obtain a first image containing an image point for each point cloud, so that the point cloud coordinates are converted from the body coordinate system to the image coordinate system; the image gray value of each image point is determined according to the height and reflection intensity of the corresponding point cloud.

S804: Perform block processing on the first image according to the position of each image point, obtaining at least two block images.

S805: For each block image, obtain a gray value distribution map according to the image gray values in the block and the number of image points corresponding to each; the first coordinate of the map represents the image gray value, and the second coordinate represents the number of image points.

S806: Detect the peaks of the gray value distribution map along the second coordinate, and determine the first and second peaks with the largest second-coordinate values among the peaks.

S807: Determine whether other peaks exist between the first peak and the second peak; if yes, go to step S808, otherwise go to step S809.

S808: From the other peaks, select the peak adjacent to the first peak as the third peak (the second-coordinate value of the first peak is greater than or equal to that of the second peak), and determine the target segmentation threshold of the block image according to the image gray value corresponding to the minimum second-coordinate value between the first and third peaks.

S809: Determine the maximum inter-class variance value according to the image gray values, as the block segmentation threshold of the block image.

S810: Segment each block image according to its block segmentation threshold, obtaining a second image.

S811: Extract the boundary contour image points on the target side from the second image.

S812: When the boundary contour formed from the boundary contour image points is discontinuous, obtain the positions of the two end points at each break, and determine the connection curve at each break according to the positions of the end points and the tangents at the end points.

S813: Interpolate between the two end points at each break of the boundary contour according to each connection curve, obtaining interpolated boundary contour image points.

S814: Perform curve fitting on the interpolated boundary contour image points to obtain a road boundary curve.

S815: Filter the road boundary curve to obtain a filtered road boundary curve.

S816: Convert the coordinates of each boundary point on the filtered road boundary curve from the image coordinate system to the vehicle body coordinate system, and determine the road boundary.

For the specific description of steps S801–S816, refer to the foregoing embodiments, which are not repeated here. In this embodiment, the point cloud data is projected into a two-dimensional image so that the point cloud height and reflection intensity are converted into image gray values, which provides important information for road boundary extraction. Dividing the image into blocks and calculating the segmentation threshold of each block separately reduces the influence on the segmentation thresholds of point cloud fluctuations caused by uneven roads or vehicle motion; and by detecting the peaks of the gray value distribution map and selecting the corresponding threshold determination method based on whether other peaks exist between the first and second peaks, the segmentation of road-surface image points and boundary image points is improved, so that road boundary information is restored more accurately.
It should be understood that although the steps in the flowcharts of FIGS. 1, 4, 6, and 8 are displayed sequentially in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the execution of these steps is not strictly ordered and they may be executed in other orders. Moreover, at least some of the steps in FIGS. 1, 4, 6, and 8 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times; their order of execution is also not necessarily sequential, and they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in FIG. 9, a road boundary detection apparatus 900 is provided, including: an acquisition module 910, a projection module 920, a determination module 930, and a processing module 940, wherein:

the acquisition module 910 is used to acquire point cloud data collected by a detection device for a target road, the point cloud data including the point cloud position and reflection intensity of each point cloud;

the projection module 920 is used to project the point cloud data into a two-dimensional image to obtain a first image, the first image containing an image point corresponding to each point cloud, the image gray value of each image point being determined according to the point cloud position and reflection intensity of the corresponding point cloud;

the determination module 930 is used to determine a target segmentation threshold of the first image according to the image gray values;

the processing module 940 is used to segment the first image according to the target segmentation threshold to obtain a second image, and determine the road boundary based on the second image.

In one embodiment, the point cloud position includes the point cloud height, and the projection module 920 further includes a gray value determination unit for determining the image gray value of the image point corresponding to each point cloud, according to each point cloud's height with its corresponding first conversion factor and first distribution weight, and each point cloud's reflection intensity with its corresponding second conversion factor and second distribution weight.

In one embodiment, the gray value determination unit is specifically used to: multiply each point cloud's height and reflection intensity by the first conversion factor and second conversion factor, respectively, to obtain a first converted gray value and a second converted gray value of each point cloud; multiply the first and second converted gray values of each point cloud by the first distribution weight and second distribution weight, respectively, to obtain a first weighted gray value and a second weighted gray value of each point cloud; and add the first weighted gray value and second weighted gray value of each point cloud to determine the image gray value of the corresponding image point.

In one embodiment, the determination module 930 further includes an image block unit for performing block processing on the first image according to the position of each image point, obtaining at least two block images. The determination module 930 is further used to determine, for each block image, the block segmentation threshold of the block image according to the image gray values of its image points; the target segmentation threshold includes each block segmentation threshold. The processing module 940 is further used to segment each block image according to its block segmentation threshold, obtaining the second image.

In one embodiment, the determination module 930 includes: a gray value distribution acquisition unit, a peak detection unit, and a segmentation threshold determination unit, wherein:

the gray value distribution acquisition unit is used to obtain a gray value distribution map according to the image gray values and the number of image points corresponding to each; the first coordinate of the map represents the image gray value, and the second coordinate represents the number of image points;

the peak detection unit is used to detect the peaks of the gray value distribution map along the second coordinate and determine the first and second peaks with the largest second-coordinate values among the peaks;

the segmentation threshold determination unit is used to select, based on whether other peaks exist between the first and second peaks, the corresponding segmentation threshold determination method to determine the target segmentation threshold of the first image.

In one embodiment, the segmentation threshold determination unit is specifically used to determine, when no other peak exists between the first and second peaks, the maximum inter-class variance value according to the image gray values as the target segmentation threshold of the first image.

In one embodiment, the segmentation threshold determination unit is specifically used to select, when at least one other peak exists between the first and second peaks, the peak adjacent to the first peak from the at least one other peak as the third peak, the second-coordinate value of the first peak being greater than or equal to that of the second peak; and to determine the target segmentation threshold of the first image according to the image gray value corresponding to the minimum second-coordinate value between the first and third peaks.

In one embodiment, the processing module 940 further includes: an extraction unit, a connection curve determination unit, an interpolation unit, and a fitting unit, wherein:

the extraction unit is used to extract the boundary contour image points on the target side from the second image;

the connection curve determination unit is used to obtain, when the boundary contour formed from the boundary contour image points is discontinuous, the positions of the two end points at each break, and determine the connection curve at each break according to the positions of the end points and the tangents at the end points;

the interpolation unit is used to interpolate between the two end points at each break of the boundary contour according to each connection curve, obtaining interpolated boundary contour image points;

the fitting unit is used to perform curve fitting on the interpolated boundary contour image points to obtain a road boundary curve, and determine the road boundary based on the road boundary curve.

In one embodiment, the processing module 940 further includes a filtering unit for filtering the road boundary curve to obtain a filtered road boundary curve, and determining the road boundary based on the filtered road boundary curve.

For the specific limitations of the road boundary detection apparatus, refer to the limitations of the road boundary detection method above, which are not repeated here. Each module of the above road boundary detection apparatus can be implemented in whole or in part by software, hardware, or a combination thereof. The above modules can be embedded in or independent of a processor of a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can call and execute the operations corresponding to each module.
In one embodiment, a computer device is provided. The computer device may be a server, and its internal structure may be as shown in FIG. 10. The computer device includes a processor, a memory, and a network interface connected through a system bus. The processor of the computer device is used to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system, a computer program, and a database, and the internal memory provides an environment for running the operating system and computer program in the non-volatile storage medium. The network interface of the computer device is used to communicate with external terminals through a network connection. The computer program, when executed by the processor, implements a road boundary detection method.

In one embodiment, a computer device is provided. The computer device may be a terminal, and its internal structure may be as shown in FIG. 11. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected through a system bus. The processor of the computer device is used to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running them. The communication interface of the computer device is used to communicate with external terminals in a wired or wireless manner; the wireless manner can be implemented through Wi-Fi, an operator network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements a road boundary detection method. The display screen of the computer device can be a liquid crystal display or an electronic ink display; the input device can be a touch layer covering the display screen, a button, trackball, or touchpad on the housing of the computer device, or an external keyboard, touchpad, or mouse.

A person skilled in the art can understand that the structures shown in FIG. 10 and FIG. 11 are only block diagrams of partial structures related to the solution of this application and do not constitute a limitation on the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown in the figures, combine certain components, or have a different arrangement of components.

In one embodiment, a computer device is provided, including a memory and a processor, the memory storing a computer program; the processor implements the steps in the above method embodiments when executing the computer program.

In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; the computer program, when executed by a processor, implements the steps in the above method embodiments.
It should be understood that the terms "first", "second", etc. in the above embodiments are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly specifying the number of the indicated technical features.

A person of ordinary skill in the art can understand that all or part of the processes in the above method embodiments can be completed by a computer program instructing related hardware. The computer program can be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, database, or other media used in the embodiments provided by this application may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, or optical memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM may take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).

The technical features of the above embodiments can be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as the combinations are not contradictory, they should be considered within the scope of this specification.

The above embodiments express only several implementations of this application; their description is relatively specific and detailed, but they should not therefore be understood as limiting the scope of the invention patent. It should be pointed out that a person of ordinary skill in the art can make several modifications and improvements without departing from the concept of this application, all of which fall within its protection scope. Therefore, the protection scope of this application patent shall be subject to the appended claims.

Claims (11)

  1. A road boundary detection method, the method comprising:
    acquiring point cloud data collected by a detection device for a target road, the point cloud data including the point cloud position and reflection intensity of each point cloud;
    projecting the point cloud data into a two-dimensional image to obtain a first image, the first image containing an image point corresponding to each point cloud, the image gray value of each image point being determined according to the point cloud position and reflection intensity of the corresponding point cloud;
    determining a target segmentation threshold of the first image according to the image gray values;
    segmenting the first image according to the target segmentation threshold to obtain a second image, and determining the road boundary based on the second image.
  2. The method according to claim 1, wherein the point cloud position includes the point cloud height, and the image gray value of the image point corresponding to each point cloud is determined by:
    determining the image gray value of the image point corresponding to each point cloud according to the point cloud height of each point cloud with its corresponding first conversion factor and first distribution weight, and the reflection intensity of each point cloud with its corresponding second conversion factor and second distribution weight.
  3. The method according to claim 2, wherein determining the image gray value of the image point corresponding to each point cloud according to the point cloud height of each point cloud with its corresponding first conversion factor and first distribution weight, and the reflection intensity of each point cloud with its corresponding second conversion factor and second distribution weight, comprises:
    multiplying the point cloud height and reflection intensity of each point cloud by the first conversion factor and the second conversion factor, respectively, to obtain a first converted gray value and a second converted gray value of each point cloud;
    multiplying the first converted gray value and the second converted gray value of each point cloud by the first distribution weight and the second distribution weight, respectively, to obtain a first weighted gray value and a second weighted gray value of each point cloud;
    adding the first weighted gray value and the second weighted gray value of each point cloud to determine the image gray value of the image point corresponding to each point cloud.
  4. The method according to claim 1, wherein before determining the target segmentation threshold of the first image according to the image gray values, the method further comprises: performing block processing on the first image according to the position of each image point in the first image to obtain at least two block images;
    determining the target segmentation threshold of the first image according to the image gray values comprises: for each block image, determining a block segmentation threshold of the block image according to the image gray values of the image points in the block image; the target segmentation threshold comprising each block segmentation threshold;
    segmenting the first image according to the target segmentation threshold to obtain a second image comprises: segmenting each corresponding block image according to its block segmentation threshold to obtain the second image.
  5. The method according to claim 1, wherein determining the target segmentation threshold of the first image according to the image gray values comprises:
    obtaining a gray value distribution graph according to the image gray values and the number of image points corresponding to each of them, wherein a first coordinate of the gray value distribution graph represents the image gray value and a second coordinate represents the number of image points;
    detecting peaks of the gray value distribution graph in the second-coordinate direction, and determining, among the peaks, a first peak and a second peak having the largest second-coordinate values; and
    determining the target segmentation threshold of the first image by selecting a corresponding threshold determination method based on whether another peak exists between the first peak and the second peak.
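The peak detection of claim 5 reduces to finding local maxima of the gray-value histogram and keeping the two with the largest counts. The plateau-handling rule (`>` on the left, `>=` on the right) is an illustrative choice, not taken from the claim.

```python
def two_main_peaks(hist):
    """Local maxima of a gray-value histogram; return the gray values
    of the two with the largest counts (the first and second peaks)."""
    peaks = [i for i in range(1, len(hist) - 1)
             if hist[i] > hist[i - 1] and hist[i] >= hist[i + 1]]
    peaks.sort(key=lambda i: hist[i], reverse=True)
    return sorted(peaks[:2])
```

Whether any further local maximum lies between the two returned gray values then selects the branch of claim 6 or claim 7.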
  6. The method according to claim 5, wherein determining the target segmentation threshold of the first image by selecting a corresponding threshold determination method based on whether another peak exists between the first peak and the second peak comprises:
    when no other peak exists between the first peak and the second peak, determining a maximum between-class variance value according to the image gray values as the target segmentation threshold of the first image.
  7. The method according to claim 5, wherein determining the target segmentation threshold of the first image by selecting a corresponding threshold determination method based on whether another peak exists between the first peak and the second peak comprises:
    when at least one other peak exists between the first peak and the second peak, selecting, from the at least one other peak, the peak adjacent to the first peak as a third peak, wherein the second-coordinate value of the first peak is greater than or equal to that of the second peak; and
    determining the target segmentation threshold of the first image according to the image gray value corresponding to the minimum second-coordinate value between the first peak and the third peak.
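Claim 7's branch, thresholding at the valley (minimum count) between the first peak and its adjacent third peak, can be sketched as:

```python
def valley_threshold(hist, first_peak, third_peak):
    """Gray value of the minimum count between the first and third
    peaks, used as the target segmentation threshold."""
    lo, hi = sorted((first_peak, third_peak))
    segment = list(hist[lo:hi + 1])
    return lo + segment.index(min(segment))
```

If several positions in the valley share the minimum count, this sketch picks the lowest gray value; the claim does not specify a tie-breaking rule.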
  8. The method according to any one of claims 1 to 7, wherein determining the road boundary based on the second image comprises:
    extracting boundary contour image points on a target side from the second image;
    when a boundary contour formed based on the boundary contour image points is discontinuous, obtaining the positions of the two endpoints of the boundary contour at each break, and determining a connecting curve of the boundary contour at each break according to the positions of the endpoints and the tangents at the endpoints;
    interpolating between the two endpoints at each break of the boundary contour according to the connecting curves to obtain interpolated boundary contour image points; and
    performing curve fitting on the interpolated boundary contour image points to obtain a road boundary curve, and determining the road boundary based on the road boundary curve.
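Claim 8 pairs endpoint positions with endpoint tangents to bridge each break, which is what a cubic Hermite segment encodes, and then fits the completed contour. Both the Hermite form and the least-squares polynomial of fixed degree below are illustrative choices, not taken from the claim.

```python
import numpy as np

def bridge_gap(p0, p1, t0, t1, n=10):
    """Cubic Hermite curve between gap endpoints p0/p1 with tangent
    vectors t0/t1; returns n interpolated 2D points along the gap."""
    p0, p1, t0, t1 = map(np.asarray, (p0, p1, t0, t1))
    s = np.linspace(0.0, 1.0, n)[:, None]
    h00 = 2 * s**3 - 3 * s**2 + 1   # Hermite basis functions
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return h00 * p0 + h10 * t0 + h01 * p1 + h11 * t1

def fit_boundary(points, degree=3):
    """Least-squares polynomial fit y = f(x) over the (interpolated)
    boundary points; returns the coefficients of the boundary curve."""
    points = np.asarray(points, dtype=float)
    return np.polyfit(points[:, 0], points[:, 1], degree)
```

Matching the endpoint tangents is what keeps the bridged segment from kinking where it meets the detected contour, so the final fitted boundary curve stays smooth across each former gap.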
  9. A road boundary detection apparatus, the apparatus comprising:
    an acquisition module configured to obtain point cloud data collected by a detection device for a target road, the point cloud data including a point cloud position and a reflection intensity of each point cloud;
    a projection module configured to project the point cloud data onto a two-dimensional image to obtain a first image, the first image containing an image point corresponding to each of the point clouds, wherein an image gray value of the image point corresponding to each point cloud is determined according to the point cloud position and the reflection intensity of that point cloud;
    a determination module configured to determine a target segmentation threshold of the first image according to the image gray values; and
    a processing module configured to segment the first image according to the target segmentation threshold to obtain a second image, and determine a road boundary based on the second image.
  10. A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 8.
  11. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 8.
PCT/CN2021/088583 2020-05-13 2021-04-21 Road boundary detection method and apparatus, computer device, and storage medium WO2021227797A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010400797.X 2020-05-13
CN202010400797.XA CN113673274A (zh) 2020-05-13 2020-05-13 Road boundary detection method and apparatus, computer device, and storage medium

Publications (1)

Publication Number Publication Date
WO2021227797A1 true WO2021227797A1 (zh) 2021-11-18

Family

ID=78526357

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/088583 WO2021227797A1 (zh) 2020-05-13 2021-04-21 Road boundary detection method and apparatus, computer device, and storage medium

Country Status (2)

Country Link
CN (1) CN113673274A (zh)
WO (1) WO2021227797A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115131761A (zh) * 2022-08-31 2022-09-30 北京百度网讯科技有限公司 Road boundary recognition method, drawing method, apparatus, and high-precision map
CN117368879A (zh) * 2023-12-04 2024-01-09 北京海兰信数据科技股份有限公司 Radar image generation method and apparatus, terminal device, and readable storage medium
CN117764992A (zh) * 2024-02-22 2024-03-26 山东乔泰管业科技有限公司 Plastic pipe quality detection method based on image processing

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114155258A (zh) * 2021-12-01 2022-03-08 苏州思卡信息系统有限公司 Method for detecting an enclosed area of highway construction

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106067003A (zh) * 2016-05-27 2016-11-02 山东科技大学 Automatic extraction method for road vector marking lines from vehicle-mounted laser scanning point clouds
CN109766878A (zh) * 2019-04-11 2019-05-17 深兰人工智能芯片研究院(江苏)有限公司 Lane line detection method and device
CN110163047A (zh) * 2018-07-05 2019-08-23 腾讯大地通途(北京)科技有限公司 Method and apparatus for detecting lane lines
CN110502973A (zh) * 2019-07-05 2019-11-26 同济大学 Automated road marking extraction and recognition method based on vehicle-mounted laser point clouds
US20200026930A1 (en) * 2018-07-20 2020-01-23 Boe Technology Group Co., Ltd. Lane line detection method and apparatus
CN110866449A (zh) * 2019-10-21 2020-03-06 北京京东尚科信息技术有限公司 Method and apparatus for recognizing target objects in a road

Also Published As

Publication number Publication date
CN113673274A (zh) 2021-11-19

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21802970; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21802970; Country of ref document: EP; Kind code of ref document: A1)