CN113673274A - Road boundary detection method, road boundary detection device, computer equipment and storage medium


Info

Publication number
CN113673274A
CN113673274A
Authority
CN
China
Prior art keywords: image, point cloud, gray value, point, determining
Prior art date
Legal status: Pending
Application number
CN202010400797.XA
Other languages
Chinese (zh)
Inventor
罗哲
肖振宇
李琛
周旋
Current Assignee
Changsha Intelligent Driving Research Institute Co Ltd
Original Assignee
Changsha Intelligent Driving Research Institute Co Ltd
Priority date
Filing date
Publication date
Application filed by Changsha Intelligent Driving Research Institute Co Ltd filed Critical Changsha Intelligent Driving Research Institute Co Ltd
Priority to CN202010400797.XA
Priority to PCT/CN2021/088583 (WO2021227797A1)
Publication of CN113673274A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/06Topological mapping of higher dimensional structures onto lower dimensional surfaces
    • G06T3/067Reshaping or unfolding 3D tree structures onto 2D planes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application relates to a road boundary detection method, a road boundary detection device, computer equipment and a storage medium. The method comprises the following steps: acquiring point cloud data collected by a detection device for a target road, wherein the point cloud data comprises the point cloud position and reflection intensity of each point cloud; projecting the point cloud data into a two-dimensional image to obtain a first image, wherein the first image comprises image points corresponding to each point cloud, and the image gray value of the image point corresponding to each point cloud is determined according to the point cloud position and reflection intensity of the point cloud; determining a target segmentation threshold of the first image according to the image gray values; and segmenting the first image according to the target segmentation threshold to obtain a second image, and determining the road boundary based on the second image. The method can improve the accuracy of boundary detection.

Description

Road boundary detection method, road boundary detection device, computer equipment and storage medium
Technical Field
The application relates to the technical field of intelligent driving, in particular to a road boundary detection method, a road boundary detection device, computer equipment and a storage medium.
Background
At present, road cleaning in cities relies mainly on large numbers of sanitation workers sweeping by hand, which is inefficient and labour-intensive. To improve working efficiency and reduce both the intensity and cost of manual labour, replacing manual sweeping with cleaning equipment of a high degree of intelligence has become the development trend of road cleaning work.
The intelligent electric sweeper is driven by electric energy and therefore does not pollute the environment while travelling. It is smaller than an ordinary sweeper and can clean places, such as parks and narrow lanes, that many large sweepers cannot reach, and it can use its on-board sensors to detect and track boundaries such as curbs, guardrails and flower beds, thereby realizing automatic edge-following sweeping.
However, when the current intelligent electric sweeper detects a road boundary, the detection is not accurate enough, so accurate edge-following sweeping cannot be achieved.
Disclosure of Invention
In view of the above technical problems, it is necessary to provide a road boundary detection method, apparatus, computer device and storage medium capable of improving the accuracy of boundary detection.
A road boundary detection method, the method comprising:
acquiring point cloud data collected by a detection device for a target road, wherein the point cloud data comprises the point cloud positions and reflection intensities of the point clouds;
projecting the point cloud data into a two-dimensional image to obtain a first image, wherein the first image comprises image points corresponding to the point clouds, and the image gray value of the image point corresponding to the point clouds is determined according to the point cloud positions and the reflection intensity of the point clouds;
determining a target segmentation threshold of the first image according to each image gray value;
and according to the target segmentation threshold, performing segmentation processing on the first image to obtain a second image, and determining a road boundary based on the second image.
A road boundary detection apparatus, the apparatus comprising:
the acquisition module is used for acquiring point cloud data collected by the detection device for a target road, wherein the point cloud data comprises the point cloud positions and reflection intensities of the point clouds;
the projection module is used for projecting the point cloud data into a two-dimensional image to obtain a first image, wherein the first image comprises image points corresponding to the point clouds, and the image gray value of the image point corresponding to the point clouds is determined according to the point cloud positions and the reflection intensity of the point clouds;
the determining module is used for determining a target segmentation threshold of the first image according to each image gray value;
and the processing module is used for carrying out segmentation processing on the first image according to the target segmentation threshold value to obtain a second image, and determining a road boundary based on the second image.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring point cloud data collected by a detection device for a target road, wherein the point cloud data comprises the point cloud positions and reflection intensities of the point clouds;
projecting the point cloud data into a two-dimensional image to obtain a first image, wherein the first image comprises image points corresponding to the point clouds, and the image gray value of the image point corresponding to the point clouds is determined according to the point cloud positions and the reflection intensity of the point clouds;
determining a target segmentation threshold of the first image according to each image gray value;
and according to the target segmentation threshold, performing segmentation processing on the first image to obtain a second image, and determining a road boundary based on the second image.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring point cloud data collected by a detection device for a target road, wherein the point cloud data comprises the point cloud positions and reflection intensities of the point clouds;
projecting the point cloud data into a two-dimensional image to obtain a first image, wherein the first image comprises image points corresponding to the point clouds, and the image gray value of the image point corresponding to the point clouds is determined according to the point cloud positions and the reflection intensity of the point clouds;
determining a target segmentation threshold of the first image according to each image gray value;
and according to the target segmentation threshold, performing segmentation processing on the first image to obtain a second image, and determining a road boundary based on the second image.
According to the road boundary detection method and device, computer equipment and storage medium above, point cloud data collected by the detection device for a target road are obtained, the point cloud data comprising the point cloud position and reflection intensity of each point cloud; the point cloud data are projected into a two-dimensional image to obtain a first image, the first image comprising image points corresponding to each point cloud, the image gray value of each image point being determined according to the point cloud position and reflection intensity of the corresponding point cloud; a target segmentation threshold of the first image is determined according to the image gray values; and the first image is segmented according to the target segmentation threshold to obtain a second image, and the road boundary is determined based on the second image. In this way, the position information and reflection intensity information of the point clouds are converted into image gray values reflected in the first image, and the obtained image gray values can accurately distinguish the image points corresponding to the road surface point cloud from those corresponding to the road boundary point cloud. Segmenting the first image according to the target segmentation threshold determined from these image gray values therefore yields a second image that accurately restores the road boundary information, improving the boundary detection effect.
Drawings
Fig. 1 is a schematic flowchart of a road boundary detection method according to an embodiment.
FIG. 2 is a schematic diagram of a first image in one embodiment.
FIG. 3 is a diagram of a second image in one embodiment.
FIG. 4 is a flowchart illustrating the step of determining the target segmentation threshold for the first image according to the gray-level value of each image in one embodiment.
FIG. 5 is a diagram illustrating an exemplary gray scale value distribution of an image.
FIG. 6 is a flowchart illustrating the step of determining a road boundary based on the second image in one embodiment.
FIG. 7 is a schematic diagram of a road boundary curve in one embodiment.
Fig. 8 is a flowchart illustrating a road boundary detection method according to another embodiment.
Fig. 9 is a block diagram showing the structure of a road boundary detection device according to an embodiment.
FIG. 10 is a diagram showing an internal structure of a computer device according to an embodiment.
FIG. 11 is a diagram illustrating an internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The road boundary detection method can be applied to an intelligent vehicle driving system. The system comprises an industrial personal computer and a detection device (such as a lidar). The detection device can be mounted on a vehicle and collects point cloud data as the vehicle moves along a target road; the industrial personal computer obtains the point cloud data collected by the detection device, processes the data to determine the road boundary, and can further control the vehicle to move along the road boundary.
In one embodiment, as shown in fig. 1, a road boundary detection method is provided, which is described by taking an example of the method applied to an industrial personal computer, and includes the following steps S102 to S108.
S102, point cloud data collected by the detection equipment for the target road are obtained, and the point cloud data comprise point cloud positions and reflection intensities of the point clouds.
The detection device can be a lidar. The working principle of a lidar is to transmit a detection signal (a laser beam) towards a target and compare the received signal reflected from the target with the transmitted signal; after appropriate processing, relevant information about the target, such as distance, direction, height and reflection intensity, can be obtained, so that the target can be detected, tracked and identified. Specifically, the lidar is mounted on a vehicle travelling on the target road and collects point cloud data on the target road as the vehicle moves, the point cloud data comprising the point cloud position and reflection intensity of each point cloud, where the point cloud position can be the coordinate position of the point cloud in the lidar coordinate system.
In one embodiment, the lidar is mounted on the top of the vehicle, inclined downward at an angle (e.g., 15 degrees) toward the front. The horizontal scanning range of the lidar is approximately 100 degrees and its vertical scanning angle is approximately 40 degrees, so the scanning range can cover most of the area directly in front of the vehicle. After the lidar is installed, its extrinsic parameters are calibrated to obtain the lidar-to-vehicle-body calibration parameters. After obtaining the raw point cloud data collected by the lidar, the industrial personal computer performs a coordinate conversion: the point cloud data in the lidar coordinate system are converted into the vehicle body coordinate system using the pre-calibrated extrinsic parameters, and the conversion formula can be:
P_C = R·P_L + T
where P_L denotes a point cloud coordinate point in the lidar coordinate system (x axis forward, y axis to the left, z axis upward), P_C denotes the point cloud coordinate point in the vehicle body coordinate system (x axis pointing straight ahead of the vehicle, y axis to the vehicle's left, z axis straight up), R is a 3×3 rotation matrix, and T is a translation vector. R and T can be calibrated according to actual conditions.
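As a concrete illustration, the transform can be applied to a whole scan with a few lines of NumPy; the function below is a minimal sketch (the function name and array layout are assumptions, not part of the patent).

```python
import numpy as np

def lidar_to_body(points_lidar: np.ndarray, R: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Apply P_C = R * P_L + T to an (N, 3) array of lidar-frame points,
    returning the points in the vehicle body frame."""
    return points_lidar @ R.T + T
```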
And S104, projecting the point cloud data into a two-dimensional image to obtain a first image, wherein the first image comprises image points corresponding to the point clouds, and the image gray value of the image point corresponding to each point cloud is determined according to the point cloud position and the reflection intensity of the point cloud.
The point cloud data are projected into the two-dimensional image, so that all information contained in the point cloud is converted into the image, and the point cloud data can be processed more conveniently in the image. Each point cloud corresponds to one image point in the image, and the position information and the reflection intensity information of each point cloud are converted into the position information and the image gray value of the corresponding image point.
For example, in a 1000 × 1000 single-channel grayscale image, the coordinate origin is at the upper left of the image, the w coordinate axis represents the width of the image and the h coordinate axis represents the height of the image. Assuming that the image coordinate point (w = 500, h = 1000) represents the origin of the vehicle body coordinate system in the image, the conversion between the coordinates of a point cloud in the image coordinate system and its coordinates in the vehicle body coordinate system is:
w′ = 500 − y×100; h′ = 1000 − x×100
where w′ and h′ denote the coordinate values of the point cloud in the image coordinate system, and x and y denote the longitudinal and lateral values of the point cloud in the vehicle body coordinate system, in metres (m), while the image coordinate values are in centimetres (cm). In this way, the position information of the point cloud along the x and y axes of the vehicle body coordinate system is converted into the coordinate values of the corresponding image point. The position information of the point cloud along the z axis of the vehicle body coordinate system represents the height of the point cloud; the height and the reflection intensity of the point cloud are each mapped to the pixel gray value space (0-255), so that they are converted into the image gray value of the corresponding image point. Referring to fig. 2, which shows a schematic diagram of a first image in an embodiment, the first image includes image points corresponding to road surface point clouds (hereinafter, road surface image points) and image points corresponding to road boundary point clouds (hereinafter, boundary image points).
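Putting the coordinate mapping above together with the gray value formula P = m·j·z + n·k·i developed in the embodiments below, a projection routine might look like this sketch; the function name and the default values of j, k, m, n are placeholders, since the patent leaves these to calibration.

```python
import numpy as np

def project_to_image(points_body: np.ndarray, intensity: np.ndarray,
                     j: float = 50.0, k: float = 1.0,
                     m: float = 0.5, n: float = 0.5) -> np.ndarray:
    """Project (N, 3) body-frame points (x, y, z) in metres, with (N,)
    reflection intensities, into a 1000 x 1000 single-channel gray image."""
    img = np.zeros((1000, 1000), dtype=np.uint8)
    x, y, z = points_body[:, 0], points_body[:, 1], points_body[:, 2]
    w = np.round(500 - y * 100).astype(int)    # w' = 500 - y*100 (1 px = 1 cm)
    h = np.round(1000 - x * 100).astype(int)   # h' = 1000 - x*100
    gray = np.clip(m * j * z + n * k * intensity, 0, 255)  # P = m*j*z + n*k*i
    keep = (w >= 0) & (w < 1000) & (h >= 0) & (h < 1000)   # drop off-image points
    img[h[keep], w[keep]] = gray[keep].astype(np.uint8)
    return img
```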
And S106, determining a target segmentation threshold of the first image according to the gray value of each image.
The target segmentation threshold is used for segmenting road surface image points and boundary image points in the first image. The point cloud position and the reflection intensity of the road surface point cloud and the boundary point cloud are different, so that the image gray values of the corresponding image points are also different, and an image gray value capable of distinguishing the road surface image point from the boundary image point is determined from the image gray values in the first image and is used as a target segmentation threshold.
And S108, segmenting the first image according to the target segmentation threshold value to obtain a second image, and determining a road boundary based on the second image.
Image points in the first image can be divided into two types according to the target segmentation threshold, wherein one type corresponds to road surface image points, and the other type corresponds to boundary image points. Specifically, the first image may be binarized, and the second image may be obtained by setting the grayscale value of the divided road surface image point to 255 (white) and the grayscale value of the divided boundary image point to 0 (black). Referring to FIG. 3, a schematic diagram of a second image in an embodiment is shown, in which road surface image points are removed and boundary image points are retained, as compared to the first image shown in FIG. 2.
In the road boundary detection method above, the position information and reflection intensity information of the point clouds are converted into image gray values reflected in the first image, and the obtained image gray values can accurately distinguish the image points corresponding to the road surface point cloud from those corresponding to the road boundary point cloud; segmenting the first image according to the target segmentation threshold determined from these gray values therefore yields a second image that accurately restores the road boundary information, improving the boundary detection effect.
In one embodiment, the point cloud position includes a point cloud height, and the determining of the image gray value of the image point corresponding to each point cloud includes: and determining the image gray value of the image point corresponding to each point cloud according to the point cloud height of each point cloud, the corresponding first conversion factor and first distribution weight, the reflection intensity of each point cloud, the corresponding second conversion factor and second distribution weight.
The first conversion factor and the second conversion factor are used to convert the point cloud height and the reflection intensity, respectively, into gray values between 0 and 255. The first distribution weight and the second distribution weight represent the weights given to the converted point cloud height and the converted reflection intensity, respectively, and the two weights sum to 1. The converted gray values and their weights together determine the final image gray value of the image point corresponding to the point cloud.
Specifically, the step of determining the image gray value of the image point corresponding to each point cloud according to the point cloud height of each point cloud, the first conversion factor and the first distribution weight corresponding to each point cloud, and the reflection intensity of each point cloud, the second conversion factor and the second distribution weight corresponding to each point cloud may include the following steps: multiplying the point cloud height and the reflection intensity of each point cloud by the first conversion factor and the second conversion factor respectively to obtain a first conversion gray value and a second conversion gray value of each point cloud; multiplying the first conversion gray value and the second conversion gray value of each point cloud by the first distribution weight value and the second distribution weight value respectively to obtain a first weighted gray value and a second weighted gray value of each point cloud; and adding the first weighted gray value and the second weighted gray value of each point cloud to determine the image gray value of the image point corresponding to each point cloud.
The calculation formula of the image gray value of the image point corresponding to each point cloud is as follows:
P = m·j·z + n·k·i
where z and i denote the point cloud height and the reflection intensity respectively, j and k denote the first and second conversion factors respectively, j·z and k·i denote the first and second converted gray values (each between 0 and 255), m and n denote the first and second distribution weights with m + n = 1, and P denotes the image gray value, 0 ≤ P ≤ 255. j, k, m and n can be calibrated according to actual conditions.
In the embodiment, the point cloud height and the reflection intensity information are converted into the gray values through the conversion factors corresponding to the point cloud height and the reflection intensity, and the image gray values of the image points corresponding to the point cloud are determined together based on the gray values obtained after the point cloud height and the reflection intensity are converted and the corresponding distribution weights, so that the determined image gray values can be used for accurately distinguishing the road surface image points and the boundary image points, and the boundary detection effect is improved.
In one embodiment, before the step of determining the target segmentation threshold of the first image according to the gray-scale value of each image, the method further comprises: according to the position of each image point in the first image, carrying out blocking processing on the first image to obtain at least two blocked images; the step of determining a target segmentation threshold for the first image based on the respective image gray-scale values comprises: for each block image, determining a block segmentation threshold of the block image according to the image gray value of each image point in the block image, wherein the target segmentation threshold comprises each block segmentation threshold; according to the target segmentation threshold, the step of carrying out segmentation processing on the first image to obtain a second image comprises the following steps: and according to the block segmentation threshold value of each block image, respectively carrying out segmentation processing on the corresponding block image to obtain a second image.
When a vehicle travels on an uneven road surface it may pitch or roll, so the point cloud heights fluctuate considerably: point clouds close to the vehicle body fluctuate little, while those far from the vehicle body fluctuate strongly, and it is difficult to find a single segmentation threshold that accurately segments both the near and the far point clouds. Based on this, the first image is divided into blocks before the target segmentation threshold is determined; specifically, the first image may be divided into an upper part and a lower part, i.e. two block images, as shown in the sketch below, where the lower part represents the point cloud image in front of and close to the vehicle body (0-5 m) and the upper part represents the point cloud image in front of and farther from the vehicle body (5-10 m). After the block images are obtained, the following processing is performed for each block image: a block segmentation threshold is determined according to the image gray values of the image points in the block image, and the block image is then segmented according to that threshold. After every block image has been segmented, the segmented block images together form the second image.
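A minimal sketch of the two-block scheme, assuming boundary image points map to gray values above each block's threshold (the polarity depends on the gray mapping chosen); `threshold_fn` stands in for whichever block-threshold method of the later embodiments is used.

```python
import numpy as np

def segment_in_blocks(first_image: np.ndarray, threshold_fn) -> np.ndarray:
    """Split the first image into an upper and a lower block, threshold each
    block separately, and recombine the results into the second image
    (road surface -> 255/white, boundary -> 0/black)."""
    h = first_image.shape[0]
    second_image = np.empty_like(first_image)
    for sl in (slice(0, h // 2), slice(h // 2, h)):  # upper block, lower block
        block = first_image[sl]
        t = threshold_fn(block)
        second_image[sl] = np.where(block > t, 0, 255)
    return second_image
```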
The image blocking process is not limited to the vertical blocking, and other blocking processes may be performed according to actual circumstances, for example, when the road surface is inclined left and right, left and right blocking may be considered.
In this embodiment, by partitioning the first image, determining the segmentation threshold of each partitioned image, and performing segmentation processing on each partitioned image, the influence of point cloud fluctuation caused by uneven road surface or vehicle fluctuation on the segmentation threshold can be reduced, and the segmentation accuracy of road surface image points and boundary image points can be improved.
In an embodiment, as shown in fig. 4, the step of determining the target segmentation threshold of the first image according to the gray-level value of each image may specifically include the following steps S1062 to S1066.
S1062, obtaining a gray value distribution graph according to the gray value of each image and the number of the image points corresponding to the gray value, wherein the first coordinate of the gray value distribution graph represents the gray value of the image, and the second coordinate represents the number of the image points.
Specifically, histogram statistics may be performed on the image gray values of all image points in the first image to obtain the gray value distribution graph; the image gray values range from 0 to 255, i.e. 256 values in total. For example, the first coordinate may be divided into 26 parts, where each of the first 25 parts covers 10 gray values (0-9, 10-19, …, 240-249) and the 26th part covers 6 gray values (250-255). Referring to fig. 5, which shows a schematic diagram of an image gray value distribution in an embodiment, the abscissa (first coordinate) represents the image gray value and the ordinate (second coordinate) represents the number of image points with that gray value.
S1064, detecting wave crests in the gray value distribution diagram in the second coordinate direction, and determining a first wave crest and a second wave crest which are maximum in the second coordinate value in each wave crest.
The gray value distribution graph may have a plurality of peaks in the second coordinate (i.e. ordinate) direction; understandably, image points with similar gray values tend to pile up into a peak. If more than two peaks exist in the ordinate direction, the ordinate values of the peaks are obtained and the two peaks with the largest ordinate values (i.e. the two ranked first when the ordinates are sorted from large to small) are selected as the first peak and the second peak respectively. As shown in fig. 5, there are two peaks in the ordinate direction; the higher peak is taken as the first peak and the lower one as the second peak.
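A sketch of the histogram-and-peak step using the 26-bin layout above; scipy.signal.find_peaks is an assumed implementation choice, since the embodiments do not name a peak detector.

```python
import numpy as np
from scipy.signal import find_peaks

def two_main_peaks(first_image: np.ndarray):
    """Histogram the non-empty pixels (25 bins of 10 grays plus one of 6)
    and return the two highest peak bins plus the bin counts.
    Assumes empty pixels are 0 and at least two peaks exist."""
    grays = first_image[first_image > 0]
    edges = list(range(0, 251, 10)) + [256]    # 0-9, 10-19, ..., 250-255
    counts, _ = np.histogram(grays, bins=edges)
    peaks, _ = find_peaks(counts)
    top_two = peaks[np.argsort(counts[peaks])[-2:]][::-1]  # highest count first
    first_peak, second_peak = top_two
    return first_peak, second_peak, counts
```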
S1066, based on whether other peaks exist between the first peak and the second peak, selecting a corresponding segmentation threshold determination mode to determine a target segmentation threshold of the first image.
When no other peak exists between the first peak and the second peak, the road boundary is considered obvious, and the image gray values of the boundary image points differ markedly from those of the road surface image points. When other peaks exist between the first and second peaks, the road boundary is not obvious: because the image gray value is related to the point cloud height, the gray values of boundary image points that are low above the road surface are close to those of the road surface image points and are not easy to distinguish. In these two different cases, different segmentation threshold determination methods are used to determine the target segmentation threshold of the first image.
In this embodiment, the target segmentation threshold of the first image is determined by detecting the peak in the gray value distribution map and selecting a corresponding segmentation threshold determination mode based on whether other peaks exist between the first peak and the second peak, so that the segmentation effect of the road surface image point and the boundary image point can be improved, and the road boundary information can be restored more accurately.
In one embodiment, when no other peak exists between the first peak and the second peak, the threshold that maximizes the inter-class variance is determined from the image gray values and used as the target segmentation threshold of the first image.
Specifically, the maximum inter-class variance method (Otsu) may be used to calculate the target segmentation threshold of the first image. For the first image, the segmentation threshold between the foreground (i.e. the boundary, which may refer to all object boundaries other than the road surface) and the background (i.e. the road surface) is denoted T. The proportion of foreground image points among all image points is denoted w0 and their mean gray value u0; the proportion of background image points is denoted w1 and their mean gray value u1; the overall mean gray value of the image is denoted u and the inter-class variance is denoted g. The total number of image points is denoted M×N, the number of image points with gray value smaller than the threshold T is denoted N0, and the number of image points with gray value greater than the threshold T is denoted N1. Then: w0 = N0/(M×N), w1 = N1/(M×N), w0 + w1 = 1, N0 + N1 = M×N, u = u0·w0 + u1·w1, and g = w0·(u − u0)² + w1·(u − u1)² = w0·w1·(u0 − u1)². Each candidate gray level is traversed, and the threshold giving the maximum inter-class variance is taken as the target segmentation threshold T.
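A compact sketch of that search, written directly from the formulas above (a hypothetical helper, not the patent's own code):

```python
import numpy as np

def otsu_threshold(grays: np.ndarray) -> int:
    """Exhaustively try every threshold T and keep the one maximizing
    g = w0 * w1 * (u0 - u1)^2. grays: integer gray values in 0..255."""
    hist = np.bincount(grays.ravel(), minlength=256).astype(float)
    total = hist.sum()
    best_t, best_g = 0, -1.0
    for t in range(1, 256):
        w0 = hist[:t].sum() / total          # proportion of grays below T
        w1 = 1.0 - w0                        # proportion of grays at or above T
        if w0 == 0.0 or w1 == 0.0:
            continue
        u0 = (hist[:t] * np.arange(t)).sum() / (w0 * total)
        u1 = (hist[t:] * np.arange(t, 256)).sum() / (w1 * total)
        g = w0 * w1 * (u0 - u1) ** 2         # inter-class variance
        if g > best_g:
            best_t, best_g = t, g
    return best_t
```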
In this embodiment, the target segmentation threshold of the first image is determined by maximizing the inter-class variance. Since the inter-class variance measures how far the foreground and background gray values deviate from the overall mean gray value, and a larger deviation means a better segmentation, choosing the threshold with the maximum inter-class variance achieves a good segmentation of road surface image points and boundary image points in most scenes.
In one embodiment, when at least one other peak exists between the first peak and the second peak, a peak adjacent to the first peak is selected from the at least one other peak as a third peak; the second coordinate value of the first peak is greater than or equal to the second coordinate value of the second peak; and determining a target segmentation threshold of the first image according to the image gray value corresponding to the minimum second coordinate value between the first peak and the third peak.
The second coordinate values of all the peaks are sorted from large to small; the first peak is the one ranked first, the second peak the one ranked second, and the third peak lies between the first and second peaks and is adjacent to the first peak. The image gray value corresponding to the minimum second coordinate value between the first peak and the third peak is determined as the target segmentation threshold of the first image. For example, when the gray value distribution graph is a histogram, the image gray value corresponding to the minimum second coordinate value between the first and third peaks is a gray value interval containing more than one gray value; the mean gray value of the interval, obtained by adding all the gray values in the interval and dividing by their number, can then be used as the target segmentation threshold.
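Continuing the histogram sketch above, the valley bin and its mean gray value can be found as follows (again a hypothetical helper; `counts` and the bin edges come from the earlier histogram):

```python
import numpy as np

def valley_threshold(counts: np.ndarray, edges: list,
                     first_peak: int, third_peak: int) -> float:
    """Locate the bin with the smallest count between the first and third
    peaks and return the mean gray value of that bin."""
    lo, hi = sorted((first_peak, third_peak))
    valley = lo + int(np.argmin(counts[lo:hi + 1]))
    # each bin covers [edges[v], edges[v + 1]); mean of its integer grays
    return 0.5 * (edges[valley] + edges[valley + 1] - 1)
```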
In this embodiment, the target segmentation threshold of the first image is determined from the image gray value corresponding to the minimum second coordinate value between the first peak and the third peak. Since the image gray value is related to the point cloud height, the gray values of boundary image points low above the road surface are close to those of road surface image points; the road surface image points are the most numerous, so the first peak corresponds to them, while the adjacent third peak corresponds to the boundary image points low above the road surface. Selecting the gray value at the minimum second coordinate value between these two peaks as the target segmentation threshold therefore improves the segmentation of road surface image points from low boundary image points and restores the road boundary information more accurately.
In one embodiment, as shown in fig. 6, the step of determining the road boundary based on the second image may specifically include the following steps S602 to S608.
S602, extracting boundary contour image points on the object side from the second image.
In addition to image points at the road edge, the boundary image points in the second image may also include image points at the boundaries of guardrails, flower beds and the like. For an application such as the intelligent sweeper, what matters most is the contour of the road edge, so that the sweeper can clean along it; other image points not belonging to the road edge can therefore be filtered out, keeping only the contour image points at the road edge. Further, when the sweeper travels along a boundary it hugs either the left or the right boundary; assuming the habit of driving on the right, the boundary contour image points on the left side of the second image can be filtered out and those on the right side (i.e. the target side) retained.
Specifically, before the boundary contour image points are extracted, the second image is preprocessed; the preprocessing may include removing discrete points and applying erosion and dilation, because the image points projected into the image are discrete when the lidar resolution is low, and the preprocessing can filter out occasional false detections of the lidar while preserving the integrity of the boundary. After preprocessing, all contours in the second image can be stored in an array sequence using an image-processing contour-search method. Within the right-side boundary contour, the image points between the first image point encountered when traversing the image from left to right and the image point closest to the vehicle body are taken as candidate boundary contour image points, and among these the image points on the left side of the image are selected as the final boundary contour image points.
S604, when the boundary contour formed based on the image points of the boundary contour is discontinuous, the positions of two end points of the boundary contour at each disconnected position are obtained, and the connection curve of the boundary contour at each disconnected position is determined according to the positions of the end points and the tangent lines at the end points.
The boundary contour image points extracted from the second image may have partially missing segments, making the boundary contour formed from them discontinuous; if several boundary contours are included, different contours may also fail to connect. Based on this, Hermite interpolation can be used to interpolate between the two endpoints at each break of the boundary contour, following the Hermite principle that a curve is determined by the coordinates of its two endpoints and the tangents at those endpoints: the coordinates of each endpoint are obtained, the tangent at each endpoint is calculated from its coordinates and those of its neighbouring points, and the connection curve at each break is then obtained.
And S606, interpolating between two endpoints at each break of the boundary contour according to each connection curve to obtain interpolated boundary contour image points.
After the connection curves of the boundary contour at each disconnection position are obtained, Hermite interpolation is carried out between two endpoints at each disconnection position, the partially missing boundary contour can be supplemented, and all the boundary contours can be connected, so that the obtained image points of the interpolated boundary contour can embody the boundary contour more completely.
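For illustration, a cubic Hermite bridge between one pair of break endpoints can be evaluated with the standard Hermite basis functions; the helper below is a sketch (tangent estimation from neighbouring contour points is left to the caller).

```python
import numpy as np

def hermite_bridge(p0, p1, m0, m1, num: int = 20) -> np.ndarray:
    """Sample the cubic Hermite curve through endpoints p0, p1 with
    tangents m0, m1 at `num` parameter values; returns (num, 2) points."""
    t = np.linspace(0.0, 1.0, num)[:, None]
    h00 = 2 * t**3 - 3 * t**2 + 1    # weight of p0
    h10 = t**3 - 2 * t**2 + t        # weight of m0
    h01 = -2 * t**3 + 3 * t**2       # weight of p1
    h11 = t**3 - t**2                # weight of m1
    return (h00 * np.asarray(p0) + h10 * np.asarray(m0)
            + h01 * np.asarray(p1) + h11 * np.asarray(m1))
```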
And S608, performing curve fitting on the interpolated boundary contour image points to obtain a road boundary curve, and determining a road boundary based on the road boundary curve.
Specifically, a B-spline curve is used to fit the interpolated boundary contour image points; compared with other fitting methods (such as least-squares fitting), B-spline fitting yields a road boundary curve closer to the real road boundary, meeting the high-precision requirement of the sweeper for edge-following sweeping. Referring to fig. 7, which shows a schematic diagram of a road boundary curve in one embodiment, the final road boundary curve is the curve indicated by the arrow. After the final road boundary curve is obtained, the coordinates of each boundary point on it are converted from the image coordinate system back to the vehicle body coordinate system according to the conversion formula given earlier, so that the sweeper can be controlled to sweep along the road boundary curve.
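A sketch of the fit using SciPy's smoothing B-spline (splprep/splev is an assumed implementation choice; the smoothing factor and sample count are tunable placeholders):

```python
import numpy as np
from scipy.interpolate import splprep, splev

def fit_boundary_curve(points: np.ndarray, smooth: float = 5.0,
                       num: int = 200) -> np.ndarray:
    """points: (N, 2) boundary contour image points ordered along the
    contour. Returns (num, 2) samples of the fitted B-spline curve."""
    tck, _ = splprep([points[:, 0], points[:, 1]], s=smooth)
    u = np.linspace(0.0, 1.0, num)
    x, y = splev(u, tck)
    return np.column_stack([x, y])
```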
In this embodiment, by performing extraction, interpolation, and fitting processing on the road boundary contour image points included in the second image, the obtained road boundary curve can reflect the road boundary information more completely and truly, and the boundary detection effect is improved.
In one embodiment, after obtaining the road boundary curve, the method further comprises the following steps: and filtering the road boundary curve to obtain a filtered road boundary curve, and determining a road boundary based on the filtered road boundary curve.
When road boundary detection is applied to an intelligent sweeper, various unexpected situations may arise in the actual measurement environment; for example, the sweeper may shake severely while travelling, which can cause large fluctuations in the road boundary curve. Based on this, filtering the road boundary curve reduces its volatility.
Specifically, a Kalman filter (KF) may be used to filter the road boundary curve. Considering that the sweeper moves at a relatively low, uniform speed during operation, a uniform motion model is suitable, so the Kalman filter is designed as follows:
the formula of the kalman filter prediction section includes:
x′=Fx+f (1)
P′=FPFT+Q (2)
where x represents the state vector, F represents the state transition matrix, FTRepresenting state transition matrix transposition, f representing external influence, x 'representing the updated state vector, P representing the state covariance matrix, Q representing the process noise matrix, and P' representing the state covariance matrix after state update.
Setting the state vector to x ═ x, y, vx,vy]TWhere x and y represent the coordinates of the boundary point in the image coordinate system, v, respectivelyxAnd vyEach of the velocities representing the boundary points is a uniform motion model, and f may be set to 0,
Figure BDA0002489383350000131
the point cloud returned by the laser radar is corresponding to the boundary point and is obtained by direct measurement of the laser radar, and the speed of the boundary point cannot be measured, so that the position information of the boundary point can be accurately acquired, the uncertainty is low, and the uncertainty is high for the speed information of the boundary point. Can be provided with
Figure BDA0002489383350000132
Q has an influence on the entire system, but it is difficult to determine how much the influence is on the system, so that setting Q as an identity matrix, then
Figure BDA0002489383350000133
The formula of the kalman filter measurement section includes:
y=Z-Hx′ (3)
S=HP′HT+R (4)
K=P′HTS-1 (5)
x=x′+Ky (6)
P=(I-KH)P′ (7)
wherein the formulas (3), (6) and (7) are observation formulasEquations (4) and (5) are used to find the Kalman gain K, the observed value of the boundary point
Figure BDA0002489383350000134
H represents a measurement matrix, which is mainly used for converting a state vector space into a measurement space and is obtained according to Z ═ Hx
Figure BDA0002489383350000135
R represents a measurement noise matrix, the value represents the difference between the measured value and the true value, and when the laser radar ranging precision is 2cm, the difference can be set
Figure BDA0002489383350000136
S represents a temporary variable of the simplified formula and I represents the identity matrix with the state vector. Through all the known variables, updating the state vector x and the state covariance matrix P is realized by using the formulas (6) and (7), and the prediction and measurement are iterated continuously, so that a predicted state vector close to a true value can be obtained.
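One predict/update cycle of this filter, as a sketch (Δt and the 2 cm measurement noise follow the description above; the caller supplies the initial x and P):

```python
import numpy as np

def kalman_step(x: np.ndarray, P: np.ndarray, z: np.ndarray,
                dt: float = 1.0, sigma: float = 2.0):
    """One predict/update cycle for the state x = [x, y, vx, vy]^T with a
    position-only measurement z = [x_m, y_m]^T; sigma is in the same units
    as the image coordinates (here cm, from the 2 cm ranging precision)."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    Q = np.eye(4)                        # process noise set to identity
    R = (sigma ** 2) * np.eye(2)         # measurement noise from ranging precision
    # prediction: x' = F x + f with f = 0, P' = F P F^T + Q  -- formulas (1), (2)
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # measurement update -- formulas (3) to (7)
    y = z - H @ x_pred
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new
```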
In this embodiment, filtering the road boundary curve reduces the volatility of the boundary curve and improves the boundary detection effect. It is to be understood that the filtering method is not limited to the Kalman filter (KF) described above; for example, the road boundary curve may also be filtered using an extended Kalman filter (EKF) or an unscented Kalman filter (UKF).
In one embodiment, as shown in fig. 8, a road boundary detection method is provided, which is described by taking an example of the method applied to an industrial personal computer, and includes the following steps S801 to S816.
S801, point cloud data collected by the laser radar aiming at a target road are obtained, wherein the point cloud data comprise point cloud positions and reflection intensities of all point clouds.
And S802, performing coordinate conversion on the point cloud data, and converting the point cloud data from a laser radar coordinate system to a vehicle body coordinate system.
And S803, projecting the point cloud data converted into the vehicle body coordinate system into a two-dimensional image to obtain a first image, the first image comprising image points corresponding to the point clouds, whereby the point cloud coordinates are converted from the vehicle body coordinate system into the image coordinate system, and the image gray value of the image point corresponding to each point cloud is determined according to the point cloud height and reflection intensity of the point cloud.
S804, according to the position of each image point in the first image, the first image is subjected to blocking processing, and at least two blocking images are obtained.
And S805, for each block image, obtaining a gray value distribution graph according to the gray value of each image in the block image and the number of the image points corresponding to the gray value, wherein the first coordinate of the gray value distribution graph represents the gray value of the image, and the second coordinate represents the number of the image points.
S806, detecting wave peaks in the gray value distribution diagram in the second coordinate direction, and determining a first wave peak and a second wave peak with the largest second coordinate value in each wave peak.
S807, judging whether other wave crests exist between the first wave crest and the second wave crest, if so, entering step S808, otherwise, entering step S809.
S808, selecting, from the other existing peaks, the peak adjacent to the first peak as a third peak, and determining a block segmentation threshold of the block image according to the image gray value corresponding to the minimum second coordinate value between the first peak and the third peak.
S809, determining the threshold that maximizes the inter-class variance according to the image gray values, as the block segmentation threshold of the block image.
And S810, respectively carrying out segmentation processing on the corresponding segmented images according to the segmented segmentation threshold of each segmented image to obtain second images.
S811, boundary outline image points on the object side are extracted from the second image.
S812, when the boundary contour formed based on the image points of the boundary contour is discontinuous, acquiring the positions of two end points of the boundary contour at each disconnected position, and determining a connection curve of the boundary contour at each disconnected position according to the positions of the end points and tangent lines at the end points.
And S813, interpolating between two endpoints at each break of the boundary contour according to each connection curve to obtain interpolated boundary contour image points.
And S814, performing curve fitting on the interpolated boundary contour image points to obtain a road boundary curve.
And S815, filtering the road boundary curve to obtain the filtered road boundary curve.
And S816, converting the coordinates of each boundary point on the filtered road boundary curve from the image coordinate system to the vehicle body coordinate system, and determining the road boundary.
For a specific description of steps S801 to S816, reference may be made to the foregoing embodiments, which are not repeated here. In this embodiment, projecting the point cloud data into a two-dimensional image converts the height and reflection intensity of the point clouds into image gray values, providing important information for extracting the road boundary; dividing the image into blocks and computing a segmentation threshold per block reduces the influence on the threshold of point cloud fluctuations caused by an uneven road surface or vehicle bumping; and determining the target segmentation threshold by detecting the peaks in the gray value distribution graph and selecting the corresponding threshold determination mode, based on whether other peaks exist between the first and second peaks, improves the segmentation of road surface and boundary image points and restores the road boundary information more accurately.
It should be understood that although the steps in the flowcharts of figs. 1, 4, 6 and 8 are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in figs. 1, 4, 6 and 8 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and whose order of execution is not necessarily sequential but may alternate with other steps or with sub-steps or stages of other steps.
In one embodiment, as shown in fig. 9, there is provided a road boundary detecting apparatus 900, including: an obtaining module 910, a projecting module 920, a determining module 930, and a processing module 940, wherein:
an obtaining module 910, configured to obtain point cloud data collected by a detection device for a target road, where the point cloud data includes a point cloud position and a reflection intensity of each point cloud.
The projection module 920 is configured to project the point cloud data to a two-dimensional image to obtain a first image, where the first image includes image points corresponding to each point cloud, and an image gray value of an image point corresponding to each point cloud is determined according to a point cloud position and a reflection intensity of the point cloud.
A determining module 930, configured to determine a target segmentation threshold of the first image according to the gray-level value of each image.
And a processing module 940, configured to perform segmentation processing on the first image according to the target segmentation threshold to obtain a second image, and determine a road boundary based on the second image.
In one embodiment, the point cloud position includes a point cloud height, and the projection module 920 further includes a gray value determining unit, configured to determine an image gray value of the image point corresponding to each point cloud according to the point cloud height of each point cloud and its corresponding first conversion factor and first distribution weight, and the reflection intensity of each point cloud and its corresponding second conversion factor and second distribution weight.
In one embodiment, the gray value determining unit is specifically configured to: multiplying the point cloud height and the reflection intensity of each point cloud by the first conversion factor and the second conversion factor respectively to obtain a first conversion gray value and a second conversion gray value of each point cloud; multiplying the first conversion gray value and the second conversion gray value of each point cloud by the first distribution weight value and the second distribution weight value respectively to obtain a first weighted gray value and a second weighted gray value of each point cloud; and adding the first weighted gray value and the second weighted gray value of each point cloud to determine the image gray value of the image point corresponding to each point cloud.
In an embodiment, the determining module 930 further comprises an image blocking unit, configured to perform a blocking process on the first image according to the position of each image point in the first image, to obtain at least two blocked images. The determining module 930 is further configured to determine, for each block image, a block segmentation threshold of the block image according to an image gray value of each image point in the block image; the target segmentation threshold includes a respective block segmentation threshold. The processing module 940 is further configured to perform segmentation processing on the corresponding segmented images according to the segmentation threshold of each segmented image to obtain a second image.
In one embodiment, the determining module 930 includes: the device comprises a gray value distribution acquisition unit, a wave crest detection unit and a division threshold value determination unit. Wherein:
and the gray value distribution acquisition unit is used for acquiring a gray value distribution graph according to the gray value of each image and the number of the image points corresponding to the gray value, wherein the first coordinate of the gray value distribution graph represents the gray value of the image, and the second coordinate represents the number of the image points.
And the peak detection unit is used for detecting peaks in the gray value distribution diagram in the second coordinate direction and determining a first peak and a second peak with the largest second coordinate value in each peak.
And the segmentation threshold determining unit is used for selecting a corresponding segmentation threshold determining mode to determine the target segmentation threshold of the first image based on whether other peaks exist between the first peak and the second peak.
In an embodiment, the segmentation threshold determination unit is specifically configured to determine a maximum inter-class variance value according to gray values of the respective images as the target segmentation threshold of the first image when no other peak exists between the first peak and the second peak.
In an embodiment, the segmentation threshold determination unit is specifically configured to select, when at least one other peak exists between the first peak and the second peak, a peak adjacent to the first peak as a third peak from the at least one other peak; the second coordinate value of the first peak is greater than or equal to the second coordinate value of the second peak; and determining a target segmentation threshold of the first image according to the image gray value corresponding to the minimum second coordinate value between the first peak and the third peak.
In one embodiment, the processing module 940 further includes an extraction unit, a connection curve determination unit, an interpolation unit and a fitting unit. Wherein:
The extraction unit is configured to extract boundary contour image points of the target side from the second image.
The connection curve determination unit is configured to, when the boundary contour formed from the boundary contour image points is discontinuous, acquire the positions of the two end points of the boundary contour at each disconnection position, and determine a connection curve for each disconnection position according to the end point positions and the tangent lines at the end points (one possible realization is sketched after this list).
The interpolation unit is configured to interpolate, according to each connection curve, between the two end points at each disconnection position of the boundary contour, to obtain interpolated boundary contour image points.
The fitting unit is configured to perform curve fitting on the interpolated boundary contour image points to obtain a road boundary curve, and to determine the road boundary based on the road boundary curve.
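By way of illustration and not limitation, a cubic Hermite curve is one natural form for a connection curve fixed by two end points and their tangents, and a low-order polynomial fit is one possible fitting step; the function names, tangent handling and fit degree below are assumptions, not details prescribed by this application:

```python
import numpy as np

def hermite_bridge(p0, p1, m0, m1, n=20):
    """Cubic Hermite curve joining break end points p0, p1 (shape (2,))
    with tangent vectors m0, m1 estimated at those end points."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1  # interpolated contour points

def fit_boundary(points, degree=3):
    """Least-squares polynomial fit of lateral offset x against range y."""
    pts = np.asarray(points, dtype=np.float64)
    coeffs = np.polyfit(pts[:, 1], pts[:, 0], degree)
    return np.poly1d(coeffs)
```

In practice the tangent vectors m0 and m1 would be estimated from contour points adjacent to each break, and the interpolated points from every bridge would be pooled before fitting.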
In one embodiment, the processing module 940 further includes a filtering unit, configured to filter the road boundary curve to obtain a filtered road boundary curve, and determine the road boundary based on the filtered road boundary curve.
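The form of the filter is not fixed here; as one simple possibility, a moving-average filter along the fitted curve could look like this (window size and names are illustrative):

```python
import numpy as np

def smooth_boundary(curve_points, window=5):
    """Moving-average filter along the fitted boundary curve to suppress
    residual jitter before the final road boundary is reported."""
    kernel = np.ones(window) / window
    pts = np.asarray(curve_points, dtype=np.float64)
    # filter each coordinate independently; mode='same' keeps the point count
    return np.stack([np.convolve(pts[:, i], kernel, mode='same')
                     for i in range(pts.shape[1])], axis=1)
```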
For the specific definition of the road boundary detection device, reference may be made to the above definition of the road boundary detection method, which is not repeated here. Each module in the road boundary detection device may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor of the computer device in the form of hardware, or stored in a memory of the computer device in the form of software, so that the processor can invoke them and perform the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 10. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a road boundary detection method.
In one embodiment, a computer device is provided, which may be a terminal, and whose internal structure diagram may be as shown in fig. 11. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless communication may be realized by Wi-Fi, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a road boundary detection method. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device may be a touch layer overlaid on the display screen, a key, a trackball or a touchpad provided on the housing of the computer device, or an external keyboard, touchpad or mouse.
Those skilled in the art will appreciate that the structures shown in fig. 10 and fig. 11 are merely block diagrams of partial structures relevant to the present disclosure and do not constitute a limitation on the computer devices to which the present disclosure may be applied; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the above-described method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the above-described method embodiments.
It should be understood that the terms "first", "second", etc. in the above-described embodiments are used for descriptive purposes only, and are not to be construed as indicating or implying relative importance, nor as implying the number of the technical features indicated.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory can include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, as long as a combination of technical features contains no contradiction, it should be considered to fall within the scope of this specification.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the present application. It should be noted that, for a person of ordinary skill in the art, several variations and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (11)

1. A road boundary detection method, the method comprising:
acquiring point cloud data acquired by a detection device aiming at a target road, wherein the point cloud data comprises point cloud positions and reflection intensities of point clouds;
projecting the point cloud data into a two-dimensional image to obtain a first image, wherein the first image comprises image points corresponding to the point clouds, and the image gray value of the image point corresponding to the point clouds is determined according to the point cloud positions and the reflection intensity of the point clouds;
determining a target segmentation threshold of the first image according to each image gray value;
and according to the target segmentation threshold, performing segmentation processing on the first image to obtain a second image, and determining a road boundary based on the second image.
2. The method of claim 1, wherein the point cloud positions comprise point cloud heights, and the image gray value of the image point corresponding to each point cloud is determined by:
determining the image gray value of the image point corresponding to each point cloud according to the point cloud height of each point cloud and its corresponding first conversion factor and first distribution weight, and the reflection intensity of each point cloud and its corresponding second conversion factor and second distribution weight.
3. The method of claim 2, wherein determining the image gray value of the image point corresponding to each point cloud according to the point cloud height of each point cloud and its corresponding first conversion factor and first distribution weight, and the reflection intensity of each point cloud and its corresponding second conversion factor and second distribution weight comprises:
multiplying the point cloud height and the reflection intensity of each point cloud by the first conversion factor and the second conversion factor respectively to obtain a first conversion gray value and a second conversion gray value of each point cloud;
multiplying the first conversion gray value and the second conversion gray value of each point cloud by the first distribution weight and the second distribution weight respectively to obtain a first weighted gray value and a second weighted gray value of each point cloud;
and adding the first weighted gray value and the second weighted gray value of each point cloud to determine the image gray value of the image point corresponding to each point cloud.
4. The method of claim 1, further comprising, prior to determining the target segmentation threshold of the first image according to each image gray value: performing blocking processing on the first image according to the position of each image point in the first image, to obtain at least two block images;
wherein determining the target segmentation threshold of the first image according to each image gray value comprises: for each block image, determining a block segmentation threshold of the block image according to the image gray value of each image point in the block image, the target segmentation threshold comprising each block segmentation threshold;
and performing segmentation processing on the first image according to the target segmentation threshold to obtain a second image comprises: performing segmentation processing on each block image according to the block segmentation threshold of that block image, to obtain the second image.
5. The method of claim 1, wherein determining the target segmentation threshold of the first image according to each image gray value comprises:
obtaining a gray value distribution graph according to each image gray value and the number of image points corresponding to each image gray value, wherein a first coordinate of the gray value distribution graph represents the image gray value, and a second coordinate of the gray value distribution graph represents the number of image points;
detecting peaks in the second coordinate direction in the gray value distribution graph, and determining a first peak and a second peak having the largest second coordinate values among the detected peaks;
and determining the target segmentation threshold of the first image by selecting a corresponding segmentation threshold determination mode based on whether other peaks exist between the first peak and the second peak.
6. The method of claim 5, wherein determining the target segmentation threshold of the first image by selecting a corresponding segmentation threshold determination mode based on whether other peaks exist between the first peak and the second peak comprises:
when no other peak exists between the first peak and the second peak, determining a maximum inter-class variance value according to each image gray value, and using the maximum inter-class variance value as the target segmentation threshold of the first image.
7. The method of claim 5, wherein determining the target segmentation threshold of the first image by selecting a corresponding segmentation threshold determination mode based on whether other peaks exist between the first peak and the second peak comprises:
when at least one other peak exists between the first peak and the second peak, selecting, from the at least one other peak, the peak adjacent to the first peak as a third peak, wherein the second coordinate value of the first peak is greater than or equal to the second coordinate value of the second peak;
and determining the target segmentation threshold of the first image according to the image gray value corresponding to the minimum second coordinate value between the first peak and the third peak.
8. The method of any of claims 1 to 7, wherein determining a road boundary based on the second image comprises:
extracting boundary contour image points of the target side from the second image;
when the boundary contour formed based on the image points of the boundary contour is discontinuous, acquiring the positions of two end points of the boundary contour at each disconnected position, and determining a connection curve of the boundary contour at each disconnected position according to the positions of the end points and tangent lines at the end points;
according to each connecting curve, interpolating between two end points at each disconnection position of the boundary contour to obtain interpolated boundary contour image points;
and performing curve fitting on the interpolated boundary contour image points to obtain a road boundary curve, and determining a road boundary based on the road boundary curve.
9. A road boundary detection apparatus, the apparatus comprising:
the acquisition module is used for acquiring point cloud data acquired by the detection equipment aiming at a target road, wherein the point cloud data comprises point cloud positions and reflection intensities of point clouds;
the projection module is used for projecting the point cloud data into a two-dimensional image to obtain a first image, wherein the first image comprises image points corresponding to the point clouds, and the image gray value of the image point corresponding to the point clouds is determined according to the point cloud positions and the reflection intensity of the point clouds;
the determining module is used for determining a target segmentation threshold of the first image according to each image gray value;
and the processing module is used for carrying out segmentation processing on the first image according to the target segmentation threshold value to obtain a second image, and determining a road boundary based on the second image.
10. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 8 when executing the computer program.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 8.
CN202010400797.XA 2020-05-13 2020-05-13 Road boundary detection method, road boundary detection device, computer equipment and storage medium Pending CN113673274A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010400797.XA CN113673274A (en) 2020-05-13 2020-05-13 Road boundary detection method, road boundary detection device, computer equipment and storage medium
PCT/CN2021/088583 WO2021227797A1 (en) 2020-05-13 2021-04-21 Road boundary detection method and apparatus, computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010400797.XA CN113673274A (en) 2020-05-13 2020-05-13 Road boundary detection method, road boundary detection device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113673274A true CN113673274A (en) 2021-11-19

Family

ID=78526357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010400797.XA Pending CN113673274A (en) 2020-05-13 2020-05-13 Road boundary detection method, road boundary detection device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN113673274A (en)
WO (1) WO2021227797A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114155258A (en) * 2021-12-01 2022-03-08 苏州思卡信息系统有限公司 Detection method for highway construction enclosed area

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115131761B (en) * 2022-08-31 2022-12-06 北京百度网讯科技有限公司 Road boundary identification method, drawing method and drawing device
CN117368879B (en) * 2023-12-04 2024-03-19 北京海兰信数据科技股份有限公司 Radar diagram generation method and device, terminal equipment and readable storage medium
CN117764992B (en) * 2024-02-22 2024-04-30 山东乔泰管业科技有限公司 Plastic pipe quality detection method based on image processing

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106067003B (en) * 2016-05-27 2020-05-19 山东科技大学 Automatic extraction method for road vector identification line in vehicle-mounted laser scanning point cloud
CN110163047B (en) * 2018-07-05 2023-04-07 腾讯大地通途(北京)科技有限公司 Method and device for detecting lane line
CN109034047B (en) * 2018-07-20 2021-01-22 京东方科技集团股份有限公司 Lane line detection method and device
CN109766878B (en) * 2019-04-11 2019-06-28 深兰人工智能芯片研究院(江苏)有限公司 A kind of method and apparatus of lane detection
CN110502973B (en) * 2019-07-05 2023-02-07 同济大学 Automatic extraction and identification method for road marking based on vehicle-mounted laser point cloud
CN110866449A (en) * 2019-10-21 2020-03-06 北京京东尚科信息技术有限公司 Method and device for identifying target object in road

Also Published As

Publication number Publication date
WO2021227797A1 (en) 2021-11-18

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination