CN117372988A - Road boundary detection method, device, electronic equipment and storage medium - Google Patents

Road boundary detection method, device, electronic equipment and storage medium

Info

Publication number: CN117372988A (application CN202311676716.9A); granted as CN117372988B
Authority: CN (China)
Legal status: Granted; Active (the legal status is an assumption by Google, not a legal conclusion)
Inventors: 毛威 (Mao Wei), 曹亮 (Cao Liang)
Original and current assignee: Jika Intelligent Robot Co., Ltd. (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Other languages: Chinese (zh); other versions: CN117372988B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 - Lidar systems specially adapted for specific applications
    • G01S17/89 - Lidar systems specially adapted for specific applications for mapping or imaging
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a road boundary detection method, a road boundary detection device, electronic equipment and a storage medium. The method comprises the following steps: acquiring an image to be processed that includes a boundary to be detected; processing the image to be processed based on a preset clustering algorithm, and determining image data corresponding to at least one clustering region included in the image to be processed; for each clustering region, determining a clustering classification feature vector corresponding to the current clustering region according to the image data corresponding to the current clustering region, wherein the clustering classification feature vector comprises at least one of a right-angle feature vector, a rectangular feature vector and a geometric feature vector; and determining a target clustering region based on the clustering classification feature vector corresponding to each clustering region, and determining the boundary to be detected based on the target clustering region. This technical scheme extracts the road boundary according to the clustering features of the point cloud data and improves the efficiency of road boundary extraction.

Description

Road boundary detection method, device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of automatic driving technologies, and in particular, to a road boundary detection method, a device, an electronic apparatus, and a storage medium.
Background
Road boundary detection is an important part of environment perception for an autonomous vehicle: it divides the perceived environment into the drivable road area, the road boundary, and the environment outside the road boundary. Road boundary detection can assist vehicle positioning and identify the road area, thereby narrowing the perception range and improving the efficiency and accuracy of subsequent environment perception.
In the related art, with the rapid development of deep learning, road boundaries are mainly detected and extracted by applying pixel-level segmentation algorithms to road images captured by a camera; by processing point cloud data acquired by the vehicle radar with deep learning models of high space-time complexity to obtain road boundary points; or by providing road boundary information via a high-precision map that is costly to produce and maintain. However, downstream perception algorithms such as object detection and tracking generally need only the point cloud inside the road boundary as input. If the entire single-frame point cloud is used as input, fences, vegetation and the like cause numerous false detections and false tracks of obstacles, and the time and space complexity of the algorithm increases, raising the difficulty of deployment on a vehicle-mounted computing platform.
Disclosure of Invention
The invention provides a road boundary detection method, a device, electronic equipment and a storage medium, which extract the road boundary according to the clustering features of point cloud data and improve the efficiency of road boundary extraction.
According to an aspect of the present invention, there is provided a road boundary detection method including:
acquiring an image to be processed comprising a boundary to be detected, wherein the image to be processed is an image determined based on point cloud data to be processed corresponding to the boundary to be detected;
processing the image to be processed based on a preset clustering algorithm, and determining image data corresponding to at least one clustering area included in the image to be processed;
for each clustering region, determining a clustering classification feature vector corresponding to the current clustering region according to image data corresponding to the current clustering region, wherein the clustering classification feature vector comprises at least one of a right-angle feature vector, a rectangular feature vector and a geometric feature vector;
and determining a target clustering area based on the clustering classification feature vector corresponding to each clustering area, and determining the boundary to be detected based on the target clustering area.
According to another aspect of the present invention, there is provided a road boundary detecting apparatus including:
the image acquisition module is used for acquiring an image to be processed comprising a boundary to be detected, wherein the image to be processed is an image determined based on point cloud data to be processed corresponding to the boundary to be detected;
the image processing module is used for processing the image to be processed based on a preset clustering algorithm and determining image data corresponding to at least one clustering area included in the image to be processed;
the feature vector determining module is used for determining, for each clustering area, a clustering classification feature vector corresponding to the current clustering area according to the image data corresponding to the current clustering area, wherein the clustering classification feature vector comprises at least one of a right-angle feature vector, a rectangular feature vector and a geometric feature vector;
and the boundary determining module is used for determining a target clustering area based on the clustering classification feature vector corresponding to each clustering area and determining the boundary to be detected based on the target clustering area.
According to another aspect of the present invention, there is provided an electronic apparatus including:
At least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the road boundary detection method according to any one of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer readable storage medium storing computer instructions for causing a processor to execute the road boundary detection method according to any one of the embodiments of the present invention.
According to the technical scheme, an image to be processed including the boundary to be detected is acquired; the image to be processed is then processed based on a preset clustering algorithm to determine image data corresponding to at least one clustering area included in the image; for each clustering area, a clustering classification feature vector corresponding to the current clustering area is determined according to the image data corresponding to that area; finally, a target clustering area is determined based on the clustering classification feature vectors, and the boundary to be detected is determined based on the target clustering area. This solves the problems in the related art where taking all point clouds of a single frame as the input to road boundary detection causes false detection and false tracking of obstacles, increases the time and space complexity of the algorithm, and raises the difficulty of deployment on a vehicle-mounted computing platform. The scheme extracts the road boundary according to the clustering features of the point cloud data and improves extraction efficiency. Moreover, because the three-dimensional point cloud data is converted into a two-dimensional image and the road boundary is extracted from that image, the time and space complexity of the algorithm is greatly reduced, enabling real-time road boundary extraction on a vehicle-mounted computing platform.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a road boundary detection method according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of a clustered edge image and a fitted rectangle provided in accordance with a first embodiment of the present invention;
fig. 3 is a flowchart of a road boundary detection method according to a second embodiment of the present invention;
fig. 4 is a schematic structural diagram of a road boundary detecting device according to a third embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device implementing a road boundary detection method according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
Fig. 1 is a flowchart of a road boundary detection method according to an embodiment of the present invention, where the method may be performed by a road boundary detection device, and the road boundary detection device may be implemented in hardware and/or software, and the road boundary detection device may be configured in a terminal and/or a server. As shown in fig. 1, the method includes:
s110, acquiring a to-be-processed image comprising a boundary to be detected.
The boundary to be detected is understood to be a road boundary to be detected, which may be a boundary of the road on which the vehicle is traveling. Optionally, the boundary to be detected may include a road edge, a fence on at least one side of the road, trees on at least one side of the road, and the like. The image to be processed may be a road raster image determined based on the point cloud data to be processed corresponding to the boundary to be detected. The point cloud data to be processed can be understood as point cloud data obtained by scanning the vehicle surroundings with the vehicle-mounted radar equipment. It includes not only static road structures such as road surfaces and road boundaries, but also dynamic traffic participants such as surrounding vehicles and pedestrians. In general, a vehicle equipped with vehicle-mounted radar equipment can scan its surroundings while traveling, obtaining point cloud data of the surrounding environment, and the scanned point cloud data can be used as the point cloud data to be processed. The point cloud data to be processed may be three-dimensional point cloud data, while the image to be processed may be a two-dimensional image of any form, optionally a Bird's Eye View (BEV) image. BEV is a perspective of viewing an object or scene from above, as if a bird were looking down at the ground from the sky. In the fields of autonomous driving and robotics, data acquired by sensors (e.g., radar and cameras) is typically converted to a BEV representation for better object detection, path planning, and the like. BEV simplifies a complex three-dimensional environment into a two-dimensional image, which is particularly important for efficient computation in real-time systems.
In practical applications, the image to be processed is determined based on the point cloud data to be processed, so that the point cloud data to be processed corresponding to the boundary to be detected may be acquired before the image to be processed is acquired. Furthermore, the acquired point cloud data to be processed can be processed to obtain the image to be processed including the boundary to be detected.
On this basis, the above technical solution further includes: acquiring point cloud data to be processed corresponding to the boundary to be detected; performing ground point cloud elimination on the point cloud data to be processed to obtain target point cloud data; and projecting the target point cloud data according to a preset viewing angle to obtain the image to be processed.
In this embodiment, the point cloud data to be processed may be obtained by scanning with the vehicle-mounted radar apparatus. Alternatively, the vehicle-mounted radar device may be an ultrasonic radar, a laser radar, a microwave radar, a millimeter wave radar, or the like. The ground point cloud may be understood as point cloud data representing a road surface on which the vehicle is traveling. The preset viewing angle may be understood as a preset three-dimensional data projection viewing angle. The preset viewing angle may be any viewing angle, and alternatively, may be a BEV viewing angle.
In practical application, during driving, the vehicle surroundings can be scanned by the vehicle-mounted radar equipment to obtain the point cloud data to be processed corresponding to the boundary to be detected. Further, principal component analysis (Principal Components Analysis, PCA) may be used to compute normal vectors for the point cloud data to be processed, and points whose normal vectors are approximately vertical are selected as the candidate ground point set. A random sample consensus (Random Sample Consensus, RANSAC) algorithm is then applied to fit a plane to the candidate ground point set, yielding the ground point cloud set. The ground point cloud set can then be removed from the point cloud data to be processed to obtain the target point cloud data.
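The plane-fitting step can be illustrated with a minimal RANSAC sketch in Python. This is not the patent's implementation; the function name `ransac_ground_plane`, the distance threshold, and the iteration count are illustrative assumptions:

```python
import numpy as np

def ransac_ground_plane(points, n_iters=100, dist_thresh=0.05, seed=0):
    """Fit a ground plane to Nx3 candidate ground points with RANSAC.

    Returns ((normal, d), inlier_mask) for the plane n.p + d = 0.
    """
    rng = np.random.default_rng(seed)
    best_mask, best_model = None, None
    for _ in range(n_iters):
        # Sample 3 distinct points and derive the plane they span.
        idx = rng.choice(len(points), size=3, replace=False)
        p0, p1, p2 = points[idx]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:            # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p0)
        # Inliers: points within dist_thresh of the candidate plane.
        dist = np.abs(points @ normal + d)
        mask = dist < dist_thresh
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (normal, d)
    return best_model, best_mask
```

Removing the returned inliers from the full point cloud would then yield the target point cloud data.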
Furthermore, even after the target point cloud data is obtained, its data volume is still huge; processing it directly places high demands on the computing power and memory of the device and is difficult to deploy on a vehicle-mounted embedded platform. Therefore, the point cloud data can be converted into a two-dimensional image by projection. Specifically, the target point cloud data is projected into a preset viewing angle and discretized into a two-dimensional image. In the two-dimensional image, the pixel value of a pixel occupied by the target point cloud data is set to a first preset pixel value, and the pixel value of a pixel not occupied by the target point cloud data is set to a second preset pixel value, yielding the image to be processed. The first preset pixel value may be any pixel value, optionally 1; the second preset pixel value may be any pixel value, optionally 0. It should be noted that the first and second preset pixel values are two different pixel values, so that the pixels occupied by the target point cloud data can be distinguished in the image to be processed. It should also be noted that, compared with processing the target point cloud data directly, processing the image to be processed greatly reduces the time and space complexity.
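As a sketch of the projection step, the following assumes a BEV viewing angle, a grid of 0.2 m resolution, and pixel values 1/0 for occupied/empty cells; the ranges and resolution are illustrative choices, not values specified by the patent:

```python
import numpy as np

def project_to_bev(points, x_range=(0.0, 80.0), y_range=(-40.0, 40.0), res=0.2):
    """Discretize Nx3 points into a binary BEV occupancy image.

    Occupied cells receive the first preset pixel value (1),
    empty cells the second preset pixel value (0).
    """
    h = int((x_range[1] - x_range[0]) / res)
    w = int((y_range[1] - y_range[0]) / res)
    img = np.zeros((h, w), dtype=np.uint8)
    # Keep only points inside the grid extent.
    in_x = (points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1])
    in_y = (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1])
    pts = points[in_x & in_y]
    # Bin x into rows and y into columns, then mark the cells occupied.
    rows = ((pts[:, 0] - x_range[0]) / res).astype(int)
    cols = ((pts[:, 1] - y_range[0]) / res).astype(int)
    img[rows, cols] = 1
    return img
```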
Further, after the image to be processed is obtained, the boundary to be detected in the image to be processed can be detected. Thus, the road boundary data in the image to be processed can be detected.
S120, processing the image to be processed based on a preset clustering algorithm, and determining image data corresponding to at least one clustering area included in the image to be processed.
The preset clustering algorithm may be understood as a preset image clustering algorithm. The preset clustering algorithm may be any algorithm, and optionally may be a connected component labeling (Connected Component Labeling) algorithm. Connected component labeling is a classical binary image clustering algorithm in the field of computer vision. The clustering region may be understood as a cluster of pixels in the image to be processed. The clustering region may be any cluster of objects included in the image; optionally, clustering regions may include road boundary clusters, vehicle clusters, and the like. The image data corresponding to a clustering region may be understood as the image region (termed an image patch in computer vision) corresponding to that clustering region.
In practical application, after the image to be processed is obtained, a preset clustering algorithm may be used to process the image to be processed, and then a set of pixel points corresponding to at least one cluster included in the image to be processed may be output. Thus, each pixel point set can be used as image data corresponding to a corresponding clustering area.
It should be noted that, the image to be processed is obtained based on the projection of the point cloud data to be processed, and a certain mapping relationship exists between the image to be processed and the point cloud data to be processed. After obtaining the image data corresponding to at least one clustering region, the point cloud data corresponding to each clustering region can be determined by combining the mapping relation between the image to be processed and the point cloud data to be processed.
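The clustering step can be sketched with a minimal BFS-based connected component labeling. The patent names the connected component labeling algorithm but does not fix the connectivity; 8-connectivity is assumed here:

```python
from collections import deque
import numpy as np

def cluster_regions(bev_img):
    """Label connected components of a binary image (8-connectivity).

    Returns the label image and, per cluster, the list of pixel
    coordinates, i.e. the 'image data' of each clustering region.
    """
    h, w = bev_img.shape
    labels = np.zeros((h, w), dtype=int)
    clusters = []
    for i in range(h):
        for j in range(w):
            if bev_img[i, j] and not labels[i, j]:
                # New cluster: flood-fill from the seed pixel.
                labels[i, j] = len(clusters) + 1
                comp, q = [(i, j)], deque([(i, j)])
                while q:
                    r, c = q.popleft()
                    for dr in (-1, 0, 1):
                        for dc in (-1, 0, 1):
                            rr, cc = r + dr, c + dc
                            if (0 <= rr < h and 0 <= cc < w
                                    and bev_img[rr, cc] and not labels[rr, cc]):
                                labels[rr, cc] = len(clusters) + 1
                                comp.append((rr, cc))
                                q.append((rr, cc))
                clusters.append(comp)
    return labels, clusters
```

The pixel coordinates in each cluster can then be mapped back to the corresponding point cloud data via the projection mapping described above.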
S130, determining a clustering classification feature vector corresponding to the current clustering region according to the image data corresponding to the current clustering region for each clustering region.
The clustering classification feature vector may be understood as a feature vector corresponding to the clustering region for classifying the clustering region. The cluster classification feature vector may comprise a variety of feature vectors, and optionally may comprise at least one of a right angle feature vector, a rectangular feature vector, and a geometric feature vector. Right angle feature vectors can be understood as vectors that characterize whether right angle features are present in a clustered region. Optionally, if the right-angle feature exists in the clustering area, the corresponding right-angle feature vector may be a first preset value; if the right-angle feature does not exist in the clustering area, the right-angle feature vector corresponding to the clustering area can be a second preset value. Rectangular feature vectors can be understood as vectors that characterize the coverage of the occupied pixels of the clustered regions in the horizontal and vertical directions. A geometric feature vector may be understood as a vector characterizing the geometric features of a clustered region. Alternatively, the geometric feature vector may include a height standard deviation, an aspect ratio, a length absolute value, a width absolute value, a height absolute value, and the like.
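As an illustration, the geometric feature vector listed above might be computed from a cluster's three-dimensional points as follows; the exact feature set and its order are assumptions based on the examples given in the text:

```python
import numpy as np

def geometric_features(cluster_points):
    """Geometric feature vector for a cluster's Nx3 points.

    Returns [height std, aspect ratio, length, width, height],
    an assumed ordering of the features named in the text.
    """
    length = np.ptp(cluster_points[:, 0])   # extent along x
    width = np.ptp(cluster_points[:, 1])    # extent along y
    height = np.ptp(cluster_points[:, 2])   # extent along z
    h_std = cluster_points[:, 2].std()      # height standard deviation
    aspect = max(length, width) / max(min(length, width), 1e-6)
    return np.array([h_std, aspect, length, width, height])
```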
In practical application, after obtaining the image data corresponding to at least one clustering region, at least one type of feature extraction may be performed on the clustering region for each clustering region. Further, a cluster classification feature vector corresponding to the cluster region can be obtained. In this embodiment, the cluster classification feature vector may include at least three types, and each type of feature vector extraction process may be described below.
The first cluster classification feature vector is a right angle feature vector. According to the image data corresponding to the current clustering area, determining a clustering classification feature vector corresponding to the current clustering area comprises the following steps: denoising the image data corresponding to the current clustering region to obtain image data to be processed; edge pixel extraction is carried out on the image data to be processed, and a clustering edge image corresponding to the current clustering area is obtained; and carrying out horizontal line detection and vertical line detection on the clustering edge images based on Hough transformation, and determining right-angle feature vectors corresponding to the current clustering region based on detection results.
The image data to be processed may be data obtained by filtering noise pixels from the image data while retaining edge pixel information. A clustering edge image may be understood as image data characterizing the edge pixels of a clustering region. Those skilled in the art will appreciate that the Hough transform (Hough Transform) is a method of finding straight lines, circles and other simple shapes in an image; it detects shapes in the current image in a voting-like manner. The detection result indicates whether a horizontal straight line and/or a vertical straight line exists in the clustering region. In this embodiment, the detection results include a first detection result and a second detection result. The first detection result is that a horizontal straight line and a vertical straight line exist simultaneously. The second detection result is that only a horizontal straight line or only a vertical straight line exists, or that neither exists. The first detection result corresponds to a first right-angle feature vector, and the second detection result corresponds to a second right-angle feature vector. The first right-angle feature vector may be any value, optionally 1; the second right-angle feature vector may be any value, optionally 0.
In practical application, for each clustering region, an image denoising algorithm can be applied to image data corresponding to the current clustering region to filter noise pixels and retain edge pixel information, so as to implement denoising processing of the image data corresponding to the current clustering region. Further, image data to be processed can be obtained. Further, edge pixel extraction can be performed on the image data to be processed by adopting an edge detection algorithm, so as to obtain a clustering edge image corresponding to the current clustering area. Further, hough transform may be used to detect horizontal lines and vertical lines in the clustered edge images to determine whether horizontal lines and/or vertical lines exist in the clustered edge images. Further, right-angle feature vectors corresponding to the current cluster region can be determined according to the detection result. The image denoising algorithm can be any denoising algorithm, and can be a bilateral filter. Alternatively, the edge detection algorithm may be a Canny edge detection algorithm.
In a specific implementation, determining, according to the detection result, the right angle feature vector corresponding to the current cluster region may be: if the detection result is that a horizontal straight line and a vertical straight line exist at the same time, the detection result is a first detection result, the right-angle feature vector corresponding to the detection result is a first right-angle feature vector, and correspondingly, the right-angle feature vector corresponding to the current clustering area is the first right-angle feature vector. If the detection result is that only a horizontal straight line exists or only a vertical straight line exists, the detection result is a second detection result, the right-angle feature vector corresponding to the detection result is a second right-angle feature vector, and correspondingly, the right-angle feature vector corresponding to the current clustering area is the second right-angle feature vector. If the detection result is that the horizontal straight line and the vertical straight line do not exist, the detection result is the second detection result, and the right angle feature vector corresponding to the detection result is the second right angle feature vector.
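A simplified sketch of the right-angle check: restricting the Hough vote to theta = 0 and theta = 90 degrees reduces to counting edge pixels per row and per column, so the following stand-in (with an assumed vote threshold `min_pixels`) captures the horizontal/vertical line test without a full Hough implementation:

```python
import numpy as np

def right_angle_feature(edge_img, min_pixels=10):
    """Right-angle feature: 1 if the binary cluster edge image contains
    both a horizontal and a vertical line, else 0.

    Axis-aligned Hough voting degenerates to row/column pixel counts;
    min_pixels is an assumed vote threshold, not a value from the patent.
    """
    row_votes = edge_img.sum(axis=1)   # votes for horizontal lines
    col_votes = edge_img.sum(axis=0)   # votes for vertical lines
    has_horizontal = bool((row_votes >= min_pixels).any())
    has_vertical = bool((col_votes >= min_pixels).any())
    return 1 if has_horizontal and has_vertical else 0
```

A full implementation would use a general Hough transform so that slightly rotated lines also vote; this sketch only conveys the decision rule.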
It should be noted that, for large-vehicle clusters, the long vehicle body extending along the driving direction may cause the vehicle to be erroneously recognized as a road boundary. Large-vehicle contours typically exhibit right-angle features in the point cloud data, while road boundaries typically exhibit two parallel straight-line features, so the right-angle feature vector provides some degree of discrimination for road boundary clusters. However, roadside vegetation can also cause roadside clusters to exhibit horizontal straight lines and hence right-angle features, even though the number of pixel points forming such a horizontal straight line is relatively small. Therefore, the occupied-pixel coverage in the horizontal and vertical directions can be quantified to distinguish large-vehicle clusters from road boundary clusters: rectangular feature extraction is performed on each clustering region to obtain the corresponding rectangular feature vector, from which the occupied-pixel coverage of the clustering region in the horizontal and vertical directions can be determined.
The second cluster classification feature vector is a rectangular feature vector. According to the image data corresponding to the current clustering area, determining a clustering classification feature vector corresponding to the current clustering area comprises the following steps: performing rectangular feature fitting on the clustering edge image corresponding to the current clustering area to obtain a fitting rectangle corresponding to the current clustering area, and shrinking the fitting rectangle by a preset ratio to obtain a shrinking rectangle corresponding to the fitting rectangle; determining the number of pixel points occupied by the rectangular frame of the fitting rectangle as a value to be processed, and determining the ratio between the value to be processed and a preset value as a first value; determining an image area to be processed based on the fitting rectangle and the shrinking rectangle; for each column of pixel points in the image area to be processed, scanning the pixel points of the current column in sequence: if the pixel value of the current pixel point is detected to be the preset pixel value, determining the current pixel point as a first target pixel point, stopping the scan of the current column and continuing with the next column; if no pixel point with the preset pixel value is detected in the current column, continuing with the next column, until the current column is the last column of the target area; likewise, for each row of pixel points in the image area to be processed, scanning the pixel points of the current row in sequence: if the pixel value of the current pixel point is detected to be the preset pixel value, determining the current pixel point as a second target pixel point, stopping the scan of the current row and continuing with the next row; if no pixel point with the preset pixel value is detected in the current row, continuing with the next row, until the current row is the last row of the target area; determining the sum of the numbers of first target pixel points and second target pixel points as a second value; and determining the ratio between the second value and the first value to obtain a first target value, which is taken as the rectangular feature vector corresponding to the current clustering region.
The image area to be processed includes the image area occupied by the frame of the fitted rectangle and the frame of the shrunk rectangle in the cluster edge image, together with the image area between the fitted rectangle and the shrunk rectangle. The preset value may be any value; optionally, it is 2. The preset pixel value may be any pixel value; optionally, it is 1.
In this embodiment, the L-Shape algorithm may be used to perform rectangular feature fitting on the cluster edge image. Accordingly, the fitted rectangle can be understood as the optimal rectangle found in the image data (image patch). The core idea of the L-Shape algorithm is to find the two edges of the L shape and then fit them into a complete L shape; in this embodiment, the L shape is the fitted rectangle. For example, as shown in fig. 2, a in fig. 2 is the cluster edge image corresponding to a large-vehicle cluster region; region 1, corresponding to the outer dashed frame in a in fig. 2, is the fitted rectangle of the large-vehicle cluster region, and region 2, corresponding to the inner dashed frame, is the shrunk rectangle corresponding to the fitted rectangle. b in fig. 2 is the cluster edge image corresponding to a road boundary cluster region; region 3, corresponding to the outer dashed frame in b in fig. 2, is the fitted rectangle of the road boundary region, and region 4, corresponding to the inner dashed frame, is the shrunk rectangle corresponding to the fitted rectangle.
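The search-based fitting idea can be sketched as follows. This is a minimal, assumption-laden stand-in for the L-Shape algorithm: the function name, the angle-sweep strategy, and the closeness score are illustrative choices, not the patent's implementation.

```python
import math

def fit_rectangle(points, step_deg=1.0):
    """Sweep heading angles in [0 deg, 90 deg) and keep the rotation whose
    axis-aligned bounding box edges lie closest to the cluster points."""
    best = None
    for k in range(int(90 / step_deg)):
        theta = math.radians(k * step_deg)
        c, s = math.cos(theta), math.sin(theta)
        # rotate points into the candidate heading frame
        xs = [x * c + y * s for x, y in points]
        ys = [-x * s + y * c for x, y in points]
        x0, x1 = min(xs), max(xs)
        y0, y1 = min(ys), max(ys)
        # closeness score: every point should lie near one of the 4 box edges
        score = sum(min(px - x0, x1 - px, py - y0, y1 - py)
                    for px, py in zip(xs, ys))
        if best is None or score < best[0]:
            best = (score, theta, (x0, y0, x1, y1))
    return best[1], best[2]  # heading angle and box in the rotated frame
```

For an axis-aligned L of points the sweep settles on a zero heading, since every point then lies exactly on an edge of the candidate box.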
In practical application, after the cluster edge image corresponding to the current cluster region is obtained, rectangular feature fitting can be performed on the cluster edge image to obtain a fitted rectangle corresponding to the current cluster region. The fitted rectangle can then be shrunk inward, that is, each side is moved inward by a preset number of pixels, yielding a smaller rectangle, the shrunk rectangle; at this point the current cluster region corresponds to a larger rectangle (the fitted rectangle) and a smaller rectangle (the shrunk rectangle). Further, the number of pixel points occupied by the frame of the fitted rectangle can be determined and used as the value to be processed. The ratio between the value to be processed and the preset value can then be determined, giving the sum of the pixel points occupied by one length side and one width side of the frame, and this ratio can be used as the first value. Then the image area occupied by the frame of the fitted rectangle and the frame of the shrunk rectangle in the cluster edge image, together with the image area between the fitted rectangle and the shrunk rectangle, can be determined and used as the image area to be processed.
Further, for each column of pixel points in the image area to be processed, the pixel points in the current column are scanned in sequence; if the pixel value of the current pixel point is detected to be the preset pixel value, the current pixel point is determined as a first target pixel point, scanning of the current column stops, and scanning continues with the next column; if no pixel point with the preset pixel value is detected in the current column, scanning also continues with the next column, until the current column is the last column in the image area to be processed. Then, for each row of pixel points in the image area to be processed, the pixel points in the current row are scanned in sequence; if the pixel value of the current pixel point is detected to be the preset pixel value, the current pixel point is determined as a second target pixel point, scanning of the current row stops, and scanning continues with the next row; if no pixel point with the preset pixel value is detected in the current row, scanning also continues with the next row, until the current row is the last row in the image area to be processed. Further, the sum of the numbers of first target pixel points and second target pixel points may be determined and used as the second value. Finally, the ratio between the second value and the first value is determined to obtain the first target value, which is taken as the rectangular feature vector corresponding to the current cluster region. In general, the second value does not exceed the first value, so the rectangular feature vector typically ranges between 0 and 1; the smaller the rectangular feature vector, the more likely the corresponding cluster region is a road boundary cluster.
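The column-and-row scan above can be sketched as follows, under assumed conventions: the edge image is a 0/1 grid, both rectangles are inclusive axis-aligned (row0, col0, row1, col1) boxes, the preset pixel value is 1, and the preset value is 2. The in-band membership test and the perimeter count are illustrative choices, not the patent's exact implementation.

```python
def rect_feature(edge, fit, shrunk):
    """Ratio of occupied pixels found by the column/row scan (second value)
    to half the fitted rectangle's frame pixel count (first value)."""
    r0, c0, r1, c1 = fit
    sr0, sc0, sr1, sc1 = shrunk
    # first value: frame pixels of the fitted box divided by the preset value 2,
    # i.e. the pixel count of one length side plus one width side
    perimeter = 2 * (r1 - r0 + 1) + 2 * (c1 - c0 + 1) - 4
    first = perimeter / 2

    def in_band(r, c):
        # inside the fitted box (frame included) but not strictly inside the
        # shrunk box, i.e. within the image area to be processed
        inside_fit = r0 <= r <= r1 and c0 <= c <= c1
        inside_shrunk = sr0 < r < sr1 and sc0 < c < sc1
        return inside_fit and not inside_shrunk

    hits = 0
    # column scan: stop at the first occupied pixel in each column
    for c in range(c0, c1 + 1):
        for r in range(r0, r1 + 1):
            if in_band(r, c) and edge[r][c] == 1:
                hits += 1
                break
    # row scan: stop at the first occupied pixel in each row
    for r in range(r0, r1 + 1):
        for c in range(c0, c1 + 1):
            if in_band(r, c) and edge[r][c] == 1:
                hits += 1
                break
    return hits / first  # second value / first value
```

With this sketch, a closed vehicle-like contour registers a hit in nearly every row and column, while two parallel boundary-like lines register hits in the columns only, so the boundary cluster scores lower, matching the rule that a smaller rectangular feature vector suggests a road boundary.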
Illustratively, with continued reference to fig. 2, as indicated by a (the large-vehicle cluster) in fig. 2, the number of occupied pixel points (i.e., pixel points whose pixel value is the preset pixel value) between the fitted rectangle and the shrunk rectangle is 23, that is, the second value is 23. The sum of the pixel points occupied by one length side and one width side of the frame of the fitted rectangle is 31, that is, the first value is 31. Accordingly, the rectangular feature vector of a in fig. 2 is 23/31 ≈ 0.74.
The third cluster classification feature vector is a geometric feature vector. According to the image data corresponding to the current clustering area, determining a clustering classification feature vector corresponding to the current clustering area comprises the following steps: determining point cloud data corresponding to the current clustering area according to the image data corresponding to the current clustering area; and determining the geometric feature vector corresponding to the current clustering area according to the point cloud data.
In practical application, after obtaining the image data corresponding to the current clustering area, a predetermined mapping relation corresponding to the current clustering area can be obtained, and the mapping relation can represent the corresponding relation between the image data and the point cloud data. Furthermore, the point cloud data corresponding to the current clustering area can be determined according to the mapping relation and the image data. Further, geometric feature analysis may be performed on the point cloud data corresponding to the current cluster region to determine a geometric feature vector corresponding to the current cluster region based on the point cloud data.
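One simple way to realize the mapping mentioned above is to record, while projecting points onto the BEV grid, which points fell into each pixel, so a cluster's pixels can later be mapped back to its 3-D points. The grid resolution, extents, and names below are made-up values for illustration.

```python
from collections import defaultdict

def project_to_bev(points, res=0.5, x_max=40.0, y_max=20.0):
    """points: iterable of (x, y, z). Returns an occupancy grid and a
    pixel -> points mapping built during projection."""
    rows, cols = int(2 * y_max / res), int(x_max / res)
    grid = [[0] * cols for _ in range(rows)]
    pixel_to_points = defaultdict(list)
    for x, y, z in points:
        if 0 <= x < x_max and -y_max <= y < y_max:
            r = int((y + y_max) / res)  # lateral offset -> row
            c = int(x / res)            # forward distance -> column
            grid[r][c] = 1
            pixel_to_points[(r, c)].append((x, y, z))
    return grid, pixel_to_points

def cluster_points(cluster_pixels, pixel_to_points):
    """Recover the 3-D points belonging to a cluster's pixel set."""
    pts = []
    for px in cluster_pixels:
        pts.extend(pixel_to_points.get(px, []))
    return pts
```

The geometric features described next can then be computed directly on the points returned by `cluster_points`.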
The geometric feature vector includes at least one of a height standard deviation, an aspect ratio, a length absolute value, a width absolute value, and a height absolute value. The determination process of each geometric feature vector is described separately below.
The first geometric feature vector is the height standard deviation. Determining a geometric feature vector corresponding to the current clustering region according to the point cloud data, including: and determining the height value of each object included in the current clustering area according to the point cloud data, and determining the height standard deviation corresponding to the current clustering area based on each height value.
It should be noted that, viewed along the height direction of the point cloud data corresponding to a cluster region, vehicle cluster point clouds are distributed uniformly along the height direction, while road boundary clusters covered by vegetation may, under the influence of the vegetation, have more points concentrated near the top of the cluster, so that the distribution of point height values is more concentrated; in other words, the height standard deviations of the two kinds of cluster point clouds differ. Therefore, the height standard deviation corresponding to a cluster region may be determined and used as one of the features for road boundary cluster detection.
The objects included in the clustering area are static objects and/or dynamic objects included in the clustering area. By way of example, objects included in the clustered region may include vegetation, vehicles, pedestrians, and road edges, among others.
In practical application, the point cloud data is three-dimensional data, and accordingly, the point cloud data corresponding to the clustering area may include a length value, a width value, and a height value of each object (i.e., each point). After obtaining the point cloud data corresponding to the current cluster region, a height value of each object included in the current cluster region may be determined according to the point cloud data. Furthermore, each height value may be added to obtain a height sum, and a ratio between the height sum and the total number of objects may be determined, and the ratio may be used as a height average value corresponding to the current cluster region. Furthermore, the height value, the total number of objects and the height average value of each object can be processed according to a standard deviation calculation formula so as to obtain the height standard deviation corresponding to the current clustering region.
By way of example, the height standard deviation corresponding to the current cluster region may be determined based on the following formula:

$$\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(h_i - \bar{h}\right)^2}$$

where $\sigma$ represents the height standard deviation corresponding to the current cluster region; $i$ indexes the objects included in the current cluster region; $N$ represents the total number of objects in the current cluster region; $h_i$ represents the height value of the $i$-th object included in the current cluster region; and $\bar{h}$ represents the height average value corresponding to the current cluster region.
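A direct computation of this quantity (the population standard deviation over the point heights of a cluster):

```python
import math

def height_std(heights):
    """Population standard deviation of the height values in a cluster."""
    n = len(heights)
    mean = sum(heights) / n
    return math.sqrt(sum((h - mean) ** 2 for h in heights) / n)
```

Heights spread uniformly along the vehicle body yield a larger value than vegetation-covered boundary points concentrated near one height.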
The second geometric feature vector is the aspect ratio. Determining a geometric feature vector corresponding to the current clustering region according to the point cloud data, including: and determining a boundary box corresponding to the current clustering area according to the point cloud data, determining a length value and a width value of the boundary box, determining a ratio between the length value and the width value, obtaining a second target value, and taking the second target value as an aspect ratio corresponding to the current clustering area.
It should be noted that, the road boundary clustering region is relatively slender from the aspect of the clustering shape, and therefore, the aspect ratio of the clustering region may also be used as one of the features of the road boundary clustering detection.
A bounding box can be understood as the smallest box that encloses the objects included in the current cluster region.
In practical application, the minimum abscissa, maximum abscissa, minimum ordinate, and maximum ordinate corresponding to the current cluster region can be determined according to the point cloud data corresponding to the current cluster region. A frame can then be obtained by connecting the point corresponding to the minimum abscissa, the point corresponding to the maximum abscissa, the point corresponding to the minimum ordinate, and the point corresponding to the maximum ordinate, and this frame can be used as the bounding box corresponding to the current cluster region. Further, the length value and the width value of the bounding box can be determined from the coordinate values of the points on the bounding box. Then the ratio between the length value and the width value can be determined to obtain the second target value, which is used as the aspect ratio corresponding to the current cluster region.
The third geometric feature vector is the absolute value of the length. Determining a geometric feature vector corresponding to the current clustering region according to the point cloud data, including: and determining the minimum abscissa and the maximum abscissa corresponding to the current clustering area according to the point cloud data, and determining the length absolute value corresponding to the current clustering area based on the minimum abscissa and the maximum abscissa.
The abscissa axis (along which the minimum and maximum abscissas are measured) points along the driving direction of the vehicle; the vertical coordinate axis is perpendicular to the vehicle chassis and points toward the top of the vehicle; and the ordinate axis is perpendicular to the abscissa axis and points to the left of the driving direction of the vehicle.
In practical application, the minimum abscissa and the maximum abscissa corresponding to the current clustering area can be determined according to the point cloud data corresponding to the current clustering area. Further, a difference between the maximum abscissa and the minimum abscissa may be determined, and an absolute value of the difference may be used as a length absolute value corresponding to the current cluster region.
The fourth geometric feature vector is the absolute value of the width. Determining a geometric feature vector corresponding to the current clustering region according to the point cloud data, including: and determining the minimum ordinate and the maximum ordinate corresponding to the current clustering area according to the point cloud data, and determining the width absolute value corresponding to the current clustering area based on the minimum ordinate and the maximum ordinate.
In practical application, the minimum ordinate and the maximum ordinate corresponding to the current clustering area can be determined according to the point cloud data corresponding to the current clustering area. Further, a difference between the maximum ordinate and the minimum ordinate may be determined, and an absolute value of the difference may be used as an absolute value of a width corresponding to the current cluster region.
The fifth geometric feature vector is the absolute value of the height. Determining a geometric feature vector corresponding to the current clustering region according to the point cloud data, including: and determining the minimum vertical coordinate and the maximum vertical coordinate corresponding to the current clustering area according to the point cloud data, and determining the height absolute value corresponding to the current clustering area based on the minimum vertical coordinate and the maximum vertical coordinate.
In practical application, the minimum vertical coordinate and the maximum vertical coordinate corresponding to the current clustering area can be determined according to the point cloud data corresponding to the current clustering area. Further, a difference between the maximum vertical coordinate and the minimum vertical coordinate may be determined, and an absolute value of the difference may be used as a height absolute value corresponding to the current cluster region.
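The bounding-box-derived features above (the aspect ratio and the absolute values of length, width, and height) all reduce to differences of coordinate extremes. A compact sketch, assuming points are stored as (x, y, z) tuples with the axis convention described earlier (names are illustrative):

```python
def geometric_features(points):
    """Length/width/height absolute values and aspect ratio of a cluster,
    from the coordinate extremes of its (x, y, z) points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    zs = [p[2] for p in points]
    length = abs(max(xs) - min(xs))  # length absolute value (along driving direction)
    width = abs(max(ys) - min(ys))   # width absolute value (lateral)
    height = abs(max(zs) - min(zs))  # height absolute value (vertical)
    # bounding-box aspect ratio; guard the degenerate zero-width case
    aspect = length / width if width else float("inf")
    return length, width, height, aspect
```

An elongated road boundary cluster produces a large `length` and `aspect` with a small `width`, which is exactly the slenderness cue described above.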
And S140, determining a target clustering area based on the clustering classification feature vector corresponding to each clustering area, and determining a boundary to be detected based on the target clustering area.
In this embodiment, the cluster classification feature vector includes a right angle feature vector, a rectangular feature vector, and a geometric feature vector. The geometric feature vector includes a height standard deviation, an aspect ratio, an absolute value of length, an absolute value of width, and an absolute value of height. Each feature vector corresponds to a vector value, and the clustering classification feature vector corresponding to the clustering region is a seven-dimensional vector. The target cluster region may be understood as a road boundary cluster region.
In practical application, after the cluster classification feature vectors corresponding to the cluster areas are obtained, the cluster classification feature vectors corresponding to the cluster areas can be analyzed to determine whether the corresponding cluster areas are road boundary cluster areas or not based on the cluster classification feature vectors. Further, a target cluster region can be obtained.
Optionally, determining the target cluster region based on the cluster classification feature vector corresponding to each cluster region includes: and processing the clustering classification feature vectors corresponding to each clustering region based on the pre-trained road boundary detection model to obtain a target clustering region.
The road boundary detection model may be understood as a machine learning classifier model that uses a cluster classification feature vector corresponding to the cluster region as an input object to determine a road boundary cluster region based on the cluster classification feature vector. In this embodiment, the road boundary detection model may be a classification model, which is a model constructed based on a classification algorithm. Alternatively, the classification algorithm may include a support vector machine (Support Vector Machine, SVM), a Multi-Layer Perceptron (MLP), a decision tree, and the like.
It should be noted that, before the road boundary detection model provided in this embodiment is applied, the model to be trained needs to be trained first. Before training, a plurality of training samples may be constructed so that the model can be trained on them. To improve the accuracy of the model, the training samples should be as numerous and as varied as possible. Optionally, the training process of the model to be trained may be: obtaining a plurality of training samples, where each training sample includes a cluster classification feature vector corresponding to point cloud data and a theoretical output label corresponding to that point cloud data; inputting the training sample into the model to be trained to obtain an actual output result; computing a loss value from the theoretical output label and the actual output result according to a preset loss function; and correcting model parameters in the model to be trained based on the loss value to obtain the road boundary detection model.
The theoretical output label can be understood as a label for representing whether the point cloud data is a road boundary area or not. The theoretical output label may be any value, alternatively, 0 or 1 (0 represents a non-road boundary, and 1 represents a road boundary).
In practical application, after the cluster classification feature vectors corresponding to the cluster regions are obtained, the cluster classification feature vectors can be input into a road boundary detection model obtained through training in advance, so that the cluster classification feature vectors are processed based on the road boundary detection model. Further, a target cluster region can be obtained.
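A minimal sketch of this classification stage, using one of the algorithms the embodiment names (a decision tree, here via scikit-learn, which is assumed to be available). The seven-dimensional feature values and labels below are fabricated purely for illustration and are not training data from the patent.

```python
from sklearn.tree import DecisionTreeClassifier

# Seven-dimensional cluster classification feature vectors:
# [right_angle, rect_feature, height_std, aspect_ratio, length, width, height]
# All values are invented for illustration.
X_train = [
    [1, 0.9, 0.60, 2.0, 5.0, 2.5, 1.6],    # large-vehicle cluster
    [1, 0.8, 0.55, 2.2, 6.0, 2.6, 1.8],    # large-vehicle cluster
    [0, 0.2, 0.20, 15.0, 30.0, 2.0, 0.3],  # road boundary cluster
    [0, 0.3, 0.25, 12.0, 25.0, 2.0, 0.4],  # road boundary cluster
]
y_train = [0, 0, 1, 1]  # 1 = road boundary cluster, 0 = other

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)

# Classify a new cluster region by its feature vector; clusters predicted
# as 1 become the target (road boundary) cluster regions.
prediction = model.predict([[0, 0.25, 0.22, 14.0, 28.0, 2.1, 0.35]])
```

An SVM or MLP, the other options named above, would slot in the same way: fit on labeled feature vectors, then predict per cluster region.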
It should be noted that the target cluster area may be one or more, that is, the target cluster area may include a left boundary cluster of the road and/or a right boundary cluster of the road.
Further, after the target clustering area is obtained, the boundary to be detected can be determined based on the target clustering area.
Optionally, determining the boundary to be detected based on the target cluster region includes: and determining the point cloud of the road boundary based on the point cloud data corresponding to the target clustering region, and performing curve fitting on the point cloud of the road boundary to obtain the boundary to be detected.
The road boundary point cloud may be understood as a point data set characterizing road boundary points in the point cloud data.
In practical application, after the target cluster region is obtained, it can be determined whether the target cluster region is a left road boundary cluster or a right road boundary cluster. For a left road boundary cluster, the point cloud data corresponding to the target cluster region can be acquired and scanned row by row along the forward direction of the vehicle, and all target pixel points whose pixel values are the first preset pixel value are determined; the rightmost target pixel point of each row is selected as a left road boundary point, so that the left road boundary point cloud is obtained. For a right road boundary cluster, the point cloud data corresponding to the target cluster region can be acquired and scanned row by row along the forward direction of the vehicle, and all target pixel points whose pixel values are the first preset pixel value are determined; the leftmost target pixel point of each row is selected as a right road boundary point, so that the right road boundary point cloud is obtained.
Further, curve fitting is carried out on the road boundary point cloud, and the boundary to be detected can be obtained.
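The per-row boundary-point selection and the curve fit can be sketched as follows, assuming the target cluster is available as a binary BEV image whose rows advance along the driving direction; the quadratic model and the names are illustrative assumptions.

```python
import numpy as np

def boundary_points(cluster_img, side="left"):
    """Pick one boundary pixel per row of a binary BEV cluster image:
    the rightmost occupied pixel for a left-boundary cluster, the
    leftmost for a right-boundary cluster."""
    pts = []
    for r, row in enumerate(cluster_img):
        cols = [c for c, v in enumerate(row) if v == 1]
        if cols:
            pts.append((r, max(cols) if side == "left" else min(cols)))
    return pts

def fit_boundary(pts, deg=2):
    """Least-squares polynomial c(r) through the boundary points."""
    rs = np.array([p[0] for p in pts], dtype=float)
    cs = np.array([p[1] for p in pts], dtype=float)
    return np.polyfit(rs, cs, deg)
```

Evaluating the fitted polynomial at each row yields the continuous boundary to be detected.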
According to the technical scheme of this embodiment, an image to be processed including the boundary to be detected is obtained; the image to be processed is processed based on a preset clustering algorithm to determine image data corresponding to at least one cluster region included in the image; for each cluster region, a cluster classification feature vector corresponding to the current cluster region is determined according to the image data corresponding to that region; finally, a target cluster region is determined based on the cluster classification feature vectors corresponding to the cluster regions, and the boundary to be detected is determined based on the target cluster region. This solves the problems in the related art in which taking all point clouds inside the road boundary as the input of road boundary detection causes false detection and false tracking of obstacles, increases the time and space complexity of the algorithm, and increases the difficulty of deployment on a vehicle-mounted computing platform. Extracting the road boundary according to clustering features of the point cloud data improves the extraction efficiency of the road boundary, and converting the three-dimensional point cloud data into a two-dimensional image so that the road boundary is extracted from the two-dimensional image greatly reduces the time and space complexity of the algorithm, achieving real-time road boundary extraction on a vehicle-mounted computing platform.
Example two
Fig. 3 is a flowchart of a road boundary detection method according to a second embodiment of the present invention, which is an alternative embodiment to the above-mentioned embodiments. As shown in fig. 3, the method according to the embodiment of the present invention may include the following steps:
1. acquiring point cloud data to be processed;
2. removing ground point cloud in the point cloud data to be processed to obtain target point cloud data;
3. projecting the target point cloud data onto the BEV view to obtain an image to be processed;
4. clustering the images to be processed to obtain image data corresponding to at least one clustering area;
5. for each clustering area, processing the image data corresponding to the current clustering area by adopting bilateral filtering, and removing noise while retaining edge characteristics in the image data;
6. processing the image by using a Canny edge detection algorithm to obtain a clustered edge image;
7. processing the clustering edge image by using Hough transformation to obtain a right-angle feature vector corresponding to the current clustering region;
8. performing L-Shape fitting on the clustered edge images to obtain fitted rectangles, and shrinking the fitted rectangles based on a preset proportion to obtain shrunk rectangles;
9. determining the quantity ratio of occupied pixel points in all pixel points between the fitting rectangle and the contracted rectangle, and taking the quantity ratio as a rectangular feature vector corresponding to the current clustering area;
10. For each clustering area, determining a geometric feature vector according to point cloud data corresponding to the current clustering area;
11. processing right angle feature vectors, rectangular feature vectors and geometric feature vectors corresponding to the clustering areas according to a road boundary detection model (SVM, MLP, decision tree) to determine the road boundary clustering areas;
12. and extracting road boundary points from the road boundary clustering area to obtain a road boundary point cloud.
According to the technical scheme of this embodiment, an image to be processed including the boundary to be detected is obtained; the image to be processed is processed based on a preset clustering algorithm to determine image data corresponding to at least one cluster region included in the image; for each cluster region, a cluster classification feature vector corresponding to the current cluster region is determined according to the image data corresponding to that region; finally, a target cluster region is determined based on the cluster classification feature vectors corresponding to the cluster regions, and the boundary to be detected is determined based on the target cluster region. This solves the problems in the related art in which taking all point clouds inside the road boundary as the input of road boundary detection causes false detection and false tracking of obstacles, increases the time and space complexity of the algorithm, and increases the difficulty of deployment on a vehicle-mounted computing platform. Extracting the road boundary according to clustering features of the point cloud data improves the extraction efficiency of the road boundary, and converting the three-dimensional point cloud data into a two-dimensional image so that the road boundary is extracted from the two-dimensional image greatly reduces the time and space complexity of the algorithm, achieving real-time road boundary extraction on a vehicle-mounted computing platform.
Example III
Fig. 4 is a schematic structural diagram of a road boundary detecting device according to a third embodiment of the present invention. As shown in fig. 4, the apparatus includes: an image acquisition module 310, an image processing module 320, a feature vector determination module 330, and a boundary determination module 340.
The image obtaining module 310 is configured to obtain an image to be processed including a boundary to be detected, where the image to be processed is an image determined based on point cloud data to be processed corresponding to the boundary to be detected; the image processing module 320 is configured to process the image to be processed based on a preset clustering algorithm, and determine image data corresponding to at least one clustering area included in the image to be processed; a feature vector determining module 330, configured to determine, for each of the cluster areas, a cluster classification feature vector corresponding to the current cluster area according to image data corresponding to the current cluster area, where the cluster classification feature vector includes at least one of a right angle feature vector, a rectangular feature vector, and a geometric feature vector; the boundary determining module 340 is configured to determine a target cluster area based on the cluster classification feature vectors corresponding to the cluster areas, and determine the boundary to be detected based on the target cluster area.
According to the technical scheme of this embodiment, an image to be processed including the boundary to be detected is obtained; the image to be processed is processed based on a preset clustering algorithm to determine image data corresponding to at least one cluster region included in the image; for each cluster region, a cluster classification feature vector corresponding to the current cluster region is determined according to the image data corresponding to that region; finally, a target cluster region is determined based on the cluster classification feature vectors corresponding to the cluster regions, and the boundary to be detected is determined based on the target cluster region. This solves the problems in the related art in which taking all point clouds inside the road boundary as the input of road boundary detection causes false detection and false tracking of obstacles, increases the time and space complexity of the algorithm, and increases the difficulty of deployment on a vehicle-mounted computing platform. Extracting the road boundary according to clustering features of the point cloud data improves the extraction efficiency of the road boundary, and converting the three-dimensional point cloud data into a two-dimensional image so that the road boundary is extracted from the two-dimensional image greatly reduces the time and space complexity of the algorithm, achieving real-time road boundary extraction on a vehicle-mounted computing platform.
Optionally, the cluster classification feature vector includes a right angle feature vector, and the feature vector determining module 330 includes: the image denoising device comprises an image denoising unit, an edge pixel extraction unit and a right angle feature vector determination unit.
The image denoising unit is used for denoising the image data corresponding to the current clustering region to obtain image data to be processed;
the edge pixel extraction unit is used for carrying out edge pixel extraction processing on the image data to be processed to obtain a clustering edge image corresponding to the current clustering area;
the right-angle feature vector determining unit is used for carrying out horizontal line detection and vertical line detection on the clustered edge images based on Hough transformation, and determining right-angle feature vectors corresponding to the current clustered region based on detection results; wherein the detection result comprises a first detection result and a second detection result; the first detection result is that a horizontal straight line and a vertical straight line exist at the same time; the second detection result is that a horizontal straight line or a vertical straight line exists or a horizontal straight line and a vertical straight line does not exist; the first detection result corresponds to a first right angle feature vector, and the second detection result corresponds to a second right angle feature vector.
Optionally, the cluster classification feature vector includes a rectangular feature vector, and the feature vector determining module 330 includes: the device comprises a fitting processing unit, a first numerical value determining unit, a to-be-processed image area determining unit, a column pixel scanning unit, a row pixel scanning unit, a second numerical value determining unit and a rectangular feature vector determining unit.
The fitting processing unit is used for carrying out rectangular characteristic fitting processing on the clustering edge image corresponding to the current clustering area to obtain a fitting rectangle corresponding to the current clustering area, and carrying out shrinkage processing on the fitting rectangle based on a preset proportion to obtain a shrinkage rectangle corresponding to the fitting rectangle;
the first numerical value determining unit is used for determining the number of pixel points occupied by the rectangular frame of the fitting rectangle as a numerical value to be processed and determining the ratio between the numerical value to be processed and a preset numerical value as a first numerical value;
a to-be-processed image area determining unit, configured to determine a to-be-processed image area based on the fitted rectangle and the contracted rectangle, where the to-be-processed image area includes the image area occupied in the clustered edge image by the rectangular frame of the fitted rectangle and the rectangular frame of the contracted rectangle, as well as the image area between the fitted rectangle and the contracted rectangle;
a column pixel point scanning unit, configured to, for each column of pixel points in the to-be-processed image area, sequentially scan each pixel point in the current column; if the pixel value corresponding to the current pixel point is detected to be a preset pixel value, determine the current pixel point as a first target pixel point, stop scanning the current column, and continue scanning the next column; if no pixel point whose pixel value is the preset pixel value is detected among the pixel points in the current column, continue scanning the next column, until the current column is the last column in the to-be-processed image area;
a row pixel point scanning unit, configured to, for each row of pixel points in the to-be-processed image area, sequentially scan each pixel point in the current row; if the pixel value corresponding to the current pixel point is detected to be a preset pixel value, determine the current pixel point as a second target pixel point, stop scanning the current row, and continue scanning the next row; if no pixel point whose pixel value is the preset pixel value is detected among the pixel points in the current row, continue scanning the next row, until the current row is the last row in the to-be-processed image area;
a second value determining unit configured to determine a sum of the numbers of the first target pixel points and the second target pixel points, and take the sum of the numbers as a second value;
And the rectangular feature vector determining unit is used for determining the ratio between the second numerical value and the first numerical value to obtain a first target numerical value, and taking the first target numerical value as the rectangular feature vector corresponding to the current clustering area.
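The perimeter-coverage computation performed by these units can be sketched as follows; the shrink ratio, the preset divisor, and the use of an axis-aligned bounding rectangle as the fitted rectangle are assumptions made for illustration:

```python
import numpy as np

def rect_feature(edge_img, shrink=0.8, preset=4.0):
    """Ratio of edge coverage in the ring between a fitted rectangle and its
    shrunken copy, normalized by the fitted rectangle's perimeter."""
    rows, cols = np.nonzero(edge_img)
    top, bottom = rows.min(), rows.max()
    left, right = cols.min(), cols.max()
    # "Value to be processed": pixels occupied by the fitted rectangle's frame.
    perimeter = 2 * ((bottom - top + 1) + (right - left + 1))
    first_value = perimeter / preset
    # Keep only the ring between the fitted rectangle and the shrunken one.
    dh = max(1, int((bottom - top + 1) * (1 - shrink) / 2))
    dw = max(1, int((right - left + 1) * (1 - shrink) / 2))
    band = edge_img[top:bottom + 1, left:right + 1].copy()
    band[dh:band.shape[0] - dh, dw:band.shape[1] - dw] = 0
    # At most one target pixel per column and per row, mirroring the scan rules.
    second_value = int((band.max(axis=0) > 0).sum()) + int((band.max(axis=1) > 0).sum())
    return second_value / first_value

img = np.zeros((64, 64), dtype=np.uint8)
img[10, 10:50] = 255    # top edge
img[49, 10:50] = 255    # bottom edge
img[10:50, 10] = 255    # left edge
img[10:50, 49] = 255    # right edge
print(rect_feature(img))  # a complete 40x40 outline scores 80 / 40.0 = 2.0
```

A cluster whose edges closely follow its fitted rectangle (e.g. a vehicle roof in bird's-eye view) scores high, while an irregular boundary cluster scores low.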
Optionally, the cluster classification feature vector includes a geometric feature vector, and the feature vector determining module 330 includes a point cloud data determining unit and a geometric feature vector determining unit.
The point cloud data determining unit is used for determining point cloud data corresponding to the current clustering area according to the image data corresponding to the current clustering area;
and the geometric feature vector determining unit is used for determining the geometric feature vector corresponding to the current clustering area according to the point cloud data.
Optionally, the geometric feature vector includes at least one of a height standard deviation, an aspect ratio, a length absolute value, a width absolute value, and a height absolute value, and the geometric feature vector determining unit includes a height standard deviation determining subunit, an aspect ratio determining subunit, a length absolute value determining subunit, a width absolute value determining subunit, and a height absolute value determining subunit.
A height standard deviation determining subunit, configured to determine a height value of each object included in the current cluster area according to the point cloud data, and determine a height standard deviation corresponding to the current cluster area based on each height value;
An aspect ratio determining subunit, configured to determine, according to the point cloud data, a bounding box corresponding to the current cluster area, determine a length value and a width value of the bounding box, determine a ratio between the length value and the width value, obtain a second target value, and use the second target value as an aspect ratio corresponding to the current cluster area;
the length absolute value determining subunit is used for determining a minimum abscissa and a maximum abscissa corresponding to the current clustering area according to the point cloud data, and determining a length absolute value corresponding to the current clustering area based on the minimum abscissa and the maximum abscissa;
the width absolute value determining subunit is used for determining a minimum ordinate and a maximum ordinate corresponding to the current clustering area according to the point cloud data, and determining a width absolute value corresponding to the current clustering area based on the minimum ordinate and the maximum ordinate;
the height absolute value determining subunit is used for determining a minimum vertical coordinate and a maximum vertical coordinate corresponding to the current clustering area according to the point cloud data, and determining a height absolute value corresponding to the current clustering area based on the minimum vertical coordinate and the maximum vertical coordinate;
The direction of the abscissa axis, to which the minimum abscissa and the maximum abscissa correspond, coincides with the running direction of the vehicle; the direction of the vertical coordinate axis, to which the minimum vertical coordinate and the maximum vertical coordinate correspond, points from the vehicle chassis to the top of the vehicle; and the direction of the ordinate axis, to which the minimum ordinate and the maximum ordinate correspond, is perpendicular to the abscissa axis and points to the left side of the running direction of the vehicle.
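The geometric features above reduce to simple statistics over the cluster's points. The sketch below assumes the vehicle coordinate convention just described (x along the running direction, y to the left of travel, z from chassis to roof):

```python
import numpy as np

def geometric_features(points):
    """points: N x 3 array in vehicle coordinates, x forward, y left, z up."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    length = float(x.max() - x.min())   # length absolute value
    width = float(y.max() - y.min())    # width absolute value
    height = float(z.max() - z.min())   # height absolute value
    aspect = length / width if width > 0 else float("inf")
    return {"height_std": float(z.std()), "aspect_ratio": aspect,
            "length": length, "width": width, "height": height}

# A long, low, thin cluster, roughly the signature of a curb or guardrail.
curb = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                 [10.0, 1.0, 0.5], [0.0, 1.0, 0.5]])
f = geometric_features(curb)
print(f["length"], f["width"], f["aspect_ratio"])  # -> 10.0 1.0 10.0
```

Road boundary clusters tend to be long, thin (high aspect ratio), and low (small height and height standard deviation), which is what makes these features discriminative.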
Optionally, the boundary determination module 340 includes a target clustering area determining unit.
The target clustering area determining unit is used for processing the clustering classification feature vectors corresponding to the clustering areas based on the pre-trained road boundary classification model to obtain target clustering areas.
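The classification step can be illustrated with a stand-in linear model; the text only states that a pre-trained road boundary classification model consumes the clustering classification feature vectors, so the model form, feature layout, weights, and threshold below are all hypothetical:

```python
import numpy as np

def classify_clusters(feature_vectors, weights, bias, threshold=0.5):
    """Logistic scorer standing in for the pre-trained boundary classifier;
    returns the indices of clusters kept as target clustering regions."""
    scores = 1.0 / (1.0 + np.exp(-(feature_vectors @ weights + bias)))
    return [i for i, s in enumerate(scores) if s >= threshold]

# Hypothetical feature layout: [right-angle flag, rect ratio, aspect ratio].
feats = np.array([[0.0, 1.8, 12.0],   # long thin cluster -> likely boundary
                  [1.0, 0.3, 1.1]])   # boxy cluster -> likely an obstacle
w = np.array([-1.5, 0.2, 0.4])        # assumed weights, not trained values
boundary_ids = classify_clusters(feats, w, bias=-2.0)
print(boundary_ids)  # -> [0]
```

In practice the model would be trained offline on labeled cluster features; only the kept indices feed the subsequent boundary fitting.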
Optionally, the boundary determination module 340 includes a boundary determining unit.
And the boundary determining unit is used for determining a road boundary point cloud based on the point cloud data corresponding to the target clustering region, and performing curve fitting on the road boundary point cloud to obtain the boundary to be detected.
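The final fitting step can be sketched as a least-squares polynomial over the (x, y) coordinates of the road boundary point cloud; the degree-2 polynomial and modeling y as a function of x are assumptions:

```python
import numpy as np

def fit_boundary(points, degree=2):
    """Fit y = f(x) through boundary points; polynomial degree is an assumption."""
    coeffs = np.polyfit(points[:, 0], points[:, 1], degree)
    return np.poly1d(coeffs)

# Synthetic gently curving curb line sampled along the travel direction.
xs = np.linspace(0.0, 20.0, 30)
boundary_cloud = np.stack([xs, 0.01 * xs**2 + 3.0], axis=1)
curve = fit_boundary(boundary_cloud)
print(round(float(curve(10.0)), 2))  # -> 4.0
```

Evaluating the fitted curve at arbitrary x gives a continuous boundary to be detected from the discrete target-cluster points.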
The road boundary detection device provided by the embodiment of the invention can execute the road boundary detection method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example IV
Fig. 5 shows a schematic diagram of the structure of an electronic device 10 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the invention described and/or claimed herein.
As shown in fig. 5, the electronic device 10 includes at least one processor 11 and a memory communicatively connected to the at least one processor 11, such as a read-only memory (ROM) 12 and a random access memory (RAM) 13. The memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the ROM 12 or the computer program loaded from the storage unit 18 into the RAM 13. The RAM 13 may also store various programs and data required for the operation of the electronic device 10. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to the bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the processor 11 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various specialized artificial intelligence (AI) computing chips, various processors running machine learning model algorithms, digital signal processors (DSPs), and any suitable processor, controller, or microcontroller. The processor 11 performs the respective methods and processes described above, such as the road boundary detection method.
In some embodiments, the road boundary detection method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the road-boundary detection method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the road boundary detection method in any other suitable way (e.g. by means of firmware).
Various implementations of the systems and techniques described herein above may be implemented in digital electronic circuitry, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, and which may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system that overcomes the defects of difficult management and weak service expansibility found in traditional physical hosts and VPS (Virtual Private Server) services.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved, and the present invention is not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (10)

1. A road boundary detection method, characterized by comprising:
acquiring an image to be processed comprising a boundary to be detected, wherein the image to be processed is an image determined based on point cloud data to be processed corresponding to the boundary to be detected;
processing the image to be processed based on a preset clustering algorithm, and determining image data corresponding to at least one clustering area included in the image to be processed;
For each clustering region, determining a clustering classification feature vector corresponding to the current clustering region according to image data corresponding to the current clustering region, wherein the clustering classification feature vector comprises at least one of a right-angle feature vector, a rectangular feature vector and a geometric feature vector;
and determining a target clustering area based on the clustering classification feature vector corresponding to each clustering area, and determining the boundary to be detected based on the target clustering area.
2. The method of claim 1, wherein the cluster classification feature vector comprises a right angle feature vector, and wherein the determining the cluster classification feature vector corresponding to the current cluster region from the image data corresponding to the current cluster region comprises:
denoising the image data corresponding to the current clustering region to obtain image data to be processed;
performing edge pixel extraction processing on the image data to be processed to obtain a clustering edge image corresponding to the current clustering region;
performing horizontal line detection and vertical line detection on the clustered edge images based on Hough transformation, and determining right-angle feature vectors corresponding to the current clustered region based on detection results;
Wherein the detection result comprises a first detection result and a second detection result; the first detection result is that a horizontal straight line and a vertical straight line exist at the same time; the second detection result is that only a horizontal straight line exists, only a vertical straight line exists, or neither a horizontal straight line nor a vertical straight line exists; the first detection result corresponds to a first right-angle feature vector, and the second detection result corresponds to a second right-angle feature vector.
3. The method of claim 2, wherein the cluster classification feature vector comprises a rectangular feature vector, and wherein the determining the cluster classification feature vector corresponding to the current cluster region from the image data corresponding to the current cluster region comprises:
performing rectangular feature fitting treatment on the clustering edge image corresponding to the current clustering area to obtain a fitting rectangle corresponding to the current clustering area, and performing shrinkage treatment on the fitting rectangle based on a preset proportion to obtain a shrinkage rectangle corresponding to the fitting rectangle;
determining the number of pixel points occupied by the rectangular frame of the fitting rectangle as a value to be processed, and determining the ratio between the value to be processed and a preset value as a first value;
Determining an image area to be processed based on the fitted rectangle and the contracted rectangle, wherein the image area to be processed comprises an image area occupied by a rectangular frame of the fitted rectangle and a rectangular frame of the contracted rectangle in the clustering edge image and an image area between the fitted rectangle and the contracted rectangle;
for each column of pixel points in the image area to be processed, sequentially scanning each pixel point in the current column; if the pixel value corresponding to the current pixel point is detected to be a preset pixel value, determining the current pixel point as a first target pixel point, stopping scanning the current column, and continuing to scan the next column; if no pixel point whose pixel value is the preset pixel value is detected among the pixel points in the current column, continuing to scan the next column, until the current column is the last column in the image area to be processed;
for each row of pixel points in the image area to be processed, sequentially scanning each pixel point in the current row; if the pixel value corresponding to the current pixel point is detected to be a preset pixel value, determining the current pixel point as a second target pixel point, stopping scanning the current row, and continuing to scan the next row; if no pixel point whose pixel value is the preset pixel value is detected among the pixel points in the current row, continuing to scan the next row, until the current row is the last row in the image area to be processed;
Determining the sum of the numbers of the first target pixel points and the second target pixel points, and taking the sum of the numbers as a second numerical value;
and determining the ratio between the second value and the first value to obtain a first target value, and taking the first target value as a rectangular feature vector corresponding to the current clustering region.
4. The method of claim 1, wherein the cluster classification feature vector comprises a geometric feature vector, and wherein the determining the cluster classification feature vector corresponding to the current cluster region from the image data corresponding to the current cluster region comprises:
determining point cloud data corresponding to the current clustering area according to the image data corresponding to the current clustering area;
and determining a geometric feature vector corresponding to the current clustering area according to the point cloud data.
5. The method of claim 4, wherein the geometric feature vector comprises at least one of a standard deviation of height, an aspect ratio, an absolute value of length, an absolute value of width, and an absolute value of height, the determining the geometric feature vector corresponding to the current cluster region from the point cloud data comprises:
Determining the height value of each object included in the current clustering area according to the point cloud data, and determining the height standard deviation corresponding to the current clustering area based on each height value;
determining a boundary box corresponding to the current clustering area according to the point cloud data, determining a length value and a width value of the boundary box, determining a ratio between the length value and the width value to obtain a second target value, and taking the second target value as an aspect ratio corresponding to the current clustering area;
determining a minimum abscissa and a maximum abscissa corresponding to the current clustering area according to the point cloud data, and determining a length absolute value corresponding to the current clustering area based on the minimum abscissa and the maximum abscissa;
determining a minimum ordinate and a maximum ordinate corresponding to the current clustering area according to the point cloud data, and determining a width absolute value corresponding to the current clustering area based on the minimum ordinate and the maximum ordinate;
determining a minimum vertical coordinate and a maximum vertical coordinate corresponding to the current clustering area according to the point cloud data, and determining a height absolute value corresponding to the current clustering area based on the minimum vertical coordinate and the maximum vertical coordinate;
The direction of the abscissa axis, to which the minimum abscissa and the maximum abscissa correspond, coincides with the running direction of the vehicle; the direction of the vertical coordinate axis, to which the minimum vertical coordinate and the maximum vertical coordinate correspond, points from the vehicle chassis to the top of the vehicle; and the direction of the ordinate axis, to which the minimum ordinate and the maximum ordinate correspond, is perpendicular to the abscissa axis and points to the left side of the running direction of the vehicle.
6. The method of claim 1, wherein determining the target cluster region based on the cluster classification feature vector corresponding to each cluster region comprises:
and processing the clustering classification feature vectors corresponding to the clustering areas based on the pre-trained road boundary classification model to obtain target clustering areas.
7. The method of claim 1, wherein the determining the boundary to be detected based on the target cluster region comprises:
and determining a road boundary point cloud based on the point cloud data corresponding to the target clustering region, and performing curve fitting on the road boundary point cloud to obtain the boundary to be detected.
8. A road boundary detection apparatus, comprising:
the image acquisition module is used for acquiring an image to be processed comprising a boundary to be detected, wherein the image to be processed is an image determined based on point cloud data to be processed corresponding to the boundary to be detected;
The image processing module is used for processing the image to be processed based on a preset clustering algorithm and determining image data corresponding to at least one clustering area included in the image to be processed;
the feature vector determining module is used for determining a cluster classification feature vector corresponding to the current cluster region according to the image data corresponding to the current cluster region, wherein the cluster classification feature vector comprises at least one of a right-angle feature vector, a rectangular feature vector and a geometric feature vector;
and the boundary determining module is used for determining a target clustering area based on the clustering classification feature vector corresponding to each clustering area and determining the boundary to be detected based on the target clustering area.
9. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the road-boundary detection method of any one of claims 1-7.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores computer instructions for causing a processor to implement the road boundary detection method of any one of claims 1-7 when executed.
CN202311676716.9A 2023-12-08 2023-12-08 Road boundary detection method, device, electronic equipment and storage medium Active CN117372988B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311676716.9A CN117372988B (en) 2023-12-08 2023-12-08 Road boundary detection method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN117372988A true CN117372988A (en) 2024-01-09
CN117372988B CN117372988B (en) 2024-02-13

Family

ID=89400711

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311676716.9A Active CN117372988B (en) 2023-12-08 2023-12-08 Road boundary detection method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117372988B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112801022A (en) * 2021-02-09 2021-05-14 青岛慧拓智能机器有限公司 Method for rapidly detecting and updating road boundary of unmanned mine card operation area
US20220189158A1 (en) * 2020-12-15 2022-06-16 Hyundai Motor Company Method and device for detecting boundary of road in 3d point cloud using cascade classifier
CN115171094A (en) * 2022-06-23 2022-10-11 北京百度网讯科技有限公司 Road element determination method, device, equipment and storage medium
CN116311127A (en) * 2023-03-10 2023-06-23 浙江零跑科技股份有限公司 Road boundary detection method, computer equipment, readable storage medium and motor vehicle
CN117037103A (en) * 2023-09-08 2023-11-10 中国第一汽车股份有限公司 Road detection method and device

Also Published As

Publication number Publication date
CN117372988B (en) 2024-02-13

Similar Documents

Publication Publication Date Title
CN107507167B (en) Cargo tray detection method and system based on point cloud plane contour matching
CN107358149B (en) Human body posture detection method and device
WO2020108311A1 (en) 3d detection method and apparatus for target object, and medium and device
CN111582054B (en) Point cloud data processing method and device and obstacle detection method and device
CN113902897A (en) Training of target detection model, target detection method, device, equipment and medium
CN111222395A (en) Target detection method and device and electronic equipment
US20210117704A1 (en) Obstacle detection method, intelligent driving control method, electronic device, and non-transitory computer-readable storage medium
CN112733812A (en) Three-dimensional lane line detection method, device and storage medium
CN115147809B (en) Obstacle detection method, device, equipment and storage medium
CN112683228A (en) Monocular camera ranging method and device
CN114419599A (en) Obstacle identification method and device and electronic equipment
CN110675442A (en) Local stereo matching method and system combined with target identification technology
CN114550117A (en) Image detection method and device
CN112733678A (en) Ranging method, ranging device, computer equipment and storage medium
CN113378837A (en) License plate shielding identification method and device, electronic equipment and storage medium
CN115063578B (en) Method and device for detecting and positioning target object in chip image and storage medium
CN117372988B (en) Road boundary detection method, device, electronic equipment and storage medium
CN114581890B (en) Method and device for determining lane line, electronic equipment and storage medium
CN117911891A (en) Equipment identification method and device, electronic equipment and storage medium
CN116681932A (en) Object identification method and device, electronic equipment and storage medium
CN114612544A (en) Image processing method, device, equipment and storage medium
CN115115535A (en) Depth map denoising method, device, medium and equipment
CN114511862A (en) Form identification method and device and electronic equipment
CN113408456A (en) Environment perception algorithm, system, device, electronic equipment and storage medium
CN117392631B (en) Road boundary extraction method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant