CN116259023A - Road edge detection method, system and vehicle - Google Patents


Info

Publication number
CN116259023A
Authority
CN
China
Prior art keywords: point cloud, cloud data, road, plane, image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310029589.7A
Other languages
Chinese (zh)
Inventor
罗石
钟辉平
唐路松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Sany Zhongyi Machinery Co Ltd
Original Assignee
Hunan Sany Zhongyi Machinery Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Sany Zhongyi Machinery Co Ltd filed Critical Hunan Sany Zhongyi Machinery Co Ltd
Priority to CN202310029589.7A priority Critical patent/CN116259023A/en
Publication of CN116259023A publication Critical patent/CN116259023A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588: Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of intelligent technology and provides a road edge detection method, a system and a vehicle. The method comprises: extracting a candidate road edge from an image of the road environment; projecting point cloud data of the road environment into a pixel coordinate system to obtain a first image; determining a target area in the first image, the target area containing the area corresponding to the candidate road edge; acquiring, from the point cloud data of the road environment, the point cloud data corresponding to the target area as first point cloud data; and detecting a road edge in the road environment based on the first point cloud data. The candidate road edge extracted from the image serves as prior information for determining the target area, so the first point cloud data is guaranteed to contain road edge information and environmental interference is greatly reduced; detecting the road edge based on the first point cloud data therefore realizes simple and accurate road edge detection.

Description

Road edge detection method, system and vehicle
Technical Field
The invention relates to the technical field of intelligent technology, and in particular to a road edge detection method, a system and a vehicle.
Background
Many roads are provided with road edges (curbs) that serve as the boundary between the road surface and the road shoulder, separating sidewalks, green belts and the like, and helping to ensure traffic safety for pedestrians and vehicles and keep the edges of the road surface tidy.
With the development of unmanned-driving technology, road edge detection has become increasingly important. Taking road machinery as an example: the road construction environment is harsh and hazardous, so unmanned construction is the development direction of future road machinery, and key technologies such as automatic edge-following, path planning and seamless paving all depend on accurate road edge detection. However, current road edge detection methods either have low accuracy or are difficult to implement.
Disclosure of Invention
The invention provides a road edge detection method, a system and a vehicle, which are used for solving or improving the problems of low accuracy or high implementation difficulty of road edge detection in the prior art.
The invention provides a road edge detection method, which comprises the following steps:
extracting candidate road edges from images of the road environment;
projecting the point cloud data of the road environment to a pixel coordinate system to obtain a first image;
determining a target area in the first image, wherein the target area comprises an area corresponding to the candidate road edge;
acquiring point cloud data corresponding to the target area in the point cloud data of the road environment as first point cloud data;
based on the first point cloud data, a road edge in the road environment is detected.
According to the method for detecting the road edge provided by the invention, the road edge in the road environment is detected based on the first point cloud data, and the method comprises the following steps:
converting the first point cloud data by using a first conversion matrix to obtain second point cloud data; the first conversion matrix is used for converting a calibration pavement plane under a space coordinate system where the point cloud data are located into a horizontal plane which coincides with a horizontal plane formed by a horizontal coordinate axis under the space coordinate system;
extracting a current pavement plane from the second point cloud data;
determining a first rotation matrix by which the current road surface plane rotates to be parallel to the horizontal plane;
rotating the second point cloud data by using the first rotation matrix to obtain third point cloud data;
and detecting a road edge in the road environment based on the third point cloud data.
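The rotation step above, which brings the extracted road-surface plane parallel to the horizontal plane, can be sketched as follows. This is a NumPy illustration using Rodrigues' formula to align the plane normal with the vertical axis, not the patent's exact implementation:

```python
import numpy as np

def rotation_to_horizontal(plane_normal):
    """Rotation matrix that maps the road-plane normal onto the vertical axis (0, 0, 1)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(n, z)                      # rotation axis
    s, c = np.linalg.norm(v), np.dot(n, z)  # sin and cos of the rotation angle
    if s < 1e-9:                            # already (anti)parallel to vertical
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    # Rodrigues' rotation formula
    return np.eye(3) + vx + vx @ vx * ((1 - c) / s**2)

# tilted road-plane normal -> first rotation matrix; apply it to the second point cloud
R1 = rotation_to_horizontal(np.array([0.1, 0.0, 1.0]))
second_cloud = np.array([[1.0, 2.0, 0.3], [0.5, -1.0, 0.2]])
third_cloud = second_cloud @ R1.T           # "third point cloud data"
```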
According to the road edge detection method provided by the invention, the extracting of the current road surface plane from the second point cloud data comprises the following steps:
performing plane fitting on the second point cloud data to obtain a plurality of first candidate planes;
selecting a first candidate plane with an included angle between a normal vector and a vertical coordinate axis under the space coordinate system meeting a first included angle range from a plurality of first candidate planes as a second candidate plane;
generating a target point based on the second point cloud data, wherein the coordinate values of the target point on the horizontal coordinate axes are the average values of the corresponding coordinate values of the second point cloud data, and the coordinate value of the target point on the vertical coordinate axis is higher than the coordinate values of all points of the second point cloud data;
and taking the second candidate plane farthest from the target point as the current pavement plane.
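The selection rule above, an angle test on the plane normal followed by picking the candidate farthest from the elevated target point, might look like the following NumPy sketch. The plane representation (a, b, c, d) and the 15 degree threshold are illustrative assumptions:

```python
import numpy as np

def select_road_plane(planes, cloud, max_angle_deg=15.0):
    """Pick the current road plane from candidates (a, b, c, d) with a*x+b*y+c*z+d=0.

    Candidates whose normal is close enough to the vertical axis pass the angle
    test; among those, the plane farthest from a target point hovering above the
    cloud's horizontal center is kept (the road surface lies lowest).
    """
    target = np.array([cloud[:, 0].mean(), cloud[:, 1].mean(),
                       cloud[:, 2].max() + 1.0])   # above every point in the cloud
    best, best_dist = None, -1.0
    for a, b, c, d in planes:
        n = np.array([a, b, c]); n_len = np.linalg.norm(n)
        cos_ang = abs(c) / n_len                   # angle between normal and z-axis
        if np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0))) > max_angle_deg:
            continue
        dist = abs(np.dot(n, target) + d) / n_len  # point-to-plane distance
        if dist > best_dist:
            best, best_dist = (a, b, c, d), dist
    return best

cloud = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.15], [2.0, 0.0, 0.0]])
planes = [(0.0, 0.0, 1.0, 0.0),     # road surface at z = 0
          (0.0, 0.0, 1.0, -0.15),   # curb top at z = 0.15
          (1.0, 0.0, 0.0, -2.0)]    # vertical curb side, fails the angle test
road = select_road_plane(planes, cloud)
```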
According to the method for detecting the road edge provided by the invention, the road edge in the road environment is detected based on the third point cloud data, and the method comprises the following steps:
acquiring fourth point cloud data meeting the height range of a road surface and fifth point cloud data meeting the height range of the side surface of the road edge from the third point cloud data, wherein the height range of the side surface of the road edge is the height range of the side surface of the road edge perpendicular to the road surface;
acquiring a first target plane based on the fourth point cloud data;
obtaining a second target plane based on the fifth point cloud data;
and obtaining the road edge in the road environment based on the intersection line of the first target plane and the second target plane.
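Obtaining the road edge as the intersection line of the two target planes can be sketched as follows: the cross product of the two plane normals gives the line direction, and one point on the line is found by solving the two plane equations together with a gauge constraint:

```python
import numpy as np

def plane_intersection(p1, p2):
    """Intersection line of two planes given as (a, b, c, d), a*x+b*y+c*z+d=0.
    Returns a point on the line and the unit line direction."""
    n1, d1 = np.array(p1[:3], float), p1[3]
    n2, d2 = np.array(p2[:3], float), p2[3]
    direction = np.cross(n1, n2)          # lies in both planes
    # Solve for one point satisfying both plane equations; the third row pins
    # the point to the plane through the origin orthogonal to the line.
    A = np.vstack([n1, n2, direction])
    b = np.array([-d1, -d2, 0.0])
    point = np.linalg.solve(A, b)
    return point, direction / np.linalg.norm(direction)

road_plane = (0.0, 0.0, 1.0, 0.0)     # first target plane: road surface z = 0
curb_side = (1.0, 0.0, 0.0, -2.0)     # second target plane: curb side at x = 2
point, direction = plane_intersection(road_plane, curb_side)
```

The solve fails (singular matrix) when the planes are parallel, which cannot happen here since the road surface and the curb side are roughly perpendicular.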
According to the method for detecting the road edge provided by the invention, the method for obtaining the first target plane based on the fourth point cloud data comprises the following steps:
performing plane fitting on the fourth point cloud data to obtain a plurality of third candidate planes;
selecting a third candidate plane with an included angle between a normal vector and a vertical coordinate axis in the space coordinate system meeting a second included angle range from a plurality of third candidate planes as the first target plane;
the obtaining a second target plane based on the fifth point cloud data includes:
performing plane fitting on the fifth point cloud data to obtain a plurality of fourth candidate planes;
and selecting the fourth candidate plane with the included angle between the normal vector and the vertical coordinate axis meeting a third included angle range from the fourth candidate planes as the second target plane.
According to the road edge detection method provided by the invention, the first conversion matrix is obtained by the following steps:
taking a pitch angle and a roll angle required by converting the calibrated pavement plane to coincide with the horizontal plane as variables, and solving an optimal solution of the pitch angle and the roll angle based on a particle swarm optimization algorithm;
and obtaining the first conversion matrix based on the optimal solution of the pitch angle and the roll angle.
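A minimal particle swarm optimization over the pitch and roll angles might look like the sketch below: the cost measures how far the rotated calibration-plane normal is from vertical. The inertia and acceleration coefficients are textbook defaults, not values from the patent:

```python
import numpy as np

def rot_pitch_roll(pitch, roll):
    """Rotation about y (pitch) followed by rotation about x (roll)."""
    cp, sp, cr, sr = np.cos(pitch), np.sin(pitch), np.cos(roll), np.sin(roll)
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rx @ Ry

def pso_pitch_roll(normal, n_particles=30, iters=80, seed=0):
    """Particle swarm search for the pitch/roll that makes the calibrated
    road-plane normal vertical. A minimal textbook PSO, not the patent's solver."""
    rng = np.random.default_rng(seed)
    n = normal / np.linalg.norm(normal)
    z = np.array([0.0, 0.0, 1.0])
    cost = lambda p: 1.0 - abs(rot_pitch_roll(p[0], p[1]) @ n @ z)
    x = rng.uniform(-0.5, 0.5, (n_particles, 2))      # particle positions (pitch, roll)
    v = np.zeros_like(x)                              # particle velocities
    pbest, pbest_c = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pbest_c.argmin()].copy()                # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        c = np.array([cost(p) for p in x])
        improved = c < pbest_c
        pbest[improved], pbest_c[improved] = x[improved], c[improved]
        g = pbest[pbest_c.argmin()].copy()
    return g                                          # optimal (pitch, roll)
```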
According to the road edge detection method provided by the invention, the method for extracting the candidate road edge from the image of the road environment comprises the following steps:
obtaining a second image including a road surface area and a non-road surface area based on the image of the road environment;
dividing the second image into two sub-images in the row direction of the second image to obtain two regions of interest;
and carrying out edge detection in each region of interest, and extracting the candidate road edges from the edge detection result.
According to the road edge detection method provided by the invention, the image of the road environment is acquired by the image acquisition device, and the projection of the point cloud data of the road environment to the pixel coordinate system comprises the following steps:
and projecting the point cloud data of the road environment to the pixel coordinate system by using a second conversion matrix and an internal reference of the image acquisition device, wherein the second conversion matrix is a conversion matrix of the coordinate system of the image acquisition device and a space coordinate system where the point cloud data is located, the second conversion matrix is obtained by solving a point cloud data sample and an image sample, and two sides of the image sample comprise calibration plates.
The invention also provides a road edge detection system, which comprises:
the image acquisition device is used for acquiring images of road environments;
the laser radar is used for collecting point cloud data of the road environment;
the image acquisition device and the laser radar are respectively connected with the controller, and the controller is used for executing the road edge detection method according to any one of the above.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method for detecting the road edge according to any one of the above when executing the program.
The invention also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor implements a method of road edge detection as described in any of the above.
The invention also provides a vehicle, which comprises a vehicle body, wherein the vehicle body is provided with the road edge detection system, the electronic equipment or the computer readable storage medium.
According to the road edge detection method provided by the invention, a candidate road edge is extracted from the image of the road environment, and the point cloud data of the road environment is projected into the pixel coordinate system to obtain a first image. Using the candidate road edge as prior information, a target area containing the area corresponding to the candidate road edge is determined in the first image, and the point cloud data corresponding to the target area is acquired as first point cloud data. This ensures that road edge information is present in the first point cloud data and greatly reduces environmental interference, so that the road edge in the road environment can be detected based on the first point cloud data, realizing simple and accurate road edge detection.
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the invention, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a method for detecting a road edge according to the present invention;
FIG. 2 is a schematic diagram of a fusion sensor according to the present invention;
FIG. 3 is a schematic illustration of a target region in a first image provided by the present invention;
FIG. 4 is a second schematic view of a target area in a first image according to the present invention;
FIG. 5 is a second flow chart of the method for detecting a road edge according to the present invention;
FIG. 6 is a schematic diagram of a first region of interest and a second region of interest provided by the present invention;
FIG. 7 is a third flow chart of the method for detecting a road edge according to the present invention;
FIG. 8 is a flow chart of a method for detecting a road edge according to the present invention;
FIG. 9 is a fifth flow chart of the method for detecting a road edge according to the present invention;
FIG. 10 is a schematic diagram of third point cloud data provided by the present invention;
fig. 11 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
There are various road edge detection approaches in the related art. For example, in detection based on the Global Positioning System (GPS) and Real-Time Kinematic (RTK) positioning, a map is drawn by GPS/RTK tread-point positioning; because of tread-point errors and edge-fitting errors the detection precision is only about 5 cm, which cannot meet the centimeter-level requirement of mechanical road-surface edge-following operation. In detection based on monocular depth estimation, the accuracy of the road edge information recovered from monocular depth depends on the difference between the calibration environment and the field measurement environment, and is affected by uneven ground or changes in camera angle, so the positioning accuracy required for edge-following cannot be met. In detection based on point clouds generated by binocular vision, the accuracy is insufficient when the camera is far from the road edge. Finally, in detection based on laser radar, road edge points are extracted from the geometric features of the road edge, for example by existing region-growing, threshold-judgment, point cloud rasterization or ground-segmentation methods; however, high-precision detection in these modes requires dense point clouds, and generating dense point clouds places high demands on sensor hardware and vehicle running speed, so implementation is difficult.
For this reason, the present invention provides a method for detecting a road edge, which is described in detail below with reference to fig. 1 to 10.
The present embodiment provides a method for detecting a road edge, as shown in fig. 1, where the method for detecting a road edge may include:
and 101, extracting candidate road edges from the image of the road environment.
And 102, projecting the point cloud data of the road environment to a pixel coordinate system to obtain a first image.
Step 103, determining a target area in the first image, wherein the target area comprises an area corresponding to the candidate road edge.
Step 104, obtaining point cloud data corresponding to the target area in the point cloud data of the road environment, and taking the point cloud data as first point cloud data.
Step 105, detecting a road edge in the road environment based on the first point cloud data.
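Steps 102 to 104 amount to masking the projected point cloud with the target area of the first image. A hedged NumPy sketch, in which the mask layout and point format are assumptions:

```python
import numpy as np

def select_points_in_region(points_uv, points_xyz, region_mask):
    """Keep the lidar points whose pixel-coordinate projection falls inside the
    target region of the first image.
    points_uv: (N, 2) integer pixel coordinates of the projected cloud.
    points_xyz: (N, 3) the original point cloud of the road environment.
    region_mask: boolean image, True inside the target area."""
    h, w = region_mask.shape
    u, v = points_uv[:, 0], points_uv[:, 1]
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)   # within image bounds
    keep = np.zeros(len(points_xyz), dtype=bool)
    keep[inside] = region_mask[v[inside], u[inside]]
    return points_xyz[keep]                            # the "first point cloud data"

region = np.zeros((4, 4), dtype=bool)
region[:, 2] = True                        # hypothetical target area: one pixel column
uv = np.array([[2, 1], [0, 0], [5, 1]])    # projected pixel coordinates
xyz = np.array([[1.0, 0.0, 0.0], [2.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
first_cloud = select_points_in_region(uv, xyz, region)
```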
The road edge detection method provided by the embodiment can be applied to a vehicle, and can be executed by a whole vehicle controller in the vehicle or can be executed by a vehicle-mounted terminal independently arranged in the vehicle. The vehicle may be road surface machinery such as a road roller, a grader, a paver, or an automobile.
In practical application, an image acquisition device may be used to acquire an image of a road environment, and the image acquisition device may be a camera or may be other terminal devices with an image acquisition function. The laser radar can be utilized to collect point cloud data of the road environment. The image acquisition device and the laser radar have coincident perception areas for road environments. In practice, the image acquisition device and the lidar may be fixed to the body of the vehicle together. Thus, the image acquisition device and the laser radar form a fusion sensor of vision and the laser radar. The image acquisition device and the laser radar can be fixed on the body of the vehicle through a bracket. As shown in fig. 2, the stand includes a base 201, and a mounting plate 202 disposed on the base 201, and an image acquisition device 203 and a laser radar 204 are disposed on the mounting plate 202, where the image acquisition device 203 and the laser radar 204 are disposed side by side in a height direction of the vehicle body, and the image acquisition device 203 may be above or below the laser radar 204, so that sensing areas of the two are overlapped to the greatest extent. The fusion sensor can be fixed on the central axis of the vehicle body, so that the road edges on two sides of the vehicle can be conveniently detected, and the coordinate axes of the vehicle control coordinate system and the coordinate system of the fusion sensor in the front-rear direction of the vehicle are identical, so that when the road edge detection result is applied, the vehicle control coordinate system is more conveniently switched, and only the coordinate translation is required on the coordinate axes in the left-right direction of the vehicle.
The image is composed of pixel points, the coordinates of the pixel points in the image can be expressed based on a pixel coordinate system, the pixel coordinate system is a plane rectangular coordinate system taking the pixel points as units, the pixel points are arranged in the image in a row and column mode, the direction of one row of the pixel points is the row direction of the image, and the direction of one column of the pixel points is the column direction of the image. The image of the road environment is an image expressed based on a pixel coordinate system.
In this embodiment, the road edge in the image of the road environment can be extracted by semantic segmentation as a candidate road edge, and the point cloud data of the road environment can be projected into the pixel coordinate system to obtain a first image. The area of the first image corresponding to the candidate road edge then also contains the road edge information, so a target area containing that area can be determined in the first image. On this basis, the point cloud data corresponding to the target area can be extracted from the point cloud data of the road environment as first point cloud data; this local point cloud data also contains the road edge information, and the road edge in the road environment can then be detected based on it. In this way, the candidate road edge provided by the image of the road environment serves as prior information for the laser radar, ensuring that the first point cloud data used in road edge detection contains road edge information and greatly reducing environmental interference (such as smoke, vibration and other disturbances), so that road edge detection is simple and accurate.
As shown in fig. 3, the target region may be a region 301 corresponding to the candidate route edge in the first image, and as shown in fig. 4, the target region may be an expanded region 302 obtained based on the region 301 corresponding to the candidate route edge in the first image.
Among them, there are various ways of obtaining the expanded region.
By way of example, the manner in which the expanded region is obtained may include: as shown in fig. 4, based on the region corresponding to the candidate route edge in the first image, at least one side of the region corresponding to the candidate route edge is expanded in the row direction of the first image, and an expanded region is obtained. In this way, the region corresponding to the candidate route edge is expanded in the row direction of the first image, so that more route edge information can be contained in the target region. When the row direction of the first image expands to one side of the area corresponding to the candidate road edge, the number of expanded pixel points is within the set number. The value of the set number can be set according to actual requirements, for example, the set number can be 100. If there is no expandable pixel point on one side of the region corresponding to the candidate road edge in the row direction of the first image, expansion may not be performed.
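The row-direction expansion of the candidate road-edge region could be implemented as below; `max_expand` stands in for the set number of pixels (100 in the example above):

```python
import numpy as np

def expand_rowwise(region_mask, max_expand=100):
    """Widen the candidate road-edge region along the image row direction.
    Each marked pixel also marks up to `max_expand` pixels to its left and
    right, clipped at the image border, mirroring the row-wise expansion."""
    h, w = region_mask.shape
    out = region_mask.copy()
    for r in range(h):
        for c in np.flatnonzero(region_mask[r]):
            lo, hi = max(0, c - max_expand), min(w, c + max_expand + 1)
            out[r, lo:hi] = True
    return out
```

Clipping at the border corresponds to the case where no expandable pixel exists on one side of the region.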
Point cloud data is unordered, so filtering based on fixed rules may fail to accurately extract the point cloud data corresponding to the road edge. In this embodiment, a candidate road edge is extracted from the image of the road environment, the point cloud data of the road environment is projected into the pixel coordinate system to obtain a first image, and the candidate road edge is used as prior information to determine a target area in the first image containing the area corresponding to the candidate road edge. The point cloud data corresponding to the target area is then acquired as first point cloud data, which ensures that road edge information is present in the first point cloud data and greatly reduces environmental interference; detecting the road edge based on the first point cloud data therefore realizes simple and accurate road edge detection.
In an exemplary embodiment, the extracting the candidate road edge from the image of the road environment, as shown in fig. 5, may include:
step 501, obtaining a second image comprising a road surface area and a non-road surface area based on the image of the road environment.
Specifically, the image of the road environment can be input into a pre-trained semantic segmentation model, and a second image output by the semantic segmentation model is obtained. The semantic segmentation model may be used to semantically segment an input image of a road environment, obtain distinguished road surface regions and non-road surface regions, and output a second image including the road surface regions and the non-road surface regions. The training mode of the semantic segmentation model can include training the initial model by utilizing an image sample of the road environment and a corresponding sample label to obtain the semantic segmentation model. Illustratively, the semantic segmentation model may employ a MODET network architecture. The road surface area and the non-road surface area in the image of the road environment can be accurately distinguished through the semantic segmentation technology. The area where the road surface is located in the image of the road environment is a road surface area, and the area outside the road surface area is a non-road surface area.
Alternatively, a pixel value range of a pixel corresponding to a road surface area and a pixel value range of a pixel corresponding to a non-road surface area may be set. And distinguishing the road surface area from the non-road surface area in the image of the road environment based on the pixel value range of the pixel point corresponding to the road surface area and the pixel value range of the pixel point corresponding to the non-road surface area so as to obtain a second image comprising the road surface area and the non-road surface area. The road surface area and the non-road surface area are distinguished directly through the pixel value range of the pixel points corresponding to the road surface area and the non-road surface area, and the realization is simple.
The second image may be a binary image, and the pixel values of the pixels of the road surface area are the first pixel values, and the pixel values of the pixels of the non-road surface area are the second pixel values. Illustratively, the first pixel value is 1 and the second pixel value is 0.
Step 502, dividing the second image into two sub-images in the row direction of the second image, so as to obtain two regions of interest.
Since the road edge is generally on the left and right sides of the road, the second image can be divided into left and right sub-images in the row direction of the second image. For example, the division may be performed at the midpoint in the row direction of the second image, resulting in two sub-images of the left and right. The left sub-image may contain road left-side road edge information, and the right sub-image may contain road right-side road edge information. The two sub-images are taken as two regions of interest, as shown in fig. 6, including a first region of interest and a second region of interest.
And step 503, performing edge detection in each region of interest, and extracting the candidate road edges from the edge detection result.
In this step, a candidate road edge is extracted from each region of interest: edge detection is first performed in the region of interest to obtain the edge line between the road surface area and the non-road surface area in the second image, and a straight line L1 is then extracted from the edge detection result as the candidate road edge.
Illustratively, in each region of interest, edge detection may be performed in the region of interest using a Canny operator, and then a straight line is extracted from the result of the edge detection using a Hough transform as a candidate road edge.
The Canny operator is an edge detection method for comprehensively searching an optimal compromise scheme between noise interference resistance and accurate positioning, and edge lines of a pavement area and a non-pavement area in the second image can be accurately extracted. The Hough transform can detect curves of straight lines, circles, parabolas, ellipses and the like, the shapes of which can be described by functional relations. The straight line serving as the candidate route edge can be accurately extracted by using Hough transformation. Specifically, a straight line equation, i.e., a straight line equation of the candidate road edge, can be obtained by using Hough transformation.
In this embodiment, by distinguishing the road area and the non-road area from each other in the image of the road environment, a second image including the road area and the non-road area is obtained, and then the second image is divided into two sub-images in the row direction of the second image as two regions of interest, each region of interest may include road edge information on one side of the road, and based on this, edge detection is performed on the two regions of interest, so that edge lines of the road area and the non-road area can be obtained, and further, candidate road edges on each side of the road can be accurately extracted.
In an exemplary embodiment, the image of the road environment is acquired by an image acquisition device, the projecting the point cloud data of the road environment to a pixel coordinate system, comprising:
and projecting the point cloud data of the road environment to the pixel coordinate system by using a second conversion matrix and an internal reference of the image acquisition device, wherein the second conversion matrix is a conversion matrix of the coordinate system of the image acquisition device and a space coordinate system where the point cloud data is located, the second conversion matrix is obtained by solving a point cloud data sample and an image sample, and two sides of the image sample comprise calibration plates.
Here, the spatial coordinate system of the point cloud data is the coordinate system of the point cloud data acquisition device, and when the point cloud data is acquired by using the laser radar, the spatial coordinate system of the point cloud data is the coordinate system of the laser radar.
The image acquisition device may be a camera. In this case the camera extrinsics and intrinsics can be used to project the point cloud data of the road environment into the pixel coordinate system; the camera extrinsic matrix is the second transformation matrix, i.e. the transformation between the camera coordinate system and the spatial coordinate system of the point cloud data. Exemplarily, projecting the point cloud data of the road environment into the pixel coordinate system comprises: converting the point cloud data into the camera coordinate system using the second transformation matrix, and then converting it into the pixel coordinate system using the camera intrinsics. Camera imaging generally involves a camera coordinate system, an image coordinate system and a pixel coordinate system, through which a relationship between the camera coordinate system and the pixel coordinate system can be established. The camera coordinate system is centered on the camera; the image coordinate system describes the imaging of a real object at the focal distance of the camera; and the origins of the image and pixel coordinate systems differ: the origin of the image coordinate system is the intersection of the camera's optical axis with the imaging plane, while the origin of the pixel coordinate system is usually at the upper-left corner of the image. When converting the point cloud data of the road environment from the camera coordinate system to the pixel coordinate system, the data is first converted to the image coordinate system by perspective projection and then to the pixel coordinate system by affine transformation.
Exemplary expressions for the second transformation matrix and the camera internal parameters are as follows:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = T'\begin{bmatrix} X_l \\ Y_l \\ Z_l \\ 1 \end{bmatrix} \tag{1}$$

$$T' = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} R_{11} & R_{12} & R_{13} & t_1 \\ R_{21} & R_{22} & R_{23} & t_2 \\ R_{31} & R_{32} & R_{33} & t_3 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{2}$$

$$Z_c\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} \tag{3}$$

$$K = \begin{bmatrix} f/dX & -f/(dX\tan\theta) & c_x \\ 0 & f/(dY\sin\theta) & c_y \\ 0 & 0 & 1 \end{bmatrix} \tag{4}$$

wherein $(X_c, Y_c, Z_c)$ represents coordinates in the camera coordinate system; $(X_l, Y_l, Z_l)$ represents coordinates in the spatial coordinate system; $(u, v)$ represents the coordinates of a pixel point in the pixel coordinate system; $R$ represents the second rotation matrix, a $3\times 3$ matrix containing the elements $R_{11}, R_{12}, \ldots, R_{33}$; $t$ represents the translation matrix, containing the elements $t_1, t_2, t_3$; $T'$ represents the second transformation matrix, composed of $R$ and $t$; $f$ represents the camera focal length; $dX$ and $dY$ represent the physical dimensions of a pixel point; $\theta$ represents the angle of the affine transformation; $K$ represents the camera internal parameters; and $c_x$ and $c_y$ represent the translation amounts of the affine transformation.
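As a worked illustration of this projection pipeline, the following is a minimal numpy sketch, not code from the patent; the intrinsic values and the test point are made-up examples, and identity extrinsics are assumed for simplicity:

```python
import numpy as np

def project_lidar_to_pixel(points_l, R, t, K):
    """Project N lidar points (N, 3) into the pixel coordinate system.

    points_l: (N, 3) coordinates (X_l, Y_l, Z_l) in the lidar frame.
    R, t: extrinsics (second rotation matrix and translation).
    K: 3x3 camera internal parameter matrix.
    Returns (N, 2) pixel coordinates (u, v) and the camera-frame depths Z_c.
    """
    pts_c = points_l @ R.T + t          # lidar frame -> camera frame
    uvw = pts_c @ K.T                   # perspective projection (unnormalised)
    z = uvw[:, 2]
    uv = uvw[:, :2] / z[:, None]        # divide by depth to get (u, v)
    return uv, z

# Toy example: identity extrinsics and simple (assumed) intrinsics.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.zeros(3)
uv, z = project_lidar_to_pixel(np.array([[1.0, 0.5, 10.0]]), R, t, K)
```

With these toy values the point lands at pixel (370, 265) with depth 10; a zero skew angle is assumed, so the off-diagonal term of K vanishes.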
In practical application, after the fusion sensor is assembled, it is kept stationary in an environment rich in linear features, an image sample is collected with the image acquisition device and a point cloud data sample with the laser radar, and the point cloud data sample and the image sample are then used to solve the second conversion matrix for projecting point cloud data to the pixel coordinate system. The solving process of the second conversion matrix includes:
extracting the linear features in the point cloud data sample and the linear features in the image sample; acquiring an objective function that takes the second conversion matrix as its variable, the target being to minimize the distance between each linear feature in the point cloud data sample, after projection to the pixel coordinate system, and the closest linear feature in the image sample; and solving the second conversion matrix through the objective function.
The smaller the distance between a linear feature in the point cloud data sample, after projection to the pixel coordinate system, and the closest linear feature in the image sample, the higher the degree of coincidence between the two.
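The cost that this objective evaluates can be sketched as follows. This is an illustrative fragment, not the patent's implementation: it assumes image lines are given as normalized coefficients (a, b, c) of a·u + b·v + c = 0 and lidar lines as sampled pixel points after projection, and it only shows the cost evaluation, not the optimization over the second conversion matrix:

```python
import numpy as np

def point_line_distance(pts, line):
    """Distance from 2D points (N, 2) to a line a*u + b*v + c = 0, a^2 + b^2 = 1."""
    a, b, c = line
    return np.abs(pts @ np.array([a, b]) + c)

def calibration_cost(projected_line_samples, image_lines):
    """Sum, over lidar line features, of the mean distance to the closest image line.

    projected_line_samples: list of (N_i, 2) pixel samples, one per lidar line.
    image_lines: (M, 3) normalised line coefficients extracted from the image sample.
    """
    cost = 0.0
    for samples in projected_line_samples:
        dists = [point_line_distance(samples, l).mean() for l in image_lines]
        cost += min(dists)              # only the nearest image line counts
    return cost

# Two lidar lines projected near the (assumed) image lines u = 100 and v = 50.
lines = np.array([[1.0, 0.0, -100.0],   # u = 100
                  [0.0, 1.0, -50.0]])   # v = 50
samples = [np.array([[101.0, 0.0], [101.0, 10.0]]),   # 1 px off u = 100
           np.array([[0.0, 50.0], [30.0, 50.0]])]     # exactly on v = 50
cost = calibration_cost(samples, lines)
```

In a full solver this cost would be re-evaluated each time the candidate second conversion matrix (and hence the projected samples) changes.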
Through the second transformation matrix, the point cloud data of the road environment can be projected to a pixel coordinate system, so that fusion of the point cloud data of the road environment and the image is realized.
In addition, when the image sample is collected, calibration plates may be arranged at the lower-left and lower-right corners of the sensing area. A calibration plate is a flat plate carrying a pattern array with fixed spacing, which adds linear features, so that both sides of the image sample contain a calibration plate. This further enhances the fusion precision at the lower-left and lower-right corners; since road edges generally appear at the lower-left and lower-right corners, the accuracy of road edge detection is improved.
In the road edge detection process, after the point cloud data of the road environment are collected, the second transformation matrix can be utilized to project the point cloud data of the road environment to a pixel coordinate system, and a first image is obtained. The pixel values of the pixel points in the first image contain indexes of the point cloud data, and the corresponding point cloud data can be determined by utilizing the indexes. Some pixels in the first image may not have corresponding projected point cloud data, and thus, the pixel values thereof also do not include an index of the point cloud data.
In the first image, a target area is acquired based on the linear equation of L1: pixel points within a set number of columns on either side of the line L1 may be extracted along the row direction of the first image to obtain the target area, and the point cloud data corresponding to the target area is then extracted as the first point cloud data.
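The index-image lookup described in the two paragraphs above can be sketched as follows. This is an illustrative numpy fragment, not the patent's code; it assumes pixels with no projected point hold −1 and that line L1 is supplied as a function mapping each image row to a column:

```python
import numpy as np

def extract_first_point_cloud(index_image, line_uv, half_width, points):
    """Collect the point cloud data whose projections fall near the candidate edge.

    index_image: (H, W) int array; index of the projected point, -1 if empty.
    line_uv: function v -> u giving the column of line L1 on each image row.
    half_width: number of columns kept on each side of the line.
    points: (N, 3) original point cloud of the road environment.
    """
    h, w = index_image.shape
    picked = []
    for v in range(h):
        u0 = int(round(line_uv(v)))
        lo, hi = max(0, u0 - half_width), min(w, u0 + half_width + 1)
        idx = index_image[v, lo:hi]
        picked.extend(idx[idx >= 0].tolist())   # skip pixels with no projection
    return points[sorted(set(picked))]

# Toy 4x6 index image with three projected points; the line is u = 2.
img = -np.ones((4, 6), dtype=int)
img[0, 2] = 0
img[1, 3] = 1
img[2, 5] = 2          # far from the line; should be excluded
pts = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.0], [9.0, 9.0, 9.0]])
first_cloud = extract_first_point_cloud(img, lambda v: 2.0, 1, pts)
```

Here only the two points projected within one column of the line survive; the third point, far from L1, is dropped from the first point cloud data.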
In this embodiment, the second conversion matrix is calibrated by the calibration plate, so that the fusion of the image and the point cloud data in the calibration plate area is more accurate, the point cloud data in the road environment is projected to the pixel coordinate system by using the second conversion matrix, the fusion effect of the point cloud data and the image in the road environment is improved, and the accuracy is higher.
There are various implementations of step 105, detecting a road edge in the road environment based on the first point cloud data. In one implementation, the target area is the area corresponding to the candidate road edge, and a straight line can be directly fitted to the first point cloud data and used as the detected road edge; in this way, road edge detection is completed simply and quickly. In another implementation, as shown in fig. 7, a specific implementation may include:
Step 701, converting the first point cloud data by using a first conversion matrix to obtain second point cloud data; the first conversion matrix is used for transforming the calibration pavement plane, under the spatial coordinate system in which the point cloud data is located, so that it coincides with the horizontal plane formed by the horizontal coordinate axes of that coordinate system.
Here, the spatial coordinate system in which the point cloud data is located is an XYZ coordinate system: the horizontal coordinate axes are the X axis and the Y axis, which form the XOY plane, i.e., the horizontal plane formed by the horizontal coordinate axes, and the vertical coordinate axis is the Z axis. In implementation, the pavement plane needs to be calibrated: a calibration environment with a flat pavement may be selected in advance, point cloud data collected there, the point cloud data of the pavement extracted from the collected data, the calibration pavement plane obtained by fitting the point cloud data of the pavement, and the first conversion matrix obtained that makes the calibration pavement plane coincide with the horizontal plane formed by the horizontal coordinate axes under the spatial coordinate system in which the point cloud data is located.
Illustratively, the first transformation matrix is obtained by:
taking a pitch angle and a roll angle required by the transformation of the calibrated pavement plane to coincide with the horizontal plane as variables, and solving an optimal solution of the pitch angle and the roll angle based on a particle swarm optimization algorithm (Particle Swarm Optimization, PSO); and obtaining the first conversion matrix based on the optimal solution of the pitch angle and the roll angle.
$$R_x = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\beta & -\sin\beta \\ 0 & \sin\beta & \cos\beta \end{bmatrix} \tag{5}$$

$$R_y = \begin{bmatrix} \cos\alpha & 0 & \sin\alpha \\ 0 & 1 & 0 \\ -\sin\alpha & 0 & \cos\alpha \end{bmatrix} \tag{6}$$

$$R_z = \begin{bmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{7}$$

$$R' = R_x R_y R_z \tag{8}$$
wherein α represents the pitch angle and corresponds to the rotation matrix R_y; β represents the roll angle and corresponds to the rotation matrix R_x; γ represents the yaw angle and corresponds to the rotation matrix R_z; and R′ represents the third rotation matrix.
The pitch angle is the rotation angle about the Y axis, the roll angle is the rotation angle about the X axis, and the yaw angle is the rotation angle about the Z axis. To calibrate the pavement plane, the plane needs to be converted to coincide with the XOY plane formed by the X axis and the Y axis, so only rotations about the X axis and the Y axis are needed; based on this, the pitch angle and the roll angle are solved, and the yaw angle may be set to zero.
The third rotation matrix is used for making the calibration pavement plane parallel to the horizontal plane formed by the horizontal coordinate axes in the spatial coordinate system in which the point cloud data is located. Further, after the first point cloud data is rotated by the third rotation matrix, a translation along the Z axis is performed, so that the calibration pavement plane coincides with that horizontal plane. The translation value along the Z axis is the distance between the rotated calibration pavement plane and the origin of the spatial coordinate system. Based on this, the first conversion matrix includes the third rotation matrix and the translation value along the Z axis.
Specifically, when the optimal solution of the pitch angle and the roll angle is solved based on PSO, a particle swarm may first be initialized, each particle representing one candidate solution of the pitch angle and the roll angle, and the optimal solution is then sought from the initialized swarm. The process of finding the optimal solution follows the related art of the PSO algorithm and is not described in detail here. Finally, the optimal pitch angle and roll angle are expressed in matrix form to obtain the first conversion matrix, which is stored in a configuration file. Thus, in the road edge detection process, the pre-stored first conversion matrix can be read directly from the configuration file.
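The patent leaves the PSO details to related art. The following is a minimal, self-contained PSO sketch under stated assumptions: the fitness is taken to be the flatness (mean squared Z deviation) of the rotated calibration-plane points, and all swarm parameters, bounds and the test plane are illustrative, not from the patent:

```python
import numpy as np

def rot_xy(beta, alpha):
    """Rotation about X by the roll beta, then about Y by the pitch alpha."""
    rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(beta), -np.sin(beta)],
                   [0.0, np.sin(beta), np.cos(beta)]])
    ry = np.array([[np.cos(alpha), 0.0, np.sin(alpha)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(alpha), 0.0, np.cos(alpha)]])
    return ry @ rx

def pso_level_plane(plane_pts, n_particles=30, iters=80, seed=0):
    """Search the (pitch, roll) pair that levels the calibration plane."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-0.5, 0.5, (n_particles, 2))     # particles: (alpha, beta)
    vel = np.zeros_like(pos)
    def fitness(p):
        z = (plane_pts @ rot_xy(p[1], p[0]).T)[:, 2]
        return np.mean((z - z.mean()) ** 2)            # 0 for a level plane
    pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([fitness(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest                                       # optimal (pitch, roll)

# Synthetic calibration plane tilted about the X axis: z = y * tan(0.2).
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 2.0]])
plane = np.column_stack([xy, xy[:, 1] * np.tan(0.2)])
alpha, beta = pso_level_plane(plane)
z_lev = (plane @ rot_xy(beta, alpha).T)[:, 2]
residual = float(np.mean((z_lev - z_lev.mean()) ** 2))  # flatness after levelling
```

The recovered angles, expressed as R_x and R_y in equations (5)–(6) together with the Z translation, would form the first conversion matrix.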
In this step, converting the first point cloud data with the first conversion matrix converts not only the pavement plane within the first point cloud data: the whole of the first point cloud data is converted along with the pavement plane, yielding the second point cloud data and realizing the initial calibration of the pavement plane.
Step 702, extracting a current pavement plane from the second point cloud data.
Because the actual environment differs from the calibration environment, the road surface plane also changes in real time; in this step, the current road surface plane can therefore be extracted from the second point cloud data. The current road surface plane is expressed by its plane equation.
Step 703, determining a first rotation matrix that rotates the current road surface plane to be parallel to the horizontal plane.
Specifically, a rotation angle required when the current road surface plane rotates to be parallel to a horizontal plane formed by a horizontal coordinate axis in a space coordinate system where the point cloud data is located can be calculated. The rotation angle may be an angle between a normal vector of the current road surface plane and a vertical coordinate axis in the spatial coordinate system. The rotation angle is expressed in a matrix form to obtain a first rotation matrix.
The first rotation matrix is exemplified as follows:
$$R'' = \cos\delta \cdot I + (1-\cos\delta)\,nn^{T} + \sin\delta\,[n]_{\times} \tag{9}$$

wherein δ represents the rotation angle; I represents the identity matrix; n represents the rotation axis, a vector with three components n_x, n_y and n_z along the X, Y and Z axes; [n]_× represents the cross-product (skew-symmetric) matrix of n; and R″ represents the first rotation matrix.
The normal vector of a plane is a non-zero vector perpendicular to the plane and is an important vector that enables the position of the plane to be determined. Based on this, the normal vector of the current road surface plane is a non-zero vector perpendicular to the current road surface plane.
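Building the first rotation matrix from the angle δ and axis n described above can be sketched as follows; this is an illustrative numpy implementation of the standard Rodrigues rotation taking a plane normal onto the Z axis, not code from the patent, and the tilted test normal is a made-up example:

```python
import numpy as np

def rotation_to_horizontal(normal):
    """Rotation taking the current road-plane normal onto the vertical (Z) axis."""
    n = normal / np.linalg.norm(normal)
    z = np.array([0.0, 0.0, 1.0])
    axis = np.cross(n, z)
    s = np.linalg.norm(axis)            # sin(delta)
    c = float(n @ z)                    # cos(delta)
    if s < 1e-12:                       # normal already (anti-)parallel to Z
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    k = axis / s                        # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])  # cross-product matrix of the axis
    # Rodrigues: I + sin(d) K + (1 - cos(d)) K^2
    return np.eye(3) + s * K + (1 - c) * (K @ K)

# A road plane tilted about the X axis; its normal is rotated back onto Z.
n = np.array([0.0, np.sin(0.1), np.cos(0.1)])
R1 = rotation_to_horizontal(n)
aligned = R1 @ n
```

Applying this matrix to the second point cloud data (step 704) makes the current road surface plane parallel to the XOY plane.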
Step 704, rotating the second point cloud data by using the first rotation matrix to obtain third point cloud data.
In step 701, the first conversion matrix realizes the initial calibration of the pavement plane. Because the pavement plane changes in real time, this initial calibration only achieves approximate parallelism; that is, the current pavement plane may still not be parallel to the horizontal plane formed by the horizontal coordinate axes in the spatial coordinate system in which the point cloud data is located. In this step, therefore, the second point cloud data is further rotated by the first rotation matrix so that the current pavement plane becomes parallel to that horizontal plane. This realizes real-time, automatic calibration of the current pavement plane, eliminates the errors caused by the difference between the actual environment and the calibration environment, and helps improve the accuracy of road edge detection. As the second point cloud data is rotated, the heights of the point cloud data along the vertical coordinate axis of the spatial coordinate system change accordingly.
Step 705, detecting a road edge in the road environment based on the third point cloud data.
In this embodiment, the road surface plane in the third point cloud data is calibrated in real time so that it is parallel to the horizontal plane formed by the horizontal coordinate axes in the spatial coordinate system in which the point cloud data is located. The method can thus adapt to changing road surfaces, and road edge detection based on this road surface plane is more accurate.
In an exemplary embodiment, the extracting the current pavement plane from the second point cloud data, as shown in fig. 8, may include:
step 801, performing plane fitting on the second point cloud data to obtain a plurality of first candidate planes.
Specifically, plane fitting may be performed on the second point cloud data based on a random sample consensus algorithm (RANSAC) to obtain a plurality of planes as the plurality of first candidate planes. RANSAC is an iterative method of estimating the parameters of a mathematical model from observed data points. Plane fitting based on RANSAC may follow related-art implementations and is not described here.
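A single-plane RANSAC iteration of the kind referred to above can be sketched as follows. This is an illustrative numpy fragment, not the patent's implementation: it extracts only the single best plane, and the iteration count, inlier threshold and synthetic data are assumptions (a real pipeline would repeatedly remove inliers to obtain the plurality of candidate planes):

```python
import numpy as np

def ransac_plane(points, iters=200, thresh=0.05, seed=0):
    """Fit one plane (unit normal n, offset d, with n.p + d = 0) by RANSAC."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-12:
            continue                    # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = -n @ p0
        inliers = np.abs(points @ n + d) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers

# Synthetic z = 0 road plane plus a few outliers well above it.
rng = np.random.default_rng(1)
road = np.column_stack([rng.uniform(0, 10, (50, 2)), np.zeros(50)])
outliers = rng.uniform(0, 10, (5, 3)) + np.array([0.0, 0.0, 5.0])
(n, d), inliers = ransac_plane(np.vstack([road, outliers]))
```

On this data the recovered plane is the horizontal z = 0 surface, with all fifty road points as inliers and the elevated outliers rejected.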
Step 802, selecting, from the plurality of first candidate planes, the first candidate planes whose normal vectors form an included angle with the vertical coordinate axis of the spatial coordinate system within a first included angle range, as second candidate planes.
Here, the first included angle range may be 0±a degrees, and the value of a may be 5. Because the vertical coordinate axis is perpendicular to the horizontal plane formed by the horizontal coordinate axes in the spatial coordinate system in which the point cloud data is located, when the included angle between the normal vector of a first candidate plane and the vertical coordinate axis (for example, the Z axis) falls within the first included angle range, that first candidate plane is approximately parallel to this horizontal plane. In this way, the second candidate planes selected in this step are those approximately parallel to the horizontal plane formed by the horizontal coordinate axes in the spatial coordinate system in which the point cloud data is located.
Step 803, generating a target point based on the second point cloud data, wherein the coordinate values of the target point on the horizontal coordinate axes are the average values of the coordinate values of the second point cloud data on those axes, and the coordinate value of the target point on the vertical coordinate axis is higher than the coordinate values of the second point cloud data.
The target point here is a reference point for selecting the current road surface plane.
Under the spatial coordinate system in which the point cloud data is located, the coordinate values of the target point comprise coordinate values on the horizontal coordinate axes and a coordinate value on the vertical coordinate axis. The coordinate values of the target point on the horizontal coordinate axes may be the average values of the coordinate values of the second point cloud data on those axes: with an XYZ spatial coordinate system, the coordinate value of the target point P on the X axis is the average Xave of the X coordinates of the second point cloud data, and its coordinate value on the Y axis is the average Yave of the Y coordinates. On the vertical coordinate axis, the target point should be higher than the second point cloud data, so its coordinate value is set higher than the coordinate values of the second point cloud data; the target point is thus directly above the second point cloud data. For example, the coordinate value of the target point on the vertical coordinate axis may be set to the height of the fusion sensor above the ground, e.g., 1 m, giving the target point P = (Xave, Yave, 1).
Step 804, using the second candidate plane farthest from the target point as the current pavement plane.
In this step, the current road surface plane is the second candidate plane farthest from the target point. Because the target point is directly above the second point cloud data and far from the point cloud as a whole, interference from non-parallel second candidate planes is reduced as much as possible: a second candidate plane that is not parallel to the road surface intersects it somewhere, which could otherwise cause the current road surface plane to be mislocated. Selecting the second candidate plane farthest from the target point also removes the interference of other second candidate planes parallel to the road surface plane, so that the current road surface plane can be stably extracted.
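Steps 803 and 804 can be sketched together as follows; this is an illustrative numpy fragment, not the patent's code, and the sensor height, the toy cloud and the two candidate planes are made-up examples:

```python
import numpy as np

def pick_current_road_plane(cloud, planes, sensor_height=1.0):
    """Choose the second candidate plane farthest from the target point.

    cloud: (N, 3) second point cloud data.
    planes: list of (unit_normal, d) candidates, already filtered to be
            roughly horizontal (step 802).
    The target point sits at the XY mean of the cloud, sensor_height above it.
    """
    target = np.array([cloud[:, 0].mean(), cloud[:, 1].mean(), sensor_height])
    dists = [abs(n @ target + d) for n, d in planes]   # point-to-plane distance
    return planes[int(np.argmax(dists))]               # farthest plane wins

# Two horizontal candidates: the road at z = 0 and a raised deck at z = 0.6.
cloud = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0],
                  [0.0, 2.0, 0.6], [2.0, 2.0, 0.6]])
road = (np.array([0.0, 0.0, 1.0]), 0.0)      # plane z = 0
deck = (np.array([0.0, 0.0, 1.0]), -0.6)     # plane z = 0.6
chosen = pick_current_road_plane(cloud, [deck, road])
```

From the target point at height 1 m, the z = 0 plane is farther than the z = 0.6 deck, so the road plane is selected, as the text above argues.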
In this embodiment, a plurality of first candidate planes is obtained by plane fitting on the second point cloud data; from these, the first candidate planes whose normal vectors form an included angle with the vertical coordinate axis within the first included angle range are selected, yielding second candidate planes approximately parallel to the horizontal plane formed by the horizontal coordinate axes in the spatial coordinate system in which the point cloud data is located. A target point directly above the second point cloud data is then generated, and the second candidate plane farthest from the target point is taken as the current road surface plane. This removes both the interference of non-parallel second candidate planes and the interference of other second candidate planes parallel to the road surface plane, so the current road surface plane is obtained accurately.
In an exemplary embodiment, the detecting a road edge in the road environment based on the third point cloud data, as shown in fig. 9, may include:
Step 901, acquiring, from the third point cloud data, fourth point cloud data satisfying the height range of the road surface and fifth point cloud data satisfying the height range of the side surface of the road edge, the latter being the height range of the road-edge side surface perpendicular to the road surface.
The side surface of the road edge is the side surface perpendicular to the road surface. In the third point cloud data, part of the data corresponds to the road surface (see the lower point cloud illustrated on the left of fig. 10) and part corresponds to the road-edge side surface perpendicular to the road surface (see the higher point cloud illustrated on the right of fig. 10). In this step, the fourth point cloud data satisfying the height range of the road surface and the fifth point cloud data satisfying the height range of the road-edge side surface are obtained from the third point cloud data. This realizes pass-through filtering: point cloud data unrelated to the road surface and the road-edge side surface is filtered out, so that, even for sparse point clouds, purer point cloud data of the road surface and the road-edge side surface is obtained.
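The pass-through filtering of step 901 can be sketched as follows; this is an illustrative numpy fragment, not the patent's code, and the two height ranges and the toy cloud are made-up example values:

```python
import numpy as np

def passthrough_by_height(cloud, road_range, curb_side_range):
    """Split the third point cloud by Z into road-surface and curb-side parts.

    road_range, curb_side_range: (z_min, z_max) height windows.
    Returns the fourth (road surface) and fifth (curb side) point cloud data;
    everything outside both windows is discarded.
    """
    z = cloud[:, 2]
    road = cloud[(z >= road_range[0]) & (z <= road_range[1])]
    side = cloud[(z >= curb_side_range[0]) & (z <= curb_side_range[1])]
    return road, side

cloud = np.array([[0.0, 0.0, 0.01], [1.0, 0.0, -0.02], [2.0, 0.0, 0.08],
                  [2.0, 0.1, 0.12], [2.0, 0.2, 0.30], [3.0, 0.0, 1.50]])
fourth, fifth = passthrough_by_height(cloud, (-0.05, 0.05), (0.06, 0.35))
```

Here two points fall in the road window, three in the curb-side window, and the point at 1.5 m (e.g., a wall or vehicle) is filtered out entirely.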
Step 902, obtaining a first target plane based on the fourth point cloud data.
Specifically, the obtaining the first target plane based on the fourth point cloud data includes:
performing plane fitting on the fourth point cloud data to obtain a plurality of third candidate planes;
and selecting the third candidate plane with the included angle between the normal vector and the vertical coordinate axis in the space coordinate system meeting a second included angle range from the plurality of third candidate planes as the first target plane.
In an implementation, plane fitting may be performed on the fourth point cloud data based on RANSAC to obtain a plurality of third candidate planes.
The second included angle range may be 0±b degrees; it may or may not coincide with the first included angle range. Illustratively, b may have a value of 5. Because the vertical coordinate axis is perpendicular to the horizontal plane formed by the horizontal coordinate axes in the spatial coordinate system in which the point cloud data is located, when the included angle between the normal vector of a third candidate plane and the vertical coordinate axis (for example, the Z axis) falls within the second included angle range, that plane is approximately parallel to this horizontal plane. Selecting, from the plurality of third candidate planes, the one whose normal vector forms such an included angle with the vertical coordinate axis thus yields a third candidate plane approximately parallel to the horizontal plane as the first target plane, and noise planes are filtered out. Because of the pass-through filtering, typically one first target plane is obtained. The first target plane may characterize the road surface.
Step 903, obtaining a second target plane based on the fifth point cloud data.
Specifically, the obtaining, based on the fifth point cloud data, a second target plane includes:
performing plane fitting on the fifth point cloud data to obtain a plurality of fourth candidate planes;
and selecting the fourth candidate plane with the included angle between the normal vector and the vertical coordinate axis meeting a third included angle range from the fourth candidate planes as the second target plane.
In implementation, plane fitting may be performed on the fifth point cloud data based on RANSAC to obtain a plurality of fourth candidate planes.
The third included angle range may be 90±c degrees, and c may have a value of 5. Because the vertical coordinate axis is perpendicular to the horizontal plane formed by the horizontal coordinate axes in the spatial coordinate system in which the point cloud data is located, when the included angle between the normal vector of a fourth candidate plane and the vertical coordinate axis (for example, the Z axis) falls within the third included angle range, that plane is approximately perpendicular to this horizontal plane. Selecting, from the plurality of fourth candidate planes, the one whose normal vector forms such an included angle with the vertical coordinate axis thus yields a fourth candidate plane approximately perpendicular to the horizontal plane as the second target plane, and noise planes are filtered out. Because of the pass-through filtering, typically one second target plane is obtained. The second target plane may characterize the road-edge side surface.
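The angle tests of steps 902 and 903 share one pattern, which can be sketched as follows. This is an illustrative numpy fragment, not the patent's code; the 5-degree tolerance matches the exemplary b and c values above, and the three sample planes are made up:

```python
import numpy as np

def filter_planes_by_normal(planes, target_angle_deg, tol_deg=5.0):
    """Keep planes whose normal makes roughly target_angle_deg with the Z axis.

    target_angle_deg = 0 keeps near-horizontal planes (road surface);
    target_angle_deg = 90 keeps near-vertical planes (curb side face).
    planes: list of (normal_vector, d) pairs.
    """
    kept = []
    for n, d in planes:
        cos_t = abs(n[2]) / np.linalg.norm(n)   # fold the angle into [0, 90]
        ang = np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
        if abs(ang - target_angle_deg) <= tol_deg:
            kept.append((n, d))
    return kept

planes = [(np.array([0.0, 0.0, 1.0]), 0.0),          # horizontal candidate
          (np.array([0.0, 1.0, 0.02]), -0.15),       # nearly vertical candidate
          (np.array([0.0, 0.7, 0.7]), 0.3)]          # 45-degree noise plane
surface = filter_planes_by_normal(planes, 0.0)       # first target plane(s)
side = filter_planes_by_normal(planes, 90.0)         # second target plane(s)
```

The 45-degree plane fails both tests and is discarded as noise, exactly the filtering behaviour the two steps describe.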
Step 904, obtaining a road edge in the road environment based on the intersection line of the first target plane and the second target plane.
Since the first target plane characterizes the road surface and the second target plane characterizes the side surface of the road edge, their intersection line can characterize the road edge. Thus, in this step, the intersection line of the first target plane and the second target plane can be solved from the equations of the two planes and taken as the road edge detected in the road environment. The intersection line is a three-dimensional straight line; road edge detection is thereby realized by intersecting two planes.
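Solving the intersection line from the two plane equations can be sketched as follows; this is a standard construction shown for illustration, not code from the patent, and the two example planes (road at z = 0, curb side at y = 3) are made up:

```python
import numpy as np

def plane_intersection_line(n1, d1, n2, d2):
    """Intersection line of the planes n1.p + d1 = 0 and n2.p + d2 = 0.

    Returns (point_on_line, unit_direction). The direction is the cross
    product of the two normals; one point is found by solving a 3x3 system.
    """
    direction = np.cross(n1, n2)
    norm = np.linalg.norm(direction)
    if norm < 1e-12:
        raise ValueError("planes are parallel; no unique intersection line")
    direction = direction / norm
    # Solve [n1; n2; direction] p = [-d1, -d2, 0] for a point on the line.
    A = np.vstack([n1, n2, direction])
    p0 = np.linalg.solve(A, np.array([-d1, -d2, 0.0]))
    return p0, direction

# Road surface z = 0 meets curb side y = 3: the curb line is y = 3, z = 0.
p0, d = plane_intersection_line(np.array([0.0, 0.0, 1.0]), 0.0,
                                np.array([0.0, 1.0, 0.0]), -3.0)
```

The returned point and direction describe the three-dimensional straight line used as the detected road edge.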
In this embodiment, the fourth point cloud data satisfying the height range of the road surface and the fifth point cloud data satisfying the height range of the road-edge side surface are obtained from the third point cloud data, realizing pass-through filtering that ensures the purity of the point cloud data used for plane fitting under sparse point clouds. The point cloud data of the road surface and of the road-edge side surface is thus accurately extracted, giving the first target plane characterizing the road surface and the second target plane characterizing the road-edge side surface, and the road edge is detected by solving the intersection line of the two planes. The result is stable and accurate, and the influence of precision differences in the laser radar on the result is reduced. In addition, plane fitting requires very little point cloud data, so the method applies even when the point cloud data is sparse; and because plane fitting uses all the information of the point cloud data, it is equivalent to a filtered output, which improves data stability.
It should be noted that the first conversion matrix, the second conversion matrix, and the camera internal parameters may be stored in the configuration file in advance. When edge detection is initiated, the configuration file may be read, as well as the semantic segmentation model.
The following describes a road edge detection system provided by the present invention, and the road edge detection system described below and the road edge detection method described above can be referred to correspondingly.
The present embodiment provides a road edge detection system, including:
the image acquisition device is used for acquiring images of road environments;
the laser radar is used for collecting point cloud data of the road environment;
the image acquisition device and the laser radar are respectively connected with the controller, and the controller is used for executing the road edge detection method provided by any embodiment.
In this embodiment, reference may be made to the above embodiments for a specific implementation manner of the method for detecting a road edge, which is not described herein.
Fig. 11 illustrates a physical structure diagram of an electronic device. As shown in fig. 11, the electronic device may include: a processor 1110, a communication interface 1120, a memory 1130 and a communication bus 1140, wherein the processor 1110, the communication interface 1120 and the memory 1130 communicate with each other through the communication bus 1140. The processor 1110 may invoke logic instructions in the memory 1130 to perform the road edge detection method, the method comprising:
Extracting candidate road edges from images of the road environment;
projecting the point cloud data of the road environment to a pixel coordinate system to obtain a first image;
determining a target area in the first image, wherein the target area comprises an area corresponding to the candidate road edge;
acquiring point cloud data corresponding to the target area in the point cloud data of the road environment as first point cloud data;
based on the first point cloud data, a road edge in the road environment is detected.
Further, the logic instructions in the memory 1130 may be implemented in the form of software functional units and, when sold or used as a stand-alone product, stored on a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied, in essence or in the part contributing to the prior art, in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
In another aspect, the present invention also provides a computer program product, the computer program product comprising a computer program stored on a computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the road edge detection method provided above, the method comprising:
extracting candidate road edges from images of the road environment;
projecting the point cloud data of the road environment to a pixel coordinate system to obtain a first image;
determining a target area in the first image, wherein the target area comprises an area corresponding to the candidate road edge;
acquiring point cloud data corresponding to the target area in the point cloud data of the road environment as first point cloud data;
based on the first point cloud data, a road edge in the road environment is detected.
In yet another aspect, the present invention further provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the road edge detection method provided above, the method comprising:
extracting candidate road edges from images of the road environment;
Projecting the point cloud data of the road environment to a pixel coordinate system to obtain a first image;
determining a target area in the first image, wherein the target area comprises an area corresponding to the candidate road edge;
acquiring point cloud data corresponding to the target area in the point cloud data of the road environment as first point cloud data;
based on the first point cloud data, a road edge in the road environment is detected.
The invention also provides a vehicle, comprising a vehicle body provided with the road edge detection system provided by any of the above embodiments, or the electronic device of any of the above embodiments, or the computer program product provided by any of the above embodiments, or the computer-readable storage medium provided by any of the above embodiments.
The apparatus embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the solution without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by means of hardware. Based on this understanding, the foregoing technical solution, or the part of it contributing to the prior art, may be embodied in the form of a software product stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, or an optical disk, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the method described in each embodiment or in some parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A road edge detection method, characterized by comprising the following steps:
extracting candidate road edges from images of the road environment;
projecting the point cloud data of the road environment to a pixel coordinate system to obtain a first image;
determining a target area in the first image, wherein the target area comprises an area corresponding to the candidate road edge;
acquiring point cloud data corresponding to the target area in the point cloud data of the road environment as first point cloud data;
detecting a road edge in the road environment based on the first point cloud data.
2. The method of claim 1, wherein the detecting a road edge in the road environment based on the first point cloud data comprises:
converting the first point cloud data by using a first conversion matrix to obtain second point cloud data, wherein the first conversion matrix converts a calibrated road surface plane in the spatial coordinate system of the point cloud data into a plane coinciding with the horizontal plane formed by the horizontal coordinate axes of that coordinate system;
extracting a current road surface plane from the second point cloud data;
determining a first rotation matrix by which the current road surface plane rotates to be parallel to the horizontal plane;
rotating the second point cloud data by using the first rotation matrix to obtain third point cloud data;
and detecting a road edge in the road environment based on the third point cloud data.
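The rotation step recited in claim 2 — finding a first rotation matrix by which the fitted road surface plane becomes parallel to the horizontal plane — amounts to aligning the plane's normal vector with the vertical axis, which can be done with Rodrigues' rotation formula. A minimal sketch, with all names hypothetical and the vertical axis assumed to be z:

```python
import numpy as np

def rotation_to_horizontal(plane_normal):
    """Rotation matrix turning `plane_normal` onto the vertical axis (0, 0, 1),
    so that the fitted road surface plane becomes parallel to the horizontal
    plane."""
    n = plane_normal / np.linalg.norm(plane_normal)
    z = np.array([0.0, 0.0, 1.0])
    if n[2] < 0:          # choose the upward-facing normal
        n = -n
    v = np.cross(n, z)    # rotation axis (scaled by sin of the angle)
    c = np.dot(n, z)      # cosine of the angle between n and z
    s = np.linalg.norm(v)
    if s < 1e-12:         # already aligned
        return np.eye(3)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    # Rodrigues' formula for the rotation taking n to z
    return np.eye(3) + vx + vx @ vx * ((1.0 - c) / (s * s))
```

Applying the returned matrix to the second point cloud data yields third point cloud data in which the road surface is level, which simplifies the height-range filtering of claim 4.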
3. The method of claim 2, wherein extracting the current road surface plane from the second point cloud data comprises:
performing plane fitting on the second point cloud data to obtain a plurality of first candidate planes;
selecting, from the plurality of first candidate planes, a first candidate plane whose normal vector forms an included angle with the vertical coordinate axis of the spatial coordinate system within a first included angle range, as a second candidate plane;
generating a target point based on the second point cloud data, wherein the coordinate values of the target point on the horizontal coordinate axes are the averages of the corresponding coordinate values of the second point cloud data, and the coordinate value of the target point on the vertical coordinate axis is higher than the coordinate values of the second point cloud data on the vertical coordinate axis;
and taking the second candidate plane farthest from the target point as the current road surface plane.
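The plane selection of claim 3 can be illustrated as follows. The candidate planes are assumed to be given as (normal, d) pairs of the plane equation n·x + d = 0, the angle threshold is an assumed value, and all function names are hypothetical; placing the target point above the whole cloud makes a raised sidewalk plane lie closer to it than the road, so the farthest near-horizontal plane is the road surface:

```python
import numpy as np

def pick_road_plane(planes, cloud, max_tilt_deg=15.0):
    """planes: list of (normal, d) for fitted planes n.x + d = 0.
    cloud: (N, 3) second point cloud data.
    Keeps planes whose normal is within `max_tilt_deg` of the vertical axis,
    then returns the kept plane farthest from a target point hovering above
    the cloud (rejecting e.g. an elevated sidewalk plane)."""
    z = np.array([0.0, 0.0, 1.0])
    # target point: mean x/y of the cloud, z strictly above every point
    target = np.array([cloud[:, 0].mean(), cloud[:, 1].mean(),
                       cloud[:, 2].max() + 1.0])
    best, best_dist = None, -1.0
    for n, d in planes:
        n = np.asarray(n, dtype=float)
        n = n / np.linalg.norm(n)
        tilt = np.degrees(np.arccos(abs(n @ z)))
        if tilt > max_tilt_deg:
            continue                      # not a near-horizontal plane
        dist = abs(n @ target + d)        # point-to-plane distance (unit normal)
        if dist > best_dist:
            best, best_dist = (n, d), dist
    return best
```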
4. The method of claim 2, wherein detecting a road edge in the road environment based on the third point cloud data comprises:
acquiring, from the third point cloud data, fourth point cloud data within the height range of the road surface and fifth point cloud data within the height range of the road edge side surface, wherein the road edge side surface is the side surface of the road edge perpendicular to the road surface;
acquiring a first target plane based on the fourth point cloud data;
obtaining a second target plane based on the fifth point cloud data;
and obtaining the road edge in the road environment based on the intersection line of the first target plane and the second target plane.
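The intersection line of the two target planes (claim 4) can be computed in closed form: its direction is the cross product of the two plane normals, and a point on it solves both plane equations. A sketch under the assumption that planes are given as n·x + d = 0, with hypothetical names:

```python
import numpy as np

def plane_intersection_line(n1, d1, n2, d2):
    """Intersection of planes n1.x + d1 = 0 and n2.x + d2 = 0.
    Returns (point, direction): `direction` is n1 x n2 normalized, and
    `point` is obtained by additionally requiring point . direction = 0,
    which pins down a unique point on the line."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    direction = np.cross(n1, n2)
    if np.linalg.norm(direction) < 1e-12:
        raise ValueError("planes are parallel; no unique intersection line")
    # solve the 3x3 system [n1; n2; direction] x = [-d1, -d2, 0]
    A = np.vstack([n1, n2, direction])
    b = np.array([-d1, -d2, 0.0])
    point = np.linalg.solve(A, b)
    return point, direction / np.linalg.norm(direction)
```

With the road surface as the first target plane and the road edge side surface as the second, the returned line is the detected road edge.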
5. The method of claim 4, wherein the obtaining a first target plane based on the fourth point cloud data comprises:
performing plane fitting on the fourth point cloud data to obtain a plurality of third candidate planes;
selecting, from the plurality of third candidate planes, a third candidate plane whose normal vector forms an included angle with the vertical coordinate axis of the spatial coordinate system within a second included angle range, as the first target plane;
the obtaining a second target plane based on the fifth point cloud data includes:
performing plane fitting on the fifth point cloud data to obtain a plurality of fourth candidate planes;
and selecting, from the plurality of fourth candidate planes, a fourth candidate plane whose normal vector forms an included angle with the vertical coordinate axis within a third included angle range, as the second target plane.
6. The method of claim 2, wherein the first transformation matrix is obtained by:
taking the pitch angle and the roll angle required to convert the calibrated road surface plane to coincide with the horizontal plane as variables, and solving an optimal solution of the pitch angle and the roll angle based on a particle swarm optimization algorithm;
and obtaining the first conversion matrix based on the optimal solution of the pitch angle and the roll angle.
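The particle swarm search of claim 6 can be sketched as follows. The cost function used here (variance of the vertical coordinates of the calibrated road surface points after applying a candidate pitch/roll), the Euler-angle convention, and all parameter values are illustrative assumptions, not the claimed implementation:

```python
import numpy as np

def rot(pitch, roll):
    """Rotation about the y axis (pitch) followed by the x axis (roll)."""
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return rx @ ry

def solve_pitch_roll(road_points, n_particles=30, iters=100, seed=0):
    """Particle swarm search for the pitch/roll that levels the calibrated
    road surface points, i.e. minimises the spread of their z coordinates."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-0.3, 0.3, (n_particles, 2))  # (pitch, roll) in radians
    vel = np.zeros_like(pos)

    def cost(p):
        z = (rot(*p) @ road_points.T)[2]
        return np.mean((z - z.mean()) ** 2)

    pbest = pos.copy()
    pbest_c = np.array([cost(p) for p in pos])
    gbest = pbest[pbest_c.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        # standard PSO update: inertia + cognitive + social terms
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        c = np.array([cost(p) for p in pos])
        better = c < pbest_c
        pbest[better], pbest_c[better] = pos[better], c[better]
        gbest = pbest[pbest_c.argmin()].copy()
    return gbest  # optimal (pitch, roll)
```

The optimal pitch and roll are then plugged into the corresponding rotation matrices to assemble the first conversion matrix.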
7. The method according to any one of claims 1 to 6, wherein extracting the candidate road edges from the image of the road environment comprises:
obtaining a second image including a road surface area and a non-road surface area based on the image of the road environment;
dividing the second image into two sub-images in the row direction of the second image to obtain two regions of interest;
and carrying out edge detection in each region of interest, and extracting the candidate road edges from the edge detection result.
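The region-of-interest split and per-region edge detection of claim 7 can be illustrated with a toy label-transition detector standing in for a real edge detector such as Canny; the input is assumed to be a road/non-road segmentation image (the "second image"), and all names are hypothetical:

```python
import numpy as np

def split_rois(seg_image):
    """Split the segmented road image into left and right halves along the
    row direction, giving one region of interest per road edge."""
    h, w = seg_image.shape
    return seg_image[:, : w // 2], seg_image[:, w // 2 :]

def column_edges(roi):
    """Toy edge detector: mark pixels where the road/non-road label changes
    along each row (a stand-in for e.g. Canny edge detection)."""
    diff = np.abs(np.diff(roi.astype(int), axis=1))
    edges = np.zeros_like(roi, dtype=bool)
    edges[:, 1:] = diff > 0
    return edges
```

Working in two regions of interest means each one contains at most one road edge, which simplifies extracting the candidate road edge from the edge detection result.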
8. The method according to any one of claims 1 to 6, wherein the image of the road environment is acquired by an image acquisition device, and the projecting the point cloud data of the road environment to a pixel coordinate system includes:
projecting the point cloud data of the road environment to the pixel coordinate system by using a second conversion matrix and intrinsic parameters of the image acquisition device, wherein the second conversion matrix is the conversion matrix between the coordinate system of the image acquisition device and the spatial coordinate system in which the point cloud data is located, the second conversion matrix being obtained by solving with a point cloud data sample and an image sample, both sides of the image sample including calibration plates.
9. A road edge detection system, characterized by comprising:
the image acquisition device is used for acquiring images of road environments;
the laser radar is used for collecting point cloud data of the road environment;
the controller, the image acquisition device and the laser radar are respectively connected with the controller, and the controller is used for executing the road edge detection method according to any one of claims 1 to 8.
10. A vehicle comprising a body, wherein the body is provided with the road edge detection system of claim 9.
CN202310029589.7A 2023-01-09 2023-01-09 Road edge detection method, system and vehicle Pending CN116259023A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310029589.7A CN116259023A (en) 2023-01-09 2023-01-09 Road edge detection method, system and vehicle


Publications (1)

Publication Number Publication Date
CN116259023A true CN116259023A (en) 2023-06-13

Family

ID=86678614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310029589.7A Pending CN116259023A (en) 2023-01-09 2023-01-09 Road edge detection method, system and vehicle

Country Status (1)

Country Link
CN (1) CN116259023A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination