WO2022067647A1 - Method and device for determining road surface elements

Method and device for determining road surface elements

Info

Publication number
WO2022067647A1
Authority
WO
WIPO (PCT)
Prior art keywords
road surface
point
coordinates
point cloud
candidate
Prior art date
Application number
PCT/CN2020/119343
Other languages
English (en)
French (fr)
Inventor
湛逸飞
果晨阳
支晶晶
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to PCT/CN2020/119343
Priority to CN202080005143.5A (CN112740225B)
Publication of WO2022067647A1

Classifications

    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06F18/23 - Pattern recognition; Clustering techniques
    • G06F18/25 - Pattern recognition; Fusion techniques
    • G06T17/05 - Three-dimensional [3D] modelling; Geographic models
    • G06T7/64 - Image analysis; Analysis of geometric attributes of convexity or concavity
    • G06T7/73 - Image analysis; Determining position or orientation of objects or cameras using feature-based methods
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06T2207/10028 - Image acquisition modality; Range image; Depth image; 3D point clouds

Definitions

  • the present application relates to the field of intelligent driving, and in particular, to a method and device for determining road surface elements.
  • the field of intelligent driving (including assisted driving and unmanned driving) is developing rapidly.
  • the accuracy of real-time environment perception and the accuracy of electronic maps are particularly important for the safety of intelligent driving.
  • Real-time environmental perception includes the identification of various pavement elements on the road surface, and high-precision electronic maps also need to accurately identify various pavement elements on the road surface.
  • the pavement elements may include lane lines, stop lines, pavement signs, arrows, text, and the like.
  • pavement elements are necessary elements in real-time environmental perception and high-precision electronic map production.
  • the present application provides a pavement element determination method and device for accurately determining pavement elements.
  • a method for determining road surface elements, including: acquiring a laser point cloud of a road surface and an image of the road surface; extracting candidate laser point cloud points of at least one road surface element from the laser point cloud of the road surface, and determining the curved surface on which the road surface is located according to the laser point cloud; extracting candidate pixel points of the at least one road surface element from the image, and determining the coordinates of the projection points of those candidate pixel points on the curved surface; and determining the coordinates of at least one road surface element point according to the coordinates of the candidate laser point cloud points of the at least one road surface element and the coordinates of the projection points of the candidate pixel points of the at least one road surface element on the curved surface.
  • The at least one road surface element point corresponds to one or more road surface elements in the at least one road surface element.
  • In the above method, the curved surface on which the road surface lies is determined from the laser point cloud of the road surface, the candidate pixel points of the road surface elements in the road surface image are projected onto that curved surface, and the coordinates of the projection points are then fused with the coordinates of the candidate laser point cloud points of the road surface elements to obtain the coordinates of the road surface element points on the road surface.
  • In this way, road surface element extraction does not rely on the laser point cloud alone or on the road surface image alone; the two are fused, which improves the accuracy and reliability of the extraction of road surface elements.
  • Moreover, the pixels in the image are not used directly: the surface on which the road surface lies is first determined from the laser point cloud of the road surface, and the candidate pixel points of the road surface elements in the road surface image are projected onto that surface. The influence of the actual geometry of the road surface on extraction accuracy is therefore taken into account, which further improves the accuracy and reliability of road surface element extraction.
  • This embodiment can be applied to various road conditions, which improves its applicability.
  • The coordinates of the at least one road surface element point are a set of coordinates of road surface element points corresponding to at least one sampling space in the space corresponding to the road surface, wherein the first sampling space is any sampling space within the at least one sampling space;
  • determining the coordinates of the at least one road surface element point includes: determining, for the first sampling space, the coordinates of the corresponding road surface element point according to the coordinates and confidence of the candidate laser point cloud points and/or projection points within the first sampling space.
  • Since a sampling space usually includes multiple candidate laser point cloud points and projection points, these points can be integrated by the above method so that the coordinates of one road surface element point are obtained from their coordinates, which improves the accuracy and reliability of the road surface element point coordinates.
  • In addition, because only one road surface element point coordinate is calculated per sampling space for subsequent road surface element vector extraction, the computational complexity is also reduced.
  • The coordinates of the road surface element point corresponding to the first sampling space satisfy the following formula:
  • P_sample = (Σ_{i=1}^{n} C_Li · P_i + Σ_{i=1}^{m} C_Ci · p_i) / (Σ_{i=1}^{n} C_Li + Σ_{i=1}^{m} C_Ci)
  • where P_sample represents the coordinates of the road surface element point corresponding to the first sampling space, P_i is the coordinates of candidate laser point cloud point i, p_i is the coordinates of projection point i on the curved surface, C_Li is the confidence of candidate laser point cloud point i, C_Ci is the confidence of projection point i on the curved surface, n is the number of candidate laser point cloud points in the first sampling space, and m is the number of projection points on the curved surface in the first sampling space; both n and m are integers greater than or equal to 1.
  • Based on the above formula, a per-sampling-space weighting can be implemented: using the confidence of the candidate laser point cloud points of the road surface elements and the confidence of the projection points of the candidate pixel points on the road surface, the coordinates of the candidate laser point cloud points and projection points in a sampling space are combined to obtain the coordinates of the road surface element point corresponding to that sampling space.
  • In this way, the coordinates and confidence of each candidate laser point cloud point and of each projection point in a sampling space are comprehensively considered, which improves the accuracy and reliability of the road surface element point coordinates.
  • The confidence of a candidate laser point cloud point of the at least one road surface element satisfies the following formula:
  • C_Li = W_L1 · D_i + W_L2 · I_i
  • where C_Li is the confidence of candidate laser point cloud point i, D_i is the neighborhood density of candidate laser point cloud point i, I_i is the neighborhood relative reflectivity of candidate laser point cloud point i, W_L1 is the confidence weight coefficient of D_i, and W_L2 is the confidence weight coefficient of I_i.
  • Calculating the confidence of a candidate laser point cloud point with the above formula comprehensively considers the neighborhood density and the neighborhood relative reflectivity of the point, so that the confidence is determined from the characteristics of the laser point cloud points of the road surface elements and the result is more accurate.
  • The confidence of the projection point of a candidate pixel point of the at least one road surface element on the curved surface satisfies the following formula:
  • C_Ci = W_C1 · c_i + W_C2 · (1/L_i)
  • where C_Ci is the confidence of projection point i on the curved surface, c_i is the confidence of candidate pixel point i of the road surface element in the image, L_i is the distance between the coordinates of projection point i and the origin of the camera device, W_C1 is the confidence weight coefficient of c_i, and W_C2 is the confidence weight coefficient of 1/L_i.
  • Calculating the confidence of the projection point of a candidate pixel point on the road surface with the above formula comprehensively considers the confidence of the candidate pixel point of the road surface element in the image and the distance between the projection point and the origin of the camera device, so that the confidence is determined from the characteristics of the pixel points of the road surface elements and the result is more accurate.
  • Before the coordinates of the at least one road surface element point are determined according to the coordinates of the candidate laser point cloud points of the at least one road surface element and the coordinates of the projection points of the candidate pixel points of the at least one road surface element on the curved surface, the method further includes: clustering the candidate laser point cloud points of the at least one road surface element and the at least one projection point on the curved surface, to obtain the road surface element to which each candidate laser point cloud point and each projection point on the curved surface belongs.
  • Determining the coordinates of the at least one road surface element point according to the coordinates of the candidate laser point cloud points of the at least one road surface element and the coordinates of the projection points of the candidate pixel points of the at least one road surface element on the curved surface then includes: determining the coordinates of the road surface element points of a first road surface element according to the coordinates of the candidate laser point cloud points and the projection points that belong to the first road surface element, where the first road surface element is any road surface element obtained by the clustering.
  • In this way, the category of road surface element to which each candidate laser point cloud point, and each candidate pixel point via its projection point, belongs can be determined.
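As an illustration only (the application does not specify a clustering algorithm), a density-based method such as DBSCAN could group the fused candidate points, expressed in a common coordinate system, into individual road surface elements; the function name, `eps`, and `min_samples` values below are assumptions for the sketch, not parameters from the application.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_candidates(points_xyz, eps=0.3, min_samples=10):
    """Group candidate points (N x 3 array, metres) into road-surface elements.

    eps and min_samples are illustrative values that would be tuned to the
    density of the actual lidar candidates and projected pixel candidates.
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_xyz)
    # label == -1 marks noise points that belong to no element
    return {lab: points_xyz[labels == lab] for lab in set(labels) if lab != -1}
```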
  • Determining the curved surface on which the road surface is located according to the laser point cloud of the road surface includes: generating a gridded curved surface on which the road surface is located according to the coordinates of the point cloud points in the laser point cloud of the road surface;
  • determining, according to the coordinates of the candidate pixel points of the at least one road surface element and the curved surface, the coordinates of the projection points of the candidate pixel points of the at least one road surface element on the curved surface then includes: determining the coordinates of the projection points of the candidate pixel points of the at least one road surface element on the grids of the gridded curved surface.
  • Using the gridded curved surface can reduce the computational overhead of the system.
  • the method further includes: determining or outputting the information of the one or more road surface elements according to the coordinates of the at least one road surface element point.
  • Acquiring the laser point cloud of the road surface and the image of the road surface includes: acquiring the laser point cloud of the road surface from at least one lidar and the image of the road surface from at least one camera device.
  • the point cloud point coordinates in the laser point cloud and the pixel point coordinates in the image belong to the same coordinate system.
  • the road surface elements include at least one of the following: lane lines, stop lines, road signs, arrows, and words.
  • a pavement element determination device including:
  • an acquisition unit for acquiring the laser point cloud of the road surface and the image of the road surface
  • a processing unit configured to extract a candidate laser point cloud point of at least one road surface element in the laser point cloud of the road surface, and determine the curved surface on which the road surface is located according to the laser point cloud of the road surface;
  • the processing unit is further configured to: extract candidate pixel points of at least one road surface element in the image of the road surface, and determine, according to the coordinates of the candidate pixel points of the at least one road surface element and the curved surface, the coordinates of the projection points of those candidate pixel points on the curved surface; and determine the coordinates of at least one road surface element point according to the coordinates of the candidate laser point cloud points of the at least one road surface element and the coordinates of the projection points of the candidate pixel points of the at least one road surface element on the curved surface, the at least one road surface element point corresponding to one or more road surface elements in the at least one road surface element.
  • the coordinates of the at least one pavement element point are a set of coordinates of pavement element points corresponding to at least one sampling space in the space corresponding to the pavement, wherein the first sampling space is the Any sampling space in at least one sampling space corresponding to the road surface;
  • the processing unit is specifically configured to: determine, for the first sampling space, the coordinates of the corresponding road surface element point according to the coordinates and confidence of the candidate laser point cloud points and/or projection points within the first sampling space.
  • The coordinates of the road surface element point corresponding to the first sampling space satisfy the following formula:
  • P_sample = (Σ_{i=1}^{n} C_Li · P_i + Σ_{i=1}^{m} C_Ci · p_i) / (Σ_{i=1}^{n} C_Li + Σ_{i=1}^{m} C_Ci)
  • where P_sample represents the coordinates of the road surface element point corresponding to the first sampling space, P_i is the coordinates of candidate laser point cloud point i, p_i is the coordinates of projection point i on the curved surface, C_Li is the confidence of candidate laser point cloud point i, C_Ci is the confidence of projection point i on the curved surface, n is the number of candidate laser point cloud points in the first sampling space, and m is the number of projection points on the curved surface in the first sampling space; both n and m are integers greater than or equal to 1.
  • The confidence of a candidate laser point cloud point of the at least one road surface element satisfies the following formula:
  • C_Li = W_L1 · D_i + W_L2 · I_i
  • where C_Li is the confidence of candidate laser point cloud point i, D_i is the neighborhood density of candidate laser point cloud point i, I_i is the neighborhood relative reflectivity of candidate laser point cloud point i, W_L1 is the confidence weight coefficient of D_i, and W_L2 is the confidence weight coefficient of I_i.
  • The confidence of the projection point of a candidate pixel point of the at least one road surface element on the curved surface satisfies the following formula:
  • C_Ci = W_C1 · c_i + W_C2 · (1/L_i)
  • where C_Ci is the confidence of projection point i on the curved surface, c_i is the confidence of candidate pixel point i of the road surface element in the image, L_i is the distance between the coordinates of projection point i and the origin of the camera device, W_C1 is the confidence weight coefficient of c_i, and W_C2 is the confidence weight coefficient of 1/L_i.
  • the processing unit is further configured to: cluster the candidate laser point cloud points of the at least one road surface element and the at least one projection point on the curved surface, to obtain the road surface element to which each candidate laser point cloud point and each projection point on the curved surface belongs;
  • determining the coordinates of the at least one road surface element point according to the coordinates of the candidate laser point cloud points of the at least one road surface element and the coordinates of the projection points of the candidate pixel points of the at least one road surface element on the curved surface then includes: determining the coordinates of the road surface element points of a first road surface element according to the coordinates of the candidate laser point cloud points and the projection points that belong to the first road surface element, where the first road surface element is any road surface element obtained by the clustering.
  • the processing unit is specifically configured to: generate a gridded curved surface on which the road surface is located according to the coordinates of the point cloud points in the laser point cloud of the road surface; and determine, according to the coordinates of the candidate pixel points of the at least one road surface element and the gridded curved surface, the coordinates of the projection points of the candidate pixel points of the at least one road surface element on the grids of the gridded curved surface.
  • the processing unit is further configured to: determine or output the information of the one or more road surface elements according to the coordinates of the at least one road surface element point.
  • the acquiring unit is specifically configured to: acquire a laser point cloud of the road surface from at least one lidar and an image of the road surface from at least one camera device.
  • the point cloud point coordinates in the laser point cloud and the pixel point coordinates in the image belong to the same coordinate system.
  • the road surface elements include at least one of the following: lane lines, stop lines, road signs, arrows, and words.
  • a road surface element determination device, comprising at least one processor and an interface, wherein the interface is configured to provide program instructions or data for the at least one processor, and the at least one processor is configured to execute the program instructions to implement the method according to any one of the above-mentioned first aspects.
  • an in-vehicle system including the device according to any one of the above-mentioned second aspects.
  • the vehicle-mounted system further includes at least one laser radar and at least one camera device.
  • a computer storage medium on which computer programs or instructions are stored, and when the computer programs or instructions are executed by at least one processor, implement the method according to any one of the above-mentioned first aspects.
  • FIG. 1 is a system architecture to which an embodiment of the application is applicable
  • FIG. 2 is a block diagram of a method for determining pavement elements according to an embodiment of the present application
  • FIG. 3 is a schematic diagram of the principle of extracting a candidate laser point cloud point set of road surface elements from a laser point cloud of a road surface in an embodiment of the present application;
  • FIG. 4 is a schematic diagram of a mesh curved surface obtained by fitting a laser point cloud of a road surface in an embodiment of the present application
  • FIG. 5 is a schematic flowchart of a method for determining pavement elements in an embodiment of the present application
  • FIG. 6 is a schematic structural diagram of a pavement element determination device in an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a communication device according to an embodiment of the present application.
  • the pavement elements refer to various markings on the pavement used for traffic guidance.
  • the pavement elements may include at least one of the following: lane lines, stop lines, pavement signs, arrows, words, and the like.
  • the pavement elements may be painted on the pavement using a material such as paint.
  • The set of point data on the surface of an object obtained by a measuring device can be called a point cloud; a point cloud is a massive collection of points characterizing the surface of the target object.
  • the point cloud measured based on the principle of laser measurement is called laser point cloud.
  • the laser point cloud can include three-dimensional coordinates (X, Y, Z) and laser reflection intensity (Intensity). Since the pavement elements are painted on the pavement with materials such as paint, the reflection intensity of the laser light is different from that of the pavement. Therefore, the pavement elements and the pavement can be distinguished by the reflection intensity.
  • The point cloud of a road surface element includes information such as the spatial coordinates and laser reflection intensity of the sampling points on the outer surface of the road surface element.
  • The world coordinate system is the absolute coordinate system of the system: before a user coordinate system is established, the positions of all points are determined with respect to the origin of this coordinate system. For example, since a camera can be placed anywhere in the environment, a reference coordinate system is selected in the environment to describe the position of the camera and of any other object in the environment; this reference coordinate system is called the world coordinate system.
  • The user coordinate system is a coordinate system with the center of a specified object as its origin; for example, the vehicle body coordinate system is the coordinate system with the center of the vehicle body as the origin.
  • Conversion between different coordinate systems may be performed based on conversion parameters between coordinate systems, and the conversion parameters may include a rotation matrix and a translation vector.
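As a minimal illustration of such a conversion (a sketch, not code from the application; the function name and the example rotation/translation values are invented), applying a rotation matrix and translation vector to a set of points:

```python
import numpy as np

def transform_points(points, R, t):
    """Convert N x 3 points between coordinate systems: p_dst = R @ p_src + t."""
    return points @ R.T + t

# Example: vehicle-body coordinates -> world coordinates (values are made up)
R_wb = np.eye(3)                   # rotation of the body frame in the world frame
t_wb = np.array([10.0, 5.0, 0.0])  # position of the body origin in the world frame
p_world = transform_points(np.array([[1.0, 0.0, 0.0]]), R_wb, t_wb)
```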
  • The terms "system" and "network" in the embodiments of the present application may be used interchangeably.
  • “Plurality” refers to two or more than two, and in view of this, “plurality” may also be understood as “at least two” in the embodiments of the present application.
  • “At least one” can be understood as one or more, such as one, two or more. For example, including at least one refers to including one, two or more, and does not limit which ones are included. For example, including at least one of A, B, and C, then including A, B, C, A and B, A and C, B and C, or A and B and C.
  • ordinal numbers such as “first” and “second” mentioned in the embodiments of the present application are used to distinguish multiple objects, and are not used to limit the order, sequence, priority, or importance of multiple objects.
  • High-precision electronic map acquisition systems or unmanned driving systems are usually equipped with lidar (light detection and ranging, LiDAR), camera devices, and high-precision positioning and attitude equipment.
  • the laser radar can obtain a three-dimensional laser point cloud with reflection intensity
  • the camera device can obtain color images
  • the high-precision positioning and attitude device can obtain the pose of the laser radar and the camera device.
  • Since the road surface elements are painted on the road surface with materials such as paint, their laser reflection intensity is higher than that of the road surface, so the road surface elements and the road surface can be distinguished by the reflection intensity of the laser point cloud.
  • the pavement elements are generally white, yellow, etc., and the pavement elements can be segmented by color information in the image.
  • In one approach, road surface elements are detected in a two-dimensional image to obtain the pixels belonging to the road surface elements, and the three-dimensional laser point cloud is then projected onto those road surface element pixels in the two-dimensional image to determine the road surface elements.
  • This approach relies strongly on the image-based detection results; under strong light, occlusion, or other conditions in which image detection performs poorly, elements may be missed or falsely extracted, lowering the accuracy of road surface element extraction.
  • In another approach, the laser point cloud is converted into a two-dimensional top view at a certain resolution (for example, a 10 cm × 10 cm sampling space corresponds to one pixel), with the pixel values derived from the laser reflection intensity, and the road surface features are extracted from this intensity map.
  • Road surface elements are also detected in the image, the detection results are transformed by inverse perspective projection into the same view as the two-dimensional intensity map, and the final candidate points of the road surface features are extracted.
  • This approach needs to convert the perspective image into a top-view image for road surface element detection, and the conversion can only assume a fixed ground height parameter; on sloped road sections the three-dimensional position cannot be estimated accurately and cannot be matched to the two-dimensional intensity map, which reduces the extraction accuracy of the road surface features.
  • the embodiments of the present application provide a method and device for determining road surface elements, which extract road surface elements by fusing laser point clouds and images, which can be applied to scenarios of high-precision electronic map production or applied to intelligent driving (such as assisted driving). or autonomous driving) context perception in scenarios.
  • The embodiments of the present application can make full use of both the laser point cloud and the image of the road surface to extract road surface elements, instead of relying on either source alone, so that the accuracy of road surface element extraction can be improved (that is, higher three-dimensional spatial position accuracy is achieved) and the robustness can be improved (that is, the effectiveness of road surface element extraction can be guaranteed under more complex environmental conditions).
  • FIG. 1 exemplarily shows a system to which the embodiments of the present application are applied.
  • The system includes at least one lidar (e.g., 101a, 101b, ..., 101n), at least one camera device (e.g., 102a, 102b, ..., 102m), at least one positioning and attitude device 103, and/or at least one storage device 104.
  • the system also includes a pavement element determination device 105 . Further, a time synchronization device 106 may also be included.
  • Lidars (101a, 101b,..., 101n), cameras (102a, 102b,..., 102m), and positioning and attitude equipment 103 are respectively connected to the storage device 104, and the storage device 104 is connected to the road surface element determination device 105.
  • the time synchronization device 106 is respectively connected with the lidar (101a, 101b,..., 101n), the camera device (102a, 102b,..., 102m) and the positioning and attitude device 103.
  • The time synchronization device 106 can provide timing for the lidars (101a, 101b, ..., 101n), the camera devices (102a, 102b, ..., 102m), and the positioning and attitude device 103.
  • the lidars (101a, 101b, . . . , 101n) are used to measure the road surface to obtain the laser point cloud of the road surface, and store the obtained laser point cloud in the storage device 104.
  • the camera devices ( 102 a , 102 b , . . . , 102 m ) can collect images of the road surface, and store the collected images of the road surface in the storage device 104 .
  • The camera devices (102a, 102b, ..., 102m) may be color cameras capable of capturing color images.
  • The positioning and attitude device 103 is used to detect information such as the position and attitude (pose) of the lidar and/or the camera devices, and store the detected pose information in the storage device 104.
  • the pavement element determining device 105 may determine pavement elements on the pavement according to data such as laser point clouds and images of the pavement stored in the storage device 104 .
  • The above system may be an in-vehicle system, and the above-mentioned lidar, camera device, positioning and attitude device, storage device, road surface element determination device, and the like may be installed on the vehicle and constitute an integral part of the in-vehicle system.
  • the number of laser radars and the number of camera devices are only examples, and the embodiments of the present application do not limit the number of laser radars and the number of camera devices. If the number of laser radars is multiple, the multiple laser radars may be arranged in different positions, and if the number of camera devices is multiple, the multiple camera devices may be arranged at different locations.
  • the above system can be applied to high-precision electronic map production scenarios, and can also be applied to intelligent driving scenarios (such as assisted driving scenarios or automatic driving scenarios).
  • the high-precision electronic map collecting vehicle equipped with the above-mentioned system collects road surface information, and determines the road surface elements on the road surface according to the collected road surface information, so as to produce a high-precision electronic map.
  • the time synchronization device can provide timing for the laser radar, the camera device, and the positioning and attitude determination device in the above-mentioned system.
  • the road surface laser point cloud collected by the lidar, the road surface image collected by the camera device, and the position and attitude information of the laser radar and the camera device detected by the positioning and attitude device are stored in the storage device.
  • the pavement element determination device can first calibrate and unify the laser point cloud collected by the lidar and the image collected by the camera device into the same coordinate system according to the pose information detected by the positioning pose device. For example, it is unified into the world coordinate system, and then the determination process of road surface elements is carried out according to the laser point cloud and image unified into the same coordinate system. For example, if the point cloud point coordinates in the laser point cloud of the road surface and the pixel point coordinates in the image are unified to the world coordinate system in advance, the finally extracted road surface elements are high-precision electronic map vector data in the world coordinate system.
  • the intelligent vehicle equipped with the above system collects road surface information in real time, and determines the road surface elements on the road surface according to the collected road surface information for positioning or vehicle control during the intelligent driving process.
  • the road surface laser point cloud collected by the laser radar, the road surface image collected by the camera device, and the position and attitude information of the laser radar and the camera device detected by the positioning and attitude device are stored in the storage device.
  • the road element determination device can first calibrate and unify the laser point cloud collected by the lidar and the image collected by the camera device into the same coordinate system according to the pose information detected by the positioning pose device.
  • The extraction process of road surface elements is then carried out on the laser point cloud and image unified into the same coordinate system. If the point cloud coordinates in the laser point cloud of the road surface and the pixel coordinates in the image are unified into the vehicle body coordinate system in advance, the finally extracted road surface elements are vector data of the road surface elements around the vehicle body in the vehicle body coordinate system.
  • FIG. 2 exemplarily shows a block diagram of a method for determining a pavement element provided by an embodiment of the present application.
  • the method can be implemented by the above-mentioned system, or implemented by the device for determining road surface elements in the above-mentioned system.
  • the method can be applied to high-precision electronic map production scenarios, and can also be applied to intelligent driving scenarios.
  • the process may include the following steps:
  • S201 Obtain a laser point cloud of a road surface and an image of the road surface.
  • the laser point cloud of the road surface can be collected by at least one laser radar.
  • the laser point cloud is a collection of point cloud points, and the information of each point cloud point in the collection includes the three-dimensional coordinates (X, Y, Z) of the point cloud point and the laser reflection intensity.
  • the image of the road surface can be acquired by at least one camera device.
  • the image may be a color image.
  • At least one lidar and at least one camera device can be installed on the vehicle to obtain the data collected by the lidar and the camera device installed in the front of the vehicle.
  • the data collection area includes the road in front of the vehicle, so the laser point cloud and image of the road in front of the vehicle can be obtained.
  • the laser point cloud collected by the radar device and the image data collected by the camera device can be correlated based on the trajectory data to obtain the laser point cloud and image of the same road surface.
  • the trajectory data includes trajectory data obtained based on a satellite positioning system, or trajectory data obtained based on an inertial measurement unit (Inertial measurement unit, IMU).
  • the satellite positioning system includes but is not limited to a global positioning system (Global Positioning System, GPS), a global navigation satellite system (Global Navigation Satellite System, GNSS), a Beidou satellite navigation system, and the like.
  • An optional processing procedure is as follows: for an area, on the one hand, the laser point cloud obtained while the lidar travels through the area during a certain period of time (referred to here as the first period for convenience of description) is obtained and, combined with the trajectory data, converted into a specified coordinate system (such as the world coordinate system); on the other hand, for the same area, the image data obtained while the camera device travels through the area during the same period (that is, the first period) is obtained and, combined with the trajectory data, converted into the same specified coordinate system.
  • The image of the road surface may include a single frame or multiple frames captured over a continuous period of time. For multiple frames, the road surface elements can be determined frame by frame using the method provided in this embodiment of the present application, or a tracking algorithm can be applied to the multi-frame images to obtain an image covering a certain space (for example, a 10-meter-long stretch of road surface captured during the continuous period) for use in the subsequent steps of determining road surface elements.
  • data preprocessing operations may also be performed on the laser point cloud and/or the image, so as to retain only the data related to the road surface and remove other interfering data, so that The extraction of pavement elements is carried out in the follow-up.
  • At least one of the following preprocessing can be performed on the laser point cloud:
  • Road surface segmentation: retaining the point cloud points belonging to the road surface and removing point cloud points that do not belong to the road surface (such as the sky, roadside buildings, and roadside facilities);
  • Obstacle removal: removing point cloud points belonging to obstacles on the road surface (such as vehicles, pedestrians, non-motor vehicles, guardrails, and isolation belts).
  • the above data preprocessing method can adopt a rule-based method. For example, according to the pre-given conditions that the point cloud points belonging to the road surface should meet, the point cloud points in the laser point cloud that meet the above conditions can be confirmed as belonging to the road surface. The point cloud points that do not meet this condition will be removed.
  • the above data preprocessing method can also adopt a machine learning method.
  • a classifier can be trained in advance, the laser point cloud can be input into the classifier, and the output point cloud points belonging to the road surface can be obtained.
  • This classifier can be implemented with a neural network to identify point cloud points on and off the road.
  • The pose information (e.g., the pose of the lidar and/or the camera device) can be obtained according to the acquisition times of the laser point cloud and the image.
  • the same coordinate system may be a world coordinate system or a vehicle body coordinate system, or the like.
  • the specified coordinate system may be the world coordinate system, so that the high-precision map vector data in the world coordinate system can be obtained through this process.
  • the specified coordinate system may be a vehicle body coordinate system, so that the vector data of the road surface elements around the vehicle body in the vehicle body coordinate system can be obtained through this process.
  • The principle of converting the laser point cloud to the world coordinate system is similar to the principle, described below, of converting the laser point cloud to the earth-centered earth-fixed coordinate system.
  • Let the ground coordinates of a laser point cloud point P be (x_P, y_P, z_P), the coordinates of the center point of the laser scanning mirror be (X_L, Y_L, Z_L), and the measured distance components between the laser point cloud point P and the center point of the laser scanning mirror be (ΔX_P, ΔY_P, ΔZ_P). The ground coordinates of the laser point cloud point P in the earth-centered earth-fixed coordinate system E can then be expressed as:
  • (x_P, y_P, z_P) = (X_L + ΔX_P, Y_L + ΔY_P, Z_L + ΔZ_P)
  • The measured distance components (ΔX_P, ΔY_P, ΔZ_P) between the laser point cloud point P and the center point of the laser scanning mirror are obtained through a coordinate transformation process along the chain: instantaneous laser beam coordinate system SL -> laser scanning coordinate system T -> laser carrier coordinate system L -> IMU carrier coordinate system b -> local horizontal reference coordinate system g -> earth-centered earth-fixed coordinate system E; the transformation equation applies, in sequence, the rotation from each coordinate system in the chain to the next to the range vector measured in the instantaneous laser beam coordinate system.
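The application's transformation equation itself is not reproduced in this text; the sketch below, with placeholder rotation matrices for each step of the chain listed above, illustrates how such a chained transformation could be applied (all names and arguments are assumptions for illustration):

```python
import numpy as np

def georeference(range_vec_SL, R_T_SL, R_L_T, R_b_L, R_g_b, R_E_g, scanner_center_E):
    """Rotate a range vector measured in the instantaneous laser beam frame (SL)
    through the chain SL -> T -> L -> b -> g -> E and add the coordinates of the
    scanning-mirror centre in the earth-centred earth-fixed frame E.

    Each R_x_y argument is a 3x3 rotation matrix from frame y to frame x; these
    are placeholders, not values from the application.
    """
    delta_E = R_E_g @ R_g_b @ R_b_L @ R_L_T @ R_T_SL @ range_vec_SL
    return scanner_center_E + delta_E
```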
  • S202 Extract the candidate laser point cloud point of at least one road surface element in the laser point cloud of the road surface, and determine the curved surface on which the road surface is located according to the laser point cloud of the road surface.
  • The laser point cloud of the road surface contains point cloud points of various road surface elements, and the number of point cloud points corresponding to one road surface element is usually large, so candidate laser point cloud points of the road surface elements are extracted from the point cloud.
  • Since the reflection intensity of the marking material to the laser differs from that of the road surface material, the laser reflection intensity of the point cloud points can be used to extract candidate laser point cloud points of the road surface elements.
  • Road surface elements such as lane lines, stop lines, road signs, arrows, and characters are painted with paint, and the reflection intensity of laser light on the paint is greater than that on the bare road surface.
  • Therefore, the candidate laser point cloud points of the road surface elements are obtained from the laser reflection intensity of the point cloud points, that is, point cloud points with high reflection intensity (for example, reflection intensity greater than a set threshold) are selected from the laser point cloud of the road surface as the candidate laser point cloud points of the road surface elements.
  • FIG. 3 exemplarily shows a schematic diagram of the principle of extracting a set of candidate laser point cloud points of road surface elements from the laser point cloud of the road surface.
  • FIG. 3 shows the road surface elements on road surface 300 in the actual scene (the straight-ahead indication 301, the pedestrian crossing warning line 302, the U-turn indication 303, and the lane line 304), and it also shows the laser reflection intensity information of each point in the laser point cloud collected by the lidar, in which the laser reflection intensity of the points on the road surface elements is greater than that of the points on the road surface, so the points with higher laser reflection intensity can be determined as candidate laser point cloud points of the road surface elements.
  • In addition, some denoising operations can be performed to obtain more reliable results and improve the accuracy or reliability of the candidate laser point cloud points of the road surface elements; for example, outliers can be removed from the set of candidate laser point cloud points preliminarily selected through the above operations.
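An illustrative sketch of the intensity thresholding and outlier removal described above (the threshold, neighbour count k, and standard-deviation ratio are assumed tuning values, not values from the application):

```python
import numpy as np
from scipy.spatial import cKDTree

def extract_candidates(points_xyz, intensity, intensity_thresh, k=8, std_ratio=2.0):
    """Keep road points whose laser reflection intensity exceeds a threshold,
    then drop statistical outliers (points far from their k nearest neighbours)."""
    cand = points_xyz[intensity > intensity_thresh]
    if len(cand) <= k:
        return cand
    dists, _ = cKDTree(cand).query(cand, k=k + 1)   # first column is the point itself
    mean_d = dists[:, 1:].mean(axis=1)
    keep = mean_d < mean_d.mean() + std_ratio * mean_d.std()
    return cand[keep]
```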
  • In step S202, further optionally, surface fitting may be performed according to the coordinates of the point cloud points in the laser point cloud of the road surface to obtain the curved surface on which the road surface is located.
  • the road surface can be meshed, and the laser point cloud points in each mesh can be used to fit a plane to obtain a meshed surface.
  • the scale of the grid can be adjusted according to the ODD (operational design domain).
  • the surface or meshed surface on which the road surface is located can be represented as a three-dimensional mathematical model, and the three-dimensional mathematical model can specifically be a surface equation, a grid plane equation, a digital elevation model (DEM), etc.
  • the representation of the surface or meshed surface is not limited.
  • FIG. 4 exemplarily shows a schematic diagram of a gridded curved surface, in which the dotted squares represent grids and each grid is approximately a plane.
  • Each grid is fitted to a plane, and each plane has a normal vector; for example, the normal vector of grid g6 is shown by arrow n_g6 in the figure, the normal vector of grid g7 is shown by arrow n_g7, and so on.
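A minimal sketch of per-grid plane fitting as described above (the cell size and the SVD-based least-squares fit are illustrative choices; the function name and return structure are assumptions):

```python
import numpy as np

def fit_grid_planes(points_xyz, cell=1.0):
    """Fit one plane per XY grid cell of the road-surface point cloud.

    Returns {(ix, iy): (centroid, unit_normal)}; the cell size (metres) would be
    adjusted to the operational design domain, as noted above.
    """
    keys = np.floor(points_xyz[:, :2] / cell).astype(int)
    planes = {}
    for key in set(map(tuple, keys)):
        pts = points_xyz[np.all(keys == key, axis=1)]
        if len(pts) < 3:
            continue
        centroid = pts.mean(axis=0)
        # the right singular vector with the smallest singular value is the plane normal
        _, _, vt = np.linalg.svd(pts - centroid)
        planes[key] = (centroid, vt[-1])
    return planes
```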
  • S203 Extract the candidate pixel point of at least one road surface element in the image of the road surface, and determine the coordinates of the projection point of the candidate pixel point of at least one road surface element on the curved surface according to the coordinates of the candidate pixel point of the at least one road surface element and the above-mentioned curved surface .
  • An image of a road surface contains a variety of road surface elements, and the number of pixels corresponding to one road surface element is usually large. In this embodiment of the present application, the candidate pixel points of each road surface element, and the coordinates of their projection points on the road surface, can be determined separately.
  • the road surface element pixel points may be segmented in the road surface image to obtain candidate pixel points of the road surface element in the image to form a set of candidate pixel points.
  • The term "segmentation" here does not limit the specific operation performed; what matters is that the set of candidate pixel points is finally formed.
  • A rule-based method can be used to extract the candidate pixel points of the road surface elements: according to pre-given conditions that pixels belonging to road surface elements should satisfy, the pixels in the image that satisfy those conditions are confirmed as candidate pixel points of the road surface elements, and the pixels that do not satisfy the conditions are removed.
  • a machine learning method can also be used to extract the candidate pixels of the pavement elements.
  • a classifier can be trained in advance, and the pavement image is input to the classifier to obtain the output candidate pixels of the pavement elements.
  • This classifier can be implemented by a neural network.
  • some other denoising operations can also be performed to obtain more reliable results, thereby improving the accuracy or reliability of the candidate pixel points of the road surface element.
  • the collection is morphologically filtered to remove candidate pixels that do not meet the requirements.
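A rule-based sketch of the candidate-pixel extraction and morphological filtering described above (the HSV thresholds and kernel size are assumptions; a trained classifier could equally be used, as noted earlier):

```python
import cv2
import numpy as np

def candidate_marking_pixels(bgr_image):
    """Segment white/yellow marking pixels by colour and clean the mask
    morphologically; returns candidate pixel coordinates (u, v)."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    white = cv2.inRange(hsv, np.array([0, 0, 180]), np.array([180, 40, 255]))
    yellow = cv2.inRange(hsv, np.array([15, 60, 120]), np.array([40, 255, 255]))
    mask = cv2.bitwise_or(white, yellow)
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove small speckles
    vs, us = np.nonzero(mask)
    return np.stack([us, vs], axis=1)
```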
  • In this way, the present application can maintain the accuracy of projecting the candidate pixel points of road surface elements from the two-dimensional road surface image into three-dimensional space over a larger range, so that the candidate laser point cloud points and the projection points of the candidate pixel points can be better matched and fused, improving the recognition accuracy of road surface elements.
  • For example, a projection accuracy with an error of less than 10 cm can be achieved within a range of 20 m laterally and 40 m longitudinally.
  • In step S203, the coordinates of the projection point of a candidate pixel point on the grids of the gridded curved surface can be determined according to the coordinates of the candidate pixel point and the gridded curved surface.
  • Let the pixel coordinates of a point in the image be p(u, v), its depth be s, the intrinsic parameter matrix be K (the conversion parameter matrix from the camera coordinate system to the pixel coordinate system), and the corresponding point in the camera coordinate system be p'(x, y, z); the extrinsic parameter matrix (the transformation parameters from the world coordinate system to the camera coordinate system) consists of R_CT (rotation matrix) and T_CT (translation vector), and the coordinates of the point in the world coordinate system are P = (X, Y, Z). The following conversion relationships hold:
  • s · (u, v, 1)^T = K · p',  p' = R_CT · P + T_CT
  • Combining these relationships with the equation of the grid plane on which the projection point lies, the only unknown is the depth s; once s is obtained, the three-dimensional coordinates of the point P can be determined.
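A sketch of back-projecting a candidate pixel onto a fitted grid plane, consistent with the conversion relationships above (function and variable names are illustrative; the plane would come from the grid-plane fit of step S202):

```python
import numpy as np

def project_pixel_to_plane(uv, K, R_ct, T_ct, plane_point, plane_normal):
    """Back-project pixel (u, v) onto a ground plane given by a point on it and
    its unit normal. R_ct, T_ct map world to camera coordinates
    (p_cam = R_ct @ P + T_ct); K is the intrinsic matrix."""
    cam_center = -R_ct.T @ T_ct                       # camera origin in the world frame
    ray_dir = R_ct.T @ np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    denom = plane_normal @ ray_dir
    if abs(denom) < 1e-9:
        return None                                   # ray parallel to the plane
    s = plane_normal @ (plane_point - cam_center) / denom   # s is the camera-frame depth
    return cam_center + s * ray_dir                   # projection point on the surface
```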
  • S204 Determine the coordinates of at least one pavement element point or determine at least one pavement element point according to the coordinates of the candidate laser point cloud point of the at least one pavement element and the coordinates of the projection point of the candidate pixel point of the at least one pavement element on the above-mentioned curved surface.
  • the at least one road surface element point corresponds to one or more road surface elements in the at least one road surface element.
  • the coordinates of the corresponding road surface element points may be determined according to the coordinates of each candidate laser point cloud point and the coordinates of the projection point of each candidate pixel point on the above-mentioned curved surface.
  • the points corresponding to the coordinates determined according to the coordinates of the candidate laser point cloud points and the coordinates of the projection points of the candidate pixel points on the above-mentioned curved surface are referred to as "road surface element points".
  • the pavement elements can be located according to the coordinates of the pavement element points, that is, the pavement elements can be identified.
  • the coordinates of the at least one road surface element point are a set of coordinates of the road surface element points corresponding to at least one sampling space in the space corresponding to the road surface, wherein the first sampling space is the at least one sampling space. Any sampling space in the space. Further optionally, the at least one sampling space is all sampling spaces in the space corresponding to the road surface. Specifically, the sampling space may be any sampling space in at least one space corresponding to the road surface.
  • The space corresponding to the road surface contains multiple sampling spaces. For example, if the size of the three-dimensional space corresponding to the road surface is X*Y*Z m³ and the size of one sampling space is x*y*z cm³, the three-dimensional space corresponding to the road surface contains multiple sampling spaces; this embodiment of the present application does not limit the size of a sampling space.
  • A sampling space includes at least one candidate laser point cloud point and/or at least one projection point; the numbers of such points, n and m, are positive integers.
  • Specifically, the coordinates and confidence of the candidate laser point cloud points of at least one road surface element in the first sampling space, and/or the coordinates and confidence of the projection points on the curved surface of the candidate pixel points of at least one road surface element in the first sampling space, may be used to obtain the coordinates of the road surface element point corresponding to the first sampling space.
  • a sampling space may include only at least one candidate laser point cloud point, or only at least one projection point, or may include both at least one candidate laser point cloud point and at least one projection point.
  • Case 1: if the first sampling space includes only at least one candidate laser point cloud point, the coordinates of the road surface element point corresponding to the first sampling space are obtained from the coordinates and confidence of the candidate laser point cloud points of at least one road surface element in the first sampling space;
  • Case 2: if the first sampling space includes only at least one projection point, the coordinates of the road surface element point corresponding to the first sampling space are obtained from the coordinates and confidence of the projection points on the curved surface of the candidate pixel points of at least one road surface element in the first sampling space;
  • Case 3: if the first sampling space includes at least one candidate laser point cloud point and at least one projection point, the coordinates of the road surface element point corresponding to the first sampling space are obtained from both the coordinates and confidence of the candidate laser point cloud points and the coordinates and confidence of the projection points in the first sampling space.
  • In implementation, the confidence of each candidate laser point cloud point and the confidence of the projection point of each candidate pixel point on the curved surface can be determined first, and then, for the first sampling space, the coordinates of the corresponding road surface element point are calculated from the coordinates and confidence of each candidate laser point cloud point and each projection point within that sampling space.
  • The coordinates of the road surface element points calculated by the above method do not depend solely on the laser point cloud collected by the lidar, nor solely on the image collected by the camera device; combining the laser point cloud and the image improves the reliability of the road surface element points and thus the accuracy of the road surface elements, and allows road surface element extraction in scenarios such as strong light, worn road markings, and dim light, reducing the requirements on the data collection environment.
  • In a specific implementation, each candidate laser point cloud point and each projection point of a candidate pixel point has its own confidence. The coordinates of the candidate laser point cloud points in a sampling space are weighted based on their confidence, the coordinates of the projection points in the sampling space are weighted based on their confidence, and the two are combined to obtain the coordinates of the road surface element point, which further improves the reliability of the road surface element points. For example, the coordinates of the road surface element point corresponding to a sampling space can be calculated as:
  • P_sample = (Σ_{i=1}^{n} C_Li · P_i + Σ_{i=1}^{m} C_Ci · p_i) / (Σ_{i=1}^{n} C_Li + Σ_{i=1}^{m} C_Ci)
  • where P_sample represents the coordinates of the road surface element point corresponding to the sampling space, P_i is the coordinates of candidate laser point cloud point i, p_i is the coordinates of projection point i on the curved surface, C_Li is the confidence of candidate laser point cloud point i, C_Ci is the confidence of projection point i on the curved surface, n is the number of candidate laser point cloud points in the sampling space, and m is the number of projection points on the curved surface in the sampling space; both n and m are integers greater than or equal to 1.
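A sketch of the per-sampling-space fusion implied by the formula above; the weighted-mean form follows the variable definitions, and the exact expression in the application may differ. Array names are assumptions.

```python
import numpy as np

def fuse_sampling_space(laser_pts, laser_conf, proj_pts, proj_conf):
    """Confidence-weighted fusion of the candidate points inside one sampling space.

    laser_pts: n x 3 candidate laser point coordinates (P_i), laser_conf: n confidences (C_Li)
    proj_pts:  m x 3 projection point coordinates (p_i),      proj_conf:  m confidences (C_Ci)
    """
    pts = np.vstack([laser_pts, proj_pts])            # (n + m) x 3 coordinates
    w = np.concatenate([laser_conf, proj_conf])       # n + m confidences
    return (w[:, None] * pts).sum(axis=0) / w.sum()   # coordinates of the element point
```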
  • The confidence of a candidate laser point cloud point can be calculated using the following formula:
  • C_Li = W_L1 · D_i + W_L2 · I_i
  • where C_Li is the confidence of candidate laser point cloud point i, D_i is the neighborhood density of candidate laser point cloud point i, I_i is the neighborhood relative reflectivity of candidate laser point cloud point i, W_L1 is the confidence weight coefficient of D_i, and W_L2 is the confidence weight coefficient of I_i, with W_L1 + W_L2 = 1.
  • W_L1 and W_L2 can be preset.
  • the settings of W L1 and W L2 are related to the performance indicators of lidar and can be configured according to empirical values.
  • For example, the statistical neighborhood is a region of fixed size and shape (the size can be set empirically to 15-30 cm according to the size of the road markings), such as a circle centered on the candidate point with radius r; the number of laser point cloud points within the circle is n_i, from which the neighborhood density can be determined.
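A sketch of the laser-point confidence computation; the normalisation of D_i and I_i to [0, 1], the radius value, and the equal weights are assumptions not specified above.

```python
import numpy as np
from scipy.spatial import cKDTree

def laser_point_confidence(points_xyz, intensity, r=0.2, w_l1=0.5, w_l2=0.5):
    """Sketch of C_Li = W_L1 * D_i + W_L2 * I_i.

    D_i is taken as the neighbour count within radius r normalised by the maximum
    count, and I_i as the point intensity relative to the mean intensity of its
    neighbourhood; both normalisations are illustrative choices.
    """
    tree = cKDTree(points_xyz)
    neighbours = tree.query_ball_point(points_xyz, r)
    counts = np.array([len(idx) for idx in neighbours], dtype=float)
    density = counts / counts.max()
    rel_refl = np.array([intensity[i] / (intensity[idx].mean() + 1e-9)
                         for i, idx in enumerate(neighbours)])
    rel_refl = np.clip(rel_refl / rel_refl.max(), 0.0, 1.0)
    return w_l1 * density + w_l2 * rel_refl
```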
  • The confidence of the projection point of a candidate pixel point on the curved surface can be calculated using the following formula:
  • C_Ci = W_C1 · c_i + W_C2 · (1/L_i)
  • where C_Ci is the confidence of projection point i on the curved surface, c_i is the confidence of candidate pixel point i of the road surface element in the image, and L_i is the distance between the coordinates of projection point i and the origin of the camera device.
  • The origin of the camera device refers to the origin of the camera coordinate system; L_i can therefore be understood as the distance between the coordinates of projection point i in the world coordinate system and the coordinates, in the world coordinate system, of the origin of the camera coordinate system at the moment the frame was captured.
  • W_C1 is the confidence weight coefficient of c_i and W_C2 is the confidence weight coefficient of 1/L_i, with W_C1 + W_C2 = 1.
  • W_C1 and W_C2 can be preset.
  • W C1 and W C2 can be preset.
  • the settings of W C1 and W C2 are related to the performance of the image detection algorithm and the performance index of the camera device, and can be configured according to empirical values.
  • the value range of the confidence is generally 0 to 1, and the closer to 1, the higher the confidence.
  • For example, a deep neural network algorithm can be used to obtain the confidence c_i of candidate pixel point i in the image, which is then used for the projection point of that candidate pixel point on the curved surface.
  • In other words, the candidate laser point cloud points with high reflection intensity segmented from the laser point cloud and the projection points on the curved surface of the candidate pixel points detected in the image as belonging to road surface elements are weighted to select the most reliable road surface element points.
  • For the point cloud points with high reflection intensity segmented from the laser point cloud, the neighborhood density and the neighborhood relative reflection intensity can be weighted; for the three-dimensional projection points of the pixels detected in the image as belonging to road surface elements, the detection confidence, the distance from the projection center, and the position within the image frame can be weighted. Finally, within each sampling space, the weighted results of all points are combined to obtain the final road surface element point.
  • the above process may further include the following steps: determining or outputting the information of the one or more road surface elements according to the coordinates of at least one road surface element point.
  • the information of the pavement elements may be pavement element vector data, and may specifically include coordinates, colors, types, and the like of pavement element points.
  • the pavement element can be determined, and the pavement element vector data can be output.
  • the color and type of the road surface elements can be obtained by detecting the image of the road surface.
  • fitting processing may be performed on the road surface element points belonging to the same category obtained in S204 to obtain vector data of the road surface element.
  • Taking a lane line as an example, coordinates of lane line points on the order of tens of thousands may be obtained in S204; the shape of the lane line can be obtained by fitting these points, and based on that shape the lane line can be represented with points on the order of hundreds, so the data amount of the lane line information can be reduced.
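As a hedged illustration of this fitting step, the sketch below reduces a dense set of lane line points to a sparse polyline by fitting low-order polynomials and resampling; the cubic order, the choice of the x axis as the driving direction, and the 1 m sampling step are assumptions made for the example, not values given in this application.

```python
import numpy as np

def vectorize_lane_line(points, step=1.0, degree=3):
    """Fit lane-line points (N, 3) and resample them into a sparse polyline."""
    # Parameterize by the dominant (driving) direction, assumed to be x here.
    order = np.argsort(points[:, 0])
    x, y, z = points[order, 0], points[order, 1], points[order, 2]
    fy = np.poly1d(np.polyfit(x, y, degree))       # lateral shape of the lane line
    fz = np.poly1d(np.polyfit(x, z, degree))       # height profile on the road surface
    xs = np.arange(x.min(), x.max(), step)         # e.g. one vertex per metre
    return np.stack([xs, fy(xs), fz(xs)], axis=1)  # compact vector representation
```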
  • FIG. 5 exemplarily shows a schematic diagram of a road surface element determination method provided by an embodiment of the present application.
  • the process may include laser point cloud processing, image processing and fusion processing.
  • Laser point cloud processing can include:
  • In 501, a laser point cloud from the lidar is acquired. Optionally, this step may adopt a rule-based or machine-learning-based method to remove point cloud points belonging to obstacles (such as vehicles, pedestrians, non-motor vehicles, guardrails, isolation belts, etc.) from the laser point cloud;
  • In 502, the laser point cloud of the road surface is acquired. Optionally, after obstacles are removed from the laser point cloud obtained in step 501, a rule-based or machine-learning-based method is further used to remove point cloud points on non-road surfaces (such as the sky, roadside facilities, etc.), thereby extracting the laser point cloud belonging to the road surface;
  • In 503, a gridded road surface curved surface is obtained based on the road-surface laser point cloud from 502. Specifically, grid plane fitting is performed on the laser point cloud of the road surface to obtain a three-dimensional mathematical model representing the actual road surface (such as grid plane equations), that is, the gridded road surface curved surface;
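A minimal sketch of steps 502-503, assuming the road points are bucketed into fixed-size square grid cells and a plane z = a·x + b·y + c is least-squares fitted per cell; the 2 m cell size and the plane parameterization are illustrative assumptions (the description only requires some grid plane equation per cell).

```python
import numpy as np

def fit_gridded_surface(road_points, cell=2.0):
    """Fit one plane z = a*x + b*y + c per grid cell of the road point cloud."""
    keys = np.floor(road_points[:, :2] / cell).astype(int)
    planes = {}
    for key in {tuple(k) for k in keys}:
        pts = road_points[np.all(keys == key, axis=1)]
        if len(pts) < 3:                       # not enough points to define a plane
            continue
        A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
        coeff, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)  # (a, b, c)
        planes[key] = coeff
    return planes                              # key: grid index, value: plane parameters
```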
  • In 504, point cloud points with high reflection intensity are separated from the laser point cloud of the road surface;
  • In 505, the candidate laser point cloud point set of the road surface elements is obtained. Specifically, a denoising process (for example, removing outliers) can be performed on the separated high-reflection-intensity point cloud points.
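Steps 504-505 could look like the sketch below: points whose reflection intensity exceeds a threshold are kept as candidates and isolated points are dropped as outliers. The intensity threshold, the neighborhood radius and the minimum-neighbor criterion are assumptions chosen for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def extract_candidate_laser_points(points, intensity, thresh=0.6, r=0.3, min_neighbors=3):
    """Keep high-reflectivity road points and remove isolated outliers (steps 504-505)."""
    candidates = points[intensity > thresh]     # step 504: high reflection intensity
    if len(candidates) == 0:
        return candidates
    tree = cKDTree(candidates)
    # Neighbor count within radius r (each point counts itself as one neighbor).
    counts = np.array([len(idx) for idx in tree.query_ball_point(candidates, r)])
    return candidates[counts >= min_neighbors]  # step 505: drop isolated outliers
```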
  • Image processing may include:
  • In 510, images from the camera device are acquired. Optionally, a rule-based or machine-learning-based method is used to detect the road surface element pixel points in each image, and a denoising operation (such as morphological filtering) is further performed on the detected pixel points to obtain a candidate pixel point set of the road surface elements;
  • In 511, the pixel points in the candidate pixel point set detected in the image are projected onto the road surface curved surface obtained in 503, giving the set of projection points of the candidate pixel points on that surface; intrinsic and extrinsic parameters for coordinate system conversion, as well as pose data (such as the pose of the camera device), are used when computing the projection point coordinates;
  • In 512, a fitting operation is performed, that is, the road surface element pixel points detected in each frame are fitted to the shape of the corresponding road surface element (for example, a lane line is fitted to a straight line or a curve), so that recognition results obviously inconsistent with the shape of the road surface element are removed (for example, for lane lines, results that are clearly not straight or whose orientation is clearly inconsistent with the lane direction are removed, to avoid identifying objects such as street light poles as lane lines);
  • In 513, tracking is performed; taking a lane line as an example, observations of the same lane line are grouped into one lane line by tracking it across image frames, based on the continuity between frames;
  • In 514, denoising is performed, that is, noise points are filtered out by a denoising algorithm, for example, points that are far away from all other points are removed using an outlier removal method.
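For illustration, the image-side denoising of steps 510 and 514 could be sketched as below, using morphological opening on the detected mask and a simple statistical outlier filter on the resulting points; OpenCV and the chosen kernel size are assumptions, since the description does not name specific tools.

```python
import cv2
import numpy as np
from scipy.spatial import cKDTree

def clean_detection_mask(mask, kernel_size=5):
    """Morphological opening to denoise a binary road-element pixel mask (step 510)."""
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    return cv2.morphologyEx(mask.astype(np.uint8), cv2.MORPH_OPEN, kernel)

def remove_far_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k neighbors is abnormally large (step 514)."""
    d, _ = cKDTree(points).query(points, k=k + 1)   # first neighbor is the point itself
    mean_d = d[:, 1:].mean(axis=1)
    keep = mean_d < mean_d.mean() + std_ratio * mean_d.std()
    return points[keep]
```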
  • The fusion process can include:
  • In 520, the candidate laser point cloud point set of the road surface elements obtained in 505 and the set of projection points, on the road surface curved surface, of the candidate pixel points obtained from the image and processed in 514 are fused using the item-by-item weighting method (for example, using the above formulas (7), (8) and (9)) to obtain the coordinates of the road surface element points;
  • In 521, the coordinates of the road surface element points are clustered to determine the road surface element category to which each road surface element point belongs (a clustering sketch is given after this list);
  • In 522, for each category of road surface element, the coordinates of the points belonging to that category are selected from the road surface element points obtained in 512, and combined with the colors of those points to obtain the vector data of that road surface element.
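The clustering of step 521 could be sketched with a density-based algorithm such as DBSCAN; the algorithm choice and its eps/min_samples parameters are assumptions, as the description does not specify a particular clustering method.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_road_elements(element_points, eps=0.5, min_samples=5):
    """Group fused road surface element points into per-element clusters (step 521)."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(element_points)
    clusters = {}
    for label in set(labels):
        if label == -1:                  # noise points not assigned to any element
            continue
        clusters[label] = element_points[labels == label]
    return clusters                      # one entry per detected road surface element
```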
  • any one or more of the above steps 501-522 may be an internal implementation of the product, not an independently executed step, but an intermediate process, depending on the algorithm and the implementation of the product. That is, there may be one or more optional steps in the above 501-522, which are not specifically limited in this application.
  • In some embodiments, the road surface elements included on the road surface can be determined by clustering, that is, the road surface element to which each candidate point belongs can be determined.
  • Specifically, the candidate laser point cloud points and the projection points of the candidate pixel points on the above-mentioned curved surface may be clustered to obtain the road surface element to which each candidate laser point cloud point and each projection point on the curved surface belongs.
  • Correspondingly, for the coordinates of the candidate laser point cloud points belonging to the same road surface element and the coordinates of the projection points of that element's candidate pixel points on the above-mentioned curved surface, calculation is performed according to the method shown in S204 to obtain the coordinates of each point of that road surface element, so that the road surface element can be determined, that is, its position and shape on the road surface can be determined.
  • Since in many practical scenarios the road surface is not a single plane, the mathematical model of the real road surface obtained from the laser point cloud in the above embodiments of the present application can, compared with using a fixed projection plane, maintain over a larger range the accuracy of projecting the candidate pixel points of road surface elements in the two-dimensional image into three-dimensional space, so that the candidate laser point cloud points and the projection points of the candidate pixels can be better matched and fused.
  • Because the embodiments of the present application maintain the accuracy of projecting two-dimensional pixel points into three-dimensional space over a larger range, the perception range of credible road surface elements is increased, thereby improving the safety of the intelligent driving system.
  • the laser point cloud and the image are comprehensively used to extract the pavement elements, instead of relying solely on a certain kind of data, thereby improving the robustness of the pavement element extraction.
  • For high-precision map production, because the embodiments of the present application can extract road surface elements in scenarios such as strong light, worn road surface elements, and dim light, the requirements on the data collection environment are reduced, which can improve the efficiency of high-precision map production and reduce production costs.
  • For assisted/automated driving systems, because the embodiments of the present application can also extract road surface elements in scenarios such as strong light, worn road surface elements, and dim light, the working domain of the environment perception system is enlarged, which improves the safety of the assisted/automated driving system.
  • An embodiment of the present application also provides a road surface element determination apparatus, which may have the structure shown in FIG. 6; the apparatus may be a road surface element determination device, or a chip or chip system capable of supporting such a device in implementing the above method.
  • the apparatus 600 may include: an acquisition unit 601 and a processing unit 602 .
  • an acquisition unit 601 configured to acquire a laser point cloud of a road surface and an image of the road surface
  • The processing unit 602 is configured to: extract candidate laser point cloud points of at least one road surface element in the laser point cloud of the road surface, and determine the curved surface on which the road surface is located according to the laser point cloud of the road surface; extract candidate pixel points of at least one road surface element in the image of the road surface, and determine the coordinates of the projection points of the candidate pixel points of the at least one road surface element on the curved surface according to the coordinates of those candidate pixel points and the curved surface; and determine the coordinates of at least one road surface element point according to the coordinates of the candidate laser point cloud points of the at least one road surface element and the coordinates of the projection points of the candidate pixel points of the at least one road surface element on the curved surface, where the at least one road surface element point corresponds to one or more road surface elements in the at least one road surface element.
  • In some embodiments, the coordinates of the at least one road surface element point are a set of coordinates of road surface element points corresponding to at least one sampling space in the space corresponding to the road surface, wherein the first sampling space is any sampling space within the at least one sampling space.
  • In some embodiments, the processing unit 602 can be specifically configured to:
  • determine the confidence of the candidate laser point cloud points of the at least one road surface element, and/or the confidence of the projection points, on the curved surface, of the candidate pixel points of at least one road surface element in the first sampling space; and
  • obtain the coordinates of the road surface element point corresponding to the first sampling space according to the coordinates and confidences of the candidate laser point cloud points of at least one road surface element in the first sampling space, and/or the coordinates and confidences of the projection points of the candidate pixel points of the at least one road surface element on the curved surface; wherein the first sampling space includes at least one candidate laser point cloud point and/or at least one projection point.
  • the coordinates of the road surface element points corresponding to the first sampling space determined by the processing unit 602 satisfy the foregoing formula (7).
  • the confidence level of the candidate laser point cloud point of the at least one road surface element determined by the processing unit 602 satisfies the foregoing formula (8).
  • the confidence level of the projection point of the candidate pixel point of the at least one road surface element determined by the processing unit 602 on the curved surface satisfies the foregoing formula (9).
  • In some embodiments, the processing unit 602 is also configured to: before determining the coordinates of at least one road surface element point according to the coordinates of the candidate laser point cloud points of the at least one road surface element and the coordinates of the projection points of the candidate pixel points of the at least one road surface element on the curved surface, cluster the candidate laser point cloud points of the at least one road surface element and the at least one projection point on the curved surface, to obtain the road surface element to which each candidate laser point cloud point of the at least one road surface element and each of the at least one projection point on the curved surface belongs.
  • The processing unit 602 may determine the coordinates of at least one road surface element point of a first road surface element according to the coordinates of the at least one candidate laser point cloud point belonging to the first road surface element and the coordinates of the projection points of the at least one candidate pixel point on the curved surface; wherein the first road surface element is any road surface element obtained by clustering.
  • In some embodiments, the processing unit 602 may be specifically configured to: generate a gridded curved surface on which the road surface is located according to the coordinates of the point cloud points in the laser point cloud of the road surface; and determine, according to the coordinates of the candidate pixel points of the at least one road surface element and the gridded curved surface, the coordinates of the projection points of the candidate pixel points of the at least one road surface element on the grids of the gridded curved surface.
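As a hedged sketch of this projection step, a pixel can be back-projected as a ray from the camera origin and intersected with the plane fitted for the grid cell it falls in; the intrinsic/extrinsic conventions below (world-to-camera rotation R and translation t) and the single-cell lookup are simplifying assumptions.

```python
import numpy as np

def project_pixel_to_plane(u, v, K, R, t, plane):
    """Intersect the viewing ray of pixel (u, v) with a grid plane z = a*x + b*y + c.

    K      : 3x3 camera intrinsic matrix
    R, t   : rotation and translation from world to camera coordinates
    plane  : (a, b, c) of the fitted grid plane
    Returns the 3D world coordinates of the projection point, or None.
    """
    cam_origin = -R.T @ t                                  # camera centre in the world frame
    ray_dir = R.T @ np.linalg.inv(K) @ np.array([u, v, 1.0])
    a, b, c = plane
    n = np.array([a, b, -1.0])                             # plane: a*x + b*y - z + c = 0
    denom = n @ ray_dir
    if abs(denom) < 1e-9:
        return None                                        # ray parallel to the plane
    s = -(n @ cam_origin + c) / denom                      # ray parameter at intersection
    return cam_origin + s * ray_dir
```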
  • the processing unit 602 is further configured to: determine or output the information of the one or more road surface elements according to the coordinates of the at least one road surface element point.
  • In some embodiments, the acquiring unit 601 may be specifically configured to acquire the laser point cloud of the road surface from at least one lidar and the image of the road surface from at least one camera device; the point cloud point coordinates in the laser point cloud and the pixel point coordinates in the image belong to the same coordinate system.
  • the pavement elements include at least one of the following: lane lines, stop lines, pavement markings, arrows, text.
  • an embodiment of the present application also provides a communication device.
  • the communication device may have a structure as shown in FIG. 7 .
  • The communication device may be a road surface element determination device, or a chip or chip system capable of supporting the road surface element determination device in implementing the above method.
  • The communication apparatus 700 shown in FIG. 7 may include at least one processor 702, where the at least one processor 702 is configured to be coupled with a memory, and to read and execute instructions in the memory to implement the steps involving the road surface element determination device in the methods provided in the embodiments of the present application.
  • the communication apparatus 700 may further include at least one interface 703 for providing program instructions or data for the at least one processor.
  • the interface 703 in the communication device 700 can be used to implement the functions of the above obtaining unit 601.
  • For example, the interface 703 can be used by the communication apparatus 700 to perform the steps of obtaining information in the method shown in FIG. 2 or FIG. 5.
  • The processor 702 can be used to implement the functions of the above processing unit 602; for example, it can be used by the communication apparatus 700 to execute the steps of determining the road surface elements in the method shown in FIG. 2 or FIG. 5.
  • the interface 703 may be used to support the communication device 700 to communicate.
  • Optionally, the communication apparatus 700 may further include a memory 704 in which computer programs and instructions are stored; the memory 704 may be coupled with the processor 702 and/or the interface 703, and is used to support the processor 702 in calling the computer programs and instructions in the memory 704 to implement the steps involved in the methods provided in the embodiments of the present application.
  • In addition, the memory 704 may also be used to store the data involved in the method embodiments of the present application, for example, to store the data and instructions necessary for the interface 703 to implement interaction, and/or to store the configuration information necessary for the communication apparatus 700 to execute the methods described in the embodiments of the present application.
  • The embodiments of the present application further provide a computer-readable storage medium on which instructions are stored; when these instructions are invoked and executed by a computer, the computer can complete the methods involved in the above method embodiments and in any possible design of the method embodiments.
  • the computer-readable storage medium is not limited, for example, it may be RAM (random-access memory, random access memory), ROM (read-only memory, read-only memory), etc.
  • the present application further provides a computer program product, which, when invoked and executed by a computer, can complete the method embodiments and the methods involved in any possible designs of the above method embodiments.
  • The present application further provides a chip, which may include a processor and an interface circuit, and is used to implement the above method embodiments and any possible implementation manner of the method embodiments, where "coupled" means that two components are joined to each other directly or indirectly; such a joint may be fixed or movable, and may allow fluid, electricity, electrical signals or other types of signals to be communicated between the two components.
  • Based on the same concept, the present application also provides a terminal, which may include the units shown in FIG. 6, or at least one processor and an interface as shown in FIG. 7, and the terminal can implement the road surface element determination method provided in the embodiments of the present application.
  • the terminal may be a vehicle-mounted system, or a vehicle in automatic or intelligent driving, a drone, an unmanned transport vehicle, or a robot, or the like.
  • The above-mentioned embodiments may be implemented in whole or in part by software, hardware, firmware or any combination thereof.
  • When implemented using software, they may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, all or part of the processes or functions described in the embodiments of the present invention are generated.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center in a wired manner (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or a wireless manner (e.g., infrared, radio, microwave, etc.).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that includes an integration of one or more available media.
  • the usable media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVD), or semiconductor media (e.g., Solid State Disk (SSD)), and the like.
  • a general-purpose processor may be a microprocessor, or alternatively, the general-purpose processor may be any conventional processor, controller, microcontroller, or state machine.
  • A processor may also be implemented by a combination of computing devices, such as a digital signal processor and a microprocessor, multiple microprocessors, one or more microprocessors combined with a digital signal processor core, or any other similar configuration.
  • a software unit may be stored in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, removable disk, CD-ROM, or any other form of storage medium known in the art.
  • a storage medium may be coupled to the processor such that the processor may read information from, and store information in, the storage medium.
  • the storage medium can also be integrated into the processor.
  • the processor and storage medium may be provided in the ASIC, and the ASIC may be provided in the terminal device. Alternatively, the processor and the storage medium may also be provided in different components in the terminal device.

Abstract

一种路面要素确定方法及装置,应用于智能汽车领域,具体涉及自动驾驶或者智能驾驶,例如电子地图获取。方法包括:获取路面的激光点云以及该路面的图像(S201);提取该路面的激光点云中至少一个路面要素的候选激光点云点,并根据该路面的激光点云确定所述路面所在的曲面(S202);提取该路面的图像中至少一个路面要素的候选像素点,根据所述至少一个路面要素的候选像素点的坐标以及所述曲面确定所述至少一个路面要素的候选像素点在所述曲面上的投影点的坐标(S203);根据所述至少一个路面要素的候选激光点云点的坐标以及所述至少一个路面要素的候选像素点在所述曲面上的投影点的坐标,确定至少一个路面要素点的坐标或确定至少一个路面要素点(S204)。可提高路面要素提取的准确性与可靠性。

Description

一种路面要素确定方法及装置 技术领域
本申请涉及智能驾驶领域,尤其涉及一种路面要素确定方法及装置。
背景技术
智能驾驶(包括辅助驾驶、无人驾驶)领域飞速发展,在智能驾驶系统中,实时环境感知的准确性以及电子地图的精度,对于智能驾驶的安全性尤为重要。
实时环境感知包括对路面上的各种路面要素的识别,高精度的电子地图中也需要准确标识出路面上的各种路面要素。其中,路面要素可包括车道线、停止线、路面标识、箭头、文字等。作为路网信息的重要组成部分,路面要素在实时环境感知与高精度电子地图制作中都是必要的要素。
因此,目前亟需一种能够准确确定路面要素的方法。
发明内容
本申请提供一种路面要素确定方法及装置,用以准确地确定路面要素。
第一方面,提供一种路面要素确定方法,包括:
获取路面的激光点云以及所述路面的图像;
提取所述路面的激光点云中至少一个路面要素的候选激光点云点,并根据所述路面的激光点云确定所述路面所在的曲面;
提取所述路面的图像中至少一个路面要素的候选像素点,根据所述至少一个路面要素的候选像素点的坐标以及所述曲面确定所述至少一个路面要素的候选像素点在所述曲面上的投影点的坐标;
根据所述至少一个路面要素的候选激光点云点的坐标以及所述至少一个路面要素的候选像素点在所述曲面上的投影点的坐标,确定至少一个路面要素点的坐标,所述至少一个路面要素点对应所述至少一个路面要素中的一个或多个路面要素。
上述实施例中,基于路面的激光点云确定该路面所在的曲面,并将路面图像中的路面要素的候选像素点投影到该曲面上,进而对该投影点的坐标以及路面要素的候选激光点云点的坐标进行融合,得到该路面上的路面要素点的坐标,一方面在确定路面要素点的坐标时,既不仅依赖于激光点云也不仅依赖于路面图像,而是将两者融合考虑,从而可以提高路面要素提取的准确性和可靠性,另一方面,提取路面要素的过程中,不是直接使用图像中的像素点,而是由于基于路面的激光点云确定该路面所在的曲面,并将路面图像中的路面要素的候选像素点投影到该曲面上,即考虑到了路面实际情况对路面要素提取的准确性的影响,从而进一步提高了路面要素提取的准确性和可靠性,并使得本实施例可以适用于各种路面情况,提高了本实施例的适用性。
在一种可能的实现方式中,所述至少一个路面要素点的坐标为所述路面对应的空间内的至少一个采样空间对应的路面要素点的坐标的集合,其中,第一采样空间为所述至少一个采样空间内的任一采样空间;
所述根据所述至少一个路面要素的候选激光点云点坐标以及所述至少一个路面要素的 候选像素点在所述曲面上的投影点坐标,确定至少一个路面要素点的坐标,包括:
确定所述至少一个路面要素的候选激光点云点的置信度,和/或,所述第一采样空间中的至少一个路面要素的候选像素点在所述曲面上的投影点的置信度;
根据所述第一采样空间中的至少一个路面要素的候选激光点云点坐标及置信度,以及所述至少一个路面要素的候选像素点在所述曲面上的投影点坐标及置信度,得到所述第一采样空间对应的路面要素点的坐标;其中,所述第一采样空间内包括至少一个候选激光点云点和/或至少一个投影点。
上述实施例中,基于路面要素的候选激光点云点的置信度,以及路面要素的候选像素点在路面曲面上的投影点的置信度,并结合一个采样空间中的候选激光点云点以及投影点坐标,计算得到该采样空间对应的路面要素点的坐标,由于一个采样空间通常包括多个候选激光点云点以及投影点,因此通过上述方式可以综合该采样空间中的这些候选激光点以及投影点的坐标得到一个路面要素点的坐标,从而提高路面要素点坐标的准确性和可靠性,另一方面,针对一个采样空间计算得到一个路面要素点的坐标以用于后续的路面要素矢量提取,还可以降低计算复杂度。
在一种可能的实现方式中,所述第一采样空间对应的路面要素点的坐标满足以下公式:
P sample=(∑ i=1~n C Li*P i+∑ i=1~m C Ci*p i)/(∑ i=1~n C Li+∑ i=1~m C Ci)
其中,P sample表示所述第一采样空间对应的路面要素点的坐标,P i为候选激光点云点i的坐标,p i为所述曲面上的投影点i的坐标,C Li为候选激光点云点i的置信度,C ci为所述曲面上的投影点i的置信度,n为所述第一采样空间内候选激光点云点的数量,m为所述第一采样空间内所述曲面上的投影点的数量,n和m均为大于或等于1的整数。
上述实施例中,基于上述公式可实现采样分项加权方式,基于路面要素的候选激光点云点的置信度,以及路面要素的候选像素点在路面曲面上的投影点的置信度,并结合一个采样空间中的候选激光点云点以及投影点坐标,计算得到该采样空间对应的路面要素点的坐标,该方式中,综合考虑到了一个采样空间中的每个候选激光点云点的坐标以及置信度,以及该采样空间中的每个投影点的坐标以及置信度,因而提高了路面要素点坐标的准确性和可靠性。
在一种可能的实现方式中,所述至少一个路面要素的候选激光点云点的置信度满足以下公式:
C Li=W L1*D i+W L2*I i
其中,C Li为候选激光点云点i的置信度,D i为候选激光点云点i的邻域密度,I i为候选激光点云点i的邻域相对反射率,W L1为D i的置信度权重系数,W L2为I i的置信度权重系数。
上述实施例中,基于上述公式计算候选激光点云点的置信度,可综合考虑候选激光点云点的邻域密度以及候选激光点云点的邻域相对反射率,从而可以基于路面要素的激光点云点的特征,来确定候选激光点云点的置信度,以使得结果更加准确。
在一种可能的实现方式中,所述至少一个路面要素的候选像素点在所述曲面上的投影点的置信度满足以下公式:
C Ci=W C1*c i+W C2/L i
其中,C Ci为所述曲面上的投影点i的置信度,c i为所述图像中路面要素的候选像素点i 的置信度,L i为所述投影点i的坐标与摄像装置原点的坐标间的距离,W C1为c i的置信度权重系数,W C2为1/L i的置信度权重系数。
上述实施例中,基于上述公式计算候选像素点在路面曲面上的投影点的置信度,可综合考虑图像中路面要素的候选像素点的置信度以及候选像素点在路面曲面上的投影点与所在摄像装置原点间的距离,从而可以基于路面要素的像素点的特征,来确定候选像素点在路面曲面上的投影点的置信度,以使得结果更加准确。
在一种可能的实现方式中,所述根据所述至少一个路面要素的候选激光点云点坐标以及所述至少一个路面要素的候选像素点在所述曲面上的投影点坐标,确定至少一个路面要素点的坐标之前,还包括:
对所述至少一个路面要素的候选激光点云点以及所述曲面上的至少一个投影点进行聚类,得到所述至少一个路面要素的候选激光点云点以及所述曲面上的至少一个投影点所属的路面要素;
所述根据所述至少一个路面要素的候选激光点云点坐标以及所述至少一个路面要素的候选像素点在所述曲面上的投影点坐标,确定至少一个路面要素点的坐标,包括:
根据第一路面要素所属的至少一个候选激光点云点的坐标以及至少一个候选像素点在所述曲面上的投影点的坐标,确定所述第一路面要素中至少一个路面要素点的坐标;其中,所述第一路面要素为聚类得到的任一路面要素。
上述实施例中,通过对所有候选激光点云点以及候选像素点在路面曲面上的投影点进行聚类,可以确定各候选激光点云点所属的路面要素的类别,以及各候选像素点在路面曲面上的投影点所属的路面要素的类别,从而可以基于属于同一路面要素类别的候选激光点云点和投影点,确定该类别的路面要素点。
在一种可能的实现方式中,所述根据所述路面的激光点云确定所述路面所在的曲面,包括:
根据所述路面的激光点云中点云点的坐标,生成所述路面所在的网格化曲面;
所述根据所述至少一个路面要素的候选像素点坐标以及所述曲面确定所述至少一个路面要素的候选像素点在所述曲面上的投影点坐标,包括:
根据所述至少一个路面要素的候选像素点坐标以及所述网格化曲面,确定所述至少一个路面要素的候选像素点在所述网格化曲面中的投影点坐标。
上述实施例中,采用网格化曲面,可以降低系统计算开销。
在一种可能的实现方式中,还包括:根据所述至少一个路面要素点的坐标,确定或者输出所述一个或多个路面要素的信息。
在一种可能的实现方式中,所述获取路面的激光点云以及所述路面的图像,包括:
获取来自于至少一个激光雷达的所述路面的激光点云,以及来自至少一个摄像装置的所述路面的图像;
所述激光点云中的点云点坐标和所述图像中的像素点坐标属于同一坐标系。
在一种可能的实现方式中,所述路面要素包括以下至少一项:车道线、停止线、路面标识、箭头、文字。
第二方面,提供一种路面要素确定装置,包括:
获取单元,用于获取路面的激光点云以及所述路面的图像;
处理单元,用于提取所述路面的激光点云中至少一个路面要素的候选激光点云点,并 根据所述路面的激光点云确定所述路面所在的曲面;
所述处理单元还用于提取所述路面的图像中至少一个路面要素的候选像素点,根据所述至少一个路面要素的候选像素点的坐标以及所述曲面确定所述至少一个路面要素的候选像素点在所述曲面上的投影点的坐标;以及,根据所述至少一个路面要素的候选激光点云点的坐标以及所述至少一个路面要素的候选像素点在所述曲面上的投影点的坐标,确定至少一个路面要素点的坐标,所述至少一个路面要素点对应所述至少一个路面要素中的一个或多个路面要素。
在一种可能的实现方式中,所述至少一个路面要素点的坐标为所述路面对应的空间内的至少一个采样空间对应的路面要素点的坐标的集合,其中,第一采样空间为所述路面对应的至少一个采样空间内的任一采样空间;
所述处理单元具体用于:
确定所述至少一个路面要素的候选激光点云点的置信度,和/或,所述第一采样空间中的至少一个路面要素的候选像素点在所述曲面上的投影点的置信度;
根据所述第一采样空间中的至少一个路面要素的候选激光点云点坐标及置信度,以及所述至少一个路面要素的候选像素点在所述曲面上的投影点坐标及置信度,得到所述第一采样空间对应的路面要素点的坐标;其中,所述第一采样空间内包括至少一个候选激光点云点和/或至少一个投影点。
在一种可能的实现方式中,所述第一采样空间对应的路面要素点的坐标满足以下公式:
P sample=(∑ i=1~n C Li*P i+∑ i=1~m C Ci*p i)/(∑ i=1~n C Li+∑ i=1~m C Ci)
其中,P sample表示所述第一采样空间对应的路面要素点的坐标,P i为候选激光点云点i的坐标,p i为所述曲面上的投影点i的坐标,C Li为候选激光点云点i的置信度,C ci为所述曲面上的投影点i的置信度,n为所述第一采样空间内候选激光点云点的数量,m为所述第一采样空间内所述曲面上的投影点的数量,n和m均为大于或等于1的整数。
在一种可能的实现方式中,所述至少一个路面要素的候选激光点云点的置信度满足以下公式:
C Li=W L1*D i+W L2*I i
其中,C Li为候选激光点云点i的置信度,D i为候选激光点云点i的邻域密度,I i为候选激光点云点i的邻域相对反射率,W L1为D i的置信度权重系数,W L2为I i的置信度权重系数。
在一种可能的实现方式中,所述至少一个路面要素的候选像素点在所述曲面上的投影点的置信度满足以下公式:
C Ci=W C1*c i+W C2/L i
其中,C Ci为所述曲面上的投影点i的置信度,c i为所述图像中路面要素的候选像素点i的置信度,L i为所述投影点i的坐标与摄像装置原点的坐标间的距离,W C1为c i的置信度权重系数,W C2为1/L i的置信度权重系数。
在一种可能的实现方式中,所述处理单元还用于:
根据所述至少一个路面要素的候选激光点云点坐标以及所述至少一个路面要素的候选像素点在所述曲面上的投影点坐标,确定至少一个路面要素点的坐标之前,对所述至少一个路面要素的候选激光点云点以及所述曲面上的至少一个投影点进行聚类,得到所述至少 一个路面要素的候选激光点云点以及所述曲面上的至少一个投影点所属的路面要素;
所述根据所述至少一个路面要素的候选激光点云点坐标以及所述至少一个路面要素的候选像素点在所述曲面上的投影点坐标,确定至少一个路面要素点的坐标,包括:
根据第一路面要素所属的至少一个候选激光点云点的坐标以及至少一个候选像素点在所述曲面上的投影点的坐标,确定所述第一路面要素中至少一个路面要素点的坐标;其中,所述第一路面要素为聚类得到的任一路面要素。
在一种可能的实现方式中,所述处理单元具体用于:根据所述路面的激光点云中点云点的坐标,生成所述路面所在的网格化曲面;根据所述至少一个路面要素的候选像素点坐标以及所述网格化曲面,确定所述至少一个路面要素的候选像素点在所述网格化曲面中的网格上的投影点坐标。
在一种可能的实现方式中,所述处理单元还用于:根据所述至少一个路面要素点的坐标,确定或者输出所述一个或多个路面要素的信息。
在一种可能的实现方式中,所述获取单元具体用于:获取来自于至少一个激光雷达的所述路面的激光点云,以及来自至少一个摄像装置的所述路面的图像。所述激光点云中的点云点坐标和所述图像中的像素点坐标属于同一坐标系。
在一种可能的实现方式中,所述路面要素包括以下至少一项:车道线、停止线、路面标识、箭头、文字。
第三方面,提供一种路面要素确定装置,包括至少一个处理器和接口,其中,所述接口,用于为所述至少一个处理器提供程序指令或者数据;所述至少一个处理器用于执行所述程序行指令,以实现如上述第一方面中任一项所述的方法。
第四方面,提供一种车载系统,包括如上述第二方面中任一项所述的装置。
在一种可能的方式中,所述车载系统还包括至少一个激光雷达以及至少一个摄像装置。
第五方面,提供一种计算机存储介质,其上存储有计算机程序或指令,所述计算机程序或指令被至少一个处理器执行时实现如上述第一方面中任一项所述的方法。
附图说明
图1为本申请实施例适用的一种系统架构;
图2为本申请实施例提供的一种路面要素确定方法的框图;
图3为本申请实施例中从路面的激光点云提取路面要素的候选激光点云点集合的原理示意图;
图4为本申请实施例中根据路面的激光点云拟合得到的网格曲面的示意图;
图5为本申请实施例中的路面要素确定方法的流程示意图;
图6为本申请实施例中的路面要素确定装置的结构示意图;
图7分别为本申请实施例提供的一种通信装置的结构示意图。
具体实施方式
为了使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请作进一步地详细描述。
以下,对本申请实施例中的部分用语进行解释说明,以便于本领域技术人员理解。
(1)路面要素
路面要素是指路面上用于交通指引的各种标记,比如,路面要素可包括以下至少一项:车道线、停止线、路面标识、箭头、文字等。可选的,可使用油漆等材质在路面上刷涂路面要素。
(2)点云以及激光点云
通过测量设备测量得到的物体外观表面上的点数据集合可称之为点云(point cloud),点云是在目标对象表面特性的海量点集合。
基于激光测量原理测量到的点云称之为激光点云。激光点云可包括三维坐标(X,Y,Z)和激光反射强度(Intensity)。由于路面要素是由油漆等材质刷涂在路面上,其对激光的反射强度不同于路面,故可以通过反射强度区分路面要素与路面,路面要素的点云包括该路面要素外表面上各采样点的空间坐标以及激光反射强度等信息。
(3)世界坐标系和用户坐标系
世界坐标系是系统的绝对坐标系,在没有建立用户坐标系之前画面上所有点的坐标都是以该坐标系的原点来确定各自的位置的。举例来说,由于摄像装置可安放在环境中的任意位置,在环境中选择一个基准坐标系来描述摄像装置的位置,并用它描述环境中任何物体的位置,该坐标系称为世界坐标系。
用户坐标系是以指定的物体或对象的中心为原点的坐标系,比如车体坐标系是以车体中心为原点的坐标系。
不同坐标系之间可基于坐标系间的转换参数进行转换,所述转换参数可包括旋转矩阵与平移向量。
本申请实施例中的术语“系统”和“网络”可被互换使用。“多个”是指两个或两个以上,鉴于此,本申请实施例中也可以将“多个”理解为“至少两个”。“至少一个”,可理解为一个或多个,例如理解为一个、两个或更多个。例如,包括至少一个,是指包括一个、两个或更多个,而且不限制包括的是哪几个,例如,包括A、B和C中的至少一个,那么包括的可以是A、B、C、A和B、A和C、B和C、或A和B和C。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,字符“/”,如无特殊说明,一般表示前后关联对象是一种“或”的关系。
除非有相反的说明,本申请实施例提及“第一”、“第二”等序数词用于对多个对象进行区分,不用于限定多个对象的顺序、时序、优先级或者重要程度。
方法实施例中的具体操作方法也可以应用于装置实施例或系统实施例中。
在智能驾驶领域,高精度电子地图采集系统或者无人驾驶系统通常具备激光雷达(light detection and ranging,LiDAR)、摄像装置以及高精度定位定姿设备。其中,激光雷达能够获取具有反射强度的三维激光点云,摄像装置能够获取彩色影像,高精度定位定姿设备能够获得激光雷达和摄像装置的位姿。
一般地,由于路面要素是通过油漆等材质刷涂在路面上的,其对激光的反射强度要高于路面,故可以通过激光点云的反射强度区分路面要素与路面。而路面要素一般是白色、黄色等,在图像中可以通过颜色信息等分割出路面要素。
目前采用的一些路面要素确定方法中,基于二维图像进行路面要素检测,获取属于路面要素的像素点,然后将三维激光点云投影到该二维图像中的路面要素像素上,以确定路 面要素。该方法强依赖于基于二维图像的路面要素检测结果,在强光、遮挡或其它图像检测效果不佳的情况下,可能产生漏提或者误提取,导致路面要素提取的准确性较低。
目前采用的另一些路面要素确定方法中,将激光点云以一定的分辨率(如10*10cm的采样空间对应一个像素点)转为二维俯视图,该二维俯视图的像素值由强度值换算而来,然后基于强度图进行路面要素的提取。同时,基于图像进行路面要素检测,对检测结果进行反透视变换,转为与二维强度图同样的视角,将二者进行叠加匹配,将最终的路面要素像素点对应的三维激光点云点作为最终的路面要素提取候选点。该方法需要将透视图像转为正视图像再进行路面要素检测,而转正视图像只能基于一个固定的地面高度参数,在有坡度的路段无法准确估计三维位置,与二维强度图无法对应,导致路面要素提取精度降低。
为解决上述问题,本申请实施例提供一种路面要素确定方法和装置,通过融合激光点云与图像进行路面要素提取,可以应用于高精度电子地图制作的场景或者应用于智能驾驶(如辅助驾驶或自动驾驶)场景中的环境感知。本申请实施例可以充分利用路面的激光点云和图像进行路面要素的提取,而不单独依赖于其中一种信息,从而可以提高路面要素提取的准确性(即具有更高的三维空间位置精度),进一步地,还可以提高鲁棒性(即可在更复杂的环境条件下保证路面要素提取的有效性)。
下面结合附图对本申请实施例进行描述。
图1示例性示出了本申请实施例适用的一种系统。
如图1所示,该系统中包括至少一个激光雷达(例如,101a,101b,…,101n)、至少一个摄像装置(例如,102a,102b,…,102m)、至少一个定位位姿设备103和/或至少一个存储设备104摄像装置。所述系统还包括路面要素确定装置105。进一步地,还可包括时间同步设备106。
激光雷达(101a,101b,…,101n)、摄像装置(102a,102b,…,102m)以及定位位姿设备103分别与存储设备104连接,存储设备104与路面要素确定装置105连接。时间同步设备106分别与激光雷达(101a,101b,…,101n)、摄像装置(102a,102b,…,102m)以及定位位姿设备103连接。
时间同步设备106可为激光雷达(101a,101b,…,101n)、摄像装置(102a,102b,…,102m)以及定位位姿设备103进行授时,以使得各激光雷达以及各摄像装置时间同步。
激光雷达(101a,101b,…,101n)用于对路面进行测量,以获取路面的激光点云,并将获取到的激光点云存储到存储设备104中。
摄像装置(102a,102b,…,102m)可采集路面的图像,并将采集到的路面的图像存储到存储设备104中。摄像装置(102a,102b,…,102m)可以是彩色摄像装置,比如彩色摄像装置,能够采集到彩色图像。
定位位姿设备103用于检测激光雷达和/或摄像装置的位姿等信息,并将检测到的位姿等信息存储到存储设备104中。
路面要素确定装置105可以根据存储设备104中存储的路面的激光点云、图像等数据,确定该路面上的路面要素。
上述系统可以是车载系统,上述激光雷达、摄像装置、位置定位装置、存储设备、路面要素确定设备等,可以被安装在车辆上,并构成车载系统的组成部分。
需要说明的是,上述系统中,激光雷达的数量以及摄像装置的数量仅为示例,本申请实施例对激光雷达的数量和摄像装置的数量不做限制。如果激光雷达的数量为多个,则该 多个激光雷达可以设置在不同位置,如果摄像装置的数量为多个,则该多个摄像装置可设置在不同位置。
上述系统可被应用于高精度电子地图制作场景,也可以被应用于智能驾驶场景(如辅助驾驶场景或自动驾驶场景)。
在高精度电子地图制作场景中,由搭载了上述系统的高精度电子地图采集车进行路面信息采集并根据采集到的路面信息确定路面上的路面要素,以用于制作高精度电子地图。具体地,可由时间同步设备为上述系统内的激光雷达、摄像装置以及定位定姿设备进行授时。在路面信息采集过程中,激光雷达采集到的路面激光点云、摄像装置采集到的路面图像,以及定位定姿设备检测到的激光雷达和摄像装置的位姿信息被存储至存储设备内。在高精度电子地图制作过程中,路面要素确定装置可首先根据定位位姿设备检测到的位姿信息,将激光雷达采集到的激光点云、摄像装置采集到的图像经过标定统一到同一坐标系中,比如统一到世界坐标系中,再根据统一到同一坐标系中的激光点云和图像进行路面要素的确定过程。例如,事先将路面的激光点云中的点云点坐标和图像中的像素点坐标统一到世界坐标系,则最终提取到的路面要素是在世界坐标系下的高精度电子地图矢量数据。
在智能驾驶场景中,由搭载了上述系统的智能车辆实时进行路面信息采集并根据采集到的路面信息确定路面上的路面要素,以用于智能驾驶过程中的定位或者车辆控制。具体地,在路面信息采集过程中,激光雷达采集到的路面激光点云、摄像装置采集到的路面图像,以及定位定姿设备检测到的激光雷达和摄像装置的位姿信息被存储至存储设备内。在智能驾驶功能运行过程中,路面要素确定装置可首先根据定位位姿设备检测到的位姿信息,将激光雷达采集到的激光点云、摄像装置采集到的图像经过标定统一到同一坐标系中,比如统一到该车辆车体坐标系中,再根据统一到同一坐标系中的激光点云和图像进行路面要素的提取过程。如果事先将路面的激光点云中的点云点坐标和图像中的像素点坐标统一到车辆车体坐标系,则最终提取到的路面要素是在该车体坐标系下的车身周边路面要素的矢量数据。
图2示例性示出了本申请实施例提供的一种路面要素确定方法的框图。该方法可由上述系统实现,或者有上述系统中的路面要素确定装置实现。该方法可应用于高精度电子地图制作场景,也可应用于智能驾驶场景。
下面结合图2和图3,对本申请实施例进行描述。
如图2所示,该流程可包括如下步骤:
S201:获取路面的激光点云以及该路面的图像。
其中,路面的激光点云可由至少一个激光雷达采集得到。激光点云是点云点的集合,该集合中的每个点云点的信息包括点云点的三维坐标(X,Y,Z)以及激光反射强度。
路面的图像可由至少一个摄像装置采集得到。该图像可以是彩色图像。
一种可选的设计中,车辆上可安装至少一个激光雷达以及至少一个摄像装置,获取车辆前部安装的激光雷达以及摄像装置所采集的数据,由于车辆前部安装的激光雷达和摄像装置的数据采集区域包括车辆前方路面,因此可获取车辆前方路面的激光点云以及图像。
可选地,可将雷达装置采集的激光点云以及摄像装置采集的图像数据,基于轨迹数据进行关联处理,得到同一路面的激光点云以及图像。其中,所述轨迹数据包括基于卫星定位系统获得的轨迹数据,或者基于惯性测量单元(Inertial measurement unit,IMU)获得的轨迹数据。其中,卫星定位系统包括但不限于全球定位系统(Global Positioning System, GPS)、全球导航卫星系统(Global Navigation Satellite System,GNSS)、北斗卫星导航系统等。一种可选的处理过程包括:对一区域而言,一方面获得激光雷达在某一时段(为描述方便,这里称为第一时段)行驶过该区域时获取的激光点云,结合轨迹数据,将激光点云转换至指定坐标系(如世界坐标系);另一方面,同样对于该区域,获得摄像装置在同一时段(即上述第一时段)行驶过同一区域时获取的图像数据,结合轨迹数据,将图像数据转换至上述指定坐标系。其中,坐标系转换的相关说明可参见下面的描述。
其中,对于摄像装置而言,所述路面的图像可以包括单帧图像,也可以包括一段连续时间内的多帧图像。如果是一段连续时间内的多帧图像,则可以针对每帧图像采用本申请实施例提供的方式确定路面要素;也可以基于该多帧图像采用跟踪算法得到一定空间内(比如该连续时间内拍摄的10米长的路面上)的路面图像,用于后续步骤以确定路面要素。
在一些实施例中,获取到路面的激光点云以及该路面的图像后,还可以对激光点云和/或图像进行数据预处理操作,从而仅保留路面相关的数据,去除其他干扰数据,以便于后续进行路面要素的提取。
举例来说,可对激光点云进行以下预处理中的至少一项:
(1)路面分割,即,保留属于路面的点云点,去除非路面(比如天空、路旁建筑物、路边设施等)的点云点;
(2)障碍物去除,即,去除路面上属于障碍物(比如路面上的车辆、行人、非机动车、护栏、隔离带等)的点云点。
上述数据预处理方法可以采用基于规则的方法,举例来说,可以根据预先给出的属于路面上的点云点应该满足的条件,将激光点云中满足上述条件的点云点确认为属于路面的点云点,将不满足该条件的点云点去除。
上述数据预处理方法还可以采用机器学习方法,举例来说,可预先对分类器进行训练,将激光点云输入到该分类器,得到输出的属于路面的点云点。该分类器可通过神经网络实现,用于识别路面上和非路面上的点云点。
在一些实施例中,获取到路面的激光点云以及该路面的图像后,可根据该激光点云以及图像的采集时刻的位姿信息(如包括激光雷达和/或摄像装置的位姿信息),将该激光点云中的点云点的坐标和该图像中的像素点的坐标转换到指定的同一坐标系下。其中,该同一坐标系可以为世界坐标系或者车体坐标系等。例如,在高精度电子地图制作场景下,该指定坐标系可以是世界坐标系,以使得通过本流程可以得到在世界坐标系下的高精地图矢量数据。又如,在智能驾驶应用场景下,该指定坐标系可以是车体坐标系,以使得通过本流程就可以得到在车体坐标系下的车身周边路面要素的矢量数据。
将激光点云转换为世界坐标系的原理,与下面所描述的将激光点云转换为地心地固坐标系的原理类似。
在地心地固坐标系E(简称地心坐标系)中,设某一个激光点云点P的地貌坐标为(x P,y P,z P),激光器扫描镜中心点的坐标为(X L,Y L,Z L),激光点云点P与激光扫描镜中心点之间的量测距离分量为(ΔX P,ΔY P,ΔZ P),则激光点云点P在地心地固坐标系E中的地面坐标可以表示为:
(x P,y P,z P)=(X L,Y L,Z L)+(ΔX P,ΔY P,ΔZ P)……………………………………………(1)
激光点云点P与激光扫描镜中心点之间的量测距离分量(ΔX P,ΔY P,ΔZ P)通过坐标变换过程来实现,变换过程如下:瞬时激光束坐标系SL->激光扫描参数坐标系T->激光载体坐标系L->IMU载体坐标系b->当地水平参考坐标系g->地心地固坐标系E,变换方程为:
Figure PCTCN2020119343-appb-000004
不考虑偏心矢量的点云地面点计算公式为:
Figure PCTCN2020119343-appb-000005
其中,
Figure PCTCN2020119343-appb-000006
为当地水平参考坐标系g到地心地固坐标系E的旋转矩阵,与激光点处的经纬度坐标(B,L)相关;
Figure PCTCN2020119343-appb-000007
为激光载体坐标系L到当地水平参考坐标系g的旋转矩阵,由姿态角(φ,θ,ψ)构成;
Figure PCTCN2020119343-appb-000008
为激光扫描参考坐标系T相对于激光载体坐标系L的姿态旋转矩阵,由偏心角(θ xyz)构成;
Figure PCTCN2020119343-appb-000009
为瞬时激光束坐标系SL到激光扫描参考坐标系T之间的旋转矩阵,由激光棱镜扫过的角度θ i确定;p为激光器扫描镜中心到激光点云点P的斜距。
综合上述式(2)和式(3),可以解算任意一个激光点云点P的地面坐标。
S202:提取该路面的激光点云中至少一个路面要素的候选激光点云点,并根据该路面的激光点云确定该路面所在的曲面。
通常情况下,路面的激光点云中包含多种路面要素的点云点,一个路面要素对应的点云点的数量通常也较多,这种情况下,本申请实施例中从该路面的激光点云中提取路面要素的候选激光点云点。
由于路面要素通常采用不同于路面的材料涂刷在路面上,该材料对激光的反射强度不同于路面材料对激光的反射强度,因此可以根据路面要素的激光点云中点云点的激光反射强度来提取路面要素的候选激光点云点。比如由于路面要素(比如车道线、停止线、路面标识、箭头、文字等)通常由油漆涂刷在路面上,激光照射在油漆上的反射强度要大于照射在路面上的反射强度,因此可以根据点云点的激光反射强度,获取路面要素的候选激光点云点,即,从路面的激光点云中获取具有高反射强度(比如反射强度大于设定阈值)的点云点,作为路面要素的候选激光点云点,从而可到路面要素的候选激光点云点集合。
图3示例性示出了从路面的激光点云中提取路面要素的候选激光点云点集合的原理示意图。如图所示,图3中的(a)示出了实际场景中路面300上的路面要素(如图中的直行指示301,人行横道预告标线302,掉头指示303,车道线304),图3中的(b)示出了由激光雷达采集的激光点云中各点的激光反射强度信息,其中,路面要素上的点的激光反射强度要大于路面上的点的激光反射强度,因此可这些具有较高激光反射强度的点确定为路面要素的候选激光点云点。
可选地,还可以进行一些其它去噪操作以得到更可靠的结果,从而提高路面要素的候选激光点云点的准确性或可靠性,比如,可去除通过以上操作初步选取出的候选激光点云点集合中的离群点。
步骤S202中,进一步可选的,可根据路面的激光点云中的点云点的坐标进行曲面拟合,得到该路面所在的曲面。
进一步地,为了简化后续的计算复杂度,可对路面进行网格化处理,使用在每个网格内的激光点云点拟合平面,得到网格化的曲面。网格的尺度可以根据ODD(operational design domain,运行设计域)进行调节。该路面所在的曲面或网格化曲面,可以表征为三维数学模型,该三维数学模型具体可以是曲面方程、网格平面方程、数字高程模型(digital elevation model,DEM)等,本申请实施例对该曲面或网格化曲面的表现形式不做限制。
图4示例性示出了一种网格化曲面的示意图,图中虚线方格表示网格,每个网格近似为一个平面。具体实施时,可先将地面网格化以后,每个网格被拟合成平面,每个平面有一个法向量,比如,图中网格g6的法向量如图中箭头n g6所示,图中网格g7的法向量如图中箭头n g7所示,以此类推。
S203:提取该路面的图像中至少一个路面要素的候选像素点,根据至少一个路面要素的候选像素点的坐标以及上述曲面,确定至少一个路面要素的候选像素点在该曲面上的投影点的坐标。
通常情况下,路面的图像中包含多种路面要素,一个路面要素对应的像素点的数量通常也较多,这种情况下,本申请实施例中可针对每个路面要素的候选像素点分别确定其在该路面曲面上的投影点坐标。
该步骤中,可在路面图像中进行路面要素像素点的分割,得到该图像中路面要素的候选像素点,形成候选像素点的集合。这里的分割指示为了清楚阐述方案,不限定一定有分割的动作,以最终形成候选像素点的集合为准。
具体可以采用基于规则的方法,提取路面要素的候选像素点,举例来说,可以根据预先给出的属于路面要素上的像素点满足的条件(比如颜色为黄色或白色的像素点),将路面图像中满足上述条件的像素点确认为路面要素的候选像素点,将不满足该条件的像素点去除。
还可以采用机器学习方法来提取路面要素的候选像素点,举例来说,可预先对分类器进行训练,将路面图像输入到该分类器,得到输出的路面要素的候选像素点。该分类器可通过神经网络实现。
可选地,还可以进行一些其它去噪操作以得到更可靠的结果,从而提高路面要素的候选像素点的准确性或可靠性,比如,可对采用上述方法初步得到的路面要素的候选像素点集合进行形态学滤波,以去除不满足要求的候选像素点。
本申请能够在更大的范围内保持路面二维图像中的路面要素的候选像素点投影至三维空间的精度,从而使得候选激光点云点与候选像素的投影点能够更好的匹配融合,以提高路面要素识别精度。在一种实测场景下,可达到横向20m、纵向40m范围内实现小于10cm误差的投影精度。
需要说明的是,如果在S202中得到的是该路面所在的网格化曲面,则在该步骤S203中,可根据候选像素点坐标以及该网格化曲面,确定候选像素点在该网格化曲面中的网格上的投影点坐标。
在S203中,将该路面图像中路面要素的候选像素点投影到该路面所在曲面三维数学模型上的过程,从数学角度来说,其本质是求该曲面所在的三维空间内射线与面的交点坐标。以平面为例说明投影的数学计算过程:
设图像上一个点像素坐标为p(u,v),深度为s,内参矩阵为K(该内参矩阵为激光点云坐标系至摄像装置坐标系的转换参数矩阵),摄像装置坐标系内的一点为p’(x,y,z),外参矩阵(世界坐标系至摄像装置坐标系的转换参数矩阵)包括R CT(旋转矩阵)和T CT(平移向量),对应的世界坐标系内的一个点的坐标为P(XYZ),存在以下转换关系:
s*[u,v,1]^T=K*p′,p′=R CT*P+T CT……………………………………………………(4)
对于图像上的路面要素的每个候选像素点p,其对应的世界坐标系内的点P均在路面上。设已经求得点P在拟合得到的路面曲面方程为Ax+By+Cz+D=0,则点P的坐标满足:
A*X+B*Y+C*Z+D=0………………………………………………………………………(5)
联立以上式(4)和式(5),可以得到:
[A B C]*R CT^(-1)*(s*K^(-1)*[u,v,1]^T-T CT)+D=0……………………………………(6)
依据上式可解出唯一未知数即深度s,从而得到点P的三维坐标。
S204:根据至少一个路面要素的候选激光点云点的坐标以及至少一个路面要素的候选像素点在上述曲面上的投影点的坐标,确定至少一个路面要素点的坐标或者确定至少一个路面要素点。
所述至少一个路面要素点对应所述至少一个路面要素中的一个或多个路面要素。具体实施时,可针对每个候选激光点云点的坐标以及每个候选像素点在上述曲面上的投影点的坐标,确定相应的路面要素点的坐标。
本申请实施例中,将根据候选激光点云点的坐标以及至候选像素点在上述曲面上的投影点的坐标所确定出的坐标所对应的点,称为“路面要素点”。根据路面要素点的坐标可以定位路面要素,即识别得到路面要素。
本申请实施例中,所述至少一个路面要素点的坐标为所述路面对应的空间内的至少一个采样空间对应的路面要素点的坐标的集合,其中,第一采样空间为所述至少一个采样空间中的任一采样空间。进一步可选的,所述至少一个采样空间为所述路面对应的空间内的所有采样空间。具体的,所述采样空间可以为所述路面对应的至少一个空间内的任一采样空间。
该路面对应的空间内包含多个采样空间,比如该路面对应的三维空间大小为X*Y*Z m 3,一个采样空间的大小为x*y*z cm 3,则该路面对应的三维空间内包含有多个采样空间。本申请实施例对一个采样空间的大小不做限制。
一个采样空间内包括至少一个候选激光点云点和/或至少一个投影点,比如,一个采样空间内包含n个候选激光点云点P i(i=1~n)以及m个候选像素点在上述曲面上的投影点p i(i=1~m)。本申请实施例中,可针对一个采样空间,根据该采样空间内的n个候选激光点云点P i(i=1~n)的坐标以及m个候选像素点在上述曲面上的投影点p i(i=1~m)的坐标计算得到一个对应的路面要素点的坐标。其中,n、m为正整数。
在一些实施例中,对于第一采样空间,可根据第一采样空间中的至少一个路面要素的候选激光点云点坐标及置信度,和/或,第一采样空间中的至少一个路面要素的候选像素点 在上述曲面上的投影点坐标及置信度,得到第一采样空间对应的路面要素点的坐标。
其中,一个采样空间内可仅包括至少一个候选激光点云点,也可仅包括至少一个投影点,还可能既包括至少一个候选激光点云点,还包括至少一个投影点。
可选地,根据上述第一采样空间的不同情况,在确定第一采样空间对应的路面要素点的坐标时,相应包括以下几种情况:
情况1:若第一采样空间内仅包括至少一个候选激光点云点,则根据第一采样空间中的至少一个路面要素的候选激光点云点坐标及置信度,得到第一采样空间对应的路面要素点的坐标;
情况2:若第一采样空间内仅包括至少一个投影点,则根据第一采样空间中的至少一个路面要素的候选像素点在上述曲面上的投影点坐标及置信度,得到第一采样空间对应的路面要素点的坐标;
情况3:若第一采样空间内包括至少一个候选激光点云点以及至少一个投影点,则根据第一采样空间中的至少一个路面要素的候选激光点云点坐标及置信度,以及第一采样空间中的至少一个路面要素的候选像素点在上述曲面上的投影点坐标及置信度,得到第一采样空间对应的路面要素点的坐标。
以第一采样空间中包括多个候选激光点云点以及多个投影点为例,可首先确定每个候选激光点云点的置信度,以及每个候选像素点在上述曲面上的投影点的置信度,然后针对第一采样空间,执行以下操作:根据该采样空间中的每个候选激光点云点的坐标及置信度,以及每个候选像素点在上述曲面上的投影点的坐标及置信度,计算得到该采样空间对应的路面要素点的坐标。
采用上述方法计算得到的路面要素点的坐标,既不单独依赖于激光雷达采集到的激光点云,也不单独依赖于摄像装置采集到的图像,而是结合激光点云与图像,从而可以提高路面要素点的可靠性,进而提高路面要素的准确性,在强光、路面要素磨损、暗光等场景下,能够实现路面要素提取,降低了对数据采集环境的要求。
进一步地,由于每个候选激光点云点以及每个候选像素点的投影点都具有各自的置信度,在计算路面要素点的坐标时,对该采样空间中的候选激光点云点基于其置信度对坐标进行加权平均,对该采样空间中的候选像素点的投影点基于其置信度对坐标进行加权平均,再结合两者得到路面要素点的坐标,因此进一步提高了路面要素点的可靠性。
本申请的一些实施例中,可采用以下公式计算得到路面要素点的坐标:
P sample=(∑ i=1~n C Li*P i+∑ i=1~m C Ci*p i)/(∑ i=1~n C Li+∑ i=1~m C Ci)……………(7)
其中,P sample表示一个采样空间对应的路面要素点的坐标,P i为候选激光点云点i的坐标,p i为上述曲面上的投影点i的坐标,C Li为候选激光点云点i的置信度,C ci为上述曲面上的投影点i的置信度,n为该采样空间内候选激光点云点的数量,m为该采样空间内上述曲面上的投影点的数量,n和m均为大于或等于1的整数。
在一些实施例中,可以采用以下公式计算候选激光点云点的置信度:
C Li=W L1*D i+W L2*I i………………………………………………………(8)
其中,C Li为候选激光点云点i的置信度,D i为候选激光点云点i的邻域密度,I i为候选激光点云点i的邻域相对反射率,W L1为D i的置信度权重系数,W L2为I i的置信度权重系数。
其中,W L1+W L2=1。W L1和W L2可预先设置。W L1和W L2的设置与激光雷达的性能指 标相关,可根据经验值配置。
其中,统计邻域为一种固定大小与形状的空间(其大小可根据道路标线的尺寸经验地设置为15~30cm)。以圆形为例,候选点为圆心,半径为r的圆形,该圆形内的激光点云点的数量为n i,则候选激光点云点i的邻域密度
D i=n i/(π*r^2)
设候选激光点云点i的反射率为I,邻域内所有激光点云点的反射率平均值为I a,则候选激光点云点i的邻域相对反射率I i=I/I a
在一些实施例中,可采用以下公式计算候选像素点在上述曲面上的投影点的置信度:
根据以下公式确定所述候选像素点在所述曲面上的投影点坐标的置信度:
C Ci=W C1*c i+W C2/L i………………………………………………………(9)
其中,C Ci为上述曲面上的投影点i的置信度,c i为图像中路面要素的候选像素点i的置信度,L i为投影点i的坐标与摄像装置原点的坐标间的距离。其中,摄像装置原点是指摄像装置坐标系原点。在将坐标系统一到世界坐标系的情况下,L i的含义可表述为:投影点i在世界坐标系下的坐标与拍摄该帧图像时摄像装置坐标系原点(如相机坐标系原点)在世界坐标系下的坐标间的距离。使用L i的倒数的意义是由于图像检测结果距离越远的像素点检测越不准确。W C1为c i的置信度权重系数,W C2为1/L i的置信度权重系数。
其中,W C1+W C2=1。W C1和W C2可预先设置。W C1和W C2的设置与图像检测算法的性能与摄像装置的性能指标相关,可根据经验值配置。
置信度的值域一般为0~1,越接近1表示置信度越高。
可选地,可采用深度神经网络类的算法计算候选像素点i在上述曲面上的投影点的置信度c i
上述方法中,将从激光点云中分割出的具有高反射强度的候选激光点云点与图像中检测出的属于路面要素的候选像素在曲面上的投影点,通过加权的方式,选取出最可靠的路面要素点。具体而言,针对激光点云中分割出的高反射强度点云,可以对其领域密度以及领域相对反射强度等进行加权。图像中检测出的属于路面要素的三维像素投影点,可以对其检测置信度、距离投影中心的距离,在相幅内的位置等进行加权。最终,在每个采样区间内,综合所有点的加权结果,得到最终的候选点。
进一步地,上述流程中还可包括以下步骤:根据至少一个路面要素点的坐标,确定或者输出所述一个或多个路面要素的信息。
其中,所述路面要素的信息可以是路面要素矢量数据,具体可包括路面要素点的坐标、颜色、类型等。
该步骤中,根据以上过程计算得到的路面要素点的坐标,再进一步结合路面要素的颜色、类型等信息,可以确定出路面要素,即可输出路面要素矢量数据。其中,路面要素的颜色、类型,可通过对该路面的图像进行检测得到。
在一些实施例中,在S205中,可以对S204中得到的属于同一类别的路面要素点进行拟合处理,以得到该路面要素的矢量数据。以车道线为例,在S204中可能得到上万个数量级的车道线点的坐标,通过对这些点进行拟合可以得到车道线的形状,基于该形状可使用其中上百个数量级的点就能表示该车道线,因此可以降低该车道线的信息的数据量。
基于上述任一实施例或多个实施例的组合,图5示例性示出了本申请实施提供的一种 路面元素确定方法的示意图。
如图所示,该流程可包括激光点云处理过程、图像处理过程以及融合处理过程。
激光点云处理过程可包括:
在501,获取来自激光雷达的激光点云。可选的,该步骤可以采用基于规则的或基于机器学习的方法,去除该激光点云中属于障碍物(比如路面上的车辆、行人、非机动车、护栏、隔离带等)的点云点;
在502,获取路面的激光点云。可选的,在步骤501获取的激光点云中去除了障碍物,进一步采用基于规则的或基于机器学习的方法,去除非路面(比如天空、路边设施等)的点云点,从而提取得到属于路面的激光点云;
在503,基于502得到的路面的激光点云,得到网格化的路面曲线。具体的,对所述路面的激光点云进行网格平面拟合处理,得到表征实际路面的三维数学模型(比如网格平面方程),即所述网格化的路面曲面;
在504,从路面的激光点云中分离得到高反射强度的点云点;
在505,得到路面要素的候选激光点云点集合。具体的,可以对分离得到的高反射强度的点云点进行去燥处理(比如去除离群点)。
图像处理过程可包括:
在510,获取来自摄像装置的图像。可选的,采用基于规则的或基于机器学习的方法,检测该图像中的路面要素像素点,并进一步对检测到的路面要素像素点进行去噪操作(如形态学滤波),得到路面要素的候选像素点集合;
在511,将在图像中检测到的路面要素的候选像素点集合中的像素点,投影至503中得到的路面曲面上,得到路面要素的候选像素点在该路面曲面上的投影点集合。其中,在计算投影点坐标时,还需要使用用于坐标系转换的内参、外参数据,以及位姿数据(如摄像装置的位姿数据)。
在512,进行拟合操作,即,将每帧图像中检测出的路面要素的像素点,通过拟合手段,拟合相应路面要素的形状(比如针对车道线,拟合为执行或曲线),从而去除一些明显与路面元素形状不符的识别结果(比如针对车道线,去除明显不是直线以及朝向与车道方向明显不符合的识别结果,以避免将路灯杆对象等识别成了车道线);
在513,进行跟踪处理,比如,以车道线为例,根据图像帧间的连续性,通过跟踪的方式将同一条车道线归类为一条车道线;
在514,进行去燥处理,即,通过去躁算法过滤掉噪声点,比如采用离群点去除方法将一些离其它点都比较远的点去掉。
融合处理过程可包括:
在520,将505中得到的路面要素的候选激光点云点集合,与514中从图像中得到的并进行处理后的路面要素的候选像素点在路面曲面上的投影点集合,采用分项加权方式进行融合(如采用上述公式7、8、9),以得到路面要素点的坐标;
在521,对各路面要素点的坐标进行聚类,以确定每个路面要素点所属的路面要素类别;
在522,分别针对每个类别的路面要素:从512得到的各路面要素点中选取属于该同一类别路面要素的点的坐标,并结合该路面要素的点的颜色,得到该路面要素的矢量数据。
需要说明的是,上述流程中部分步骤的具体实现方式可参见前述实施例的相关内容, 在此不再重复。另外,上述501-522中的任一个或者多个步骤,可以是产品的内部实现,不作为独立执行的步骤,属于中间过程,具体取决于算法以及产品的实现。即,上述501-522中可以存在一个或多个可选步骤,本申请不具体限定。
本申请的一些实施例中,考虑到路面上可能包含多个路面要素(比如可包括车道线以及等车线等),可通过聚类来确定路面上所包括的路面要素,即,确定各候选激光点云点以及候选像素点在曲面上的投影点所属的路面要素,从而根据路面要素点的坐标识别出该路面上的每个路面要素。
具体地,在S204之前,可对候选激光点云点以及候选像素点在上述曲面上的投影点进行聚类,得到各候选激光点云点以及该曲面上的各投影点所属的路面要素。相应地,在S204中,针对属于同一个路面要素的候选激光点云点的坐标以及候选像素点在上述曲面上的投影点的坐标,按照S204所示的方法进行计算,可得到该路面要素的各点的坐标,从而可以确定出该路面要素,即确定出该路面要素在路面上的位置、形状等。
由于在很多实际场景中,路面并非是一个平面,所以本申请上述实施例中使用激光点云得到真实路面的数学模型相对于使用某一固定的投影平面,能够在更大的范围内保持路面二维图像中的路面要素的候选像素点投影至三维空间(路面曲面)的精度,从而使得路面要素的候选激光点云点与候选像素点的投影点能够更好的匹配融合。
具体而言,在针对高精度电子地图制作场景中,由于本申请上述实施例在更大的范围内保持二维像素点投影至三维空间的精度,所以基于同样的采集数据(包括激光点云以及图像),能够完成更大范围的高精度电子地图的制作(比如之前一次采集能够完成三车道地图的制作,而使用本申请实施例能够完成五车道地图的制作),从而降低了高精地图的制作成本。
针对智能驾驶(如辅助/自动驾驶)系统而言,由于本申请实施例在更大的范围内保持二维像素点投影至三维空间的精度,所以可信的路面要素感知范围加大,从而提升了智能驾驶系统的安全性。
另一方面,本申请的实施例中,综合使用激光点云与图像进行路面要素的提取,而不单独依赖某一种数据,从而提升了路面要素提取的鲁棒性。具体而言,针对高精度地图制作,由于本申请实施例在强光、路面要素磨损、暗光等场景下能够实现路面要素提取,降低了对数据采集环境的要求,从而能够提升高精地图制作效率,降低制作成本。针对智能驾驶(辅助/自动驾驶)系统而言,由于本申请实施例在强光、路面要素磨损、暗光等场景下也能实现路面要素提取,使得环境感知系统工作域增大,提升了辅助/自动驾驶系统的安全性。
基于同一发明构思,本申请实施例还提供一种路面要素确定装置,该装置可以具有如图6所示的结构,所述装置可以是路面要素确定装置,也可以是能够支持上述装置实现上述方法的芯片或芯片系统。
如图6所示,该装置600可包括:获取单元601、处理单元602。
获取单元601,用于获取路面的激光点云以及所述路面的图像;
处理单元602,用于提取所述路面的激光点云中至少一个路面要素的候选激光点云点,并根据所述路面的激光点云确定所述路面所在的曲面;提取所述路面的图像中至少一个路面要素的候选像素点,根据所述至少一个路面要素的候选像素点的坐标以及所述曲面确定所述至少一个路面要素的候选像素点在所述曲面上的投影点的坐标;以及,根据所述至少 一个路面要素的候选激光点云点的坐标以及所述至少一个路面要素的候选像素点在所述曲面上的投影点的坐标,确定至少一个路面要素点的坐标,所述至少一个路面要素点对应所述至少一个路面要素中的一个或多个路面要素。
在一些实施例中,所述至少一个路面要素点的坐标为所述路面对应的空间内的至少一个采样空间对应的路面要素点的坐标的集合,其中,第一采样空间为所述至少一个采样空间内的任一采样空间。处理单元602可具体用于:
确定所述第一空间中的至少一个路面要素的候选激光点云点的置信度,和/或,所述第一采样空间中的至少一个路面要素的候选像素点在所述曲面上的投影点的置信度;
根据所述第一采样空间中的至少一个路面要素的候选激光点云点坐标及置信度,和/或,所述第一空间中的至少一个路面要素的候选像素点在所述曲面上的投影点坐标及置信度,得到所述第一采样空间对应的路面要素点的坐标;其中,所述第一采样空间内包括至少一个候选激光点云点和/或至少一个投影点。
在一些实施例中,处理单元602确定出的所述第一采样空间对应的路面要素点的坐标满足前述公式(7)。
在一些实施例中,处理单元602确定出的所述至少一个路面要素的候选激光点云点的置信度满足前述公式(8)。
在一些实施例中,处理单元602确定出的所述至少一个路面要素的候选像素点在所述曲面上的投影点的置信度满足前述公式(9)。
在一些实施例中,处理单元602还用于:
根据所述至少一个路面要素的候选激光点云点坐标以及所述至少一个路面要素的候选像素点在所述曲面上的投影点坐标,确定至少一个路面要素点的坐标之前,对所述至少一个路面要素的候选激光点云点以及所述曲面上的至少一个投影点进行聚类,得到所述至少一个路面要素的候选激光点云点以及所述曲面上的至少一个投影点所属的路面要素。处理单元602可根据第一路面要素所属的至少一个候选激光点云点的坐标以及至少一个候选像素点在所述曲面上的投影点的坐标,确定所述第一路面要素中至少一个路面要素点的坐标;其中,所述第一路面要素为聚类得到的任一路面要素。
在一些实施例中,处理单元602可具体用于:根据所述路面的激光点云中点云点的坐标,生成所述路面所在的网格化曲面;根据所述至少一个路面要素的候选像素点坐标以及所述网格化曲面,确定所述至少一个路面要素的候选像素点在所述网格化曲面中的网格上的投影点坐标。
在一些实施例中,处理单元602还可用于:根据所述至少一个路面要素点的坐标,确定或者输出所述一个或多个路面要素的信息。
在一些实施例中,获取单元601可具体用于:获取来自于至少一个激光雷达的所述路面的激光点云,以及来自至少一个摄像装置的所述路面的图像;所述激光点云中的点云点坐标和所述图像中的像素点坐标属于同一坐标系。
在一些实施例中,所述路面要素包括以下至少一项:车道线、停止线、路面标识、箭头、文字。
此外,本申请实施例还提供一种通信装置,该通信装置可以具有如图7所示的结构,所述通信装置可以是路面要素确定装置,也可以是能够支持路面要素确定装置实现上述方法的芯片或芯片系统。
如图7所示的通信装置700可以包括至少一个处理器702,所述至少一个处理器702用于与存储器耦合,读取并执行所述存储器中的指令以实现本申请实施例提供的方法中路面要素确定装置涉及的步骤。可选的,该通信装置700还可以包括至少一个接口703,用于为所述至少一个处理器提供程序指令或者数据。通信装置700中的接口703,可用于实现上述获取单元601所具有的功能,例如,接口703可用于通信装置700执行如图2或图5所示的方法中获取信息的步骤;处理器702可用于实现上述处理单元602所具有的功能,例如,可用于通信装置700执行如图2或图5所示的方法中确定路面要素的步骤。此外,接口703可用于支持通信装置700进行通信。可选的,通信装置700还可以包括存储器704,其中存储有计算机程序、指令,存储器704可以与处理器702和/或接口703耦合,用于支持处理器702调用存储器704中的计算机程序、指令以实现本申请实施例提供的方法中接收设备涉及的步骤;另外,存储器704还可以用于存储本申请方法实施例所涉及的数据,例如,用于存储支持接口703实现交互所必须的数据、指令,和/或,用于存储通信装置700执行本申请实施例所述方法所必须的配置信息。
基于与上述方法实施例相同构思,本申请实施例还提供了一种计算机可读存储介质,其上存储有一些指令,这些指令被计算机调用执行时,可以使得计算机完成上述方法实施例、方法实施例的任意一种可能的设计中所涉及的方法。本申请实施例中,对计算机可读存储介质不做限定,例如,可以是RAM(random-access memory,随机存取存储器)、ROM(read-only memory,只读存储器)等。
基于与上述方法实施例相同构思,本申请还提供一种计算机程序产品,该计算机程序产品在被计算机调用执行时可以完成方法实施例以及上述方法实施例任意可能的设计中所涉及的方法。
基于与上述方法实施例相同构思,本申请还提供一种芯片,该芯片可以包括处理器以及接口电路,用于完成上述方法实施例、方法实施例的任意一种可能的实现方式中所涉及的方法,其中,“耦合”是指两个部件彼此直接或间接地结合,这种结合可以是固定的或可移动性的,这种结合可以允许流动液、电、电信号或其它类型信号在两个部件之间进行通信。
基于与上述方法实施例相同构思,本申请还提供一种终端,该终端可包含如图6所示的单元,或者如图7所示的至少一个处理器和接口,该终端能够实现本申请实施例提供的路面要素确定方法。可选的,所述终端可以为车载系统,或者自动驾驶或智能驾驶中的车辆、无人机、无人运输车或者机器人等。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本发明实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用 介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘(Solid State Disk,SSD))等。
本申请实施例中所描述的各种说明性的逻辑单元和电路可以通过通用处理器,数字信号处理器,专用集成电路(ASIC),现场可编程门阵列(FPGA)或其它可编程逻辑装置,离散门或晶体管逻辑,离散硬件部件,或上述任何组合的设计来实现或操作所描述的功能。通用处理器可以为微处理器,可选地,该通用处理器也可以为任何传统的处理器、控制器、微控制器或状态机。处理器也可以通过计算装置的组合来实现,例如数字信号处理器和微处理器,多个微处理器,一个或多个微处理器联合一个数字信号处理器核,或任何其它类似的配置来实现。
本申请实施例中所描述的方法或算法的步骤可以直接嵌入硬件、处理器执行的软件单元、或者这两者的结合。软件单元可以存储于RAM存储器、闪存、ROM存储器、EPROM存储器、EEPROM存储器、寄存器、硬盘、可移动磁盘、CD-ROM或本领域中其它任意形式的存储媒介中。示例性地,存储媒介可以与处理器连接,以使得处理器可以从存储媒介中读取信息,并可以向存储媒介存写信息。可选地,存储媒介还可以集成到处理器中。处理器和存储媒介可以设置于ASIC中,ASIC可以设置于终端设备中。可选地,处理器和存储媒介也可以设置于终端设备中的不同的部件中。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
尽管结合具体特征及其实施例对本发明进行了描述,显而易见的,在不脱离本发明的范围的情况下,可对其进行各种修改和组合。相应地,本说明书和附图仅仅是所附权利要求所界定的本发明的示例性说明,且视为已覆盖本发明范围内的任意和所有修改、变化、组合或等同物。显然,本领域的技术人员可以对本发明进行各种改动和变型而不脱离本发明的精神和范围。这样,倘若本发明的这些修改和变型属于本发明权利要求及其等同技术的范围之内,则本发明也意图包含这些改动和变型在内。

Claims (24)

  1. 一种路面要素确定方法,其特征在于,包括:
    获取路面的激光点云以及所述路面的图像;
    提取所述路面的激光点云中至少一个路面要素的候选激光点云点,并根据所述路面的激光点云确定所述路面所在的曲面;
    提取所述路面的图像中至少一个路面要素的候选像素点,根据所述至少一个路面要素的候选像素点的坐标以及所述曲面确定所述至少一个路面要素的候选像素点在所述曲面上的投影点的坐标;
    根据所述至少一个路面要素的候选激光点云点的坐标以及所述至少一个路面要素的候选像素点在所述曲面上的投影点的坐标,确定至少一个路面要素点的坐标,所述至少一个路面要素点对应所述至少一个路面要素中的一个或多个路面要素。
  2. 如权利要求1所述的方法,其特征在于,所述至少一个路面要素点的坐标为所述路面对应的空间内的至少一个采样空间对应的路面要素点的坐标的集合,其中,第一采样空间为所述至少一个采样空间内的任一采样空间;
    所述根据所述至少一个路面要素的候选激光点云点坐标以及所述至少一个路面要素的候选像素点在所述曲面上的投影点坐标,确定至少一个路面要素点的坐标,包括:
    确定所述第一采样空间中的至少一个路面要素的候选激光点云点的置信度,和/或,所述第一采样空间中的至少一个路面要素的候选像素点在所述曲面上的投影点的置信度;
    根据所述第一采样空间中的至少一个路面要素的候选激光点云点坐标及置信度,和/或,所述第一采样空间中的至少一个路面要素的候选像素点在所述曲面上的投影点坐标及置信度,得到所述第一采样空间对应的路面要素点的坐标;其中,所述第一采样空间内包括至少一个候选激光点云点和/或至少一个投影点。
  3. 如权利要求2所述的方法,其特征在于,所述第一采样空间对应的路面要素点的坐标满足以下公式:
    P sample=(∑ i=1~n C Li*P i+∑ i=1~m C Ci*p i)/(∑ i=1~n C Li+∑ i=1~m C Ci)
    其中,P sample表示所述第一采样空间对应的路面要素点的坐标,P i为候选激光点云点i的坐标,p i为所述曲面上的投影点i的坐标,C Li为候选激光点云点i的置信度,C ci为所述曲面上的投影点i的置信度,n为所述第一采样空间内候选激光点云点的数量,m为所述第一采样空间内所述曲面上的投影点的数量,n和m均为大于或等于1的整数。
  4. 如权利要求2或3所述的方法,其特征在于,所述至少一个路面要素的候选激光点云点的置信度满足以下公式:
    C Li=W L1*D i+W L2*I i
    其中,C Li为候选激光点云点i的置信度,D i为候选激光点云点i的邻域密度,I i为候选激光点云点i的邻域相对反射率,W L1为D i的置信度权重系数,W L2为I i的置信度权重系数。
  5. 如权利要求2或3所述的方法,其特征在于,所述至少一个路面要素的候选像素点在所述曲面上的投影点的置信度满足以下公式:
    C Ci=W C1*c i+W C2/L i
    其中,C Ci为所述曲面上的投影点i的置信度,c i为所述图像中路面要素的候选像素点i 的置信度,L i为所述投影点i的坐标与所在摄像装置原点的坐标间的距离,W C1为c i的置信度权重系数,W C2为1/L i的置信度权重系数。
  6. 如权利要求1-5中任一项所述的方法,其特征在于:
    所述根据所述至少一个路面要素的候选激光点云点坐标以及所述至少一个路面要素的候选像素点在所述曲面上的投影点坐标,确定至少一个路面要素点的坐标之前,还包括:
    对所述至少一个路面要素的候选激光点云点以及所述曲面上的至少一个投影点进行聚类,得到所述至少一个路面要素的候选激光点云点以及所述曲面上的至少一个投影点所属的路面要素;
    所述根据所述至少一个路面要素的候选激光点云点坐标以及所述至少一个路面要素的候选像素点在所述曲面上的投影点坐标,确定至少一个路面要素点的坐标,包括:
    根据第一路面要素所属的至少一个候选激光点云点的坐标以及至少一个候选像素点在所述曲面上的投影点的坐标,确定所述第一路面要素中至少一个路面要素点的坐标;其中,所述第一路面要素为聚类得到的任一路面要素。
  7. 如权利要求1-6中任一项所述的方法,其特征在于:
    所述根据所述路面的激光点云确定所述路面所在的曲面,包括:
    根据所述路面的激光点云中点云点的坐标,生成所述路面所在的网格化曲面;
    所述根据所述至少一个路面要素的候选像素点坐标以及所述曲面确定所述至少一个路面要素的候选像素点在所述曲面上的投影点坐标,包括:
    根据所述至少一个路面要素的候选像素点坐标以及所述网格化曲面,确定所述至少一个路面要素的候选像素点在所述网格化曲面中的投影点坐标。
  8. 如权利要求1-7中任一项所述的方法,其特征在于,还包括:
    根据所述至少一个路面要素点的坐标,确定或者输出所述一个或多个路面要素的信息。
  9. 如权利要求1-8中任一项所述的方法,其特征在于,所述获取路面的激光点云以及所述路面的图像,包括:
    获取来自于至少一个激光雷达的所述路面的激光点云,以及来自至少一个摄像装置的所述路面的图像;
    所述激光点云中的点云点坐标和所述图像中的像素点坐标属于同一坐标系。
  10. 如权利要求1-9中任一项所述的方法,其特征在于,所述路面要素包括以下至少一项:车道线、停止线、路面标识、箭头、文字。
  11. 一种路面要素确定装置,其特征在于,包括:
    获取单元,用于获取路面的激光点云以及所述路面的图像;
    处理单元,用于提取所述路面的激光点云中至少一个路面要素的候选激光点云点,并根据所述路面的激光点云确定所述路面所在的曲面;
    所述处理单元还用于提取所述路面的图像中至少一个路面要素的候选像素点,根据所述至少一个路面要素的候选像素点的坐标以及所述曲面确定所述至少一个路面要素的候选像素点在所述曲面上的投影点的坐标;以及,根据所述至少一个路面要素的候选激光点云点的坐标以及所述至少一个路面要素的候选像素点在所述曲面上的投影点的坐标,确定至少一个路面要素点的坐标,所述至少一个路面要素点对应所述至少一个路面要素中的一个或多个路面要素。
  12. 如权利要求11所述的装置,其特征在于,所述至少一个路面要素点的坐标为所述 路面对应的空间内的至少一个采样空间对应的路面要素点的坐标的集合,其中,第一采样空间为所述至少一个采样空间内的任一采样空间;
    所述处理单元,具体用于:
    确定所述至少一个路面要素的候选激光点云点的置信度,和/或,所述第一采样空间中的至少一个路面要素的候选像素点在所述曲面上的投影点的置信度;
    根据所述第一采样空间中的至少一个路面要素的候选激光点云点坐标及置信度,和/或,所述第一空间中的至少一个路面要素的候选像素点在所述曲面上的投影点坐标及置信度,得到所述第一采样空间对应的路面要素点的坐标;其中,所述第一采样空间内包括至少一个候选激光点云点和/或至少一个投影点。
  13. 如权利要求12所述的装置,其特征在于,所述第一采样空间对应的路面要素点的坐标满足以下公式:
    P sample=(∑ i=1~n C Li*P i+∑ i=1~m C Ci*p i)/(∑ i=1~n C Li+∑ i=1~m C Ci)
    其中,P sample表示所述第一采样空间对应的路面要素点的坐标,P i为候选激光点云点i的坐标,p i为所述曲面上的投影点i的坐标,C Li为候选激光点云点i的置信度,C ci为所述曲面上的投影点i的置信度,n为所述第一采样空间内候选激光点云点的数量,m为所述第一采样空间内所述曲面上的投影点的数量,n和m均为大于或等于1的整数。
  14. 如权利要求12或13所述的装置,其特征在于,所述至少一个路面要素的候选激光点云点的置信度满足以下公式:
    C Li=W L1*D i+W L2*I i
    其中,C Li为候选激光点云点i的置信度,D i为候选激光点云点i的邻域密度,I i为候选激光点云点i的邻域相对反射率,W L1为D i的置信度权重系数,W L2为I i的置信度权重系数。
  15. 如权利要求12或13所述的装置,其特征在于,所述至少一个路面要素的候选像素点在所述曲面上的投影点的置信度满足以下公式:
    C Ci=W C1*c i+W C2/L i
    其中,C Ci为所述曲面上的投影点i的置信度,c i为所述图像中路面要素的候选像素点i的置信度,L i为所述投影点i的坐标与摄像装置原点的坐标间的距离,W C1为c i的置信度权重系数,W C2为1/L i的置信度权重系数。
  16. 如权利要求11-15中任一项所述的装置,其特征在于,所述处理单元,还用于:
    根据所述至少一个路面要素的候选激光点云点坐标以及所述至少一个路面要素的候选像素点在所述曲面上的投影点坐标,确定至少一个路面要素点的坐标之前,对所述至少一个路面要素的候选激光点云点以及所述曲面上的至少一个投影点进行聚类,得到所述至少一个路面要素的候选激光点云点以及所述曲面上的至少一个投影点所属的路面要素;
    所述根据所述至少一个路面要素的候选激光点云点坐标以及所述至少一个路面要素的候选像素点在所述曲面上的投影点坐标,确定至少一个路面要素点的坐标,包括:
    根据第一路面要素所属的至少一个候选激光点云点的坐标以及至少一个候选像素点在所述曲面上的投影点的坐标,确定所述第一路面要素中至少一个路面要素点的坐标;其中,所述第一路面要素为聚类得到的任一路面要素。
  17. 如权利要求11-16中任一项所述的装置,其特征在于,所述处理单元,具体用于:
    根据所述路面的激光点云中点云点的坐标,生成所述路面所在的网格化曲面;
    根据所述至少一个路面要素的候选像素点坐标以及所述网格化曲面,确定所述至少一个路面要素的候选像素点在所述网格化曲面中的投影点坐标。
  18. 如权利要求11-17中任一项所述的装置,其特征在于,所述处理单元,还用于:
    根据所述至少一个路面要素点的坐标,确定或者输出所述一个或多个路面要素的信息。
  19. 如权利要求11-18中任一项所述的装置,其特征在于,所述获取单元,具体用于:
    获取来自于至少一个激光雷达的所述路面的激光点云,以及来自至少一个摄像装置的所述路面的图像;
    所述激光点云中的点云点坐标和所述图像中的像素点坐标属于同一坐标系。
  20. 如权利要求11-19中任一项所述的装置,其特征在于,所述路面要素包括以下至少一项:车道线、停止线、路面标识、箭头、文字。
  21. 一种路面要素确定装置,其特征在于,包括至少一个处理器和接口;
    所述接口,用于为所述至少一个处理器提供程序指令或者数据;
    所述至少一个处理器用于执行所述程序行指令,以实现如权利要求1-10中任一所述的方法。
  22. 一种车载系统,其特征在于,包括如权利要求11-20中任一项所述的装置。
  23. 如权利要求22所述的车载系统,其特征在于,所述车载系统还包括至少一个激光雷达以及至少一个摄像装置。
  24. 一种计算机存储介质,其上存储有计算机程序或指令,其特征在于,所述计算机程序或指令被至少一个处理器执行时实现如权利要求1-10中任一所述的方法。
PCT/CN2020/119343 2020-09-30 2020-09-30 一种路面要素确定方法及装置 WO2022067647A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/119343 WO2022067647A1 (zh) 2020-09-30 2020-09-30 一种路面要素确定方法及装置
CN202080005143.5A CN112740225B (zh) 2020-09-30 2020-09-30 一种路面要素确定方法及装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/119343 WO2022067647A1 (zh) 2020-09-30 2020-09-30 一种路面要素确定方法及装置

Publications (1)

Publication Number Publication Date
WO2022067647A1 true WO2022067647A1 (zh) 2022-04-07

Family

ID=75609525

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/119343 WO2022067647A1 (zh) 2020-09-30 2020-09-30 一种路面要素确定方法及装置

Country Status (2)

Country Link
CN (1) CN112740225B (zh)
WO (1) WO2022067647A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114755695A (zh) * 2022-06-15 2022-07-15 北京海天瑞声科技股份有限公司 关于激光雷达点云数据的路面检测方法、装置及介质
WO2024051344A1 (zh) * 2022-09-05 2024-03-14 北京地平线机器人技术研发有限公司 一种地图创建方法及装置

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113034566B (zh) * 2021-05-28 2021-09-24 湖北亿咖通科技有限公司 高精度地图构建方法、装置、电子设备及存储介质
CN117670874A (zh) * 2024-01-31 2024-03-08 安徽省交通规划设计研究总院股份有限公司 一种基于图像处理的箱型梁内部裂缝检测方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109584294A (zh) * 2018-11-23 2019-04-05 武汉中海庭数据技术有限公司 一种基于激光点云的路面点云提取方法和装置
CN111126182A (zh) * 2019-12-09 2020-05-08 苏州智加科技有限公司 车道线检测方法、装置、电子设备及存储介质
US20200242373A1 (en) * 2017-11-27 2020-07-30 Tusimple, Inc. System and method for large-scale lane marking detection using multimodal sensor data
CN111551957A (zh) * 2020-04-01 2020-08-18 上海富洁科技有限公司 基于激光雷达感知的园区低速自动巡航及紧急制动系统
CN111563398A (zh) * 2019-02-13 2020-08-21 北京京东尚科信息技术有限公司 用于确定目标物的信息的方法和装置

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10528851B2 (en) * 2017-11-27 2020-01-07 TuSimple System and method for drivable road surface representation generation using multimodal sensor data
CN110400363A (zh) * 2018-04-24 2019-11-01 北京京东尚科信息技术有限公司 基于激光点云的地图构建方法和装置
CN110160502B (zh) * 2018-10-12 2022-04-01 腾讯科技(深圳)有限公司 地图要素提取方法、装置及服务器
CN109614889B (zh) * 2018-11-23 2020-09-18 华为技术有限公司 对象检测方法、相关设备及计算机存储介质
CN110208819A (zh) * 2019-05-14 2019-09-06 江苏大学 一种多个障碍物三维激光雷达数据的处理方法
CN110378196B (zh) * 2019-05-29 2022-08-02 电子科技大学 一种结合激光点云数据的道路视觉检测方法
CN110456363B (zh) * 2019-06-17 2021-05-18 北京理工大学 三维激光雷达点云和红外图像融合的目标检测及定位方法
CN110705577B (zh) * 2019-09-29 2022-06-07 武汉中海庭数据技术有限公司 一种激光点云车道线提取方法
CN111340797B (zh) * 2020-03-10 2023-04-28 山东大学 一种激光雷达与双目相机数据融合检测方法及系统
CN111667545B (zh) * 2020-05-07 2024-02-27 东软睿驰汽车技术(沈阳)有限公司 高精度地图生成方法、装置、电子设备及存储介质
CN111598034B (zh) * 2020-05-22 2021-07-23 知行汽车科技(苏州)有限公司 障碍物检测方法、装置及存储介质
CN111694011A (zh) * 2020-06-19 2020-09-22 安徽卡思普智能科技有限公司 一种摄像机和三维激光雷达数据融合的路沿检测方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200242373A1 (en) * 2017-11-27 2020-07-30 Tusimple, Inc. System and method for large-scale lane marking detection using multimodal sensor data
CN109584294A (zh) * 2018-11-23 2019-04-05 武汉中海庭数据技术有限公司 一种基于激光点云的路面点云提取方法和装置
CN111563398A (zh) * 2019-02-13 2020-08-21 北京京东尚科信息技术有限公司 用于确定目标物的信息的方法和装置
CN111126182A (zh) * 2019-12-09 2020-05-08 苏州智加科技有限公司 车道线检测方法、装置、电子设备及存储介质
CN111551957A (zh) * 2020-04-01 2020-08-18 上海富洁科技有限公司 基于激光雷达感知的园区低速自动巡航及紧急制动系统

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114755695A (zh) * 2022-06-15 2022-07-15 北京海天瑞声科技股份有限公司 关于激光雷达点云数据的路面检测方法、装置及介质
WO2024051344A1 (zh) * 2022-09-05 2024-03-14 北京地平线机器人技术研发有限公司 一种地图创建方法及装置

Also Published As

Publication number Publication date
CN112740225A (zh) 2021-04-30
CN112740225B (zh) 2022-05-13

Similar Documents

Publication Publication Date Title
EP3581890B1 (en) Method and device for positioning
WO2022067647A1 (zh) 一种路面要素确定方法及装置
CN107703528B (zh) 自动驾驶中结合低精度gps的视觉定位方法及系统
Guan et al. Using mobile LiDAR data for rapidly updating road markings
WO2018177026A1 (zh) 确定道路边沿的装置和方法
EP3903293A1 (en) Crowdsourced detection, identification and sharing of hazardous road objects in hd maps
Negru et al. Image based fog detection and visibility estimation for driving assistance systems
JP7138718B2 (ja) 地物検出装置、地物検出方法および地物検出プログラム
JP6804991B2 (ja) 情報処理装置、情報処理方法、および情報処理プログラム
JP6442834B2 (ja) 路面高度形状推定方法とシステム
US20200082560A1 (en) Estimating two-dimensional object bounding box information based on bird's-eye view point cloud
Konrad et al. Localization in digital maps for road course estimation using grid maps
CN112308913B (zh) 一种基于视觉的车辆定位方法、装置及车载终端
WO2023087526A1 (zh) 用于点云去噪的方法、电子设备及存储介质
EP3291178A1 (en) 3d vehicle localizing using geoarcs
Berriel et al. A particle filter-based lane marker tracking approach using a cubic spline model
CN114325634A (zh) 一种基于激光雷达的高鲁棒性野外环境下可通行区域提取方法
Hwang et al. Vision-based vehicle detection and tracking algorithm design
CN112733678A (zh) 测距方法、装置、计算机设备和存储介质
WO2023179032A1 (zh) 图像处理方法及装置、电子设备、存储介质、计算机程序、计算机程序产品
Na et al. Drivable space expansion from the ground base for complex structured roads
CN115965847A (zh) 交叉视角下多模态特征融合的三维目标检测方法和系统
CN115588047A (zh) 一种基于场景编码的三维目标检测方法
Hernandez-Gutierrez et al. Probabilistic road geometry estimation using a millimetre-wave radar
CN114384486A (zh) 一种数据处理方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20955672

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20955672

Country of ref document: EP

Kind code of ref document: A1