WO2022047744A1 - Road surface extraction method and apparatus for maps - Google Patents

Road surface extraction method and apparatus for maps

Info

Publication number
WO2022047744A1
WO2022047744A1 PCT/CN2020/113560 CN2020113560W WO2022047744A1 WO 2022047744 A1 WO2022047744 A1 WO 2022047744A1 CN 2020113560 W CN2020113560 W CN 2020113560W WO 2022047744 A1 WO2022047744 A1 WO 2022047744A1
Authority
WO
WIPO (PCT)
Prior art keywords
road surface
road
points
point cloud
candidate
Prior art date
Application number
PCT/CN2020/113560
Other languages
English (en)
French (fr)
Inventor
周旺
Original Assignee
Huawei Technologies Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to PCT/CN2020/113560
Priority to CN202080004150.3A (CN112513876B)
Publication of WO2022047744A1

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/23 - Clustering techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformations in the plane of the image
    • G06T 3/08 - Projecting images onto non-planar surfaces, e.g. geodetic screens
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/10 - Terrestrial scenes
    • G06V 20/182 - Network patterns, e.g. roads or rivers
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10028 - Range image; Depth image; 3D point clouds

Definitions

  • The present application relates to the technical field of electronic maps, and in particular to a road surface extraction method and apparatus for maps.
  • Self-driving cars rely on the synergy of technologies such as artificial intelligence, visual computing, radar, positioning systems, and high-precision maps to allow a computer to operate a motor vehicle automatically and safely without any active human operation.
  • the accuracy and precision of high-precision maps as a tool for car navigation are critical to the safety of autonomous vehicles.
  • the embodiments of the present application provide a road surface extraction method and device for a map, which improves the accuracy of road surface extraction and simultaneously shortens the calculation time.
  • A first aspect provides a road surface extraction method for a map, including: determining a plurality of candidate road surface points of a road surface based on an original laser point cloud, where the original laser point cloud is a point cloud collected by a laser sensor; determining a road surface image of the road surface from an image collected by a camera; fusing the plurality of candidate road surface points with the road surface image to obtain a plurality of first road surface points of the road surface; extracting a first road surface envelope of the plurality of first road surface points, where the first road surface envelope includes a set of ordered points among the plurality of first road surface points and is used to characterize the outline of the road surface; and determining a first road surface point cloud of the road surface based on the original laser point cloud and the first road surface envelope.
  • With this technical solution, the first road surface envelope is applied to the original laser point cloud, so the high resolution of the original laser point cloud is preserved and the computation speed is improved; extracting road edge points allows the road edge to be determined accurately; and fusing with the road surface image makes the method suitable for a variety of poor road conditions. Therefore, the present application can quickly, accurately, and completely extract the road surface information from the laser point cloud, and the extracted road surface information has the characteristics of high precision and high resolution.
  • The extracting the first road surface envelope of the plurality of first road surface points includes: extracting the first road surface envelope of the plurality of first road surface points by using a concave hull extraction method.
  • With this technical solution, automatic extraction by the computer is realized without human intervention, and a more complete envelope can be obtained, thereby improving the accuracy of road surface extraction.
  • The determining the candidate road surface points of the road surface based on the original laser point cloud includes: dividing the original laser point cloud into a plurality of grids; calculating a point cloud thickness of each grid of the plurality of grids, where the point cloud thickness is the height difference between the highest point and the lowest point in each grid; when the point cloud thickness is less than a first threshold, determining the grid as a candidate grid; and determining the candidate road surface points, the candidate road surface points including any one or more points in at least one candidate grid of the plurality of grids.
  • With this technical solution, the grid step size can be selected flexibly; when a larger grid step size is selected for the laser point cloud, the computing device needs less time for data processing, which improves the extraction speed of the road surface point cloud.
  • the determining the road surface image of the road surface from the image collected by the camera includes: performing semantic segmentation on the image collected by the camera to determine the road surface image.
  • The fusing the candidate road surface points with the road surface image to obtain the first road surface points of the road surface includes: projecting the candidate road surface points onto the road surface image; and when at least one of the candidate road surface points can be projected onto the road surface image, clustering the at least one candidate road surface point to obtain the first road surface points.
  • the multi-sensor fusion technology is utilized, the robustness of the road surface extraction is improved, and the accuracy of the road surface extraction is improved.
  • The method further includes: determining road edge points of the road surface based on the original laser point cloud; and the fusing the candidate road surface points with the road surface image to obtain the first road surface points of the road surface includes: fusing the candidate road surface points, the road edge points, and the road surface image to obtain the first road surface points of the road surface.
  • the multi-sensor fusion technology is utilized, the robustness of the road surface extraction is improved, more accurate road surface edge information is obtained, and the precision of road surface extraction is improved.
  • the determining the road edge point of the road surface based on the original laser point cloud includes: processing the original laser point cloud with a road surface edge model to obtain the road surface edge point.
  • the extraction of the road surface edge is more accurate, thereby improving the accuracy of the road surface extraction.
  • The determining the first road surface point cloud based on the original laser point cloud and the first road surface envelope includes: determining the points in the original laser point cloud that are located within the area enclosed by the first road surface envelope as the first road surface point cloud.
  • The envelope extraction module is specifically configured to: extract the first road surface envelope of the first road surface points by using a concave hull extraction method.
  • The candidate point determination module is specifically configured to: divide the original laser point cloud into a plurality of grids; calculate the point cloud thickness of each grid of the plurality of grids, where the point cloud thickness is the height difference between the highest point and the lowest point in each grid; when the point cloud thickness is less than the first threshold, determine the grid as a candidate grid; and determine the candidate road surface points, the candidate road surface points including any one or more points in at least one candidate grid of the plurality of grids.
  • the image determination module is specifically configured to: perform semantic segmentation on the image collected by the camera to determine the road surface image.
  • The road surface point determination module is specifically configured to: project the candidate road surface points onto the road surface image; and when at least one of the candidate road surface points can be projected onto the road surface image, cluster the at least one candidate road surface point to obtain the first road surface points.
  • An edge point determination module is further included, which is specifically configured to: determine the road edge points of the road surface based on the original laser point cloud; and the road surface point determination module is specifically configured to: fuse the candidate road surface points, the road edge points, and the road surface image to obtain the first road surface points of the road surface.
  • the edge point determination module is specifically configured to: process the original laser point cloud by using a road surface edge model to obtain the road surface edge point.
  • The road surface point cloud determination module is specifically configured to: determine the points in the original laser point cloud that are located within the area enclosed by the first road surface envelope as the first road surface point cloud.
  • An electronic device is provided, comprising: a processor; and a memory for storing instructions executable by the processor; where the processor is configured to execute the extraction method according to any one of the above-mentioned first aspect.
  • a computer-readable storage medium where the storage medium stores a computer program, and the computer program is used to execute the extraction method according to any one of the above-mentioned first aspect.
  • A chip is provided, including a processor and an interface, where the interface is used to read processor-executable instructions from an external memory, and the processor can be configured to execute the extraction method according to any one of the above-mentioned first aspect.
  • a server is provided, and the server is configured to execute the extraction method according to any one of the above-mentioned first aspect.
  • a computer storage medium where a computer program is stored in the computer storage medium, and the computer program is used to execute the extraction method according to any one of the above-mentioned first aspect.
  • a computer program product which, when the computer program product runs on a computer, causes the computer to execute the extraction method described in any one of the above-mentioned first aspect.
  • an electronic device configured to execute the extraction method according to any one of the above-mentioned first aspects.
  • Any of the above-mentioned road surface extraction apparatuses for maps, computer-readable storage media, electronic devices, computer program products, chips, and servers can be implemented by the corresponding methods provided above. Therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding methods provided above, which will not be repeated here.
  • FIG. 1 is a schematic diagram of an electronic map data collection scene provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of data processing and usage scenarios of an electronic map provided by an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a cloud command-side map data processing provided by an embodiment of the present application.
  • FIG. 4 is a flowchart of a road surface extraction method for a map provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a road provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of an envelope extraction method provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a first road surface contour of a real road provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a first road surface point cloud of a real road according to an embodiment of the present application.
  • FIG. 9 is a structural diagram of a road surface extraction device provided by an embodiment of the present application.
  • FIG. 10 is a structural diagram of another road surface extraction device provided by an embodiment of the application.
  • FIG. 11 is a schematic diagram of a road surface extraction process with a complete road edge line provided by an embodiment of the application.
  • FIG. 12 is a schematic diagram of a road surface extraction process with missing road edge lines provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of an extraction process without a road edge line provided by an embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of a computer program product provided by an embodiment of the present application.
  • The terms "first" and "second" are only used for descriptive purposes, and should not be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
  • A feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature.
  • Unless otherwise specified, "a plurality of" means two or more.
  • Words such as "exemplary" or "for example" are used to represent examples, illustrations, or descriptions. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present application should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "for example" is intended to present the related concepts in a specific manner to facilitate understanding.
  • LiDAR (light detection and ranging) can capture the basic shape features and rich local details of a target, and has the advantages of high reliability and high measurement accuracy; it is currently widely used in environment perception for intelligent devices such as self-driving vehicles, robots, and drones.
  • A lidar, for example a scanning lidar, arranges multiple laser beams in a vertical column and rotates them 360 degrees around an axis; each laser beam scans a plane, and the vertical superposition of these planes presents a three-dimensional image.
  • LiDAR detects targets by emitting laser beams, and obtains point cloud data by collecting the reflected beams. These point cloud data can generate accurate 3D stereoscopic images.
  • Electronic maps are digital maps, including high-precision maps.
  • An electronic map is a map based on a map database, using computer technology, stored in digital form, and can be displayed on the screen of a terminal device.
  • The main constituent elements of an electronic map are map elements, such as geographical elements including mountains, water systems, land, administrative divisions, points of interest, or roads.
  • Roads can be further divided into five classes: expressways, first-class roads, second-class roads, third-class roads, and fourth-class roads, and each class of road can be a different map element.
  • Semantic segmentation is a basic task in computer vision in which the visual input must be classified into different semantically interpretable categories; semantic interpretability means that the categories are meaningful in the real world. For example, all pixels in the image that belong to the road need to be distinguished.
  • FIG. 1 is a schematic diagram of an electronic map data collection scenario provided by an embodiment of the present application. Referring to FIG. 1, the data of the electronic map is mainly collected by the lidar 120, assisted by other sensors 110, and the lidar 120 is arranged on top of a mobile carrier, which may be, for example, a collection vehicle 100, a drone, or a robot.
  • the above-mentioned vehicle 100 can be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, a recreational vehicle, a playground vehicle, construction equipment, a tram, a golf cart, a train, a cart, etc.
  • Other sensors 110 may be disposed at the front, rear, or sides of the vehicle, and the other sensors 110 may be cameras, millimeter-wave radars, ultrasonic radars, infrared sensors, etc., which are not specifically limited in this embodiment of the present application.
  • the data collected by the other sensors 110 is fused with the data collected by the lidar 120 .
  • Because of objects such as trees, vegetation, buildings, and road signs on and around the road surface, the road laser point cloud collected by the lidar 120 includes a lot of noise; therefore, to extract road surface information, the data collected by the lidar 120 needs to be processed.
  • FIG. 2 is a schematic diagram of data processing and usage scenarios of an electronic map.
  • The data collected by the lidar 120 and the other sensors 110 are input into the computing device 1 of FIG. 2; the description below takes a camera as an example of the other sensors 110, and it applies to the other sensors as well.
  • the computing device 1 performs a series of data processing on the point cloud data collected by the lidar 120 and the images collected by the camera 110 to obtain an accurate road surface point cloud, and extracts road surface information for making an electronic map.
  • The prepared electronic map is transmitted to the cloud server 2 by wire, wirelessly, or via a storage medium such as a USB flash drive or a hard disk.
  • The cloud server 2 includes a large-capacity storage space for storing map data, including high-precision maps, and is also responsible for issuing electronic map updates to vehicle terminals or to other terminals such as mobile phones and tablets.
  • the vehicle terminal includes the common vehicle 101 in the lower part of FIG. 2 , and may also include the special collecting vehicle 100 on the right side of FIG. 2 .
  • the map data can be deployed on one or more servers.
  • Optionally, a crowdsourcing mode can also be used for electronic map data. Crowdsourcing, a low-cost data collection mode, has been widely adopted in recent years; it relies on the power of the public to complete a specific task.
  • As shown in Figure 2, the ordinary vehicle 101 can also collect road data and report the collected road data to the computing device 1.
  • The computing device 1 can decide whether to update the current map based on the road data reported by the ordinary vehicle 101, execute the update of the map data, and issue a new electronic map after the update.
  • the computing device 1 may be an independent device, such as an independent computer.
  • the computing device 1 may also be included in the cloud server 2 , and the road data collected by the dedicated collection vehicle 100 and the road data collected by the ordinary vehicle 101 may be directly reported to the computing device 1 in the cloud server 2 .
  • the computing device 1 may also be set on the collection vehicle 100, and the calculation is performed directly at the vehicle end.
  • FIG. 3 is a schematic structural diagram of a cloud command-side map data processing provided by an embodiment of the present application.
  • the on-board computer system 112 may also receive information from or transfer information to other computer systems.
  • Alternatively, sensor data collected from a sensor system at the vehicle terminal 12, such as a lidar or a camera, may be transferred to another computer for processing.
  • data from computer system 112 may be transmitted via a network to cloud-side computer 720 for further processing.
  • Networks and intermediate nodes may include various configurations and protocols, including the Internet, the World Wide Web, intranets, virtual private networks, wide area networks, local area networks, private networks using proprietary communication protocols of one or more companies, Ethernet, WiFi, and HTTP, as well as various combinations of the foregoing.
  • Such communication may be carried out by any device capable of transferring data to and from other computers, such as a modem or a wireless interface.
  • computer 720 may include a server having multiple computers, such as a load balancing server farm, that exchange information with different nodes of the network for the purpose of receiving, processing, and transmitting data from computer system 112.
  • the server may be configured similarly to a computer system, with processor 730 , memory 740 , instructions 750 , and data 760 .
  • the data 760 may include point cloud data collected by the lidar 120 , road data collected by other sensors 110 , such as images collected by cameras, processed intermediate data, and finally processed road point cloud data.
  • The server 720 may receive, monitor, store, and update various information related to the map road data, and determine whether the map data should be updated.
  • FIG. 4 is a flowchart of a road surface extraction method for a map provided by an embodiment of the present application.
  • the execution subject of this embodiment may be a computing device 1 in a cloud server or an independent computing device 1 .
  • S101 Determine candidate road points of the road surface based on an original laser point cloud, where the original laser point cloud is a point cloud collected by a laser sensor.
  • The original laser point cloud is the road point cloud collected by the above-mentioned lidar 120. The current road surface laser point cloud may be collected at a set time interval, and the period length can be adjusted according to the operator's needs.
  • the lidar can be single-line lidar, multi-line lidar, mechanical rotating lidar, MEMS lidar, phased array lidar, Flash lidar, etc.
  • the original laser point cloud of the road surface collected by lidar contains a lot of noise, and the original laser point cloud needs to be de-noised to determine candidate road points.
  • The candidate road surface points can be determined in various ways, for example by a grid method.
  • The grid method specifically includes: dividing the original laser point cloud into a plurality of grids. Because a larger grid side length can effectively reduce both the noise in the extracted road surface points and the time needed to extract the road surface grids, the side length of the grid can be set to be greater than a certain threshold, for example greater than 1 m.
  • The point cloud thickness is then calculated for each grid; the point cloud thickness is the height difference between the highest point and the lowest point in the grid. The thickness can be calculated in several ways:
  • In one method, the point cloud thickness can be calculated by establishing a Cartesian coordinate system and computing the height difference of the points in the grid.
  • In another method, the point cloud thickness of each grid can be calculated by the plane-difference method.
  • Based on the point cloud thickness, a grid whose point cloud thickness is lower than a certain threshold is selected as a candidate grid of the road surface, and any one or more points of this grid can be used as candidate road surface points.
  • the point can be the center point of the grid, the vertices of the four corners of the grid, or the points on the four sides.
  • In this scheme, the grid step size can be selected flexibly; when a larger grid step size is selected for the laser point cloud, the computing device 1 needs less time for data processing, which improves the extraction speed of the road surface point cloud.
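  • As a concrete illustration of the grid method above (an editor's sketch, not part of the patent), the following Python/NumPy code bins a point cloud into square XY cells, measures each cell's thickness as its max-min height, and keeps the centers of sufficiently flat cells as candidate road surface points. The 1 m cell size and 0.2 m thickness threshold are illustrative assumptions, not values prescribed by the text.
```python
import numpy as np

def candidate_road_points(points, cell_size=1.0, thickness_thresh=0.2):
    """Grid / point-cloud-thickness test for candidate road surface points.

    points: (N, 3) array of x, y, z laser points.
    cell_size: grid side length in metres (the text suggests > 1 m).
    thickness_thresh: maximum height spread allowed for a "flat" (road) cell.
    """
    cells = np.floor(points[:, :2] / cell_size).astype(np.int64)
    z = points[:, 2]
    # Sort so that points belonging to the same cell become contiguous.
    order = np.lexsort((cells[:, 1], cells[:, 0]))
    cells_sorted, z_sorted = cells[order], z[order]
    boundaries = np.flatnonzero(np.any(np.diff(cells_sorted, axis=0), axis=1)) + 1
    candidates = []
    for idx in np.split(np.arange(len(z_sorted)), boundaries):
        thickness = z_sorted[idx].max() - z_sorted[idx].min()
        if thickness < thickness_thresh:                        # flat enough -> candidate grid
            cx, cy = (cells_sorted[idx[0]] + 0.5) * cell_size   # cell centre
            candidates.append((cx, cy, float(np.median(z_sorted[idx]))))
    return np.array(candidates)

# Minimal usage with synthetic data: a flat patch plus a tall "tree" column.
pts = np.vstack([np.random.rand(1000, 3) * [10, 10, 0.05],
                 [5, 5, 0] + np.random.rand(200, 3) * [1, 1, 3]])
print(candidate_road_points(pts).shape)
```
  • The plane-difference variant mentioned above could be realized, for example, by replacing the max-min spread with the spread of residuals around a plane fitted to each cell's points; that reading is the editor's interpretation rather than something the text spells out.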
  • the camera may be a camera disposed on the vehicle 100, and the camera may be disposed above, in front of, behind, or on the side of the vehicle.
  • Specific mounting positions include the windshield, doors, pillars, roof, rear of the vehicle, etc.
  • the camera may be a monocular camera, a binocular camera, a trinocular camera, a depth camera, an infrared camera, a fisheye camera, a surround-view camera, or the like.
  • the computing device 1 can perform semantic segmentation on the image collected by the camera to determine the road surface image.
  • The semantic segmentation can adopt various methods, such as manual labeling or deep learning methods.
  • The deep learning methods include, for example, convolutional neural networks (CNN), recurrent neural networks (RNN), K-means clustering, and other methods.
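  • Purely as an illustration of this step (the patent does not prescribe a particular network), the sketch below shows the usual shape of the operation in PyTorch/torchvision: run the camera image through a segmentation model and turn the per-pixel class prediction into a binary road mask. The DeepLabV3 backbone, the 19-class Cityscapes-style label set, the ROAD_CLASS index, and the idea of loading a road-capable checkpoint are all assumptions made for the example.
```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 19   # assumption: a Cityscapes-style label set that contains "road"
ROAD_CLASS = 0     # assumption: index of the "road" class in that label set

# Randomly initialised here; real use would load weights trained for road
# segmentation, e.g. model.load_state_dict(torch.load("road_seg.pth"))  # hypothetical checkpoint
model = deeplabv3_resnet50(weights=None, weights_backbone=None,
                           num_classes=NUM_CLASSES).eval()

def road_mask(image_bchw: torch.Tensor) -> torch.Tensor:
    """Return a boolean H x W mask of pixels classified as road."""
    with torch.no_grad():
        logits = model(image_bchw)["out"]            # (B, NUM_CLASSES, H, W)
    return logits.argmax(dim=1)[0] == ROAD_CLASS     # (H, W) boolean road mask

# Usage with a dummy tensor standing in for a normalised camera frame.
mask = road_mask(torch.rand(1, 3, 256, 512))
print(mask.shape, mask.dtype)
```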
  • In general, the lidar is mounted on top of the vehicle, and the vehicle collects the road point cloud while traveling. As shown in Figure 5, because the reflectivity of the road ahead of and behind the vehicle does not change, the original laser point cloud cannot effectively distinguish the data ahead of and behind the road.
  • In this case, the boundaries ahead of and behind the road can be obtained by semantically segmenting the image data collected by the camera.
  • In another case, all or some sections of a road may have no road edge. The road surface point cloud collected from a road without edges is not accurate in the original laser point cloud, which makes road surface point cloud extraction inaccurate; an extraction method that relies only on the laser point cloud cannot separate road surface from non-road-surface information. In this case, accurate road surface information can be obtained by combining the semantically segmented image data.
  • the candidate road point and the road image may be fused to obtain the first road point of the road.
  • According to the camera's intrinsic parameters, extrinsic parameters, and pose, and the lidar pose, the candidate road surface points are projected onto the semantically segmented road surface image.
  • If some of the candidate road surface points cannot be projected onto the road surface image, those points are determined as noise points and filtered out, leaving the points that can be projected onto the road surface image as candidate road surface points.
  • The plurality of candidate road surface points are clustered to obtain the first road surface points.
  • With the above technical solution, multi-sensor fusion is utilized, which improves the robustness of the acquired road surface and the extraction accuracy.
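  • The sketch below (NumPy and scikit-learn, an editor's illustration rather than the patent's implementation) condenses this fusion step: candidate points are projected into the image with a pinhole model using an assumed intrinsic matrix K and an assumed lidar-to-camera transform T_cam_lidar, points that do not land on road pixels of the segmented mask are discarded as noise, and the survivors are clustered. DBSCAN is used here as one reasonable clustering choice; the patent does not name a specific clustering algorithm.
```python
import numpy as np
from sklearn.cluster import DBSCAN

def fuse_candidates_with_mask(cand_xyz, road_mask, K, T_cam_lidar,
                              cluster_eps=1.5, min_samples=5):
    """Keep candidate points that project onto road pixels, then cluster them.

    cand_xyz:    (N, 3) candidate road surface points in the lidar frame.
    road_mask:   (H, W) boolean image from semantic segmentation.
    K:           (3, 3) camera intrinsic matrix (assumed known from calibration).
    T_cam_lidar: (4, 4) lidar-to-camera extrinsic transform (assumed known).
    """
    h, w = road_mask.shape
    homo = np.hstack([cand_xyz, np.ones((len(cand_xyz), 1))])
    cam = (T_cam_lidar @ homo.T).T[:, :3]              # points in the camera frame
    in_front = cam[:, 2] > 0.1                         # only points ahead of the camera
    uvw = (K @ cam.T).T
    uv = uvw[:, :2] / np.maximum(uvw[:, 2:3], 1e-6)    # perspective division -> pixel coords
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    in_image = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    on_road = np.zeros(len(cand_xyz), dtype=bool)
    on_road[in_image] = road_mask[v[in_image], u[in_image]]   # drop noise points
    kept = cand_xyz[on_road]
    if len(kept) == 0:
        return kept
    labels = DBSCAN(eps=cluster_eps, min_samples=min_samples).fit_predict(kept[:, :2])
    return kept[labels != -1]                          # clustered first road surface points
```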
  • There are many methods for extracting the first road surface envelope of the first road surface points; for example, a concave hull extraction method can be used.
  • As shown in Figure 6, Figure 6a is a set of points S; the concave hull extraction method is as follows:
  • Step 1: First find the convex hull of the point set S, as shown in Figure 6b; the convex hull is the initial outline of the envelope.
  • Step 2: Select an edge MN of the convex hull, as shown in Figure 6c. If the length of MN is greater than a threshold d1, select the interior point P closest to edge MN (the star-shaped point in Figure 6c) and calculate the distance from P to MN; if this distance is greater than a threshold d2, take this interior point as a point on the envelope, as shown in Figure 6d.
  • Step 3: Repeat Step 2 until all edges on the envelope have been traversed.
  • the resulting envelope is shown in Figure 6e.
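  • A direct transcription of Steps 1 to 3 into code could look like the Python/SciPy sketch below; the thresholds d1 and d2 and the nearest-interior-point rule follow the description above, while the concrete data structures, default values, and termination bookkeeping are the editor's assumptions rather than the patent's.
```python
import numpy as np
from scipy.spatial import ConvexHull

def point_segment_distance(p, a, b):
    """Distance from 2-D point p to segment ab."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / max(np.dot(ab, ab), 1e-12), 0.0, 1.0)
    return np.linalg.norm(ap - t * ab)

def concave_envelope(points_xy, d1=2.0, d2=0.5):
    """Concave-hull style envelope following Steps 1-3 described in the text.

    points_xy: (N, 2) first road surface points (x, y).
    d1: edge length above which an edge MN is examined.
    d2: distance above which the nearest interior point is inserted into the envelope.
    Returns the ordered envelope vertices as an (M, 2) array.
    """
    hull = ConvexHull(points_xy)
    envelope = [points_xy[i] for i in hull.vertices]       # Step 1: initial outline
    on_envelope = set(map(tuple, envelope))
    i = 0
    while i < len(envelope):                               # Step 3: traverse every edge
        m, n = envelope[i], envelope[(i + 1) % len(envelope)]
        if np.linalg.norm(n - m) > d1:                     # Step 2: a long edge MN
            interior = [p for p in points_xy if tuple(p) not in on_envelope]
            if interior:
                dists = [point_segment_distance(p, m, n) for p in interior]
                nearest = interior[int(np.argmin(dists))]  # interior point closest to MN
                if min(dists) > d2:                        # far enough -> envelope point
                    envelope.insert(i + 1, nearest)
                    on_envelope.add(tuple(nearest))
                    continue                               # re-examine the new edge M-P
        i += 1
    return np.array(envelope)
```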
  • For the first road surface points, the envelope of the road surface can be extracted according to the envelope extraction method shown in FIG. 6, so as to obtain the outline information of the road surface.
  • FIG. 7 shows the result of extracting the envelope of the first road surface points of a real road; the first road surface envelope is obtained, where the first road surface envelope is the road surface outline composed of gray lines.
  • The above-mentioned concave hull extraction method can be used, or other methods can be used, as long as the redundant, unordered points are reduced to an outer boundary and only the ordered envelope points on the outline of the first road surface points are retained.
  • The points in the original laser point cloud that are located within the area enclosed by the first road surface envelope are determined as the first road surface point cloud.
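  • Once the ordered envelope is available, selecting the first road surface point cloud reduces to a point-in-polygon test over the original laser points. The sketch below uses matplotlib's polygon path as one convenient implementation; the patent does not prescribe a particular point-in-polygon routine.
```python
import numpy as np
from matplotlib.path import Path

def first_road_point_cloud(original_points, envelope_xy):
    """Keep the original laser points whose XY position lies inside the envelope.

    original_points: (N, 3) original laser point cloud.
    envelope_xy:     (M, 2) ordered envelope vertices from the previous step.
    """
    polygon = Path(envelope_xy)                       # treated as a closed polygon
    inside = polygon.contains_points(original_points[:, :2])
    return original_points[inside]                    # the first road surface point cloud

# Usage: a square envelope keeps only the points that fall inside it.
envelope = np.array([[0, 0], [10, 0], [10, 10], [0, 10]], dtype=float)
cloud = np.random.rand(1000, 3) * 20 - 5
print(first_road_point_cloud(cloud, envelope).shape)
```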
  • FIG. 8 is a schematic diagram of applying the first road surface envelope to the original laser point cloud, so that the laser point cloud in the middle part can be determined as the first road surface laser point cloud, where the first road surface laser point cloud is the white part.
  • Because the envelope extraction method is adopted, the outer contour of the first road surface points is selected and the maximum extent of the first road surface points is taken as the first road surface envelope, thereby ensuring the completeness of the road surface point cloud data. Moreover, using the envelope method to extract the road surface point cloud from the original laser point cloud effectively controls the time complexity, shortens the data processing time, and makes it possible to obtain a complete road surface point cloud quickly. It can be seen that the embodiment of the present application can completely, quickly, and accurately extract road surface information.
  • Further, as shown in Figure 5, roads in urban areas generally have road edges. By extracting the road edge, the boundary of the road can be determined accurately; then, by determining which side of the road edge a point of the original laser point cloud lies on, it can be determined whether the reflection point belongs to the road surface point cloud, which improves the accuracy and precision of road surface point cloud extraction.
  • the extraction method may further include the step of: S103 , determining a road edge point of the road surface based on the original laser point cloud.
  • As shown in Figure 5, the road edge points include points with a large height change relative to the road plane, for example, the reflection points generated when the signal emitted by the lidar encounters guardrails, piers, isolation belts, curbs, and similar structures set along the road; such reflection points are road edge points.
  • the original laser point cloud can be processed by using the road edge model to obtain the road edge points.
  • For example, single-line lidar information can be used to find jump points, deep learning methods can be used to analyze the road edge, or a sliding box can be used to find jump points.
  • The following introduces the sliding-box method of finding jump points as an example.
  • First, a sliding box is set; the sliding box is slid to the left and to the right along the trajectory of the point cloud data collected by the lidar, and the step of each movement of the sliding box equals the side length of the sliding box.
  • The thickness of the original laser point cloud is calculated each time the sliding box is moved. If the thickness in two adjacent sliding boxes exceeds a certain threshold, a jump is considered to have occurred, and any point of that sliding box, such as its center point or a corner point, is taken as a point on the road edge.
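  • As an illustration of the sliding-box idea (an editor's sketch, not the patent's implementation), the code below slides a square box of side box_len sideways from the collection trajectory, measures the point-cloud thickness inside each box, and reports a jump where the thickness changes sharply between adjacent boxes. Reading the jump criterion as a thickness difference, as well as the default box size and threshold, are assumptions made for this example.
```python
import numpy as np

def edge_points_by_sliding_box(points, start_xy, direction_xy,
                               box_len=0.5, n_steps=40, jump_thresh=0.15):
    """Slide a box sideways from the trajectory and flag thickness jumps as road edges.

    points:       (N, 3) original laser point cloud.
    start_xy:     (2,) point on the collection trajectory to slide away from.
    direction_xy: (2,) vector pointing sideways (to the left or right of the trajectory).
    box_len:      side length of the sliding box; the step equals the side length.
    jump_thresh:  thickness change between adjacent boxes that counts as a jump.
    Returns the (x, y) centres of the boxes where a jump was detected.
    """
    direction = np.asarray(direction_xy, float)
    direction /= np.linalg.norm(direction)
    centres, thicknesses = [], []
    for k in range(n_steps):
        centre = np.asarray(start_xy, float) + k * box_len * direction
        inside = np.all(np.abs(points[:, :2] - centre) <= box_len / 2, axis=1)
        z = points[inside, 2]
        thicknesses.append(z.max() - z.min() if len(z) else 0.0)
        centres.append(centre)
    jumps = [centres[k] for k in range(1, n_steps)
             if abs(thicknesses[k] - thicknesses[k - 1]) > jump_thresh]
    return np.array(jumps)
```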
  • In this case, the step "S104, obtaining the first road surface points of the road surface" specifically includes: fusing the candidate road surface points, the road edge points, and the road surface image to obtain the first road surface points of the road surface.
  • the candidate road surface point, the road surface edge point, and the road surface image are fused to calculate the first road surface point of the road surface.
  • According to the camera's intrinsic parameters, extrinsic parameters, and pose, and the lidar pose, the candidate road surface point cloud is projected onto the semantically segmented road surface image.
  • Points that cannot be projected onto the road surface image are determined as noise points and filtered out. The remaining candidate road surface points are clustered, the clustered points are fused with the road edge points, it is determined whether each clustered point lies inside the road edge, and the points inside the road edge are selected as the first road surface points, so as to obtain the first road surface points of the road surface.
  • Therefore, by extracting the road edge, the road surface boundary can be accurately determined for a road surface point cloud that has road edges, thereby improving the accuracy of road surface extraction.
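  • One way to realise the "inside the road edge" check in code (an editor's illustration; the patent does not fix a particular geometric test) is to order the detected edge points into a polyline per road side and, for each clustered candidate point, take the sign of the 2-D cross product against the nearest edge segment, as sketched below. Which sign corresponds to the road side would be fixed once from the known driving trajectory.
```python
import numpy as np

def side_of_polyline(points_xy, polyline_xy):
    """Return +1 or -1 per point depending on which side of the edge polyline it lies.

    points_xy:   (N, 2) clustered candidate road surface points.
    polyline_xy: (M, 2) ordered road edge points forming one side of the road.
    """
    a, b = polyline_xy[:-1], polyline_xy[1:]            # edge segments
    seg = b - a
    mid = (a + b) / 2
    signs = np.empty(len(points_xy))
    for i, p in enumerate(points_xy):
        k = int(np.argmin(np.linalg.norm(mid - p, axis=1)))   # nearest segment (by midpoint)
        cross = seg[k, 0] * (p[1] - a[k, 1]) - seg[k, 1] * (p[0] - a[k, 0])
        signs[i] = 1.0 if cross >= 0 else -1.0
    return signs

def keep_points_inside_edge(candidates_xy, edge_polyline_xy, road_side_sign=-1):
    """Keep the candidates lying on the road side of the edge polyline."""
    return candidates_xy[side_of_polyline(candidates_xy, edge_polyline_xy) == road_side_sign]
```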
  • An embodiment of the present application further provides a road surface extraction apparatus for a map. The road surface extraction apparatus may be as shown in FIG. 9 or FIG. 10 and may include: a candidate point determination module 121, an edge point determination module 122, an image determination module 123, a road surface point determination module 124, an envelope extraction module 125, and a road surface point cloud determination module 126.
  • the candidate point determination module 121 determines candidate road points of the road surface based on the original laser point cloud, wherein the original laser point cloud is a point cloud collected by a laser sensor;
  • the image determination module 123 determines the road surface image of the road surface in the image collected by the camera;
  • The road surface point determination module 124 fuses the candidate road surface points with the road surface image to obtain the first road surface points of the road surface;
  • The envelope extraction module 125 extracts a first road surface envelope of the first road surface points, where the first road surface envelope includes a set of ordered points among the first road surface points and is used to characterize the outline of the road surface;
  • the road surface point cloud determination module 126 determines a first road surface point cloud of the road surface based on the original laser point cloud and the first road surface envelope.
  • envelope extraction module 125 is specifically used for:
  • The first road surface envelope of the first road surface points is extracted using a concave hull extraction method.
  • the candidate point determination module 121 is specifically used for:
  • Divide the original laser point cloud into a plurality of grids; calculate the point cloud thickness of each grid of the plurality of grids, where the point cloud thickness is the height difference between the highest point and the lowest point in each grid; when the point cloud thickness is less than the first threshold, determine the grid as a candidate grid;
  • the candidate pavement points are determined, the candidate pavement points including any point or points in at least one candidate grid of the plurality of grids.
  • the image determination module 123 is specifically used for:
  • Semantic segmentation is performed on the image collected by the camera to determine the road surface image.
  • road point determination module 124 is specifically used for:
  • Project the candidate road surface points onto the road surface image; when at least one of the candidate road surface points can be projected onto the road surface image, cluster the at least one candidate road surface point to obtain the first road surface points.
  • Further, an edge point determination module 122 is also included, which is specifically configured to: determine the road edge points of the road surface based on the original laser point cloud;
  • The road surface point determination module is specifically configured to: fuse the candidate road surface points, the road edge points, and the road surface image to obtain the first road surface points of the road surface.
  • edge point determination module 122 is specifically used for:
  • the original laser point cloud is processed using a road surface edge model to obtain the road surface edge points.
  • Further, the road surface point cloud determination module 126 is specifically used for:
  • the reflection points in the original laser point cloud that are located in the area included by the first road surface envelope are determined as the first road surface point cloud.
  • The road conditions in the embodiments of the present application include the following cases. Case 1: a road with complete road edge lines. As shown in Figure 11, the two solid lines in Figure 11a are the road edges, and the arrow indicates the direction of the road.
  • Figure 11b shows the candidate road surface points of the road surface determined based on the original laser point cloud. As shown in Figure 11b, since the reflectivity does not change, many extra candidate road surface points appear at the two ends of the road collected by the lidar (corresponding to the areas ahead of and behind the road in Figure 5), so there is no boundary there and road surface extraction cannot be performed.
  • Fig. 11c is a road edge point of the road surface determined based on the original laser point cloud.
  • FIG. 11d shows the first road surface points of the road surface obtained by fusing the candidate road surface points with the road surface image. Because the semantically segmented road surface image data is fused in, the two ends of the above candidate road surface points can be cut off. However, to improve the operation speed the grid step size is chosen relatively large, which lowers the accuracy of the candidate road surface points; therefore, the resolution of the candidate road surface points at this stage is not high. If the grid step size is reduced, the computation time increases and the number of noise points also increases.
  • FIG. 11e is a first road surface point of the road surface obtained by fusing the candidate road surface point, the road surface edge point, and the road surface image.
  • The outline of the first road surface points in Figure 11e is accurate and complete.
  • By extracting the envelope of the first road surface points and applying it to the original laser point cloud, a more accurate road laser point cloud, that is, the first road surface point cloud, is obtained (figure omitted; see Figure 8).
  • Because the point cloud information of the original laser point cloud is retained, the low-resolution problem of Fig. 11d is solved and the resolution of the point cloud is improved, while the operation speed is also improved. The road surface information in the laser point cloud can be extracted quickly, accurately, and completely, and the extracted road surface point cloud has the characteristics of high precision and high resolution.
  • Case 2: a road with a missing road edge line. Figure 12a shows a road in which the two solid lines are the road edges, a segment in the middle of the upper solid line is missing, the dotted line above is the boundary of the side road, and the arrow indicates the direction of the road.
  • Figure 12b shows the candidate road surface points of the road surface determined based on the original laser point cloud. Since the reflectivity does not change, many extra candidate road surface points appear at the two ends of the road collected by the lidar (corresponding to the areas ahead of and behind the road in Figure 5), so there is no boundary there and road surface extraction cannot be performed.
  • Fig. 12c is a road edge point of the road surface determined based on the original laser point cloud.
  • FIG. 12d shows the first road surface points of the road surface obtained by fusing the candidate road surface points with the road surface image. Because the semantically segmented road surface image data is fused in, the two ends of the above candidate road surface points can be cut off and some points on the side road are filtered out; only the points on the road surface and the points on the side road next to the section without a road edge are retained.
  • FIG. 12e shows the first road surface point of the road surface obtained by fusing the candidate road surface point, the road surface edge point, and the road surface image.
  • The outline of the first road surface points in Fig. 12e is accurate and complete.
  • By extracting the envelope of the first road surface points and applying it to the original laser point cloud, a more accurate road laser point cloud, namely the first road surface point cloud, can be obtained (figure omitted; see Figure 8).
  • the point cloud information of the original laser point cloud is retained, the resolution of the point cloud is improved, and the operation speed is also improved. It can quickly, accurately and completely extract the road surface information in the laser point cloud, and the extracted road point cloud has the characteristics of high precision and high resolution. In addition, for the missing part of the road edge, the road edge of the side road can be automatically extracted.
  • Case 3: a road without road edge lines. Fig. 13a shows a road in which the two dotted lines are virtual road boundaries and the arrow indicates the direction of the road.
  • Figure 13b shows the candidate road points of the road surface determined based on the original laser point cloud.
  • Since the reflectivity does not change, many extra candidate road surface points appear at the two ends of the road collected by the lidar (corresponding to the areas ahead of and behind the road in Figure 5), so there is no boundary there and road surface extraction cannot be performed. In addition, since there are no road edges, there are also a large number of candidate points outside the virtual road boundaries.
  • Fig. 13c shows the road edge points of the road surface determined based on the original laser point cloud.
  • FIG. 13d shows the first road surface points of the road surface obtained by fusing the candidate road surface points with the road surface image. Because the semantically segmented road surface image data is fused in, the two ends of the above candidate road surface points can be cut off and the points outside the virtual road boundaries are filtered out; only the points within the virtual road boundaries are retained.
  • Fig. 13e shows the first road surface point of the road surface obtained by fusing the candidate road surface point, the road surface edge point, and the road surface image.
  • The outline of the first road surface points in Fig. 13e is accurate and complete.
  • By extracting the envelope of the first road surface points and applying it to the original laser point cloud, a more accurate road laser point cloud, namely the first road surface point cloud, is obtained (figure omitted; see Figure 8).
  • the point cloud information of the original laser point cloud is retained, the resolution of the point cloud is improved, and the operation speed is also improved. It can quickly, accurately and completely extract the road surface information in the laser point cloud, and the extracted road point cloud has the characteristics of high precision and high resolution. In addition, for the case of no road boundary, points within the virtual road boundary can be automatically extracted.
  • An embodiment of the present application further provides a chip, including a processor and an interface, where the interface is used to read processor-executable instructions from an external memory, and the processor can be used to execute the technical solutions of the foregoing method embodiments. The implementation principles and technical effects are similar, and for the functions of each module reference may be made to the corresponding descriptions in the method embodiments, which will not be repeated here.
  • Embodiments of the present application further provide a server, which can be used to execute the technical solutions of the foregoing method embodiments. The implementation principles and technical effects are similar, and for the functions of each module reference may be made to the corresponding descriptions in the method embodiments, which will not be repeated here.
  • Embodiments of the present application further provide a computer storage medium, where a computer program is stored in the computer storage medium and the computer program is used to execute the technical solutions of the above method embodiments. The implementation principles and technical effects are similar, and for the functions of each module reference may be made to the corresponding descriptions in the method embodiments, which will not be repeated here.
  • The embodiments of the present application further provide a computer program product containing instructions. When the computer program product runs on a computer, the computer is caused to execute the technical solutions of the above method embodiments; the implementation principles and technical effects are similar, and for the functions of each module reference may be made to the corresponding descriptions in the method embodiments, which will not be repeated here.
  • The embodiments of the present application further provide an electronic device, which can be used to implement the technical solutions of the above method embodiments. The implementation principles and technical effects are similar, and for the functions of each module reference may be made to the corresponding descriptions in the method embodiments, which will not be repeated here.
  • example computer program product 600 is provided using signal bearing medium 601 .
  • the signal bearing medium 601 may include one or more program instructions 602 that, when executed by one or more processors, may provide the functions, or portions thereof, described above with respect to FIG. 4 .
  • program instructions 602 in FIG. 9 also describe example instructions.
  • In some embodiments, the signal bearing medium 601 may include a computer-readable medium 603, such as, but not limited to, a hard drive, a compact disc (CD), a digital video disc (DVD), a digital tape, a memory, a read-only memory (ROM), a random access memory (RAM), and so on.
  • the signal bearing medium 601 may include a computer recordable medium 604, such as, but not limited to, memory, read/write (R/W) CDs, R/W DVDs, and the like.
  • signal bearing medium 601 may include communication medium 605, such as, but not limited to, digital and/or analog communication media (eg, fiber optic cables, waveguides, wired communication links, wireless communication links, etc.).
  • the signal bearing medium 601 may be conveyed by a wireless form of communication medium 605 (eg, a wireless communication medium conforming to the IEEE 802.11 standard or other transmission protocol).
  • the one or more program instructions 602 may be, for example, computer-executable instructions or logic-implemented instructions.
  • In some embodiments, a computing device or road surface extraction apparatus such as those described with respect to FIGS. 3, 9, and 10 may be configured to provide various operations, functions, or actions in response to one or more of the program instructions 602 conveyed to the computing device by one or more of the computer-readable medium 603, the computer recordable medium 604, and/or the communication medium 605.
  • the arrangements described herein are for illustrative purposes only. Thus, those skilled in the art will understand that other arrangements and other elements (eg, machines, interfaces, functions, sequences, and groups of functions, etc.) can be used instead and that some elements may be omitted altogether depending on the desired results . Additionally, many of the described elements are functional entities that may be implemented as discrete or distributed components, or in conjunction with other components in any suitable combination and position.
  • each functional module in the embodiments of the present application may be integrated into one processing module, or each module may exist physically alone, or two or more modules may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware, and can also be implemented in the form of software function modules.
  • the integrated modules are implemented in the form of software functional modules and sold or used as independent products, they may be stored in a computer-readable storage medium.
  • The technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, an optical disk, or other media that can store programs.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

A road surface extraction method for a map, comprising: determining candidate road surface points of a road surface based on an original laser point cloud, where the original laser point cloud is a point cloud collected by a laser sensor; determining a road surface image of the road surface from an image collected by a camera; fusing the candidate road surface points with the road surface image to obtain first road surface points of the road surface; extracting a first road surface envelope of the first road surface points, where the first road surface envelope includes a set of ordered points among the first road surface points and is used to characterize the outline of the road surface; and determining a first road surface point cloud of the road surface based on the original laser point cloud and the first road surface envelope.

Description

一种用于地图的路面提取方法及装置 技术领域
本申请涉及电子地图技术领域,具体涉及一种用于地图的路面提取的方法和装置。
背景技术
自动驾驶汽车依靠人工智能、视觉计算、雷达、定位系统和高精度地图等技术的协同合作,让电脑可以在没有任何人类主动的操作下,自动安全地操作机动车辆。高精度地图作为汽车导航使用的工具,其准确度和精度对于自动驾驶汽车的安全至关重要。
在无人驾驶领域中,准确的路面提取结果能够为无人驾驶车辆提供可行驶区域,为车辆的规划控制提供很强的环境先验信息。目前的路面提取的研究中,有基于激光的提取,主要根据路面激光点云的厚度及高度信息。还有基于视觉的路面分割技术,应用深度学习的方式对图像中地面像素点进行语义分割。也有基于视觉和激光融合的路面提取技术,将激光与相机两种传感器融合起来,获得鲁棒性更强的地面提取结果。
然而上述路面提取的方法,无法兼顾路面提取精度与提取速度,即现有技术中存在着提高路面提取精度,会相应的增加路面提取时间的问题。
发明内容
本申请实施例提供一种用于地图的路面提取方法及装置,提高了路面提取的精度并且同时缩短了计算时间。
为达到上述目的,本申请的实施例采用如下技术方案:
第一方面,提供一种用于地图的路面提取方法,包括:基于原始激光点云,确定路面的多个候选路面点,其中所述原始激光点云是激光传感器采集的点云;在相机采集的图像中确定路面的路面图像;将所述多个候选路面点与所述路面图像进行融合,得到所述路面的多个第一路面点;提取所述多个第一路面点的第一路面包络线,其中所述第一路面包络线包括所述多个第一路面点中的一组有序点,用于表征所述路面的轮廓;基于所述原始激光点云和所述第一路面包络线,确定所述路面的第一路面点云。
采用上述技术方案,将第一路面包络线与原始激光点云进行计算,保留了原始激光点云的高分辨率的效果,提高了运算速度;提取路面边沿点能够准确的确定道路边沿;采用路面图像进行融合能够适合多种不良路况。因此,本申请能够快速、准确、完备的提取激光点云中的路面信息,同时提取得到的路面信息具有高精度、高分辨率的特点。
可选地,所述提取所述多个第一路面点的第一路面包络线包括:采用凹包提取方法提取所述多个第一路面点的第一路面包络线。
采用上述技术方案,实现了计算机的自动提取,无需人为干预,能够获得更加完 备的包络线,从而提高了路面提取的准确性。
可选地,所述基于原始激光点云,确定路面的候选路面点包括:将所述原始激光点云划分为多个网格;计算所述多个网格中的每一个网格的点云厚度,所述点云厚度为所述每一个网格中的最高高度的点与最低高度的点的高度差;当所述点云厚度小于第一阈值时,确定该网格为候选网格;确定所述候选路面点,所述候选路面点包括所述多个网格中的至少一个候选网格中的任一点或多个点。
采用上述技术方案,可以灵活选取网格的步长,当激光点云的网格选取步长较大时,计算装置需要数据处理的时间较短,提升了路面点云的提取速度。
可选地,所述在相机采集的图像中确定所述路面的路面图像包括:对所述相机采集的图像进行语义分割,确定所述路面图像。
采用上述技术方案,对道路前方、后方以及缺失路边沿的情况都可以进行提取,从而提高了路面提取的准确度。
可选地,所述将所述候选路面点与所述路面图像进行融合,得到所述路面的第一路面点包括:将所述候选路面点投影到所述路面图像上;当所述候选路面点中的至少一个候选路面点能够投影到路面图像上,对所述至少一个候选路面点进行聚类,得到所述第一路面点。
采用上述技术方案,利用了多传感器融合技术,提高了提取路面的鲁棒性,提高了路面提取的精度。
可选地,还包括:基于所述原始激光点云,确定所述路面的路面边沿点;所述将所述候选路面点与所述路面图像进行融合,得到所述路面的第一路面点包括:将所述候选路面点、所述路面边沿点、与所述路面图像进行融合,得到所述路面的第一路面点。
采用上述技术方案,利用了多传感器融合技术,提高了提取路面的鲁棒性,并且获得了更加准确的路面边沿信息,提高了路面提取的精度。
可选地,所述基于所述原始激光点云,确定所述路面的路面边沿点包括:利用路面边沿模型处理所述原始激光点云,获取所述路面边沿点。
采用上述技术方案,使得路面边沿的提取更加准确,从而提高了路面提取的准确度。
可选地,所述基于所述原始激光点云和第一路面包络线,确定第一路面点云包括:将所述原始激光点云中位于所述第一路面包络线所包括的区域内的点,判定为第一路面点云。
采用上述技术方案,提高了路面提取的精确度,并且提取用时并不会随着精度提升而提升,降低了提取时间的复杂度。
第二方面,提供了一种用于地图的路面提取装置,包括:候选点确定模块,基于原始激光点云,确定路面的多个候选路面点,其中所述原始激光点云是激光传感器采集的点云;图像确定模块,在相机采集的图像中确定路面的路面图像;路面点确定模块,将所述多个候选路面点与所述路面图像进行融合,得到所述路面的第一路面点;包络线提取模块,提取所述多个第一路面点的第一路面包络线,其中所述第一路面包络线包括所述第一路面点中的一组有序点,用于表征所述路面的轮廓;路面点云确定 模块,基于所述原始激光点云和所述第一路面包络线,确定所述路面的第一路面点云。
可选地,所述包络线提取模块具体用于:采用凹包(concave)提取方法提取所述第一路面点的第一路面包络线。
可选地,所述候选点确定模块具体用于:将所述原始激光点云划分为多个网格;计算所述多个网格中的每一个网格的点云厚度,所述点云厚度为所述每一个网格中的最高高度的点与最低高度的点的高度差;当所述点云厚度小于第一阈值时,确定该网格为候选网格;确定所述候选路面点,所述候选路面点包括所述多个网格中的至少一个候选网格中的任一点或多个点。
可选地,所述图像确定模块具体用于:对所述相机采集的图像进行语义分割,确定所述路面图像。
可选地,所述路面点确定模块具体用于:将所述候选路面点投影到所述路面图像上;当所述候选路面点中的至少一个候选路面点能够投影到路面图像上;对所述至少一个候选路面点进行聚类,得到所述第一路面点。
可选地,还包括边沿点确定模块,具体用于:基于所述原始激光点云,确定所述路面的路面边沿点;所述路面点确定模块,具体用于:将所述候选路面点、所述路面边沿点、与所述路面图像进行融合,得到所述路面的第一路面点。
可选地,所述边沿点确定模块,具体用于:利用路面边沿模型处理所述原始激光点云,获取所述路面边沿点。
可选地,所述路面点云确定模块具体用于:将所述原始激光点云中位于所述第一路面包络线所包括的区域内的点,判定为第一路面点云。
第三方面,提供了一种电子设备,所述电子设备包括:处理器;用于存储所述处理器可执行指令的存储器;所述处理器,用于执行上述第一方面任一所述的提取方法
第四方面,提供了一种计算机可读存储介质,所述存储介质存储有计算机程序,所述计算机程序用于执行上述第一方面任一所述的提取方法。
第五方面,提供了一种芯片,包括处理器和接口,所述接口用于从外部存储器读取所述处理器可执行指令,所述处理器,可以用于执行上述第一方面任一所述的提取方法。
第六方面,提供了一种服务器,所述服务器用于执行上述第一方面任一所述的提取方法。
第七方面,提供了一种计算机存储介质,该计算机存储介质存储有计算机程序,该计算机程序用于执行上述第一方面任一所述的提取方法。
第八方面,提供了一种计算机程序产品,当该计算机程序产品在计算机上运行时,使得计算机执行上述第一方面任一所述的提取方法。
第九方面,提供了一种电子设备,所述电子设备用于执行上述第一方面任一所述的提取方法。
可以理解地,上述提供的任一种用于地图的路面提取装置、计算机可读存储介质、电子设备、计算机程序产品、芯片、服务器,均可以由上文所提供的对应的方法来实现,因此,其所能达到的有益效果可参考上文所提供的对应的方法中的有益效果,此处不再赘述。
附图说明
图1为本申请实施例提供的电子地图数据采集场景示意图;
图2是本申请实施例提供的电子地图的数据处理和使用场景的示意图;
图3为本申请实施例提供的一种云端指令侧地图数据处理结构示意图;
图4为本申请实施例提供的一种用于地图的路面提取方法流程图;
图5为本申请实施例提供的一种道路示意图;
图6为本申请实施例提供的一种包络线提取方法示意图;
图7为本申请实施例提供的一种真实道路的第一路面包络线示意图;
图8为本申请实施例提供的一种真实道路的第一路面点云示意图;
图9为本申请实施例提供的一种路面提取装置结构图;
图10为本申请实施例提供的另一种路面提取装置结构图;
图11为本申请实施例提供的一种具有完整的道路边沿线的路面提取过程示意图;
图12为本申请实施例提供的一种具有缺失道路边沿线的的路面提取过程示意图;
图13为本申请实施例提供的一种无道路边沿线的提取过程示意图;
图14为本申请实施例提供的一种计算机程序产品的结构示意图。
具体实施方式
以下,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。在本实施例的描述中,除非另有说明,“多个”的含义是两个或两个以上。
同时,在本申请实施例中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本申请实施例中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念,便于理解。
为了便于理解,以下,对本申请实施例可能出现的术语进行解释。
激光雷达(light detection and ranging,LiDRA)能够捕获目标物基本形状特征及丰富局部细节,具有可靠性及测量精度高等优点,目前被广泛应用在智能设备(例如无人驾驶车辆、机器人、无人机等)环境感知中。
激光雷达,例如扫描式激光雷达,通过多束激光竖列而排,绕轴进行360度旋转,每一束激光扫描一个平面,纵向叠加后呈现出三维立体图形。具体地,激光雷达通过发射激光光束来探测目标,并通过搜集反射回来的光束获取点云数据。这些点云数据可以生成精确的三维立体图像。
电子地图,电子地图即数字地图,包括高精度地图。电子地图是以地图数据库为基础,利用计算机技术,以数字形式存储,可以在终端设备的屏幕上显示的地图。电子地图的主要构成元素就是地图元素,例如山脉、水系、陆地、行政区划、兴趣点或者道路等地理元素,其中,道路还可以进一步划分为高速公路、一级公路、二级公路、三级公路和四级公路五个等级,每个等级的道路可以为不同的地图元素。
语义分割,计算机视觉中的基本任务,在语义分割中我们需要将视觉输入分为不 同的语义可解释类别,语义的可解释性,即分类类别在真实世界中是有意义的。例如,需要区分图像中属于道路的所有像素。
图1是本申请实施例提供的电子地图数据采集场景示意图。请参阅图1,电子地图的数据主要通过激光雷达120进行采集,其他传感器110进行辅助,将激光雷达120设置在移动载体的顶部上,移动载体例如可以为采集车辆100、无人机、机器人等,上述车辆100可以为轿车、卡车、摩托车、公共汽车、船、飞机、直升飞机、割草机、娱乐车、游乐场车辆、施工设备、电车、高尔夫球车、火车、和手推车等,其他传感器110可以设置在车辆的前部、后部或者侧面,其他传感器110可以为相机(也可称之为摄像头)、毫米波雷达、超声波雷达、红外传感器等,本申请实施例不做特别的限定。通过多传感器融合技术,将其他传感器110采集的数据与激光雷达120采集的数据进行融合。
由于道路表面树木、植被、建筑、路标等物体的存在,激光雷达120采集到的路面激光点云包括很多的噪声,因此要提取路面信息就需要对激光雷达120采集的数据进行处理。
图2是电子地图的数据处理和使用场景的示意图。将激光雷达120和其他传感器110采集的数据输入到图2的计算装置1中,以下其他传感器110以相机为例进行介绍,同样也适合其他传感器。计算装置1对激光雷达120采集的点云数据以及相机110采集的图像,进行一系列的数据处理,得到准确的路面点云,提取出路面信息进行电子地图的制作。将制作好的电子地图通过有线、无线、或者通过存储介质如U盘、硬盘的方式传输到云端服务器2,云端服务器2包括较大容量的存储空间,用于存储地图数据,包括高精度地图,并且负责将电子地图更新下发给车辆终端等,或其他的终端如手机、平板等。其中,车辆终端包括图2下方的普通车辆101,还可以包括图2右侧的专用采集车辆100。具体的,可以将地图数据部署于一台或者多台服务器上。
可选的,电子地图数据还可以采用众包模式,众包作为一种成本较低的数据采集模式近几年来被广泛采用,众包就是基于大众的力量完成某种特定工作任务,即图2中的普通车辆101也可以进行道路数据的采集,将采集的道路数据上报给计算装置1。
可选的,在云端服务器2中可以由计算装置1基于普通车辆101上报的道路数据,决策是否对当前地图进行更新,并执行对地图数据的更新工作,更新后可以下发新的电子地图。
可选的,计算装置1可以是独立的装置,比如独立的计算机。计算装置1也可以包含在云端服务器2中,专用采集车辆100采集的道路数据以及普通车辆101采集的道路数据都可以直接上报给云端服务器2中的计算装置1。计算装置1也可以设置在采集车辆100上,在车端直接进行计算。
图3为本申请实施例提供的一种云端指令侧地图数据处理结构示意图。
车端计算机系统112还可以从其它计算机系统接收信息或转移信息到其它计算机系统。或者,从车辆终端12的传感器系统,例如激光雷达或相机收集的传感器数据可以被转移到另一个计算机对此数据进行处理。如图3所示,来自计算机系统112的数 据可以经由网络被传送到云侧的计算机720用于进一步的处理。网络以及中间节点可以包括各种配置和协议,包括因特网、万维网、内联网、虚拟专用网络、广域网、局域网、使用一个或多个公司的专有通信协议的专用网络、以太网、WiFi和HTTP、以及前述的各种组合。这种通信可以由能够传送数据到其它计算机和从其它计算机传送数据的任何设备,诸如调制解调器和无线接口。
在一个示例中,计算机720可以包括具有多个计算机的服务器,例如负载均衡服务器群,为了从计算机系统112接收、处理并传送数据的目的,其与网络的不同节点交换信息。该服务器可以被类似于计算机系统配置,具有处理器730、存储器740、指令750、和数据760。
数据760可以包括激光雷达120采集的点云数据,其他传感器110采集的道路数据,如相机采集的图像,经过处理的中间过程的数据以及最终的处理路面点云数据等。服务器720可以接受、监视、存储、更新、以及与地图道路数据相关的各种信息,并且判别地图数据是否更新。
图4下面结合附图详细说明本申请的技术方案。图4为本申请实施例提供的一种用于地图的路面提取方法流程图,如图4所示,本实施例的执行主体可以为云端服务器中的计算装置1或者独立的计算装置1。
S101、基于原始激光点云,确定路面的候选路面点,其中所述原始激光点云是激光传感器采集的点云。
其中,原始激光点云是上述激光雷达120采集的道路点云,可以设置每隔一定时间间隔采集一次当前路面激光点云,周期长度可以根据作业人员的需求进行调整。激光雷达可以为单线激光雷达、多线激光雷达、机械旋转式激光雷达、MEMS激光雷达、相控阵激光雷达、Flash型激光雷达等。一般情况下,由于道路表面树木、植被等存在,激光雷达采集到的路面的原始激光点云中包含很多噪声,需要将原始激光点云进行除噪,从而确定候选路面点。
候选路面点的确定方式可以采用多种方式,例如网格法。该网格法具体包括:将该原始激光点云划分为多个网格。因为较大的网格边长能够有效的降低提取路面点的噪声以及提取路面点网格的时间,因此可以设置网格的边长大于某个阈值,例如设置网格的边长大于1m。
针对每一个网格分别进行点云厚度的计算,该点云厚度为所述每一个网格中最高高度的点与最低高度的点的高度差。厚度的计算可以采用多种方法:
方法一、可以通过建立直角坐标系计算该网格中的点的高度差的方法来计算点云厚度。
或者方法二、还可以通过平面差的方法计算每个网格的点云厚度。
基于点云的厚度,选取点云厚度低于某一个阈值的网格为路面的候选网格,并可以将此网格的任一点或多个点作为路面候选点,其中该任一点或多个点可以为该网格的中心点、网格四角的顶点、或者四条边上的点。
依次计算所有网格的点云厚度,并选出所有符合条件的路面候选点。
在该方案中,可以灵活选取网格的步长,当激光点云的网格选取步长较大时,计算装置1需要数据处理的时间较短,提升了路面点云的提取速度。
S102: Determine a road surface image of the road surface from an image collected by a camera.
The camera may be a camera mounted on the vehicle 100; it may be mounted on the top, front, rear, or side of the vehicle, for example on the windshield, a door, a pillar, the roof, or the tail. The camera may be a monocular, binocular, or trinocular camera, a depth camera, an infrared camera, a fisheye camera, a surround-view camera, and so on.
The computing apparatus 1 can perform semantic segmentation on the image collected by the camera to determine the road surface image. Semantic segmentation can be done in several ways, such as manual annotation or deep-learning methods, the latter including, for example, convolutional neural networks (CNN), recurrent neural networks (RNN), and K-means clustering.
In general, the LiDAR is mounted on top of the vehicle and collects the road point cloud while the vehicle is moving. As shown in FIG. 5, the reflectivity of the road ahead of the vehicle and of the road behind it does not change, so the raw laser point cloud cannot effectively distinguish the road ahead from the road behind. In this case, performing semantic segmentation on the image collected by the camera yields the forward and backward boundaries of the road.
In another case, some roads may have no curb along all or part of their length. The road surface point cloud collected for a road without curbs is inaccurate, which makes the road surface point cloud extraction inaccurate; a method relying purely on the laser point cloud then cannot separate road surface from non-road-surface. In this case, accurate road surface information can be obtained by combining the semantically segmented image data.
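Assuming the segmentation step yields a per-pixel label map, a binary road mask can then be derived as in the minimal sketch below; the road class id and the morphological clean-up are illustrative assumptions, not part of this application.

import numpy as np
from scipy import ndimage

ROAD_CLASS_ID = 0   # hypothetical label id; depends on the segmentation model actually used

def road_mask_from_labels(label_map, road_class_id=ROAD_CLASS_ID):
    # label_map: (H, W) integer array of per-pixel semantic classes.
    mask = (label_map == road_class_id)
    # Close small holes so isolated mislabelled pixels do not punch gaps in the road region.
    return ndimage.binary_closing(mask, structure=np.ones((5, 5), dtype=bool))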
S104: Obtain first road surface points of the road surface.
Specifically, the candidate road surface points can be fused with the road surface image to obtain the first road surface points of the road surface.
Using the intrinsic parameters, extrinsic parameters, and pose of the camera and the pose of the LiDAR, the candidate road surface points are projected onto the semantically segmented road surface image.
If some of the candidate road surface points cannot be projected onto the road surface image, those points are determined to be noise points and filtered out; the reflection points that can be projected onto the road surface image are retained as candidate road surface points. The multiple candidate road surface points are then clustered to obtain the first road surface points.
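A possible sketch of this projection-and-clustering step is given below; the LiDAR-to-camera transform convention, the pinhole intrinsic matrix, and the DBSCAN parameters are illustrative assumptions, and DBSCAN is merely one clustering choice among others.

import numpy as np
from sklearn.cluster import DBSCAN

def fuse_candidates_with_image(candidates, road_mask, T_cam_from_lidar, K,
                               eps=2.0, min_samples=5):
    # candidates:       (N, 3) candidate road points in the LiDAR frame.
    # road_mask:        (H, W) boolean mask from semantic segmentation.
    # T_cam_from_lidar: (4, 4) extrinsic transform from the LiDAR frame to the camera frame.
    # K:                (3, 3) camera intrinsic matrix.
    h, w = road_mask.shape
    pts_h = np.hstack([candidates, np.ones((len(candidates), 1))])
    cam = (T_cam_from_lidar @ pts_h.T)[:3]                  # points in the camera frame
    z = cam[2]
    valid = z > 1e-6                                        # only points in front of the camera project
    u = np.full(len(candidates), -1.0)
    v = np.full(len(candidates), -1.0)
    u[valid] = K[0, 0] * cam[0, valid] / z[valid] + K[0, 2]
    v[valid] = K[1, 1] * cam[1, valid] / z[valid] + K[1, 2]
    inside = valid & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    keep = np.zeros(len(candidates), dtype=bool)
    idx = np.where(inside)[0]
    keep[idx] = road_mask[v[idx].astype(int), u[idx].astype(int)]   # keep points landing on road pixels
    kept = candidates[keep]
    if len(kept) == 0:
        return kept
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(kept[:, :2])
    return kept[labels != -1]                               # drop points DBSCAN marks as noise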
This technical solution uses multi-sensor fusion, which improves the robustness of the obtained road surface and the extraction accuracy.
S105: Extract a first road surface envelope of the first road surface points, where the first road surface envelope includes an ordered set of points among the first road surface points and is used to characterize the contour of the road surface.
There are many ways to extract the first road surface envelope of the first road surface points. For example, a concave hull extraction method can be used. As shown in FIG. 6, panel 6a is a point set S, and the concave hull extraction proceeds as follows:
Step 1: compute the convex hull of the point set S, as shown in FIG. 6b; this convex hull is the initial contour of the envelope.
Step 2: select an edge MN of the hull, as shown in FIG. 6c. If the length of MN is greater than a threshold d1, select the interior point P closest to edge MN (the star-shaped point in FIG. 6c) and compute the distance from P to MN; if this distance is greater than a threshold d2, add P as a point on the envelope, as shown in FIG. 6d.
Step 3: repeat step 2 until all edges of the envelope have been traversed. The resulting envelope is shown in FIG. 6e.
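The following sketch implements the edge-digging procedure of steps 1 to 3, starting from the convex hull; the use of scipy's ConvexHull and the particular values of d1 and d2 are assumptions made for illustration.

import numpy as np
from scipy.spatial import ConvexHull

def point_segment_distance(p, a, b):
    # Distance from point p to segment ab (all 2-D).
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def concave_hull(points, d1=5.0, d2=1.0):
    # points: (N, 2) first road surface points (x, y).
    # Returns an ordered list of indices into `points` forming the envelope.
    hull = list(ConvexHull(points).vertices)                # step 1: convex hull as initial contour
    inner = set(range(len(points))) - set(hull)
    i = 0
    while i < len(hull):                                    # step 3: traverse until no edge changes
        m, n = hull[i], hull[(i + 1) % len(hull)]
        a, b = points[m], points[n]
        if np.linalg.norm(b - a) > d1 and inner:            # step 2: only "long" edges are dug
            dists = {j: point_segment_distance(points[j], a, b) for j in inner}
            p = min(dists, key=dists.get)                   # interior point closest to edge MN
            if dists[p] > d2:
                hull.insert(i + 1, p)                       # insert P between M and N
                inner.discard(p)
                continue                                    # re-examine the two new edges
        i += 1
    return hull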
For the first road surface points, the envelope can be extracted with the method shown in FIG. 6 to obtain the contour information of the road surface.
The concave hull extraction method lets the computer extract the envelope automatically, without human intervention, and yields a more complete envelope.
FIG. 7 shows the result of extracting the envelope from the first road surface points of a real road, giving the first road surface envelope, which is the road contour formed by the grey lines. The concave hull method described above can be used, but other methods are also possible, as long as the outer boundary is extracted from the redundant, unordered points so that only the ordered envelope points on the contour of the first road surface points are kept.
S106: Determine a first road surface point cloud of the road surface based on the raw laser point cloud and the first road surface envelope.
Specifically, the points of the raw laser point cloud that lie inside the region enclosed by the first road surface envelope are determined to be the first road surface point cloud.
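Deciding which raw laser points fall inside the envelope is a point-in-polygon query; a minimal sketch using matplotlib's Path is shown below as one possible implementation, not a mandated one.

import numpy as np
from matplotlib.path import Path

def points_inside_envelope(raw_points, envelope_xy):
    # raw_points:  (N, 3) original laser point cloud.
    # envelope_xy: (M, 2) ordered envelope vertices, e.g. points[concave_hull(points)].
    polygon = Path(envelope_xy)
    inside = polygon.contains_points(raw_points[:, :2])
    return raw_points[inside]                    # the first road surface point cloud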
As shown in FIG. 8, which illustrates applying the first road surface envelope to the raw laser point cloud, the laser points in the middle part (shown in white) are thereby determined to be the first road surface laser point cloud.
Using the first road surface envelope improves the accuracy of road surface extraction, and the extraction time does not grow as the accuracy improves, which lowers the time complexity of the extraction.
Therefore, in the above embodiments of this application, because the envelope extraction method selects the outer contour of the first road surface points and takes the maximum extent of the first road surface points as the first road surface envelope, the completeness of the road surface point cloud data is guaranteed. Moreover, because the envelope method is used to extract the road surface point cloud from the raw laser point cloud, the time complexity is effectively controlled, the data processing time is shortened, and a complete road surface point cloud can be obtained quickly. In short, the embodiments of this application can extract road surface information completely, quickly, and accurately.
Further, as shown in FIG. 5, urban roads generally have curbs. By extracting the road edge, the road boundary can be determined accurately; by then deciding on which side of the road edge each raw laser point lies, one can decide whether a reflection point belongs to the road surface point cloud, which improves the accuracy and precision of road surface point cloud extraction.
Optionally, the extraction method may further include step S103: determine road edge points of the road surface based on the raw laser point cloud.
As shown in FIG. 5, the road edge points include points whose height differs markedly from the road plane; for example, reflection points produced when the LiDAR signal hits guardrails, bollards, median strips, curbs, and the like installed along the road are road edge points.
The road edge can be extracted by processing the raw laser point cloud with a road edge model to obtain the road edge points. For example, jump points can be found from the single-line LiDAR information, the road edge can be analysed with deep-learning methods, or jump points can be found with a sliding window.
The sliding-window approach is described here as an example. First a sliding window is set up; along the trajectory on which the LiDAR collects point cloud data, the window is slid to the left and to the right, the step of each slide being equal to the side length of the window. After each slide, the thickness of the raw laser point cloud inside the window is computed. If the thickness of two adjacent windows differs by more than a threshold, a jump is considered to have occurred, and any point of that window, for example its centre or a corner point, is taken as a road edge point.
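One possible sketch of this sliding-window jump detection is given below; the window size, scan range, and jump threshold are illustrative assumptions, and the function handles one scan direction from one trajectory pose.

import numpy as np

def edge_offset_by_sliding_window(points, origin, direction, window=0.5,
                                  max_offset=20.0, jump_threshold=0.15):
    # points:    (N, 3) laser points around one trajectory pose.
    # origin:    (2,) trajectory position (x, y).
    # direction: (2,) unit vector pointing towards the side being scanned (left or right).
    # Returns the lateral offset of the first thickness jump (a road edge), or None.
    offsets = (points[:, :2] - np.asarray(origin)) @ np.asarray(direction)
    prev_thickness = None
    for start in np.arange(0.0, max_offset, window):        # slide outwards, step = window size
        in_win = (offsets >= start) & (offsets < start + window)
        if not np.any(in_win):
            continue
        z = points[in_win, 2]
        thickness = float(z.max() - z.min())
        if prev_thickness is not None and abs(thickness - prev_thickness) > jump_threshold:
            return start + 0.5 * window                     # window centre taken as the edge point offset
        prev_thickness = thickness
    return None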
Step S104, "obtain first road surface points of the road surface", then specifically includes: fusing the candidate road surface points, the road edge points, and the road surface image to obtain the first road surface points of the road surface.
In this embodiment, the candidate road surface points, the road edge points, and the road surface image are all three fused to compute the first road surface points of the road surface.
Using the intrinsic parameters, extrinsic parameters, and pose of the camera and the pose of the LiDAR, the candidate road surface point cloud is projected onto the semantically segmented road surface image.
If some reflection points among the candidate road surface points cannot be projected onto the road surface image, those reflection points are determined to be noise points and filtered out; the reflection points that can be projected onto the road surface image are retained as candidate road surface points. The multiple candidate road surface points are clustered, the clustered points are fused with the road edge points, and for each clustered point it is determined whether it lies inside the road edge; the points inside the road edge are selected as the first road surface points, yielding the first road surface points of the road surface.
Therefore, by extracting the road edge, the above embodiment of this application can accurately determine the road boundary for road surfaces that have curbs, which improves the accuracy of road surface extraction.
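Putting the steps together, an end-to-end sketch of S101 to S106, reusing the illustrative helper functions above (all names and parameters are assumptions, not part of this application), could look like this:

def extract_road_surface(raw_points, label_map, T_cam_from_lidar, K):
    # Wires together the illustrative helpers sketched above (S101, S102, S104-S106).
    # The optional S103 edge points (edge_offset_by_sliding_window) would be fused in
    # before the envelope step to tighten the boundary where curbs exist.
    candidates = candidate_road_points(raw_points)                       # S101
    mask = road_mask_from_labels(label_map)                              # S102
    first_pts = fuse_candidates_with_image(candidates, mask,
                                           T_cam_from_lidar, K)          # S104
    hull_idx = concave_hull(first_pts[:, :2])                            # S105
    return points_inside_envelope(raw_points, first_pts[hull_idx, :2])   # S106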
An embodiment of this application further provides a road surface extraction apparatus for a map. As shown in FIG. 9 or FIG. 10, the apparatus may include: a candidate point determination module 121, an edge point determination module 122, an image determination module 123, a road surface point determination module 124, an envelope extraction module 125, and a road surface point cloud determination module 126.
In this embodiment of this application, the candidate point determination module 121 determines candidate road surface points of a road surface based on a raw laser point cloud, where the raw laser point cloud is a point cloud collected by a laser sensor;
the image determination module 123 determines a road surface image of the road surface from an image collected by a camera;
the road surface point determination module 124 fuses the candidate road surface points with the road surface image to obtain first road surface points of the road surface;
the envelope extraction module 125 extracts a first road surface envelope of the first road surface points, where the first road surface envelope includes an ordered set of points among the first road surface points and is used to characterize the contour of the road surface;
the road surface point cloud determination module 126 determines a first road surface point cloud of the road surface based on the raw laser point cloud and the first road surface envelope.
Further, the envelope extraction module 125 is specifically configured to:
extract the first road surface envelope of the first road surface points with a concave hull extraction method.
Further, the candidate point determination module 121 is specifically configured to:
divide the raw laser point cloud into multiple grid cells;
compute the point cloud thickness of each of the multiple grid cells, the point cloud thickness being the height difference between the highest point and the lowest point in that cell;
when the point cloud thickness is less than a first threshold, determine that cell to be a candidate cell;
determine the candidate road surface points, which include any one or more points of at least one candidate cell among the multiple grid cells.
Further, the image determination module 123 is specifically configured to:
perform semantic segmentation on the image collected by the camera to determine the road surface image.
Further, the road surface point determination module 124 is specifically configured to:
project the candidate road surface points onto the road surface image;
when at least one of the candidate road surface points can be projected onto the road surface image, cluster the at least one candidate road surface point to obtain the first road surface points.
Further, as shown in FIG. 10, the apparatus also includes an edge point determination module 122, specifically configured to:
determine road edge points of the road surface based on the raw laser point cloud;
the road surface point determination module is then specifically configured to fuse the candidate road surface points, the road edge points, and the road surface image to obtain the first road surface points of the road surface.
Further, the edge point determination module 122 is specifically configured to:
process the raw laser point cloud with a road edge model to obtain the road edge points.
Further, the road surface point cloud determination module 126 is specifically configured to:
determine the reflection points of the raw laser point cloud that lie inside the region enclosed by the first road surface envelope to be the first road surface point cloud.
It should be noted that for detailed descriptions of the candidate point determination module 121, the edge point determination module 122, the image determination module 123, the road surface point determination module 124, the envelope extraction module 125, and the road surface point cloud determination module 126, reference may be made to the related descriptions in the method embodiments above, which are not repeated here.
The road conditions covered by the embodiments of this application include the following cases:
Case 1: the road has complete curb lines.
As shown in FIG. 11, the two solid lines in FIG. 11a are the road edges, and the arrow indicates the road direction.
FIG. 11b shows the candidate road surface points determined from the raw laser point cloud. As shown in FIG. 11b, because the reflectivity does not change, many extra candidate road surface points appear at both ends of the road in the LiDAR data (corresponding to the road ahead and behind in FIG. 5); there is therefore no boundary there and the road surface cannot be extracted.
FIG. 11c shows the road edge points of the road surface determined from the raw laser point cloud.
FIG. 11d shows the first road surface points obtained by fusing the candidate road surface points with the road surface image. Because the semantically segmented road surface image data is fused in, the two ends of the candidate road surface points can be cut off. However, because a large grid step size was chosen to increase computation speed, the precision of the candidate road surface points is low, so the resolution of the candidate road surface points at this stage is not high. Reducing the grid step size would increase the computation time and also increase the number of noise points.
FIG. 11e shows the first road surface points obtained by fusing the candidate road surface points, the road edge points, and the road surface image.
Thus, the contour of the first road surface candidate points in FIG. 11e is accurate and complete. By extracting the envelope of the first road surface points in FIG. 11e and evaluating it against the raw laser point cloud, a more accurate road surface laser point cloud, i.e. the first road surface point cloud, is obtained (figure omitted; see FIG. 8).
In this embodiment, because the first road surface envelope is evaluated against the raw laser point cloud, the point cloud information of the raw laser point cloud is preserved, which solves the low-resolution problem of FIG. 11d, improves the resolution of the point cloud, and also increases the computation speed. The road surface information in the laser point cloud can be extracted quickly, accurately, and completely, and the extracted road surface point cloud has high precision and high resolution.
Case 2: the road edge is partly missing.
In cities, road edges may be missing because of traffic accidents and other factors. As shown in FIG. 12, FIG. 12a shows a road whose two solid lines are the road edges, with a gap in the middle of the upper line. The dashed line above is the boundary of a side road, and the arrow indicates the road direction.
FIG. 12b shows the candidate road surface points determined from the raw laser point cloud. As shown in FIG. 12b, because the reflectivity does not change, many extra candidate road surface points appear at both ends of the road in the LiDAR data (corresponding to the road ahead and behind in FIG. 5); there is therefore no boundary there and the road surface cannot be extracted. In addition, there are many candidate points on the side road.
FIG. 12c shows the road edge points of the road surface determined from the raw laser point cloud.
FIG. 12d shows the first road surface points obtained by fusing the candidate road surface points with the road surface image. Because the semantically segmented road surface image data is fused in, the two ends of the candidate road surface points can be cut off and part of the points on the side road can be filtered out, keeping only the points on the road surface and the points on the side road adjacent to the section without a curb.
FIG. 12e shows the first road surface points obtained by fusing the candidate road surface points, the road edge points, and the road surface image.
Thus, the contour of the first road surface candidate points in FIG. 12e is accurate and complete. By extracting the envelope of the first road surface points in FIG. 12e and evaluating it against the raw laser point cloud, a more accurate road surface laser point cloud, i.e. the first road surface point cloud, is obtained (figure omitted; see FIG. 8).
In this embodiment, because the first road surface envelope is evaluated against the raw laser point cloud, the point cloud information of the raw laser point cloud is preserved, the resolution of the point cloud is improved, and the computation speed is also increased. The road surface information in the laser point cloud can be extracted quickly, accurately, and completely, and the extracted road surface point cloud has high precision and high resolution. In addition, where the road edge is missing, the road edge of the side road can be extracted automatically.
Case 3: the road has no curbs.
On rural roads, some roads have no curbs at all. As shown in FIG. 13, FIG. 13a shows such a road, with the two dashed lines being virtual road boundaries and the arrow indicating the road direction.
FIG. 13b shows the candidate road surface points determined from the raw laser point cloud. As shown in FIG. 13b, because the reflectivity does not change, many extra candidate road surface points appear at both ends of the road in the LiDAR data (corresponding to the road ahead and behind in FIG. 5); there is therefore no boundary there and the road surface cannot be extracted. Moreover, because there are no curbs, there are also many candidate points outside the virtual road boundaries.
FIG. 13c shows the road edge points of the road surface determined from the raw laser point cloud.
FIG. 13d shows the first road surface points obtained by fusing the candidate road surface points with the road surface image. Because the semantically segmented road surface image data is fused in, the two ends of the candidate road surface points can be cut off and the points outside the virtual road boundaries can be filtered out, keeping only the points inside the virtual road boundaries.
FIG. 13e shows the first road surface points obtained by fusing the candidate road surface points, the road edge points, and the road surface image.
Thus, the contour of the first road surface candidate points in FIG. 13e is accurate and complete. By extracting the envelope of the first road surface points in FIG. 13e and evaluating it against the raw laser point cloud, a more accurate road surface laser point cloud, i.e. the first road surface point cloud, is obtained (figure omitted; see FIG. 8).
In this embodiment, because the first road surface envelope is evaluated against the raw laser point cloud, the point cloud information of the raw laser point cloud is preserved, the resolution of the point cloud is improved, and the computation speed is also increased. The road surface information in the laser point cloud can be extracted quickly, accurately, and completely, and the extracted road surface point cloud has high precision and high resolution. In addition, for roads without curbs, the points inside the virtual road boundaries can be extracted automatically.
An embodiment of this application further provides a chip, including a processor and an interface, where the interface is used to read instructions executable by the processor from an external memory, and the processor can be used to execute the technical solutions of the method embodiments above; the implementation principles and technical effects are similar, and for the functions of each module reference may be made to the corresponding descriptions in the method embodiments, which are not repeated here.
An embodiment of this application further provides a server, which can be used to execute the technical solutions of the method embodiments above; the implementation principles and technical effects are similar, and for the functions of each module reference may be made to the corresponding descriptions in the method embodiments, which are not repeated here.
An embodiment of this application further provides a computer storage medium storing a computer program used to execute the technical solutions of the method embodiments above; the implementation principles and technical effects are similar, and for the functions of each module reference may be made to the corresponding descriptions in the method embodiments, which are not repeated here.
An embodiment of this application further provides a computer program product containing instructions which, when run on a computer, cause the computer to execute the technical solutions of the method embodiments above; the implementation principles and technical effects are similar, and for the functions of each module reference may be made to the corresponding descriptions in the method embodiments, which are not repeated here.
An embodiment of this application further provides an electronic device, which can be used to execute the technical solutions of the method embodiments above; the implementation principles and technical effects are similar, and for the functions of each module reference may be made to the corresponding descriptions in the method embodiments, which are not repeated here.
In some embodiments, the disclosed methods may be implemented as computer program instructions encoded in a machine-readable format on a computer-readable storage medium or on other non-transitory media or articles of manufacture. FIG. 14 schematically illustrates a conceptual partial view of an example computer program product, arranged according to at least some of the embodiments presented here, that includes a computer program for executing a computer process on a computing device. In one embodiment, the example computer program product 600 is provided using a signal-bearing medium 601. The signal-bearing medium 601 may include one or more program instructions 602 which, when run by one or more processors, may provide the functions, or part of the functions, described above for FIG. 4. Thus, for example, with reference to the embodiment shown in FIG. 4, one or more features of steps 101-106 may be undertaken by one or more instructions associated with the signal-bearing medium 601. In addition, the program instructions 602 described with reference to FIG. 9 also describe example instructions.
In some examples, the signal-bearing medium 601 may contain a computer-readable medium 603, such as, but not limited to, a hard disk drive, a compact disc (CD), a digital video disc (DVD), a digital tape, memory, read-only memory (ROM), or random access memory (RAM). In some implementations, the signal-bearing medium 601 may contain a computer-recordable medium 604, such as, but not limited to, memory, a read/write (R/W) CD, an R/W DVD, and so on. In some implementations, the signal-bearing medium 601 may contain a communication medium 605, such as, but not limited to, a digital and/or analog communication medium (for example, a fiber-optic cable, a waveguide, a wired communication link, or a wireless communication link). Thus, for example, the signal-bearing medium 601 may be conveyed by a wireless form of the communication medium 605 (for example, a wireless communication medium complying with the IEEE 802.11 standard or another transmission protocol). The one or more program instructions 602 may be, for example, computer-executable instructions or logic-implementing instructions. In some examples, a computing device or road surface extraction apparatus such as that described for FIG. 3, FIG. 9, or FIG. 10 may be configured to provide various operations, functions, or actions in response to the program instructions 602 conveyed to the computing device through one or more of the computer-readable medium 603, the computer-recordable medium 604, and/or the communication medium 605. It should be understood that the arrangements described here are for illustrative purposes only. Those skilled in the art will therefore appreciate that other arrangements and other elements (for example, machines, interfaces, functions, orders, and groups of functions) can be used instead, and that some elements may be omitted altogether depending on the desired result. In addition, many of the described elements are functional entities that can be implemented as discrete or distributed components, or in combination with other components, in any suitable combination and location.
It should be noted that the division into modules in the embodiments of this application is schematic and is merely a division by logical function; other division schemes are possible in actual implementations. The functional modules in the embodiments of this application may be integrated in one processing module, may each exist physically on their own, or two or more modules may be integrated in one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this application in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or some of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes any medium that can store program code, such as a USB drive, removable hard disk, ROM, RAM, magnetic disk, or optical disc.
In the above embodiments, implementation may be wholly or partly by software, hardware, firmware, or any combination thereof.
The electronic device described above in this embodiment can be used to execute the technical solutions of the method embodiments above; the implementation principles and technical effects are similar, and for the functions of each component reference may be made to the corresponding descriptions in the embodiments, which are not repeated here.
Finally, it should be noted that the above are only specific implementations of this application, but the protection scope of this application is not limited thereto; any change or replacement within the technical scope disclosed by this application shall be covered by the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (18)

  1. A road surface extraction method for a map, characterized by comprising:
    determining multiple candidate road surface points of a road surface based on a raw laser point cloud, wherein the raw laser point cloud is a point cloud collected by a laser sensor;
    determining a road surface image of the road surface from an image collected by a camera;
    fusing the multiple candidate road surface points with the road surface image to obtain multiple first road surface points of the road surface;
    extracting a first road surface envelope of the multiple first road surface points, wherein the first road surface envelope comprises an ordered set of points among the multiple first road surface points and is used to characterize a contour of the road surface;
    determining a first road surface point cloud of the road surface based on the raw laser point cloud and the first road surface envelope.
  2. The extraction method according to claim 1, characterized in that extracting the first road surface envelope of the multiple first road surface points comprises: extracting the first road surface envelope of the multiple first road surface points with a concave hull extraction method.
  3. The extraction method according to claim 1 or 2, characterized in that determining the multiple candidate road surface points of the road surface based on the raw laser point cloud comprises:
    dividing the raw laser point cloud into multiple grid cells;
    computing a point cloud thickness of each of the multiple grid cells, the point cloud thickness being the height difference between the highest point and the lowest point in that grid cell;
    when the point cloud thickness is less than a first threshold, determining that grid cell to be a candidate grid cell;
    determining the multiple candidate road surface points, which comprise any one or more points of at least one candidate grid cell among the multiple grid cells.
  4. The extraction method according to any one of claims 1 to 3, characterized in that determining the road surface image of the road surface from the image collected by the camera comprises:
    performing semantic segmentation on the image collected by the camera to determine the road surface image.
  5. The extraction method according to any one of claims 1 to 4, characterized in that fusing the multiple candidate road surface points with the road surface image to obtain the multiple first road surface points of the road surface comprises:
    projecting the multiple candidate road surface points onto the road surface image;
    when at least one of the multiple candidate road surface points can be projected onto the road surface image, clustering the at least one candidate road surface point to obtain the multiple first road surface points.
  6. The extraction method according to any one of claims 1 to 5, characterized by further comprising: determining road edge points of the road surface based on the raw laser point cloud;
    wherein fusing the multiple candidate road surface points with the road surface image to obtain the multiple first road surface points of the road surface comprises: fusing the multiple candidate road surface points, the road edge points, and the road surface image to obtain the multiple first road surface points of the road surface.
  7. The extraction method according to claim 6, wherein determining the road edge points of the road surface based on the raw laser point cloud comprises:
    processing the raw laser point cloud with a road edge model to obtain the road edge points.
  8. The extraction method according to any one of claims 1 to 7, wherein determining the first road surface point cloud of the road surface based on the raw laser point cloud and the first road surface envelope comprises:
    determining points of the raw laser point cloud that lie inside a region enclosed by the first road surface envelope to be the first road surface point cloud.
  9. A road surface extraction apparatus for a map, characterized by comprising:
    a candidate point determination module, which determines multiple candidate road surface points of a road surface based on a raw laser point cloud, wherein the raw laser point cloud is a point cloud collected by a laser sensor;
    an image determination module, which determines a road surface image of the road surface from an image collected by a camera;
    a road surface point determination module, which fuses the multiple candidate road surface points with the road surface image to obtain multiple first road surface points of the road surface;
    an envelope extraction module, which extracts a first road surface envelope of the multiple first road surface points, wherein the first road surface envelope comprises an ordered set of points among the multiple first road surface points and is used to characterize a contour of the road surface;
    a road surface point cloud determination module, which determines a first road surface point cloud of the road surface based on the raw laser point cloud and the first road surface envelope.
  10. The extraction apparatus according to claim 9, characterized in that the envelope extraction module is specifically configured to:
    extract the first road surface envelope of the multiple first road surface points with a concave hull extraction method.
  11. The extraction apparatus according to claim 9 or 10, characterized in that the candidate point determination module is specifically configured to:
    divide the raw laser point cloud into multiple grid cells;
    compute a point cloud thickness of each of the multiple grid cells, the point cloud thickness being the height difference between the highest point and the lowest point in that grid cell;
    when the point cloud thickness is less than a first threshold, determine that grid cell to be a candidate grid cell;
    determine the multiple candidate road surface points, which comprise any one or more points of at least one candidate grid cell among the multiple grid cells.
  12. The extraction apparatus according to any one of claims 9 to 11, characterized in that the image determination module is specifically configured to:
    perform semantic segmentation on the image collected by the camera to determine the road surface image.
  13. The extraction apparatus according to any one of claims 9 to 12, characterized in that the road surface point determination module is specifically configured to:
    project the candidate road surface points onto the road surface image;
    when at least one of the multiple candidate road surface points can be projected onto the road surface image, cluster the at least one candidate road surface point to obtain the multiple first road surface points.
  14. The extraction apparatus according to any one of claims 9 to 13, characterized by further comprising an edge point determination module, specifically configured to:
    determine road edge points of the road surface based on the raw laser point cloud;
    the road surface point determination module being specifically configured to fuse the multiple candidate road surface points, the road edge points, and the road surface image to obtain the multiple first road surface points of the road surface.
  15. The extraction apparatus according to claim 14, characterized in that the edge point determination module is specifically configured to:
    process the raw laser point cloud with a road edge model to obtain the road edge points.
  16. The extraction apparatus according to any one of claims 9 to 15, characterized in that the road surface point cloud determination module is specifically configured to:
    determine points of the raw laser point cloud that lie inside a region enclosed by the first road surface envelope to be the first road surface point cloud.
  17. An electronic device, characterized in that the electronic device comprises:
    a processor;
    a memory for storing instructions executable by the processor;
    the processor being configured to execute the extraction method according to any one of claims 1 to 8.
  18. A computer-readable storage medium, characterized in that the storage medium stores a computer program used to execute the extraction method according to any one of claims 1 to 8.
PCT/CN2020/113560 2020-09-04 2020-09-04 一种用于地图的路面提取方法及装置 WO2022047744A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/113560 WO2022047744A1 (zh) 2020-09-04 2020-09-04 一种用于地图的路面提取方法及装置
CN202080004150.3A CN112513876B (zh) 2020-09-04 2020-09-04 一种用于地图的路面提取方法及装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/113560 WO2022047744A1 (zh) 2020-09-04 2020-09-04 一种用于地图的路面提取方法及装置

Publications (1)

Publication Number Publication Date
WO2022047744A1 true WO2022047744A1 (zh) 2022-03-10

Family

ID=74953029

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/113560 WO2022047744A1 (zh) 2020-09-04 2020-09-04 一种用于地图的路面提取方法及装置

Country Status (2)

Country Link
CN (1) CN112513876B (zh)
WO (1) WO2022047744A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI790858B (zh) * 2021-12-15 2023-01-21 財團法人工業技術研究院 路面資料萃取方法及系統與自駕車控制方法及系統
US11999352B2 (en) 2021-12-15 2024-06-04 Industrial Technology Research Institute Method and system for extracting road data and method and system for controlling self-driving car

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3295422B1 (en) * 2015-05-10 2020-01-01 Mobileye Vision Technologies Ltd. Road profile along a predicted path
CN105551082B (zh) * 2015-12-02 2018-09-07 百度在线网络技术(北京)有限公司 一种基于激光点云的路面识别方法及装置
KR102427980B1 (ko) * 2017-12-20 2022-08-02 현대자동차주식회사 차량 및 그 위치 인식 방법
CN116129376A (zh) * 2018-05-02 2023-05-16 北京图森未来科技有限公司 一种道路边缘检测方法和装置
CN109407115B (zh) * 2018-12-25 2022-12-27 中山大学 一种基于激光雷达的路面提取系统及其提取方法
CN111291676B (zh) * 2020-02-05 2020-12-11 清华大学 一种基于激光雷达点云和相机图像融合的车道线检测方法及装置和芯片

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105184852A (zh) * 2015-08-04 2015-12-23 百度在线网络技术(北京)有限公司 一种基于激光点云的城市道路识别方法及装置
US20170294026A1 (en) * 2016-04-08 2017-10-12 Thinkware Corporation Method and apparatus for generating road surface, method and apparatus for processing point cloud data, computer program, and computer readable recording medium
CN108519605A (zh) * 2018-04-09 2018-09-11 重庆邮电大学 基于激光雷达和摄像机的路沿检测方法
CN109858460A (zh) * 2019-02-20 2019-06-07 重庆邮电大学 一种基于三维激光雷达的车道线检测方法
CN111274976A (zh) * 2020-01-22 2020-06-12 清华大学 基于视觉与激光雷达多层次融合的车道检测方法及系统

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114627073A (zh) * 2022-03-14 2022-06-14 一汽解放汽车有限公司 地形识别方法、装置、计算机设备和存储介质
CN114627073B (zh) * 2022-03-14 2024-06-04 一汽解放汽车有限公司 地形识别方法、装置、计算机设备和存储介质

Also Published As

Publication number Publication date
CN112513876B (zh) 2022-01-11
CN112513876A (zh) 2021-03-16

Similar Documents

Publication Publication Date Title
CN110148144B (zh) 点云数据的分割方法和装置、存储介质、电子装置
CN107850672B (zh) 用于精确车辆定位的系统和方法
CN107850453B (zh) 匹配道路数据对象以更新精确道路数据库的系统和方法
CN114842438B (zh) 用于自动驾驶汽车的地形检测方法、系统及可读存储介质
CN107851125B (zh) 通过车辆和服务器数据库进行两步对象数据处理以生成、更新和传送精确道路特性数据库的系统和方法
WO2022047744A1 (zh) 一种用于地图的路面提取方法及装置
WO2022206942A1 (zh) 一种基于行车安全风险场的激光雷达点云动态分割及融合方法
US9315192B1 (en) Methods and systems for pedestrian avoidance using LIDAR
WO2021238306A1 (zh) 一种激光点云的处理方法及相关设备
US12001517B2 (en) Positioning method and apparatus
WO2020259284A1 (zh) 一种障碍物检测方法及装置
US11798289B2 (en) Streaming object detection and segmentation with polar pillars
CN114841910A (zh) 车载镜头遮挡识别方法及装置
US20220309806A1 (en) Road structure detection method and apparatus
CN117576652B (zh) 道路对象的识别方法、装置和存储介质及电子设备
CN115879060A (zh) 基于多模态的自动驾驶感知方法、装置、设备和介质
WO2022052881A1 (zh) 一种构建地图的方法及计算设备
US20220371606A1 (en) Streaming object detection and segmentation with polar pillars
CN115359332A (zh) 基于车路协同的数据融合方法、装置、电子设备及系统
US11544899B2 (en) System and method for generating terrain maps
US11967159B2 (en) Semantic annotation of sensor data with overlapping physical features
US20240096109A1 (en) Automatic lane marking extraction and classification from lidar scans
US20240127603A1 (en) Unified framework and tooling for lane boundary annotation
WO2022033089A1 (zh) 确定检测对象的三维信息的方法及装置
CN117612127B (zh) 场景生成方法、装置和存储介质及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20951988

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20951988

Country of ref document: EP

Kind code of ref document: A1