CN113822332A - Road edge data labeling method, related system and storage medium - Google Patents


Info

Publication number
CN113822332A
Authority
CN
China
Prior art keywords
feature point
road edge
feature
laser
point set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110932824.2A
Other languages
Chinese (zh)
Inventor
白东峰
曹彤彤
刘冰冰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority claimed from application CN202110932824.2A
Publication of CN113822332A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/211 Selection of the most significant subset of features
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/10 Navigation by using measurements of speed or acceleration
    • G01C 21/12 Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C 21/16 Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C 21/165 Inertial navigation combined with non-inertial navigation instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S 17/88 Lidar systems specially adapted for specific applications
    • G01S 17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S 17/93 Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S 17/931 Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Abstract

An embodiment of the present application provides a road edge data labeling method, a related system, and a storage medium. The method includes: acquiring a vehicle body pose data sequence of the ego vehicle and a laser point cloud data sequence containing information about the environment around the ego vehicle, and obtaining a laser high-precision map from the vehicle body pose data sequence and the laser point cloud data sequence; obtaining a road edge feature point set from the laser high-precision map and the laser point cloud data sequence; processing each feature point in the road edge feature point set to obtain a road edge map containing a plurality of road edge instances; and obtaining, from the road edge map, the laser point cloud data sequence, and the vehicle body pose data sequence, road edge feature labeling data corresponding to each frame of laser point cloud data in the sequence. This method improves the efficiency of road edge data generation.

Description

Road edge data labeling method, related system and storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular to a road edge data labeling method, a related system, and a storage medium.
Background
For unmanned driving platforms and Advanced Driving Assistance Systems (ADAS), the environment perception system is the interface between the intelligent vehicle platform and the surrounding traffic scene, and also the front-end input of the platform's motion decision, planning, and control systems. Its performance directly determines how reliably the unmanned platform or ADAS can execute driving tasks. In recent years, data-driven perception algorithms, such as deep Convolutional Neural Networks (CNNs), have been widely applied in the field of unmanned driving. Most current data-driven perception algorithms need pre-labeled ground-truth data to supervise model training, and the quantity, quality, and scene diversity of the labeled truth data directly determine the prediction performance and generalization capability of the model.
At present, labeling data for autonomous driving, such as two-dimensional or three-dimensional ground-truth bounding boxes of target objects, pixel-level semantic labels for image data and point-level labels for laser point clouds, and annotations of road topological structures such as road edges and lane lines, requires annotators to spend large amounts of time labeling targets or pixels one by one. Labeling efficiency is therefore low and labeling cost is high. In addition, the labeled data still requires considerable time for review and quality control, which indirectly increases the workload of label generation again. Taking manual semantic segmentation of laser point clouds as an example, each frame needs about 3 to 4 hours to label, while a lidar typically operates at 10 to 20 Hz, i.e., it produces at least 10 frames of data per second; the efficiency of manual labeling is far lower than the rate at which data is generated.
One existing approach first determines, manually, a road edge pixel region in a sample image and the sub-region within it where the road edge is actually visible; it then labels the road edge confidence of the sample image according to the ratio between the visible sub-region and the full road edge pixel region; finally, a road edge detection model is trained on the manually labeled road edge data and the sample images.
Such road edge data generation methods rely almost entirely on manual labeling: efficiency is low, labeling quality is hard to guarantee, and a large amount of time usually has to be invested in re-checking it. In addition, because this method generates road edge labels in the image plane, practical applications must convert them into the vehicle body coordinate system using an inverse perspective transformation; this conversion introduces projection errors, so road edge data at long range shows large deviations.
Disclosure of Invention
The application discloses a road edge data labeling method, a related system, and a storage medium, which can improve the efficiency of road edge data generation.
In a first aspect, an embodiment of the present application provides a road edge data labeling method, including: acquiring a vehicle body pose data sequence of the ego vehicle and a laser point cloud data sequence containing information about the environment around the ego vehicle, and obtaining a laser high-precision map from the vehicle body pose data sequence and the laser point cloud data sequence; obtaining a road edge feature point set from the laser high-precision map and the laser point cloud data sequence; processing each feature point in the road edge feature point set to obtain a road edge map containing a plurality of road edge instances; and obtaining, from the road edge map, the laser point cloud data sequence, and the vehicle body pose data sequence, road edge feature labeling data corresponding to each frame of laser point cloud data in the sequence.
In this method, a laser high-precision map is built from the vehicle body pose data sequence and the laser point cloud data sequence, and a road edge feature point set is obtained from the map and the point cloud sequence; each feature point in the set is then processed to obtain a road edge map containing a plurality of road edge instances; finally, road edge feature labeling data for each frame of laser point cloud data is derived from the road edge map, the laser point cloud data sequence, and the vehicle body pose data sequence. The road edge map is what makes automatic single-frame labeling possible: compared with road edge data generated frame by frame, the map's road edge data is more consistent in the world coordinate system, and because the map is built from multiple frames it has better data quality in regions occluded by obstacles, which improves the confidence and accuracy of the labels. Road edge labeling data is then obtained from the road edge map and each frame of laser point cloud data. Compared with the existing manual labeling approach, this scheme effectively improves labeling efficiency, and hence the prediction performance of models trained on the data, while preserving road edge data accuracy.
As an optional implementation, obtaining the road edge feature point set from the laser high-precision map and the laser point cloud data sequence includes: processing the laser point cloud data sequence to obtain laser point cloud semantic information; obtaining a road semantic map from the semantic information and the laser high-precision map; and extracting road edge feature points from the road semantic map to obtain a road edge feature point set in the road semantic map.
This approach obtains the road edge feature point set using a laser SLAM method together with a laser semantic segmentation network. Because the map is built from multiple frames of laser point cloud data, errors from semantic segmentation of any single frame are largely averaged out, improving the efficiency and robustness of extracting the road edge feature point set from the road semantic map.
Further, the road semantic map includes a first-type point cloud and a second-type point cloud, and extracting road edge feature points from the road semantic map to obtain the road edge feature point set includes: mapping the road semantic map onto a two-dimensional grid map; selecting candidate grid cells from the mapped road semantic map, where a candidate cell is one that contains both first-type and second-type points and in which the height difference between the two types is not greater than a first threshold; and obtaining the road edge feature point set from the first-type and second-type points in the candidate cells.
The point clouds in the road semantic map fall into two types: points classified as road surface form the first-type point cloud, and points classified as lying outside the road edge, for example sidewalk or vegetation points, form the second-type point cloud.
With this method, restricting attention to candidate cells that contain both types of points improves the efficiency and accuracy of determining the road edge feature point set. At the same time, the height-difference constraint reflects the geometry of real road edges and excludes spurious feature points such as tree branches overhanging the road surface or lamp posts, improving the reliability of the result.
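For illustration, the candidate-cell selection described above can be sketched as follows. This is a minimal sketch, not the patented implementation: the label ids, cell size, and height threshold (`ROAD`, `OFFROAD`, `cell`, `max_dh`) are all assumed values chosen for the example.

```python
import numpy as np
from collections import defaultdict

ROAD, OFFROAD = 0, 1  # hypothetical semantic label ids

def candidate_edge_points(points, labels, cell=0.2, max_dh=0.5):
    """Map labeled 3-D map points onto a 2-D grid and keep the points of
    cells that contain both road-surface (first-type) and off-road
    (second-type) points with a small height difference."""
    cells = defaultdict(lambda: {ROAD: [], OFFROAD: []})
    for p, l in zip(np.asarray(points, dtype=float), labels):
        if l in (ROAD, OFFROAD):
            key = (int(p[0] // cell), int(p[1] // cell))
            cells[key][l].append(p)
    edge_pts = []
    for groups in cells.values():
        if groups[ROAD] and groups[OFFROAD]:
            # height difference between the two point types in this cell;
            # a large gap indicates overhanging branches or a lamp post
            dh = max(p[2] for p in groups[OFFROAD]) - min(p[2] for p in groups[ROAD])
            if dh <= max_dh:
                edge_pts.extend(groups[ROAD] + groups[OFFROAD])
    return np.asarray(edge_pts)
```

With these assumed thresholds, a cell containing a 15 cm curb step passes the check, while a cell whose off-road points come from a 2 m tree canopy is rejected.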
Further, processing each feature point in the road edge feature point set to obtain a road edge map containing a plurality of road edge instances includes: performing an iterative search over the feature points in the road edge feature point sets of K sub-maps to obtain feature point sets of a plurality of road edge instances, where the K sub-maps are obtained by segmenting the road semantic map and K is an integer not less than 2. For the p-th of the K sub-maps, steps S1 to S6 are executed:
S1. Obtain a feature point T(i, j) from the p-th sub-map such that the number of points in the neighborhood feature point set of T(i, j) is not less than a second threshold.
S2. Obtain at least one point set from T(i, j) and its neighborhood feature point set, such that in each of the point sets the angle formed by connecting any two of its feature points to T(i, j) is not greater than a third threshold.
S3. From the at least one point set, determine a feature point T(i, j+1) whose distance from T(i, j) is not less than a fourth threshold.
S4. Obtain the neighborhood feature point set of T(i, j+1), and from it determine a feature point T(i, j+2) whose distance from T(i, j+1) is not less than a fifth threshold, where the neighborhood feature point set of T(i, j+1) does not overlap that of T(i, j).
S5. Set j to j+1 and repeat step S4 until the number of points in the neighborhood feature point set of T(i, j+1) is 0, obtaining the feature point set of the i-th road edge instance in the p-th sub-map; this set comprises T(i, j), its neighborhood feature point set, T(i, j+1), and the neighborhood feature point set of T(i, j+1).
S6. Repeat steps S1 to S5 until all feature points in the p-th sub-map have been traversed, obtaining the feature point set of each road edge instance in the p-th sub-map, where the p-th sub-map is any one of the K sub-maps and i and j are positive integers.
A road edge map comprising a plurality of road edge instances is then obtained from the feature point sets of the road edge instances in the K sub-maps.
In this embodiment, road edge instances are labeled automatically from the road edge feature point set in the road semantic map, yielding the road edge map that contains them. Compared with current frame-by-frame manual labeling, this greatly improves labeling efficiency; and because the road edge data is generated directly from the feature point set in the road semantic map, there is no point-placement error of the kind introduced by manual annotation, further improving labeling reliability.
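The iterative search of steps S1 to S6 can be illustrated with a simplified greedy chain-growing sketch. This is a toy version under stated assumptions: it works in 2-D, drops the angle test of S2 and the per-step distance thresholds of S3 and S4, and uses a plain radius neighborhood, so `radius` and `min_neighbors` are illustrative parameters rather than the patent's thresholds.

```python
import numpy as np

def grow_edge_instances(points, radius=1.0, min_neighbors=2):
    """Greedily chain nearby edge feature points into road edge instances.
    Mirrors the spirit of S1-S6: seed at a point with enough neighborhood
    support (S1), absorb its neighborhood (S2), then repeatedly jump to a
    far neighbor and absorb its unvisited neighborhood (S3-S5), starting
    a new instance when a chain can no longer be extended (S6)."""
    pts = np.asarray(points, dtype=float)
    unvisited = set(range(len(pts)))

    def neighbors(i):
        ids = np.fromiter(unvisited, dtype=int)
        if ids.size == 0:
            return []
        d = np.linalg.norm(pts[ids] - pts[i], axis=1)
        return [j for j, dj in zip(ids, d) if j != i and dj <= radius]

    instances = []
    while unvisited:
        seed = min(unvisited)
        nb = neighbors(seed)
        unvisited.discard(seed)
        if len(nb) < min_neighbors:       # S1: not enough support, skip seed
            continue
        inst = [seed] + nb                # S2: seed plus its neighborhood
        for j in nb:
            unvisited.discard(j)
        # S3: step to the farthest neighbor of the seed
        cur = max(nb, key=lambda j: np.linalg.norm(pts[j] - pts[seed]))
        while True:                       # S4/S5: extend until the next
            nb = neighbors(cur)           # neighborhood is empty
            if not nb:
                break
            inst.extend(nb)
            for j in nb:
                unvisited.discard(j)
            cur = max(nb, key=lambda j: np.linalg.norm(pts[j] - pts[cur]))
        instances.append(pts[inst])       # one road edge instance per chain
    return instances
```

Run on two well-separated strings of curb points, the sketch returns two instances, one per physical road edge segment.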
As another optional implementation, obtaining the road edge feature point set from the laser high-precision map and the laser point cloud data sequence includes: extracting semantic features from each frame of laser point cloud data in the sequence to obtain a road edge feature point set for each frame; and mapping each frame's road edge feature point set into the laser high-precision map to obtain a road edge feature point set in the coordinate system of the laser high-precision map.
Further, processing each feature point in the road edge feature point set to obtain a road edge map containing a plurality of road edge instances includes:
S1. Obtain a feature point T(i, j) from the road edge feature point set in the coordinate system of the laser high-precision map, such that the number of points in the neighborhood feature point set of T(i, j) is not less than a first threshold.
S2. Obtain at least one point set from T(i, j) and its neighborhood feature point set, such that in each of the point sets the angle formed by connecting any two of its feature points to T(i, j) is not greater than a second threshold.
S3. From the at least one point set, determine a feature point T(i, j+1) whose distance from T(i, j) is not less than a third threshold.
S4. Obtain the neighborhood feature point set of T(i, j+1), and from it determine a feature point T(i, j+2) whose distance from T(i, j+1) is not less than a fourth threshold, where the neighborhood feature point set of T(i, j+1) does not overlap that of T(i, j).
S5. Set j to j+1 and repeat step S4 until the number of points in the neighborhood feature point set of T(i, j+1) is 0, obtaining the feature point set of the i-th road edge instance in the road edge feature point set; this set comprises T(i, j), its neighborhood feature point set, T(i, j+1), and the neighborhood feature point set of T(i, j+1).
S6. Repeat steps S1 to S5 until all feature points in the road edge feature point set have been traversed, obtaining the feature point set of each road edge instance, where i and j are positive integers.
and obtaining a road edge map containing a plurality of road edge examples according to the characteristic point set of each road edge example in the road edge characteristic point set.
In this embodiment, road edge instances are labeled automatically from the road edge feature point set to obtain the road edge map containing them. Compared with current frame-by-frame manual labeling, this greatly improves labeling efficiency; and because the road edge data is generated directly from the feature point set, there is no point-placement error from manual annotation, further improving reliability.
As an optional implementation, obtaining the road edge feature labeling data for each frame of laser point cloud data from the road edge map, the laser point cloud data sequence, and the vehicle body pose data sequence includes: obtaining a plurality of reference road edge maps by mapping the road edge map into the coordinate system of each of a plurality of frames of laser point cloud data, with a one-to-one correspondence between reference maps and frames; extracting from each reference road edge map a preset area whose size is not smaller than the extent of a frame of laser point cloud data, again in one-to-one correspondence; determining a score for each feature point in the preset areas from the laser point cloud data sequence and the feature points in those areas; and obtaining the road edge feature labeling data for each frame from those scores, where the labeling data is produced by processing the feature points whose score is not less than a sixth threshold.
In this embodiment, the road edge map is combined with single-frame laser point cloud data to score each feature point, and road edge labeling data matched to the single frame of raw laser point cloud data is then filtered out by score. This labels data efficiently and, compared with manual frame-by-frame labeling, yields stronger global consistency.
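The scoring-and-filtering step can be sketched as follows. The disclosure does not spell out the scoring function, so this sketch assumes a simple support count: a map feature point scores by how many raw points of the current frame fall within `radius` of it, and points scoring below `min_score` (a stand-in for the sixth threshold) are dropped. All names and parameters here are illustrative.

```python
import numpy as np

def label_frame(map_edge_points, frame_points, radius=0.3, min_score=3):
    """Keep the road edge map feature points that are supported by the
    current frame's raw laser points (score = number of frame points
    within `radius` of the map point)."""
    map_pts = np.asarray(map_edge_points, dtype=float)
    frame_pts = np.asarray(frame_points, dtype=float)
    keep = []
    for q in map_pts:
        score = int(np.sum(np.linalg.norm(frame_pts - q, axis=1) <= radius))
        if score >= min_score:   # assumed stand-in for the sixth threshold
            keep.append(q)
    return np.asarray(keep)
```

A map point surrounded by several raw returns in the current frame survives, while a map point with no nearby returns (for example, one occluded in this frame) is excluded from that frame's labels.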
In a second aspect, an embodiment of the present application provides a road edge data labeling device, including:
an acquisition module, configured to acquire a vehicle body pose data sequence of the ego vehicle and a laser point cloud data sequence containing information about the environment around the ego vehicle; and
a processing module to:
obtaining a laser high-precision map according to the vehicle body pose data sequence and the laser point cloud data sequence;
obtaining a road edge feature point set according to the laser high-precision map and the laser point cloud data sequence;
processing each characteristic point in the road edge characteristic point set to obtain a road edge map containing a plurality of road edge examples;
and obtaining road edge characteristic marking data corresponding to each frame of laser point cloud data in the laser point cloud data sequence according to the road edge map, the laser point cloud data sequence and the vehicle body pose data sequence.
As an optional implementation manner, the processing module is configured to: processing the laser point cloud data sequence to obtain laser point cloud semantic information; obtaining a road semantic map according to the laser point cloud semantic information and the laser high-precision map; and extracting road edge feature points of the road semantic map to obtain a road edge feature point set in the road semantic map.
As an optional implementation manner, the road semantic map includes a first point cloud and a second point cloud, and the processing module is further configured to: mapping the road semantic map to a two-dimensional grid map to obtain a mapped road semantic map; acquiring a candidate grid from the mapped road semantic map, wherein the candidate grid is a grid with the first point cloud and the second point cloud, and the height difference between the first point cloud and the second point cloud in the road semantic map is not greater than a first threshold value; and obtaining a road edge feature point set according to the first point cloud and the second point cloud in the candidate grid.
As an optional implementation manner, the processing module is further configured to: performing iterative search processing on feature points in a road edge feature point set of K sub-maps to obtain a feature point set of a plurality of road edge examples, wherein the K sub-maps are obtained by segmenting the road semantic map, and K is an integer not less than 2; wherein, for the p-th sub-map of the K sub-maps, the steps S1-S6 are executed:
S1. Obtain a feature point T(i, j) from the p-th sub-map such that the number of points in the neighborhood feature point set of T(i, j) is not less than a second threshold.
S2. Obtain at least one point set from T(i, j) and its neighborhood feature point set, such that in each of the point sets the angle formed by connecting any two of its feature points to T(i, j) is not greater than a third threshold.
S3. From the at least one point set, determine a feature point T(i, j+1) whose distance from T(i, j) is not less than a fourth threshold.
S4. Obtain the neighborhood feature point set of T(i, j+1), and from it determine a feature point T(i, j+2) whose distance from T(i, j+1) is not less than a fifth threshold, where the neighborhood feature point set of T(i, j+1) does not overlap that of T(i, j).
S5. Set j to j+1 and repeat step S4 until the number of points in the neighborhood feature point set of T(i, j+1) is 0, obtaining the feature point set of the i-th road edge instance in the p-th sub-map; this set comprises T(i, j), its neighborhood feature point set, T(i, j+1), and the neighborhood feature point set of T(i, j+1).
S6. Repeat steps S1 to S5 until all feature points in the p-th sub-map have been traversed, obtaining the feature point set of each road edge instance in the p-th sub-map, where the p-th sub-map is any one of the K sub-maps and i and j are positive integers.
and obtaining a road edge map comprising a plurality of road edge examples according to the feature point sets of the road edge examples in the K sub-maps.
As another optional implementation manner, the processing module is configured to: semantic feature extraction is carried out on each frame of laser point cloud data in the laser point cloud data sequence, and a road edge feature point set of each frame of laser point cloud data in the laser point cloud data sequence is obtained; and mapping the road edge feature point set of each frame of laser point cloud data in the laser point cloud data sequence to the laser high-precision map to obtain a road edge feature point set corresponding to a coordinate system where the laser high-precision map is located.
As an optional implementation manner, the processing module is further configured to:
s1, acquiring a characteristic point T (i, j) from a road edge characteristic point set corresponding to a coordinate system where the laser high-precision map is located, wherein the number of points in a neighborhood characteristic point set corresponding to the characteristic point T (i, j) is not less than a first threshold value;
s2, obtaining at least one point set according to a neighborhood feature point set corresponding to the feature point T (i, j) and the feature point T (i, j), wherein an included angle formed by connecting any two feature points in each point set in the at least one point set and the feature point T (i, j) is not more than a second threshold value;
s3, determining a characteristic point T (i, j +1) with the distance from the characteristic point T (i, j) not less than a third threshold value from the at least one point set;
s4, acquiring a neighborhood feature point set corresponding to the feature point T (i, j +1), and determining a feature point T (i, j +2) of which the distance from the feature point T (i, j +1) is not less than a fourth threshold value from the neighborhood feature point set corresponding to the feature point T (i, j +1), wherein the neighborhood feature point set corresponding to the feature point T (i, j +1) is not overlapped with the neighborhood feature point set corresponding to the feature point T (i, j);
s5, setting j to j +1, and repeatedly executing step S4 until the number of points in the neighborhood feature point set corresponding to the feature point T (i, j +1) is 0, to obtain a feature point set of an ith road edge instance in the road edge feature point set, where the feature point set of the ith road edge instance includes the feature point T (i, j), the neighborhood feature point set corresponding to the feature point T (i, j), the feature point T (i, j +1), and the neighborhood feature point set corresponding to the feature point T (i, j + 1);
s6, repeating steps S1-S5 until the feature points in the road edge feature point set have been traversed, obtaining a feature point set of each road edge instance in the road edge feature point set, where i and j are positive integers;
and obtaining a road edge map containing a plurality of road edge examples according to the characteristic point set of each road edge example in the road edge characteristic point set.
As an optional implementation manner, the processing module is further configured to: obtaining a plurality of reference road edge maps according to a plurality of frames of laser point cloud data in the laser point cloud data sequence and the vehicle body pose data sequence, wherein the plurality of reference road edge maps are obtained by mapping the road edge maps in coordinate systems where the plurality of frames of laser point cloud data are respectively located, and the plurality of reference road edge maps correspond to the plurality of frames of laser point cloud data one to one; acquiring a plurality of preset areas from the plurality of reference road edge maps, wherein the size of each preset area is not smaller than that of each frame of laser point cloud data, and the plurality of preset areas correspond to the plurality of reference road edge maps one by one; determining the score of each characteristic point in the plurality of preset areas according to the laser point cloud data sequence and the characteristic points in the plurality of preset areas; and obtaining road edge feature marking data corresponding to each frame of laser point cloud data according to the score of each feature point in the preset areas, wherein the road edge feature marking data are obtained by processing the feature points with the score not less than a sixth threshold value.
In a third aspect, the present application provides a road edge data labeling apparatus, including a processor and a memory; wherein the memory is configured to store program code, and the processor is configured to call the program code to perform the method as provided in any one of the possible embodiments of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium storing a computer program for execution by a processor to perform the method as provided in any one of the possible embodiments of the first aspect.
In a fifth aspect, the present application provides a computer program product for causing a computer to perform the method as provided in any one of the possible embodiments of the first aspect when the computer program product runs on the computer.
In a sixth aspect, the present application provides a chip system, which is applied to an electronic device; the chip system comprises one or more interface circuits, and one or more processors; the interface circuit and the processor are interconnected through a line; the interface circuit is to receive a signal from a memory of the electronic device and to send the signal to the processor, the signal comprising computer instructions stored in the memory; when the processor executes the computer instructions, the electronic device performs the method as provided in any one of the possible embodiments of the first aspect.
In a seventh aspect, the present application provides an intelligent driving vehicle, comprising a traveling system, a sensing system, a control system and a computer system, wherein the computer system is configured to perform the method as provided in any one of the possible embodiments of the first aspect.
It is to be understood that the apparatus of the second aspect, the apparatus of the third aspect, the computer-readable storage medium of the fourth aspect, the computer program product of the fifth aspect, the chip system of the sixth aspect, and the smart driving vehicle of the seventh aspect are all configured to perform the method of the first aspect. Therefore, the beneficial effects achieved by the method can refer to the beneficial effects in the corresponding method, and are not described herein again.
Drawings
The drawings used in the embodiments of the present application are described below.
Fig. 1a is a schematic structural diagram of a road marking system according to an embodiment of the present application;
fig. 1b is a schematic diagram of a road edge data labeling application provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a method for labeling road edge data according to an embodiment of the present disclosure;
fig. 3a is a top view of laser point cloud data provided by an embodiment of the present application;
FIG. 3b is a schematic diagram of a laser high-precision map provided by an embodiment of the present application;
fig. 3c is a schematic diagram of a laser point cloud semantic map provided in an embodiment of the present application;
fig. 3d is a schematic diagram of a road semantic map provided by an embodiment of the present application;
fig. 3e is a schematic diagram of a road edge feature point set provided in an embodiment of the present application;
fig. 4 is a schematic diagram of a method for obtaining a road edge map according to an embodiment of the present disclosure;
fig. 5a is a schematic diagram of a road edge feature point set provided in an embodiment of the present application;
FIG. 5b is a schematic diagram of a growth iteration direction provided by an embodiment of the present application;
FIG. 5c is a schematic diagram of obtaining multiple road edge instances provided by an embodiment of the present application;
FIG. 5d is a schematic diagram of a sub-map merge provided by an embodiment of the present application;
fig. 6a is a schematic diagram of determining a neighborhood feature point according to an embodiment of the present disclosure;
FIG. 6b is a schematic diagram of determining different iterative growth directions according to an embodiment of the present application;
fig. 7 is a schematic diagram of road edge data labeling based on a road edge map according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a road edge data marking device according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of another road edge data labeling device according to an embodiment of the present application.
Detailed Description
The embodiments of the present application will be described below with reference to the drawings. The terminology used in the description of the embodiments herein is for the purpose of describing particular embodiments herein only and is not intended to be limiting of the application.
Referring to fig. 1a, a schematic diagram of an architecture of a road edge data labeling system according to an embodiment of the present application is shown. As shown in fig. 1a, the system includes a laser radar, a combined navigation system, and a road edge data labeling device. The laser radar is used for acquiring laser point cloud data of the surrounding environment of the vehicle; the integrated navigation System can comprise a Global Positioning System (GPS) and an Inertial Measurement Unit (IMU), and is used for acquiring a vehicle body pose data sequence; and the road edge data labeling device is used for performing road edge data labeling processing. Specifically, the laser radar sends the acquired point cloud data of the surrounding environment of the vehicle to the road edge data marking device, and the global positioning system and the inertial measurement unit send the acquired pose data to the road edge data marking device, so that the road edge data marking device marks the road edge data.
As an optional implementation manner, the road edge data labeling device may include a laser radar driving module, an integrated navigation system driving module, a road edge map generation module, and a road edge data labeling module. The laser radar driving module is used for converting data packets sent by the laser radar into three-dimensional laser point cloud data; the integrated navigation system driving module is used for fusing the data acquired by the global positioning system with the data acquired by the inertial measurement unit and outputting continuous and smooth vehicle body pose data; the road edge map generation module is used for generating road edge map data containing a plurality of road edge instances; and the road edge data labeling module is used for generating each frame of road edge labeling data of multiple data types.
The above embodiments are described by taking a laser radar and an integrated navigation system as examples, wherein the system may also obtain point cloud data of the surrounding environment of the vehicle and vehicle body pose data based on other sensors, and the present disclosure is not particularly limited thereto.
Optionally, the road edge data labeling device may be a server, which may be a virtual server, an entity server, or the like, or may be another device, which is not specifically limited in this embodiment.
The road edge data labeling system can be applied to the environment perception module in automatic driving platforms and advanced driver-assistance systems. As shown in fig. 1b, an intelligent vehicle needs to be sufficiently aware of the environment around it, such as the road boundary information around the vehicle. In the data-driven deep neural network algorithms currently used in the industry, the prediction capability of a model depends greatly on the quantity and quality of the labeled data. Therefore, the present scheme provides a road edge data labeling method for the data labeling (data generation) part of laser road edge detection. By adopting the automatic road edge data labeling method provided by this scheme, the road edge data generation efficiency can be improved, and the prediction performance of the model can be indirectly improved.
Fig. 2 is a schematic flow chart of a road edge data labeling method according to an embodiment of the present application. The method comprises the following steps 201-204:
201. acquiring a vehicle body pose data sequence of a self vehicle and a laser point cloud data sequence containing the peripheral environment information of the self vehicle, and acquiring a laser high-precision map according to the vehicle body pose data sequence and the laser point cloud data sequence;
the vehicle body pose data sequence is vehicle body pose data of a plurality of poses acquired based on continuous movement of the vehicle.
Wherein the sequence of vehicle body pose data comprises a plurality of frames of vehicle body pose data. The vehicle body pose data may include position information and attitude information of the vehicle.
The laser point cloud data sequence is laser point cloud data of surrounding environment information of the vehicle, which is obtained based on the continuous movement of the vehicle. That is, the laser point cloud data sequence includes a plurality of frames of laser point cloud data.
The laser high-precision map may be a map with a precision of centimeter level. The coordinate system corresponding to the map may be a world coordinate system.
As an optional implementation manner, a laser high-precision map can be obtained from the laser point cloud data sequence and the vehicle body pose data sequence by using a laser simultaneous localization and mapping (SLAM) method or a laser odometry method.
The laser SLAM method is suitable for data sequences containing repeated road segments (multiple loop closures), where a repeated road segment can be understood as a road segment that is passed through more than once; in such scenes, loop closure detection ensures the consistency of the map data and the road edge data. The laser odometry method is suitable for scenes without repeated road segments (without loop closures), such as expressways.
In the map generation process, the frame number frame_id of each laser point used as a map point and the number point_id of the point within its frame are recorded, and the laser high-precision map can be recorded as

M = {(x_n, y_n, z_n, intensity_n, frame_id_n, point_id_n) | n = 1, ..., N},

where N is the number of points in the map, (x, y, z) are the three-dimensional space coordinates of a point in the world coordinate system, and intensity is the reflection intensity of the corresponding laser point.
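As a rough illustration of the record described above, the per-point fields can be sketched as follows (the field names and values are illustrative, not taken from the original):

```python
from dataclasses import dataclass

@dataclass
class MapPoint:
    # Illustrative representation of one laser high-precision map point;
    # fields follow the quantities named in the text.
    x: float          # three-dimensional coordinates in the world frame
    y: float
    z: float
    intensity: float  # reflection intensity of the laser point
    frame_id: int     # index of the source laser frame
    point_id: int     # index of the point within that frame

# The laser high-precision map is then a collection of N such points.
laser_map = [
    MapPoint(x=12.3, y=-4.5, z=0.1, intensity=0.87, frame_id=0, point_id=105),
    MapPoint(x=12.4, y=-4.5, z=0.1, intensity=0.91, frame_id=0, point_id=106),
]
```

Recording frame_id and point_id per map point is what later allows per-frame information (such as semantic labels) to be attached to map points by direct lookup.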
202. Obtaining a road edge feature point set according to the laser high-precision map and the laser point cloud data sequence;
a road edge is understood to be a boundary between a road surface and other road elements other than a road surface. Such non-road surface road elements are for example curbs, walls, green belts etc. on the road boundaries.
As an optional implementation manner (embodiment one), the obtaining of the road edge feature point set according to the laser high-precision map and the laser point cloud data sequence includes steps 2021 and 2023, which are specifically as follows:
2021. processing the laser point cloud data sequence to obtain laser point cloud semantic information;
the semantic information of the laser point cloud can be understood as category information to which each point in the laser point cloud belongs. Such as pedestrians, the ground, etc.
Specifically, the laser point cloud data sequence is input into a laser semantic segmentation network for semantic segmentation processing, and then laser point cloud semantic information can be obtained.
Of course, other semantic segmentation processing may also be adopted, for example, semantic segmentation that only distinguishes between ground and non-ground point clouds, which is not specifically limited in this scheme.
2022. Obtaining a road semantic map according to the laser point cloud semantic information and the laser high-precision map;
as an optional implementation manner, the laser point cloud semantic information is mapped into the laser high-precision map through frame _ id and point _ id to obtain a point cloud semantic map, which can be recorded as
Figure BDA0003211734960000081
Wherein, label is the classification information of the point cloud; and then, extracting semantic map points related to the road structure information from the point cloud semantic map to obtain a road semantic map.
The semantic map points related to the road structure information may be, for example, road surface, sidewalk, roadside vegetation, and the like.
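The mapping through frame_id and point_id, followed by extraction of the road-structure classes, can be sketched as follows (the class names and data layout are assumptions made for illustration):

```python
def attach_labels(map_points, labels_by_frame):
    # map_points: dicts holding at least "frame_id" and "point_id"
    # labels_by_frame[f][i]: semantic class of point i in laser frame f
    return [dict(p, label=labels_by_frame[p["frame_id"]][p["point_id"]])
            for p in map_points]

ROAD_CLASSES = {"road", "sidewalk", "vegetation"}  # assumed class names

def road_semantic_map(labeled_points):
    # keep only the semantic map points related to road structure information
    return [p for p in labeled_points if p["label"] in ROAD_CLASSES]
```

The lookup works because every map point keeps the identity of its source laser point, so no nearest-neighbor association between the map and the segmentation output is needed.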
2023. And extracting road edge feature points of the road semantic map to obtain a road edge feature point set in the road semantic map.
The point clouds in the road semantic map can be divided into two types, the point clouds belonging to the road surface classification are used as first type point clouds, and the point clouds belonging to the road edge outer side are classified as second type point clouds. The point cloud classification outside the road edge can be, for example, a sidewalk classification, a green plant classification, and the like. The road edge point cloud is positioned at the junction of the first point cloud and the second point cloud.
As an optional implementation manner, the extracting of the road edge feature points from the road semantic map to obtain a road edge feature point set in the road semantic map includes:
mapping the road semantic map to a two-dimensional grid map to obtain a mapped road semantic map;
acquiring a candidate grid from the mapped road semantic map, wherein the candidate grid is a grid with the first point cloud and the second point cloud, and the height difference between the first point cloud and the second point cloud in the road semantic map is not greater than a first threshold value;
and obtaining a road edge feature point set according to the first point cloud and the second point cloud in the candidate grid.
Specifically, the road semantic map is mapped into an x-y plane two-dimensional grid map, and the road edge feature points are determined based on the following aspects:
for each grid in the two-dimensional grid map, if the first type point cloud and the second type point cloud are both located in the grid, the grid is marked as a candidate grid.
As an optional implementation manner, considering that part of tree branches and the like in an actual road scene can extend to the upper side of a road surface, if the height difference between a first point cloud and a second point cloud in a candidate grid in an original road semantic map is greater than a certain height threshold, the candidate grid is removed, and candidate grids with the height difference between the first point cloud and the second point cloud being less than the certain height threshold are reserved.
The first-class point clouds in the screened candidate grids can be marked as road edge feature points; or marking the second point cloud in the screened candidate grid as a road edge feature point; or road edge feature points and the like are obtained based on the boundary of the first point cloud and the second point cloud.
And traversing the point cloud in the road semantic map to further obtain a road edge feature point set.
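The grid-based screening described above can be sketched as follows; the grid size, height threshold, use of the mean height difference, and the choice of marking the first-class points as curb features are all illustrative:

```python
from collections import defaultdict

def curb_feature_points(points, cell=0.2, max_height_diff=0.5):
    # points: (x, y, z, cls) tuples; cls 1 = road surface, cls 2 = outside
    # the road edge (sidewalk, vegetation, ...). Thresholds are illustrative.
    grid = defaultdict(list)
    for p in points:
        grid[(int(p[0] // cell), int(p[1] // cell))].append(p)
    features = []
    for cell_pts in grid.values():
        surface = [p for p in cell_pts if p[3] == 1]
        outside = [p for p in cell_pts if p[3] == 2]
        if not surface or not outside:
            continue  # a candidate grid must contain both point classes
        mean_z = lambda pts: sum(p[2] for p in pts) / len(pts)
        # drop candidate grids where e.g. overhanging branches produce a
        # large height gap between the two classes
        if abs(mean_z(surface) - mean_z(outside)) > max_height_diff:
            continue
        features.extend(surface)  # mark first-class points as curb features
    return features
```

As the text notes, the second-class points of a candidate grid, or points on the boundary between the two classes, could be marked instead with the same grid logic.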
As shown in fig. 3a, 3b, 3c, 3d, and 3e: a laser point cloud data sequence (fig. 3a shows a top view of laser point cloud data) is combined with the vehicle body pose data sequence to obtain the laser high-precision map of fig. 3b; the laser point cloud data sequence is subjected to semantic segmentation processing to obtain laser point cloud semantic information, which is mapped into the laser high-precision map to obtain the laser point cloud semantic map of fig. 3c; semantic map points related to the road structure information are extracted from the laser point cloud semantic map to obtain the road semantic map shown in fig. 3d; and the road edge feature point set is obtained based on the two-dimensional grid map, as shown in fig. 3e. The area B corresponds to a candidate grid in which both types of point clouds exist; the area A corresponds to the first type of point cloud and the area C to the second type of point cloud, or the area A corresponds to the second type of point cloud and the area C to the first type of point cloud, and so on.
The above-described embodiment has been described taking as an example the extraction of the road edge feature point set based on the entire high-precision map. The road edge feature point set may also be acquired based on the road edge semantic features of a single frame.
As another optional implementation manner (embodiment two), the obtaining of the road edge feature point set according to the laser high-precision map and the laser point cloud data sequence includes 202A to 202B, which is specifically as follows:
202A, performing semantic feature extraction on each frame of laser point cloud data in the laser point cloud data sequence to obtain a road edge feature point set of each frame of laser point cloud data in the laser point cloud data sequence;
202B, mapping the road edge feature point set of each frame of laser point cloud data in the laser point cloud data sequence to the laser high-precision map to obtain a road edge feature point set corresponding to a coordinate system where the laser high-precision map is located.
The single-frame road edge feature point set can be extracted based on the geometrical features of the point cloud space, and specifically, a mode of combining the features of a laser single scanning layer and the features between adjacent scanning layers can be adopted. The single scanning layer refers to one laser scanning line beam in the multi-line laser radar, and the adjacent scanning layer refers to a plurality of adjacent laser scanning line beams.
The single scan layer characteristics mainly include a slope characteristic, a number characteristic, and a reflection intensity characteristic.
The slope characteristic mainly considers the consistency of the space distribution of the road edge and the motion direction of the vehicle, namely, the slope of a connecting line between a road edge characteristic point and a neighborhood point is within a certain angle threshold range under a vehicle body coordinate system.
The number characteristic mainly considers the continuity of the road edge space distribution, and the number of the neighborhood points of the road edge characteristic points is greater than a certain number threshold.
The reflection intensity characteristic is intended to filter noise points of low reflection intensity such as rain water.
The neighborhood points of the road edge feature points may be understood as points in the circular region obtained by drawing a circle with a certain radius and length by taking the road edge feature points as the center of the circle, that is, neighborhood points of the road edge feature points. Of course, other methods may be used to determine the neighborhood point, and this is not specifically limited in this embodiment.
The characteristics between adjacent scanning layers mainly consider that a road edge generally forms a section perpendicular to the road surface, and the laser point cloud on such a curb section is distributed more densely than road surface points, so the distance between adjacent points of adjacent layers in a top view is within a certain distance threshold. The inter-layer count feature, namely the number of inter-layer neighborhood points of a road edge feature point across the scanned layers, can also evaluate the continuity of the road edge distribution.
And determining a road edge feature point set of each frame of laser point cloud data based on the laser single scanning layer features and the adjacent scanning layer features. And recording the obtained data frame number frame _ id of the single-frame road edge characteristic point and the point number point _ id in the frame, and mapping the characteristic point to a high-precision map through a vehicle pose information sequence to obtain a road edge characteristic point set corresponding to a coordinate system where the laser high-precision map is located, namely obtaining the road edge point cloud characteristic map.
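The single-scan-layer checks (slope, count, and reflection intensity features) might look like the following sketch; the threshold values and the interpretation of the slope feature as alignment with the driving direction are assumptions:

```python
import math

def passes_single_layer_checks(p, neighbors, max_angle_deg=15.0,
                               min_neighbors=3, min_intensity=0.1):
    # p, neighbors: (x, y, z, intensity) in the vehicle body frame, with x
    # pointing along the driving direction; all thresholds are illustrative.
    if p[3] < min_intensity:
        return False            # intensity feature: drop weak returns (rain, ...)
    if len(neighbors) < min_neighbors:
        return False            # count feature: curbs are spatially continuous
    for q in neighbors:
        dx, dy = q[0] - p[0], q[1] - p[1]
        if dx == 0 and dy == 0:
            continue
        # slope feature: the line to each neighbour should stay roughly
        # aligned with the vehicle motion direction (the x axis)
        if math.degrees(math.atan2(abs(dy), abs(dx))) > max_angle_deg:
            return False
    return True
```

The adjacent-layer checks (inter-layer point distance and inter-layer neighbor count) would be applied on top of this per-layer filter.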
203. Processing each characteristic point in the road edge characteristic point set to obtain a road edge map containing a plurality of road edge examples;
the road edge example can be understood as a road edge with continuity in space.
As an optional implementation manner (corresponding to the foregoing embodiment), the processing each feature point in the road edge feature point set to obtain a road edge map including a plurality of road edge instances includes:
performing iterative search processing on feature points in a road edge feature point set of K sub-maps to obtain a feature point set of a plurality of road edge examples, wherein the K sub-maps are obtained by segmenting the road semantic map, and K is an integer not less than 2; wherein, for the p-th sub-map of the K sub-maps, the steps S1-S6 are executed:
s1, obtaining a characteristic point T (i, j) from the p-th sub map, wherein the number of points in a neighborhood characteristic point set corresponding to the characteristic point T (i, j) is not less than a second threshold value;
s2, obtaining at least one point set according to a neighborhood feature point set corresponding to the feature point T (i, j) and the feature point T (i, j), wherein an included angle formed by connecting any two feature points in each point set in the at least one point set and the feature point T (i, j) is not more than a third threshold value;
s3, determining a characteristic point T (i, j +1) with the distance from the characteristic point T (i, j) not less than a fourth threshold value from the at least one point set;
s4, acquiring a neighborhood feature point set corresponding to the feature point T (i, j +1), and determining a feature point T (i, j +2) of which the distance from the feature point T (i, j +1) is not less than a fifth threshold value from the neighborhood feature point set corresponding to the feature point T (i, j +1), wherein the neighborhood feature point set corresponding to the feature point T (i, j +1) is not overlapped with the neighborhood feature point set corresponding to the feature point T (i, j);
s5, setting j to j +1, and repeatedly executing step S4 until the number of points in the neighborhood feature point set corresponding to the feature point T (i, j +1) is 0, to obtain a feature point set of the ith road edge instance in the p-th sub-map, where the feature point set of the ith road edge instance in the p-th sub-map includes the feature point T (i, j), the neighborhood feature point set corresponding to the feature point T (i, j), the feature point T (i, j +1), and the neighborhood feature point set corresponding to the feature point T (i, j + 1);
s6, repeating steps S1-S5 until feature points in the p-th sub-map are traversed, obtaining a feature point set of each road edge instance in the p-th sub-map, where the p-th sub-map is any one of the K sub-maps, and i and j are positive integers;
and obtaining a road edge map comprising a plurality of road edge examples according to the feature point sets of the road edge examples in the K sub-maps.
The above feature point T (i, j) represents the jth feature point in the ith road edge example.
Specifically, taking a 64-line lidar as an example, it produces about 1.2 to 1.3 million data points per second. The data points of a high-precision map are generally at the level of tens of millions. In consideration of the computational performance limit of the system, the road semantic map is generally processed block by block and in batches by area when the map data is processed, so the road semantic map is divided into a plurality of sub-maps, as shown in fig. 4.
Wherein, the division can be uniform division or arbitrary division; of course, the division may not be performed, and this scheme is not particularly limited.
And then, carrying out iterative search processing on the characteristic points in the road edge characteristic point set of each sub-map to obtain a plurality of characteristic point sets of road edge examples.
The iterative search process may include:
1) building a K-dimensional (KD) tree for each sub-map, as shown in FIG. 5 a;
2) randomly extracting a feature point of a certain road edge in each sub-map, searching for feature points within a certain neighborhood range of that feature point by using a k-nearest neighbor (KNN) search algorithm, and taking the feature point as an initial feature point if the number of feature points in the neighborhood is greater than a certain threshold;
for example, as shown in fig. 6a, a point in the circular region obtained by drawing a circle with a certain radius length with the feature point a as the center is a feature point (neighborhood feature point) in the neighborhood range of the feature point a. Since point D is not within this region, point D is not a neighborhood feature point of feature point a.
The initial feature point may also be determined based on other manners, which is not specifically limited in this embodiment.
3) Classifying the neighborhood feature points according to the included angle of the connection line of each neighborhood feature point and the initial feature point searched in the step 2), wherein the feature points with the same growth iteration direction are classified into one class;
wherein, for such linearly distributed objects along the road, there are generally two growth iteration directions for each initial feature point, as shown in fig. 5 b. The growth iteration direction may be understood as the direction of distribution of the road edges in space.
Specifically, the neighborhood feature points are classified, wherein an included angle formed by connecting any two feature points in the same point set with the feature point T (i, j) is not greater than a certain threshold. The angle may be calculated as a vector angle. As shown in FIG. 6b, connecting point A with points B, E, F and G gives the vectors AB, AE, AF and AG; the included angles between each pair of these vectors (AB and AE, AB and AF, AB and AG, AE and AF, AE and AG, AF and AG) are all less than a certain threshold. Connecting point A with point C gives the vector AC, and the included angles between AC and each of AB, AE, AF and AG are all greater than the threshold. Therefore, point B, point E, point F and point G are taken as one point set of the neighborhood points corresponding to point A (e.g. point set 1 in fig. 6a), and the point set where point C is located is taken as another point set (e.g. point set 2 in fig. 6a).
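The angle-based grouping of neighborhood points into growth-direction sets can be sketched as follows; comparing each point only against the first member of a group is a simplification of the pairwise criterion, and the angle threshold is illustrative:

```python
import math

def group_by_direction(center, neighbors, max_angle_deg=30.0):
    # Partition neighbourhood feature points into growth-direction sets: a
    # point joins a group when the vector from `center` to it encloses an
    # angle no larger than the threshold with the group's first member.
    def angle(u, v):
        dot = u[0] * v[0] + u[1] * v[1]
        nu, nv = math.hypot(*u), math.hypot(*v)
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

    groups = []  # each group: points sharing one growth iteration direction
    for p in neighbors:
        v = (p[0] - center[0], p[1] - center[1])
        for g in groups:
            w = (g[0][0] - center[0], g[0][1] - center[1])
            if angle(v, w) <= max_angle_deg:
                g.append(p)
                break
        else:
            groups.append([p])
    return groups
```

For a road edge, an interior initial feature point typically yields two groups (the two growth directions), while a point at the sub-map edge yields one.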
If the initial feature point is selected at the edge of the sub-map, all the neighborhood feature points are in one direction relative to the initial feature point, and only one growth iteration direction exists at the moment.
4) For each iteration direction, selecting the feature point farthest from the initial feature point among the neighborhood feature points as a new iterative search point, searching for feature points within a certain range of the neighborhood of the iterative search point by using KNN again, and screening out the feature points whose connection direction with the iterative search point is consistent with the current iteration direction as the feature points grown in this iteration.
That is, the neighborhood feature point set of the new iterative search point does not overlap with the neighborhood feature point set of the initial feature point.
As shown in fig. 6a, a feature point B farthest from the initial feature point in one iteration direction is taken as a new iteration search point corresponding to the iteration direction, and a feature point C farthest from the initial feature point in the other iteration direction is taken as a new iteration search point corresponding to the iteration direction.
5) Repeatedly executing the iterative growth process of step 4) in each iteration direction until the number of feature points grown in each iteration direction is 0, obtaining the feature point set of one road edge instance.
6) Repeating steps 2) to 5) until the road edge feature points in the sub-map have been completely traversed, and finally outputting the road edge sub-map containing a plurality of road edge instances, as shown in fig. 5 c.
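Steps 2) to 6) can be sketched as a simple region-growing loop. This brute-force version is only an approximation of the procedure above: a linear neighbor scan stands in for the KD tree, and it grows from every frontier point instead of selecting the farthest point per direction:

```python
import math

def grow_instances(points, radius=1.0, min_seed_neighbors=2):
    # points: list of (x, y) curb feature points; thresholds are illustrative.
    remaining = set(range(len(points)))

    def neighbors(i, pool):
        px, py = points[i]
        return [j for j in pool if j != i
                and math.hypot(points[j][0] - px, points[j][1] - py) <= radius]

    instances = []
    while remaining:
        seed = next(iter(remaining))
        if len(neighbors(seed, remaining)) < min_seed_neighbors:
            remaining.discard(seed)   # not a valid initial feature point
            continue
        instance = {seed}
        frontier = [seed]
        while frontier:               # iterative growth until nothing is added
            cur = frontier.pop()
            for j in neighbors(cur, remaining - instance):
                instance.add(j)
                frontier.append(j)
        instances.append(sorted(instance))
        remaining -= instance
    return instances
```

Each returned index set corresponds to one spatially continuous road edge instance within the sub-map.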
Road edges in the multiple road edge sub-maps may be divided by sub-map boundaries, so the road edge instances are renumbered in the sub-map merging process, and instances divided by a sub-map boundary are merged into the same instance.
As shown in fig. 5d, the road edge 1 of the sub-map 1 and the road edge 1 of the sub-map 2 are merged into one road edge example, and the road edge 3 of the sub-map 1 and the road edge 2 of the sub-map 2 are merged into one road edge example.
And merging all the road edge sub-maps by wholly renumbering all the road edge examples to obtain the road edge map containing a plurality of road edge examples.
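The boundary merging just described can be sketched as follows. The original only states that boundary-divided instances are renumbered and merged; the concrete criterion here (closest endpoints within a distance tolerance) and all names are illustrative assumptions:

```python
import numpy as np

def merge_submap_instances(instances, tol=1.0):
    """Merge road edge instances cut by sub-map boundaries.

    `instances` is a list of (N_i, 2) arrays of feature points collected
    from all sub-maps. Two instances are merged when their closest
    endpoints are within `tol` metres (a hypothetical criterion).
    Renumbering is implicit in the order of the returned list.
    """
    merged = [inst.copy() for inst in instances]
    changed = True
    while changed:
        changed = False
        for i in range(len(merged)):
            for j in range(i + 1, len(merged)):
                ends_i = [merged[i][0], merged[i][-1]]
                ends_j = [merged[j][0], merged[j][-1]]
                if min(np.linalg.norm(a - b) for a in ends_i for b in ends_j) <= tol:
                    merged[i] = np.vstack([merged[i], merged[j]])
                    del merged[j]
                    changed = True
                    break
            if changed:
                break
    return merged
```

With this criterion, the road edge 1 fragments of sub-map 1 and sub-map 2 in the fig. 5 d example would meet at the shared boundary and be joined into a single instance.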
In this embodiment, the road edge instances are automatically labeled based on the road edge feature point set in the road semantic map, so as to obtain the road edge map including the road edge instances. Compared with the current manual frame-by-frame labeling, this scheme greatly improves labeling efficiency; and because the road edge data are generated directly from the road edge feature point set in the road semantic map, the dotting errors of the manual labeling process are avoided, which further improves the reliability of the labeling.
The above embodiment is described by taking a road edge map including a plurality of road edge examples obtained by dividing based on a road semantic map as an example.
As another optional implementation manner (corresponding to the foregoing embodiment), the processing each feature point in the road edge feature point set to obtain a road edge map including a plurality of road edge instances includes:
s1, acquiring a characteristic point T (i, j) from a road edge characteristic point set corresponding to a coordinate system where the laser high-precision map is located, wherein the number of points in a neighborhood characteristic point set corresponding to the characteristic point T (i, j) is not less than a first threshold value;
s2, obtaining at least one point set according to a neighborhood feature point set corresponding to the feature point T (i, j) and the feature point T (i, j), wherein an included angle formed by connecting any two feature points in each point set in the at least one point set and the feature point T (i, j) is not more than a second threshold value;
s3, determining a characteristic point T (i, j +1) with the distance from the characteristic point T (i, j) not less than a third threshold value from the at least one point set;
s4, acquiring a neighborhood feature point set corresponding to the feature point T (i, j +1), and determining a feature point T (i, j +2) of which the distance from the feature point T (i, j +1) is not less than a fourth threshold value from the neighborhood feature point set corresponding to the feature point T (i, j +1), wherein the neighborhood feature point set corresponding to the feature point T (i, j +1) is not overlapped with the neighborhood feature point set corresponding to the feature point T (i, j);
s5, setting j to j +1, and repeatedly executing step S4 until the number of points in the neighborhood feature point set corresponding to the feature point T (i, j +1) is 0, to obtain a feature point set of an ith road edge instance in the road edge feature point set, where the feature point set of the ith road edge instance includes the feature point T (i, j), the neighborhood feature point set corresponding to the feature point T (i, j), the feature point T (i, j +1), and the neighborhood feature point set corresponding to the feature point T (i, j + 1);
s6, repeating steps S1-S5 until the feature points in the road edge feature point set have been traversed, obtaining a feature point set of each road edge instance in the road edge feature point set, where i and j are positive integers;
and obtaining a road edge map containing a plurality of road edge examples according to the characteristic point set of each road edge example in the road edge characteristic point set.
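Steps S1-S6 above amount to a region-growing clustering of the feature points. The following is a simplified sketch under stated assumptions: every newly found point is grown further (rather than only the farthest one, for robustness against branching), the neighborhood is a fixed radius, and all names and parameter values are illustrative:

```python
import numpy as np

def grow_road_edges(points, radius=1.5, min_pts=2):
    """Cluster road edge feature points into instances by iterative growth.

    A simplified sketch of steps S1-S6: starting from a seed point, the
    neighbourhood search is repeated from every newly found point until no
    new point is added; points already assigned are never revisited, so
    successive neighbourhood sets do not overlap. Instances with fewer
    than `min_pts` points are treated as too-sparse seeds and dropped.
    """
    unvisited = set(range(len(points)))
    instances = []
    while unvisited:
        seed = min(unvisited)               # deterministic seed choice (assumption)
        unvisited.discard(seed)
        inst, frontier = [seed], [seed]
        while frontier and unvisited:
            cur = frontier.pop()
            idxs = sorted(unvisited)
            d = np.linalg.norm(points[idxs] - points[cur], axis=1)
            new = [idxs[k] for k in np.flatnonzero(d <= radius)]
            unvisited.difference_update(new)
            inst.extend(new)                # growth stops when no new point is found
            frontier.extend(new)
        if len(inst) >= min_pts:            # S1: neighbourhood dense enough
            instances.append(inst)
    return instances
```

Each returned index list corresponds to the feature point set of one road edge instance; the loop over remaining unvisited points plays the role of S6's traversal.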
The difference from the above embodiment is that this embodiment processes the set of road edge feature points mapped in the laser high-precision map to obtain a road edge map.
In this embodiment, the road edge instances are automatically labeled based on the road edge feature point set, so as to obtain the road edge map including the road edge instances. Compared with the current manual frame-by-frame labeling, this scheme greatly improves labeling efficiency; and because the road edge data are generated directly from the road edge feature point set, the dotting errors of the manual labeling process are avoided, which further improves the reliability of the labeling.
The above specific implementation can refer to the description in the foregoing embodiments, and is not described herein again.
204. And obtaining road edge characteristic marking data corresponding to each frame of laser point cloud data in the laser point cloud data sequence according to the road edge map, the laser point cloud data sequence and the vehicle body pose data sequence.
As an optional implementation manner, the obtaining, according to the road edge map, the laser point cloud data sequence, and the vehicle body pose data sequence, road edge feature labeling data corresponding to each frame of laser point cloud data in the laser point cloud data sequence includes:
obtaining a plurality of reference road edge maps according to a plurality of frames of laser point cloud data in the laser point cloud data sequence and the vehicle body pose data sequence, wherein the plurality of reference road edge maps are obtained by mapping the road edge maps in coordinate systems where the plurality of frames of laser point cloud data are respectively located, and the plurality of reference road edge maps correspond to the plurality of frames of laser point cloud data one to one;
acquiring a plurality of preset areas from the plurality of reference road edge maps, wherein the size of each preset area is not smaller than that of each frame of laser point cloud data, and the plurality of preset areas correspond to the plurality of reference road edge maps one by one;
determining the score of each characteristic point in the plurality of preset areas according to the laser point cloud data sequence and the characteristic points in the plurality of preset areas;
and obtaining road edge feature marking data corresponding to each frame of laser point cloud data according to the score of each feature point in the preset areas, wherein the road edge feature marking data are obtained by processing the feature points with the score not less than a sixth threshold value.
Specifically, the coordinate system of the road edge map is the world coordinate system, while the coordinate system of the laser point cloud data is a local laser coordinate system, so the road edge map and the laser point cloud data sequence are spatially aligned by means of the vehicle body pose data sequence. The road edge map may be denoted as M_curb, the vehicle body pose corresponding to a frame of laser point cloud data is [R|T]_t, and the extrinsic parameter matrix between the laser coordinate system and the vehicle body coordinate system is [R|T]_calib. The global road edge map mapped into the current laser coordinate system may then be expressed as:

M_curb_trans = ([R|T]_t · [R|T]_calib)^(-1) · M_curb;

wherein M_curb_trans is the road edge map mapped into the laser coordinate system.
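This world-to-laser mapping can be sketched with 4x4 homogeneous transforms. The direction conventions assumed here (the pose maps body coordinates to world, the extrinsics map laser coordinates to body) and all names are assumptions for illustration:

```python
import numpy as np

def map_to_laser_frame(curb_pts_world, T_body_to_world, T_laser_to_body):
    """Map road edge map points from the world frame into one frame's
    laser coordinate system by inverting the composed pose and extrinsics.

    curb_pts_world: (N, 3) array of road edge map points in world coordinates.
    Both transforms are 4x4 homogeneous matrices.
    """
    T = np.linalg.inv(T_body_to_world @ T_laser_to_body)
    homo = np.hstack([curb_pts_world, np.ones((len(curb_pts_world), 1))])
    return (T @ homo.T).T[:, :3]
```

Applying this per frame, with each frame's own pose, yields the plurality of reference road edge maps described above.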
That is to say, the road edge map is respectively mapped to the coordinate system where each frame of laser point cloud data is located, and a plurality of reference road edge maps are obtained.
Then, based on the coarse-grained extraction and the fine-grained extraction, local road edge map extraction is performed on each reference road edge map.
Specifically, coarse-grained extraction is achieved as follows: a rectangular area is selected under the current coordinate system, for example an area with the range of (-40m, 70m) in the x direction and the range of (-30m, 30m) in the y direction, and the road edge map within the rectangular area is retained by filtering out the road edge map outside the rectangular area.
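The coarse-grained step is a simple rectangular crop, which can be sketched as a boolean mask (the default ranges reuse the example values in the text; the function name is an assumption):

```python
import numpy as np

def coarse_extract(curb_pts, x_range=(-40.0, 70.0), y_range=(-30.0, 30.0)):
    """Keep only road edge map points inside the rectangular area around
    the origin of the current laser coordinate system."""
    x, y = curb_pts[:, 0], curb_pts[:, 1]
    mask = (x >= x_range[0]) & (x <= x_range[1]) & \
           (y >= y_range[0]) & (y <= y_range[1])
    return curb_pts[mask]
```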
And then performing fine-grained extraction on the road edge map in the retained rectangular area. For example, a laser road surface point cloud may be obtained based on a laser semantic segmentation network, a road surface segmentation algorithm, or the like; a KD tree is constructed for the laser road surface point cloud, and for any point P of the road edge map in the rectangular area, the road surface point cloud in its neighborhood is searched by using KNN, the number of road surface points obtained by the search being recorded as Neigh_P; a score function is then determined based on the spatial distribution of the laser road surface point cloud relative to the road edge map and on the global distance map, and may be expressed as:
Score(P) = t · Neigh_P + DisMap(P);
wherein DisMap(P) represents the energy value of the point P in the global distance map; the closer to the origin of the laser coordinate system, the higher the energy value, and conversely, the lower the energy value; t may be a predetermined positive number.
Scores are calculated for the feature points of the road edge map within the rectangular area, the feature points whose scores are greater than a certain threshold are retained, and road edge feature marking data corresponding to each frame of laser point cloud data are obtained based on these retained feature points.
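The scoring and retention step can be sketched as follows. A brute-force neighbor count stands in for the KD-tree/KNN search, and a callable stands in for the global distance map lookup; the radius, threshold, and all names are illustrative assumptions:

```python
import numpy as np

def score_points(curb_pts, road_pts, dis_map_energy, t=1.0, radius=0.5, thresh=2.0):
    """Score each road edge map point as Score(P) = t * Neigh_P + DisMap(P).

    Neigh_P is the number of road surface (pavement) points within `radius`
    of P, and `dis_map_energy(P)` plays the role of the global distance map,
    returning higher energy near the laser origin. Points whose score
    reaches `thresh` are retained.
    """
    kept = []
    for p in curb_pts:
        neigh = np.sum(np.linalg.norm(road_pts - p, axis=1) <= radius)
        if t * neigh + dis_map_energy(p) >= thresh:
            kept.append(p)
    return np.array(kept)
```

Points well supported by nearby road surface returns and close to the vehicle score high and survive; isolated or distant map points are filtered out.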
As shown in fig. 7, as an alternative implementation manner, after feature points with scores greater than a certain threshold are obtained, the screened road edge map points may be uniformly sampled to obtain road edge feature point annotation data.
As another alternative implementation, the road edge map points with the scores larger than a certain threshold are mapped to the two-dimensional grid map of the x-y plane of the current coordinate system to obtain road edge occupied grid marking data.
As another optional implementation manner, according to feature points with scores greater than a certain threshold, laser point cloud data in the road edge grid marking data is extracted to obtain road edge original point cloud semantic marking data.
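The occupied-grid alternative above can be sketched as a projection of the retained points onto an x-y grid. The resolution and extents are illustrative assumptions (the extents reuse the coarse-extraction example ranges):

```python
import numpy as np

def to_occupancy_grid(curb_pts, res=0.2, x_range=(-40.0, 70.0), y_range=(-30.0, 30.0)):
    """Map screened road edge points onto a 2-D occupancy grid in the x-y
    plane of the current laser coordinate system, yielding road-edge
    occupied-grid labels (1 = occupied by a road edge point)."""
    nx = int(round((x_range[1] - x_range[0]) / res))
    ny = int(round((y_range[1] - y_range[0]) / res))
    grid = np.zeros((nx, ny), dtype=np.uint8)
    ix = ((curb_pts[:, 0] - x_range[0]) / res).astype(int)
    iy = ((curb_pts[:, 1] - y_range[0]) / res).astype(int)
    ok = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)   # drop out-of-range points
    grid[ix[ok], iy[ok]] = 1
    return grid
```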
The above description takes only three kinds of road edge feature labeling data as examples; the data may also take other forms, which is not specifically limited in this embodiment.
In this embodiment, the road edge map and the single-frame laser point cloud data are combined to obtain the score of each feature point, and then road edge marking data in a specific data form matched with the single-frame original laser point cloud data are screened out. By adopting the method, the marking efficiency is high, and compared with manual frame-by-frame marking, the scheme has stronger global consistency.
It should be noted that each threshold in the embodiment of the present application may be set arbitrarily, and the present solution is not limited specifically.
According to the method and the device, a laser high-precision map is obtained based on a vehicle body pose data sequence and a laser point cloud data sequence, and a road edge feature point set is obtained according to the laser high-precision map and the laser point cloud data sequence; each feature point in the road edge feature point set is then processed to obtain a road edge map containing a plurality of road edge instances; and road edge feature labeling data corresponding to each frame of laser point cloud data in the laser point cloud data sequence are obtained according to the road edge map, the laser point cloud data sequence and the vehicle body pose data sequence. With this method, the road edge map is a prerequisite for automatically generating single-frame road edge data; relative to road edge data generated frame by frame, the road edge data in the road edge map have better consistency in the world coordinate system, and the road edge map obtained from multi-frame data has better data quality in obstacle-occluded areas, so the confidence and accuracy of the road edge labeling data are improved. Road edge labeling data are then obtained based on the road edge map and each frame of laser point cloud data. Compared with the existing manual labeling mode, the scheme effectively improves data labeling efficiency and the prediction performance of the model on the premise of ensuring road edge data precision.
Referring to fig. 8, a road edge data labeling apparatus provided in this embodiment of the present application includes an obtaining module 801 and a processing module 802, which are specifically as follows:
an obtaining module 801, configured to obtain a vehicle body pose data sequence of a host vehicle and a laser point cloud data sequence including peripheral environment information of the host vehicle;
a processing module 802 configured to:
obtaining a laser high-precision map according to the vehicle body pose data sequence and the laser point cloud data sequence;
obtaining a road edge feature point set according to the laser high-precision map and the laser point cloud data sequence;
processing each characteristic point in the road edge characteristic point set to obtain a road edge map containing a plurality of road edge examples;
and obtaining road edge characteristic marking data corresponding to each frame of laser point cloud data in the laser point cloud data sequence according to the road edge map, the laser point cloud data sequence and the vehicle body pose data sequence.
As an optional implementation manner, the processing module 802 is configured to: performing semantic segmentation processing on the laser point cloud data sequence to obtain laser point cloud semantic information; obtaining a road semantic map according to the laser point cloud semantic information and the laser high-precision map; and extracting road edge feature points of the road semantic map to obtain a road edge feature point set in the road semantic map.
As an optional implementation manner, the road semantic map includes a first point cloud and a second point cloud, and the processing module 802 is further configured to: mapping the road semantic map to a two-dimensional grid map to obtain a mapped road semantic map; acquiring a candidate grid from the mapped road semantic map, wherein the candidate grid is a grid with the first point cloud and the second point cloud, and the height difference between the first point cloud and the second point cloud in the road semantic map is not greater than a first threshold value; and obtaining a road edge feature point set according to the first point cloud and the second point cloud in the candidate grid.
As an optional implementation manner, the processing module 802 is further configured to: performing iterative search processing on feature points in a road edge feature point set of K sub-maps to obtain a feature point set of a plurality of road edge examples, wherein the K sub-maps are obtained by segmenting the road semantic map, and K is an integer not less than 2; wherein, for the p-th sub-map of the K sub-maps, the steps S1-S6 are executed:
s1, obtaining a characteristic point T (i, j) from the p-th sub map, wherein the number of points in a neighborhood characteristic point set corresponding to the characteristic point T (i, j) is not less than a second threshold value;
s2, obtaining at least one point set according to a neighborhood feature point set corresponding to the feature point T (i, j) and the feature point T (i, j), wherein an included angle formed by connecting any two feature points in each point set in the at least one point set and the feature point T (i, j) is not more than a third threshold value;
s3, determining a characteristic point T (i, j +1) with the distance from the characteristic point T (i, j) not less than a fourth threshold value from the at least one point set;
s4, acquiring a neighborhood feature point set corresponding to the feature point T (i, j +1), and determining a feature point T (i, j +2) of which the distance from the feature point T (i, j +1) is not less than a fifth threshold value from the neighborhood feature point set corresponding to the feature point T (i, j +1), wherein the neighborhood feature point set corresponding to the feature point T (i, j +1) is not overlapped with the neighborhood feature point set corresponding to the feature point T (i, j);
s5, setting j to j +1, and repeatedly executing step S4 until the number of points in the neighborhood feature point set corresponding to the feature point T (i, j +1) is 0, to obtain a feature point set of the ith road edge instance in the p-th sub-map, where the feature point set of the ith road edge instance in the p-th sub-map includes the feature point T (i, j), the neighborhood feature point set corresponding to the feature point T (i, j), the feature point T (i, j +1), and the neighborhood feature point set corresponding to the feature point T (i, j + 1);
s6, repeating steps S1-S5 until feature points in the p-th sub-map are traversed, obtaining a feature point set of each road edge instance in the p-th sub-map, where the p-th sub-map is any one of the K sub-maps, and i and j are positive integers;
and obtaining a road edge map comprising a plurality of road edge examples according to the feature point sets of the road edge examples in the K sub-maps.
As another optional implementation manner, the processing module 802 is configured to: semantic feature extraction is carried out on each frame of laser point cloud data in the laser point cloud data sequence, and a road edge feature point set of each frame of laser point cloud data in the laser point cloud data sequence is obtained; and mapping the road edge feature point set of each frame of laser point cloud data in the laser point cloud data sequence to the laser high-precision map to obtain a road edge feature point set corresponding to a coordinate system where the laser high-precision map is located.
As an optional implementation manner, the processing module 802 is further configured to:
s1, acquiring a characteristic point T (i, j) from a road edge characteristic point set corresponding to a coordinate system where the laser high-precision map is located, wherein the number of points in a neighborhood characteristic point set corresponding to the characteristic point T (i, j) is not less than a first threshold value;
s2, obtaining at least one point set according to a neighborhood feature point set corresponding to the feature point T (i, j) and the feature point T (i, j), wherein an included angle formed by connecting any two feature points in each point set in the at least one point set and the feature point T (i, j) is not more than a second threshold value;
s3, determining a characteristic point T (i, j +1) with the distance from the characteristic point T (i, j) not less than a third threshold value from the at least one point set;
s4, acquiring a neighborhood feature point set corresponding to the feature point T (i, j +1), and determining a feature point T (i, j +2) of which the distance from the feature point T (i, j +1) is not less than a fourth threshold value from the neighborhood feature point set corresponding to the feature point T (i, j +1), wherein the neighborhood feature point set corresponding to the feature point T (i, j +1) is not overlapped with the neighborhood feature point set corresponding to the feature point T (i, j);
s5, setting j to j +1, and repeatedly executing step S4 until the number of points in the neighborhood feature point set corresponding to the feature point T (i, j +1) is 0, to obtain a feature point set of an ith road edge instance in the road edge feature point set, where the feature point set of the ith road edge instance includes the feature point T (i, j), the neighborhood feature point set corresponding to the feature point T (i, j), the feature point T (i, j +1), and the neighborhood feature point set corresponding to the feature point T (i, j + 1);
s6, repeating steps S1-S5 until the feature points in the road edge feature point set have been traversed, obtaining a feature point set of each road edge instance in the road edge feature point set, where i and j are positive integers;
and obtaining a road edge map containing a plurality of road edge examples according to the characteristic point set of each road edge example in the road edge characteristic point set.
As an optional implementation manner, the processing module 802 is further configured to: obtaining a plurality of reference road edge maps according to a plurality of frames of laser point cloud data in the laser point cloud data sequence and the vehicle body pose data sequence, wherein the plurality of reference road edge maps are obtained by mapping the road edge maps in coordinate systems where the plurality of frames of laser point cloud data are respectively located, and the plurality of reference road edge maps correspond to the plurality of frames of laser point cloud data one to one; acquiring a plurality of preset areas from the plurality of reference road edge maps, wherein the size of each preset area is not smaller than that of each frame of laser point cloud data, and the plurality of preset areas correspond to the plurality of reference road edge maps one by one; determining the score of each characteristic point in the plurality of preset areas according to the laser point cloud data sequence and the characteristic points in the plurality of preset areas; and obtaining road edge feature marking data corresponding to each frame of laser point cloud data according to the score of each feature point in the preset areas, wherein the road edge feature marking data are obtained by processing the feature points with the score not less than a sixth threshold value.
It should be noted that the obtaining module 801 and the processing module 802 shown in fig. 8 are configured to execute the relevant steps of the above-mentioned road edge data labeling method.
In this embodiment, the road edge data labeling device is presented in a module form. A "module" herein may refer to an application-specific integrated circuit (ASIC), a processor and memory that execute one or more software or firmware programs, an integrated logic circuit, and/or other devices that may provide the described functionality.
In addition, the above acquiring module 801 and the processing module 802 may be implemented by the processor 902 of the road edge data labeling apparatus shown in fig. 9.
Fig. 9 is a schematic hardware structure diagram of a road edge data labeling device according to an embodiment of the present application. The road edge data labeling apparatus 900 shown in fig. 9 (the apparatus 900 may be a computer device) includes a memory 901, a processor 902, a communication interface 903, and a bus 904. The memory 901, the processor 902 and the communication interface 903 are connected to each other by a bus 904.
The memory 901 may be a read-only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM).
The memory 901 may store a program, and when the program stored in the memory 901 is executed by the processor 902, the processor 902 and the communication interface 903 are used for executing the steps of the road edge data labeling method according to the embodiment of the present application.
The processor 902 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), a graphics processing unit (GPU), or one or more integrated circuits, and is configured to execute related programs to implement the functions that need to be executed by the units in the road edge data labeling apparatus according to the embodiment of the present application, or to execute the road edge data labeling method according to the embodiment of the present application.
The processor 902 may also be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the road edge data labeling method of the present application may be implemented by hardware integrated logic circuits in the processor 902 or by instructions in the form of software. The processor 902 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed accordingly. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable ROM, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 901, and the processor 902 reads the information in the memory 901 and completes, in combination with its hardware, the functions required to be executed by the units included in the road edge data labeling apparatus according to the embodiment of the present application, or executes the road edge data labeling method according to the embodiment of the present application.
The communication interface 903 enables communication between the apparatus 900 and other devices or communication networks using transceiver means, such as, but not limited to, a transceiver. For example, the data may be acquired through the communication interface 903.
Bus 904 may include a pathway to transfer information between various components of device 900, such as memory 901, processor 902, and communication interface 903.
It should be noted that although the apparatus 900 shown in fig. 9 shows only the memory, the processor, and the communication interface, in a specific implementation, those skilled in the art will appreciate that the apparatus 900 also includes other components necessary for normal operation. Also, those skilled in the art will appreciate that the apparatus 900 may further include hardware components for performing other additional functions, according to particular needs. Furthermore, those skilled in the art will appreciate that the apparatus 900 may also include only those components necessary to implement the embodiments of the present application, and need not include all of the components shown in fig. 9.
The application provides a chip system, which is applied to electronic equipment; the chip system comprises one or more interface circuits, and one or more processors; the interface circuit and the processor are interconnected through a line; the interface circuit is to receive a signal from a memory of the electronic device and to send the signal to the processor, the signal comprising computer instructions stored in the memory; when the processor executes the computer instructions, the electronic device performs the method as provided in any one of the possible embodiments of the first aspect.
Embodiments of the present application further provide an intelligent driving vehicle, which includes a traveling system, a sensing system, a control system, and a computer system, wherein the computer system is configured to execute the method as provided in any one of the possible implementation manners of the first aspect.
Embodiments of the present application also provide a computer-readable storage medium having stored therein instructions, which when executed on a computer or processor, cause the computer or processor to perform one or more steps of any one of the methods described above.
The embodiment of the application also provides a computer program product containing instructions. The computer program product, when run on a computer or processor, causes the computer or processor to perform one or more steps of any of the methods described above.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the specific descriptions of the corresponding steps in the foregoing method embodiments, and are not described herein again.
It should be understood that in the description of the present application, unless otherwise indicated, "/" indicates a relationship where the objects associated before and after are an "or", e.g., a/B may indicate a or B; wherein A and B can be singular or plural. Also, in the description of the present application, "a plurality" means two or more than two unless otherwise specified. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple. In addition, in order to facilitate clear description of technical solutions of the embodiments of the present application, in the embodiments of the present application, terms such as "first" and "second" are used to distinguish the same items or similar items having substantially the same functions and actions. Those skilled in the art will appreciate that the terms "first," "second," etc. do not denote any order or quantity, nor do the terms "first," "second," etc. denote any order or importance. Also, in the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as examples, illustrations or illustrations. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present relevant concepts in a concrete fashion for ease of understanding.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the described division into units is only a logical functional division; other divisions are possible in practice. For example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. The mutual coupling, direct coupling, or communication connections shown or discussed may be indirect couplings or communication connections between devices or units through interfaces, and may be electrical, mechanical, or in another form.
Units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed across a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of this application are generated wholly or partially. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in or transmitted over a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape, or a magnetic disk), an optical medium (e.g., a digital versatile disc (DVD)), or a semiconductor medium (e.g., a solid-state drive (SSD)).
The above description covers only specific implementations of the embodiments of this application, but the scope of the embodiments of this application is not limited thereto; any change or substitution within the technical scope disclosed in the embodiments of this application shall be covered by that scope. Therefore, the protection scope of the embodiments of this application shall be subject to the protection scope of the claims.

Claims (19)

1. A road edge data labeling method, characterized by comprising:
acquiring a vehicle body pose data sequence of an ego vehicle and a laser point cloud data sequence containing information on the surrounding environment of the ego vehicle, and obtaining a laser high-precision map according to the vehicle body pose data sequence and the laser point cloud data sequence;
obtaining a road edge feature point set according to the laser high-precision map and the laser point cloud data sequence;
processing each feature point in the road edge feature point set to obtain a road edge map containing a plurality of road edge instances;
and obtaining road edge feature marking data corresponding to each frame of laser point cloud data in the laser point cloud data sequence according to the road edge map, the laser point cloud data sequence, and the vehicle body pose data sequence.
2. The method according to claim 1, wherein the obtaining a road edge feature point set according to the laser high-precision map and the laser point cloud data sequence comprises:
processing the laser point cloud data sequence to obtain laser point cloud semantic information;
obtaining a road semantic map according to the laser point cloud semantic information and the laser high-precision map;
and extracting road edge feature points from the road semantic map to obtain the road edge feature point set in the road semantic map.
3. The method according to claim 2, wherein the road semantic map comprises a first point cloud and a second point cloud, and the extracting road edge feature points from the road semantic map to obtain the road edge feature point set in the road semantic map comprises:
mapping the road semantic map onto a two-dimensional grid map to obtain a mapped road semantic map;
acquiring a candidate grid from the mapped road semantic map, wherein the candidate grid is a grid containing both the first point cloud and the second point cloud, and the height difference between the first point cloud and the second point cloud in the road semantic map is not greater than a first threshold;
and obtaining the road edge feature point set according to the first point cloud and the second point cloud in the candidate grid.
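Outside the claim language, the candidate-grid test above can be illustrated with a short sketch: two semantic point clouds are binned onto a 2D grid, and only cells hit by both clouds whose mean height difference stays below a threshold are kept as road edge candidates. The grid cell size, the use of mean heights, and the threshold value are illustrative assumptions, not part of the claimed method.

```python
import numpy as np

def find_candidate_grids(ground_pts, curb_pts, cell=0.5, height_diff_max=0.3):
    """Map two semantic point clouds onto a 2D grid and keep cells that
    contain points from both clouds whose mean height difference does not
    exceed height_diff_max. Inputs are (N, 3) arrays of x, y, z."""
    def to_cells(pts):
        cells = {}
        for x, y, z in pts:
            key = (int(x // cell), int(y // cell))   # 2D grid index
            cells.setdefault(key, []).append(z)
        return cells

    g1, g2 = to_cells(ground_pts), to_cells(curb_pts)
    candidates = []
    for key in sorted(g1.keys() & g2.keys()):        # cells hit by both clouds
        dz = abs(np.mean(g1[key]) - np.mean(g2[key]))
        if dz <= height_diff_max:                    # low step: plausible road edge
            candidates.append(key)
    return candidates
```

A cell containing only road surface or only curb points never becomes a candidate, which is what restricts the later feature-point search to the boundary region.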
4. The method according to claim 2 or 3, wherein the processing each feature point in the road edge feature point set to obtain a road edge map containing a plurality of road edge instances comprises:
performing iterative search processing on feature points in road edge feature point sets of K sub-maps to obtain feature point sets of a plurality of road edge instances, wherein the K sub-maps are obtained by segmenting the road semantic map, and K is an integer not less than 2; wherein steps S1-S6 are performed for the p-th sub-map of the K sub-maps:
S1. obtaining a feature point T(i, j) from the p-th sub-map, wherein the number of points in a neighborhood feature point set corresponding to the feature point T(i, j) is not less than a second threshold;
S2. obtaining at least one point set according to the feature point T(i, j) and the neighborhood feature point set corresponding to the feature point T(i, j), wherein the included angle formed by connecting any two feature points in each of the at least one point set with the feature point T(i, j) is not greater than a third threshold;
S3. determining, from the at least one point set, a feature point T(i, j+1) whose distance from the feature point T(i, j) is not less than a fourth threshold;
S4. acquiring a neighborhood feature point set corresponding to the feature point T(i, j+1), and determining, from the neighborhood feature point set corresponding to the feature point T(i, j+1), a feature point T(i, j+2) whose distance from the feature point T(i, j+1) is not less than a fifth threshold, wherein the neighborhood feature point set corresponding to the feature point T(i, j+1) does not overlap the neighborhood feature point set corresponding to the feature point T(i, j);
S5. setting j = j + 1, and repeatedly executing step S4 until the number of points in the neighborhood feature point set corresponding to the feature point T(i, j+1) is 0, to obtain a feature point set of the i-th road edge instance in the p-th sub-map, wherein the feature point set of the i-th road edge instance in the p-th sub-map comprises the feature point T(i, j), the neighborhood feature point set corresponding to the feature point T(i, j), the feature point T(i, j+1), and the neighborhood feature point set corresponding to the feature point T(i, j+1);
S6. repeating steps S1-S5 until the feature points in the p-th sub-map have been traversed, to obtain a feature point set of each road edge instance in the p-th sub-map, wherein the p-th sub-map is any one of the K sub-maps, and i and j are positive integers;
and obtaining a road edge map containing a plurality of road edge instances according to the feature point sets of the road edge instances in the K sub-maps.
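The iterative search of steps S1-S6 amounts to greedily growing one road edge instance at a time: seed at an unvisited point with a sufficiently populated neighborhood, then keep stepping to the farthest point of the newest neighborhood until no unvisited neighbors remain. The sketch below follows that pattern under assumed parameters (search radius, minimum seed neighborhood size) and omits the angle test of step S2; it illustrates the idea and is not the claimed procedure itself.

```python
import numpy as np

def grow_edge_instances(points, radius=1.0, min_seed_neighbors=2):
    """Greedy instance grouping of 2D road edge feature points.

    Seeds a new instance at the lowest-index unvisited point that has
    enough unvisited neighbors (S1), then repeatedly jumps to the farthest
    point of the newest neighborhood (S3-S4, simplified) until that
    neighborhood is empty (the stop condition of S5)."""
    pts = np.asarray(points, dtype=float)
    unvisited = set(range(len(pts)))
    instances = []

    def neighbors(idx):
        # unvisited points within the search radius of pts[idx]
        return [k for k in sorted(unvisited)
                if k != idx and np.linalg.norm(pts[k] - pts[idx]) <= radius]

    while unvisited:
        seed = min(unvisited)
        nbrs = neighbors(seed)
        if len(nbrs) < min_seed_neighbors:
            unvisited.discard(seed)            # too sparse to seed an instance
            continue
        instance = {seed} | set(nbrs)
        unvisited -= instance
        current = max(nbrs, key=lambda k: np.linalg.norm(pts[k] - pts[seed]))
        while True:
            nxt = neighbors(current)           # visited points are excluded,
            if not nxt:                        # so neighborhoods never overlap
                break
            instance |= set(nxt)
            unvisited -= set(nxt)
            current = max(nxt, key=lambda k: np.linalg.norm(pts[k] - pts[current]))
        instances.append(sorted(instance))
    return instances
```

Because visited points are removed from the search set, each neighborhood is disjoint from the previous one, mirroring the non-overlap condition of step S4.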
5. The method according to claim 1, wherein the obtaining a road edge feature point set according to the laser high-precision map and the laser point cloud data sequence comprises:
performing semantic feature extraction on each frame of laser point cloud data in the laser point cloud data sequence to obtain a road edge feature point set of each frame of laser point cloud data in the laser point cloud data sequence;
and mapping the road edge feature point set of each frame of laser point cloud data in the laser point cloud data sequence into the laser high-precision map to obtain a road edge feature point set in the coordinate system in which the laser high-precision map is located.
6. The method according to claim 5, wherein the processing each feature point in the road edge feature point set to obtain a road edge map containing a plurality of road edge instances comprises:
S1. acquiring a feature point T(i, j) from the road edge feature point set corresponding to the coordinate system in which the laser high-precision map is located, wherein the number of points in a neighborhood feature point set corresponding to the feature point T(i, j) is not less than a first threshold;
S2. obtaining at least one point set according to the feature point T(i, j) and the neighborhood feature point set corresponding to the feature point T(i, j), wherein the included angle formed by connecting any two feature points in each of the at least one point set with the feature point T(i, j) is not greater than a second threshold;
S3. determining, from the at least one point set, a feature point T(i, j+1) whose distance from the feature point T(i, j) is not less than a third threshold;
S4. acquiring a neighborhood feature point set corresponding to the feature point T(i, j+1), and determining, from the neighborhood feature point set corresponding to the feature point T(i, j+1), a feature point T(i, j+2) whose distance from the feature point T(i, j+1) is not less than a fourth threshold, wherein the neighborhood feature point set corresponding to the feature point T(i, j+1) does not overlap the neighborhood feature point set corresponding to the feature point T(i, j);
S5. setting j = j + 1, and repeatedly executing step S4 until the number of points in the neighborhood feature point set corresponding to the feature point T(i, j+1) is 0, to obtain a feature point set of the i-th road edge instance in the road edge feature point set, wherein the feature point set of the i-th road edge instance comprises the feature point T(i, j), the neighborhood feature point set corresponding to the feature point T(i, j), the feature point T(i, j+1), and the neighborhood feature point set corresponding to the feature point T(i, j+1);
S6. repeating steps S1-S5 until the feature points in the road edge feature point set have been traversed, to obtain a feature point set of each road edge instance in the road edge feature point set, wherein i and j are positive integers;
and obtaining a road edge map containing a plurality of road edge instances according to the feature point set of each road edge instance in the road edge feature point set.
7. The method according to any one of claims 1 to 6, wherein the obtaining road edge feature marking data corresponding to each frame of laser point cloud data in the laser point cloud data sequence according to the road edge map, the laser point cloud data sequence, and the vehicle body pose data sequence comprises:
obtaining a plurality of reference road edge maps according to a plurality of frames of laser point cloud data in the laser point cloud data sequence and the vehicle body pose data sequence, wherein the plurality of reference road edge maps are obtained by mapping the road edge map into the coordinate systems in which the plurality of frames of laser point cloud data are respectively located, and the plurality of reference road edge maps correspond one-to-one to the plurality of frames of laser point cloud data;
acquiring a plurality of preset areas from the plurality of reference road edge maps, wherein the size of each preset area is not smaller than the size of each frame of laser point cloud data, and the plurality of preset areas correspond one-to-one to the plurality of reference road edge maps;
determining a score for each feature point in the plurality of preset areas according to the laser point cloud data sequence and the feature points in the plurality of preset areas;
and obtaining the road edge feature marking data corresponding to each frame of laser point cloud data according to the scores of the feature points in the plurality of preset areas, wherein the road edge feature marking data are obtained by processing the feature points whose scores are not less than a sixth threshold.
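One plausible reading of the scoring step above is a voting scheme: a map feature point is supported by a laser frame when some return in that frame lies within a tolerance of it, and only points supported by enough frames survive into the labeling data. The sketch below follows that reading; the distance tolerance and vote threshold are illustrative assumptions, not values disclosed in the claim.

```python
import numpy as np

def score_and_filter(feature_pts, frames, tol=0.2, min_votes=2):
    """Vote for each 2D feature point with every laser frame that has a
    return within tol of it; keep points with at least min_votes votes.

    feature_pts: (M, 2) candidate road edge points in a common frame.
    frames: iterable of (N, 2) arrays of laser returns, already mapped
            into the same coordinate system."""
    feature_pts = np.asarray(feature_pts, dtype=float)
    votes = np.zeros(len(feature_pts), dtype=int)
    for frame in frames:
        frame = np.asarray(frame, dtype=float)
        for i, p in enumerate(feature_pts):
            # nearest return in this frame supports the feature point
            if np.min(np.linalg.norm(frame - p, axis=1)) <= tol:
                votes[i] += 1
    keep = votes >= min_votes
    return feature_pts[keep], votes
```

Points observed in only one frame (e.g. from a moving obstacle mistaken for a curb) fail the vote threshold and are dropped, which matches the intent of scoring against the whole point cloud sequence.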
8. A road edge data labeling device, characterized by comprising:
an acquisition module, configured to acquire a vehicle body pose data sequence of an ego vehicle and a laser point cloud data sequence containing information on the surrounding environment of the ego vehicle;
a processing module, configured to:
obtain a laser high-precision map according to the vehicle body pose data sequence and the laser point cloud data sequence;
obtain a road edge feature point set according to the laser high-precision map and the laser point cloud data sequence;
process each feature point in the road edge feature point set to obtain a road edge map containing a plurality of road edge instances;
and obtain road edge feature marking data corresponding to each frame of laser point cloud data in the laser point cloud data sequence according to the road edge map, the laser point cloud data sequence, and the vehicle body pose data sequence.
9. The device according to claim 8, wherein the processing module is configured to:
process the laser point cloud data sequence to obtain laser point cloud semantic information;
obtain a road semantic map according to the laser point cloud semantic information and the laser high-precision map;
and extract road edge feature points from the road semantic map to obtain the road edge feature point set in the road semantic map.
10. The device according to claim 9, wherein the road semantic map comprises a first point cloud and a second point cloud, and the processing module is further configured to:
map the road semantic map onto a two-dimensional grid map to obtain a mapped road semantic map;
acquire a candidate grid from the mapped road semantic map, wherein the candidate grid is a grid containing both the first point cloud and the second point cloud, and the height difference between the first point cloud and the second point cloud in the road semantic map is not greater than a first threshold;
and obtain the road edge feature point set according to the first point cloud and the second point cloud in the candidate grid.
11. The device according to claim 9 or 10, wherein the processing module is further configured to:
perform iterative search processing on feature points in road edge feature point sets of K sub-maps to obtain feature point sets of a plurality of road edge instances, wherein the K sub-maps are obtained by segmenting the road semantic map, and K is an integer not less than 2; wherein steps S1-S6 are performed for the p-th sub-map of the K sub-maps:
S1. obtaining a feature point T(i, j) from the p-th sub-map, wherein the number of points in a neighborhood feature point set corresponding to the feature point T(i, j) is not less than a second threshold;
S2. obtaining at least one point set according to the feature point T(i, j) and the neighborhood feature point set corresponding to the feature point T(i, j), wherein the included angle formed by connecting any two feature points in each of the at least one point set with the feature point T(i, j) is not greater than a third threshold;
S3. determining, from the at least one point set, a feature point T(i, j+1) whose distance from the feature point T(i, j) is not less than a fourth threshold;
S4. acquiring a neighborhood feature point set corresponding to the feature point T(i, j+1), and determining, from the neighborhood feature point set corresponding to the feature point T(i, j+1), a feature point T(i, j+2) whose distance from the feature point T(i, j+1) is not less than a fifth threshold, wherein the neighborhood feature point set corresponding to the feature point T(i, j+1) does not overlap the neighborhood feature point set corresponding to the feature point T(i, j);
S5. setting j = j + 1, and repeatedly executing step S4 until the number of points in the neighborhood feature point set corresponding to the feature point T(i, j+1) is 0, to obtain a feature point set of the i-th road edge instance in the p-th sub-map, wherein the feature point set of the i-th road edge instance in the p-th sub-map comprises the feature point T(i, j), the neighborhood feature point set corresponding to the feature point T(i, j), the feature point T(i, j+1), and the neighborhood feature point set corresponding to the feature point T(i, j+1);
S6. repeating steps S1-S5 until the feature points in the p-th sub-map have been traversed, to obtain a feature point set of each road edge instance in the p-th sub-map, wherein the p-th sub-map is any one of the K sub-maps, and i and j are positive integers;
and obtain a road edge map containing a plurality of road edge instances according to the feature point sets of the road edge instances in the K sub-maps.
12. The device according to claim 8, wherein the processing module is configured to:
perform semantic feature extraction on each frame of laser point cloud data in the laser point cloud data sequence to obtain a road edge feature point set of each frame of laser point cloud data in the laser point cloud data sequence;
and map the road edge feature point set of each frame of laser point cloud data in the laser point cloud data sequence into the laser high-precision map to obtain a road edge feature point set in the coordinate system in which the laser high-precision map is located.
13. The device according to claim 12, wherein the processing module is further configured to perform steps S1-S6:
S1. acquiring a feature point T(i, j) from the road edge feature point set corresponding to the coordinate system in which the laser high-precision map is located, wherein the number of points in a neighborhood feature point set corresponding to the feature point T(i, j) is not less than a first threshold;
S2. obtaining at least one point set according to the feature point T(i, j) and the neighborhood feature point set corresponding to the feature point T(i, j), wherein the included angle formed by connecting any two feature points in each of the at least one point set with the feature point T(i, j) is not greater than a second threshold;
S3. determining, from the at least one point set, a feature point T(i, j+1) whose distance from the feature point T(i, j) is not less than a third threshold;
S4. acquiring a neighborhood feature point set corresponding to the feature point T(i, j+1), and determining, from the neighborhood feature point set corresponding to the feature point T(i, j+1), a feature point T(i, j+2) whose distance from the feature point T(i, j+1) is not less than a fourth threshold, wherein the neighborhood feature point set corresponding to the feature point T(i, j+1) does not overlap the neighborhood feature point set corresponding to the feature point T(i, j);
S5. setting j = j + 1, and repeatedly executing step S4 until the number of points in the neighborhood feature point set corresponding to the feature point T(i, j+1) is 0, to obtain a feature point set of the i-th road edge instance in the road edge feature point set, wherein the feature point set of the i-th road edge instance comprises the feature point T(i, j), the neighborhood feature point set corresponding to the feature point T(i, j), the feature point T(i, j+1), and the neighborhood feature point set corresponding to the feature point T(i, j+1);
S6. repeating steps S1-S5 until the feature points in the road edge feature point set have been traversed, to obtain a feature point set of each road edge instance in the road edge feature point set, wherein i and j are positive integers;
and obtaining a road edge map containing a plurality of road edge instances according to the feature point set of each road edge instance in the road edge feature point set.
14. The device according to any one of claims 8 to 13, wherein the processing module is further configured to:
obtain a plurality of reference road edge maps according to a plurality of frames of laser point cloud data in the laser point cloud data sequence and the vehicle body pose data sequence, wherein the plurality of reference road edge maps are obtained by mapping the road edge map into the coordinate systems in which the plurality of frames of laser point cloud data are respectively located, and the plurality of reference road edge maps correspond one-to-one to the plurality of frames of laser point cloud data;
acquire a plurality of preset areas from the plurality of reference road edge maps, wherein the size of each preset area is not smaller than the size of each frame of laser point cloud data, and the plurality of preset areas correspond one-to-one to the plurality of reference road edge maps;
determine a score for each feature point in the plurality of preset areas according to the laser point cloud data sequence and the feature points in the plurality of preset areas;
and obtain the road edge feature marking data corresponding to each frame of laser point cloud data according to the scores of the feature points in the plurality of preset areas, wherein the road edge feature marking data are obtained by processing the feature points whose scores are not less than a sixth threshold.
15. A road edge data labeling device, characterized by comprising a processor and a memory, wherein the memory is configured to store program code, and the processor is configured to invoke the program code to perform the method according to any one of claims 1 to 7.
16. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 7.
17. A computer program product, characterized in that, when the computer program product runs on a computer, the computer is caused to perform the method according to any one of claims 1 to 7.
18. A chip system, characterized in that the chip system is applied to an electronic device; the chip system comprises one or more interface circuits and one or more processors; the interface circuits and the processors are interconnected by lines; the interface circuits are configured to receive a signal from a memory of the electronic device and send the signal to the processors, the signal comprising computer instructions stored in the memory; and when the processors execute the computer instructions, the electronic device performs the method according to any one of claims 1 to 7.
19. An intelligent driving vehicle, characterized by comprising a travel system, a sensing system, a control system, and a computer system, wherein the computer system is configured to perform the method according to any one of claims 1 to 7.
CN202110932824.2A 2021-08-13 2021-08-13 Road edge data labeling method, related system and storage medium Pending CN113822332A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110932824.2A CN113822332A (en) 2021-08-13 2021-08-13 Road edge data labeling method, related system and storage medium


Publications (1)

Publication Number Publication Date
CN113822332A 2021-12-21

Family

ID=78922902

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110932824.2A Pending CN113822332A (en) 2021-08-13 2021-08-13 Road edge data labeling method, related system and storage medium

Country Status (1)

Country Link
CN (1) CN113822332A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114419190A (en) * 2022-01-11 2022-04-29 长沙慧联智能科技有限公司 Grid map visual guiding line generation method and device
CN116449335A (en) * 2023-06-14 2023-07-18 上海木蚁机器人科技有限公司 Method and device for detecting drivable area, electronic device and storage medium
CN116449335B (en) * 2023-06-14 2023-09-01 上海木蚁机器人科技有限公司 Method and device for detecting drivable area, electronic device and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination