WO2021063756A1 - Improved trajectory estimation based on ground truth - Google Patents

Improved trajectory estimation based on ground truth

Info

Publication number
WO2021063756A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
sensor
odometry
scan points
trajectory
Prior art date
Application number
PCT/EP2020/076467
Other languages
French (fr)
Inventor
Cesar LOBO-CASTILLO
Pavel JIROUTEK
Jan OLSINA
Miroslav ZIMA
Original Assignee
Valeo Schalter Und Sensoren Gmbh
Priority date
Filing date
Publication date
Application filed by Valeo Schalter Und Sensoren Gmbh filed Critical Valeo Schalter Und Sensoren Gmbh
Publication of WO2021063756A1


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments

Definitions

  • the present invention refers to a method for providing accurate trajectory information, in particular for use in a validation of a driving support system of a vehicle.
  • the present invention also refers to a validation system, in particular for validation of an advanced driver assistance system, based on accurate trajectory information, comprising a receiver for receiving position information from a Global Navigation Satellite System, at least one environment sensor for providing sensor information covering an environment of the vehicle, whereby the sensor information comprises individual scan points, at least one odometry sensor for providing odometry information, and a processing unit connected to the receiver, the at least one environment sensor and the at least one odometry sensor, wherein the validation system is adapted to perform the above method.
  • Semi-autonomous and autonomous driving are considered as disruptive technologies in the automotive sector.
  • Semi-autonomous and autonomous driving are increasing the safety on roads.
  • the increasing degree of automation of systems used in the modern automotive industry dramatically increases the requirements for validation.
  • the international functional safety standard ISO 26262 defines the development and the validation process. In particular, it mandates that for all possible hazardous situations there shall be a properly validated safety-goal. The possibility of unknown hazardous situations occurring is reduced e.g. by using redundant sensing systems and by statistical validation.
  • the first step of a typical statistical validation process is the open road data acquisition where the data is recorded according to the defined statistical model.
  • the measurement vehicle is equipped with a primary system, which is the subject of the system validation, and a reference system, which is the source of raw data as reference data for ground truth. In ground truth extraction, different features are labelled in the collected reference data. This step can combine manual and automated labelling techniques.
  • the labelled data is then compared with the system output using defined KPIs, which shall be mapped to system requirements.
  • the KPIs are often calculated based on many thousands of kilometers on open roads with significant precision. This poses high requirements on the alignment between the primary and the reference system and on measurements of the vehicle odometry under conditions where precise calibration is more difficult to achieve than in laboratory conditions. Achieving precise trajectory information is key for autonomous driving.
  • the present invention provides a method for providing accurate trajectory information, in particular for use in a validation of a driving support system of a vehicle, comprising the steps of receiving odometry information in respect to a movement of the vehicle, determining a trajectory based on the odometry information, performing data acquisition of sensor information from at least one environment sensor covering an environment of the vehicle, whereby the sensor information comprises individual scan points, generating a global map comprising the scan points of the sensor information, extracting ground truth data as reference data based on the sensor information, performing data post-processing as a second level odometry correction of the trajectory based on the ground truth data and the global map, and providing the accurate trajectory information based on the second level odometry correction.
  • the present invention also provides a validation system, in particular for validation of an advanced driver assistance system, based on accurate trajectory information, comprising a receiver for receiving position information from a Global Navigation Satellite System, at least one environment sensor for providing sensor information covering an environment of the vehicle, whereby the sensor information comprises individual scan points, at least one odometry sensor for providing odometry information, and a processing unit connected to the receiver, the at least one environment sensor and the at least one odometry sensor, wherein the validation system is adapted to perform the above method.
  • the basic idea of the invention is to provide accurate trajectory information based on a combination of odometry information of the vehicle, which is provided from the at least one odometry sensor and provides a trajectory, and sensor information from the at least one environment sensor, which is used to correct the odometry.
  • the individual scan points can be fitted to the ground truth data, and a score can be calculated for providing correction information as basis for the accurate trajectory information. This makes the method robust against errors resulting from integration and imprecise odometry measurements.
  • Transformation of a vehicle coordinate system (VCS) to a global coordinate system (GCS) is done through integration.
  • This transformation is important to obtain a full overview of the environment of the vehicle, in particular using multiple environment sensors.
  • this procedure suffers from imprecise measurements and rounding errors.
  • the trajectory of the vehicle can be determined with increased precision, so that these errors can be reduced.
  • the method can be used to provide an output of accurate pre-annotated features like road boundaries in a global coordinate frame, which can be directly used in a later system performance evaluation.
  • the odometry information in respect to the movement of the vehicle can comprise by way of example instantaneous measurements of first order quantities such as velocity and yaw rate.
  • the velocity can be determined based on wheel ticks provided by a respective sensor, i.e. a wheel sensor.
  • the yaw rate can be determined from a steering wheel angle, e.g. using a steering wheel sensor.
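  • to illustrate how these first order quantities yield a trajectory, the following minimal dead-reckoning sketch (in Python, with an assumed sampling period and illustrative function names) integrates velocity and yaw rate samples into poses; this integration is exactly the step that accumulates the integration errors mentioned above.

```python
import math

def dead_reckon(velocities, yaw_rates, dt=0.02, x0=0.0, y0=0.0, heading0=0.0):
    """Integrate velocity and yaw rate samples into a 2D trajectory.

    velocities: speeds in m/s (e.g. derived from wheel ticks)
    yaw_rates:  yaw rates in rad/s (e.g. derived from the steering wheel angle)
    dt:         sampling period in seconds (assumed value)
    Returns a list of poses (x, y, heading) in a global frame.
    """
    x, y, heading = x0, y0, heading0
    poses = [(x, y, heading)]
    for v, w in zip(velocities, yaw_rates):
        heading += w * dt                  # integrate the yaw rate
        x += v * math.cos(heading) * dt    # integrate the speed along the heading
        y += v * math.sin(heading) * dt
        poses.append((x, y, heading))
    return poses
```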
  • the trajectory indicates a movement of the vehicle, e.g. a trajectory driven by the vehicle:
  • the trajectory is based on the odometry information.
  • the trajectory terminates at a current position of the vehicle.
  • the trajectory is first determined based on the odometry information and improved in the course of the specified method.
  • the sensor information from the at least one environment sensor covering an environment of the vehicle comprises the individual scan points, which provide distance information in a given resolution in the field of view compared to a position of the respective environment sensor.
  • the scan points can provide supplementary information like an intensity value for each scan point.
  • the at least one environment sensor can be e.g. a LiDAR-based environment sensor or a radar sensor, each of which provides scan points as described above.
  • Each of the at least one environment sensor covers an individual field of view.
  • the fields of view of multiple environment sensors may or may not at least partially overlap.
  • Generating the global map comprises transforming the scan points of the respective environment sensor, which are provided each in a respective sensor coordinate system, to a global coordinate system (GCS), as discussed above.
  • the scan points of the respective environment sensor can be transformed first into a vehicle coordinate system (VCS), which provides a common coordinate system in case of multiple individual environment sensors.
  • the term global map here refers to a map covering a close-by environment of the vehicle, which can be based on sensor information from different types and numbers of differently located environment sensors.
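  • a minimal sketch of this two-stage transformation (sensor frame to VCS, then VCS to GCS using the vehicle pose from the trajectory) is given below; the 2D rigid transforms and the pose representation are simplifying assumptions for illustration.

```python
import numpy as np

def rigid_transform(points, pose):
    """Apply a 2D rigid transform given as pose (x, y, yaw) to (N, 2) points."""
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    return points @ R.T + np.array([x, y])

def scan_points_to_gcs(points_sensor, sensor_pose_vcs, vehicle_pose_gcs):
    """Transform scan points: sensor coordinate system -> VCS -> GCS.

    sensor_pose_vcs:  mounting pose of the environment sensor on the vehicle
    vehicle_pose_gcs: vehicle pose in the global map, e.g. from the trajectory
    """
    points_vcs = rigid_transform(points_sensor, sensor_pose_vcs)
    return rigid_transform(points_vcs, vehicle_pose_gcs)
```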
  • the second level odometry correction is performed.
  • the step can be based on ground truth data like extracted features including road boundaries as well as movable features. It processes, in particular, raw scan points to perform odometry correction and to provide the corrected trajectory of the ego vehicle.
  • the second level odometry correction can be performed independently from further odometry correction steps.
  • the second level odometry correction provides odometry information having a good quality and suitable for validation purposes of primary systems. Such a validation of the driving support system as the primary system compared to the present validation system as reference system can be performed in subsequent processing steps, when the validation is performed based on KPI values (Key Performance Indicator).
  • score values can be determined for the ground truth data compared to the global map.
  • the score can be calculated according to any suitable prior definition.
  • the score shall be determined in order to allow an optimization of the trajectory information and can be defined e.g. as a distance value indicating a difference between the ground truth data, e.g. static features as extracted from the sensor information as discussed below, and the scan points.
  • multiple sets of sensor information are acquired over time and commonly processed to perform the second level odometry correction.
  • a single set of sensor information refers to sensor information acquired at a certain moment of time. Multiple sets of sensor information can form a sequence of sensor information.
  • a set of sensor information can also be referred to as frame.
  • the score is based on the individual scan points of the sensor information provided in VCS for each set of sensor information, and the ground truth data, e.g. the static features, provided in GCS G_i.
  • the score is determined by comparing the ground truth data and the sensor information provided from the at least one environment sensor, i.e. the scan points, as obtained in VCS coordinates.
  • the score based on distances between the scan points and the respective ground truth data can be defined as a sum over the scan points r_ij of a term that falls off with the distance d between each scan point and the respective ground truth data.
  • a constant in the formula provides a scale factor.
  • the third power in the score formula is chosen based on empirical rules and takes into account that the density of the scan points is inversely proportional to the second power of the distance d.
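  • since the closed-form expression of the score is not reproduced here, the following Python sketch shows one plausible scoring of this kind; the third-power falloff and the scale constant (named kappa below) follow the description above but are assumptions, not the literal formula of the patent.

```python
import numpy as np

def score(scan_points_gcs, feature_points_gcs, kappa=1.0):
    """Hypothetical score: large when the scan points lie close to ground truth.

    scan_points_gcs:    (N, 2) scan points in global coordinates
    feature_points_gcs: (M, 2) points sampled from static features
                        (e.g. road boundary polylines) in global coordinates
    kappa:              assumed scale factor
    """
    # Distance d from every scan point to its nearest ground truth point.
    diff = scan_points_gcs[:, None, :] - feature_points_gcs[None, :, :]
    d = np.linalg.norm(diff, axis=2).min(axis=1)
    # Third-power falloff, cf. the empirical rule above; the exact functional
    # form is an assumption.
    return np.sum((kappa / (kappa + d)) ** 3)
```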
  • performing the data post-processing preferably comprises a score maximization for the extracted ground truth data compared to the scan points to obtain correction information for providing the accurate trajectory.
  • the ground truth data best matches the scan points.
  • the best fitting score is considered as best choice.
  • the underlying processing indicates the correction information for further use.
  • the score optimization refers to a correction of the GCS as a process with GCS G_i on the input and GCS G_o on the output. Both represent approximations of the real GCS.
  • G_o is created by the transformation {φ_i(t), Δx_i(t), Δy_i(t)} → {φ_o(t), Δx_o(t), Δy_o(t)}, where φ_o(t), Δx_o(t) and Δy_o(t) maximize the score under some constraints.
  • as the ground truth data is already provided in the input GCS, it is required to determine how to transfer the ground truth data into the new coordinate system.
  • the ground truth data comprises one or more static features, for example road boundaries, which are represented by a set of points [x^i_j, y^i_j].
  • the upper index i denotes the number of the point and the lower index j is the frame number of the local coordinate system.
  • the points do not arise from measurement, like the scan points, but from the ground truth, e.g. from labelling, and it is therefore not given to which VCS they belong.
  • VCS V(t^i_j) is chosen, where the time t^i_j is chosen so that the distance of [x^i_j, y^i_j] from the origin of V(t^i_j) is minimal.
  • the point is best represented in the local coordinate system in which it is closest to the ego vehicle, because of the sensor observation model.
  • this can be a design choice, because the transformation is not uniquely given.
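  • as a minimal sketch of this design choice (assuming the ego positions per frame are available in the input GCS), the assignment of each ground truth point to the VCS in which it is closest to the ego vehicle could look as follows; names and data layout are illustrative only.

```python
import numpy as np

def assign_points_to_frames(points_gcs, ego_positions_gcs):
    """Assign each ground truth point to the frame whose ego position is nearest.

    points_gcs:        (P, 2) ground truth points in the input GCS
    ego_positions_gcs: (F, 2) ego vehicle positions per frame in the same GCS
    Returns a (P,) array with the index of the chosen frame for every point.
    """
    diff = points_gcs[:, None, :] - ego_positions_gcs[None, :, :]
    dist = np.linalg.norm(diff, axis=2)   # (P, F) point-to-ego distances
    return np.argmin(dist, axis=1)        # frame where the distance is minimal
```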
  • the method comprises steps of determining a position of the vehicle based on position information from a Global Navigation Satellite System, and performing a first level odometry correction of the trajectory based on the determined position information.
  • the position information based on GNSS is in general imprecise due to noise. This makes it important to determine movements of the vehicle in particular for short timescales with high accuracy, so that the position can always be determined accurately based on a combination of the determined movement based on the odometry information together with the position information based on GNSS.
  • the step of determining a position of the vehicle based on position information from a Global Navigation Satellite System comprises performing a correction step of the determined position.
  • Different kinds of correction steps can be applied, e.g. known as differential GNSS or satellite-based augmentation systems (SBAS).
  • the correction step can be based on a fusion of position information from the GNSS together with information from an inertial measurement unit (IMU).
  • Another possibility to achieve more accurate position information from GNSS is to use real-time kinematics (RTK) techniques, which use carrier-based ranging to provide positions that are orders of magnitude more precise than standard code-based positioning from GNSS.
  • the correction step can be supported by highly precise sensors for sensing a relative movement such as ring laser gyroscopes (RLG) with an accuracy better than 0.01 degree/hour or wheel vector sensors, which can precisely measure wheel movements in multiple axes.
  • a further possibility for performing a correction step of the determined position comprises a correction step based on GNSS data using traditional smoothing techniques such as an (extended) Rauch-Tung-Striebel (RTS) smoother.
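  • to make the smoothing idea concrete, the sketch below runs a toy constant-velocity Kalman filter over noisy GNSS positions followed by an RTS backward pass; the state model, the noise values and the function name are illustrative assumptions, not the implementation of the patent.

```python
import numpy as np

def rts_smooth_gnss(z, dt=0.1, q=1.0, r=4.0):
    """Constant-velocity Kalman filter + Rauch-Tung-Striebel smoother (toy).

    z:  (T, 2) noisy GNSS positions [x, y]
    dt: sampling period in seconds; q, r: assumed process/measurement noise
    Returns the smoothed (T, 4) states [x, y, vx, vy].
    """
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
    Q, R = q * np.eye(4), r * np.eye(2)
    T = len(z)
    x = np.zeros((T, 4)); P = np.zeros((T, 4, 4))    # filtered estimates
    xp = np.zeros((T, 4)); Pp = np.zeros((T, 4, 4))  # one-step predictions

    xk, Pk = np.array([z[0, 0], z[0, 1], 0.0, 0.0]), 10.0 * np.eye(4)
    for k in range(T):                               # forward Kalman filter pass
        xp[k] = F @ xk if k > 0 else xk
        Pp[k] = F @ Pk @ F.T + Q if k > 0 else Pk
        K = Pp[k] @ H.T @ np.linalg.inv(H @ Pp[k] @ H.T + R)
        xk = xp[k] + K @ (z[k] - H @ xp[k])
        Pk = (np.eye(4) - K @ H) @ Pp[k]
        x[k], P[k] = xk, Pk

    xs, Ps = x.copy(), P.copy()
    for k in range(T - 2, -1, -1):                   # backward RTS smoothing pass
        C = P[k] @ F.T @ np.linalg.inv(Pp[k + 1])
        xs[k] = x[k] + C @ (xs[k + 1] - xp[k + 1])
        Ps[k] = P[k] + C @ (Ps[k + 1] - Pp[k + 1]) @ C.T
    return xs
```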
  • performing a first level odometry correction of the trajectory based on the determined position information comprises performing first level odometry correction using Visual Odometry and/or Simultaneous Localization and Mapping (SLAM).
  • sequences of sets of sensor information of the environment of the vehicle can be analyzed.
  • first level odometry correction can improve consistency of local and global localization based on environment sensing.
  • the sensor information can be provided e.g. by optical cameras, LiDAR-based environment sensors or radar sensors.
  • odometry information can be derived from an affine transformation that optimizes point-wise differences of scan points of consecutive scans.
  • the transformation can be calculated by Iterative Closest Point algorithm with expectation maximization.
  • Expectation maximization filters out outliers from source and destination point clouds, i.e. scan points that do not have nearby points in the destination point cloud after transformation.
  • Further stability of the algorithm can be obtained by accumulation of several sets of environment information over time.
  • mapping of scan points from more distant locations, e.g. 20 m or more, enforces consistency of the static environment on a larger scale.
  • This method step is based on raw sensor information, i.e. raw scan points. It directly optimizes a quantity of interest.
  • the transformation then gives an indication how the vehicle has moved and how the predicted trajectory information shall be corrected based on instantaneous measurements of velocity and yaw rate.
  • the transformation directly optimizes a quantity of interest by checking consistency of the scan points as received from different positions. It also has certain robustness against noise, occlusion and dynamic agents on the scene due to outlier rejection.
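  • the following minimal 2D sketch illustrates such an alignment: nearest-neighbour ICP with a hard outlier-rejection threshold (a simple stand-in for the expectation maximization step) and a closed-form SVD solve; the parameter values are illustrative assumptions, the patent does not prescribe a concrete implementation.

```python
import numpy as np

def icp_2d(src, dst, iterations=20, reject_dist=1.0):
    """Align src scan points onto dst scan points of the next frame.

    src, dst: (N, 2) and (M, 2) scan points of two consecutive frames
    Returns the 2x2 rotation R and translation t mapping src onto dst.
    """
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iterations):
        cur = src @ R.T + t
        # Nearest-neighbour correspondences (brute force for clarity).
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        nn = d.argmin(axis=1)
        nn_dist = d[np.arange(len(src)), nn]
        # Reject points without a nearby destination point (outliers).
        keep = nn_dist < reject_dist
        if keep.sum() < 3:
            break
        p, q = cur[keep], dst[nn[keep]]
        # Closed-form rigid alignment of the inlier pairs (Kabsch / SVD).
        pc, qc = p - p.mean(axis=0), q - q.mean(axis=0)
        U, _, Vt = np.linalg.svd(pc.T @ qc)
        D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])
        R_step = Vt.T @ D @ U.T
        t_step = q.mean(axis=0) - R_step @ p.mean(axis=0)
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```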
  • the step of extracting ground truth data as reference data based on the sensor information comprises performing an automatic feature extraction, in particular a road boundary extraction and/or an object extraction, based on the sensor information.
  • Feature extraction can comprise extraction of static features including the road boundaries, buildings, trees, or others, and movable features like further traffic participants, in particular third party vehicles.
  • the road boundary extraction is in charge of extraction of the road boundaries from the given sensor information.
  • the output of the road boundary extraction comprises extraction of two road boundaries in world coordinates corresponding to left and right road boundaries of the respective road. Other objects can be extracted in different ways, e.g. depending on their shape.
  • the step of performing a road boundary extraction comprises performing a road boundary detection and performing a road boundary polyline extraction.
  • Road boundary detection refers to generating road detections from the given sensor information.
  • the road boundary detection can comprise post-processing using e.g. image processing techniques to remove or mitigate detection artifacts.
  • Road boundary polyline extraction refers to an extraction of road boundary polylines from the road boundary detection. Accordingly, first road contours are extracted from the input road detection grid using e.g. standard image processing edge detection techniques. Blurring and Sobel operators can be applied to the extracted road boundaries. The contours are then clustered using a hierarchical clustering algorithm. The centers of the clusters are extracted and separated into left and right boundary points, respectively, for each frame, i.e. each set of sensor information. The left and right road boundaries are then accumulated in world coordinates, clustered, and the corresponding road boundary polylines are finally extracted using a graph traversing based algorithm on the extracted clusters in world coordinates. The polylines correspond to pre-annotated or labelled road boundaries.
  • the step of performing a road boundary detection comprises applying a deep neural network.
  • the deep neural network uses a Fully Convolutional Network (FCN) semantic segmentation approach.
  • the step of performing a road boundary extraction comprises performing a road boundary polyline simplification.
  • Road boundary polyline simplification refers to a simplification of the extracted road boundary polylines.
  • the road boundary polyline simplification can be performed using the Ramer-Douglas-Peucker algorithm to provide the annotators with pre-annotated road boundaries consisting of an easily manageable number of points.
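  • for reference, a compact sketch of the Ramer-Douglas-Peucker simplification is given below; the tolerance epsilon is an assumed parameter.

```python
import numpy as np

def rdp(points, epsilon=0.5):
    """Ramer-Douglas-Peucker polyline simplification (epsilon in metres, assumed).

    points: (N, 2) ordered polyline vertices
    Returns a subset of the vertices that preserves the overall shape.
    """
    points = np.asarray(points, float)
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    line = end - start
    norm = np.linalg.norm(line)
    # Perpendicular distance of every vertex to the chord start-end.
    if norm == 0.0:
        dist = np.linalg.norm(points - start, axis=1)
    else:
        dist = np.abs(line[0] * (points[:, 1] - start[1]) -
                      line[1] * (points[:, 0] - start[0])) / norm
    idx = int(np.argmax(dist))
    if dist[idx] > epsilon:
        # Keep the farthest vertex and recurse on both halves.
        left = rdp(points[: idx + 1], epsilon)
        right = rdp(points[idx:], epsilon)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])
```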
  • the step of extracting ground truth data as reference data based on the sensor information comprises automatic data labelling.
  • the data labelling refers to a correct labelling of the scan points to identify features therein.
  • the labelling can be performed as only automatic labelling or human assisted labelling.
  • the human assisted labelling can be based on automatically pre-annotated features, which are then labelled by a human.
  • Automatic data labelling preferably comprises applying a deep neural network. Deep neural networks are becoming more and more important and powerful in the recognition of features in sensor information. The deep neural network performs the labelling in an efficient and automatic manner.
  • the step of performing data post processing as a second level odometry correction of the trajectory based on the ground truth data and the global map comprises filtering off scan points away from extracted static features, scan points of movable features, and/or scan points out of a relevant distance range.
  • scan points which do not carry relevant information with respect to the extracted features, are disregarded for further processing.
  • for calculating the score based on the extracted features, e.g. by performing the distance minimization, merely information in respect to the extracted static features is required. This refers in particular to scan points not far away from extracted static features.
  • a distance measure and a distance threshold are defined for separating the scan points.
  • scan points of movable features are scan points within an area, which cannot provide information in respect to static features like road boundaries.
  • the scan points filtered off as scan points of movable features can include scan points overlapping with these objects or e.g. respective bounding boxes covering the movable features, or scan points, which are closer to them than a given distance threshold.
  • the distance range is defined by an upper threshold and a lower threshold defining a distance interval x^i_j ∈ [d_min, d_max]. By filtering off the scan points out of the relevant distance range, only the scan points within the distance interval [d_min, d_max] are considered.
  • the upper threshold d_max is chosen to limit measurement imprecisions and noise of the scan points.
  • the lower threshold d_min is chosen because scan points at a greater distance from the ego vehicle hold more relevant trajectory information. In an extreme case, where the scan points would be localized in the origin of the VCS, they would not hold any information usable for the trajectory correction. Overall, the relevance of the processed information is increased by filtering off these scan points.
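  • a minimal sketch of this filtering, assuming per-frame scan points, static feature points transformed into the same VCS, and bounding boxes of movable features, could look as follows; all threshold values are illustrative assumptions.

```python
import numpy as np

def filter_scan_points(points_vcs, feature_points_vcs, boxes,
                       d_min=2.0, d_max=40.0, feature_dist=3.0):
    """Keep only scan points that are useful for the second level correction.

    points_vcs:         (N, 2) raw scan points of one frame in the VCS
    feature_points_vcs: (M, 2) points of extracted static features (e.g. road
                        boundaries) transformed into the same VCS
    boxes:              iterable of (xmin, ymin, xmax, ymax) bounding boxes of
                        movable features in the VCS
    """
    r = np.linalg.norm(points_vcs, axis=1)
    keep = (r >= d_min) & (r <= d_max)            # relevant distance interval

    # Drop points far away from any extracted static feature.
    d_feat = np.linalg.norm(points_vcs[:, None, :] - feature_points_vcs[None, :, :],
                            axis=2).min(axis=1)
    keep &= d_feat <= feature_dist

    # Drop points falling inside bounding boxes of movable features.
    for xmin, ymin, xmax, ymax in boxes:
        inside = ((points_vcs[:, 0] >= xmin) & (points_vcs[:, 0] <= xmax) &
                  (points_vcs[:, 1] >= ymin) & (points_vcs[:, 1] <= ymax))
        keep &= ~inside
    return points_vcs[keep]
```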
  • the step of performing data post-processing as a second level odometry correction of the trajectory based on the ground truth data and the global map comprises matching positions of the scan points of the global map with positions of the extracted static features.
  • the extracted static features are part of the ground truth data.
  • a possible correction G_o can be found in a parameterized form in which v(t) is a tangential velocity of the point [Δx(t), Δy(t)] and φ'(t) is the time-derivative of φ(t); a Monte-Carlo technique is applied for the maximization of the score.
  • parameters a_φ, b_φ, c_φ, a_v, b_v, c_v are generated from a random uniform distribution. From this parameterization, the matrices of the affine transformation of the candidate GCS G_o (φ(t), Δx(t), Δy(t)) are calculated. The solution with the better score is always accepted, and the already accepted solution is used as a mean value for the following parameter generation. A simulated annealing schedule is used to prevent trapping in a local optimum. Preferably, the spread of the generated parameters is initially large to allow higher sampling of the state space, and it is gradually decreased to allow convergence.
  • the whole matching algorithm can be as follows:
  • a. Generate a_φ, b_φ, c_φ, a_v, b_v, c_v from a random uniform distribution with means a_φ_best, b_φ_best, c_φ_best, a_v_best, b_v_best, c_v_best.
  • b. Calculate the score. c. If score > score_best: set a_φ_best, b_φ_best, c_φ_best, a_v_best, b_v_best, c_v_best, score_best to a_φ, b_φ, c_φ, a_v, b_v, c_v, score. d. If score_best was not assigned for M cycles, halve the spread of the distributions from step a.
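  • the listed procedure can be sketched as a simple random-search loop in Python; the parameter layout, the iteration counts and the spread schedule below are assumptions for illustration, since the procedure is only described qualitatively here.

```python
import numpy as np

def match_trajectory(score_fn, n_params=6, iters=5000, patience=50, spread=1.0):
    """Annealing-style maximization of the matching score.

    score_fn: callable mapping a parameter vector (a_phi, b_phi, c_phi,
              a_v, b_v, c_v) to the score of the resulting candidate GCS
    """
    best = np.zeros(n_params)
    best_score = score_fn(best)
    since_improvement = 0
    for _ in range(iters):
        # a. sample new parameters uniformly around the current best values
        cand = best + np.random.uniform(-spread, spread, size=n_params)
        # b./c. evaluate the score and keep the candidate if it improves
        s = score_fn(cand)
        if s > best_score:
            best, best_score = cand, s
            since_improvement = 0
        else:
            since_improvement += 1
        # d. halve the spread if the best score was not updated for M cycles
        if since_improvement >= patience:
            spread *= 0.5
            since_improvement = 0
    return best, best_score
```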
  • the data post-processing comprises a correction of the global coordinate system (GCS).
  • the step of matching positions of the scan points of the global map with positions of the extracted static features comprises performing a maximization of score dependent on the positions of the scan points of the global map and the positions of the extracted static features.
  • the maximization is preferably performed as an iterative step, which can provide a detailed matching.
  • the step of performing data post processing as a second level odometry correction of the trajectory based on the ground truth data and the global map comprises providing a corrected global map.
  • the corrected global map can be used for further purposes, e.g. validation purposes.
  • the at least one environment sensor comprises at least one out of a LiDAR-based environment sensor and/or a radar sensor.
  • Fig. 1 shows a schematic view of a vehicle with a driving support system according to a first, preferred embodiment
  • Fig. 2 shows a detailed flow chart of an implementation of a road boundary extraction in accordance with the method of the first embodiment
  • Fig. 3 shows a schematic view of a deep neural network using a Fully Convolutional Network (FCN) semantic segmentation for performing the road boundary detection in accordance with the method of the first embodiment
  • Fig. 4 shows three pictures indicating on the left a model input, in the center a model output and on the right an overlay of the model input and the model output.
  • Fig. 5 shows in the upper drawing a road with road boundaries marked with extracted polylines in world coordinates, in the lower left drawing sensor information including scan points from the environment of the vehicle together with a road detection plot in ego coordinates, in the lower middle drawing a detected road together with extracted road boundary points forming the polyline in ego coordinates, and in the lower right drawing sensor information including scan points from the environment of the vehicle together with extracted road boundary points forming the polyline in ego coordinates,
  • Fig. 6 shows the vehicle driving in the environment with the x-axis oriented in the driving direction on the road together with road boundaries as features, which are indicated by polylines,
  • Fig. 7 shows a partial view of a global map together with a trajectory and a corrected trajectory with scan points before correction
  • Fig. 8 shows a partial view of a global map together with the corrected trajectory of Fig. 7 with scan points after correction
  • Fig. 9 shows a detailed flow chart of the method for providing accurate trajectory information according to the first embodiment
  • FIG. 1 shows a vehicle 10, further referred to as ego vehicle, with a validation system 12 according to a first, preferred embodiment.
  • the validation system 12 of the first embodiment is provided for performing validation of a driving support system, e.g. an advanced driver assistance system (ADAS) as primary system.
  • ADAS advanced driver assistance system
  • the validation system 12 comprises a LiDAR-based environment sensor 14 as environment sensor for providing sensor information covering an environment 16 of the ego vehicle 10.
  • the validation system 12 further comprises a receiver 18 for receiving position information from a Global Navigation Satellite System.
  • the validation system 12 also comprises a processing unit 20, which is connected to the receiver 18 and the LiDAR- based environment sensor 14 via communication bus 22.
  • the validation system 12 still further comprises odometry sensors for providing odometry information of the ego vehicle 10.
  • the odometry sensors are not shown in figure 1 and comprise sensors for instantaneous measurements of first order quantities such as velocity and yaw rate.
  • the odometry sensors comprise a wheel tick sensor for providing wheel ticks and a steering wheel sensor for providing a steering wheel angle. Also the wheel tick sensor and the steering wheel sensor are connected to the processing unit 20 via communication bus 22.
  • step S100 refers to receiving odometry information in respect to a movement of the ego vehicle 10.
  • odometry information is received from the wheel tick sensor and the steering wheel sensor indicating a velocity and a yaw rate.
  • Step S110 refers to determining a trajectory 42 based on the odometry information.
  • the trajectory information refers to a trajectory 42, which has been driven by the ego vehicle 10.
  • the odometry information provided over time provides the trajectory 42.
  • the trajectory terminates at a current position of the ego vehicle 10.
  • Step S120 refers to performing data acquisition of sensor information from the LiDAR- based environment sensor 14 covering the environment 16 of the ego vehicle 10.
  • the sensor information comprises individual scan points 38 covering an individual field of view as part of the environment 16.
  • the scan points 38 provide distance information in a given angular resolution in the field of view.
  • the scan points 38 can optionally provide supplementary information like an intensity value for each scan point 38.
  • Step S130 refers to determining a position of the ego vehicle 10 based on position information from a Global Navigation Satellite System (GNSS).
  • GNSS Global Navigation Satellite System
  • the position information is received from the receiver 18 and transmitted via communication bus 22 to the processing unit 20, where the further data processing is performed.
  • a correction step of the determined position is included.
  • the correction step includes in one embodiment use of differential GNSS or satellite-based augmentation systems (SBAS).
  • the correction is based on a fusion of position information from the GNSS together with information from an inertial measurement unit (IMU).
  • IMU inertial measurement unit
  • RTK real-time kinematics
  • a correction step is performed based on GNSS data using traditional smoothing techniques such as (extended) Rauch-Tung-Striebel (RTS) smoother.
  • the correction step can be supported in each case by highly precise sensors for sensing a relative movement such as ring laser gyroscopes (RLG) with an accuracy better than 0.01 degree/hour or wheel vector sensors, which can precisely measure wheel movements in multiple axes.
  • RLG ring laser gyroscopes
  • the correction step is part of the first level odometry correction of step S140.
  • Step S140 refers to performing a first level odometry correction of the trajectory 42.
  • first level odometry correction comprises a correction of the trajectory 42 based on the determined position information.
  • the first level odometry correction of the trajectory 42 is performed using e.g. Visual Odometry and/or Simultaneous Localization and Mapping (SLAM). Accordingly, sequences of sensor information of the environment 16 of the ego vehicle 10, i.e. sensor information from the LiDAR-based environment sensor 14, are analyzed.
  • odometry information can be derived from an affine transformation that optimizes point-wise differences of scan points 38 of consecutive sets of sensor information based on the assumption that most of the sensed environment 16 is static.
  • a set of sensor information refers to sensor information acquired at a certain moment of time, so multiple sets of sensor information can form a sequence.
  • the transformation is calculated by the Iterative Closest Point algorithm with expectation maximization, which filters out outliers from source and destination point clouds, i.e. scan points 38 that do not have nearby points in the destination point cloud after transformation. Further stability of the algorithm is obtained by accumulation of several sets of sensor information over time and mapping of scan points 38 from more distant locations, e.g. 20 m or more.
  • This sub-step is based on raw sensor information, i.e. the scan points 38. It directly optimizes a quantity of interest.
  • the transformation then gives an indication how the ego vehicle 10 has moved and how the predicted trajectory information shall be corrected based on instantaneous measurements of velocity and yaw rate.
  • Step S150 refers to generating a global map 40 comprising the scan points 38 of the sensor information.
  • Such global maps 40 with the scan points 38 are shown by way of example in figures 7 and 8.
  • Generating the global map 40 comprises transforming the scan points 38 of the LiDAR-based environment sensor 14, which are provided each in a respective sensor coordinate system, to a global coordinate system (GCS), as discussed above.
  • the scan points 38 from the LiDAR-based environment sensor 14 can be transformed first into a vehicle coordinate system (VCS), which provides a common coordinate system e.g. in case of multiple individual environment sensors 14.
  • Step S160 refers to extracting ground truth data as reference data based on the sensor information.
  • This step comprises performing an automatic feature extraction, in particular a road boundary extraction and an object extraction, based on the sensor information.
  • Feature extraction comprises extraction of static features including the road boundaries 28 and movable features like further traffic participants, in particular third party vehicles.
  • the road boundary extraction as indicated in figure 2, is in charge of extraction of the road boundaries 28 from the given sensor information.
  • the output of the road boundary extraction comprises extraction of two road boundaries 28 in world coordinates corresponding to left and right road boundaries 28 of the respective road 26.
  • extraction of the road boundaries 28 comprises performing a road boundary detection and performing a road boundary polyline extraction.
  • Road boundary detection refers to generating road detections from the acquired sensor information.
  • the road boundary detection can comprise post-processing using e.g. image processing techniques to remove or mitigate detection artifacts.
  • Road boundary detection comprises applying a deep neural network, which is shown in figure 3.
  • the deep neural network uses a Fully Convolutional Network (FCN) semantic segmentation approach with multiple individual layers.
  • Road boundary detection is based on the sensor information from the LiDAR-based environment sensor 14.
  • the input in this embodiment is a grayscale image with mean accumulated occupancy values of the resampled scan points 38 for a given region of interest (ROI), a frame range and a resampling resolution.
  • configuration values are a ROI of [0, 64] x [-16, 16], a frame range of 80 frames (70 frames forward and 10 frames backwards) and a grid resolution of 0.2 meters, giving an input image size of 320x160 pixels.
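  • a minimal sketch of building such an input grid from accumulated scan points is shown below; the normalisation to a mean-occupancy grayscale image is a simplifying assumption.

```python
import numpy as np

def rasterize_scan_points(points_vcs, roi=((0.0, 64.0), (-16.0, 16.0)), res=0.2):
    """Accumulate scan points of a frame range into a grayscale occupancy grid.

    points_vcs: (N, 2) accumulated scan points [x, y] in vehicle coordinates
    roi:        ((x_min, x_max), (y_min, y_max)) region of interest in metres
    res:        grid resolution in metres (0.2 m gives 320 x 160 pixels here)
    """
    (x0, x1), (y0, y1) = roi
    w, h = int(round((x1 - x0) / res)), int(round((y1 - y0) / res))
    grid = np.zeros((h, w), dtype=np.float32)
    # Keep only points inside the region of interest.
    m = ((points_vcs[:, 0] >= x0) & (points_vcs[:, 0] < x1) &
         (points_vcs[:, 1] >= y0) & (points_vcs[:, 1] < y1))
    cols = ((points_vcs[m, 0] - x0) / res).astype(int)
    rows = ((points_vcs[m, 1] - y0) / res).astype(int)
    np.add.at(grid, (rows, cols), 1.0)     # count scan point hits per cell
    if grid.max() > 0:
        grid /= grid.max()                 # normalise to a [0, 1] grayscale image
    return grid
```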
  • the output is the probability of presence of road detection on each grid cell.
  • the model's encoder block extracts features and downsamples them to reduce the memory requirements, the context module aggregates multi-scale contextual information using dilated convolutions, and finally the decoder block upsamples the feature maps back to the input size using the mask provided by the max pooling layer.
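  • a compact PyTorch sketch of such an encoder / dilated-context / decoder architecture is shown below; channel counts, kernel sizes and the single-channel sigmoid head are illustrative assumptions, not the actual network of the patent.

```python
import torch
import torch.nn as nn

class RoadFCN(nn.Module):
    """Toy FCN-style segmentation net for the 1-channel occupancy grid input."""

    def __init__(self):
        super().__init__()
        # Encoder: extract features and downsample (pooling keeps the indices).
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)
        # Context module: aggregate multi-scale context via dilated convolutions.
        self.context = nn.Sequential(
            nn.Conv2d(32, 32, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=4, dilation=4), nn.ReLU())
        # Decoder: upsample back to the input size using the pooling indices.
        self.unpool = nn.MaxUnpool2d(2, stride=2)
        self.head = nn.Conv2d(32, 1, 1)           # per-cell road logit

    def forward(self, x):                          # x: (B, 1, 160, 320) grid image
        f = self.enc(x)
        p, idx = self.pool(f)
        c = self.context(p)
        u = self.unpool(c, idx)
        return torch.sigmoid(self.head(u))         # probability of road per cell
```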
  • a set of manually annotated traces is used to create the training dataset.
  • the above step is performed using the accumulated scan points 38.
  • Data augmentation is performed to avoid overfitting and improve the model’s generalization.
  • the dataset is shuffled and the training/validation dataset split is 80% for the training and 20% for the validation.
  • the model is trained with the Adam optimizer using the cross-entropy loss function.
  • Figure 4 shows input and output of the road boundary detection.
  • Left picture of figure 4 indicates a model input
  • the center picture indicates a model output with the road 26 and the road boundaries 28, and the right picture shows an overlay of the model input and the model output.
  • Road boundary polyline extraction refers to an extraction of road boundary polylines 30 from the road boundary detection, as indicated in figure 5.
  • the polylines 30 refer to a set of individual points, which together define a line.
  • first road contours are extracted from an input road detection grid using standard image processing edge detection techniques. Blurring and Sobel operators can be applied to the extracted road boundaries 28.
  • the contours are then clustered using a hierarchical clustering algorithm.
  • the centers of the clusters are extracted and separated in left and right boundary points, respectively, for each frame.
  • the left and right road boundaries 28 are then accumulated in world coordinates, clustered, and the corresponding road boundary polylines 30 are finally extracted using a graph traversing based algorithm on the extracted clusters in world coordinates.
  • the polylines 30 correspond to pre-annotated or labelled road boundaries 28.
  • Figure 5 shows in the upper drawing a road 26, where the road boundaries 28 are marked with extracted polylines 30 in world coordinates.
  • the lower left drawing indicates sensor information including scan points 38 from the environment 16 of the ego vehicle 10 together with a road detection plot in ego coordinates.
  • the lower middle drawing indicates a detected road 26 together with extracted road boundary points forming the polyline 30 in ego coordinates.
  • the lower right drawing indicates sensor information including scan points 38 from the environment 16 of the ego vehicle 10 together with extracted road boundary points forming the polyline 30 in ego coordinates.
  • Extraction of the road boundaries 28 further comprises performing a road boundary polyline simplification.
  • Road boundary polyline simplification refers to a simplification of the extracted road boundary polylines 30.
  • the road boundary polyline simplification can be performed e.g. using the Ramer-Douglas-Peucker algorithm to provide the annotators with pre-annotated road boundaries 28 consisting of an easily manageable number of points.
  • the detected road boundaries 28 are provided for further processing. Hence, automatic data labelling is performed of the road boundaries 28 as features.
  • extraction of the ground truth data can comprise automatic data labelling.
  • the data labelling refers to a correct labelling of the scan points 38 according to features, so that they can be further processed in a suitable way.
  • the labelling can be performed as only automatic labelling or human assisted labelling.
  • the human assisted labelling can be based on automatically pre-annotated features, which are then labelled by a human.
  • Automatic data labelling preferably comprises applying a deep neural network.
  • Step S170 refers to performing data post-processing as a second level odometry correction of the trajectory 42 based on the ground truth data and the global map 40. It is based on the ground truth data, i.e. annotated static features like the road boundaries 28, movable features and raw reference scan points 38. It performs a correction of a trajectory 42 of the ego vehicle 10 and extracts a corrected trajectory 44 of the ego vehicle 10 using the ground truth data, as can be seen e.g. in figures 7 and 8. Thereby, it further improves extracted odometry from the first level odometry correction.
  • scan points 38 away from extracted static features, scan points 38 of movable features, and/or scan points 38 out of a relevant distance range are filtered off to remove some of the scan points 38.
  • scan points 38, which do not carry relevant information with respect to the extracted features, are filtered off and disregarded for further processing.
  • Figure 6 shows the ego vehicle 10 driving in the environment 16 with the x-axis oriented in the driving direction.
  • Figure 6 further shows the road 26 and the road boundaries 28 as features, indicated by the polylines 30.
  • Scan points 38 relevant for the score calculation lie within a distance D from any of the road boundaries 28.
  • a distance measure and a distance threshold are defined for the distance D for separating the scan points 38.
  • scan points 38 of movable features are filtered off.
  • Scan points 38 of movable features are scan points 38 within an area of extracted, movable objects, in particular other vehicles, pedestrians, bicycles, or others.
  • the movable features can be marked by bounding boxes.
  • the scan points 38 filtered off as scan points 38 of movable features can include scan points 38 overlapping with these features or the respective bounding boxes, or scan points 38, which are closer to them than a given distance threshold.
  • the distance interval 32 is defined by an upper threshold d_max and a lower threshold d_min, defining a distance interval x^i_j ∈ [d_min, d_max].
  • Figure 6 shows the upper threshold d_max and the lower threshold d_min defining the distance interval 32 therebetween.
  • the score can be defined and reliably calculated based on distances between the scan points 38 and the respective ground truth data as a sum over the scan points 38 r_ij of a term that falls off with the distance d between each scan point 38 and the respective ground truth data.
  • a constant in the formula provides a scale factor.
  • the third power in the score formula is chosen based on empirical rules and takes into account that the density of the scan points is inversely proportional to the second power of the distance d.
  • the score is determined for the sensor information provided from the LiDAR-based environment sensor 14, i.e. the respective scan points 38, as obtained in VCS coordinates, for the respective static features, which are the road boundaries 28 in this embodiment.
  • the features can be defined in G_i. For determining the score, the features are transformed into the respective VCS.
  • the score maximization refers to a correction of the GCS as a process with GCS G_i on the input and GCS G_o on the output. Both represent approximations of the real GCS.
  • G_o is created by the transformation {φ_i(t), Δx_i(t), Δy_i(t)} → {φ_o(t), Δx_o(t), Δy_o(t)}, where φ_o(t), Δx_o(t) and Δy_o(t) maximize the score under some constraints.
  • as the ground truth data is already provided in the input GCS, it is required to determine how to transfer the ground truth data into the new coordinate system.
  • the ground truth data comprises one or more static features, for example the road boundaries 28, which are represented by a set of points [x^i_j, y^i_j].
  • the upper index i denotes the number of the point and the lower index j is the frame number of the local coordinate system.
  • the points do not arise from measurement, but from the ground truth data, and it is therefore not given to which VCS they belong.
  • VCS V(t^i_j) is chosen, where the time t^i_j is chosen so that the distance of [x^i_j, y^i_j] from the origin of V(t^i_j) is minimal.
  • the point is best represented in the local coordinate system in which it is closest to the ego vehicle 10, because of the sensor observation model.
  • the score is based on the individual scan points 38 of the sensor information provided in VCS for each set of sensor information, and the ground truth data provided in GCS G_i.
  • a best score matching the ground truth data is determined, i.e. it is determined, when the positions of the scan points 38 of the global map 40 best fit with positions of the ground truth data, i.e. the positions of the extracted road boundaries 28 in this embodiment.
  • the best value of the score indicates the odometry correction for further use.
  • a possible correction G_o can be found in a parameterized form in which v(t) is a tangential velocity of the point [Δx(t), Δy(t)] and φ'(t) is the time-derivative of φ(t); a Monte-Carlo technique is applied for the maximization of the score.
  • parameters a_φ, b_φ, c_φ, a_v, b_v, c_v are generated from a random uniform distribution. From this parameterization, the matrices of the affine transformation of the candidate GCS G_o (φ(t), Δx(t), Δy(t)) are calculated.
  • the solution with the better score is always accepted, and the already accepted solution is used as a mean value for the following parameter generation.
  • a simulated annealing schedule is used to prevent trapping in a local optimum.
  • spread of generated parameters is initially large to allow higher sampling of the state space, and it is gradually decreased to allow convergence.
  • the maximization of score dependent on the positions of the ground truth data and the positions of the scan points 38 is performed as an iterative step.
  • the whole matching algorithm performing the maximization of the score dependent on the positions of the scan points 38 of the global map 40 and the positions of the extracted static features can be as follows:
  • a. Generate a_φ, b_φ, c_φ, a_v, b_v, c_v from a random uniform distribution with means a_φ_best, b_φ_best, c_φ_best, a_v_best, b_v_best, c_v_best.
  • b. Calculate the score. c. If score > score_best: set a_φ_best, b_φ_best, c_φ_best, a_v_best, b_v_best, c_v_best, score_best to a_φ, b_φ, c_φ, a_v, b_v, c_v, score. d. If score_best was not assigned for M cycles, halve the spread of the distributions from step a.
  • the data post-processing comprises a correction of the global coordinate system (GCS).
  • the second level odometry correction of the trajectory 42 based on the ground truth data and the global map 40 also provides a corrected global map 40, which can be used for further purposes, e.g. validation purposes.
  • the global map as shown in figure 7 is corrected, as shown in figure 8.
  • Step S180 refers to providing the accurate trajectory information based on the second level odometry correction.
  • the correction information obtained in step S170 is applied, so that the trajectory information of the trajectory 42 can be corrected and provided with a best available accuracy.
  • the trajectory 42 as provided prior to second level odometry correction can be seen by way of example in figure 7 in the respective global map 40.
  • a corrected trajectory 44 is provided as accurate trajectory information.
  • the knowledge of the corrected trajectory 44 facilitates a correct determination of a current position of the ego vehicle 10 and e.g. a generation of the global map 40, thereby showing the environment 16 of the ego vehicle 10 with a high level of accuracy.
  • the global map 40 is shown in figure 7 before and in figure 8 after correction.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)

Abstract

The present invention refers to a method for providing accurate trajectory information, in particular for use in a validation of a driving support system of a vehicle (10), comprising the steps of receiving odometry information in respect to a movement of the vehicle (10), determining a trajectory (42) based on the odometry information, performing data acquisition of sensor information from at least one environment sensor (14) covering an environment (16) of the vehicle (10), whereby the sensor information comprises individual scan points (38), generating a global map (40) comprising the scan points (38) of the sensor information, extracting ground truth data as reference data based on the sensor information, performing data post-processing as a second level odometry correction of the trajectory (42) based on the ground truth data and the global map (40), and providing the accurate trajectory information based on the second level odometry correction. The present invention also refers to a validation system (12) for performing the above method.

Description

Improved trajectory estimation based on ground truth
The present invention refers to a method for providing accurate trajectory information, in particular for use in a validation of a driving support system of a vehicle.
The present invention also refers to a validation system, in particular for validation of an advanced driver assistance system, based on accurate trajectory information, comprising a receiver for receiving position information from a Global Navigation Satellite System, at least one environment sensor for providing sensor information covering an environment of the vehicle, whereby the sensor information comprises individual scan points, at least one odometry sensor for providing odometry information, and a processing unit connected to the receiver, the at least one environment sensor and the at least one odometry sensor, wherein the validation system is adapted to perform the above method.
Semi-autonomous and autonomous driving are considered as disruptive technologies in the automotive sector. Semi-autonomous and autonomous driving are increasing the safety on roads. However, the increasing degree of automation of systems used in the modern automotive industry dramatically increases the requirements for validation. The international functional safety standard ISO 26262 defines the development and the validation process. In particular, it mandates that for all possible hazardous situations there shall be a properly validated safety-goal. The possibility of unknown hazardous situations occurring is reduced e.g. by using redundant sensing systems and by statistical validation.
An effective statistical validation process should allow a high level of automation to enable repeatable analysis of the system performance and precise application of validation rules, also referred to as key performance indicators (KPI). The first step of a typical statistical validation process is the open road data acquisition where the data is recorded according to the defined statistical model. The measurement vehicle is equipped with a primary system, which is the subject of the system validation, and a reference system, which is the source of raw data as reference data for ground truth. In ground truth extraction, different features are labelled in the collected reference data. This step can combine manual and automated labelling techniques. The labelled data is then compared with the system output using defined KPIs, which shall be mapped to system requirements.
The KPIs are often calculated based on many thousands of kilometers on open roads with significant precision. This poses high requirements on the alignment between the primary and the reference system and on measurements of the vehicle odometry under conditions where precise calibration is more difficult to achieve than in laboratory conditions. Achieving precise trajectory information is key for autonomous driving.
It is an object of the present invention to provide a method for providing trajectory information, in particular for use in validation system for validation of a primary system, and a validation system, in particular for validation of an advanced driver assistance system, which enable an improved trajectory estimation with increased accuracy.
This object is achieved by the independent claims. Advantageous embodiments are given in the dependent claims.
In particular, the present invention provides a method for providing accurate trajectory information, in particular for use in a validation of a driving support system of a vehicle, comprising the steps of receiving odometry information in respect to a movement of the vehicle, determining a trajectory based on the odometry information, performing data acquisition of sensor information from at least one environment sensor covering an environment of the vehicle, whereby the sensor information comprises individual scan points, generating a global map comprising the scan points of the sensor information, extracting ground truth data as reference data based on the sensor information, performing data post-processing as a second level odometry correction of the trajectory based on the ground truth data and the global map, and providing the accurate trajectory information based on the second level odometry correction.
The present invention also provides a validation system, in particular for validation of an advanced driver assistance system, based on accurate trajectory information, comprising a receiver for receiving position information from a Global Navigation Satellite System, at least one environment sensor for providing sensor information covering an environment of the vehicle, whereby the sensor information comprises individual scan points, at least one odometry sensor for providing odometry information, and a processing unit connected to the receiver, the at least one environment sensor and the at least one odometry sensor, wherein the validation system is adapted to perform the above method.
The basic idea of the invention is to provide accurate trajectory information based on a combination of odometry information of the vehicle, which is provided from the at least one odometry sensor and provides a trajectory, and sensor information from the at least one environment sensor, which is used to correct the odometry. The individual scan points can be fitted to the ground truth data, and a score can be calculated for providing correction information as basis for the accurate trajectory information. This makes the method robust against errors resulting from integration and imprecise odometry measurements.
Transformation of a vehicle coordinate system (VCS) to a global coordinate system (GCS) is done through integration. This transformation is important to obtain a full overview of the environment of the vehicle, in particular using multiple environment sensors. However, this procedure suffers from imprecise measurements and rounding errors. With the method and the validation system of the present invention, the trajectory of the vehicle can be determined with increased precision, so that these errors can be reduced.
Furthermore, the method can be used to provide an output of accurate pre-annotated features like road boundaries in a global coordinate frame, which can be directly used in a later system performance evaluation.
The odometry information in respect to the movement of the vehicle can comprise, by way of example, instantaneous measurements of first order quantities such as velocity and yaw rate. The velocity can be determined based on wheel ticks provided by a respective sensor, i.e. a wheel sensor. The yaw rate can be determined from a steering wheel angle, e.g. using a steering wheel sensor.
The trajectory indicates a movement of the vehicle, e.g. a trajectory driven by the vehicle. The trajectory is based on the odometry information and terminates at a current position of the vehicle. The trajectory is first determined based on the odometry information and improved in the course of the specified method. The sensor information from the at least one environment sensor covering an environment of the vehicle comprises the individual scan points, which provide distance information at a given resolution in the field of view relative to a position of the respective environment sensor. In addition to the distance information, the scan points can provide supplementary information like an intensity value for each scan point. The at least one environment sensor can be e.g. a LiDAR-based environment sensor or a radar sensor, each of which provides scan points as described above. Each of the at least one environment sensor covers an individual field of view. The fields of view of multiple environment sensors may or may not at least partially overlap.
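By way of illustration, the determination of the trajectory from the odometry information can be sketched as a simple dead-reckoning integration of velocity and yaw rate. The following Python sketch assumes the velocity and yaw rate are already available as sampled time series; the Euler integration scheme and the function name are illustrative assumptions and not part of the claimed method.

import numpy as np

def integrate_odometry(timestamps, velocity, yaw_rate, x0=0.0, y0=0.0, heading0=0.0):
    """Dead-reckon a 2D trajectory from velocity [m/s] and yaw rate [rad/s].

    Minimal Euler integration; it illustrates how the first order quantities
    are integrated into a trajectory and why errors accumulate over time.
    """
    xs, ys, headings = [x0], [y0], [heading0]
    for k in range(1, len(timestamps)):
        dt = timestamps[k] - timestamps[k - 1]
        heading = headings[-1] + yaw_rate[k - 1] * dt
        xs.append(xs[-1] + velocity[k - 1] * np.cos(heading) * dt)
        ys.append(ys[-1] + velocity[k - 1] * np.sin(heading) * dt)
        headings.append(heading)
    return np.column_stack([xs, ys, headings])  # one pose [x, y, heading] per sample

In practice a higher-order integration scheme and calibrated sensor models would be used; the sketch merely illustrates why integration errors accumulate over time and have to be corrected.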
Generating the global map comprises transforming the scan points of the respective environment sensor, which are provided each in a respective sensor coordinate system, to a global coordinate system (GCS), as discussed above. The scan points of the respective environment sensor can be transformed first into a vehicle coordinate system (VCS), which provides a common coordinate system in case of multiple individual environment sensors. The term global map here refers to a map covering a close-by environment of the vehicle, which can be based on sensor information from different types and numbers of differently located environment sensors.
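By way of illustration, the chain of transformations from the sensor coordinate system via the VCS to the GCS can be sketched as follows; the 2D homogeneous-transform representation and the parameter names are illustrative assumptions.

import numpy as np

def rigid_transform_2d(yaw, tx, ty):
    """Homogeneous 2D rigid transform for a yaw angle and a translation."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])

def scan_points_to_gcs(points_scs, sensor_mounting, vehicle_pose):
    """Transform Nx2 scan points from the sensor CS via the VCS into the GCS.

    sensor_mounting and vehicle_pose are (yaw, tx, ty) tuples; the mounting
    transform is constant, the vehicle pose changes per frame.
    """
    T_vcs_from_scs = rigid_transform_2d(*sensor_mounting)
    T_gcs_from_vcs = rigid_transform_2d(*vehicle_pose)
    homogeneous = np.hstack([points_scs, np.ones((len(points_scs), 1))])
    return (T_gcs_from_vcs @ T_vcs_from_scs @ homogeneous.T).T[:, :2]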
Based on the ground truth data and the global map, the second level odometry correction is performed. The step can be based on ground truth data like extracted features including road boundaries as well as movable features. It processes, in particular, raw scan points to perform odometry correction and to provide the corrected trajectory of the ego vehicle. The second level odometry correction can be performed independently from further odometry correction steps. The second level odometry correction provides odometry information having a good quality and suitable for validation purposes of primary systems. Such a validation of the driving support system as the primary system compared to the present validation system as reference system can be performed in subsequent processing steps, when the validation is performed based on KPI values (Key Performance Indicator). Such a validation is, in general, known for such primary systems. In order to perform the second level odometry correction, score values can be determined for the ground truth data compared to the global map. The score can be calculated according to any suitable prior definition. The score shall be determined in order to allow an optimization of the trajectory information and can be defined e.g. as a distance value indicating a difference between the ground truth data, e.g. static features as extracted from the sensor information as discussed below, and the scan points. Preferably, multiple sets of sensor information are acquired over time and commonly processed to perform the second level odometry correction. A single set of sensor information refers to sensor information acquired at a certain moment of time. Multiple sets of sensor information can form a sequence of sensor information. A set of sensor information can also be referred to as frame. The score is based on the individual scan points of the sensor information provided in VCS for each set of sensor information, and the ground truth data, e.g. the static features, provided in GCS Gi.
Hence, the score is determined by comparing the ground truth data and the sensor information provided from the at least one environment sensor, i.e. the scan points, as obtained in VCS coordinates. In particular, the ground truth data, e.g. the static features, can be defined in Gi. For determining the score, the ground truth data is transformed into the respective VCS.
The score based on distances between the scan points and the respective ground truth data can be defined as
[Score equation reproduced as an image in the original publication.]
where the sum runs over the scan points rij. The constant ξ provides a scale factor. The third power in the score formula is chosen based on empirical rules and takes into account that the density of the scan points is inversely proportional to the second power of the distance d.
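By way of illustration only, a score of this kind can be computed as in the following Python sketch. Since the exact formula is reproduced above only as an image, the functional form ξ/(ξ + d³) used here is an assumption; it merely reflects the description that the score involves the distance d to the ground truth, the scale constant ξ and a third power of d.

import numpy as np
from scipy.spatial import cKDTree

def trajectory_score(scan_points, ground_truth_points, xi=1.0):
    """Score how well scan points match the ground truth features.

    Each scan point contributes a value that is large when it lies close to
    the ground truth and decays with the third power of the distance d.
    The specific form xi / (xi + d**3) is an illustrative assumption.
    """
    tree = cKDTree(ground_truth_points)
    d, _ = tree.query(scan_points)          # distance to the nearest ground-truth point
    return float(np.sum(xi / (xi + d ** 3)))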
Based on the above score, performing the data post-processing preferably comprises a score maximization for the extracted ground truth data compared to the scan points to obtain correction information for providing the accurate trajectory. Hence, with the best score, the ground truth data best matches the scan points, and the best-fitting result is considered the best choice. The underlying processing yields the correction information for further use.
The score optimization refers to a correction of the GCS as a process with GCS Gi on the input and GCS Go on the output. Both represent approximations of the real GCS. In general, Go is created by a transformation φi(t), Δxi(t), Δyi(t) → φo(t), Δxo(t), Δyo(t), where φo(t), Δxo(t), Δyo(t) maximize the score under some constraints. In case the ground truth data is already provided in the input GCS, it has to be determined how to transfer the ground truth data into the new coordinate system. While there is one Gi coordinate system and one Go coordinate system, there is one VCS per measurement frame. The ground truth data comprises one or more static features, for example road boundaries, which are represented by a set of points [xij, yij]. The upper index i denotes the number of the point and the lower index j is the frame number of the local coordinate system. The points do not arise from measurement, like the scan points, but from the ground truth, e.g. from labelling, and it is therefore not given to which VCS they belong. For a point [xij, yij], the VCS V(tj) is chosen, where the time tj is chosen so that the distance of [xij, yij] from the origin of V(tj) is minimal. Preferably, it is assumed that the point is represented best in the local coordinate system in which it is closest to the ego vehicle, because of the sensor observation model. However, this is a design choice, because the transformation is not uniquely given.
According to a modified embodiment of the invention, the method comprises steps of determining a position of the vehicle based on position information from a Global Navigation Satellite System, and performing a first level odometry correction of the trajectory based on the determined position information. Global navigation satellite systems (GNSS) provide a direct and accurate measurement of a position for long time intervals, therefore avoiding errors of integration. However, on short timescales, the position information based on GNSS is in general imprecise due to noise. This makes it important to determine the movements of the vehicle with high accuracy, in particular on short timescales, so that the position can always be determined accurately by combining the odometry-based movement with the GNSS-based position information.
According to a modified embodiment of the invention, the step of determining a position of the vehicle based on position information from a Global Navigation Satellite System comprises performing a correction step of the determined position. Different kinds of correction steps can be applied, e.g. known as differential GNSS or satellite-based augmentation systems (SBAS). The correction step can be based on a fusion of position information from the GNSS together with information from an inertial measurement unit (IMU). Another possibility to achieve more accurate position information from GNSS is to use real-time kinematics (RTK) techniques, which use carrier-based ranging to provide positions that are orders of magnitude more precise than standard code-based positioning from GNSS. The correction step can be supported by highly precise sensors for sensing a relative movement, such as ring laser gyroscopes (RLG) with an accuracy better than 0.01 degree/hour, or wheel vector sensors, which can precisely measure wheel movements in multiple axes. A further possibility for performing a correction step of the determined position comprises a correction step based on GNSS data using traditional smoothing techniques such as the (extended) Rauch-Tung-Striebel (RTS) smoother.
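By way of illustration, a Rauch-Tung-Striebel smoothing of noisy GNSS positions can be sketched as a forward Kalman filter followed by a backward pass. The constant-velocity model, the noise parameters and the function name in the following Python sketch are illustrative assumptions and not values prescribed by the invention.

import numpy as np

def rts_smooth_positions(z, dt=0.1, q=0.5, r=2.0):
    """Smooth noisy 1D GNSS positions z with a Kalman filter and an RTS pass.

    Constant-velocity state [position, velocity]; q and r are illustrative
    process and measurement noise levels.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = q * np.array([[dt ** 3 / 3, dt ** 2 / 2], [dt ** 2 / 2, dt]])
    R = np.array([[r]])
    n = len(z)
    x_f = np.zeros((n, 2))          # filtered states
    P_f = np.zeros((n, 2, 2))
    x_p = np.zeros((n, 2))          # predicted states
    P_p = np.zeros((n, 2, 2))
    x, P = np.array([z[0], 0.0]), np.eye(2) * 10.0
    for k in range(n):
        if k > 0:                   # predict
            x, P = F @ x, F @ P @ F.T + Q
        x_p[k], P_p[k] = x, P
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # update with the GNSS position
        x = x + K @ (np.array([z[k]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        x_f[k], P_f[k] = x, P
    x_s = x_f.copy()
    for k in range(n - 2, -1, -1):  # Rauch-Tung-Striebel backward pass
        C = P_f[k] @ F.T @ np.linalg.inv(P_p[k + 1])
        x_s[k] = x_f[k] + C @ (x_s[k + 1] - x_p[k + 1])
    return x_s[:, 0]                # smoothed positions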
Alternatively or in addition, performing a first level odometry correction of the trajectory based on the determined position information comprises performing the first level odometry correction using Visual Odometry and/or Simultaneous Localization and Mapping (SLAM). Accordingly, sequences of sets of sensor information of the environment of the vehicle can be analyzed. Hence, the first level odometry correction can improve the consistency of local and global localization based on environment sensing. The sensor information can be provided e.g. by optical cameras, LiDAR-based environment sensors or radar sensors. In the case of LiDAR-based environment sensors, odometry information can be derived from an affine transformation that optimizes point-wise differences of scan points of consecutive scans. This technique is motivated by the assumption that most of the sensed environment is static. The transformation can be calculated by an Iterative Closest Point algorithm with expectation maximization. Expectation maximization filters out outliers from the source and destination point clouds, i.e. scan points that do not have nearby points in the destination point cloud after transformation. Further stability of the algorithm can be obtained by accumulation of several sets of environment information over time. Also, mapping of scan points from more distant locations, e.g. 20 m or more, enforces consistency of the static environment on a larger scale. This method step is based on raw sensor information, i.e. raw scan points, and directly optimizes a quantity of interest. The transformation then gives an indication how the vehicle has moved and how the predicted trajectory information shall be corrected based on the instantaneous measurements of velocity and yaw rate. The transformation directly optimizes a quantity of interest by checking the consistency of the scan points as received from different positions. It also has a certain robustness against noise, occlusion and dynamic agents on the scene due to the outlier rejection.
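By way of illustration, the frame-to-frame transformation can be estimated with an Iterative Closest Point scheme as sketched below in Python. The sketch uses a simple distance threshold for outlier rejection instead of a full expectation-maximization formulation, and all parameter values and names are illustrative assumptions.

import numpy as np
from scipy.spatial import cKDTree

def icp_2d(source, target, iterations=30, reject_dist=1.0):
    """Estimate the rigid transform aligning 'source' scan points to 'target'.

    Point-to-point ICP with simple distance-based outlier rejection, a
    simplified stand-in for the expectation-maximization variant described
    above. Returns a 3x3 homogeneous transform.
    """
    T = np.eye(3)
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iterations):
        d, idx = tree.query(src)
        keep = d < reject_dist                      # reject outlier correspondences
        if keep.sum() < 3:
            break
        p, q = src[keep], target[idx[keep]]
        p_mean, q_mean = p.mean(axis=0), q.mean(axis=0)
        U, _, Vt = np.linalg.svd((p - p_mean).T @ (q - q_mean))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                    # enforce a proper rotation
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = q_mean - R @ p_mean
        step = np.eye(3)
        step[:2, :2], step[:2, 2] = R, t
        src = (R @ src.T).T + t
        T = step @ T
    return T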
According to a modified embodiment of the invention, the step of extracting ground truth data as reference data based on the sensor information comprises performing an automatic feature extraction, in particular a road boundary extraction and/or an object extraction, based on the sensor information. Feature extraction can comprise extraction of static features including the road boundaries, buildings, trees, or others, and movable features like further traffic participants, in particular third party vehicles. The road boundary extraction is in charge of extracting the road boundaries from the given sensor information. The output of the road boundary extraction comprises two road boundaries in world coordinates corresponding to the left and right road boundaries of the respective road. Other objects can be extracted in different ways, e.g. depending on their shape.
According to a modified embodiment of the invention, the step of performing a road boundary extraction comprises performing a road boundary detection and performing a road boundary polyline extraction. Road boundary detection refers to generating road detections from the given sensor information. The road boundary detection can comprise post-processing using e.g. image processing techniques to remove or mitigate detection artifacts.
Road boundary polyline extraction refers to an extraction of road boundary polylines from the road boundary detection. Accordingly, road contours are first extracted from the input road detection grid using e.g. standard image processing edge detection techniques. Blurring and Sobel operators can be applied to the extracted road boundaries. The contours are then clustered using a hierarchical clustering algorithm. The centers of the clusters are extracted and separated into left and right boundary points, respectively, for each frame, i.e. each set of sensor information. The left and right road boundaries are then accumulated in world coordinates, clustered, and the corresponding road boundary polylines are finally extracted using a graph-traversing based algorithm on the extracted clusters in world coordinates. The polylines correspond to pre-annotated or labelled road boundaries.
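By way of illustration, the per-frame contour extraction, clustering and left/right separation can be sketched as follows; the smoothing, thresholds, clustering parameters and the grid orientation assumed here are illustrative assumptions.

import numpy as np
from scipy import ndimage
from scipy.cluster.hierarchy import fcluster, linkage

def boundary_points_per_frame(road_grid, resolution=0.2, cluster_dist=1.0):
    """Extract left/right road-boundary points from one road-detection grid.

    road_grid: 2D array of road probabilities in ego coordinates, rows along
    the driving direction, columns lateral (column 0 at the left edge).
    """
    road = ndimage.gaussian_filter(road_grid, sigma=1.0) > 0.5         # blur and binarize
    edges = np.hypot(ndimage.sobel(road.astype(float), axis=0),
                     ndimage.sobel(road.astype(float), axis=1)) > 0.5  # Sobel edge magnitude
    rows, cols = np.nonzero(edges)
    pts = np.column_stack([rows, cols]) * resolution                   # grid cells to metres
    if len(pts) < 2:
        return np.empty((0, 2)), np.empty((0, 2))
    labels = fcluster(linkage(pts, method="single"), cluster_dist,
                      criterion="distance")                            # hierarchical clustering
    centers = np.array([pts[labels == c].mean(axis=0)
                        for c in np.unique(labels)])
    lateral_mid = road_grid.shape[1] * resolution / 2.0
    left = centers[centers[:, 1] < lateral_mid]                        # split into left/right
    right = centers[centers[:, 1] >= lateral_mid]
    return left, right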
According to a modified embodiment of the invention, the step of performing a road boundary detection comprises applying a deep neural network. Preferably, the deep neural network uses a Fully Convolutional Network (FCN) semantic segmentation approach.
According to a modified embodiment of the invention, the step of performing a road boundary extraction comprises performing a road boundary polyline simplification. Road boundary polyline simplification refers to a simplification of the extracted road boundary polylines. The road boundary polyline simplification can be performed using the Ramer-Douglas-Peucker algorithm to provide the annotators with pre-annotated road boundaries having a number of points that is easy to handle.
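By way of illustration, the Ramer-Douglas-Peucker simplification can be sketched as the following recursive Python function; the tolerance value is an illustrative assumption.

import numpy as np

def rdp_simplify(points, epsilon=0.5):
    """Simplify a polyline with the Ramer-Douglas-Peucker algorithm.

    points: Nx2 array of polyline vertices; epsilon is the maximum allowed
    deviation from the simplified line (illustrative default in metres).
    """
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    chord = end - start
    norm = np.linalg.norm(chord)
    if norm == 0.0:
        dists = np.linalg.norm(points - start, axis=1)
    else:
        # perpendicular distance of every vertex to the start-end chord
        dists = np.abs(chord[0] * (points[:, 1] - start[1])
                       - chord[1] * (points[:, 0] - start[0])) / norm
    idx = int(np.argmax(dists))
    if dists[idx] > epsilon:        # keep the farthest vertex and recurse on both halves
        left = rdp_simplify(points[:idx + 1], epsilon)
        right = rdp_simplify(points[idx:], epsilon)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])  # all vertices lie within the tolerance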
According to a modified embodiment of the invention, the step of extracting ground truth data as reference data based on the sensor information comprises automatic data labelling. The data labelling refers to a correct labelling of the scan points to identify features therein. The labelling can be performed as purely automatic labelling or human-assisted labelling. Hence, the automatic labelling can be based on automatically pre-annotated features, which are then labelled by a human. Automatic data labelling preferably comprises applying a deep neural network. Deep neural networks are becoming more and more important and powerful in the recognition of features in sensor information. The deep neural network performs the labelling in an efficient and automatic manner.
According to a modified embodiment of the invention, the step of performing data post-processing as a second level odometry correction of the trajectory based on the ground truth data and the global map comprises filtering off scan points away from extracted static features, scan points of movable features, and/or scan points out of a relevant distance range. In general, scan points which do not carry relevant information with respect to the extracted features are disregarded for further processing. In particular, when calculating the score based on the extracted features, e.g. by performing the distance minimization, merely information in respect to the extracted static features is required. This refers in particular to scan points not far away from extracted static features. A distance measure and a distance threshold are defined for separating the scan points. Furthermore, scan points of movable features are scan points within an area which cannot provide information in respect to static features like road boundaries. The scan points filtered off as scan points of movable features can include scan points overlapping with these objects or e.g. respective bounding boxes covering the movable features, or scan points which are closer to them than a given distance threshold. The distance range is defined by an upper threshold and a lower threshold defining a distance interval xij ∈ [dmin, dmax]. By filtering off the scan points out of the relevant distance range, only the scan points within the distance interval xij ∈ [dmin, dmax] are considered. The upper threshold dmax is chosen to limit measurement imprecision and noise of the scan points. The lower threshold dmin is chosen because scan points at a greater distance from the ego vehicle hold more relevant trajectory information. In an extreme case, where the scan points would be located in the origin of the VCS, they would not hold any information usable for the trajectory correction. Overall, the relevance of the processed information is increased by filtering off these scan points.
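By way of illustration, the three filtering criteria can be sketched as follows; the thresholds, the bounding-box representation of the movable features and the function name are illustrative assumptions.

import numpy as np
from scipy.spatial import cKDTree

def filter_scan_points(points_vcs, static_features, moving_boxes,
                       feature_dist=3.0, d_min=5.0, d_max=40.0):
    """Keep only scan points that are informative for the trajectory correction.

    points_vcs: Nx2 scan points in vehicle coordinates; static_features: Mx2
    points on extracted static features (e.g. road boundaries); moving_boxes:
    list of (x_min, y_min, x_max, y_max) boxes around movable objects.
    """
    # 1. keep points close to an extracted static feature
    near_feature = cKDTree(static_features).query(points_vcs)[0] < feature_dist
    # 2. drop points inside bounding boxes of movable features
    in_box = np.zeros(len(points_vcs), dtype=bool)
    for x0, y0, x1, y1 in moving_boxes:
        in_box |= ((points_vcs[:, 0] >= x0) & (points_vcs[:, 0] <= x1) &
                   (points_vcs[:, 1] >= y0) & (points_vcs[:, 1] <= y1))
    # 3. keep points inside the relevant distance interval [d_min, d_max]
    dist = np.linalg.norm(points_vcs, axis=1)
    in_range = (dist >= d_min) & (dist <= d_max)
    return points_vcs[near_feature & ~in_box & in_range]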
According to a modified embodiment of the invention, the step of performing data post-processing as a second level odometry correction of the trajectory based on the ground truth data and the global map comprises matching positions of the scan points of the global map with positions of the extracted static features. The extracted static features are part of the ground truth data. By way of example, a possible correction Go can be found in the form
[Equation for the correction Go reproduced as an image in the original publication.]
where v(t) is the tangential velocity of the point [Δx(t), Δy(t)] and φ'(t) is the time-derivative of φ(t). A Monte-Carlo technique is applied for the maximization of the score. In every step, the parameters aφ, bφ, cφ, av, bv, cv are generated from a random uniform distribution. From the equation above, the matrices of the affine transformation of the candidate GCS Go(φ(t), Δx(t), Δy(t)) are calculated. A solution with a better score is always accepted, and the already accepted solution is used as the mean value for the following parameter generation. A simulated annealing schedule is used to prevent trapping in local optima. Preferably, the spread of the generated parameters is initially large to allow broader sampling of the state space, and it is gradually decreased to allow convergence. The whole matching algorithm can be as follows:
For D in [D1, D2], where D1 > D2:
For i from 1 to N:
a. Generate aφ, bφ, cφ, av, bv, cv from a random uniform distribution with means aφ,best, bφ,best, cφ,best, av,best, bv,best, cv,best.
b. Calculate the score.
c. If score > scorebest: set aφ,best, bφ,best, cφ,best, av,best, bv,best, cv,best, scorebest := aφ, bφ, cφ, av, bv, cv, score.
d. If scorebest was not updated for M cycles, halve the spread of the distributions from step a.
The reference data obtained in this way can then be compared with the system output using defined scores, which are mapped to system requirements. The data post-processing thus comprises a global coordinates correction of the GCS.
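By way of illustration, this Monte-Carlo maximization with the spread-halving schedule can be sketched in Python as follows; the score function is passed in as a callable, and the iteration counts, spreads and names are illustrative assumptions.

import numpy as np

def monte_carlo_correction(score_fn, n_params=6, iterations=2000,
                           spread=1.0, patience=50):
    """Monte-Carlo maximization of a trajectory score with a simple
    simulated-annealing-style schedule.

    score_fn maps a parameter vector (a_phi, b_phi, c_phi, a_v, b_v, c_v)
    to the score of the resulting candidate GCS.
    """
    rng = np.random.default_rng(0)
    best = np.zeros(n_params)
    best_score = score_fn(best)
    since_improvement = 0
    for _ in range(iterations):
        candidate = best + rng.uniform(-spread, spread, size=n_params)
        score = score_fn(candidate)
        if score > best_score:                  # accept only better candidates
            best, best_score = candidate, score
            since_improvement = 0
        else:
            since_improvement += 1
        if since_improvement >= patience:       # halve the sampling spread
            spread *= 0.5
            since_improvement = 0
    return best, best_score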
According to a modified embodiment of the invention, the step of matching positions of the scan points of the global map with positions of the extracted static features comprises performing a maximization of score dependent on the positions of the scan points of the global map and the positions of the extracted static features. This enables a simple means for matching the positions of the scan points in the global map with positions of the extracted features. The maximization is preferably performed as an iterative step, which can provide a detailed matching.

According to a modified embodiment of the invention, the step of performing data post-processing as a second level odometry correction of the trajectory based on the ground truth data and the global map comprises providing a corrected global map. The corrected global map can be used for further purposes, e.g. validation purposes.
According to a modified embodiment of the invention, the at least one environment sensor comprises at least one out of a LiDAR-based environment sensor and/or a radar sensor.
These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter. Individual features disclosed in the embodiments can constitute alone or in combination an aspect of the present invention. Features of the different embodiments can be carried over from one embodiment to another embodiment.
In the drawings:
Fig. 1 shows a schematic view of a vehicle with a driving support system according to a first, preferred embodiment,
Fig. 2 shows a detailed flow chart of an implementation of a road boundary extraction in accordance with the method of the first embodiment,
Fig. 3 shows a schematic view of a deep neural network using a Fully Convolutional Network (FCN) semantic segmentation for performing the road boundary detection in accordance with the method of the first embodiment,
Fig. 4 shows three pictures indicating on the left a model input, in the center a model output and on the right an overlay of the model input and the model output,
Fig. 5 shows in the upper drawing a road with road boundaries marked with extracted polylines in world coordinates, in the lower left drawing sensor information including scan points from the environment of the vehicle together with a road detection plot in ego coordinates, in the lower middle drawing a detected road together with extracted road boundary points forming the polyline in ego coordinates, and in the lower right drawing sensor information including scan points from the environment of the vehicle together with extracted road boundary points forming the polyline in ego coordinates,
Fig. 6 shows the vehicle driving in the environment with the x-axis oriented in the driving direction on the road together with road boundaries as features, which are indicated by polylines,
Fig. 7 shows a partial view of a global map together with a trajectory and a corrected trajectory with scan points before correction,
Fig. 8 shows a partial view of a global map together with the corrected trajectory of Fig. 7 with scan points after correction, and
Fig. 9 shows a detailed flow chart of the method for providing accurate trajectory information according to the first embodiment.
Figure 1 shows a vehicle 10, further referred to as ego vehicle, with a validation system 12 according to a first, preferred embodiment. The validation system 12 of the first embodiment is provided for performing validation of a driving support system, e.g. an advanced driver assistance system (ADAS) as primary system.
The validation system 12 comprises a LiDAR-based environment sensor 14 as environment sensor for providing sensor information covering an environment 16 of the ego vehicle 10. The validation system 12 further comprises a receiver 18 for receiving position information from a Global Navigation Satellite System. The validation system 12 also comprises a processing unit 20, which is connected to the receiver 18 and the LiDAR-based environment sensor 14 via communication bus 22. The validation system 12 still further comprises odometry sensors for providing odometry information of the ego vehicle 10. The odometry sensors are not shown in figure 1 and comprise sensors for instantaneous measurements of first order quantities such as velocity and yaw rate. Hence, the odometry sensors comprise a wheel tick sensor for providing wheel ticks and a steering wheel sensor for providing a steering wheel angle. Also the wheel tick sensor and the steering wheel sensor are connected to the processing unit 20 via communication bus 22.
Subsequently, a method for providing accurate trajectory information, in particular for use in a validation of a driving support system of a vehicle 10, will be described. The method is performed using the validation system 12 of the first embodiment. Flow charts illustrating the method are shown in figures 2 and 9. The method will be described with additional reference to figures 1 and 3 to 8.
The method starts with step S100, which refers to receiving odometry information in respect to a movement of the ego vehicle 10. Hence, odometry information is received from the wheel tick sensor and the steering wheel sensor indicating a velocity and a yaw rate.
Step S110 refers to determining a trajectory 42 based on the odometry information. The trajectory information refers to a trajectory 42, which has been driven by the ego vehicle 10. The odometry information provided over time provides the trajectory 42. The trajectory terminates at a current position of the ego vehicle 10.
Step S120 refers to performing data acquisition of sensor information from the LiDAR- based environment sensor 14 covering the environment 16 of the ego vehicle 10. The sensor information comprises individual scan points 38 covering an individual field of view as part of the environment 16. The scan points 38 provide distance information in a given angular resolution in the field of view. In addition to the distance information, the scan points 38 can optionally provide supplementary information like an intensity value for each scan point 38.
Step S130 refers to determining a position of the ego vehicle 10 based on position information from a Global Navigation Satellite System (GNSS). The position information is received from the receiver 18 and transmitted via communication bus 22 to the processing unit 20, where the further data processing is performed. In order to obtain precise position information, a correction step of the determined position is included. The correction step includes in one embodiment use of differential GNSS or satellite-based augmentation systems (SBAS). In another embodiment, the correction is based on a fusion of position information from the GNSS together with information from an inertial measurement unit (IMU). In yet another embodiment, real-time kinematics (RTK) techniques are used, which use carrier-based ranging to provide positions that are orders of magnitude more precise than standard code-based positioning from GNSS. In a still further embodiment, a correction step is performed based on GNSS data using traditional smoothing techniques such as the (extended) Rauch-Tung-Striebel (RTS) smoother. The correction step can be supported in each case by highly precise sensors for sensing a relative movement, such as ring laser gyroscopes (RLG) with an accuracy better than 0.01 degree/hour, or wheel vector sensors, which can precisely measure wheel movements in multiple axes. In an alternative embodiment, the correction step is part of the first level odometry correction of step S140.
Step S140 refers to performing a first level odometry correction of the trajectory 42. In this embodiment, the first level odometry correction comprises a correction of the trajectory 42 based on the determined position information. Furthermore, in this embodiment, the first level odometry correction of the trajectory 42 is performed using e.g. Visual Odometry and/or Simultaneous Localization and Mapping (SLAM). Accordingly, sequences of sensor information of the environment 16 of the ego vehicle 10, i.e. sensor information from the LiDAR-based environment sensor 14, are analyzed. For the LiDAR-based environment sensor 14, odometry information can be derived from an affine transformation that optimizes point-wise differences of scan points 38 of consecutive sets of sensor information based on the assumption that most of the sensed environment 16 is static. A set of sensor information refers to sensor information acquired at a certain moment of time, so multiple sets of sensor information can form a sequence. The transformation is calculated by an Iterative Closest Point algorithm with expectation maximization, which filters out outliers from the source and destination point clouds, i.e. scan points 38 that do not have nearby points in the destination point cloud after transformation. Further stability of the algorithm is obtained by accumulation of several sets of sensor information over time and mapping of scan points 38 from more distant locations, e.g. twenty meters or more, which enforces consistency of the static environment on a larger scale. This sub-step is based on raw sensor information, i.e. the scan points 38, and directly optimizes a quantity of interest. The transformation then gives an indication how the ego vehicle 10 has moved and how the predicted trajectory information shall be corrected based on the instantaneous measurements of velocity and yaw rate.
Step S150 refers to generating a global map 40 comprising the scan points 38 of the sensor information. Such global maps 40 with the scan points 38 are shown by way of example in figures 7 and 8. Generating the global map 40 comprises transforming the scan points 38 of the LiDAR-based environment sensor 14, which are provided each in a respective sensor coordinate system, to a global coordinate system (GCS), as discussed above. The scan points 38 from the LiDAR-based environment sensor 14 can be transformed first into a vehicle coordinate system (VCS), which provides a common coordinate system e.g. in case of multiple individual environment sensors 14.
Step S160 refers to extracting ground truth data as reference data based on the sensor information. This step comprises performing an automatic feature extraction, in particular a road boundary extraction and an object extraction, based on the sensor information. Feature extraction comprises extraction of static features including the road boundaries 28 and movable features like further traffic participants, in particular third party vehicles. The road boundary extraction, as indicated in figure 2, is in charge of extracting the road boundaries 28 from the given sensor information. The output of the road boundary extraction comprises two road boundaries 28 in world coordinates corresponding to the left and right road boundaries 28 of the respective road 26.
In detail, extraction of the road boundaries 28 comprises performing a road boundary detection and performing a road boundary polyline extraction. Road boundary detection refers to generating road detections from the acquired sensor information. The road boundary detection can comprise post-processing using e.g. image processing techniques to remove or mitigate detection artifacts.
Road boundary detection comprises applying a deep neural network, which is shown in figure 3. The deep neural network uses a Fully Convolutional Network (FCN) semantic segmentation approach with multiple individual layers. Road boundary detection is based on the sensor information from the LiDAR-based environment sensor 14. The input in this embodiment is a grayscale image with mean accumulated occupancy values of the resampled scan points 38 for a given region of interest (ROI), a frame range and a resampling resolution. By way of example, configuration values are a ROI of [0, 64]x[-16, 16], a frame range of 80 frames (70 frames forward and 10 frames backwards) and a grid resolution of 0.2 meters, giving an input image size of 320x160 pixels. The output is the probability of presence of road detection on each grid cell. The model’s encoder block extracts features and downsamples them to reduce the memory requirements, the context module aggregates multi-scale contextual information using dilated convolutions, and finally the decoder block upsamples the feature maps back to the input size using the mask provided by the max pooling layer.
To train the above model, a set of manually annotated traces is used to create the training dataset. This step is performed using the accumulated scan points 38. Data augmentation is performed to avoid overfitting and to improve the model’s generalization. The dataset is shuffled and the training/validation split is 80% for training and 20% for validation. The model is trained with the Adam optimizer using the cross-entropy loss function.
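By way of illustration, a segmentation network of the described kind and its training step can be sketched in Python with PyTorch as follows. The channel counts, dilation rates and hyperparameters are illustrative assumptions and do not reproduce the exact model of this embodiment.

import torch
import torch.nn as nn

class RoadBoundaryFCN(nn.Module):
    """Small FCN-style segmentation net: encoder, dilated context, decoder.

    Input: 1xHxW occupancy grid; output: per-pixel road logits (2 classes).
    """
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True))
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)
        self.context = nn.Sequential(                      # dilated convolutions
            nn.Conv2d(32, 32, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=4, dilation=4), nn.ReLU(inplace=True))
        self.unpool = nn.MaxUnpool2d(2, stride=2)
        self.decoder = nn.Sequential(
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 2, 1))                           # two-class logits

    def forward(self, x):
        feats = self.encoder(x)
        pooled, indices = self.pool(feats)                 # keep the pooling mask
        pooled = self.context(pooled)
        upsampled = self.unpool(pooled, indices)           # unpool with the mask
        return self.decoder(upsampled)

def train_step(model, optimizer, grids, labels):
    """One training step with Adam and cross-entropy, as described above."""
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(grids), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# model = RoadBoundaryFCN()
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)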
Figure 4 shows the input and output of the road boundary detection. The left picture of figure 4 indicates a model input, the center picture indicates a model output with the road 26 and the road boundaries 28, and the right picture shows an overlay of the model input and the model output.
Road boundary polyline extraction refers to an extraction of road boundary polylines 30 from the road boundary detection, as indicated in figure 5. The polylines 30 refer to a set of individual points, which together define a line. Accordingly, road contours are first extracted from an input road detection grid using standard image processing edge detection techniques. Blurring and Sobel operators can be applied to the extracted road boundaries 28. The contours are then clustered using a hierarchical clustering algorithm. The centers of the clusters are extracted and separated into left and right boundary points, respectively, for each frame. The left and right road boundaries 28 are then accumulated in world coordinates, clustered, and the corresponding road boundary polylines 30 are finally extracted using a graph-traversing based algorithm on the extracted clusters in world coordinates. The polylines 30 correspond to pre-annotated or labelled road boundaries 28. Figure 5 shows in the upper drawing a road 26, where the road boundaries 28 are marked with extracted polylines 30 in world coordinates. The lower left drawing indicates sensor information including scan points 38 from the environment 16 of the ego vehicle 10 together with a road detection plot in ego coordinates. The lower middle drawing indicates a detected road 26 together with extracted road boundary points forming the polyline 30 in ego coordinates. The lower right drawing indicates sensor information including scan points 38 from the environment 16 of the ego vehicle 10 together with extracted road boundary points forming the polyline 30 in ego coordinates.
Extraction of the road boundaries 28 further comprises performing a road boundary polyline simplification. Road boundary polyline simplification refers to a simplification of the extracted road boundaries polylines 30. The road boundary polyline simplification can be performed e.g. using the Ramer-Douglas-Peucker algorithm to provide to the annotators pre-annotated road boundaries 28 with a number of scan points 38 easy to handle.
The detected road boundaries 28 are provided for further processing. Hence, automatic data labelling is performed of the road boundaries 28 as features.
Furthermore, extraction of the ground truth data can comprise automatic data labelling. The data labelling refers to a correct labelling of the scan points 38 according to features, so that they can be further processed in a suitable way. The labelling can be performed as only automatic labelling or human assisted labelling. Hence, the automatic labelling can be based on automatically pre-annotated features, which are labelled by a human. Automatic data labelling preferably comprises applying a deep neural network.
Step S170 refers to performing data post-processing as a second level odometry correction of the trajectory 42 based on the ground truth data and the global map 40. It is based on the ground truth data, i.e. annotated static features like the road boundaries 28, movable features and raw reference scan points 38. It performs a correction of a trajectory 42 of the ego vehicle 10 and extracts a corrected trajectory 44 of the ego vehicle 10 using the ground truth data, as can be seen e.g. in figures 7 and 8. Thereby, it further improves extracted odometry from the first level odometry correction. First, scan points 38 away from extracted static features, scan points 38 of movable features, and/or scan points 38 out of a relevant distance range are filtered off to remove some of the scan points 38. In general, scan points 38, which do not carry relevant information with respect to the extracted features, are filtered off and disregarded for further processing.
Hence, scan points 38 far away from extracted static features like road boundaries 28 are filtered off. Figure 6 shows the ego vehicle 10 driving in the environment 16 with the x-axis oriented in the driving direction. Figure 6 further shows the road 26 and the road boundaries 28 as features, indicated by the polylines 30. Scan points 38 relevant for the score calculation are within a distance D from any of the road boundaries 28. A distance measure and a distance threshold are defined for the distance D for separating the scan points 38.
Furthermore, scan points 38 of movable features are filtered off. Scan points 38 of movable features are scan points 38 within an area of extracted, movable objects, in particular other vehicles, pedestrians, bicycles, or others. The movable features can be marked by bounding boxes. The scan points 38 filtered off as scan points 38 of movable features can include scan points 38 overlapping with these features or the respective bounding boxes, or scan points 38, which are closer to them than a given distance threshold.
Still further, scan points 38 out of a relevant distance interval 32 are filtered off. The distance interval 32 is defined by an upper threshold dmax and a lower threshold dmin defining a distance interval xij ∈ [dmin, dmax]. Figure 6 shows the upper threshold dmax and the lower threshold dmin defining the distance interval 32 therebetween.
After filtering, the score can be defined and reliably calculated based on distances between the scan points 38 and the respective ground truth data as
[Score equation reproduced as an image in the original publication.]
where the sum is calculated over the scan points 38 rij. The constant ξ provides a scale factor. The third power in the score formula is chosen based on empirical rules and takes into account that the density of the scan points is inversely proportional to the second power of the distance d. Hence, the score is determined for the sensor information provided from the LiDAR-based environment sensor 14, i.e. the respective scan points 38, as obtained in VCS coordinates, for the respective static features, which are the road boundaries 28 in this embodiment. In particular, the features can be defined in Gi. For determining the score, the features are transformed into the respective VCS.
A score maximization is performed. The score maximization refers to a correction of the GCS as a process with GCS Gi on the input and GCS Go on the output. Both represent approximations of the real GCS. In general, Go is created by a transformation φi(t), Δxi(t), Δyi(t) → φo(t), Δxo(t), Δyo(t), where φo(t), Δxo(t), Δyo(t) maximize the score under some constraints. In case the ground truth data is already provided in the input GCS, it has to be determined how to transfer the ground truth data into the new coordinate system. While there is one Gi coordinate system and one Go coordinate system, there is one VCS per measurement frame. The ground truth data comprises one or more static features, for example the road boundaries 28, which are represented by a set of points [xij, yij]. The upper index i denotes the number of the point and the lower index j is the frame number of the local coordinate system. The points do not arise from measurement, but from the ground truth data, and it is therefore not given to which VCS they belong. For a point [xij, yij], the VCS V(tj) is chosen, where the time tj is chosen so that the distance of [xij, yij] from the origin of V(tj) is minimal. In this embodiment, it is assumed that the point is represented best in the local coordinate system in which it is closest to the ego vehicle 10, because of the sensor observation model.
As discussed above, multiple sets of sensor information are acquired over time and commonly processed to perform the second level odometry correction. The score is based on the individual scan points 38 of the sensor information provided in VCS for each set of sensor information, and the ground truth data provided in GCS Gi. The best score matching the ground truth data is determined, i.e. it is determined when the positions of the scan points 38 of the global map 40 best fit the positions of the ground truth data, i.e. the positions of the extracted road boundaries 28 in this embodiment. The best score value indicates the odometry correction for further use.
By way of example, a possible correction Go can be found in form
[Equation for the correction Go reproduced as an image in the original publication.]
where v(t) is the tangential velocity of the point [Δx(t), Δy(t)] and φ'(t) is the time-derivative of φ(t). A Monte-Carlo technique is applied for the maximization of the score. In every step, the parameters aφ, bφ, cφ, av, bv, cv are generated from a random uniform distribution. From the equation above, the matrices of the affine transformation of the candidate GCS Go(φ(t), Δx(t), Δy(t)) are calculated. A solution with a better score is always accepted, and the already accepted solution is used as the mean value for the following parameter generation. A simulated annealing schedule is used to prevent trapping in local optima. Preferably, the spread of the generated parameters is initially large to allow broader sampling of the state space, and it is gradually decreased to allow convergence. The maximization of the score dependent on the positions of the ground truth data and the positions of the scan points 38 is performed as an iterative step. The whole matching algorithm performing the maximization of the score dependent on the positions of the scan points 38 of the global map 40 and the positions of the extracted static features can be as follows:
For D in [D1, D2], where D1 > D2:
For i from 1 to N:
a. Generate aφ, bφ, cφ, av, bv, cv from a random uniform distribution with means aφ,best, bφ,best, cφ,best, av,best, bv,best, cv,best.
b. Calculate the score.
c. If score > scorebest: set aφ,best, bφ,best, cφ,best, av,best, bv,best, cv,best, scorebest := aφ, bφ, cφ, av, bv, cv, score.
d. If scorebest was not updated for M cycles, halve the spread of the distributions from step a.
The reference data obtained in this way can then be compared with the system output using a defined score, which is mapped to system requirements. The data post-processing thus comprises a global coordinates correction of the GCS.
The second level odometry correction of the trajectory 42 based on the ground truth data and the global map 40 also provides a corrected global map 40, which can be used for further purposes, e.g. validation purposes. Hence, the global map as shown in figure 7 is corrected, as shown in figure 8.
Step S180 refers to providing the accurate trajectory information based on the second level odometry correction. Hence, the correction information obtained in step S170 is applied, so that the trajectory information of the trajectory 42 can be corrected and provided with a best available accuracy. The trajectory 42 as provided prior to second level odometry correction can be seen by way of example in figure 7 in the respective global map 40. Based on the correction information of step S170, a corrected trajectory 44 is provided as accurate trajectory information.
As can be further seen in figures 7 and 8, the knowledge of the corrected trajectory 44 facilitates a correct determination of a current position of the ego vehicle 10 and, e.g., the generation of the global map 40, thereby showing the environment 16 of the ego vehicle 10 with a high level of accuracy. The global map 40 is shown in figure 7 before and in figure 8 after correction.

Reference signs list
10 vehicle, ego vehicle
12 validation system
14 LiDAR-based environment sensor, environment sensor
16 environment
18 receiver
20 processing unit
22 communication bus
26 road
28 road boundary
30 polyline
32 distance interval
38 scan point
40 global map
42 trajectory
44 corrected trajectory

Claims

Patent claims
1. Method for providing accurate trajectory information, in particular for use in a validation of a driving support system of a vehicle (10), comprising the steps of receiving odometry information in respect to a movement of the vehicle (10), determining a trajectory (42) based on the odometry information, performing data acquisition of sensor information from at least one environment sensor (14) covering an environment (16) of the vehicle (10), whereby the sensor information comprises individual scan points (38), generating a global map (40) comprising the scan points (38) of the sensor information, extracting ground truth data as reference data based on the sensor information, performing data post-processing as a second level odometry correction of the trajectory (42) based on the ground truth data and the global map (40), and providing the accurate trajectory information based on the second level odometry correction.
2. Method according to claim 1, characterized in that the method comprises steps of determining a position of the vehicle (10) based on position information from a Global Navigation Satellite System, and performing a first level odometry correction of the trajectory (42) based on the determined position information.
3. Method according to claim 2, characterized in that the step of determining a position of the vehicle (10) based on position information from a Global Navigation Satellite System comprises performing a correction step of the determined position.
4. Method according to any preceding claim, characterized in that the step of extracting ground truth data as reference data based on the sensor information comprises performing an automatic feature extraction, in particular a road boundary extraction and/or an object extraction, based on the sensor information.
5. Method according to claim 4, characterized in that the step of performing a road boundary extraction comprises performing a road boundary detection and performing a road boundary polyline extraction.
6. Method according to claim 5, characterized in that the step of performing a road boundary detection comprises applying a deep neural network.
7. Method according to any of preceding claims 4 to 6, characterized in that the step of performing a road boundary extraction comprises performing a road boundary polyline simplification.
8. Method according to any preceding claim, characterized in that the step of extracting ground truth data as reference data based on the sensor information comprises automatic data labelling.
9. Method according to any preceding claim, characterized in that the step of performing data post-processing as a second level odometry correction of the trajectory (42) based on the ground truth data and the global map (40) comprises filtering off scan points (38) away from extracted static features, scan points (38) of movable features, and/or scan points (38) out of a relevant distance range.
10. Method according to any preceding claim, characterized in that the step of performing data post-processing as a second level odometry correction of the trajectory (42) based on the ground truth data and the global map (40) comprises matching positions of the scan points (38) of the global map (40) with positions of extracted static features.
11. Method according to claim 10, characterized in that the step of matching positions of the scan points (38) of the global map (40) with positions of the extracted static features comprises performing a maximization of score dependent on the positions of the scan points (38) of the global map (40) and the positions of the extracted static features.
12. Method according to any preceding claim, characterized in that the step of performing data post-processing as a second level odometry correction of the trajectory (42) based on the ground truth data and the global map (40) comprises providing a corrected global map (40).
13. Validation system (12), in particular for validation of an advanced driver assistance system, based on accurate trajectory information, comprising a receiver (18) for receiving position information from a Global Navigation Satellite System, at least one environment sensor (14) for providing sensor information covering an environment (16) of the vehicle (10), whereby the sensor information comprises individual scan points (38), at least one odometry sensor for providing odometry information, and a processing unit (20) connected to the receiver (18), the at least one environment sensor (14) and the at least one odometry sensor, wherein the validation system (12) is adapted to perform the method of any of above claims 1 to 12.
14. Validation system (12) according to claim 13, characterized in that the at least one environment sensor (14) comprises at least one out of a LiDAR-based environment sensor (14) and/or a radar sensor.
PCT/EP2020/076467 2019-10-02 2020-09-23 Improved trajectory estimation based on ground truth WO2021063756A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102019126631.9 2019-10-02
DE102019126631.9A DE102019126631A1 (en) 2019-10-02 2019-10-02 Improved trajectory estimation based on ground truth

Publications (1)

Publication Number Publication Date
WO2021063756A1 true WO2021063756A1 (en) 2021-04-08

Family

ID=72615882

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2020/076467 WO2021063756A1 (en) 2019-10-02 2020-09-23 Improved trajectory estimation based on ground truth

Country Status (2)

Country Link
DE (1) DE102019126631A1 (en)
WO (1) WO2021063756A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11841708B2 (en) * 2020-02-28 2023-12-12 Zoox, Inc. System and method for adjusting a planned trajectory of an autonomous vehicle
DE102021105342A1 (en) 2021-03-05 2022-09-08 Bayerische Motoren Werke Aktiengesellschaft Method and system for generating ground truth data to secure an environment detection

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160011594A1 (en) * 2014-07-09 2016-01-14 Korea University Research And Business Foundation Method for extracting curb of road using laser range finder and method for localizing of mobile robot using curb informaiton of road
US20170097642A1 (en) * 2015-09-16 2017-04-06 Denso Corporation Apparatus for correcting vehicle location
US20190243382A1 (en) * 2018-02-08 2019-08-08 Denso Corporation Driving support device, storage medium, and driving support method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3078935A1 (en) * 2015-04-10 2016-10-12 The European Atomic Energy Community (EURATOM), represented by the European Commission Method and device for real-time mapping and localization
US10782694B2 (en) * 2017-09-07 2020-09-22 Tusimple, Inc. Prediction-based system and method for trajectory planning of autonomous vehicles
DE102019113114A1 (en) * 2018-06-19 2019-12-19 Nvidia Corporation BEHAVIOR-CONTROLLED ROUTE PLANNING IN AUTONOMOUS MACHINE APPLICATIONS

Also Published As

Publication number Publication date
DE102019126631A1 (en) 2021-04-08

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20776151

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20776151

Country of ref document: EP

Kind code of ref document: A1