WO2022247994A1 - Sensor data fusion system for environment perception

Sensor data fusion system for environment perception

Info

Publication number
WO2022247994A1
WO2022247994A1 (PCT/DE2022/100397)
Authority
WO
WIPO (PCT)
Prior art keywords
grid
data
occupancy
model
environment
Prior art date
Application number
PCT/DE2022/100397
Other languages
German (de)
English (en)
Inventor
Philipp Materne
Christoph Hartwig
Original Assignee
Iav Gmbh Ingenieurgesellschaft Auto Und Verkehr
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Iav Gmbh Ingenieurgesellschaft Auto Und Verkehr
Publication of WO2022247994A1

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/93Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804Creation or updating of map data
    • G01C21/3833Creation or updating of map data characterised by the source of data
    • G01C21/3848Data obtained from both position sensors and additional sensors
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/865Combination of radar systems with lidar systems
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867Combination of radar systems with cameras
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/89Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4808Evaluating distance, position or velocity data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Definitions

  • the invention relates to a system for merging data from different sensors in the field of environment perception, specifically for the environment perception of highly automated or autonomous road users, such as self-driving vehicles, mobile robots and driverless transport systems.
  • Examples would be the fusion of several radars, e.g. for an adaptive cruise control, or the lidar camera fusion, in which lidar point clouds are transformed into the camera image and then fused with the object classes recognized by the camera. Objects are then generated in the vehicle coordinate system from the classified lidar points.
  • the necessary pre-processing of the sensor data in the sensor is disadvantageous, since only defined object data can be processed.
  • in order to generate this object data, the respective sensors must be designed for raw data evaluation and object extraction and therefore each require high decentralized computing power.
  • the classic object data fusion cannot be expanded with further information layers, eg unoccupied areas, and/or data from other sensor sources without object extraction.
  • the second approach uses occupancy grid-based particle filters, which determine the static and dynamic elements of the environment on the basis of individual grid cells.
  • Particle filter is a common name for sequential Monte Carlo methods. These estimate the current but unknown probability density of the distribution of a state variable. For this purpose, a large number of particles (pairs of characteristics and point coordinates in the state space) are generated; the totality of the particles represents the probability density.
  • the particle filter approach has the disadvantage that the large number of processing steps to be calculated in parallel, for estimating the state on the basis of a large number of estimation methods and a large number of particles, requires very high computing power. This is currently provided by graphics processors (GPUs), which are designed for parallel data processing.
  • DE 102015201747 A1 discloses a sensor system for a vehicle in which the sensor devices calculate an occupancy grid from their respective raw data and a control device calculates a merged occupancy grid from the individual occupancy grids.
  • DE 102020005597 A1 describes a method for generating an environment map for environment representation in the form of an occupancy grid whose coordinate origin is dynamically adapted depending on the driving speed, the driving situation or a vehicle application using the environment representation; the resolution and the displayed detection area of the occupancy grid can be set depending on the coordinate origin.
  • the input data from various sensors, such as cameras, radars, lidars, etc., are displayed in the occupancy grid.
  • DE 102016209704 A1 relates to a method for controlling a personal protection device of a vehicle, in which information relevant to an accident is processed in an occupancy grid. On the basis of this environmental grid, the processed information is already available in a time-critical driving state and can therefore be used for rapid and robust triggering of the personal protection device.
  • DE 102017217972 A1 relates to a method for generating an inverse sensor model for radar sensor devices.
  • real obstacles are positioned in the vicinity of the radar sensor device and then measured by the radar sensor device.
  • the inverse sensor model is generated using the generated radar measurement data and the specified dimensions and spatial positions of the obstacles and assigns an occupancy probability to a cell of an occupancy grid as a function of specified radar measurement data.
  • the object of the present invention is to create an environment model for highly automated or autonomously driving vehicles with a large number of sensors in a resource-minimised manner with a high degree of reliability.
  • the object is achieved by a method for creating an environment model for a highly automated or autonomously operated vehicle with at least two sensors for detecting the environment according to the measures of independent claim 1, and by a control unit, a computer-implemented method, a computer program and a program code according to claims 11 to 14 solved.
  • Advantageous configurations of the invention are the subject matter of the dependent claims.
  • Environment models, also called surroundings models, are known per se for mobile applications, such as driver assistance systems in motor vehicles or mobile robots.
  • For use in a highly automated or autonomously operated vehicle (degree of automation level 3 to 5 according to SAE standard J3016; automated and autonomous mode according to the German Federal Highway Research Institute, BASt), there are particularly high demands on the quality of the environment model.
  • Environment models are an image of the environment surrounding the vehicle that is as accurate as possible. They extend from the vehicle to the detection limit of the environmental sensors, which can be 100 m away or more for long-distance sensors. Environment models contain as many stationary and dynamic, i.e. moving, objects as possible. Dynamic objects can be other vehicles or pedestrians. Stationary objects are fixed objects, such as infrastructure elements, traffic lights, traffic signs, but also lane markings. Ideally, the environment model not only contains the occupancy of grid cells by these objects, referenced via a vehicle coordinate system, but also further information about these objects. This can be dynamic information such as speed, direction of movement or a predicted position, but also object properties such as object class, lane data, traffic information or free spaces. In order to capture the environmental data required to obtain this content, vehicles have a large number of different sensors installed to capture the environment. These differ in arrangement, detection principle and sensor data evaluation.
  • Lidar sensors emit laser pulses in the non-visible spectrum and receive the light reflected at specific points. This creates a high-resolution image of the covered environment, which, however, consists of a very large number of individual sensor values.
  • Radar sensors emit electromagnetic waves in the radio frequency range and likewise detect reflected signals at specific points. However, the point density and the resolution are significantly lower than with lidar sensors. Both lidar and radar have the greatest measuring distance of the usual environment sensors (long-distance sensors) and can in principle calculate distances (radar, ToF lidar - time of flight) and relative speeds (radar, FMCW lidar - frequency modulated continuous wave) of objects in the environment on the basis of runtime differences and the Doppler effect.
  • Optical sensors, such as cameras, have a significantly higher resolution/pixel density than radar sensors and often also than lidar sensors. However, estimating the positions of objects is significantly less accurate. They are therefore preferably used as medium- to short-distance sensors.
  • the images from the optical sensors are processed, with objects being recognized and identified using feature vectors for object classification, using identified geometric shapes or using the optical flow. This is often implemented using machine learning methods, for example in the object recognition of a convolutional neural network (CNN).
  • the pre-processing of the object data is usually already integrated in the camera.
  • These sensors are available in different versions with different measuring ranges and resolutions. All sensors are calibrated in the vehicle coordinate system depending on the field of vision, which means that a sensor value can be located in the environment relative to the vehicle.
  • the method according to the invention uses at least two, preferably a large number of sensors.
  • An exemplary sensor configuration for highly automated applications in urban areas consists of two lidar sensors arranged on the roof for monitoring the front and rear area at long distances and four lidar sensors arranged at the corners of the vehicle floor for detecting the environment at medium distances.
  • Four additional lidar sensors are arranged in the center of the vehicle's roof edges. Their detection range is directed downwards to capture the close range around the vehicle.
  • the recorded environment data is used in addition to self-localization within the environment, among other things for pedestrian or lane detection.
  • Eight radar sensors are arranged on two levels at the corners of the vehicle to monitor medium distances. These have different detection properties to reduce the influence of different weather conditions and to increase detection reliability through redundant or overlapping detection areas.
  • the exemplary sensor configuration includes five optical cameras, one of which is aligned in the direction of travel to provide object and lane information in the short to medium range, and four additional cameras arranged on the vehicle edges to capture the immediate surroundings around the vehicle.
  • a V2X-based sensor can also be installed, which wirelessly receives information from other road users or infrastructure facilities in the form of objects via vehicle-to-vehicle or vehicle-to-infrastructure communication.
  • an assignment rule between the coordinate systems of the vehicle and the communication partner is required in order to be able to use the transmitted object positions.
  • the assignment can take place via a global coordinate system.
  • the method according to the invention can be used particularly advantageously in such sensor configurations with differently generated environment data (lidar and radar provide point clouds, cameras and V2X-based sensors provide objects), since it enables a very efficient use of resources to create the environment model with the highest detection or model quality. In principle, however, the method can also be used advantageously with a smaller number of sensors.
  • the surroundings are represented by means of an occupancy grid as a coherent arrangement of grid cells.
  • a grid cell dimension is assigned to each grid cell and the resolution of the occupancy grid is determined by the number of grid cells for a defined section of the environment.
  • Occupancy grids are discrete representations of the continuous environment and are known per se. When creating them, the section of the environment to be covered must be defined. The shape and size of the grid cells as well as the resolution of the occupancy grid, i.e. the number of grid cells in relation to the area covered, must also be specified. Occupancy grids can be designed as polar grids or as Cartesian grids.
  • each grid cell is assigned a grid cell dimension. In the simplest and preferred case, this is an occupancy probability. This indicates how high the probability is that an object is present in the section of the environment represented by a grid cell.
  • the grid cell metric can also be expressed as a cost value.
  • the grid cell measures can be reset to an initial value or taken over from the previous time step. For example, the initial value of the grid cell measure is 0.5. This corresponds to an undetermined occupancy of the grid cell. A value of 0 corresponds to a definitely vacant grid cell and a value of 1 corresponds to a definitely occupied grid cell.
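  • By way of illustration, a minimal Python sketch (not part of the original disclosure) of such a Cartesian occupancy grid; the edge length, cell size, initial value and helper names are assumptions, chosen to match the numerical examples quoted later in the description (100 m edge, 20 cm cells, initial measure 0.5):

```python
import numpy as np

class OccupancyGrid:
    """Cartesian occupancy grid; each cell holds a grid cell measure (occupancy probability)."""

    def __init__(self, size_m=100.0, cell_m=0.2, initial=0.5):
        self.cell_m = cell_m
        self.n = int(round(size_m / cell_m))                  # cells per edge
        # 0.5 = occupancy undetermined, 0.0 = definitely free, 1.0 = definitely occupied
        self.occ = np.full((self.n, self.n), initial, dtype=np.float32)

    def to_cell(self, x_m, y_m):
        """Map vehicle-frame coordinates (grid centred on the vehicle) to cell indices."""
        half = self.n // 2
        return int(round(x_m / self.cell_m)) + half, int(round(y_m / self.cell_m)) + half

grid = OccupancyGrid()   # 100 m x 100 m at 0.2 m -> 500 x 500, i.e. approx. 250,000 grid cells
```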
  • captured raw data from at least one first sensor is projected onto the occupancy grid by generating a grid cell dimension as a function of an inverse sensor model.
  • An inverse sensor model gives the accuracy or the error range of the individual measuring points of the raw data and depicts measuring errors or detection tolerances caused by the principle as a distribution around the measuring point.
  • the distribution can have a defined shape and extent and/or correspond to a probability distribution around the measurement point.
  • the properties of the sensor can be taken into account, as can the calibration accuracy and other influencing variables.
  • the inverse sensor models can be specified a priori specifically for the measurement principle, installation location and alignment and can be easily selected and adjusted by an operator or depending on environmental conditions (light conditions, precipitation). If the distribution of a measurement point extends over several grid cells, the grid cell dimension is generated proportionally. This can be done using the area of the grid cell covered by the distribution or by mapping the probability distribution around the measurement point.
  • the distribution determined by the inverse sensor model can be configured differently for each individual sensor. For example, a radar can, due to its ability to detect relative movements in the raw data, have a shift in the distribution opposite to its direction of movement. This property is compensated or enhanced by the inverse sensor model, depending on the required design.
  • the grid cell measure is calculated for all projected raw data points and their error areas, for example in the form of an occupancy probability.
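  • A minimal sketch of how a single raw measuring point could be projected onto the grid, assuming a Gaussian error region around the point and a proportional increase of the grid cell measure as described above; the standard deviation, gain and function names are illustrative, the layers are NumPy arrays, and the OccupancyGrid helper from the sketch above is reused:

```python
import numpy as np

def project_raw_point(grid, increase_layer, px, py, sigma=0.5, gain=1.0):
    """Minimal inverse sensor model: spread one raw measuring point (px, py in metres,
    vehicle frame) as a Gaussian error region over the overlapped grid cells and
    accumulate a proportional increase of the grid cell measure."""
    ci, cj = grid.to_cell(px, py)
    reach = int(np.ceil(3.0 * sigma / grid.cell_m))           # 3-sigma neighbourhood in cells
    for i in range(ci - reach, ci + reach + 1):
        for j in range(cj - reach, cj + reach + 1):
            if 0 <= i < grid.n and 0 <= j < grid.n:
                dx = (i - ci) * grid.cell_m                   # approximate offset of the cell
                dy = (j - cj) * grid.cell_m                   # centre from the measuring point
                w = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))
                increase_layer[i, j] += gain * w              # aggregated increases per cell
    return increase_layer
```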
  • the pre-processed object data of at least one further sensor is projected onto the occupancy grid by generating a further grid cell dimension as a function of an inverse sensor object model.
  • An inverse sensor object model reflects the detection accuracy of the pre-processed object data parallel or radial to the sensor line of sight and takes into account the characteristics of the object-generating sensor (camera, V2X-based sensor), such as the field of view, the detection accuracy or the accuracy of the object detection in the raw data, as well as possible projection errors, e.g. due to inaccuracies in the assignment of sensor coordinate system and occupancy grid coordinate system (calibration quality).
  • the detection accuracy in the form of geometric or statistical uncertainty intervals can be increased by comparison with reference data, for example by retrieving object dimensions from a database for classified objects.
  • object edges and object surfaces are provided with an uncertainty distribution and are thus mapped onto the coverage grid or the individual grid cells.
  • the inverse sensor object model is likewise defined, selected or adapted a priori in a sensor-specific manner.
  • the grid cell dimension is generated in the same way as the distribution around the measuring point in the raw data projection.
  • the projected raw data and the projected, pre-processed object data are merged into an occupancy grid by calculating a merged grid cell dimension.
  • the occupancy grid as a uniform interface combines the grid cell dimensions of the individual raw or object data-based measuring points of all environmental sensors used into a merged grid cell dimension.
  • the calculation can be normalized using the number of grid cell dimensions.
  • a distance-dependent uncertainty of unoccupied cells (occupancy probability < 0.5) can be taken into account, so that from a certain distance every grid cell recorded as unoccupied assumes the initial measure 0.5, i.e. is considered undetermined.
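  • A hedged sketch of this fusion step: per-sensor grid cell measures are averaged, normalized over the number of contributing measures, and far-away cells recorded as unoccupied fall back to the undetermined value 0.5; the 60 m limit, the array layout and all names are illustrative assumptions:

```python
import numpy as np

def fuse_sensor_layers(per_sensor_occ, per_sensor_valid, cell_dist_m, far_limit_m=60.0):
    """Merge the projected grid cell measures of all sensors into one fused measure.
    per_sensor_occ:   list of occupancy layers in [0, 1], one per sensor
    per_sensor_valid: boolean masks marking the cells each sensor actually observed
    cell_dist_m:      distance of every cell from the vehicle, for the far-range fallback."""
    occ = np.stack(per_sensor_occ)
    valid = np.stack(per_sensor_valid)
    n = np.maximum(valid.sum(axis=0), 1)                       # number of grid cell measures
    fused = np.where(valid, occ, 0.0).sum(axis=0) / n          # normalised over that number
    fused = np.where(valid.any(axis=0), fused, 0.5)            # unobserved cells stay unknown
    # distance-dependent uncertainty: far-away "free" cells fall back to undetermined (0.5)
    fused[(cell_dist_m > far_limit_m) & (fused < 0.5)] = 0.5
    return fused
```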
  • the occupancy grid is processed to extract grid data.
  • the purpose of the occupancy grid is to represent the environment. For this purpose, objects such as other road users, infrastructure objects, road markings, etc. and unoccupied open spaces are to be recognized and mapped. These objects (open spaces are also treated as objects) are the grid data to be extracted.
  • the processing of the occupancy grid can include a qualitative preparation of the grid cell dimensions and/or a determination of geometric shapes of coherent or related object-specific grid cell dimensions.
  • the extracted grid data is processed to create the environment model. Each extracted object is recognized with its properties and processed accordingly. In this way, objects can be classified into road users such as vehicles or pedestrians. These are processed differently than stationary objects such as lane markings, traffic lights or other obstacles. The latter, for example, do not have to be tracked individually or their behavior predicted.
  • the environment model for the highly automated or autonomous operation of the vehicle is provided by the method according to the invention, preferably by storing the processed, extracted grid data in the occupancy grid.
  • This current occupancy grid is made available to the driving functions or driver assistance functions in the form of a uniform interface.
  • the method according to the invention advantageously enables sensor-independent environmental data processing or environmental data fusion, particularly with a large number of different sensors, since the object extraction, tracking and prediction is carried out independently on the basis of a uniform representation of all sensor data.
  • Another major advantage is the relatively small amount of computing resources required to create the environment model compared to the high informative value and reliability of the environment model for all applications in the vehicle. This plays a significant role, especially in mobile applications.
  • the processing according to the invention of the recorded environmental data is therefore highly scalable, ie it can be used with a large number of environmental sensors with little use of resources.
  • a simplification and thus optimization of the assistive or autonomous driving functions accessing the environment model is possible, since these do not have to be adapted to different interfaces.
  • distributed control units can also be saved.
  • the uniform interface enables a uniform and complete representation of dynamic and static parts of the environment. Objects can also be output that cannot be represented by classic object fusion (e.g. crash barriers or a house wall). Previously, these had to be considered separately, i.e. additionally.
  • the resolution of the occupancy grid, the size of the grid cells and/or the shape of the grid cells can be changed as a function of an environmental category and/or a specific scenario.
  • the environment can be categorized based on specific movement patterns of road users, such as expected speeds or actions. This can be done based on road categories, such as a freeway or residential area road, the presence of vulnerable road users, such as pedestrians or cyclists, or function-specific environments, such as a parking garage.
  • Application scenarios can be specified depending on the specific degree of autonomy or specific use cases, such as autonomous parking or autonomous passenger transport.
  • the autonomous driving of a vehicle in an urban environment requires a significantly larger spatial extent of the environment model in order to be able to be carried out with foresight.
  • a parking assistant requires a high resolution in the immediate vicinity of the vehicle.
  • Environment categories and scenarios can be predefined, selected on the basis of sensor data or database queries (e.g. depending on the position within a digital map), or specified or activated from outside the vehicle, e.g. when entering a parking garage.
  • the occupancy grid is assigned a dynamic data layer for storing grid cell attributes.
  • Grid cell attributes are additional properties of the measuring points or the respective grid cell which go beyond the occupancy itself, i.e. the presence of a grid cell measure.
  • This can be dynamic information, such as the speed or direction of movement of the detected object point, or information about the object class, such as pedestrians, lane markings or lanes, or trafficability (open space).
  • the grid cell attributes may be known from a previous time step and retained when fusing the projected data and processing the grid data.
  • the grid cell attributes can also be encoded directly in the captured raw or pre-processed object data, such as the relative speed in a radar measurement point or the object class of an object captured and recognized by a camera.
  • the data layer can consist of a large number of data layers, each containing different information categories (e.g. object classes, dynamic information, navigability).
  • the inverse sensor model or sensor object model preferably models which sensor-specific information is projected onto the occupancy grid in which data layer and then merged with the further data of the grid cell.
  • the grid cell attributes stored in the corresponding data layer can also be used as the main feature for the fusion if they are more meaningful than the pure occupancy probability.
  • Dynamic data layers have the advantage of providing a large amount of additional information with layer-specific information classes, which minimize calculation resources and calculation times through assignment to the grid cells.
  • the projection of the recorded raw data also includes the projection of grid cell attributes derived from the raw data into the at least one dynamic data layer.
  • the merging of the projected raw data and the projected pre-processed object data includes combining the current merged grid cell measure with the merged grid cell measure of the previous time step, compensating for the vehicle's own motion and/or for the motion of the projected pre-processed object data and/or of grid cells with dynamic information.
  • Each time step can be viewed as its own occupancy grid. Since the procedure is carried out regularly or quasi-permanently, there is always an occupancy grid from the previous time step, except at system start. In order not to have to laboriously redetermine the information contained there for each time step, the current time step is merged with the previous one. This requires an assignment of the grid cells to the grid cell data from the previous step. This is achieved by calculating out, i.e. compensating for, the vehicle's own motion between the time steps.
  • Grid cells can also have dynamic information without an object reference, for example a relative speed from one or more radar measuring points merged into a grid cell dimension.
  • the merging of the projected raw data and the projected pre-processed object data includes combining the current merged grid cell measure with the merged grid cell measure of the previous time step by means of a grid update.
  • a grid update is a method in which changing or new information is integrated into an existing occupancy grid without recalculating all other data. This results in an updated grid cell measure, ie, for example, an updated occupancy probability of the cell.
  • the update function used can be specified depending on the type of grid cell measure (occupancy probability, cost value) or by a user or operator. Exemplary functions are a Bayesian update function or a Dempster-Shafer update function.
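  • A minimal sketch of a Bayesian update function of the kind named above (a Dempster-Shafer update could be substituted); the function and variable names are illustrative and the layers are NumPy arrays:

```python
import numpy as np

def bayes_update(prior_occ, measured_occ, eps=1e-6):
    """Bayesian grid update: combine the existing grid cell measure (prior occupancy
    probability) cell by cell with the newly merged measurement occupancy.
    A measurement of 0.5 leaves a cell unchanged; values near 0 or 1 pull it
    towards definitely free or definitely occupied."""
    p = np.clip(prior_occ, eps, 1.0 - eps)
    m = np.clip(measured_occ, eps, 1.0 - eps)
    return (p * m) / (p * m + (1.0 - p) * (1.0 - m))

# usage: grid.occ = bayes_update(grid.occ, fused_measurement_layer)
```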
  • the processing of the occupancy grid includes morphological operations across all grid cells for a consistent distribution of the merged grid cell measures, i.e. for a consistent imaging or reproduction of object shapes in the occupancy grid.
  • Morphological operations are known from image processing. They are used to correct or reduce incorrect image information due to contamination or insufficient templates.
  • Exemplary basic operations are opening and closing (indefinite image content is omitted or enhanced to open or close geometric shapes) as well as erosion and dilatation, i.e. the dissolution or connection of structures in (image) data.
  • the advantageous application of the morphological operations to the occupancy grid, ie the entirety or certain subsets of the grid cells leads to a higher data consistency of the recorded sensor data and compensates for interference or measurement deviations.
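  • One possible realization of such morphological operations, sketched here with SciPy's binary closing and opening applied to a thresholded copy of the occupancy layer; the threshold value and the way the result is written back are assumptions:

```python
import numpy as np
from scipy import ndimage

def consolidate_shapes(occ, thresh=0.65):
    """Morphological closing followed by opening on the thresholded occupancy layer:
    small gaps inside object shapes are closed, isolated spurious cells are removed,
    so that object shapes are reproduced more consistently in the grid."""
    occupied = occ > thresh
    struct = ndimage.generate_binary_structure(2, 2)      # 8-connectivity (Moore neighbourhood)
    cleaned = ndimage.binary_opening(ndimage.binary_closing(occupied, structure=struct),
                                     structure=struct)
    # keep the probabilistic layer, but lift cells that morphology marks as occupied
    return np.where(cleaned, np.maximum(occ, thresh), occ)
```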
  • the processing of the occupancy grid includes a cluster analysis for extracting the grid data.
  • shapes that are connected in the grid cell dimensions or data layer attributes are advantageously defined as objects without great computational effort. Individual grid cells are thus associated or clustered with one another.
  • the cluster analysis includes determining connected regions of grid cells of the occupancy grid on the basis of neighborhood relationships of the grid cell dimensions and/or on the basis of the attributes assigned to the grid cells, for example on the basis of the same directions of movement, speed information, dynamic states of the grid cells or object classes .
  • Neighborhood relationships exist between neighboring grid cells.
  • each grid cell can have a different number of neighboring grid cells.
  • An exemplary neighborhood relationship in a square grid is the Moore neighborhood, in which the grid cells that share a corner or an edge with the starting cell are considered neighbors (8-neighborhood).
  • Another exemplary neighborhood relationship in a square grid is the von Neumann neighborhood, in which the grid cells that share an edge with the starting cell are considered neighbors (4-neighborhood).
  • a connected component analysis approach is used.
  • a sufficiently reliably occupied grid cell is defined as the “condensation nucleus” of a new cluster.
  • the Moore neighborhood of this grid cell is examined for further sufficiently reliably occupied grid cells and these are added to the cluster.
  • the Moore neighborhood of the added grid cells is analyzed until there are no more sufficiently reliably occupied grid cells in the Moore neighborhood.
  • other criteria for cluster formation can also be used, e.g. object classifications or dynamic state variables such as speed and direction of movement.
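  • A minimal sketch of this connected component analysis over the Moore neighborhood; the occupancy threshold, the return layout and all names are illustrative assumptions:

```python
from collections import deque

def cluster_occupied_cells(occ, thresh=0.65):
    """Connected component analysis on the occupancy grid: every sufficiently reliably
    occupied cell seeds a cluster ("condensation nucleus") that grows over its Moore
    neighbourhood (8 neighbours). Returns a list of clusters of (i, j) cell indices."""
    n_rows, n_cols = occ.shape
    seen = [[False] * n_cols for _ in range(n_rows)]
    clusters = []
    for i in range(n_rows):
        for j in range(n_cols):
            if occ[i][j] <= thresh or seen[i][j]:
                continue
            cluster, queue = [], deque([(i, j)])
            seen[i][j] = True
            while queue:
                ci, cj = queue.popleft()
                cluster.append((ci, cj))
                for di in (-1, 0, 1):                       # Moore neighbourhood
                    for dj in (-1, 0, 1):
                        ni, nj = ci + di, cj + dj
                        if (di or dj) and 0 <= ni < n_rows and 0 <= nj < n_cols \
                                and not seen[ni][nj] and occ[ni][nj] > thresh:
                            seen[ni][nj] = True
                            queue.append((ni, nj))
            clusters.append(cluster)
    return clusters
```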
  • the clustering of grid cells is significantly less computationally intensive than the clustering of point clouds, whose number of measurement points can exceed the number of grid cells by several orders of magnitude. Such pre-processing is required for classic object data fusion.
  • the processing of the extracted grid data includes a tracking of objects in order to obtain dynamic information about the objects.
  • This includes stationary objects in the area, such as traffic lights or open spaces.
  • the tracking only depends on the reliably known movement information of the vehicle and is therefore easy to carry out.
  • Dynamic information about dynamic objects, e.g. speed and direction of movement, is not only dependent on the movement of the vehicle and can therefore not be calculated directly from the ego data. Therefore, this dynamic information must be estimated from the detected real movement.
  • tracking grid objects is one way to estimate this dynamic information.
  • the preferred tracking algorithm is characterized by low resource requirements combined with great robustness and high quality. Tracking can also be used to transfer the additional information of the grid cell attributes within the occupancy grid to other grid cells if the associated element (object, measured value) shifts in the occupancy grid.
  • dynamic objects are tracked using an interacting multiple model with at least two Kalman filters (IMMKF), which are expanded according to a selected basic characteristic.
  • an IMMKF allows the coupling of any number of systems and can therefore also be extended to more than two Kalman filters, which can then map other estimation characteristics.
  • the number of coupled systems is proportional to the required computing power, which is why two Kalman filters represent a good compromise, for example for autonomous people carriers in slow-speed inner-city traffic.
  • the first Kalman filter is a constant motion model with constant velocity and the second Kalman filter is a constant curve model with constant velocity and constant trajectory radius.
  • Dynamic objects are objects that move or can move in relation to the environment. The tracking thus requires the inclusion of the own speed of the dynamic objects.
  • the Interacting Multiple Model is a dynamically coupled system of two Kalman filters. These are specially selected for the application with the intended characteristics. In the present case, the two Kalman filters deal with the extremes of possible movements of the object: driving straight ahead and cornering. The advantage of this approach is that a special Kalman filter can describe certain movements particularly well, but delivers relatively poor results for other movements.
  • the dynamic coupling makes it possible to use the best Kalman filter in each case.
  • the configuration as an interacting multiple model, which is known per se, enables the independent selection of the preferred Kalman filter without external intervention. It is particularly advantageous that the use of computationally intensive particle filters can be dispensed with, which makes additional hardware (e.g. GPU) unnecessary.
  • One aspect of the invention relates to a control unit which is designed as a computing unit in order to carry out all the steps of the method according to the invention.
  • One aspect of the invention relates to a computer-implemented method, the computer-implemented method for creating an environment model for a highly automated or autonomously operated vehicle being configured according to the method according to the invention.
  • One aspect of the invention relates to a computer program, the computer program causing a control unit to carry out all the steps of a method according to the invention when it is being run on the control unit.
  • One aspect of the invention relates to a program code with processing instructions for creating a computer program that can be run on a computer, the program code resulting in the computer program according to the invention when the program code is converted into an executable computer program in accordance with the processing instructions.
  • FIG. 1 shows an exemplary flow chart of the method according to the invention for creating an environment model
  • FIG. 2 shows a projection of a measuring point of raw sensor data onto a section of an occupancy grid for determining grid cell dimensions
  • FIG. 3 shows a projection of a preprocessed object received by a sensor onto a section of an occupancy grid for determining grid cell dimensions
  • FIG. 4 shows an exemplary flow chart of an interacting multiple model with two extended Kalman filters.
  • the exemplary embodiments described relate to partial aspects of the invention and relate equally to the method, the computer-implemented method, the control device, the computer program and the program code.
  • the features mentioned below, for example with regard to the method, can also be implemented in the control device, the computer program and the program code, and vice versa.
  • FIG. 1 shows an exemplary flow chart of the method according to the invention for creating an environment model.
  • the aim is to merge raw sensor data and pre-processed object data in order to be able to use a large number of different environmental sensors for the most comprehensive possible detection of the environment and to enable efficient or minimized use of computing resources.
  • the method according to the invention is thus independent of specific sensor configurations and is highly scalable, i.e. it can be supplemented or replaced by different sensor types, and thanks to its modularity it can be easily adapted to technical conditions and configured according to use cases.
  • the creation of the environment model uses an extended occupancy grid approach. It starts with the data input 10.
  • the raw sensor data, pre-processed objects and additional information, for example in the form of grid cell attributes, are received by a calculation unit, for example a control unit.
  • the data is obtained from the sensors and from a possibly already existing occupancy grid from an earlier time step.
  • the raw sensor data are initially projected onto the occupancy grid by means of an inverse sensor model and calculated to form an averaged occupancy probability (FIG. 2).
  • the pre-processed object data are projected onto the occupancy grid by means of an inverse sensor object model (FIG. 3).
  • the pre-processed object data also includes further pre-processed object data, such as object classes (cars, pedestrians, infrastructure elements) or dynamic data (direction of movement, speed of movement). These are also projected as grid cell attributes onto the occupancy grid using the inverse sensor model, for example onto a dynamic data layer.
  • the inverse sensor or sensor object models are entered as fusion models 16 for data fusion 11 .
  • the occupancy grid that may be present is then updated using a grid update function and the current input data.
  • the procedure is illustrated using a Bayesian update function as an example.
  • Dynamic grid cells are grid cells with dynamic information, for example in the form of grid cell attributes of additional data layers. For this purpose, a one-to-one mapping between the grid cells of both occupancy grids is carried out.
  • all grid cells C_O associated with a dynamic object O from time step t-1 are motion-compensated.
  • the grid cells C_O have a center point r.
  • the object has the speed v.
  • the properties of the grid cells C_O are transferred or shifted to the grid cell with the center r' = r + Δt·(v − v_ego), where Δt is the time interval between the time steps t-1 and t and v_ego is the vehicle's own speed.
  • Other example motion compensations may alternatively or additionally use angular velocity or other dynamic attributes of the object.
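  • A small sketch of this motion compensation, assuming the cell shift r' = r + Δt·(v − v_ego) given above; the list layout and all names are illustrative assumptions:

```python
def compensate_motion(object_cells, v_object, v_ego, dt, cell_m):
    """Shift the grid cells of a dynamic object from time step t-1 into the current grid,
    assuming the displacement r' = r + dt * (v_object - v_ego) described above.
    object_cells is a list of ((i, j), attributes) entries; the attributes (speed,
    object class, ...) are carried over to the shifted cell."""
    di = int(round(dt * (v_object[0] - v_ego[0]) / cell_m))
    dj = int(round(dt * (v_object[1] - v_ego[1]) / cell_m))
    return [((i + di, j + dj), attrs) for (i, j), attrs in object_cells]
```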
  • the Bayesian update rule for calculating the current occupancy probability is applied, whereby occ_m is the average occupancy probability from the projection of the sensor raw or pre-processed object data and τ is a transition probability that accounts for alignment errors.
  • the free probability is complementary to the occupancy probability and describes how free or unoccupied the grid cell C', to which the object or the grid cell property was moved, is at time t.
  • the occupancy grid is processed 12. For this purpose, for example, morphological operations are applied to the grid data and then connected grid cells are merged into clusters (see description above) in order to extract the grid data.
  • the processing 13 of the extracted grid data takes place.
  • an input 17 of additional information takes place, for example in the form of grid cell attributes.
  • the processing 13 of the extracted grid data can include object tracking (FIG. 4) using an interacting multiple model 18 and a free space calculation depending on the extracted grid data and the additional information.
  • the output 14 of the processed grid data takes place in the form of occupancy probabilities and additional information or grid cell attributes, such as object speed, object movement direction, object class or an open space classification by taking them over into a current grid model as a current image of the environment (environment model).
  • This environment model can also be extended by further or derived data by means of further processing.
  • the navigability of open spaces, the presence of streets or other navigability attributes, such as the quality of the asphalt, can be stored as properties in the grid cells.
  • Driver assistance functions as well as highly automated or autonomous driving functions then access the current environment model.
  • the environment model from the previous time step t-1 can be archived, overwritten on a rolling basis, or deleted.
  • the current occupancy grid, together with a data input 10 updated in the next time step, enters the next loop 15 of the method according to the invention.
  • FIG. 2 shows the projection of a measured data point (measuring point 23) of a radar sensor 22 onto an occupancy grid 20, which is composed of a large number of grid cells 21 arranged relative to one another.
  • Measuring point 23 is imprecise due to the characteristics of the radar sensor.
  • a specific inverse sensor model for the radar sensor 22 specifies a probability distribution in the error area 24 of the measurement point 23, which specifies the probability for the real location (distance, direction) of the measurement.
  • the inverse sensor model takes into account measurement deviations of radar sensor 22 that are dependent on distance and solid angle, with the probability being increased around measuring point 23, for example, and falling towards the edge.
  • Other inverse sensor models can have other distributions, e.g. B. use a uniform distribution.
  • an inverse sensor model can also map the measurement points 23 as a function of further measurement variables.
  • the distribution of the occupancy probability can be shifted along the direction of movement of the detection in order to compensate for deviations caused by the measuring principle.
  • the inverse sensor model changes the grid cell measure of all grid cells 21 that lie within the error area 24 of the measuring point 23. This can be achieved by an increase 26 in the grid cell measure proportional to the overlap area of the respective grid cell 21 and the error area 24.
  • the expression of the probability distribution of the inverse sensor model aggregated over the overlapping area can also be included.
  • the increases 26 in the respective grid cell dimension are shown using exemplary numerical values.
  • the grid cell dimensions are increased analogously.
  • Several measuring points can also lie in one grid cell.
  • the grid cell measure is then formed as a function of all increases 26 resulting from all measurement points 23, for example as an average occupancy probability occ_m = tanh(C / N_c), where the count value C represents the aggregated increases in the grid cell measure of the respective grid cell (and, if applicable, the original grid cell measure) and is normalized over the number of projected measurement points N_c.
  • the tanh function ensures that, for plausible values of C and N_c, the occupancy probability occ_m remains < 1.
  • the original grid cell metric may be formed from a grid cell metric from a previous time step in creating the environment model.
  • the initial value of all grid cells is 0.5. This corresponds to an indeterminate occupancy of the grid cell. This condition is also present at the maximum limit of the measuring range.
  • FIG. 3 shows a projection of a preprocessed object 33 received by a sensor onto a section 30 of an occupancy grid for determining grid cell dimensions.
  • the projection is based on a representative occupancy probability of the object 33, carried out using an inverse sensor object model.
  • the peculiarities of the sensor (here a camera 32) that provides the pre-processed object 33, such as the detection range, detection accuracy or ability to classify objects, are taken into account.
  • the dimensions of the object 33 and the reliability of the occupancy detection or its distribution over the object dimensions must be modeled.
  • the certainty or trust (confidence) in the occupancy detection can be written into the occupancy grid 30 as a grid cell measure analogous to the occupancy probability of the raw data projection.
  • the occupancy grid 30 consists of a large number of grid cells (34, 35, 36).
  • the resolution of the occupancy grid 30 shown here in relation to the size of the object vehicle is to be understood only schematically.
  • the object 33 captured by the camera 32 is shown as a rectangle, not true to scale in its dimensions.
  • An example Cartesian occupancy grid used in practice has a size of 100m by 100m.
  • Polar occupancy grids can have a radius of 50m centered on the vehicle.
  • the required extent of the occupancy grid depends on the application. Use cases with high vehicle and object speeds tend to require larger extents.
  • Grid cells modeled in urban, speed-limited environments have edge lengths of 20cm by 20cm.
  • An occupancy grid with an edge length of 100 m and grid cell dimensions of 20 x 20 cm comprises approx. 250,000 grid cells. A correspondingly higher edge length is required for applications with significantly higher speeds to be expected, such as motorways or country roads.
  • the pre-processed object vehicle 33 transmitted by the camera 32 is now projected onto the occupancy grid 30 with its location-referenced dimensions.
  • the visible edges of the object 33 are determined by the inverse sensor object model. These are the solid edges of the object 33 in FIG. 3. This is illustrated by the lines of sight 31. Anything outside of the acute angle between the lines of sight 31 can still be captured as long as it is within the detection range (angle of view) of the camera 32. In contrast, the area delimited by the lines of sight 31 and the visible edges of the object 33 can be taken over into the occupancy grid 30 as being definitely unoccupied.
  • the distribution of the reliability of the occupancy detection can be variably modeled by the inverse sensor object model depending on the reliability of the object detection.
  • the certainty can be evenly distributed over the projected object area. This can be the case if the object was clearly identified as a vehicle with specific dimensions, for example by comparison with a database using pattern recognition. Since such comparisons are resource-intensive, this will not be the case in the majority of practical operations. In that case, the distribution of the detection reliability decreases from the visible edges of the object in the direction of the object corner facing away from the sensor.
  • the inverse sensor object model then covers the grid cells overlapped by the object 33 with a grid cell dimension that results from the distribution of the detection reliability.
  • the grid cell measure is, for example, the occupancy probability occ calculated above.
  • In FIG. 3, the grid cells that are overlapped by the visible object edges are rated as definitely occupied. These grid cells are marked by a cross in FIG. 3 (cf. grid cell 34).
  • the grid cells that border on the grid cells rated as definitely occupied are only rated as probably occupied (cf. grid cell 35 with diagonal).
  • the subsequent grid cells in the direction of the corner of the object 33 facing away from the camera sensor 32 are modeled with low detection reliability in accordance with the inverse sensor object model used here.
  • the relevant grid cells, like all grid cells lying outside the field of view of the camera 32 (angle of view, range of vision), are rated as having an undetermined occupancy. In this case, there is no entry in the occupancy grid 30.
  • the inverse sensor object model therefore takes into account the field of view and the ability of the sensor used to recognize objects.
  • FIG. 4 shows an exemplary flowchart of an interacting multiple model 40 with two extended Kalman filters (42, 43) as a dynamically coupled system.
  • the status data of the objects and the model from the previous time step t-1 as well as the grid data extracted from the occupancy grid and processed in the current time step t go into the model.
  • the model is used to calculate and output estimated state data at time t.
  • the conditional model probability from the previous time step is taken over as the model state of the previous time step t-1.
  • the grid data extracted describe the object-related, merged grid data, for example in the form of the object-related occupancy of the occupancy grid and advantageously extracted additional information, such as the speed or direction of movement of the extracted object.
  • State data are the state vectors x_CV = (x, y, θ, v) for the constant motion model 42 and x_CTRV = (x, y, θ, v, ω) for the constant curve model 43, where x and y are the position of the object in the occupancy grid, θ the direction of movement of the object, v the object speed and ω the angular speed.
  • the constant motion model 42 (CV - constant velocity) is adapted to suitably describe rectilinear uniform motion.
  • the constant curve model 43 (CTRV - constant turn rate and velocity) is adapted to suitably describe uniform motion along a curve.
  • the model with the higher estimation quality is weighted higher, i.e. used preferentially. This takes place in the dynamic state coupling 41.
  • the coupling of both models which can also be understood as a mixing of both models, results in coupled state vectors which represent better estimates of the object-related dynamics information than either of the two independent state vectors (CV, CTRV).
  • the coupled state vectors go together with the current (object-related) grid or status data from time step t into the respective model (42, 43).
  • the constant motion model 42 is designed as an extended Kalman filter and processed to a current state vector. This estimates the dynamics of the object since the last time step, assuming a uniform, rectilinear motion of the object.
  • the state transition of the state vectors between the time steps for the constant motion model 42 corresponds to straight-line motion at constant heading and speed: x_t = x_{t-1} + v·Δt·cos(θ), y_t = y_{t-1} + v·Δt·sin(θ), θ_t = θ_{t-1}, v_t = v_{t-1}.
  • the constant motion model 42 assumes zero object acceleration. Therefore, the direction of movement θ and the speed v remain constant.
  • the constant curve model 43 is also designed as an extended Kalman filter and processed to a current state vector. This estimates the dynamics of the object since the last time step, assuming a uniform, curved movement of the object on a circular trajectory.
  • the state transition of the state vectors between the time steps for the constant curve model 43 corresponds to motion at constant speed and constant turn rate: x_t = x_{t-1} + (v/ω)·(sin(θ + ω·Δt) − sin(θ)), y_t = y_{t-1} + (v/ω)·(cos(θ) − cos(θ + ω·Δt)), θ_t = θ_{t-1} + ω·Δt, v_t = v_{t-1}, ω_t = ω_{t-1}.
  • Both models 42 and 43 can implicitly include an existing acceleration via a process noise matrix known from Kalman filters. Put simply, this reflects the system noise that results from the noise of the individual variables, for example fluctuations in speed due to fluctuating air or rolling resistance, which can be interpreted as noise and filtered out or taken into account accordingly.
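  • The two transition functions, sketched below as the textbook CV and CTRV prediction steps that satisfy the constraints described above (constant heading and speed, or constant speed and turn rate); this is an illustrative implementation in Python, not the literal filter of the disclosure:

```python
import numpy as np

def predict_cv(state, dt):
    """Constant velocity (CV) transition, state = (x, y, theta, v):
    straight-line motion, heading and speed stay constant."""
    x, y, theta, v = state
    return np.array([x + v * dt * np.cos(theta),
                     y + v * dt * np.sin(theta),
                     theta,
                     v])

def predict_ctrv(state, dt, eps=1e-6):
    """Constant turn rate and velocity (CTRV) transition, state = (x, y, theta, v, omega)."""
    x, y, theta, v, omega = state
    if abs(omega) < eps:                                  # degenerate case: straight line
        return np.array([x + v * dt * np.cos(theta),
                         y + v * dt * np.sin(theta),
                         theta, v, omega])
    return np.array([x + v / omega * (np.sin(theta + omega * dt) - np.sin(theta)),
                     y + v / omega * (np.cos(theta) - np.cos(theta + omega * dt)),
                     theta + omega * dt,
                     v,
                     omega])
```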
  • both models 42 and 43 output their own model likelihoods λ_CV and λ_CTRV. These describe how well the estimate of the respective model fits the recorded status data.
  • the model likelihood is also called the model quality, model plausibility or model probability and is a measure of the informative value of the current estimate.
  • the conditional model probability for the current time step t is determined during the update 44.
  • the currently estimated dynamic state is output 45 for each object.
  • the estimated dynamic state is calculated from the current state vectors, taking into account the model qualities λ_CV and λ_CTRV and the updated model probability.
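  • A generic sketch of the coupling and output step of such an interacting multiple model: the model probabilities are updated with the model likelihoods and the state vectors are mixed accordingly; the transition matrix values, the padding of the CV state and all names are assumptions:

```python
import numpy as np

def imm_output(x_cv, x_ctrv, mu_prev, lambda_cv, lambda_ctrv,
               transition=((0.95, 0.05), (0.05, 0.95))):
    """Generic IMM output step for the two models: update the conditional model
    probabilities with the model likelihoods and return the probability-weighted
    combined state estimate."""
    trans = np.asarray(transition)
    predicted = trans.T @ np.asarray(mu_prev)            # predicted model probabilities
    mu = predicted * np.array([lambda_cv, lambda_ctrv])
    mu = mu / mu.sum()                                   # updated conditional model probability
    x_cv5 = np.append(np.asarray(x_cv), 0.0)             # pad CV state with omega = 0
    x_hat = mu[0] * x_cv5 + mu[1] * np.asarray(x_ctrv)   # combined dynamic state estimate
    return x_hat, mu
```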
  • the dynamic state is transferred to the occupancy grid as the current status.
  • An up-to-date image of the environment is thus available for automated or autonomous driving functions, for example for trajectory planning.
  • the dynamic coupling of two simple motion models in the form of characteristically extended Kalman filters enables a significant increase in resource efficiency compared to conventional Kalman filters with particle filters. Due to the efficiency of the method, distances between the time steps of well under 100ms can be achieved. In this way, functional safety can be guaranteed or further increased even with commercially available computer technology, and the use of expensive graphics processors can be avoided.

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to a method for creating an environment model for a vehicle that is operated in a highly automated or autonomous manner and has at least two sensors for detecting the environment, the environment being represented in the form of a contiguous arrangement of grid cells by means of an occupancy grid, a grid cell measure being assigned to each grid cell and the resolution of the occupancy grid being determined by the number of grid cells for a defined portion of the environment. The raw sensor data (10) and the preprocessed object data (10) are projected onto the occupancy grid and then fused (11), inverse sensor or sensor-object models and grid update functions being introduced as fusion models (16) for the data fusion (11). Grid data are extracted therefrom (12) and processed (13) to form the environment model.
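
A minimal sketch of the kind of grid fusion the abstract describes is given below, assuming a log-odds grid cell measure and a very simple inverse sensor model; the cell measure, sensor models, grid update functions and all names used here are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

# Illustrative log-odds occupancy grid update with a simple inverse sensor
# model (all parameters and names are assumptions, not taken from the patent).

CELL_SIZE = 0.2              # metres per grid cell
GRID_SHAPE = (200, 200)      # number of cells -> resolution of the covered area
L_OCC, L_FREE = 0.85, -0.4   # log-odds increments of the inverse sensor model

log_odds = np.zeros(GRID_SHAPE)  # 0.0 corresponds to unknown (p = 0.5)

def world_to_cell(x, y, origin=(-20.0, -20.0)):
    """Project a world-frame point onto the grid index space."""
    ix = int((x - origin[0]) / CELL_SIZE)
    iy = int((y - origin[1]) / CELL_SIZE)
    return ix, iy

def update_cell(ix, iy, occupied):
    """Fuse one detection into the grid cell measure (here: log-odds)."""
    if 0 <= ix < GRID_SHAPE[0] and 0 <= iy < GRID_SHAPE[1]:
        log_odds[ix, iy] += L_OCC if occupied else L_FREE

def occupancy_probability():
    """Convert the fused log-odds back to occupancy probabilities per cell."""
    return 1.0 / (1.0 + np.exp(-log_odds))

# Example: project two detections and one free-space observation onto the grid
for x, y, occ in [(3.2, 1.1, True), (3.3, 1.2, True), (1.0, 0.5, False)]:
    update_cell(*world_to_cell(x, y), occ)
```

The grid resolution in this sketch is set by CELL_SIZE and GRID_SHAPE, mirroring the abstract's definition of resolution as the number of grid cells for a defined portion of the environment.
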
PCT/DE2022/100397 2021-05-27 2022-05-24 Système de fusion de données de capteur pour perception de l'environnement WO2022247994A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DEDE102021113651.2 2021-05-27
DE102021113651.2A DE102021113651B3 (de) 2021-05-27 2021-05-27 System zur Sensordatenfusion für die Umgebungswahrnehmung

Publications (1)

Publication Number Publication Date
WO2022247994A1 true WO2022247994A1 (fr) 2022-12-01

Family

ID=82402574

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/DE2022/100397 WO2022247994A1 (fr) 2021-05-27 2022-05-24 Système de fusion de données de capteur pour perception de l'environnement

Country Status (2)

Country Link
DE (1) DE102021113651B3 (fr)
WO (1) WO2022247994A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115656057A (zh) * 2022-12-05 2023-01-31 中国水利水电科学研究院 基于多源数据融合的水华精准协同监测方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102015201747A1 (de) 2015-02-02 2016-08-04 Continental Teves Ag & Co. Ohg Sensorsystem für ein fahrzeug und verfahren
DE102016209704A1 (de) 2016-06-02 2017-12-07 Robert Bosch Gmbh Verfahren zum Ansteuern einer Personenschutzeinrichtung eines Fahrzeugs und Steuergerät
DE102017217972A1 (de) 2017-10-10 2019-04-11 Robert Bosch Gmbh Verfahren und Vorrichtung zum Erzeugen eines inversen Sensormodells und Verfahren zum Erkennen von Hindernissen
DE102019205008B3 (de) * 2019-04-08 2020-07-02 Zf Friedrichshafen Ag System zur Rückwärtigen Kollisionsvermeidung
DE102019214628A1 (de) * 2019-09-25 2021-03-25 Zf Friedrichshafen Ag Validierung von Umfelderfassung mittels Satelitenbildern und SAR-Radardaten
DE102020005597A1 (de) 2020-09-14 2020-11-05 Daimler Ag Verfahren zur Erzeugung einer Umgebungskarte zur Umgebungsrepräsentation eines Fahrzeugs

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TASDELEN ELIF AKSU ET AL: "Comparison and Application of Multiple 3D LIDAR Fusion Methods for Object Detection and Tracking", 2020 5TH INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION ENGINEERING (ICRAE), IEEE, 20 November 2020 (2020-11-20), pages 64 - 69, XP033872132, DOI: 10.1109/ICRAE50850.2020.9310833 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116908836A (zh) * 2023-07-13 2023-10-20 大连海事大学 一种融合多传感器信息的usv环境感知方法
CN116908836B (zh) * 2023-07-13 2024-03-08 大连海事大学 一种融合多传感器信息的usv环境感知方法

Also Published As

Publication number Publication date
DE102021113651B3 (de) 2022-08-04

Similar Documents

Publication Publication Date Title
DE102014223363B4 (de) Verfahren und Vorrichtung zur Lokalisation eines Kraftfahrzeugs in einer ortsfesten Referenzkarte
DE102010006828B4 (de) Verfahren zur automatischen Erstellung eines Modells der Umgebung eines Fahrzeugs sowie Fahrerassistenzsystem und Fahrzeug
DE102009006113B4 (de) Vorrichtung und Verfahren zur Sensorfusion mit dynamischen Objekten
DE102018114042A1 (de) Vorrichtung, verfahren und system zur multimodalen fusionsverarbeitung von daten mit mehreren verschiedenen formaten, die von heterogenen vorrichtungen erfasst wurden
DE102011119767A1 (de) Erscheinungsbild-gestützte vereinigung von kamera- undentfernungssensordaten für mehrere objekte
DE102017204404B3 (de) Verfahren und Vorhersagevorrichtung zum Vorhersagen eines Verhaltens eines Objekts in einer Umgebung eines Kraftfahrzeugs und Kraftfahrzeug
EP3298541A1 (fr) Procédé d'estimation de files de circulation
DE102019121521A1 (de) Videostabilisierung
DE102021112349A1 (de) Fahrzeugbetrieb unter verwendung eines dynamischen belegungsrasters
WO2022247994A1 (fr) Système de fusion de données de capteur pour perception de l'environnement
DE102010005293A1 (de) System und Verfahren zur Spurpfadschätzung unter Verwendung einer Sensorvereinigung
WO2019201565A1 (fr) Procédé, dispositif et support d'enregistrement lisible par ordinateur comprenant des instructions pour le traitement de données de capteur
WO2020207528A1 (fr) Procédé et unité de traitement permettant de déterminer la taille d'un objet
EP3490862A1 (fr) Procédé et dispositif permettant de déterminer un modèle de chaussée pour un environnement de véhicule
DE102007013664A1 (de) Multisensorieller Hypothesen-basierter Objektdetektor und Objektverfolger
DE102010005290A1 (de) Vereinigungsmodul für mehrere Objekte für ein Kollisionsvorbereitungssystem
EP3526624A1 (fr) Détection automatisée de l'espace libre par analyse différentielle pour des véhicules
DE102018133441A1 (de) Verfahren und System zum Bestimmen von Landmarken in einer Umgebung eines Fahrzeugs
DE112021006299T5 (de) Verfahren und Systeme zur Bodensegmentierung mittels Graphenschnitten
DE102021104044A1 (de) Neuronales netzwerk zur positionsbestimmung und objektdetektion
DE102021124810A1 (de) Neuronales fahrzeugnetzwerk
DE102021121712A1 (de) Gruppenobjektnachverfolgung
DE102019109332A1 (de) Verfahren und Verarbeitungseinheit zur Ermittlung eines Objekt-Zustands eines Objektes
DE102021201178A1 (de) Computerimplementiertes verfahren zum erzeugen von zuverlässigkeitsangaben für computervision
WO2021170321A1 (fr) Procédé de détection d'objets en mouvement dans l'environnement d'un véhicule, et véhicule à moteur

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22740766

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 22740766

Country of ref document: EP

Kind code of ref document: A1