EP4196379A1 - Computer-implemented method and computer program product for obtaining an environment scene representation for an automated driving system, computer-implemented method for learning a prediction of environment scenes for an automated driving system, and control unit for an automated driving system - Google Patents
Computer-implemented method and computer program product for obtaining an environment scene representation for an automated driving system, computer-implemented method for learning a prediction of environment scenes for an automated driving system, and control unit for an automated driving system

Info
- Publication number
- EP4196379A1 (application EP21745818.1A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- driving system
- information
- environment
- layer
- computer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 39
- 230000003068 static effect Effects 0.000 claims abstract description 20
- 230000007613 environmental effect Effects 0.000 claims description 43
- 239000013598 vector Substances 0.000 claims description 28
- 238000004422 calculation algorithm Methods 0.000 claims description 21
- 238000010801 machine learning Methods 0.000 claims description 20
- 230000006399 behavior Effects 0.000 claims description 17
- 230000003993 interaction Effects 0.000 claims description 11
- 238000004590 computer program Methods 0.000 claims description 10
- 238000012545 processing Methods 0.000 claims description 10
- 230000001105 regulatory effect Effects 0.000 claims description 4
- 230000008859 change Effects 0.000 description 7
- 238000013528 artificial neural network Methods 0.000 description 6
- 239000003795 chemical substances by application Substances 0.000 description 6
- 230000004927 fusion Effects 0.000 description 5
- 230000008569 process Effects 0.000 description 5
- 230000008901 benefit Effects 0.000 description 3
- 230000006978 adaptation Effects 0.000 description 2
- 238000013459 approach Methods 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 238000011156 evaluation Methods 0.000 description 2
- 230000002123 temporal effect Effects 0.000 description 2
- 230000003190 augmentative effect Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 230000006835 compression Effects 0.000 description 1
- 238000007906 compression Methods 0.000 description 1
- 230000001276 controlling effect Effects 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000018109 developmental process Effects 0.000 description 1
- 238000003709 image segmentation Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 239000013067 intermediate product Substances 0.000 description 1
- 230000001537 neural effect Effects 0.000 description 1
- 230000001953 sensory effect Effects 0.000 description 1
- 239000013589 supplement Substances 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
- 238000012549 training Methods 0.000 description 1
- 238000011144 upstream manufacturing Methods 0.000 description 1
- 238000010200 validation analysis Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0268—Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
- G05D1/0274—Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3804—Creation or updating of map data
- G01C21/3807—Creation or updating of map data characterised by the type of data
- G01C21/3815—Road data
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/0088—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots characterized by the autonomous decision making process, e.g. artificial intelligence, predefined behaviours
Definitions
- Computer-implemented method and computer program product for obtaining a representation of surrounding scenes for an automated driving system, computer-implemented method for learning a prediction of surrounding scenes for an automated driving system and control device for an automated driving system
- the invention relates to a computer-implemented method and a computer program product for obtaining a representation of surrounding scenes for an automated driving system, a computer-implemented method for learning a prediction of surrounding scenes for an automated driving system, and a control unit for an automated driving system.
- the environment is characterized by a large number of explicit, visible signs and markings, such as traffic signs, lane markings, curbs and roadsides, which are coupled with regionally different meanings, rules and real behavior, and by a large number of underlying rules and standards that determine the behavior of the interactors in the environment without being visible, for example the requirement to form an emergency lane when an emergency vehicle approaches from behind.
- these rules are applied very differently from region to region on the one hand and, on the other hand, depend on accompanying events, such as the approach of an emergency vehicle during an acute traffic jam in the previous example.
- all of these explicit, implicit, regional and event-driven rules/information must be considered and used for temporal prediction.
- Occupancy grids, a map-like representation of the static environment and of the road users located in it, are known in the prior art; see for example EP 2 771 873 B1. Spatial dependencies can be captured by means of such grid representations.
- the disadvantage is that additional semantic information is usually not recorded or has to be managed separately.
- the invention is based on the object of enabling improved movement planning for intelligent agents, including automated driving systems.
- the methods according to claims 1 and 8, the computer program product according to claim 7 and the control unit according to claim 12 each achieve this object.
- the environment scene representation according to the invention represents a hybrid representation.
- further processing based on this representation, for example to enable a temporal prediction of all road users over several time steps into the future, becomes faster, more efficient, more powerful, more precise, less error-prone, more robust and more reliable.
- the advantages of the spatial and the semantic representation are brought into harmony with one another in an intelligent manner.
- One aspect of the invention relates to a computer-implemented method for obtaining an environment scene representation for an automated driving system, comprising the following steps:
- the static environment features include regional information, position data of the driving system and/or the environment features, traffic regulation information, traffic signs and anchor trajectories.
- the dynamic environment features include semantic information and movement information of road users.
- the driving system is regulated and/or controlled based on the scene representation.
- a further aspect of the invention relates to a computer program for obtaining a representation of an environment scene for an automated driving system.
- the computer program comprises instructions that cause a computer to carry out a method according to the invention when the program is run on the computer.
- a further aspect of the invention relates to a computer-implemented method for learning a prediction of environmental scenes for an automated driving system.
- a machine learning algorithm receives the environment scene representations obtained according to a method according to the invention together with the respective reference predictions as input data pairs. Based on these input data pairs, the prediction from the environment scene representations is learned in a gradient-based manner.
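As an illustration of this gradient-based learning from input data pairs, here is a minimal training sketch, assuming a PyTorch model that maps a scene-representation tensor to predicted trajectories and an L2 loss; the patent does not prescribe a particular model, loss or optimizer.

```python
import torch

def train_prediction(model, dataset, epochs=10, lr=1e-3):
    """Learn the prediction from (environment scene representation, reference prediction) pairs."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        for hsrv, reference_prediction in dataset:   # input data pairs
            predicted = model(hsrv)                  # predicted future trajectories
            loss = loss_fn(predicted, reference_prediction)
            optimizer.zero_grad()
            loss.backward()                          # gradient-based weight adjustment
            optimizer.step()
    return model
```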
- a further aspect of the invention relates to a control unit for an automated driving system.
- the control unit includes first interfaces via which the control unit receives environmental sensor data from the driving system.
- the control unit includes a processing unit that determines environmental features from the environmental sensor data, executes a machine learning algorithm learned according to a method according to the invention and receives predicted environmental scenes and, based on the predicted environmental scenes, determines regulation and/or control signals for automated operation of the driving system.
- the control device includes second interfaces, via which the control device provides the regulation and/or control signals to actuators for longitudinal and/or lateral guidance of the driving system.
- Computer-implemented means that the steps of the method are executed by a data processing device, for example a computer, a computing system, a computer network, for example a cloud system, or parts thereof.
- Automated driving systems include automated vehicles, road vehicles, people movers, robots and drones.
- Environmental features include houses, streets, in particular street geometry and/or condition, signs, lane markings, vegetation, moving road users, vehicles, pedestrians, cyclists.
- Surroundings sensor data include raw data and/or data preprocessed, for example with filters, amplifiers, serializers, compression and/or conversion units, from cameras, radar sensors, lidar sensors, ultrasonic sensors, acoustic sensors, Car2X units and/or real-time/offline maps arranged on the driving system.
- the surroundings sensor data are data actually recorded while driving with the driving system.
- the environmental sensor data includes virtually generated data, for example using software, hardware, model and/or vehicle-in-the-loop methods.
- the surroundings sensor data are real data that have been virtually augmented and/or varied.
- the environmental features are obtained from the environmental sensor data using object classifiers, for example artificial neural networks for semantic image segmentation.
- the environment scene representation decomposes a scenario into several layers.
- a real scenario is represented as a hybrid of static and dynamic as well as semantic information.
- the environment scene representation according to the invention is also called Hybrid Scene Representation for Prediction, abbreviated HSRV.
- the scenario is an image with i pixels in the x-direction and j pixels in the y-direction.
- the individual layers can also be displayed as images and are arranged congruently with one another, for example the layers lie spatially congruent one on top of the other.
- the environment scene representation according to the invention can be imagined as a stack of digital photos lying one on top of the other, for example taken from a bird's eye view of an intersection.
- this stack of images is combined with further layers of partly purely semantic information that is represented, for example, as pure feature vectors.
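One way to picture this data structure is the following minimal sketch, assuming numpy arrays: spatially congruent image layers stacked like photos, plus purely semantic information carried as feature vectors with pixel anchor points. Resolution, layer count and vector contents are hypothetical.

```python
import numpy as np

I, J = 256, 256       # i pixels in the x-direction, j pixels in the y-direction (hypothetical)
NUM_LAYERS = 8        # e.g. layers A-H of the hybrid scene representation

# Spatially congruent layers: one i x j image per layer, lying one on top of the other.
hsrv_layers = np.zeros((NUM_LAYERS, I, J), dtype=np.float32)

# Purely semantic information: feature vectors with spatial anchor points (pixel coordinates).
semantic_vectors = [
    ((120, 80), np.array([1.0, 0.0, 4.5, 1.8], dtype=np.float32)),  # e.g. one road user
]
```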
- Static environmental features are divided into two further categories. Elements that do not change at all, or only over a long period of time, do not change their state in the short term and are referred to as rigid.
- The HSRV also provides for an adaptation of these elements if, for example, the traffic routing changes; however, this adaptation takes place on a different time scale. Road markings are an example of this. In contrast, there are elements whose state can change frequently and which are therefore state-changing. Traffic lights or variable message signs, for example, fall into the latter category.
- Position data of the driving system and/or the environmental features are recorded via map information.
- a map section is formed by assigning a value to each pixel of the layer of the environment scene representation that corresponds to the map information. The values are based on discrete labels of the map, e.g. numeric codes for street, walkway, broken line, double line, etc.
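A minimal rasterization sketch along these lines, assuming the map elements are already available as pixel coordinates per label; the numeric codes are hypothetical stand-ins for the patent's look-up tables.

```python
import numpy as np

# Hypothetical discrete map labels (the actual look-up table is not reproduced here).
LABELS = {"street": 1, "walkway": 2, "broken_line": 3, "double_line": 4}

def rasterize_map(map_elements, shape=(256, 256)):
    """Assign each pixel of the map layer the discrete label of the map element covering it."""
    layer = np.zeros(shape, dtype=np.uint8)          # 0 = unlabeled background
    for label_name, pixel_coords in map_elements.items():
        for i, j in pixel_coords:
            layer[i, j] = LABELS[label_name]
    return layer

# Usage: the map section can be regenerated for each new time step,
# or updated after a specified number of time steps.
```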
- the right of way rules are shown via the traffic regulation information.
- a line is drawn in the middle of each lane. Additional lines are drawn at intersections, representing all permissible maneuvers.
- implicitly regulated information, such as "right before left", is overlaid with the signage. Any conflicting rule information is aggregated in this layer to form a consistent rule, so that the rules then in effect are treated as having priority.
- Traffic indicators include state-changing and stateful traffic indicators.
- State-changing traffic indicators usually comprise signals that are conveyed to the driver visually and that can change their state several times in the course of a day. Examples of this category are traffic lights, variable message signs on motorways and entry signals at toll booths.
- These traffic indicators are represented as a pixel value encoding the current state in the spatial context of the local scene representation. For reasons of redundancy, such pixel regions are generally not limited to one pixel, but rather are mapped to a larger number of pixels. The exact size of this region is usually learned from data to an optimum.
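A sketch of writing such a state into a redundant pixel region, assuming a numpy layer; the fixed radius stands in for the region size that, per the description above, would be learned from data.

```python
import numpy as np

def stamp_sign_state(layer, center, state_value, radius=2):
    """Write the current state of a state-changing traffic indicator into a pixel region.

    For redundancy the state occupies a (2*radius+1)^2 neighborhood rather than a
    single pixel; 'radius' is a placeholder for the extent learned from data."""
    ci, cj = center
    layer[max(ci - radius, 0):ci + radius + 1,
          max(cj - radius, 0):cj + radius + 1] = state_value
    return layer

layer_d = np.zeros((256, 256), dtype=np.float32)
stamp_sign_state(layer_d, center=(100, 140), state_value=2.0)  # e.g. 2.0 = green phase
```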
- the anchor trajectories combine information from the right-of-way rules and from the state-changing traffic indicators. According to one aspect of the invention, the anchor trajectories determined in this way are brought into line with the rules of the state-changing traffic indicators and prioritized accordingly. According to one aspect of the invention, the layer of anchor trajectories can supplement or replace the layers of traffic indicators and/or traffic regulation information, depending on the time requirements of the driving system.
- the computer program instructions include software and/or hardware instructions.
- the computer program is loaded into a memory of the control device according to the invention, for example, or is already loaded into this memory. According to a further aspect of the invention, the computer program according to the invention is executed on hardware and/or software of a cloud facility.
- the computer program is loaded into the memory, for example, by a computer-readable data carrier or a data carrier signal.
- the invention is thus also implemented as an aftermarket solution.
- the control unit prepares input signals, processes them using an electronic circuit and provides logic and/or power levels as regulation and/or control signals.
- the control device according to the invention is scalable for assisted driving through to fully automated/autonomous/driverless driving.
- the control unit receives raw data from sensors and includes an evaluation unit that processes the raw data for the HSRV. According to a further aspect of the invention, the control unit receives pre-processed raw data. According to a further aspect of the invention, the control unit includes an interface to an evaluation unit that processes the raw data for the HSRV.
- the control unit includes a software and/or hardware level for trajectory planning or high-level control. After this level, the signals are then sent to the actuators.
- the processing unit includes, for example, a programmable electronic circuit.
- the processing unit or the control device is designed as a system-on-chip.
- the scene representation includes:
- the regional information and/or the weather information is provided in the form of codes or a machine learning algorithm learns a connection between the region and driving behavior by entering global coordinates and driving data of the driving system,
- the position of the driving system is determined from a map section at a specific point in time and the map section is generated for each new time step or the map section is updated after a specified number of time steps, with each pixel of the second layer being assigned a value on the map,
- the traffic regulation information is determined by means of traffic signs recorded from the environmental sensor data and/or traffic regulations derived from the regional information,
- the anchor trajectories, which according to one aspect of the invention include lane lines that can be reached by a road user, are prioritized depending on the traffic signs,
- the movement information is learned and determined using a machine learning algorithm via time steps and displayed spatially.
- Adding the regional information, for example in the form of a country code from a table, leads to an improvement in the prediction quality.
- Each region is represented by a specific country or region code.
- the current weather situation is processed via a weather code.
- This code can also be provided globally to the machine learning algorithm, i.e. not via a layer.
- the machine learning algorithm thus has the opportunity to learn the real connections between region and/or weather and actual driving behavior.
- the same regional value is assigned to each pixel in a layer.
- one option is to learn a connection between the region and driving behavior directly via the global coordinates instead of a country code, and thus avoid an expert-based delimitation of regions.
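Both variants can be sketched as follows, assuming numpy layers: either one country/region code is broadcast to every pixel, or the global coordinates are encoded directly (here as two constant channels) so the algorithm can learn the region-behavior connection without expert-defined region borders.

```python
import numpy as np

def regional_layer(shape, country_code=None, global_coords=None):
    """Build the regional-information layer from a code or from global coordinates."""
    if country_code is not None:
        # Variant 1: the same regional value is assigned to each pixel of the layer.
        return np.full(shape, float(country_code), dtype=np.float32)
    # Variant 2: encode latitude/longitude directly as two constant channels.
    lat, lon = global_coords
    return np.stack([np.full(shape, lat, dtype=np.float32),
                     np.full(shape, lon, dtype=np.float32)])
```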
- country codes are obtained from the following look-up table:
- pixel values for traffic lights are taken from the following look-up table:
- street line types are taken from the following look-up table, for example:
- semantic information is bundled into a feature vector.
- vehicle class, for example truck, car, motorcycle, bicycle, pedestrian,
- the height and width of the objects, or the states of the turn signals, for example right, left, hazard warning, off.
- Descriptors describe these properties, i.e. they generate the feature vectors for input into a machine learning algorithm. These descriptors are arranged in the same way as the dynamic information descriptors and form the layer of semantically explicit information.
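A minimal descriptor sketch for the explicit semantic properties listed above, assuming one-hot encodings for the vehicle class and turn-signal state; the exact encoding is an assumption, not prescribed by the patent.

```python
import numpy as np

VEHICLE_CLASSES = ["truck", "car", "motorcycle", "bicycle", "pedestrian"]
BLINKER_STATES = ["off", "left", "right", "warning"]

def semantic_descriptor(vehicle_class, height, width, blinker):
    """Bundle the explicit semantic properties of one road user into a feature vector."""
    class_onehot = np.eye(len(VEHICLE_CLASSES))[VEHICLE_CLASSES.index(vehicle_class)]
    blinker_onehot = np.eye(len(BLINKER_STATES))[BLINKER_STATES.index(blinker)]
    return np.concatenate([class_onehot, [height, width], blinker_onehot])

vec = semantic_descriptor("car", 1.5, 1.8, "left")  # anchored at the object's pixel position
```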
- latent feature vectors are calculated using artificial deep neural networks.
- object classifiers, which are upstream of the environment scene representation according to the invention, are implemented as artificial deep neural networks.
- the latent feature vector is generated as an intermediate product during classification.
- the latent intermediate vectors of all road users are spatially arranged in the manner described above and form the layer of semantically latent information.
- the semantically explicit layer is supplemented with the semantically latent layer.
- An advantage of the semantically latent information is its robustness against noise in the discrete class signals.
- if the discrete classification fluctuates between two classes, such as truck and passenger car, it is difficult to interpret the class information correctly.
- since the latent feature vector is a vector of continuous numbers, such fluctuations have little to no effect and allow a more robust interpretation of the object's semantic information.
- the dynamic part describes the moving road users in the scene.
- the coordinates of the road users are used over a certain period of time to generate a descriptor for this dynamic movement behavior.
- Driving behavior can also be contained latently.
- the calculation of this descriptor is learned, on the one hand, by means of an artificial deep neural network, for example a network comprising long short-term memory layers, abbreviated LSTM.
- with LSTMs, after a settling phase, an iterative update of the descriptor is possible merely by entering the coordinates of the next time step.
- on the other hand, parameters of a vehicle dynamics or movement dynamics model are used, for example by means of a Kalman filter.
- the descriptors of all road users are spatially arranged based on the last coordinate and form the layer of movement information.
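A sketch of the LSTM variant, assuming PyTorch: the returned recurrent state enables the iterative update mentioned above, where after the settling phase only the coordinates of the next time step are entered. Layer sizes are hypothetical.

```python
import torch
import torch.nn as nn

class MotionDescriptor(nn.Module):
    """Encode a road user's (x, y) coordinates over time into a latent motion descriptor."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)

    def forward(self, coords, state=None):
        # coords: (batch, time_steps, 2)
        _, (h, c) = self.lstm(coords, state)
        return h[-1], (h, c)        # descriptor and recurrent state for iterative updates

encoder = MotionDescriptor()
desc, state = encoder(torch.randn(1, 10, 2))         # settling phase over past time steps
desc, state = encoder(torch.randn(1, 1, 2), state)   # update with the next time step only
```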
- the environmental features are represented in pixels of the layers and/or via feature vectors with spatial anchor points.
- the feature vectors have a predetermined spatial anchor point.
- the environmental features are interpreted as color values of the pixels.
- a spatial position of the environment features is recorded in each layer via a corresponding position on a map. This is advantageous for a spatially corresponding arrangement of the environmental features.
- spatial coordinates of the driving system and/or the environmental features are represented in pixels, with one pixel in each of the layers corresponding to the same route length.
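This pixel convention can be made concrete with a small helper, assuming a fixed map origin and a hypothetical resolution of 0.5 m of route length per pixel, identical across all layers.

```python
def world_to_pixel(x, y, origin=(0.0, 0.0), meters_per_pixel=0.5):
    """Map world coordinates (meters) to pixel indices shared by all layers."""
    i = int((x - origin[0]) / meters_per_pixel)
    j = int((y - origin[1]) / meters_per_pixel)
    return i, j
```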
- a plurality of environment scene representations are provided, which depict the static and dynamic environment features including the road users over a variable number of x time steps.
- the machine learning algorithm is trained, validated and tested using these environment scene representations. During the validation, meta-parameters included in the learning process are adjusted appropriately. During the test phase, the prediction of the learned machine learning algorithm is evaluated.
- the environment scene representation is coupled to the neural structures.
- the advantage of the environment scene representation according to the invention is that a very large and very flexible amount of information is provided which the machine learning algorithm can access. During the learning phase, in which the variable parameters/weights of the machine learning algorithm are adjusted, it then emerges which specific information is best suited to performing the prediction task.
- the machine learning algorithm comprises an encoder-decoder structure
- the convolutional network learns interactions between the layers of the environment scene representation, interactions between road users and/or interactions between road users and environment features, and outputs them in the form of an output volume whose height and width equal the size of the environment scene representation; for each road user, a column is determined from the output volume based on the pixel-discrete position of that road user, and the column is concatenated with a vector that describes the dynamic behavior,
- Composite feature vectors obtained from the concatenation are decoded into predicted trajectories of the driving system and/or the road users.
- the encoders and/or decoders are based on long-short-term memory technology.
- in generative adversarial learning, noise vectors are concatenated, and different noise vectors generate different future trajectories for identical past trajectories. This captures multimodal uncertainties of predictions.
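The column selection, concatenation and noise-driven decoding can be sketched as follows, assuming PyTorch; the decoder that unrolls a fixed number of future (x, y) steps from one composite feature vector is an illustrative stand-in for the LSTM decoders described here, with hypothetical dimensions.

```python
import torch
import torch.nn as nn

class TrajectoryDecoder(nn.Module):
    """LSTM decoder that unrolls a future trajectory from one composite feature vector."""
    def __init__(self, feature_dim, hidden=64, steps=12):
        super().__init__()
        self.steps = steps
        self.lstm = nn.LSTM(feature_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)                 # (x, y) per future time step

    def forward(self, feature):                          # feature: (batch, feature_dim)
        seq = feature.unsqueeze(1).repeat(1, self.steps, 1)
        out, _ = self.lstm(seq)
        return self.head(out)                            # (batch, steps, 2)

def composite_feature(output_volume, pixel_pos, dynamics_vec, noise_dim=8):
    """Column at the road user's pixel-discrete position, concatenated with the
    dynamics descriptor and a noise vector; different noise -> different futures."""
    i, j = pixel_pos
    column = output_volume[:, i, j]                      # (channels,)
    z = torch.randn(noise_dim)                           # GAN-style noise vector
    return torch.cat([column, dynamics_vec, z]).unsqueeze(0)
```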
- the machine learning algorithm is a multi-agent tensor fusion encoder-decoder.
- a multi-agent tensor fusion encoder-decoder for static environmental scenes is disclosed in arXiv: 1904.04776v2 [cs.CV].
- the invention provides a multi-agent tensor fusion algorithm for the environment scene representation according to the invention, which also includes dynamic environment features in addition to static environment features.
- the multi-agent tensor fusion algorithm according to the invention does not receive static environmental scenes as input, but rather the HSRV containing dynamic environmental features.
- an encoder-decoder LSTM network is particularly well suited to solving sequence-based problems.
- the noise vectors are generated by a generative adversarial network, abbreviated GAN, for example by the GAN disclosed in arXiv: 1904.04776v2 [cs.CV] under point 3.3.
- FIG. 1 shows a representation of an environment scene representation according to the invention
- FIG. 4 shows a representation of the method according to the invention for obtaining the environment scene representation from FIG. 1.
- FIG. 1 shows an example of a surrounding scene representation HSRV according to the invention.
- a car as an example of a driving system R is located at a junction.
- at the junction there is a pedestrian W.
- the right of way is controlled by a traffic light L.
- the traffic light L shows the car R the green light phase and the pedestrian W the red one.
- the various layers that are essential for the prediction of the trajectories of the road users are shown above the representation of this situation from a bird's eye view.
- Layer A shows the regional information.
- Layer B contains the map information, layer C the traffic regulation information.
- the state-changing traffic signs and the anchor trajectories are contained in layer D and layer E.
- Layer F describes the semantic characteristics of the individual road users.
- Layer G and Layer H contain latent information, where this information in layer G is based on properties that describe the road user, and in layer H on the dynamic movement behavior.
- Layers A to E are static layers and describe static environmental features stat of environmental scene E.
- Layers A to C describe rigid static environmental features stat_1 and layers D and E state-changing static environmental features stat_2.
- the layers F to H are dynamic layers and describe dynamic environment features dyn of the environment scene E.
- FIG. 2 shows an exemplary architecture of an artificial deep neural network DNN, which receives the environment scene representation HSRV as input.
- the environment scene representation HSRV is input into the network DNN as a feature volume.
- the network DNN comprises a convolutional encoder-decoder structure, which uses multi-agent tensor fusion to model the interactions between the various layers A-H and, due to its filter-mask-based architecture, the interactions with elements of the environment scenes contained in the environment scene representation HSRV.
- a feature volume results from the network DNN, where height and width correspond to the input volume.
- the input volume is the environment scene representation HSRV.
- a column is now selected for each road user from the output volume V and concatenated with the vector that describes the dynamic behavior and a noise vector.
- the column is determined based on the quantized position of the road user.
- the assembled feature vectors are now each fed into an LSTM decoder. This decoder then generates the future trajectory for each road user. Since different noise vectors are concatenated in the training according to the GAN setup, different noise vectors for identical trajectories in the past can be used in the inference to generate different trajectories in the future.
- the control unit ECU shown in FIG. 3 receives environment sensor data U via first interfaces INT 1, for example from one or more cameras of the driving system R.
- a processing unit P, for example a CPU, GPU or FPGA, executes object classifiers and determines the static and/or dynamic environment features stat and dyn from the environment sensor data U.
- the processing unit P processes the environmental features using a machine learning algorithm learned according to the invention and obtains predicted environmental scenes. Based on the predicted environmental scenes, the processing unit P determines regulation and/or control signals for automated operation of the driving system R.
- via second interfaces INT 2, the control unit ECU provides the regulation and/or control signals to actuators for longitudinal and/or lateral guidance of the driving system R.
- in step V1, the environment features stat and dyn are obtained.
- in step V2, the layers A-H are generated with the respective environment features stat and dyn.
- in step V3, the driving system R is regulated and/or controlled based on the scene representation HSRV.
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Aviation & Aerospace Engineering (AREA)
- Business, Economics & Management (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Game Theory and Decision Science (AREA)
- Medical Informatics (AREA)
- Traffic Control Systems (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102020210379.8A DE102020210379A1 (de) | 2020-08-14 | 2020-08-14 | Computerimplementiertes Verfahren und Computerprogrammprodukt zum Erhalten einer Umfeldszenen-Repräsentation für ein automatisiertes Fahrsystem, computerimplementiertes Verfahren zum Lernen einer Prädiktion von Umfeldszenen für ein automatisiertes Fahrsystem und Steuergerät für ein automatisiertes Fahrsystem |
PCT/EP2021/070099 WO2022033810A1 (de) | 2020-08-14 | 2021-07-19 | Computerimplementiertes verfahren und computerprogrammprodukt zum erhalten einer umfeldszenen-repräsentation für ein automatisiertes fahrsystem, computerimplementiertes verfahren zum lernen einer prädiktion von umfeldszenen für ein automatisiertes fahrsystem und steuergerät für ein automatisiertes fahrsystem |
Publications (1)
Publication Number | Publication Date
---|---
EP4196379A1 (de) | 2023-06-21
Family
ID=77042979
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP21745818.1A Pending EP4196379A1 (de) | 2020-08-14 | 2021-07-19 | Computerimplementiertes verfahren und computerprogrammprodukt zum erhalten einer umfeldszenen-repräsentation für ein automatisiertes fahrsystem, computerimplementiertes verfahren zum lernen einer prädiktion von umfeldszenen für ein automatisiertes fahrsystem und steuergerät für ein automatisiertes fahrsystem |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP4196379A1 (de) |
DE (1) | DE102020210379A1 (de) |
WO (1) | WO2022033810A1 (de) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102021203440A1 (de) | 2021-04-07 | 2022-10-13 | Zf Friedrichshafen Ag | Computerimplementiertes Verfahren, Computerprogramm und Anordnung zum Vorhersagen und Planen von Trajektorien |
DE102022201127A1 (de) | 2022-02-03 | 2023-08-03 | Zf Friedrichshafen Ag | Verfahren und Computerprogramm zum Charakterisieren von zukünftigen Trajektorien von Verkehrsteilnehmern |
CN114926788B (zh) * | 2022-03-11 | 2024-10-29 | 武汉理工大学 | 一种多模态自动提取交通场景信息的方法、系统及设备 |
CN115468778B (zh) * | 2022-09-14 | 2023-08-15 | 北京百度网讯科技有限公司 | 车辆测试方法、装置、电子设备及存储介质 |
CN115662167B (zh) * | 2022-10-14 | 2023-11-24 | 北京百度网讯科技有限公司 | 自动驾驶地图构建方法、自动驾驶方法及相关装置 |
DE102022131178B3 (de) | 2022-11-24 | 2024-02-08 | Cariad Se | Verfahren zum automatisierten Führen eines Fahrzeugs sowie Verfahren zum Erzeugen eines hierzu fähigen Modells des Maschinellen Lernens sowie Prozessorschaltung und Fahrzeug |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2771873B1 (de) | 2011-10-28 | 2018-04-11 | Conti Temic microelectronic GmbH | Gitterbasiertes umfeldmodell für ein fahrzeug |
US11169531B2 (en) * | 2018-10-04 | 2021-11-09 | Zoox, Inc. | Trajectory prediction on top-down scenes |
2020
- 2020-08-14 DE DE102020210379.8A patent/DE102020210379A1/de active Pending
2021
- 2021-07-19 EP EP21745818.1A patent/EP4196379A1/de active Pending
- 2021-07-19 WO PCT/EP2021/070099 patent/WO2022033810A1/de unknown
Also Published As
Publication number | Publication date |
---|---|
WO2022033810A1 (de) | 2022-02-17 |
DE102020210379A1 (de) | 2022-02-17 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: UNKNOWN
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE
2023-02-27 | 17P | Request for examination filed | Effective date: 20230227
| AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
| DAV | Request for validation of the european patent (deleted) |
| DAX | Request for extension of the european patent (deleted) |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: EXAMINATION IS IN PROGRESS
2024-04-05 | 17Q | First examination report despatched | Effective date: 20240405