WO2020164732A1 - A method for computer-implemented simulation of sensor data of a vehicle - Google Patents
- Publication number: WO2020164732A1 (application PCT/EP2019/053804)
- Authority: WIPO (PCT)
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0221—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/10—Geometric CAD
- G06F30/15—Vehicle, aircraft or watercraft design
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
Abstract
The invention refers to a method for computer-implemented simulation of sensor data of a vehicle (VE), comprising the steps of: i) providing simulation parameters (SP1, SP2) in the form of a digital description (DES) of a 3D scene (SC) in the surrounding area of the vehicle (VE), the 3D scene (SC) comprising a number of objects (OB) in the surrounding area, where at least a part of the simulation parameters (SP1, SP2) are first simulation parameters (SP1), the first simulation parameters (SP1) being derived by semantic parsing (SPA) of a natural language text (TX); ii) simulating outputs (OU1, OU2, OU3) of a number of sensors (SE1, SE2, SE3) of the vehicle (VE) for a plurality of successive time points based on the simulation parameters (SP1, SP2).
Description
A method for computer-implemented simulation of sensor data of a vehicle
The invention refers to a method and a system for computer-implemented simulation of sensor data of a vehicle. Furthermore, the invention refers to a corresponding computer program product and a corresponding computer program.
It is known to record sensor data of a fleet of vehicles during operation and to store those data in large cloud networks. The recorded sensor data refer e.g. to data of one or more cameras, lidars and/or radars installed in the respective vehicles. The data can be used to train and test machine learning algorithms for autonomous vehicles. Corresponding trained machine learning algorithms may then be implemented in an autonomous vehicle in order to predict the appropriate behavior of the vehicle during driving.
There is a need to train machine learning algorithms for autonomous vehicles based on extreme driving situations which are often not covered by recorded sensor data. To do so, it is known to simulate sensor data of vehicles by computer-implemented methods. In order to use such methods, suitable simulation parameters describing the driving situation to be simulated need to be defined. This is usually done manually by a user specifying those simulation parameters.
It is an object of the invention to provide a method for computer-implemented simulation of sensor data of a vehicle facilitating the specification of simulation parameters.
This object is solved by the independent patent claims. Preferred embodiments of the invention are defined in the dependent claims.
The method of the invention provides a computer-implemented simulation of sensor data of a vehicle. In a step i) of the method according to the invention, simulation parameters in the form of a digital description of a 3D scene in the surrounding area of the (simulated) vehicle are provided. The 3D scene comprises a number of (simulated) objects in the surrounding area of the vehicle. At least a part of the simulation parameters are first simulation parameters. Those first simulation parameters are derived in step i) by semantic parsing of a natural language text. E.g., this natural language text is read in step i) based on a data input of a user via a user interface. The natural language text is provided as digital data.
Step i) implements a well-known method of semantic parsing used in natural language processing in order to derive semantics from the natural language description. These semantics are transformed into a digital description of a 3D scene complying with the natural language text. E.g., the methods described in documents [1], [2], [3] and [4] may be used for deriving such a digital description.
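As a rough illustration of step i), the following Python sketch maps a natural language description to a minimal digital scene description. The keyword rules and the `parse_scene` function are illustrative assumptions only, not the semantic parsers of documents [1] to [4]:

```python
# Illustrative sketch only: a trivial keyword-based stand-in for the
# semantic parsing of step i). The dictionary keys and rules are
# assumptions, not part of the claimed method.

def parse_scene(text):
    """Derive first simulation parameters (a minimal digital 3D scene
    description) from a natural language driving description."""
    lower = text.lower()
    scene = {"objects": [], "road": None, "time": None}
    if "pedestrian" in lower:
        scene["objects"].append({"type": "pedestrian"})
    if "highway" in lower:
        scene["road"] = "highway"
    for token in lower.split():
        if ":" in token:  # crude time-of-day detection, e.g. "12:00"
            scene["time"] = token
    return scene

params = parse_scene(
    "Pedestrian is walking on a not so busy highway at 12:00 p.m.")
```

A real parser would of course resolve far richer semantics (object attributes, spatial relations, weather); the sketch only shows the input/output shape of the step.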
In a next step ii), outputs of a number of sensors of the vehicle are simulated for a plurality of successive time points based on the simulation parameters as provided in step i). Step ii) is based on a prior art method for simulating sensor data of a vehicle. Such methods are well-known to a skilled person (see e.g. document [5]).
The invention is based on the finding that natural language processing can be used in order to transform a textual description of a driving situation into a digital description of a 3D scene which can be processed by a simulation program simulating the sensor data of the vehicle. As a consequence, the manual effort for defining simulation parameters is significantly reduced because a driving situation can be easily specified by a user based on natural language text.
In the preferred embodiment of the invention, the number of (simulated) sensors comprises one or more cameras and/or one or more radar sensors and/or one or more lidar sensors installed at the (simulated) vehicle. The data of those sensors are usually analyzed when a vehicle drives autonomously, so that those sensor data are well-suited for training machine learning methods used for autonomous driving.
In another preferred embodiment, the semantic parsing performed in step i) extracts a 3D image from a database storing 3D images taken from regions on the earth, where the extracted 3D image at least partly complies with the natural language text. The term 3D image is to be interpreted broadly, i.e., a 3D image may comprise meta data in combination with image data, thus forming a 3D scene description. At least a part of the information of the extracted 3D image is included in the digital description of the 3D scene. The 3D image may be retrieved from a publicly available database or from a proprietary database. The extraction of an appropriate 3D image from a database facilitates the process of semantic parsing.
In another preferred embodiment, the simulation parameters comprise one or more second simulation parameters, i.e. simulation parameters different from the first simulation parameters derived by semantic parsing. The one or more second simulation parameters are provided in step i) by reading a data input of a user via a user interface, where the data input (directly) defines the one or more second simulation parameters. In other words, the data input directly describes the second simulation parameters without the need of semantic parsing. This embodiment makes it possible to manually add relevant simulation parameters which cannot be derived by semantic parsing.
In another preferred variant of the invention, the simulation parameters comprise one or more dynamic parameters referring to a movement of the vehicle and/or a movement of one or more objects out of the number of objects within the 3D scene.
Preferably, the one or more dynamic parameters comprise one or more velocities and/or one or more accelerations with respect to the vehicle or the objects. A dynamic parameter may refer to a first or a second simulation parameter.
In another variant of the invention, a training of one or more machine learning algorithms and/or a testing of one or more trained machine learning algorithms is performed by using at least some of the simulated outputs of the number of sensors as training data. After training, those machine learning algorithms provide appropriate predictions of the vehicle's behavior in dependency on outputs of (real) sensors.
In a preferred variant of the above embodiment, at least one machine learning algorithm, and particularly each machine learning algorithm of the one or more machine learning algorithms, is based on a number of artificial neural networks, particularly deep artificial neural networks having a large number of hidden layers. Artificial neural networks are well-known in the art and provide reliable predictions.
Besides the above method, the invention refers to a system for computer-implemented simulation of sensor data of a vehicle, where the system comprises a processor configured to carry out the method according to the invention or according to one or more embodiments of the invention.
The invention also refers to a computer program product with program code, which is stored on a non-transitory machine-readable carrier, for carrying out the method according to the invention or according to one or more embodiments of the invention, when the program code is executed on a computer.
Furthermore, the invention refers to a computer program with program code for carrying out the method according to the invention or according to one or more embodiments of the invention, when the program code is executed on a computer.
In the following, an embodiment of the invention will be described in detail with respect to the accompanying drawings, wherein:
Fig. 1 is a flow chart illustrating the steps performed in an embodiment of the invention; and
Fig. 2 shows a system for performing the method illustrated in Fig. 1.
The invention described in the following provides a computer-implemented simulation of a driving situation of a vehicle.
It is an aim of the invention to simulate sensor outputs for specific driving situations defined by simulation parameters. Those simulated sensor outputs may be used in order to train machine learning algorithms. The trained machine learning algorithms may be implemented in real autonomous cars in order to determine the correct behavior of the car in certain driving situations.
The invention differs from the prior art in that at least some simulation parameters necessary to specify a specific driving situation are derived automatically from a natural language text forming an input of the method as described herein.
In the method as illustrated in Fig. 1, the above mentioned natural language text TX is read in step S1 based on an input made by a user via a user interface UI. Alternatively, the natural language text may be stored beforehand in a storage which is read in step S1 of the method. The natural language text refers to a textual description in a human language and describes a specific driving situation in the surrounding area of the (simulated) vehicle. E.g., the natural language text refers to the following sentence:
"Pedestrian is walking with blue colors on a not so busy highway at 12:00 p.m. in Los Angeles".
After having received this natural language text TX, a semantic parsing SPA is applied in step S1 to this text. In other words, the semantics included in the natural language text are extracted. The semantic parsing method will derive a 3D scene in the surrounding area of the vehicle where the 3D scene complies with the natural language text. For deriving such a 3D scene, known methods as described in documents [1], [2], [3] and [4] may be used. In the embodiment described herein, those methods are adapted in that the semantics identified in the natural language text are mapped to 3D images included in a database DB. This database includes 3D images taken from regions on the earth. The 3D images comprise image data as well as meta data describing inter alia the traffic occupancy and the weather conditions at the time at which the image was taken. The database may be publicly available or it may be a proprietary database. As a result of this mapping, an image IM at least partly complying with the natural language description TX is extracted from the database.
E.g., when processing the above text "Pedestrian is walking with blue colors on a not so busy highway at 12:00 p.m. in Los Angeles", a search is performed in the database DB for images taken at 12:00 p.m. from a highway in Los Angeles where the traffic density is below a predefined threshold. Alternatively, an image may be retrieved which fulfills the above description except that it was taken at another time than 12:00 p.m. In this case, the meta data of the image are adjusted to the conditions at 12:00 p.m.
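The database search just described can be sketched in Python as follows; the meta data field names and the traffic-density threshold are assumptions for illustration, not part of the described method:

```python
# Hypothetical sketch of the image search described above: filter the
# 3D image database by meta data and, if no exact time match exists,
# adjust the meta data to the requested time. Field names are assumed.

TRAFFIC_THRESHOLD = 0.3  # assumed density limit for "not so busy"

def find_image(db, city, road, hour):
    for image in db:
        meta = image["meta"]
        if (meta["city"] == city and meta["road"] == road
                and meta["traffic_density"] < TRAFFIC_THRESHOLD):
            if meta["hour"] != hour:
                # No image taken at the requested time: adjust the
                # meta data to the conditions at that time.
                meta = dict(meta, hour=hour, adjusted=True)
            return dict(image, meta=meta)
    return None

db = [{"meta": {"city": "Los Angeles", "road": "highway",
                "traffic_density": 0.2, "hour": 15},
       "image_data": "..."}]
hit = find_image(db, "Los Angeles", "highway", hour=12)
```

Here the single stored image was taken at a different hour, so its meta data are adjusted to the requested time, mirroring the fallback described in the text.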
Information from the extracted image IM is used to determine first simulation parameters SP1 which represent a digital description DES of a 3D scene SC comprising one or more objects OB in the surrounding area of the vehicle VE.
In an optional step S2, additional second simulation parameters SP2 may be read in case that the first simulation parameters SP1 are not sufficient for performing a simulation of sensor data of the vehicle VE. In this case, the second simulation parameters SP2 are input by a user via the user interface UI. Those second simulation parameters are read in step S2. The second simulation parameters may refer to additional dynamic information concerning the movement of the vehicle VE and the objects OB, i.e. the velocities and/or accelerations of the vehicle and/or the objects when initiating the simulation.
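The second simulation parameters of step S2 could, for instance, be captured by a small data structure; the field names and units below are illustrative assumptions:

```python
# Sketch of second simulation parameters SP2: initial dynamics entered
# directly by the user, without semantic parsing. Units are assumed.

from dataclasses import dataclass

@dataclass
class DynamicParameters:
    velocity_mps: float       # initial velocity in m/s
    acceleration_mps2: float  # initial acceleration in m/s^2

second_params = {
    "vehicle": DynamicParameters(velocity_mps=25.0, acceleration_mps2=0.0),
    "pedestrian": DynamicParameters(velocity_mps=1.4, acceleration_mps2=0.0),
}
```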
In a next step S3, both the first and second simulation parameters SP1 and SP2 are processed by a simulation program SIP. A well-known simulation program may be used in step S3 (see e.g. reference [5]).
As a result of the simulation program SIP in step S3, time series of outputs of sensor data of the vehicle VE resulting from the driving situation described by the natural language text TX are simulated. In the embodiment described herein, the outputs refer to an output OU1 of a sensor SE1, an output OU2 of a sensor SE2 and an output OU3 of a sensor SE3. The sensor SE1 is a camera installed in the front of the vehicle VE, the sensor SE2 is a lidar sensor installed in the front of the vehicle VE and the sensor SE3 is a radar sensor installed in the front of the vehicle VE. The time series of the outputs OU1, OU2 and OU3 are thereafter stored in a memory.
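Schematically, step S3 can be pictured as a loop over successive time points that queries one model per sensor; the per-sensor models below are placeholders, not the simulation method of reference [5]:

```python
# Schematic sketch of step S3: a simulation program producing time
# series OU1, OU2, OU3 for the sensors SE1 (camera), SE2 (lidar) and
# SE3 (radar). The sensor models are placeholders.

def simulate(params, sensors, steps, dt=0.1):
    """Return {sensor_name: [output at t0, t1, ...]} over `steps` points."""
    outputs = {name: [] for name in sensors}
    for k in range(steps):
        t = k * dt
        for name, model in sensors.items():
            outputs[name].append(model(params, t))
    return outputs

sensors = {
    "SE1_camera": lambda p, t: {"t": t, "frame": "rendered image"},
    "SE2_lidar":  lambda p, t: {"t": t, "point_cloud": []},
    "SE3_radar":  lambda p, t: {"t": t, "targets": []},
}
series = simulate(params={}, sensors=sensors, steps=5)
```

The resulting per-sensor lists correspond to the time series that are stored in memory.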
The method as described so far can be performed a plurality of times based on different natural language texts defining different driving situations and resulting in different time series of outputs. The outputs can thereafter be used as training data for training one or more artificial neural networks ANN as indicated by step S4 in Fig. 1. After training, the artificial neural networks can predict the adequate behavior of an autonomous vehicle in different driving situations. Hence, the trained neural networks can be implemented as a corresponding computer program on an autonomous vehicle in order to control the vehicle during autonomous driving.
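As a toy stand-in for the training of step S4, the sketch below fits a single linear neuron to pairs of simulated sensor features and desired behaviors; the feature and label semantics are invented for illustration and do not replace the deep artificial neural networks ANN:

```python
# Toy sketch of step S4: simulated sensor outputs serve as training
# data. A single linear neuron trained by stochastic gradient descent
# stands in for the deep artificial neural networks ANN.

def train(samples, labels, epochs=200, lr=0.1):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            err = (w * x + b) - y
            w -= lr * err * x  # gradient step for squared error
            b -= lr * err
    return w, b

# Invented example: feature = distance to a pedestrian (arbitrary
# units), label = desired braking strength.
w, b = train(samples=[1.0, 2.0, 3.0], labels=[0.9, 0.5, 0.1])
```

After training, the fitted model maps each distance close to its target braking strength, illustrating how simulated outputs can drive supervised learning.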
Fig. 2 shows a system for performing the method of Fig. 1. The system comprises a user interface UI which is used for inputting the natural language text TX. This input is provided to a processor PR forming another element of the system. The processor PR performs the steps S1 to S3 of Fig. 1. As a result, corresponding outputs OU1, OU2 and OU3 are obtained which are stored in a storage ST. The processor PR may also perform step S4 referring to the training of one or more artificial neural networks ANN as described with respect to Fig. 1.
The invention as described in the foregoing has several advantages. Due to the simulation of sensor data of a vehicle, there is no need to only rely on data recorded during the operation of real vehicles in order to train and/or test machine learning methods such as artificial neural networks. Hence, the amount of storage and processing needed for recorded real data can be decreased. Moreover, besides regular driving situations, more extreme driving situations which are usually not included in recorded real data may be addressed by performing a corresponding simulation. Furthermore, by automatically deriving simulation parameters for a simulation based on natural language processing, the manual effort of a user wishing to create simulated sensor data is reduced significantly.
Reference list
[1] A. Chang et al.: "Semantic parsing for text to 3d scene generation", Proceedings of the ACL 2014 Workshop on Semantic Parsing, 2014.
[2] A. Chang et al.: "Text to 3d scene generation with rich lexical grounding", arXiv:1505.06289, 23 May 2015.
[3] J. Devlin et al.: "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", arXiv:1810.04805, 11 October 2018.
[4] L. M. Seversky et al.: "Real-time automatic 3D scene generation from natural language voice and text descriptions", Proceedings of the 14th ACM International Conference on Multimedia, Santa Barbara, CA, USA, October 23-27, 2006, pages 61-64.
[5] C. Mallet et al.: "Full-waveform topographic lidar: State-of-the-art", ISPRS Journal of Photogrammetry and Remote Sensing, 64(1), pages 1-16, 2009.
Claims
1. A method for computer-implemented simulation of sensor data of a vehicle (VE), comprising the steps of:
i) providing simulation parameters (SP1, SP2) in the form of a digital description (DES) of a 3D scene (SC) in the surrounding area of the vehicle (VE), the 3D scene (SC) comprising a number of objects (OB) in the surrounding area, where at least a part of the simulation parameters (SP1, SP2) are first simulation parameters (SP1), the first simulation parameters (SP1) being derived by semantic parsing (SPA) of a natural language text (TX);
ii) simulating outputs (OU1, OU2, OU3) of a number of sensors (SE1, SE2, SE3) of the vehicle (VE) for a plurality of successive time points based on the simulation parameters (SP1, SP2).
2. The method according to claim 1,
wherein the number of sensors (SE1, SE2, SE3) comprises one or more cameras and/or one or more radar sensors and/or one or more lidar sensors.
3. The method according to claim 1 or 2,
wherein the semantic parsing (SPA) extracts a 3D image (IM) from a database (DB) storing 3D images taken from regions on the earth, the extracted 3D image at least partly complying with the natural language text (TX), where at least a part of the information of the extracted 3D image (IM) is included in the digital description (DES) of the 3D scene (SC).
4. The method according to one of the preceding claims, wherein the simulation parameters (SP1, SP2) comprise one or more second simulation parameters (SP2) which are provided in step i) by reading a data input of a user via a user interface (UI), the data input defining the one or more second simulation parameters (SP2).
5. The method according to one of the preceding claims, wherein the simulation parameters (SP1, SP2) comprise one or more dynamic parameters referring to a movement of the vehicle (VE) and/or a movement of one or more objects (OB) out of the number of objects (OB) within the 3D scene (SC), wherein the one or more dynamic parameters preferably comprise one or more velocities and/or one or more accelerations.
6. The method according to one of the preceding claims, wherein a training of one or more machine learning algorithms and/or a testing of one or more trained machine learning algorithms is performed by using at least some of the simulated outputs (OU1, OU2, OU3) of the number of sensors (SE1, SE2, SE3) as training data.
7. The method according to claim 6,
wherein at least one machine learning algorithm and particularly each machine learning algorithm of the one or more machine learning algorithms is based on a number of artificial neural networks (ANN).
8. A system for computer-implemented simulation of sensor data of a vehicle (VE), where the system comprises a processor (PR) configured to carry out a method in which the following steps are performed:
i) providing simulation parameters (SP1, SP2) in the form of a digital description (DES) of a 3D scene (SC) in the surrounding area of the vehicle (VE), the 3D scene (SC) comprising a number of objects (OB) in the surrounding area, where at least a part of the simulation parameters (SP1, SP2) are first simulation parameters (SP1), the first simulation parameters (SP1) being derived by semantic parsing (SPA) of a natural language text (TX);
ii) simulating outputs (OU1, OU2, OU3) of a number of sensors (SE1, SE2, SE3) of the vehicle (VE) for a plurality of successive time points based on the simulation parameters (SP1, SP2).
9. The system according to claim 8,
wherein the system is configured to perform a method according to one of claims 2 to 7.
10. A computer program product with program code, which is stored on a non-transitory machine-readable carrier, for carrying out a method according to one of claims 1 to 7 when the program code is executed on a computer.
11. A computer program with program code for carrying out a method according to one of claims 1 to 7 when the program code is executed on a computer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/EP2019/053804 WO2020164732A1 (en) | 2019-02-15 | 2019-02-15 | A method for computer-implemented simulation of sensor data of a vehicle |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020164732A1 true WO2020164732A1 (en) | 2020-08-20 |
Family
ID=65529661
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2019/053804 WO2020164732A1 (en) | 2019-02-15 | 2019-02-15 | A method for computer-implemented simulation of sensor data of a vehicle |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2020164732A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113238197A (en) * | 2020-12-29 | 2021-08-10 | 杭州电子科技大学 | Radar target identification and data judgment method based on Bert and BiLSTM |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016172009A1 (en) * | 2015-04-24 | 2016-10-27 | Northrop Grumman Systems Corporation | Autonomous vehicle simulation system |
US20170213149A1 (en) * | 2016-01-26 | 2017-07-27 | Ford Global Technologies, Llc | Training Algorithm for Collision Avoidance |
WO2018176000A1 (en) * | 2017-03-23 | 2018-09-27 | DeepScale, Inc. | Data synthesis for autonomous control systems |
Non-Patent Citations (7)
Title |
---|
A. CHANG ET AL.: "Semantic parsing for text to 3d scene generation", PROCEEDINGS OF THE ACL 2014 WORKSHOP ON SEMANTIC PARSING, 2014 |
A. CHANG ET AL.: "Text to 3d scene generation with rich lexical grounding", ARXIV: 1505.06289, 23 May 2015 (2015-05-23) |
ANGEL CHANG ET AL: "Semantic Parsing for Text to 3D Scene Generation", PROCEEDINGS OF THE ACL 2014 WORKSHOP ON SEMANTIC PARSING, 1 January 2014 (2014-01-01), Stroudsburg, PA, USA, pages 17 - 21, XP055612177, DOI: 10.3115/v1/W14-2404 * |
BOB COYNE ET AL: "WordsEye", COMPUTER GRAPHICS. SIGGRAPH 2001. CONFERENCE PROCEEDINGS. LOS ANGELES, CA, AUG. 12 - 17, 2001; [COMPUTER GRAPHICS PROCEEDINGS. SIGGRAPH], NEW YORK, NY : ACM, US, 1 August 2001 (2001-08-01), pages 487 - 496, XP058253472, ISBN: 978-1-58113-374-5, DOI: 10.1145/383259.383316 * |
C. MALLET ET AL.: "Full-waveform topographic lidar: State-of-the-art", ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, vol. 64.1, 2009, pages 1 - 16, XP025870136, DOI: doi:10.1016/j.isprsjprs.2008.09.007 |
J. DEVLIN ET AL.: "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", ARXIV: 1810.04805, 11 October 2018 (2018-10-11) |
L. M. SEVERSKY ET AL.: "Real-time automatic 3D scene generation from natural language voice and text descriptions", PROCEEDINGS OF THE 14TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2006, pages 61 - 64, XP058233251, DOI: doi:10.1145/1180639.1180660 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10354392B2 (en) | Image guided video semantic object segmentation method and apparatus | |
US10726304B2 (en) | Refining synthetic data with a generative adversarial network using auxiliary inputs | |
US20200192389A1 (en) | Building an artificial-intelligence system for an autonomous vehicle | |
KR102372702B1 (en) | Learning method and learning device for runtime input transformation of real image on real world into virtual image on virtual world, to be used for object detection on real images, by using cycle gan capable of being applied to domain adaptation | |
US10963702B1 (en) | Method and system for video segmentation | |
CN108733837B (en) | Natural language structuring method and device for medical history text | |
EP4390881A1 (en) | Image generation method and related device | |
US10726289B2 (en) | Method and system for automatic image caption generation | |
US20180129977A1 (en) | Machine learning data analysis system and method | |
KR20230133059A (en) | Ai-based digital contents automated production method, apparatus and system | |
CN111709966A (en) | Fundus image segmentation model training method and device | |
KR102206684B1 (en) | Learning method for analyzing driving situation and driving style, and an apparatus for performing the same | |
WO2020164732A1 (en) | A method for computer-implemented simulation of sensor data of a vehicle | |
CN116994021A (en) | Image detection method, device, computer readable medium and electronic equipment | |
CN110532562A (en) | Neural network training method, Chinese idiom misuse detection method, device and electronic equipment | |
CN114676705A (en) | Dialogue relation processing method, computer and readable storage medium | |
KR102582593B1 (en) | Server and method for providing traffic accident information based on black box image using artificial intelligence model | |
CN115049899B (en) | Model training method, reference expression generation method and related equipment | |
CN112698578A (en) | Automatic driving model training method and related equipment | |
CN116824650A (en) | Video generation method and related device of target object | |
CN113282781A (en) | Image retrieval method and device | |
CN113011919A (en) | Method and device for identifying interest object, recommendation method, medium and electronic equipment | |
KR102569016B1 (en) | Automated training based data labeling method, apparatus and computer readable storage medium | |
CN118133231B (en) | Multi-mode data processing method and processing system | |
KR102658711B1 (en) | Method for annotation using boundary designation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19707305 Country of ref document: EP Kind code of ref document: A1 |
DPE1 | Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101) | ||
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 19707305 Country of ref document: EP Kind code of ref document: A1 |