WO2018002910A1 - Realistic 3d virtual world creation and simulation for training automated driving systems - Google Patents

Realistic 3d virtual world creation and simulation for training automated driving systems

Info

Publication number
WO2018002910A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
sensor
geographical area
implemented method
computer implemented
Prior art date
Application number
PCT/IL2017/050598
Other languages
French (fr)
Inventor
Dan Atsmon
Original Assignee
Cognata Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cognata Ltd. filed Critical Cognata Ltd.
Priority to EP17819482.5A priority Critical patent/EP3475778B1/en
Priority to CN201780052492.0A priority patent/CN109643125B/en
Priority to CN202211327526.1A priority patent/CN115686005A/en
Priority to US16/313,058 priority patent/US10489972B2/en
Publication of WO2018002910A1 publication Critical patent/WO2018002910A1/en
Priority to US15/990,877 priority patent/US20180349526A1/en
Priority to EP18174957.3A priority patent/EP3410404B1/en
Priority to US16/693,534 priority patent/US11417057B2/en
Priority to US17/885,633 priority patent/US12112432B2/en
Priority to US18/518,654 priority patent/US20240096014A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 - Geographic models
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 - Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04 - Interpretation of pictures
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 - Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 - Creation or updating of map data
    • G01C21/3807 - Creation or updating of map data characterised by the type of data
    • G01C21/3815 - Road data
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 - Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 - Creation or updating of map data
    • G01C21/3807 - Creation or updating of map data characterised by the type of data
    • G01C21/3826 - Terrain data
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 - Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3863 - Structures of map data
    • G01C21/3867 - Geometry of map features, e.g. shape points, polygons or for simplified maps
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/0088 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots characterized by the autonomous decision making process, e.g. artificial intelligence, predefined behaviours
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/80 - Arrangements for reacting to or preventing system or operator failure
    • G05D1/81 - Handing over between on-board automatic and on-board manual control
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 - Computer-aided design [CAD]
    • G06F30/20 - Design optimisation, verification or simulation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/006 - Mixed reality

Definitions

  • the present invention in some embodiments thereof, relates to creating a simulated model of a geographical area, and, more specifically, but not exclusively, to creating a simulated model of a geographical area, optionally including transportation traffic to generate simulation sensory data for training an autonomous driving system.
  • The development of autonomous vehicles involves a plurality of disciplines targeting a plurality of challenges arising in the development of the autonomous vehicles.
  • a computer implemented method of creating a simulated realistic virtual model of a geographical area for training an autonomous driving system comprising:
  • Generating a virtual three dimensional (3D) realistic model emulating the geographical area by synthesizing a corresponding visual texture for each of the plurality of labeled objects.
  • the synthetic 3D imaging feed is generated to depict the realistic model from a point of view of one or more emulated imaging sensors mounted on the emulated vehicle.
  • Training the autonomous driving systems using the simulated virtual realistic model may allow for significant scalability since a plurality of various ride scenarios may be easily simulated for a plurality of geographical locations. Training, evaluation and/or validation of the autonomous driving systems may be done automatically by an automated system executing the simulated virtual realistic model. Moreover, the training, evaluation and/or validation may be done for a plurality of geographical areas, various conditions and/or various scenarios without moving real vehicles in the real world. In addition, the automated training, evaluation and/or validation may be conducted concurrently for the plurality of geographical areas, the various conditions and/or the various scenarios. This may significantly reduce the resources, for example, time, hardware resources, human resources and/or the like for training, evaluating and/or validating the autonomous driving systems. Moreover, training the autonomous driving systems using the simulated virtual realistic model may significantly reduce risk since the process is conducted in a virtual environment. Damages, accidents and even loss of life which may occur using the currently existing methods for training the autonomous driving systems may be completely prevented and avoided.
  • a system for creating a simulated virtual realistic model of a geographical area for training an autonomous driving system comprising one or more processors adapted to execute code, the code comprising:
  • Code instructions to obtain geographic map data of a geographical area.
  • Code instructions to obtain visual imagery data of the geographical area.
  • the synthetic 3D imaging feed is generated to depict the realistic model from a point of view of one or more emulated imaging sensors mounted on the emulated vehicle.
  • the synthesizing is done using one or more of the following implementations:
  • Extracting the visual texture of one or more of the labeled objects from the visual imagery data.
  • Using one or more of the synthesis implementations may allow selecting the most appropriate technique for each of the labeled objects detected in the geographical area to create a genuine and highly realistic appearance of the object(s).
  • the synthetic 3D imaging feed is injected to a physical input of the autonomous driving system adapted to receive the input of the one or more imaging sensors.
  • Using the native physical interfaces of the autonomous driving system may significantly reduce and/or completely avoid the need to adjust the autonomous driving system to support the simulated virtual realistic model.
  • the autonomous driving system is implemented as a computer software program.
  • the synthetic 3D imaging data is injected using one or more virtual drivers emulating a feed of the one or more imaging sensors.
  • Using the native software interfaces of the autonomous driving system may significantly reduce and/or completely avoid the need to adjust the autonomous driving system to support the simulated virtual realistic model.
  • one or more mounting attributes of the one or more imaging sensors are adjusted according to analysis of a visibility performance of the one or more emulated imaging sensors emulating the one or more imaging sensors.
  • the mounting attributes may include, for example, a positioning on the emulated vehicle, a Field Of View (FOV), a resolution and an overlap region with one or more adjacent imaging sensors.
  • the emulated imaging sensor(s) may be mounted on the emulated vehicle similarly to the mounting of the real imaging sensor(s) on the real vehicle. Therefore exploring, evaluating and/or assessing the performance of the emulated imaging sensors, which may be easily accomplished in the simulated virtual model, may directly apply to the real imaging sensor(s).
  • Mounting recommendation(s) including imaging sensor(s) characteristics, model(s) and/or capabilities may therefore be offered to improve performance of the real imaging sensor(s).
  • a sensory ranging data feed simulation which simulates sensory ranging data feed generated by one or more emulated range sensors mounted on the emulated vehicle.
  • the sensory ranging data feed is simulated using a simulated ranging model applying one or more noise patterns associated with one or more range sensors emulated by the one or more emulated range sensors. This may further enhance the virtual realistic model to encompass the sensory ranging data feed which may be an essential feed for the autonomous driving system to identify the vehicle's surroundings and control the autonomous vehicle accordingly.
  • the one or more noise patterns are adjusted according to one or more object attributes of one or more of a plurality of objects emulated in the realistic model.
  • the noise patterns may be applied to the ranging model created for the virtual realistic model in order to increase the realistic characteristics of the model.
  • Generating the simulated sensory ranging data may be based on highly accurate ranging information extracted, for example, from the geographic map data, the visual imagery data and/or other data sources. However, real world sensory ranging data may be far less accurate. In order to feed the autonomous driving system with a realistic sensory ranging data feed, typical noise patterns learned over time for the real world may be applied to the simulated sensory ranging data.
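  • For illustration, the following is a minimal Python sketch of how such learned noise patterns might be applied to otherwise exact simulated ranges before they are fed to the autonomous driving system; the object classes and noise parameters are hypothetical placeholders, not values from this disclosure.
```python
import numpy as np

# Hypothetical per-object-class noise patterns learned from real sensor logs:
# standard deviation (meters), bias (meters) and dropout probability.
NOISE_PATTERNS = {
    "vehicle":    {"sigma": 0.05, "bias": 0.02, "dropout": 0.01},
    "vegetation": {"sigma": 0.20, "bias": 0.00, "dropout": 0.10},
    "building":   {"sigma": 0.03, "bias": 0.01, "dropout": 0.005},
}

def apply_noise(true_ranges, object_classes, rng=np.random.default_rng()):
    """Degrade exact simulated ranges so they resemble a real range sensor feed."""
    noisy = []
    for r, cls in zip(true_ranges, object_classes):
        p = NOISE_PATTERNS.get(cls, {"sigma": 0.1, "bias": 0.0, "dropout": 0.02})
        if rng.random() < p["dropout"]:
            noisy.append(float("nan"))          # emulate a missed return
        else:
            noisy.append(r + p["bias"] + rng.normal(0.0, p["sigma"]))
    return noisy

# Example: exact ranges extracted from the virtual realistic model
print(apply_noise([12.4, 3.1, 45.0], ["vehicle", "vegetation", "building"]))
```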
  • one or more mounting attributes of the one or more range sensors are adjusted according to analysis of a range accuracy performance of the one or more range sensors.
  • the mounting attributes may include, for example, a positioning on the emulated vehicle, an FOV, a range and an overlap region with one or more adjacent range sensors.
  • the emulated range sensor(s) may be mounted on the emulated vehicle similarly to the mounting of the real range sensor(s) on the real vehicle. Therefore exploring, evaluating and/or assessing the performance of the emulated range sensors, which may be easily accomplished in the simulated virtual model, may directly apply to the real range sensor(s).
  • Mounting recommendation(s) including range sensor(s) characteristics, model(s) and/or capabilities may therefore be offered to improve performance of the real range sensor(s).
  • one or more dynamic objects are inserted into the realistic model.
  • the dynamic object(s) may include, for example, a ground vehicle, an aerial vehicle, a naval vehicle, a pedestrian, an animal, vegetation and a dynamically changing road infrastructure object. This may allow creating a plurality of driving scenarios for training, evaluating and/or validating the autonomous driving system.
  • the driving scenarios may emulate real world traffic, road infrastructure objects, pedestrians, animals, vegetation, and/or the like.
  • one or more of a plurality of driver behavior classes are applied for controlling a movement of one or more ground vehicles such as the ground vehicle.
  • the driver behavior classes are adapted to the geographical area according to an analysis of typical driver behavior patterns identified in the geographical area. The one or more driver behavior classes are selected according to a density function calculated for the geographical area according to the recurrence of driver prototypes corresponding to the one or more driver behavior classes in the geographical area. Adapting the movement of the simulated vehicles to driving classes and patterns typical to the geographical area may significantly enhance the realistic and/or authentic simulation of the geographical area.
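  • For illustration, the following is a minimal Python sketch of selecting driver behavior classes according to a density function reflecting the recurrence of driver prototypes in a geographical area; the class names and weights are hypothetical placeholders, not values from this disclosure.
```python
import random

# Hypothetical density function for a geographical area: relative recurrence
# of each driver prototype observed in that area.
DRIVER_CLASS_DENSITY = {"cautious": 0.35, "average": 0.50, "aggressive": 0.15}

def pick_driver_class(density, rng=random.Random(0)):
    """Draw a driver behavior class with probability proportional to its recurrence."""
    classes, weights = zip(*density.items())
    return rng.choices(classes, weights=weights, k=1)[0]

# Assign a behavior class to each simulated ground vehicle in the model
fleet = [pick_driver_class(DRIVER_CLASS_DENSITY) for _ in range(5)]
print(fleet)
```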
  • simulated motion data is injected to the autonomous driving system.
  • the simulated motion data is emulated by one or more emulated motion sensors associated with the emulated vehicle.
  • the simulated motion data comprises one or more motion parameters, for example, a speed parameter, an acceleration parameter, a direction parameter, an orientation parameter and an elevation parameter. This may further enhance the virtual realistic model to emulate the real world geographical area by including the sensory motion data feed which may serve as a major feed for the autonomous driving system.
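  • For illustration, a minimal Python sketch of a simulated motion data sample carrying the motion parameters listed above; the field names and units are assumptions made for the example only.
```python
from dataclasses import dataclass, asdict

@dataclass
class SimulatedMotionSample:
    """One emulated motion-sensor reading injected to the autonomous driving system."""
    timestamp: float      # seconds since start of the simulated ride
    speed: float          # m/s
    acceleration: float   # m/s^2
    direction: float      # heading, degrees
    orientation: tuple    # (roll, pitch, yaw) in degrees
    elevation: float      # meters above sea level

sample = SimulatedMotionSample(0.1, 13.9, 0.4, 92.0, (0.0, 1.2, 92.0), 35.0)
print(asdict(sample))
```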
  • simulated transport data is injected to the autonomous driving system.
  • the simulated transport data comprises Vehicle to Anything (V2X) communication between the emulated vehicle and one or more other entities. This may further enhance the virtual realistic model to emulate the real world geographical area by including the transport data feed which may serve as a major feed for the autonomous driving system.
  • the synthetic imaging data is adjusted according to one or more environmental characteristics, for example, a lighting condition, a weather condition attribute and timing attribute. This may further increase ability for adjusting the virtual realistic model to simulate diverse environmental conditions to train, evaluate and/or validate the operation of the autonomous driving system in a plurality of scenarios and conditions.
  • the geographic map data includes, for example, a two dimensional (2D) map, a 3D map, an orthophoto map, an elevation map and a detailed map comprising object description for objects present in the geographical area.
  • the visual imagery data comprises one or more images which are members of a group consisting of: a ground level image, an aerial image and a satellite image, wherein the one or more images are 2D images or 3D images.
  • Using multiple diverse visual imagery data items may support creating an accurate and/or highly detailed virtual model of the emulated geographical area.
  • each of the plurality of static objects is a member of a group consisting of: a road, a road infrastructure object, an intersection, a building, a monument, a structure, a natural object and a terrain surface. This may allow focusing on the traffic, transportation and/or road infrastructure elements which may be of particular interest for training, evaluating and/or validating the autonomous driving system.
  • the one or more imaging sensors are members of a group consisting of: a camera, a video camera, an infrared camera and a night vision sensor.
  • the virtual realistic model may support emulation of a plurality of different imaging sensors to allow training, evaluating and/or validating of a plurality of autonomous driving systems which may be adapted to receive diverse and/or different sensory imaging data feeds.
  • a computer implemented method of creating a simulated ranging model of a real world scene used for training an autonomous driving system comprising:
  • Each of the plurality of range sensors is associated with positioning data indicating a positioning of the each range sensor, the positioning data is obtained from one or more positioning sensors associated with the each range sensor.
  • noise patterns may then be used to enhance a virtual realistic model replicating a real world geographical area with a realistic sensory ranging feed injected to the autonomous driving system during a training, evaluation and/or validation session.
  • the real ranging data is provided from one or more sources, for example, a real world measurement, a map based calculation and a calculation based on image processing of one or more images of the real world scene. This may allow obtaining accurate ranging information for the plurality of objects present in the scene.
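  • For illustration, a minimal Python sketch of estimating a noise pattern by comparing real range readings against accurate reference ranges (e.g. map based or image processing based calculations); the summary statistics chosen here are illustrative.
```python
import numpy as np

def estimate_noise_pattern(measured, reference):
    """Compare real range readings with accurate reference ranges for the same objects
    and summarize the residual error as a simple noise pattern."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    valid = ~np.isnan(measured)                      # drop missed returns
    residual = measured[valid] - reference[valid]
    return {
        "bias": float(residual.mean()),              # calibration offset
        "sigma": float(residual.std()),              # noise value
        "dropout": float(1.0 - valid.mean()),        # missed-return rate
    }

print(estimate_noise_pattern([10.1, np.nan, 30.4], [10.0, 20.0, 30.0]))
```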
  • the plurality of range sensors include, for example, a LIDAR sensor, a radar, a camera, an infrared camera and an ultra-sonic sensor.
  • the virtual realistic model may support emulation of a plurality of different range sensors to allow training, evaluating and/or validating of a plurality of autonomous driving systems which may be adapted to receive diverse sensory ranging data.
  • the one or more positioning sensors include, for example, a Global Positioning system (GPS) sensor, a gyroscope, an accelerometer, an Inertial Measurement Unit (IMU) sensor and an elevation sensor. Positioning information is essential for establishing a reference for the real sensory ranging data collected from the range sensors. Supporting multiple diverse types of motion sensors may allow improved (more accurate) processing and analysis of the collected sensory ranging data.
  • the positioning data includes motion data comprising one or more motion parameters of the associated range sensor, for example, a speed parameter, an acceleration parameter, a direction parameter, an orientation parameter and an elevation parameter.
  • Availability of the motion information may also significantly improve accuracy of the collected sensory ranging data.
  • Motion of the vehicles may affect the readings of the range sensor(s) mounted on the associated vehicles and therefore processing and analyzing the sensory ranging data with respect to the motion data may improve accuracy of the collected ranging data which may eventually result in creating more accurate noise pattern(s).
  • the analysis is a statistics based prediction analysis conducted using one or more machine learning algorithms, for example, a neural network and a Support Vector Machine (SVM).
  • Applying the machine learning algorithm(s) over large amounts of sensory data may significantly improve the characterization of the noise patterns associated with the range sensor(s) and/or the objects and/or type of objects detected in the real world scene.
  • the one or more noise patterns comprise one or more noise characteristics, for example, a noise value, a distortion value, a latency value and a calibration offset value.
  • the noise pattern(s) may describe a plurality of noise characteristics originating from the range sensor(s) themselves and/or from characteristics of the objects in the scene. Identifying the noise characteristics, in particular using the machine learning algorithm(s) to detect the noise characteristics, may significantly improve the accuracy of the noise patterns which may further improve the virtual realistic model.
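  • For illustration, a minimal Python sketch using scikit-learn's support vector regressor (one possible stand-in for the SVM mentioned above) to predict a noise characteristic from sensor and object features; the feature encoding and training data are invented for the example.
```python
import numpy as np
from sklearn.svm import SVR

# Illustrative training set: [range_m, surface_material_code, ambient_light]
X = np.array([[10, 0, 1.0], [50, 0, 0.2], [10, 1, 1.0], [50, 1, 0.2], [30, 2, 0.5]])
y = np.array([0.03, 0.12, 0.08, 0.30, 0.15])   # observed range error (meters)

model = SVR(kernel="rbf", C=10.0).fit(X, y)

# Predict the expected noise for a new combination of range, material and lighting
print(model.predict([[40, 1, 0.3]]))
```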
  • the analysis of the sensory data is done according to one or more environmental characteristics detected during acquisition of the ranging sensory data.
  • the environmental characteristics include, for example, a weather condition attribute, and a timing attribute.
  • the environmental conditions may affect the sensory ranging data and therefore analyzing the collected sensory ranging data with respect to the environmental conditions of the scene while acquiring the sensory ranging data may further increase accuracy of the identified noise pattern(s).
  • the one or more noise patterns are adjusted according to one or more object attributes of one or more of the plurality of objects, the one or more object attributes affect the ranging data produced by the each range sensor, the one or more object attributes are members of a group consisting of: an external surface texture, an external surface composition and an external surface material.
  • the characteristics of the objects in the scene and in particular the exterior characteristics of the objects may affect the sensory ranging data and therefore analyzing the collected sensory ranging data with respect to the object(s) characteristic(s) may further increase accuracy of the identified noise pattern(s).
  • the one or more object attributes are extracted from synthetic 3D imaging data generated for the real world scene. Extracting the object(s) characteristic(s) from the synthetic 3D imaging data may be relatively easy as the object(s) in the scene are identified and may be correlated with their characteristic(s) according to predefined rules.
  • the one or more object attributes are retrieved from a metadata record associated with the real world scene. Attribute(s) of similar objects may vary between geographical areas and/or real world scenes. Therefore, retrieving the object(s) attribute(s) from predefined records may allow associating the object(s) with their typical attribute(s) as found in the respective scene to further improve characterizing the noise characteristics and noise patterns for specific geographical areas, locations and/or real world scenes.
  • a computer implemented method of training a driver behavior simulator according to a geographical area comprising:
  • the sensor set comprising one or more motion sensors.
  • Detecting driving behavior patterns and classes by analyzing large amounts of real world sensory data collected at a specific geographical area may allow characterizing the driving behavior at certain geographical areas, regions and/or the like.
  • Using the identified driving behavior patterns and classes to enhance the virtual realistic model may allow adapting the virtual realistic model to specific geographical areas, regions and/or scenes making the virtual realistic model highly realistic.
  • the one or more motion sensors include, for example, a Global Positioning system (GPS) sensor, a gyroscope, an accelerometer, an Inertial Measurement Unit (IMU) sensor and an elevation sensor.
  • the sensory motion and/or positioning data collected from the motion and/or positioning sensor(s) may express the movement of the vehicles. Therefore analyzing the sensory motion and/or positioning data may allow identifying the driving behavior patterns and classes demonstrated in the monitored geographical area.
  • the analysis is a statistics based prediction analysis conducted by an evolving learning algorithm using one or more machine learning algorithms which are members of a group consisting of: a neural network and a Support Vector Machine (SVM). Applying the machine learning algorithm(s) over large amounts of sensory data may significantly improve the characterization of driving behavior patterns and/or classes identified in the specific geographical area.
  • each of the plurality of driver behavior patterns comprises one or more motion parameters, for example, a speed parameter, an acceleration parameter, a braking parameter, a direction parameter and an orientation parameter.
  • the driving behavior pattern(s) may describe a plurality of motion parameters. Identifying the motion parameters, in particular using the machine learning algorithm(s) to detect the motion parameters, may significantly improve the accuracy and/or granularity of the detected driving behavior patterns which may further improve the virtual realistic model.
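  • For illustration, a minimal Python sketch of grouping per-driver motion statistics into driver behavior classes with a Gaussian Mixture Model (one of the model families mentioned in this disclosure); the features and data are invented for the example.
```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Illustrative per-driver features aggregated from sensory motion data:
# [mean speed (m/s), 95th-percentile acceleration (m/s^2), mean braking rate (m/s^2)]
drivers = np.array([
    [12.0, 1.2, 1.0],
    [13.5, 1.4, 1.1],
    [22.0, 3.5, 3.0],
    [21.0, 3.2, 2.8],
    [16.0, 2.0, 1.8],
])

gmm = GaussianMixture(n_components=2, random_state=0).fit(drivers)
labels = gmm.predict(drivers)                  # driver prototype per driver
density = np.bincount(labels) / len(labels)    # recurrence of each prototype
print(labels, density)
```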
  • the analysis of the sensory data is done according to one or more environmental characteristics detected during acquisition of the sensory data, for example, a weather condition attribute and a timing attribute.
  • the driving behavior demonstrated by the drivers may be affected by the environmental characteristics. Therefore analyzing the collected sensory ranging data with respect to the environmental conditions detected at the geographical area (the scene) may further increase accuracy of the identified driving behavior pattern(s).
  • the analysis of the sensory data includes analyzing additional sensory data received from one or more outward sensors included in one or more of the plurality of sensor sets.
  • the one or more outward sensors depicting the geographical area as viewed from one of the plurality of vehicles associated with the one or more sensor sets. Enhancing the collected sensory motion and/or positioning data with ranging data may significantly improve the characterization of the driver's prototype(s) detected in the geographical area.
  • the one or more outward sensors include, for example, a camera, a night vision camera, a LIDAR sensor, a radar and an ultra-sonic sensor. Using sensory data collected from a plurality of different range sensors may allow for more flexibility in analyzing the sensory ranging data.
  • one or more of the plurality of driver behavior patterns is enhanced based on the analysis of the additional sensory data.
  • the one or more enhanced driver behavior patterns comprise one or more additional driving characteristics, for example, a tailgating characteristic, an in-lane position characteristic and a double-parking tendency characteristic. Enhancing the driver behavior pattern(s) may allow a more accurate characterization of the driver prototype(s) detected in the geographical area and may thus improve the simulation created using the virtual realistic model.
  • Implementation of the method and/or system of embodiments of the invention can involve performing or completing selected tasks manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of embodiments of the method and/or system of the invention, several selected tasks could be implemented by hardware, by software or by firmware or by a combination thereof using an operating system.
  • a data processor such as a computing platform for executing a plurality of instructions.
  • the data processor includes a volatile memory for storing instructions and/or data and/or a non-volatile storage, for example, a magnetic hard-disk and/or removable media, for storing instructions and/or data.
  • a network connection is provided as well.
  • a display and/or a user input device such as a keyboard or mouse are optionally provided as well.
  • FIG. 1 is a flowchart of an exemplary process of creating a simulated virtual model of a geographical area, according to some embodiments of the present invention
  • FIG. 2A and FIG. 2B are schematic illustrations of exemplary embodiments of a system for creating a simulated virtual model of a geographical area, according to some embodiments of the present invention
  • FIG. 3 is a flowchart of an exemplary process of training a driver behavior simulator for a certain geographical area, according to some embodiments of the present invention.
  • FIG. 4 is a flowchart of an exemplary process of creating a ranging sensory model of a geographical area, according to some embodiments of the present invention.
  • the present invention in some embodiments thereof, relates to creating a simulated model of a geographical area, and, more specifically, but not exclusively, to creating a simulated model of a geographical area, optionally including transportation traffic to generate simulation sensory data for training an autonomous driving system.
  • According to some embodiments of the present invention, there are provided methods and systems for training an autonomous driving system controlling a vehicle, for example, a ground vehicle, an aerial vehicle and/or a naval vehicle, in a certain geographical area using a simulated virtual realistic model replicating the certain geographical area.
  • the simulated virtual realistic model is created to emulate a sensory data feed, for example, imaging data, ranging data, motion data, transport data and/or the like which may be injected to the autonomous driving system during a training session.
  • the virtual realistic model is created by obtaining visual imagery data of the geographical area, for example, one or more 2D and/or 3D images, panoramic image and/or the like captured at ground level, from the air and/or from a satellite.
  • the visual imagery data may be obtained from, for example, Google Earth, Google Street View, OpenStreetCam, Bing maps and/or the like.
  • One or more trained classifiers may be applied to the visual imagery data to identify one or more (target) objects in the images, in particular static objects, for example, a road, a road infrastructure object, an intersection, a sidewalk, a building, a monument, a natural object, a terrain surface and/or the like.
  • the classifier(s) may classify the identified static objects to class labels based on a training sample set adjusted for classifying objects of the same type as the target objects.
  • the identified labeled objects may be superimposed over the geographic map data obtained for the geographical area, for example, a 2D map, a 3D map, an orthophoto map, an elevation map, a detailed map comprising object description for objects present in the geographical area and/or the like.
  • the geographic map data may be obtained from, for example, Google maps, OpenStreetMap and/or the like.
  • the labeled objects are overlaid over the geographic map(s) in the respective location, position, orientation, proportion and/or the like identified by analyzing the geographic map data and/or the visual imagery data to create a labeled model of the geographical area.
  • Using one or more synthesis implementations, for example, a Conditional Generative Adversarial Neural Network (cGAN), stitching texture(s) (of the labeled objects) retrieved from the original visual imagery data, overlaying textured images selected from a repository (storage) according to the class label and/or the like, the labeled objects in the labeled model may be synthesized with (visual) image pixel data to create the simulated virtual realistic model replicating the geographical area.
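  • For illustration, a minimal Python sketch of the model creation flow described above: classify static objects in the imagery, superimpose them over the map data, then synthesize a texture per labeled object. All callables and data structures are placeholders for whichever classifier, map alignment and cGAN/stitching implementations are actually used.
```python
def build_realistic_model(map_data, imagery, classify, synthesize_texture):
    """Sketch: turn geographic map data and visual imagery into a textured, labeled model."""
    # 1. Detect and label static objects (roads, buildings, terrain, ...) in the imagery.
    labeled_objects = classify(imagery)          # -> [{"label": ..., "footprint": ...}, ...]

    # 2. Superimpose the labeled objects over the map at their identified location,
    #    position, orientation and proportions to create the labeled model.
    labeled_model = {"map": map_data, "objects": labeled_objects}

    # 3. Synthesize a visual texture for every labeled object, e.g. with a cGAN or by
    #    stitching textures extracted from the original visual imagery data.
    for obj in labeled_model["objects"]:
        obj["texture"] = synthesize_texture(obj["label"], imagery)

    return labeled_model

# Usage with stand-in callables:
model = build_realistic_model(
    map_data={"name": "sample_area"},
    imagery=["street_view_tile_0"],
    classify=lambda imgs: [{"label": "building", "footprint": [(0, 0), (10, 0), (10, 8), (0, 8)]}],
    synthesize_texture=lambda label, imgs: f"texture_for_{label}",
)
print(model["objects"][0]["label"], model["objects"][0]["texture"])
```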
  • the virtual realistic model is adjusted according to one or more lighting and/or environmental (e.g. weather, timing etc.) conditions to emulate various real world environmental conditions and/or scenarios, in particular, environmental conditions typical to the certain geographical area.
  • synthetic 3D imaging data may be created and injected to the autonomous driving system.
  • the synthetic 3D imaging data may be generated to depict the virtual realistic model from a point of view of one or more emulated imaging sensors mounted on an emulated vehicle moving in the virtual realistic model.
  • the emulated vehicle may be created in the virtual realistic model to represent a real world vehicle controlled by the autonomous driving system.
  • the emulated imaging sensor(s) emulate one or more imaging sensors, for example, a camera, a video camera, an infrared camera, a night vision sensor and/or the like which are mounted on the real world vehicle controlled by the autonomous driving system.
  • the emulated imaging sensor(s) may be created, mounted and/or positioned on the emulated vehicle according to one or more mounting attributes of the imaging sensor(s) mounting on the real world vehicle, for example, positioning (e.g. location, orientation, elevations, etc.), FOV, range, overlap region with adjacent sensor(s) and/or the like.
  • one or more of the mounting attributes may be adjusted for the emulated imaging sensor(s) to improve perception and/or capture performance of the imaging sensor(s).
  • one or more recommendations may be offered to the autonomous driving system for adjusting the mounting attribute(s) of the imaging sensor(s) mounted on the real world vehicle.
  • the alternate mounting options may further suggest evaluating the capture performance of the imaging sensor(s) using another imaging sensor(s) model having different imaging attributes, i.e. resolution, FOV, magnification and/or the like.
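  • For illustration, a minimal Python sketch describing an emulated camera's mounting attributes and checking the horizontal overlap between two adjacent emulated imaging sensors; the geometry is simplified to yaw only and the values are illustrative.
```python
from dataclasses import dataclass

@dataclass
class EmulatedCamera:
    """Mounting attributes of an emulated imaging sensor on the emulated vehicle."""
    name: str
    yaw_deg: float          # pointing direction relative to the vehicle heading
    hfov_deg: float         # horizontal field of view
    resolution: tuple       # (width, height) in pixels

def horizontal_overlap_deg(a: EmulatedCamera, b: EmulatedCamera) -> float:
    """Overlap of the two horizontal FOV intervals (yaw-only approximation)."""
    a_lo, a_hi = a.yaw_deg - a.hfov_deg / 2, a.yaw_deg + a.hfov_deg / 2
    b_lo, b_hi = b.yaw_deg - b.hfov_deg / 2, b.yaw_deg + b.hfov_deg / 2
    return max(0.0, min(a_hi, b_hi) - max(a_lo, b_lo))

front = EmulatedCamera("front", 0.0, 90.0, (1920, 1080))
left = EmulatedCamera("front-left", -60.0, 90.0, (1920, 1080))
print(horizontal_overlap_deg(front, left))   # 30.0 degrees of shared coverage
```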
  • the virtual realistic model is enhanced with a sensory model created to simulate the certain geographical area, in particular, a ranging sensory model.
  • a simulated sensory ranging data feed may be injected to the autonomous driving system.
  • the simulated sensory ranging data may be generated as depicted by one or more emulated range sensors mounted on the emulated vehicle to emulate one or more range sensor(s) mounted on the real world vehicle, for example, a Light Detection and Ranging (LIDAR) sensor, a radar, an ultra-sonic sensor, a camera, an infrared camera and/or the like.
  • the emulated range sensor(s) may be mounted on the emulated vehicle according to one or more mounting attributes of the real world range sensor(s) mounted on the real world vehicle controlled by the autonomous driving system and emulated by the emulated vehicle in the virtual realistic model.
  • the ranging model created for the certain geographical area may be highly accurate. However, such accuracy may fail to represent real world sensory ranging data produced by real range sensor(s).
  • the ranging sensory model may therefore apply one or more noise patterns typical and/or inherent to the range sensor(s) emulated in the virtual realistic model.
  • the noise patterns may further include noise effects induced by one or more of the objects detected in the geographical area.
  • the noise pattern(s) may describe one or more noise characteristics, for example, noise, distortion, latency, calibration offset and/or the like.
  • the noise pattern(s) may be identified through big-data analysis and/or analytics over a large data set comprising a plurality of real world range sensor(s) readings collected for the geographical area and/or for other geographical locations.
  • the big-data analysis may be done using one or more machine learning algorithms, for example, a neural network such as, for instance, a Deep learning Neural Network (DNN), a Gaussian Mixture Model (GMM), etc., a Support Vector Machine (SVM) and/or the like.
  • the noise pattern(s) may be adjusted according to one or more object attributes of the objects detected in the geographical area, for example, an external surface texture, an external surface composition, an external surface material and/or the like.
  • the noise pattern(s) may also be adjusted according to one or more environmental characteristics, for example, weather, timing (e.g. time of day, date) and/or the like.
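  • For illustration, a minimal Python sketch of adjusting a learned noise value according to an object's external surface material and the current weather; the multipliers are hypothetical placeholders.
```python
# Illustrative multipliers: how an object's external surface material and the
# current weather might scale a base noise standard deviation.
MATERIAL_FACTOR = {"metal": 0.8, "glass": 1.5, "foliage": 2.0, "concrete": 1.0}
WEATHER_FACTOR = {"clear": 1.0, "rain": 1.6, "fog": 2.5}

def adjusted_sigma(base_sigma, material, weather):
    """Scale a learned noise sigma for a specific object attribute and environment."""
    return base_sigma * MATERIAL_FACTOR.get(material, 1.0) * WEATHER_FACTOR.get(weather, 1.0)

print(adjusted_sigma(0.05, "glass", "rain"))   # 0.12 m expected spread
```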
  • one or more mounting attributes may be adjusted for the emulated range sensor(s) to improve accuracy performance of the range sensor(s).
  • one or more dynamic objects are injected into the virtual realistic model replicating the geographical area, for example, a ground vehicle, an aerial vehicle, a naval vehicle, a pedestrian, an animal, vegetation and/or the like.
  • the dynamic object(s) may further include dynamically changing road infrastructure objects, for example, a light changing traffic light, an opened/closed railroad gate and/or the like. Movement of one or more of the dynamic objects may be controlled according to movement patterns predefined and/or learned for the certain geographical area. In particular, movement of one or more ground vehicles inserted into the virtual realistic model may be controlled according to driver behavior data received from a driver behavior simulator.
  • the driver behavior data may be adjusted according to one or more driver behavior patterns and/or driver behavior classes exhibited by a plurality of drivers in the certain geographical area, i.e. driver behavior patterns and/or driver behavior classes that may be typical to the certain geographical area.
  • the driver behavior classes may be identified through big-data analysis and/or analytics over a large data set of sensory data, for example, sensory motion data, sensory ranging data and/or the like collected from a plurality of drivers moving in the geographical area.
  • the sensory data may include, for example, speed, acceleration, direction, orientation, elevation, space keeping, position in lane and/or the like.
  • One or more machine learning algorithms, for example, a neural network (e.g. DNN, GMM, etc.) and/or the like, may be used to analyze the collected sensory data to detect movement patterns which may be indicative of one or more driver behavior patterns.
  • the driver behavior pattern(s) may be typical to the geographical area and therefore, based on the detected driver behavior pattern(s), the drivers in the geographical area may be classified to one or more driver behavior classes representing driver prototypes.
  • the driver behavior data may be further adjusted according to a density function calculated for the geographical area which represents the distribution of the driver prototypes in the simulated geographical area.
  • additional data relating to the emulated vehicle is simulated and injected to the autonomous driving system.
  • the simulated additional data may include, for example, sensory motion data presenting motion information of the emulated vehicle, and transport data simulating communication of the emulated vehicle with one or more other entities over one or more communication links, for example, Vehicle to Anything (V2X) and/or the like.
  • the simulated sensory data such as the imaging data, the ranging data, the motion data and/or the transport data may be injected to the autonomous driving system using the native interfaces of the autonomous driving system.
  • the simulated sensory data may be injected through the interface(s), port(s) and/or links.
  • the simulated sensory data may be injected using one or more virtual drivers using, for example, Application Programming Interface (API) functions of the autonomous driving system, a Software Development Kit (SDK) provided for the autonomous driving system and/or for the training system and/or the like.
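  • For illustration, a minimal Python sketch of a virtual driver that forwards synthetic frames to the autonomous driving software through a callback; the deliver_frame entry point is a hypothetical stand-in for whatever API or SDK function the autonomous driver actually exposes.
```python
import time

class VirtualCameraDriver:
    """Emulates an imaging-sensor driver: instead of reading hardware, it forwards
    frames rendered from the simulated virtual realistic model."""

    def __init__(self, frame_source, deliver_frame):
        self.frame_source = frame_source      # callable returning the next synthetic frame
        self.deliver_frame = deliver_frame    # hypothetical API of the autonomous driver

    def run(self, fps=30, duration_s=1.0):
        period = 1.0 / fps
        end = time.monotonic() + duration_s
        while time.monotonic() < end:
            self.deliver_frame(self.frame_source())   # inject the synthetic feed
            time.sleep(period)

# Usage with stand-in callables:
driver = VirtualCameraDriver(frame_source=lambda: b"<synthetic frame>",
                             deliver_frame=lambda frame: None)
driver.run(fps=10, duration_s=0.3)
```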
  • Training autonomous driving systems using the simulated virtual realistic model emulating one or more geographical areas, specifically a simulated virtual realistic model emulating sensory data such as the imaging data, ranging data, motion data and/or transport data, may present significant advantages.
  • the simulated virtual realistic model may be further used for evaluating, validating and/or improving design of autonomous driving systems.
  • autonomous driving systems are typically designed, trained, evaluated and validated in real world conditions in which a real world vehicle controlled by the autonomous driving system is moving in the real world geographical area. This may present major limitations since an extremely large number of driving hours may need to be accumulated in order to properly train and/or evaluate the autonomous driving systems. Moreover, the vehicle controlled by the autonomous driving system needs to be driven in a plurality of geographical locations which may further limit scalability of the training, evaluation and/or validation process. Furthermore, the autonomous driving system may need to be evaluated and/or trained for a plurality of environmental conditions, for example, weather conditions, timing conditions, lighting condition and/or the like as well as for a plurality of traffic conditions, for example, standard traffic, rush hour traffic, vacation time traffic and/or the like.
  • driver behavior of other vehicles may vary with respect to one or more conditions, for example, different geographical areas, different environmental conditions, different traffic conditions and/or the like.
  • This may also limit scalability of the training, evaluation and/or validation process.
  • the limited scalability of the currently existing methods may further result from the enormous amount of resources that may be required to train, evaluate and/or validate the autonomous driving systems in the plurality of ride scenarios, for example, time, hardware resources, human resources and/or the like. Training the autonomous driving systems using the simulated virtual realistic model on the other hand, may allow for significant scalability since the plurality of various ride scenarios may be easily simulated for a plurality of geographical locations.
  • the training, evaluation and/or validation process may be done automatically by an automated system executing the simulated virtual realistic model. Moreover, the training, evaluation and/or validation process may be done for a plurality of geographical areas, various conditions and/or various scenarios in a closed facility without the need to move real vehicles in the real world. In addition, the automated training, evaluation and/or validation process may be conducted concurrently for the plurality of geographical areas, the various conditions and/or the various scenarios. This may significantly reduce the resources, for example, time, hardware resources, human resources and/or the like for training, evaluating and/or validating the autonomous driving systems.
  • Moreover, training, evaluating and/or validating the autonomous driving systems using the simulated virtual realistic model may significantly reduce risk since the process is conducted in a virtual environment. Damages, accidents and even loss of life which may occur using the currently existing methods for training the autonomous driving systems may be completely prevented and avoided.
  • the autonomous driving system may require insignificant and typically no adaptations to support training, evaluation and/or validation using the simulated virtual realistic model.
  • the simulated sensory data feed(s) depicting the simulated virtual realistic model may be injected to the autonomous driving systems through the native interface used by the autonomous driving system to receive real sensory data.
  • the present invention may be a system, a method, and/or a computer program product.
  • the computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
  • the computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device.
  • the computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • a non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
  • a computer readable storage medium is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
  • Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network.
  • the network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
  • Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
  • the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the figures.
  • two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • FIG. 1 is a flowchart of an exemplary process of creating a simulated model of a geographical area, according to some embodiments of the present invention.
  • a process 100 may be executed to train an autonomous driving system in a certain geographical area using a simulated virtual 3D model created to replicate the geographical area.
  • the simulated virtual realistic model is created by obtaining visual imagery data of the geographical area which may be processed by one or more trained classifiers to identify one or more objects in the visual imagery data, in particular static objects. The identified objects may be superimposed over geographic map(s) obtained for the geographical area to create a labeled model of the geographical area.
  • the labeled model may be synthesized to create the virtual 3D realistic model replicating the geographical area.
  • the virtual realistic model is adjusted according to one or more lighting and/or environmental conditions to emulate various real world lighting effects, weather conditions, ride scenarios and/or the like.
  • synthetic 3D imaging data may be created and injected to the autonomous driving system.
  • the synthetic 3D imaging data may be generated to depict the virtual realistic model from a point of view of one or more emulated imaging sensors (e.g. a camera, an infrared camera, a video camera, a night vision sensor, etc.) mounted on an emulated vehicle moving in the virtual realistic model which represents the vehicle controlled by the autonomous driving system.
  • a ranging model is simulated for the geographical area to emulate a realistic ranging arena in the simulated virtual realistic model.
  • one or more dynamic objects are injected into the virtual realistic model replicating the geographical area, for example, a ground vehicle, an aerial vehicle, a naval vehicle, a pedestrian, an animal, vegetation and/or the like.
  • movement of one or more ground vehicles inserted into the virtual realistic model may be controlled according to driver behavior patterns identified through big-data analytics of vehicle movement in the geographical area.
  • FIG. 2A and FIG. 2B are schematic illustrations of exemplary embodiments of a system for creating a simulated model of a geographical area, according to some embodiments of the present invention.
  • An exemplary system 200A includes a simulation server 201 comprising an Input/Output (I/O) interface 202, a processor(s) 204 and a storage 206.
  • the I/O interface 202 may provide one or more network interfaces, wired and/or wireless for connecting to one or more networks 230, for example, a Local Area Network (LAN), a Wide Area Network (WAN), a Metropolitan Area Network (MAN), a cellular network and/or the like.
  • the processor(s) 204 may be arranged for parallel processing, as clusters and/or as one or more multi core processor(s).
  • the storage 206 may include one or more non-transitory persistent storage devices, for example, a hard disk drive (HDD), a Solid State Disk (SSD), a Flash array and/or the like.
  • the storage 206 may further include one or more networked storage resources accessible over the network(s) 230, for example, a Network Attached Storage (NAS), a storage server, a cloud storage and/or the like.
  • the storage 206 may also utilize one or more volatile memory devices, for example, a Random Access Memory (RAM) device and/or the like for temporary storage of code and/or data.
  • the processor(s) 204 may execute one or more software modules, for example, a process, a script, an application, an agent, a utility and/or the like which comprise a plurality of program instructions stored in a non-transitory medium such as the storage 206.
  • the processor(s) 204 may execute one or more software modules such as, for example, a simulator 210 for training an autonomous driving system 220 using a simulated model created to replicate one or more geographical areas, a ranging model creator 212 for creating a ranging model of the geographical area(s), and a driver behavior simulator 214 simulating driver behavior in the geographical area(s).
  • one or more of the simulator 210, the ranging model creator 212 and/or the driver behavior simulator 214 are integrated in a single software module.
  • the simulator 210 may include the ranging model creator 212 and/or the driver behavior simulator 214.
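  • For illustration, a minimal Python sketch of one possible composition of these software modules, with the simulator aggregating the ranging model creator and the driver behavior simulator; all class and method names are illustrative.
```python
class RangingModelCreator:
    def create_model(self, area_id):
        return {"area": area_id, "noise_patterns": {}}   # placeholder ranging model

class DriverBehaviorSimulator:
    def next_action(self, vehicle_state):
        return {"throttle": 0.2, "steering": 0.0}        # placeholder behavior output

class Simulator:
    """Stands in for simulator 210 aggregating the ranging model creator 212
    and the driver behavior simulator 214."""
    def __init__(self, area_id):
        self.ranging_model = RangingModelCreator().create_model(area_id)
        self.driver_behavior = DriverBehaviorSimulator()

sim = Simulator("downtown_area")
print(sim.ranging_model, sim.driver_behavior.next_action({"speed": 10.0}))
```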
  • the simulator 210, the ranging model creator 212 and/or the driver behavior simulator 214 may communicate over the network(s) 230 to obtain, retrieve, receive and/or collect data, for example, geographic map data, visual imagery data, sensory data and/or the like from one or more remote locations.
  • the simulation server 201 and/or part thereof is implemented as a cloud based platform utilizing one or more cloud services, for example, cloud computing, cloud storage, cloud analytics and/or the like.
  • one or more of the software modules for example, the simulator 210, the ranging model creator 212 and/or the driver behavior simulator 214 may be implemented as a cloud based service and/or platform, for example, software as a service (SaaS), platform as a service (PaaS) and/or the like.
  • the autonomous driving system 220 may include an I/O interface such as the I/O interface 202, one or more processors such as the processor(s) 204, and a storage such as the storage 206.
  • the I/O interface of the autonomous driving system 220 may provide a plurality of I/O and/or network interfaces for connecting to one or more peripheral devices, for example, sensors, networks such as the network 230 and/or the like.
  • the I/O interface of the autonomous driving system 220 may provide connectivity to one or more imaging sensors, ranging sensors, motion sensors, V2X communication links, vehicle communication links (e.g. Controller Area Network (CAN), etc.) and/or the like.
  • the I/O interface 202 may further include one or more interfaces adapted and/or configured to the native I/O interfaces of the I/O interface of the autonomous driving system 220.
  • the I/O interface 202 may include one or more output links compliant with one or more input links of the I/O interface of the autonomous driving system 220 such that an imaging data feed may be driven from the simulator system 201 to the autonomous driving system 220.
  • the I/O interface 202 may provide connectivity to the autonomous driving system 220 to drive ranging sensory data, motion sensory data, transport data and/or the like.
  • the communication between the simulation server 201 and the autonomous driving system 220 may support bidirectional communication from the simulation server 201 to the autonomous driving system 220 and vice versa.
  • the autonomous driving system 220 may execute one or more software modules such as, for example, an autonomous driver 222 for controlling movement, navigation and/or the like of a vehicle, for example, a ground vehicle, an aerial vehicle and/or a naval vehicle.
  • the functionality of the autonomous driving system 220 may be facilitated through the autonomous driver 222 independently of the autonomous driving system 220.
  • the autonomous driver 222 may be executed by one or more processing nodes, for example, the simulation server 201, another processing node 240 connected to the network 230 and/or the like.
  • one or more of the software modules executed by the simulation server, for example, the simulator 210 may communicate with the autonomous driver 222 through one or more software interfaces of the autonomous driver 222 which are typically used to receive the imaging data feed, the ranging sensory data, the motion sensory data, the transport data and/or the like.
  • the software interface(s) may include, for example, a virtual driver, an API function, a system call, an operating system function and/or the like.
  • the communication between the simulator 210 and the autonomous driver 222 may also be established using one or more SDKs provided for the autonomous driver 222 and/or for the simulator 210.
  • when the autonomous driver 222 is executed at a remote processing node such as the processing node 240, the simulator 210 and the autonomous driver 222 may communicate with each other using the software interfaces, which may be facilitated over one or more networks such as the network 230.
  • the process 100 starts with the simulator 210 obtaining geographic map data of one or more geographical areas, for example, an urban area, an open terrain area, a country side area and/or the like.
  • the simulator 210 obtains the geographic map data for a geographical area targeted for training an autonomous driving system such as the autonomous driving system 220 and/or autonomous driver such as the autonomous driver 222.
  • the geographic map data may include one or more maps, for example, a 2D map, a 3D map, an orthophoto map, an elevation map, a detailed map comprising object description for objects present in the geographical area and/or the like.
  • the geographic map data may be obtained from one or more geographic map sources, for example, Google maps, OpenStreetMap and/or the like.
  • the geographic map data may present one or more static objects located in the geographical area, for example, a road, a road infrastructure object, a building, a monument, a structure, a natural object, a terrain surface and/or the like.
  • the geographic map data may include, for example, road width, structure(s) contour outline, structure(s) polygons, structure(s) dimensions (e.g. width, length, height) and/or the like.
  • the simulator 210 may retrieve the geographic map data from the storage 206 and/or from one or more remote locations accessible over the network(s) 230.
  • the simulator 210 obtains visual imagery data of the target geographical area.
  • the visual imagery data may include one or more 2D and/or 3D images depicting the geographical area from ground level, from the air and/or from a satellite.
  • the simulator 210 may retrieve the visual imagery data from the storage 206 and/or from one or more remote locations accessible over the network(s) 230.
  • the simulator 210 may obtain the visual imagery data from one or more visual imagery sources, for example, Google Earth, Google Street View, OpenStreetCam and/or the like.
  • the simulator 210 labels the static objects detected in the imagery data.
  • the simulator 210 may use one or more computer vision classifiers (classification functions), for example, a Convolutional Neural Network (CNN), an SVM and/or the like for classifying the static object(s) detected in the visual imagery data to predefined labels as known in the art.
  • the classifier(s) may identify and label target static objects depicted in the visual imagery data, for example, a road infrastructure object, a building, a monument, a structure, a natural object, a terrain surface and/or the like.
  • the road infrastructure objects may include, for example, roads, road shoulders, pavements, intersections, interchanges, traffic lights, pedestrian crossings and/or the like.
  • the classifier(s) may typically be trained with a training image set adapted for the target static object defined to be recognized in the imagery data, for example, the road infrastructure object, the building, the monument, the structure, the natural object, the terrain surface and/or the like.
  • the training image set may be collected, constructed, adapted, augmented and/or transformed to present features of the static objects in various types, sizes, view angles, colors and/or the like.
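To make the classification step concrete, the following minimal sketch (an assumption, not the patent's specified implementation) shows how a small CNN could assign per-pixel labels to the visual imagery data; the label list and network layers are hypothetical placeholders.

```python
# Minimal sketch (assumption): per-pixel labeling of static objects with a small CNN.
# The label set and architecture are illustrative placeholders, not the patent's design.
import torch
import torch.nn as nn

STATIC_LABELS = ["road", "pavement", "building", "traffic_light", "terrain", "natural_object"]

class TinySegmenter(nn.Module):
    def __init__(self, num_classes=len(STATIC_LABELS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(64, num_classes, 1)  # per-pixel class scores

    def forward(self, image):
        return self.classifier(self.features(image))

# Usage: classify a batch of RGB images into label maps.
model = TinySegmenter()
images = torch.rand(1, 3, 256, 256)       # stand-in for visual imagery data
label_map = model(images).argmax(dim=1)   # per-pixel map of predicted static-object labels
```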
  • prior to analyzing the visual imagery data to identify the static objects, the simulator 210 applies noise removal to the visual imagery data to remove unnecessary objects, for example, vehicles, animals, vegetation and/or the like that may obscure (at least partially) the target static objects.
  • the simulator 210 may further apply one or more image processing algorithms to improve and/or enhance visibility of the static objects depicted in the imagery data, for example, adjust image brightness, adjust image color scheme, adjust image contrast and/or the like.
  • the simulator 210 superimposes the labeled static objects over the geographic map data, for example, the geographic map(s) to create a labeled model of the geographical area.
  • the simulator 210 may overlay, fit, align, adjust and/or the like the labeled objects over the geographic map(s) in the respective location, position, orientation, proportion and/or the like as identified by analyzing the geographic map data and/or the visual imagery data.
  • the simulator 210 may associate each of the labeled static objects detected in the visual imagery data with one or more positioning attributes extracted from the geographic map data and/or the visual imagery data, for example, location, position, elevation, orientation, proportion and/or the like. Using the positioning attributes, the simulator 210 may position (i.e. place, orient and/or scale) each of the labeled static objects at its respective location in the labeled model.
  • the resulting labeled model may therefore replicate the real world geographical area such that the labeled objects are accurately placed and/or located according to their real world place, location, position, elevation, orientation and/or the like.
  • the simulator 210 may further create an elevation model that is integrated with the labeled model.
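As a rough illustration of the superimposing step, the sketch below places a labeled object onto a local map grid using its positioning attributes; the field names, the equirectangular projection and the map origin are assumptions made for the example.

```python
# Minimal sketch (assumption): placing labeled static objects onto a 2D map grid
# using positioning attributes (location, orientation, footprint). Field names are illustrative.
from dataclasses import dataclass
import math

@dataclass
class LabeledObject:
    label: str
    lat: float          # location extracted from map / imagery data
    lon: float
    heading_deg: float  # orientation
    length_m: float     # proportions
    width_m: float

def to_map_cell(obj, origin_lat, origin_lon, cell_size_m=1.0):
    """Project a labeled object's location into local map-grid coordinates."""
    # Simple equirectangular approximation around the map origin.
    dx = (obj.lon - origin_lon) * 111_320 * math.cos(math.radians(origin_lat))
    dy = (obj.lat - origin_lat) * 110_540
    return int(dx / cell_size_m), int(dy / cell_size_m)

building = LabeledObject("building", 32.0855, 34.7818, 90.0, 20.0, 12.0)
print(to_map_cell(building, origin_lat=32.0850, origin_lon=34.7810))
```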
  • the simulator 210 synthesizes the labeled model to assign (visual) image pixel data to each of the labeled static objects in the labeled model to create a virtual 3D visual realistic scene replicating the geographical area.
  • the simulator 210 may apply one or more methods, techniques and/or implementations for synthesizing the labeled scene.
  • the simulator 210 may use one or more machine learning methodologies, techniques and/or algorithms, for example, a cGAN and/or the like to synthesize one or more of the labeled objects.
  • the cGAN as known in the art, may be trained to apply a plurality of visual data transformations, for example, pixel to pixel, label to pixel and/or the like.
  • the cGAN may therefore generate visual appearance imagery (e.g. one or more images) for each of the labels which classify the static object(s) in the labeled model.
  • the cGAN may be trained to perform the reverse operation of the classifier(s) (classification function) such that the cGAN may generate a corresponding visual imagery for label(s) assigned by the classifier(s) to one or more of the static objects in the labeled model. For example, assuming the simulator 210 identified a road object in the visual imagery data of the geographical area and labeled the road object in the labeled scene. The simulator 210 may apply the cGAN(s) to transform the road label of the road object to a real world visual appearance imagery of the road object.
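The following is a minimal, hypothetical sketch of the label-to-pixel direction described above, in the spirit of a pix2pix-style cGAN generator; the layer structure is a toy stand-in and adversarial training against a discriminator is omitted.

```python
# Minimal sketch (assumption): a toy pix2pix-style generator that maps a one-hot label map
# to an RGB image, i.e. the reverse direction of the classifier. Not the patent's exact network.
import torch
import torch.nn as nn

class LabelToImageGenerator(nn.Module):
    def __init__(self, num_labels=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_labels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),  # RGB in [-1, 1]
        )

    def forward(self, label_map_onehot):
        return self.net(label_map_onehot)

# Usage: synthesize appearance for a labeled scene (e.g. a "road" label map).
gen = LabelToImageGenerator()
label_map = torch.zeros(1, 6, 128, 128)
label_map[:, 0] = 1.0                # everything labeled "road" for illustration
fake_road_image = gen(label_map)     # in practice trained adversarially against a discriminator
```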
  • the simulator 210 may use the original visual imagery data to extract the texture of one or more of the labeled objects. For example, assuming the simulator 210 identified the road object in the visual imagery data of the geographical area and labeled the road object in the labeled scene. The simulator 210 may extract the texture of the road object from the original visual imagery data and overlay the road object texture on the road object label in the labeled scene.
  • the simulator 210 may retrieve the texture of one or more of the labeled objects from a repository comprising a plurality of texture images of a plurality of objects.
  • the repository may be facilitated through one or more storage locations, for example, the storage 206 and/or one or more remote storage locations accessible over the network 230.
  • for example, the simulator 210 may have identified the road object in the visual imagery data of the geographical area and labeled the road object in the labeled scene.
  • the simulator 210 may access the storage location(s) to retrieve the texture of the road object and overlay the road object texture on the road object label in the labeled scene.
  • the simulator 210 may manipulate, for example, increase, decrease, stretch, contract, rotate, transform and/or the like the labeled object(s)' texture extracted from the visual imagery data to fit the location, size, proportion and/or perspective of the labeled object(s) in the labeled scene.
  • the simulator 210 may manipulate the road object texture to fit the road object location, size and/or perspective in the labeled scene as depicted from a certain point of view.
  • the simulator 210 may apply one or more computer graphics standards, specifications, methodologies and/or the like for creating and/or rendering the virtual 3D visual realistic model, for example, OpenGL, DirectX and/or the like.
  • the resulting virtual 3D visual realistic model may be significantly accurate in visually replicating the geographical area.
  • the virtual 3D visual realistic model may therefore be highly suitable for training the autonomous driver 222 for controlling movement of an emulated vehicle, for example, a ground vehicle, an aerial vehicle, a naval vehicle and/or the like in the simulated realistic model replicating the geographical area.
  • the simulator 210 adjusts the simulated realistic model to simulate one or more environmental conditions, for example, a lighting condition, a timing condition, a weather condition and/or the like.
  • the simulator 210 may adjust the lighting conditions of the simulated realistic model according to a time of day and/or a date, i.e. according to an angle of the sun in the sky.
  • the simulator 210 may adjust the lighting conditions of the simulated realistic model according to a weather condition, for example, an overcast, cloudy sky, fog and/or the like.
  • the simulator 210 may simulate raindrops, snowflakes and/or the like to simulate the respective weather conditions.
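As an illustrative assumption of how the lighting adjustment could be realized, the sketch below derives a scene light intensity from the time of day with a crude solar-elevation approximation and attenuates it per weather condition; the attenuation factors are invented.

```python
# Minimal sketch (assumption): deriving a scene light intensity from the time of day
# (crude solar-elevation approximation) and attenuating it for weather conditions.
import math

WEATHER_ATTENUATION = {"clear": 1.0, "overcast": 0.55, "fog": 0.35, "rain": 0.45}

def light_intensity(hour_of_day, weather="clear"):
    # Approximate sun elevation as a sine peaking at local noon (illustrative only).
    elevation = math.sin(math.pi * max(0.0, min(1.0, (hour_of_day - 6) / 12.0)))
    return max(0.05, elevation) * WEATHER_ATTENUATION[weather]  # keep some ambient light at night

print(light_intensity(12, "clear"))   # bright midday
print(light_intensity(19, "fog"))     # dim, foggy evening
```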
  • the simulator 210 may insert one or more simulated dynamic objects to the simulated virtual realistic model.
  • the simulated dynamic objects may include for example, a ground vehicle, an aerial vehicle, a naval vehicle, a pedestrian, an animal, vegetation and/or the like.
  • the simulated dynamic objects may further include one or more dynamically changing road infrastructure objects, for example, a light changing traffic light, an opened/closed railroad gate and/or the like.
  • the simulator 210 may apply one or more movement and/or switching patterns to one or more of the simulated dynamic objects to mimic the real world dynamic objects behavior.
  • the simulator 210 may insert a simulated aircraft to the simulated virtual realistic model and control the simulated aircraft to fly in a valid flight lane as exists in the geographical area.
  • the simulator 210 may place one or more simulated traffic lights at an intersection detected in the geographical area.
  • the simulator 210 may control the simulated traffic light(s) to switch lights in accordance with traffic control directives applicable to traffic light(s) in the geographical area.
  • the simulator 210 may further synchronize switching of multiple simulated traffic lights to imitate real traffic control as applied in the geographic area.
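One possible way to realize the synchronized traffic light switching is sketched below, under the assumption of a shared cycle with per-light phase offsets; the phase durations are illustrative.

```python
# Minimal sketch (assumption): a cyclic traffic-light controller; multiple simulated lights
# are synchronized by giving each one a phase offset within a shared cycle.
PHASES = [("green", 30.0), ("yellow", 4.0), ("red", 36.0)]   # illustrative durations in seconds
CYCLE = sum(d for _, d in PHASES)

def light_state(sim_time_s, phase_offset_s=0.0):
    t = (sim_time_s + phase_offset_s) % CYCLE
    for color, duration in PHASES:
        if t < duration:
            return color
        t -= duration

# Two lights along a corridor, offset to create a "green wave".
for t in (0, 20, 40):
    print(t, light_state(t, 0.0), light_state(t, 15.0))
```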
  • the simulator 210 may use a driver behavior simulator to control one or more of the emulated ground vehicles according to one or more driver behavior classes and/or patterns identified at the geographical area.
  • the driver behavior classes may include, for example, an aggressive driver, a normal driver, a patient driver, a reckless driver and/or the like.
  • the simulator 210 using the driver behavior simulator, may apply the driver behavior classes according to a density function calculated for the geographical area which represents the distribution of the driver prototypes (classes) detected in the geographical area replicated by the virtual realistic model. This may allow accurate simulation of the virtual realistic model which may thus be significantly similar to the real world geographical area with respect to driving behavior typical to the geographical area.
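A minimal sketch, assuming the density function is represented as a probability per driver prototype, of how the simulator might sample behavior classes for the emulated vehicles; the prototype probabilities are invented.

```python
# Minimal sketch (assumption): assigning behavior classes to emulated vehicles by sampling
# from a driver-prototype density function estimated for the geographical area.
import random

# Hypothetical density function: probability of each driver prototype in the area.
density = {"aggressive": 0.15, "normal": 0.6, "patient": 0.2, "reckless": 0.05}

def assign_driver_classes(num_vehicles, density):
    prototypes = list(density)
    weights = [density[p] for p in prototypes]
    return random.choices(prototypes, weights=weights, k=num_vehicles)

print(assign_driver_classes(10, density))
```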
  • FIG. 3 is a flowchart of an exemplary process of training a driver behavior simulator for a certain geographical area, according to some embodiments of the present invention.
  • An exemplary process 300 may be executed by a driver behavior simulator such as the driver behavior simulator 214 in a system such as the system 200 for training the driver behavior simulator 214 according to driver behavior characteristics detected in the geographical area. While the process 300 may be applied in a plurality of geographical areas for training the driver behavior simulator 214, for brevity, the process 300 is described for a single geographical location.
  • the driver behavior simulator 214 may be used by a simulator such as the simulator 210 to simulate movement of emulated vehicles in a simulated virtual realistic model replicating the geographical area.
  • the process 300 starts with the driver behavior simulator 214 obtaining sensory data from a plurality of sensor sets mounted on a plurality of vehicles driven by a plurality of drivers in the geographical area.
  • the sensory data may include sensory motion data obtained from one or more motion sensors of the sensor set, for example, a Global Positioning system (GPS) sensor, a gyroscope, an accelerometer, an Inertial Measurement Unit (IMU) sensor, an elevation sensor and/or the like.
  • the driver behavior simulator 214 may analyze the sensory data to detect movement patterns of each of the plurality of vehicles in the geographical area.
  • the driver behavior simulator 214 may apply big-data analysis and/or analytics using one or more machine learning algorithms, for example, a neural network (e.g. DNN, GMM, etc.), an SVM and/or the like to analyze large amounts of sensory data collected from the vehicles riding in the geographical areas and detect the movement patterns.
  • the detected movement patterns may be indicative of one or more driver behavior patterns exhibited by one or more of the drivers driving the vehicles in the geographical area.
  • the driver behavior simulator 214 may therefore analyze the detected movement patterns to infer the driver behavior pattern(s).
  • the driver behavior simulator 214 may again apply the machine learning algorithm(s) to identify the driver behavior pattern(s) from analysis of the movement patterns.
  • the driver behavior pattern(s) may include one or more motion parameters, for example, a speed parameter, an acceleration parameter, a braking parameter, a direction parameter, an orientation parameter and/or the like. Moreover, the driver behavior pattern(s) may be associated with specific locations of interest, for example, an intersection, a road curve, a turning point, an interchange entrance/exit ramp and/or the like. The driver behavior pattern(s) may describe one or more of the motion parameters for the location of interest.
  • the driver behavior simulator 214 may create one or more driver behavior pattern(s) to describe a prolonged driving action, for example, crossing the intersection such that the driver behavior pattern(s) may specify, for example, a speed parameter for the intersection entry phase, a speed parameter for the intersection crossing phase, a speed parameter for the intersection exit phase and/or the like.
  • the driver behavior pattern(s) may describe a direction and/or orientation parameter for one or more phases while exiting the interchange on the exit ramp.
  • the driver behavior pattern(s) may describe an acceleration parameter for one or more phases while entering the interchange entrance ramp.
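To illustrate how such phase-wise driver behavior patterns might be derived from the sensory motion data, the sketch below computes speed and acceleration from a timestamped position trace and summarizes them per phase; the trace and the phase boundaries are invented for the example.

```python
# Minimal sketch (assumption): deriving per-phase motion parameters (speed, acceleration)
# from a timestamped position trace around an intersection, split into entry/cross/exit phases.
import numpy as np

t = np.arange(0, 12, 0.5)                        # seconds
x = np.cumsum(np.full_like(t, 6.0)) * 0.5        # stand-in positions along the road (m)
speed = np.gradient(x, t)                        # m/s
accel = np.gradient(speed, t)                    # m/s^2; strongly negative values suggest braking

phases = {"entry": slice(0, 8), "crossing": slice(8, 16), "exit": slice(16, None)}
pattern = {name: {"mean_speed": float(speed[s].mean()),
                  "mean_accel": float(accel[s].mean())} for name, s in phases.items()}
print(pattern)
```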
  • the driver behavior simulator 214 analyzes the sensory data with respect to one or more environmental characteristics detected during acquisition of the sensory data, for example, a weather condition attribute, a timing attribute and/or the like.
  • the environmental characteristics may affect the driving behavior exhibited by at least some of the drivers and the driver behavior simulator 214 may therefore adapt, create and/or amend the driver behavior pattern(s) according to the environmental characteristic(s).
  • the driver behavior may differ between night and day, between summer and winter and/or the like. Therefore, based on the timing attribute associated with the sensory data, the driver behavior simulator 214 may create the driver behavior pattern(s) to distinguish between day and night, summer and winter and/or the like.
  • the driver behavior may change in case of wind, rain, fog, snow and/or the like. Therefore, based on the weather attribute associated with the sensory data, the driver behavior simulator 214 may create the driver behavior pattern(s) for each of the weather conditions.
  • the sensory data comprises additional sensory data obtained from one or more outward sensors included in one or more of the sensor sets mounted on the plurality of vehicles.
  • the outward sensor(s) depict the geographical area as viewed from one of the plurality of vehicles associated with the respective sensor set(s).
  • the outward sensor(s) may include, for example, a camera, an infrared camera, a night vision sensor, a LIDAR sensor, a radar, an ultra-sonic sensor and/or the like.
  • the driver behavior simulator 214 may detect further movement patterns, driver behavior patterns and/or additional driving characteristics of the drivers of the vehicles in the geographical area and may enhance one or more of the driver behavior patterns accordingly.
  • the additional driving characteristics may include, for example, a tailgating characteristic, an in-lane position characteristic, a double-parking tendency characteristic and/or the like.
  • the driver behavior simulator 214 may detect double parking events and may thus associate one or more of the driver behavior patterns with the double-parking tendency characteristic.
  • the driver behavior simulator 214 may identify space keeping parameters and in-lane position parameters and may thus associate one or more of the driver behavior patterns with the tailgating characteristic, the in-lane position characteristic and/or the like.
  • the driver behavior simulator 214 may associate at least some of the drivers with one or more of the driver behavior patterns detected for the vehicles' drivers in the geographical area.
  • the driver behavior simulator 214 may classify at least some of the plurality of drivers to one or more driver behavior classes according to the driver behavior pattern(s) associated with each of the drivers.
  • the driver behavior classes may include, for example, an aggressive driver prototype, a normal driver prototype, a patient driver prototype, a reckless driver prototype and/or the like.
  • the driver behavior simulator 214 may use one or more classification, clustering and/or grouping methods, techniques and/or algorithms for classifying the drivers to the driver behavior classes.
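A minimal sketch of one such clustering approach, assuming per-driver feature vectors (mean speed, harsh-braking rate, mean headway) and k-means from scikit-learn; the features, values and cluster count are illustrative.

```python
# Minimal sketch (assumption): grouping drivers into behavior prototypes by clustering
# per-driver feature vectors; features and cluster count are illustrative.
import numpy as np
from sklearn.cluster import KMeans

# Columns: mean speed (m/s), harsh-braking events per 100 km, mean headway (s)
driver_features = np.array([
    [22.0, 1.0, 2.5],
    [31.0, 9.0, 0.8],
    [24.0, 2.0, 2.2],
    [33.0, 12.0, 0.6],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(driver_features)
print(kmeans.labels_)   # cluster index per driver, later mapped to named prototypes
```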
  • the driver behavior simulator 214 may calculate a driver behavior density function associated with the geographical area and/or part thereof.
  • the driver behavior density function describes a probability of presence and/or distribution (number) of each of the driver prototypes in the geographical area (or part thereof) at a certain time.
  • the driver behavior simulator 214 may calculate the driver behavior density function according to a distribution and/or recurrence of drivers of each of the driver prototypes in the geographical area.
  • the driver behavior simulator 214 may further adjust the calculated driver behavior density function according to one or more of the environmental conditions. For example, the number of drivers of a certain driver prototype may differ between day and night, for example, more reckless drivers (young people) at night while more normal drivers (people driving to work places) during the day. In another example, during rainy weather conditions, the number of reckless drivers may decrease.
  • the driver behavior simulator 214 may therefore adjust the driver behavior density function accordingly.
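As an illustration of the density function and its adjustment, the sketch below estimates the prototype distribution from classified drivers and re-weights it for an environmental condition; the re-weighting factors are assumptions.

```python
# Minimal sketch (assumption): estimating the driver-prototype density function from observed
# classifications and re-weighting it for an environmental condition (e.g. night-time).
from collections import Counter

def density_function(observed_prototypes):
    counts = Counter(observed_prototypes)
    total = sum(counts.values())
    return {proto: n / total for proto, n in counts.items()}

def adjust_for_condition(density, factors):
    scaled = {p: density.get(p, 0.0) * factors.get(p, 1.0) for p in density}
    norm = sum(scaled.values()) or 1.0
    return {p: v / norm for p, v in scaled.items()}

day = density_function(["normal"] * 70 + ["aggressive"] * 20 + ["reckless"] * 10)
night = adjust_for_condition(day, {"reckless": 2.0, "normal": 0.6})   # illustrative factors
print(day, night)
```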
  • the driver behavior simulator 214 may be updated with the driver behavior classes and/or the driver behavior density function detected and/or calculated for the geographical area to generate realistic driver behavior data adapted to the geographical area for training the autonomous driving system 220 and/or the autonomous driver 222.
  • the driver behavior simulator 214 may thus be used by the simulator 210 to simulate movement of the vehicles emulated in the simulated virtual realistic model replicating the geographical area to further imitate real world behavior.
  • the simulator 210 may generate synthetic 3D imaging data, for example, one or more 2D and/or 3D images of the virtual realistic model replicating the geographical area.
  • the simulator 210 may generate the synthetic 3D imaging data using the functions, utilities, services and/or abilities of the graphic environment, for example, OpenGL, DirectX and/or the like used to generate the simulated virtual realistic model.
  • the simulator 210 may create the synthetic 3D imaging data to depict the virtual realistic model as viewed by one or more emulated imaging sensors, for example, a camera, a video camera, an infrared camera, a night vision sensor and/or the like mounted on the emulated vehicle moving in the simulated virtual realistic model.
  • the emulated imaging sensor(s) may be mounted on the emulated vehicle according to one or more mounting attributes, for example, a positioning on the emulated vehicle, a Field Of View (FOV), a resolution, an overlap region with one or more adjacent imaging sensors and/or the like.
  • the simulator 210 may inject, provide and/or transmit the synthetic 3D imaging data to the autonomous driving system 220 which may thus be trained to move the emulated vehicle in the simulated virtual realistic model which may appear to the autonomous driving system 220 as the real world geographical area.
  • the imaging data may be injected to the training autonomous driving system as a feed to one or more of the hardware and/or software interfaces used by the autonomous driving system 220 and/or the autonomous driver 222 as described herein above which may be natively used to receive the imaging data feed from the imaging sensor(s) mounted on the controlled vehicle.
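A hedged sketch of how the emulated imaging sensor's mounting attributes might be turned into camera intrinsics and extrinsics for rendering the synthetic 3D imaging feed; the mounting values and the pinhole model are assumptions for illustration.

```python
# Minimal sketch (assumption): turning imaging-sensor mounting attributes (position on the
# vehicle, yaw/pitch, FOV) into a camera pose and projection used to render the synthetic feed.
import numpy as np

def rotation(yaw, pitch):
    cy, sy, cp, sp = np.cos(yaw), np.sin(yaw), np.cos(pitch), np.sin(pitch)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    return Rz @ Ry

def camera_matrices(mount_xyz, yaw, pitch, fov_deg, width, height):
    R = rotation(yaw, pitch)
    t = -R @ np.asarray(mount_xyz, dtype=float)         # world -> camera translation
    f = 0.5 * width / np.tan(np.radians(fov_deg) / 2)   # focal length from horizontal FOV
    K = np.array([[f, 0, width / 2], [0, f, height / 2], [0, 0, 1]])
    return K, np.hstack([R, t.reshape(3, 1)])           # intrinsics and extrinsics

K, Rt = camera_matrices(mount_xyz=(1.8, 0.0, 1.4), yaw=0.0, pitch=-0.05,
                        fov_deg=90.0, width=1280, height=720)
```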
  • the simulator 210 may analyze the synthetic 3D imaging data to evaluate a perception performance of the imaging sensor(s).
  • the simulator 210 may evaluate the perception performance of the imaging sensor(s) while applying alternate values to one or more mounting attributes of the imaging sensor(s).
  • the simulator 210 may determine optimal settings for the mounting attribute(s) and may recommend the autonomous driving system 220 and/or the autonomous driver 222 to apply the optimal settings.
  • the designer(s) of the autonomous driving system 220 may take one or more actions, for example, adjust one or more of the imaging sensor(s) mounting attributes, use different imaging sensor(s), add/remove imaging sensor(s) and/or the like.
  • the simulator 210 may use a ranging model to generate sensory data, in particular sensory ranging data simulating the sensory ranging data generated by one or more range sensors mounted on the emulated vehicle.
  • the emulated range sensor(s), for example, a LIDAR sensor, a radar, an ultra-sonic sensor, a camera, an infrared camera and/or the like, emulate real range sensor(s) mounted on the emulated vehicle moving in the simulated virtual realistic model.
  • the emulated range sensor(s) may be mounted on the emulated vehicle according to one or more mounting attributes, for example, a positioning on the emulated vehicle, an FOV, a range, an overlap region with one or more adjacent range sensors and/or the like.
  • the simulator 210 may employ the ranging model to enhance the simulated virtual realistic model with simulated realistic ranging data that may be injected to the autonomous driving system 220 and/or the autonomous driver 222.
  • the simulated sensory ranging data may be injected to the training autonomous driving system 220 as a feed to one or more native inputs of the training autonomous driving system typically connected to one or more of the range sensors. Additionally and/or alternatively, the simulated sensory ranging data may be injected to the training autonomous driver 222 as a feed to one or more software interfaces typically used by the autonomous driver 222 to collect the sensory ranging data.
  • the ranging model created for the geographical area may be highly accurate. However, such accuracy may fail to represent real world sensory ranging data produced by real range sensor(s).
  • the ranging model may therefore apply one or more noise pattern(s) to the simulated sensory ranging data to realistically simulate real world conditions.
  • FIG. 4 is a flowchart of an exemplary process of creating a ranging sensory model of a geographical area, according to some embodiments of the present invention.
  • An exemplary process 400 may be executed by a ranging model creator such as the ranging model creator 212 in a system such as the system 200 to create a ranging model for the simulated realistic model of a geographical area created by a simulator such as the simulator 210. While the process 400 may be applied in a plurality of real world scenes (geographical areas) for training a machine learning algorithm to accurately create a ranging model emulating real world geographical areas, for brevity, the process 400 is described for a single real world scene (geographical area).
  • the process 400 starts with the ranging model creator 212 obtaining real ranging data for objects located in the real world scene (geographical area).
  • the ranging model creator 212 may also analyze the geographic map data obtained for the real world scene to calculate the real ranging data.
  • the ranging model creator 212 may also analyze the visual imagery data obtained for the real world scene using one or more image processing techniques and/or algorithms to calculate the real ranging data. Additionally, the ranging model creator 212 may collect the real ranging data from real world measurements made in the real world scene.
  • the ranging model creator 212 obtains sensory ranging data from each of a plurality of range sensors depicting the real world scene.
  • the range sensors which may include, for example, a LIDAR sensor, a radar, an ultra-sonic sensor, a camera, an infrared camera and/or the like may typically be mounted on a plurality of vehicles travelling in the real world scene, for example, a ground vehicle, an aerial vehicle and/or a naval vehicle.
  • Each of the ranging sensors is associated with positioning data indicating the positioning of each associated range sensor.
  • the ranging model creator 212 may receive the positioning data from one or more positioning sensors typically installed, mounted and/or attached to the vehicle carrying the associated ranging sensor(s).
  • the positioning sensors may include, for example, a GPS sensor, a gyroscope, an accelerometer, an IMU sensor, an elevation sensor and/or the like.
  • the positioning of the respective range sensor(s) may be based on the GPS positioning data and/or on dead reckoning positioning calculated using the positioning data obtained from the gyroscope, the accelerometer, the IMU sensor and/or the like.
  • the positioning of the respective range sensor(s) may be based on a combination of the GPS positioning and the dead reckoning positioning which may provide improved positioning of the respective range sensor(s).
  • the ranging model creator 212 may accurately calculate the absolute location and/or position of each of the range sensors in the real world scene. This allows the ranging model creator 212 to calculate absolute ranging data for each of the range sensors by adjusting the sensory ranging data produced by the respective range sensor according to the absolute location and/or position of the respective range sensor.
  • the positioning data includes motion data comprising one or more motion parameters of the associated range sensor(s).
  • the motion parameter(s) may include, for example, a speed parameter, an acceleration parameter, a direction parameter, an orientation parameter, an elevation parameter and/or the like.
  • the ranging model creator 212 may improve accuracy of the absolute location and/or position of the associated range sensor(s) in the real world scene.
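A minimal sketch, assuming a simple complementary blend of GPS and dead-reckoned positions, of how the sensor pose could be used to convert a raw range/bearing measurement into an absolute map point; the blend weight and coordinates are invented.

```python
# Minimal sketch (assumption): blending a GPS fix with dead reckoning for the range sensor's
# position, then converting a raw range/bearing measurement into an absolute map point.
import math

def fuse_position(gps_xy, dead_reckoned_xy, gps_weight=0.7):
    """Simple complementary blend of GPS and dead-reckoned positions (illustrative)."""
    return tuple(gps_weight * g + (1 - gps_weight) * d
                 for g, d in zip(gps_xy, dead_reckoned_xy))

def absolute_point(sensor_xy, sensor_heading_rad, measured_range_m, measured_bearing_rad):
    angle = sensor_heading_rad + measured_bearing_rad
    return (sensor_xy[0] + measured_range_m * math.cos(angle),
            sensor_xy[1] + measured_range_m * math.sin(angle))

sensor_xy = fuse_position(gps_xy=(100.2, 50.1), dead_reckoned_xy=(100.6, 49.8))
print(absolute_point(sensor_xy, sensor_heading_rad=0.3, measured_range_m=24.5,
                     measured_bearing_rad=-0.1))
```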
  • the ranging model creator 212 analyzes the sensory ranging data obtained from the range sensors (after being adjusted according to the positioning data) with respect to the real ranging data.
  • the ranging model creator 212 may apply big-data analysis and/or analytics using one or more machine learning algorithms, for example, a neural network (e.g. DNN, GMM, etc.), an SVM and/or the like to analyze large amounts of sensory ranging data for a plurality of real world scenes.
  • the ranging model creator 212 evaluates the sensory ranging data with relation to one or more mounting parameters of the range sensor(s), for example, a position, an orientation, an FOV and/or the like.
  • the mounting parameter(s) of a respective range sensor may have at least some effect on the sensory ranging data produced by the respective range sensor. Therefore, assuming the mounting parameter(s) are available to the ranging model creator 212, the ranging model creator 212 may identify optimal, preferred and/or recommended settings for one or more of the mounting parameters.
  • the ranging model creator 212 may analyze the sensory ranging data for a plurality of range sensors for a plurality of real world scenes to identify one or more noise patterns exhibited by one or more of the range sensors.
  • the noise pattern(s) may describe one or more noise characteristics, for example, noise, distortion, latency, calibration offset and/or the like which may be typical, inherent and/or characteristic of one or more of the range sensors.
  • the noise may result from other objects which affect the object(s) to which the sensory ranging data refers, for example, by partially obscuring the reference object(s) and/or the like.
  • the noise may further describe one or more object attributes, for example, an external surface texture, an external surface composition, an external surface material and/or the like of the reference object(s) and/or of the other objects affecting the reference object(s).
  • some surface textures, surface compositions and/or surface materials may reflect differently rays projected by one or more of the range sensors and may therefore affect the accuracy of the acquired sensory ranging data.
  • the ranging model creator 212 may therefore set the noise pattern(s) to map the sensory ranging data accuracy and/or performance with the attributes of the reference object(s) and/or of the other objects affecting the reference object(s) detected in the real world scene.
  • the ranging model creator 212 may extract the object attribute(s) of the reference object(s) and/or of the other objects from the visual imagery data obtained for the real world scene.
  • the ranging model creator 212 may detect a tree in the visual imagery data and may associate the tree with predefined sensory ranging data accuracy and/or range sensor performance which is predefined and/or learned over time by the ranging model creator 212.
  • the ranging model creator 212 may further obtain the object attribute(s) of the reference object(s) and/or of the other objects from one or more records, for example, a metadata record and/or the like associated with one or more of the objects located in the real world scene. Since the object(s) attributes may vary between different geographical locations, areas and/or real world scenes, retrieving object(s) attribute(s) from the metadata record may allow associating the object(s) with typical attributes as found in the respective real world scene. For example, a detailed map of the real world scene may include designations of structures located in the real world scene and may further include a metadata record associated with one or more of the objects which describes the object attribute(s).
  • the distortion may result from one or more inherent limitations of the range sensor(s) which may present inaccuracies in the measured range to the reference object(s).
  • the latency may refer to the latency from the time the range sensor(s) captured the sensory ranging data until the time the sensory ranging data is recorded and/or logged by a logging system. This latency may result from one or more causes, for example, processing time of the range sensor to process the ranging sensory data, communication time for the range sensor to communicate the sensory ranging data to the logging system and/or the like.
  • the calibration offset may result from one or more inherent limitations of the range sensor(s) wherein the range sensor(s) may not be ideally calibrated even following a calibration sequence.
  • the calibration offset may further be inflicted by inaccurate mounting calibration of the range sensor(s) with respect to the vehicle such that the position, orientation, FOV and/or the like of the range sensor(s) may deviate from the intended position, orientation, FOV.
  • Each noise pattern may therefore present values for one or more of the noise characteristics, for example, a noise value, a distortion value, a latency value, a calibration offset value and/or the like.
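To illustrate how such a noise pattern might be represented and applied, the sketch below defines a hypothetical noise-pattern record and uses it to degrade ideal simulated ranges; the parameter values are invented and do not describe any specific sensor.

```python
# Minimal sketch (assumption): a noise-pattern record and its application to ideal ranges
# produced by the ranging model, so the simulated feed behaves like a real range sensor.
from dataclasses import dataclass
import random

@dataclass
class NoisePattern:
    noise_std_m: float        # random measurement noise
    distortion_gain: float    # multiplicative range distortion
    calibration_offset_m: float
    latency_s: float          # reporting delay of the sensor/logging chain

def apply_noise(ideal_ranges_m, pattern, timestamp_s):
    noisy = [r * pattern.distortion_gain + pattern.calibration_offset_m +
             random.gauss(0.0, pattern.noise_std_m) for r in ideal_ranges_m]
    return noisy, timestamp_s + pattern.latency_s   # measurements reported late by the latency

lidar_like = NoisePattern(noise_std_m=0.03, distortion_gain=1.002,
                          calibration_offset_m=-0.05, latency_s=0.08)
print(apply_noise([12.0, 35.5, 80.2], lidar_like, timestamp_s=10.0))
```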
  • the ranging model creator 212 may update a ranging model with the noise pattern(s) to establish a realistic simulation arena for the virtual realistic model replicating the real world scene (geographical area).
  • the ranging model may be used to generate realistic simulation ranging data that may be injected to the autonomous driver 222 during training.
  • the ranging model creator 212 may analyze the sensory ranging data with respect to the real ranging data while evaluating one or more environmental characteristics detected during acquisition of the sensory ranging data and/or part thereof and which may affect the sensory ranging data acquisition.
  • the environmental characteristics may include, for example, a weather condition attribute, a timing attribute and/or the like.
  • the weather condition attribute may include, for example, a humidity value, a temperature value, a precipitation value (e.g. rain, snow, etc.), a fog condition value and/or the like.
  • the weather condition attribute(s) may affect the sensory ranging data acquisition, for example, foggy conditions may affect performance of one or more of the range sensors for instance, the LIDAR sensor, the radar, the camera, the infrared camera and/or the like.
  • the temperature of the environment may affect the performance of one or more of the range sensors.
  • the timing conditions may include, for example, time of day, date and/or the like which may also affect performance of one or more of the range sensors.
  • the performance of one or more of the range sensors may be affected by light conditions, for example, day light, twilight, evening and/or night.
  • the simulator 210 may further generate the simulated sensory ranging data for one or more of the dynamic objects inserted to the virtual realistic model replicating the geographical area. For example, assuming a tree is inserted into the virtual realistic model, the simulator 210 may use the ranging model to create simulated sensory ranging data adapted to the tree according to one or more of the noise pattern(s) identified by the ranging model creator 212 and applied in the ranging model. The simulator 210 may further adapt the simulated sensory ranging data for one or more of the static objects according to noise induced, for example, by the tree inserted into the virtual realistic model.
  • the simulator 210 may analyze the generated simulation sensory ranging data produced by the emulated range sensor(s) mounted on the emulated vehicle to evaluate performance of the emulated range sensor(s) with respect to one or more of their mounting attributes.
  • the simulator 210 may evaluate the accuracy of the simulated sensory ranging data that is produced by the emulated range sensor(s) and which may be affected by the mounting attribute(s).
  • the simulator 210 may generate and evaluate alternate simulation sensory ranging data as produced by the range sensor having alternate mounting attribute(s), for example, a different orientation, a different position, a different FOV and/or the like.
  • the simulator 210 may determine optimal settings for the mounting attribute(s) and may recommend the autonomous driving system 220 and/or the autonomous driver 222 to apply the optimal settings.
  • the designer(s) of the autonomous driving system 220 may take one or more actions, for example, adjust one or more of the ranging sensor(s) mounting attribute(s), use different range sensor(s), add/remove range sensor(s) and/or the like.
  • the simulator 210 may generate additional simulation data for the virtual realistic model replicating the geographical area, for example, motion data, transport data and/or the like.
  • the simulated motion data may include one or more motion parameters of the emulated vehicle.
  • the motion parameter(s) may include, for example, a speed parameter, an acceleration parameter, a direction parameter, an orientation parameter, an elevation parameter and/or the like.
  • the motion parameters may include a steering wheel angle parameter which may be indicative of the direction of the emulated ground vehicle.
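A minimal sketch of deriving the simulated motion parameters, including a steering-angle proxy via a simple bicycle-model relation, from the emulated vehicle's trajectory; the trajectory and wheelbase are invented for illustration.

```python
# Minimal sketch (assumption): deriving simulated motion parameters (speed, acceleration,
# steering angle) from the emulated vehicle trajectory via a simple bicycle-model relation.
import numpy as np

t = np.linspace(0, 10, 101)
x, y = 15.0 * t, 20.0 * np.sin(0.1 * t)            # stand-in trajectory of the emulated vehicle

vx, vy = np.gradient(x, t), np.gradient(y, t)
speed = np.hypot(vx, vy)
accel = np.gradient(speed, t)
heading = np.unwrap(np.arctan2(vy, vx))
yaw_rate = np.gradient(heading, t)

WHEELBASE_M = 2.8                                   # illustrative vehicle parameter
steering_angle = np.arctan(WHEELBASE_M * yaw_rate / np.maximum(speed, 0.1))  # bicycle model

print(speed[50], accel[50], np.degrees(steering_angle[50]))
```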
  • the simulated motion data may be injected to the training autonomous driving system 220 through a native input typically used by the autonomous driving system 220 to connect to one or more of the motion sensors and/or positioning sensors.
  • the simulated sensory motion data may be injected to the training autonomous driver 222 as a feed to one or more software interfaces typically used by the autonomous driver 222 to collect the sensory motion data.
  • the simulated motion data may complement the other simulated data of the virtual realistic model to improve replication of a real ride, drive, flight in the geographical area as experienced by the training autonomous driving system 220 and/or the autonomous driver 222.
  • the simulated transport data may include, for example, information simulating communication of the emulated vehicle with one or more other entities, for example, a vehicle, a control center, a road infrastructure object and/or the like.
  • the communication may simulate for example, V2X communication between the emulated vehicle and the other entities.
  • the simulated transport data may be injected to the training autonomous driving system 220 as a feed to one or more native inputs, ports and/or links typically used by the autonomous driving system 220 to connect to V2X communication channel(s). Additionally and/or alternatively, the simulated transport data may be injected to the training autonomous driver 222 as a feed to one or more software interfaces typically used by the autonomous driver 222 to collect the transport data.
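As a hypothetical illustration of the simulated transport data, the sketch below builds a basic-safety-message-like V2X payload from the emulated vehicle state; the field names are invented and do not follow any standard message definition.

```python
# Minimal sketch (assumption): a basic-safety-message-like V2X payload built from the emulated
# vehicle state; field names are illustrative, not a standard message definition.
import json, time

def make_v2x_message(vehicle_id, lat, lon, speed_mps, heading_deg):
    return json.dumps({
        "msg_type": "emulated_bsm",
        "vehicle_id": vehicle_id,
        "timestamp": time.time(),
        "position": {"lat": lat, "lon": lon},
        "speed_mps": speed_mps,
        "heading_deg": heading_deg,
    })

# The simulator could feed such messages into the transport-data interface of the
# training autonomous driving system instead of a real V2X channel.
print(make_v2x_message("ego-001", 32.0853, 34.7818, 13.9, 87.0))
```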
  • the simulated transport data may further complement the simulation of the virtual realistic model to improve replication of a real ride, drive, flight in the geographical area as experienced by the training autonomous driving system.
  • The term "consisting essentially of" means that the composition or method may include additional ingredients and/or steps, but only if the additional ingredients and/or steps do not materially alter the basic and novel characteristics of the claimed composition or method.
  • For example, the term "a compound" or "at least one compound" may include a plurality of compounds, including mixtures thereof.
  • The description in a range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Evolutionary Computation (AREA)
  • Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Game Theory and Decision Science (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)
  • Traffic Control Systems (AREA)

Abstract

A computer implemented method of creating a simulated realistic virtual model of a geographical area for training an autonomous driving system, comprising obtaining geographic map data of a geographical area, obtaining visual imagery data of the geographical area, classifying static objects identified in the visual imagery data to corresponding labels to designate labeled objects, superimposing the labeled objects over the geographic map data, generating a virtual 3D realistic model emulating the geographical area by synthesizing a corresponding visual texture for each of the labeled objects and injecting synthetic 3D imaging feed of the realistic model to imaging sensor(s) input(s) of the autonomous driving system controlling movement of an emulated vehicle in the realistic model where the synthetic 3D imaging feed is generated to depict the realistic model from a point of view of emulated imaging sensor(s) mounted on the emulated vehicle.

Description

REALISTIC 3D VIRTUAL WORLD CREATION AND SIMULATION FOR TRAINING AUTOMATED DRIVING SYSTEMS
BACKGROUND
The present invention, in some embodiments thereof, relates to creating a simulated model of a geographical area, and, more specifically, but not exclusively, to creating a simulated model of a geographical area, optionally including transportation traffic to generate simulation sensory data for training an autonomous driving system.
The arena of autonomous vehicles, whether ground vehicles, aerial vehicles and/or naval vehicles, has witnessed an enormous evolution during recent times. Major resources are invested in autonomous vehicle technologies and the field is therefore quickly moving forward towards the goal of deploying autonomous vehicles for a plurality of applications, for example, transportation, industrial, military uses and/or the like.
The autonomous vehicles involve a plurality of disciplines targeting a plurality of challenges arising in the development of the autonomous vehicles. However, in addition to the design and development of the autonomous vehicles themselves, there is a need for multiple and diversified support eco-systems for training, evaluating and/or validating the autonomous driving systems controlling the autonomous vehicles.
SUMMARY
According to a first aspect of the present invention there is provided a computer implemented method of creating a simulated realistic virtual model of a geographical area for training an autonomous driving system, comprising:
Obtaining geographic map data of a geographical area.
- Obtaining visual imagery data of the geographical area.
Classifying a plurality of static objects identified in the visual imagery data to corresponding labels to designate a plurality of labeled objects.
Superimposing the plurality of labeled objects over the geographic map data.
Generating a virtual three dimensional (3D) realistic model emulating the geographical area by synthesizing a corresponding visual texture for each of the plurality of labeled objects.
Injecting synthetic 3D imaging feed of the realistic model to an input of one or more imaging sensors of the autonomous driving system controlling movement of an emulated vehicle in the realistic model. The synthetic 3D imaging feed is generated to depict the realistic model from a point of view of one or more emulated imaging sensors mounted on the emulated vehicle.
Training the autonomous driving systems using the simulated virtual realistic model may allow for significant scalability since a plurality of various ride scenarios may be easily simulated for a plurality of geographical locations. Training, evaluation and/or validation of the autonomous driving systems may be done automatically by an automated system executing the simulated virtual realistic model. Moreover, the training, evaluation and/or validation may be done for a plurality of geographical areas, various conditions and/or various scenarios without moving real vehicles in the real world. In addition, the automated training, evaluation and/or validation may be conducted concurrently for the plurality of geographical areas, the various conditions and/or the various scenarios. This may significantly reduce the resources, for example, time, hardware resources, human resources and/or the like for training, evaluating and/or validating the autonomous driving systems. Moreover, training the autonomous driving systems using the simulated virtual realistic model may significantly reduce risk since the process is conducted in a virtual environment. Damages, accidents and even life loss which may occur using the currently existing methods for training the autonomous driving systems may be completely prevented and avoided.
According to a second aspect of the present invention there is provided a system for creating a simulated virtual realistic model of a geographical area for training an autonomous driving system, comprising one or more processors adapted to execute code, the code comprising:
Code instructions to obtain geographic map data of a geographical area.
Code instructions to obtain visual imagery data of the geographical area.
Code instructions to classify a plurality of static objects identified in the visual imagery data to corresponding labels to designate a plurality of labeled objects.
Code instructions to superimpose the plurality of labeled objects over the geographic map data.
Code instructions to generate a virtual three dimensional (3D) realistic model emulating the geographical area by synthesizing a corresponding visual texture for each of the plurality of labeled objects.
Code instructions to inject synthetic 3D imaging feed of the realistic model to an input of one or more imaging sensors of the autonomous driving system controlling movement of an emulated vehicle in the realistic model. The synthetic 3D imaging feed is generated to depict the realistic model from a point of view of one or more emulated imaging sensors mounted on the emulated vehicle.
In a further implementation form of the first and/or second aspects, the synthesizing is done using one or more of the following implementations:
Applying one or more conditional generative adversarial neural network (cGAN) to transform a label of one or more of the plurality of labeled objects to a respective visual texture.
Extracting the visual texture of one or more of the labeled objects from the visual imagery data.
Retrieving the visual texture of one or more of the labeled objects from a repository comprising a plurality of texture images of a plurality of static objects.
Using one or more of the synthesis implementations may allow selecting the most appropriate technique for each of the labeled objects detected in the geographical area to create a genuine and highly realistic appearance of the object(s).
In a further implementation form of the first and/or second aspects, the synthetic 3D imaging feed is injected to a physical input of the autonomous driving system adapted to receive the input of the one or more imaging sensors. Using the native physical interfaces of the autonomous driving system may significantly reduce and/or completely avoid the need to adjust the autonomous driving system to support the simulated virtual realistic model.
In a further implementation form of the first and/or second aspects, the autonomous driving system is implemented as a computer software program. The synthetic 3D imaging data is injected using one or more virtual drivers emulating a feed of the one or more imaging sensors. Using the native software interfaces of the autonomous driving system may significantly reduce and/or completely avoid the need to adjust the autonomous driving system to support the simulated virtual realistic model.
In an optional implementation form of the first and/or second aspects, one or more mounting attributes of the one or more imaging sensors are adjusted according to analysis of a visibility performance of the one or more emulated imaging sensors emulating the one or more imaging sensors. The mounting attributes may include, for example, a positioning on the emulated vehicle, a Field Of View (FOV), a resolution and an overlap region with one or more adjacent imaging sensors. The emulated imaging sensor(s) may be mounted on the emulated vehicle similarly to the mounting of the real imaging sensor(s) on the real vehicle. Therefore exploring, evaluating and/or assessing the performance of the emulated imaging sensors, which may be easily accomplished in the simulated virtual model, may directly apply to the real imaging sensor(s). Mounting recommendation(s) including imaging sensor(s) characteristics, model(s) and/or capabilities may therefore be offered to improve performance of the real imaging sensor(s).
In an optional implementation form of the first and/or second aspects, a sensory ranging data feed simulation which simulates sensory ranging data feed generated by one or more emulated range sensors mounted on the emulated vehicle. The sensory ranging data feed is simulated using a simulated ranging model applying one or more noise patterns associated with one or more range sensors emulated by the one or more emulated range sensors. This may further enhance the virtual realistic model to encompass the sensory ranging data feed which may be an essential feed for the autonomous driving system to identify the vehicle's surroundings and control the autonomous vehicle accordingly.
In an optional implementation form of the first and/or second aspects, the one or more noise patterns are adjusted according to one or more object attributes of one or more of a plurality of objects emulated in the realistic model. The noise patterns may be applied to the ranging model created for the virtual realistic model in order to increase the realistic characteristics of the model. Generating the simulated sensory ranging data may be based on highly accurate ranging information extracted, for example, from the geographic map data, the visual imagery data and/or other data sources. However, real world sensory ranging data may be far less accurate. In order to feed the autonomous driving system with a realistic sensory ranging data feed, typical noise patterns learned over time for the real world may be applied to the simulated sensory ranging data.
In an optional implementation form of the first and/or second aspects, one or more mounting attributes of the one or more range sensors are adjusted according to analysis of a range accuracy performance of the one or more range sensors. The mounting attributes may include, for example, a positioning on the emulated vehicle, an FOV, a range and an overlap region with one or more adjacent range sensors. The emulated range sensor(s) may be mounted on the emulated vehicle similarly to the mounting of the real range sensor(s) on the real vehicle. Therefore exploring, evaluating and/or assessing the performance of the emulated range sensors which may be easily accomplished in the simulated virtual model may directly apply for the real range sensor(s). Mounting recommendation(s) including range sensor(s) characteristics, model(s) and/or capabilities may therefore be offered to improve performance of the real range sensor(s).
In an optional implementation form of the first and/or second aspects, one or more dynamic objects are inserted into the realistic model. The dynamic object(s) may include, for example, a ground vehicle, an aerial vehicle, a naval vehicle, a pedestrian, an animal, vegetation and a dynamically changing road infrastructure object. This may allow creating a plurality of driving scenarios for training, evaluating and/or validating the autonomous driving system. The driving scenarios may emulate real world traffic, road infrastructure objects, pedestrians, animals, vegetation, and/or the like.
In an optional implementation form of the first and/or second aspects, one or more of a plurality of driver behavior classes are applied for controlling a movement of one or more ground vehicles such as the ground vehicle. The driver behavior classes are adapted to the geographical area according to an analysis of typical driver behavior patterns identified in the geographical area. The one or more driver behavior classes are selected according to a density function calculated for the geographical area according to the recurrence of driver prototypes corresponding to the one or more driver behavior classes in the geographical area. Adapting the movement of the simulated vehicles to driving classes and patterns typical to the geographical area may significantly enhance the realistic and/or authentic simulation of the geographical area.
In an optional implementation form of the first and/or second aspects, simulated motion data is injected to the autonomous driving system. The simulated motion data is emulated by one or more emulated motion sensors associated with the emulated vehicle. The simulated motion data comprises one or more motion parameters, for example, a speed parameter, an acceleration parameter, a direction parameter, an orientation parameter and an elevation parameter. This may further enhance the virtual realistic model to emulate the real world geographical area by including the sensory motion data feed which may serve as a major feed for the autonomous driving system.
In an optional implementation form of the first and/or second aspects, simulated transport data is injected to the autonomous driving system. The simulated transport data comprises Vehicle to Anything (V2X) communication between the emulated vehicle and one or more other entities. This may further enhance the virtual realistic model to emulate the real world geographical area by including the transport data feed which may serve as a major feed for the autonomous driving system.
In an optional implementation form of the first and/or second aspects, the synthetic imaging data is adjusted according to one or more environmental characteristics, for example, a lighting condition, a weather condition attribute and timing attribute. This may further increase ability for adjusting the virtual realistic model to simulate diverse environmental conditions to train, evaluate and/or validate the operation of the autonomous driving system in a plurality of scenarios and conditions.
In a further implementation form of the first and/or second aspects, the geographic map data includes, for example, a two dimensional (2D) map, a 3D map, an orthophoto map, an elevation map and a detailed map comprising object description for objects present in the geographical area. Using a plurality of diverse map data sources may support creating an accurate and/or highly detailed virtual model of the emulated geographical area.
In a further implementation form of the first and/or second aspects, the visual imagery data comprises one or more images which are members of a group consisting of: a ground level image, an aerial image and a satellite image, wherein the one or more images are 2D images or 3D images. Using multiple diverse visual imagery data items may support creating an accurate and/or highly detailed virtual model of the emulated geographical area.
In a further implementation form of the first and/or second aspects, each of the plurality of static objects is a member of a group consisting of: a road, a road infrastructure object, an intersection, a building, a monument, a structure, a natural object and a terrain surface. This may allow focusing on the traffic, transportation and/or road infrastructure elements which may be of particular interest for training, evaluating and/or validating the autonomous driving system.
In a further implementation form of the first and/or second aspects, the one or more imaging sensors are members of a group consisting of: a camera, a video camera, an infrared camera and a night vision sensor. The virtual realistic model may support emulation of a plurality of different imaging sensors to allow training, evaluating and/or validating of a plurality of autonomous driving systems which may be adapted to receive diverse and/or different sensory imaging data feeds.
According to a third aspect of the present invention there is provided a computer implemented method of creating a simulated ranging model of a real world scene used for training an autonomous driving system, comprising:
Obtaining real ranging data to a plurality of objects present in a real world scene.
Obtaining sensory ranging data from a plurality of range sensors depicting the real world scene. Each of the plurality of range sensors is associated with positioning data indicating a positioning of each range sensor; the positioning data is obtained from one or more positioning sensors associated with each range sensor.
Analyzing the sensory ranging data, adjusted according to the positioning data, with respect to the real ranging data to identify one or more noise patterns causing a measurement accuracy degradation exhibited by one or more of the plurality of range sensors.
Updating a ranging model with the one or more noise patterns to generate realistic simulation ranging data for training an autonomous driving system.
Using large amounts of real world sensory ranging data collected at a plurality of geographical areas, scenes and/or locations may allow generating significantly accurate noise patterns associated with the range sensors and/or the objects present in the scene. The noise patterns may then be used to enhance a virtual realistic model replicating a real world geographical area with a realistic sensory ranging feed injected to the autonomous driving system during a training, evaluation and/or validation session.
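By way of a non-limiting illustration only, the comparison of collected sensory ranging data against the real (ground truth) ranging data described above may be sketched in Python roughly as follows; the data structure and function names (RangeSample, estimate_noise_pattern) are hypothetical, and the noise pattern is reduced here to a per-sensor calibration offset and random spread.

```python
# Hypothetical sketch: estimating per-sensor noise characteristics by comparing
# collected range readings with ground-truth ranges for the same objects.
# Names (RangeSample, estimate_noise_pattern) are illustrative only.
from dataclasses import dataclass
import numpy as np

@dataclass
class RangeSample:
    sensor_id: str
    measured_range_m: float   # reading reported by the real range sensor
    true_range_m: float       # ground truth (map based, measured or image derived)

def estimate_noise_pattern(samples):
    """Return a simple noise pattern (bias and spread) per sensor."""
    per_sensor = {}
    for s in samples:
        per_sensor.setdefault(s.sensor_id, []).append(s.measured_range_m - s.true_range_m)
    pattern = {}
    for sensor_id, errors in per_sensor.items():
        errors = np.asarray(errors)
        pattern[sensor_id] = {
            "calibration_offset_m": float(errors.mean()),  # systematic bias
            "noise_std_m": float(errors.std()),            # random spread
        }
    return pattern

samples = [
    RangeSample("lidar_front", 20.3, 20.0),
    RangeSample("lidar_front", 19.8, 20.0),
    RangeSample("radar_front", 20.9, 20.0),
]
print(estimate_noise_pattern(samples))
```

In a full analysis, the same comparison would be repeated over a large data set and per object type, environmental condition and sensor model, rather than over a handful of samples as shown here.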
In a further implementation form of the third aspect, the real ranging data is provided from one or more sources, for example, a real world measurement, a map based calculation and a calculation based on image processing of one or more images of the real world scene. This may allow obtaining accurate ranging information for the plurality of objects present in the scene.
In a further implementation form of the third aspect, the plurality of range sensors include, for example, a LIDAR sensor, a radar, a camera, an infrared camera and an ultra-sonic sensor. The virtual realistic model may support emulation of a plurality of different range sensors to allow training, evaluating and/or validating of a plurality of autonomous driving systems which may be adapted to receive diverse sensory ranging data.
In a further implementation form of the third aspect, the one or more positioning sensors include, for example, a Global Positioning System (GPS) sensor, a gyroscope, an accelerometer, an Inertial Measurement Unit (IMU) sensor and an elevation sensor. Positioning information is essential for establishing a reference for the real sensory ranging data collected from the range sensors. Supporting multiple diverse types of motion sensors may allow improved (more accurate) processing and analysis of the collected sensory ranging data.
In an optional implementation form of the third aspect, the positioning data includes motion data comprising one or more motion parameters of the associated range sensor, for example, a speed parameter, an acceleration parameter, a direction parameter, an orientation parameter and an elevation parameter. Availability of the motion information may also significantly improve accuracy of the collected sensory ranging data. Motion of the vehicles may affect the readings of the range sensor(s) mounted on the associated vehicles and therefore processing and analyzing the sensory ranging data with respect to the motion data may improve accuracy of the collected ranging data which may eventually result in creating more accurate noise pattern(s).
In a further implementation form of the third aspect, the analysis is a statistics based prediction analysis conducted using one or more machine learning algorithms, for example, a neural network and a Support Vector Machine (SVM). Applying the machine learning algorithm(s) over large amounts of sensory data may significantly improve the characterization of the noise patterns associated with the range sensor(s) and/or the objects and/or type of objects detected in the real world scene.

In a further implementation form of the third aspect, the one or more noise patterns comprise one or more noise characteristics, for example, a noise value, a distortion value, a latency value and a calibration offset value. The noise pattern(s) may describe a plurality of noise characteristics originating from the range sensor(s) themselves and/or from characteristics of the objects in the scene. Identifying the noise characteristics, in particular using the machine learning algorithm(s) to detect the noise characteristics, may significantly improve the accuracy of the noise patterns, which may further improve the virtual realistic model.
In an optional implementation form of the third aspect, the analysis of the sensory data is done according to one or more environmental characteristics detected during acquisition of the ranging sensory data. The environmental characteristics include, for example, a weather condition attribute, and a timing attribute. The environmental conditions may affect the sensory ranging data and therefore analyzing the collected sensory ranging data with respect to the environmental conditions of the scene while acquiring the sensory ranging data may further increase accuracy of the identified noise pattern(s).
In an optional implementation form of the third aspect, the one or more noise patterns are adjusted according to one or more object attributes of one or more of the plurality of objects, the one or more object attributes affect the ranging data produced by each range sensor, the one or more object attributes are members of a group consisting of: an external surface texture, an external surface composition and an external surface material. The characteristics of the objects in the scene and in particular the exterior characteristics of the objects may affect the sensory ranging data and therefore analyzing the collected sensory ranging data with respect to the object(s) characteristic(s) may further increase accuracy of the identified noise pattern(s).
In a further implementation form of the third aspect, the one or more object attributes are extracted from synthetic 3D imaging data generated for the real world scene. Extracting the object(s) characteristic(s) from the synthetic 3D imaging data may be relatively easy as the object(s) in the scene are identified and may be correlated with their characteristic(s) according to predefined rules.
In a further implementation form of the third aspect, the one or more object attributes are retrieved from a metadata record associated with the real world scene. Attribute(s) of similar objects may vary between geographical areas and/or real world scenes. Therefore, retrieving the object(s) attribute(s) from predefined records may allow associating the object(s) with their typical attribute(s) as found in the respective scene to further improve characterizing the noise characteristics and noise patterns for specific geographical areas, locations and/or real world scenes.
According to a fourth aspect of the present invention there is provided a computer implemented method of training a driver behavior simulator according to a geographical area, comprising:
Obtaining sensory data generated by a plurality of sensor sets mounted on a plurality of vehicles driven by a plurality of drivers in a geographical area. Each sensor set comprises one or more motion sensors.
Analyzing the sensory data to identify a plurality of movement patterns indicative of a plurality of driver behavior patterns exhibited by the plurality of drivers.
Classifying at least some of the plurality of drivers to one of a plurality of driver behavior classes according to one of the plurality of driver behavior patterns detected for each driver.
Calculating a driver behavior density function associated with the geographical area based on a recurrence of each of the plurality of driver behavior classes detected in the geographical area.
Updating a driver behavior simulator with the plurality of driver behavior classes and the driver behavior density function to generate realistic driver behavior data adapted to the geographical area for training an autonomous driving system.
Detecting driving behavior patterns and classes by analyzing large amounts of real world sensory data collected at a specific geographical area may allow characterizing the driving behavior at certain geographical areas, regions and/or the like. Using the identified driving behavior patterns and classes to enhance the virtual realistic model may allow adapting the virtual realistic model to specific geographical areas, regions and/or scenes making the virtual realistic model highly realistic.
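As a non-limiting sketch only, the classification of drivers into behavior classes and the derivation of the per-area density function could look roughly as follows; an unsupervised clustering step stands in here for whichever learning algorithm is actually used, and the feature names and values are invented for illustration.

```python
# Hypothetical sketch: grouping drivers into behavior classes from per-driver
# motion features and deriving a density function for one geographical area.
# KMeans stands in for whichever classifier/learning algorithm is used.
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

# One row per driver: [mean speed (km/h), max acceleration (m/s^2), hard-brake rate (1/h)]
driver_features = np.array([
    [52.0, 2.1, 0.5],
    [78.0, 3.9, 4.2],
    [49.0, 1.8, 0.3],
    [81.0, 4.3, 5.0],
    [60.0, 2.5, 1.0],
])

# Assign each driver to one of two behavior classes (e.g. "calm" vs. "aggressive").
classes = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(driver_features)

# Density function: relative recurrence of each driver behavior class in the area.
counts = Counter(classes.tolist())
density = {cls: n / len(classes) for cls, n in counts.items()}
print(density)  # e.g. {0: 0.6, 1: 0.4}
```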
In a further implementation form of the fourth aspect, the one or more motion sensors include, for example, a Global Positioning System (GPS) sensor, a gyroscope, an accelerometer, an Inertial Measurement Unit (IMU) sensor and an elevation sensor. The sensory motion and/or positioning data collected from the motion and/or positioning sensor(s) may express the movement of the vehicles. Therefore analyzing the sensory motion and/or positioning data may allow identifying the driving behavior patterns and classes demonstrated in the monitored geographical area.
In a further implementation form of the fourth aspect, the analysis is a statistics based prediction analysis conducted by an evolving learning algorithm using one or more machine learning algorithms which are members of a group consisting of: a neural network and a Support Vector Machine (SVM). Applying the machine learning algorithm(s) over large amounts of sensory data may significantly improve the characterization of driving behavior patterns and/or classes identified in the specific geographical area.
In a further implementation form of the fourth aspect, each of the plurality of driver behavior patterns comprises one or more motion parameters, for example, a speed parameter, an acceleration parameter, a braking parameter, a direction parameter and an orientation parameter. The driving behavior pattern(s) may describe a plurality of motion parameters. Identifying the motion parameters, in particular using the machine learning algorithm(s) to detect the motion parameters, may significantly improve the accuracy and/or granularity of the detected driving behavior patterns, which may further improve the virtual realistic model.
In an optional implementation form of the fourth aspect, the analysis of the sensory data is done according to one or more environmental characteristics detected during acquisition of the sensory data, for example, a weather condition attribute and a timing attribute. Naturally, the driving behavior demonstrated by the drivers may be affected by the environmental characteristics. Therefore analyzing the collected sensory data with respect to the environmental conditions detected at the geographical area (the scene) may further increase accuracy of the identified driving behavior pattern(s).
In an optional implementation form of the fourth aspect, the analysis of the sensory data includes analyzing additional sensory data received from one or more outward sensors included in one or more of the plurality of sensor sets. The one or more outward sensors depict the geographical area as viewed from one of the plurality of vehicles associated with the one or more sensor sets. Enhancing the collected sensory motion and/or positioning data with ranging data may significantly improve the characterization of the driver prototype(s) detected in the geographical area.

In a further implementation form of the fourth aspect, the one or more outward sensors include, for example, a camera, a night vision camera, a LIDAR sensor, a radar and an ultra-sonic sensor. Using sensory data collected from a plurality of different range sensors may allow for more flexibility in analyzing the sensory ranging data.
In an optional implementation form of the fourth aspect, one or more of the plurality of driver behavior patterns is enhanced based on the analysis of the additional sensory data. The one or more enhanced driver behavior patterns comprise one or more additional driving characteristics, for example, a tailgating characteristic, an in-lane position characteristic and a double-parking tendency characteristic. Enhancing the driver behavior pattern(s) may allow a more accurate characterization of the driver prototype(s) detected in the geographical area and may thus improve the simulation created using the virtual realistic model.
Unless otherwise defined, all technical and/or scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the invention pertains. Although methods and materials similar or equivalent to those described herein can be used in the practice or testing of embodiments of the invention, exemplary methods and/or materials are described below. In case of conflict, the patent specification, including definitions, will control. In addition, the materials, methods, and examples are illustrative only and are not intended to be necessarily limiting.
Implementation of the method and/or system of embodiments of the invention can involve performing or completing selected tasks manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of embodiments of the method and/or system of the invention, several selected tasks could be implemented by hardware, by software or by firmware or by a combination thereof using an operating system.
For example, hardware for performing selected tasks according to embodiments of the invention could be implemented as a chip or a circuit. As software, selected tasks according to embodiments of the invention could be implemented as a plurality of software instructions being executed by a computer using any suitable operating system. In an exemplary embodiment of the invention, one or more tasks according to exemplary embodiments of method and/or system as described herein are performed by a data processor, such as a computing platform for executing a plurality of instructions. Optionally, the data processor includes a volatile memory for storing instructions and/or data and/or a non-volatile storage, for example, a magnetic hard-disk and/or removable media, for storing instructions and/or data. Optionally, a network connection is provided as well. A display and/or a user input device such as a keyboard or mouse are optionally provided as well.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
Some embodiments of the invention are herein described, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of embodiments of the invention. In this regard, the description taken with the drawings makes apparent to those skilled in the art how embodiments of the invention may be practiced.
In the drawings:
FIG. 1 is a flowchart of an exemplary process of creating a simulated virtual model of a geographical area, according to some embodiments of the present invention;
FIG. 2A and FIG. 2B are schematic illustrations of exemplary embodiments of a system for creating a simulated virtual model of a geographical area, according to some embodiments of the present invention;
FIG. 3 is a flowchart of an exemplary process of training a driver behavior simulator for a certain geographical area, according to some embodiments of the present invention; and
FIG. 4 is a flowchart of an exemplary process of creating a ranging sensory model of a geographical area, according to some embodiments of the present invention.
DETAILED DESCRIPTION
The present invention, in some embodiments thereof, relates to creating a simulated model of a geographical area, and, more specifically, but not exclusively, to creating a simulated model of a geographical area, optionally including transportation traffic to generate simulation sensory data for training an autonomous driving system.
According to some embodiments of the present invention, there are provided methods, systems and computer program products for training an autonomous driving system controlling a vehicle, for example, a ground vehicle, an aerial vehicle and/or a naval vehicle in a certain geographical area using a simulated virtual realistic model replicating the certain geographical area. The simulated virtual realistic model is created to emulate a sensory data feed, for example, imaging data, ranging data, motion data, transport data and/or the like which may be injected to the autonomous driving system during a training session. The virtual realistic model is created by obtaining visual imagery data of the geographical area, for example, one or more 2D and/or 3D images, panoramic image and/or the like captured at ground level, from the air and/or from a satellite. The visual imagery data may be obtained from, for example, Google Earth, Google Street View, OpenStreetCam, Bing maps and/or the like.
One or more trained classifiers (classification functions) may be applied to the visual imagery data to identify one or more (target) objects in the images, in particular static objects, for example, a road, a road infrastructure object, an intersection, a sidewalk, a building, a monument, a natural object, a terrain surface and/or the like. The classifier(s) may classify the identified static objects to class labels based on a training sample set adjusted for classifying objects of the same type as the target objects. The identified labeled objects may be superimposed over the geographic map data obtained for the geographical area, for example, a 2D map, a 3D map, an orthophoto map, an elevation map, a detailed map comprising object description for objects present in the geographical area and/or the like. The geographic map data may be obtained from, for example, Google maps, OpenStreetMap and/or the like. The labeled objects are overlaid over the geographic map(s) in the respective location, position, orientation, proportion and/or the like identified by analyzing the geographic map data and/or the visual imagery data to create a labeled model of the geographical area. Using one or more techniques, for example, a Conditional Generative Adversarial Neural Network (cGAN), stitching texture(s) (of the labeled objects) retrieved from the original visual imagery data, overlaying textured images selected from a repository (storage) according to the class label and/or the like, the labeled objects in the labeled model may be synthesized with (visual) image pixel data to create the simulated virtual realistic model replicating the geographical area. Optionally, the virtual realistic model is adjusted according to one or more lighting and/or environmental (e.g. weather, timing etc.) conditions to emulate various real world environmental conditions and/or scenarios, in particular, environmental conditions typical to the certain geographical area. After the virtual realistic model is created, synthetic 3D imaging data may be created and injected to the autonomous driving system. In particular, the synthetic 3D imaging data may be generated to depict the virtual realistic model from a point of view of one or more emulated imaging sensors mounted on an emulated vehicle moving in the virtual realistic model. The emulated vehicle may be created in the virtual realistic model to represent a real world vehicle controlled by the autonomous driving system. Similarly, the emulated imaging sensor(s) emulate one or more imaging sensors, for example, a camera, a video camera, an infrared camera, a night vision sensor and/or the like which are mounted on the real world vehicle controlled by the autonomous driving system. Moreover, the emulated imaging sensor(s) may be created, mounted and/or positioned on the emulated vehicle according to one or more mounting attributes of the imaging sensor(s) mounting on the real world vehicle, for example, positioning (e.g. location, orientation, elevations, etc.), FOV, range, overlap region with adjacent sensor(s) and/or the like. In some embodiments, one or more of the mounting attributes may be adjusted for the emulated imaging sensor(s) to improve perception and/or capture performance of the imaging sensor(s). Based on analysis of the capture performance for alternate mounting options, one or more recommendations may be offered to the autonomous driving system for adjusting the mounting attribute(s) of the imaging sensor(s) mounting on the real world vehicle.
The alternate mounting options may further suggest evaluating the capture performance of the imaging sensor(s) using another imaging sensor(s) model having different imaging attributes, for example, resolution, FOV, magnification and/or the like.
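As a minimal illustration of one metric such an evaluation might use, the angular overlap between two adjacent emulated sensors can be computed from their mounting yaw and horizontal FOV; the function below is a hypothetical sketch that ignores 360 degree wrap-around and is not part of the described method.

```python
# Hypothetical sketch: angular overlap between two adjacent emulated sensors,
# given mounting yaw (deg, relative to vehicle heading) and horizontal FOV (deg).
# Wrap-around at +/-180 degrees is deliberately ignored to keep the sketch short.
def angular_overlap_deg(yaw_a, fov_a, yaw_b, fov_b):
    a_lo, a_hi = yaw_a - fov_a / 2, yaw_a + fov_a / 2
    b_lo, b_hi = yaw_b - fov_b / 2, yaw_b + fov_b / 2
    return max(0.0, min(a_hi, b_hi) - max(a_lo, b_lo))

# Example: a front camera and a front-left camera, each with a 90 degree FOV.
print(angular_overlap_deg(0.0, 90.0, 60.0, 90.0))  # 30.0 degrees of overlap
```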
Optionally, the virtual realistic model is enhanced with a sensory model created to simulate the certain geographical area, in particular, a ranging sensory model. Using the simulated ranging model, a simulated sensory ranging data feed may be injected to the autonomous driving system. The simulated sensory ranging data may be generated as depicted by one or more emulated range sensors mounted on the emulated vehicle to emulate one or more range sensor(s) mounted on the real world vehicle, for example, a Light Detection and Ranging (LIDAR) sensor, a radar, an ultra-sonic sensor, a camera, an infrared camera and/or the like. The emulated range sensor(s) may be mounted on the emulated vehicle according to one or more mounting attributes of the real world range sensor(s) mounted on the real world vehicle controlled by the autonomous driving system and emulated by the emulated vehicle in the virtual realistic model.
Since the geographic map data as well as the visual imagery data is available for the certain geographical area, the ranging model created for the certain geographical area may be highly accurate. However, such accuracy may fail to represent real world sensory ranging data produced by real range sensor(s). The ranging sensory model may therefore apply one or more noise patterns typical and/or inherent to the range sensor(s) emulated in the virtual realistic model. The noise patterns may further include noise effects induced by one or more of the objects detected in the geographical area. The noise pattern(s) may describe one or more noise characteristics, for example, noise, distortion, latency, calibration offset and/or the like. The noise pattern(s) may be identified through big-data analysis and/or analytics over a large data set comprising a plurality of real world range sensor(s) readings collected for the geographical area and/or for other geographical locations. The big-data analysis may be done using one or more machine learning algorithms, for example, a neural network such as, for instance, a Deep Learning Neural Network (DNN), a Gaussian Mixture Model (GMM), etc., a Support Vector Machine (SVM) and/or the like. Optionally, in order to more accurately simulate the geographical area, the noise pattern(s) may be adjusted according to one or more object attributes of the objects detected in the geographical area, for example, an external surface texture, an external surface composition, an external surface material and/or the like. The noise pattern(s) may also be adjusted according to one or more environmental characteristics, for example, weather, timing (e.g. time of day, date) and/or the like. In some embodiments, one or more mounting attributes may be adjusted for the emulated range sensor(s) to improve accuracy performance of the range sensor(s).
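For illustration only, applying such a noise pattern to ideal ranges taken from the virtual model might be sketched as follows; the material scale factors, offsets and function names are assumptions rather than values prescribed by this description.

```python
# Hypothetical sketch: degrading ideal ranges from the virtual model with a noise
# pattern (calibration offset, random noise, surface-material factor). All names
# and factor values are illustrative only.
import numpy as np

MATERIAL_NOISE_SCALE = {"glass": 2.5, "metal": 0.8, "concrete": 1.0}  # assumed values

def apply_noise_pattern(ideal_ranges_m, materials, offset_m, noise_std_m, rng):
    noisy = []
    for r, material in zip(ideal_ranges_m, materials):
        scale = MATERIAL_NOISE_SCALE.get(material, 1.0)     # object-attribute adjustment
        noisy.append(r + offset_m + rng.normal(0.0, noise_std_m * scale))
    return np.array(noisy)

rng = np.random.default_rng(0)
print(apply_noise_pattern([12.0, 30.5, 7.2], ["metal", "glass", "concrete"],
                          offset_m=0.05, noise_std_m=0.03, rng=rng))
```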
Optionally, one or more dynamic objects are injected into the virtual realistic model replicating the geographical area, for example, a ground vehicle, an aerial vehicle, a naval vehicle, a pedestrian, an animal, vegetation and/or the like. The dynamic object(s) may further include dynamically changing road infrastructure objects, for example, a light changing traffic light, an opened/closed railroad gate and/or the like. Movement of one or more of the dynamic objects may be controlled according to movement patterns predefined and/or learned for the certain geographical area. In particular, movement of one or more ground vehicles inserted into the virtual realistic model may be controlled according to driver behavior data received from a driver behavior simulator. The driver behavior data may be adjusted according to one or more driver behavior patterns and/or driver behavior classes exhibited by a plurality of drivers in the certain geographical area, i.e. driver behavior patterns and/or driver behavior classes that may be typical to the certain geographical area. The driver behavior classes may be identified through big-data analysis and/or analytics over a large data set of sensory data, for example, sensory motion data, sensory ranging data and/or the like collected from a plurality of drivers moving in the geographical area. The sensory data may include, for example, speed, acceleration, direction, orientation, elevation, space keeping, position in lane and/or the like. One or more machine learning algorithms, for example, a neural network (e.g. DNN, GMM, etc.), an SVM and/or the like may be used to analyze the collected sensory data to detect movement patterns which may be indicative of one or more driver behavior patterns. The driver behavior pattern(s) may be typical to the geographical area and therefore, based on the detected driver behavior pattern(s), the drivers in the geographical area may be classified to one or more driver behavior classes representing driver prototypes. The driver behavior data may be further adjusted according to a density function calculated for the geographical area which represents the distribution of the driver prototypes in the simulated geographical area.
Optionally, additional data relating to the emulated vehicle is simulated and injected to the autonomous driving system. The simulated additional data may include, for example, sensory motion data presenting motion information of the emulated vehicle, transport data simulating communication of the emulated vehicle with one or more other entities over one or more communication links, for example, Vehicle to Anything (V2X) and/or the like.
The simulated sensory data, such as the imaging data, the ranging data, the motion data and/or the transport data may be injected to the autonomous driving system using the native interfaces of the autonomous driving system. For example, in case the autonomous driving system is a unit having one or more interfaces, ports, links and/or the like, the simulated sensory data may be injected through the interface(s), port(s) and/or links. Additionally and/or alternatively, assuming the autonomous driving system is a computer software program, the simulated sensory data may be injected using one or more virtual drivers using, for example, Application Programming Interface (API) functions of the autonomous driving system, a Software Development Kit (SDK) provided for the autonomous driving system and/or for the training system and/or the like.
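A minimal sketch of such software-level injection is shown below; the AutonomousDriverAPI class and its method names are invented stand-ins for whatever native interface, API or SDK a concrete autonomous driving system actually exposes.

```python
# Hypothetical sketch: feeding simulated sensory frames to an autonomous driving
# stack through a software interface. AutonomousDriverAPI and its methods are
# invented for illustration; a real system would expose its own SDK/API or
# hardware-level interfaces instead.
class AutonomousDriverAPI:
    def push_camera_frame(self, frame):   # stand-in for the native imaging input
        print("camera frame at t =", frame["timestamp"])

    def push_lidar_scan(self, scan):      # stand-in for the native ranging input
        print("lidar scan at t =", scan["timestamp"])

def inject_simulated_feed(driver_api, simulated_frames):
    """Drive each simulated frame into the stack in timestamp order."""
    for frame in simulated_frames:
        driver_api.push_camera_frame(frame["camera"])
        driver_api.push_lidar_scan(frame["lidar"])

inject_simulated_feed(AutonomousDriverAPI(), [
    {"camera": {"timestamp": 0.0}, "lidar": {"timestamp": 0.0}},
    {"camera": {"timestamp": 0.1}, "lidar": {"timestamp": 0.1}},
])
```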
Training autonomous driving systems using the simulated virtual realistic model emulating one or more geographical areas, specifically, a simulated virtual realistic model for emulating sensory data such as the imaging data, ranging data, motion data and/or transport data may present significant advantages. The simulated virtual realistic model may be further used for evaluating, validating and/or improving design of autonomous driving systems.
Currently, autonomous driving systems are typically designed, trained, evaluated and validated in real world conditions in which a real world vehicle controlled by the autonomous driving system is moving in the real world geographical area. This may present major limitations since an extremely large number of driving hours may need to be accumulated in order to properly train and/or evaluate the autonomous driving systems. Moreover, the vehicle controlled by the autonomous driving system needs to be driven in a plurality of geographical locations which may further limit scalability of the training, evaluation and/or validation process. Furthermore, the autonomous driving system may need to be evaluated and/or trained for a plurality of environmental conditions, for example, weather conditions, timing conditions, lighting conditions and/or the like as well as for a plurality of traffic conditions, for example, standard traffic, rush hour traffic, vacation time traffic and/or the like. This may further limit scalability of the training, evaluation and/or validation process. In addition, driver behavior of other vehicles may vary with respect to one or more conditions, for example, different geographical areas, different environmental conditions, different traffic conditions and/or the like. This may also limit scalability of the training, evaluation and/or validation process. The limited scalability of the currently existing methods may further result from the enormous amount of resources that may be required to train, evaluate and/or validate the autonomous driving systems in the plurality of ride scenarios, for example, time, hardware resources, human resources and/or the like. Training the autonomous driving systems using the simulated virtual realistic model on the other hand, may allow for significant scalability since the plurality of various ride scenarios may be easily simulated for a plurality of geographical locations. The training, evaluation and/or validation process may be done automatically by an automated system executing the simulated virtual realistic model. Moreover, the training, evaluation and/or validation process may be done for a plurality of geographical areas, various conditions and/or various scenarios in a closed facility without the need to move real vehicles in the real world. In addition, the automated training, evaluation and/or validation process may be conducted concurrently for the plurality of geographical areas, the various conditions and/or the various scenarios. This may significantly reduce the resources, for example, time, hardware resources, human resources and/or the like for training, evaluating and/or validating the autonomous driving systems.
Moreover, using the automated system executing the simulated virtual realistic model for training, evaluating and/or validating the autonomous driving systems may significantly reduce risk since the process is conducted in a virtual environment. Damages, accidents and even life loss which may occur using the currently existing methods for training the autonomous driving systems may be completely prevented and avoided.
Furthermore, the autonomous driving system may require insignificant and typically no adaptations to support training, evaluation and/or validation using the simulated virtual realistic model. The simulated sensory data feed(s) depicting the simulated virtual realistic model may be injected to the autonomous driving systems through the native interface used by the autonomous driving system to receive real sensory data.
Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not necessarily limited in its application to the details of construction and the arrangement of the components and/or methods set forth in the following description and/or illustrated in the drawings and/or the Examples. The invention is capable of other embodiments or of being practiced or carried out in various ways. The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
Referring now to the drawings, FIG. 1 is a flowchart of an exemplary process of creating a simulated model of a geographical area, according to some embodiments of the present invention. A process 100 may be executed to train an autonomous driving system in a certain geographical area using a simulated virtual 3D model created to replicate the geographical area. The simulated virtual realistic model is created by obtaining visual imagery data of the geographical area which may be processed by one or more trained classifiers to identify one or more objects in the visual imagery data, in particular static objects. The identified objects may be superimposed over geographic map(s) obtained for the geographical area to create a labeled model of the geographical area. Using one or more techniques, for example, a cGAN, stitching texture(s) (of the labeled objects) retrieved from the original visual imagery data, overlaying textured images selected from a repository according to the class label and/or the like, the labeled model may be synthesized to create the virtual 3D realistic model replicating the geographical area. Optionally, the virtual realistic model is adjusted according to one or more lighting and/or environmental conditions to emulate various real world lighting effects, weather conditions, ride scenarios and/or the like.
After the virtual realistic model is created, synthetic 3D imaging data may be created and injected to the autonomous driving system. In particular, the synthetic 3D imaging data may be generated to depict the virtual realistic model from a point of view of one or more emulated imaging sensors (e.g. a camera, an infrared camera, a video camera, a night vision sensor, etc.) mounted on an emulated vehicle moving in the virtual realistic model which represents the vehicle controlled by the autonomous driving system. Optionally, a ranging model is simulated for the geographical area to emulate a realistic ranging arena in the simulated virtual realistic model.
Optionally, one or more dynamic objects are injected into the virtual realistic model replicating the geographical area, for example, a ground vehicle, an aerial vehicle, a naval vehicle, a pedestrian, an animal, vegetation and/or the like. Moreover, movement of one or more ground vehicles inserted to the virtual realistic model may be controlled according to driver behavior patterns identified through big-data analytics of vehicles movement in the geographical area.
Reference is also made to FIG. 2A and FIG. 2B, which are schematic illustrations of exemplary embodiments of a system for creating a simulated model of a geographical area, according to some embodiments of the present invention. An exemplary system 200A includes a simulation server 201 comprising an Input/Output (I/O) interface 202, a processor(s) 204 and a storage 206. The I/O interface 202 may provide one or more network interfaces, wired and/or wireless for connecting to one or more networks 230, for example, a Local Area Network (LAN), a Wide Area Network (WAN), a Metropolitan Area Network (MAN), a cellular network and/or the like. The processor(s) 204, homogenous or heterogeneous, may be arranged for parallel processing, as clusters and/or as one or more multi core processor(s). The storage 206 may include one or more non-transitory persistent storage devices, for example, a hard disk drive (HDD), a Solid State Disk (SSD), a Flash array and/or the like. The storage 206 may further include one or more networked storage resources accessible over the network(s) 230, for example, a Network Attached Storage (NAS), a storage server, a cloud storage and/or the like. The storage 206 may also utilize one or more volatile memory devices, for example, a Random Access Memory (RAM) device and/or the like for temporary storage of code and/or data. The processor(s) 204 may execute one or more software modules, for example, a process, a script, an application, an agent, a utility and/or the like which comprise a plurality of program instructions stored in a non-transitory medium such as the storage 206. The processor(s) 204 may execute one or more software modules such as, for example, a simulator 210 for training an autonomous driving system 220 using a simulated model created to replicate one or more geographical areas, a ranging model creator 212 for creating a ranging model of the geographical area(s), and a driver behavior simulator 214 simulating driver behavior in the geographical area(s). Optionally, one or more of the simulator 210, the ranging model creator 212 and/or the driver behavior simulator 214 are integrated in a single software module. For example, the simulator 210 may include the ranging model creator 212 and/or the driver behavior simulator 214. The simulator 210, the ranging model creator 212 and/or the driver behavior simulator 214 may communicate over the network(s) 230 to obtain, retrieve, receive and/or collect data, for example, geographic map data, visual imagery data, sensory data and/or the like from one or more remote locations.
Optionally, the simulation server 201 and/or part thereof is implemented as a cloud based platform utilizing one or more cloud services, for example, cloud computing, cloud storage, cloud analytics and/or the like. Furthermore, one or more of the software modules, for example, the simulator 210, the ranging model creator 212 and/or the driver behavior simulator 214 may be implemented as a cloud based service and/or platform, for example, software as a service (SaaS), platform as a service (PaaS) and/or the like.
The autonomous driving system 220 may include an I/O interface such as the I/O interface 202, one or more processors such as the processor(s) 204, and a storage such as the storage 206. The I/O interface of the autonomous driving system 220 may provide a plurality of I/O and/or network interfaces for connecting to one or more peripheral devices, for example, sensors, networks such as the network 230 and/or the like. In particular, the I/O interface of the autonomous driving system 220 may provide connectivity to one or more imaging sensors, ranging sensors, motion sensors, V2X communication links, vehicle communication links (e.g. Controller Area Network (CAN), etc.) and/or the like.
The I/O interface 202 may further include one or more interfaces adapted and/or configured to the native I/O interfaces of the I/O interface of the autonomous driving system 220. For example, the I/O interface 202 may include one or more output links compliant with one or more input links of the I/O interface of the autonomous driving system 220 such that an imaging data feed may be driven from the simulator system 201 to the autonomous driving system 220. Similarly, the I/O interface 202 may provide connectivity to the autonomous driving system 220 to drive ranging sensory data, motion sensory data, transport data and/or the like. The communication between the simulation server 201 and the autonomous driving system 220 may support bidirectional communication from the simulation server 201 to the autonomous driving system 220 and vice versa.
The autonomous driving system 220 may execute one or more software modules such as, for example, an autonomous driver 222 for controlling movement, navigation and/or the like of a vehicle, for example, a ground vehicle, an aerial vehicle and/or a naval vehicle.
In some embodiments, as shown in system 200B, the autonomous driving system 220 may be facilitated through the autonomous driver 222 independent of the autonomous driving system 220. The autonomous driver 222 may be executed by one or more processing nodes, for example, the simulation server 201, another processing node 240 connected to the network 230 and/or the like. In such implementations and/or deployments, one or more of the software modules executed by the simulation server, for example, the simulator 210 may communicate with the autonomous driver 222 through one or more software interfaces of the autonomous driver 222 which are typically used to receive the imaging data feed, the ranging sensory data, the motion sensory data, the transport data and/or the like. The software interface(s) may include, for example, a virtual driver, an API function, a system call, an operating system function and/or the like. The communication between the simulator 210 and the autonomous driver 222 may also be established using one or more SDKs provided for the autonomous driver 222 and/or for the simulator 210. In case the autonomous driver 222 is executed at a remote processing node such as the processing node 240, the simulator 210 and the autonomous driver 222 may communicate with each other using the software interfaces which may be facilitated over one or more of networks such as the networks 230.
As shown at 102, the process 100 starts with the simulator 210 obtaining geographic map data of one or more geographical areas, for example, an urban area, an open terrain area, a country side area and/or the like. In particular, the simulator 210 obtains the geographic map data for a geographical area targeted for training an autonomous driving system such as the autonomous driving system 220 and/or autonomous driver such as the autonomous driver 222. The geographic map data may include one or more maps, for example, a 2D map, a 3D map, an orthophoto map, an elevation map, a detailed map comprising object description for objects present in the geographical area and/or the like. The geographic map data may be obtained from one or more geographic map sources, for example, Google maps, OpenStreetMap and/or the like. The geographic map data may present one or more static objects located in the geographical area, for example, a road, a road infrastructure object, a building, a monument, a structure, a natural object, a terrain surface and/or the like. The geographic map data may include, for example, road width, structure(s) contour outline, structure(s) polygons, structure(s) dimensions (e.g. width, length, height) and/or the like. The simulator 210 may retrieve the geographic map data from the storage 206 and/or from one or more remote locations accessible over the network(s) 230.
As shown at 104, the simulator 210 obtains visual imagery data of the target geographical area. The visual imagery data may include one or more 2D and/or 3D images depicting the geographical area from ground level, from the air and/or from a satellite. The simulator 210 may retrieve the visual imagery data from the storage 206 and/or from one or more remote locations accessible over the network(s) 230. The simulator 210 may obtain the visual imagery data from one or more visual imagery sources, for example, Google Earth, Google Street View, OpenStreetCam and/or the like.
As shown at 106, the simulator 210 labels the static objects detected in the imagery data. The simulator 210 may use one or more computer vision classifiers (classification functions), for example, a Convolutional Neural Network (CNN), an SVM and/or the like for classifying the static object(s) detected in the visual imagery data to predefined labels as known in the art. In particular, the classifier(s) may identify and label target static objects depicted in the visual imagery data, for example, a road infrastructure object, a building, a monument, a structure, a natural object, a terrain surface and/or the like. The road infrastructure objects may include, for example, roads, road shoulders, pavements, intersections, interchanges, traffic lights, pedestrian crossings and/or the like. The classifier(s) may typically be trained with a training image set adapted for the target static object defined to be recognized in the imagery data, for example, the road infrastructure object, the building, the monument, the structure, the natural object, the terrain surface and/or the like. In order to improve generalization and avoid overfitting, the training image set may be collected, constructed, adapted, augmented and/or transformed to present features of the static objects in various types, sizes, view angles, colors and/or the like.
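For illustration, a reduced CNN classifier assigning static-object labels to image tiles might be sketched as follows; the architecture, tile size and label set are assumptions, and the network is shown untrained, so it only demonstrates the classification data flow rather than a trained classifier.

```python
# Hypothetical sketch: a minimal CNN classifier assigning static-object labels
# (road, building, natural object, terrain surface) to image tiles cut from the
# visual imagery data. The architecture and label set are illustrative only.
import torch
import torch.nn as nn

LABELS = ["road", "building", "natural_object", "terrain_surface"]

classifier = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, len(LABELS)),
)

tile = torch.rand(1, 3, 64, 64)            # one 64x64 RGB tile from the imagery
with torch.no_grad():
    label_idx = classifier(tile).argmax(dim=1).item()
print(LABELS[label_idx])                    # untrained here, so the label is arbitrary
```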
Optionally, prior to analyzing the visual imagery data to identify the static objects, the simulator 210 applies noise removal to the visual imagery data to remove unnecessary objects, for example, vehicles, animals, vegetation and/or the like that may obscure (at least partially) the target static objects. The simulator 210 may further apply one or more image processing algorithms to improve and/or enhance visibility of the static objects depicted in the imagery data, for example, adjust image brightness, adjust image color scheme, adjust image contrast and/or the like.
As shown at 108, the simulator 210 superimposes the labeled static objects over the geographic map data, for example, the geographic map(s) to create a labeled model of the geographical area. The simulator 210 may overlay, fit, align, adjust and/or the like the labeled objects over the geographic map(s) in the respective location, position, orientation, proportion and/or the like as identified by analyzing the geographic map data and/or the visual imagery data. The simulator 210 may associate each of the labeled static objects detected in the visual imagery data with one or more positioning attributes extracted from the geographic map data and/or the visual imagery data, for example, location, position, elevation, orientation, proportion and/or the like. Using the positioning attributes, the simulator 210 may position (i.e. locate, orient, align, adjust, manipulate and/or the like) the labeled static objects in the geographic map(s). The resulting labeled model may therefore replicate the real world geographical area such that the labeled objects are accurately placed and/or located according to real world place, location, position, elevation, orientation and/or the like. The simulator 210 may further create an elevation model that is integrated with the labeled model.
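A minimal sketch of placing a labeled object into the map frame from its positioning attributes is shown below; the footprint representation and function name are hypothetical, and a real pipeline would rely on a full geospatial toolchain rather than this toy transform.

```python
# Hypothetical sketch: placing a labeled object's footprint into the map frame
# using positioning attributes (location, orientation, proportion).
import math

def place_footprint(footprint_xy, location_xy, heading_deg, scale=1.0):
    """Rotate, scale and translate local footprint vertices into map coordinates."""
    c, s = math.cos(math.radians(heading_deg)), math.sin(math.radians(heading_deg))
    return [(location_xy[0] + scale * (c * x - s * y),
             location_xy[1] + scale * (s * x + c * y)) for x, y in footprint_xy]

# A 10 m x 6 m building outline placed at map position (250, 80), rotated by 30 degrees.
print(place_footprint([(0, 0), (10, 0), (10, 6), (0, 6)], (250.0, 80.0), 30.0))
```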
As shown at 110, the simulator 210 synthesizes the labeled model to assign (visual) image pixel data to each of the labeled static objects in the labeled model to create a virtual 3D visual realistic scene replicating the geographical area. The simulator 210 may apply one or more methods, techniques and/or implementations for synthesizing the labeled scene.
In one implementation, the simulator 210 may use one or more machine learning methodologies, techniques and/or algorithms, for example, a cGAN and/or the like to synthesize one or more of the labeled objects. The cGAN, as known in the art, may be trained to apply a plurality of visual data transformations, for example, pixel to pixel, label to pixel and/or the like. The cGAN may therefore generate visual appearance imagery (e.g. one or more images) for each of the labels which classify the static object(s) in the labeled model. The cGAN may be trained to perform the reverse operation of the classifier(s) (classification function) such that the cGAN may generate a corresponding visual imagery for label(s) assigned by the classifier(s) to one or more of the static objects in the labeled model. For example, assuming the simulator 210 identified a road object in the visual imagery data of the geographical area and labeled the road object in the labeled scene. The simulator 210 may apply the cGAN(s) to transform the road label of the road object to a real world visual appearance imagery of the road object.
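By way of illustration only, a heavily reduced pix2pix-style generator mapping a one-hot label map to image pixels might look as follows; the architecture is an assumption and the network is shown untrained, so it only demonstrates the label-to-pixel data flow of such a transformation.

```python
# Hypothetical sketch: a reduced pix2pix-style generator mapping a one-hot label
# map to RGB pixels, standing in for the cGAN transformation described above.
import torch
import torch.nn as nn

NUM_LABELS = 4  # e.g. road, building, natural object, terrain surface

generator = nn.Sequential(
    nn.Conv2d(NUM_LABELS, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1), nn.Tanh(),
)

label_map = torch.zeros(1, NUM_LABELS, 64, 64)
label_map[:, 0, 32:, :] = 1.0     # lower half of the tile labeled "road"
with torch.no_grad():
    rgb = generator(label_map)    # synthesized image tile with values in [-1, 1]
print(rgb.shape)                  # torch.Size([1, 3, 64, 64])
```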
In another implementation, the simulator 210 may use the original visual imagery data to extract the texture of one or more of the labeled objects. For example, assuming the simulator 210 identified the road object in the visual imagery data of the geographical area and labeled the road object in the labeled scene. The simulator 210 may extract the texture of the road object from the original visual imagery data and overlay the road object texture on the road object label in the labeled scene.
In another implementation, the simulator 210 may retrieve the texture of one or more of the labeled objects from a repository comprising a plurality of texture images of a plurality of objects. The repository may be facilitated through one or more storage locations, for example, the storage 206 and/or one or more remote storage locations accessible over the network 230. For example, assuming the simulator 210 identified the road object in the visual imagery data of the geographical area and labeled the road object in the labeled scene. The simulator 210 may access the storage location(s) to retrieve the texture of the road object and overlay the road object texture on the road object label in the labeled scene.
Moreover, the simulator 210 may manipulate, for example, increase, decrease, stretch, contract, rotate, transform and/or the like the labeled object(s)' texture extracted from the visual imagery data to fit the location, size, proportion and/or perspective of the labeled object(s) in the labeled scene. For example, the simulator 210 may manipulate the road object texture to fit the road object location, size and/or perspective in the labeled scene as depicted from a certain point of view. The simulator 210 may apply one or more computer graphics standards, specifications, methodologies and/or the like for creating and/or rendering the virtual 3D visual realistic model, for example, OpenGL, DirectX and/or the like.
The resulting virtual 3D visual realistic model may be significantly accurate in visually replicating the geographical area. The virtual 3D visual realistic model may therefore be highly suitable for training the autonomous driver 222 for controlling movement of an emulated vehicle, for example, a ground vehicle, an aerial vehicle, a naval vehicle and/or the like in the simulated realistic model replicating the geographical area.
Optionally, the simulator 210 adjusts the simulated realistic model to simulate one or more environmental conditions, for example, a lighting condition, a timing condition, a weather condition and/or the like. For example, the simulator 210 may adjust the lighting conditions of the simulated realistic model according to a time of day and/or a date, i.e. according to an angle of the sun in the sky. In another example, the simulator 210 may adjust the lighting conditions of the simulated realistic model according to a weather condition, for example, an overcast, cloudy sky, fog and/or the like. In another example, the simulator 210 may simulate raindrops, snowflakes and/or the like to simulate respective weather conditions.
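For illustration only, the following Python sketch shows one simplistic way a lighting level could be derived from a time-of-day attribute; the sinusoidal day model with a 06:00 sunrise and an 18:00 sunset is an assumption, not the described implementation.

# Minimal sketch of deriving a sun-elevation-based light intensity from a time
# of day, as one way to adjust the simulated model's lighting.
import math
from datetime import datetime

def light_intensity(t: datetime) -> float:
    """Return a 0..1 light level: 0 at night, 1 at solar noon (assumed day model)."""
    hour = t.hour + t.minute / 60.0
    if hour < 6.0 or hour > 18.0:
        return 0.0
    return math.sin(math.pi * (hour - 6.0) / 12.0)

# intensity = light_intensity(datetime(2017, 5, 29, 14, 30))  # afternoon light level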
As shown at 112, which is an optional step, the simulator 210 may insert one or more simulated dynamic objects to the simulated virtual realistic model. The simulated dynamic objects may include, for example, a ground vehicle, an aerial vehicle, a naval vehicle, a pedestrian, an animal, vegetation and/or the like. The simulated dynamic objects may further include a dynamically changing road infrastructure object, for example, a light changing traffic light, an opened/closed railroad gate and/or the like. The simulator 210 may apply one or more movement and/or switching patterns to one or more of the simulated dynamic objects to mimic the behavior of real world dynamic objects. For example, the simulator 210 may insert a simulated aircraft into the simulated virtual realistic model and control the simulated aircraft to fly in a valid flight lane as exists in the geographical area. In another example, the simulator 210 may place one or more simulated traffic lights at an intersection detected in the geographical area. The simulator 210 may control the simulated traffic light(s) to switch lights in accordance with traffic control directives applicable to traffic light(s) in the geographical area. The simulator 210 may further synchronize switching of multiple simulated traffic lights to imitate real traffic control as applied in the geographic area.
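For illustration only, the following Python sketch shows one possible way simulated traffic lights could be switched and synchronized; the phase durations and the per-intersection offsets are assumed values rather than traffic control directives of any particular geographical area.

# Minimal sketch of a synchronized simulated traffic-light controller. The
# phase durations and the offsets between intersections are assumptions.
from dataclasses import dataclass

PHASES = [("green", 30.0), ("yellow", 3.0), ("red", 33.0)]
CYCLE = sum(duration for _, duration in PHASES)

@dataclass
class SimulatedTrafficLight:
    offset_s: float  # phase offset used to synchronize adjacent intersections

    def state(self, sim_time_s: float) -> str:
        """Return the light state ('green'/'yellow'/'red') at the given simulation time."""
        t = (sim_time_s + self.offset_s) % CYCLE
        for name, duration in PHASES:
            if t < duration:
                return name
            t -= duration
        return "red"

# Four synchronized lights along a road, offset to form a green wave (assumed offsets):
# lights = [SimulatedTrafficLight(offset_s=10.0 * i) for i in range(4)]
# states = [light.state(sim_time_s=95.0) for light in lights]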
As shown at 114, which is an optional step, assuming the simulator 210 inserted one or more emulated ground vehicles to the simulated virtual realistic model, the simulator 210 may use a driver behavior simulator to control one or more of the emulated ground vehicles according to one or more driver behavior classes and/or patterns identified at the geographical area. The driver behavior classes may include, for example, an aggressive driver, a normal driver, a patient driver, a reckless driver and/or the like. Moreover, assuming the simulator 210 emulates a plurality of vehicles in the virtual realistic model, the simulator 210, using the driver behavior simulator, may apply the driver behavior classes according to a density function calculated for the geographical area which represents the distribution of the driver prototypes (classes) detected in the geographical area replicated by the virtual realistic model. This may allow accurate simulation of the virtual realistic model which may thus be significantly similar to the real world geographical area with respect to driving behavior typical of the geographical area.
Reference is now made to FIG. 3, which is a flowchart of an exemplary process of training a driver behavior simulator for a certain geographical area, according to some embodiments of the present invention. An exemplary process 300 may be executed by a driver behavior simulator such as the driver behavior simulator 214 in a system such as the system 200 for training the driver behavior simulator 214 according to driver behavior characteristics detected in the geographical area. While the process 300 may be applied in a plurality of geographical areas for training the driver behavior simulator 214, for brevity, the process 300 is described for a single geographical location. The driver behavior simulator 214 may be used by a simulator such as the simulator 210 to simulate movement of emulated vehicles in a simulated virtual realistic model replicating the geographical area.
As shown at 302, the process 300 starts with the driver behavior simulator 214 obtaining sensory data from a plurality of sensor sets mounted on a plurality of vehicles driven by a plurality of drivers in the geographical area. In particular, the sensory data may include sensory motion data obtained from one or more motion sensors of the sensor set, for example, a Global Positioning system (GPS) sensor, a gyroscope, an accelerometer, an Inertial Measurement Unit (IMU) sensor, an elevation sensor and/or the like.
As shown at 304, the driver behavior simulator 214 may analyze the sensory data to detect movement patterns of each of the plurality of vehicles in the geographical area. The driver behavior simulator 214 may apply big-data analysis and/or analytics using one or more machine learning algorithms, for example, a neural network (e.g. DNN, GMM, etc.), an SVM and/or the like to analyze large amounts of sensory data collected from the vehicles driving in the geographical area and detect the movement patterns. The detected movement patterns may be indicative of one or more driver behavior patterns exhibited by one or more of the drivers driving the vehicles in the geographical area. The driver behavior simulator 214 may therefore analyze the detected movement patterns to infer the driver behavior pattern(s). The driver behavior simulator 214 may again apply the machine learning algorithm(s) to identify the driver behavior pattern(s) from analysis of the movement patterns.
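For illustration only, the following Python sketch shows one possible way recurring movement patterns could be exposed by fitting a Gaussian mixture to per-trip motion features; the feature selection (mean speed, peak acceleration, peak braking) and the number of components are assumptions.

# Minimal sketch of fitting a Gaussian mixture to per-trip motion features in
# order to expose recurring movement patterns. Feature choice and component
# count are assumptions for illustration only.
import numpy as np
from sklearn.mixture import GaussianMixture

def detect_movement_patterns(trip_features: np.ndarray, n_patterns: int = 4):
    """trip_features: N x 3 array of [mean_speed, max_accel, max_braking] per trip."""
    model = GaussianMixture(n_components=n_patterns, random_state=0)
    pattern_ids = model.fit_predict(trip_features)  # pattern index per trip
    return model, pattern_ids

# model, ids = detect_movement_patterns(np.load("trip_features.npy"))  # assumed feature file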
The driver behavior pattern(s) may include one or more motion parameters, for example, a speed parameter, an acceleration parameter, a braking parameter, a direction parameter, an orientation parameter and/or the like. Moreover, the driver behavior pattern(s) may be associated with specific locations of interest, for example, an intersection, a road curve, a turning point, an interchange entrance/exit ramp and/or the like. The driver behavior pattern(s) may describe one or more of the motion parameters for the location of interest. Furthermore, the driver behavior simulator 214 may create one or more driver behavior pattern(s) to describe a prolonged driving action, for example, crossing the intersection such that the driver behavior pattern(s) may specify, for example, a speed parameter for the intersection entry phase, a speed parameter for the intersection crossing phase, a speed parameter for the intersection exit phase and/or the like. In another example, the driver behavior pattern(s) may describe a direction and/or orientation parameter for one or more phases while exiting the interchange on the exit ramp. In another example, the driver behavior pattern(s) may describe an acceleration parameter for one or more phases while entering the interchange entrance ramp.
Optionally, the driver behavior simulator 214 analyzes the sensory data with respect to one or more environmental characteristics detected during acquisition of the sensory data, for example, a weather condition attribute, a timing attribute and/or the like. The environmental characteristics may affect the driving behavior exhibited by at least some of the drivers and the driver behavior simulator 214 may therefore adapt, create and/or amend the driver behavior pattern(s) according to the environmental characteristic(s). For example, the driver behavior may differ between night and day, between summer and winter and/or the like. Therefore, based on the timing attribute associated with the sensory data, the driver behavior simulator 214 may create the driver behavior pattern(s) to distinguish between day and night, summer and winter and/or the like. In another example, the driver behavior may change in case of wind, rain, fog, snow and/or the like. Therefore, based on the weather attribute associated with the sensory data, the driver behavior simulator 214 may create the driver behavior pattern(s) for each of the weather conditions.
Optionally, the sensory data comprises additional sensory data obtained from one or more outward sensors included in one or more of the sensor sets mounted on the plurality of vehicles. The outward sensor(s) depict the geographical area as viewed from one of the plurality of vehicles associated with the respective sensor set(s). The outward sensor(s) may include, for example, a camera, an infrared camera, a night vision sensor, a LIDAR sensor, a radar, an ultra-sonic sensor and/or the like. By analyzing the additional sensory data, the driver behavior simulator 214 may detect further movement patterns, driver behavior patterns and/or additional driving characteristics of the drivers of the vehicles in the geographical area and may enhance one or more of the driver behavior patterns accordingly. The additional driving characteristics may include, for example, a tailgating characteristic, an in-lane position characteristic, a double-parking tendency characteristic and/or the like. For example, based on analysis of imagery data, for example, one or more images received from the imaging sensor(s), the driver behavior simulator 214 may detect double parking events and may thus associate one or more of the driver behavior patterns with the double-parking tendency characteristic. In another example, based on analysis of sensory ranging data received from the range sensor(s), the driver behavior simulator 214 may identify space keeping parameters, in-lane position parameters and may thus associate one or more of the driver behavior patterns with the tailgating characteristic, the in-lane position characteristic and/or the like. The driver behavior simulator 214 may associate at least some of the drivers with one or more of the driver behavior patterns detected for the vehicles' drivers in the geographical area.
As shown at 306, the driver behavior simulator 214 may classify at least some of the plurality of drivers to one or more driver behavior classes according to the driver behavior pattern(s) associated with each of the drivers. The driver behavior classes may include, for example, an aggressive driver prototype, a normal driver prototype, a patient driver prototype, a reckless driver prototype and/or the like. The driver behavior simulator 214 may use one or more classification, clustering and/or grouping methods, techniques and/or algorithms for classifying the drivers to the driver behavior classes.
As shown at 308, the driver behavior simulator 214 may calculate a driver behavior density function associated with the geographical area and/or part thereof. The driver behavior density function describes a probability of presence and/or distribution (number) of each of the driver prototypes in the geographical area (or part thereof) at a certain time. The driver behavior simulator 214 may calculate the driver behavior density function according to a distribution and/or recurrence of drivers of each of the driver prototypes in the geographical area. The driver behavior simulator 214 may further adjust the calculated driver behavior density function according to one or more of the environmental conditions. For example, the number of drivers of a certain driver prototype may differ between day and night, for example, more reckless drivers (e.g. young people) at night and more normal drivers (e.g. people driving to work) during the day. In another example, during rainy weather conditions, the number of reckless drivers may decrease. The driver behavior simulator 214 may therefore adjust the driver behavior density function accordingly.
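For illustration only, the following Python sketch shows one possible way a driver behavior density function could be computed from the classified drivers observed in a geographical area, with a separate density per timing attribute; the class names and the day/night split are assumed for the sketch.

# Minimal sketch of computing a driver-behavior density function from
# classified driver observations, split by an assumed day/night timing attribute.
from collections import Counter

def behavior_density(observations):
    """observations: iterable of (driver_class, timing) tuples, e.g. ('reckless', 'night')."""
    observations = list(observations)
    densities = {}
    for timing in {t for _, t in observations}:
        counts = Counter(cls for cls, t in observations if t == timing)
        total = sum(counts.values())
        densities[timing] = {cls: n / total for cls, n in counts.items()}
    return densities

# obs = [("normal", "day"), ("aggressive", "day"), ("reckless", "night"), ("normal", "day")]
# behavior_density(obs) -> {'day': {'normal': ~0.67, 'aggressive': ~0.33}, 'night': {'reckless': 1.0}}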
As shown at 310, the driver behavior simulator 214 may be updated with the driver behavior classes and/or the driver behavior density function detected and/or calculated for the geographical area to generate realistic driver behavior data adapted to the geographical area for training the autonomous driving system 220 and/or the autonomous driver 222. The driver behavior simulator 214 may thus be used by the simulator 210 to simulate movement of the vehicles emulated in the simulated virtual realistic model replicating the geographical area to further approximate real world conditions.
Reference is made once again to FIG. 1.
As shown at 116, the simulator 210 may generate synthetic 3D imaging data, for example, one or more 2D and/or 3D images of the virtual realistic model replicating the geographical area. The simulator 210 may generate the synthetic 3D imaging data using the functions, utilities, services and/or abilities of the graphic environment, for example, OpenGL, DirectX and/or the like used to generate the simulated virtual realistic model. The simulator 210 may create the synthetic 3D imaging data to depict the virtual realistic model as viewed by one or more emulated imaging sensors, for example, a camera, a video camera, an infrared camera, a night vision sensor and/or the like mounted on the emulated vehicle moving in the simulated virtual realistic model. The emulated imaging sensor(s) may be mounted on the emulated vehicle according to one or more mounting attributes, for example, positioning on the emulated vehicle (e.g. location, orientation, elevation, etc.), FOV, resolution, overlap region with at least one adjacent imaging sensor and/or the like of the real imaging sensor(s) mounted on the real vehicle controlled by the autonomous driver 222. The simulator 210 may inject, provide and/or transmit the synthetic 3D imaging data to the autonomous driving system 220 which may thus be trained to move the emulated vehicle in the simulated virtual realistic model which may appear to the autonomous driving system 220 as the real world geographical area. The imaging data may be injected to the training autonomous driving system 220 as a feed to one or more of the hardware and/or software interfaces used by the autonomous driving system 220 and/or the autonomous driver 222, as described herein above, which may be natively used to receive the imaging data feed from the imaging sensor(s) mounted on the controlled vehicle.
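For illustration only, the following Python sketch shows one possible way an emulated imaging sensor could be described by its mounting attributes and its rendered frames fed to the system under training; the render_view and inject_camera_feed callables are assumptions standing in for the graphic environment and for the native camera interface of the training system.

# Minimal sketch of an emulated imaging sensor described by mounting attributes,
# with one rendered frame injected to the trainee. Helper callables are assumed.
from dataclasses import dataclass

@dataclass
class EmulatedImagingSensor:
    position_m: tuple        # (x, y, z) relative to the emulated vehicle
    orientation_deg: tuple   # (roll, pitch, yaw)
    fov_deg: float
    resolution: tuple        # (width, height)

def stream_synthetic_imaging(sensor, vehicle_pose, render_view, inject_camera_feed):
    """Render one frame from the sensor's point of view and feed it to the trainee."""
    frame = render_view(vehicle_pose, sensor)   # assumed renderer (e.g. OpenGL based)
    inject_camera_feed(frame)                   # assumed native camera input of the system

# front_cam = EmulatedImagingSensor((1.8, 0.0, 1.3), (0.0, 0.0, 0.0), 90.0, (1280, 720))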
Optionally, the simulator 210 may analyze the synthetic 3D imaging data to evaluate a perception performance of the imaging sensor(s). The simulator 210 may evaluate the perception performance of the imaging sensor(s) while applying alternate values to one or more mounting attributes of the imaging sensor(s). The simulator 210 may determine optimal settings for the mounting attribute(s) and may recommend that the autonomous driving system 220 and/or the autonomous driver 222 apply the optimal settings. Based on the recommended settings, the designer(s) of the autonomous driving system 220 may take one or more actions, for example, adjust one or more of the imaging sensor(s) mounting attributes, use different imaging sensor(s), add/remove imaging sensor(s) and/or the like.
As shown at 118, which is an optional step, the simulator 210 may use a ranging model to generate sensory data, in particular sensory ranging data simulating sensory ranging data generated by one or more range sensors mounted on the emulated vehicle. The emulated range sensor(s) may include, for example, a LIDAR sensor, a radar, an ultra-sonic sensor, a camera, an infrared camera and/or the like. The emulated range sensor(s) may be mounted on the emulated vehicle according to one or more mounting attributes, for example, positioning on the emulated vehicle (e.g. location, orientation, elevations, etc.), FOV, range, overlap region with at least one adjacent range sensor and/or the like of the real range sensor(s) mounted on the real vehicle controlled by the autonomous driver 222. The simulator 210 may employ the ranging model to enhance the simulated virtual realistic model with simulated realistic ranging data that may be injected to the autonomous driving system 220 and/or the autonomous driver 222. The simulated sensory ranging data may be injected to the training autonomous driving system 220 as a feed to one or more native inputs of the training autonomous driving system typically connected to one or more of the range sensors. Additionally and/or alternatively, the simulated sensory ranging data may be injected to the training autonomous driver 222 as a feed to one or more software interfaces typically used by the autonomous driver 222 to collect the sensory ranging data.
Since the geographic map data as well as the visual imagery data are available for the geographical area, the ranging model created for the geographical area may be highly accurate. However, such accuracy may fail to represent real world sensory ranging data produced by real range sensor(s). The ranging model may therefore apply one or more noise pattern(s) to the simulated sensory ranging data to realistically simulate real world conditions.
Reference is now made to FIG. 4, which is a flowchart of an exemplary process of creating a ranging sensory model of a geographical area, according to some embodiments of the present invention. An exemplary process 400 may be executed by a ranging model creator such as the ranging model creator 212 in a system such as the system 200 to create a ranging model for the simulated realistic model of a geographical area created by a simulator such as the simulator 210. While the process 400 may be applied in a plurality of real world scenes (geographical areas) for training a machine learning algorithm to accurately create a ranging model emulating real world geographical areas, for brevity, the process 400 is described for a single real world scene (geographical area).
As shown at 402, the process 400 starts with the ranging model creator 212 obtaining real ranging data for objects located in the real world scene (geographical area). The ranging model creator 212 may also analyze the geographic map data obtained for the real world scene to calculate the real ranging data. The ranging model creator 212 may also analyze the visual imagery data obtained for the real world scene using one or more image processing techniques and/or algorithms to calculate the real ranging data. Additionally, the ranging model creator 212 may collect the real ranging data from real world measurements made in the real world scene.
As shown at 404, the ranging model creator 212 obtains sensory ranging data from each of a plurality of range sensors depicting the real world scene. The range sensors which may include, for example, a LIDAR sensor, a radar, an ultra-sonic sensor, a camera, an infrared camera and/or the like may typically be mounted on a plurality of vehicles travelling in the real world scene, for example, a ground vehicle, an aerial vehicle and/or a naval vehicle.
Each of the ranging sensors is associated with positioning data indicating the positioning of each associated range sensor. The ranging model creator 212 may receive the positioning data from one or more positioning sensors typically installed, mounted and/or attached to the vehicle carrying the associated ranging sensor(s). The positioning sensors may include, for example, a GPS sensor, a gyroscope, an accelerometer, an IMU sensor, an elevation sensor and/or the like. The positioning of the respective range sensor(s) may be based on the GPS positioning data, on dead reckoning positioning calculated using the positioning data obtained from the gyroscope, the accelerometer, the IMU sensor and/or the like. Optionally, the positioning of the respective range sensor(s) may be based on a combination of the GPS positioning and the dead reckoning positioning which may provide improved positioning of the respective range sensor(s). Using the positioning data, the ranging model creator 212 may accurately calculate the absolute location and/or position of each of the range sensors in the real world scene. This allows the ranging model creator 212 to calculate absolute ranging data for each of the range sensors by adjusting the sensory ranging data produced by the respective range sensor according to the absolute location and/or position of the respective range sensor.
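For illustration only, the following Python sketch shows one simplistic way GPS positioning and dead reckoning positioning could be combined to track a range sensor's absolute position; the constant blending weight is an assumption, and a production implementation would more likely use a Kalman-style filter.

# Minimal sketch of combining GPS fixes with dead reckoning to track a range
# sensor's absolute position. The constant blending weight is an assumption.
import numpy as np

def fuse_position(prev_pos, velocity, dt, gps_fix=None, gps_weight=0.2):
    """prev_pos, velocity, gps_fix: 3-element arrays in a local metric frame."""
    predicted = np.asarray(prev_pos) + np.asarray(velocity) * dt  # dead reckoning step
    if gps_fix is None:
        return predicted
    # Blend the dead-reckoned prediction with the GPS fix.
    return (1.0 - gps_weight) * predicted + gps_weight * np.asarray(gps_fix)

# pos = fuse_position(prev_pos=[0.0, 0.0, 0.0], velocity=[10.0, 0.0, 0.0], dt=0.1,
#                     gps_fix=[1.05, 0.02, 0.0])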
Optionally, the positioning data includes motion data comprising one or more motion parameters of the associated range sensor(s). The motion parameter(s) may include, for example, a speed parameter, an acceleration parameter, a direction parameter, an orientation parameter, an elevation parameter and/or the like. Using the motion data, the ranging model creator 212 may improve accuracy of the absolute location and/or position of the associated range sensor(s) in the real world scene.
As shown at 406, the ranging model creator 212 analyzes the sensory ranging data obtained from the range sensors (after adjustment according to the positioning data) with respect to the real ranging data. The ranging model creator 212 may apply big-data analysis and/or analytics using one or more machine learning algorithms, for example, a neural network (e.g. DNN, GMM, etc.), an SVM and/or the like to analyze large amounts of sensory ranging data for a plurality of real world scenes.
Optionally, the ranging model creator 212 evaluates the sensory ranging data with relation to one or more mounting parameters of the range sensor(s), for example, a position, an orientation, an FOV and/or the like. Naturally, the mounting parameter(s) of a respective range sensor may have at least some effect on the sensory ranging data produced by the respective range sensor. Therefore, assuming the mounting parameter(s) are available to the ranging model creator 212, the ranging model creator 212 may identify optimal, preferred and/or recommended settings for one or more of the mounting parameters.
As shown at 408, using the machine learning algorithm(s), the ranging model creator 212 may analyze the sensory ranging data for a plurality of range sensors for a plurality of real world scenes to identify one or more noise patterns exhibited by one or more of the range sensors. The noise pattern(s) may describe one or more noise characteristics, for example, noise, distortion, latency, calibration offset and/or the like which may be typical, inherent and/or characteristic of one or more of the range sensors.
The noise may result from other objects which affect the object(s) to which the sensory ranging data refers, for example, partially obscuring the reference object(s) and/or the like. The noise may further describe one or more object attributes, for example, an external surface texture, an external surface composition, an external surface material and/or the like of the reference object(s) and/or of the other objects affecting the reference object(s). For example, some surface textures, surface compositions and/or surface materials may reflect rays projected by one or more of the range sensors differently and may therefore affect the accuracy of the acquired sensory ranging data. The ranging model creator 212 may therefore set the noise pattern(s) to map the sensory ranging data accuracy and/or performance to the attributes of the reference object(s) and/or of the other objects affecting the reference object(s) detected in the real world scene. The ranging model creator 212 may extract the object attribute(s) of the reference object(s) and/or of the other objects from the visual imagery data obtained for the real world scene. For example, the ranging model creator 212 may detect a tree in the visual imagery data and may associate the tree with sensory ranging data accuracy and/or range sensor performance which is predefined and/or learned over time by the ranging model creator 212. The ranging model creator 212 may further obtain the object attribute(s) of the reference object(s) and/or of the other objects from one or more records, for example, a metadata record and/or the like associated with one or more of the objects located in the real world scene. Since the object attributes may vary between different geographical locations, areas and/or real world scenes, retrieving the object attribute(s) from the metadata record may allow associating the object(s) with typical attributes as found in the respective real world scene. For example, a detailed map of the real world scene may include designations of structures located in the real world scene and may further include a metadata record associated with one or more of the objects which describes the object attribute(s).
The distortion may result from one or more inherent limitations of the range sensor(s) which may present inaccuracies in the measured range to the reference object(s). The latency may refer to the latency from the time the range sensor(s) captured the sensory ranging data until the time the sensory ranging data is recorded and/or logged by a logging system. This latency may result from one or more causes, for example, the processing time of the range sensor to process the sensory ranging data, the communication time for the range sensor to communicate the sensory ranging data to the logging system and/or the like. The calibration offset may result from one or more inherent limitations of the range sensor(s) wherein the range sensor(s) may not be ideally calibrated even following a calibration sequence. The calibration offset may further be inflicted by inaccurate mounting calibration of the range sensor(s) with respect to the vehicle such that the position, orientation, FOV and/or the like of the range sensor(s) may deviate from the intended position, orientation and/or FOV. Each noise pattern may therefore present values for one or more of the noise characteristics, for example, a noise value, a distortion value, a latency value, a calibration offset value and/or the like.
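For illustration only, the following Python sketch shows one possible representation of a noise pattern holding the four noise characteristics named above and its application to an ideal simulated range measurement; the numeric values and the Gaussian noise model are assumptions.

# Minimal sketch of a noise pattern with the four characteristics named above,
# applied to an ideal simulated range. Values and noise model are assumptions.
import random
from dataclasses import dataclass

@dataclass
class NoisePattern:
    noise_std_m: float          # random measurement noise
    distortion_gain: float      # multiplicative range distortion
    latency_s: float            # capture-to-log latency
    calibration_offset_m: float

    def apply(self, true_range_m: float) -> tuple:
        """Return (reported_range_m, reporting_delay_s) for an ideal range."""
        reported = (true_range_m * self.distortion_gain
                    + self.calibration_offset_m
                    + random.gauss(0.0, self.noise_std_m))
        return reported, self.latency_s

# Assumed values for a LIDAR-like sensor:
# lidar_pattern = NoisePattern(0.03, 1.002, 0.05, -0.10)
# reported_range, delay = lidar_pattern.apply(42.0)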
As shown at 410, the ranging model creator 212 may update a ranging model with the noise pattern(s) to establish a realistic simulation arena for the virtual realistic model replicating the real world scene (geographical area). The ranging model may be used to generate realistic simulation ranging data that may be injected to the autonomous driver 222 during training.
Optionally, the ranging model creator 212 may analyze the sensory ranging data with respect to the real ranging data while evaluating one or more environmental characteristics detected during acquisition of the sensory ranging data (or part thereof) which may affect the sensory ranging data acquisition. The environmental characteristics may include, for example, a weather condition attribute, a timing attribute and/or the like. The weather condition attribute may include, for example, a humidity value, a temperature value, a precipitation value (e.g. rain, snow, etc.), a fog condition value and/or the like. The weather condition attribute(s) may affect the sensory ranging data acquisition, for example, foggy conditions may affect performance of one or more of the range sensors, for instance, the LIDAR sensor, the radar, the camera, the infrared camera and/or the like. In another example, the temperature of the environment may affect the performance of one or more of the range sensors. The timing conditions may include, for example, time of day, date and/or the like which may also affect performance of one or more of the range sensors. For example, the performance of one or more of the range sensors may be affected by light conditions, for example, day light, twilight, evening and/or night.
Reference is made once again to FIG. 1.
Using the ranging model, in addition to generating the simulated sensory ranging data for the static objects identified in the geographical area, the simulator 210 may further generate the simulated sensory ranging data for one or more of the dynamic objects inserted into the virtual realistic model replicating the geographical area. For example, assuming a tree is inserted into the virtual realistic model, the simulator 210 may use the ranging model to create simulated sensory ranging data adapted to the tree according to one or more of the noise pattern(s) identified by the ranging model creator 212 and applied in the ranging model. The simulator 210 may further adapt the simulated sensory ranging data generated for one or more of the static objects according to noise induced by the inserted dynamic object(s), for example, the tree inserted into the virtual realistic model.
Optionally, the simulator 210 may analyze the generated simulation sensory ranging data produced by the emulated range sensor(s) mounted on the emulated vehicle to evaluate performance of the emulated range sensor(s) with respect to one or more of their mounting attributes. The simulator 210 may evaluate the accuracy of the simulated sensory ranging data that is produced by the emulated range sensor(s) and which may be affected by the mounting attribute(s). The simulator 210 may generate and evaluate alternate simulation sensory ranging data as produced by the emulated range sensor(s) having alternate mounting attribute(s), for example, a different orientation, a different position, a different FOV and/or the like. The simulator 210 may determine optimal settings for the mounting attribute(s) and may recommend that the autonomous driving system 220 and/or the autonomous driver 222 apply the optimal settings. Based on the recommended settings, the designer(s) of the autonomous driving system 220 may take one or more actions, for example, adjust one or more of the ranging sensor(s) mounting attribute(s), use different range sensor(s), add/remove range sensor(s) and/or the like.
As shown at 120, the simulator 210 may generate additional simulation data for the virtual realistic model replicating the geographical area, for example, motion data, transport data and/or the like.
The simulated motion data may include one or more motion parameters of the emulated vehicle. The motion parameter(s) may include, for example, a speed parameter, an acceleration parameter, a direction parameter, an orientation parameter, an elevation parameter and/or the like. For example, assuming the emulated vehicle is a ground vehicle, the motion parameters may include a steering wheel angle parameter which may be indicative of the direction of the emulated ground vehicle. The simulated motion data may be injected to the training autonomous driving system 220 through a native input typically used by the autonomous driving system 220 to connect to one or more of the motion sensors and/or positioning sensors. Additionally and/or alternatively, the simulated sensory motion data may be injected to the training autonomous driver 222 as a feed to one or more software interfaces typically used by the autonomous driver 222 to collect the sensory motion data. The simulated motion data may complement the other simulated data of the virtual realistic model to improve replication of a real ride, drive or flight in the geographical area as experienced by the training autonomous driving system 220 and/or the autonomous driver 222.
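For illustration only, the following Python sketch shows one possible record layout for the simulated motion data; the field names and the inject_motion_feed callable are assumptions rather than the actual interface of the training system.

# Minimal sketch of a simulated motion data record mirroring the motion
# parameters listed above; field names and units are assumptions.
from dataclasses import dataclass

@dataclass
class SimulatedMotionSample:
    speed_mps: float
    acceleration_mps2: float
    heading_deg: float         # direction
    orientation_deg: tuple     # (roll, pitch, yaw)
    elevation_m: float
    steering_wheel_angle_deg: float = 0.0  # relevant for emulated ground vehicles

# sample = SimulatedMotionSample(13.9, 0.4, 87.0, (0.1, -0.3, 87.0), 32.5, 4.5)
# inject_motion_feed(sample)   # assumed native motion/positioning input of the trainee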
The simulated transport data may include, for example, information simulating communication of the emulated vehicle with one or more other entities, for example, a vehicle, a control center, a road infrastructure object and/or the like. The communication may simulate, for example, V2X communication between the emulated vehicle and the other entities. The simulated transport data may be injected to the training autonomous driving system 220 as a feed to one or more native inputs, ports and/or links typically used by the autonomous driving system 220 to connect to V2X communication channel(s). Additionally and/or alternatively, the simulated transport data may be injected to the training autonomous driver 222 as a feed to one or more software interfaces typically used by the autonomous driver 222 to collect the transport data. The simulated transport data may further complement the simulation of the virtual realistic model to improve replication of a real ride, drive or flight in the geographical area as experienced by the training autonomous driving system 220.
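For illustration only, the following Python sketch shows one possible encoding of a simulated V2X message exchanged between the emulated vehicle and another simulated entity; the message fields, the JSON encoding and the inject_v2x_feed callable are assumptions and do not follow any specific V2X standard.

# Minimal sketch of a simulated V2X message; fields and encoding are assumptions.
import json
import time

def make_v2x_message(sender_id, entity_type, position, speed_mps):
    return json.dumps({
        "sender": sender_id,
        "type": entity_type,      # e.g. "vehicle", "traffic_light", "control_center"
        "position": position,     # (x, y, z) in the simulated model's frame
        "speed_mps": speed_mps,
        "timestamp": time.time(),
    })

# msg = make_v2x_message("sim-vehicle-17", "vehicle", (120.4, 33.0, 0.0), 11.2)
# inject_v2x_feed(msg)   # assumed V2X input of the training system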
It is expected that during the life of a patent maturing from this application many relevant devices, systems, methods and computer programs will be developed and the scope of the terms imaging sensor, range sensor, machine learning algorithm and neural network is intended to include all such new technologies a priori.
As used herein the term "about" refers to ± 10 %. The terms "comprises", "comprising", "includes", "including", "having" and their conjugates mean "including but not limited to". This term encompasses the terms "consisting of" and "consisting essentially of".
The phrase "consisting essentially of" means that the composition or method may include additional ingredients and/or steps, but only if the additional ingredients and/or steps do not materially alter the basic and novel characteristics of the claimed composition or method.
As used herein, the singular form "a", "an" and "the" include plural references unless the context clearly dictates otherwise. For example, the term "a compound" or "at least one compound" may include a plurality of compounds, including mixtures thereof.
Throughout this application, various embodiments of this invention may be presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.
Whenever a numerical range is indicated herein, it is meant to include any cited numeral (fractional or integral) within the indicated range. The phrases "ranging/ranges between" a first indicate number and a second indicate number and "ranging/ranges from" a first indicate number "to" a second indicate number are used herein interchangeably and are meant to include the first and second indicated numbers and all the fractional and integral numerals therebetween.
The word "exemplary" is used herein to mean "serving as an example, an instance or an illustration". Any embodiment described as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments and/or to exclude the incorporation of features from other embodiments. The word "optionally" is used herein to mean "is provided in some embodiments and not provided in other embodiments". Any particular embodiment of the invention may include a plurality of "optional" features unless such features conflict.
It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable subcombination or as suitable in any other described embodiment of the invention. Certain features described in the context of various embodiments are not to be considered essential features of those embodiments, unless the embodiment is inoperative without those elements.
Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims.
All publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention. To the extent that section headings are used, they should not be construed as necessarily limiting.

Claims

WHAT IS CLAIMED IS:
1. A computer implemented method of creating a simulated realistic virtual model of a geographical area for training an autonomous driving system, comprising:
obtaining geographic map data of a geographical area;
obtaining visual imagery data of the geographical area;
classifying a plurality of static objects identified in the visual imagery data to corresponding labels to designate a plurality of labeled objects;
superimposing the plurality of labeled objects over the geographic map data;
generating a virtual three dimensional (3D) realistic model emulating the geographical area by synthesizing a corresponding visual texture for each of the plurality of labeled objects; and
injecting synthetic 3D imaging feed of the realistic model to an input of at least one imaging sensor of the autonomous driving system controlling movement of an emulated vehicle in the realistic model, the synthetic 3D imaging feed is generated to depict the realistic model from a point of view of at least one emulated imaging sensor mounted on the emulated vehicle.
2. The computer implemented method of claim 1, wherein the synthesizing is done using at least one of the following implementations:
applying at least one conditional generative adversarial neural network (cGAN) to transform a label of at least one of the plurality of labeled objects to a respective visual texture,
extracting the visual texture of at least one of the plurality of labeled objects from the visual imagery data, and
retrieving the visual texture of at least one of the plurality of labeled objects from a repository comprising a plurality of texture images of a plurality of static objects.
3. The computer implemented method of claim 1, wherein the synthetic 3D imaging feed is injected to a physical input of the autonomous driving system adapted to receive the input of the at least one imaging sensor.
4. The computer implemented method of claim 1, wherein the autonomous driving system is implemented as a computer software program, the synthetic 3D imaging data is injected using at least one virtual driver emulating a feed of the at least one imaging sensor.
5. The computer implemented method of claim 1, further comprising adjusting at least one mounting attribute of the at least one imaging sensor according to analysis of a visibility performance of the at least one emulated imaging sensor emulating the at least one imaging sensor, the at least one mounting attribute is a member of a group consisting of: a positioning on the emulated vehicle, a Field Of View (FOV), a resolution and an overlap region with at least one adjacent imaging sensor.
6. The computer implemented method of claim 1, further comprising simulating a sensory ranging data feed generated by at least one emulated range sensor mounted on the emulated vehicle, the sensory ranging data feed is simulated using a simulated ranging model applying at least one noise pattern associated with at least one range sensor emulated by the at least one emulated range sensor.
7. The computer implemented method of claim 6, further comprising the at least one noise pattern is adjusted according to at least one object attribute of at least one of a plurality of objects emulated in the realistic model.
8. The computer implemented method of claim 6, further comprising adjusting at least one mounting attribute of the at least one range sensor according to analysis of a range accuracy performance of the at least one range sensor, the at least one mounting attribute is a member of a group consisting of: a positioning on the emulated vehicle, an FOV, a range and an overlap region with at least one adjacent range sensor.
9. The computer implemented method of claim 1, further comprising inserting at least one simulated dynamic object into the realistic model, the at least one simulated dynamic object is a member of a group consisting of: a ground vehicle, an aerial vehicle, a naval vehicle, a pedestrian, an animal, vegetation and a dynamically changing road infrastructure object.
10. The computer implemented method of claim 9, further comprising applying at least one of a plurality of driver behavior classes for controlling a movement of at least one simulated ground vehicle such as the ground vehicle, the at least one driver behavior class is adapted to the geographical area according to an analysis of typical driver behavior patterns identified in the geographical area, the at least one driver behavior class is selected according to a density function calculated for the geographical area according to recurrence of driver prototypes corresponding to the at least one driver behavior class in the geographical area.
11. The computer implemented method of claim 1, further comprising injecting to the autonomous driving system simulated motion data emulated by at least one emulated motion sensor associated with the emulated vehicle, the simulated motion data comprising at least one motion parameter, the at least one motion parameter is a member of a group consisting of: a speed parameter, an acceleration parameter, a direction parameter, an orientation parameter and an elevation parameter.
12. The computer implemented method of claim 1, further comprising injecting to the autonomous driving system simulated transport data comprising Vehicle to Anything (V2X) communication between the emulated vehicle and at least one other entity.
13. The computer implemented method of claim 1, further comprising adjusting the synthetic imaging data according to at least one environmental characteristic which is a member of a group consisting of: a lighting condition, a weather condition attribute and a timing attribute.
14. The computer implemented method of claim 1, wherein the geographic map data includes at least one member of a group consisting of: a two dimensional (2D) map, a 3D map, an orthophoto map, an elevation map and a detailed map comprising object description for objects present in the geographical area.
15. The computer implemented method of claim 1, wherein the visual imagery data comprises at least one image which is a member of a group consisting of: a ground level image, an aerial image and a satellite image, wherein the at least one image is a 2D image or a 3D image.
16. The computer implemented method of claim 1, wherein each of the plurality of static objects is a member of a group consisting of: a road, a road infrastructure object, an intersection, a building, a monument, a structure, a natural object and a terrain surface.
17. The computer implemented method of claim 1, wherein the at least one imaging sensor is a member of a group consisting of: a camera, a video camera, an infrared camera and a night vision sensor.
18. A system for creating a simulated virtual realistic model of a geographical area for training an autonomous driving system, comprising:
at least one processor adapted to execute code, the code comprising:
code instructions to obtain geographic map data of a geographical area; code instructions to obtain visual imagery data of the geographical area;
code instructions to classify a plurality of static objects identified in the visual imagery data to corresponding labels to designate a plurality of labeled objects;
code instructions to superimpose the plurality of labeled objects over the geographic map data;
code instructions to generate a virtual three dimensional (3D) realistic model emulating the geographical area by synthesizing a corresponding visual texture for each of the plurality of labeled objects; and
code instructions to inject synthetic 3D imaging feed of the realistic model to an input of at least one imaging sensor of the autonomous driving system controlling movement of an emulated vehicle in the realistic model, the synthetic 3D imaging feed is generated to depict the realistic model from a point of view of at least one emulated imaging sensor mounted on the emulated vehicle.
19. A computer implemented method of creating a simulated ranging model of a real world scene used for training an autonomous driving system, comprising:
obtaining real ranging data to a plurality of objects present in a real world scene;
obtaining sensory ranging data from a plurality of range sensors depicting the real world scene, each of the plurality of range sensors is associated with positioning data indicating a positioning of the each range sensor, the positioning data is obtained from at least one positioning sensor associated with the each range sensor;
analyzing the sensory ranging data, adjusted according to the positioning data, with respect to the real ranging data to identify at least one noise pattern affecting measurement accuracy degradation exhibited by at least one of the plurality of range sensors; and
updating a ranging model with the at least one noise pattern to generate realistic simulation ranging data for training an autonomous driving system.
20. The computer implemented method of claim 19, wherein the real ranging data is provided from at least one source which is a member of a group consisting of: a real world measurement, a map based calculation and a calculation based on image processing of at least one image of the real world scene.
21. The computer implemented method of claim 19, wherein the plurality of range sensors includes at least one member of a group consisting of: a LIDAR sensor, a radar, a camera, an infrared camera and an ultra-sonic sensor.
22. The computer implemented method of claim 19, wherein the at least one positioning sensor is a member of a group consisting of: a Global Positioning system (GPS) sensor, a gyroscope, an accelerometer, an Inertial Measurement Unit (IMU) sensor and an elevation sensor.
23. The computer implemented method of claim 19, further comprising the positioning data includes motion data comprising at least one motion parameter of the associated each range sensor, the at least one motion parameter is a member of a group consisting of: a speed parameter, an acceleration parameter, a direction parameter, an orientation parameter and an elevation parameter.
24. The computer implemented method of claim 19, wherein the analysis is a statistics based prediction analysis conducted using at least one machine learning algorithm which is a member of a group consisting of: a neural network and a Support Vector Machine (SVM).
25. The computer implemented method of claim 19, wherein the at least one noise pattern comprises at least one noise characteristic which is a member of a group consisting of: a noise value, a distortion value, a latency value and a calibration offset value.
26. The computer implemented method of claim 19, further comprising the analysis of the sensory ranging data is done according to at least one environmental characteristic detected during acquisition of the sensory ranging data, the at least one environmental characteristic is a member of a group consisting of: a weather condition attribute, and a timing attribute.
27. The computer implemented method of claim 19, further comprising adjusting the at least one noise pattern according to at least one object attribute of at least one of the plurality of objects, the at least one object attribute affects the ranging data produced by the each range sensor, the at least one object attribute is a member of a group consisting of: an external surface texture, an external surface composition and an external surface material.
28. The computer implemented method of claim 27, wherein the at least one object attribute is extracted from synthetic 3D imaging data generated for the real world scene.
29. The computer implemented method of claim 27, wherein the at least one object attribute is retrieved from a metadata record associated with the real world scene.
30. A computer implemented method of training a driver behavior simulator according to a geographical area, comprising:
obtaining sensory data generated by a plurality of sensor sets mounted on a plurality of vehicles driven by a plurality of drivers in a geographical area, the sensor set comprising at least one motion sensor;
analyzing the sensory data to identify a plurality of movement patterns indicative of a plurality of driver behavior patterns exhibited by the plurality of drivers;
classifying at least some of the plurality of drivers to one of a plurality of driver behavior classes according to one of the plurality of driver behavior patterns detected for the each driver;
calculating a driver behavior density function associated with the geographical area based on a recurrence of each of the plurality of driver behavior classes detected in the geographical area; and
updating a driver behavior simulator with the plurality of driver behavior classes and the driver behavior density function to generate realistic driver behavior data adapted to the geographical area for training an autonomous driving system.
31. The computer implemented method of claim 30, wherein the at least one motion sensor is a member of a group consisting of: a Global Positioning system (GPS) sensor, a gyroscope, an accelerometer, an Inertial Measurement Unit (IMU) sensor and an elevation sensor.
32. The computer implemented method of claim 30, wherein the analysis is a statistics based prediction analysis conducted by an evolving learning algorithm using at least one machine learning algorithm which is a member of a group consisting of: a neural network and a Support Vector Machine (SVM).
33. The computer implemented method of claim 30, wherein each of the plurality of driver behavior patterns comprises at least one motion parameter which is a member of a group consisting of: a speed parameter, an acceleration parameter, a braking parameter, a direction parameter and an orientation parameter.
34. The computer implemented method of claim 30, further comprising the analysis of the sensory data is done according to at least one environmental characteristic detected during acquisition of the sensory data, the at least one environmental characteristic is a member of a group consisting of: a weather condition attribute and a timing attribute.
35. The computer implemented method of claim 30, wherein the analysis of the sensory data further comprising analyzing additional sensory data received from at least one outward sensor included in at least one of the plurality of sensor sets, the at least one outward sensor depicting the geographical area as viewed from one of the plurality of vehicles associated with the at least one sensor set.
36. The computer implemented method of claim 35, wherein the at least one outward sensor is a member of a group consisting of: a camera, a night vision camera, a LIDAR sensor, a radar and an ultra-sonic sensor.
37. The computer implemented method of claim 35, further comprising enhancing at least one of the plurality of driver behavior patterns based on the analysis of the additional sensory data, the at least one enhanced driver behavior pattern comprises at least one additional driving characteristic which is a member of a group consisting of: a tailgating characteristic, an in-lane position characteristic and a double-parking tendency characteristic.
US11615558B2 (en) 2020-02-18 2023-03-28 Dspace Gmbh Computer-implemented method and system for generating a virtual vehicle environment
US11648945B2 (en) 2019-03-11 2023-05-16 Nvidia Corporation Intersection detection and classification in autonomous machine applications
US11698272B2 (en) 2019-08-31 2023-07-11 Nvidia Corporation Map creation and localization for autonomous driving applications
US11704890B2 (en) 2018-12-28 2023-07-18 Nvidia Corporation Distance to obstacle detection in autonomous machine applications
US11769052B2 (en) 2018-12-28 2023-09-26 Nvidia Corporation Distance estimation to objects and free-space boundaries in autonomous machine applications
US20230350399A1 (en) * 2019-03-29 2023-11-02 Tusimple, Inc. Operational testing of autonomous vehicles
US11966838B2 (en) 2018-06-19 2024-04-23 Nvidia Corporation Behavior-guided path planning in autonomous machine applications
US11978266B2 (en) 2020-10-21 2024-05-07 Nvidia Corporation Occupant attentiveness and cognitive load monitoring for autonomous and semi-autonomous driving applications
US12077190B2 (en) 2020-05-18 2024-09-03 Nvidia Corporation Efficient safety aware path selection and planning for autonomous machine applications

Families Citing this family (120)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9836895B1 (en) 2015-06-19 2017-12-05 Waymo Llc Simulating virtual objects
JP6548690B2 (en) * 2016-10-06 2019-07-24 株式会社アドバンスド・データ・コントロールズ Simulation system, simulation program and simulation method
FR3062229A1 (en) * 2017-01-26 2018-07-27 Parrot Air Support Method for displaying on a screen at least one representation of an object, computer program, electronic display device and apparatus thereof
KR20180092778A (en) * 2017-02-10 2018-08-20 한국전자통신연구원 Apparatus for providing sensory effect information, image processing engine, and method thereof
US20180267538A1 (en) * 2017-03-15 2018-09-20 Toyota Jidosha Kabushiki Kaisha Log-Based Vehicle Control System Verification
WO2018176000A1 (en) 2017-03-23 2018-09-27 DeepScale, Inc. Data synthesis for autonomous control systems
US10916022B2 (en) * 2017-03-27 2021-02-09 Shenzhen Institutes Of Advanced Technology Chinese Academy Of Sciences Texture synthesis method, and device for same
JP6780029B2 (en) * 2017-04-27 2020-11-04 ベイジン ディディ インフィニティ テクノロジー アンド ディベロップメント カンパニー リミティッド Systems and methods for route planning
US11573573B2 (en) 2017-06-06 2023-02-07 Plusai, Inc. Method and system for distributed learning and adaptation in autonomous driving vehicles
US11042155B2 (en) * 2017-06-06 2021-06-22 Plusai Limited Method and system for closed loop perception in autonomous driving vehicles
US11392133B2 (en) 2017-06-06 2022-07-19 Plusai, Inc. Method and system for object centric stereo in autonomous driving vehicles
US10643368B2 (en) * 2017-06-27 2020-05-05 The Boeing Company Generative image synthesis for training deep learning machines
US10733755B2 (en) * 2017-07-18 2020-08-04 Qualcomm Incorporated Learning geometric differentials for matching 3D models to objects in a 2D image
US10671349B2 (en) 2017-07-24 2020-06-02 Tesla, Inc. Accelerated mathematical engine
US11893393B2 (en) 2017-07-24 2024-02-06 Tesla, Inc. Computational array microprocessor system with hardware arbiter managing memory requests
US11409692B2 (en) 2017-07-24 2022-08-09 Tesla, Inc. Vector computational unit
US11157441B2 (en) 2017-07-24 2021-10-26 Tesla, Inc. Computational array microprocessor system using non-consecutive data formatting
US11029693B2 (en) * 2017-08-08 2021-06-08 Tusimple, Inc. Neural network based vehicle dynamics model
US11074827B2 (en) * 2017-08-25 2021-07-27 Aurora Flight Sciences Corporation Virtual reality system for aerial vehicle
US10495421B2 (en) 2017-08-25 2019-12-03 Aurora Flight Sciences Corporation Aerial vehicle interception system
US11064184B2 (en) 2017-08-25 2021-07-13 Aurora Flight Sciences Corporation Aerial vehicle imaging and targeting system
US11334762B1 (en) * 2017-09-07 2022-05-17 Aurora Operations, Inc. Method for image analysis
US20190079526A1 (en) * 2017-09-08 2019-03-14 Uber Technologies, Inc. Orientation Determination in Object Detection and Tracking for Autonomous Vehicles
US11657266B2 (en) 2018-11-16 2023-05-23 Honda Motor Co., Ltd. Cooperative multi-goal, multi-agent, multi-stage reinforcement learning
US11093829B2 (en) * 2017-10-12 2021-08-17 Honda Motor Co., Ltd. Interaction-aware decision making
US10739776B2 (en) * 2017-10-12 2020-08-11 Honda Motor Co., Ltd. Autonomous vehicle policy generation
KR102106875B1 (en) * 2017-10-24 2020-05-08 엔에이치엔 주식회사 System and method for avoiding accidents during autonomous driving based on vehicle learning
US20190164007A1 (en) * 2017-11-30 2019-05-30 TuSimple Human driving behavior modeling system using machine learning
US11273836B2 (en) 2017-12-18 2022-03-15 Plusai, Inc. Method and system for human-like driving lane planning in autonomous driving vehicles
AT520781A2 (en) * 2017-12-22 2019-07-15 Avl List Gmbh Behavior model of an environmental sensor
US11620419B2 (en) * 2018-01-24 2023-04-04 Toyota Research Institute, Inc. Systems and methods for identifying human-based perception techniques
US11561791B2 (en) 2018-02-01 2023-01-24 Tesla, Inc. Vector computational unit receiving data elements in parallel from a last row of a computational array
US11417107B2 (en) * 2018-02-19 2022-08-16 Magna Electronics Inc. Stationary vision system at vehicle roadway
EP3543985A1 (en) * 2018-03-21 2019-09-25 dSPACE digital signal processing and control engineering GmbH Simulation of different traffic situations for a test vehicle
US10877152B2 (en) * 2018-03-27 2020-12-29 The Mathworks, Inc. Systems and methods for generating synthetic sensor data
WO2019239211A2 (en) * 2018-06-11 2019-12-19 Insurance Services Office, Inc. System and method for generating simulated scenes from open map data for machine learning
US11215999B2 (en) 2018-06-20 2022-01-04 Tesla, Inc. Data pipeline and deep learning system for autonomous driving
US11361457B2 (en) 2018-07-20 2022-06-14 Tesla, Inc. Annotation cross-labeling for autonomous control systems
US10768629B2 (en) * 2018-07-24 2020-09-08 Pony Ai Inc. Generative adversarial network enriched driving simulation
US11636333B2 (en) 2018-07-26 2023-04-25 Tesla, Inc. Optimizing neural network structures for embedded systems
US10845818B2 (en) * 2018-07-30 2020-11-24 Toyota Research Institute, Inc. System and method for 3D scene reconstruction of agent operation sequences using low-level/high-level reasoning and parametric models
CN108681264A (en) * 2018-08-10 2018-10-19 成都合纵连横数字科技有限公司 Intelligent vehicle digital simulation test device
US11562231B2 (en) 2018-09-03 2023-01-24 Tesla, Inc. Neural networks for embedded devices
CN109145489B (en) * 2018-09-07 2020-01-17 百度在线网络技术(北京)有限公司 Obstacle distribution simulation method, device and terminal based on probability graph
FR3085761B1 (en) * 2018-09-11 2021-01-15 Continental Automotive France System and method for locating the position of a road object by unsupervised machine learning
WO2020077117A1 (en) 2018-10-11 2020-04-16 Tesla, Inc. Systems and methods for training machine models with augmented data
US10754030B2 (en) * 2018-10-23 2020-08-25 Baidu Usa Llc Methods and systems for radar simulation and object classification
US11196678B2 (en) 2018-10-25 2021-12-07 Tesla, Inc. QOS manager for system on a chip communications
US11816585B2 (en) 2018-12-03 2023-11-14 Tesla, Inc. Machine learning models operating at different frequencies for autonomous vehicles
US11537811B2 (en) 2018-12-04 2022-12-27 Tesla, Inc. Enhanced object detection for autonomous vehicles based on field view
US20200183400A1 (en) * 2018-12-11 2020-06-11 Microsoft Technology Licensing, Llc Rfid-based navigation of an autonomous guided vehicle
US11610117B2 (en) 2018-12-27 2023-03-21 Tesla, Inc. System and method for adapting a neural network model on a hardware platform
US11150664B2 (en) 2019-02-01 2021-10-19 Tesla, Inc. Predicting three-dimensional features for autonomous driving
US10997461B2 (en) 2019-02-01 2021-05-04 Tesla, Inc. Generating ground truth for machine learning from time series elements
US11567514B2 (en) 2019-02-11 2023-01-31 Tesla, Inc. Autonomous and user controlled vehicle summon to a target
US10956755B2 (en) 2019-02-19 2021-03-23 Tesla, Inc. Estimating object properties using visual image data
EP3731151A1 (en) * 2019-04-23 2020-10-28 Siemens Aktiengesellschaft Method and device for generating a computer-readable model for a technical system
US11295113B2 (en) * 2019-04-25 2022-04-05 ID Lynx Ltd. 3D biometric identification system for identifying animals
CN111010414B (en) * 2019-04-29 2022-06-21 北京五一视界数字孪生科技股份有限公司 Simulation data synchronization method and device, storage medium and electronic equipment
CN110072193B (en) * 2019-04-29 2019-12-17 清华大学 Communication topology structure for intelligent connected vehicle testing
CN110209146B (en) * 2019-05-23 2020-12-01 杭州飞步科技有限公司 Test method, device and equipment for automatic driving vehicle and readable storage medium
CN110162899A (en) * 2019-05-28 2019-08-23 禾多科技(北京)有限公司 Field-end lidar rapid deployment method
US11391649B2 (en) * 2019-05-29 2022-07-19 Pony Ai Inc. Driving emulation system for an autonomous vehicle
WO2020257366A1 (en) * 2019-06-17 2020-12-24 DeepMap Inc. Updating high definition maps based on lane closure and lane opening
CN112113593A (en) * 2019-06-20 2020-12-22 宝马股份公司 Method and system for testing sensor configuration of vehicle
US11170567B2 (en) * 2019-06-28 2021-11-09 Woven Planet North America, Inc. Dynamic object detection model based on static map collection data
DE102019209535A1 (en) * 2019-06-28 2020-12-31 Robert Bosch Gmbh Method for providing a digital road map
US11774250B2 (en) * 2019-07-05 2023-10-03 Nvidia Corporation Using high definition maps for generating synthetic sensor data for autonomous vehicles
CN110412374B (en) * 2019-07-24 2024-02-23 苏州瑞地测控技术有限公司 ADAS HIL test system based on multisensor
CN110399688B (en) * 2019-07-30 2023-04-07 奇瑞汽车股份有限公司 Method and device for determining environment working condition of automatic driving and storage medium
CN110782391B (en) * 2019-09-10 2022-06-03 腾讯科技(深圳)有限公司 Image processing method and device in driving simulation scene and storage medium
CN112578781B (en) * 2019-09-29 2022-12-30 华为技术有限公司 Data processing method, device, chip system and medium
CN110673636B (en) * 2019-09-30 2023-01-31 上海商汤临港智能科技有限公司 Unmanned simulation test system and method, and storage medium
CN110750052A (en) * 2019-09-30 2020-02-04 奇点汽车研发中心有限公司 Driving model training method and device, electronic equipment and medium
TWI723565B (en) * 2019-10-03 2021-04-01 宅妝股份有限公司 Method and system for rendering three-dimensional layout plan
US20210150799A1 (en) * 2019-11-15 2021-05-20 Waymo Llc Generating Environmental Data
US20210150087A1 (en) * 2019-11-18 2021-05-20 Sidewalk Labs LLC Methods, systems, and media for data visualization and navigation of multiple simulation results in urban design
CN111123738B (en) * 2019-11-25 2023-06-30 的卢技术有限公司 Method and system for improving training efficiency of deep reinforcement learning algorithm in simulation environment
US11494533B2 (en) * 2019-11-27 2022-11-08 Waymo Llc Simulations with modified agents for testing autonomous vehicle software
US11551414B2 (en) 2019-12-02 2023-01-10 Woven Planet North America, Inc. Simulation architecture for on-vehicle testing and validation
US10999719B1 (en) * 2019-12-03 2021-05-04 Gm Cruise Holdings Llc Peer-to-peer autonomous vehicle communication
US11274930B1 (en) * 2019-12-11 2022-03-15 Amazon Technologies, Inc. System for assessing an autonomously determined map
US20210180960A1 (en) * 2019-12-17 2021-06-17 GM Global Technology Operations LLC Road attribute detection and classification for map augmentation
US11681047B2 (en) 2019-12-19 2023-06-20 Argo AI, LLC Ground surface imaging combining LiDAR and camera data
CN111223354A (en) * 2019-12-31 2020-06-02 塔普翊海(上海)智能科技有限公司 Unmanned trolley, and AR and AI technology-based unmanned trolley practical training platform and method
CN111243058B (en) * 2019-12-31 2024-03-22 富联裕展科技(河南)有限公司 Object simulation image generation method and computer readable storage medium
US11170568B2 (en) 2020-01-23 2021-11-09 Rockwell Collins, Inc. Photo-realistic image generation using geo-specific data
US11544832B2 (en) * 2020-02-04 2023-01-03 Rockwell Collins, Inc. Deep-learned generation of accurate typical simulator content via multiple geo-specific data channels
US11966673B2 (en) * 2020-03-13 2024-04-23 Nvidia Corporation Sensor simulation and learning sensor models with generative machine learning methods
US20220351507A1 (en) * 2020-05-07 2022-11-03 Hypergiant Industries, Inc. Volumetric Baseline Image Generation and Object Identification
CN111797001A (en) * 2020-05-27 2020-10-20 中汽数据有限公司 Method for constructing automatic driving simulation test model based on SCANeR
CN116194350A (en) * 2020-05-27 2023-05-30 柯尼亚塔有限公司 Generating multiple simulated edge condition driving scenarios
CN111929718A (en) * 2020-06-12 2020-11-13 东莞市普灵思智能电子有限公司 Automatic driving object detection and positioning system and method
AT523641B1 (en) * 2020-06-16 2021-10-15 Avl List Gmbh System for testing a driver assistance system of a vehicle
US11514343B2 (en) 2020-06-30 2022-11-29 Waymo Llc Simulating degraded sensor data
US11468551B1 (en) 2020-07-24 2022-10-11 Norfolk Southern Corporation Machine-learning framework for detecting defects or conditions of railcar systems
US11507779B1 (en) 2020-07-24 2022-11-22 Norfolk Southern Corporation Two-stage deep learning framework for detecting the condition of rail car coupler systems
CN111983935B (en) * 2020-08-19 2024-04-05 北京京东叁佰陆拾度电子商务有限公司 Performance evaluation method and device
US11938957B2 (en) * 2020-08-24 2024-03-26 Motional Ad Llc Driving scenario sampling for training/tuning machine learning models for vehicles
CN112016439B (en) * 2020-08-26 2021-06-29 上海松鼠课堂人工智能科技有限公司 Game learning environment creation method and system based on adversarial neural network
CN112347693B (en) * 2020-10-26 2023-12-22 上海感探号信息科技有限公司 Vehicle running dynamic mirror image simulation method, device and system
CN116529784A (en) * 2020-11-05 2023-08-01 德斯拜思有限公司 Method and system for adding lidar data
KR20220065126A (en) * 2020-11-12 2022-05-20 현대자동차주식회사 Apparatus and Method for detecting driving lane based on multi-sensor
US20220147867A1 (en) * 2020-11-12 2022-05-12 International Business Machines Corporation Validation of gaming simulation for ai training based on real world activities
CN112527940B (en) * 2020-12-18 2024-07-16 上海商汤临港智能科技有限公司 Method and device for generating simulation map, electronic equipment and storage medium
CN116416706A (en) * 2020-12-18 2023-07-11 北京百度网讯科技有限公司 Data acquisition method and device
CN112859907A (en) * 2020-12-25 2021-05-28 湖北航天飞行器研究所 Rocket debris high-altitude detection method based on three-dimensional special effect simulation under condition of few samples
DE102021201177A1 (en) 2021-02-09 2022-08-11 Zf Friedrichshafen Ag Computer-implemented method and computer program for generating routes for an automated driving system
CN113297530B (en) * 2021-04-15 2024-04-09 南京大学 Automatic driving black box test system based on scene search
CN113126628A (en) * 2021-04-26 2021-07-16 上海联适导航技术股份有限公司 Method, system and equipment for automatic driving of agricultural machinery and readable storage medium
CN114563014B (en) * 2021-12-15 2023-08-04 武汉中海庭数据技术有限公司 Opendrive map automatic detection method based on simulation image
KR20230092514A (en) * 2021-12-17 2023-06-26 삼성전자주식회사 Rendering method and device
CN114995401B (en) * 2022-05-18 2024-06-07 广西科技大学 Trolley automatic driving method based on vision and CNN
US12060067B2 (en) * 2022-07-07 2024-08-13 Toyota Motor Engineering & Manufacturing North America, Inc. Systems, methods, and vehicles for correcting driving behavior of a driver of a vehicle
CN115526055B (en) * 2022-09-30 2024-02-13 北京瑞莱智慧科技有限公司 Model robustness detection method, related device and storage medium
CN115687163B (en) * 2023-01-05 2023-04-07 中汽智联技术有限公司 Scene library construction method, device, equipment and storage medium
CN116129292B (en) * 2023-01-13 2024-07-26 华中科技大学 Infrared vehicle target detection method and system based on few sample augmentation
JP2024115459A (en) * 2023-02-14 2024-08-26 株式会社Subaru Vehicles with road surface drawing function
DE102023203633A1 (en) * 2023-04-20 2024-10-24 Stellantis Auto Sas Hybrid traffic simulation with agent models for testing teleoperated vehicles
CN117036649A (en) * 2023-06-09 2023-11-10 电子科技大学 Three-dimensional map display and interaction method based on mixed reality scene

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070088469A1 (en) * 2005-10-04 2007-04-19 Oshkosh Truck Corporation Vehicle control system and method
US20080033684A1 (en) * 2006-07-24 2008-02-07 The Boeing Company Autonomous Vehicle Rapid Development Testbed Systems and Methods
US20140320485A1 (en) * 2008-11-05 2014-10-30 Hover, Inc. System for generating geocoded three-dimensional (3d) models
US20160137206A1 (en) * 2014-11-13 2016-05-19 Nec Laboratories America, Inc. Continuous Occlusion Models for Road Scene Understanding

Family Cites Families (57)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1412918B1 (en) * 2001-07-12 2007-02-21 DO Labs Method and system for producing formatted data related to geometric distortions
US6917893B2 (en) * 2002-03-14 2005-07-12 Activmedia Robotics, Llc Spatial data collection apparatus and method
US7698055B2 (en) * 2004-11-16 2010-04-13 Microsoft Corporation Traffic forecasting employing modeling and analysis of probabilistic interdependencies and contextual data
KR100657946B1 (en) * 2005-01-28 2006-12-14 삼성전자주식회사 Method and apparatus for creating procedural textures for natural phenomena in 3D graphics, and recording medium therefor
JP2006244119A (en) * 2005-03-03 2006-09-14 Kansai Paint Co Ltd Retrieval device for texture image, retrieval method, retrieval program and recording medium with its program recorded
US7352292B2 (en) * 2006-01-20 2008-04-01 Keith Alter Real-time, three-dimensional synthetic vision display of sensor-validated terrain data
CN101201403A (en) * 2007-04-27 2008-06-18 北京航空航天大学 Three-dimensional polarization imaging lidar remote sensor
KR100912715B1 (en) * 2007-12-17 2009-08-19 한국전자통신연구원 Method and apparatus of digital photogrammetry by integrated modeling for different types of sensors
JP5278728B2 (en) * 2008-02-28 2013-09-04 アイシン精機株式会社 Distance image sensor calibration apparatus and calibration method
US8301374B2 (en) * 2009-08-25 2012-10-30 Southwest Research Institute Position estimation for ground vehicle navigation based on landmark identification/yaw rate and perception of landmarks
JP2011204195A (en) * 2010-03-26 2011-10-13 Panasonic Electric Works Co Ltd Device and method for inspection of irregularity
US9043129B2 (en) * 2010-10-05 2015-05-26 Deere & Company Method for governing a speed of an autonomous vehicle
CN102466479B (en) * 2010-11-16 2013-08-21 深圳泰山在线科技有限公司 System and method for measuring anti-jamming distance of moving object
US9195231B2 (en) * 2011-11-09 2015-11-24 Abyssal S.A. System and method of operation for remotely operated vehicles with superimposed 3D imagery
CN102536196B (en) * 2011-12-29 2014-06-18 中国科学院自动化研究所 System and method for underground attitude measurement based on laser ranging and acceleration measurement
US11094137B2 (en) * 2012-02-24 2021-08-17 Matterport, Inc. Employing three-dimensional (3D) data predicted from two-dimensional (2D) images using neural networks for 3D modeling applications and other applications
US9324190B2 (en) * 2012-02-24 2016-04-26 Matterport, Inc. Capturing and aligning three-dimensional scenes
US9380275B2 (en) * 2013-01-30 2016-06-28 Insitu, Inc. Augmented video system providing enhanced situational awareness
KR102027771B1 (en) * 2013-01-31 2019-10-04 한국전자통신연구원 Obstacle detecting apparatus and method for adaptation to vehicle velocity
US9262853B2 (en) 2013-03-15 2016-02-16 Disney Enterprises, Inc. Virtual scene generation based on imagery
US20140309836A1 (en) 2013-04-16 2014-10-16 Neya Systems, Llc Position Estimation and Vehicle Control in Autonomous Multi-Vehicle Convoys
WO2014205757A1 (en) * 2013-06-28 2014-12-31 Google Inc. Systems and methods for generating accurate sensor corrections based on video input
US9996976B2 (en) * 2014-05-05 2018-06-12 Avigilon Fortress Corporation System and method for real-time overlay of map features onto a video feed
JP2016004017A (en) * 2014-06-19 2016-01-12 ダブル技研株式会社 Information acquisition apparatus and image processing method
US10036800B2 (en) * 2014-08-08 2018-07-31 The United States Of America, As Represented By The Secretary Of The Navy Systems and methods for using coherent noise filtering
CN104361593B (en) * 2014-11-14 2017-09-19 南京大学 Color image quality evaluation method based on HVS and quaternions
KR102534792B1 (en) * 2015-02-10 2023-05-19 모빌아이 비젼 테크놀로지스 엘티디. Sparse map for autonomous vehicle navigation
US9369689B1 (en) * 2015-02-24 2016-06-14 HypeVR Lidar stereo fusion live action 3D model video reconstruction for six degrees of freedom 360° volumetric virtual reality video
US9781569B2 (en) * 2015-03-12 2017-10-03 GM Global Technology Operations LLC Systems and methods for resolving positional ambiguities using access point information
US20160314224A1 (en) * 2015-04-24 2016-10-27 Northrop Grumman Systems Corporation Autonomous vehicle simulation system
US9594378B2 (en) * 2015-07-31 2017-03-14 Delphi Technologies, Inc. Variable object detection field-of-focus for automated vehicle control
US10754034B1 (en) * 2015-09-30 2020-08-25 Near Earth Autonomy, Inc. Apparatus for redirecting field of view of lidar scanner, and lidar scanner including same
WO2017095948A1 (en) * 2015-11-30 2017-06-08 Pilot Ai Labs, Inc. Improved general object detection using neural networks
CN105387796B (en) * 2015-12-07 2017-12-22 贵州新安航空机械有限责任公司 Detection circuit and detection method for an inductive displacement transducer
US20170160744A1 (en) * 2015-12-08 2017-06-08 Delphi Technologies, Inc. Lane Extension Of Lane-Keeping System By Ranging-Sensor For Automated Vehicle
US9740944B2 (en) * 2015-12-18 2017-08-22 Ford Global Technologies, Llc Virtual sensor data generation for wheel stop detection
US10474964B2 (en) * 2016-01-26 2019-11-12 Ford Global Technologies, Llc Training algorithm for collision avoidance
US10019652B2 (en) * 2016-02-23 2018-07-10 Xerox Corporation Generating a virtual world to assess real-world video analysis performance
CN116659526A (en) * 2016-03-15 2023-08-29 康多尔收购第二分公司 System and method for providing vehicle awareness
US10353053B2 (en) * 2016-04-22 2019-07-16 Huawei Technologies Co., Ltd. Object detection using radar and machine learning
US20180349526A1 (en) 2016-06-28 2018-12-06 Cognata Ltd. Method and system for creating and simulating a realistic 3d virtual world
US10489972B2 (en) 2016-06-28 2019-11-26 Cognata Ltd. Realistic 3D virtual world creation and simulation for training automated driving systems
US20180011953A1 (en) * 2016-07-07 2018-01-11 Ford Global Technologies, Llc Virtual Sensor Data Generation for Bollard Receiver Detection
WO2018107503A1 (en) 2016-12-17 2018-06-21 SZ DJI Technology Co., Ltd. Method and system for simulating visual data
KR20180094725A (en) * 2017-02-16 2018-08-24 삼성전자주식회사 Control method and control apparatus of car for automatic driving and learning method for automatic driving
WO2018176000A1 (en) * 2017-03-23 2018-09-27 DeepScale, Inc. Data synthesis for autonomous control systems
US9952594B1 (en) * 2017-04-07 2018-04-24 TuSimple System and method for traffic data collection using unmanned aerial vehicles (UAVs)
CN110062916A (en) 2017-04-11 2019-07-26 深圳市大疆创新科技有限公司 For simulating the visual simulation system of the operation of moveable platform
US10248129B2 (en) * 2017-04-19 2019-04-02 GM Global Technology Operations LLC Pitch compensation for autonomous vehicles
US10481044B2 (en) * 2017-05-18 2019-11-19 TuSimple Perception simulation for improved autonomous vehicle control
US20190340317A1 (en) 2018-05-07 2019-11-07 Microsoft Technology Licensing, Llc Computer vision through simulated hardware optimization
US11138350B2 (en) * 2018-08-09 2021-10-05 Zoox, Inc. Procedural world generation using tertiary data
US11861458B2 (en) * 2018-08-21 2024-01-02 Lyft, Inc. Systems and methods for detecting and recording anomalous vehicle events
US11868136B2 (en) * 2019-12-19 2024-01-09 Woven By Toyota, U.S., Inc. Geolocalized models for perception, prediction, or planning
US20230316789A1 (en) * 2020-09-14 2023-10-05 Cognata Ltd. Object labeling in images using dense depth maps
US11278810B1 (en) * 2021-04-01 2022-03-22 Sony Interactive Entertainment Inc. Menu placement dictated by user ability and modes of feedback
US20230406315A1 (en) * 2022-06-17 2023-12-21 Nvidia Corporation Encoding junction information in map data

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070088469A1 (en) * 2005-10-04 2007-04-19 Oshkosh Truck Corporation Vehicle control system and method
US20080033684A1 (en) * 2006-07-24 2008-02-07 The Boeing Company Autonomous Vehicle Rapid Development Testbed Systems and Methods
US20140320485A1 (en) * 2008-11-05 2014-10-30 Hover, Inc. System for generating geocoded three-dimensional (3d) models
US20160137206A1 (en) * 2014-11-13 2016-05-19 Nec Laboratories America, Inc. Continuous Occlusion Models for Road Scene Understanding

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
G. Ros, L. Sellart, J. Materzynska, D. Vazquez, A. M. Lopez: "The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes", 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pages 3234-3243, XP033021505, DOI: 10.1109/CVPR.2016.352
Virtanen, J-P., Hyyppa, H., Kamarainen, A., Hollstrom, T., Vastaranta, M., Hyyppa, J.: "Intelligent Open Data 3D Maps in a Collaborative Virtual World", ISPRS International Journal of Geo-Information, vol. 4, no. 2, 2015, pages 837-857, XP055641805, Retrieved from the Internet <URL:https://doi.org/10.3390/ijgi4020837>, DOI: 10.3390/ijgi4020837

Cited By (97)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10489972B2 (en) 2016-06-28 2019-11-26 Cognata Ltd. Realistic 3D virtual world creation and simulation for training automated driving systems
US12112432B2 (en) 2016-06-28 2024-10-08 Cognata Ltd. Realistic 3D virtual world creation and simulation for training automated driving systems
US11417057B2 (en) 2016-06-28 2022-08-16 Cognata Ltd. Realistic 3D virtual world creation and simulation for training automated driving systems
WO2019049133A1 (en) * 2017-09-06 2019-03-14 Osr Enterprises Ag A system and method for generating training materials for a video classifier
US11755025B2 (en) 2018-01-07 2023-09-12 Nvidia Corporation Guiding vehicles through vehicle maneuvers using machine learning models
US11042163B2 (en) 2018-01-07 2021-06-22 Nvidia Corporation Guiding vehicles through vehicle maneuvers using machine learning models
US12032380B2 (en) 2018-01-07 2024-07-09 Nvidia Corporation Guiding vehicles through vehicle maneuvers using machine learning models
US11609572B2 (en) 2018-01-07 2023-03-21 Nvidia Corporation Guiding vehicles through vehicle maneuvers using machine learning models
DE102018000880B3 (en) 2018-01-12 2019-02-21 Zf Friedrichshafen Ag Radar-based longitudinal and transverse control
US11079764B2 (en) 2018-02-02 2021-08-03 Nvidia Corporation Safety procedure analysis for obstacle avoidance in autonomous vehicles
US11604470B2 (en) 2018-02-02 2023-03-14 Nvidia Corporation Safety procedure analysis for obstacle avoidance in autonomous vehicles
US11966228B2 (en) 2018-02-02 2024-04-23 Nvidia Corporation Safety procedure analysis for obstacle avoidance in autonomous vehicles
US11210537B2 (en) 2018-02-18 2021-12-28 Nvidia Corporation Object detection and detection confidence suitable for autonomous driving
US12072442B2 (en) 2018-02-18 2024-08-27 Nvidia Corporation Object detection and detection confidence suitable for autonomous driving
US11676364B2 (en) 2018-02-27 2023-06-13 Nvidia Corporation Real-time detection of lanes and boundaries by autonomous vehicles
US10997433B2 (en) 2018-02-27 2021-05-04 Nvidia Corporation Real-time detection of lanes and boundaries by autonomous vehicles
CN110462543A (en) * 2018-03-08 2019-11-15 百度时代网络技术(北京)有限公司 Simulation-based method for evaluating the perception requirements of autonomous driving vehicles
US12019414B2 (en) 2018-03-14 2024-06-25 Robert Bosch Gmbh Method for generating a training data set for training an artificial intelligence module for a control device of a vehicle
WO2019175012A1 (en) * 2018-03-14 2019-09-19 Robert Bosch Gmbh Method for generating a training data record for training an artificial intelligence module for a control device of a vehicle
CN111868641A (en) * 2018-03-14 2020-10-30 罗伯特·博世有限公司 Method for generating a training data set for training an artificial intelligence module of a vehicle control unit
US11537139B2 (en) 2018-03-15 2022-12-27 Nvidia Corporation Determining drivable free-space for autonomous vehicles
US11941873B2 (en) 2018-03-15 2024-03-26 Nvidia Corporation Determining drivable free-space for autonomous vehicles
US12039436B2 (en) 2018-03-21 2024-07-16 Nvidia Corporation Stereo depth estimation using deep neural networks
US11604967B2 (en) 2018-03-21 2023-03-14 Nvidia Corporation Stereo depth estimation using deep neural networks
US11080590B2 (en) 2018-03-21 2021-08-03 Nvidia Corporation Stereo depth estimation using deep neural networks
CN111919225B (en) * 2018-03-27 2024-03-26 辉达公司 Training, testing, and validating autonomous machines using a simulated environment
CN111919225A (en) * 2018-03-27 2020-11-10 辉达公司 Training, testing, and validating autonomous machines using a simulated environment
WO2019191306A1 (en) * 2018-03-27 2019-10-03 Nvidia Corporation Training, testing, and verifying autonomous machines using simulated environments
US11436484B2 (en) 2018-03-27 2022-09-06 Nvidia Corporation Training, testing, and verifying autonomous machines using simulated environments
US12099782B2 (en) 2018-06-05 2024-09-24 Elta Systems Ltd. System and methodology for performance verification of multi-agent autonomous robotic systems
WO2019234726A1 (en) * 2018-06-05 2019-12-12 Israel Aerospace Industries Ltd. System and methodology for performance verification of multi-agent autonomous robotic systems
US11966838B2 (en) 2018-06-19 2024-04-23 Nvidia Corporation Behavior-guided path planning in autonomous machine applications
US11256958B1 (en) 2018-08-10 2022-02-22 Apple Inc. Training with simulated images
CN109189872A (en) * 2018-08-13 2019-01-11 武汉中海庭数据技术有限公司 High-precision map data verification device and method
WO2020046205A1 (en) * 2018-08-30 2020-03-05 Astropreneurs Hub Pte. Ltd. System and method for simulating a network of one or more vehicles
WO2020060478A1 (en) * 2018-09-18 2020-03-26 Sixan Pte Ltd System and method for training virtual traffic agents
US12061965B2 (en) 2018-10-17 2024-08-13 Cognata Ltd. System and method for generating realistic simulation data for training an autonomous driver
CN113168176A (en) * 2018-10-17 2021-07-23 柯尼亚塔有限公司 System and method for generating realistic simulation data for training automated driving
WO2020079685A1 (en) * 2018-10-17 2020-04-23 Cognata Ltd. System and method for generating realistic simulation data for training an autonomous driver
US11270165B2 (en) 2018-10-17 2022-03-08 Cognata Ltd. System and method for generating realistic simulation data for training an autonomous driver
US20220188579A1 (en) * 2018-10-17 2022-06-16 Cognata Ltd. System and method for generating realistic simulation data for training an autonomous driver
EP3867722A4 (en) * 2018-10-17 2022-08-03 Cognata Ltd. System and method for generating realistic simulation data for training an autonomous driver
CN111091581A (en) * 2018-10-24 2020-05-01 百度在线网络技术(北京)有限公司 Pedestrian trajectory simulation method and device based on generative adversarial network, and storage medium
US10685252B2 (en) 2018-10-30 2020-06-16 Here Global B.V. Method and apparatus for predicting feature space decay using variational auto-encoder networks
US11170251B2 (en) 2018-10-30 2021-11-09 Here Global B.V. Method and apparatus for predicting feature space decay using variational auto-encoder networks
US11610115B2 (en) 2018-11-16 2023-03-21 Nvidia Corporation Learning to generate synthetic datasets for training neural networks
DE102018129880A1 (en) * 2018-11-27 2020-05-28 Valeo Schalter Und Sensoren Gmbh Method for the simulative determination of at least one measurement property of a virtual sensor and computing system
US11734472B2 (en) 2018-12-07 2023-08-22 Zoox, Inc. System and method for modeling physical objects in a simulation
WO2020117870A1 (en) * 2018-12-07 2020-06-11 Zoox, Inc. System and method for modeling physical objects in a simulation
US10922840B2 (en) 2018-12-20 2021-02-16 Here Global B.V. Method and apparatus for localization of position data
US11501463B2 (en) 2018-12-20 2022-11-15 Here Global B.V. Method and apparatus for localization of position data
US12093824B2 (en) 2018-12-28 2024-09-17 Nvidia Corporation Distance to obstacle detection in autonomous machine applications
US11704890B2 (en) 2018-12-28 2023-07-18 Nvidia Corporation Distance to obstacle detection in autonomous machine applications
US12073325B2 (en) 2018-12-28 2024-08-27 Nvidia Corporation Distance estimation to objects and free-space boundaries in autonomous machine applications
US11769052B2 (en) 2018-12-28 2023-09-26 Nvidia Corporation Distance estimation to objects and free-space boundaries in autonomous machine applications
US11308338B2 (en) 2018-12-28 2022-04-19 Nvidia Corporation Distance to obstacle detection in autonomous machine applications
US11790230B2 (en) 2018-12-28 2023-10-17 Nvidia Corporation Distance to obstacle detection in autonomous machine applications
CN109815555B (en) * 2018-12-29 2023-04-18 百度在线网络技术(北京)有限公司 Environment modeling capability evaluation method and system for automatic driving vehicle
CN109815555A (en) * 2018-12-29 2019-05-28 百度在线网络技术(北京)有限公司 Environment modeling capability evaluation method and system for autonomous driving vehicles
CN111507459B (en) * 2019-01-30 2023-08-18 斯特拉德视觉公司 Method and apparatus for reducing annotation costs for neural networks
CN111507459A (en) * 2019-01-30 2020-08-07 斯特拉德视觉公司 Method and apparatus for reducing annotation cost of neural networks
US12051332B2 (en) 2019-02-05 2024-07-30 Nvidia Corporation Path perception diversity and redundancy in autonomous machine applications
US11520345B2 (en) 2019-02-05 2022-12-06 Nvidia Corporation Path perception diversity and redundancy in autonomous machine applications
US10860878B2 (en) 2019-02-16 2020-12-08 Wipro Limited Method and system for synthesizing three-dimensional data
US11308652B2 (en) 2019-02-25 2022-04-19 Apple Inc. Rendering objects to match camera noise
US11897471B2 (en) 2019-03-11 2024-02-13 Nvidia Corporation Intersection detection and classification in autonomous machine applications
US11648945B2 (en) 2019-03-11 2023-05-16 Nvidia Corporation Intersection detection and classification in autonomous machine applications
US20230350399A1 (en) * 2019-03-29 2023-11-02 Tusimple, Inc. Operational testing of autonomous vehicles
US12099351B2 (en) * 2019-03-29 2024-09-24 Tusimple, Inc. Operational testing of autonomous vehicles
US11713978B2 (en) 2019-08-31 2023-08-01 Nvidia Corporation Map creation and localization for autonomous driving applications
US11698272B2 (en) 2019-08-31 2023-07-11 Nvidia Corporation Map creation and localization for autonomous driving applications
US11788861B2 (en) 2019-08-31 2023-10-17 Nvidia Corporation Map creation and localization for autonomous driving applications
CN110795818B (en) * 2019-09-12 2022-05-17 腾讯科技(深圳)有限公司 Method and device for determining virtual test scene, electronic equipment and storage medium
CN110795818A (en) * 2019-09-12 2020-02-14 腾讯科技(深圳)有限公司 Method and device for determining virtual test scene, electronic equipment and storage medium
US11308669B1 (en) 2019-09-27 2022-04-19 Apple Inc. Shader for graphical objects
CN110794713A (en) * 2019-12-04 2020-02-14 中国航天空气动力技术研究院 Reconnaissance unmanned aerial vehicle electro-optical payload simulation training system
CN114830204A (en) * 2019-12-23 2022-07-29 罗伯特·博世有限公司 Training neural networks through neural networks
WO2021130066A1 (en) * 2019-12-23 2021-07-01 Robert Bosch Gmbh Training neural networks using a neural network
US11615558B2 (en) 2020-02-18 2023-03-28 Dspace Gmbh Computer-implemented method and system for generating a virtual vehicle environment
US11593995B1 (en) 2020-02-21 2023-02-28 Apple Inc. Populating a graphical environment
US12077190B2 (en) 2020-05-18 2024-09-03 Nvidia Corporation Efficient safety aware path selection and planning for autonomous machine applications
WO2022050937A1 (en) * 2020-09-02 2022-03-10 Google Llc Condition-aware generation of panoramic imagery
US12045955B2 (en) 2020-09-02 2024-07-23 Google Llc Condition-aware generation of panoramic imagery
US11978266B2 (en) 2020-10-21 2024-05-07 Nvidia Corporation Occupant attentiveness and cognitive load monitoring for autonomous and semi-autonomous driving applications
CN112286076B (en) * 2020-10-30 2023-12-15 中国兵器科学研究院 Real vehicle fire control triggering data acquisition simulation system
CN112286076A (en) * 2020-10-30 2021-01-29 中国兵器科学研究院 Real vehicle fire control trigger data acquisition simulation system
WO2022217873A1 (en) * 2021-04-16 2022-10-20 腾讯科技(深圳)有限公司 Virtual and reality-combined multi-human sensing system, method and apparatus, and medium
DE102021204611A1 (en) 2021-05-06 2022-11-10 Continental Automotive Technologies GmbH Computer-implemented method for generating training data for use in the field of vehicle occupant observation
CN113205070A (en) * 2021-05-27 2021-08-03 三一专用汽车有限责任公司 Visual perception algorithm optimization method and system
CN113205070B (en) * 2021-05-27 2024-02-20 三一专用汽车有限责任公司 Visual perception algorithm optimization method and system
WO2022261658A1 (en) * 2021-06-09 2022-12-15 Honeywell International Inc. Aircraft identification
CN114043505B (en) * 2021-11-29 2024-03-19 上海大学 Mechanical arm-based simulation carrier motion simulation device and control method
CN114043505A (en) * 2021-11-29 2022-02-15 上海大学 Simulation carrier motion simulation device based on mechanical arm and control method
CN115240409B (en) * 2022-06-17 2024-02-06 上智联(上海)智能科技有限公司 Method for extracting dangerous scene based on driver model and traffic flow model
CN115240409A (en) * 2022-06-17 2022-10-25 上海智能网联汽车技术中心有限公司 Method for extracting dangerous scene based on driver model and traffic flow model
CN115187742A (en) * 2022-09-07 2022-10-14 西安深信科创信息技术有限公司 Method, system and related device for generating automatic driving simulation test scene
CN115187742B (en) * 2022-09-07 2023-02-17 西安深信科创信息技术有限公司 Method, system and related device for generating automatic driving simulation test scene

Also Published As

Publication number Publication date
CN115686005A (en) 2023-02-03
US20200098172A1 (en) 2020-03-26
CN109643125B (en) 2022-11-15
US11417057B2 (en) 2022-08-16
US20190228571A1 (en) 2019-07-25
US12112432B2 (en) 2024-10-08
CN109643125A (en) 2019-04-16
EP3475778A1 (en) 2019-05-01
EP3475778C0 (en) 2024-07-10
US20220383591A1 (en) 2022-12-01
US10489972B2 (en) 2019-11-26
EP3475778B1 (en) 2024-07-10
EP3475778A4 (en) 2019-12-18

Similar Documents

Publication Publication Date Title
US12112432B2 (en) Realistic 3D virtual world creation and simulation for training automated driving systems
US20240096014A1 (en) Method and system for creating and simulating a realistic 3d virtual world
CN111566664B (en) Method, apparatus and system for generating composite image data for machine learning
US10453256B2 (en) Lane boundary detection data generation in virtual environment
Wrenninge et al. Synscapes: A photorealistic synthetic dataset for street scene parsing
EP3410404B1 (en) Method and system for creating and simulating a realistic 3d virtual world
US10901416B2 (en) Scene creation system for autonomous vehicles and methods thereof
CN107229329B (en) Method and system for virtual sensor data generation with deep ground truth annotation
US11656620B2 (en) Generating environmental parameters based on sensor data using machine learning
US11113864B2 (en) Generative image synthesis for training deep learning machines
CN109643367A (en) Crowdsourcing and distributing a sparse map, and lane measurements, for autonomous vehicle navigation
US20230150529A1 (en) Dynamic sensor data augmentation via deep learning loop
KR102218881B1 (en) Method and system for determining position of vehicle
CN106023622B (en) Method and apparatus for determining the recognition performance of a traffic light identification system
Malik et al. Carla: Car learning to act—an inside out
Nedevschi Semantic segmentation learning for autonomous uavs using simulators and real data
Li et al. A novel traffic simulation framework for testing autonomous vehicles using sumo and carla
Galazka et al. CiThruS2: Open-source photorealistic 3D framework for driving and traffic simulation in real time
Zhuo et al. A novel vehicle detection framework based on parallel vision
Vishnyakov et al. Semantic scene understanding for the autonomous platform
Gałązka et al. CiThruS2: Open-Source Virtual Environment for Simulating Real-Time Drone Operations and Piloting
US20240354921A1 (en) Road defect level prediction
Gräfe et al. Digital Twins of Roads as a Basis for Virtual Driving Tests
Rezaldi et al. 3D Orthomosaic From Unmanned Aerial Vehicle (UAV) Image Data For Real-World Environment 3D Map Creation Autonomous Vehicles Simulation System
Vidović et al. A generative model for the creation of a synthetic dataset for semantic segmentation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17819482

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2017819482

Country of ref document: EP

Effective date: 20190128