WO2021146905A1 - Method and apparatus for constructing a scene simulator based on deep learning, and computer device

Info

Publication number
WO2021146905A1
WO2021146905A1 (PCT/CN2020/073469; CN2020073469W)
Authority
WO
WIPO (PCT)
Prior art keywords
scene
road condition
dangerous road
layer
features
Prior art date
Application number
PCT/CN2020/073469
Other languages
English (en)
Chinese (zh)
Inventor
葛相辰
Original Assignee
深圳元戎启行科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳元戎启行科技有限公司
Priority to CN202080003157.3A priority Critical patent/CN113490940A/zh
Priority to PCT/CN2020/073469 priority patent/WO2021146905A1/fr
Publication of WO2021146905A1 publication Critical patent/WO2021146905A1/fr

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models
    • B60W40/02: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models, related to ambient conditions
    • B60W40/06: Road conditions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology

Definitions

  • This application relates to a method, device and computer equipment for constructing a scene simulator based on deep learning.
  • In conventional technology, simulation test scenes are generated by simulators that perform virtual simulation tests of vehicles through road simulation tests or vehicle simulation software.
  • However, the simulation test scenes generated by such simulators cannot truly reproduce the real reaction of the vehicle in the corresponding environment, and the low authenticity and reliability of the generated simulated driving scenes result in low simulation test efficiency and low accuracy of test results.
  • a method, device and computer device for constructing a scene simulator based on deep learning are provided.
  • a method for constructing a scene simulator based on deep learning includes:
  • the scene simulation layer and the dangerous road condition layer are used to continuously train the scene simulator until the preset conditions are met, and the trained scene simulator is obtained; the scene simulator is used to generate a simulated driving scene when performing a simulation test.
  • a scene simulator construction device based on deep learning including:
  • the data acquisition module is used to acquire driving scene data and historical dangerous road condition scene data;
  • the scene simulation training module is used to extract multiple road condition scene features from the driving scene data, and to perform deep learning with a deep learning algorithm according to the multiple road condition scene features to obtain a scene simulation layer;
  • the dangerous road condition training module is used to extract multiple dangerous road condition features from the historical dangerous road condition scene data, and to train the initial adversarial network according to the multiple dangerous road condition features to obtain the dangerous road condition layer;
  • the scene simulator building module is used to continuously train the scene simulator using the scene simulation layer and the dangerous road condition layer until the preset conditions are met, obtaining the trained scene simulator; the scene simulator is used to generate simulated driving scenes when performing simulation tests.
  • a computer device includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the deep-learning-based scene simulator construction method provided in any one of the embodiments of the present application when executing the computer program.
  • One or more non-volatile computer-readable storage media storing computer-readable instructions; when the computer-readable instructions are executed by one or more processors, the one or more processors implement the steps of the deep-learning-based scene simulator construction method provided in any one of the embodiments of this application.
  • Fig. 1 is an application scene diagram of a method for constructing a scene simulator based on deep learning in one or more embodiments.
  • Fig. 2 is a schematic flowchart of a method for constructing a scene simulator based on deep learning in one or more embodiments.
  • Fig. 3 is a schematic flowchart of the steps of training a scene simulation layer according to one or more embodiments.
  • Fig. 4 is a schematic flowchart of the steps of training a dangerous road condition layer according to one or more embodiments.
  • Fig. 5 is a schematic flowchart of the steps of using a scene simulator to perform a test in another embodiment.
  • Fig. 6 is a block diagram of an apparatus for constructing a scene simulator based on deep learning in accordance with one or more embodiments.
  • Fig. 7 is a block diagram of a device for constructing a scene simulator based on deep learning in another embodiment.
  • Figure 8 is a block diagram of a computer device according to one or more embodiments.
  • the method for constructing a scene simulator based on deep learning can be applied to the application environment as shown in FIG. 1.
  • the server 102 and the vehicle 104 communicate through a network.
  • After the server 102 obtains the driving scene data and the historical dangerous road condition scene data, it extracts various road condition scene features from the driving scene data and uses deep learning algorithms to perform deep learning based on those features, obtaining a scene simulation layer; it also extracts multiple dangerous road condition features from the historical dangerous road condition scene data and trains the initial adversarial network according to those features, obtaining the dangerous road condition layer.
  • the server 102 uses the scene simulation layer and the dangerous road condition layer to continuously train the scene simulator until the preset conditions are met, and then the trained scene simulator is obtained.
  • the server 102 then uses the scene simulator to generate a simulated driving scene when performing a simulation test on the vehicle 104.
  • the server 102 may be implemented as an independent server or a server cluster composed of multiple servers, and the vehicle 104 may be various self-driving vehicles.
  • a method for constructing a scene simulator based on deep learning is provided. Taking the method applied to the server in FIG. 1 as an example for description, the method includes the following steps:
  • Step 202 Acquire driving scene data and historical dangerous road condition scene data.
  • the driving scene data may be a variety of road environment data collected in advance, for example, may include historical driving record data of the vehicle, such as road condition data collected by a driving recorder of the vehicle.
  • Driving scene data includes a variety of road types and driving environment factors.
  • road types can include urban roads, dedicated roads, and rural roads; driving environment factors can include weather, air quality, temperature, noise, and lighting brightness.
  • the driving scene data also includes a variety of scene information, such as ground roads, lane lines, signal lights, landmarks, and traffic participants. Traffic participants can include passing vehicles, pedestrians, and moving paths.
  • the historical dangerous road condition scene data may be historical data of multiple types of dangerous road conditions collected from one or more platforms, and the historical dangerous road condition data may be road condition scene data in a real dangerous scene.
  • the types of risk factors can include a variety of factors, such as roadblock factors, traffic rules factors, lane vehicle factors, pedestrian factors, environmental factors and other factors.
  • the server may obtain a large amount of driving scene data and historical dangerous road condition scene data from a local database or a third-party database in advance, so as to construct and train the scene simulator.
  • the scene simulator may be a neural network model based on deep learning.
  • Deep learning methods combine low-level features to form more abstract high-level representations of attribute categories or features, discovering distributed feature representations of data; they build neural networks that simulate the human brain for analysis and learning, interpreting data such as images, sounds, and text by imitating the mechanisms of the human brain.
  • the scene simulator can also simulate the function of the hardware processor through software, so that the computer can simulate the environment of the hardware processor.
  • Step 204 Extract multiple road condition scene features in the driving scene data, and perform deep learning by using a deep learning algorithm according to the multiple road condition scene features to obtain a scene simulation layer.
  • After the server obtains a large amount of driving scene data and historical dangerous road condition scene data, it can first perform feature extraction on the multiple road condition scenes in the driving scene data, extracting multiple road condition scene features.
  • the server may input driving scene data and historical dangerous road condition scene data into a pre-built initial neural network model, which may be constructed using a preset deep learning algorithm and neural network structure.
  • the initial neural network includes multiple levels, such as a scene simulation level, a dangerous road condition learning level, and a dangerous road condition generation level.
  • The server uses the scene simulation layer to extract features from the various road condition scenes in the driving scene data, extracting road features, lane features, signal light features, landmark building features, pedestrian features, traffic vehicle features, weather features, and other road condition scene features.
  • The neural network corresponding to the scene simulation level then learns the various road condition scene features and generates a scene simulation layer according to the learned features.
  • The scene simulation layer can then use the learned road condition scene features to randomly generate a variety of corresponding models; for example, it can automatically generate the road models, vehicle models, pedestrian models, and other scene models included in road condition scenes.
  • Step 206 Extract a variety of dangerous road condition features in the historical dangerous road condition scene data, and train the initial adversarial network according to the multiple dangerous road condition features to obtain a dangerous road condition layer.
  • The server can also perform feature extraction on a large amount of historical dangerous road condition scene data, extract multiple dangerous road condition features from it, and train the initial adversarial network according to those features to obtain a dangerous road condition layer.
  • the server may learn a variety of dangerous road condition features through a neural network corresponding to the dangerous road condition level.
  • The neural network corresponding to the dangerous road condition level can be an adversarial network, for example a Generative Adversarial Network (GAN), a deep learning model in which a generative model (Generative Model) and a discriminative model (Discriminative Model) learn through a mutual game, yielding generated data with better output quality and data augmentation.
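The generator-discriminator game described above can be sketched as a toy one-dimensional example. This is only an illustration, assuming a numpy implementation with a made-up data distribution standing in for real scene data; it is not the patent's actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(x, w, b):
    # Logistic score in (0, 1): probability that x is a "real" sample.
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

def generator(z, a, c):
    # Maps random noise z to a candidate sample.
    return a * z + c

# Stand-in for real scene data: samples from N(3, 0.5).
a, c = 0.1, 0.0   # generator parameters
w, b = 0.0, 0.0   # discriminator parameters
lr = 0.05

for _ in range(2000):
    x_real = rng.normal(3.0, 0.5, 32)
    z = rng.normal(0.0, 1.0, 32)
    x_fake = generator(z, a, c)

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    d_real = discriminator(x_real, w, b)
    d_fake = discriminator(x_fake, w, b)
    w += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascend log D(fake), i.e. try to fool the discriminator.
    d_fake = discriminator(generator(z, a, c), w, b)
    a += lr * np.mean((1 - d_fake) * w * z)
    c += lr * np.mean((1 - d_fake) * w)

# After training, generated samples drift toward the real data's mean.
mean_fake = float(np.mean(generator(rng.normal(0.0, 1.0, 1000), a, c)))
```

The mutual game is visible in the two update steps: the discriminator sharpens its real-versus-generated boundary, while the generator shifts its output distribution to cross that boundary.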
  • the dangerous road condition layer can include a dangerous road condition learning layer and a dangerous road condition generation layer.
  • The server uses the extracted dangerous road condition features to train the initial adversarial network, obtaining the dangerous road condition learning level; it then performs further learning and training on the initial adversarial network together with the dangerous road condition learning level, obtaining the dangerous road condition generation level.
  • the dangerous road condition generation layer can then randomly generate a variety of dangerous road condition scene models, for example, various types of dangerous road condition scenes can be automatically and randomly generated through the dangerous road condition generation layer.
  • Step 208 Use the scene simulation layer and the dangerous road condition layer to continuously train the scene simulator until the preset conditions are met, and the trained scene simulator is obtained; the scene simulator is used to generate a simulated driving scene when the vehicle is simulated and tested.
  • After the server obtains the scene simulation layer and the dangerous road condition layer through training, it further conducts combined training on them. Specifically, the server may use a generative adversarial network algorithm to learn and train the scene simulation layer and the dangerous road condition layer, so that when generating multiple road condition scenes the model can randomly generate dangerous road condition scenes within the current simulation scene. The model is trained until the generated scene simulation satisfies the preset conditions, producing the trained scene simulator. The server can then use the trained scene simulator to generate simulated driving scenes when performing simulation tests on an unmanned vehicle.
  • After the server obtains the driving scene data and historical dangerous road condition scene data, it extracts various road condition scene features from the driving scene data and uses deep learning algorithms to perform deep learning based on them; through this learning, the scene simulation layer can be effectively trained. The server further extracts various dangerous road condition features from the historical dangerous road condition scene data and trains the initial adversarial network according to them to obtain the dangerous road condition layer. Training the dangerous road condition layer through the adversarial network enables the trained layer to randomly and effectively generate dangerous road condition scenes.
  • the server uses the scene simulation layer and the dangerous road condition layer to continuously train the scene simulator until the preset conditions are met, thereby obtaining the trained scene simulator; the scene simulator is used to generate a simulated driving scene when the vehicle is simulated and tested.
  • The generative adversarial network is further used for combined training, which can effectively construct a realistic scene simulator capable of generating realistic and reliable simulated driving scenes.
  • the scene simulator includes a scene simulation layer and a dangerous road condition layer.
  • The scene simulation layer is used to perform deep learning on a variety of road condition scenes using a deep learning algorithm, and to use the learned road condition features to randomly generate a variety of corresponding traffic scenes.
  • The scene simulation layer may also include a scene element simulation layer, a scene signal simulation layer, and a driving scene simulation layer. Among them, the scene element simulation layer is used to extract various scene element information features from the driving scene data, learn and train those features, and use the learned scene element information features to generate simulated scene elements.
  • the scene signal simulation layer is used to extract a variety of scene signal features in the driving scene data, learn and train a variety of scene signal features, and use the learned multiple scene signal features to generate an analog scene signal.
  • The driving scene simulation layer is used to extract a variety of audio and video signal features from the driving scene data, learn and train those features, and use the learned audio and video signal features, combined with the simulated scene elements and simulated scene signals, to generate a simulated driving scene.
  • the dangerous road condition layer includes a dangerous road condition learning layer and a dangerous road condition generation layer.
  • The dangerous road condition learning layer is used to learn and train on a variety of dangerous road condition scenarios based on the adversarial network.
  • The dangerous road condition generation layer is used to randomly generate a variety of dangerous road condition scenes using the learned dangerous road condition features.
  • In this way the scene simulation layer can be effectively trained; and by training the dangerous road condition layer through the adversarial network, the trained layer can randomly and effectively generate dangerous road condition scenes, so that a realistic scene simulator can be effectively constructed.
  • Extracting multiple road condition scene features from the driving scene data and using a deep learning algorithm to perform deep learning according to those features to obtain a scene simulation layer includes: performing multi-level feature extraction on the driving scene data to extract scene element information features, scene signal features, and audio and video signal features; and using deep learning algorithms to learn and train the initial network model according to the scene element information features, scene signal features, and audio and video signal features to obtain the trained scene simulation layer.
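As a rough illustration of the multi-level extraction just described (element features first, then signal features conditioned on them, then audio/video features conditioned on both), a small pipeline might be organized as follows. All class and field names here are invented for the sketch and are not from the patent.

```python
from dataclasses import dataclass

@dataclass
class DrivingSceneSample:
    # Raw per-frame record; field names are illustrative assumptions.
    elements: dict        # e.g. {"vehicle": 3, "pedestrian": 1}
    sensor_signals: dict  # e.g. {"signal_light": "red"}
    av_frames: list       # placeholder for audio/video payload

def extract_element_features(sample):
    # Level 1: scene element information features.
    return {f"elem_{k}": v for k, v in sample.elements.items()}

def extract_signal_features(sample, elem_feats):
    # Level 2: scene signal features, conditioned on element features.
    return {f"sig_{k}": v for k, v in sample.sensor_signals.items()}

def extract_av_features(sample, elem_feats, sig_feats):
    # Level 3: audio/video features, conditioned on the two lower levels.
    return {"av_frame_count": len(sample.av_frames)}

sample = DrivingSceneSample(
    elements={"vehicle": 3, "pedestrian": 1},
    sensor_signals={"signal_light": "red"},
    av_frames=[b"frame0", b"frame1"],
)
elem = extract_element_features(sample)
sig = extract_signal_features(sample, elem)
av = extract_av_features(sample, elem, sig)
features = {**elem, **sig, **av}
```

The combined feature dictionary would then be what the initial network model is trained on; in a real system each level would be a learned network rather than a hand-written function.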
  • the driving scene data includes a variety of road conditions and scene information, such as road information, lane information, vehicle information, signal light information, pedestrian information, and feature information.
  • Feature information refers to various tangible objects on the ground, such as mountains, forests, and buildings, as well as intangible features such as province and county boundaries.
  • After the server acquires a large amount of driving scene data and historical dangerous road condition scene data, it can first perform multi-level feature extraction on the multiple road condition scenes in the driving scene data, extracting multiple road condition scene features.
  • the road condition scene characteristics may include various road condition scene characteristics such as scene element information characteristics, scene signal characteristics, and audio and video signal characteristics.
  • the scene element information feature is the information feature corresponding to the multiple entity elements contained in the driving scene, for example, it may include specific road element, vehicle element, pedestrian element, signal light element and other corresponding element information features.
  • the scene signal feature can be the sensor signal information corresponding to the deeper road information, vehicle information, pedestrian information, signal light information and other information.
  • the audio and video signal characteristics are the audio and video signals corresponding to road information, vehicle information, pedestrian information, signal light information and other information respectively in the audio and video visualization scene.
  • the server may input driving scene data and historical dangerous road condition scene data into a pre-built initial neural network model.
  • the initial neural network includes multiple levels, for example, a scene simulation level, a dangerous road condition level, and the like.
  • the server performs multi-level feature extraction on multiple road conditions in the driving scene data through the scene simulation layer, and extracts multiple scene element information features including road element features, vehicle element features, pedestrian element features, signal element features, etc.; further Extracting scene signal features in the driving scene data according to multiple scene element information features; further extracting audio and video signal features in the driving scene data according to multiple scene element information features and scene signal features.
  • The server uses the deep learning algorithm to train the corresponding neural network according to the scene element information features, the scene signal features, and the audio and video signal features, generates a scene simulation layer according to the learned road condition scene features, and further trains the scene simulation layer.
  • the scene simulation layer can then use the learned various road condition scene features to randomly generate a variety of corresponding models.
  • For example, the scene simulation layer can automatically generate the road models, vehicle models, pedestrian models, and other scene models included in road condition scenes.
  • the scene simulation layer is trained using the extracted multi-level features, so that the scene simulation layer can be effectively constructed.
  • the scene simulation layer includes a scene element simulation layer, a scene signal simulation layer, and a driving scene simulation layer.
  • deep learning algorithms are used for deep learning to obtain the scene simulation layer. The steps include the following:
  • Step 302 Extract multiple types of scene element information features in the driving scene data, and train the multiple types of scene element information features to obtain a trained scene element simulation layer.
  • Step 304 Extract various scene signal features in the driving scene data, and train the various scene signal features to obtain a scene signal simulation layer that has been trained.
  • Step 306 Extract a variety of audio and video signal features from the driving scene data, and train the multiple audio and video signal features to obtain a trained driving scene simulation layer.
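Steps 302 to 306 each train one sub-layer until a training-condition threshold is met. A schematic loop, with a toy decaying loss standing in for the real optimisation step (the function names and threshold value are assumptions), might look like:

```python
def train_layer(layer_name, loss_fn, threshold=0.05, max_epochs=1000):
    """Generic sub-layer training loop: iterate until the loss falls
    to the (assumed) training-condition threshold or epochs run out."""
    loss = float("inf")
    epoch = 0
    while loss > threshold and epoch < max_epochs:
        loss = loss_fn(epoch)  # stand-in for one optimisation step
        epoch += 1
    return {"layer": layer_name, "loss": loss, "epochs": epoch}

# Toy decaying losses standing in for real training of the three sub-layers.
results = [
    train_layer("scene_element", lambda e: 1.0 / (e + 1)),
    train_layer("scene_signal", lambda e: 2.0 / (e + 1)),
    train_layer("driving_scene", lambda e: 3.0 / (e + 1)),
]
```

In the patent's scheme the three loops would not be independent: the scene signal and driving scene sub-layers additionally consume the features learned by the lower sub-layers.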
  • the scene element information includes a variety of environmental role elements, such as specific road entity information, vehicle entity information, pedestrian entity information, signal light entity information, and other environmental role information.
  • After the server obtains a large amount of driving scene data and historical dangerous road condition scene data, it inputs them into the pre-built initial neural network model.
  • the initial neural network model can be constructed using preset deep learning algorithms and neural network structures.
  • the initial neural network includes multiple levels, such as a scene simulation level, a dangerous road condition learning level, and a dangerous road condition generation level.
  • the scene simulation level may also include multiple neural network layers, and specifically may include a scene element simulation layer, a scene signal simulation layer, and a driving scene simulation layer.
  • the server further performs multi-level feature extraction on multiple road condition scenes in the driving scene data through multiple neural network layers in the scene simulation level, and extracts multiple road condition scene features in the driving scene data.
  • The server extracts various scene element information features from the driving scene data through the scene element simulation layer, uses preset deep learning algorithms to learn those features, and trains the neural network corresponding to the scene element simulation layer with the learned features until the training condition threshold is met, obtaining the trained scene element simulation layer.
  • the trained scene element simulation layer can be used to generate simulated scene element information.
  • The server further extracts multiple scene signal features from the driving scene data through the scene signal simulation layer, uses preset deep learning algorithms to learn them, and trains the neural network corresponding to the scene signal simulation layer according to the learned features until the training condition threshold is met, obtaining the trained scene signal simulation layer.
  • The server can further combine the various scene element information features and scene signal features to learn and train the neural network, so as to obtain a fully trained scene signal simulation layer.
  • the trained scene signal simulation layer can be used to generate a variety of scene elements and scene signal information.
  • the server extracts a variety of comprehensive audio and video signal features in the driving scene data through the driving scene simulation layer.
  • the multiple comprehensive audio and video signal features include dynamic multiple scene element information features and multiple scene signal features.
  • The server then uses preset deep learning algorithms to learn the various audio and video signal features, and trains the neural network corresponding to the driving scene simulation layer according to the learned features until the training condition threshold is met, obtaining the trained driving scene simulation layer.
  • The server can further combine the various scene element information features, scene signal features, and audio and video signal features to learn and train the neural network, so as to obtain a fully trained driving scene simulation layer.
  • the trained driving scene simulation layer can be used to generate a variety of dynamic driving scene information.
  • the scene simulation layer can be effectively constructed.
  • the dangerous road condition layer includes a dangerous road condition learning layer
  • the step of extracting multiple dangerous road condition features from historical dangerous road condition scene data includes the following contents:
  • Step 402 Extract multiple dangerous road condition features from historical dangerous road condition scene data through the dangerous road condition learning layer.
  • Step 404 Identify the risk factors and risk levels of each dangerous road condition feature.
  • Step 406 Generate a set of dangerous scene factors using the multiple dangerous road condition features according to the risk factors and degrees of risk.
  • the server can obtain a large amount of historical dangerous road condition scene data, and the historical dangerous road condition scene data includes dangerous road condition scene information of various risk factors and degree of danger.
  • The server can input a large amount of historical dangerous road condition scene data into the preset neural network corresponding to the dangerous road condition layer. Specifically, the server inputs the historical dangerous road condition scene data to the dangerous road condition learning layer, which performs feature extraction on the data, extracting multiple dangerous road condition features from the historical dangerous road condition scene data.
  • the server then identifies the risk factors and the degree of risk for each type of dangerous road condition feature. Dangerous factors can be various environmental factors that cause dangerous road conditions, such as the number of lanes, lane vehicles, pedestrians, road obstacles, bad weather, vehicle failures, illegal driving, and other dangerous factors that cause vehicles to fall into a dangerous state. Risk factors can also be multiple factor types corresponding to multiple risk types. The degree of danger can be calculated based on the damage level, risk complexity and probability of occurrence.
  • After the server extracts the dangerous road condition features from the historical dangerous road condition scene data, it extracts the risk factors in each feature, together with each feature's damage level, risk complexity, and probability of occurrence, and calculates each feature's degree of danger from the damage level, risk complexity, and probability of occurrence.
  • the server then generates a set of dangerous scene factors by using multiple dangerous road condition features according to the risk factors and the degree of risk.
  • The set of dangerous scene factors comprises the dangerous road condition characteristics learned by the dangerous road condition learning layer, and covers multiple risk factors.
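The patent names three inputs to the degree of danger (damage level, risk complexity, probability of occurrence) but does not give a formula. A hypothetical weighted combination, together with grouping features into a dangerous scene factor set keyed by risk factor, could look like this; the weights, feature names, and numbers are invented:

```python
def risk_degree(damage_level, complexity, probability,
                weights=(0.5, 0.3, 0.2)):
    # Assumed weighted sum; the patent only names the three inputs,
    # not how they are combined.
    w1, w2, w3 = weights
    return w1 * damage_level + w2 * complexity + w3 * probability

def build_danger_factor_set(features):
    # features: list of (name, risk_factor, damage, complexity, probability)
    factor_set = {}
    for name, factor, dmg, cplx, prob in features:
        factor_set.setdefault(factor, []).append(
            {"feature": name, "degree": risk_degree(dmg, cplx, prob)}
        )
    return factor_set

features = [
    ("icy_lane", "environment", 0.8, 0.6, 0.3),
    ("jaywalking", "pedestrian", 0.9, 0.7, 0.2),
    ("fallen_cargo", "roadblock", 0.7, 0.4, 0.1),
]
danger_set = build_danger_factor_set(features)
```

Grouping by risk factor mirrors the patent's factor types (roadblock, pedestrian, environmental factors, and so on), with each group carrying its features' computed degrees of danger.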
  • the dangerous road condition layer includes a dangerous road condition generation layer
  • Training the initial adversarial network according to the multiple dangerous road condition features to obtain the dangerous road condition layer includes: generating a risk factor random domain based on the probability values of the risk factors in the dangerous scene factor set; and using the generative adversarial network to train the various dangerous road condition features in the dangerous scene factor set according to the risk factor random domain, obtaining the trained dangerous road condition generation layer; the dangerous road condition generation layer is used to randomly generate dangerous road condition information.
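One plausible reading of the "risk factor random domain" built from the factors' probability values is probability-weighted sampling over the factor set. The sketch below illustrates that reading; the factor names and probability values are invented:

```python
import random

def sample_risk_factors(factor_probs, k, seed=None):
    """Draw k risk factors, each with likelihood proportional to its
    (assumed) historical probability of occurrence."""
    rng = random.Random(seed)
    names = list(factor_probs)
    weights = [factor_probs[n] for n in names]
    return rng.choices(names, weights=weights, k=k)

# Hypothetical occurrence probabilities learned from historical data.
factor_probs = {"roadblock": 0.1, "pedestrian": 0.3,
                "lane_vehicle": 0.4, "weather": 0.2}
random_domain = sample_risk_factors(factor_probs, k=100, seed=42)
```

Such a domain would feed the generative adversarial network so that the dangerous scenes it produces reflect the real-world frequency of each risk factor rather than a uniform mix.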
  • After the server extracts multiple dangerous road condition features from the historical dangerous road condition scene data through the dangerous road condition learning layer, identifies the risk factors and degree of danger of each feature, and generates the set of dangerous scene factors accordingly, it further uses that set to train the dangerous road condition generation layer.
  • the server may also calculate the probability value of each type of dangerous factor in the historical dangerous road condition scene data through the dangerous road condition learning layer, that is, the probability of occurrence of the dangerous factor.
  • the initial confrontation network can be a generative confrontation network
  • the generative confrontation network can include a discriminant model and a generative model. That is, the dangerous road condition learning layer can be a discriminant model, and the dangerous road condition generation layer can be a generative model.
  • the server uses the generative confrontation network to train on the various dangerous road condition features in the set of dangerous scene factors according to the risk factor random domain, so that the dangerous road condition generation layer is effectively trained and can then randomly generate a variety of dangerous road condition scene models.
  • for example, various types of dangerous road condition scenes can be automatically and randomly generated through the dangerous road condition generation layer.
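One minimal way to realize the "risk factor random domain" described above is a categorical distribution over risk factors weighted by their probability values; the factor names, probabilities, and `sample_risk_factors` helper below are assumptions for illustration, not the patent's implementation:

```python
import random

# Illustrative sketch: the risk factor random domain is modeled as a
# categorical distribution, weighted by each risk factor's probability
# of occurrence (values are made up for the example).
risk_factor_probs = {
    "dangerous_pedestrian_trajectory": 0.4,
    "dangerous_vehicle_trajectory": 0.3,
    "roadblock": 0.2,
    "harsh_environment": 0.1,
}

def sample_risk_factors(probs, n, seed=None):
    """Randomly draw n risk factors in proportion to their probability
    values, as the dangerous road condition generation layer would when
    randomly generating dangerous road condition information."""
    rng = random.Random(seed)
    factors = list(probs.keys())
    weights = list(probs.values())
    return rng.choices(factors, weights=weights, k=n)

samples = sample_risk_factors(risk_factor_probs, 1000, seed=0)
```

Each sampled factor would then seed the generation of a concrete dangerous scene (a trajectory, a roadblock placement, a weather condition, and so on).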
  • using the scene simulation layer and the dangerous road condition layer to continuously train the scene simulator until the preset conditions are met, thereby obtaining the trained scene simulator, includes: constructing a scene simulator based on the scene simulation layer and the dangerous road condition layer; using the generative confrontation network algorithm to continuously train the scene simulation layer and the dangerous road condition layer; and obtaining the trained scene simulator when the driving scene generated by the scene simulator meets the condition threshold and the generated dangerous road conditions meet the probability threshold.
  • a scene simulator is constructed according to the scene simulation layer and the dangerous road condition layer.
  • the server further conducts combined training on the scene simulation layer and the dangerous road condition layer.
  • the server can further use the generative confrontation network algorithm to continuously train the scene simulation layer and the dangerous road condition layer.
  • the simulator can generate multiple road condition scenarios, and dangerous road scenes are randomly generated in the current simulation scene according to the probability values of the multiple dangerous road conditions. Training stops when the driving scene generated by the scene simulator meets the condition threshold and the generated dangerous road conditions meet the probability threshold, thereby obtaining the trained scene simulator.
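The combined training loop with its two stopping thresholds might be skeletonized as below; `train_step`, `evaluate`, and the threshold values are stand-ins, since the patent does not define them concretely:

```python
# Skeletal sketch of the combined training loop; the training step,
# evaluation function, and thresholds are stand-ins, not the patent's
# actual implementation.

def train_scene_simulator(train_step, evaluate,
                          condition_threshold=0.9,
                          probability_threshold=0.9,
                          max_iterations=10_000):
    """Alternately update the scene simulation layer and the dangerous
    road condition layer (train_step), then check whether the generated
    driving scenes meet the condition threshold and the generated
    dangerous road conditions meet the probability threshold."""
    for iteration in range(max_iterations):
        train_step()  # one adversarial update of both layers
        scene_score, danger_score = evaluate()
        if (scene_score >= condition_threshold
                and danger_score >= probability_threshold):
            return iteration + 1  # number of steps until convergence
    raise RuntimeError("scene simulator did not meet the thresholds")

# Toy usage: both scores improve by 1/20 per step, so training stops
# after 18 steps (18/20 = 0.9 meets both thresholds).
state = {"k": 0}
def fake_step():
    state["k"] += 1
def fake_eval():
    score = state["k"] / 20
    return score, score

steps_needed = train_scene_simulator(fake_step, fake_eval)
```

The two-threshold stopping rule mirrors the text: one criterion for the realism of the generated driving scenes, one for the probability behavior of the generated dangerous road conditions.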
  • the server can then use the trained scene simulator to generate a simulated driving scene when performing a simulation test on the unmanned vehicle.
  • a road map of the corresponding road type can be constructed.
  • the road map includes various types of ground, lane lines, signal lights, landmarks and other information, and traffic participants are added to the road map
  • the information may include vehicles and pedestrians and their respective movement paths.
  • the simulator can also randomly generate dangerous road conditions with various dangerous factors in the simulated driving scene, such as dangerous pedestrian trajectories, dangerous vehicle trajectories, roadblock information, harsh environments and other dangerous road conditions.
  • the generative confrontation network is further used for combined training, which can effectively construct a realistic scene simulator that in turn can generate realistic and reliable simulated driving scenes.
  • the method further includes the step of using a scene simulator to perform a test, which specifically includes the following contents:
  • Step 502: Obtain a simulation test instruction, and call the scene simulator according to the simulation test instruction.
  • Step 504: Use the scene simulator to generate driving scene information, and randomly generate dangerous road condition information in the driving scene information; the vehicle then performs simulated driving in the generated simulated driving scene.
  • Step 506: Obtain the vehicle driving data of the vehicle during the simulated driving process.
  • Step 508: Generate vehicle simulation test information according to the driving scene information and the vehicle driving data.
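Steps 502-508 above can be sketched as a single test routine; the simulator and vehicle interfaces (`generate_driving_scene`, `simulate`, and so on) are hypothetical names used only to show the data flow:

```python
# Hypothetical sketch of the test flow in steps 502-508; the simulator
# and vehicle interfaces below are assumed names, not the patent's API.

def run_simulation_test(simulator, vehicle):
    # Step 504: generate driving scene information and randomly
    # generate dangerous road condition information inside it.
    scene = simulator.generate_driving_scene()
    scene["dangerous_road_conditions"] = simulator.generate_dangerous_conditions(scene)
    # Step 506: the vehicle performs simulated driving in the scene
    # and its driving data is collected.
    driving_data = vehicle.simulate(scene)
    # Step 508: combine driving scene information and vehicle driving
    # data into the vehicle simulation test information.
    return {"scene": scene, "driving_data": driving_data}

# Minimal stubs standing in for the scene simulator and the vehicle.
class StubSimulator:
    def generate_driving_scene(self):
        return {"road_type": "urban",
                "traffic_participants": ["vehicle", "pedestrian"]}
    def generate_dangerous_conditions(self, scene):
        return ["dangerous_pedestrian_trajectory"]

class StubVehicle:
    def simulate(self, scene):
        return {"vehicle_state": {"speed": 30.0}, "num_road_images": 120}

report = run_simulation_test(StubSimulator(), StubVehicle())
```

Step 502 (obtaining the instruction and calling the simulator) corresponds to whatever mechanism dispatches `run_simulation_test` on the server.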
  • after the server uses the driving scene data and historical dangerous road condition scene data to construct and train a scene simulator including a scene simulation layer and a dangerous road condition layer, it can use the scene simulator to generate simulated driving scenes to perform simulation tests on unmanned vehicles.
  • the vehicle may send a simulation test request to the server, or the vehicle monitoring platform may directly send a simulation test instruction to the server.
  • after the server obtains the simulation test instruction, it calls the scene simulator according to the instruction.
  • the server further generates driving scene information through the scene simulation layer of the scene simulator, and randomly generates dangerous road condition information in the driving scene information through the dangerous road condition layer of the scene simulator.
  • the vehicle can perform simulated driving in the generated simulated driving scene.
  • the vehicle is equipped with corresponding sensors, so that its sensor data can be effectively tested during simulated driving.
  • the server can obtain the vehicle driving data of the vehicle during the simulated driving process.
  • the vehicle driving data includes vehicle state data and road image data collected by the vehicle.
  • the vehicle state data may include vehicle operating state data, sensor data, and energy consumption data.
  • the road image data includes multiple road images collected by the vehicle's collection equipment.
  • the server then generates the vehicle simulation test information of the vehicle according to the driving scene information and the vehicle driving data; this information can be used to analyze various performance indicators of the unmanned vehicle.
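A possible layout for the vehicle driving data and the resulting vehicle simulation test information is sketched below; all field names are assumptions, since the patent defines no schema:

```python
from dataclasses import dataclass, field

# Assumed data layout for the vehicle driving data described above:
# vehicle state data (operating state, sensor data, energy consumption)
# plus the road images collected during simulated driving.

@dataclass
class VehicleStateData:
    operating_state: dict = field(default_factory=dict)  # e.g. speed, gear
    sensor_data: dict = field(default_factory=dict)      # raw sensor readings
    energy_consumption: float = 0.0                      # e.g. kWh used

@dataclass
class VehicleDrivingData:
    state: VehicleStateData
    road_images: list = field(default_factory=list)  # collected road images

def build_vehicle_simulation_test_info(driving_scene_info, driving_data):
    """Combine the driving scene information and the vehicle driving
    data into vehicle simulation test information, which can then be
    used to analyze the unmanned vehicle's performance indicators."""
    return {
        "scene": driving_scene_info,
        "vehicle_state": driving_data.state,
        "num_road_images": len(driving_data.road_images),
    }

data = VehicleDrivingData(
    state=VehicleStateData(energy_consumption=12.5),
    road_images=["frame_0001.png", "frame_0002.png"],
)
info = build_vehicle_simulation_test_info({"road_type": "highway"}, data)
```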
  • by using the constructed scene simulator to generate simulated driving scenes and perform simulation tests on unmanned vehicles, it is possible to effectively construct simulated driving scenes with high authenticity and reliability and to effectively conduct simulation tests on unmanned vehicles, thereby improving the validity and reliability of the tests.
  • a device for building a scene simulator based on deep learning is provided, including: a data acquisition module 602, a scene simulation training module 604, a dangerous road condition training module 606, and a scene simulator building module 608, of which:
  • the data acquisition module 602 is used to acquire driving scene data and historical dangerous road condition scene data;
  • the scene simulation training module 604 is used to extract various road condition and scene features from the driving scene data, and use a deep learning algorithm to perform deep learning according to the various road condition and scene features to obtain a scene simulation layer;
  • the dangerous road condition training module 606 is used to extract multiple dangerous road condition features from the historical dangerous road condition scene data, and to train the initial confrontation network according to the multiple dangerous road condition features to obtain the dangerous road condition layer;
  • the scene simulator building module 608 is used to continuously train the scene simulator using the scene simulation layer and the dangerous road condition layer until the preset conditions are met, obtaining the trained scene simulator; the scene simulator is used to generate simulated driving scenes when the vehicle undergoes simulation testing.
  • the scene simulator includes a scene simulation layer and a dangerous road condition layer; the scene simulation layer is used to perform deep learning on the various road condition scene features using a deep learning algorithm, and to randomly generate corresponding multiple road condition scenarios using the learned features;
  • the dangerous road condition layer includes a dangerous road condition learning layer and a dangerous road condition generation layer, and the dangerous road condition learning layer is used to learn and train multiple dangerous road condition scenarios according to the confrontation network.
  • the dangerous road condition generation layer is used to randomly generate a variety of dangerous road condition scenarios using the learned characteristics of a variety of dangerous road conditions.
  • the scene simulation training module 604 is also used to perform multi-level feature extraction on the driving scene data, extracting the scene element information features, scene signal features, and audio and video signal features in the driving scene data, and to learn and train the initial network model with a deep learning algorithm according to the scene element information features, scene signal features, and audio and video signal features, obtaining the trained scene simulation layer.
  • the scene simulation layer includes a scene element simulation layer, a scene signal simulation layer, and a driving scene simulation layer.
  • the scene simulation training module 604 is also used to extract the information features of various scene elements in the driving scene data and train on the scene element information features to obtain the trained scene element simulation layer; to extract multiple scene signal features in the driving scene data and train on them to obtain the trained scene signal simulation layer; and to extract a variety of audio and video signal features in the driving scene data and train on them to obtain the trained driving scene simulation layer.
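The three-way split above (scene elements, scene signals, audio/video signals) could be organized as a simple multi-level feature extraction step; the extractor functions and key names below are illustrative assumptions:

```python
# Illustrative sketch of multi-level feature extraction: one extractor
# per feature level, grouped by the sub-layer the features will train.
# The extractors and key names are assumptions, not the patent's API.

def multi_level_feature_extraction(driving_scene_data, extractors):
    """Run one extractor per feature level, returning the features
    grouped by the sub-layer they are meant to train."""
    return {layer: extract(driving_scene_data)
            for layer, extract in extractors.items()}

extractors = {
    "scene_element_simulation_layer": lambda d: d.get("scene_elements", []),
    "scene_signal_simulation_layer": lambda d: d.get("scene_signals", []),
    "driving_scene_simulation_layer": lambda d: d.get("audio_video_signals", []),
}

features = multi_level_feature_extraction(
    {"scene_elements": ["lane_line", "signal_light", "landmark"],
     "scene_signals": ["traffic_light_phase"],
     "audio_video_signals": ["front_camera_stream"]},
    extractors,
)
```

Each entry of the returned mapping would feed the training of the corresponding simulation sub-layer.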
  • the dangerous road condition layer includes a dangerous road condition learning layer
  • the dangerous road condition training module 606 is also used to extract multiple dangerous road condition features from the historical dangerous road condition scene data through the dangerous road condition learning layer, to identify the risk factors and degree of danger of each dangerous road condition feature, and to generate a set of dangerous scene factors from the multiple dangerous road condition features according to the risk factors and degrees of danger.
  • the dangerous road condition layer includes a dangerous road condition generation layer
  • the dangerous road condition training module 606 is further configured to generate a risk factor random domain according to the probability values of the risk factors in the set of dangerous scene factors, and to use the generative confrontation network to train on the multiple dangerous road condition features in the set of dangerous scene factors according to the risk factor random domain, obtaining the trained dangerous road condition generation layer; the dangerous road condition generation layer is used to randomly generate dangerous road condition information.
  • the scene simulator construction module 608 is also used to construct a scene simulator based on the scene simulation layer and the dangerous road condition layer; to use the generative confrontation network algorithm to continuously train the scene simulation layer and the dangerous road condition layer; and to obtain the trained scene simulator when the driving scene generated by the scene simulator meets the condition threshold and the generated dangerous road conditions meet the probability threshold.
  • the device further includes a simulation test module 610, which is used to obtain a simulation test instruction and call the scene simulator according to the instruction; use the scene simulator to generate driving scene information and randomly generate dangerous road condition information in the driving scene information; make the vehicle perform simulated driving in the generated simulated driving scene; obtain the vehicle driving data of the vehicle during the simulated driving process; and generate the vehicle simulation test information based on the driving scene information and the vehicle driving data.
  • the various modules in the device for constructing a scene simulator based on deep learning can be implemented in whole or in part by software, hardware, and a combination thereof.
  • the above-mentioned modules may be embedded in the form of hardware or independent of the processor in the computer equipment, or may be stored in the memory of the computer equipment in the form of software, so that the processor can call and execute the operations corresponding to the above-mentioned modules.
  • a computer device is provided.
  • the computer device may be a server, and its internal structure diagram may be as shown in FIG. 8.
  • the computer equipment includes a processor, a memory, a network interface, and a database connected through a system bus; the processor of the computer device is used to provide calculation and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, computer readable instructions, and a database.
  • the internal memory provides an environment for the operation of the operating system and computer-readable instructions in the non-volatile storage medium.
  • the database of the computer equipment is used to store data such as driving scene data and historical dangerous road condition scene data.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection.
  • the computer readable instructions are executed by the processor to implement the above method for constructing a scene simulator based on deep learning.
  • FIG. 8 is only a block diagram of a part of the structure related to the solution of the present application, and does not constitute a limitation on the computer device to which the solution of the present application is applied.
  • the specific computer device may include more or fewer components than shown in the figure, or combine certain components, or have a different arrangement of components.
  • a computer device includes a memory and one or more processors.
  • the memory stores computer readable instructions.
  • when the computer readable instructions are executed, the one or more processors execute the steps of the above method embodiments.
  • One or more non-volatile computer-readable storage media storing computer-readable instructions.
  • when the computer-readable instructions are executed by one or more processors, the one or more processors are caused to execute the steps of the above method embodiments.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.


Abstract

Disclosed is a method for constructing a scene simulator based on deep learning, comprising: acquiring driving scene data and historical dangerous road condition scene data; extracting a plurality of road condition scene features from the driving scene data, and performing deep learning using a deep learning algorithm according to the plurality of road condition scene features to obtain a scene simulation layer; extracting a plurality of dangerous road condition features from the historical dangerous road condition scene data, and training an initial adversarial network according to the plurality of dangerous road condition features to obtain a dangerous road condition layer; and continuously training a scene simulator using the scene simulation layer and the dangerous road condition layer until a preset condition is met, so as to obtain a trained scene simulator. The scene simulator is configured to generate a simulated driving scene when a simulation test is performed.
PCT/CN2020/073469 2020-01-21 2020-01-21 Method and apparatus for constructing a scene simulator based on deep learning, and computer device WO2021146905A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080003157.3A CN113490940A (zh) 2020-01-21 2020-01-21 Method and apparatus for constructing a scene simulator based on deep learning, and computer device
PCT/CN2020/073469 WO2021146905A1 (fr) 2020-01-21 2020-01-21 Method and apparatus for constructing a scene simulator based on deep learning, and computer device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/073469 WO2021146905A1 (fr) 2020-01-21 2020-01-21 Method and apparatus for constructing a scene simulator based on deep learning, and computer device

Publications (1)

Publication Number Publication Date
WO2021146905A1 true WO2021146905A1 (fr) 2021-07-29

Family

ID=76992631

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/073469 WO2021146905A1 (fr) 2020-01-21 2020-01-21 Method and apparatus for constructing a scene simulator based on deep learning, and computer device

Country Status (2)

Country Link
CN (1) CN113490940A (fr)
WO (1) WO2021146905A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114296424A (zh) * 2021-12-06 2022-04-08 苏州挚途科技有限公司 Simulation test system and method
CN114771576A (zh) * 2022-05-19 2022-07-22 北京百度网讯科技有限公司 Behavior data processing method, control method for an autonomous vehicle, and autonomous vehicle
CN114296424B (zh) * 2021-12-06 2024-05-28 苏州挚途科技有限公司 Simulation test system and method

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115240409B (zh) * 2022-06-17 2024-02-06 上智联(上海)智能科技有限公司 Method for extracting dangerous scenes based on a driver model and a traffic flow model

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108897313A (zh) * 2018-05-23 2018-11-27 清华大学 Hierarchical end-to-end construction method for a vehicle automatic driving system
CN109190648A (zh) * 2018-06-26 2019-01-11 Oppo(重庆)智能科技有限公司 Simulated environment generation method and apparatus, mobile terminal, and computer-readable storage medium
CN110569916A (zh) * 2019-09-16 2019-12-13 电子科技大学 Adversarial sample defense system and method for artificial intelligence classification
CN110647839A (zh) * 2019-09-18 2020-01-03 深圳信息职业技术学院 Method and apparatus for generating an automatic driving strategy, and computer-readable storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102195317B1 (ko) * 2017-05-19 2020-12-28 한국과학기술원 Vehicle accident prediction method using a video game
CN108345869B (zh) * 2018-03-09 2022-04-08 南京理工大学 Driver posture recognition method based on depth images and virtual data
CN108595901A (zh) * 2018-07-09 2018-09-28 黄梓钥 Standardized safety simulation and verification model database system for automatic driving vehicles
CN109597317B (zh) * 2018-12-26 2022-03-18 广州小鹏汽车科技有限公司 Self-learning-based vehicle automatic driving method and system, and electronic device



Also Published As

Publication number Publication date
CN113490940A (zh) 2021-10-08

Similar Documents

Publication Publication Date Title
US20170316127A1 (en) Method and apparatus for constructing testing scenario for driverless vehicle
US10019652B2 (en) Generating a virtual world to assess real-world video analysis performance
JP7148718B2 (ja) 場面のパラメトリック上面視表現
JP7471397B2 (ja) 道路シーンにおける多様な長期将来軌道のシミュレーション
JP2022505762A (ja) 画像セマンティックセグメンテーションネットワークのトレーニング方法、装置、機器及びコンピュータプログラム
RU2017146151A (ru) Формирование моделированных данных датчиков для обучения и проверки достоверности моделей обнаружения
RU2016149163A (ru) Генерация данных виртуальных датчиков для выявления колесного упора
CN111797526A (zh) Simulation test scenario construction method and apparatus
CN113033029A (zh) Automatic driving simulation method and apparatus, electronic device, and storage medium
WO2021146905A1 (fr) Method and apparatus for constructing a scene simulator based on deep learning, and computer device
DK201770681A1 (en) A method for (re-) training a machine learning component
US11636684B2 (en) Behavior model of an environment sensor
US20200242478A1 (en) Learning method and learning device for updating hd map by reconstructing 3d space by using depth estimation information and class information on each object, which have been acquired through v2x information integration technique, and testing method and testing device using the same
CN111859674A (zh) Semantic-based method for constructing automatic driving test image scenes
CN115830399A (zh) Classification model training method, apparatus, device, storage medium, and program product
Ramakrishna et al. Anti-carla: An adversarial testing framework for autonomous vehicles in carla
WO2021146906A1 (fr) Test scenario simulation method and apparatus, computer device, and storage medium
Deter et al. Simulating the Autonomous Future: A Look at Virtual Vehicle Environments and How to Validate Simulation Using Public Data Sets
Lin et al. 3D environmental perception modeling in the simulated autonomous-driving systems
WO2020199057A1 (fr) Automatic driving simulation system, method and device, and storage medium
Stocco et al. Model vs system level testing of autonomous driving systems: a replication and extension study
Grau et al. A variational deep synthesis approach for perception validation
CN116776288A (zh) Optimization method and apparatus for an intelligent driving perception model, and storage medium
Li A scenario-based development framework for autonomous driving
Bai et al. Cyber mobility mirror for enabling cooperative driving automation: A co-simulation platform

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20915599

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20915599

Country of ref document: EP

Kind code of ref document: A1