WO2022116873A1 - Simulation testing method, device and system - Google Patents
Simulation testing method, device and system
- Publication number
- WO2022116873A1 (PCT/CN2021/132662)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- virtual
- simulator
- sensor
- input signal
- driving state
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/10—Geometric CAD
- G06F30/15—Vehicle, aircraft or watercraft design
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B17/00—Systems with reflecting surfaces, with or without refracting elements
- G02B17/02—Catoptric systems, e.g. image erecting and reversing system
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B17/00—Systems involving the use of models or simulators of said systems
- G05B17/02—Systems involving the use of models or simulators of said systems electric
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01M—TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
- G01M17/00—Testing of vehicles
- G01M17/007—Wheeled or endless-tracked vehicles
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/20—Pc systems
- G05B2219/23—Pc programming
- G05B2219/23446—HIL hardware in the loop, simulates equipment to which a control module is fixed
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Preventing errors by testing or debugging software
- G06F11/3664—Environments for testing or debugging software
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2113/00—Details relating to the application field
Definitions
- the embodiments of the present application relate to the field of simulation testing, and in particular, to a simulation testing method, device, and system.
- assisted driving and autonomous driving technologies are rapidly developing and commercializing.
- the use of assisted driving and autonomous driving technology will greatly change the way people travel and have a huge impact on people's work and lifestyle.
- in assisted driving and autonomous driving, the perception capabilities of the sensors on the car and the car's ability to drive autonomously are critical to driving safely in a variety of different scenarios.
- Related capabilities need to be tested in different scenarios to ensure that these capabilities are reliable.
- the current test methods usually include open road driving tests, closed field driving tests and simulation tests. It is difficult to traverse various test scenarios based on road driving tests. Some analysis shows that more than 100 million kilometers of road tests are required to ensure the coverage of various scenarios, and the efficiency is relatively low.
- Embodiments of the present application provide a simulation testing method, device, and system, so as to provide a method for performing simulation testing in a virtual scene.
- an embodiment of the present application provides a simulation test method, which is applied to an input signal simulator, where the input signal simulator is located in an automatic driving test framework, and the automatic driving test framework further includes a virtual scene simulator and a sensor simulator,
- the virtual scene simulator is used to simulate a virtual scene, and the virtual scene includes a virtual object to be tested, and the virtual object to be tested includes a first driving state and a plurality of virtual sensors, including:
- the processing delay may be the difference between the processing time of the virtual sensor in the simulator and the processing time of the real sensor in the real environment.
- the processing delay can be a positive value or a negative value. For example, if the processing time of the virtual sensor in the simulator is greater than the processing time of the real sensor in the real environment, the processing delay is a positive value; if the processing time of the virtual sensor in the simulator is less than the processing time of the real sensor in the real environment, the processing delay is a negative value. Therefore, whether the processing delay satisfies the preset condition can be determined by judging whether the processing delay is positive or negative.
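As an illustrative sketch only (the function and parameter names are invented, not taken from the embodiments), the delay computation and the sign-based preset condition described above could look like:

```python
# Hypothetical helpers illustrating the processing-delay check.
# A positive delay means the virtual sensor is slower than the real one,
# so the driving state must be predicted forward to compensate.

def processing_delay(virtual_processing_time: float,
                     real_processing_time: float) -> float:
    """Difference between the virtual sensor's processing time in the
    simulator and the real sensor's processing time in the real environment."""
    return virtual_processing_time - real_processing_time

def meets_preset_condition(delay: float) -> bool:
    """Here the preset condition is simply that the delay is positive."""
    return delay > 0
```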
- each virtual sensor corresponds to its own preset front-end model and preset algorithm, so each virtual sensor has its own processing delay. That is to say, it is necessary to judge whether the processing delay of each virtual sensor satisfies the preset condition.
- the first driving state is predicted based on the processing delay to obtain the second driving state. Specifically, the first driving state may include the position, speed and acceleration of the virtual object to be tested. Therefore, when the processing delay of any virtual sensor satisfies a preset condition (for example, the processing delay of the virtual sensor is a positive value), prediction can be performed based on the processing delay of that virtual sensor to obtain the second driving state, where the second driving state is the state predicted for the virtual sensor, that is, the position, speed and acceleration at a future moment.
- a virtual sensor corresponding to the processing delay is used for simulation to obtain one or more first input signals, wherein each first input signal corresponds to each virtual sensor one-to-one. Specifically, the input signal simulator may simulate based on the second driving state of each virtual sensor to obtain the first input signal.
- One or more first input signals are sent to the sensor simulator.
- the behavior and performance of the sensor can be simulated accurately, the accuracy of the sensor simulation can be improved, and the efficiency of the simulation test can be improved.
- using a virtual sensor corresponding to the processing delay to perform simulation to obtain one or more first input signals includes:
- Simultaneous simulation is performed using the plurality of virtual sensors corresponding to each processing delay to obtain a plurality of first input signals. Specifically, when a plurality of virtual sensors are simulated at the same time to obtain the first input signals, the signals obtained by the simulation of each virtual sensor can be synchronized to obtain a plurality of synchronized first input signals, which keeps the signals synchronized and thereby improves the accuracy of the simulation test.
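To illustrate the synchronization of signals from several simultaneously simulated sensors, a minimal sketch follows (the sensor model, field names and shared-tick scheme are invented for illustration; they are not the embodiment's implementation):

```python
from dataclasses import dataclass

@dataclass
class VirtualSensor:
    """Toy virtual sensor: scales the object's position by a gain."""
    name: str
    gain: float

    def simulate(self, state: dict) -> float:
        return state["position"] * self.gain

def simulate_synchronized(sensors, state, tick):
    """Run each sensor model on the same (predicted) driving state and
    stamp every first input signal with one shared clock value, so the
    signals leave the input signal simulator synchronized."""
    return [{"sensor": s.name, "timestamp": tick, "signal": s.simulate(state)}
            for s in sensors]
```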
- the accuracy of the simulation test can be improved.
- the processing delay is determined by the difference between the first processing time and the second processing time, where the first processing time is the processing time of the virtual sensor in the sensor simulator, and the second processing time is the preset real processing time of the real sensor corresponding to the virtual sensor.
- the performance of the real sensor can be accurately simulated.
- One of the possible implementations includes:
- a virtual sensor corresponding to the processing delay is used for simulation to obtain a second input signal; based on the processing delay, one or more second input signals are sent to the sensor simulator with a delay.
- the input signal simulator may simulate the first driving state to obtain a second input signal, and delay sending the second input signal based on the processing delay, so as to compensate for the processing delay.
- the processing delay can be compensated, so that the performance of the real sensor can be simulated accurately.
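A minimal sketch of this delayed-send compensation, under the assumption that the delay is expressed in seconds (function and parameter names are illustrative, not from the embodiments):

```python
import time

def send_with_compensation(signal, delay_s, send):
    """If the virtual sensor finished faster than the real one would
    (a negative processing delay), wait out the difference before
    forwarding the second input signal to the sensor simulator, so
    the signal's arrival time matches the real sensor's timing."""
    if delay_s < 0:
        time.sleep(-delay_s)
    send(signal)
```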
- the sensor simulator is used to receive the first input signal or the second input signal, and perform calculations based on a preset front-end model and a preset algorithm of the virtual sensor to obtain an output signal, where the preset front-end model of the virtual sensor is constructed based on the corresponding real sensor.
- the performance of the real sensor can be simulated more accurately.
- the virtual scene is obtained by the virtual scene simulator using at least one CPU and/or at least one GPU, and the first input signal or the second input signal is obtained by the input signal simulator through simulation on at least one GPU using a ray tracing algorithm.
- the signal is simulated by at least one hardware unit such as CPU and/or GPU, which can improve the speed of signal simulation, thereby improving the efficiency of simulation testing.
- the first driving state includes a first position, a first speed, and a first acceleration of the virtual object to be tested at time t, and predicting the first driving state based on the processing delay to obtain the second driving state includes:
- the Kalman filtering method is used to predict the first driving state of the virtual object to be tested to obtain the second driving state, where the second driving state includes the second position, the second speed and the second acceleration of the virtual object to be tested at time t+T, and T is the processing delay.
- the processing delay can be effectively compensated, thereby improving the accuracy of the simulation test.
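The prediction itself can be illustrated with a constant-acceleration kinematic propagation, which corresponds to the predict step of a Kalman filter with the covariance update omitted (a simplification for illustration, not the embodiment's full method):

```python
def predict_state(position, speed, acceleration, T):
    """Propagate the first driving state at time t forward by the
    processing delay T to get the second driving state at t + T.
    Constant-acceleration model; a full Kalman filter would also
    propagate an error covariance, which is omitted here."""
    new_position = position + speed * T + 0.5 * acceleration * T ** 2
    new_speed = speed + acceleration * T
    return new_position, new_speed, acceleration
```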
- the automatic driving test architecture also includes a digital simulator, a driving system and a power system simulator.
- the digital simulator is used to receive the output signal sent by the sensor simulator, and send the output signal to the driving system.
- the driving system is used to determine the driving decision based on the output signal, and the power system simulator is used to simulate the driving decision, obtain the third driving state, and feed back the third driving state to the virtual scene simulator, so that the virtual object to be tested updates the first driving state based on the third driving state.
- a driving decision can be obtained based on the output signal, and the driving state of the virtual object to be tested can be updated based on the driving decision, thereby forming a closed loop of the simulation test and improving its efficiency.
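The closed loop described above can be sketched schematically as follows (all the callables are stand-ins for the sensor simulator, the driving system and the power system simulator; this is an illustrative skeleton, not the embodiments' implementation):

```python
def run_closed_loop(state, sense, decide, simulate_power, steps):
    """One simulation loop: sensor output -> driving decision ->
    power-system simulation -> third driving state, which replaces
    the driving state of the virtual object under test."""
    for _ in range(steps):
        output = sense(state)                    # sensor simulator
        decision = decide(output)                # driving system
        state = simulate_power(state, decision)  # power system simulator
    return state
```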
- the virtual sensor includes at least one of a millimeter-wave radar virtual sensor, a lidar virtual sensor, an infrared virtual sensor, and a camera virtual sensor.
- a simulation test can be performed on different sensors, which can improve the flexibility of the test, thereby improving the efficiency of the simulation test.
- an embodiment of the present application provides a simulation test device, which is applied to an input signal simulator, where the input signal simulator is located in an automatic driving test architecture, and the automatic driving test architecture further includes a virtual scene simulator and a sensor simulator.
- the scene simulator is used to simulate a virtual scene, the virtual scene includes a virtual object to be tested, and the virtual object to be tested includes a first driving state and a plurality of virtual sensors, including:
- the receiving circuit is used to obtain the processing delay of each virtual sensor
- a prediction circuit for judging whether each processing delay satisfies the preset condition; if any processing delay satisfies the preset condition, predicting the first driving state based on the processing delay to obtain the second driving state;
- a first simulation circuit for performing simulation based on each second driving state using the virtual sensors corresponding to the processing delays to obtain one or more first input signals, wherein each first input signal corresponds to each virtual sensor one-to-one;
- the first sending circuit is used for sending one or more first input signals to the sensor simulator.
- the above-mentioned first simulation circuit is further configured to use a plurality of virtual sensors corresponding to each processing delay to perform synchronous simulation to obtain a plurality of first input signals.
- the above-mentioned processing delay is determined by the difference between the first processing time and the second processing time, wherein the first processing time is the processing time of the virtual sensor in the sensor simulator, and the second processing time is the preset real processing time of the real sensor corresponding to the virtual sensor.
- the above-mentioned device further includes:
- the second simulation circuit is configured to use a virtual sensor corresponding to the processing delay to perform simulation based on the first driving state to obtain a second input signal if any processing delay does not meet the preset condition;
- the second sending circuit is configured to delay sending one or more second input signals to the sensor simulator based on the processing delay.
- the above-mentioned first input signal or second input signal is obtained by the input signal simulator through simulation on at least one GPU using a ray tracing algorithm.
- the first driving state includes the first position, the first speed, and the first acceleration of the virtual object to be tested at time t
- the prediction circuit is further configured to use the Kalman filtering method to predict, based on the processing delay, the first driving state of the virtual object to be tested to obtain the second driving state, wherein the second driving state includes the second position, the second speed and the second acceleration of the virtual object to be tested at time t+T, where T is the processing delay.
- an embodiment of the present application provides a simulation test system, including: a virtual scene simulator, an input signal simulator, a sensor simulator, a digital simulator, and a system synchronization module; wherein,
- the virtual scene simulator is used to simulate a virtual scene, the virtual scene includes a virtual object to be tested, and the virtual object to be tested includes a first driving state and a plurality of virtual sensors;
- the input signal simulator is used to obtain the processing delay of each virtual sensor; determine whether each processing delay meets the preset condition; if any processing delay meets the preset condition, predict the first driving state based on that processing delay to obtain the second driving state; based on each second driving state, use the virtual sensor corresponding to the processing delay for simulation to obtain one or more first input signals, wherein each first input signal corresponds to each virtual sensor one-to-one; and send the one or more first input signals to the sensor simulator;
- the sensor simulator is used for receiving the first input signal, and performing calculation based on the preset front-end model and preset algorithm of the virtual sensor to obtain the output signal;
- the digital simulator is used to receive the output signal sent by the sensor simulator
- the system synchronization module is used to provide the synchronization clock to the virtual scene simulator, input signal simulator, sensor simulator and digital simulator.
- the input signal simulator is further configured to use a plurality of virtual sensors corresponding to each processing delay to perform synchronous simulation to obtain a plurality of first input signals.
- the processing delay is determined by the difference between the first processing time and the second processing time, where the first processing time is the processing time of the virtual sensor in the sensor simulator, and the second processing time is The preset real processing time of the real sensor corresponding to the virtual sensor.
- the input signal simulator is also used to simulate, based on the first driving state, with a virtual sensor corresponding to the processing delay if any processing delay does not meet the preset condition, to obtain the second input signal; and to delay sending one or more second input signals to the sensor simulator based on the processing delay.
- the sensor simulator is also used to receive the second input signal
- the virtual scene is obtained by the virtual scene simulator using at least one CPU and/or at least one GPU, and the first input signal or the second input signal is obtained by the input signal simulator through simulation on at least one GPU using a ray tracing algorithm.
- the first driving state includes the first position, the first speed, and the first acceleration of the virtual object to be tested at time t
- the input signal simulator is also used to predict, based on the processing delay and using the Kalman filtering method, the first driving state of the virtual object to be tested to obtain the second driving state, wherein the second driving state includes the second position, the second speed and the second acceleration of the virtual object to be tested at time t+T, where T is the processing delay.
- One of the possible implementations also includes a driving system and a power system simulator, in which,
- Digital simulators are also used to send output signals to the driving system
- the driving system is used to determine driving decisions based on the output signals
- the power system simulator is used to simulate driving decisions, obtain a third driving state, and feed back the third driving state to the virtual scene simulator, so that the virtual object to be tested updates the first driving state based on the third driving state.
- the above virtual sensor includes at least one of a millimeter-wave radar virtual sensor, a lidar virtual sensor, an infrared virtual sensor, and a camera virtual sensor.
- an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when it runs on a computer, causes the computer to execute the method described in the first aspect.
- an embodiment of the present application provides a computer program, which is used to execute the method described in the first aspect when the computer program is executed by a computer.
- the program in the fifth aspect may be stored in whole or in part on a storage medium packaged with the processor, and may also be stored in whole or in part in a memory not packaged with the processor.
- FIG. 1 is a schematic diagram of a system architecture provided by an embodiment of the present application.
- FIG. 2 is a schematic flowchart of a simulation testing method provided by an embodiment of the present application.
- FIG. 3 is a schematic diagram of state prediction provided by an embodiment of the present application.
- FIG. 4 is a schematic structural diagram of a simulation testing device provided by an embodiment of the present application.
- FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
- first and second are only used for descriptive purposes, and should not be construed as indicating or implying relative importance or implicitly indicating the number of indicated technical features.
- a feature defined as “first” or “second” may expressly or implicitly include one or more of that feature.
- plural means two or more.
- a hardware closed-loop solution is a system that can simulate various scenarios in the laboratory and, at the same time, jointly test the sensor capabilities, the test vehicle frame and power system, and the autonomous driving software and algorithms.
- the method based on the analog interface is to generate the analog signal received by each sensor through scene simulation, and send the signal to each sensor through the analog interface.
- the method of using the analog interface currently faces the problems of a complex system and immature solutions. For example, for the radar echoes of multiple targets in the simulated scene, only a few targets can be simulated, which cannot meet the requirements.
- the system that uses the probe wall solution to simulate dozens of target echoes is very expensive and complicated, and there is no mature solution.
- the scene simulation mode of the digital interface is generally adopted.
- the generated scene simulation signal is directly sent to the processing system of assisted driving and automatic driving through the digital interface, without passing through the actual sensor.
- the usual solution is to establish a behavioral model or a statistical model of each sensor to simulate the performance impact of the sensor. In this way, the simultaneous simulation of the behavior of multiple sensors in the same scene cannot be achieved. At the same time, behavior-based or statistical models cannot accurately characterize the performance of sensors in specific scenarios.
- the embodiment of the present application proposes a simulation test method, which can effectively solve the time-delay problem caused by simulating sensors in the simulation test of a digital scene, so that the behavior and performance of multiple sensors can be accurately and synchronously simulated in the digital scene simulator.
- FIG. 1 shows the system architecture provided by the embodiment of the present application.
- the above-mentioned system architecture includes a virtual scene simulator 100, an input signal simulator 200, a sensor simulator 300, a digital simulator 400, an assisted driving and automatic driving system 500 (hereinafter referred to as the "driving system" for convenience of description), a power system simulator 600 and a system synchronization module 700.
- the virtual scene simulator 100 is used for constructing a virtual scene and sending virtual scene information to the input signal simulator 200 .
- the virtual scene simulator 100 may be a computer or other types of computing devices, which are not specifically limited in this application.
- the virtual scene simulator 100 may include a high-speed Ethernet network card supporting the IEEE 1588 protocol and one or more central processing units (CPUs) or graphics processing units (GPUs), so as to ensure the real-time performance of the simulation.
- the construction of the virtual scene may be implemented by scene simulation software installed in the virtual scene simulator 100, and the scene simulation software may be commercial 3D simulation software or open-source 3D simulation software. No special restrictions are made.
- the virtual scene may include a virtual scene constructed by using a virtual three-dimensional model, and the virtual three-dimensional model simulates real objects in the real world.
- the real objects may include cars, people, animals, trees, roads and buildings, etc. Other real objects may also be included, which are not particularly limited in this embodiment of the present application. Understandably, the three-dimensional model can be regarded as a virtual object.
- a user usually creates a virtual object in a virtual scene and uses that virtual object as the test object. Therefore, the test object may be the first virtual object, and all other virtual objects in the virtual scene can be regarded as second virtual objects.
- the first virtual object can be a car, so that the automatic driving and assisted driving performance of the car can be tested.
- the second virtual objects around the car create obstacles that affect its driving and, in turn, its driving decisions, such as acceleration, deceleration, cornering, and parking.
- the virtual scene simulator 100 can send the information of the virtual objects in the virtual scene to the input signal simulator 200, and the information of the virtual objects can include the coordinates, materials, lighting and other information of the virtual objects.
- the information transmission between the virtual scene simulator 100 and the input signal simulator 200 adopts a standardized interface to ensure that the above-mentioned system architecture does not depend on specific scene simulation software.
- the above-mentioned first virtual object may also be an unmanned aerial vehicle or other types of automatic driving equipment, which is not particularly limited in this embodiment of the present application.
- the virtual scene simulator 100 can establish a corresponding physical model for the virtual object based on the above-mentioned physical phenomena, and at the same time can also establish model parameters corresponding to the material of the virtual object, where the material of the virtual object is kept consistent with the material used by the real object.
- the virtual scene simulator 100 may further arrange a virtual sensor on the first virtual object, where the virtual sensor is used to simulate and acquire input signals in the virtual environment.
- the input signals obtained by various virtual sensors in the virtual environment can be simulated through the above-mentioned physical models and model parameters.
- the input signal simulator 200 is used to simulate input signals obtained by various virtual sensors in the virtual environment, and send the input signals to the sensor simulator 300 for processing.
- the input signal may be simulated based on the driving state of the first virtual object and virtual scene information.
- the foregoing virtual sensors may include millimeter-wave radar virtual sensors, lidar virtual sensors, infrared virtual sensors, and camera virtual sensors, and may also include other types of virtual sensors, which are not particularly limited in this embodiment of the present application.
- the input signal simulator 200 may be a computer or other types of computing devices, which are not specifically limited in this application.
- the input signal simulator 200 may include a high-speed Ethernet network card supporting the IEEE1588 protocol, and one or more GPUs.
- the simulation process of acquiring the input signal can be realized by the input signal simulation software.
- the format, type, and related parameters of the virtual sensors can be configured, the information of the virtual objects sent from the virtual scene simulator 100 and the material parameters of the related virtual objects can be received, and the ray tracing algorithm can be used to simulate different types of virtual sensors based on the driving state of the first virtual object (for example, its position, speed and acceleration), the above-mentioned physical models and the model parameters.
- for each type of virtual sensor, the physical effects of the sensor's signal on the surfaces of virtual objects are simulated and calculated (including but not limited to reflection, scattering, diffraction and other effects), so that the input signal in the virtual environment can be obtained by simulation. For example, a signal is emitted by the transmitter of the virtual sensor, interacts physically with the surface of a virtual object, and is returned to the receiver of the virtual sensor, thereby yielding the input signal of the virtual sensor.
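As one concrete piece of such a ray-tracing calculation, the specular-reflection part of a ray's interaction with a surface can be sketched as follows (illustrative only; the scattering and diffraction effects mentioned above require separate models, and the function name is invented):

```python
def reflect(direction, normal):
    """Specular reflection of an emitted ray off a virtual object's
    surface: r = d - 2 (d . n) n, where n is a unit surface normal.
    This is only the mirror-reflection term of the interaction."""
    dot = sum(di * ni for di, ni in zip(direction, normal))
    return tuple(di - 2 * dot * ni for di, ni in zip(direction, normal))
```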
- the simulation of the input signal can be processed by the GPU to ensure the real-time performance of the simulation.
- the input signal simulator 200 can simulate capturing images in the virtual environment through a virtual camera, or simulate detecting virtual objects in the virtual environment through a millimeter-wave radar.
- the sensor simulator 300 is used to simulate the behavior and performance of each virtual sensor.
- the sensor simulator 300 can receive the input signal obtained by the input signal simulator 200 and process it by simulating the processing method of the real sensor, thereby obtaining an output signal, which is sent to the digital simulator 400 for processing.
- the sensor simulator 300 may be a computer or other types of computing devices, which are not specifically limited in this application.
- a high-speed Ethernet network card supporting the IEEE1588 protocol, one or more Field Programmable Gate Array (FPGA) acceleration cards or GPUs may be included.
- the sensor simulator 300 can simulate the front-end performance of each virtual sensor and the algorithm of each virtual sensor. The simulation of the performance of the front-end of each virtual sensor can be performed by modeling.
- the front end of each virtual sensor can be modeled as Y = G·X + N + I, where X is the input signal, Y is the output signal of the sensor front end, G is the gain of the sensor front end, N is the noise of the sensor front end, and I is the interference introduced by the sensor front end.
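A toy version of such a front-end model, with invented parameter names and Gaussian noise assumed purely for illustration, might look like:

```python
import random

def front_end(x, gain, noise_std, interference):
    """Toy sensor front-end model Y = G*X + N + I:
    gain G scales the input, N is zero-mean Gaussian noise with the
    given standard deviation, and I is a deterministic interference
    term. The real front-end models are fitted to real sensors."""
    n = random.gauss(0.0, noise_std)
    return gain * x + n + interference
```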
- the front-end output Y can then be processed by the preset algorithm of the virtual sensor to obtain the output signal.
- the model of the virtual sensor front end is constructed based on the real sensor; therefore, using the model to simulate the front end of the real sensor ensures that the performance of the front end of the virtual sensor is statistically consistent with that of the front end of the real sensor.
- the algorithm of the virtual sensor can be implemented by directly using the algorithm of the real sensor, which will not be repeated here.
- the sensor simulator 300 can use a corresponding FPGA accelerator card or GPU card to simulate, so as to ensure the real-time requirements and computing power requirements of the simulation.
- the digital simulator 400 is used for receiving the output signal sent by the sensor simulator 300 and can send the output signal to the driving system 500 .
- the digital simulator 400 may be a computer or other types of computing devices, which are not specifically limited in this application.
- the digital simulator 400 may include a high-speed Ethernet network card supporting the IEEE1588 protocol, one or more FPGA acceleration cards, and multiple interfaces.
- the interface includes a physical interface and a digital interface between the digital simulator 400 and the real sensor.
- the digital simulator 400 can use the physical interface and the digital interface to receive real data collected by different real sensors and play back the collected data for testing.
- the physical interface may include but is not limited to Controller Area Network (CAN), Mobile Industry Processor Interface (MIPI), Ethernet, Gigabit Multimedia Serial Link (GMSL), Flat Panel Display Link (FPD-Link), Local Interconnect Network (LIN), 100M-T1 and other interfaces.
- the digital interface may include but is not limited to a high-speed Peripheral Component Interconnect Express (PCIE) interface, a Serial Advanced Technology Attachment (SATA) interface, a high-speed Ethernet interface, a digital fiber optic interface, and the like.
- the above-mentioned interfaces may also include a physical interface between the digital simulator 400 and the driving system 500 , and a digital interface between the digital simulator 400 and the powertrain simulator 600 .
- the physical interface may include a CAN interface.
- the digital simulator 400 may be connected to the CAN bus of the driving system 500, thereby enabling the digital simulator 400 to send the output signal to the driving system 500.
- the driving system 500 can make driving decisions based on the output signal.
- the digital interface may include an Ethernet interface for transmitting the driving state of the object under test (eg, the vehicle) output by the powertrain simulator 600 .
- the driving system 500 is configured to receive the output signal sent by the digital simulator 400 and make driving decisions based on the output signal.
- the driving system 500 may be located in a real object to be tested.
- the driving system 500 may be located inside a real vehicle to be tested; in this case, the vehicle serves as the object to be tested. It can be understood that the driving system 500 can also be tested independently.
- the driving system 500 can be taken out of a real vehicle, so that the driving system 500 can be used as the object to be tested.
- the driving decision may include operations such as acceleration, braking, deceleration, and turning, and may also include other operations, which are not particularly limited in this embodiment.
- the driving system 500 can send the above driving decision to the powertrain simulator 600, so that the powertrain simulator 600 can perform a simulation based on the driving decision to update the driving states of the real and virtual objects to be tested.
- the power system simulator 600 is used for receiving the driving decision sent by the driving system 500 and simulating the dynamic characteristics of the real vehicle based on the driving decision, thereby outputting the driving state corresponding to the real vehicle and feeding the driving state back to the first virtual object in the virtual scene, so that the first virtual object can be updated based on the driving state, thereby completing the simulation test.
- the power system simulator 600 may be a computer or other type of computing device, which is not specifically limited in this application. The power system simulator 600 may include multiple interfaces.
- the interface may include a physical interface (eg, a CAN interface) between the powertrain simulator 600 and the driving system 500, and a digital interface (eg, an Ethernet interface) between the powertrain simulator 600 and the virtual scene simulator.
- the system synchronization module 700 is used to provide a synchronization clock for the virtual scene simulator 100, the input signal simulator 200, the sensor simulator 300 and the digital simulator 400, so as to ensure clock synchronization among the virtual scene simulator 100, the input signal simulator 200, the sensor simulator 300 and the digital simulator 400.
- the system synchronization module 700 may adopt, for example, a high-speed Ethernet switch supporting the 1588 synchronization protocol, or may be a dedicated synchronization module, which is not particularly limited in this embodiment of the present application.
- the virtual scene simulator 100 , the input signal simulator 200 , the sensor simulator 300 , and the digital simulator 400 may perform data interaction through a high-speed Ethernet switch or other high-speed data connection devices.
- the schematic flowchart of an embodiment of the simulation testing method provided by the embodiments of the present application includes the following steps:
- Step 101 using the virtual scene simulator 100 to construct a virtual scene.
- the virtual scene can be constructed by the virtual scene simulator 100 .
- the virtual scene may include a scene to be tested, and the scene to be tested may include a virtual object and a material corresponding to the virtual object.
- the virtual object may include: a car, a person, an animal, a tree, a road, a building, etc., or may include other objects, which are not particularly limited in this embodiment of the present application.
- the virtual object to be tested can also be determined in the virtual scene, for example, the virtual object to be tested can be a car.
- the first parameter of the virtual sensor can be configured through the input signal simulator 200 to simulate the acquisition of the input signal by the virtual sensor.
- the configured first parameter information may include: the number of sensors, the type of the sensors, the assembly parameters of the sensors on the real vehicle (for example, the installation height and angle of the sensors on the vehicle, etc.), and the physical parameters of the sensors (for example, the number, position and direction of the transmitting and receiving antennas of the millimeter-wave radar, the frequency, working mode and number of lines of the lidar, the field of view and focal length of the camera, etc.).
- the virtual object to be tested can be regarded as the first virtual object, and all virtual objects in the virtual scene except the first virtual object can be regarded as the second virtual object.
- for example, a vehicle in the virtual scene is used as the first virtual object, that is, the virtual object to be tested, and the other virtual objects (for example, cars, people, animals, trees, roads, buildings, etc.) can be used as the second virtual objects.
- the second parameter of the virtual sensor may also be configured in the sensor simulator 300 .
- the configured second parameter information may include: the number of sensors, the type of sensors, the front-end parameters of the sensors (for example, the front-end gain G, the front-end noise N, and the interference I introduced by the front end), the sensor processing delay, the sensor algorithm, etc.
- Step 102 the virtual scene simulator 100 sends the virtual scene information to the input signal simulator 200 .
- the virtual scene information may include related information of all virtual objects in the virtual scene, where the related information may include coordinate positions, material information (for example, plastic, metal, etc.), lighting conditions, etc., which are not specially limited in the embodiments of the present application.
- Step 103 the input signal simulator 200 simulates and acquires multiple input signals of the first virtual object.
- the virtual sensor may include: one or more of a millimeter-wave radar virtual sensor, a lidar virtual sensor, an infrared virtual sensor, and a camera virtual sensor.
- the input signal may include echo signals and/or images. Exemplarily, echo signals can be obtained through a millimeter-wave radar virtual sensor, a lidar virtual sensor, and an infrared virtual sensor, and a captured image can also be obtained through a camera virtual sensor.
- the input signal can be obtained by calculation using a ray tracing algorithm based on the driving state St of the first virtual object and the virtual scene information. It can be understood that other algorithms can also be used to obtain the input signal, which is not specially limited. The driving state St may include (Xi, Vi, Ai), wherein,
- Xi is the position of the first virtual object
- Vi is the velocity of the first virtual object
- Ai is the acceleration of the first virtual object. It can be understood that the state of the above-mentioned first virtual object may also include other variables, which are not particularly limited in this embodiment of the present application.
- each input signal has a one-to-one correspondence with each virtual sensor.
- for example, the input signal obtained by the millimeter-wave radar virtual sensor may be echo input signal A,
- the input signal obtained by the lidar virtual sensor may be echo input signal B,
- and the input signal obtained by the camera virtual sensor may be image input signal C.
- the multiple input signals can also be synchronized to ensure that the input signal simulator 200 can simultaneously obtain the input signals acquired by multiple virtual sensors for the same scene.
- when the virtual sensor simulates the processing of the input signal, a simulation processing time Tf is generated.
- when the real sensor processes the input signal, it will generate a real processing time Tz.
- the simulation processing time Tf generated by the virtual sensor and the real processing time Tz generated by the real sensor will be different.
- the above-mentioned simulation processing time Tf may be greater than, equal to, or less than the real processing time Tz.
- delay compensation can be performed on the above-mentioned input signal in the input signal simulator 200, so that the above-mentioned simulation test can better simulate the real scene.
- the delay refers to the difference between the simulation processing time Tf and the real processing time Tz.
- the input signal simulator 200 can also acquire the simulation processing time Tf of each virtual sensor in the sensor simulator 300 .
- the simulation processing time Tf corresponds to the virtual sensor one-to-one.
- the simulation processing time Tf1 of the millimeter-wave radar virtual sensor, the simulation processing time Tf2 of the lidar virtual sensor, the simulation processing time Tf3 of the camera virtual sensor, and the like may be obtained.
- the real processing time Tz of the real sensor corresponding to the virtual sensor can also be obtained.
- the real processing time Tz corresponds to the real sensor one-to-one.
- the real processing time Tz1 of the real sensor of the millimeter wave radar, the real processing time Tz2 of the real sensor of the lidar, the real processing time Tz3 of the real sensor of the camera, and the like can be obtained.
- the input signal simulator 200 may compare the simulated processing time Tf of each virtual sensor with the corresponding real processing time Tz.
- for example, the simulation processing time Tf of the virtual lidar sensor is 5 ms, and the real processing time Tz of the real lidar sensor is 10 ms; in this case, the driving state of the first virtual object can be predicted, so that the input signal simulator 200 can perform simulation based on the predicted driving state, thereby obtaining the second input signal.
- the second input signal may be the input signal predicted for the time Tf-Tz later, and the prediction method may be the Kalman filtering method, or other prediction methods may be used, which is not particularly limited in this embodiment of the present application.
- Xi is the position of the first virtual object
- Vi is the velocity of the first virtual object
- Ai is the acceleration of the first virtual object. It can be understood that the state of the above-mentioned first virtual object may also include other variables, which are not specially limited in this embodiment of the present application.
- S(t+T) = Fi*S(t) + Bi*Ui + Ni, wherein,
- Fi is the state transition matrix, Bi is the control matrix, Ui is the control input, and Ni is the state prediction noise. Thereby, the prediction of the future driving state of the first virtual object can be completed. Next, based on the future driving state of the first virtual object, the input signal simulator 200 may be used to simulate the input signal, thereby obtaining the second input signal.
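A simplified sketch of the prediction step above, assuming a constant-acceleration motion model and omitting the control term Bi*Ui and the noise Ni — this is only the state-transition part of the Kalman prediction, not the full filter:

```python
def predict_state(s_t, T):
    """Prediction step for driving state s_t = (position, velocity, acceleration):
    applies the constant-acceleration state-transition matrix over horizon T.
    Control input (Bi*Ui) and prediction noise (Ni) are omitted for clarity."""
    x, v, a = s_t
    return (x + v * T + 0.5 * a * T * T, v + a * T, a)

# Predict the state 5 ms (0.005 s) ahead, as for a Tf - Tz delay of 5 ms.
s2 = predict_state((100.0, 20.0, 2.0), 0.005)
```

The input signal simulator would then run the virtual sensors against the predicted state s2 instead of the current state, which is exactly the second-input-signal mechanism described here.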
- the simulation processing time Tf1 of the millimeter-wave radar virtual sensor is 10 ms; that is to say, the millimeter-wave radar virtual sensor needs 10 ms to process the input signal, so that the first input signal of the millimeter-wave radar can be obtained;
- the simulation processing time Tf2 of the lidar virtual sensor is 12 ms; that is to say, the lidar virtual sensor needs 12 ms to process the input signal, so that the first input signal of the lidar can be obtained;
- the real processing time Tz2 of the real lidar sensor is 8 ms; that is to say, the real lidar sensor needs 8 ms to process the input signal, so that the first input signal of the lidar can be obtained.
- the driving state of the first virtual object can be predicted based on the maximum simulation delay (for example, Ty1), and based on the predicted driving state, the millimeter-wave radar virtual sensor and the lidar virtual sensor can be used to perform simulation to obtain the second input signal of the millimeter-wave radar and the second input signal of the lidar, thereby ensuring that all the first input signals can obtain effective delay compensation.
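Selecting the maximum simulation delay across sensors can be sketched as follows. Only Tf1, Tf2 and Tz2 appear in the example above, so the Tz1 value for the millimeter-wave radar is an assumed illustration, and the helper name is not from the patent:

```python
def max_simulation_delay(sensors):
    """Return the largest simulation delay Ty = Tf - Tz over all sensors, plus
    the per-sensor delays; the maximum is used as the common prediction horizon
    so that every first input signal receives effective delay compensation."""
    delays = {name: tf - tz for name, (tf, tz) in sensors.items()}
    return max(delays.values()), delays

ty_max, delays = max_simulation_delay({
    "mmw_radar": (10.0, 7.0),   # Tf1 = 10 ms; Tz1 = 7 ms is an assumed value
    "lidar":     (12.0, 8.0),   # Tf2 = 12 ms, Tz2 = 8 ms from the example above
})
```

With these numbers the common prediction horizon would be 4 ms; sensors whose own delay is smaller are then held back before sending, as described below.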
- FIG. 3 is a schematic diagram of the prediction of the second input signal.
- the driving state of the first virtual object at time t is S1
- the millimeter-wave radar virtual sensor on the first virtual object can simulate the input signal 101 based on the driving state S1 at time t.
- the driving state of the first virtual object at time t+T1 is predicted, so that the driving state of the first virtual object at time t+T1 can be obtained as S2, where T1 is the delay, that is, the delay of the millimeter-wave radar virtual sensor.
- the millimeter-wave radar virtual sensor can obtain the second input signal 102 by simulation based on the driving state S2 at time t+T1, so that the prediction of the second input signal 102 can be completed, and the delay compensation of the input signal can be completed.
- the virtual sensor of the lidar can also predict the input signal of the lidar in the manner shown in FIG. 3 above, thereby obtaining the second input signal of the lidar, which will not be repeated here.
- the simulation delay of each virtual sensor may also be corrected.
- for example, the delay compensation time of the millimeter-wave radar virtual sensor may also be used for the lidar virtual sensor.
- in this case, the input signal simulator 200 can send the second input signal predicted for the lidar with a delay T2, so that the sum of the delay T2 and the simulation processing time corresponds to the predicted second input signal, thereby more accurately simulating the performance of the real sensor.
- T2 is the difference between the delay compensation time and the simulation delay.
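The hold-back T2 (delay compensation time minus a sensor's own simulation delay) can be sketched as a release schedule. The function and sensor names are illustrative:

```python
import heapq

def schedule_sends(compensation_time, sim_delays):
    """For each sensor, compute the extra send delay T2 = compensation_time -
    simulation_delay and return (T2, sensor) pairs ordered by release time, so
    every predicted second input signal lines up with the common horizon."""
    queue = [(compensation_time - d, name) for name, d in sim_delays.items()]
    heapq.heapify(queue)
    return [heapq.heappop(queue) for _ in range(len(queue))]

# The mmw radar set the compensation horizon; the lidar is held back by T2 = 3 ms.
sends = schedule_sends(5.0, {"mmw_radar": 5.0, "lidar": 2.0})
```

The sensor defining the horizon is sent immediately (T2 = 0), while faster sensors are delayed so all second input signals correspond to the same predicted instant.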
- Step 104 the input signal simulator 200 sends a plurality of input signals to the sensor simulator 300 .
- the multiple input signals may be sent to the sensor simulator 300, wherein the input signals may include the first input signal and/or the second input signal.
- step 105 the sensor simulator 300 processes multiple input signals and outputs multiple output signals.
- the sensor simulator 300 can process the aforementioned multiple input signals, thereby obtaining multiple output signals.
- each virtual sensor may use different preset front-end models respectively, and the embodiments of the present application do not specifically limit the implementation of the specific front-end models.
- each input signal has a one-to-one correspondence with each output signal.
- for example, the echo input signal A obtained by the millimeter-wave radar virtual sensor can be input into the front-end model of the millimeter-wave radar virtual sensor and processed using the preset millimeter-wave radar sensor algorithm to obtain the echo output signal A;
- the echo input signal B obtained by the lidar virtual sensor can be input into the front-end model of the lidar virtual sensor and processed using the preset lidar sensor algorithm to obtain the echo output signal B;
- or the image input signal C obtained by the camera virtual sensor can be input into the front-end model of the camera virtual sensor and processed using the preset camera sensor algorithm to obtain the image output signal C.
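Step 105 amounts to routing each input signal through its own front-end model and preset algorithm, one output per input. A minimal sketch with stand-in models — the lambdas are placeholders, not real sensor front ends or algorithms:

```python
def run_sensor_simulator(inputs, pipelines):
    """Route each input signal through its sensor's front-end model and preset
    algorithm, producing one output signal per input (one-to-one)."""
    outputs = {}
    for sensor, signal in inputs.items():
        front_end, algorithm = pipelines[sensor]
        outputs[sensor] = algorithm(front_end(signal))
    return outputs

# Toy front-end/algorithm stand-ins; the real models are sensor-specific.
pipelines = {
    "mmw_radar": (lambda s: [2.0 * v for v in s], sum),   # gain, then detect
    "camera":    (lambda s: s, lambda s: max(s)),         # pass-through, then peak
}
out = run_sensor_simulator({"mmw_radar": [1.0, 2.0], "camera": [3.0, 1.0]}, pipelines)
```

Because each pipeline is independent, the per-sensor work maps naturally onto the FPGA or GPU cards mentioned earlier.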
- Step 106 sending a plurality of output signals to the digital simulator 400 .
- the aforementioned multiple output signals can be sent to the digital simulator 400 .
- Step 107 the digital simulator 400 receives and processes the multiple output signals, and sends the multiple output signals to the driving system 500 .
- the digital simulator 400 may receive the output signal corresponding to each virtual sensor sent by the sensor simulator 300 . In order to test the driving performance of the real vehicle, the digital simulator 400 may also send the above-mentioned multiple output signals to the driving system 500 of the real vehicle.
- step 108 the driving system 500 determines a driving decision based on the plurality of output signals.
- the driving system 500 makes a driving decision after receiving the above-mentioned multiple output signals sent by the digital simulator 400 .
- the driving decision may include operations such as acceleration, deceleration, braking, and turning, and may also include other driving decisions, which are not specifically limited in this embodiment of the present application.
- Step 109 sending the driving decision to the powertrain simulator 600 .
- Step 110 the power system simulator 600 simulates the driving state St' of the real vehicle based on the driving decision.
- this driving state St' corresponds to a driving decision.
- for example, if the driving decision is to accelerate, the driving state of the real vehicle simulated by the power system simulator 600 is accelerated driving; if the driving decision is to brake, the driving state of the real vehicle simulated by the power system simulator 600 is the parking state after braking.
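The decision-to-state mapping can be sketched as a toy powertrain step. The decision names, the acceleration table and the time step are illustrative; a real power system simulator models full vehicle dynamics:

```python
def simulate_powertrain(state, decision, dt=0.1):
    """Toy powertrain step: map a driving decision to an updated driving state
    (position, velocity) over one time step dt (values illustrative)."""
    accel = {"accelerate": 2.0, "decelerate": -1.0, "brake": -6.0, "coast": 0.0}[decision]
    x, v = state
    v = max(0.0, v + accel * dt)   # braking cannot produce reverse motion
    return (x + v * dt, v)

# One braking step from 10 m/s
state = simulate_powertrain((0.0, 10.0), "brake")
```

Repeated "brake" steps drive the velocity to zero, matching the "parking state after braking" described above; the resulting state is what gets fed back to the virtual scene simulator.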
- Step 111 feedback the driving state St' to the virtual scene simulator 100, so that the virtual scene simulator 100 updates the driving state St of the first virtual object based on St'.
- the driving state St' can be fed back to the virtual scene simulator 100, thereby enabling the virtual scene simulator 100 to update the driving of the first virtual object based on the St' State St.
- for example, when the first virtual object (the test vehicle) is about to collide with a second virtual object (another vehicle), a driving decision can be determined (eg, braking by the driving system 500), and the braking decision can be fed back to the first virtual object, thereby completing the simulation test of the entire system.
- Step 112 the virtual scene simulator 100 updates the driving state of the first virtual object based on the driving state St'.
- the driving state of the object to be tested is predicted, thereby compensating for the processing delay, which can effectively solve the delay problem caused by the simulated sensors in the simulation test of the digital scene. Therefore, the behavior and performance of multiple sensors can be accurately and synchronously simulated in the digital scene simulator.
- FIG. 4 is a schematic structural diagram of an embodiment of the simulation test device of the present application.
- the above-mentioned simulation test device 40 is applied to an input signal simulator, and the input signal simulator is located in an automatic driving test framework.
- the automatic driving test framework also includes a virtual scene simulator and a sensor simulator. The virtual scene simulator is used to simulate a virtual scene, the virtual scene includes a virtual object to be tested, and the virtual object to be tested includes a first driving state and a plurality of virtual sensors. The simulation test device 40 may include: a receiving circuit 41, a prediction circuit 42, a first simulation circuit 43 and a first sending circuit 44;
- the receiving circuit 41 is used to obtain the processing delay of each virtual sensor
- the prediction circuit 42 is configured to determine whether each processing delay satisfies the preset condition; if any processing delay satisfies the preset condition, predict the first driving state based on the processing delay to obtain the second driving state;
- the first simulation circuit 43 is configured to perform simulation based on each second driving state using the virtual sensor corresponding to the processing delay to obtain one or more first input signals, wherein each first input signal corresponds to each virtual sensor one-to-one;
- the first sending circuit 44 is used for sending one or more first input signals to the sensor simulator.
- the above-mentioned first simulation circuit 43 is further configured to use a plurality of virtual sensors corresponding to each processing delay to perform synchronous simulation to obtain a plurality of first input signals.
- the above-mentioned processing delay is determined by the difference between the first processing time and the second processing time, wherein the first processing time is the processing time of the virtual sensor in the sensor simulator, and the second processing time is the processing time of the corresponding real sensor.
- the above-mentioned apparatus 40 further includes: a second analog circuit 45 and a second sending circuit 46;
- the second simulation circuit 45 is configured to use a virtual sensor corresponding to the processing delay to perform simulation based on the first driving state to obtain a second input signal if any processing delay does not meet the preset condition;
- the second sending circuit 46 is configured to delay sending one or more second input signals to the sensor simulator based on the processing delay.
- the above-mentioned first input signal or second input signal is obtained by the input signal simulator through simulation on at least one GPU using a ray tracing algorithm.
- the first driving state includes the first position, the first speed, and the first acceleration of the virtual object to be tested at time t
- the prediction circuit is further configured to use the Kalman filtering method to predict, based on the processing delay, the first driving state of the virtual object to be tested to obtain the second driving state, wherein the second driving state includes the second position, the second speed and the second acceleration of the virtual object to be tested at time t+T, where T is the processing delay.
- each module of the simulation test apparatus shown in FIG. 4 is only a division of logical functions, and may be fully or partially integrated into a physical entity in actual implementation, or may be physically separated.
- these modules can all be implemented in the form of software calling through processing elements; they can also all be implemented in hardware; some modules can also be implemented in the form of software calling through processing elements, and some modules can be implemented in hardware.
- the detection module may be a separately established processing element, or may be integrated in a certain chip of the electronic device.
- the implementation of other modules is similar.
- all or part of these modules can be integrated together, and can also be implemented independently.
- each step of the above-mentioned method or each of the above-mentioned modules can be completed by an integrated logic circuit of hardware in the processor element or an instruction in the form of software.
- the above modules may be one or more integrated circuits configured to implement the above methods, such as: one or more application-specific integrated circuits (Application Specific Integrated Circuit; hereinafter referred to as: ASIC), or one or more digital signal processors (Digital Signal Processor; hereinafter referred to as: DSP), or one or more field programmable gate arrays (Field Programmable Gate Array; hereinafter referred to as: FPGA), etc.
- these modules can be integrated together and implemented in the form of a system-on-a-chip (System-On-a-Chip; hereinafter referred to as: SOC).
- FIG. 5 is a schematic structural diagram of an embodiment of an electronic device 50 of the present application; wherein, the electronic device 50 may be the above-mentioned input signal simulator 200 .
- the electronic device 50 may be a data processing device or a circuit device built in the data processing device.
- the electronic device 50 can be used to execute the functions/steps in the methods provided by the embodiments shown in FIG. 1 to FIG. 3 of the present application.
- electronic device 50 takes the form of a general-purpose computing device.
- the electronic device 50 described above may include: one or more processors 510; a communication interface 520; a memory 530; a communication bus 540 connecting different system components (including the memory 530 and the processor 510); a database 550; and one or more computer programs.
- the above-mentioned one or more computer programs are stored in the above-mentioned memory, and the above-mentioned one or more computer programs include instructions which, when executed by the above-mentioned electronic device, cause the electronic device to perform the following steps:
- based on each second driving state, use the virtual sensor corresponding to the processing delay to perform simulation to obtain one or more first input signals, wherein each first input signal corresponds to each virtual sensor one-to-one;
- One or more first input signals are sent to the sensor simulator.
- the step of causing the above-mentioned electronic device to perform simulation using a virtual sensor corresponding to the processing delay to obtain one or more first input signals includes:
- Simultaneous simulation is performed using a plurality of virtual sensors corresponding to each of the processing delays to obtain a plurality of first input signals.
- the above-mentioned processing delay is determined by the difference between the first processing time and the second processing time, wherein the first processing time is the processing time of the virtual sensor in the sensor simulator, and the second processing time is the processing time of the corresponding real sensor.
- when the above-mentioned instructions are executed by the above-mentioned electronic device, the electronic device further executes the following steps:
- a virtual sensor corresponding to the processing delay is used for simulation to obtain a second input signal
- Delay sending one or more second input signals to the sensor simulator based on the processing delay.
- the above-mentioned sensor simulator is used to receive the first input signal or the second input signal, and perform calculation based on a preset front-end model and a preset algorithm of the virtual sensor to obtain an output signal, where the preset front-end model of the virtual sensor is constructed based on the corresponding real sensor.
- the above-mentioned virtual scene is obtained through simulation by the virtual scene simulator using at least one CPU and/or at least one GPU, and the first input signal or the second input signal is obtained through simulation by the input signal simulator using a ray tracing algorithm on at least one GPU.
- the first driving state includes a first position, a first speed, and a first acceleration of the virtual object to be tested at time t, and when the above-mentioned instructions are executed by the above-mentioned electronic device, the step of predicting the first driving state based on the processing delay to obtain the second driving state includes:
- using a Kalman filtering method to predict the first driving state of the virtual object to be tested to obtain a second driving state, wherein the second driving state includes the second position, the second speed, and the second acceleration of the virtual object to be tested at time t+T, where T is the processing delay.
- the automatic driving test architecture also includes a digital simulator, a driving system and a power system simulator.
- the digital simulator is used to receive the output signal sent by the sensor simulator, and send the output signal to the driving system.
- the driving system is used to determine the driving decision based on the output signal, and the power system simulator is used to simulate the driving decision, obtain the third driving state, and feed the third driving state back to the virtual scene simulator, so that the first driving state of the virtual object to be tested is updated based on the third driving state.
- the above virtual sensor includes at least one of a millimeter-wave radar virtual sensor, a lidar virtual sensor, an infrared virtual sensor, and a camera virtual sensor.
- the interface connection relationship between the modules illustrated in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the electronic device 50 .
- the electronic device 50 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
- the above-mentioned electronic device 50 includes corresponding hardware structures and/or software modules for executing each function.
- the embodiments of the present application can be implemented in hardware or a combination of hardware and computer software. Whether a function is performed by hardware or computer software driving hardware depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functionality using different methods for each particular application, but such implementations should not be considered beyond the scope of the embodiments of the present invention.
- each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
- the above-mentioned integrated modules may be implemented in the form of hardware, or in the form of software functional modules. It should be noted that the division of modules in the embodiment of the present invention is schematic and is merely a logical function division; other division manners are possible in actual implementation.
- Each functional unit in each of the embodiments of the present application may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit.
- the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
- if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
- the computer-readable storage medium includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or some of the steps of the methods described in the embodiments of the present application.
- the aforementioned storage medium includes any medium that can store program code, such as a flash memory, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disc.
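The delay-compensation step described above — predicting the first driving state at time t forward by the processing delay T to obtain the second driving state at t+T — can be sketched in code. This is a minimal illustration only, assuming a one-dimensional constant-acceleration motion model and showing just the Kalman prediction step; the embodiment does not specify the state dimensionality, the noise covariances, or a measurement update, so those are omitted here:

```python
import numpy as np

def predict_state(first_state, T):
    """Predict the driving state forward by the processing delay T
    (Kalman prediction step only, no measurement update).

    first_state: [position, velocity, acceleration] at time t.
    Returns the predicted [position, velocity, acceleration] at t + T.
    """
    # Constant-acceleration state-transition matrix for a time step T.
    F = np.array([
        [1.0, T, 0.5 * T * T],
        [0.0, 1.0, T],
        [0.0, 0.0, 1.0],
    ])
    return F @ np.asarray(first_state, dtype=float)

# First driving state at time t: position 10 m, velocity 5 m/s,
# acceleration 2 m/s^2; processing delay T = 0.1 s.
second_state = predict_state([10.0, 5.0, 2.0], T=0.1)
```

With these illustrative numbers the vehicle is predicted to be at 10.51 m, moving at 5.2 m/s, with unchanged acceleration — the input signal simulator would then run the virtual sensor against this predicted state instead of the stale state at t.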
Claims (24)
- A simulation test method, applied to an input signal simulator, wherein the input signal simulator is located in an automatic driving test architecture, the automatic driving test architecture further includes a virtual scene simulator and a sensor simulator, the virtual scene simulator is configured to simulate a virtual scene, the virtual scene includes a virtual object under test, and the virtual object under test includes a first driving state and a plurality of virtual sensors, the method comprising: obtaining a processing delay of each virtual sensor; determining whether each processing delay satisfies a preset condition; if any processing delay satisfies the preset condition, predicting the first driving state based on the processing delay to obtain a second driving state; performing, based on each second driving state, simulation using the virtual sensor corresponding to the processing delay to obtain one or more first input signals, wherein each first input signal is in one-to-one correspondence with each virtual sensor; and sending the one or more first input signals to the sensor simulator.
- The method according to claim 1, wherein the performing simulation using the virtual sensor corresponding to the processing delay to obtain one or more first input signals comprises: performing synchronous simulation using a plurality of virtual sensors respectively corresponding to each processing delay, to obtain a plurality of first input signals.
- The method according to claim 1 or 2, wherein the processing delay is determined by a difference between a first processing time and a second processing time, wherein the first processing time is a processing time of the virtual sensor in the sensor simulator, and the second processing time is a preset real processing time of a real sensor corresponding to the virtual sensor.
- The method according to any one of claims 1 to 3, further comprising: if any processing delay does not satisfy the preset condition, performing, based on the first driving state, simulation using the virtual sensor corresponding to the processing delay to obtain a second input signal; and sending one or more second input signals to the sensor simulator with a delay based on the processing delay.
- The method according to claim 4, wherein the sensor simulator is configured to receive the first input signal or the second input signal, and to perform calculation based on a preset front-end model of the virtual sensor and a preset algorithm to obtain an output signal, the preset front-end model of the virtual sensor being Y=G*X+N+I, where Y is the output signal of the front-end model, X is the first input signal or the second input signal, G is the gain of the front end of the virtual sensor, N is the noise of the front end of the virtual sensor, and I is the interference introduced by the front end of the virtual sensor.
- The method according to claim 4, wherein the virtual scene is obtained through simulation by the virtual scene simulator using at least one CPU and/or at least one GPU, and the first input signal or the second input signal is obtained through simulation by the input signal simulator using a ray tracing algorithm on at least one GPU.
- The method according to claim 1, wherein the first driving state includes a first position, a first velocity, and a first acceleration of the virtual object under test at time t, and the predicting the first driving state based on the processing delay to obtain a second driving state comprises: predicting, based on the processing delay, the first driving state of the virtual object under test using a Kalman filtering method to obtain the second driving state, wherein the second driving state includes a second position, a second velocity, and a second acceleration of the virtual object under test at time t+T, where T is the processing delay.
- The method according to claim 1, wherein the automatic driving test architecture further includes a digital simulator, a driving system, and a power system simulator, the digital simulator is configured to receive an output signal sent by the sensor simulator and to send the output signal to the driving system, the driving system is configured to determine a driving decision based on the output signal, and the power system simulator is configured to simulate the driving decision to obtain a third driving state and to feed the third driving state back to the virtual scene simulator, so that the virtual object under test updates the first driving state based on the third driving state.
- The method according to any one of claims 1 to 8, wherein the virtual sensor includes at least one of a millimeter-wave radar virtual sensor, a lidar virtual sensor, an infrared virtual sensor, and a camera virtual sensor.
- A simulation test apparatus, applied to an input signal simulator, wherein the input signal simulator is located in an automatic driving test architecture, the automatic driving test architecture further includes a virtual scene simulator and a sensor simulator, the virtual scene simulator is configured to simulate a virtual scene, the virtual scene includes a virtual object under test, and the virtual object under test includes a first driving state and a plurality of virtual sensors, the apparatus comprising: a receiving circuit, configured to obtain a processing delay of each virtual sensor; a prediction circuit, configured to determine whether each processing delay satisfies a preset condition, and if any processing delay satisfies the preset condition, predict the first driving state based on the processing delay to obtain a second driving state; a first simulation circuit, configured to perform, based on each second driving state, simulation using the virtual sensor corresponding to the processing delay to obtain one or more first input signals, wherein each first input signal is in one-to-one correspondence with each virtual sensor; and a first sending circuit, configured to send the one or more first input signals to the sensor simulator.
- The apparatus according to claim 10, wherein the first simulation circuit is further configured to perform synchronous simulation using a plurality of virtual sensors respectively corresponding to each processing delay, to obtain a plurality of first input signals.
- The apparatus according to claim 10 or 11, wherein the processing delay is determined by a difference between a first processing time and a second processing time, wherein the first processing time is a processing time of the virtual sensor in the sensor simulator, and the second processing time is a preset real processing time of a real sensor corresponding to the virtual sensor.
- The apparatus according to any one of claims 10 to 12, further comprising: a second simulation circuit, configured to, if any processing delay does not satisfy the preset condition, perform, based on the first driving state, simulation using the virtual sensor corresponding to the processing delay to obtain a second input signal; and a second sending circuit, configured to send one or more second input signals to the sensor simulator with a delay based on the processing delay.
- The apparatus according to claim 13, wherein the first input signal or the second input signal is obtained through simulation by the input signal simulator using a ray tracing algorithm on at least one GPU.
- The apparatus according to claim 10, wherein the first driving state includes a first position, a first velocity, and a first acceleration of the virtual object under test at time t, and the prediction circuit is further configured to predict, based on the processing delay, the first driving state of the virtual object under test using a Kalman filtering method to obtain the second driving state, wherein the second driving state includes a second position, a second velocity, and a second acceleration of the virtual object under test at time t+T, where T is the processing delay.
- A simulation test system, comprising: a virtual scene simulator, an input signal simulator, a sensor simulator, a digital simulator, and a system synchronization module, wherein the virtual scene simulator is configured to simulate a virtual scene, the virtual scene includes a virtual object under test, and the virtual object under test includes a first driving state and a plurality of virtual sensors; the input signal simulator is configured to: obtain a processing delay of each virtual sensor; determine whether each processing delay satisfies a preset condition; if any processing delay satisfies the preset condition, predict the first driving state based on the processing delay to obtain a second driving state; perform, based on each second driving state, simulation using the virtual sensor corresponding to the processing delay to obtain one or more first input signals, wherein each first input signal is in one-to-one correspondence with each virtual sensor; and send the one or more first input signals to the sensor simulator; the sensor simulator is configured to receive the first input signal and perform calculation based on a preset front-end model of the virtual sensor and a preset algorithm to obtain an output signal; the digital simulator is configured to receive the output signal sent by the sensor simulator; and the system synchronization module is configured to provide a synchronized clock to the virtual scene simulator, the input signal simulator, the sensor simulator, and the digital simulator.
- The system according to claim 16, wherein the input signal simulator is further configured to perform synchronous simulation using a plurality of virtual sensors respectively corresponding to each processing delay, to obtain a plurality of first input signals.
- The system according to claim 16 or 17, wherein the processing delay is determined by a difference between a first processing time and a second processing time, wherein the first processing time is a processing time of the virtual sensor in the sensor simulator, and the second processing time is a preset real processing time of a real sensor corresponding to the virtual sensor.
- The system according to any one of claims 16 to 18, wherein the input signal simulator is further configured to: if any processing delay does not satisfy the preset condition, perform, based on the first driving state, simulation using the virtual sensor corresponding to the processing delay to obtain a second input signal; and send one or more second input signals to the sensor simulator with a delay based on the processing delay.
- The system according to claim 19, wherein the sensor simulator is further configured to receive the second input signal, and the preset front-end model of the virtual sensor is Y=G*X+N+I, where Y is the output signal of the front-end model, X is the first input signal or the second input signal, G is the gain of the front end of the virtual sensor, N is the noise of the front end of the virtual sensor, and I is the interference introduced by the front end of the virtual sensor.
- The system according to claim 19, wherein the virtual scene is obtained through simulation by the virtual scene simulator using at least one CPU and/or at least one GPU, and the first input signal or the second input signal is obtained through simulation by the input signal simulator using a ray tracing algorithm on at least one GPU.
- The system according to claim 16, wherein the first driving state includes a first position, a first velocity, and a first acceleration of the virtual object under test at time t, and the input signal simulator is further configured to predict, based on the processing delay, the first driving state of the virtual object under test using a Kalman filtering method to obtain the second driving state, wherein the second driving state includes a second position, a second velocity, and a second acceleration of the virtual object under test at time t+T, where T is the processing delay.
- The system according to claim 16, further comprising a driving system and a power system simulator, wherein the digital simulator is further configured to send the output signal to the driving system; the driving system is configured to determine a driving decision based on the output signal; and the power system simulator is configured to simulate the driving decision to obtain a third driving state and to feed the third driving state back to the virtual scene simulator, so that the virtual object under test updates the first driving state based on the third driving state.
- The system according to any one of claims 16 to 23, wherein the virtual sensor includes at least one of a millimeter-wave radar virtual sensor, a lidar virtual sensor, an infrared virtual sensor, and a camera virtual sensor.
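The preset front-end model recited in claims 5 and 20, Y=G*X+N+I, can be illustrated with a short sketch. The gain, noise, and interference values below are hypothetical placeholders chosen for illustration; the claims do not specify how G, N, and I are obtained for a particular virtual sensor:

```python
import numpy as np

def front_end_model(x, gain, noise, interference):
    """Apply the preset front-end model Y = G*X + N + I, where X is the
    first or second input signal, G the front-end gain, N the front-end
    noise, and I the interference introduced by the front end."""
    x = np.asarray(x, dtype=float)
    return gain * x + noise + interference

# Hypothetical example: a 4-sample input signal with gain G = 2.0,
# noise fixed at 0.0 for reproducibility, and constant interference 0.5.
y = front_end_model([1.0, 2.0, 3.0, 4.0], gain=2.0, noise=0.0, interference=0.5)
```

In practice N would typically be drawn from a sensor-specific noise distribution rather than fixed, and the sensor simulator would feed Y into its preset algorithm to produce the output signal sent to the digital simulator.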
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020237022187A KR20230116880A (ko) | 2020-12-03 | 2021-11-24 | Simulation test method, apparatus and system |
EP21899906.8A EP4250023A4 (en) | 2020-12-03 | 2021-11-24 | SIMULATION TEST METHOD, APPARATUS AND SYSTEM |
JP2023533766A JP2023551939A (ja) | 2020-12-03 | 2021-11-24 | Simulation test method, apparatus and system |
US18/327,977 US20230306159A1 (en) | 2020-12-03 | 2023-06-02 | Simulation test method, apparatus, and system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011408608.X | 2020-12-03 | ||
CN202011408608.XA CN114609923A (zh) | 2020-12-03 | Simulation test method, apparatus and system |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/327,977 Continuation US20230306159A1 (en) | 2020-12-03 | 2023-06-02 | Simulation test method, apparatus, and system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022116873A1 true WO2022116873A1 (zh) | 2022-06-09 |
Family
ID=81853801
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/132662 WO2022116873A1 (zh) | 2021-11-24 | Simulation test method, apparatus and system |
Country Status (6)
Country | Link |
---|---|
US (1) | US20230306159A1 (zh) |
EP (1) | EP4250023A4 (zh) |
JP (1) | JP2023551939A (zh) |
KR (1) | KR20230116880A (zh) |
CN (1) | CN114609923A (zh) |
WO (1) | WO2022116873A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117115364B (zh) * | 2023-10-24 | 2024-01-19 | 芯火微测(成都)科技有限公司 | Method, system, and storage medium for monitoring the test state of a microprocessor SiP circuit |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108681264A (zh) * | 2018-08-10 | 2018-10-19 | 成都合纵连横数字科技有限公司 | Digital simulation test apparatus for an intelligent vehicle |
CN108877374A (zh) * | 2018-07-24 | 2018-11-23 | 长安大学 | Vehicle platoon simulation system and method based on virtual reality and a driving simulator |
CN109213126A (zh) * | 2018-09-17 | 2019-01-15 | 安徽江淮汽车集团股份有限公司 | Autonomous vehicle test system and method |
CN110779730A (zh) * | 2019-08-29 | 2020-02-11 | 浙江零跑科技有限公司 | Test method for an L3 autonomous driving system based on vehicle-in-the-loop in a virtual driving scenario |
CN110794712A (zh) * | 2019-12-03 | 2020-02-14 | 清华大学苏州汽车研究院(吴江) | Virtual-scene-in-the-loop test system and method for autonomous driving |
US20200167436A1 (en) * | 2018-11-27 | 2020-05-28 | Hitachi, Ltd. | Online self-driving car virtual test and development system |
CN111505965A (zh) * | 2020-06-17 | 2020-08-07 | 深圳裹动智驾科技有限公司 | Method and apparatus for simulation testing of an autonomous vehicle, computer device, and storage medium |
CN111881520A (zh) * | 2020-07-31 | 2020-11-03 | 广州文远知行科技有限公司 | Anomaly detection method and apparatus for autonomous driving testing, computer device, and storage medium |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10635761B2 (en) * | 2015-04-29 | 2020-04-28 | Energid Technologies Corporation | System and method for evaluation of object autonomy |
CN107807542A (zh) * | 2017-11-16 | 2018-03-16 | 北京北汽德奔汽车技术中心有限公司 | Autonomous driving simulation system |
-
2020
- 2020-12-03 CN CN202011408608.XA patent/CN114609923A/zh active Pending
-
2021
- 2021-11-24 EP EP21899906.8A patent/EP4250023A4/en active Pending
- 2021-11-24 WO PCT/CN2021/132662 patent/WO2022116873A1/zh active Application Filing
- 2021-11-24 KR KR1020237022187A patent/KR20230116880A/ko active Search and Examination
- 2021-11-24 JP JP2023533766A patent/JP2023551939A/ja active Pending
-
2023
- 2023-06-02 US US18/327,977 patent/US20230306159A1/en active Pending
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108877374A (zh) * | 2018-07-24 | 2018-11-23 | 长安大学 | Vehicle platoon simulation system and method based on virtual reality and a driving simulator |
CN108681264A (zh) * | 2018-08-10 | 2018-10-19 | 成都合纵连横数字科技有限公司 | Digital simulation test apparatus for an intelligent vehicle |
CN109213126A (zh) * | 2018-09-17 | 2019-01-15 | 安徽江淮汽车集团股份有限公司 | Autonomous vehicle test system and method |
US20200167436A1 (en) * | 2018-11-27 | 2020-05-28 | Hitachi, Ltd. | Online self-driving car virtual test and development system |
CN110779730A (zh) * | 2019-08-29 | 2020-02-11 | 浙江零跑科技有限公司 | Test method for an L3 autonomous driving system based on vehicle-in-the-loop in a virtual driving scenario |
CN110794712A (zh) * | 2019-12-03 | 2020-02-14 | 清华大学苏州汽车研究院(吴江) | Virtual-scene-in-the-loop test system and method for autonomous driving |
CN111505965A (zh) * | 2020-06-17 | 2020-08-07 | 深圳裹动智驾科技有限公司 | Method and apparatus for simulation testing of an autonomous vehicle, computer device, and storage medium |
CN111881520A (zh) * | 2020-07-31 | 2020-11-03 | 广州文远知行科技有限公司 | Anomaly detection method and apparatus for autonomous driving testing, computer device, and storage medium |
Non-Patent Citations (1)
Title |
---|
See also references of EP4250023A4 |
Also Published As
Publication number | Publication date |
---|---|
JP2023551939A (ja) | 2023-12-13 |
KR20230116880A (ko) | 2023-08-04 |
EP4250023A1 (en) | 2023-09-27 |
US20230306159A1 (en) | 2023-09-28 |
CN114609923A (zh) | 2022-06-10 |
EP4250023A4 (en) | 2024-05-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6548691B2 (ja) | Image generation system, program and method, and simulation system, program and method | |
US20210406562A1 (en) | Autonomous drive emulation methods and devices | |
US10635844B1 (en) | Methods and systems for simulating vision sensor detection at medium fidelity | |
Muckenhuber et al. | Object-based sensor model for virtual testing of ADAS/AD functions | |
WO2018066352A1 (ja) | Image generation system, program and method, and simulation system, program and method | |
CN107103104B (zh) | Intelligent connected vehicle test system based on a cross-layer collaborative architecture | |
EP3872633A1 (en) | Autonomous driving vehicle simulation method in virtual environment | |
US11941888B2 (en) | Method and device for generating training data for a recognition model for recognizing objects in sensor data of a sensor, in particular, of a vehicle, method for training and method for activating | |
WO2022246860A1 (zh) | Performance test method for an autonomous driving system | |
WO2020220248A1 (zh) | Simulation test method and system for an autonomous vehicle, storage medium, and vehicle | |
WO2022116873A1 (zh) | Simulation test method, apparatus and system | |
CN112286079A (zh) | High-fidelity hardware-in-the-loop real-scene simulation system for UAV avionics | |
CN115879323A (zh) | Autonomous driving simulation test method, electronic device, and computer-readable storage medium | |
CN116601612A (zh) | Method and system for testing a controller of a vehicle | |
CN209002122U (zh) | Camera controller test system | |
WO2024131679A1 (zh) | Road environment scenario simulation method based on vehicle-road-cloud integration, electronic device, and medium | |
CN114280562A (zh) | Radar simulation test method and computer-readable storage medium implementing the method | |
WO2023213083A1 (zh) | Target detection method and apparatus, and unmanned vehicle | |
CN116451439A (zh) | Hardware-in-the-loop test system for parking | |
CN113468735B (zh) | Lidar simulation method, apparatus, system, and storage medium | |
CN115384526A (zh) | Debugging system and debugging method | |
WO2022256976A1 (zh) | Method, system, and electronic device for constructing dense point cloud ground-truth data | |
US20190152486A1 (en) | Low-latency test bed for an image- processing system | |
CN112560258B (zh) | Test method, apparatus, device, and storage medium | |
JP6548708B2 (ja) | Low-latency test machine for an image processing system | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21899906 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2023533766 Country of ref document: JP |
|
ENP | Entry into the national phase |
Ref document number: 20237022187 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2021899906 Country of ref document: EP Effective date: 20230620 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |