WO2022116873A1 - Simulation test method, apparatus, and system - Google Patents

Simulation test method, apparatus, and system

Info

Publication number
WO2022116873A1
WO2022116873A1 (PCT/CN2021/132662)
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
simulator
sensor
input signal
driving state
Prior art date
Application number
PCT/CN2021/132662
Other languages
English (en)
French (fr)
Inventor
孔红伟
陈昌辉
王文杰
Original Assignee
Huawei Technologies Co., Ltd.
Application filed by Huawei Technologies Co., Ltd.
Priority to KR1020237022187A (KR20230116880A)
Priority to EP21899906.8A (EP4250023A4)
Priority to JP2023533766A (JP2023551939A)
Publication of WO2022116873A1
Priority to US18/327,977 (US20230306159A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/10 Geometric CAD
    • G06F 30/15 Vehicle, aircraft or watercraft design
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/20 Design optimisation, verification or simulation
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 17/00 Systems with reflecting surfaces, with or without refracting elements
    • G02B 17/02 Catoptric systems, e.g. image erecting and reversing system
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 17/00 Systems involving the use of models or simulators of said systems
    • G05B 17/02 Systems involving the use of models or simulators of said systems electric
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01M TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M 17/00 Testing of vehicles
    • G01M 17/007 Wheeled or endless-tracked vehicles
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 2219/00 Program-control systems
    • G05B 2219/20 Pc systems
    • G05B 2219/23 Pc programming
    • G05B 2219/23446 HIL hardware in the loop, simulates equipment to which a control module is fixed
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Preventing errors by testing or debugging software
    • G06F 11/3664 Environments for testing or debugging software
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2113/00 Details relating to the application field

Definitions

  • the embodiments of the present application relate to the field of simulation testing, and in particular, to a simulation testing method, device, and system.
  • assisted driving and autonomous driving technologies are rapidly developing and commercializing.
  • the use of assisted driving and autonomous driving technology will greatly change the way people travel and have a huge impact on people's work and lifestyle.
  • in assisted and autonomous driving, the perception capabilities of the sensors on the vehicle and the vehicle's ability to drive autonomously are critical to driving safely in a variety of different scenarios.
  • Related capabilities need to be tested in different scenarios to ensure that these capabilities are reliable.
  • the current test methods usually include open-road driving tests, closed-field driving tests, and simulation tests. It is difficult to traverse the various test scenarios with road driving tests: some analyses show that more than 100 million kilometers of road testing are required to ensure coverage of the various scenarios, so the efficiency is relatively low.
  • Embodiments of the present application provide a simulation testing method, device, and system, so as to provide a method for performing simulation testing in a virtual scene.
  • an embodiment of the present application provides a simulation test method applied to an input signal simulator, where the input signal simulator is located in an automatic driving test architecture, and the automatic driving test architecture further includes a virtual scene simulator and a sensor simulator,
  • the virtual scene simulator is used to simulate a virtual scene, and the virtual scene includes a virtual object to be tested, and the virtual object to be tested includes a first driving state and a plurality of virtual sensors, including:
  • the processing delay may be the difference between the processing time of the virtual sensor in the simulator and the processing time of the real sensor in the real environment.
  • the processing delay can be a positive value or a negative value. Exemplarily, if the processing time of the virtual sensor in the simulator is greater than that of the real sensor in the real environment, the processing delay is a positive value; if the processing time of the virtual sensor in the simulator is less than that of the real sensor in the real environment, the processing delay is a negative value. Therefore, whether the processing delay satisfies the preset condition can be determined by judging whether the processing delay is positive or negative.
  • each virtual sensor corresponds to its own preset front-end model and preset algorithm, so each virtual sensor has its own processing delay. That is to say, it is necessary to judge whether the processing delay of each virtual sensor satisfies the preset condition.
  • the first driving state is predicted based on the processing delay to obtain the second driving state. Specifically, the first driving state may include the position, speed, and acceleration of the virtual object to be tested. Therefore, when the processing delay of any virtual sensor satisfies a preset condition (for example, when the processing delay of the virtual sensor is a positive value), prediction can be performed based on that processing delay to obtain the second driving state, where the second driving state is the state predicted for that virtual sensor: the position, speed, and acceleration at a future moment.
  • a virtual sensor corresponding to the processing delay is used for simulation to obtain one or more first input signals, where each first input signal corresponds one-to-one to each virtual sensor. Specifically, the input signal simulator may perform simulation based on the second driving state of each virtual sensor to obtain the first input signal.
  • One or more first input signals are sent to the sensor simulator.
  • the behavior and performance of the sensor can be simulated accurately, the accuracy of the sensor simulation can be improved, and the efficiency of the simulation test can be improved.
  • using a virtual sensor corresponding to the processing delay to perform simulation to obtain one or more first input signals includes:
  • synchronous simulation is performed using the plurality of virtual sensors corresponding to the processing delays to obtain a plurality of first input signals. Specifically, when a plurality of virtual sensors are simulated at the same time, the signals obtained by each virtual sensor's simulation can be synchronized to obtain a plurality of synchronized first input signals, which keeps the signals synchronized and thereby improves the accuracy of the simulation test.
  • the accuracy of the simulation test can be improved.
  • the processing delay is determined by the difference between the first processing time and the second processing time, where the first processing time is the processing time of the virtual sensor in the sensor simulator, and the second processing time is the preset real processing time of the real sensor corresponding to the virtual sensor.
  • the performance of the real sensor can be accurately simulated.
  • One of the possible implementations includes:
  • if any processing delay does not satisfy the preset condition, a virtual sensor corresponding to the processing delay is used to perform simulation based on the first driving state to obtain a second input signal; the one or more second input signals are then sent to the sensor simulator with a delay based on the processing delay.
  • specifically, if the processing delay is a negative value, the input signal simulator may perform simulation based on the first driving state to obtain the second input signal, and delay sending the second input signal based on the processing delay to compensate for the processing delay.
  • the processing delay can be compensated, so that the performance of the real sensor can be simulated accurately.
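The two branches above, predict-then-simulate for a positive delay and simulate-then-delay-send for a negative one, can be sketched as follows. This is an illustration only; the function and parameter names are hypothetical and not from the application.

```python
import time

def handle_sensor(delay_s, first_state, predict, simulate, send):
    """Dispatch one virtual sensor's simulation based on its processing delay.

    delay_s: processing delay T = (virtual sensor processing time) minus
    (real sensor processing time). A positive delay means the virtual sensor
    lags the real one, so the driving state is predicted T seconds ahead
    before simulating; a negative delay means the virtual sensor is faster,
    so sending the simulated signal is postponed by |T| to compensate.
    """
    if delay_s > 0:                           # preset condition: positive delay
        state = predict(first_state, delay_s)  # second driving state
        send(simulate(state))                  # first input signal
    else:
        signal = simulate(first_state)         # second input signal
        time.sleep(-delay_s)                   # compensate by delaying the send
        send(signal)
```

In a real system the "delayed send" would be scheduled on the synchronized clock rather than with a blocking sleep; the sketch only shows the control flow.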
  • the sensor simulator is used to receive the first input signal or the second input signal and to perform calculation based on the preset front-end model and preset algorithm of the virtual sensor to obtain an output signal; the preset front-end model of the virtual sensor is constructed based on the corresponding real sensor.
  • the performance of the real sensor can be simulated more accurately.
  • the virtual scene is obtained by the virtual scene simulator through simulation using at least one CPU and/or at least one GPU, and the first input signal or the second input signal is obtained by the input signal simulator through simulation on at least one GPU using a ray tracing algorithm.
  • the signal is simulated by at least one hardware unit such as CPU and/or GPU, which can improve the speed of signal simulation, thereby improving the efficiency of simulation testing.
  • the first driving state includes a first position, a first speed, and a first acceleration of the virtual object to be tested at time t, and predicting the first driving state based on the processing delay to obtain the second driving state includes:
  • using the Kalman filtering method to predict the first driving state of the virtual object to be tested to obtain the second driving state, wherein the second driving state includes the second position, the second speed, and the second acceleration of the virtual object to be tested at time t+T, and T is the processing delay.
  • the processing delay can be effectively compensated, thereby improving the accuracy of the simulation test.
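The application specifies Kalman filtering for this prediction; as a minimal sketch, the state-prediction step under an assumed constant-acceleration motion model (no covariance update, illustrative names) advances (position, speed, acceleration) by the processing delay T:

```python
def predict_state(p, v, a, T):
    """Prediction step of a constant-acceleration motion model: advance the
    first driving state (position p, speed v, acceleration a) by the
    processing delay T to obtain the second driving state at time t+T."""
    p2 = p + v * T + 0.5 * a * T * T   # second position
    v2 = v + a * T                     # second speed
    a2 = a                             # acceleration assumed constant over T
    return p2, v2, a2
```

A full Kalman filter would also propagate the state covariance and fuse measurements; only the deterministic prediction is shown here.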
  • the automatic driving test architecture also includes a digital simulator, a driving system, and a power system simulator.
  • the digital simulator is used to receive the output signal sent by the sensor simulator and to send the output signal to the driving system.
  • the driving system is used to determine a driving decision based on the output signal, and the power system simulator is used to simulate the driving decision, obtain a third driving state, and feed the third driving state back to the virtual scene simulator, so that the virtual object to be tested updates the first driving state based on the third driving state.
  • a driving decision can be obtained based on the output signal, and the driving state of the virtual object to be tested can be updated based on the driving decision, thereby forming a closed loop of the simulation test and improving its efficiency.
  • the virtual sensor includes at least one of a millimeter-wave radar virtual sensor, a lidar virtual sensor, an infrared virtual sensor, and a camera virtual sensor.
  • a simulation test can be performed on different sensors, which can improve the flexibility of the test, thereby improving the efficiency of the simulation test.
  • an embodiment of the present application provides a simulation test device, which is applied to an input signal simulator, where the input signal simulator is located in an automatic driving test architecture, and the automatic driving test architecture further includes a virtual scene simulator and a sensor simulator.
  • the virtual scene simulator is used to simulate a virtual scene, the virtual scene includes a virtual object to be tested, and the virtual object to be tested includes a first driving state and a plurality of virtual sensors, including:
  • the receiving circuit is used to obtain the processing delay of each virtual sensor
  • a prediction circuit for judging whether each processing delay satisfies the preset condition; if any processing delay satisfies the preset condition, predicting the first driving state based on the processing delay to obtain the second driving state;
  • a first simulation circuit for performing simulation based on each second driving state using virtual sensors corresponding to the processing delay to obtain one or more first input signals, wherein each first input signal is associated with each virtual sensor one-to-one correspondence;
  • the first sending circuit is used for sending one or more first input signals to the sensor simulator.
  • the above-mentioned first simulation circuit is further configured to use a plurality of virtual sensors corresponding to each processing delay to perform synchronous simulation to obtain a plurality of first input signals.
  • the above-mentioned processing delay is determined by the difference between the first processing time and the second processing time, wherein the first processing time is the processing time of the virtual sensor in the sensor simulator, and the second processing time is the preset real processing time of the real sensor corresponding to the virtual sensor.
  • the above-mentioned device further includes:
  • the second simulation circuit is configured to use a virtual sensor corresponding to the processing delay to perform simulation based on the first driving state to obtain a second input signal if any processing delay does not meet the preset condition;
  • the second sending circuit is configured to delay sending one or more second input signals to the sensor simulator based on the processing delay.
  • the above-mentioned first input signal or second input signal is obtained by at least one GPU simulation by an input signal simulator using a ray tracing algorithm.
  • the first driving state includes the first position, the first speed, and the first acceleration of the virtual object to be tested at time t
  • the prediction circuit is further configured to use the Kalman filtering method to predict the first driving state of the virtual object to be tested based on the processing delay to obtain the second driving state, wherein the second driving state includes the second position, the second speed, and the second acceleration of the virtual object to be tested at time t+T, where T is the processing delay.
  • an embodiment of the present application provides a simulation test system, including: a virtual scene simulator, an input signal simulator, a sensor simulator, a digital simulator, and a system synchronization module; wherein,
  • the virtual scene simulator is used to simulate a virtual scene, the virtual scene includes a virtual object to be tested, and the virtual object to be tested includes a first driving state and a plurality of virtual sensors;
  • the input signal simulator is used to obtain the processing delay of each virtual sensor; determine whether each processing delay meets the preset condition; if any processing delay meets the preset condition, predict the first driving state based on the processing delay , obtain the second driving state; based on each second driving state, use the virtual sensor corresponding to the processing delay for simulation to obtain one or more first input signals, wherein each first input signal and each virtual sensor One-to-one correspondence; sending one or more first input signals to the sensor simulator;
  • the sensor simulator is used for receiving the first input signal, and performing calculation based on the preset front-end model and preset algorithm of the virtual sensor to obtain the output signal;
  • the digital simulator is used to receive the output signal sent by the sensor simulator
  • the system synchronization module is used to provide the synchronization clock to the virtual scene simulator, input signal simulator, sensor simulator and digital simulator.
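The system synchronization module described above distributes a common clock to all simulators. As a toy illustration (the class and method names are invented, and the real system would synchronize over Ethernet using IEEE 1588 rather than in-process callbacks), a shared logical clock that drives the simulators in lock-step can be sketched as:

```python
class SyncClock:
    """Toy stand-in for the system synchronization module: every registered
    simulator observes the same logical time and advances in lock-step."""

    def __init__(self, step_s=0.01):
        self.step_s = step_s      # simulation time step in seconds
        self.t = 0.0              # current synchronized time
        self.subscribers = []     # callbacks of the attached simulators

    def register(self, callback):
        """Attach a simulator; it will be called with the time on each tick."""
        self.subscribers.append(callback)

    def tick(self):
        """Advance the shared clock and notify all simulators synchronously."""
        self.t += self.step_s
        for cb in self.subscribers:
            cb(self.t)
```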
  • the input signal simulator is further configured to use a plurality of virtual sensors corresponding to each processing delay to perform synchronous simulation to obtain a plurality of first input signals.
  • the processing delay is determined by the difference between the first processing time and the second processing time, where the first processing time is the processing time of the virtual sensor in the sensor simulator, and the second processing time is The preset real processing time of the real sensor corresponding to the virtual sensor.
  • the input signal simulator is also used, if any processing delay does not satisfy the preset condition, to perform simulation with a virtual sensor corresponding to the processing delay based on the first driving state to obtain a second input signal, and to delay sending one or more second input signals to the sensor simulator based on the processing delay.
  • the sensor simulator is also used to receive the second input signal
  • the virtual scene is obtained by the virtual scene simulator through simulation using at least one CPU and/or at least one GPU, and the first input signal or the second input signal is obtained by the input signal simulator through simulation on at least one GPU using a ray tracing algorithm.
  • the first driving state includes the first position, the first speed, and the first acceleration of the virtual object to be measured at time t
  • the input signal simulator is further configured to use the Kalman filtering method to predict the first driving state of the virtual object to be tested based on the processing delay to obtain a second driving state, wherein the second driving state includes the second position, the second speed, and the second acceleration of the virtual object to be tested at time t+T, where T is the processing delay.
  • one possible implementation also includes a driving system and a power system simulator, in which:
  • Digital simulators are also used to send output signals to the driving system
  • the driving system is used to determine driving decisions based on the output signals
  • the power system simulator is used to simulate driving decisions, obtain a third driving state, and feed back the third driving state to the virtual scene simulator, so that the virtual object to be tested updates the first driving state based on the third driving state.
  • the above virtual sensor includes at least one of a millimeter-wave radar virtual sensor, a lidar virtual sensor, an infrared virtual sensor, and a camera virtual sensor.
  • an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when it runs on a computer, causes the computer to execute the method described in the first aspect.
  • an embodiment of the present application provides a computer program, which is used to execute the method described in the first aspect when the computer program is executed by a computer.
  • the program in the fifth aspect may be stored in whole or in part on a storage medium packaged with the processor, or stored in whole or in part in a memory not packaged with the processor.
  • FIG. 1 is a schematic diagram of a system architecture provided by an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of a simulation testing method provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of state prediction provided by an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a simulation testing device provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • first and second are only used for descriptive purposes, and should not be construed as indicating or implying relative importance or implicitly indicating the number of indicated technical features.
  • a feature defined as “first” or “second” may expressly or implicitly include one or more of that feature.
  • plural means two or more.
  • a hardware-in-the-loop solution is a system that can simulate various scenarios in the laboratory and, at the same time, jointly test the sensor capabilities, the test vehicle frame and power system, and the autonomous driving software and algorithms.
  • the method based on the analog interface is to generate the analog signal received by each sensor through scene simulation, and send the signal to each sensor through the analog interface.
  • the method of using the analog interface currently faces the problems of a complex system and immature solutions. For example, for the radar echoes of multiple targets in a simulated scene, only a few targets can be simulated, which cannot meet the requirements.
  • the system that uses the probe wall solution to simulate dozens of target echoes is very expensive and complicated, and there is no mature solution.
  • the scene simulation mode of the digital interface is generally adopted.
  • the generated scene simulation signal is directly sent to the processing system of assisted driving and automatic driving through the digital interface, without passing through the actual sensor.
  • the usual solution is to establish a behavioral or statistical model of each sensor to simulate the sensor's impact on performance. In this way, simultaneous simulation of the behavior of multiple sensors in the same scene cannot be achieved; moreover, behavioral or statistical models cannot accurately characterize the performance of sensors in specific scenarios.
  • the embodiment of the present application proposes a simulation test method that can effectively solve the time delay problem caused by simulated sensors in digital-scene simulation testing, so that the behavior and performance of multiple sensors can be accurately and synchronously simulated in the digital scene simulator.
  • FIG. 1 shows the system architecture provided by the embodiment of the present application.
  • the above-mentioned system architecture includes a virtual scene simulator 100, an input signal simulator 200, a sensor simulator 300, a digital simulator 400, an assisted driving and automatic driving system 500 (hereinafter referred to as the "driving system" for convenience of description), a power system simulator 600, and a system synchronization module 700.
  • the virtual scene simulator 100 is used for constructing a virtual scene and sending virtual scene information to the input signal simulator 200 .
  • the virtual scene simulator 100 may be a computer or other types of computing devices, which are not specifically limited in this application.
  • the virtual scene simulator 100 may include a high-speed Ethernet network card supporting the IEEE 1588 protocol and one or more central processing units (CPU) or graphics processing units (GPU), so as to ensure the real-time performance of the simulation.
  • the construction of the virtual scene may be implemented by scene simulation software installed in the virtual scene simulator 100, and the scene simulation software may be commercial 3D simulation software or open-source 3D simulation software. No special restrictions are made.
  • the virtual scene may include a virtual scene constructed by using a virtual three-dimensional model, and the virtual three-dimensional model simulates real objects in the real world.
  • the real objects may include cars, people, animals, trees, roads and buildings, etc. Other real objects may also be included, which are not particularly limited in this embodiment of the present application. Understandably, the three-dimensional model can be regarded as a virtual object.
  • a user usually creates a virtual object in a virtual scene and uses that virtual object as the test object. Therefore, the test object may be a first virtual object, and all virtual objects in the virtual scene other than the first virtual object can be regarded as second virtual objects.
  • the first virtual object can be a car, so that the automatic driving and assisted driving performance of the car can be tested.
  • the second virtual objects around the car create obstacles that affect the car's driving and, in turn, its driving decisions, such as acceleration, deceleration, cornering, and parking.
  • the virtual scene simulator 100 can send the information of the virtual objects in the virtual scene to the input signal simulator 200, and the information of the virtual objects can include the coordinates, materials, lighting and other information of the virtual objects.
  • the information transmission between the virtual scene simulator 100 and the input signal simulator 200 adopts a standardized interface to ensure that the above-mentioned system architecture does not depend on specific scene simulation software.
  • the above-mentioned first virtual object may also be an unmanned aerial vehicle or other types of automatic driving equipment, which is not particularly limited in this embodiment of the present application.
  • the virtual scene simulator 100 can establish a corresponding physical model for the virtual object based on the above-mentioned physical phenomenon; at the same time, the virtual scene simulator 100 can also establish model parameters corresponding to the material of the virtual object, wherein the material of the virtual object is kept consistent with the material of the real object.
  • the virtual scene simulator 100 may further arrange a virtual sensor on the first virtual object, where the virtual sensor is used to simulate and acquire input signals in the virtual environment.
  • the input signals obtained by various virtual sensors in the virtual environment can be simulated through the above-mentioned physical models and model parameters.
  • the input signal simulator 200 is used to simulate input signals obtained by various virtual sensors in the virtual environment, and send the input signals to the sensor simulator 300 for processing.
  • the input signal may be simulated based on the driving state of the first virtual object and virtual scene information.
  • the foregoing virtual sensors may include millimeter-wave radar virtual sensors, lidar virtual sensors, infrared virtual sensors, and camera virtual sensors, and may also include other types of virtual sensors, which are not particularly limited in this embodiment of the present application.
  • the input signal simulator 200 may be a computer or other types of computing devices, which are not specifically limited in this application.
  • the input signal simulator 200 may include a high-speed Ethernet network card supporting the IEEE1588 protocol, and one or more GPUs.
  • the simulation process of acquiring the input signal can be realized by the input signal simulation software.
  • the format, type, and related parameters of the virtual sensors can be configured, and the virtual object information sent from the virtual scene simulator 100 and the material parameters of the related virtual objects can be received. A ray tracing algorithm is then used to simulate each type of virtual sensor based on the driving state of the first virtual object (for example, its position, speed, and acceleration), the above-mentioned physical models, and the model parameters.
  • for each type of virtual sensor, the physical effects of the sensor's signal on the surfaces of the virtual objects are simulated and calculated (including but not limited to reflection, scattering, and diffraction), so that the input signal in the virtual environment can be obtained by simulation. For example, a signal is emitted by the transmitter of the virtual sensor, interacts physically with the surface of a virtual object, and is returned to the receiver of the virtual sensor, thereby yielding the input signal of the virtual sensor.
  • the simulation of the input signal can be processed by the GPU to ensure the real-time performance of the simulation.
  • the input signal simulator 200 can simulate capturing images in the virtual environment through a virtual camera, or simulate detecting virtual objects in the virtual environment through a millimeter-wave radar.
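The application describes full ray tracing of emitted signals over virtual surfaces. As a drastically simplified, hypothetical stand-in for the millimeter-wave radar case, the echo power returned by a single point target can be computed with the standard radar range equation (function and parameter names are illustrative):

```python
import math

def radar_echo_power(p_t, gain, wavelength, rcs, r):
    """Received echo power for a single point target via the radar range
    equation: P_r = P_t * G^2 * lambda^2 * sigma / ((4*pi)^3 * R^4).

    p_t: transmit power (W), gain: antenna gain (dimensionless),
    wavelength: carrier wavelength (m), rcs: radar cross-section (m^2),
    r: range to the target (m).
    """
    return p_t * gain**2 * wavelength**2 * rcs / ((4 * math.pi) ** 3 * r**4)
```

A ray-traced simulation would sum such contributions over many surface interactions (reflection, scattering, diffraction) per ray; this sketch shows only the single-bounce, single-target case.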
  • the sensor simulator 300 is used to simulate the behavior and performance of each virtual sensor.
  • the sensor simulator 300 can receive the input signal obtained by the input signal simulator 200 and process it by simulating the processing method of the real sensor; the resulting output signal is sent to the digital simulator 400 for processing.
  • the sensor simulator 300 may be a computer or other types of computing devices, which are not specifically limited in this application.
  • a high-speed Ethernet network card supporting the IEEE1588 protocol, one or more Field Programmable Gate Array (FPGA) acceleration cards or GPUs may be included.
  • the sensor simulator 300 can simulate the front-end performance of each virtual sensor and the algorithm of each virtual sensor. The performance of each virtual sensor's front end can be simulated by modeling.
  • for example, the sensor front end can be modeled as Y = G·X + N + I, where X is the input signal, Y is the output signal of the sensor front end, G is the gain of the sensor front end, N is the noise of the sensor front end, and I is the interference introduced by the sensor front end.
  • the front-end output signal Y can then be processed by the preset algorithm of the virtual sensor to obtain the output signal.
  • the model of the virtual sensor front end is constructed based on the real sensor. Simulating the front end of the real sensor by modeling therefore ensures that the performance of the virtual sensor front end is statistically consistent with that of the real sensor front end.
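The front-end model described above (output = gain × input + noise + interference) can be sketched as follows. The noise distribution is not specified in the application, so zero-mean Gaussian noise is assumed here, and the names are illustrative:

```python
import random

def front_end(x, gain, noise_std, interference):
    """Virtual sensor front-end model Y = G*X + N + I: the gain G is applied
    to the input signal X, then zero-mean Gaussian noise N (assumed
    distribution) and additive interference I are introduced."""
    n = random.gauss(0.0, noise_std)
    return gain * x + n + interference
```

The returned Y would then be fed to the virtual sensor's preset algorithm, which per the application can reuse the real sensor's algorithm directly.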
  • the algorithm of the virtual sensor can be performed by directly using the algorithm of the real sensor, which will not be repeated here.
  • the sensor simulator 300 can use a corresponding FPGA accelerator card or GPU card to simulate, so as to ensure the real-time requirements and computing power requirements of the simulation.
  • the digital simulator 400 is used for receiving the output signal sent by the sensor simulator 300 and can send the output signal to the driving system 500 .
  • the digital simulator 400 may be a computer or other types of computing devices, which are not specifically limited in this application.
  • the digital simulator 400 may include a high-speed Ethernet network card supporting the IEEE1588 protocol, one or more FPGA acceleration cards, and multiple interfaces.
  • the interface includes a physical interface and a digital interface between the digital simulator 400 and the real sensor.
  • the digital simulator 400 can use the physical interfaces and the digital interfaces to receive real data collected by different real sensors and play back the collected data for testing.
  • the physical interface may include but is not limited to Controller Area Network (CAN), Mobile Industry Processor Interface (MIPI), Ethernet, Gigabit Multimedia Serial Link (GMSL), Flat Panel Display Link (FPD-Link), Local Interconnect Network (LIN), 100BASE-T1 and other interfaces.
  • the digital interface may include but is not limited to a high-speed Peripheral Component Interconnect Express (PCIE) interface, a Serial Advanced Technology Attachment (Serial Advanced Technology Attachment, SATA) interface, a high-speed Ethernet interface, a digital fiber optic interface, and the like.
  • the above-mentioned interfaces may also include a physical interface between the digital simulator 400 and the driving system 500 , and a digital interface between the digital simulator 400 and the powertrain simulator 600 .
  • the physical interface may include a CAN interface.
  • the digital simulator 400 may be connected to the CAN bus of the driving system 500, enabling the digital simulator 400 to send the output signal to the driving system 500, which can then make driving decisions based on the output signal.
  • the digital interface may include an Ethernet interface for transmitting the driving state of the object under test (eg, the vehicle) output by the powertrain simulator 600 .
  • the driving system 500 is configured to receive the output signal sent by the digital simulator 400 and make driving decisions based on the output signal.
  • the driving system 500 may be located in a real object to be tested.
  • the driving system 500 may be located inside a real vehicle to be tested. At this time, the vehicle is used as the object to be tested. It can be understood that the driving system 500 can also be tested independently.
  • the driving system 500 can be taken out of a real vehicle, so that the driving system 500 can be used as the object to be tested.
  • the driving decision may include operations such as acceleration, braking, deceleration, and turning, and may also include other operations, which are not particularly limited in this embodiment.
  • the driving system 500 can send the above driving decision to the powertrain simulator 600, so that the powertrain simulator 600 can perform a simulation based on the driving decision to update the driving states of the real and virtual objects to be tested.
  • the power system simulator 600 is used for receiving the driving decision sent by the driving system 500 and simulating the dynamic characteristics of the real vehicle based on the driving decision, thereby outputting the driving state corresponding to the real vehicle and feeding the driving state back to the first virtual object in the virtual scene, so that the first virtual object can be updated based on the driving state and the simulation test can be completed.
  • the power system simulator 600 may be a computer or other types of computing devices, which are not specifically limited in this application. In the powertrain simulator 600, multiple interfaces may be included.
  • the interface may include a physical interface (eg, a CAN interface) between the powertrain simulator 600 and the driving system 500, and a digital interface (eg, an Ethernet interface) between the powertrain simulator 600 and the virtual scene simulator.
  • the system synchronization module 700 is used to provide a synchronization clock for the virtual scene simulator 100, the input signal simulator 200, the sensor simulator 300 and the digital simulator 400, so as to ensure clock synchronization among the virtual scene simulator 100, the input signal simulator 200, the sensor simulator 300 and the digital simulator 400.
  • the system synchronization module 700 may adopt, for example, a high-speed Ethernet switch supporting the 1588 synchronization protocol, or may be a dedicated synchronization module, which is not particularly limited in this embodiment of the present application.
  • the virtual scene simulator 100 , the input signal simulator 200 , the sensor simulator 300 , and the digital simulator 400 may perform data interaction through a high-speed Ethernet switch or other high-speed data connection devices.
  • a schematic flowchart of an embodiment of the simulation testing method provided by the embodiment of the present application includes:
  • Step 101 using the virtual scene simulator 100 to construct a virtual scene.
  • the virtual scene can be constructed by the virtual scene simulator 100 .
  • the virtual scene may include a scene to be tested, and the scene to be tested may include a virtual object and a material corresponding to the virtual object.
  • the virtual object may include: a car, a person, an animal, a tree, a road, a building, etc., or may include other objects, which are not particularly limited in this embodiment of the present application.
  • the virtual object to be tested can also be determined in the virtual scene, for example, the virtual object to be tested can be a car.
  • the first parameter of the virtual sensor can be configured through the input signal simulator 200 to simulate the acquisition of the input signal by the virtual sensor.
  • the configured first parameter information may include: the number of sensors, the type of the sensors, the assembly parameters of the sensors on the real vehicle (for example, the installation height and angle of the sensors on the vehicle, etc.), and the physical parameters of the sensors (for example, the number, position and direction of the transmitting and receiving antennas of the millimeter-wave radar; the frequency, working mode and number of lines of the lidar; the field of view and focal length of the camera, etc.).
  • the virtual object to be tested can be regarded as the first virtual object, and all virtual objects in the virtual scene except the first virtual object can be regarded as the second virtual object.
  • a vehicle in the virtual scene is used as the first virtual object, that is, the virtual object to be tested, and the other virtual objects (for example, cars, people, animals, trees, roads, buildings, etc.) can be used as the second virtual objects.
  • the second parameter of the virtual sensor may also be configured in the sensor simulator 300 .
  • the configured second parameter information may include: the number of sensors, the type of sensors, the front-end parameters of the sensors (for example, the front-end gain G, the front-end noise N and the interference I introduced by the front end), the sensor processing delay, the sensor algorithm, etc.
  • Step 102 the virtual scene simulator 100 sends the virtual scene information to the input signal simulator 200 .
  • the virtual scene information may include related information of all virtual objects in the virtual scene, where the related information may include coordinate positions, material information (for example, plastic, metal, etc.), lighting conditions, etc., to which the embodiments of the present application No special restrictions are made.
  • Step 103 the input signal simulator 200 simulates and acquires multiple input signals of the first virtual object.
  • the virtual sensor may include: one or more of a millimeter-wave radar virtual sensor, a lidar virtual sensor, an infrared virtual sensor, and a camera virtual sensor.
  • the input signal may include echo signals and/or images. Exemplarily, echo signals can be obtained through a millimeter-wave radar virtual sensor, a lidar virtual sensor, and an infrared virtual sensor, and a captured image can also be obtained through a camera virtual sensor.
  • the input signal can be obtained after calculation using a ray tracing algorithm based on the driving state St and virtual scene information of the first virtual object. It can be understood that other algorithms can also be used to obtain the input signal, which is not specially limited here. Wherein,
  • Xi is the position of the first virtual object
  • Vi is the velocity of the first virtual object
  • Ai is the acceleration of the first virtual object. It can be understood that the state of the above-mentioned first virtual object may also include other variables, which are not particularly limited in this embodiment of the present application.
  • each input signal has a one-to-one correspondence with each virtual sensor.
  • the input signal obtained by the millimeter-wave radar virtual sensor may be echo input signal A
  • the input signal obtained by the lidar virtual sensor may be echo input signal B
  • the input signal obtained by the camera virtual sensor may be image input signal C.
  • the multiple input signals can also be synchronized to ensure that the input signal simulator 200 can simultaneously acquire the input signals acquired by multiple virtual sensors for the same scene .
  • when the virtual sensor processes the input signal, a simulation processing time Tf is generated.
  • when the real sensor processes the input signal, it generates a real processing time Tz.
  • the simulation processing time Tf generated by the virtual sensor and the real processing time Tz generated by the real sensor will generally differ.
  • the above-mentioned simulation processing time Tf may be greater than or equal to the real processing time Tz, and the above-mentioned simulation processing time Tf may also be less than or equal to the real processing time Tz.
  • delay compensation can be performed on the above-mentioned input signal in the input signal simulator 200, so that the above-mentioned simulation test can better simulate the real scene.
  • the delay refers to the difference between the simulation processing time Tf and the real processing time Tz.
  • the input signal simulator 200 can also acquire the simulation processing time Tf of each virtual sensor in the sensor simulator 300 .
  • the simulation processing time Tf corresponds to the virtual sensor one-to-one.
  • the simulation processing time Tf1 of the millimeter-wave radar virtual sensor, the simulation processing time Tf2 of the lidar virtual sensor, the simulation processing time Tf3 of the camera virtual sensor, and the like may be obtained.
  • the real processing time Tz of the real sensor corresponding to the virtual sensor can also be obtained.
  • the real processing time Tz corresponds to the real sensor one-to-one.
  • the real processing time Tz1 of the real sensor of the millimeter wave radar, the real processing time Tz2 of the real sensor of the lidar, the real processing time Tz3 of the real sensor of the camera, and the like can be obtained.
  • the input signal simulator 200 may compare the simulated processing time Tf of each virtual sensor with the corresponding real processing time Tz.
  • for example, the simulation processing time Tf of the virtual lidar sensor may be 5 ms, while the real processing time Tz of the real lidar sensor is 10 ms.
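The comparison of Tf against Tz described above can be sketched as a small decision rule. This is an illustrative interpretation, not the patent's literal implementation: when the simulated processing is slower than the real sensor (Tf > Tz), the input signal is predicted ahead by Tf − Tz; when it is faster (Tf < Tz), sending is delayed by Tz − Tf so the total latency matches the real sensor.

```python
def compensation_plan(tf_ms, tz_ms):
    """Decide how to compensate a virtual sensor's timing.
    tf_ms: simulation processing time Tf of the virtual sensor (ms).
    tz_ms: real processing time Tz of the corresponding real sensor (ms)."""
    delay = tf_ms - tz_ms
    if delay > 0:
        return ("predict", delay)      # predict the state `delay` ms ahead
    if delay < 0:
        return ("delay_send", -delay)  # hold the signal for Tz - Tf ms
    return ("send_now", 0)

# Lidar example from the text: Tf = 5 ms, Tz = 10 ms -> hold for 5 ms
plan = compensation_plan(5, 10)
```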
  • the driving state of the first virtual object can be predicted, so that the input signal simulator 200 can perform simulation based on the predicted driving state, thereby obtaining the second input signal.
  • the second input signal may be a prediction input signal after the Tf-Tz time period, and the prediction method may be Kalman filtering method, or other prediction methods may be used, which is not particularly limited in this embodiment of the present application.
  • Xi is the position of the first virtual object
  • Vi is the velocity of the first virtual object
  • Ai is the acceleration of the first virtual object. It can be understood that the state of the above-mentioned first virtual object may also include other variables, which are not specially limited in this embodiment of the present application.
  • S(t+T) = Fi*S(t) + Bi*Ui + Ni, where,
  • Ni is the state prediction noise. Thereby, the prediction of the driving state of the first virtual object in the future can be completed. Next, based on the driving state of the first virtual object in the future, the input signal simulator 300 may be used to simulate the input signal, thereby obtaining the second input signal.
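The state prediction S(t+T) = Fi·S(t) + Bi·Ui + Ni above can be sketched in code. As an illustrative assumption (not fixed by the patent), the state is taken as a one-dimensional S = [X, V, A] with a constant-acceleration transition matrix and no control input or noise, i.e. X' = X + V·T + A·T²/2, V' = V + A·T, A' = A.

```python
import numpy as np

def predict_state(s, T):
    """Predict the driving state S = [X, V, A] a time T ahead using a
    constant-acceleration transition matrix F (Bi*Ui and Ni omitted)."""
    F = np.array([[1.0, T, 0.5 * T * T],
                  [0.0, 1.0, T],
                  [0.0, 0.0, 1.0]])
    return F @ np.asarray(s, dtype=float)

# Object at X=0 m, V=10 m/s, A=2 m/s^2, predicted T=0.01 s ahead
s_pred = predict_state([0.0, 10.0, 2.0], T=0.01)
```

In a full Kalman-filter prediction, Ni would be modeled through the state covariance rather than added to the state directly.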
  • the simulation processing time of the millimeter-wave radar virtual sensor is Tf1 = 10 ms, that is to say, the millimeter-wave radar virtual sensor needs 10 ms to process the input signal before the first input signal of the millimeter-wave radar can be obtained;
  • the simulation processing time of the lidar virtual sensor is Tf2 = 12 ms, that is to say, the lidar virtual sensor needs 12 ms to process the input signal before the first input signal of the lidar can be obtained;
  • the real processing time of the real lidar sensor is Tz2 = 8 ms, that is to say, the real lidar sensor needs 8 ms to process the input signal before the first input signal of the lidar can be obtained.
  • the driving state of the first virtual object can be predicted based on the maximum simulation delay (for example, Ty1), and based on the predicted driving state, the millimeter-wave radar virtual sensor and the lidar virtual sensor can be used to simulate , to obtain the second input signal of the millimeter-wave radar and the second input signal of the laser radar, thereby ensuring that all the first input signals can obtain effective delay compensation.
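Choosing the maximum simulation delay as a common prediction horizon, as described above, can be sketched as follows. The dictionary keys and the radar's real processing time Tz1 = 7 ms are hypothetical (the text only gives Tf1 = 10 ms, Tf2 = 12 ms, Tz2 = 8 ms); sensors whose own delay is smaller than the horizon get a per-sensor hold time (the T2 discussed below) so that all channels line up.

```python
def max_simulation_delay(tf_by_sensor, tz_by_sensor):
    """Return the common prediction horizon (largest Tf - Tz over all
    virtual sensors, in ms) and the extra hold time per sensor."""
    delays = {k: tf_by_sensor[k] - tz_by_sensor[k] for k in tf_by_sensor}
    horizon = max(delays.values())
    hold = {k: horizon - d for k, d in delays.items()}  # per-sensor hold
    return horizon, hold

# Illustrative: radar Tf=10/Tz=7 (delay 3 ms), lidar Tf=12/Tz=8 (delay 4 ms)
horizon, hold = max_simulation_delay({"radar": 10, "lidar": 12},
                                     {"radar": 7, "lidar": 8})
```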
  • FIG. 3 is a schematic diagram of the prediction of the second input signal.
  • the driving state of the first virtual object at time t is S1
  • the millimeter-wave radar virtual sensor on the first virtual object can simulate the input signal 101 based on the driving state S1 at time t.
  • the driving state of the first virtual object at time t+T1 is predicted, so that the driving state S2 of the first virtual object at time t+T1 can be obtained, where T1 is the delay of the millimeter-wave radar virtual sensor.
  • the millimeter-wave radar virtual sensor can obtain the second input signal 102 by simulation based on the driving state S2 at time t+T1, so that the prediction of the second input signal 102 can be completed, and the delay compensation of the input signal can be completed.
  • the virtual sensor of the lidar can also predict the input signal of the lidar in the manner shown in FIG. 3 above, thereby obtaining the second input signal of the lidar, which will not be repeated here.
  • the simulation delay of each virtual sensor may also be corrected.
  • for example, the delay compensation time of the millimeter-wave radar virtual sensor is also used for the lidar virtual sensor.
  • the input signal simulator 200 can then send the second input signal predicted for the lidar with an additional delay T2, so that the accumulated time of the delay T2 and the simulation processing time corresponds to the predicted second input signal, further approximating the timing of the real sensor.
  • T2 is the difference between the delay compensation time and the simulation delay.
  • Step 104 the input signal simulator 200 sends a plurality of input signals to the sensor simulator 300 .
  • the multiple input signals may be sent to the sensor simulator 300, wherein the input signals may include the first input signal and/or the second input signal.
  • step 105 the sensor simulator 300 processes multiple input signals and outputs multiple output signals.
  • the sensor simulator 300 can process the aforementioned multiple input signals, thereby obtaining multiple output signals.
  • each virtual sensor may use different preset front-end models respectively, and the embodiments of the present application do not specifically limit the implementation of the specific front-end models.
  • each input signal has a one-to-one correspondence with each output signal.
  • the echo input signal A obtained by the millimeter-wave radar virtual sensor can be input into the front-end model of the millimeter-wave radar virtual sensor and processed using the preset millimeter-wave radar sensor algorithm to obtain the echo output signal A;
  • the echo input signal B obtained by the lidar virtual sensor is input into the front-end model of the lidar virtual sensor and processed using the preset lidar sensor algorithm to obtain the echo output signal B; and
  • the image input signal C obtained by the camera virtual sensor is input into the front-end model of the camera virtual sensor and processed using the preset camera sensor algorithm to obtain the image output signal C.
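The per-sensor processing in step 105 can be sketched as a small pipeline: each input signal goes through its sensor's front-end model and then through that sensor's preset algorithm, producing one output per input. The sensor names and callables here are illustrative stand-ins, not the patent's implementation.

```python
def sensor_simulator(inputs, front_ends, algorithms):
    """Process each input signal with its sensor's front-end model and
    preset algorithm; outputs correspond one-to-one with inputs.
    All three arguments map a sensor name to a value or callable."""
    outputs = {}
    for name, signal in inputs.items():
        raw = front_ends[name](signal)         # front-end performance model
        outputs[name] = algorithms[name](raw)  # preset sensor algorithm
    return outputs

# Toy example: identity front end, a doubling "algorithm" for the radar
out = sensor_simulator({"radar": 1.0},
                       {"radar": lambda x: x},
                       {"radar": lambda x: 2 * x})
```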
  • Step 106 sending a plurality of output signals to the digital simulator 400 .
  • the aforementioned multiple output signals can be sent to the digital simulator 400 .
  • Step 107 the digital simulator 400 receives and processes the multiple output signals, and sends the multiple output signals to the driving system 500 .
  • the digital simulator 400 may receive the output signal corresponding to each virtual sensor sent by the sensor simulator 300 . In order to test the driving performance of the real vehicle, the digital simulator 400 may also send the above-mentioned multiple output signals to the driving system 500 of the real vehicle.
  • step 108 the driving system 500 determines a driving decision based on the plurality of output signals.
  • the driving system 500 makes a driving decision after receiving the above-mentioned multiple output signals sent by the digital simulator 400 .
  • the driving decision may include operations such as acceleration, deceleration, braking, and turning, and may also include other driving decisions, which are not specifically limited in this embodiment of the present application.
  • Step 109 sending the driving decision to the powertrain simulator 600 .
  • Step 110 the power system simulator 600 simulates the driving state St' of the real vehicle based on the driving decision.
  • this driving state St' corresponds to a driving decision.
  • if the driving decision is to accelerate, the driving state of the real vehicle simulated by the power system simulator 600 is accelerating; if the driving decision is to brake, the simulated driving state of the real vehicle is the parked state reached after braking.
  • Step 111 feedback the driving state St' to the virtual scene simulator 100, so that the virtual scene simulator 100 updates the driving state St of the first virtual object based on St'.
  • the driving state St' can be fed back to the virtual scene simulator 100, thereby enabling the virtual scene simulator 100 to update the driving state St of the first virtual object based on St'.
  • for example, when the first virtual object (the test vehicle) approaches a second virtual object (for example, another vehicle), a driving decision can be determined (for example, braking by the driving system 500), and the braking decision can be fed back to the first virtual object, thereby completing the simulation test of the entire system.
  • Step 112 the virtual scene simulator 100 updates the driving state of the first virtual object based on the driving state St'.
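One iteration of the closed loop in steps 101-112 can be sketched as follows: the input signal simulator produces sensor inputs from the current driving state, the sensor simulator produces outputs, the driving system decides, and the powertrain simulator returns the updated state St' that is fed back to the virtual scene. All callables are stand-ins for the simulators described above, not their actual interfaces.

```python
def simulation_step(state, simulate_inputs, sensor_sim, driving_system,
                    powertrain):
    """One closed-loop iteration of the simulation test."""
    inputs = simulate_inputs(state)      # input signal simulator (step 103)
    outputs = sensor_sim(inputs)         # sensor simulator (step 105)
    decision = driving_system(outputs)   # driving decision (step 108)
    return powertrain(state, decision)   # updated state St' (step 110)

# Toy loop: the state is a speed; "brake" whenever the sensed value exceeds 5
new_state = simulation_step(
    6.0,
    lambda s: s,                               # sensed input equals speed
    lambda x: x,                               # pass-through sensor model
    lambda y: "brake" if y > 5 else "keep",    # driving decision
    lambda s, d: s - 1.0 if d == "brake" else s,
)
```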
  • the driving state of the object to be tested is predicted, thereby compensating for the delay, which can effectively solve the timing mismatch introduced by the simulated sensors in the simulation test of the digital scene. The behavior and performance of multiple sensors can therefore be accurately and synchronously simulated in a digital scene simulator.
  • FIG. 4 is a schematic structural diagram of an embodiment of the simulation test device of the present application.
  • the above-mentioned simulation test device 40 is applied to an input signal simulator located in an automatic driving test framework.
  • the automatic driving test framework also includes a virtual scene simulator and a sensor simulator; the virtual scene simulator is used to simulate a virtual scene, the virtual scene includes a virtual object to be tested, and the virtual object to be tested includes a first driving state and a plurality of virtual sensors. The device 40 may include: a receiving circuit 41, a prediction circuit 42, a first simulation circuit 43 and a first sending circuit 44;
  • the receiving circuit 41 is used to obtain the processing delay of each virtual sensor
  • the prediction circuit 42 is configured to determine whether each processing delay satisfies the preset condition; if any processing delay satisfies the preset condition, predict the first driving state based on the processing delay to obtain the second driving state;
  • the first simulation circuit 43 is configured to perform simulation based on each second driving state using the virtual sensor corresponding to the processing delay to obtain one or more first input signals, wherein each first input signal corresponds to each virtual sensor one-to-one.
  • the first sending circuit 44 is used for sending one or more first input signals to the sensor simulator.
  • the above-mentioned first simulation circuit 43 is further configured to use a plurality of virtual sensors corresponding to each processing delay to perform synchronous simulation to obtain a plurality of first input signals.
  • the above-mentioned processing delay is determined by the difference between the first processing time and the second processing time, wherein the first processing time is the processing time of the virtual sensor in the sensor simulator, and the second processing time is the processing time of the real sensor corresponding to the virtual sensor.
  • the above-mentioned apparatus 40 further includes: a second analog circuit 45 and a second sending circuit 46;
  • the second simulation circuit 45 is configured to use a virtual sensor corresponding to the processing delay to perform simulation based on the first driving state to obtain a second input signal if any processing delay does not meet the preset condition;
  • the second sending circuit 46 is configured to delay sending one or more second input signals to the sensor simulator based on the processing delay.
  • the above-mentioned first input signal or second input signal is obtained by the input signal simulator through simulation on at least one GPU using a ray tracing algorithm.
  • the first driving state includes the first position, the first speed, and the first acceleration of the virtual object to be tested at time t.
  • the prediction circuit is further configured to use the Kalman filtering method to predict the first driving state of the virtual object to be tested based on the processing delay to obtain the second driving state, wherein the second driving state includes the second position, the second speed and the second acceleration of the virtual object to be tested at time t+T, where T is the processing delay.
  • each module of the simulation test apparatus shown in FIG. 4 is only a division of logical functions, and may be fully or partially integrated into a physical entity in actual implementation, or may be physically separated.
  • these modules can all be implemented in the form of software calling through processing elements; they can also all be implemented in hardware; some modules can also be implemented in the form of software calling through processing elements, and some modules can be implemented in hardware.
  • the detection module may be a separately established processing element, or may be integrated in a certain chip of the electronic device.
  • the implementation of other modules is similar.
  • all or part of these modules can be integrated together, and can also be implemented independently.
  • each step of the above-mentioned method or each of the above-mentioned modules can be completed by an integrated logic circuit of hardware in the processor element or an instruction in the form of software.
  • the above modules may be one or more integrated circuits configured to implement the above methods, such as: one or more Application Specific Integrated Circuits (hereinafter referred to as: ASIC), or one or more Digital Signal Processors (hereinafter referred to as: DSP), or one or more Field Programmable Gate Arrays (hereinafter referred to as: FPGA), etc.
  • these modules can be integrated together and implemented in the form of a system-on-a-chip (System-On-a-Chip; hereinafter referred to as: SOC).
  • FIG. 5 is a schematic structural diagram of an embodiment of an electronic device 50 of the present application; wherein, the electronic device 50 may be the above-mentioned input signal simulator 200 .
  • the electronic device 50 may be a data processing device or a circuit device built in the data processing device.
  • the electronic device 50 can be used to execute the functions/steps in the methods provided by the embodiments shown in FIG. 1 to FIG. 3 of the present application.
  • electronic device 50 takes the form of a general-purpose computing device.
  • the electronic device 50 described above may include: one or more processors 510; a communication interface 520; a memory 530; a communication bus 540 connecting the different system components (including the memory 530 and the processor 510); a database 550; and one or more computer programs.
  • the above-mentioned one or more computer programs are stored in the above-mentioned memory and include instructions which, when executed by the above-mentioned electronic device, cause the electronic device to perform the following steps:
  • obtain the processing delay of each virtual sensor; determine whether each processing delay satisfies a preset condition; if any processing delay satisfies the preset condition, predict the first driving state based on that processing delay to obtain a second driving state;
  • based on each second driving state, use the virtual sensor corresponding to the processing delay to perform simulation to obtain one or more first input signals, wherein each first input signal corresponds to each virtual sensor one-to-one;
  • send the one or more first input signals to the sensor simulator.
  • the step of causing the above-mentioned electronic device to perform simulation using a virtual sensor corresponding to the processing delay to obtain one or more first input signals includes:
  • Simultaneous simulation is performed using a plurality of virtual sensors corresponding to each of the processing delays to obtain a plurality of first input signals.
  • the above-mentioned processing delay is determined by the difference between the first processing time and the second processing time, wherein the first processing time is the processing time of the virtual sensor in the sensor simulator, and the second processing time is the processing time of the real sensor corresponding to the virtual sensor.
  • when the above-mentioned instructions are executed by the above-mentioned electronic device, the electronic device further performs the following steps:
  • if any processing delay does not satisfy the preset condition, use the virtual sensor corresponding to the processing delay to perform simulation based on the first driving state to obtain a second input signal;
  • delay sending one or more second input signals to the sensor simulator based on the processing delay.
  • the above-mentioned sensor simulator is used to receive the first input signal or the second input signal and perform calculations based on a preset front-end model and a preset algorithm of the virtual sensor to obtain an output signal; the preset algorithm of the virtual sensor may directly use the algorithm of the corresponding real sensor.
  • the above-mentioned virtual scene is simulated by the virtual scene simulator using at least one CPU and/or at least one GPU, and the first input signal or the second input signal is obtained by the input signal simulator using a ray tracing algorithm on at least one GPU.
  • the first driving state includes a first position, a first speed, and a first acceleration of the virtual object to be tested at time t, and when the above-mentioned instructions are executed by the electronic device, the step of predicting the first driving state based on the processing delay to obtain the second driving state includes:
  • using a Kalman filtering method to predict the first driving state of the virtual object to be tested to obtain a second driving state, wherein the second driving state includes the second position, the second speed, and the second acceleration of the virtual object to be tested at time t+T, where T is the processing delay.
  • the automatic driving test architecture also includes a digital simulator, a driving system and a power system simulator.
  • the digital simulator is used to receive the output signal sent by the sensor simulator and send the output signal to the driving system; the driving system is used to determine the driving decision based on the output signal; and the power system simulator is used to simulate the driving decision, obtain a third driving state, and feed the third driving state back to the virtual scene simulator, so that the virtual object to be tested updates the first driving state based on the third driving state.
  • the above virtual sensor includes at least one of a millimeter-wave radar virtual sensor, a lidar virtual sensor, an infrared virtual sensor, and a camera virtual sensor.
  • the interface connection relationship between the modules illustrated in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the electronic device 50 .
  • the electronic device 50 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the above-mentioned electronic device 50 includes corresponding hardware structures and/or software modules for executing each function.
  • the embodiments of the present application can be implemented in hardware or a combination of hardware and computer software. Whether a function is performed by hardware or computer software driving hardware depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functionality using different methods for each particular application, but such implementations should not be considered beyond the scope of the embodiments of the present invention.
  • each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware, and can also be implemented in the form of software function modules. It should be noted that, the division of modules in the embodiment of the present invention is schematic, and is only a logical function division, and there may be other division manners in actual implementation.
  • Each functional unit in each of the embodiments of the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium.
  • a computer-readable storage medium includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes: flash memory, removable hard disk, read-only memory, random access memory, magnetic disk or optical disk and other media that can store program codes.


Abstract

A simulation test method, apparatus (40), and system, relating to the field of simulation testing. The simulation test method includes: obtaining the processing delay of each virtual sensor; determining whether each processing delay satisfies a preset condition; if any processing delay satisfies the preset condition, predicting a first driving state based on that processing delay to obtain a second driving state; based on each second driving state, performing simulation with the virtual sensor corresponding to the processing delay to obtain one or more first input signals, where each first input signal corresponds one-to-one to a virtual sensor; and sending the one or more first input signals to a sensor simulator (300). The method can improve the accuracy of sensor simulation and the efficiency of simulation testing.

Description

Simulation test method, apparatus, and system
This application claims priority to Chinese Patent Application No. 202011408608.X, entitled "Simulation test method, apparatus and system", filed with the China National Intellectual Property Administration on December 3, 2020, which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of this application relate to the field of simulation testing, and in particular to a simulation test method, apparatus, and system.
Background
Assisted driving and autonomous driving technologies are developing rapidly and being commercialized. Their adoption will greatly change how people travel and profoundly affect how people work and live. With assisted and autonomous driving, the perception capability of the vehicle's sensors and the vehicle's autonomous driving capability are critical to safe driving across a wide variety of scenarios. These capabilities must be tested in different scenarios before they can be considered reliable. Current test approaches typically include open-road driving tests, closed-course driving tests, and simulation tests. Road-based testing can hardly traverse all the different test scenarios; analyses suggest that more than a hundred million kilometers of road testing would be needed to cover the various scenarios, which is inefficient. Meanwhile, extreme hazardous scenarios are difficult to reproduce in closed-course tests for safety reasons. Purely virtual simulation tests, such as Waymo's Carcraft, cannot test the perception capability of the sensors, the capabilities of the chassis and powertrain, or their coordination with the autonomous driving algorithms, and therefore cannot effectively guarantee consistency between simulation tests and road tests.
Summary
Embodiments of this application provide a simulation test method, apparatus, and system, to provide a way of performing simulation tests in a virtual scene.
According to a first aspect, an embodiment of this application provides a simulation test method, applied to an input signal simulator. The input signal simulator is located in an autonomous driving test architecture that further includes a virtual scene simulator and a sensor simulator. The virtual scene simulator is configured to simulate a virtual scene, the virtual scene includes a virtual object under test, and the virtual object under test includes a first driving state and multiple virtual sensors. The method includes:
obtaining the processing delay of each virtual sensor; specifically, the processing delay may be the difference between the processing time of the virtual sensor in the simulator and the processing time of the corresponding real sensor in the real environment;
determining whether each processing delay satisfies a preset condition; specifically, the processing delay may be positive or negative. For example, if the virtual sensor's processing time in the simulator is greater than the real sensor's processing time in the real environment, the processing delay is positive; if it is smaller, the processing delay is negative. Whether the processing delay satisfies the preset condition can therefore be determined by checking its sign. Each virtual sensor has its own preset front-end model and preset algorithm, so each virtual sensor has its own processing delay; that is, whether the processing delay satisfies the preset condition needs to be determined separately for each virtual sensor;
if any processing delay satisfies the preset condition, predicting the first driving state based on that processing delay to obtain a second driving state; specifically, the first driving state may include the position, velocity, and acceleration of the virtual object under test. Thus, when the processing delay of any virtual sensor satisfies the preset condition, for example when it is positive, prediction can be performed based on that delay to obtain a second driving state, which is the predicted state for that virtual sensor, i.e., the position, velocity, and acceleration at a future moment;
based on each second driving state, performing simulation with the virtual sensor corresponding to the processing delay to obtain one or more first input signals, where each first input signal corresponds one-to-one to a virtual sensor; specifically, the input signal simulator can simulate based on each virtual sensor's second driving state to obtain the first input signal; and
sending the one or more first input signals to the sensor simulator.
In this embodiment, by checking each virtual sensor's processing delay and compensating for it, the behavior and performance of the sensors can be emulated accurately, improving the accuracy of sensor simulation and the efficiency of simulation testing.
In one possible implementation, performing simulation with the virtual sensor corresponding to the processing delay to obtain one or more first input signals includes:
performing synchronized simulation with the multiple virtual sensors, each corresponding to its processing delay, to obtain multiple first input signals. Specifically, when multiple virtual sensors simulate at the same time to obtain first input signals, the signals obtained by each virtual sensor can be synchronized, yielding multiple synchronized first input signals; keeping the signals in sync improves the accuracy of the simulation test.
In this embodiment, synchronizing the first input signals can improve the accuracy of the simulation test.
In one possible implementation, the processing delay is determined by the difference between a first processing time and a second processing time, where the first processing time is the virtual sensor's processing time in the sensor simulator and the second processing time is the preset real processing time of the real sensor corresponding to the virtual sensor.
In this embodiment, using the difference between the virtual sensor's processing time in the sensor simulator and the real sensor's preset real processing time as the processing delay makes it possible to emulate the performance of the real sensor accurately.
In one possible implementation, the method includes:
if any processing delay does not satisfy the preset condition, performing simulation with the virtual sensor corresponding to that processing delay based on the first driving state to obtain a second input signal, and sending the one or more second input signals to the sensor simulator with a delay based on the processing delay. Specifically, if a virtual sensor's processing delay does not satisfy the preset condition, for example if the processing delay is negative, meaning no second predicted state is obtained for that virtual sensor, the input signal simulator can simulate based on the first driving state to obtain the second input signal and send it with a delay based on the processing delay, so as to compensate for that delay.
In this embodiment, sending the second input signal with a delay compensates for the processing delay, so the performance of the real sensor can be emulated accurately.
In one possible implementation, the sensor simulator is configured to receive the first input signal or the second input signal and perform computation based on the virtual sensor's preset front-end model and preset algorithm to obtain an output signal. The preset front-end model of the virtual sensor is Y=G*X+N+I, where Y is the output signal of the front-end model, X is the first or second input signal, G is the gain of the virtual sensor's front end, N is the noise of the front end, and I is the interference introduced by the front end.
In this embodiment, configuring a preset front-end model and preset algorithm in the virtual sensor to emulate a real sensor allows the real sensor's performance to be simulated more accurately.
In one possible implementation, the virtual scene is simulated by the virtual scene simulator using at least one CPU and/or at least one GPU, and the first or second input signal is simulated by the input signal simulator using a ray tracing algorithm on at least one GPU.
In this embodiment, simulating signals with hardware units such as CPUs and/or GPUs speeds up signal simulation and thus improves the efficiency of the simulation test.
In one possible implementation, the first driving state includes a first position, first velocity, and first acceleration of the virtual object under test at time t, and predicting the first driving state based on the processing delay to obtain a second driving state includes:
predicting the first driving state of the virtual object under test using a Kalman filtering method based on the processing delay to obtain the second driving state, where the second driving state includes a second position, second velocity, and second acceleration of the virtual object under test at time t+T, T being the processing delay.
In this embodiment, predicting the position, velocity, and acceleration at a future moment effectively compensates for the processing delay and thus improves the accuracy of the simulation test.
In one possible implementation, the autonomous driving test architecture further includes a digital simulator, a driving system, and a powertrain simulator. The digital simulator is configured to receive the output signal sent by the sensor simulator and forward it to the driving system; the driving system is configured to determine a driving decision based on the output signal; and the powertrain simulator is configured to simulate the driving decision to obtain a third driving state and feed the third driving state back to the virtual scene simulator, so that the virtual object under test updates the first driving state based on the third driving state.
In this embodiment, introducing a digital simulator, a driving system, and a powertrain simulator makes it possible to derive driving decisions from the output signals and update the driving state of the virtual object under test accordingly, closing the simulation test loop and improving test efficiency.
In one possible implementation, the virtual sensors include at least one of a millimeter-wave radar virtual sensor, a lidar virtual sensor, an infrared virtual sensor, and a camera virtual sensor.
In this embodiment, supporting multiple kinds of virtual sensors allows different sensors to be simulation-tested, increasing test flexibility and thus test efficiency.
According to a second aspect, an embodiment of this application provides a simulation test apparatus, applied to an input signal simulator. The input signal simulator is located in an autonomous driving test architecture that further includes a virtual scene simulator and a sensor simulator; the virtual scene simulator simulates a virtual scene that includes a virtual object under test, and the virtual object under test includes a first driving state and multiple virtual sensors. The apparatus includes:
a receiving circuit, configured to obtain the processing delay of each virtual sensor;
a prediction circuit, configured to determine whether each processing delay satisfies a preset condition and, if any processing delay satisfies the preset condition, predict the first driving state based on that processing delay to obtain a second driving state;
a first simulation circuit, configured to perform simulation, based on each second driving state, with the virtual sensor corresponding to the processing delay to obtain one or more first input signals, where each first input signal corresponds one-to-one to a virtual sensor; and
a first sending circuit, configured to send the one or more first input signals to the sensor simulator.
In one possible implementation, the first simulation circuit is further configured to perform synchronized simulation with the multiple virtual sensors, each corresponding to its processing delay, to obtain multiple first input signals.
In one possible implementation, the processing delay is determined by the difference between a first processing time and a second processing time, where the first processing time is the virtual sensor's processing time in the sensor simulator and the second processing time is the preset real processing time of the corresponding real sensor.
In one possible implementation, the apparatus further includes:
a second simulation circuit, configured to, if any processing delay does not satisfy the preset condition, perform simulation with the virtual sensor corresponding to that processing delay based on the first driving state to obtain a second input signal; and
a second sending circuit, configured to send the one or more second input signals to the sensor simulator with a delay based on the processing delay.
In one possible implementation, the first or second input signal is simulated by the input signal simulator using a ray tracing algorithm on at least one GPU.
In one possible implementation, the first driving state includes a first position, first velocity, and first acceleration of the virtual object under test at time t, and the prediction circuit is further configured to predict the first driving state using a Kalman filtering method based on the processing delay to obtain a second driving state that includes a second position, second velocity, and second acceleration of the virtual object under test at time t+T, T being the processing delay.
According to a third aspect, an embodiment of this application provides a simulation test system, including a virtual scene simulator, an input signal simulator, a sensor simulator, a digital simulator, and a system synchronization module, where:
the virtual scene simulator is configured to simulate a virtual scene, the virtual scene includes a virtual object under test, and the virtual object under test includes a first driving state and multiple virtual sensors;
the input signal simulator is configured to: obtain the processing delay of each virtual sensor; determine whether each processing delay satisfies a preset condition; if any processing delay satisfies the preset condition, predict the first driving state based on that processing delay to obtain a second driving state; perform simulation, based on each second driving state, with the virtual sensor corresponding to the processing delay to obtain one or more first input signals, each first input signal corresponding one-to-one to a virtual sensor; and send the one or more first input signals to the sensor simulator;
the sensor simulator is configured to receive the first input signals and perform computation based on each virtual sensor's preset front-end model and preset algorithm to obtain output signals;
the digital simulator is configured to receive the output signals sent by the sensor simulator; and
the system synchronization module is configured to provide a synchronized clock to the virtual scene simulator, the input signal simulator, the sensor simulator, and the digital simulator.
In one possible implementation, the input signal simulator is further configured to perform synchronized simulation with the multiple virtual sensors, each corresponding to its processing delay, to obtain multiple first input signals.
In one possible implementation, the processing delay is determined by the difference between a first processing time and a second processing time, where the first processing time is the virtual sensor's processing time in the sensor simulator and the second processing time is the preset real processing time of the corresponding real sensor.
In one possible implementation, the input signal simulator is further configured to, if any processing delay does not satisfy the preset condition, perform simulation with the virtual sensor corresponding to that processing delay based on the first driving state to obtain a second input signal, and send the one or more second input signals to the sensor simulator with a delay based on the processing delay.
In one possible implementation, the sensor simulator is further configured to receive the second input signals, and the virtual sensor's preset front-end model is Y=G*X+N+I, where Y is the output signal of the front-end model, X is the first or second input signal, G is the gain of the virtual sensor's front end, N is the noise of the front end, and I is the interference introduced by the front end.
In one possible implementation, the virtual scene is simulated by the virtual scene simulator using at least one CPU and/or at least one GPU, and the first or second input signal is simulated by the input signal simulator using a ray tracing algorithm on at least one GPU.
In one possible implementation, the first driving state includes a first position, first velocity, and first acceleration of the virtual object under test at time t, and the input signal simulator is further configured to predict the first driving state using a Kalman filtering method based on the processing delay to obtain a second driving state that includes a second position, second velocity, and second acceleration of the virtual object under test at time t+T, T being the processing delay.
In one possible implementation, the system further includes a driving system and a powertrain simulator, where:
the digital simulator is further configured to send the output signals to the driving system;
the driving system is configured to determine a driving decision based on the output signals; and
the powertrain simulator is configured to simulate the driving decision to obtain a third driving state and feed the third driving state back to the virtual scene simulator, so that the virtual object under test updates the first driving state based on the third driving state.
In one possible implementation, the virtual sensors include at least one of a millimeter-wave radar virtual sensor, a lidar virtual sensor, an infrared virtual sensor, and a camera virtual sensor.
According to a fourth aspect, an embodiment of this application provides a computer-readable storage medium storing a computer program that, when run on a computer, causes the computer to perform the method of the first aspect.
According to a fifth aspect, an embodiment of this application provides a computer program that, when executed by a computer, performs the method of the first aspect.
In one possible design, the program of the fifth aspect may be stored entirely or partly on a storage medium packaged together with the processor, or entirely or partly in a memory not packaged together with the processor.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of a system architecture according to an embodiment of this application;
FIG. 2 is a schematic flowchart of a simulation test method according to an embodiment of this application;
FIG. 3 is a schematic diagram of state prediction according to an embodiment of this application;
FIG. 4 is a schematic structural diagram of a simulation test apparatus according to an embodiment of this application;
FIG. 5 is a schematic structural diagram of an electronic device according to an embodiment of this application.
Detailed Description
The technical solutions in the embodiments of this application are described below with reference to the accompanying drawings. In the description of the embodiments of this application, unless otherwise stated, "/" means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone.
Hereinafter, the terms "first" and "second" are used for description only and shall not be understood as indicating or implying relative importance or implicitly specifying the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more such features. In the description of the embodiments of this application, unless otherwise stated, "multiple" means two or more.
At present, to improve the efficiency of assisted and autonomous driving tests and the consistency between simulation tests and road tests, a hardware-in-the-loop solution is needed that can simulate various scenarios in the laboratory while jointly testing the sensor capabilities, the chassis and powertrain of the test vehicle, and the autonomous driving software and algorithms. Laboratory scene simulation can use either analog interfaces or digital interfaces. With analog interfaces, the scene simulation generates the analog signals received by each sensor and sends them to the sensors over analog interfaces. This approach currently suffers from system complexity and immature solutions. For example, radar-echo simulation of the many targets in a scene can only emulate a handful of targets, which does not meet requirements; "probe wall" systems that simulate tens of target echoes are extremely expensive and complex, with no mature solution; and there is as yet no solution for simulating multi-target LiDAR echo signals. Therefore, scene simulation generally uses digital interfaces: the generated scene simulation signals are sent directly over digital interfaces to the assisted/autonomous driving processing system without passing through actual sensors. The usual approach is to build behavioral or statistical models of each sensor to emulate its performance impact. This approach cannot synchronously simulate the behavior of multiple sensors for the same scene, and behavioral or statistical models cannot accurately characterize a sensor's performance in a specific scenario. Some proposals combine physical models with the sensors' algorithms to emulate sensor performance in specific scenarios, but they do not consider the performance impact of the sensor front end, nor the synchronization and latency issues among multiple simulated sensor signals. Multi-sensor fusion in autonomous driving requires that scene simulation synchronously emulate how different sensors perceive the same scene; and to keep the hardware-in-the-loop system consistent with real road-test behavior, the scene simulation system must be able to compensate for the extra latency introduced by scene simulation.
To address these problems, embodiments of this application propose a simulation test method that effectively solves the latency problem introduced by simulated sensors in digital scene simulation tests, so that the behavior and performance of multiple sensors can be accurately and synchronously simulated in a digital scene simulator.
The simulation test method provided by the embodiments of this application is now described with reference to FIGS. 1-3. FIG. 1 shows the system architecture provided by an embodiment of this application. Referring to FIG. 1, the architecture includes a virtual scene simulator 100, an input signal simulator 200, a sensor simulator 300, a digital simulator 400, an assisted driving and autonomous driving system 500 (for convenience, hereinafter simply "driving system"), a powertrain simulator 600, and a system synchronization module 700. Specifically:
The virtual scene simulator 100 is configured to construct a virtual scene and send the virtual scene information to the input signal simulator 200. In a specific implementation, the virtual scene simulator 100 may be a computer or another type of computing device, which is not specially limited in this application. The virtual scene simulator 100 may include a high-speed Ethernet NIC supporting the IEEE 1588 protocol and one or more central processing units (CPUs) or graphics processing units (GPUs) to guarantee real-time simulation performance. The virtual scene may be constructed by scene simulation software installed on the virtual scene simulator 100; the software may be commercial or open-source 3D simulation software, which is not specially limited in the embodiments of this application.
The virtual scene may be built from virtual three-dimensional models that emulate real-world objects, such as vehicles, people, animals, trees, roads, and buildings, among others; this is not specially limited. Such a 3D model can be regarded as a virtual object. In practice, a user typically creates one virtual object in the scene as the test object; that test object is the first virtual object, and all other virtual objects in the scene are second virtual objects. For example, the first virtual object may be a car whose autonomous and assisted driving performance is to be tested: while the car is driving, surrounding second virtual objects create obstacles that affect its travel and thus its driving decisions, such as accelerating, decelerating, turning, or stopping. Through the scene simulation software, the virtual scene simulator 100 sends the information of the virtual objects in the scene, which may include coordinates, materials, and lighting, to the input signal simulator 200. The transfer between the virtual scene simulator 100 and the input signal simulator 200 uses a standardized interface, so that the architecture does not depend on any particular scene simulation software.
It can be understood that the first virtual object may also be a drone or another type of autonomous driving device; this is not specially limited.
In addition, because signals such as visible light, millimeter-wave radar, lidar, and infrared radar undergo physical phenomena such as reflection, scattering, and diffraction when they strike the surface of a real object, the virtual scene simulator 100 can build a corresponding physical model of each virtual object based on these phenomena, and can establish model parameters matching the virtual object's material, which is kept consistent with the material of the real object.
To test the first virtual object, the virtual scene simulator 100 can also place virtual sensors on it; the virtual sensors emulate the acquisition of input signals in the virtual environment. With the above physical models and parameters, the input signals acquired by each kind of virtual sensor in the virtual environment can be simulated.
The input signal simulator 200 is configured to simulate the input signals acquired by each kind of virtual sensor in the virtual environment and send them to the sensor simulator 300 for processing. The input signals can be simulated based on the first virtual object's driving state and the virtual scene information. The virtual sensors may include millimeter-wave radar, lidar, infrared, and camera virtual sensors, as well as other types; this is not specially limited. In a specific implementation, the input signal simulator 200 may be a computer or another computing device and may include a high-speed Ethernet NIC supporting IEEE 1588 and one or more GPUs. The simulation of input-signal acquisition can be implemented by input-signal simulation software, in which the format, type, and related parameters of the virtual sensors can be configured. The software receives the virtual-object information and the related material parameters from the virtual scene simulator 100 and uses a ray tracing algorithm to simulate the different kinds of virtual sensors based on the first virtual object's driving state (e.g., position, velocity, and acceleration), the physical models, and the model parameters; for example, it computes the physical effects (including but not limited to reflection, scattering, and diffraction) of each real sensor's signal on the virtual objects' surfaces. In this way, the input signals in the virtual environment can be simulated: the virtual sensor's transmitter emits a signal, the signal interacts physically with the surfaces of the virtual objects, and the signal returned to the virtual sensor's receiver forms the sensor's input signal. The GPUs process this signal simulation to guarantee real-time performance. For example, the input signal simulator 200 can capture images of the virtual environment through a virtual camera, or detect virtual objects in the virtual environment through a millimeter-wave radar.
The sensor simulator 300 is configured to emulate the behavior and performance of each virtual sensor. For example, the sensor simulator 300 can receive the input signals obtained by the input signal simulator 200, process them in the way a real sensor would, obtain output signals, and send those output signals to the digital simulator 400 for processing. In a specific implementation, the sensor simulator 300 may be a computer or another computing device, which is not specially limited; it may include a high-speed Ethernet NIC supporting IEEE 1588 and one or more field-programmable gate array (FPGA) accelerator cards or GPUs. For example, the sensor simulator 300 can emulate both the front-end performance of each virtual sensor and each virtual sensor's algorithm. The front-end performance can be emulated by modeling: each virtual sensor's front end may be pre-built as Y=G*X+N+I, where X is the sensor's input signal, Y is the front end's output signal, G is the front-end gain, N is the front-end noise, and I is the interference introduced by the front end. The output signal Y is then processed by the virtual sensor's preset algorithm.
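The front-end model Y = G*X + N + I described above can be sketched in a few lines of Python. This is a minimal illustration only, not the patent's implementation; the sample-wise form, function name, and zero-noise check are assumptions for demonstration.

```python
import random

def front_end(x, gain, noise_std, interference, seed=0):
    """Apply the preset front-end model Y = G*X + N + I sample by sample.
    N is drawn as zero-mean Gaussian noise with standard deviation noise_std;
    with noise_std=0 the model is deterministic: Y = G*X + I."""
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    return [gain * xi + rng.gauss(0.0, noise_std) + interference for xi in x]

# With zero noise the model reduces to Y = G*X + I:
y = front_end([1.0, 2.0, 3.0], gain=2.0, noise_std=0.0, interference=0.5)
# y == [2.5, 4.5, 6.5]
```

In the architecture above this computation would run per virtual sensor, with G, N, and I configured from the sensor's second-parameter information.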
It can be understood that the front-end model of the virtual sensor is built from the real sensor, so emulating the real sensor's front end through the virtual sensor's model guarantees that the front ends of the virtual and real sensors are statistically consistent in performance.
In addition, the virtual sensor's algorithm can simply reuse the real sensor's algorithm, which is not repeated here. For the algorithms of different real sensors, the sensor simulator 300 can use corresponding FPGA accelerator cards or GPU cards for the simulation, to meet the real-time and compute requirements.
The digital simulator 400 is configured to receive the output signals sent by the sensor simulator 300 and forward them to the driving system 500. In a specific implementation, the digital simulator 400 may be a computer or another computing device, which is not specially limited; it may include a high-speed Ethernet NIC supporting IEEE 1588, one or more FPGA accelerator cards, and multiple interfaces.
These interfaces include physical and digital interfaces between the digital simulator 400 and real sensors, through which the digital simulator 400 can receive real data collected by different real sensors and replay that collected data for testing. The physical interfaces may include, but are not limited to, Controller Area Network (CAN), Mobile Industry Processor Interface (MIPI), Ethernet, Gigabit Multimedia Serial Link (GMSL), Flat Panel Display Link (FPD-Link), Local Interconnect Network (LIN), and 100BASE-T1. The digital interfaces may include, but are not limited to, Peripheral Component Interconnect Express (PCIe), Serial Advanced Technology Attachment (SATA), high-speed Ethernet, and digital optical-fiber interfaces.
In addition, the interfaces also include a physical interface between the digital simulator 400 and the driving system 500, and a digital interface between the digital simulator 400 and the powertrain simulator 600. The physical interface may include a CAN interface: through it, the digital simulator 400 can connect to the driving system 500's CAN bus and send the output signals to the driving system 500, so that the driving system 500 can make driving decisions based on those signals. The digital interface may include an Ethernet interface used to carry the driving state of the object under test (e.g., a vehicle) output by the powertrain simulator 600.
The driving system 500 is configured to receive the output signals sent by the digital simulator 400 and make driving decisions based on them. The driving system 500 may reside in a real object under test; for example, it may be inside a real test vehicle, in which case the vehicle is the object under test. It can be understood that the driving system 500 may also be tested on its own; for example, it may be removed from the real vehicle, in which case the driving system 500 itself is the object under test. The driving decisions may include accelerating, braking, decelerating, turning, and other operations; this is not specially limited. The driving system 500 then sends the driving decisions to the powertrain simulator 600, so that the powertrain simulator 600 can simulate them and update the driving states of the real object under test and the virtual object.
The powertrain simulator 600 is configured to receive the driving decisions sent by the driving system 500, simulate the dynamic characteristics of the real vehicle based on them, output the corresponding driving state, and feed that driving state back to the first virtual object in the virtual scene, so that the first virtual object can update itself accordingly, completing the simulation test. In a specific implementation, the powertrain simulator 600 may be a computer or another computing device, which is not specially limited; it may include multiple interfaces: a physical interface (e.g., CAN) to the driving system 500, through which it receives the driving decisions, and a digital interface (e.g., Ethernet) to the virtual scene simulator, through which it sends the real-vehicle state to the first virtual object.
The system synchronization module 700 is configured to provide a synchronized clock to the virtual scene simulator 100, the input signal simulator 200, the sensor simulator 300, and the digital simulator 400, to keep their clocks synchronized. In a specific implementation, the system synchronization module 700 may be, for example, a high-speed Ethernet switch supporting the IEEE 1588 synchronization protocol, or a dedicated synchronization module; this is not specially limited.
It can be understood that the virtual scene simulator 100, the input signal simulator 200, the sensor simulator 300, and the digital simulator 400 can exchange data through a high-speed Ethernet switch or other high-speed data-connection devices.
FIG. 2 is a schematic flowchart of an embodiment of the simulation test method provided by this application, including:
Step 101: Construct a virtual scene with the virtual scene simulator 100.
Specifically, the virtual scene can be constructed by the virtual scene simulator 100. The virtual scene may include the scene to be tested, which may include virtual objects and the materials corresponding to those objects. For example, the virtual objects may include vehicles, people, animals, trees, roads, buildings, and other objects; this is not specially limited. In addition, the virtual object under test can be designated in the virtual scene; for example, the virtual object under test may be a car.
To run simulation tests on this virtual object, multiple virtual sensors can be configured on it. The first parameters of the virtual sensors can therefore be configured through the input signal simulator 200 to simulate the sensors' acquisition of input signals. The configured first-parameter information may include: the number of sensors, the sensor types, the mounting parameters on the real vehicle (e.g., mounting height and angle), and the sensors' physical parameters (e.g., the number, positions, and orientations of a millimeter-wave radar's transmit and receive antennas; a lidar's frequency, operating mode, and number of lines; a camera's field of view and focal length).
It can be understood that in the constructed virtual scene, the virtual object under test can be regarded as the first virtual object and all other virtual objects as second virtual objects. For example, if a car in the scene is taken as the first virtual object, i.e., the virtual object under test, the other virtual objects (vehicles, people, animals, trees, roads, buildings, etc.) are second virtual objects.
It should be noted that, in addition to configuring the parameters related to the virtual scene and virtual objects, the second parameters of the virtual sensors can be configured in the sensor simulator 300. The configured second-parameter information may include: the number of sensors, the sensor types, the front-end parameters (e.g., front-end gain G, front-end noise N, and introduced interference I), the sensor processing latency, and the sensor algorithms.
Step 102: The virtual scene simulator 100 sends the virtual scene information to the input signal simulator 200.
Specifically, the virtual scene information may include the information of all virtual objects in the scene, such as coordinate positions, material information (e.g., plastic, metal), and lighting conditions; this is not specially limited.
Step 103: The input signal simulator 200 simulates acquisition of multiple input signals of the first virtual object.
Specifically, while the first virtual object moves in the virtual scene, the input signal simulator 200 can use the virtual sensors to simulate the acquisition of multiple input signals around the first virtual object. The virtual sensors may include one or more of a millimeter-wave radar virtual sensor, a lidar virtual sensor, an infrared virtual sensor, and a camera virtual sensor. The input signals may include echo signals and/or images: for example, echo signals can be obtained through the millimeter-wave radar, lidar, or infrared virtual sensors, and captured images through the camera virtual sensor. The input signals can be computed using a ray tracing algorithm based on the first virtual object's driving state S_t and the virtual scene information; other algorithms may also be used, which is not specially limited. Here the state is

S_t = [X_i, V_i, A_i]^T,

where X_i is the first virtual object's position, V_i its velocity, and A_i its acceleration. It can be understood that the state of the first virtual object may also include other variables; this is not specially limited.
It can be understood that each input signal corresponds one-to-one to a virtual sensor. For example, echo input signal A may be acquired by the millimeter-wave radar virtual sensor, echo input signal B by the lidar virtual sensor, and image input signal C by the camera virtual sensor.
In addition, when acquiring the multiple input signals through the input signal simulator 200, the signals can also be synchronized, so that the input signal simulator 200 obtains, at the same time, the input signals that the multiple virtual sensors acquire for the same scene.
Further, when the sensor simulator 300 processes the input signals, a simulated processing time Tf arises, whereas a real sensor processing its input signals takes a real processing time Tz. It can be understood that, because the virtual sensor is an emulation of the real sensor and differs from it, Tf and Tz will generally differ: Tf may be greater than, equal to, or less than Tz. When Tf and Tz are inconsistent, the input signal simulator 200 can apply delay compensation to the input signals, so that the simulation test better reproduces the real scenario. The delay here is the difference between Tf and Tz.
The input signal simulator 200 can therefore obtain each virtual sensor's simulated processing time Tf from the sensor simulator 300, with one Tf per virtual sensor; for example, Tf1 for the millimeter-wave radar virtual sensor, Tf2 for the lidar virtual sensor, and Tf3 for the camera virtual sensor. It can likewise obtain the real processing time Tz of each corresponding real sensor, with one Tz per real sensor; for example, Tz1 for the real millimeter-wave radar, Tz2 for the real lidar, and Tz3 for the real camera.
The input signal simulator 200 can then compare each virtual sensor's simulated processing time Tf with the corresponding real processing time Tz.
If Tf < Tz, a first input signal can be simulated from the first virtual object's current state St, and the first input signal corresponding to that virtual sensor can be sent with a delay of Tz − Tf. For example, if the lidar virtual sensor's Tf is 5 ms and the real lidar's Tz is 10 ms, the corresponding first input signal can be sent with a delay of Tz − Tf = 10 − 5 = 5 ms. This matches the simulated processing time to the real sensor's processing time and improves the accuracy of the simulation test.
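The per-sensor comparison of Tf and Tz, and the compensation path it selects, can be sketched as follows. This is an illustrative Python sketch, not the patent's code; the sensor names and the dict-based interface are assumptions.

```python
def compensation_plan(sim_times_ms, real_times_ms):
    """For each virtual sensor, compare the simulated processing time Tf with the
    real sensor's processing time Tz and pick a compensation action:
      Tf < Tz -> hold the simulated first input signal for Tz - Tf before sending;
      Tf > Tz -> predict the driving state Tf - Tz ahead and simulate from it;
      Tf = Tz -> send immediately, no compensation needed."""
    plan = {}
    for name in sim_times_ms:
        tf, tz = sim_times_ms[name], real_times_ms[name]
        if tf < tz:
            plan[name] = ("delay_send", tz - tf)
        elif tf > tz:
            plan[name] = ("predict_state", tf - tz)
        else:
            plan[name] = ("send_now", 0)
    return plan

# Lidar: Tf=5 < Tz=10 -> delayed send by 5 ms; radar: Tf=10 > Tz=5 -> predict 5 ms ahead.
plan = compensation_plan({"lidar": 5, "radar": 10}, {"lidar": 10, "radar": 5})
```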
If Tf > Tz, the virtual sensor takes longer to process than the real sensor, which introduces a delay that must be compensated. In a specific implementation, the driving state of the first virtual object can be predicted, so that the input signal simulator 200 simulates based on the predicted driving state and obtains a second input signal, i.e., the predicted input signal after the interval Tf − Tz. The prediction may use a Kalman filtering method or another prediction method; this is not specially limited.
For example, suppose the driving state of the first virtual object at time t is

S_t = [X_i, V_i, A_i]^T,

where X_i is the first virtual object's position, V_i its velocity, and A_i its acceleration. It can be understood that the state may also include other variables; this is not specially limited. The predicted driving state of the first virtual object at time t+T is then

S_{t+T} = F_i · S_t + B_i · U_i + N_i,

where, written in the standard constant-acceleration kinematic form,

F_i = [[1, T, T²/2], [0, 1, T], [0, 0, 1]],  B_i = [T²/2, T, 1]^T,  U_i = ΔA_i,

and N_i is the state-prediction noise. This completes the prediction of the first virtual object's driving state at the future moment. The input signal can then be simulated by the input signal simulator 200 based on that future driving state, yielding the second input signal.
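The prediction step above can be sketched in plain Python. This is a minimal illustration under the standard constant-acceleration kinematics, with the noise term omitted; the function and variable names are not from the patent.

```python
def predict_state(x, v, a, T, delta_a=0.0):
    """Predict (position, velocity, acceleration) T seconds ahead using the
    constant-acceleration kinematic model S_{t+T} = F*S_t + B*U (noise omitted):
        x' = x + v*T + (a + delta_a)*T^2/2
        v' = v + (a + delta_a)*T
        a' = a + delta_a
    delta_a plays the role of the control input U_i = dA_i."""
    x2 = x + v * T + 0.5 * (a + delta_a) * T ** 2
    v2 = v + (a + delta_a) * T
    a2 = a + delta_a
    return x2, v2, a2

# A vehicle at x=10 m, v=20 m/s, a=0, predicted T=5 ms (0.005 s) ahead:
state = predict_state(10.0, 20.0, 0.0, T=0.005)
```

In the full method this predicted state replaces S_t when the input signal simulator generates the second input signal for a sensor whose Tf exceeds Tz.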
For example, suppose the millimeter-wave radar virtual sensor's simulated processing time is Tf1 = 10 ms, i.e., the radar virtual sensor takes 10 ms to process the input signal to obtain the radar first input signal; the real millimeter-wave radar's real processing time is Tz1 = 5 ms, i.e., the real radar takes 5 ms to process the input signal; the lidar virtual sensor's simulated processing time is Tf2 = 12 ms, i.e., the lidar virtual sensor takes 12 ms to process the input signal to obtain the lidar first input signal; and the real lidar's real processing time is Tz2 = 8 ms, i.e., the real lidar takes 8 ms. The radar virtual sensor's simulation delay is Ty1 = Tf1 − Tz1 = 10 − 5 = 5 ms, and the lidar virtual sensor's simulation delay is Ty2 = Tf2 − Tz2 = 12 − 8 = 4 ms, so Ty1 > Ty2. In this case, the first virtual object's driving state can be predicted based on the largest simulation delay (here Ty1), and based on that predicted driving state the radar and lidar virtual sensors perform their simulations respectively, yielding the radar second input signal and the lidar second input signal. This guarantees that all the first input signals receive effective delay compensation.
Taking the millimeter-wave radar virtual sensor as an example, FIG. 3 is a schematic diagram of second-input-signal prediction. As shown in FIG. 3, the first virtual object's driving state at time t is S1, and the radar virtual sensor can simulate input signal 101 at time t based on S1. Then the first virtual object's driving state at time t+T1 is predicted, giving driving state S2, where T1 is the delay, i.e., the difference between the radar virtual sensor's simulated processing time Tf1 and the real radar's processing time Tz1. At time t+T1, the radar virtual sensor can simulate second input signal 102 based on S2, completing the prediction of second input signal 102 and thereby the delay compensation of the input signal.
It can be understood that the lidar virtual sensor can likewise predict the lidar input signal in the manner of FIG. 3 to obtain the lidar second input signal, which is not repeated here.
Optionally, after the input signal simulator 200 completes the delay compensation of the input signals, it can also correct each virtual sensor's simulation delay. Taking the radar and lidar virtual sensors above as an example: after both are compensated, the lidar's compensation interval is the one borrowed from the radar virtual sensor. For example, the lidar virtual sensor's own simulation delay is Ty2 = Tf2 − Tz2 = 12 − 8 = 4 ms, while the compensation interval applied was 5 ms, so the lidar's compensation interval does not match its own simulation delay Ty2. In this case, the input signal simulator 200 can send the lidar's predicted second input signal with an additional delay T2, so that the cumulative time of T2 and the simulated processing time corresponds to that predicted second input signal, thereby emulating the real sensor's performance. Here T2 is the difference between the compensation interval and the simulation delay. For example, with a lidar compensation interval of 5 ms and a lidar simulation delay of 4 ms, T2 = 5 − 4 = 1 ms; that is, the lidar second input signal is sent 1 ms late. The real lidar's processing time is Tz2 = 8 ms and the compensation interval is 5 ms, i.e., the prediction targets the second input signal Tz2 + 5 = 8 + 5 = 13 ms ahead, while the lidar virtual sensor's simulated processing time Tf2 is 12 ms; sending the second input signal with a delay of 13 − 12 = 1 ms therefore guarantees that the predicted second input signal matches the sensor simulator 300's simulated processing time.
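The residual send-delay correction just described, where every sensor predicted with the common (largest) compensation interval is held back by the difference between that interval and its own simulation delay, can be sketched as follows. Illustrative Python; names are assumptions, not from the patent.

```python
def residual_send_delays(sim_delays_ms):
    """All sensors are predicted using the LARGEST simulation delay in the group.
    Each sensor's predicted second input signal is then additionally held back by
    T2 = max_delay - own_delay, so its total timing matches its own prediction."""
    t_max = max(sim_delays_ms.values())
    return {name: t_max - d for name, d in sim_delays_ms.items()}

# Radar simulation delay 5 ms, lidar 4 ms:
# the lidar's predicted signal is sent 1 ms late, the radar's immediately.
extra = residual_send_delays({"radar": 5, "lidar": 4})
```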
Step 104: The input signal simulator 200 sends the multiple input signals to the sensor simulator 300.
Specifically, after acquiring the multiple input signals of the first virtual object, the input signal simulator 200 can send them to the sensor simulator 300; the input signals may include the first input signals and/or the second input signals.
Step 105: The sensor simulator 300 processes the multiple input signals and outputs multiple output signals.
Specifically, after receiving the multiple input signals sent by the input signal simulator 200, the sensor simulator 300 can process them to obtain multiple output signals. The processing can follow each virtual sensor's front-end model (e.g., Y=G*X+N+I) and that virtual sensor's preset algorithm.
It should be noted that each virtual sensor may use a different preset front-end model; the specific implementation of the front-end model is not specially limited in the embodiments of this application.
It can be understood that each input signal corresponds one-to-one to an output signal. For example, echo input signal A acquired by the millimeter-wave radar virtual sensor can be fed into the radar's front-end model and processed with the preset radar sensor algorithm to obtain echo output signal A; echo input signal B acquired by the lidar virtual sensor can be fed into the lidar's front-end model and processed with the preset lidar sensor algorithm to obtain echo output signal B; and image input signal C acquired by the camera virtual sensor can be fed into the camera's front-end model and processed with the preset camera sensor algorithm to obtain image output signal C.
Step 106: Send the multiple output signals to the digital simulator 400.
Specifically, after the sensor simulator 300 processes the multiple input signals into multiple output signals, it can send those output signals to the digital simulator 400.
Step 107: The digital simulator 400 receives and processes the multiple output signals and sends them to the driving system 500.
Specifically, the digital simulator 400 can receive the output signal corresponding to each virtual sensor from the sensor simulator 300. To test the driving performance of the real vehicle, the digital simulator 400 can also forward the multiple output signals to the real vehicle's driving system 500.
Step 108: The driving system 500 determines a driving decision based on the multiple output signals.
Specifically, after receiving the multiple output signals sent by the digital simulator 400, the driving system 500 makes a driving decision. The driving decision may include accelerating, decelerating, braking, turning, and other operations, or other driving decisions; this is not specially limited.
Step 109: Send the driving decision to the powertrain simulator 600.
Step 110: The powertrain simulator 600 simulates the real vehicle's driving state St' based on the driving decision.
Specifically, the driving state St' corresponds to the driving decision. For example, if the driving decision is to accelerate, the simulated real-vehicle state is accelerating; if the driving decision is to brake, the simulated real-vehicle state is stopped after braking.
Step 111: Feed the driving state St' back to the virtual scene simulator 100, so that the virtual scene simulator 100 updates the first virtual object's driving state St based on St'.
Specifically, after obtaining the driving state St', the powertrain simulator 600 can feed it back to the virtual scene simulator 100, which then updates the first virtual object's driving state St based on St'. Taking car-following as an example: suppose the test vehicle (the first virtual object) follows another vehicle (a second virtual object) to test the first virtual object's autonomous driving performance. When the second virtual object brakes, the distance between the two keeps shrinking; by analyzing the sensing data acquired by the first virtual object's virtual sensors (e.g., the output signals), a driving decision can be determined (e.g., the driving system 500 decides to brake) and fed back to the first virtual object, completing the simulation test of the whole system.
Step 112: The virtual scene simulator 100 updates the first virtual object's driving state based on St'.
In the embodiments of this application, the driving state of the object under test is predicted based on the delay between the simulated sensor and the real sensor, and that delay is compensated accordingly. This effectively solves the latency problem introduced by simulated sensors in digital scene simulation tests, so that the behavior and performance of multiple sensors can be accurately and synchronously simulated in the digital scene simulator.
FIG. 4 is a schematic structural diagram of an embodiment of the simulation test apparatus of this application. As shown in FIG. 4, the simulation test apparatus 40 is applied to an input signal simulator located in an autonomous driving test architecture that further includes a virtual scene simulator and a sensor simulator; the virtual scene simulator simulates a virtual scene that includes a virtual object under test, which includes a first driving state and multiple virtual sensors. The apparatus may include: a receiving circuit 41, a prediction circuit 42, a first simulation circuit 43, and a first sending circuit 44;
the receiving circuit 41 is configured to obtain the processing delay of each virtual sensor;
the prediction circuit 42 is configured to determine whether each processing delay satisfies a preset condition and, if any processing delay satisfies the preset condition, predict the first driving state based on that processing delay to obtain a second driving state;
the first simulation circuit 43 is configured to perform simulation, based on each second driving state, with the virtual sensor corresponding to the processing delay to obtain one or more first input signals, where each first input signal corresponds one-to-one to a virtual sensor; and
the first sending circuit 44 is configured to send the one or more first input signals to the sensor simulator.
In one possible implementation, the first simulation circuit 43 is further configured to perform synchronized simulation with the multiple virtual sensors, each corresponding to its processing delay, to obtain multiple first input signals.
In one possible implementation, the processing delay is determined by the difference between a first processing time and a second processing time, where the first processing time is the virtual sensor's processing time in the sensor simulator and the second processing time is the preset real processing time of the corresponding real sensor.
In one possible implementation, the apparatus 40 further includes a second simulation circuit 45 and a second sending circuit 46;
the second simulation circuit 45 is configured to, if any processing delay does not satisfy the preset condition, perform simulation with the virtual sensor corresponding to that processing delay based on the first driving state to obtain a second input signal; and
the second sending circuit 46 is configured to send the one or more second input signals to the sensor simulator with a delay based on the processing delay.
In one possible implementation, the first or second input signal is simulated by the input signal simulator using a ray tracing algorithm on at least one GPU.
In one possible implementation, the first driving state includes a first position, first velocity, and first acceleration of the virtual object under test at time t, and the prediction circuit is further configured to predict the first driving state of the virtual object under test using a Kalman filtering method based on the processing delay to obtain a second driving state that includes a second position, second velocity, and second acceleration of the virtual object under test at time t+T, T being the processing delay.
It should be understood that the division of the modules of the simulation test apparatus shown in FIG. 4 is merely a division of logical functions; in actual implementation, they may be fully or partly integrated into one physical entity or physically separated. These modules may all be implemented as software invoked by a processing element, all as hardware, or partly as software invoked by a processing element and partly as hardware. For example, the detection module may be a separately established processing element or may be integrated into a chip of the electronic device; the other modules are implemented similarly. In addition, all or some of these modules may be integrated together or implemented independently. During implementation, the steps of the above method or the above modules can be completed by integrated logic circuits of hardware in the processor element or by instructions in the form of software.
For example, the above modules may be one or more integrated circuits configured to implement the above method, e.g., one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), or one or more field-programmable gate arrays (FPGAs). As another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SoC).
FIG. 5 is a schematic structural diagram of an embodiment of the electronic device 50 of this application; the electronic device 50 may be the input signal simulator 200 described above. As shown in FIG. 5, the electronic device 50 may be a data-processing device or a circuit device built into such a data-processing device. The electronic device 50 can be used to perform the functions/steps of the methods provided in the embodiments shown in FIGS. 1-3 of this application.
As shown in FIG. 5, the electronic device 50 is represented in the form of a general-purpose computing device.
The electronic device 50 may include: one or more processors 510; a communication interface 520; a memory 530; a communication bus 540 connecting the different system components (including the memory 530 and the processors 510); a database 550; and one or more computer programs.
The one or more computer programs are stored in the memory and include instructions that, when executed by the electronic device, cause the electronic device to perform the following steps:
obtaining the processing delay of each virtual sensor;
determining whether each processing delay satisfies a preset condition;
if any processing delay satisfies the preset condition, predicting the first driving state based on that processing delay to obtain a second driving state;
based on each second driving state, performing simulation with the virtual sensor corresponding to the processing delay to obtain one or more first input signals, where each first input signal corresponds one-to-one to a virtual sensor; and
sending the one or more first input signals to the sensor simulator.
In one possible implementation, when the instructions are executed by the electronic device, the step of performing simulation with the virtual sensor corresponding to the processing delay to obtain one or more first input signals includes:
performing synchronized simulation with the multiple virtual sensors, each corresponding to its processing delay, to obtain multiple first input signals.
In one possible implementation, the processing delay is determined by the difference between a first processing time and a second processing time, where the first processing time is the virtual sensor's processing time in the sensor simulator and the second processing time is the preset real processing time of the corresponding real sensor.
In one possible implementation, when the instructions are executed by the electronic device, the electronic device further performs the following steps:
if any processing delay does not satisfy the preset condition, performing simulation with the virtual sensor corresponding to that processing delay based on the first driving state to obtain a second input signal; and
sending the one or more second input signals to the sensor simulator with a delay based on the processing delay.
In one possible implementation, the sensor simulator is configured to receive the first or second input signal and perform computation based on the virtual sensor's preset front-end model and preset algorithm to obtain an output signal, the preset front-end model of the virtual sensor being Y=G*X+N+I, where Y is the output signal of the front-end model, X is the first or second input signal, G is the gain of the virtual sensor's front end, N is the noise of the front end, and I is the interference introduced by the front end.
In one possible implementation, the virtual scene is simulated by the virtual scene simulator using at least one CPU and/or at least one GPU, and the first or second input signal is simulated by the input signal simulator using a ray tracing algorithm on at least one GPU.
In one possible implementation, the first driving state includes a first position, first velocity, and first acceleration of the virtual object under test at time t, and when the instructions are executed by the electronic device, the step of predicting the first driving state based on the processing delay to obtain a second driving state includes:
predicting the first driving state of the virtual object under test using a Kalman filtering method based on the processing delay to obtain the second driving state, where the second driving state includes a second position, second velocity, and second acceleration of the virtual object under test at time t+T, T being the processing delay.
In one possible implementation, the autonomous driving test architecture further includes a digital simulator, a driving system, and a powertrain simulator. The digital simulator is configured to receive the output signal sent by the sensor simulator and forward it to the driving system; the driving system is configured to determine a driving decision based on the output signal; and the powertrain simulator is configured to simulate the driving decision to obtain a third driving state and feed the third driving state back to the virtual scene simulator, so that the virtual object under test updates the first driving state based on the third driving state.
In one possible implementation, the virtual sensors include at least one of a millimeter-wave radar virtual sensor, a lidar virtual sensor, an infrared virtual sensor, and a camera virtual sensor.
It can be understood that the interface connection relationships between the modules illustrated in this embodiment are only schematic and do not constitute a structural limitation on the electronic device 50. In other embodiments of this application, the electronic device 50 may use interface connection manners different from those in the foregoing embodiment, or a combination of multiple interface connection manners.
It can be understood that, to implement the above functions, the electronic device 50 includes corresponding hardware structures and/or software modules for performing each function. Those skilled in the art will readily appreciate that, in combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, the embodiments of this application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the embodiments of this application.
The embodiments of this application may divide the above electronic device and the like into functional modules according to the above method examples; for example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments is schematic and is only a logical functional division; there may be other division manners in actual implementation.
From the above description of the implementations, those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional modules is used as an example; in practical applications, the above functions can be assigned to different functional modules as needed, i.e., the internal structure of the apparatus can be divided into different functional modules to complete all or some of the functions described above. For the specific working processes of the system, apparatus, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
Each functional unit in the embodiments of this application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If implemented in the form of a software functional unit and sold or used as an independent product, the integrated unit may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the embodiments of this application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or some of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes media that can store program code, such as flash memory, a removable hard disk, read-only memory, random access memory, a magnetic disk, or an optical disc.
The above are only specific implementations of this application, but the protection scope of this application is not limited thereto; any change or replacement within the technical scope disclosed in this application shall be covered by the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (24)

  1. A simulation test method, applied to an input signal simulator, wherein the input signal simulator is located in an autonomous driving test architecture, the autonomous driving test architecture further comprises a virtual scene simulator and a sensor simulator, the virtual scene simulator is configured to simulate a virtual scene, the virtual scene comprises a virtual object under test, and the virtual object under test comprises a first driving state and a plurality of virtual sensors, the method comprising:
    obtaining a processing delay of each of the virtual sensors;
    determining whether each of the processing delays satisfies a preset condition;
    if any of the processing delays satisfies the preset condition, predicting the first driving state based on the processing delay to obtain a second driving state;
    based on each of the second driving states, performing simulation using the virtual sensor corresponding to the processing delay to obtain one or more first input signals, wherein each of the first input signals corresponds one-to-one to each of the virtual sensors; and
    sending the one or more first input signals to the sensor simulator.
  2. The method according to claim 1, wherein performing simulation using the virtual sensor corresponding to the processing delay to obtain one or more first input signals comprises:
    performing synchronized simulation using a plurality of virtual sensors respectively corresponding to each of the processing delays, to obtain a plurality of first input signals.
  3. The method according to claim 1 or 2, wherein the processing delay is determined by a difference between a first processing time and a second processing time, the first processing time being a processing time of the virtual sensor in the sensor simulator, and the second processing time being a preset real processing time of a real sensor corresponding to the virtual sensor.
  4. The method according to any one of claims 1-3, further comprising:
    if any of the processing delays does not satisfy the preset condition, performing simulation using the virtual sensor corresponding to the processing delay based on the first driving state to obtain a second input signal; and
    sending the one or more second input signals to the sensor simulator with a delay based on the processing delay.
  5. The method according to claim 4, wherein the sensor simulator is configured to receive the first input signal or the second input signal and perform computation based on a preset front-end model and a preset algorithm of the virtual sensor to obtain an output signal, the preset front-end model of the virtual sensor being Y=G*X+N+I, where Y is the output signal of the front-end model, X is the first input signal or the second input signal, G is a gain of the virtual sensor's front end, N is noise of the virtual sensor's front end, and I is interference introduced by the virtual sensor's front end.
  6. The method according to claim 4, wherein the virtual scene is obtained by the virtual scene simulator through simulation using at least one CPU and/or at least one GPU, and the first input signal or the second input signal is obtained by the input signal simulator through simulation using a ray tracing algorithm on at least one GPU.
  7. The method according to claim 1, wherein the first driving state comprises a first position, a first velocity, and a first acceleration of the virtual object under test at time t, and predicting the first driving state based on the processing delay to obtain a second driving state comprises:
    predicting the first driving state of the virtual object under test using a Kalman filtering method based on the processing delay to obtain the second driving state, wherein the second driving state comprises a second position, a second velocity, and a second acceleration of the virtual object under test at time t+T, T being the processing delay.
  8. The method according to claim 1, wherein the autonomous driving test architecture further comprises a digital simulator, a driving system, and a powertrain simulator, the digital simulator is configured to receive an output signal sent by the sensor simulator and send the output signal to the driving system, the driving system is configured to determine a driving decision based on the output signal, and the powertrain simulator is configured to simulate the driving decision to obtain a third driving state and feed the third driving state back to the virtual scene simulator, so that the virtual object under test updates the first driving state based on the third driving state.
  9. The method according to any one of claims 1-8, wherein the virtual sensors comprise at least one of a millimeter-wave radar virtual sensor, a lidar virtual sensor, an infrared virtual sensor, and a camera virtual sensor.
  10. A simulation test apparatus, applied to an input signal simulator, wherein the input signal simulator is located in an autonomous driving test architecture, the autonomous driving test architecture further comprises a virtual scene simulator and a sensor simulator, the virtual scene simulator is configured to simulate a virtual scene, the virtual scene comprises a virtual object under test, and the virtual object under test comprises a first driving state and a plurality of virtual sensors, the apparatus comprising:
    a receiving circuit, configured to obtain a processing delay of each of the virtual sensors;
    a prediction circuit, configured to determine whether each of the processing delays satisfies a preset condition and, if any of the processing delays satisfies the preset condition, predict the first driving state based on the processing delay to obtain a second driving state;
    a first simulation circuit, configured to perform simulation, based on each of the second driving states, using the virtual sensor corresponding to the processing delay to obtain one or more first input signals, wherein each of the first input signals corresponds one-to-one to each of the virtual sensors; and
    a first sending circuit, configured to send the one or more first input signals to the sensor simulator.
  11. The apparatus according to claim 10, wherein the first simulation circuit is further configured to perform synchronized simulation using a plurality of virtual sensors respectively corresponding to each of the processing delays, to obtain a plurality of first input signals.
  12. The apparatus according to claim 10 or 11, wherein the processing delay is determined by a difference between a first processing time and a second processing time, the first processing time being a processing time of the virtual sensor in the sensor simulator, and the second processing time being a preset real processing time of a real sensor corresponding to the virtual sensor.
  13. The apparatus according to any one of claims 10-12, further comprising:
    a second simulation circuit, configured to, if any of the processing delays does not satisfy the preset condition, perform simulation using the virtual sensor corresponding to the processing delay based on the first driving state to obtain a second input signal; and
    a second sending circuit, configured to send the one or more second input signals to the sensor simulator with a delay based on the processing delay.
  14. The apparatus according to claim 13, wherein the first input signal or the second input signal is obtained by the input signal simulator through simulation using a ray tracing algorithm on at least one GPU.
  15. The apparatus according to claim 10, wherein the first driving state comprises a first position, a first velocity, and a first acceleration of the virtual object under test at time t, and the prediction circuit is further configured to predict the first driving state of the virtual object under test using a Kalman filtering method based on the processing delay to obtain a second driving state, wherein the second driving state comprises a second position, a second velocity, and a second acceleration of the virtual object under test at time t+T, T being the processing delay.
  16. A simulation test system, comprising: a virtual scene simulator, an input signal simulator, a sensor simulator, a digital simulator, and a system synchronization module, wherein:
    the virtual scene simulator is configured to simulate a virtual scene, the virtual scene comprises a virtual object under test, and the virtual object under test comprises a first driving state and a plurality of virtual sensors;
    the input signal simulator is configured to: obtain a processing delay of each of the virtual sensors; determine whether each of the processing delays satisfies a preset condition; if any of the processing delays satisfies the preset condition, predict the first driving state based on the processing delay to obtain a second driving state; perform simulation, based on each of the second driving states, using the virtual sensor corresponding to the processing delay to obtain one or more first input signals, wherein each of the first input signals corresponds one-to-one to each of the virtual sensors; and send the one or more first input signals to the sensor simulator;
    the sensor simulator is configured to receive the first input signals and perform computation based on a preset front-end model and a preset algorithm of each virtual sensor to obtain output signals;
    the digital simulator is configured to receive the output signals sent by the sensor simulator; and
    the system synchronization module is configured to provide a synchronized clock to the virtual scene simulator, the input signal simulator, the sensor simulator, and the digital simulator.
  17. The system according to claim 16, wherein the input signal simulator is further configured to perform synchronized simulation using a plurality of virtual sensors respectively corresponding to each of the processing delays, to obtain a plurality of first input signals.
  18. The system according to claim 16 or 17, wherein the processing delay is determined by a difference between a first processing time and a second processing time, the first processing time being a processing time of the virtual sensor in the sensor simulator, and the second processing time being a preset real processing time of a real sensor corresponding to the virtual sensor.
  19. The system according to any one of claims 16-18, wherein the input signal simulator is further configured to, if any of the processing delays does not satisfy the preset condition, perform simulation using the virtual sensor corresponding to the processing delay based on the first driving state to obtain a second input signal, and send the one or more second input signals to the sensor simulator with a delay based on the processing delay.
  20. The system according to claim 19, wherein the sensor simulator is further configured to receive the second input signal, the preset front-end model of the virtual sensor being Y=G*X+N+I, where Y is the output signal of the front-end model, X is the first input signal or the second input signal, G is a gain of the virtual sensor's front end, N is noise of the virtual sensor's front end, and I is interference introduced by the virtual sensor's front end.
  21. The system according to claim 19, wherein the virtual scene is obtained by the virtual scene simulator through simulation using at least one CPU and/or at least one GPU, and the first input signal or the second input signal is obtained by the input signal simulator through simulation using a ray tracing algorithm on at least one GPU.
  22. The system according to claim 16, wherein the first driving state comprises a first position, a first velocity, and a first acceleration of the virtual object under test at time t, and the input signal simulator is further configured to predict the first driving state of the virtual object under test using a Kalman filtering method based on the processing delay to obtain a second driving state, wherein the second driving state comprises a second position, a second velocity, and a second acceleration of the virtual object under test at time t+T, T being the processing delay.
  23. The system according to claim 16, further comprising a driving system and a powertrain simulator, wherein:
    the digital simulator is further configured to send the output signals to the driving system;
    the driving system is configured to determine a driving decision based on the output signals; and
    the powertrain simulator is configured to simulate the driving decision to obtain a third driving state and feed the third driving state back to the virtual scene simulator, so that the virtual object under test updates the first driving state based on the third driving state.
  24. The system according to any one of claims 16-23, wherein the virtual sensors comprise at least one of a millimeter-wave radar virtual sensor, a lidar virtual sensor, an infrared virtual sensor, and a camera virtual sensor.
PCT/CN2021/132662 2020-12-03 2021-11-24 仿真测试方法、装置及系统 WO2022116873A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
KR1020237022187A KR20230116880A (ko) 2020-12-03 2021-11-24 시뮬레이션 테스트 방법, 장치 및 시스템
EP21899906.8A EP4250023A4 (en) 2020-12-03 2021-11-24 SIMULATION TEST METHOD, APPARATUS AND SYSTEM
JP2023533766A JP2023551939A (ja) 2020-12-03 2021-11-24 シミュレーション試験方法、装置及びシステム
US18/327,977 US20230306159A1 (en) 2020-12-03 2023-06-02 Simulation test method, apparatus, and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011408608.X 2020-12-03
CN202011408608.XA CN114609923A (zh) 2020-12-03 2020-12-03 仿真测试方法、装置及系统

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/327,977 Continuation US20230306159A1 (en) 2020-12-03 2023-06-02 Simulation test method, apparatus, and system

Publications (1)

Publication Number Publication Date
WO2022116873A1 true WO2022116873A1 (zh) 2022-06-09

Family

ID=81853801

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/132662 WO2022116873A1 (zh) 2020-12-03 2021-11-24 仿真测试方法、装置及系统

Country Status (6)

Country Link
US (1) US20230306159A1 (zh)
EP (1) EP4250023A4 (zh)
JP (1) JP2023551939A (zh)
KR (1) KR20230116880A (zh)
CN (1) CN114609923A (zh)
WO (1) WO2022116873A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117115364B (zh) * 2023-10-24 2024-01-19 芯火微测(成都)科技有限公司 微处理器sip电路测试状态监控方法、系统及存储介质

Citations (8)

Publication number Priority date Publication date Assignee Title
CN108681264A (zh) * 2018-08-10 2018-10-19 成都合纵连横数字科技有限公司 一种智能车辆数字化仿真测试装置
CN108877374A (zh) * 2018-07-24 2018-11-23 长安大学 基于虚拟现实与驾驶模拟器的车辆队列仿真系统和方法
CN109213126A (zh) * 2018-09-17 2019-01-15 安徽江淮汽车集团股份有限公司 自动驾驶汽车测试系统和方法
CN110779730A (zh) * 2019-08-29 2020-02-11 浙江零跑科技有限公司 基于虚拟驾驶场景车辆在环的l3级自动驾驶系统测试方法
CN110794712A (zh) * 2019-12-03 2020-02-14 清华大学苏州汽车研究院(吴江) 一种自动驾驶虚景在环测试系统和方法
US20200167436A1 (en) * 2018-11-27 2020-05-28 Hitachi, Ltd. Online self-driving car virtual test and development system
CN111505965A (zh) * 2020-06-17 2020-08-07 深圳裹动智驾科技有限公司 自动驾驶车辆仿真测试的方法、装置、计算机设备及存储介质
CN111881520A (zh) * 2020-07-31 2020-11-03 广州文远知行科技有限公司 一种自动驾驶测试的异常检测方法、装置、计算机设备及存储介质

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US10635761B2 (en) * 2015-04-29 2020-04-28 Energid Technologies Corporation System and method for evaluation of object autonomy
CN107807542A (zh) * 2017-11-16 2018-03-16 北京北汽德奔汽车技术中心有限公司 自动驾驶仿真系统


Non-Patent Citations (1)

Title
See also references of EP4250023A4

Also Published As

Publication number Publication date
JP2023551939A (ja) 2023-12-13
KR20230116880A (ko) 2023-08-04
EP4250023A1 (en) 2023-09-27
US20230306159A1 (en) 2023-09-28
CN114609923A (zh) 2022-06-10
EP4250023A4 (en) 2024-05-29

Similar Documents

Publication Publication Date Title
JP6548691B2 (ja) 画像生成システム、プログラム及び方法並びにシミュレーションシステム、プログラム及び方法
US20210406562A1 (en) Autonomous drive emulation methods and devices
US10635844B1 (en) Methods and systems for simulating vision sensor detection at medium fidelity
Muckenhuber et al. Object-based sensor model for virtual testing of ADAS/AD functions
WO2018066352A1 (ja) 画像生成システム、プログラム及び方法並びにシミュレーションシステム、プログラム及び方法
CN107103104B (zh) 一种基于跨层协同架构的车辆智能网联测试系统
EP3872633A1 (en) Autonomous driving vehicle simulation method in virtual environment
US11941888B2 (en) Method and device for generating training data for a recognition model for recognizing objects in sensor data of a sensor, in particular, of a vehicle, method for training and method for activating
WO2022246860A1 (zh) 一种自动驾驶系统的性能测试方法
WO2020220248A1 (zh) 自动驾驶车辆的仿真测试方法、系统、存储介质和车辆
WO2022116873A1 (zh) 仿真测试方法、装置及系统
CN112286079A (zh) 一种高拟真度无人机航电半实物实景仿真系统
CN115879323A (zh) 自动驾驶仿真测试方法、电子设备及计算机可读存储介质
CN116601612A (zh) 用于测试车辆的控制器的方法和系统
CN209002122U (zh) 一种摄像头控制器测试系统
WO2024131679A1 (zh) 车路云融合的道路环境场景仿真方法、电子设备及介质
CN114280562A (zh) 雷达仿真测试方法和实施该方法的计算机可读存储介质
WO2023213083A1 (zh) 目标检测方法、装置和无人车
CN116451439A (zh) 一种泊车硬件闭环测试系统
CN113468735B (zh) 一种激光雷达仿真方法、装置、系统和存储介质
CN115384526A (zh) 调试系统和调试方法
WO2022256976A1 (zh) 稠密点云真值数据的构建方法、系统和电子设备
US20190152486A1 (en) Low-latency test bed for an image- processing system
CN112560258B (zh) 一种测试方法、装置、设备及存储介质
JP6548708B2 (ja) 画像処理システムのための低レイテンシの試験機

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21899906

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023533766

Country of ref document: JP

ENP Entry into the national phase

Ref document number: 20237022187

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2021899906

Country of ref document: EP

Effective date: 20230620

NENP Non-entry into the national phase

Ref country code: DE