CN116432392A - Automatic driving simulation test method and test device - Google Patents

Automatic driving simulation test method and test device

Info

Publication number
CN116432392A
CN116432392A
Authority
CN
China
Prior art keywords: test, target, automatic driving, host vehicle, virtual host
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310188574.5A
Other languages
Chinese (zh)
Inventor
孙搏
刘新晓
潘余曦
杨子江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Xinxin Information Technology Co ltd
Original Assignee
Xi'an Xinxin Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Xinxin Information Technology Co ltd
Priority to CN202310188574.5A
Publication of CN116432392A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application relates to the technical field of automatic driving simulation testing, and provides an automatic driving simulation test method and test device. The test method comprises the following steps: creating a test job, wherein the test job comprises a plurality of test tasks sharing the same test object, each test task comprises the test object and a test environment, the test environment comprises a virtual host vehicle and a test scene that are decoupled from each other, the virtual host vehicle is used for carrying the test object to perform the automatic driving simulation test in the test scene, and the test object supports an automatic driving control algorithm; and completing, according to the test job, the automatic driving simulation test of the test object in the plurality of test tasks. In this technical scheme, a plurality of test tasks are created in batches for the same test object to generate a test job, and the test tasks in the test job undergo the automatic driving simulation test in batches, which effectively improves the creation efficiency of test jobs as well as the test efficiency of the automatic driving simulation test.

Description

Automatic driving simulation test method and test device
Technical Field
The application belongs to the technical field of automatic driving simulation test, and particularly relates to an automatic driving simulation test method and an automatic driving simulation test device.
Background
Throughout the development of an automatic driving system, automatic driving simulation testing is of crucial importance.
At present, an automatic driving simulation test generally uses digital twin technology to form a test scene from a scene map, weather environment, dynamic participants, static objects, evaluation rules and the like, tests the vehicle under test controlled by the automatic driving system, and determines according to the evaluation rules whether the automatic driving vehicle passes the test in the various scenes of the virtual environment. Technicians can then modify the automatic driving algorithm according to the test results to complete research, development and upgrading.
However, in the traditional automatic driving simulation test method, simulation test scenes, test tasks and the like must be created manually one by one by testers; the creation process is overly cumbersome and places high technical requirements on the testers, so the efficiency of the automatic driving simulation test is low.
Therefore, how to improve the test efficiency of the automatic driving simulation test is a problem to be solved.
Disclosure of Invention
The embodiment of the application provides an automatic driving simulation test method and a test device, which improve the test efficiency of automatic driving simulation test.
In a first aspect, an embodiment of the present application provides an automatic driving simulation test method, the method comprising: creating a test job, wherein the test job comprises a plurality of test tasks sharing the same test object, each test task comprises the test object and a test environment, the test environment comprises a virtual host vehicle and a test scene that are decoupled from each other, the virtual host vehicle is used for carrying the test object to perform the automatic driving simulation test in the test scene, and the test object supports an automatic driving control algorithm; and completing the automatic driving simulation test of the test object in the plurality of test tasks according to the test job.
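For illustration only, the following is a minimal sketch of how the data structures described in this aspect might be organized in code; the class and field names are assumptions of this sketch, not terms defined by the application.

```python
# Hypothetical sketch of the test job / test task / test environment structure;
# names and fields are illustrative only.
from dataclasses import dataclass, field
from typing import List


@dataclass
class VirtualHostVehicle:
    """Virtual host vehicle that carries the test object inside a scene."""
    vehicle_model: str
    dynamics_model: str
    sensor_models: List[str] = field(default_factory=list)


@dataclass
class TestScene:
    """Test scene, decoupled from the virtual host vehicle."""
    map_name: str
    environment_template: str
    evaluation_indexes: List[str] = field(default_factory=list)


@dataclass
class TestEnvironment:
    """A test environment pairs a virtual host vehicle with a test scene."""
    host_vehicle: VirtualHostVehicle
    scene: TestScene


@dataclass
class TestTask:
    """One test object running in one test environment."""
    test_object: str                  # e.g. an automatic driving or ADAS system under test
    environment: TestEnvironment


@dataclass
class TestJob:
    """A batch of test tasks that all share the same test object."""
    test_object: str
    tasks: List[TestTask] = field(default_factory=list)
```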
In one possible implementation, the method further includes creating the plurality of test tasks, which includes: acquiring a test request, wherein the test request comprises a target test object; determining a corresponding target virtual host vehicle and a plurality of target test scenes based on the target test object, wherein the plurality of target test scenes are generated based on a corresponding configuration of any one of a vehicle model, a sensor model or a dynamics model of the target virtual host vehicle; and generating, according to the target test object, a corresponding test task of the target virtual host vehicle in each of the different target test scenes.
In one possible implementation, the method further includes creating the target virtual host vehicle, which includes: obtaining a virtual host vehicle request, wherein the virtual host vehicle request comprises at least one target vehicle model and at least one target dynamics model or target sensor model; and acquiring key position information of the target vehicle model, determining the installation mode and attitude of the at least one target dynamics model or target sensor model, and generating the target virtual host vehicle.
In one possible implementation, the method further includes creating the target test scenario, the creating the target test scenario including: acquiring a test scene request, wherein the test scene request comprises at least one map and at least one environment template; and obtaining at least one target traffic flow and at least one target evaluation index corresponding to the target traffic flow according to the at least one map and the at least one environment template, and generating at least one target test scene.
In one possible implementation, the acquiring at least one target traffic flow includes: and acquiring the initial positions of the target traffic participants and the target path planning, and generating the target traffic flow according to a preset traffic behavior criterion.
In one possible implementation manner, the obtaining the target evaluation index corresponding to at least one target traffic flow includes: and acquiring an evaluation request of the target traffic flow, and determining a target evaluation index of the evaluation request.
In one possible implementation, the method further includes: recording the operation state, the test result and the test time of each test task in the plurality of test tasks, wherein the operation state comprises non-operation, operation and completion, and the test result comprises pass, fail and invalidation.
In one possible implementation, the method further includes: determining screening conditions based on a screening condition indication instruction, wherein the screening conditions comprise one or more of the operation state, the test result and the test time; screening a plurality of test operations according to the screening conditions to obtain a target test operation list; and displaying the target test job list.
In one possible implementation, the method further includes: selecting a target test job from the target test job list based on a test job selection instruction; and performing automatic driving simulation test according to the target test operation.
In a second aspect, an embodiment of the present application provides an automatic driving simulation test apparatus, the apparatus comprising: a creation module, configured to create a test job, wherein the test job comprises a plurality of test tasks sharing the same test object, each test task comprises the test object and a test environment, the test environment comprises a virtual host vehicle and a test scene that are decoupled from each other, the virtual host vehicle is used for carrying the test object to perform the automatic driving simulation test in the test scene, and the test object supports an automatic driving control algorithm; and a test module, configured to complete the automatic driving simulation test of the test object in the plurality of test tasks according to the test job.
In a third aspect, an embodiment of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the method according to the first aspect or any implementation manner of the first aspect when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program that, when executed by a processor, implements a method according to the first aspect or any one of the implementations.
In a fifth aspect, embodiments of the present application provide a computer program product, which when run on a terminal device, causes the terminal device to perform the method according to the first aspect or any implementation manner of the first aspect.
Compared with the prior art, the embodiment of the application has the beneficial effects that:
1. For the same test object, a plurality of test scenes can be selected, a plurality of test tasks corresponding one to one to the test scenes can be created in batches, and the plurality of test tasks sharing the same test object are created as a test job, which improves the creation efficiency of test jobs; further, the plurality of test tasks in the test job can be executed synchronously, which improves the test efficiency of the automatic driving simulation test.
2. The screening conditions for test jobs can be determined from multiple dimensions such as running status, test result and test time, so that target test jobs can be screened out from a large number of test jobs, the test jobs that require key analysis can be located directly and conveniently, and the management efficiency of test jobs is improved. In addition, after the target test job list has been obtained by screening, the test jobs that need to undergo the automatic driving simulation test again can be selected from the list according to actual needs and retested, realizing rapid retesting of key test jobs and improving the test efficiency of the automatic driving simulation test.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the following description will briefly introduce the drawings that are needed in the embodiments or the description of the prior art, it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of an automatic driving simulation test method according to an embodiment of the present application;
FIG. 2 is a flow chart of a method for creating a test job according to an embodiment of the present application;
FIG. 3 is a flowchart of a method for creating a virtual host vehicle according to an embodiment of the present disclosure;
FIG. 4 is a flowchart of a method for creating a test scenario according to an embodiment of the present application;
FIG. 5 is a flow chart of a screening method for testing operation according to an embodiment of the present disclosure;
FIG. 6 is a schematic flow chart of a screening method of test operation according to an embodiment of the present disclosure;
FIG. 7 is a block diagram of an autopilot simulation test apparatus according to one embodiment of the present application;
fig. 8 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
In addition, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
For ease of understanding, concepts related to the embodiments of the present application will be described first.
Automatic driving system: software and a communication system that realize real-time, continuous control of a vehicle through communication, computer and control technologies, so that the vehicle can drive automatically with no or only limited takeover by a driver; also referred to as a self-driving system for short.
Automatic driving simulation test: a virtual test environment is created through computer virtual reconstruction technology to simulate the physical and traffic environments encountered in real-world traffic, in order to test whether the automatic driving system under test can correctly identify potential risks, decide on driving actions and complete vehicle control so as to provide a safe and comfortable driving experience. Simulation testing helps automatic driving system developers reduce test costs and accelerate development.
Cloud simulation: a large-scale simulation test program executed with dynamic scheduling and deployment in a simulator cluster. The performance indexes of cloud simulation mainly include: elasticity, e.g., the time consumed for scaling out and scaling in; and server resource utilization, such as central processing unit (CPU) utilization, graphics processing unit (GPU) utilization, storage device utilization, and the like.
System under test: the test object of the cloud simulation program, usually a driver assistance system or an automatic driving system.
Closed-loop simulation: a simulation method that, according to data such as the map, the test scene, the test environment, the state of the vehicle under test and the states of other vehicles, together with information such as data collected in the actual traffic environment, generates real and/or virtual sensor data and positioning device data and sends them to the system under test; the data output by the system under test is then sent to a simulated or real vehicle to obtain the change in the vehicle's position and attitude, from which new real or virtual sensor and/or positioning device data are generated. This cycle is repeated continuously while the output of the system under test and the state of the vehicle are evaluated qualitatively and quantitatively.
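As a rough illustration of this cycle (not the application's implementation), the loop below shows one way the closed-loop iteration could be expressed; the objects passed in are placeholders for whatever simulator components are actually used.

```python
# Conceptual sketch of the closed-loop cycle; `scene`, `vehicle`, `system_under_test`
# and `evaluator` are placeholder objects, not components defined by the application.
def run_closed_loop(scene, vehicle, system_under_test, evaluator, max_steps=1000):
    """Repeat: sensor/positioning data -> system under test -> control -> vehicle motion -> evaluation."""
    for _ in range(max_steps):
        sensor_data = scene.sense(vehicle.state)            # real and/or virtual sensor + positioning data
        control = system_under_test.decide(sensor_data)     # output of the system under test
        vehicle.apply(control)                              # new position and attitude of the vehicle
        verdict = evaluator.check(vehicle.state, control)   # qualitative/quantitative evaluation
        if verdict is not None:                             # stop once the evaluator reaches a verdict
            return verdict
    return "pass"
```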
Software-in-the-loop simulation: a closed-loop simulation method in which the object under test is usually the software of a driver assistance or automatic driving system.
Traffic participants: in virtual simulation testing, objects that appear on the road, with or without autonomous motion, and can affect the driving process of the vehicle under test, such as motor vehicles, pedestrians and animals.
Tester: generally refers to a worker who tests the algorithm through the simulation system.
Test task: used to describe the traffic environment of the automatic driving vehicle in the simulation environment and to observe how the automatic driving vehicle passes through it.
Test job: in this application, a data structure formed by combining the commonalities of a batch of test tasks, which allows a user to quickly create, execute and edit them.
Latest update time: the time at which a data structure such as a test task, test job, virtual host vehicle or test scene was last edited, modified or had its state updated.
When a vehicle runs on a real road it encounters a large number of complex scenes, and it is difficult to cover enough extremely rare scenes during research and development. If a scene that the researchers did not consider is encountered during road driving, the automatic driving system cannot make accurate decisions and control the vehicle, and a severe accident is very likely to result. Throughout the development of an automatic driving system, quickly constructing sufficiently rich simulated test scenes through simulation testing is crucial to accelerating the commercialization of automatic driving technology.
At present, an automatic driving simulation test generally uses digital twin technology to form a test scene from a scene map, weather environment, dynamic participants, static objects and evaluation rules, and uses it to test the vehicle controlled by the automatic driving system. During the test, the automatic driving host vehicle encounters the various situations described in the test scene within the virtual environment, and the evaluation rules determine whether the automatic driving vehicle passes the test. After a large number of tests are completed, developers modify the algorithm of the automatic driving system according to the problems exposed by the tests, and testing continues after the version is updated, forming a spiral, test-data-driven cycle of research, development and upgrading.
However, the existing automatic driving simulation test method has the following problems:
1. When creating a simulation test task in automatic driving simulation test software, a vehicle under test, a test scene and a vehicle control system must be selected to form a complete test task; in traditional simulation software these must be created manually one by one by testers, which generates a large workload.
2. In traditional automatic driving simulation testing, the steps of building a test scene and creating a test task are excessively complicated and place extremely high technical requirements on testers, who must understand the scene construction strategy and technology; collaborative construction by multiple people is also difficult, so the test efficiency of the simulation test is low and the cost is high.
3. While batch test scenes are being executed simultaneously, testers must follow the running progress at the required times, find and handle abnormally running tasks promptly, pay attention to the task execution status, and check the analysis results after the tasks are completed, so the workload of the automatic driving simulation test is large and the working efficiency is low.
4. Traditional simulation software only provides raw data without statistical arrangement, or does not support multi-condition cascade screening, or does not provide functions for batch creation, execution and result checking based on a unified target under test, so the test progress and test results of test tasks cannot be checked quickly and the test efficiency of the automatic driving simulation test is low.
To address the problems in the prior art, the application provides an automatic driving simulation test method and test device, which have the following technical effects:
1. The complex simulation test task data structure is divided into a test scene, a virtual host vehicle, a vehicle control system and underlying resources; different personnel can maintain the underlying resources separately, and unexecuted test tasks are assembled in batches before testing, achieving efficient creation of simulation test tasks and efficient resource maintenance. The underlying resources include, but are not limited to, 'host vehicle resources' supporting virtual host vehicle creation and 'scene resources' supporting test scene creation;
2. Test tasks with common test requirements are abstracted into a test job; when a test job is created, it can be configured according to the same test object, and a large number of simulation test tasks can be quickly created by selecting test scenes in batches, which reduces the workload of testers and improves the creation efficiency of test tasks, wherein the test object supports an automatic driving control algorithm;
3. Multi-dimensional cascade screening can be performed according to the execution progress and execution result of the test job, whether the test job passes the test, and the like, so that the test tasks a user expects to check can be screened out quickly and conveniently, improving the user experience and the test efficiency of the automatic driving simulation test.
The technical scheme of the present application will be described in detail with reference to the accompanying drawings.
Fig. 1 is a flow chart of an automatic driving simulation test method according to an embodiment of the present application. As shown in fig. 1, the method includes at least S101 to S102.
S101, creating a test job, wherein the test job comprises a plurality of test tasks sharing the same test object, each test task comprises the test object and a test environment, the test environment comprises a virtual host vehicle and a test scene that are decoupled from each other, and the virtual host vehicle is used for carrying the test object to perform the automatic driving simulation test in the test scene, wherein the test object supports an automatic driving control algorithm.
In one possible implementation, a test request is obtained, the test request including a target test object; a corresponding target virtual host vehicle and a plurality of target test scenes are determined based on the target test object, wherein the plurality of target test scenes can be generated based on a corresponding configuration of one of a vehicle model, a sensor model or a vehicle dynamics model of the target virtual host vehicle; according to the target test object, a corresponding test task of the target virtual host vehicle in each of the different target test scenes is generated; and a test job is created from the generated plurality of test tasks.
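A hedged sketch of this batch creation step is shown below; the field names and the dictionary representation are assumptions made for illustration, not the application's data format.

```python
# Illustrative batch creation of test tasks from one test request: one task per
# target test scene, all sharing the same test object.
from typing import Dict, List


def create_test_job(test_object: str, host_vehicle: str, target_scenes: List[str]) -> Dict:
    tasks = [
        {"test_object": test_object, "host_vehicle": host_vehicle, "scene": scene}
        for scene in target_scenes
    ]
    return {"test_object": test_object, "tasks": tasks}


# Example: selecting three target scenes for the same test object yields one job with three tasks.
job = create_test_job("adas_demo", "sedan_with_lidar",
                      ["urban_intersection", "highway_cut_in", "night_rain"])
```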
As an example, the target test object may be an automatic driving system, an advanced driving assistance system (ADAS), a sensor fusion system, or the like.
It should be noted that, the test object in the embodiment of the present application may be various autopilot systems in the simulation platform, or may be an external autopilot system.
In one possible implementation manner, the target virtual host vehicle is composed of a vehicle model, a vehicle dynamics model and a sensor model, and is used for carrying target test objects to perform simulation tests in a plurality of target test scenes, transmitting scene information, receiving and executing control commands.
As one example, a vehicle model is used to describe information about the appearance, size, structure, etc. of a target virtual host vehicle within a target test scene.
As one example, a vehicle dynamics model is used to describe the athletic performance of a target virtual host vehicle within a target test scene.
As an example, the sensor models may include a camera model, a lidar model, a millimeter wave radar model, an ultrasonic radar model, a global navigation satellite system (GNSS)/inertial measurement unit (IMU) model, and the like.
The camera model is used for simulating a real camera mechanism, capturing and acquiring image information of the surrounding environment of the target virtual host vehicle through light rays, and informing a target test object.
The laser radar model is used for simulating a real laser radar, specifically simulating a laser emission and reflection mechanism, and informing the position, speed, size and other information of objects such as obstacles, vehicles and the like in the surrounding environment of the target virtual host vehicle to the target test object through point cloud data.
The millimeter wave radar model is used for simulating a real millimeter wave radar, specifically simulating electromagnetic wave emission and reflection mechanisms, acquiring information such as speed, position, size and the like of a moving object in a driving environment around a target virtual host vehicle, and informing a target test object.
The ultrasonic radar model is used for simulating a real ultrasonic radar, specifically simulating a sound wave emission and reflection mechanism, acquiring position information of objects in a relatively close range around a target virtual host vehicle, and informing a target test object.
Illustratively, the GNSS/IMU is operable to sense location and acceleration information of the target virtual host vehicle.
As an example, the vehicle dynamics model is used for modeling the target virtual host vehicle: it receives control instructions such as steering angle, accelerator pedal force and brake pedal force, performs motion simulation through the transmission system, tire system and the like, and controls the virtual host vehicle to move in the target test scene with motion behavior that matches a real vehicle.
In one possible implementation, the target test scene may be composed of a map, an environmental template, other dynamic elements such as interfering vehicles/pedestrians, obstacles, static elements such as various traffic facilities, evaluation indexes, and the like.
S102, completing the automatic driving simulation test of the test object in the plurality of test tasks according to the test job.
In one possible implementation, the plurality of test tasks in the test job undergo the automatic driving simulation test in batches. In the target test environment of each test task, the target virtual host vehicle collects scene information through the loaded sensor models and sends the collected scene information to the target test object; the target test object generates control instructions from the scene information and sends them to the target virtual host vehicle; the vehicle dynamics model performs motion simulation according to the control instructions sent by the target test object and controls the target virtual host vehicle to move in the target test scene; and the motion of the target virtual host vehicle in the target test scene is then evaluated by the evaluation indexes configured in the target test scene.
As an example, the scene information collected by the various sensor models may include image information of the surrounding environment of the target virtual host vehicle collected by the camera model, information such as the position, the speed, the size, etc. of the obstacle in the surrounding environment of the target virtual host vehicle collected by the laser radar model, information such as the speed, the position, the size, etc. of the moving object in the surrounding driving environment of the target virtual host vehicle collected by the millimeter wave radar model, position information of the object in a relatively close distance around the target virtual host vehicle collected by the ultrasonic radar model, and position and acceleration information of the target virtual host vehicle collected by the GNSS/IMU, etc.
As an example, the control instructions generated by the target test object from the scene information may include a steering angle, an accelerator pedal force, a brake pedal force, and the like.
As an example, the evaluation index is classified into a logical constraint index and a numerical constraint index.
The logic constraint type indexes comprise logic conditions such as red light running and solid line pressing, and when the action of the target virtual host vehicle in the target test scene violates the logic constraint type indexes in the simulation test process, trigger data are recorded, and the test result of the test task is marked as failure.
The numerical constraint type indexes comprise numerical monitoring of the speed, the acceleration and the like of the target virtual host vehicle, a limiting range of each index needs to be set in advance, when the parameter numerical value of the target virtual host vehicle exceeds the limiting range (such as overspeed and the like) in the simulation test process, index event triggering data are recorded, and the test result of the test task is marked as failure.
In one possible implementation, the test results of the test tasks include pass, fail, and invalidate.
As an example, when all evaluation indexes in a test task pass, the test result of the test task is passing; if the failed evaluation index exists in the test task, the test result of the test task is failure; when an evaluation index which cannot be normally executed due to resource loading abnormality, service breakdown and the like exists in the test task, the test result of the test task is invalid.
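The per-task result rules described above could be expressed roughly as follows; the threshold values and index names are invented for this sketch.

```python
# Sketch of checking a numerical-constraint index and aggregating per-index outcomes
# into a task result; limits and index names are made up for illustration.
def check_numerical_index(value: float, low: float, high: float) -> str:
    """Numerical-constraint index: fail once the monitored value leaves [low, high]."""
    return "pass" if low <= value <= high else "fail"


def aggregate_task_result(index_outcomes) -> str:
    """index_outcomes: iterable of 'pass', 'fail' or 'invalid', one per evaluation index."""
    outcomes = list(index_outcomes)
    if "invalid" in outcomes:
        return "invalid"   # an index could not be executed normally (resource/service failure)
    if "fail" in outcomes:
        return "fail"      # a logical or numerical constraint was violated
    return "pass"          # every evaluation index passed


# Example: an overspeed sample marks the whole task as failed.
speed_outcome = check_numerical_index(value=22.0, low=0.0, high=16.7)  # assumed limit ~60 km/h
print(aggregate_task_result(["pass", speed_outcome]))                  # fail
```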
In one possible implementation manner, during the process of performing an autopilot simulation test on a target test object according to a test job, a job status, a test result and a test time of each test task in the test job are recorded, wherein the job status includes not running, running and completed.
According to the technical scheme, the automatic driving simulation test is simultaneously carried out on a plurality of test tasks in the test operation, so that the test efficiency of the automatic driving simulation test is improved.
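To make the job-level flow concrete, the following is a hypothetical batch runner that executes each task and records the running status, test result and test time as described above; `run_task` stands in for whatever actually drives one closed-loop test.

```python
# Illustrative batch execution of the tasks in one test job with per-task records;
# the record fields mirror the description above but are not the application's schema.
import time
from typing import Callable, Dict, List


def run_test_job(tasks: List[Dict], run_task: Callable[[Dict], str]) -> List[Dict]:
    records = []
    for task in tasks:
        record = {"task": task, "status": "running", "result": None, "test_time": None}
        started = time.time()
        try:
            record["result"] = run_task(task)   # expected to return "pass" or "fail"
        except Exception:
            record["result"] = "invalid"        # e.g. resource loading error or service crash
        record["status"] = "completed"
        record["test_time"] = time.time() - started
        records.append(record)
    return records
```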
As an example, the step of creating a test job in S101 may refer to fig. 2. Fig. 2 is a flowchart of a method for creating a test job according to an embodiment of the present application. As shown in fig. 2, the method includes at least S201 to S204.
S201, acquiring a test request, wherein the test request comprises a target test object.
In one possible implementation manner, after the test request is acquired, the test request is analyzed, the name or the identifier of the test object carried by the test request is obtained, and the test object corresponding to the name or the identifier of the test object is used as the target test object of the automatic driving simulation test.
As one example, the target test object supports various types of automatic driving control algorithms, including, but not limited to, automatic driving systems, ADAS systems, and sensor fusion systems.
S202, determining a corresponding target virtual host vehicle and a plurality of target test scenes based on the target test object, wherein the plurality of target test scenes can be generated based on a corresponding configuration of one of a vehicle model, a sensor model or a vehicle dynamics model of the target virtual host vehicle.
In one possible implementation, a virtual host vehicle request is obtained, the virtual host vehicle request including at least one target vehicle model and at least one target vehicle dynamics model or target sensor model; key position information of the target vehicle model is acquired, the installation mode and attitude of the at least one target vehicle dynamics model or target sensor model are determined, and the target virtual host vehicle is generated.
As an example, the obtained virtual host vehicle request is parsed to obtain 1 target vehicle model, 2 target vehicle dynamics models and 3 target sensor models carried in the request; the 1 target vehicle model is selected from a vehicle model library, the 2 target vehicle dynamics models from a vehicle dynamics model library, and the 3 target sensor models from a sensor model library. The obtained key position information of the target vehicle model is then parsed to obtain the installation modes and installation attitudes of the 2 target vehicle dynamics models and the 3 target sensor models, and the target virtual host vehicle is generated from the models carried in the request together with the installation modes and attitudes indicated by the key position information.
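A sketch of this assembly step might look as follows; the mount fields and request format are assumptions for illustration and do not come from the application.

```python
# Hypothetical assembly of a target virtual host vehicle from the models named in
# a virtual host vehicle request, with sensor mounting position and attitude.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class SensorMount:
    sensor_model: str
    position_xyz: Tuple[float, float, float]   # mounting position on the vehicle model
    attitude_rpy: Tuple[float, float, float]   # mounting attitude (roll, pitch, yaw)


@dataclass
class AssembledHostVehicle:
    vehicle_model: str
    dynamics_models: List[str]
    sensor_mounts: List[SensorMount] = field(default_factory=list)


def assemble_host_vehicle(request: Dict) -> AssembledHostVehicle:
    """Pick the models named in the request and attach sensors at the key positions."""
    mounts = [SensorMount(s["model"], tuple(s["position"]), tuple(s["attitude"]))
              for s in request["sensors"]]
    return AssembledHostVehicle(request["vehicle_model"], request["dynamics_models"], mounts)


host = assemble_host_vehicle({
    "vehicle_model": "sedan_3d",
    "dynamics_models": ["powertrain_v1", "suspension_v1"],
    "sensors": [
        {"model": "front_camera", "position": (1.8, 0.0, 1.3), "attitude": (0.0, 0.0, 0.0)},
        {"model": "roof_lidar",   "position": (1.2, 0.0, 1.6), "attitude": (0.0, 0.0, 0.0)},
    ],
})
```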
As one example, a plurality of different types of vehicle models are stored in a vehicle model library, a plurality of different types of vehicle dynamics models are stored in a vehicle dynamics model library, and a plurality of different types of sensor models are stored in a sensor model library.
The target vehicle model may be a three-dimensional (3D) model of a vehicle used in the simulator, or a vehicle 3D model selected from the vehicle model library; the vehicle model is mainly used to describe information such as the appearance style, size and wheelbase of the vehicle.
The target vehicle dynamics model may be a vehicle dynamics model used in the simulator, or one selected from the vehicle dynamics model library. It is mainly used to describe, through power torque simulation, transmission simulation, suspension simulation, tire simulation and the like, how the vehicle runs under different steering angles and pedal forces, and to control the target virtual host vehicle so that its motion in the target test scene closely matches that of a real vehicle on a real road.
The target sensor model may be a sensor commonly used on an autonomous vehicle; for example, the sensor models may include a camera model, a lidar model, a millimeter wave radar model, an ultrasonic radar model, a GNSS/IMU, and the like.
In one possible implementation, a test scenario request is obtained, the test scenario request including at least one map and at least one environment template; and acquiring at least one target traffic flow and a target evaluation index corresponding to the at least one target traffic flow according to the at least one map and the at least one environment template, and generating at least one target test scene.
As an example, the acquired test scene request is parsed to obtain the map and environment template carried in it, which are used as the target map and target environment template; the target map is selected from a map resource library and the target environment template from an environment template library.
Illustratively, a plurality of different types of maps are stored in the map resource library, each of them including, but not limited to, road structures and buildings.
Illustratively, the environment template library comprises a plurality of different environment templates, which are used to describe illumination conditions and weather conditions.
As one example, the target traffic participants, their initial positions and the target path plan are obtained, and the target traffic flow is generated according to preset traffic behavior criteria.
For example, the target traffic participant may be an interfering vehicle or pedestrian, the target path plan may be a specific movement path of the target traffic participant from an initial position in the target map, and the preset traffic behavior criteria may include that the target virtual host vehicle stops driving when the traffic signal is red light, continues driving when the traffic signal is green light, and so on.
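A minimal sketch of such a traffic flow, using the red/green light example above as the preset behavior criterion; all names and coordinates are illustrative.

```python
# Illustrative target traffic flow: participants with initial positions and path plans,
# moved according to a simple preset traffic behavior criterion (stop on red).
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class TrafficParticipant:
    kind: str                               # "vehicle", "pedestrian", ...
    initial_position: Tuple[float, float]
    path_plan: List[Tuple[float, float]]    # waypoints in the target map


def next_action(participant: TrafficParticipant, traffic_signal: str) -> str:
    """Preset behavior criterion: vehicles stop on red and drive on green."""
    if participant.kind == "vehicle" and traffic_signal == "red":
        return "stop"
    return "follow_path"


flow = [TrafficParticipant("vehicle", (0.0, 0.0), [(0.0, 0.0), (50.0, 0.0)]),
        TrafficParticipant("pedestrian", (10.0, 3.0), [(10.0, 3.0), (10.0, -3.0)])]
print([next_action(p, "red") for p in flow])   # ['stop', 'follow_path']
```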
As another example, the behavior logic of the target traffic flow is obtained, and a random traffic flow is generated from it through artificial intelligence.
As one example, an evaluation request for a target traffic flow is obtained, and a target evaluation index for the evaluation request is determined.
The evaluation request of the obtained target traffic flow is analyzed to obtain an evaluation index carried by the evaluation request, the evaluation index is determined to be a target evaluation index, and the behavior of the target virtual host vehicle in the target traffic flow of the target test scene is evaluated according to the target evaluation index to obtain the test result of the target test object in the target test scene.
As an example, the evaluation index is classified into a logical constraint index and a numerical constraint index.
The logic constraint type index comprises logic conditions such as red light running and solid line pressing, trigger data are recorded when the virtual host vehicle violates the logic constraint type index in the simulation test process, and the test result of the test task is marked as failure.
The numerical constraint type indexes comprise numerical monitoring of the speed, the acceleration and the like of the virtual host vehicle, a limiting range of each index needs to be set in advance, when the parameter numerical value of the virtual host vehicle exceeds the limiting range (such as overspeed and the like) in the simulation test process, index event triggering data are recorded, and the test result of the test task is marked as failure.
S203, according to the target test object, a corresponding test task of the target virtual host vehicle in a different target test scene is generated for each test task.
In one possible implementation manner, each test task includes a target test object, a target virtual host vehicle and a target test scene, where the plurality of target test scenes correspond to a plurality of test tasks, and the number of target test scenes is equal to the number of test tasks.
S204, creating a test job according to a plurality of test tasks with the same target test objects.
In one possible implementation, multiple test tasks with the same target test object are combined into a test job, and the multiple test tasks in the test job can perform automatic driving simulation test in batches.
In the technical scheme provided by the application, a plurality of test scenes can be selected at the same time for the same test object, a plurality of test tasks can be generated rapidly and combined into a test job, and the test tasks in the test job can be executed in batches, which effectively improves the creation efficiency of test jobs as well as the test efficiency of the automatic driving simulation test.
Fig. 3 is a flowchart of a method for creating a virtual host vehicle according to an embodiment of the present application. As shown in fig. 3, the method includes at least S301 to S307.
S301, selecting a target vehicle model from a vehicle model library based on a vehicle model selection instruction.
In one possible implementation, the vehicle model selection instructions are used to indicate a target vehicle model.
As an example, after receiving a vehicle model selection instruction sent by a user through a man-machine interaction manner, the vehicle model selection instruction is parsed, a target vehicle model indicated by the vehicle model selection instruction is obtained, and the target vehicle model is selected from a vehicle model library.
In one possible implementation, a plurality of different types of vehicle models are stored in a vehicle model library.
In one possible implementation, the vehicle model may be a three-dimensional (3D) model of the vehicle used in the simulator, or may be a 3D model of the vehicle selected from a library of vehicle models, and the vehicle model is mainly used to describe information such as appearance style, size, and wheelbase of the vehicle.
In one possible implementation, if there is no actual desired vehicle model in the vehicle 3D model or the vehicle model library used by the simulation simulator, a new vehicle model may be created by adjusting information such as appearance style, size, and wheelbase.
S302, selecting a target vehicle dynamics model from a vehicle dynamics model library based on the vehicle dynamics model selection instruction.
In one possible implementation, the vehicle dynamics model selection instruction is used to indicate the target vehicle dynamics model.
As an example, upon receiving a vehicle dynamics model selection instruction sent by a user through a man-machine interaction manner, the vehicle dynamics model selection instruction is parsed, a target vehicle dynamics model indicated by the vehicle dynamics model selection instruction is obtained, and the target vehicle dynamics model is selected from a vehicle dynamics model library.
In one possible implementation, a plurality of different types of vehicle dynamics models are stored in a vehicle dynamics model library.
In one possible implementation, the vehicle dynamics model may be a vehicle dynamics model used in a simulation simulator, or may be a vehicle dynamics model selected from a library of vehicle dynamics models. The vehicle dynamics model is mainly used for describing running of the vehicle under different steering angles and pedal forces through power torque simulation, transmission simulation, suspension simulation, tire simulation and the like, and controlling the virtual host vehicle to make motion performance which highly accords with the real vehicle on the real road under the test scene.
In one possible implementation, if there is no actual desired vehicle dynamics model in the vehicle dynamics model or vehicle dynamics model library used by the simulation simulator, a new vehicle dynamics model may be created in the form of configuration parameters.
S303, selecting a target sensor model from a sensor model library based on the sensor model selection instruction.
In one possible implementation, the sensor model selection instruction is used to indicate the target sensor model.
As an example, after receiving a sensor model selection instruction sent by a user through a man-machine interaction manner, the sensor model selection instruction is parsed, a target sensor model indicated by the sensor model selection instruction is obtained, and the target sensor model is selected from a sensor model library.
In one possible implementation, the sensor model may be a usual sensor for an autonomous vehicle, for example, the sensor model may include a camera model, a lidar model, a millimeter wave radar model, an ultrasonic radar model, a GNSS/IMU, and the like.
In one possible implementation, a plurality of different types of sensor models are stored in a sensor model library.
In one possible implementation, the sensor model can simulate, with high fidelity, the parameters of the sensors mounted on a real vehicle through physical modeling, artificial intelligence (AI) processing and other means, realizing a high-quality reproduction of the input and output of the real sensor.
S304, based on the assembly parameter configuration instruction, the assembly parameters of the target sensor model are configured.
In one possible implementation, the fitting parameter configuration instructions are used to indicate fitting parameters of the target sensor model.
As an example, after receiving an assembly parameter configuration instruction sent by a user through a man-machine interaction mode, analyzing the assembly parameter configuration instruction to obtain an assembly parameter of a target sensor model indicated by the assembly parameter configuration instruction, and performing parameter configuration for the target sensor model according to the assembly parameter of the target sensor model.
In one possible implementation, the assembly parameters of the target sensor model may include installation parameters such as an installation position and an installation posture of the target sensor model on the target vehicle model.
As an example, the installation parameters such as the installation position and the installation posture of the target sensor model may be determined according to the structural design of the target vehicle model.
S305, determining whether to continue adding the sensor model.
If yes, executing S303 to S304;
if not, S306 is performed.
S306, creating a virtual host vehicle according to the selected target vehicle model, the target vehicle dynamics model, the target sensor model and the assembly parameters of the target sensor model.
S307, the created virtual host vehicle is stored in the virtual host garage.
In one possible implementation manner, when the automatic driving simulation test is performed, the virtual host vehicle meeting the test requirement can be directly selected from the virtual host garage as a target virtual host vehicle, and the object to be tested is carried for testing.
According to the technical scheme, the distributed creation of the virtual host vehicle is supported, the creation efficiency of the virtual host vehicle is improved, and meanwhile, the test efficiency of the automatic driving simulation test is improved.
Fig. 4 is a flowchart of a method for creating a test scenario according to an embodiment of the present application. As shown in fig. 4, the method includes at least S401 to S407.
S401, selecting a target map from a map repository based on the map selection instruction.
In one possible implementation, the map selection instruction is used to indicate a target map.
As an example, after receiving a map selection instruction sent by a user through a man-machine interaction mode, the map selection instruction is parsed, a target map indicated by the map selection instruction is obtained, and the target map is selected from a map resource library.
In one possible implementation, a plurality of different types of maps are stored in a map repository, each of the plurality of maps in the map repository including, but not limited to, road structures and buildings.
In one possible implementation manner, if the map stored in the map resource library cannot meet the conditions of the actual automatic driving simulation test, a new map meeting the test requirements can be created according to the actual requirements, and the created new map is stored in the map resource library.
S402, selecting a target environment template from an environment template library based on the environment template selection instruction.
In one possible implementation, the environment template selection instruction is used to indicate a target environment template.
As an example, after receiving an environment template selection instruction sent by a user through a man-machine interaction mode, analyzing the environment template selection instruction to obtain a target environment template indicated by the environment template selection instruction, and selecting the target environment template from an environment template library.
In one possible implementation, the environment template library includes a plurality of different kinds of environment templates.
In one possible implementation, an environmental template is used to describe lighting conditions as well as weather conditions.
S403, configuring the dynamic element based on the dynamic element configuration instruction.
In one possible implementation, the dynamic element configuration instruction is used to indicate a target dynamic element.
As an example, after receiving a dynamic element configuration instruction sent by a user through a man-machine interaction mode, analyzing the dynamic element configuration instruction to obtain a target dynamic element indicated by the dynamic element configuration instruction, and configuring the dynamic element according to the target dynamic element indicated by the dynamic element configuration instruction.
In one possible implementation, dynamic elements in a test scene may include traffic participants, such as vehicles, pedestrians, and animals.
In one possible implementation, configuring dynamic elements in a test scenario supports two modes.
As an example, the dynamic element configuration instruction indicates a plurality of target dynamic elements and, for each target dynamic element, information such as its initial position, path plan, specific behaviors and the trigger condition of each specific behavior; the target dynamic elements are then added one by one based on the dynamic element configuration instruction, and the initial position, path plan, specific behaviors and trigger conditions of each target dynamic element are set.
As another example, dynamic elements are generated automatically in random batches according to a behavior logic plan, and their behavior in the test scene is decided under AI control.
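For the first mode, a configuration entry for one dynamic element might be sketched as follows; the field names and the trigger type are assumptions made for illustration.

```python
# Hypothetical configuration of one dynamic element with a triggered behavior;
# the schema and the trigger semantics are illustrative only.
dynamic_element = {
    "type": "pedestrian",
    "initial_position": (12.0, -3.5),
    "path_plan": [(12.0, -3.5), (12.0, 3.5)],            # cross the road
    "behaviors": [
        {"action": "start_walking",
         "trigger": {"kind": "host_distance_below", "meters": 20.0}},
    ],
}


def behavior_triggered(trigger: dict, host_distance: float) -> bool:
    """Evaluate the single trigger kind used in this sketch."""
    return trigger["kind"] == "host_distance_below" and host_distance < trigger["meters"]


print(behavior_triggered(dynamic_element["behaviors"][0]["trigger"], host_distance=15.0))  # True
```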
S404, configuring the static element based on the static element configuration instruction.
In one possible implementation, the static element configuration instruction is used to indicate a target static element.
As an example, after receiving a static element configuration instruction sent by a user through a man-machine interaction mode, analyzing the static element configuration instruction to obtain a target static element indicated by the static element configuration instruction, and configuring the static element according to the target static element indicated by the static element configuration instruction.
In one possible implementation, the static elements in the test scene may include non-moving objects such as obstacles, road signs, traffic lights, lifters, and the like.
As an example, if the static element configuration instruction indicates a plurality of target static elements and, for each one, information such as its position, specific actions, state timing control and the trigger conditions of state changes, the target static elements are added one by one based on the static element configuration instruction, and the position, specific actions, state timing control and state-change trigger conditions of each target static element are set.
S405, configuring an evaluation index based on the evaluation index configuration instruction.
In one possible implementation, the evaluation index configuration instruction is used to indicate a target evaluation index.
As an example, after receiving an evaluation index configuration instruction sent by a user through a man-machine interaction manner, analyzing the evaluation index configuration instruction to obtain a target evaluation index indicated by the evaluation index configuration instruction, and configuring the evaluation index according to the target evaluation index indicated by the evaluation index configuration instruction.
In one possible implementation manner, the test result of the object to be tested in the test scene can be obtained through various evaluation indexes in the test scene.
As an example, the evaluation index is classified into a logical constraint index and a numerical constraint index.
The logic constraint type index comprises logic conditions such as red light running and solid line pressing, trigger data are recorded when the virtual host vehicle violates the logic constraint type index in the simulation test process, and the test result of the test task is marked as failure.
The numerical constraint type indexes comprise numerical monitoring of the speed, the acceleration and the like of the virtual host vehicle, a limiting range of each index needs to be set in advance, when the parameter numerical value of the virtual host vehicle exceeds the limiting range (such as overspeed and the like) in the simulation test process, index event triggering data are recorded, and the test result of the test task is marked as failure.
S406, creating a test scene according to the target map, the target environment template, the configured dynamic elements, static elements and the evaluation index.
S407, storing the created test scene into a test scene library.
In one possible implementation manner, when the automatic driving simulation test is performed, a test scene meeting the test requirement can be directly selected from the test scene library as a target test scene, so that the target virtual host vehicle carrying the target test object is tested in the target test scene.
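The assembly of S406 and the storage of S407 can be pictured with the short sketch below; the JSON-file layout of the scene library and the function names are assumptions rather than the storage format actually used.

```python
import json
from pathlib import Path

# Illustrative scene assembly and storage; the library layout is an assumption.
def create_test_scene(target_map, environment_template, dynamic_elements,
                      static_elements, evaluation_indexes) -> dict:
    """Assemble a test scene from the selected map, environment template and configured elements (S406)."""
    return {
        "map": target_map,
        "environment": environment_template,
        "dynamic_elements": dynamic_elements,
        "static_elements": static_elements,
        "evaluation_indexes": evaluation_indexes,
    }

def save_to_scene_library(scene: dict, name: str, library_dir: str = "scene_library") -> Path:
    """Store the created scene in the test scene library (S407) for later selection as a target test scene."""
    path = Path(library_dir)
    path.mkdir(exist_ok=True)
    target = path / f"{name}.json"
    target.write_text(json.dumps(scene, default=str, indent=2))
    return target

# Usage: create a rainy urban-intersection scene and store it in the library.
scene = create_test_scene("urban_intersection", "rainy_day", [], [], ["ran_red_light", "speed"])
save_to_scene_library(scene, "urban_intersection_rainy")
```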
According to the technical scheme, the distributed creation of the test scene is supported, the creation efficiency of the test scene is improved, and meanwhile, the test efficiency of the automatic driving simulation test is improved.
Fig. 5 is a flowchart of a test job screening method according to an embodiment of the present application. As shown in fig. 5, the method includes at least S501 to S503.
S501, determining screening conditions based on the screening condition indication instruction.
In one possible implementation, the screening condition indication instruction is used to indicate a screening condition of the test job.
As an example, after a screening condition indication instruction sent by a user through man-machine interaction is received, the screening condition indication instruction is parsed to obtain the target screening condition it indicates, and the plurality of test jobs are screened according to that target screening condition.
In one possible implementation, the screening conditions may include one or more of job status, test results, test time, and test name.
As one example, the job status may include not running, running, and completed.
Illustratively, not running means the test job has not been run since it was created; by selecting this option the tester can quickly screen out test jobs/tasks that have been created but not yet tested.
Illustratively, running indicates that the user has selected to run the test job but it has not yet finished; test jobs in both the queued and running states may be screened out, so through this option a tester can screen out the test jobs currently being tested and focus on their execution progress.
Illustratively, completed indicates that all test tasks in the test job have finished executing; through this option the tester can quickly screen out test jobs whose testing is complete, and can identify defects in the automatic driving algorithm by analyzing the test results and process data so as to improve the system.
It should be noted that the three job status options support multiple selection; when multiple options are selected, the union of the test jobs matching the selected options is taken.
As one example, the test result may include pass, fail, and invalid.
Illustratively, pass indicates that all test tasks in the test job have passed the test; a test job in this state does not require much attention from the tester.
Illustratively, fail indicates that the test job contains a test task that failed the test; the tester needs to pay attention to such test tasks and analyze the problem to improve the algorithm of the object to be tested.
Illustratively, invalid indicates that the test job contains test tasks that failed to run; the tester needs to investigate why such test tasks failed to run.
It should be noted that the three test result options support multiple selection; when multiple options are selected, the union of the test jobs matching the selected options is taken.
As one example, the test time may include today, this week, this month, and all.
For example, when the selected test time is today, only test jobs changed today are displayed, which helps the tester review the test tasks modified and tested on the current day.
For example, when the selected test time is this week, only test jobs changed within this week are displayed.
For example, when the selected test time is this month, only test jobs changed within this month are displayed.
Illustratively, when the selected test time is all, all test jobs are displayed.
It should be noted that the test time options only support single selection.
As one example, the test name in the screening conditions screens, in the form of a fuzzy search, test jobs whose names contain the entered text.
As an example, the screening conditions may be completed, failed, and today; the test jobs are then screened according to the set screening conditions.
S502, screening a plurality of test jobs according to screening conditions to obtain a target test job list.
In one possible implementation, when the screening condition includes a plurality of screening dimensions, the screening content is configured for each dimension, and the intersection of the results across the selected dimensions is taken and displayed in the target test job list.
As an example, when the screening condition is completed, failed, and today, each test job in the generated target test job list is a test job that was changed today, whose job status is completed, and whose test result is failed.
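The screening behaviour described above (union within one dimension, intersection across dimensions, single-select test time, fuzzy name search) could be implemented along the following lines; the job record fields (status, result, changed_on, name) are assumptions for illustration.

```python
from datetime import date, timedelta
from typing import Iterable, List, Optional

# Illustrative screening of test jobs; the job record fields are assumptions.
def screen_jobs(jobs: Iterable[dict],
                statuses: Optional[List[str]] = None,   # multi-select: union within this dimension
                results: Optional[List[str]] = None,    # multi-select: union within this dimension
                test_time: str = "all",                 # single-select: today / this_week / this_month / all
                name_keyword: str = "") -> List[dict]:
    """Return the target test job list: union within each dimension, intersection across dimensions."""
    today = date.today()
    windows = {"today": today,
               "this_week": today - timedelta(days=today.weekday()),
               "this_month": today.replace(day=1)}
    selected = []
    for job in jobs:
        if statuses and job["status"] not in statuses:
            continue
        if results and job["result"] not in results:
            continue
        if test_time != "all" and job["changed_on"] < windows[test_time]:  # changed_on is a datetime.date
            continue
        if name_keyword and name_keyword not in job["name"]:               # fuzzy (substring) name search
            continue
        selected.append(job)
    return selected

# Usage matching the example above: completed, failed, today.
# target_list = screen_jobs(all_jobs, statuses=["completed"], results=["failed"], test_time="today")
```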
S503, displaying a target test job list.
In one possible implementation, after the test jobs have finished running, the tester may find the desired test jobs according to the screening conditions, analyze each test job against the displayed target test job list, and optimize the algorithm.
As an example, when the test result of the test job is passing, it indicates that the test object successfully passes the test scene, and the tester may not pay attention to the process data of the test job in the test process.
As another example, when the test result of the test job is failure, it means that the object to be tested cannot pass the test when facing the test scene. The tester can enter the test operation and task details, check the test process data, analyze the failure cause and optimize the algorithm of the object to be tested.
As yet another example, when the test result of the test job is invalid, it indicates that the object to be tested is abnormally operated in the test scenario. The tester needs to enter test operation and task details, analyze the reason of operation failure through the log, and repair the test environment so as to continue testing.
According to this technical solution, the screening conditions of test jobs can be determined from dimensions such as the job status, the test result, and the test time, and the target test jobs can be screened out from a large number of test jobs, so that the test jobs requiring key analysis can be located directly and the management efficiency of test jobs is improved.
Fig. 6 is another flowchart of a test job screening method according to an embodiment of the present application. As shown in fig. 6, the method includes at least S601 to S605.
S601, determining screening conditions based on the screening condition indication instruction.
S602, screening a plurality of test jobs according to the screening conditions to obtain a target test job list.
S603, displaying a target test job list.
It should be noted that S601 to S603 may refer to S501 to S503, and will not be described here.
S604, selecting a target test job from the target test job list based on the test job selection instruction.
In one possible implementation, the test job selection instruction is used to indicate a target test job that requires an autopilot simulation test again.
As an example, after a test job selection instruction sent by a user through man-machine interaction is received, the test job selection instruction is parsed to obtain the target test job it indicates, and the automatic driving simulation test is performed again on that target test job.
As an example, when the screening condition set in S601 is that the job status is not running, all test jobs in the target test job list displayed in S603 are test jobs in the not-running status, and the tester may select a plurality of test jobs in batch from the displayed target test job list as target test jobs.
As an example, when the screening condition set in S601 is that the test result is failed, all test jobs in the target test job list displayed in S603 are test jobs whose test result is failed; after the tester performs problem analysis and algorithm optimization, a plurality of test jobs are selected in batch from the displayed target test job list as target test jobs.
As an example, when the screening condition set in S601 is that the test result is invalid, all test jobs in the target test job list displayed in S603 are test jobs whose test result is invalid; after repairing the test environment, the tester selects a plurality of test jobs in batch from the displayed target test job list as target test jobs.
S605, performing automatic driving simulation test according to the target test operation.
According to this technical solution, after the target test job list is obtained by screening under multi-dimensional screening conditions, the test jobs that need to undergo the automatic driving simulation test again can be selected from the screened target test job list according to actual needs and retested, so that key test jobs can be quickly retested and the test efficiency of the automatic driving simulation test is improved.
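A sketch of the batch re-test in S604 and S605, under the assumption of a run_simulation_test callback and the same hypothetical job records used above, might look as follows.

```python
from typing import Callable, List

# Illustrative batch re-test of selected target test jobs; all names are assumptions.
def rerun_selected_jobs(target_list: List[dict], selected_indices: List[int],
                        run_simulation_test: Callable[[dict], str]) -> None:
    """Re-run the automatic driving simulation test for the test jobs selected in batch (S604/S605)."""
    for i in selected_indices:
        job = target_list[i]
        for task in job["tasks"]:                 # every test task in the selected test job runs again
            task["result"] = run_simulation_test(task)
        job["status"] = "completed"
        job["result"] = "failed" if any(t["result"] == "failed" for t in job["tasks"]) else "passed"

# Usage: retest the first and third jobs in the screened target test job list.
# rerun_selected_jobs(target_list, [0, 2], run_simulation_test)
```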
Fig. 7 is a block diagram of an autopilot simulation test apparatus according to an embodiment of the present application; for convenience of explanation, only the portions related to the embodiment of the present application are shown. Referring to fig. 7, the autopilot simulation test apparatus 700 may include a creation module 701 and a test module 702.
In one implementation, the apparatus 700 may be used to implement the method illustrated in FIG. 1 described above. For example, the creation module 701 is used to implement S101, and the test module 702 is used to implement S102.
In another possible implementation, the apparatus 700 further includes an acquisition module and a determination module, where the apparatus 700 in this implementation may be used to implement the method illustrated in fig. 2 described above. For example, the acquisition module is used to implement S201, the determination module is used to implement S202, and the creation module 701 is used to implement S203 and S204.
In yet another possible implementation, the apparatus 700 further includes a selecting module, a configuring module, a determining module, and a storing module, where the apparatus 700 in this implementation may be used to implement the method shown in fig. 3 and described above. For example, the selection module is used to implement S301, S302, and S303, the configuration module is used to implement S304, the determination module is used to implement S305, the creation module 701 is used to implement S306, and the storage module is used to implement S307.
In yet another possible implementation, the apparatus 700 further includes a selection module, a configuration module, and a storage module, where the apparatus 700 in this implementation may be used to implement the method illustrated in fig. 4 described above. For example, the selection module is used to implement S401 and S402, the configuration module is used to implement S403, S404, and S405, the creation module 701 is used to implement S406, and the storage module is used to implement S407.
In yet another possible implementation, the apparatus 700 further includes a determining module, a screening module, and a display module, where the apparatus 700 in this implementation may be used to implement the method illustrated in fig. 5 above. For example, the determining module is used for implementing S501, the screening module is used for implementing S502, and the display module is used for implementing S503.
In yet another possible implementation, the apparatus 700 further includes a determining module, a screening module, a display module, and a selecting module, where the apparatus 700 in this implementation may be used to implement the method shown in fig. 6 and described above. For example, the determining module is used for implementing S601, the screening module is used for implementing S602, the display module is used for implementing S603, the selecting module is used for implementing S604, and the testing module 702 is used for implementing S605.
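The module split of apparatus 700 can be illustrated with the hypothetical Python skeleton below; the class and method names are assumptions, and only the two always-present modules (creation and test) are shown.

```python
from typing import Callable, List

# Illustrative skeleton of the test apparatus 700; class and method names are assumptions.
class CreationModule:
    def create_test_job(self, test_object: str, tasks: List[dict]) -> dict:
        """Create a test job containing multiple test tasks that share the same test object (S101)."""
        return {"test_object": test_object, "tasks": tasks, "status": "not_running"}

class TestModule:
    def run_test_job(self, job: dict, run_simulation_test: Callable[[dict], str]) -> None:
        """Complete the automatic driving simulation test of the test object in every test task (S102)."""
        for task in job["tasks"]:
            task["result"] = run_simulation_test(task)
        job["status"] = "completed"

class AutopilotSimulationTestApparatus:
    def __init__(self) -> None:
        self.creation_module = CreationModule()    # corresponds to creation module 701
        self.test_module = TestModule()            # corresponds to test module 702
```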
It should be noted that, because the content of information interaction and execution process between the above devices/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein again.
Fig. 8 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 8, the terminal device 8 of this embodiment includes: at least one processor 80 (only one shown in fig. 8), a memory 81 and a computer program 82 stored in the memory 81 and executable on the at least one processor 80, the processor 80 implementing the steps in any of the method embodiments described above when executing the computer program 82.
The terminal device 8 may be a computing device such as a desktop computer, a notebook computer, a palm computer, or a cloud server. The terminal device may include, but is not limited to, a processor 80 and a memory 81. It will be appreciated by those skilled in the art that fig. 8 is merely an example of the terminal device 8 and is not limiting of the terminal device 8, which may include more or fewer components than shown, may combine certain components, or may have different components; for example, it may also include input-output devices, network access devices, and the like.
The processor 80 may be a central processing unit (Central Processing Unit, CPU), and the processor 80 may also be another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 81 may in some embodiments be an internal storage unit of the terminal device 8, such as a hard disk or a memory of the terminal device 8. The memory 81 may in other embodiments also be an external storage device of the terminal device 8, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the terminal device 8. Further, the memory 81 may also include both an internal storage unit and an external storage device of the terminal device 8. The memory 81 is used for storing an operating system, application programs, boot loader (BootLoader), data, other programs etc., such as program codes of the computer program etc. The memory 81 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
The embodiment of the application also provides a network device, which comprises: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor, the processor implementing the steps of any of the method embodiments described above when the computer program is executed.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements steps that may implement the various method embodiments described above.
Embodiments of the present application provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to perform steps that may be performed in the various method embodiments described above.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application implements all or part of the flow of the method of the above embodiments, and may be implemented by a computer program to instruct related hardware, where the computer program may be stored in a computer readable storage medium, where the computer program, when executed by a processor, may implement the steps of each of the method embodiments described above. Wherein the computer program comprises computer program code which may be in source code form, object code form, executable file or some intermediate form etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a photographing device/terminal apparatus, recording medium, computer Memory, read-Only Memory (ROM), random access Memory (RAM, random Access Memory), electrical carrier signals, telecommunications signals, and software distribution media. Such as a U-disk, removable hard disk, magnetic or optical disk, etc. In some jurisdictions, computer readable media may not be electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not described or illustrated in detail in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other manners. For example, the apparatus/network device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (10)

1. An automated driving simulation test method, the method comprising:
creating a test job, wherein the test job comprises a plurality of test tasks with the same test objects, the test tasks comprise test objects and a test environment, the test environment comprises a virtual host vehicle and a test scene which are decoupled mutually, the virtual host vehicle is used for carrying the test objects to carry out automatic driving simulation test in the test scene, and the test objects support an automatic driving control algorithm;
And completing the automatic driving simulation test of the test object in the plurality of test tasks according to the test operation.
2. The method of claim 1, further comprising creating the plurality of test tasks, the creating the plurality of test tasks comprising:
acquiring a test request, wherein the test request comprises a target test object;
determining a corresponding target virtual host vehicle and a plurality of target test scenes based on the target test object, wherein the plurality of target test scenes are generated based on corresponding configuration of any one model of a vehicle model, a sensor model or a dynamics model of the target virtual host vehicle;
and generating, according to the target test object, a test task of the target virtual host vehicle in each different target test scene.
3. The method of claim 2, further comprising creating the target virtual host vehicle, the creating the target virtual host vehicle comprising:
obtaining a virtual host vehicle request, wherein the virtual host vehicle request comprises at least one target vehicle model and at least one target dynamics model or target sensor model;
And acquiring key position information of the target vehicle model, determining the installation mode and pose of the at least one target dynamics model or target sensor model, and generating the target virtual host vehicle.
4. The method of claim 2, further comprising creating the target test scenario, the creating the target test scenario comprising:
acquiring a test scene request, wherein the test scene request comprises at least one map and at least one environment template;
and obtaining at least one target traffic flow and at least one target evaluation index corresponding to the target traffic flow according to the at least one map and the at least one environment template, and generating at least one target test scene.
5. The method of claim 4, wherein the acquiring at least one target traffic flow comprises:
and acquiring the initial positions of the target traffic participants and the target path planning, and generating the target traffic flow according to a preset traffic behavior criterion.
6. The method of claim 4, wherein the obtaining the target evaluation index corresponding to the at least one target traffic flow comprises:
And acquiring an evaluation request of the target traffic flow, and determining a target evaluation index of the evaluation request.
7. The method according to claim 1, wherein the method further comprises:
recording the operation state, the test result and the test time of each test task in the plurality of test tasks, wherein the operation state comprises non-operation, operation and completion, and the test result comprises pass, fail and invalidation.
8. The method of claim 7, wherein the method further comprises:
determining screening conditions based on a screening condition indication instruction, wherein the screening conditions comprise one or more of the operation state, the test result and the test time;
screening a plurality of test operations according to the screening conditions to obtain a target test operation list;
and displaying the target test job list.
9. The method of claim 8, wherein the method further comprises:
selecting a target test job from the target test job list based on a test job selection instruction;
and performing automatic driving simulation test according to the target test operation.
10. An autopilot simulation test apparatus, the apparatus comprising:
the system comprises a creation module, a test operation module and a control module, wherein the creation module is used for creating a test operation, the test operation comprises a plurality of test tasks with the same test objects, the test tasks comprise test objects and a test environment, the test environment comprises a virtual host vehicle and a test scene which are decoupled mutually, the virtual host vehicle is used for carrying the test objects to carry out automatic driving simulation test in the test scene, and the test objects support an automatic driving control algorithm;
and the test module is used for completing the automatic driving simulation test of the test object in the plurality of test tasks according to the test operation.
CN202310188574.5A 2023-03-01 2023-03-01 Automatic driving simulation test method and test device Pending CN116432392A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310188574.5A CN116432392A (en) 2023-03-01 2023-03-01 Automatic driving simulation test method and test device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310188574.5A CN116432392A (en) 2023-03-01 2023-03-01 Automatic driving simulation test method and test device

Publications (1)

Publication Number Publication Date
CN116432392A true CN116432392A (en) 2023-07-14

Family

ID=87093294

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310188574.5A Pending CN116432392A (en) 2023-03-01 2023-03-01 Automatic driving simulation test method and test device

Country Status (1)

Country Link
CN (1) CN116432392A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118093404A (en) * 2024-02-29 2024-05-28 重庆赛力斯新能源汽车设计院有限公司 Driving decision function testing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination