WO2023123130A1 - Method and apparatus for autonomous driving system, electronic device and medium - Google Patents
Method and apparatus for autonomous driving system, electronic device and medium
- Publication number
- WO2023123130A1 (PCT/CN2021/142708)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- driving
- scenarios
- score
- candidate
- scene
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Preventing errors by testing or debugging software
Definitions
- The present invention relates to the technical field of intelligent transportation, and in particular to methods, apparatuses, electronic devices, and media for autonomous driving systems.
- Autonomous driving technology uses various types of sensors installed on the vehicle, such as visual cameras, ultrasonic radar, and lidar, to capture information about the vehicle's environment, performs perception, prediction, and planning through artificial intelligence (AI) algorithms, and then controls the vehicle. Because AI algorithms lack explainability and can behave unpredictably, an autonomous driving system needs to be thoroughly tested and trained on a large number of driving scenarios, especially driving scenarios with safety risks, to ensure that its perception results and subsequent control behaviors are correct. However, existing driving scenarios can hardly cover all possible situations, which leaves the autonomous driving system exposed to high safety risks when it encounters scenarios it has never seen before.
- Embodiments of the present disclosure provide a solution for an automatic driving system, which can provide rich and effective driving scenarios to train the automatic driving system.
- a method for an automatic driving system comprising: generating a candidate driving scenario from a plurality of driving scenarios in a set of driving scenarios; determining a test score for the candidate driving scenario by testing the candidate driving scenario in the automatic driving system; and updating the set of driving scenarios with the candidate driving scenario if it is determined that the test score exceeds a score threshold.
- an apparatus for an automatic driving system comprising: a generation unit configured to generate candidate driving scenarios from a plurality of driving scenarios in a set of driving scenarios; a scoring unit configured to determine a test score for a candidate driving scenario by testing the candidate driving scenario in the automatic driving system; and an updating unit configured to update the set of driving scenarios with the candidate driving scenario if it is determined that the test score exceeds a score threshold.
- an electronic device comprising: at least one processing unit; and at least one memory, the at least one memory being coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit, the instructions, when executed by the at least one processing unit, causing the electronic device to perform the method according to the first aspect of the present disclosure.
- a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the method according to the first aspect of the present disclosure is implemented.
- a computer program product comprising computer executable instructions, wherein the computer executable instructions, when executed by a processor, implement the method according to the first aspect of the present disclosure.
- FIG. 1 shows a schematic diagram of an example environment in which various embodiments according to the present disclosure can be implemented;
- FIG. 2 shows a schematic block diagram of an example system for testing an automated driving system according to an embodiment of the present disclosure;
- FIG. 3 shows a schematic flowchart of an example process for an automated driving system according to an embodiment of the present disclosure;
- FIG. 4 shows a schematic flowchart of an example process of generating candidate driving scenarios according to an embodiment of the present disclosure;
- FIG. 5 shows a schematic flowchart of an example process of evaluating a driving scenario according to an embodiment of the present disclosure;
- FIG. 6 shows a schematic diagram of an example perception model according to an embodiment of the present disclosure;
- FIG. 7 shows a schematic block diagram of an apparatus for an automated driving system according to an embodiment of the present disclosure; and
- FIG. 8 shows a schematic block diagram of an example device that may be used to implement embodiments of the present disclosure.
- The autonomous driving system uses various types of sensors to perceive the surrounding environment and combines them with AI technology to control the vehicle. The perception and control of an autonomous driving system therefore depend on the specific driving scenario, so a large number of driving scenarios are needed to test and train autonomous driving systems.
- One traditional approach is actual road testing: for example, setting up a self-driving fleet with a large number of test drivers and conducting driving tests in a licensed real traffic environment. Although the driving scenes are real, once the road mileage reaches a certain level the scenes mostly repeat, and new driving scenes are rarely encountered. This leads to high safety risks for autonomous driving systems when they encounter uncommon driving scenarios.
- Another traditional method is to manually design driving scenarios in a simulation environment for testing. For example, by configuring the parameters of the vehicle's own sensors and the surrounding environment, the behavior or performance of the autonomous driving system in a specific environment can be tested.
- However, this method relies on manual or automatic parameter configuration to form scenes, so the scenes are still not rich enough to achieve relatively comprehensive scene coverage.
- the present disclosure provides a method for an automatic driving system.
- driving scenarios in an existing driving scenario set are used to generate candidate driving scenarios.
- the driving scenarios in the driving scenario set are, for example, the driving scenarios that have been confirmed or evaluated as having relatively high safety risks.
- the generated candidate driving scenarios can be applied to the automatic driving system for testing in a simulation environment, so as to determine the test scores of the candidate driving scenarios. Test scores measure how good a driving scenario is. If the test score exceeds the score threshold, the candidate driving scene is considered to be an "excellent" driving scene, and the candidate driving scene can be used to update the existing driving scene set.
- the method described above can be performed iteratively, thereby continuously updating the set of driving scenarios that can be used to test and train the autonomous driving system.
- the generated driving scene is often a scene not encountered in the road test, or a scene not imagined in the manual simulation.
- the coverage and richness of driving scenarios are improved, which is conducive to the discovery of defects or loopholes in the automatic driving system, thereby indirectly improving the accuracy, safety and robustness of the automatic driving system.
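- As an illustrative aid only, the iterative generate-test-update flow described above can be sketched in Python; the helper callables, the threshold value, and the iteration count below are hypothetical placeholders, not values from the disclosure:

```python
import random

def evolve_scenario_set(scenario_set, generate_candidate, test_and_score,
                        score_threshold=0.8, iterations=100):
    """Sketch of the generate-test-update loop: pick parent scenarios, evolve a
    candidate, test it against the autonomous driving system in simulation, and
    keep it in the set if its test score exceeds the threshold."""
    for _ in range(iterations):
        parents = random.sample(scenario_set, 2)    # two parent scenarios
        candidate = generate_candidate(parents)     # e.g. crossover + mutation
        test_score = test_and_score(candidate)      # run in the simulator and score
        if test_score > score_threshold:
            scenario_set.append(candidate)          # update the scenario set
    return scenario_set
```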
- FIG. 1 shows a schematic diagram of an example environment 100 in which various embodiments according to the present disclosure can be implemented.
- Environment 100 includes simulator 110 and automated driving system 120 .
- Simulator 110 may generate and provide driving scenario 132 to automated driving system 120 .
- Driving scenarios 132 simulate real-world environments.
- The driving scenario 132 may include, but is not limited to, scene features such as road topology, time dynamics, weather dynamics, road surface state, traffic dynamics, and landscape information.
- Road topology may include, for example, road geometry (e.g., straight roads, curved roads, T-junctions, the presence or absence of entrances or exits, etc.), road width (e.g., number of lanes, lane width, etc.), road gradient (e.g., uphill or downhill, slope size, etc.), traffic light positions, and so on.
- The time dynamics may be the time of day. From the time dynamics, combined with the vehicle's geographic location and date, the intensity and angle of the light and the resulting shadows can be determined.
- Weather dynamics may include weather conditions such as sunny, cloudy, raining, snowing, fog, humidity, and the like.
- weather dynamics and time dynamics may affect the image information or other perception information collected by the vehicle's sensors, thereby affecting the perception behavior of the vehicle.
- The road surface state can include the road surface material and its condition, such as how new or worn it is, aging, and so on.
- the road surface may affect the dynamic model of the vehicle.
- Traffic dynamics include the position, speed, etc. of other vehicles or pedestrians.
- Landscape information includes, for example, roadside buildings, trees, flowers and plants.
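- As a purely illustrative sketch (the class and field names below are assumptions, not taken from the disclosure), the scene features listed above could be represented as a simple record:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DrivingScenario:
    """One driving scenario; each field corresponds to a scene feature above."""
    road_topology: Optional[str] = None   # e.g. "crossroads", "curve", lane count
    time_of_day: Optional[str] = None     # time dynamics, e.g. "early morning"
    weather: Optional[str] = None         # weather dynamics, e.g. "cloudy", "rain"
    road_surface: Optional[str] = None    # material, wear, aging
    traffic: Optional[str] = None         # positions and speeds of other road users
    landscape: Optional[str] = None       # roadside buildings, trees, plants

# Features not specified stay at their defaults, mirroring the "default or empty"
# handling described later in the text.
example = DrivingScenario(road_topology="crossroads", time_of_day="early morning",
                          weather="cloudy", traffic="white truck crossing the road")
```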
- the driving scenario 132 can be implemented by various types of modeling tools (eg, computer graphics engines), and can be configured manually or automatically.
- The automated driving system 120 is tested in response to the received driving scenario 132.
- The automatic driving system 120 can combine the position and attitude information of its own vehicle with its sensor configuration (for example, the types of sensors, such as visual sensors, ultrasonic radar, and lidar, the number of sensors, their installation locations, etc.) to generate various types of images, and perceive the surrounding environment based on the generated images, for example, recognizing surrounding objects and their movements.
- The position of the vehicle can be determined by the global positioning system (GPS), and the attitude of the vehicle can be updated in real time by the on-board inertial measurement unit (IMU).
- the generated images may be input to a perception model of an autonomous driving system.
- the perception model may be, for example, a trained neural network model that can detect or recognize objects in an input image as a result of perception.
- the autonomous driving system 120 relies on the perception results to control the vehicle.
- the automatic driving system 120 may use a decision model such as a neural network model or other models to generate vehicle control commands, such as acceleration, braking, steering, lights, and the like.
- Perceptual model behavior and vehicle control instructions 134 generated by automated driving system 120 in driving scenario 132 may be provided to simulator 110 .
- The simulator 110 can evaluate the performance of the automatic driving system 120 according to the perception model behavior and the vehicle control instructions 134, so as to find defects and loopholes of the automatic driving system 120.
- The perception model behavior and vehicle control instructions 134 can also be analyzed to evaluate the merits of the driving scenario 132. If the automated driving system 120 performs poorly in the driving scenario 132, this indicates that the driving scenario 132 is a valuable driving scenario, since testing with it helps indirectly improve the accuracy, safety, and robustness of the automated driving system 120.
- FIG. 1 shows an example environment 100 for embodiments of the present disclosure, but those skilled in the art will appreciate that embodiments of the present disclosure may also be implemented in environments other than those shown in FIG. 1 .
- Embodiments of the present disclosure may also be implemented within the simulator 110, or as an additional component of the simulator 110, or without a simulator.
- embodiments of the present disclosure provide a system and method based on evolutionary computation.
- Evolutionary computation strategically evolves driving scenarios in a simulation environment (e.g., simulator 110) to generate new driving scenarios that potentially cause safety accidents or traffic violations.
- the generated new driving scenarios are also referred to as candidate driving scenarios.
- the candidate driving scenarios are applied to an automatic driving system (eg, automatic driving system 120 ) for testing.
- Driving scenarios determined to be "excellent" can be used to evolve more driving scenarios.
- FIG. 2 shows a schematic block diagram of an example system 200 for testing an automated driving system according to an embodiment of the present disclosure.
- System 200 includes the simulator 110 and the automated driving system 120. Similar to what was described above with reference to FIG. 1, the simulator 110 may generate candidate driving scenarios 116 and input the candidate driving scenarios 116 to the automated driving system 120 for testing. The test results of the automated driving system 120, including the perception model behavior and the vehicle control commands 134, are fed back to the simulator 110.
- the aforementioned candidate driving scenarios 116 may be generated based on evolutionary computation.
- the simulator 110 includes a collection 112 of driving scenarios, which may also be referred to as a library of driving scenarios.
- the driving scenarios in set 112 may be derived from or initialized based on accident reports, papers, etc. 202 related to autonomous driving.
- Accident reports and papers 202 mention scenarios in which autonomous driving accidents have occurred in the real world or in simulated environments.
- Real or simulated world accident scenarios are modeled to form driving scenarios in simulator 110 and added to collection 112 .
- candidate driving scenarios 116 may be generated from a plurality of driving scenarios in set 112 , for example two driving scenarios.
- A driving scenario is regarded as an individual, and the set of driving scenarios 112 constitutes a population; the characteristics of a driving scenario, such as road topology, weather dynamics, time dynamics, road surface state, traffic dynamics, and landscape information, are regarded as the individual's genes.
- operations such as genetic crossover, inheritance, and mutation are performed on the population composed of driving scenarios to evolve the next generation of driving scenarios as the candidate driving scenarios 116 to be tested.
- the candidate driving scenarios 116 may be provided to the automated driving system 120 for testing to discover defects or vulnerabilities of the automated driving system 120 . Details of generating candidate driving scenarios 116 will be described below with reference to FIG. 4 .
- the automated driving system 120 includes a perception model 122 and a vehicle control unit 124 .
- The perception model 122 may include a computational model, such as a neural network model, which identifies objects in the surrounding environment, such as other vehicles or pedestrians, from images of the vehicle's surroundings (for example, visual camera images, ultrasonic radar images, and lidar images). The vehicle control unit 124 can then use the recognition results to generate vehicle control commands, such as acceleration, braking, steering, lights, and so on.
- the automatic driving system 120 can synthesize visual camera images, ultrasonic radar images, lidar images, etc.
- The perception model 122 perceives the surrounding environment based on the synthesized images.
- the virtual image may also be synthesized by the simulator 110 and provided to the automatic driving system 120 .
- the perception model behavior and vehicle control commands generated by the autonomous driving system 120 during testing may be provided to the scenario evaluation unit 118 of the simulator.
- the scenario evaluation unit 118 may evaluate the corresponding candidate driving scenario based on the perception model behavior and the vehicle control instruction, that is, determine whether the candidate driving scenario 116 is an expected driving scenario. For example, if the automatic driving system 120 has misperceptions that lead to traffic accidents, or the vehicle violates traffic rules, such a driving scene can be considered as an expected driving scene. Details of evaluating the driving scene will be described below with reference to FIGS. 5 and 6 .
- Better driving scenarios can be used to update the set 112 of driving scenarios. The process of evolution, testing, evaluation, and updating can be repeated, so that the set 112 is iteratively updated, so that the set of driving scenarios 112 includes driving scenarios that can effectively improve the performance of the automatic driving system 120 .
- FIG. 3 shows a schematic flowchart of an example process 300 for an automated driving system according to an embodiment of the present disclosure.
- Process 300 may be implemented, for example, in the simulator 110, or in a separate component communicatively coupled with the simulator. For ease of illustration, process 300 is described in conjunction with FIG. 2.
- candidate driving scenarios 116 are generated from a plurality of driving scenarios in set of driving scenarios 112 .
- Before generating candidate scenarios, the set 112 needs to be initialized.
- Set 112 may be initialized by scene modeling from academic papers, accident reports, and the like.
- driving scenarios in which autonomous driving accidents have occurred in the real world or in a simulation environment can be constructed based on academic papers and accident reports.
- a driving scene includes several scene features, such as road topology, temporal dynamics, weather dynamics, road surface state, traffic dynamics, landscape information, and so on. Relevant information can be extracted from academic papers, accident reports.
- an autopilot-related accident is mentioned in a piece of news, as follows.
- the features of the driving scene can be extracted, and the driving scene at that time can be constructed.
- the road condition is "crossroads";
- the time dynamics is "early morning";
- the weather is "cloudy";
- the traffic dynamics is "a white truck driving across the road".
- driving scenarios constructed from accident reports, papers and the like do not necessarily cover all of the scenario characteristics.
- the unextracted scene features can use default parameters or be set to be empty. In this way, a set 112 comprising a plurality of driving scenarios is constructed. Set 112 will serve as the population to be used in evolutionary computation 114 .
- two or more driving scenarios may be selected from set 112 to generate candidate driving scenarios 116 .
- FIG. 4 shows a schematic flow diagram of an example process 400 of generating candidate driving scenarios 116 according to an embodiment of the disclosure.
- a combined driving scenario is generated based on a combination of scene features of the plurality of driving scenarios in the set of driving scenarios 112 .
- This operation may also be referred to as "crossover".
- two driving scenarios may be randomly selected from the already constructed set 112, here referred to as the first driving scenario and the second driving scenario (both may be collectively referred to as parent driving scenarios), and then their scene features are cross-combined. That is to say, part of scene features of the combined driving scene may come from the first driving scene, and part of scene features may come from the second driving scene.
- more driving scenarios can be selected from the set 112 to generate a combined driving scenario, not limited to two.
- The proportion of scene features derived from each parent driving scenario can also be set. For example, for two parent driving scenarios, it can be set that 50% of the scene features come from the first driving scenario and 50% from the second driving scenario. This ratio can be set arbitrarily and is not limited thereto. For more parent driving scenarios, the proportions can be set similarly.
- candidate driving scenarios 116 are generated by adjusting at least one scene characteristic of the combined driving scenarios. This operation may also be called "mutation".
- The magnitude of the mutation may be controlled within a certain range, such that only a certain percentage (e.g., 10%, 5%, etc.) or less of the scene features of the driving scenario are allowed to be adjusted. For example, one or two adjustable features of the combined driving scenario derived from block 402 are adjusted, e.g., the time dynamics are adjusted from 2 p.m. to 9 p.m. It can be seen that such a candidate driving scenario 116 may not have been encountered before.
- Since the parent driving scenarios of the candidate driving scenario 116 come from the set 112, they are considered driving scenarios that can cause safety accidents or traffic violations. Therefore, the candidate driving scenario 116 evolved from them has a higher probability of causing a safety accident or a traffic violation. In addition, because the candidate driving scenario 116 is also obtained by "mutation", it may never have been encountered before. Therefore, candidate driving scenarios 116 have high testing and training value for the automated driving system 120.
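- As a minimal sketch of the crossover and mutation operations just described, assuming each driving scenario is represented as a plain dictionary of scene features (the representation, helper names, and default probabilities are illustrative only):

```python
import random

def crossover(first, second, ratio=0.5):
    """Combine two parent scenarios (block 402): each feature is inherited from
    `first` with probability `ratio`, otherwise from `second`. Both parents are
    assumed to share the same feature keys."""
    return {feature: (first[feature] if random.random() < ratio else second[feature])
            for feature in first}

def mutate(scenario, alternatives, rate=0.05):
    """Adjust a small fraction of features (block 404): each feature listed in
    `alternatives` is replaced by a random alternative value with probability `rate`."""
    mutated = dict(scenario)
    for feature, options in alternatives.items():
        if random.random() < rate:
            mutated[feature] = random.choice(options)
    return mutated
```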
- FIG. 5 shows a schematic flow diagram of an example process 500 of evaluating a driving scenario according to an embodiment of the disclosure.
- a first score for the candidate driving scenario 116 is determined.
- The first score is calculated by a dynamic fitness function and is used to measure the dynamic behavior of the automated driving system 120.
- The dynamic fitness function may be aimed at scenarios that cause safety incidents or traffic violations.
- the dynamic fitness function may determine the distance of the vehicle from other objects based on vehicle control commands generated by the automated driving system 120 (eg, generated by the vehicle control unit 124 ).
- the distance may include the lateral distance between the vehicle and other vehicles, pedestrians, lanes, road shoulders, obstacles, etc. in the driving scene.
- Distances may also include longitudinal distances between the vehicle and other vehicles, pedestrians, lanes, shoulders, obstacles, etc. in the driving scene. According to the above-mentioned lateral distance and longitudinal distance, it can be determined whether the vehicle has a safety accident and the degree of the safety accident.
- The dynamic fitness function can also determine whether the vehicle violates traffic rules based on the vehicle control instructions generated by the automatic driving system 120, for example, whether the vehicle crosses solid lane lines, runs traffic lights, exceeds the speed limit, violates traffic signs, and so on.
- the dynamic fitness function may also compare the vehicle control commands with the control commands for avoiding accidents. For example, when testing the candidate driving scenarios 116 , a set of expected vehicle control instructions may be preset or acquired, and the expected vehicle control instructions are correct operation instructions for avoiding accidents. If the vehicle control instruction generated by the automatic driving system 120 is quite different from the control instruction for avoiding accidents, it can be considered that the automatic driving system 120 has a greater safety risk in the driving scene. If the difference is small, it can be considered that the automatic driving system 120 can handle the driving scenario well.
- In the case of a greater safety risk, the first score may be given a higher value.
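- Purely for illustration, a dynamic fitness function along the lines described above might look as follows; the individual checks and their equal weighting are assumptions, not values given in the disclosure:

```python
def dynamic_fitness(min_lateral_gap, min_longitudinal_gap, traffic_violations,
                    command_deviation, safe_gap=2.0):
    """Higher score = the scenario exposes riskier behavior of the system under test.

    min_lateral_gap, min_longitudinal_gap: smallest distances (meters) to other
        vehicles, pedestrians, lanes, shoulders, or obstacles during the test.
    traffic_violations: number of detected rule violations (lane-line crossing,
        running a light, speeding, ignoring signs, ...).
    command_deviation: normalized difference, in [0, 1], between the generated
        control commands and the accident-avoiding reference commands.
    """
    # Reward scenarios in which the vehicle gets dangerously close to other objects.
    proximity_risk = max(0.0, 1.0 - min(min_lateral_gap, min_longitudinal_gap) / safe_gap)
    # Each detected violation adds risk, capped at 1.
    violation_risk = min(1.0, 0.2 * traffic_violations)
    # Equal weighting of the three components is an illustrative assumption.
    return (proximity_risk + violation_risk + command_deviation) / 3.0
```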
- A second score for the candidate driving scenario 116 is determined based on the behavior of the perception model 122 of the automated driving system 120.
- The second score is calculated by a static fitness function and is used to measure the static behavior of the automated driving system 120.
- the static fitness function may be based on improving the coverage of the perception model 122 .
- the perception model 122 may include a neural network model including neurons (also referred to as nodes) arranged in multiple layers.
- FIG. 6 shows a schematic diagram of an example perception model 600 according to an embodiment of the disclosure.
- the neural network model 600 may include multiple layers arranged in sequence. According to the direction of signal transmission, the neural network model may include an input layer 602 , a hidden layer 604 , and an output layer 606 .
- Hidden layer 604 may include two or more layers.
- the input layer 602 may receive, for example, a preprocessed image, and then the preprocessed image undergoes a series of coding or processing by the hidden layer 604 to extract image features, and finally the output layer 606 outputs inference results, such as identifying objects in the image.
- each layer of the neural network model 600 includes several neurons 601 .
- The neuron 601 receives the outputs of at least some neurons of the previous layer as its input signal, and the input signal may be a linear combination of the output signals of the neurons of the previous layer (that is, a weighted sum of the outputs of the neurons of the previous layer, to which a bias signal can also be added).
- the neuron 601 has a corresponding activation function, and when the input signal satisfies the activation condition defined by the activation function, the neuron 601 is activated to generate an output signal.
- Typical activation functions include the Sigmoid function, the Tanh function, the rectified linear unit (ReLU) function, the leaky ReLU function, and so on.
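- For reference, a neuron's output and the activation functions named above can be written as follows (these are the standard textbook definitions, not formulas reproduced from the disclosure):

```latex
y = f\Big(\sum_i w_i x_i + b\Big), \qquad
\mathrm{Sigmoid}(z) = \frac{1}{1 + e^{-z}}, \quad
\tanh(z) = \frac{e^{z} - e^{-z}}{e^{z} + e^{-z}}, \quad
\mathrm{ReLU}(z) = \max(0, z), \quad
\mathrm{LeakyReLU}(z) = \max(\alpha z, z), \ 0 < \alpha < 1.
```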
- A metric of the static fitness function may be the number of target neurons of the first type, where a target neuron of the first type refers to a neuron of the perception model 122 that is activated for the first time.
- the neurons of the perception model 122 are sometimes activated and sometimes not.
- information on whether neurons in the perception model 122 are activated in the current driving scene may be extracted. If a certain neuron is activated in the current scene and has not been activated in other scenes before, the neuron is determined as the first type of target neuron. It should be understood that the more the first type of target neurons, the more “novel” the current candidate driving scene 116 is for the automatic driving system 120 . In other words, this candidate driving scenario 116 has a higher testing and training value.
- Another metric of the static fitness function may be the number of target neurons of the second type, where a target neuron of the second type refers to a neuron whose output sign in the current scenario differs from its output signs when its parent (or grandparent, or even earlier-generation) driving scenarios were tested.
- The output of some activation functions can be positive or negative (such as the Tanh function). Therefore, when the automated driving system 120 is tested in the current candidate driving scenario 116, the sign information of the outputs of the activated neurons of the perception model 122 can also be obtained. If the output sign of an activated neuron in the current driving scenario differs from its output signs in the two or more parent driving scenarios of the previous generation, the neuron can be determined to be a target neuron of the second type. It should be understood that the more target neurons of the second type there are, the more "novel" the current candidate driving scenario 116 is for the automated driving system 120. In other words, this candidate driving scenario 116 has higher testing and training value.
- the static fitness function may determine the second score based on the above-mentioned number of the first type of target neurons and the number of the second type of target neurons. For example, the sum of the ratios of the number of the first type of target neurons and the number of the second type of target neurons to the total number of neurons of the perception model 122 may be determined as the second score. It should be understood that other manners are also possible, and the present disclosure does not limit this.
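- A minimal sketch of this second score, assuming per-neuron activation records are available as Python sets and dictionaries (the data layout and parameter names are assumptions for illustration):

```python
def static_fitness(total_neurons, current_activations, previously_activated,
                   parent_output_signs):
    """One possible realization of the second (static) score.

    total_neurons: total number of neurons in the perception model.
    current_activations: dict neuron_id -> output value in the current scenario
                         (only activated neurons appear).
    previously_activated: set of neuron ids activated in any earlier scenario.
    parent_output_signs: dict neuron_id -> list of output signs observed when the
                         parent driving scenarios were tested.
    """
    # First type: neurons activated for the first time in the current scenario.
    first_type = sum(1 for n in current_activations if n not in previously_activated)
    # Second type: activated neurons whose output sign differs from every parent sign.
    second_type = sum(
        1 for n, value in current_activations.items()
        if n in parent_output_signs
        and all((value > 0) != (sign > 0) for sign in parent_output_signs[n])
    )
    # Sum of the two ratios to the total neuron count, as described above.
    return (first_type + second_type) / total_neurons
```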
- a test score for the candidate driving scenario 116 is determined based on the first score and the second score.
- the test score for candidate driving scenario 116 may be determined by adding, or calculating a weighted sum of, the first score and the second score.
- For example, the weight of the first score may be 0.9 and the weight of the second score may be 0.1, which is not limited in the present disclosure. It should be understood that the manner of combining the first score and the second score to determine the test score is not limited thereto.
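- With the example weights above, and writing s_1 for the first (dynamic) score and s_2 for the second (static) score, the combination can be written as:

```latex
\text{test score} = w_1 s_1 + w_2 s_2, \qquad \text{e.g. } w_1 = 0.9,\ w_2 = 0.1.
```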
- Although the driving scenario evaluation process 500 described with reference to FIG. 5 is shown as being implemented within the simulator 110, it should be understood that the process can also be implemented by components other than the simulator 110, for example, within a separate component that communicates with the simulator 110.
- If it is determined that the test score exceeds the score threshold, process 300 proceeds to block 308; if not, process 300 returns to block 302 to repeat the operations described at blocks 302, 304, and 306.
- the score threshold may be predefined, or updated as process 300 is repeatedly performed. As the process 300 repeats, the number of driving scenarios "seen" by the automatic driving system 120 also increases. At this time, the score threshold can be appropriately lowered, so as to ensure that the collection of driving scenarios 112 (ie, the population) can be continuously updated.
- the set of driving scenarios 112 is updated with the candidate driving scenario.
- A candidate driving scenario 116 with a test score above the score threshold means that the driving scenario is an expected driving scenario, e.g., one that leads to a safety incident or to new behavior of the perception model.
- Such driving scenarios may be added to the population, i.e., the set of driving scenarios 112, and may continue to be used to evolve further driving scenarios.
- the generated driving scenes are often scenes that have never been encountered in the road test, or scenes that have not been imagined in the artificial simulation, thereby improving the coverage and richness of the driving scene. This is conducive to the discovery of defects or loopholes in the automatic driving system, which can indirectly improve the accuracy, safety and robustness of the automatic driving system.
- Step 1: Obtain four real accidents from news reports, and perform modeling to obtain corresponding scenarios. These four scenarios serve as the initial population.
- Scenario 1: {the road condition is "crossroads", the time dynamics is "early morning", the weather is "cloudy", and the traffic dynamics is "a white truck driving across the road"}.
- Scenario 2: {the road condition is "curve", the time dynamics is "night", the weather is "cloudy", and the traffic dynamics is "none"}.
- Scenario 3: {the road condition is "straight highway", the time dynamics is "daytime", the weather is "sunny", and the traffic dynamics is "a white truck with a rollover ahead"}.
- Scenario 4: {the road condition is "T intersection", the time dynamics is "daytime", the weather is "sunny", and the traffic dynamics is "a car turning suddenly ahead"}.
- Step 2: Select two scenarios from the initial population as parent scenarios and cross their four features, with each parent scenario having a 50% probability of being selected for each feature. For example, select Scenario 1 and Scenario 4 as parents and generate a child scenario to be tested, whose features include, for example:
- Time dynamics: daytime (inherited from Scenario 4)
- Step 3: After generating the child scenario, mutate a feature with a mutation probability of 5%.
- For example, the T-junction becomes a "crossroads".
- A new Scenario 1 is obtained through mutation.
- The new Scenario 1 includes:
- Traffic dynamics: a white truck driving across the road ahead.
- Step 4: Test the above driving scenario, obtain vehicle control instructions and perception model behavior, and calculate the test score of the driving scenario.
- the test score may be calculated with the process described with reference to FIG. 5 .
- Step 5: Select excellent individuals according to the test scores to form a new population.
- If the test score of a driving scenario is above the score threshold, it is added to the population.
- Step 6: Repeat steps 2 and 3 above. For example, Scenario 3 is crossed with the new Scenario 1, and the time feature is then mutated to generate a new Scenario 2.
- Traffic dynamics: a white truck overturned ahead (inherited from Scenario 3).
- Step 7: Test the new Scenario 2 described above; a safety incident occurs.
- The new Scenario 2 is therefore a target scenario.
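- Purely as a usage illustration, the worked example above maps onto the crossover/mutate sketch given earlier (the helper functions are the hypothetical ones defined there, and only the features named in the example are shown):

```python
scenario_1 = {"road": "crossroads", "time": "early morning",
              "weather": "cloudy", "traffic": "white truck driving across the road"}
scenario_4 = {"road": "T intersection", "time": "daytime",
              "weather": "sunny", "traffic": "a car turning suddenly ahead"}

# Step 2: crossover with a 50% chance per feature of inheriting from each parent.
child = crossover(scenario_1, scenario_4, ratio=0.5)

# Step 3: mutate each listed feature with 5% probability, drawing from allowed
# alternatives (the option list here is illustrative).
alternatives = {"road": ["crossroads", "T intersection", "curve", "straight highway"]}
new_scenario_1 = mutate(child, alternatives, rate=0.05)
```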
- FIG. 7 shows a schematic block diagram of an apparatus 700 for an automatic driving system according to an embodiment of the present disclosure.
- The apparatus 700 may be implemented, for example, in the simulator 110 shown in FIG. 1 and FIG. 2, or in a separate component with which the simulator 110 communicates, which is not limited in the present disclosure.
- the device 700 includes a generating unit 710 , a scoring unit 720 and an updating unit 730 .
- the generating unit 710 is configured to generate candidate driving scenarios from a plurality of driving scenarios in the set of driving scenarios.
- the scoring unit 720 is configured to determine a test score for a candidate driving scenario by testing the candidate driving scenario in the automatic driving system.
- the update unit 730 is configured to update the set of driving scenarios with the candidate driving scenarios if it is determined that the test score exceeds the score threshold.
- the generating unit 710 may also be configured to: generate a combined driving scene based on a combination of scene features of a plurality of driving scenes in the set of driving scenes; and adjust at least one scene feature of the combined driving scene to generate candidate driving scenarios.
- the scene features may include at least one of the following: road topology, time of day, weather, road surface state, traffic dynamics, and landscape.
- The scoring unit 720 can also be configured to: determine a first score for the candidate driving scenario based on the vehicle control commands of the automatic driving system; determine a second score for the candidate driving scenario based on the behavior of the perception model of the automatic driving system; and determine a test score for the candidate driving scenario based on the first score and the second score.
- The scoring unit 720 may also be configured to determine the first score by at least one of the following: determining the distance between the vehicle and other objects based on the vehicle control commands; determining whether the vehicle violates traffic rules based on the vehicle control commands; and comparing the vehicle control commands with control commands for avoiding accidents.
- The perception model may be a neural network model including neurons.
- The scoring unit 720 may also be configured to determine the second score by at least one of the following: determining the number of target neurons of the first type, i.e., neurons of the perception model that are activated for the first time; and determining the number of target neurons of the second type, i.e., neurons whose output sign differs from each of their output signs when the plurality of driving scenarios were tested.
- the apparatus 700 may further include an initialization unit (not shown), configured to initialize a set of driving scenarios based on scenarios where automatic driving accidents have occurred in the real world or in a simulation environment.
- the plurality of driving scenarios may include two driving scenarios.
- FIG. 8 shows a schematic block diagram of an example electronic device 800 that may be used to implement embodiments of the present disclosure.
- The device 800 includes a central processing unit (CPU) 801, which can perform various appropriate actions and processes according to computer program instructions stored in a read-only memory (ROM) 802 or computer program instructions loaded from a storage unit 808 into a random access memory (RAM) 803.
- In the RAM 803, various programs and data necessary for the operation of the device 800 can also be stored.
- the CPU 801, ROM 802, and RAM 803 are connected to each other via a bus 804.
- An input/output (I/O) interface 805 is also connected to the bus 804 .
- A number of components in the device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard or a mouse; an output unit 807, such as various types of displays and speakers; a storage unit 808, such as a magnetic disk or an optical disc; and a communication unit 809, such as a network card, a modem, or a wireless communication transceiver.
- the communication unit 809 allows the device 800 to exchange information/data with other devices over a computer network such as the Internet and/or various telecommunication networks.
- the various procedures and processes described above can be executed by the processing unit 801 .
- the various procedures and processes described above may be implemented as computer software programs tangibly embodied on a machine-readable medium, such as the storage unit 808 .
- part or all of the computer program may be loaded and/or installed on the device 800 via the ROM 802 and/or the communication unit 809.
- When the computer program is loaded into the RAM 803 and executed by the CPU 801, one or more actions of the procedures and processes described above may be performed.
- the present disclosure may be a method, apparatus, system and/or computer program product.
- a computer program product may include a computer readable storage medium having computer readable program instructions thereon for carrying out various aspects of the present disclosure.
- a computer readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device.
- a computer readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- Examples of computer-readable storage media include: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
- computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., pulses of light through fiber optic cables), or transmitted electrical signals.
- Computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or downloaded to an external computer or external storage device over a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
- the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
- a network adapter card or a network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device .
- Computer program instructions for performing the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
- Computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
- The remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
- In some embodiments, an electronic circuit, such as a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized by utilizing state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions, thereby implementing various aspects of the present disclosure.
- These computer-readable program instructions may be provided to a processing unit of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processing unit of the computer or other programmable data processing apparatus, produce an apparatus for implementing the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
- These computer-readable program instructions can also be stored in a computer-readable storage medium, and these instructions cause computers, programmable data processing devices, and/or other devices to work in a specific way, so that the computer-readable medium storing the instructions constitutes an article of manufacture that includes instructions implementing various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
- Each block in a flowchart or block diagram may represent a module, a program segment, or a portion of instructions that contains one or more executable instructions for implementing the specified logical functions.
- the functions noted in the block may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
- each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations can be implemented by a dedicated hardware-based system that performs the specified function or action , or may be implemented by a combination of dedicated hardware and computer instructions.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Hardware Design (AREA)
- Quality & Reliability (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Financial Or Insurance-Related Operations Such As Payment And Settlement (AREA)
- Debugging And Monitoring (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Traffic Control Systems (AREA)
Abstract
A method for an autonomous driving system, comprising: generating a candidate driving scenario from a plurality of driving scenarios in a set of driving scenarios; determining a test score for the candidate driving scenario by testing the candidate driving scenario in the autonomous driving system; and, if the test score is determined to exceed a score threshold, updating the set of driving scenarios with the candidate driving scenario. Also disclosed are an apparatus for an autonomous driving system, an electronic device, a computer-readable storage medium, and a computer program product.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2021/142708 WO2023123130A1 (fr) | 2021-12-29 | 2021-12-29 | Procédé et appareil pour système de conduite autonome, dispositif électronique et support |
CN202180033716.XA CN116685955A (zh) | 2021-12-29 | 2021-12-29 | 用于自动驾驶系统的方法、装置、电子设备和介质 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2021/142708 WO2023123130A1 (fr) | 2021-12-29 | 2021-12-29 | Procédé et appareil pour système de conduite autonome, dispositif électronique et support |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023123130A1 true WO2023123130A1 (fr) | 2023-07-06 |
Family
ID=86996954
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/142708 WO2023123130A1 (fr) | 2021-12-29 | 2021-12-29 | Procédé et appareil pour système de conduite autonome, dispositif électronique et support |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN116685955A (fr) |
WO (1) | WO2023123130A1 (fr) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117762816A (zh) * | 2024-01-08 | 2024-03-26 | 中科南京软件技术研究院 | 自动驾驶系统仿真测试方法、系统、设备及存储介质 |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118672931A (zh) * | 2024-08-23 | 2024-09-20 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | 一种自动驾驶仿真测试场景生成方法、系统及介质 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102019211009A1 (de) * | 2019-07-25 | 2021-01-28 | Zf Friedrichshafen Ag | Verfahren und Computerprogramm zum Simulieren eines autonomen Fahrzeugs in einer Mehrzahl von Testfällen |
CN112380137A (zh) * | 2020-12-04 | 2021-02-19 | 清华大学苏州汽车研究院(吴江) | 一种自动驾驶场景的确定方法、装置、设备及存储介质 |
CN113640014A (zh) * | 2021-08-13 | 2021-11-12 | 北京赛目科技有限公司 | 自动驾驶车辆测试场景的构建方法、装置及可读存储介质 |
CN113688042A (zh) * | 2021-08-25 | 2021-11-23 | 北京赛目科技有限公司 | 测试场景的确定方法、装置、电子设备及可读存储介质 |
WO2021245201A1 (fr) * | 2020-06-03 | 2021-12-09 | Five AI Limited | Test et simulation en conduite autonome |
- 2021-12-29: CN application CN202180033716.XA, published as CN116685955A (zh), status: pending
- 2021-12-29: PCT application PCT/CN2021/142708, published as WO2023123130A1 (fr), status: application filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102019211009A1 (de) * | 2019-07-25 | 2021-01-28 | Zf Friedrichshafen Ag | Verfahren und Computerprogramm zum Simulieren eines autonomen Fahrzeugs in einer Mehrzahl von Testfällen |
WO2021245201A1 (fr) * | 2020-06-03 | 2021-12-09 | Five AI Limited | Test et simulation en conduite autonome |
CN112380137A (zh) * | 2020-12-04 | 2021-02-19 | 清华大学苏州汽车研究院(吴江) | 一种自动驾驶场景的确定方法、装置、设备及存储介质 |
CN113640014A (zh) * | 2021-08-13 | 2021-11-12 | 北京赛目科技有限公司 | 自动驾驶车辆测试场景的构建方法、装置及可读存储介质 |
CN113688042A (zh) * | 2021-08-25 | 2021-11-23 | 北京赛目科技有限公司 | 测试场景的确定方法、装置、电子设备及可读存储介质 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117762816A (zh) * | 2024-01-08 | 2024-03-26 | 中科南京软件技术研究院 | 自动驾驶系统仿真测试方法、系统、设备及存储介质 |
Also Published As
Publication number | Publication date |
---|---|
CN116685955A (zh) | 2023-09-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Chen et al. | A deep learning algorithm for simulating autonomous driving considering prior knowledge and temporal information | |
US10346564B2 (en) | Dynamic virtual object generation for testing autonomous vehicles in simulated driving scenarios | |
CN112868022B (zh) | 自动驾驶车辆的驾驶场景 | |
US20220121550A1 (en) | Autonomous Vehicle Testing Systems and Methods | |
JP7075366B2 (ja) | 運転場面データを分類するための方法、装置、機器及び媒体 | |
CN112703459B (zh) | 对抗场景的迭代生成 | |
CN114638148A (zh) | 用于自动化交通工具的文化敏感驾驶的安全的并且可扩展的模型 | |
CN110647839A (zh) | 自动驾驶策略的生成方法、装置及计算机可读存储介质 | |
WO2023123130A1 (fr) | Procédé et appareil pour système de conduite autonome, dispositif électronique et support | |
JP7345639B2 (ja) | マルチエージェントシミュレーション | |
JP2021528798A (ja) | 場面のパラメトリック上面視表現 | |
US20220227397A1 (en) | Dynamic model evaluation package for autonomous driving vehicles | |
CN118228612B (zh) | 一种基于强化学习的自然性自动驾驶场景生成方法及装置 | |
EP3920128A1 (fr) | Domaines de conception opérationnelle en conduite autonome | |
Li et al. | Development and testing of advanced driver assistance systems through scenario-based system engineering | |
CN117056153A (zh) | 校准和验证驾驶员辅助系统和/或自动驾驶系统的方法、系统和计算机程序产品 | |
CN116194350A (zh) | 生成多个模拟边缘情况驾驶场景 | |
US20230278582A1 (en) | Trajectory value learning for autonomous systems | |
Darapaneni et al. | Autonomous car driving using deep learning | |
CN117521389A (zh) | 一种基于车路协同感知仿真平台的车辆感知测试方法 | |
CN117151246A (zh) | 智能体决策方法、控制方法、电子设备及存储介质 | |
CN116710732A (zh) | 自主交通工具运动计划中的稀有事件仿真 | |
CN116466697A (zh) | 用于运载工具的方法、系统以及存储介质 | |
CN117818659A (zh) | 车辆安全决策方法、装置、电子设备、存储介质及车辆 | |
Lu et al. | DeepQTest: Testing Autonomous Driving Systems with Reinforcement Learning and Real-world Weather Data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | WWE | Wipo information: entry into national phase | Ref document number: 202180033716.X; Country of ref document: CN |
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21969498; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |