CN112559371A - Automatic driving test method and device and electronic equipment - Google Patents


Info

Publication number
CN112559371A
CN112559371A (application number CN202011546382.XA)
Authority
CN
China
Prior art keywords
data
automatic driving
semantic
scene
scene data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011546382.XA
Other languages
Chinese (zh)
Other versions
CN112559371B (en)
Inventor
李建平
李丹
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202011546382.XA priority Critical patent/CN112559371B/en
Publication of CN112559371A publication Critical patent/CN112559371A/en
Application granted granted Critical
Publication of CN112559371B publication Critical patent/CN112559371B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00: Error detection; Error correction; Monitoring
    • G06F11/36: Preventing errors by testing or debugging software
    • G06F11/3668: Software testing
    • G06F11/3672: Test management
    • G06F11/3684: Test management for test design, e.g. generating new test cases
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01M: TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M17/00: Testing of vehicles
    • G01M17/007: Wheeled or endless-tracked vehicles
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00: Error detection; Error correction; Monitoring
    • G06F11/36: Preventing errors by testing or debugging software
    • G06F11/3668: Software testing
    • G06F11/3672: Test management
    • G06F11/3688: Test management for test execution, e.g. scheduling of test suites
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • General Engineering & Computer Science (AREA)
  • Feedback Control In General (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses an automatic driving test method and device and an electronic device, and relates to the field of artificial intelligence in computer technology, in particular to automatic driving. The scheme is as follows: first automatic driving test data of an automatic driving vehicle under a first version algorithm are acquired; semantic classification is performed on the first automatic driving test data to determine multiple classes of semantic scene data; parameter classification is then performed on each class of semantic scene data to determine a plurality of parameter-level scene data for each class; finally, a first difference between the first index value of the parameter-level scene data of the first automatic driving test data and the first index value of the parameter-level scene data of second automatic driving test data is determined. Comparing the first index values of the parameter-level scene data under the two algorithm versions completes the automatic driving test. Because the automatic driving test data need not be analyzed manually, automatic driving test efficiency is improved.

Description

Automatic driving test method and device and electronic equipment
Technical Field
The application relates to the technical field of artificial intelligence such as automatic driving in computer technology, in particular to an automatic driving test method and device and electronic equipment.
Background
With the continuous development of automatic driving technology, automatic driving vehicles are becoming both more numerous and more intelligent. Because the scenes an automatic driving system must handle are complex and changeable, large-scale road tests are often required to evaluate algorithm performance accurately during algorithm iteration. Macroscopic evaluation indexes, such as the number of hard brakes per kilometer or the number of collisions per kilometer, reflect the overall performance of the algorithm at the service level. For a system as complex as automatic driving, changes in these macroscopic indexes must be interpretable: an effective method is needed to locate which change in the algorithm caused the change in the overall macroscopic index, so as to better guide developers in iterating the algorithm.
At present, the common approach is manual analysis of the data collected during testing. For example, for the macroscopic index of hard brakes per kilometer, the scene of each hard brake is typically classified manually (for example, a vehicle cutting in on a straight non-intersection road, or an oncoming straight-driving vehicle encountered while turning left at an intersection), and the hard-brake scene distributions of two algorithm versions are then compared to explain the index change and complete the automatic driving test.
Disclosure of Invention
The application provides an automatic driving test method and device and electronic equipment.
In a first aspect, an embodiment of the present application provides an automatic driving test method, including:
acquiring first automatic driving test data of an automatic driving vehicle under a first version algorithm;
performing semantic classification on the first automatic driving test data to determine multi-class semantic scene data;
performing parameter classification on each type of semantic scene data respectively, and determining a plurality of parameter level scene data of each type of semantic scene data;
determining a first difference value between a first index value of parameter level scene data of the first automatic driving test data and a first index value of parameter level scene data of second automatic driving test data, wherein the first version algorithm is an algorithm after iteration of a second version algorithm, and the second automatic driving test data is automatic driving test data of the automatic driving vehicle under the second version algorithm.
In the automatic driving test method of the embodiments of the application, semantic classification is first performed on the first automatic driving test data to determine multiple classes of semantic scene data; parameter classification is then performed on each class of semantic scene data to determine a plurality of parameter-level scene data for each class; finally, a first difference between the first index value of the parameter-level scene data of the first automatic driving test data and the first index value of the parameter-level scene data of the second automatic driving test data is determined, which completes the comparison between the two. Because the automatic driving test data need not be analyzed manually, automatic driving test efficiency is improved.
In a second aspect, an embodiment of the present application provides an automatic driving test device, the device including:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring first automatic driving test data of an automatic driving vehicle under a first version algorithm;
the first classification module is used for performing semantic classification on the first automatic driving test data and determining multi-class semantic scene data;
the second classification module is used for performing parameter classification on each type of semantic scene data respectively and determining a plurality of parameter level scene data of each type of semantic scene data;
the first determining module is configured to determine a first difference between a first index value of parameter level scene data of the first autopilot test data and a first index value of parameter level scene data of second autopilot test data, where the first version algorithm is an algorithm after iteration of a second version algorithm, and the second autopilot test data is autopilot test data of the autopilot vehicle under the second version algorithm.
In a third aspect, an embodiment of the present application further provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein:
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the autopilot testing method provided by the embodiments of the application.
In a fourth aspect, an embodiment of the present application further provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the automated driving test method provided by the embodiments of the present application.
In a fifth aspect, an embodiment of the present application provides a computer program product, which includes a computer program that, when executed by a processor, implements the automatic driving test method provided by the embodiments of the present application.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is a first schematic flowchart of an automatic driving test method according to an embodiment provided herein;
FIG. 2 is a second schematic flow chart of an automatic driving test method according to an embodiment of the present disclosure;
FIG. 3 is a scene split view of one embodiment provided herein;
FIG. 4 is a first block diagram of an automatic driving test device of an embodiment provided herein;
FIG. 5 is a second block diagram of an automatic driving test device according to an embodiment of the present disclosure;
fig. 6 is a block diagram of an electronic device for implementing the automatic driving test method according to the embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding, and these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Descriptions of well-known functions and constructions are likewise omitted for clarity and conciseness.
As shown in fig. 1, according to an embodiment of the present application, there is provided an automatic driving test method applicable to an electronic device, the method including:
step S101: first autopilot test data of an autopilot vehicle under a first version of an algorithm is obtained.
The first automatic driving test data comprises multiple types of semantic scene data, and each type of semantic scene data comprises multiple parameter level scene data.
The first version algorithm may be understood as a first version of the automatic driving algorithm of an automatic driving vehicle; the vehicle drives automatically on the basis of this algorithm. While the automatic driving vehicle drives under the first version algorithm, test data are generated and recorded, yielding the first automatic driving test data. It should be noted that the first automatic driving test data may include both the driving data of the automatic driving vehicle itself (e.g., position, speed, and operating state) and environment data obtained by detecting the surroundings (i.e., the environment information collected by the vehicle). The environment data may include, for example, traffic environment data around the vehicle, such as data on other traffic participants (other vehicles, pedestrians, and the like) and on signal lights.
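As a minimal illustrative sketch, the recorded test data could be modeled as timestamped frames of ego-vehicle state plus detected environment. The patent does not prescribe a data format, so every field name below is an assumption:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Frame:
    """One recorded sample of the ego vehicle and its surroundings.

    Field names are illustrative; the text only says the data cover the
    vehicle's own driving state plus the detected traffic environment.
    """
    timestamp: float
    position: tuple          # (x, y) in map coordinates
    speed: float             # m/s
    operating_state: str     # e.g. "cruising", "hard_brake"
    obstacles: List[dict] = field(default_factory=list)  # other traffic participants

@dataclass
class TestData:
    """Road-test log produced under one algorithm version."""
    algorithm_version: str
    frames: List[Frame]

# One run under the first (iterated) algorithm version
run_v1 = TestData("v1", [Frame(0.0, (0.0, 0.0), 12.5, "cruising")])
```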
Step S102: and performing semantic classification on the first automatic driving test data to determine multi-class semantic scene data.
The first automatic driving test data comprise multiple classes of semantic scene data. As the automatic driving vehicle performs its road test under the first version algorithm, data are recorded across many scenes, so the first automatic driving test data can be divided into data of different semantic scene classes; that is, the data are split into finer-grained semantic scene data, yielding the multiple classes of semantic scene data. The first automatic driving test data can therefore be regarded as a combination of, and expressed through, the multiple classes of semantic scene data. For example, the semantic scene data may include, but are not limited to, intersection straight-ahead scene data, non-intersection scene data, left-turn scene data, right-turn scene data, and the like.
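The splitting step amounts to bucketing recorded events by semantic-scene label. A sketch under the assumption that each event can be mapped to one scene label (the concrete labelling logic is not specified by the patent):

```python
from collections import defaultdict

def semantic_classify(events, scene_of):
    """Group recorded events into semantic-scene buckets.

    `scene_of` is a caller-supplied function mapping an event to a
    semantic-scene label such as "intersection_straight"; both the
    function and the labels here are illustrative assumptions.
    """
    buckets = defaultdict(list)
    for ev in events:
        buckets[scene_of(ev)].append(ev)
    return dict(buckets)

events = [{"scene": "intersection_straight"},
          {"scene": "left_turn"},
          {"scene": "intersection_straight"}]
by_scene = semantic_classify(events, lambda e: e["scene"])
```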
Step S103: and respectively carrying out parameter classification on each type of semantic scene data, and determining a plurality of parameter level scene data of each type of semantic scene data.
Each class of semantic scene data comprises a plurality of parameter-level scene data: any class of semantic scene data can be further refined and divided into data of different parameter-level scene types, i.e., the semantic scene data are divided into a plurality of parameter-level scene data.
As one example, the parameters may include, but are not limited to, at least one of a vehicle speed, an obstacle cut-in distance, and an obstacle cut-in angle of the autonomous vehicle, among others. Each type of semantic scene data has corresponding parameters, and for any type of semantic scene data, parameter classification can be performed by using the corresponding parameters to obtain a plurality of parameter level scene data, that is, for each type of semantic scene data, a plurality of parameter level scene data can be obtained by classification respectively.
For example, for intersection straight-ahead cut-in scene data (one class of semantic scene data), the data are divided according to parameters such as the vehicle speed and the obstacle cut-in distance: the scene data corresponding to the vehicle speed are extracted to obtain the parameter-level data for vehicle speed, the scene data corresponding to the obstacle cut-in speed are extracted to obtain the parameter-level data for obstacle cut-in speed, and the scene data corresponding to the obstacle cut-in distance are extracted to obtain the parameter-level data for obstacle cut-in distance. The straight-ahead cut-in scene data are thus parameter-classified into three parameter-level scene data.
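One common way to realize this refinement is to bin a scene's events by parameter range. The bin boundaries below (ego speed in m/s) are assumptions for illustration only:

```python
def parameter_classify(scene_events, param, bins):
    """Split one semantic scene's events into parameter-level buckets.

    `bins` is a list of (low, high) half-open ranges for the chosen
    parameter; events falling in no bin are dropped.
    """
    out = {b: [] for b in bins}
    for ev in scene_events:
        for low, high in bins:
            if low <= ev[param] < high:
                out[(low, high)].append(ev)
                break
    return out

# Events from one semantic scene, bucketed by ego speed
events = [{"speed": 3.0}, {"speed": 9.0}, {"speed": 14.0}]
levels = parameter_classify(events, "speed", [(0, 5), (5, 10), (10, 20)])
```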
Step S104: a first difference between a first index value of the parameter level scenario data of the first autopilot test data and a first index value of the parameter level scenario data of the second autopilot test data is determined.
The first version algorithm is an algorithm obtained by iterating a second version algorithm, and the second automatic driving test data are the automatic driving test data of the automatic driving vehicle under the second version algorithm. The first difference is the first index value of the parameter-level scene data of the first automatic driving test data minus the first index value of the corresponding parameter-level scene data of the second automatic driving test data. That is, a first difference is determined between the first index value of each parameter-level scene data of the first automatic driving test data and the first index value of the corresponding parameter-level scene data of the second automatic driving test data.
It should be noted that the first automatic driving test data contain multiple classes of semantic scene data, and each class contains multiple parameter-level scene data. The first index values of the parameter-level scene data of the first automatic driving test data therefore include a first index value for each parameter-level scene data under each semantic scene class, and likewise for the second automatic driving test data. For example, if there are five classes of semantic scene data and each is divided into 6 parameter-level scene data, there are 30 parameter-level scene data in total; counting the first index value of each yields 30 first index values, and hence 30 first differences.
The first index may be understood as an index set comprising a plurality of indexes, for example, the number of hard brakes per kilometer and the number of collisions per kilometer; the larger these values, the worse the automatic driving performance of the vehicle, and the smaller they are, the better. The number of hard brakes per kilometer of the first automatic driving test data is the number of all hard brakes occurring in the first automatic driving test data divided by its driving mileage, and the number of hard brakes per kilometer for a given class of semantic scene data is the number of hard brakes occurring in that class divided by the driving mileage of that class. The driving mileages of the semantic scene classes sum to the driving mileage of the first automatic driving test data, and the hard-brake counts of the classes sum to its total hard-brake count.
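The per-kilometer index and the stated additivity of per-scene counts and mileages can be sketched as follows; the sample counts and mileages are made-up numbers:

```python
def hard_brakes_per_km(num_hard_brakes, mileage_km):
    """Macro index: hard-brake count normalised by distance driven."""
    return num_hard_brakes / mileage_km

# Per semantic scene: (hard-brake count, mileage in km). As the text
# states, per-scene counts and mileages sum to the run's totals.
scene_stats = {"intersection_straight": (4, 20.0), "left_turn": (1, 5.0)}
total_brakes = sum(n for n, _ in scene_stats.values())
total_km = sum(km for _, km in scene_stats.values())
overall_index = hard_brakes_per_km(total_brakes, total_km)
```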
Determining the first difference between the first index values of the parameter-level scene data of the first and second automatic driving test data completes the comparison of first index values at the parameter-level scene under the first and second version algorithms; that is, it determines the variation between those index values under the two version algorithms, and thereby completes the automatic driving test.
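The per-scene comparison described above reduces to a keyed subtraction. The key structure (semantic scene, parameter bucket) and all sample values below are assumptions for illustration:

```python
def first_differences(index_v1, index_v2):
    """Per parameter-level-scene first difference: the new-version index
    value minus the old-version value, for every key present in both runs.
    """
    return {k: index_v1[k] - index_v2[k]
            for k in index_v1.keys() & index_v2.keys()}

# Hypothetical hard-brakes-per-km values per (semantic scene, speed bucket)
v1 = {("intersection_straight", "speed_0_5"): 0.10,
      ("intersection_straight", "speed_5_10"): 0.30}
v2 = {("intersection_straight", "speed_0_5"): 0.20,
      ("intersection_straight", "speed_5_10"): 0.25}
diffs = first_differences(v1, v2)
# A negative difference means the iterated algorithm improved that scene.
```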
It should be noted that the second version algorithm may be understood as a second version of the automatic driving algorithm of the automatic driving vehicle. The vehicle drives automatically on the basis of the second version algorithm, generating and recording test data in the process to obtain the second automatic driving test data. In other words, the automatic driving vehicle must complete two road tests to obtain the first and second automatic driving test data.
The process of determining the first index values of the parameter-level scene data of the second automatic driving test data is similar to that for the first automatic driving test data; only the test data differ. The first index values of the parameter-level scene data of the second automatic driving test data can therefore be determined in advance, before the first differences are computed. That process may include: acquiring second automatic driving test data of the automatic driving vehicle under the second version algorithm, the second automatic driving test data comprising multiple classes of semantic scene data (distinct from those of the first automatic driving test data), each class comprising multiple parameter-level scene data (likewise distinct from those of the first automatic driving test data); performing semantic classification on the second automatic driving test data to determine its multiple classes of semantic scene data; performing parameter classification on each class to determine its parameter-level scene data; and determining the first index value of each parameter-level scene data of the second automatic driving test data.
In the automatic driving test method of the embodiments of the application, semantic classification is first performed on the first automatic driving test data to determine multiple classes of semantic scene data; parameter classification is then performed on each class to determine a plurality of parameter-level scene data; finally, the first differences between the first index values of the parameter-level scene data of the first and second automatic driving test data are determined, completing the comparison between them. Because the automatic driving test data need not be analyzed manually, test efficiency is improved; at the same time, the cost of manual analysis, and hence the test cost, is reduced. In addition, because the differences between first index values are located at the level of parameter-level scene data under the two version algorithms, the comparison between the first and second version algorithms becomes more detailed and accurate, and the problem of poor accuracy caused by inconsistent manual-analysis standards is avoided.
In one embodiment, after determining the first difference between the first index values of the parameter-level scene data of the first and second automatic driving test data, the method may further include determining a test result. For example, for the index of hard brakes per kilometer: if the first difference between the value for the first automatic driving test data and the value for the second automatic driving test data is smaller than a preset value (the preset value being smaller than or equal to zero), the first version algorithm can be considered to pass the test for that index, and a passing test result is obtained. If the first difference is greater than or equal to the preset value, the first version algorithm can be considered to fail the test for that index, and a failing test result is obtained.
In one embodiment, after determining the first differences between the first index values of the parameter-level scene data of the first and second automatic driving test data, the method may further include: determining a first target algorithm according to the plurality of first differences. The plurality of first differences make it possible to locate which first indexes of parameter-level scene data have the larger influence on the vehicle's overall first indexes, so that the first target algorithm — the part of the first version algorithm with the larger influence on the first indexes of the automatic driving test data — can be located, which better guides developers in subsequent algorithm iteration.
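Locating the scenes most responsible for an index regression amounts to ranking the first differences; the `top_k` parameter and sample values are illustrative assumptions, not from the patent:

```python
def locate_target_scenes(diffs, top_k=3):
    """Rank parameter-level scenes by how much they degraded the index
    (largest positive first difference first), pointing developers at
    the part of the algorithm most responsible for the regression.
    """
    return sorted(diffs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

diffs = {("intersection_straight", "speed_0_5"): -0.10,
         ("intersection_straight", "speed_5_10"): 0.30,
         ("left_turn", "speed_0_5"): 0.05}
worst = locate_target_scenes(diffs, top_k=2)
```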
In this embodiment, classification of parameter levels can be realized, so that more detailed and accurate comparison between different algorithm versions is realized, so as to locate a first target algorithm causing index change, and thus, iteration of an algorithm developer is better guided.
In one embodiment, after semantically classifying the first autopilot test data and determining the multiclass semantic scene data, the method further includes:
determining a second difference value between a first index value of the semantic scene data of the first automatic driving test data and a first index value of the semantic scene data of the second automatic driving test data.
The second difference between the first index value of the semantic scene data of the first automatic driving test data and that of the second automatic driving test data is the former minus the latter.
The first index values of the semantic scene data of the first automatic driving test data include a first index value for each class of semantic scene data of the first automatic driving test data, and likewise for the second automatic driving test data. That is, a second difference is determined between the first index value of each semantic scene data of the first automatic driving test data and that of the corresponding semantic scene data of the second automatic driving test data. For example, with five classes of semantic scene data, the first index value of each class can be counted, yielding five second differences.
In this embodiment, the comparison between the first index values of the first and second automatic driving test data at the semantic scene level under the two version algorithms can additionally be determined, i.e., the variation between those index values under the first and second version algorithms at the semantic scene level. This improves the completeness of the index-value comparison under the two version algorithms, and hence the completeness of the automatic driving test.
As one example, a second target algorithm may be determined based on the plurality of second differences. Optionally, the second target algorithm is determined from the first version algorithm; for example, it may be the algorithm of a target semantic scene, where the target semantic scene is the semantic scene of the semantic scene data corresponding to the minimum value among the plurality of second differences over the semantic scene data of the first automatic driving test data.
As shown in fig. 2, in an embodiment, the step S102 of semantically classifying the first autopilot test data and determining multiple types of semantic scene data includes:
step S1021: and performing semantic analysis on the first automatic driving test data to determine a plurality of scene labels.
It can be understood that the plurality of scene labels (tags) correspond to a plurality of scenes in the first automatic driving test data, including but not limited to scenes such as the host vehicle going straight, a lane change, an obstacle vehicle cut-in, an obstacle vehicle left turn, and the like. In the process of determining the plurality of scene labels, semantic parsing is performed on the first automatic driving test data, and the plurality of scene labels are determined in combination with map data.
Step S1022: and determining the multi-class semantic scene according to the scene labels.
Any semantic scene is a combination of at least two scene labels among the plurality of scene labels; that is, a semantic scene is obtained by combining at least two of the plurality of scenes obtained by performing semantic parsing on the first automatic driving test data. For example, the scene labels of the host vehicle going straight, the intersection, and the obstacle vehicle turning left are combined to obtain a semantic scene in which the host vehicle goes straight at an intersection and meets a left-turning obstacle vehicle.
Step S1023: and classifying the first automatic driving test data by utilizing the multi-class semantic scenes to determine the multi-class semantic scene data.
After the multiple types of semantic scenes are determined, the first automatic driving test data can be split to obtain the multiple types of semantic scene data.
In this embodiment, semantic parsing is first performed on the first automatic driving test data to determine a plurality of scene labels, and the plurality of scene labels are then used to determine multiple types of semantic scenes, which improves the accuracy of semantic scene classification; the multiple types of semantic scenes are then used to classify the first automatic driving test data and determine the multiple types of semantic scene data, which improves the accuracy of the obtained semantic scene data.
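Steps S1021 to S1023 can be sketched as follows (a sketch only, not the patent's implementation: the frame layout, tag names, and scene definitions are assumptions for illustration):

```python
def parse_scene_tags(frame):
    """S1021: hypothetical semantic parsing of one data frame into scene tags."""
    tags = set()
    if frame.get("host_action"):
        tags.add(frame["host_action"])       # e.g. "host_straight"
    if frame.get("obstacle_action"):
        tags.add(frame["obstacle_action"])   # e.g. "obstacle_left_turn"
    if frame.get("at_intersection"):         # derived from map data
        tags.add("intersection")
    return tags

# S1022: each semantic scene is a combination of at least two scene tags.
semantic_scenes = {
    "straight_meet_left_turn": {"host_straight", "intersection", "obstacle_left_turn"},
    "straight_cut_in": {"host_straight", "obstacle_cut_in"},
}

def classify(frames):
    """S1023: split the test data into per-semantic-scene data."""
    scene_data = {name: [] for name in semantic_scenes}
    for frame in frames:
        tags = parse_scene_tags(frame)
        for name, required in semantic_scenes.items():
            if required <= tags:             # all required tags present
                scene_data[name].append(frame)
    return scene_data

frames = [
    {"host_action": "host_straight", "obstacle_action": "obstacle_left_turn",
     "at_intersection": True},
    {"host_action": "host_straight", "obstacle_action": "obstacle_cut_in"},
]
data = classify(frames)
```

Here the first frame carries the tags of the host vehicle going straight, the intersection, and an obstacle vehicle turning left, so it falls into the combined semantic scene of the host vehicle going straight at an intersection and meeting a left-turning obstacle vehicle.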
In one embodiment, before determining a first difference value between a first index value of the parameter level scenario data of the first autopilot test data and a first index value of the parameter level scenario data of the second autopilot test data, the method further comprises: determining a first index value of a plurality of interval data of target parameter level scene data;
the target parameter level scene data is any parameter level scene data in the parameter level scene data of the multiple types of semantic scene data, and the first index value of the target parameter level scene data comprises first index values of multiple sections of data of the target parameter level scene data.
For the target parameter level scene data, interval division may be performed based on preset division intervals of the target parameter to obtain a plurality of interval data. For example, for the parameter of obstacle vehicle cut-in speed, the cut-in speed level scene data may be divided into a plurality of speed interval data according to the magnitude of the cut-in speed. For instance, the cut-in speed may be divided into an interval of (0 km/h, 10 km/h) and an interval of (10 km/h, 30 km/h), where km/h denotes kilometers per hour, and the parameter level scene data of the cut-in speed are divided according to these intervals, so that the obtained plurality of interval data comprise the (0 km/h, 10 km/h) data and the (10 km/h, 30 km/h) data, the (0 km/h, 10 km/h) data corresponding to one first index value and the (10 km/h, 30 km/h) data corresponding to another first index value.
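The interval division above can be sketched as follows (an illustrative sketch, not the patent's implementation: the record layout is hypothetical, the intervals are assumed half-open, and the "first index value" is taken here to be a simple success rate):

```python
# Preset division intervals of the target parameter (obstacle cut-in speed, km/h).
intervals = [(0.0, 10.0), (10.0, 30.0)]

records = [  # (obstacle cut-in speed in km/h, test passed?) -- hypothetical data
    (4.2, True), (8.9, True), (9.5, False),
    (12.0, True), (25.3, True), (28.7, True),
]

def index_per_interval(records, intervals):
    """First index value (success rate) of each interval data."""
    result = {}
    for lo, hi in intervals:
        bucket = [ok for speed, ok in records if lo < speed <= hi]
        result[(lo, hi)] = sum(bucket) / len(bucket) if bucket else None
    return result

index_values = index_per_interval(records, intervals)
```

Each interval thus carries its own first index value, which is what the first difference values are later computed over, interval by interval.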
In this embodiment, the parameter level scene data may be further divided into intervals: a first index value of each of the plurality of interval data of the parameter level scene data is determined first, and the first difference value between the first index value of the parameter level scene data of the first automatic driving test data and that of the second automatic driving test data is then determined according to the first index values of the plurality of interval data of the target parameter level scene data. A more refined difference can thus be obtained, achieving a more refined and accurate index value comparison under the two version algorithms.
The following describes a process of the automatic driving test method in an embodiment, and the automatic driving test method can be applied to an automatic driving scenario.
To test the automatic driving algorithm of an automatic driving vehicle, a commonly used approach is to have the vehicle actually drive on roads under two different versions of the algorithm, obtaining automatic driving data under each version, namely the first automatic driving test data and the second automatic driving test data, and to observe the behavior output by the two algorithm versions under the relevant inputs. An actual road test is a continuous process in the time dimension, and the scenes encountered do not change with subjective human will, so the two road tests of the two algorithm versions obviously cannot receive identical inputs. From the perspective of probability, however, when the road test mileage reaches a certain scale, the probability of each scene appearing tends to be stable, and macroscopically the inputs of the two road tests can be considered consistent. That is, the semantic scenes under the two algorithm versions are the same and the parameter level scenes under the two versions are the same, while the data recorded under the same semantic scene or parameter level scene differ between the two versions. However, this consistency in macroscopic probability makes the algorithm difficult to interpret; for developers, interpretability of a macroscopic index means that changes in the index can be depicted at a finer granularity, so that the effects of the algorithms can be compared under the same input conditions.
To make the effect index interpretable, the method of the embodiment of the application classifies the continuous automatic driving test data at the semantic level and the parameter level, depicting the continuous road test data at a finer granularity as semantic level and parameter level scene data, and compares the effect of the algorithms on the classified scene data, thereby solving the problem of determining, once an index is found to change, in which semantic scenes and which parameter level scenes the change occurs.
First, the first automatic driving test data of a large-scale road test under the first version algorithm are obtained. A full amount of real scene data, namely a scene data full set, which can be recorded as road_test, can be obtained based on the first automatic driving test data; the first automatic driving test data comprise the full amount of real scene data. The first index value of the first automatic driving test data can be understood as the first index value of the scene data full set, and the first index is the index set that characterizes the performance of the first version algorithm under the input of this scene data full set.
Explaining the first index means splitting the full real scene data road_test at a finer granularity: the full real scene data road_test are split into a plurality of semantic scene data, and each semantic scene data is further split into a plurality of parameter levels; that is, two layers of scene data are obtained by splitting, as shown in fig. 3.
An autonomous vehicle road test generates temporally continuous data, which may include the location and speed of the autonomous vehicle, operating state information, and perceived ambient traffic environment data (e.g., other traffic participants, signal lights, etc.); semantic scene splitting is essentially the process of semantically classifying these continuous data. The semantic classification process is mainly divided into two steps. First, the automatic driving test data from the road test process are parsed, and fine-grained scene tags are obtained in combination with map data, such as the host vehicle going straight, a lane change, an obstacle vehicle cut-in, an obstacle vehicle left turn, and the like. Second, the tags are combined through an event expression to obtain the desired semantic scene, such as the scene in which the host vehicle passes a left-turning obstacle vehicle at an intersection (combining the scene tags of the host vehicle passing through the intersection and the obstacle vehicle turning left). By combining a plurality of semantic level scenes, the full amount of scenes of one complete road test can be split and depicted.
For each semantic scene, the key characteristic parameters of the semantic scene are analyzed according to requirements; for example, for the semantic scene of a vehicle cutting in while the host vehicle goes straight at a non-intersection, the speed of the automatic driving vehicle, the cut-in distance, and the cut-in angle can be selected as key parameters, and the parameter level scene data are then calculated from the data of each scene. Through analysis of the parameter distribution of scenes at the same semantic level, each parameter is divided into intervals, for example the obstacle vehicle cut-in speed is divided into (0 km/h, 10 km/h) and (10 km/h, 30 km/h), and the like.
The semantic scene mainly focuses on behavior-level characteristics of the automatic driving vehicle and the other traffic participants, such as going straight at an intersection, turning left at an intersection, being cut in while going straight at a non-intersection, and the like. By performing semantic parsing on the full real scene data road_test, the full real scene data road_test can be split at the semantic level and regarded as the joint probability distribution of a group of semantic scene data {semantic_scene_1, semantic_scene_2, …, semantic_scene_n}; that is, the full real scene data can be expressed through the joint probability distribution of this group of semantic scene data.
For certain semantic scene data semantic_scene_i (i.e. the i-th semantic scene data), classification and characterization can also be performed at the parameter level. First, a key parameter group {parameter_1, parameter_2, …, parameter_m} describing the semantic scene is extracted; for example, for the scene of a vehicle cutting in while the host vehicle goes straight at a non-intersection, parameters such as the host vehicle speed, the obstacle vehicle cut-in speed, the cut-in distance, and the cut-in angle can be selected for parameter classification. The semantic scene data can then be expressed by the joint probability distribution of the parameter level scene data of this group of parameters.
After the full real scene data under the first version algorithm and the full real scene data under the second version algorithm are obtained, the difference value Δp(indicator_set | road_test) between the two is calculated, giving the change of the first index over the full sets of the two road tests. After semantic level and parameter level classification of the full real scene data road_test under the first version algorithm, a first index value of the semantic scene data and a first index value of the parameter level scene data of the full real scene data under the first version algorithm can be obtained. Through a similar process, the first index value of the semantic scene data and the first index value of the parameter level scene data of the full real scene data under the second version algorithm can be obtained as well (the full real scene data here are based on the second automatic driving test data, whose first index value can be understood as the first index value of this full real scene data; the second automatic driving test data are obtained by performing a road test on the basis of the second version algorithm). A second difference value between the first index values of the semantic scene data under the two algorithm versions is calculated to obtain the change of the first index between the two road tests in the semantic scene, and a first difference value Δp(indicator_set | parameter) between the first index values of the parameter level scene data under the two algorithm versions is calculated to obtain the change of the first index between the two road tests in the parameter level scene.
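The three levels of comparison above can be sketched in one place (an illustrative sketch only: the index values, scene names, and interval labels are hypothetical, and the difference is taken as a plain subtraction of index values):

```python
def delta(v1, v2):
    """Change of a first index value between the two road tests, per key."""
    return {k: v1[k] - v2[k] for k in v1}

# Hypothetical first index values under the first / second version algorithm.
v1 = {
    "road_test": 0.88,                                   # full scene-data set
    "semantic": {"cut_in": 0.80, "left_turn": 0.90},     # per semantic scene
    "parameter": {("cut_in", "0-10km/h"): 0.75,          # per parameter level scene
                  ("cut_in", "10-30km/h"): 0.85},
}
v2 = {
    "road_test": 0.84,
    "semantic": {"cut_in": 0.72, "left_turn": 0.89},
    "parameter": {("cut_in", "0-10km/h"): 0.60,
                  ("cut_in", "10-30km/h"): 0.82},
}

dp_full = v1["road_test"] - v2["road_test"]            # Δp(indicator_set | road_test)
dp_semantic = delta(v1["semantic"], v2["semantic"])    # second difference values
dp_parameter = delta(v1["parameter"], v2["parameter"]) # first difference values
```

In this example the overall index change of 0.04 can be traced down to the cut-in semantic scene, and further to the low-speed cut-in interval, which is exactly the finer-grained attribution the method aims at.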
In other words, in the embodiment of the application, the change of the first index is analyzed at the semantic level by semantically classifying the scene data, and the semantic scene data can also be classified at the parameter level according to the selected key parameters, so that a more detailed and accurate comparison among algorithms of different versions is realized.
As shown in fig. 4, the present application further provides an automatic driving test apparatus 400 according to an embodiment of the present application, the apparatus including:
a first obtaining module 401, configured to obtain first autopilot test data of an autopilot vehicle under a first version algorithm;
a first classification module 402, configured to perform semantic classification on the first automatic driving test data, and determine multiple types of semantic scene data;
a second classification module 403, configured to perform parameter classification on each type of semantic scene data, and determine multiple parameter-level scene data of each type of semantic scene data;
a first determining module 404, configured to determine a first difference between a first index value of the parameter level scenario data of the first autopilot test data and a first index value of the parameter level scenario data of the second autopilot test data, where the first version algorithm is an algorithm after iteration of the second version algorithm, and the second autopilot test data is autopilot test data of an autopilot vehicle under the second version algorithm.
In one embodiment, the apparatus further comprises:
the second determination module is used for determining a second difference value between the first index value of the semantic scene data of the first automatic driving test data and the first index value of the semantic scene data of the second automatic driving test data.
As shown in fig. 5, in one embodiment, the first classification module 402 includes:
the semantic parsing module 4021 is configured to perform semantic parsing on the first automatic driving test data to determine a plurality of scene tags;
the scene determining module 4022 is configured to determine multiple types of semantic scenes according to the multiple scene tags, where any type of semantic scene is a combination of at least two scene tags in the multiple scene tags;
the classification submodule 4033 is configured to classify the first autopilot test data using the multiple types of semantic scenes, and determine multiple types of semantic scene data.
In one embodiment, the apparatus further comprises:
the third determination module is used for determining a first index value of a plurality of interval data of the target parameter level scene data;
the target parameter level scene data is any parameter level scene data in the parameter level scene data of the multiple types of semantic scene data, and the first index value of the target parameter level scene data comprises first index values of multiple sections of data of the target parameter level scene data.
The automatic driving test device of each embodiment is a device for implementing the automatic driving test method of each embodiment, and has corresponding technical features and technical effects, which are not described herein again.
There is also provided, in accordance with an embodiment of the present application, an electronic device, a readable storage medium, and a computer program product.
The non-transitory computer readable storage medium of an embodiment of the present application stores computer instructions for causing a computer to perform the automated driving test method provided herein.
The computer program product of the embodiments of the present application includes a computer program, and the computer program is used to enable a computer to execute the automatic driving test method provided by the embodiments of the present application.
FIG. 6 illustrates a schematic block diagram of an example electronic device 600 that can be used to implement embodiments of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 6, the electronic device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic device 600 can also be stored. The computing unit 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Various components in the electronic device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the electronic device 600 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 601 executes the respective methods and processes described above, such as the automatic driving test method. For example, in some embodiments, the automatic driving test method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the automatic driving test method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the automatic driving test method in any other suitable manner (e.g., by means of firmware).

Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof.
These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present application may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this application, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), the internet, and blockchain networks.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system and overcomes the defects of high management difficulty and weak service expansibility in traditional physical host and Virtual Private Server (VPS) services. The server may also be a server of a distributed system, or a server incorporating a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present application is not limited thereto as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (11)

1. An automated driving test method, the method comprising:
acquiring first automatic driving test data of an automatic driving vehicle under a first version algorithm;
performing semantic classification on the first automatic driving test data to determine multi-class semantic scene data;
performing parameter classification on each type of semantic scene data respectively, and determining a plurality of parameter level scene data of each type of semantic scene data;
determining a first difference value between a first index value of parameter level scene data of the first automatic driving test data and a first index value of parameter level scene data of second automatic driving test data, wherein the first version algorithm is an algorithm after iteration of a second version algorithm, and the second automatic driving test data is automatic driving test data of the automatic driving vehicle under the second version algorithm.
2. The method of claim 1, wherein the semantically classifying the first autopilot test data, after determining the multi-class semantic scene data, further comprises:
determining a second difference between a first index value of semantic scene data of the first autopilot test data and a first index value of semantic scene data of the second autopilot test data.
3. The method of claim 1, wherein the semantically classifying the first autopilot test data and determining the multi-class semantic scene data comprises:
performing semantic analysis on the first automatic driving test data to determine a plurality of scene labels;
determining multiple types of semantic scenes according to the scene labels, wherein any type of semantic scene is the combination of at least two scene labels in the scene labels;
and classifying the first automatic driving test data by utilizing the multiple types of semantic scenes to determine the multiple types of semantic scene data.
4. The method of claim 1, wherein prior to determining the first difference between the first indicator value of the parameter level scenario data of the first autopilot test data and the first indicator value of the parameter level scenario data of the second autopilot test data, further comprising:
determining a first index value of a plurality of interval data of target parameter level scene data;
the target parameter level scene data is any parameter level scene data in the parameter level scene data of the multiple types of semantic scene data, and the first index value of the target parameter level scene data comprises first index values of multiple sections of data of the target parameter level scene data.
5. An autopilot testing apparatus, the apparatus comprising:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring first automatic driving test data of an automatic driving vehicle under a first version algorithm;
the first classification module is used for performing semantic classification on the first automatic driving test data and determining multi-class semantic scene data;
the second classification module is used for performing parameter classification on each type of semantic scene data respectively and determining a plurality of parameter level scene data of each type of semantic scene data;
the first determining module is configured to determine a first difference between a first index value of parameter level scene data of the first autopilot test data and a first index value of parameter level scene data of second autopilot test data, where the first version algorithm is an algorithm after iteration of a second version algorithm, and the second autopilot test data is autopilot test data of the autopilot vehicle under the second version algorithm.
6. The apparatus of claim 5, wherein the apparatus further comprises:
the second determination module is used for determining a second difference value between the first index value of the semantic scene data of the first automatic driving test data and the first index value of the semantic scene data of the second automatic driving test data.
7. The apparatus of claim 5, wherein the first classification module comprises:
the semantic analysis module is used for performing semantic analysis on the first automatic driving test data to determine a plurality of scene labels;
a scene determining module, configured to determine multiple types of semantic scenes according to the multiple scene tags, where any type of semantic scene is a combination of at least two scene tags in the multiple scene tags;
and the classification submodule is used for classifying the first automatic driving test data by utilizing the multi-class semantic scenes to determine the multi-class semantic scene data.
8. The apparatus of claim 5, wherein the apparatus further comprises:
the third determination module is used for determining a first index value of a plurality of interval data of the target parameter level scene data;
the target parameter level scene data is any parameter level scene data in the parameter level scene data of the multiple types of semantic scene data, and the first index value of the target parameter level scene data comprises first index values of multiple sections of data of the target parameter level scene data.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the autopilot testing method of any of claims 1-4.
10. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to execute the automated driving test method of any one of claims 1-4.
11. A computer program product comprising a computer program which, when executed by a processor, implements an autopilot testing method according to any one of claims 1-4.
CN202011546382.XA 2020-12-24 2020-12-24 Automatic driving test method and device and electronic equipment Active CN112559371B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011546382.XA CN112559371B (en) 2020-12-24 2020-12-24 Automatic driving test method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112559371A true CN112559371A (en) 2021-03-26
CN112559371B CN112559371B (en) 2023-07-28

Family

ID=75030548

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011546382.XA Active CN112559371B (en) 2020-12-24 2020-12-24 Automatic driving test method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112559371B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113538734A (en) * 2021-07-30 2021-10-22 阿波罗智联(北京)科技有限公司 Method, apparatus, electronic device and storage medium for processing driving data
CN115828638A (en) * 2023-01-09 2023-03-21 西安深信科创信息技术有限公司 Automatic driving test scene script generation method and device and electronic equipment
EP4151979A3 (en) * 2021-12-28 2023-06-28 Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Test method and test apparatus for automatic driving, and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017051120A1 (en) * 2015-09-24 2017-03-30 Renault S.A.S Driving assistance device for estimating the danger of a situation
US20190057509A1 (en) * 2017-08-16 2019-02-21 Nvidia Corporation Learning rigidity of dynamic scenes for three-dimensional scene flow estimation
US20190213103A1 (en) * 2018-01-08 2019-07-11 Waymo Llc Software validation for autonomous vehicles
CN111122175A (en) * 2020-01-02 2020-05-08 北京百度网讯科技有限公司 Method and device for testing automatic driving system
CN111290370A (en) * 2020-03-03 2020-06-16 腾讯科技(深圳)有限公司 Automatic driving performance detection method and device
CN111680362A (en) * 2020-05-29 2020-09-18 北京百度网讯科技有限公司 Method, device and equipment for acquiring automatic driving simulation scene and storage medium
CN111859528A (en) * 2020-06-05 2020-10-30 北京百度网讯科技有限公司 Simulation test method, device and storage medium for automatic driving strategy control module
CN112001097A (en) * 2020-10-29 2020-11-27 深圳裹动智驾科技有限公司 Method for analyzing and visualizing results of automatic driving simulation and computer device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN JIANZHONG; WU FAYONG; WANG JIANWEI: "Research on the Evaluation System for Mass-Production Testing of Automatic Driving Systems" (自动驾驶系统量产测试评价体系探究), China Automotive (中国汽车), no. 08 *

Also Published As

Publication number Publication date
CN112559371B (en) 2023-07-28

Similar Documents

Publication Publication Date Title
CN113408141B (en) Automatic driving test method and device and electronic equipment
US20230138650A1 (en) Test method for automatic driving, and electronic device
CN112559371A (en) Automatic driving test method and device and electronic equipment
CN113361578B (en) Training method and device for image processing model, electronic equipment and storage medium
CN114282670A (en) Neural network model compression method, device and storage medium
CN112559378A (en) Automatic driving algorithm evaluation method and device and scene library generation method and device
CN114648676A (en) Point cloud processing model training and point cloud instance segmentation method and device
CN114676178A (en) Accident detection method and device and electronic equipment
CN114596709A (en) Data processing method, device, equipment and storage medium
CN114973656A (en) Method, device, equipment, medium and product for evaluating traffic interaction performance
CN114987494A (en) Driving scene processing method and device and electronic equipment
CN114970737A (en) Accident reason determining method, device, equipment and storage medium
CN113850297A (en) Road data monitoring method and device, electronic equipment and storage medium
CN113887101A (en) Visualization method and device of network model, electronic equipment and storage medium
CN113807391A (en) Task model training method and device, electronic equipment and storage medium
CN113032251A (en) Method, device and storage medium for determining service quality of application program
CN112541708B (en) Index determination method and device and electronic equipment
CN114677570B (en) Road information updating method, device, electronic equipment and storage medium
US11772681B2 (en) Method and apparatus for processing autonomous driving simulation data, and electronic device
CN113836358A (en) Data processing method and device, electronic equipment and storage medium
CN116401111B (en) Function detection method and device of brain-computer interface, electronic equipment and storage medium
CN113947897B (en) Method, device and equipment for acquiring road traffic condition and automatic driving vehicle
CN114572233B (en) Model set-based prediction method, electronic equipment and automatic driving vehicle
CN114093170B (en) Generation method, system and device of annunciator control scheme and electronic equipment
CN114882309A (en) Training and target re-recognition method and device for target re-recognition model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant