CN114154232A - Automatic driving scene recurrence detection method, device, equipment and storage medium - Google Patents

Automatic driving scene recurrence detection method, device, equipment and storage medium

Info

Publication number
CN114154232A
CN114154232A (application number CN202111285256.8A)
Authority
CN
China
Prior art keywords
vehicle
real
difference
acceleration
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111285256.8A
Other languages
Chinese (zh)
Inventor
吴佳晨
郑子威
谭伟华
韩旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Weride Technology Co Ltd
Original Assignee
Guangzhou Weride Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Weride Technology Co Ltd filed Critical Guangzhou Weride Technology Co Ltd
Priority to CN202111285256.8A
Publication of CN114154232A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/10 Geometric CAD
    • G06F 30/15 Vehicle, aircraft or watercraft design
    • G06F 30/20 Design optimisation, verification or simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Automation & Control Theory (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses an automatic driving scene recurrence detection method, device, equipment and storage medium, comprising the following steps: S1, acquiring real operation information of an automatic driving vehicle in a real scene and simulation operation information in a simulation test; S2, comparing the real operation information with the simulation operation information to obtain a comparison result; S3, judging whether the simulation test reproduces the real scene according to the comparison result: if the comparison result is within the comparison value range, judging that the simulation test reproduces the real scene; and if the comparison result is outside the comparison value range, judging that the simulation test does not reproduce the real scene. The method and the device can judge the recurrence of an automatic driving scene quickly and in a timely manner, help save labor cost, are scalable, and ensure a uniform judgment standard.

Description

Automatic driving scene recurrence detection method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of automatic driving, in particular to a method, a device, equipment and a storage medium for detecting recurrence of an automatic driving scene.
Background
In the field of automatic driving, a simulation test based on a real data scene can present or reflect, to a certain extent, the decisions and behavior of an automatic driving vehicle in the real scene data. Being able to reproduce the real data scene in the simulation test is the basis for subsequent test development. Therefore, whether a real automatic driving data scene can be reproduced in a simulation test is a very important detection item in the automatic driving field. Existing detection of real data scene reproduction mainly relies on manual judgment, which causes the following problems:
1. Humans cannot process large batches of test scenarios simultaneously. As the scale of automatic driving road testing continues to expand, a large number of test scenes derived from real data are generated every day, and manual judgment cannot keep up in a timely manner.
2. Subjective human judgment consumes a large amount of labor cost and does not scale.
3. Different people judge scene recurrence with different biases, making a unified standard difficult to guarantee.
Disclosure of Invention
Therefore, the technical problem addressed by the embodiments of the present application is to provide an automatic driving scene recurrence detection method, apparatus, device and storage medium that can determine the recurrence of an automatic driving scene quickly and in a timely manner, help save labor cost, scale well, and ensure a uniform judgment standard.
In order to solve the technical problem, the technical scheme adopted by the application specifically comprises the following steps:
in one aspect, an embodiment of the present application provides an automatic driving scene recurrence detection method, including:
s1, acquiring real operation information of the automatic driving vehicle in a real scene and simulation operation information in a simulation test;
s2, comparing the real operation information with the simulation operation information to obtain a comparison result;
s3, judging whether the simulation test reproduces the real scene according to the comparison result:
if the comparison result is within the comparison value range, judging that the simulation test reproduces a real scene;
and if the comparison result is out of the comparison value range, judging that the simulation test does not reproduce the real scene.
Further, the real operation information comprises real vehicle behavior information and real planned path information; the simulation operation information comprises simulation vehicle behavior information and simulation planning path information; the comparison result comprises a vehicle behavior difference degree and a planning path difference degree;
the S1 includes:
s11, acquiring real vehicle behavior information and real planning path information of the automatic driving vehicle in a real scene;
s12, acquiring the simulation vehicle behavior information and the simulation planning path information of the automatic driving vehicle in the simulation scene;
the S2 includes:
s21, comparing the real vehicle behavior information with the simulated vehicle behavior information to obtain the vehicle behavior difference degree;
s22, comparing the real planning path information with the simulation planning path information to obtain the planning path difference degree;
the S3 includes:
judging whether the simulation test reproduces a real scene according to the vehicle behavior difference degree and the planning path difference degree:
if the vehicle behavior difference degree and the planned path difference degree are respectively in the vehicle behavior comparison value range and the planned path comparison value range, judging that the simulation test reproduces a real scene;
and if the vehicle behavior difference degree and/or the planned path difference degree are respectively out of the vehicle behavior comparison value range and/or the planned path comparison value range, judging that the simulation test does not reproduce the real scene.
Still further, the real vehicle behavior information includes a real vehicle acceleration, a real vehicle speed, real vehicle coordinates and a real vehicle steering wheel angle at time t; the simulated vehicle behavior information includes a simulated vehicle acceleration, a simulated vehicle speed, simulated vehicle coordinates and a simulated vehicle steering wheel angle;
the S21 includes:
s211, comparing the real vehicle acceleration with the simulated vehicle acceleration to obtain an acceleration difference degree;
s212, comparing the product of the real vehicle speed and the real vehicle steering wheel angle with the product of the simulated vehicle speed and the simulated vehicle steering wheel angle to obtain the difference degree of the product of the speed and the steering wheel angle;
s213, comparing the real vehicle coordinates with the simulated vehicle coordinates to obtain the vehicle position difference degree;
the vehicle behavior comparison value comprises a preset acceleration difference threshold value, a difference threshold value of the product of a preset speed and a steering wheel angle and a preset vehicle position difference threshold value; the comparison value of the planned path comprises a preset difference threshold value of the planned path;
the step of judging whether the simulation test reproduces the real scene according to the vehicle behavior difference degree and the planned path difference degree comprises the following steps:
if the acceleration difference, the difference of the product of the speed and the steering wheel angle, the vehicle position difference and the planned path difference are respectively and correspondingly smaller than or equal to a preset acceleration difference threshold, a difference threshold of the product of the preset speed and the steering wheel angle, a preset vehicle position difference threshold and a preset planned path difference threshold, judging that the simulation test reproduces a real scene;
and if the difference degree of the acceleration difference degree and/or the product of the speed and the steering wheel angle and/or the difference degree of the vehicle position difference degree and/or the difference degree of the planned path is respectively and correspondingly greater than a preset acceleration difference degree threshold value and/or a difference degree threshold value of the product of the preset speed and the steering wheel angle and/or a preset vehicle position difference degree threshold value and/or a preset planned path difference degree threshold value, judging that the simulation test does not reproduce the real scene.
Preferably, the acceleration difference degree comprises an acceleration maximum average difference degree and an acceleration maximum difference value; the preset acceleration difference threshold comprises a preset acceleration maximum average difference threshold and a preset acceleration maximum difference threshold;
the S211 includes:
comparing the real vehicle acceleration with the simulated vehicle acceleration to obtain the maximum average difference degree and the maximum difference value of the acceleration;
the step of judging whether the simulation test reproduces the real scene according to the vehicle behavior difference degree and the planned path difference degree comprises the following steps:
if the maximum average difference of the accelerated speeds, the maximum difference of the accelerated speeds, the difference of the product of the speeds and the steering wheel angles, the difference of the vehicle positions and the difference of the planned paths are respectively and correspondingly less than or equal to a preset maximum average difference of the accelerated speeds, a preset maximum difference of the accelerated speeds, a difference of the product of the preset speeds and the steering wheel angles, a preset difference of the vehicle positions and a preset difference of the planned paths, judging that the simulation test reproduces a real scene;
and if the acceleration maximum average difference and/or the acceleration maximum difference and/or the difference of the speed and/or the product of the steering wheel angle and/or the vehicle position difference and/or the planned path difference are/is respectively and correspondingly greater than a preset acceleration maximum average difference threshold and/or a preset acceleration maximum difference threshold and/or a difference threshold of the product of the preset speed and the steering wheel angle and/or a preset vehicle position difference threshold and/or a preset planned path difference threshold, judging that the simulation test does not reproduce the real scene.
More preferably, the comparing the real vehicle acceleration with the simulated vehicle acceleration to obtain the maximum average difference degree of the acceleration includes:
(1) acquiring real vehicle acceleration and simulated vehicle acceleration at the moment t in a scene;
(2) acquiring k frames of real vehicle acceleration and k frames of simulated vehicle acceleration in a sliding window with the duration of n seconds corresponding to the time t in a scene according to the step (1);
(3) according to
DIFF_avg_accel(window) = (1/k) · Σ_{t=t0}^{t0+n} |accel_o(t) − accel_s(t)|
obtaining the average difference degree between the real vehicle acceleration and the simulated vehicle acceleration in the sliding window; wherein accel_o(t) is the real vehicle acceleration at time t, accel_s(t) is the simulated vehicle acceleration at time t, t0 is the starting time of the sliding window, t0+n is the ending time of the sliding window, n is a positive number, and k is a natural number;
(4) traversing all sliding windows in the scene, and repeating the steps (1) to (3) to obtain the average difference degree between the real vehicle acceleration and the simulated vehicle acceleration of all the sliding windows;
(5) and (4) acquiring the maximum value of the average difference degree between the real vehicle acceleration and the simulated vehicle acceleration as the maximum average difference degree of the acceleration.
More preferably, the comparing the real vehicle acceleration with the simulated vehicle acceleration to obtain the maximum difference of the accelerations includes:
(1) acquiring real vehicle acceleration and simulated vehicle acceleration at the moment t in a scene;
(2) according to DIFF_accel(t) = |accel_o(t) − accel_s(t)|, obtaining the difference value between the real vehicle acceleration and the simulated vehicle acceleration at time t; wherein accel_o(t) is the real vehicle acceleration at time t and accel_s(t) is the simulated vehicle acceleration at time t;
(3) traversing all moments in the scene, repeating the steps (1) and (2), and obtaining the difference value between the real vehicle acceleration and the simulated vehicle acceleration at all moments;
(4) and (4) acquiring the maximum difference value of the real vehicle acceleration and the simulated vehicle acceleration from the step (3) as the maximum difference value of the acceleration.
Preferably, the S212 includes:
(1) acquiring the real vehicle speed, the real vehicle steering wheel corner, the simulated vehicle speed and the simulated vehicle steering wheel corner at the moment t in the scene;
(2) acquiring a product of k frames of real vehicle speed and real vehicle steering wheel angle and a product of k frames of simulated vehicle speed and simulated vehicle steering wheel angle in a sliding window with the duration of n seconds corresponding to the time t according to the step (1);
(3) according to
DIFF_avg_speed·swa(window) = (1/k) · Σ_{t=t0}^{t0+n} |speed_o(t)·swa_o(t) − speed_s(t)·swa_s(t)|
obtaining the average difference degree between the product of the real vehicle speed and steering wheel angle and the product of the simulated vehicle speed and steering wheel angle in the sliding window; wherein speed_o(t) is the real vehicle speed at time t, swa_o(t) is the real vehicle steering wheel angle at time t, speed_s(t) is the simulated vehicle speed at time t, swa_s(t) is the simulated vehicle steering wheel angle at time t, t0 is the starting time of the sliding window, t0+n is the ending time of the sliding window, n is a positive number, and k is a natural number;
(4) traversing all sliding windows in the scene, repeating the steps (1) to (3), and obtaining the average difference degree of the product of the real vehicle speed and the steering wheel angle of all the sliding windows and the product of the simulated vehicle speed and the steering wheel angle;
(5) and (4) acquiring the maximum value of the average difference of the product of the real vehicle speed and the steering wheel angle and the product of the simulated vehicle speed and the steering wheel angle as the difference of the product of the speed and the steering wheel angle.
Preferably, the vehicle position difference degree comprises a vehicle coordinate maximum distance difference degree and a vehicle track Euclidean distance; the vehicle position difference threshold comprises a vehicle coordinate maximum distance difference threshold and a vehicle track Euclidean distance threshold;
the S213 includes:
comparing the real vehicle coordinates with the simulated vehicle coordinates to obtain the maximum difference degree of the vehicle coordinate distance and the vehicle Euclidean distance;
the step of judging whether the simulation test reproduces the real scene according to the vehicle behavior difference degree and the planned path difference degree comprises the following steps:
if the acceleration difference, the difference of the product of the speed and the steering wheel angle, the maximum difference of the vehicle coordinate distance and the vehicle Euclidean distance are respectively and correspondingly smaller than or equal to a preset acceleration difference threshold, a difference threshold of the product of the preset speed and the steering wheel angle, a preset vehicle coordinate maximum distance difference threshold, a preset vehicle track Euclidean distance threshold and a preset planned path difference threshold, judging that the simulation test reappears a real scene;
and if the acceleration difference and/or the difference of the product of the speed and the steering wheel angle and/or the vehicle position difference and/or the planned path difference are/is respectively and correspondingly greater than a preset acceleration difference threshold value and/or a difference threshold value of the product of the preset speed and the steering wheel angle and/or a preset vehicle coordinate maximum distance difference threshold value and/or a preset vehicle track Euclidean distance threshold value and/or a preset planned path difference threshold value, judging that the simulation test does not reproduce the real scene.
More preferably, the real vehicle coordinates comprise a real vehicle position abscissa and a real vehicle position ordinate; the simulated vehicle coordinates comprise a simulated vehicle position abscissa and a simulated vehicle position ordinate;
comparing the real vehicle coordinate with the simulated vehicle coordinate to obtain the maximum difference of the vehicle coordinate distance, comprising the following steps:
(1) acquiring a real vehicle position abscissa, a real vehicle position ordinate, a simulated vehicle position abscissa and a simulated vehicle position ordinate at the moment t in a scene;
(2) according to
DIFF_pos(t) = sqrt( (pos_o(t).x − pos_s(t).x)² + (pos_o(t).y − pos_s(t).y)² )
obtaining the distance difference degree of the vehicle coordinates at time t; wherein pos_o(t).x is the real vehicle position abscissa at time t, pos_o(t).y is the real vehicle position ordinate at time t, pos_s(t).x is the simulated vehicle position abscissa at time t, and pos_s(t).y is the simulated vehicle position ordinate at time t;
(3) traversing all moments in the scene, repeating the steps (1) and (2), and obtaining the distance difference of the vehicle coordinates at all moments;
(4) and (4) acquiring the maximum value of the distance difference degree of the vehicle coordinates from the step (3) as the maximum difference degree of the distance of the vehicle coordinates.
More preferably, the real vehicle coordinates comprise a real vehicle position abscissa and a real vehicle position ordinate; the simulated vehicle coordinates comprise a simulated vehicle position abscissa and a simulated vehicle position ordinate;
comparing the real vehicle coordinates with the simulated vehicle coordinates to obtain the Euclidean distance of the vehicle track, comprising the following steps of:
(1) acquiring a real vehicle position abscissa, a real vehicle position ordinate, a simulated vehicle position abscissa and a simulated vehicle position ordinate at the moment t in a scene;
(2) according to
DIFF_track = sqrt( Σ_{t=1}^{n} [ (pos_o(t).x − pos_s(t).x)² + (pos_o(t).y − pos_s(t).y)² ] )
obtaining the Euclidean distance of the vehicle track for the scene; wherein pos_o(t).x is the real vehicle position abscissa at time t, pos_o(t).y is the real vehicle position ordinate at time t, pos_s(t).x is the simulated vehicle position abscissa at time t, pos_s(t).y is the simulated vehicle position ordinate at time t, and n is a natural number.
Preferably, the real planned path information includes real planned path coordinates including a real planned path abscissa and a real planned path ordinate; the simulation planning path information comprises simulation planning path coordinates which comprise a simulation planning path abscissa and a simulation planning path ordinate;
the S21 includes:
(1) acquiring n real planning path coordinates and simulation planning path coordinates corresponding to t time in a scene;
(2) according to
DIFF_plan(t) = (1/n) · Σ_{i=1}^{n} sqrt( (plan_o,i(t).x − plan_s,i(t).x)² + (plan_o,i(t).y − plan_s,i(t).y)² )
obtaining the average planned-trajectory difference degree at time t; wherein plan_o,i(t).x is the ith real planned path abscissa at time t, plan_o,i(t).y is the ith real planned path ordinate at time t, plan_s,i(t).x is the ith simulated planned path abscissa at time t, plan_s,i(t).y is the ith simulated planned path ordinate at time t, n and i are natural numbers, and i is less than or equal to n;
(3) traversing all moments in the scene, and repeating the steps (1) and (2) to obtain the average planning track difference degree of each moment in the scene;
(4) and (4) acquiring the maximum value of the average planning track difference degree from the step (3) as the planning path difference degree.
Further, the comparison value is obtained by fitting on a manual judgment scene set.
On the other hand, an embodiment of the present application provides an automatic driving scene recurrence detection apparatus, including:
the acquisition module is used for acquiring real operation information of the automatic driving vehicle in a real scene and simulation operation information in a simulation test;
the processing module is used for comparing the real operation information with the simulation operation information to obtain a comparison result;
the judging module is used for judging whether the simulation test reproduces a real scene according to the comparison result: if the comparison result is within the comparison value range, judging that the simulation test reproduces a real scene; and if the comparison result is out of the comparison value range, judging that the simulation test does not reproduce the real scene.
In another aspect, an embodiment of the present application provides an apparatus, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor implements any one of the steps of the above-mentioned automatic driving scene recurrence detection method when executing the computer program.
In another aspect, an embodiment of the present application provides a storage medium, where a computer program is stored, and the computer program, when executed by a processor, implements the steps of any one of the above-mentioned methods for detecting the recurrence of an automatic driving scenario.
In summary, compared with the prior art, the beneficial effects brought by the technical scheme provided by the embodiment of the present application at least include:
1. the method comprises the following steps of firstly obtaining real operation information of the automatic driving vehicle in a real scene and simulation operation information in a simulation test, then comparing the real operation information with the simulation operation information to obtain a comparison result, and finally judging whether the simulation test reproduces the real scene according to the comparison result: if the comparison result is within the comparison value range, judging that the simulation test reproduces a real scene; and if the comparison result is out of the comparison value range, judging that the simulation test does not reproduce the real scene. Compared with the existing detection method for manually judging the real data scene of the simulation test recurrence, the embodiment of the application not only can quickly and timely automatically judge the recurrence situation of the automatic driving scene, but also is beneficial to saving the labor cost, has expandability and can ensure the uniformity of the judgment standard.
2. The embodiment of the application compares the real vehicle behavior information and the real planned path information in the real operation information with the simulated vehicle behavior information and the simulated planned path information in the simulation operation information, respectively, so as to obtain the vehicle behavior difference degree and the planned path difference degree; then, if the vehicle behavior difference degree and the planned path difference degree are respectively within the vehicle behavior comparison value range and the planned path comparison value range, the simulation test is judged to reproduce the real scene; if the vehicle behavior difference degree and/or the planned path difference degree fall outside the vehicle behavior comparison value range and/or the planned path comparison value range respectively, the simulation test is judged not to reproduce the real scene. In this way, not only the vehicle behavior but also the state of the vehicle's internal modules is considered, so the judgment draws on lower-level information than the existing manual judgment.
3. The comparison value of the embodiment of the application is obtained by fitting on the manual judgment scene set, so that the embodiment of the application has enough accuracy.
Drawings
Fig. 1 is a schematic flowchart of an automatic driving scene recurrence detection method according to a first exemplary embodiment of the present application.
Fig. 2 is a flowchart illustrating an automatic driving scene recurrence detection method according to a second exemplary embodiment of the present application.
Fig. 3 is a schematic structural diagram of an automatic driving scene recurrence detection apparatus according to a twelfth exemplary embodiment of the present application.
Fig. 4 is a schematic structural diagram of an apparatus according to a thirteenth exemplary embodiment of the present application.
Detailed Description
The embodiments described herein are intended only to explain the present application and do not limit it. After reading this specification, those skilled in the art may modify the embodiments as needed without inventive contribution; all such modifications remain protected by patent law within the scope of the claims of the present application.
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "comprises," "comprising," or any other variation thereof, in the description and claims of this application, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
The embodiments of the present application will be described in further detail with reference to the drawings attached hereto.
A first exemplary embodiment of an autonomous driving scenario recurrence detection method of the present application, shown in fig. 1, comprises: s1, acquiring real operation information of the automatic driving vehicle in a real scene and simulation operation information in a simulation test;
s2, comparing the real operation information with the simulation operation information to obtain a comparison result;
s3, judging whether the simulation test reproduces the real scene according to the comparison result:
if the comparison result is within the comparison value range, judging that the simulation test reproduces a real scene;
and if the comparison result is out of the comparison value range, judging that the simulation test does not reproduce the real scene.
The first exemplary embodiment of the present application first obtains real operation information of an autonomous vehicle in a real scene and simulation operation information in a simulation test, then compares the real operation information with the simulation operation information to obtain a comparison result, and finally determines whether the simulation test reproduces the real scene according to the comparison result: if the comparison result is within the comparison value range, judging that the simulation test reproduces a real scene; and if the comparison result is out of the comparison value range, judging that the simulation test does not reproduce the real scene. Compared with the existing detection method for manually judging the real data scene of the simulation test recurrence, the embodiment of the application not only can quickly and timely automatically judge the recurrence situation of the automatic driving scene, but also is beneficial to saving the labor cost, has expandability and can ensure the uniformity of the judgment standard.
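For illustration only, a minimal Python sketch of this acquire-compare-judge flow is given below; the data structure, field names and the metric/threshold plumbing (RunInfo, scene_reproduced, thresholds) are assumptions of this sketch and are not taken from the patent itself.

```python
from collections.abc import Callable
from dataclasses import dataclass

@dataclass
class RunInfo:
    accel: list[float]                      # per-frame acceleration
    speed: list[float]                      # per-frame speed
    swa: list[float]                        # per-frame steering wheel angle
    pos: list[tuple[float, float]]          # per-frame (x, y) coordinates
    plan: list[list[tuple[float, float]]]   # per-frame planned path points

def scene_reproduced(real: RunInfo, sim: RunInfo,
                     thresholds: dict[str, float],
                     metrics: dict[str, Callable[[RunInfo, RunInfo], float]]) -> bool:
    """S2 + S3: compare real and simulated operation information and judge
    whether every comparison result falls inside its comparison value range."""
    for name, metric in metrics.items():
        difference = metric(real, sim)      # S2: one comparison result
        if difference > thresholds[name]:   # S3: outside the range -> not reproduced
            return False
    return True
```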
Fig. 2 shows a second exemplary embodiment of the automatic driving scene recurrence detection method of the present application, which is improved on the basis of the first exemplary embodiment shown in Fig. 1; the specific improvements are as follows:
the real operation information comprises real vehicle behavior information and real planning path information; the simulation operation information comprises simulation vehicle behavior information and simulation planning path information; the comparison result comprises a vehicle behavior difference degree and a planning path difference degree;
the S1 includes:
s11, acquiring real vehicle behavior information and real planning path information of the automatic driving vehicle in a real scene;
s12, acquiring the simulation vehicle behavior information and the simulation planning path information of the automatic driving vehicle in the simulation scene;
the S2 includes:
s21, comparing the real vehicle behavior information with the simulated vehicle behavior information to obtain the vehicle behavior difference degree;
s22, comparing the real planning path information with the simulation planning path information to obtain the planning path difference degree;
the S3 includes:
judging whether the simulation test reproduces a real scene according to the vehicle behavior difference degree and the planning path difference degree:
if the vehicle behavior difference degree and the planned path difference degree are respectively in the vehicle behavior comparison value range and the planned path comparison value range, judging that the simulation test reproduces a real scene;
and if the vehicle behavior difference degree and/or the planned path difference degree are respectively out of the vehicle behavior comparison value range and/or the planned path comparison value range, judging that the simulation test does not reproduce the real scene.
In the second exemplary embodiment of the present application, the real vehicle behavior information and the real planned path information in the real operation information are respectively compared with the simulated vehicle behavior information and the simulated planned path information in the simulation operation information, so as to obtain a vehicle behavior difference degree and a planned path difference degree; then, if the vehicle behavior difference degree and the planned path difference degree are respectively within the vehicle behavior comparison value range and the planned path comparison value range, the simulation test is judged to reproduce the real scene; if the vehicle behavior difference degree and/or the planned path difference degree fall outside the vehicle behavior comparison value range and/or the planned path comparison value range respectively, the simulation test is judged not to reproduce the real scene. Since the planned path information is a direct embodiment of the decisions made by the automatic driving algorithm inside the vehicle, not only the vehicle behavior but also the state of the vehicle's internal modules is considered, so that lower-level information is available compared with the existing manual judgment.
It should be noted that, for those skilled in the art, the execution sequence of S11 and S12 is not limited to the sequence shown in fig. 2, S11 and S12 may be executed simultaneously, or S12 may be executed first and then S11 may be executed. The execution sequence of S21 and S22 is not limited to the sequence shown in fig. 2, and S21 and S22 may be executed simultaneously, or S22 may be executed first and then S21 may be executed.
The third embodiment of the present application is further modified from the second exemplary embodiment shown in fig. 2, and the specific modifications are as follows:
the real vehicle behavior information comprises real vehicle acceleration, real vehicle speed, real vehicle coordinates and real vehicle steering wheel rotation angle at the moment t; the simulated vehicle behavior information comprises simulated vehicle acceleration, real vehicle speed, real vehicle coordinates and real vehicle steering wheel turning angles;
the S21 includes:
s211, comparing the real vehicle acceleration with the simulated vehicle acceleration to obtain an acceleration difference degree;
s212, comparing the product of the real vehicle speed and the real vehicle steering wheel angle with the product of the simulated vehicle speed and the simulated vehicle steering wheel angle to obtain the difference degree of the product of the speed and the steering wheel angle;
s213, comparing the real vehicle coordinates with the simulated vehicle coordinates to obtain the vehicle position difference degree;
the vehicle behavior comparison value comprises a preset acceleration difference threshold value, a difference threshold value of the product of a preset speed and a steering wheel angle and a preset vehicle position difference threshold value; the comparison value of the planned path comprises a preset difference threshold value of the planned path;
the step of judging whether the simulation test reproduces the real scene according to the vehicle behavior difference degree and the planned path difference degree comprises the following steps:
if the acceleration difference, the difference of the product of the speed and the steering wheel angle, the vehicle position difference and the planned path difference are respectively and correspondingly smaller than or equal to a preset acceleration difference threshold, a difference threshold of the product of the preset speed and the steering wheel angle, a preset vehicle position difference threshold and a preset planned path difference threshold, judging that the simulation test reproduces a real scene;
and if the difference degree of the acceleration difference degree and/or the product of the speed and the steering wheel angle and/or the difference degree of the vehicle position difference degree and/or the difference degree of the planned path is respectively and correspondingly greater than a preset acceleration difference degree threshold value and/or a difference degree threshold value of the product of the preset speed and the steering wheel angle and/or a preset vehicle position difference degree threshold value and/or a preset planned path difference degree threshold value, judging that the simulation test does not reproduce the real scene.
It should be noted that S211 to S213 may be executed in any order, for example: S211, S212, S213; S211, S213, S212; S212, S211, S213; S212, S213, S211; S213, S211, S212; or S213, S212, S211.
According to the third exemplary embodiment of the application, the real vehicle behavior information and the simulated vehicle behavior information are compared by setting a plurality of measurement parameters (namely the acceleration difference degree, the difference degree of the product of speed and steering wheel angle, and the vehicle position difference degree), which, together with the planned path difference degree, greatly improves the judgment accuracy of the automatic driving scene recurrence detection method. Furthermore, the product of the speed and the steering wheel angle is selected as one of the parameters because, at a higher vehicle speed, the same steering wheel angle has a greater influence on the driving path of the vehicle.
The fourth exemplary embodiment of the present application is further improved on the basis of the third exemplary embodiment, and the specific improvements are as follows: the acceleration difference degree comprises an acceleration maximum average difference degree and an acceleration maximum difference value; the preset acceleration difference threshold comprises a preset acceleration maximum average difference threshold and a preset acceleration maximum difference threshold;
the S211 includes:
comparing the real vehicle acceleration with the simulated vehicle acceleration to obtain the maximum average difference degree and the maximum difference value of the acceleration;
the step of judging whether the simulation test reproduces the real scene according to the vehicle behavior difference degree and the planned path difference degree comprises the following steps:
if the maximum average difference of the accelerated speeds, the maximum difference of the accelerated speeds, the difference of the product of the speeds and the steering wheel angles, the difference of the vehicle positions and the difference of the planned paths are respectively and correspondingly less than or equal to a preset maximum average difference of the accelerated speeds, a preset maximum difference of the accelerated speeds, a difference of the product of the preset speeds and the steering wheel angles, a preset difference of the vehicle positions and a preset difference of the planned paths, judging that the simulation test reproduces a real scene;
and if the acceleration maximum average difference and/or the acceleration maximum difference and/or the difference of the speed and/or the product of the steering wheel angle and/or the vehicle position difference and/or the planned path difference are/is respectively and correspondingly greater than a preset acceleration maximum average difference threshold and/or a preset acceleration maximum difference threshold and/or a difference threshold of the product of the preset speed and the steering wheel angle and/or a preset vehicle position difference threshold and/or a preset planned path difference threshold, judging that the simulation test does not reproduce the real scene.
According to the fourth exemplary embodiment of the application, the maximum average difference of the acceleration and the maximum difference of the acceleration are set as the measurement parameters of the acceleration difference, so that the judgment accuracy of the automatic driving scene recurrence detection method is further improved.
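As a concrete but non-authoritative sketch of the judgment rule in this embodiment, the following Python function takes the five difference degrees as inputs and applies the preset thresholds; the names and the dictionary layout are illustrative assumptions of the sketch.

```python
def judge_reproduction(diff_accel_max_avg: float, diff_accel_max: float,
                       diff_speed_swa: float, diff_position: float,
                       diff_plan_path: float,
                       thresholds: dict[str, float]) -> bool:
    """Return True only if every difference degree is within its preset threshold."""
    checks = {
        "accel_max_avg": diff_accel_max_avg,
        "accel_max": diff_accel_max,
        "speed_swa": diff_speed_swa,
        "position": diff_position,
        "plan_path": diff_plan_path,
    }
    # The scene is judged as reproduced only when all metrics pass simultaneously;
    # a single exceeded threshold is enough to judge it as not reproduced.
    return all(value <= thresholds[name] for name, value in checks.items())
```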
The fifth exemplary embodiment of the present application is further improved on the basis of the fourth exemplary embodiment, and the specific improvements are as follows:
the step of comparing the real vehicle acceleration with the simulated vehicle acceleration to obtain the maximum average difference degree of the acceleration comprises the following steps:
(1) acquiring real vehicle acceleration and simulated vehicle acceleration at the moment t in a scene;
(2) acquiring k frames of real vehicle acceleration and k frames of simulated vehicle acceleration in a sliding window with the duration of n seconds corresponding to the time t in a scene according to the step (1);
(3) according to
DIFF_avg_accel(window) = (1/k) · Σ_{t=t0}^{t0+n} |accel_o(t) − accel_s(t)|
obtaining the average difference degree between the real vehicle acceleration and the simulated vehicle acceleration in the sliding window; wherein accel_o(t) is the real vehicle acceleration at time t, accel_s(t) is the simulated vehicle acceleration at time t, t0 is the starting time of the sliding window, t0+n is the ending time of the sliding window, n is a positive number, and k is a natural number;
(4) traversing all sliding windows in the scene, and repeating the steps (1) to (3) to obtain the average difference degree between the real vehicle acceleration and the simulated vehicle acceleration of all the sliding windows;
(5) and (4) acquiring the maximum value of the average difference degree between the real vehicle acceleration and the simulated vehicle acceleration as the maximum average difference degree of the acceleration.
By executing the method flow steps of the fifth exemplary embodiment of the present application, the acceleration maximum average difference degree can be obtained quickly and accurately. Moreover, the inventor's repeated tests show that the accuracy and efficiency of obtaining the acceleration maximum average difference degree are highest when n is 0.4 and k is 5.
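A minimal Python sketch of steps (1) to (5) is given below, assuming the real and simulated acceleration sequences are already time-aligned frame by frame and that each sliding window holds k frames (k = 5 corresponding to the preferred n = 0.4 s); function and parameter names are illustrative.

```python
def max_avg_accel_diff(accel_real: list[float], accel_sim: list[float],
                       k: int = 5) -> float:
    """Slide a k-frame window over the scene; in each window take the mean of
    |accel_o(t) - accel_s(t)|, then return the maximum of these window means."""
    assert len(accel_real) == len(accel_sim) and len(accel_real) >= k
    best = 0.0
    for start in range(len(accel_real) - k + 1):
        window_diffs = [abs(a - b) for a, b in
                        zip(accel_real[start:start + k], accel_sim[start:start + k])]
        best = max(best, sum(window_diffs) / k)
    return best
```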
The sixth exemplary embodiment of the present application is further improved on the basis of the fourth exemplary embodiment, and the specific improvements are as follows:
comparing the real vehicle acceleration with the simulated vehicle acceleration to obtain the maximum acceleration difference value, comprising the following steps:
(1) acquiring real vehicle acceleration and simulated vehicle acceleration at the moment t in a scene;
(2) according to DIFF_accel(t) = |accel_o(t) − accel_s(t)|, obtaining the difference value between the real vehicle acceleration and the simulated vehicle acceleration at time t; wherein accel_o(t) is the real vehicle acceleration at time t and accel_s(t) is the simulated vehicle acceleration at time t;
(3) traversing all moments in the scene, repeating the steps (1) and (2), and obtaining the difference value between the real vehicle acceleration and the simulated vehicle acceleration at all moments;
(4) and (4) acquiring the maximum difference value of the real vehicle acceleration and the simulated vehicle acceleration from the step (3) as the maximum difference value of the acceleration.
By executing the method flow steps of the sixth exemplary embodiment of the present application, the maximum acceleration difference value can be obtained quickly and accurately.
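A corresponding Python sketch of steps (1) to (4), again assuming frame-aligned sequences, may look as follows.

```python
def max_accel_diff(accel_real: list[float], accel_sim: list[float]) -> float:
    """Per DIFF_accel(t) = |accel_o(t) - accel_s(t)|, traverse all moments in the
    scene and return the maximum per-frame acceleration difference."""
    return max(abs(a - b) for a, b in zip(accel_real, accel_sim))
```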
The seventh exemplary embodiment of the present application is further improved on the basis of the third exemplary embodiment, and the specific improvements are as follows:
the S212 includes:
(1) acquiring the real vehicle speed, the real vehicle steering wheel corner, the simulated vehicle speed and the simulated vehicle steering wheel corner at the moment t in the scene;
(2) acquiring a product of k frames of real vehicle speed and real vehicle steering wheel angle and a product of k frames of simulated vehicle speed and simulated vehicle steering wheel angle in a sliding window with the duration of n seconds corresponding to the time t according to the step (1);
(3) according to
DIFF_avg_speed·swa(window) = (1/k) · Σ_{t=t0}^{t0+n} |speed_o(t)·swa_o(t) − speed_s(t)·swa_s(t)|
obtaining the average difference degree between the product of the real vehicle speed and steering wheel angle and the product of the simulated vehicle speed and steering wheel angle in the sliding window; wherein speed_o(t) is the real vehicle speed at time t, swa_o(t) is the real vehicle steering wheel angle at time t, speed_s(t) is the simulated vehicle speed at time t, swa_s(t) is the simulated vehicle steering wheel angle at time t, t0 is the starting time of the sliding window, t0+n is the ending time of the sliding window, n is a positive number, and k is a natural number;
(4) traversing all sliding windows in the scene, repeating the steps (1) to (3), and obtaining the average difference degree of the product of the real vehicle speed and the steering wheel angle of all the sliding windows and the product of the simulated vehicle speed and the steering wheel angle;
(5) and (4) acquiring the maximum value of the average difference of the product of the real vehicle speed and the steering wheel angle and the product of the simulated vehicle speed and the steering wheel angle as the difference of the product of the speed and the steering wheel angle.
By performing the method flow steps of the seventh exemplary embodiment of the present application, the difference degree of the product of the speed and the steering wheel angle can be obtained quickly and accurately. Moreover, the inventor's repeated tests verify that the accuracy and efficiency of obtaining this difference degree are highest when n is 0.4 and k is 5.
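Under the same frame-alignment assumption, a Python sketch of this sliding-window computation for S212 might read as follows; names are illustrative.

```python
def max_avg_speed_swa_diff(speed_real: list[float], swa_real: list[float],
                           speed_sim: list[float], swa_sim: list[float],
                           k: int = 5) -> float:
    """For each k-frame sliding window, average |speed_o*swa_o - speed_s*swa_s|
    and return the maximum window average over the whole scene."""
    prod_real = [v * a for v, a in zip(speed_real, swa_real)]
    prod_sim = [v * a for v, a in zip(speed_sim, swa_sim)]
    best = 0.0
    for start in range(len(prod_real) - k + 1):
        diffs = [abs(p - q) for p, q in
                 zip(prod_real[start:start + k], prod_sim[start:start + k])]
        best = max(best, sum(diffs) / k)
    return best
```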
The eighth exemplary embodiment of the present application is further improved on the basis of the third exemplary embodiment, and the specific improvements are as follows:
the vehicle position difference degree comprises a vehicle coordinate maximum distance difference degree and a vehicle track Euclidean distance; the vehicle position difference threshold comprises a vehicle coordinate maximum distance difference threshold and a vehicle track Euclidean distance threshold;
the S213 includes:
comparing the real vehicle coordinates with the simulated vehicle coordinates to obtain the maximum difference degree of the vehicle coordinate distance and the vehicle Euclidean distance;
the step of judging whether the simulation test reproduces the real scene according to the vehicle behavior difference degree and the planned path difference degree comprises the following steps:
if the acceleration difference, the difference of the product of the speed and the steering wheel angle, the maximum difference of the vehicle coordinate distance and the vehicle Euclidean distance are respectively and correspondingly smaller than or equal to a preset acceleration difference threshold, a difference threshold of the product of the preset speed and the steering wheel angle, a preset vehicle coordinate maximum distance difference threshold, a preset vehicle track Euclidean distance threshold and a preset planned path difference threshold, judging that the simulation test reappears a real scene;
and if the acceleration difference and/or the difference of the product of the speed and the steering wheel angle and/or the vehicle position difference and/or the planned path difference are/is respectively and correspondingly greater than a preset acceleration difference threshold value and/or a difference threshold value of the product of the preset speed and the steering wheel angle and/or a preset vehicle coordinate maximum distance difference threshold value and/or a preset vehicle track Euclidean distance threshold value and/or a preset planned path difference threshold value, judging that the simulation test does not reproduce the real scene.
The eighth exemplary embodiment of the application further improves the judgment accuracy of the automatic driving scene recurrence detection method by setting the vehicle coordinate maximum distance difference degree and the vehicle track Euclidean distance as the measurement parameters of the vehicle position difference. Moreover, the vehicle coordinate maximum distance difference can expose scenes that are not reproduced because of a large deviation at a particular moment, while the vehicle track Euclidean distance can expose scenes that are not reproduced because of small deviations accumulated over the whole process.
The eighth exemplary embodiment of the application is further improved on the basis of the seventh exemplary embodiment, and the specific improvements are as follows:
the real vehicle coordinates comprise a real vehicle position abscissa and a real vehicle position ordinate; the simulated vehicle coordinates comprise a simulated vehicle position abscissa and a simulated vehicle position ordinate;
comparing the real vehicle coordinate with the simulated vehicle coordinate to obtain the maximum difference of the vehicle coordinate distance, comprising the following steps:
(1) acquiring a real vehicle position abscissa, a real vehicle position ordinate, a simulated vehicle position abscissa and a simulated vehicle position ordinate at the moment t in a scene;
(2) according to
DIFF_pos(t) = sqrt( (pos_o(t).x − pos_s(t).x)² + (pos_o(t).y − pos_s(t).y)² )
obtaining the distance difference degree of the vehicle coordinates at time t; wherein pos_o(t).x is the real vehicle position abscissa at time t, pos_o(t).y is the real vehicle position ordinate at time t, pos_s(t).x is the simulated vehicle position abscissa at time t, and pos_s(t).y is the simulated vehicle position ordinate at time t;
(3) traversing all moments in the scene, repeating the steps (1) and (2), and obtaining the distance difference of the vehicle coordinates at all moments;
(4) and (4) acquiring the maximum value of the distance difference degree of the vehicle coordinates from the step (3) as the maximum difference degree of the distance of the vehicle coordinates.
By executing the method flow steps of the eighth exemplary embodiment of the present application, the maximum distance difference of the vehicle coordinates can be obtained quickly and accurately.
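A Python sketch of steps (1) to (4), assuming frame-aligned (x, y) position sequences, is shown below.

```python
import math

def max_position_diff(pos_real: list[tuple[float, float]],
                      pos_sim: list[tuple[float, float]]) -> float:
    """Per-frame Euclidean distance between real and simulated positions;
    the maximum over all frames is the vehicle coordinate maximum distance
    difference degree."""
    return max(math.hypot(xr - xs, yr - ys)
               for (xr, yr), (xs, ys) in zip(pos_real, pos_sim))
```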
The ninth exemplary embodiment of the present application is a further improvement on the seventh exemplary embodiment, and the specific improvement is as follows:
the real vehicle coordinates comprise a real vehicle position abscissa and a real vehicle position ordinate; the simulated vehicle coordinates comprise a simulated vehicle position abscissa and a simulated vehicle position ordinate;
comparing the real vehicle coordinates with the simulated vehicle coordinates to obtain the Euclidean distance of the vehicle track, comprising the following steps of:
(1) acquiring a real vehicle position abscissa, a real vehicle position ordinate, a simulated vehicle position abscissa and a simulated vehicle position ordinate at the moment t in a scene;
(2) according to
DIFF_track = sqrt( Σ_{t=1}^{n} [ (pos_o(t).x − pos_s(t).x)² + (pos_o(t).y − pos_s(t).y)² ] )
obtaining the Euclidean distance of the vehicle track for the scene; wherein pos_o(t).x is the real vehicle position abscissa at time t, pos_o(t).y is the real vehicle position ordinate at time t, pos_s(t).x is the simulated vehicle position abscissa at time t, pos_s(t).y is the simulated vehicle position ordinate at time t, and n is a natural number.
By executing the method flow steps of the ninth exemplary embodiment of the present application, the euclidean distance of the vehicle trajectory can be quickly and accurately acquired.
It should be noted that the specific value of n in the ninth exemplary embodiment of the present application is determined by the length of the scene. For example, if the length of the scene is 10 seconds, n may be set to 100; if the length of the scene is 20 seconds, n may be set to 200.
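The following Python sketch implements the trajectory comparison under the reconstruction used above (square root of the summed squared per-frame offsets); if the original formula instead averages the per-frame distances, only the aggregation line changes. Names are illustrative.

```python
import math

def trajectory_euclidean_distance(pos_real: list[tuple[float, float]],
                                  pos_sim: list[tuple[float, float]]) -> float:
    """Treat the two trajectories as vectors of n (x, y) samples and take the
    Euclidean distance between them, so small per-frame deviations accumulate."""
    return math.sqrt(sum((xr - xs) ** 2 + (yr - ys) ** 2
                         for (xr, yr), (xs, ys) in zip(pos_real, pos_sim)))
```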
The tenth exemplary embodiment of the present application is further improved on the basis of the third exemplary embodiment of the present application, and the specific improvements are as follows:
the real planned path information comprises real planned path coordinates, and the real planned path coordinates comprise a real planned path abscissa and a real planned path ordinate; the simulation planning path information comprises simulation planning path coordinates which comprise a simulation planning path abscissa and a simulation planning path ordinate;
the S21 includes:
(1) acquiring n real planning path coordinates and simulation planning path coordinates corresponding to t time in a scene;
(2) according to
DIFF_plan(t) = (1/n) · Σ_{i=1}^{n} sqrt( (plan_o,i(t).x − plan_s,i(t).x)² + (plan_o,i(t).y − plan_s,i(t).y)² )
obtaining the average planned-trajectory difference degree at time t; wherein plan_o,i(t).x is the ith real planned path abscissa at time t, plan_o,i(t).y is the ith real planned path ordinate at time t, plan_s,i(t).x is the ith simulated planned path abscissa at time t, plan_s,i(t).y is the ith simulated planned path ordinate at time t, n and i are natural numbers, and i is less than or equal to n;
(3) traversing all moments in the scene, and repeating the steps (1) and (2) to obtain the average planning track difference degree of each moment in the scene;
(4) and (4) acquiring the maximum value of the average planning track difference degree from the step (3) as the planning path difference degree.
By executing the method flow steps of the tenth exemplary embodiment of the present application, the planned path discrepancy degree can be quickly and accurately obtained.
It should be noted that the specific value of n in the tenth embodiment of the present application is related to the length of the planned path, and the longer the planned path is, the larger the value of n is.
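A Python sketch of steps (1) to (4) is given below, assuming that for each moment the real and simulated planned paths are stored as equal-length lists of (x, y) points; names are illustrative.

```python
import math

def planned_path_diff(plan_real: list[list[tuple[float, float]]],
                      plan_sim: list[list[tuple[float, float]]]) -> float:
    """For each moment t, average the point-wise Euclidean distance between the
    n real and n simulated planned path points; return the maximum over all t."""
    worst = 0.0
    for pts_real, pts_sim in zip(plan_real, plan_sim):
        dists = [math.hypot(xr - xs, yr - ys)
                 for (xr, yr), (xs, ys) in zip(pts_real, pts_sim)]
        if dists:
            worst = max(worst, sum(dists) / len(dists))
    return worst
```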
An eleventh exemplary embodiment of the present application is further modified from the first to tenth exemplary embodiments described above, and specifically modified as follows:
and the comparison value is obtained by fitting on a manual judgment scene set.
The manually judged scene set includes a large number (for example, 1000) of scene simulation results, together with manual labels indicating whether each simulation result reproduces the original scene. The specific fitting steps are as follows:
First, the manually judged scene set is divided into a training set and a test set at a ratio of 4:1. The training set is used to fit the value ranges (thresholds) of the operation information (for example, the vehicle acceleration), and the test set is used to prevent the thresholds from over-fitting the training set. The training process is as follows: candidate thresholds for each item of operation information are enumerated exhaustively in steps of 0.1, and under each candidate threshold the precision, recall and f1-score of the detection result are calculated on the training set against the human judgment result. The k groups of thresholds with the highest f1-score on the training set are then evaluated on the test set, and the group with the highest f1-score on the test set is taken as the final comparison value. In practice, k = 5 is sufficient to avoid over-fitting. The specific calculation formula of precision is as follows:
precision = TP / (TP + FP)

The specific calculation formula of recall is as follows:

recall = TP / (TP + FN)

The specific calculation formula of f1-score is as follows:

f1-score = 2 · precision · recall / (precision + recall)

wherein TP, FP and FN are respectively the numbers of true positives, false positives and false negatives relative to the manual judgment result.
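As a rough, non-authoritative sketch of the fitting procedure described above: the 4:1 split, the 0.1 step and k = 5 come from the text, while the data layout, the single-threshold simplification and all names (f1_metrics, fit_threshold) are assumptions; the embodiment itself searches groups of thresholds over several items of operation information jointly.

```python
import numpy as np

def f1_metrics(diffs: np.ndarray, labels: np.ndarray, threshold: float):
    """Precision / recall / f1 of the rule 'difference <= threshold => reproduced'
    against manual labels (1 = scene reproduced, 0 = not reproduced)."""
    pred = diffs <= threshold
    tp = np.sum(pred & (labels == 1))
    fp = np.sum(pred & (labels == 0))
    fn = np.sum(~pred & (labels == 1))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def fit_threshold(diffs: np.ndarray, labels: np.ndarray, k: int = 5,
                  step: float = 0.1, seed: int = 0) -> float:
    """Grid-search one comparison value on a manually judged scene set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(diffs))
    split = int(len(diffs) * 0.8)                           # 4:1 train/test split
    tr, te = idx[:split], idx[split:]
    candidates = np.arange(0.0, diffs.max() + step, step)   # exhaustive grid, step 0.1
    # keep the k candidates with the highest f1-score on the training set
    train_f1 = [f1_metrics(diffs[tr], labels[tr], c)[2] for c in candidates]
    top_k = candidates[np.argsort(train_f1)[-k:]]
    # the candidate with the highest f1-score on the test set is the final comparison value
    return float(max(top_k, key=lambda c: f1_metrics(diffs[te], labels[te], c)[2]))
```

In this sketch the test set only selects among the k best training-set candidates, which mirrors the overfitting safeguard described above.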
Fig. 3 shows an automatic driving scene recurrence detection apparatus provided in a twelfth exemplary embodiment of the present application, which includes:
the acquisition module is used for acquiring real operation information of the automatic driving vehicle in a real scene and simulation operation information in a simulation test;
the processing module is used for comparing the real operation information with the simulation operation information to obtain a comparison result;
the judging module is used for judging whether the simulation test reproduces a real scene according to the comparison result: if the comparison result is within the comparison value range, judging that the simulation test reproduces a real scene; and if the comparison result is out of the comparison value range, judging that the simulation test does not reproduce the real scene.
The modules of the detection device can be wholly or partially realized by software, hardware, or a combination thereof. The modules can be embedded in, or independent of, a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can call them and execute the operations corresponding to each module.
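Purely as an illustration of the software-form realization mentioned above, the three modules could be composed roughly as in the following sketch; the class names, method names and threshold fields are assumptions, not an API defined by the disclosure.

```python
from dataclasses import dataclass

@dataclass
class ComparisonValues:
    # comparison values fitted as in the eleventh embodiment (field names assumed)
    max_accel_diff: float
    max_speed_swa_diff: float
    max_position_diff: float
    max_planned_path_diff: float

class AcquisitionModule:
    def acquire(self, scene_id: str):
        """Load the real operation information and the simulation operation
        information of a scene; a real system would read vehicle logs and
        simulator output here."""
        raise NotImplementedError

class ProcessingModule:
    def compare(self, real_info, sim_info) -> dict:
        """Return the difference degrees, e.g. acceleration, speed x steering
        wheel angle, vehicle position and planned path differences."""
        raise NotImplementedError

class JudgingModule:
    def __init__(self, comparison_values: ComparisonValues):
        self.cv = comparison_values

    def reproduces_scene(self, diffs: dict) -> bool:
        # the simulation reproduces the real scene only if every difference
        # degree falls within its comparison value range
        return (diffs["accel"] <= self.cv.max_accel_diff
                and diffs["speed_swa"] <= self.cv.max_speed_swa_diff
                and diffs["position"] <= self.cv.max_position_diff
                and diffs["planned_path"] <= self.cv.max_planned_path_diff)
```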
Fig. 4 shows a device, which may be a server, provided in a thirteenth exemplary embodiment of the present application. The device includes a processor, a memory, and a communication interface connected by a system bus. The processor of the device is configured to provide computing and control capabilities. The memory of the device may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, including but not limited to: magnetic disk, optical disk, EEPROM, EPROM, SRAM, ROM, magnetic memory, flash memory, and PROM. The memory of the device provides an environment for running the operating system and the computer programs stored within it. The communication interface of the device is a network interface used to connect to and communicate with an external terminal through a network. The computer program, when executed by the processor, implements the detection method steps described in the above embodiments.
In a further embodiment of the present application, a storage medium is provided, which stores a computer program that, when executed by a processor, implements the detection method steps described in the above embodiments. Such storage media include, but are not limited to: ROM, RAM, CD-ROM, magnetic disk, and floppy disk.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division into functional units or modules is illustrated as an example; in practical applications, the above functions may be allocated to different functional units or modules as required, that is, the internal structure of the apparatus described in this application may be divided into different functional units or modules to implement all or part of the functions described above.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are illustrative rather than restrictive, and that various changes may be made by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (15)

1. An automatic driving scene recurrence detection method, comprising:
s1, acquiring real operation information of the automatic driving vehicle in a real scene and simulation operation information in a simulation test;
s2, comparing the real operation information with the simulation operation information to obtain a comparison result;
s3, judging whether the simulation test reproduces the real scene according to the comparison result:
if the comparison result is within the comparison value range, judging that the simulation test reproduces a real scene;
and if the comparison result is out of the comparison value range, judging that the simulation test does not reproduce the real scene.
2. The automated driving scenario recurrence detection method of claim 1, wherein the real operation information comprises real vehicle behavior information and real planned path information; the simulation operation information comprises simulation vehicle behavior information and simulation planning path information; the comparison result comprises a vehicle behavior difference degree and a planning path difference degree;
the S1 includes:
s11, acquiring real vehicle behavior information and real planning path information of the automatic driving vehicle in a real scene;
s12, acquiring the simulation vehicle behavior information and the simulation planning path information of the automatic driving vehicle in the simulation scene;
the S2 includes:
s21, comparing the real vehicle behavior information with the simulated vehicle behavior information to obtain the vehicle behavior difference degree;
s22, comparing the real planning path information with the simulation planning path information to obtain the planning path difference degree;
the S3 includes:
judging whether the simulation test reproduces a real scene according to the vehicle behavior difference degree and the planning path difference degree:
if the vehicle behavior difference degree and the planned path difference degree are respectively in the vehicle behavior comparison value range and the planned path comparison value range, judging that the simulation test reproduces a real scene;
and if the vehicle behavior difference degree and/or the planned path difference degree are respectively out of the vehicle behavior comparison value range and/or the planned path comparison value range, judging that the simulation test does not reproduce the real scene.
3. The automated driving scenario recurrence detection method of claim 2, wherein the real vehicle behavior information comprises a real vehicle acceleration, a real vehicle speed, real vehicle coordinates and a real vehicle steering wheel angle at time t; the simulated vehicle behavior information comprises a simulated vehicle acceleration, a simulated vehicle speed, simulated vehicle coordinates and a simulated vehicle steering wheel angle;
the S21 includes:
s211, comparing the real vehicle acceleration with the simulated vehicle acceleration to obtain an acceleration difference degree;
s212, comparing the product of the real vehicle speed and the real vehicle steering wheel angle with the product of the simulated vehicle speed and the simulated vehicle steering wheel angle to obtain the difference degree of the product of the speed and the steering wheel angle;
s213, comparing the real vehicle coordinates with the simulated vehicle coordinates to obtain the vehicle position difference degree;
the vehicle behavior comparison value comprises a preset acceleration difference threshold value, a difference threshold value of the product of a preset speed and a steering wheel angle and a preset vehicle position difference threshold value; the comparison value of the planned path comprises a preset difference threshold value of the planned path;
the step of judging whether the simulation test reproduces the real scene according to the vehicle behavior difference degree and the planned path difference degree comprises the following steps:
if the acceleration difference, the difference of the product of the speed and the steering wheel angle, the vehicle position difference and the planned path difference are respectively and correspondingly smaller than or equal to a preset acceleration difference threshold, a difference threshold of the product of the preset speed and the steering wheel angle, a preset vehicle position difference threshold and a preset planned path difference threshold, judging that the simulation test reproduces a real scene;
and if the difference degree of the acceleration difference degree and/or the product of the speed and the steering wheel angle and/or the difference degree of the vehicle position difference degree and/or the difference degree of the planned path is respectively and correspondingly greater than a preset acceleration difference degree threshold value and/or a difference degree threshold value of the product of the preset speed and the steering wheel angle and/or a preset vehicle position difference degree threshold value and/or a preset planned path difference degree threshold value, judging that the simulation test does not reproduce the real scene.
4. The automatic driving scenario recurrence detection method of claim 3, wherein the acceleration difference degree comprises an acceleration maximum average difference degree and an acceleration maximum difference value; the preset acceleration difference threshold comprises a preset acceleration maximum average difference threshold and a preset acceleration maximum difference threshold;
the S211 includes:
comparing the real vehicle acceleration with the simulated vehicle acceleration to obtain the maximum average difference degree and the maximum difference value of the acceleration;
the step of judging whether the simulation test reproduces the real scene according to the vehicle behavior difference degree and the planned path difference degree comprises the following steps:
if the acceleration maximum average difference degree, the acceleration maximum difference value, the difference degree of the product of the speed and the steering wheel angle, the vehicle position difference degree and the planned path difference degree are respectively and correspondingly less than or equal to a preset acceleration maximum average difference threshold, a preset acceleration maximum difference threshold, a difference threshold of the product of the preset speed and the steering wheel angle, a preset vehicle position difference threshold and a preset planned path difference threshold, judging that the simulation test reproduces a real scene;
and if the acceleration maximum average difference degree and/or the acceleration maximum difference value and/or the difference degree of the product of the speed and the steering wheel angle and/or the vehicle position difference degree and/or the planned path difference degree is correspondingly greater than a preset acceleration maximum average difference threshold and/or a preset acceleration maximum difference threshold and/or a difference threshold of the product of the preset speed and the steering wheel angle and/or a preset vehicle position difference threshold and/or a preset planned path difference threshold, judging that the simulation test does not reproduce the real scene.
5. The automatic driving scenario recurrence detection method of claim 4, wherein the comparing of the real vehicle acceleration with the simulated vehicle acceleration to obtain the maximum average degree of difference of the accelerations comprises:
(1) acquiring real vehicle acceleration and simulated vehicle acceleration at the moment t in a scene;
(2) acquiring k frames of real vehicle acceleration and k frames of simulated vehicle acceleration in a sliding window with the duration of n seconds corresponding to the time t in a scene according to the step (1);
(3) according to
DIFF_accel_avg = (1/k) · Σ_{t∈[t0, t0+n]} |accel_o(t) - accel_s(t)|,
obtaining the average difference degree between the real vehicle acceleration and the simulated vehicle acceleration in the sliding window, the sum running over the k frames in the sliding window; wherein accel_o(t) is the real vehicle acceleration at time t, accel_s(t) is the simulated vehicle acceleration at time t, t0 is the starting time of the sliding window, t0+n is the ending time of the sliding window, n is a positive number, and k is a natural number;
(4) traversing all sliding windows in the scene, and repeating the steps (1) to (3) to obtain the average difference degree between the real vehicle acceleration and the simulated vehicle acceleration of all the sliding windows;
(5) acquiring the maximum value of the average difference degree between the real vehicle acceleration and the simulated vehicle acceleration as the maximum average difference degree of the acceleration.
6. The automated driving scenario recurrence detection method of claim 4, wherein the comparing of the true vehicle acceleration with the simulated vehicle acceleration to obtain the maximum difference in acceleration comprises:
(1) acquiring real vehicle acceleration and simulated vehicle acceleration at the moment t in a scene;
(2) according to DIFF_accel(t) = |accel_o(t) - accel_s(t)|, obtaining the difference value between the real vehicle acceleration and the simulated vehicle acceleration at time t; wherein accel_o(t) is the real vehicle acceleration at time t, and accel_s(t) is the simulated vehicle acceleration at time t;
(3) traversing all moments in the scene, repeating the steps (1) and (2), and obtaining the difference value between the real vehicle acceleration and the simulated vehicle acceleration at all moments;
(4) acquiring the maximum difference value of the real vehicle acceleration and the simulated vehicle acceleration from step (3) as the maximum difference value of the acceleration.
7. The automatic driving scene recurrence detection method according to claim 3, wherein the S212 includes:
(1) acquiring the real vehicle speed, the real vehicle steering wheel corner, the simulated vehicle speed and the simulated vehicle steering wheel corner at the moment t in the scene;
(2) acquiring a product of k frames of real vehicle speed and real vehicle steering wheel angle and a product of k frames of simulated vehicle speed and simulated vehicle steering wheel angle in a sliding window with the duration of n seconds corresponding to the time t according to the step (1);
(3) according to
DIFF_sv_avg = (1/k) · Σ_{t∈[t0, t0+n]} |speed_o(t) · swa_o(t) - speed_s(t) · swa_s(t)|,
obtaining the average difference degree between the product of the real vehicle speed and steering wheel angle and the product of the simulated vehicle speed and steering wheel angle in the sliding window, the sum running over the k frames in the sliding window; wherein speed_o(t) is the real vehicle speed at time t, swa_o(t) is the real vehicle steering wheel angle at time t, speed_s(t) is the simulated vehicle speed at time t, swa_s(t) is the simulated vehicle steering wheel angle at time t, t0 is the starting time of the sliding window, t0+n is the ending time of the sliding window, n is a positive number, and k is a natural number;
(4) traversing all sliding windows in the scene, repeating the steps (1) to (3), and obtaining the average difference degree of the product of the real vehicle speed and the steering wheel angle of all the sliding windows and the product of the simulated vehicle speed and the steering wheel angle;
(5) acquiring the maximum value of the average difference degree of the product of the real vehicle speed and the steering wheel angle and the product of the simulated vehicle speed and the steering wheel angle as the difference degree of the product of the speed and the steering wheel angle.
8. The automated driving scenario recurrence detection method of claim 3, wherein the vehicle position difference degree includes a vehicle coordinate maximum distance difference degree and a vehicle track Euclidean distance; the vehicle position difference threshold comprises a vehicle coordinate maximum distance difference threshold and a vehicle track Euclidean distance threshold;
the S213 includes:
comparing the real vehicle coordinates with the simulated vehicle coordinates to obtain the vehicle coordinate maximum distance difference degree and the vehicle track Euclidean distance;
the step of judging whether the simulation test reproduces the real scene according to the vehicle behavior difference degree and the planned path difference degree comprises the following steps:
if the acceleration difference degree, the difference degree of the product of the speed and the steering wheel angle, the vehicle coordinate maximum distance difference degree, the vehicle track Euclidean distance and the planned path difference degree are respectively and correspondingly smaller than or equal to a preset acceleration difference threshold, a difference threshold of the product of the preset speed and the steering wheel angle, a preset vehicle coordinate maximum distance difference threshold, a preset vehicle track Euclidean distance threshold and a preset planned path difference threshold, judging that the simulation test reproduces a real scene;
and if the acceleration difference degree and/or the difference degree of the product of the speed and the steering wheel angle and/or the vehicle coordinate maximum distance difference degree and/or the vehicle track Euclidean distance and/or the planned path difference degree is correspondingly greater than a preset acceleration difference threshold and/or a difference threshold of the product of the preset speed and the steering wheel angle and/or a preset vehicle coordinate maximum distance difference threshold and/or a preset vehicle track Euclidean distance threshold and/or a preset planned path difference threshold, judging that the simulation test does not reproduce the real scene.
9. The autonomous driving scenario recurrence detection method of claim 8, wherein the real vehicle coordinates comprise a real vehicle position abscissa and a real vehicle position ordinate; the simulated vehicle coordinates comprise a simulated vehicle position abscissa and a simulated vehicle position ordinate;
comparing the real vehicle coordinate with the simulated vehicle coordinate to obtain the maximum difference of the vehicle coordinate distance, comprising the following steps:
(1) acquiring a real vehicle position abscissa, a real vehicle position ordinate, a simulated vehicle position abscissa and a simulated vehicle position ordinate at the moment t in a scene;
(2) according to
DIFF_pos(t) = sqrt( (pos_o(t).x - pos_s(t).x)^2 + (pos_o(t).y - pos_s(t).y)^2 ),
obtaining the distance difference degree of the vehicle coordinates at time t; wherein pos_o(t).x is the real vehicle position abscissa at time t, pos_o(t).y is the real vehicle position ordinate at time t, pos_s(t).x is the simulated vehicle position abscissa at time t, and pos_s(t).y is the simulated vehicle position ordinate at time t;
(3) traversing all moments in the scene, repeating the steps (1) and (2), and obtaining the distance difference of the vehicle coordinates at all moments;
(4) acquiring the maximum value of the distance difference degree of the vehicle coordinates from step (3) as the vehicle coordinate maximum distance difference degree.
10. The autonomous driving scenario recurrence detection method of claim 8, wherein the real vehicle coordinates comprise a real vehicle position abscissa and a real vehicle position ordinate; the simulated vehicle coordinates comprise a simulated vehicle position abscissa and a simulated vehicle position ordinate;
comparing the real vehicle coordinates with the simulated vehicle coordinates to obtain the Euclidean distance of the vehicle track, comprising the following steps of:
(1) acquiring a real vehicle position abscissa, a real vehicle position ordinate, a simulated vehicle position abscissa and a simulated vehicle position ordinate at the moment t in a scene;
(2) according to
DIFF_traj = sqrt( Σ_{t=1..n} [ (pos_o(t).x - pos_s(t).x)^2 + (pos_o(t).y - pos_s(t).y)^2 ] ),
obtaining the vehicle track Euclidean distance of the scene; wherein pos_o(t).x is the real vehicle position abscissa at time t, pos_o(t).y is the real vehicle position ordinate at time t, pos_s(t).x is the simulated vehicle position abscissa at time t, pos_s(t).y is the simulated vehicle position ordinate at time t, and n is a natural number.
11. The autopilot scenario recurrence detection method of claim 3 wherein the real planned path information includes real planned path coordinates including a real planned path abscissa and a real planned path ordinate; the simulation planning path information comprises simulation planning path coordinates which comprise a simulation planning path abscissa and a simulation planning path ordinate;
the S22 includes:
(1) acquiring n real planning path coordinates and n simulation planning path coordinates corresponding to time t in a scene;
(2) according to
DIFF_plan(t) = (1/n) · Σ_{i=1..n} sqrt( (path_o,i(t).x - path_s,i(t).x)^2 + (path_o,i(t).y - path_s,i(t).y)^2 ),
obtaining the average planning track difference degree at time t; wherein path_o,i(t).x is the ith real planned path abscissa at time t, path_o,i(t).y is the ith real planned path ordinate at time t, path_s,i(t).x is the ith simulation planned path abscissa at time t, path_s,i(t).y is the ith simulation planned path ordinate at time t, n and i are natural numbers, and i is less than or equal to n;
(3) traversing all moments in the scene, and repeating the steps (1) and (2) to obtain the average planning track difference degree of each moment in the scene;
(4) acquiring the maximum value of the average planning track difference degree from step (3) as the planning path difference degree.
12. The automated driving scenario recurrence detection method of any of claims 1-11, wherein the comparison value is obtained by fitting on a manually judged scene set.
13. An automatic driving scene recurrence detection device, comprising:
the acquisition module is used for acquiring real operation information of the automatic driving vehicle in a real scene and simulation operation information in a simulation test;
the processing module is used for comparing the real operation information with the simulation operation information to obtain a comparison result;
the judging module is used for judging whether the simulation test reproduces a real scene according to the comparison result: if the comparison result is within the comparison value range, judging that the simulation test reproduces a real scene; and if the comparison result is out of the comparison value range, judging that the simulation test does not reproduce the real scene.
14. An apparatus comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the autonomous driving scenario recurrence detection method of any of claims 1-12 when executing the computer program.
15. A storage medium, characterized in that the storage medium has stored therein a computer program which, when being executed by a processor, carries out the steps of the automatic driving scenario recurrence detection method according to any one of claims 1-12.
CN202111285256.8A 2021-11-01 2021-11-01 Automatic driving scene recurrence detection method, device, equipment and storage medium Pending CN114154232A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111285256.8A CN114154232A (en) 2021-11-01 2021-11-01 Automatic driving scene recurrence detection method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111285256.8A CN114154232A (en) 2021-11-01 2021-11-01 Automatic driving scene recurrence detection method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114154232A true CN114154232A (en) 2022-03-08

Family

ID=80459205

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111285256.8A Pending CN114154232A (en) 2021-11-01 2021-11-01 Automatic driving scene recurrence detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114154232A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117034732A (en) * 2023-04-14 2023-11-10 北京百度网讯科技有限公司 Automatic driving model training method based on true and simulated countermeasure learning

Similar Documents

Publication Publication Date Title
US10993079B2 (en) Motion detection method, device, and medium
CN108596266A (en) Blending decision method, device based on semi-supervised learning and storage medium
CN111177887A (en) Method and device for constructing simulation track data based on real driving scene
CN107256561A (en) Method for tracking target and device
CN111797526A (en) Simulation test scene construction method and device
CN110426490A (en) A kind of the temperature and humidity drift compensation method and device of pernicious gas on-line computing model
CN114154232A (en) Automatic driving scene recurrence detection method, device, equipment and storage medium
CN116011225A (en) Scene library generation method, test method, electronic device and storage medium
CN111274078A (en) Method, system and device for testing performance of hard disk
CN113791605A (en) Test method, device, equipment and storage medium
CN110207643B (en) Folding angle detection method and device, terminal and storage medium
US20060155734A1 (en) Apparatus and methods for evaluating a dynamic system
CN114445684A (en) Method, device and equipment for training lane line segmentation model and storage medium
CN112989312B (en) Verification code identification method and device, electronic equipment and storage medium
CN111709665B (en) Vehicle safety assessment method and device
CN111885597A (en) Method and system for security authentication
CN116665170A (en) Training of target detection model, target detection method, device, equipment and medium
CN114065549B (en) Automatic driving level evaluation method, device, equipment and storage medium
CN116383041A (en) Lane line fitting method and device for automatic driving simulation test
CN114996116A (en) Anthropomorphic evaluation method for automatic driving system
CN114202224A (en) Method, apparatus, medium, and program product for detecting weld quality in a production environment
CN116566735B (en) Method for identifying malicious traffic through machine learning
CN112098782A (en) MOA insulation state detection method and system based on neural network
CN113850929B (en) Display method, device, equipment and medium for processing annotation data stream
CN113986752A (en) Particle swarm algorithm-based AEB algorithm failure scene searching method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination