CN115017742A - Automatic driving test scene generation method, device, equipment and storage medium

Publication number: CN115017742A
Application number: CN202210941004.4A
Authority: CN (China)
Other languages: Chinese (zh)
Other versions: CN115017742B (granted publication)
Inventors: 郝坤坤, 白雨桥, 潘余曦
Original Assignee: Xi'an Xinxin Information Technology Co ltd
Current Assignee: Anhui Xinxin Science And Technology Innovation Information Technology Co ltd
Application filed by Xi'an Xinxin Information Technology Co ltd; priority to CN202210941004.4A
Legal status: Granted; Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 - Computer-aided design [CAD]
    • G06F30/20 - Design optimisation, verification or simulation
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01M - TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M17/00 - Testing of vehicles
    • G01M17/007 - Wheeled or endless-tracked vehicles

Abstract

The application provides a method, an apparatus, a device, and a storage medium for generating an automatic driving test scene. The method comprises: acquiring at least one group of traffic participant data, where each group contains the position information, movement speed information, and movement direction information of two traffic participants; calculating, for each group, the time each of the two traffic participants takes to move to a danger occurrence point, to obtain the danger reaction time corresponding to that group; obtaining target traffic participant data at least by selecting, from the groups, the traffic participant data whose danger reaction time is less than a set time threshold; and generating an automatic driving test scene from the target traffic participant data. The method can generate automatic driving test scenes of high danger degree, which can be used to test the driving behavior of an autonomous vehicle in dangerous scenes.

Description

Automatic driving test scene generation method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of automated driving simulation testing technologies, and in particular, to a method, an apparatus, a device, and a storage medium for generating an automated driving test scenario.
Background
The automatic driving simulation test refers to testing the driving behavior of an automatic driving vehicle or an automatic driving algorithm in a virtual automatic driving test scene so as to judge whether the automatic driving vehicle or the automatic driving algorithm can achieve an intelligent automatic driving effect, such as automatic obstacle avoidance and intelligent emergency response.
Generally, the safety of automatic driving is best reflected in whether an autonomous vehicle drives intelligently enough in dangerous scenes. It is therefore necessary to generate test scenes that affect the driving safety of the autonomous vehicle, for testing its driving behavior in dangerous scenes.
Given this requirement, measuring the risk of a driving scene and then generating high-risk automatic driving test scenes based on the measurement results has become an urgent need of automatic driving simulation testing.
Disclosure of Invention
In view of this state of the art, the present application provides an automatic driving test scene generation method, apparatus, device, and storage medium that can measure the risk of a driving scene and generate automatic driving test scenes of high risk.
In order to achieve the above purpose, the present application specifically proposes the following technical solutions:
the application provides an automatic driving test scene generation method in a first aspect, including:
acquiring at least one group of traffic participant data, wherein each group of traffic participant data respectively comprises position information, movement speed information and movement direction information of two traffic participants;
respectively calculating the time consumed by two traffic participants in each group of traffic participant data to move to a danger occurrence point to obtain the danger reaction time corresponding to each group of traffic participant data; the dangerous occurrence point comprises a position point when two traffic participants form a dangerous driving scene, and in the dangerous driving scene, the distance between the two traffic participants is smaller than a set distance threshold value;
obtaining target traffic participant data at least by selecting traffic participant data with the dangerous reaction time less than a set time threshold from each group of traffic participant data;
and generating an automatic driving test scene according to the target traffic participant data.
The second aspect of the present application provides an automatic driving test scenario generation apparatus, including:
the data acquisition unit is used for acquiring at least one group of traffic participant data, wherein each group of traffic participant data respectively comprises position information, movement speed information and movement direction information of two traffic participants;
the calculating unit is used for calculating the time consumed by two traffic participants in each group of traffic participant data to move to the danger occurrence point respectively to obtain the danger reaction time corresponding to each group of traffic participant data; the dangerous occurrence point comprises a position point when two traffic participants form a dangerous driving scene, and in the dangerous driving scene, the distance between the two traffic participants is smaller than a set distance threshold value;
the data screening unit is used for selecting traffic participant data with the dangerous reaction time smaller than a set time threshold from all the groups of traffic participant data to obtain target traffic participant data;
and the scene generation unit is used for generating an automatic driving test scene according to the target traffic participant data.
A third aspect of the present application provides an electronic device, comprising:
a memory and a processor;
the memory is connected with the processor and used for storing programs;
the processor is used for realizing the automatic driving test scene generation method by running the program in the memory.
A fourth aspect of the present application provides a storage medium on which a computer program is stored; when the computer program is executed by a processor, the automatic driving test scene generation method is implemented.
The application provides an automatic driving test scene generation method in which, after multiple groups of traffic participant data are acquired, the time each of the two traffic participants in each group takes to move to the danger occurrence point is calculated, giving the danger reaction time corresponding to each group. The danger reaction time intuitively represents the danger degree of the driving scene formed by the two traffic participants: the shorter the danger reaction time, the more dangerous the driving scene; the longer the danger reaction time, the safer the driving scene. Calculating the danger reaction time corresponding to each group of traffic participant data therefore measures the danger degree of the driving scene the participants form.
On this basis, target traffic participant data are selected from the acquired groups according to their danger reaction times, and an automatic driving test scene is generated from the selected target data. Because the selected data are those whose driving scenes have short danger reaction times, i.e., high driving danger, the generated test scene has a high danger degree and can be used to test the driving behavior of an autonomous vehicle in a dangerous scene.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic view of an automatic driving simulation test scenario provided in an embodiment of the present application.
Fig. 2 is a schematic flow chart of a method for generating an automatic driving test scenario according to an embodiment of the present application.
Fig. 3 is a schematic flow chart of another automatic driving test scenario generation method according to an embodiment of the present application.
Fig. 4 is a schematic flow chart of another automatic driving test scenario generation method provided in the embodiment of the present application.
Fig. 5 is a schematic view of a traffic scene according to an embodiment of the present application.
Fig. 6 is a schematic view of another traffic scene provided in the embodiment of the present application.
Fig. 7 is a schematic structural diagram of an automatic driving test scenario generation apparatus according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solution of the embodiments of the present application is suitable for application scenarios in which automatic driving simulation test scenes are generated. With this technical solution, test scenes that are highly dangerous for an autonomous vehicle can be generated and used to test the driving behavior of the autonomous vehicle in dangerous scenes.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Exemplary implementation Environment
Fig. 1 shows an application scenario to which the technical solution of the embodiments of the present application applies: an automatic driving simulation test scenario comprising an automatic driving system, a simulator, and a test engine. The automatic driving system has a built-in automatic driving control algorithm and controls the autonomous vehicle; the simulator generates a virtual driving environment according to the test requirements, and the autonomous vehicle controlled by the automatic driving system drives in this virtual environment; the test engine serves as the central controller of the automatic driving simulation test and controls the whole test process, including but not limited to issuing test tasks to the automatic driving system, sending scene data to the simulator, and monitoring the driving process of the autonomous vehicle in the driving environment.
The technical solution of the embodiments of the present application can be applied to the test engine shown in fig. 1. The test engine may be a local processor, or a cloud or local server. By executing the technical solution of the embodiments of the present application, the test engine can generate an automatic driving test scene meeting the test requirements and send the generated scene data to the simulator; the simulator renders the scene from the received data, producing the corresponding automatic driving test scene.
Exemplary method
Referring to fig. 2, an embodiment of the present application provides a method for generating an automatic driving test scenario, where the method includes:
s101, at least one group of traffic participant data is obtained, wherein each group of traffic participant data respectively comprises position information, movement speed information and movement direction information of two traffic participants.
Specifically, the traffic participant refers to an object that appears in a traffic scene and participates in an actual traffic process, and may be, for example, a motor vehicle, a non-motor vehicle, a pedestrian, a road surface obstacle, an animal, a plant, and the like that are located on a traveling road.
Theoretically, a traffic scene can be formed between any two traffic participants. For example, two motor vehicles driving one behind the other in the same lane form a simple two-vehicle following scene; as the positions, driving speeds, and driving directions of the two vehicles vary, other traffic scenes can be derived, such as front vehicle cut-in, front vehicle braking, and rear vehicle overtaking scenes.
The embodiment of the application acquires a plurality of groups of traffic participant data consisting of two traffic participants, and is used for generating a traffic scene containing the two traffic participants. Wherein, each group of traffic participant data respectively comprises position information, movement speed information and movement direction information of two traffic participants.
In theory, the traffic participants in each set of traffic participant data may be different, as may the traffic participant types.
To facilitate explanation of the specific processing procedure of the automatic driving test scene generation method, the embodiments below take as an example multiple groups of traffic participant data each composed of two motor vehicles; that is, each acquired group of traffic participant data contains the position information, movement speed information, and movement direction information of two vehicles. The position information of a vehicle is represented by its longitude and latitude coordinates, its movement speed information by its driving speed, and its movement direction information by its driving direction.
In other scenarios, the acquired traffic participant data need not all describe motor vehicles; they may, for example, describe a motor vehicle together with another type of traffic participant, such as a motor vehicle and a bicycle, a motor vehicle and a pedestrian, or a motor vehicle and an obstacle. In addition, traffic participant data can be acquired selectively according to the test target. For example, if the test target is to test the driving behavior of an autonomous vehicle with a pedestrian ahead, a test scene in which a pedestrian appears in front of the vehicle should be generated, and the acquired traffic participant data should then contain the position information, movement speed information, and movement direction information of a motor vehicle and a pedestrian.
In the embodiment of the application, a corresponding automatic driving test scene generation method is introduced by taking a scene in which two traffic participants adjacent to each other in front and back travel in the same direction along the same or different lanes as an example. That is, in the embodiment of the present application, each set of acquired data of the traffic participants includes position information, movement speed information, and movement direction information of two vehicles that are adjacent to each other in the front-rear direction and travel in the same direction, respectively.
As an optional implementation manner, in the embodiment of the present application, a traffic scene formed by two vehicles is searched from a natural scene traffic flow data set (such as an NGSIM data set), and vehicle position, driving speed, and driving direction information in a corresponding scene are acquired, so as to obtain multiple sets of traffic participant data. For example, from the natural scene traffic stream data set, scene data of two vehicles that are adjacent to each other in front and behind and travel in the same direction are searched, that is, traffic participant data is obtained.
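For concreteness, the following minimal sketch shows one plausible in-memory layout for a group of traffic participant data as described above; the class and field names are illustrative assumptions, not a schema taken from this application or from the NGSIM data set.

    from dataclasses import dataclass

    @dataclass
    class Participant:
        x: float        # position, longitudinal coordinate (m)
        y: float        # position, lateral coordinate (m)
        speed: float    # movement speed (m/s)
        heading: float  # movement direction, in radians from the lane axis

    @dataclass
    class ParticipantGroup:
        rear: Participant   # e.g. the host vehicle travelling behind
        front: Participant  # e.g. the braking or cutting-in vehicle ahead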
And S102, respectively calculating the time consumed by two traffic participants in each group of traffic participant data to move to the danger occurrence point, and obtaining the danger reaction time corresponding to each group of traffic participant data.
The dangerous occurrence point comprises a position point when two traffic participants form a dangerous driving scene, and in the dangerous driving scene, the distance between the two traffic participants is smaller than a set distance threshold value.
Specifically, in a real traffic scene, when the distance between two traffic participants on a road is smaller than a certain set distance, the driving scene they form is very dangerous. For example, when two vehicles are very close, when a vehicle and a pedestrian are very close, or when a vehicle and an obstacle are very close, a vehicle-to-vehicle, vehicle-to-person, or vehicle-to-obstacle traffic accident is very likely to occur. A dangerous driving scene therefore includes both a scene in which the distance between two traffic participants is small and a scene in which the two traffic participants collide.
It can be understood that if, during their respective movements, two traffic participants come to form a dangerous driving scene, a traffic accident between them is highly likely, or has even already occurred. At a given moment, the closer the two participants are to the position points at which they would form a dangerous driving scene, the more dangerous their current driving scene is; equivalently, the shorter the time the two participants need to move from that moment to those position points, the more dangerous their current driving scene is.
Based on the facts, for each set of acquired data of the traffic participants, the embodiment of the application calculates the time consumed by two traffic participants to move to the position point when the two traffic participants constitute the dangerous driving scene, so as to measure the danger of the driving scene constituted by the two traffic participants.
For convenience of introduction, in the embodiment of the present application, the location points when two traffic participants constitute a dangerous driving scene are named as dangerous occurrence points.
And for each group of acquired data of the traffic participants, respectively calculating and determining the time consumption of the two traffic participants moving to the danger occurrence point according to the positions, the movement speeds and the movement directions of the two traffic participants recorded in the data.
Specifically, for the two traffic participants in each group of traffic participant data, the respective position points at which the two form a dangerous driving scene are determined first; then, from the position, movement speed, and movement direction information of each traffic participant in the acquired data, the time each participant takes to move to its respective position point is calculated.
When the two times are equal, that common time is taken as the danger reaction time corresponding to the group of traffic participant data; when the two times differ, the larger of the two is taken as the danger reaction time corresponding to the group.
This calculation of the danger reaction time applies to any scene in which the movement trajectories of the two traffic participants have an intersection, for example when the two traffic participants move toward each other or in the same direction, or when the first traffic participant is stationary and the second traffic participant drives toward it.
When one of the two traffic participants is stationary, the position point at which the stationary participant forms the dangerous driving scene is its current position, so the time it takes to move to the danger occurrence point is zero.
According to the above, the danger reaction time corresponding to each group of traffic participant data can be calculated and determined. The length of the danger reaction time reflects the danger of the traffic scene each group of traffic participant data forms: the longer the danger reaction time corresponding to a group, the less dangerous the traffic scene it forms; conversely, the shorter the danger reaction time, the more dangerous the traffic scene.
S103, obtaining target traffic participant data at least by selecting traffic participant data with the danger reaction time smaller than a set time threshold from all the groups of traffic participant data.
Specifically, through the processing in step S102, the degree of risk of the traffic scene formed by each set of traffic participant data is numerically measured.
On the basis, the traffic participant data with the dangerous reaction time smaller than the set time threshold is selected from all the groups of traffic participant data to serve as the target traffic participant data. Wherein the set time threshold is set to a small value, so that the traffic participant data with high danger degree can be selected from various groups of traffic participant data.
And S104, generating an automatic driving test scene according to the target traffic participant data.
Specifically, a scene modeling is performed based on the target traffic participant data, and an automatic driving test scene corresponding to the target traffic participant data can be obtained.
As an exemplary embodiment, a traffic scene, for example, a traffic scene in which two vehicles adjacent to each other in front and behind are traveling in the same direction, may be constructed in the highway-env environment, and then the traffic scene is initialized using the target traffic participant data, so that an automatic driving test scene with a high risk level may be generated.
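As an illustration, the following sketch seeds a two-vehicle highway-env scene with one group of target traffic participant data. It assumes a recent highway-env registered through gymnasium; the configuration keys and the direct overwrite of env.unwrapped.road.vehicles are one plausible way to initialize the scene, not the initialization procedure prescribed by this application.

    import gymnasium as gym
    import highway_env  # noqa: F401  (importing registers "highway-v0")

    env = gym.make("highway-v0")
    env.unwrapped.configure({"lanes_count": 2, "vehicles_count": 2})
    obs, info = env.reset()

    # Overwrite the spawned vehicles' states with one group of target
    # traffic participant data (illustrative numbers).
    rear, front = env.unwrapped.road.vehicles[:2]
    rear.position[0], rear.speed = 0.0, 30.0     # host vehicle, 30 m/s
    front.position[0], front.speed = 25.0, 22.0  # front vehicle, 22 m/s, 25 m ahead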
As can be seen from the above description, the automatic driving test scene generation method provided in the embodiments of the present application calculates, after multiple groups of traffic participant data are acquired, the time each of the two traffic participants in each group takes to move to the danger occurrence point, obtaining the danger reaction time corresponding to each group. The danger reaction time intuitively represents the danger degree of the driving scene formed by the two traffic participants: the shorter the danger reaction time, the more dangerous the driving scene; the longer the danger reaction time, the safer the driving scene. Calculating the danger reaction time corresponding to each group of traffic participant data therefore measures the danger degree of the driving scene the participants form.
On this basis, target traffic participant data are selected from the acquired groups according to their danger reaction times, and an automatic driving test scene is generated from the selected target data. Because the selected data are those whose driving scenes have short danger reaction times, i.e., high driving danger, the generated test scene has a high danger degree and can be used to test the driving behavior of an autonomous vehicle in a dangerous scene.
The automatic driving test scene generation method introduced above measures the danger degree of the driving scene formed by each group of traffic participant data and selects the more dangerous groups from the acquired data to generate an automatic driving test scene. With this scheme, automatic driving test scenes of higher danger degree can be generated.
As a more preferable implementation, the automatic driving test scene generation method provided by the embodiments of the present application further calculates the naturalness deviation degree of each group of traffic participant data and then selects, from the acquired groups, traffic participant data that are both more dangerous and more natural, so as to finally construct an automatic driving test scene that is both highly dangerous and highly natural.
Referring to fig. 3, in the automatic driving test scene generation method provided by this embodiment, after step S202 is executed (calculating the time each of the two traffic participants in each group of traffic participant data takes to move to the danger occurrence point, to obtain the danger reaction time corresponding to each group), step S203 is executed: the difference between each group of traffic participant data and its corresponding common scene data is calculated, and the naturalness deviation degree of each group is determined.
The common scene data corresponding to the traffic participant data comprise data of a target natural traffic scene, wherein the probability of the target natural traffic scene occurring in the natural traffic scene exceeds a set probability threshold; and the target natural traffic scene is a natural traffic scene formed by two traffic participants in the traffic participant data.
Specifically, for each group of traffic participant data, the two traffic participants may form a certain type of natural traffic scene, such as a front vehicle cut-in scene or a front vehicle braking scene, according to their positions, movement speeds, and movement directions. The embodiments of the present application define the natural traffic scene formed by the two traffic participants in a group of traffic participant data as the target natural traffic scene corresponding to that group.
For each type of natural traffic scene, the natural traffic scene can be divided into different actual scenes according to different parameters such as vehicle speed, distance and the like in the scene, and the occurrence probability of each actual scene in the actual situation is different. For example, for a front braking scenario, there may be various combinations of relative distance and relative speed between two vehicles, which in turn results in various front braking scenarios.
A traffic scene whose probability of occurring in natural traffic exceeds the set probability threshold is defined as a common traffic scene. On this basis, for each group of traffic participant data, the embodiments of the present application define the data of the target natural traffic scene whose probability of occurring in the natural traffic scene exceeds the set probability threshold as the common scene data corresponding to that traffic participant data.
For example, assuming that a certain set of traffic participant data is data of two vehicles in a preceding vehicle braking scene, the preceding vehicle braking scene data having a probability of occurring in a natural scene exceeding a set probability threshold is defined as common scene data corresponding to the set of traffic participant data.
In the embodiments of the present application, the difference between each group of traffic participant data and its corresponding common scene data is calculated to determine the naturalness deviation degree of each group of traffic participant data.
For example, the difference between each group of traffic participant data and its corresponding common scene data may be determined by calculating the root mean square error between the two; the calculated root mean square error is then the naturalness deviation degree of that group of traffic participant data.
For a group of traffic participant data, the smaller the difference between the data and its corresponding common scene data, the smaller its naturalness deviation degree, the truer the data, and the better its naturalness.
Based on the calculation of the natural deviation degree, when the target traffic participant data is selected from all the groups of traffic participant data, the danger reaction time of the traffic participant data is combined with the natural deviation degree, and the target traffic participant data is selected.
As an alternative implementation, referring to fig. 3, step S204 may be executed to select, from each group of traffic participant data, those whose danger reaction time is less than the set time threshold; step S205 may then be executed to select, from those, the traffic participant data whose naturalness deviation degree is less than a set deviation threshold as the target traffic participant data.
As another alternative, a weighted sum of the danger reaction time and the naturalness deviation degree of each group of traffic participant data may be calculated as that group's evaluation value; traffic participant data whose danger reaction time is smaller than the set time threshold and whose evaluation value is smaller than a set evaluation threshold are then selected as the target traffic participant data.
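A minimal sketch of this weighted-sum selection rule follows; the dictionary keys, weights, and thresholds are illustrative assumptions.

    def select_target_data(groups, w_time=0.5, w_dev=0.5,
                           time_threshold=3.0, score_threshold=2.0):
        """Keep groups whose danger reaction time is below the time threshold
        and whose weighted evaluation value is below the score threshold."""
        targets = []
        for g in groups:
            score = w_time * g["danger_time"] + w_dev * g["naturalness_dev"]
            if g["danger_time"] < time_threshold and score < score_threshold:
                targets.append(g)
        return targets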
Steps S201, S202, S206 in the embodiment shown in fig. 3 correspond to steps S101, S102, S104 in the method embodiment shown in fig. 2, respectively, and please refer to the contents of the method embodiment shown in fig. 2.
The following specifically introduces the automatic driving test scenario generation method provided in the embodiment of the present application, taking the generation of the automatic driving test scenario for front vehicle cut-in and the automatic driving test scenario for front vehicle braking as an example.
For the front vehicle braking scene, the embodiment of the application determines common scene data of the front vehicle braking scene through the following processing of A1-A7:
and A1, retrieving the braking vehicle information from the natural scene traffic flow data set.
Specifically, an acceleration threshold T is first determined, e.g., T = -2 m/s². All vehicles in the data set are then searched to judge whether there is a moment at which a vehicle's acceleration is smaller than T; if so, the vehicle information at that moment is stored as braking vehicle information, including the vehicle ID, frame ID, position, acceleration, and so on.
And A2, according to the braking vehicle information, vehicle braking scene data are obtained by searching the natural scene traffic flow data set.
Specifically, starting from the braking start time recorded in the stored braking vehicle information, b frames backward and f frames forward are successively retrieved along the time axis, while ensuring that the acceleration of every retrieved frame is less than 0 and that the vehicle ID remains unchanged. This yields (b + f) frames of vehicle braking scene data, including the braking vehicle's ID, frame ID, position, acceleration, and so on.
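A sketch of steps A1-A2 over an NGSIM-style table follows. The column names (Vehicle_ID, Frame_ID, v_Acc) and the concrete window sizes are assumptions about the data layout; only T, b, and f are parameters named in the text.

    import pandas as pd

    T, b, f = -2.0, 10, 20  # acceleration threshold (m/s^2) and window sizes

    def retrieve_braking_scenes(df: pd.DataFrame):
        """A1: find vehicles with a frame where acceleration < T;
        A2: keep b frames back / f frames forward while deceleration persists."""
        scenes = []
        for vehicle_id, track in df.groupby("Vehicle_ID"):
            track = track.sort_values("Frame_ID").reset_index(drop=True)
            hits = track.index[track["v_Acc"] < T]
            if len(hits) == 0:
                continue
            start = hits[0]                                # braking start frame
            window = track.iloc[max(0, start - b): start + f]
            window = window[window["v_Acc"] < 0]           # keep decelerating frames
            scenes.append(window)                          # at most (b + f) frames
        return scenes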
And A3, retrieving the host vehicle information corresponding to the braking vehicle information from the vehicle braking scene data. The host vehicle is the vehicle that is behind the braking vehicle and immediately adjacent to it in the front-rear direction.
Specifically, a vehicle that is located behind the braking vehicle and immediately adjacent to it is retrieved from the vehicle braking scene data as the host vehicle corresponding to the braking vehicle. It will be appreciated that when the braking vehicle begins to brake, the host vehicle is the vehicle directly affected by its braking.
When the host vehicle corresponding to the braking vehicle is retrieved from the vehicle braking scene data, the host vehicle information is obtained, namely the host vehicle information corresponding to the braking vehicle is obtained, and specifically comprises information such as vehicle ID, frame ID, position, acceleration and the like.
And A4, retrieving front vehicle braking scene data from the vehicle braking scene data according to braking vehicle information and host vehicle information corresponding to the braking vehicle information.
Specifically, the braking vehicle information and the corresponding host vehicle information are combined, and each group of braking vehicle information and the corresponding host vehicle information are retrieved from the vehicle braking scene data to obtain the front vehicle braking scene data.
And A5, determining the relative distance and the relative speed information of the host vehicle and the braking vehicle in each set of the front vehicle braking scene data.
Specifically, the relative distance between each braking vehicle and its corresponding host vehicle can be calculated from the position information of the two vehicles contained in each group of front vehicle braking scene data, and the relative speed of the two can be calculated from the speed relationship between the braking vehicle and its host vehicle.
And A6, establishing a two-dimensional density distribution histogram model of the relative distance and the relative speed based on the relative distance and the relative speed information of the host vehicle and the braking vehicle in each group of front vehicle braking scene data.
Specifically, for all the acquired front vehicle braking scene data, the relative speed and the relative distance between the host vehicle and the corresponding braking vehicle are respectively used as an abscissa and an ordinate, the occurrence probability of each combination of the relative distance and the relative speed is counted, and a two-dimensional density distribution histogram model of the relative distance and the relative speed is established.
In the two-dimensional density distribution histogram model, the height of the bin corresponding to each (relative distance, relative speed) combination indicates that combination's probability of occurrence.
And A7, extracting the front vehicle braking scene data with the probability density larger than a preset probability density threshold value from each group of front vehicle braking scene data according to the two-dimensional density distribution histogram model of the relative distance and the relative speed, and taking the front vehicle braking scene data as common scene data corresponding to the front vehicle braking scene data.
Specifically, based on the two-dimensional density distribution histogram model of the relative distance and the relative speed, the front vehicle braking scene data with the probability density larger than the preset probability density threshold is extracted from each group of front vehicle braking scene data, and then the common scene data corresponding to the front vehicle braking scene data can be obtained.
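The histogram construction and extraction of steps A6-A7 can be sketched as follows with a 2D density histogram; the bin count and density threshold are illustrative assumptions.

    import numpy as np

    def extract_common_scenes(rel_dist, rel_speed, scenes,
                              bins=20, density_threshold=0.01):
        """Keep the scenes whose (relative distance, relative speed) bin has a
        probability density above the preset threshold."""
        hist, d_edges, v_edges = np.histogram2d(rel_dist, rel_speed,
                                                bins=bins, density=True)
        di = np.clip(np.digitize(rel_dist, d_edges) - 1, 0, bins - 1)
        vi = np.clip(np.digitize(rel_speed, v_edges) - 1, 0, bins - 1)
        keep = hist[di, vi] > density_threshold
        return [scene for scene, k in zip(scenes, keep) if k]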
For the front vehicle cut-in scene, the embodiment of the application determines common scene data of the front vehicle cut-in scene through the following processing of B1-B7:
and B1, retrieving lane change vehicle information from the natural scene traffic flow data set.
Specifically, all vehicles in the data set are searched to judge whether there is a moment at which a vehicle's lane ID changes; if so, the vehicle information at that moment is stored as lane-change vehicle information, including the vehicle ID, frame ID, original lane ID, destination lane ID, and so on.
And B2, according to the lane change vehicle information, vehicle lane change scene data are obtained by searching the natural scene traffic flow data set.
Specifically, starting from the lane-change time recorded in the stored lane-change vehicle information, b frames backward and f frames forward are successively retrieved along the time axis, while ensuring that the lateral coordinate of each retrieved frame is almost unchanged and the lane ID remains unchanged. This yields (b + f) frames of vehicle lane-change scene data, including the cut-in vehicle's ID, frame ID, position, speed, and so on.
And B3, retrieving host vehicle information corresponding to the lane-changing vehicle information from the vehicle lane-changing scene data. Wherein the host vehicle is a vehicle that is located behind and adjacent to the position after the lane change of the lane change vehicle.
Specifically, a vehicle that, after the lane change is completed, is located behind the lane-change vehicle and immediately adjacent to it at the post-lane-change position is retrieved from the vehicle lane-change scene data as the host vehicle corresponding to the lane-change vehicle.
Host vehicle information corresponding to the lane-change vehicle information is retrieved from the vehicle lane-change scene data, ensuring that the frame IDs and timestamps match and that the ID of the vehicle ahead of the host vehicle coincides with the ID of the cut-in vehicle; the host vehicle information at the moment of the cut-in is then determined from the cut-in vehicle's start time information.
And B4, searching and obtaining front vehicle cut-in scene data from the vehicle lane change scene data according to lane change vehicle information and host vehicle information corresponding to the lane change vehicle information.
Specifically, the lane-changing vehicle information and the corresponding host vehicle information are combined, and each group of lane-changing vehicle information and the corresponding host vehicle information are retrieved from the vehicle lane-changing scene data to obtain the front vehicle cut-in scene data.
And B5, determining the relative distance and relative speed information of the host vehicle and the cut-in vehicle in each set of the front vehicle cut-in scene data.
Specifically, for each group of front vehicle cut-in scene data, the relative distance between the two vehicles can be calculated from the position information of the lane-change vehicle and its corresponding host vehicle contained in the scene data, and the relative speed of the two can be calculated from the speed relationship between the lane-change vehicle and its host vehicle.
And B6, establishing a two-dimensional density distribution histogram model of the relative distance and the relative speed based on the relative distance and the relative speed information of the host vehicle and the cut-in vehicle in each group of the previous vehicle cut-in scene data.
Specifically, for all acquired front vehicle cut-in scene data, the relative speed and the relative distance between the host vehicle and the corresponding lane-changing vehicle are respectively used as an abscissa and an ordinate, the occurrence probability of each combination of the relative distance and the relative speed is counted, and a two-dimensional density distribution histogram model of the relative distance and the relative speed is established.
In the two-dimensional density distribution histogram model, the height of the bin corresponding to each (relative distance, relative speed) combination indicates that combination's probability of occurrence.
And B7, extracting the front vehicle cut-in scene data with the probability density larger than a preset probability density threshold value from each group of front vehicle cut-in scene data according to the two-dimensional density distribution histogram model of the relative distance and the relative speed, and taking the front vehicle cut-in scene data as common scene data corresponding to the front vehicle cut-in scene data.
Specifically, based on the two-dimensional density distribution histogram model of the relative distance and the relative speed, the preceding vehicle cut-in scene data with the probability density larger than the preset probability density threshold is extracted from each group of preceding vehicle cut-in scene data, and then the common scene data corresponding to the preceding vehicle cut-in scene data can be obtained.
Based on the common scene data, referring to fig. 4, when step S301 is executed to obtain at least one set of traffic participant data, the embodiment of the present application directly reads the at least one set of traffic participant data from the common scene data.
For example, the position information, the movement speed information, and the movement direction information of the braked vehicle and the corresponding host vehicle are read from the preceding vehicle braking common scene data, or the position information, the movement speed information, and the movement direction information of the lane-change vehicle and the corresponding host vehicle are read from the preceding vehicle cut-in common scene data.
It can be understood that the two traffic participants in the traffic participant data acquired by the embodiment of the present application are two traffic participants located in the same lane or adjacent lanes and having collision risks.
According to the above manner, after acquiring multiple sets of traffic participant data, the automatic driving test scene is generated through the following processing of steps S302-S307:
s302, predicting the movement track intersection point of a first traffic participant and a second traffic participant according to the position information, the movement speed information and the movement direction information of the first traffic participant and the second traffic participant in the traffic participant data.
Specifically, as described above, each set of data of the transportation participants includes position information, movement speed information, movement direction information, and the like of two transportation participants. The two traffic participants are respectively marked as a first traffic participant and a second traffic participant. Then, according to the position, the movement speed and the movement direction of the first traffic participant and the second traffic participant in each group of traffic participant data, the movement track intersection point of the two traffic participants can be predicted.
According to the embodiment of the application, the motion track intersection points of two traffic participants are respectively predicted under the two conditions that the two traffic participants are located in the same lane or adjacent lanes.
If the first traffic participant and the second traffic participant in the traffic participant data are located in the same lane, predicting a position point where the first traffic participant and the second traffic participant collide according to the position information, the movement speed information and the movement direction information of the first traffic participant and the second traffic participant, and taking the position point as a movement track intersection point of the first traffic participant and the second traffic participant.
Specifically, if the movement directions of the first and second traffic participants are opposite and their movement trajectories do not intersect, or if their movement directions are the same and the rear participant is slower than the front one, the two have no movement trajectory intersection.
If the two traffic participants are in the same lane and travel in the same direction with the rear participant faster than the front one, or if they are in the same lane and travel toward each other, a collision is possible. In either case, the intersection point of their movement trajectories can be calculated, and that intersection point is the position point at which the two collide.
Firstly, according to the position information, the movement speed information and the movement direction information of the first traffic participant and the second traffic participant, the collision time of the first traffic participant and the second traffic participant is calculated and determined.
And then calculating and determining the position point of the collision of the first traffic participant and the second traffic participant according to the position information, the movement speed information, the movement direction information and the collision time of at least one of the first traffic participant and the second traffic participant.
For example, as shown in fig. 5, the two vehicles in the figure represent the first and second traffic participants in the traffic participant data. From their position information and movement speed information, the speeds and relative distance of the two participants can be determined. If the speed of the first traffic participant is v1, the speed of the second traffic participant is v2, v1 is greater than v2, and the relative distance between the two vehicles is r1, then the time until the first traffic participant collides with the second can be calculated as r1 / (v1 - v2).
Then, from the current position of the first and/or second traffic participant, its movement speed, and the collision time, the position of the first and/or second traffic participant at the moment of collision can be calculated, i.e., the position point at which the collision occurs is determined.
With reference to the above processing, the intersection point of the motion tracks of the two vehicles can be calculated and determined for the braking scene of the front vehicle.
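The same-lane case of fig. 5 works out as in the sketch below (the numbers are illustrative):

    def same_lane_collision(x_rear, v1, x_front, v2):
        """Collision time and position for two same-direction vehicles in one
        lane; returns None when the rear vehicle (speed v1) never catches up."""
        if v1 <= v2:
            return None
        r1 = x_front - x_rear        # current relative distance
        t = r1 / (v1 - v2)           # collision time, r1 / (v1 - v2)
        return t, x_front + v2 * t   # front vehicle's position at impact

    # Rear at 0 m doing 30 m/s, front at 25 m doing 22 m/s:
    # t = 25 / 8 = 3.125 s, impact at 25 + 22 * 3.125 = 93.75 m.
    print(same_lane_collision(0.0, 30.0, 25.0, 22.0))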
If the first traffic participant and the second traffic participant in the traffic participant data are located in adjacent lanes, predicting an intersection point of the traveling routes of the first traffic participant and the second traffic participant according to the position information and the movement direction information of the first traffic participant and the second traffic participant, and taking the intersection point as a movement track intersection point of the first traffic participant and the second traffic participant.
The travel routes of the first traffic participant and the second traffic participant refer to extension lines of the motion trails of the first traffic participant and the second traffic participant along the motion direction.
Specifically, if the movement directions of the first and second traffic participants are opposite and their movement trajectories do not intersect, or if their movement directions are the same and the rear participant's movement speed is lower than the front participant's, the two have no movement trajectory intersection.
If the two traffic participants are located in adjacent lanes and move toward each other, a yaw by either one toward the other's lane gives their travel routes an intersection; likewise, if they move in the same direction and the rear participant is faster than the front one, a yaw by either one toward the other's lane makes their travel routes intersect.
For example, as shown in fig. 6, two vehicles in the graph represent a first traffic participant and a second traffic participant in the traffic participant data, respectively. Assuming that the vehicle speed of the first traffic participant is v1 and the vehicle speed of the second traffic participant is v2, if v1 is greater than v2, if the opposite lane of any one direction is yawed, the two travel routes will intersect, and there is a risk of collision.
Therefore, for the case in which the two traffic participants are located in adjacent lanes, the embodiment of the present application offsets the movement direction of the first and/or second traffic participant by a set angle toward the lane in which the other participant is located, obtaining the updated movement direction of the first and/or second traffic participant;
the intersection point of the travel routes of the first and second traffic participants is then determined from their position information and updated movement direction information.
Referring to fig. 6, the movement direction of the first traffic participant is shifted by a small angle θ toward the lane in which the second traffic participant is located, where θ is between 0 and 10 degrees. Once the movement direction of the first traffic participant is deflected in this way, the travel routes of the first and second traffic participants intersect, and the position of the intersection point of the two travel routes can be determined.
Referring to the above processing, the intersection of the travel routes of the two vehicles can also be calculated and determined for the front vehicle cut-in scene. In the specific calculation, the cut-in angle of the front vehicle is the offset angle θ applied to the movement direction; once the offset angle θ of the front vehicle's travel route is determined, the intersection of the travel routes of the front and rear vehicles can be determined.
And S303, calculating and determining the longest consumed time of the first traffic participant and the second traffic participant moving to the motion track intersection point according to the position information and the motion speed information of the first traffic participant and the second traffic participant, and taking the longest consumed time as the dangerous response time corresponding to the traffic participant data.
Specifically, when the first and second traffic participants in the traffic participant data are located in the same lane, the intersection point of their motion trajectories is their collision position point. The time each participant takes to move to this intersection point can be calculated from its current position coordinates, its movement speed, and the position coordinates of the intersection point, and the longer of the two times is taken as the danger reaction time corresponding to the traffic participant data.
For the case in which the first traffic participant and the second traffic participant in the traffic participant data are located in adjacent lanes, the distance d1 from the first traffic participant to the intersection of the two travel routes and the distance d2 from the second traffic participant to that intersection can be calculated and determined from the current positions of the two participants. Further, the time t1 taken by the first traffic participant to move from its current position to the intersection and the time t2 taken by the second traffic participant to move from its current position to the intersection can be calculated from d1, d2 and the respective movement speeds. The larger of t1 and t2 is then taken as the danger reaction time corresponding to the traffic participant data.
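As an illustrative sketch of this step (names and signature are assumed, not taken from the embodiment), the danger reaction time for the adjacent-lane case can be computed from the intersection point found above:

```python
import numpy as np

def danger_reaction_time(p1, v1, p2, v2, intersection):
    """Longest time for the two participants to reach the route intersection.

    p1, p2: (x, y) current positions; v1, v2: movement speeds (> 0).
    """
    d1 = np.linalg.norm(np.asarray(intersection, float) - np.asarray(p1, float))
    d2 = np.linalg.norm(np.asarray(intersection, float) - np.asarray(p2, float))
    t1 = d1 / v1  # time for the first participant to reach the intersection
    t2 = d2 / v2  # time for the second participant to reach the intersection
    return max(t1, t2)  # the larger elapsed time is the danger reaction time
```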
According to the above processing for calculating the time consumed by the two traffic participants to move to the intersection point of their motion tracks, the intersection point of the motion tracks of the two traffic participants is determined first, the time consumed by each of the two traffic participants to move to that intersection point is then calculated, and the larger of the two consumed times is taken as the danger reaction time of the traffic scene formed by the two traffic participants, thereby quantifying the danger degree of the scene.
The above calculation scheme for the danger reaction time is flexible in determining the track intersection point, so it is applicable whether the two traffic participants are located in the same lane or in different lanes. Therefore, the method for quantifying the danger degree of a traffic scene provided by the embodiment of the present application can be applied to more traffic scenes, and can accordingly be used to identify more types of dangerous traffic scenes.
S304, calculating the root mean square error between each group of traffic participant data and the common scene data corresponding to that group, and taking the root mean square error as the naturalness deviation degree of each group of traffic participant data.
Specifically, the amount of common scene data corresponding to the traffic participant data is usually large. All the common scene data corresponding to the traffic participant data may be collected to obtain a common scene data set Ω, and this set may be regarded as the scene data space of the traffic scene corresponding to the traffic participant data.
Assuming that a certain set of traffic participant data is x, the root mean square error between x and its corresponding common scene data can be calculated, for example, as the normalized deviation of x from the nearest common scene in Ω:

N(x) = min_{y ∈ Ω} √( (1/d) · Σ_{i=1}^{d} ((x_i − y_i) / σ_i)² )

where y is common scene data in the common scene data set Ω, d is the data dimension of the scene data, and the σ_i are normalization parameters analyzed from the natural data set.
With reference to the above processing, the degree of deviation of naturalness of each set of the data of the traffic participants can be calculated and determined separately.
S305, respectively calculating the weighted sum of the danger reaction time and the naturalness deviation degree of each group of traffic participant data to obtain the evaluation value of each group of traffic participant data.
In particular, assume that the danger reaction time of the traffic participant data x is expressed as T(x) and its naturalness deviation degree is expressed as N(x). The embodiment of the present application sets a weight w for the naturalness deviation degree and then performs a weighted summation of the danger reaction time and the naturalness deviation degree to obtain the evaluation value of the traffic participant data:

F(x) = T(x) + w · N(x)
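The two quantities can be combined in a few lines of Python; this sketch assumes the nearest-neighbour form of the naturalness deviation reconstructed above and an illustrative weight value:

```python
import numpy as np

def naturalness_deviation(x, omega, sigma):
    """Normalized RMSE between scenario x and its nearest common scene.

    x: (d,) scenario vector; omega: (m, d) common scene data set;
    sigma: (d,) normalization parameters from the natural data set.
    """
    diffs = (np.asarray(omega, float) - np.asarray(x, float)) / np.asarray(sigma, float)
    rmse = np.sqrt(np.mean(diffs ** 2, axis=1))  # one RMSE per common scene
    return float(rmse.min())

def evaluation_value(reaction_time, deviation, weight=1.0):
    # F(x) = T(x) + w * N(x): smaller values indicate scenes that are both
    # more dangerous and closer to naturally occurring driving data.
    return reaction_time + weight * deviation
```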
And S306, selecting the traffic participant data with the evaluation value smaller than the set evaluation value threshold value from all the groups of traffic participant data as target traffic participant data.
Specifically, after the evaluation value of each set of traffic participant data is calculated and determined through the processing of step S305, the sets of traffic participant data may be screened, and the traffic participant data whose evaluation value is smaller than the set evaluation value threshold may be selected from the sets as the target traffic participant data.
Or selecting the traffic participant data with the danger reaction time smaller than the set time threshold and the evaluation value smaller than the set evaluation value threshold from the groups of traffic participant data as the target traffic participant data.
As an alternative embodiment, the evaluation values of the selected traffic participant data may also be used to guide the process of acquiring traffic participant data. For example, traffic participant data are searched from the common scene data by means of a Bayesian optimizer; meanwhile, the evaluation value of each searched datum is calculated and fed back to the Bayesian optimizer as a reference input. The Bayesian optimizer can then adjust its subsequent search path according to the evaluation values of the already-searched traffic participant data, which helps to narrow the search space and to search more efficiently for traffic participant data with smaller evaluation values.
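A minimal sketch of this search loop, assuming the scikit-optimize package provides the Bayesian optimizer, and using a toy stand-in objective in place of the real scoring pipeline (the bounds, parameters and stand-in formulas are illustrative only):

```python
from skopt import gp_minimize

# Search over (relative distance [m], relative speed [m/s]) between the
# two traffic participants; the bounds are illustrative.
bounds = [(5.0, 80.0), (-15.0, 15.0)]

def objective(params):
    rel_distance, rel_speed = params
    # Toy stand-in for the real pipeline described above: the danger
    # reaction time shrinks as the gap closes faster, and the naturalness
    # deviation grows away from typical following conditions.
    reaction_time = rel_distance / max(abs(rel_speed), 1e-3)
    deviation = abs(rel_distance - 30.0) / 30.0 + abs(rel_speed) / 15.0
    return reaction_time + deviation  # evaluation value to be minimized

result = gp_minimize(objective, bounds, n_calls=50, random_state=0)
print("best parameters:", result.x, "best evaluation value:", result.fun)
```

Because gp_minimize feeds every evaluated point back into its surrogate model, low-evaluation regions of the scene space are sampled more densely as the search proceeds, which is the search-narrowing effect described above.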
And S307, generating an automatic driving test scene according to the target traffic participant data.
Through the processing of the steps S301-S306, a plurality of groups of target traffic participant data can be obtained.
When the obtained multiple groups of target traffic participant data are used for generating the automatic driving test scene, the obtained multiple groups of target traffic participant data are firstly subjected to cluster screening, and the higher-quality traffic participant data are further selected for generating the automatic driving test scene.
Specifically, firstly, clustering is carried out on a plurality of groups of target traffic participant data to obtain at least one target traffic participant data cluster.
And then selecting a target traffic participant data cluster with the minimum mean evaluation value from all the target traffic participant data clusters, and generating an automatic driving test scene according to the target traffic participant data cluster with the minimum mean evaluation value.
As an optional clustering method, the embodiment of the application adopts the K-means clustering algorithm to cluster the large number of searched target traffic participant data. K-means is a classic partition-based clustering method whose basic idea is to search iteratively for K clusters such that the loss function corresponding to the clustering result is minimized. The loss function can be defined as the sum of squared distances between each sample and the center point of the cluster to which the sample belongs:

J = Σ_i ‖ x_i − μ_{c_i} ‖²

where the sum runs over all samples, x_i represents the i-th sample, c_i is the cluster to which x_i belongs, μ_{c_i} represents the center point of that cluster, and K is the total number of clusters.
The specific method comprises the following steps:
(1) randomly initialize K target traffic participant data samples as the cluster centers μ_1, μ_2, …, μ_K;
(2) for each target traffic participant data sample x_i, calculate its distances to the K cluster centers and assign it to the class corresponding to the cluster center with the minimum distance;
(3) for each class cluster C_j, recalculate its cluster center μ_j as the mean of the samples currently assigned to the cluster;
(4) iterate steps (2) and (3) until the loss function J converges.
For each generated class cluster C_j, the evaluation value mean is calculated:

M_j = (1/|C_j|) · Σ_{x ∈ C_j} F(x)

where |C_j| is the total number of samples in the class cluster C_j.
And finally, the class cluster with the minimum evaluation value mean M_j is selected from all the class clusters, and the target traffic participant data in that cluster are used as the data for generating the automatic driving test scene.
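A compact sketch of this cluster-and-select step, using scikit-learn's KMeans as a stand-in for the procedure above (the array names, cluster count and placeholder data are assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans

def select_best_cluster(data, scores, k=5, seed=0):
    """Cluster the target data and return the cluster with the lowest
    mean evaluation value M_j."""
    labels = KMeans(n_clusters=k, random_state=seed, n_init=10).fit_predict(data)
    means = np.array([scores[labels == j].mean() for j in range(k)])
    best = int(means.argmin())  # cluster with the minimum M_j
    return data[labels == best]

# Example with random placeholder data standing in for searched scenarios:
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 4))      # 200 searched scenarios, 4 dimensions
scores = rng.uniform(1.0, 5.0, 200)   # their evaluation values F(x)
scene_data = select_best_cluster(data, scores)
```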
A traffic scene is first constructed in highway-env, for example a traffic scene in which two vehicles adjacent front and rear travel in the same direction, and the traffic scene is then initialized with the selected target traffic participant data, so that an automatic driving test scene with a high danger degree can be generated.
For example, for the front-vehicle braking and front-vehicle cut-in traffic scenes, simple front-vehicle braking and front-vehicle cut-in scenes are first constructed in highway-env, and the constructed scenes are then initialized with the front-vehicle braking traffic participant data and the front-vehicle cut-in traffic participant data obtained through the above steps, so that front-vehicle braking and front-vehicle cut-in automatic driving test scenes with a higher danger degree can be generated.
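A rough sketch of this initialization step, assuming the highway-env package is installed; the attribute names (road.vehicles, position, speed) follow highway-env's public vehicle model, but the exact registration and reset hooks may differ between versions, and the two state tuples are illustrative:

```python
import gymnasium as gym
import highway_env  # noqa: F401  (importing registers the highway environments)

env = gym.make("highway-v0")
obs, info = env.reset(seed=0)

# Illustrative target traffic participant data for two same-direction
# vehicles in the same lane: (longitudinal x, lateral y, speed).
follower = (35.0, 0.0, 28.0)  # faster rear vehicle
leader = (60.0, 0.0, 22.0)    # slower front vehicle

for vehicle, (x, y, speed) in zip(env.unwrapped.road.vehicles[:2],
                                  (follower, leader)):
    vehicle.position[0], vehicle.position[1] = x, y
    vehicle.speed = speed

obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
```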
Exemplary devices
Correspondingly, an embodiment of the present application further provides an automatic driving test scenario generation apparatus, as shown in fig. 7, the apparatus includes:
the data acquisition unit 100 is configured to acquire at least one set of traffic participant data, where each set of traffic participant data includes position information, movement speed information, and movement direction information of two traffic participants respectively;
the calculating unit 110 is configured to calculate time consumed for two traffic participants in each group of traffic participant data to move to a danger occurrence point, and obtain danger response time corresponding to each group of traffic participant data; the dangerous occurrence point comprises a position point when two traffic participants form a dangerous driving scene, and in the dangerous driving scene, the distance between the two traffic participants is smaller than a set distance threshold value;
the data screening unit 120 is configured to obtain target traffic participant data by selecting traffic participant data with a dangerous response time smaller than a set time threshold from each group of traffic participant data;
and a scene generating unit 130, configured to generate an automatic driving test scene according to the target traffic participant data.
As an optional implementation, the computing unit 110 is further configured to:
calculating the difference between each group of traffic participant data and the common scene data corresponding to each group of traffic participant data, and determining the naturalness deviation degree of each group of traffic participant data;
common scene data corresponding to the traffic participant data comprise data of a target natural traffic scene, wherein the probability of occurrence in the natural traffic scene exceeds a set probability threshold; wherein the target natural traffic scene is a natural traffic scene formed by two traffic participants in the traffic participant data;
the obtaining of the target traffic participant data by at least selecting traffic participant data with a dangerous reaction time less than a set time threshold from the traffic participant data groups comprises:
selecting traffic participant data with the dangerous reaction time less than a set time threshold from each group of traffic participant data;
and selecting the traffic participant data with the naturalness deviation degree smaller than the set deviation threshold value from the traffic participant data with the danger reaction time smaller than the set time threshold value as the target traffic participant data.
As an optional implementation, the computing unit 110 is further configured to:
calculating the difference between each group of traffic participant data and the common scene data corresponding to each group of traffic participant data, and determining the naturalness deviation degree of each group of traffic participant data;
common scene data corresponding to the traffic participant data comprise data of a target natural traffic scene, wherein the probability of occurrence in the natural traffic scene exceeds a set probability threshold; wherein the target natural traffic scene is a natural traffic scene formed by two traffic participants in the traffic participant data;
the obtaining of the target traffic participant data by at least selecting traffic participant data with a dangerous reaction time less than a set time threshold from the traffic participant data groups comprises:
respectively calculating the weighted sum of the danger reaction time and the naturalness deviation degree of each group of traffic participant data to obtain the evaluation value of each group of traffic participant data;
and selecting the traffic participant data with the dangerous response time smaller than the set time threshold and the evaluation value smaller than the set evaluation value threshold from all the groups of traffic participant data as the target traffic participant data.
As an optional implementation manner, calculating the time consumed by the two traffic participants in each group of traffic participant data to move to the danger occurrence point respectively, and obtaining the danger reaction time corresponding to each group of traffic participant data, includes:
respectively calculating the longest consumed time of the two traffic participants in each group of traffic participant data moving to the intersection point of the motion tracks of the two traffic participants, as the danger reaction time corresponding to each group of traffic participant data; the intersection point of the motion tracks of the two traffic participants is the position point, determined by prediction, at which the two traffic participants collide, or the predicted intersection point of the travel routes of the two traffic participants.
As an optional implementation manner, respectively calculating the longest consumed time of two traffic participants moving to the intersection point of the motion trajectories of the two traffic participants in each group of traffic participant data, as the dangerous reaction time corresponding to each group of traffic participant data, includes:
predicting the motion track intersection point of a first traffic participant and a second traffic participant according to the position information, the motion speed information and the motion direction information of the first traffic participant and the second traffic participant in the traffic participant data;
and calculating and determining the longest consumed time of the first traffic participant and the second traffic participant moving to the motion track intersection point according to the position information and the motion speed information of the first traffic participant and the second traffic participant, and taking the longest consumed time as the dangerous reaction time corresponding to the traffic participant data.
As an alternative implementation manner, predicting the intersection point of the movement tracks of the first traffic participant and the second traffic participant according to the position information, the movement speed information and the movement direction information of the first traffic participant and the second traffic participant in the traffic participant data includes:
if a first traffic participant and a second traffic participant in traffic participant data are located in the same lane, predicting a position point where the first traffic participant and the second traffic participant collide according to position information, motion speed information and motion direction information of the first traffic participant and the second traffic participant, and taking the position point as a motion track intersection point of the first traffic participant and the second traffic participant;
if a first traffic participant and a second traffic participant in the traffic participant data are located in adjacent lanes, predicting an intersection point of the traveling routes of the first traffic participant and the second traffic participant according to the position information and the movement direction information of the first traffic participant and the second traffic participant, and taking the intersection point as a movement track intersection point of the first traffic participant and the second traffic participant.
As an alternative implementation, predicting the position point of the collision of the first traffic participant and the second traffic participant according to the position information, the movement speed information and the movement direction information of the first traffic participant and the second traffic participant comprises:
calculating and determining the collision time of the first traffic participant and the second traffic participant according to the position information, the movement speed information and the movement direction information of the first traffic participant and the second traffic participant;
and calculating and determining the position point of the collision of the first traffic participant and the second traffic participant according to the position information, the movement speed information, the movement direction information and the collision time of at least one of the first traffic participant and the second traffic participant.
As an alternative implementation, predicting the intersection point of the travel routes of the first traffic participant and the second traffic participant according to the position information and the movement direction information of the first traffic participant and the second traffic participant comprises:
offsetting the movement direction of the first traffic participant and/or the second traffic participant towards a lane where the opposite side is located by a set angle to obtain an updated movement direction of the first traffic participant and/or the second traffic participant;
and determining the intersection point of the traveling routes of the first traffic participant and the second traffic participant according to the position information and the updated motion direction information of the first traffic participant and the second traffic participant.
As an optional implementation manner, the calculating a difference between each group of traffic participant data and the common scene data corresponding to each group of traffic participant data, and determining a naturalness deviation degree of each group of traffic participant data includes:
and calculating the root mean square error of each group of traffic participant data and the common scene data corresponding to each group of traffic participant data as the naturalness deviation degree of each group of traffic participant data.
In an optional embodiment, the target traffic participant data is a plurality of groups; generating an automatic driving test scene according to the target traffic participant data, wherein the automatic driving test scene comprises the following steps:
clustering multiple groups of target traffic participant data to obtain at least one target traffic participant data cluster;
selecting a target traffic participant data cluster with the minimum mean evaluation value from all target traffic participant data clusters;
and generating an automatic driving test scene according to the target traffic participant data cluster with the minimum mean value of the evaluation values.
As an optional implementation, the acquiring at least one set of traffic participant data includes:
and searching at least one group of traffic participant data from the common scene data.
As an alternative implementation, when the two traffic participants in the traffic participant data form a front-vehicle braking scene, the common scene data corresponding to the traffic participant data are obtained by:
retrieving front vehicle braking scene data from a natural scene traffic flow data set, and determining the relative distance and relative speed information of a main vehicle and a braking vehicle in each group of front vehicle braking scene data;
establishing a two-dimensional density distribution histogram model of the relative distance and the relative speed based on the relative distance and the relative speed information of the host vehicle and the braking vehicle in each group of front vehicle braking scene data;
and extracting the front vehicle braking scene data with the probability density larger than a preset probability density threshold value from each group of front vehicle braking scene data according to the two-dimensional density distribution histogram model of the relative distance and the relative speed, and taking the front vehicle braking scene data as common scene data corresponding to the front vehicle braking scene data.
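As an illustrative sketch of this density model (the bin count and the threshold are assumptions), the histogram construction and the extraction step can be written with NumPy as follows:

```python
import numpy as np

def extract_common_scenes(rel_dist, rel_speed, bins=20, density_threshold=1e-3):
    """Keep the braking-scene samples whose (relative distance,
    relative speed) bin has probability density above the threshold."""
    hist, d_edges, v_edges = np.histogram2d(rel_dist, rel_speed,
                                            bins=bins, density=True)
    # Map every sample to its bin indices, clipping to stay in range.
    i = np.clip(np.digitize(rel_dist, d_edges) - 1, 0, bins - 1)
    j = np.clip(np.digitize(rel_speed, v_edges) - 1, 0, bins - 1)
    keep = hist[i, j] > density_threshold
    return keep  # boolean mask selecting the common scene data
```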
As an alternative embodiment, the method for retrieving the front vehicle braking scene data from the natural scene traffic stream data set includes:
retrieving and obtaining braking vehicle information from a natural scene traffic flow data set;
according to the braking vehicle information, vehicle braking scene data are retrieved from the natural scene traffic flow data set;
retrieving, from the vehicle braking scene data, host vehicle information corresponding to the braking vehicle information; wherein the host vehicle is the vehicle located behind the braking vehicle and immediately adjacent to it;
and according to the braking vehicle information and the host vehicle information corresponding to the braking vehicle information, searching the vehicle braking scene data to obtain the front vehicle braking scene data.
As an alternative implementation, when the two traffic participants in the traffic participant data form a front-vehicle cut-in scene, the common scene data corresponding to the traffic participant data are obtained by:
retrieving and obtaining front vehicle cut-in scene data from a natural scene traffic flow data set, and determining the relative distance and relative speed information between a host vehicle and a cut-in vehicle in each group of front vehicle cut-in scene data;
establishing a two-dimensional density distribution histogram model of the relative distance and the relative speed based on the relative distance and the relative speed information of the host vehicle and the cut-in vehicle in each group of the previous vehicle cut-in scene data;
and extracting the front vehicle cut-in scene data with the probability density larger than a preset probability density threshold value from each group of front vehicle cut-in scene data according to the two-dimensional density distribution histogram model of the relative distance and the relative speed, and taking the front vehicle cut-in scene data as common scene data corresponding to the front vehicle cut-in scene data.
As an alternative embodiment, the method for retrieving the preceding vehicle cut-in scene data from the natural scene traffic stream data set includes:
retrieving and obtaining lane-changing vehicle information from a natural scene traffic flow data set;
according to the lane-changing vehicle information, vehicle lane-changing scene data are obtained by retrieving from the natural scene traffic flow data set;
retrieving host vehicle information corresponding to the lane-change vehicle information from the vehicle lane-change scene data; wherein the host vehicle is the vehicle located behind the lane-change position of the lane-changing vehicle and immediately adjacent to that position;
and according to the lane-changing vehicle information and the host vehicle information corresponding to the lane-changing vehicle information, searching and obtaining the front vehicle cut-in scene data from the vehicle lane-changing scene data.
The automatic driving test scenario generation device provided by the embodiment belongs to the same application concept as the automatic driving test scenario generation method provided by the embodiment of the present application, can execute the automatic driving test scenario generation method provided by any embodiment of the present application, and has functional modules and beneficial effects corresponding to the execution method. For details of the automatic driving test scenario generation method provided in the foregoing embodiment of the present application, reference may be made to specific processing contents of the automatic driving test scenario generation method, which are not described herein again.
Exemplary electronic device
Another embodiment of the present application further provides an electronic device, as shown in fig. 8, the electronic device including:
a memory 200 and a processor 210;
wherein, the memory 200 is connected to the processor 210 for storing programs;
the processor 210 is configured to execute the program stored in the memory 200 to implement the automatic driving test scenario generation method disclosed in any of the above embodiments.
Specifically, the electronic device may further include: a bus, a communication interface 220, an input device 230, and an output device 240.
The processor 210, the memory 200, the communication interface 220, the input device 230, and the output device 240 are connected to each other through a bus. Wherein:
a bus may include a path that transfers information between components of a computer system.
The processor 210 may be a general-purpose processor, such as a general-purpose Central Processing Unit (CPU) or a microprocessor, or may be an application-specific integrated circuit (ASIC) or one or more integrated circuits for controlling the execution of programs according to the present application. It may also be a Digital Signal Processor (DSP), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components.
The processor 210 may include a main processor and may also include a baseband chip, a modem, and the like.
The memory 200 stores programs for executing the technical solution of the present application, and may also store an operating system and other key services. In particular, the programs may include program code, and the program code includes computer operating instructions. More specifically, the memory 200 may include a read-only memory (ROM), other types of static storage devices that may store static information and instructions, a random access memory (RAM), other types of dynamic storage devices that may store information and instructions, disk storage, flash memory, and so forth.
The input device 230 may include a means for receiving data and information input by a user, such as a keyboard, mouse, camera, scanner, light pen, voice input device, touch screen, pedometer or gravity sensor, etc.
Output device 240 may include equipment that allows output of information to a user, such as a display screen, a printer, speakers, and the like.
Communication interface 220 may include any device that uses any transceiver or the like to communicate with other devices or communication networks, such as an ethernet network, a Radio Access Network (RAN), a Wireless Local Area Network (WLAN), etc.
The processor 210 executes the program stored in the memory 200 and invokes other devices, which can be used to implement the steps of any one of the automatic driving test scenario generation methods provided in the above embodiments of the present application.
Exemplary computer program product and storage Medium
In addition to the above-described methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the autopilot test scenario generation method described in the "exemplary methods" section of this specification above.
The computer program product may be written with program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, an embodiment of the present application may also be a storage medium having stored thereon a computer program that is executed by a processor to perform the steps in the automatic driving test scenario generation method described in the "exemplary methods" section above in this specification.
While, for purposes of simplicity of explanation, the foregoing method embodiments have been described as a series of acts or combination of acts, it will be appreciated by those skilled in the art that the present application is not limited by the order of acts or acts described, as some steps may occur in other orders or concurrently with other steps in accordance with the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The steps in the method of each embodiment of the present application may be sequentially adjusted, combined, and deleted according to actual needs, and technical features described in each embodiment may be replaced or combined.
The modules and sub-modules in the device and the terminal in the embodiments of the application can be combined, divided and deleted according to actual needs.
In the several embodiments provided in the present application, it should be understood that the disclosed terminal, apparatus and method may be implemented in other manners. For example, the above-described terminal embodiments are merely illustrative, and for example, the division of a module or a sub-module is only one logical division, and there may be other divisions when the terminal is actually implemented, for example, a plurality of sub-modules or modules may be combined or integrated into another module, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The modules or sub-modules described as separate parts may or may not be physically separate, and parts that are modules or sub-modules may or may not be physical modules or sub-modules, may be located in one place, or may be distributed over a plurality of network modules or sub-modules. Some or all of the modules or sub-modules can be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, each functional module or sub-module in the embodiments of the present application may be integrated into one processing module, or each module or sub-module may exist alone physically, or two or more modules or sub-modules may be integrated into one module. The integrated modules or sub-modules may be implemented in the form of hardware, or may be implemented in the form of software functional modules or sub-modules.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software unit executed by a processor, or in a combination of the two. The software unit may reside in Random Access Memory (RAM), flash memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (18)

1. An automatic driving test scenario generation method is characterized by comprising the following steps:
acquiring at least one group of traffic participant data, wherein each group of traffic participant data respectively comprises position information, movement speed information and movement direction information of two traffic participants;
respectively calculating the time consumed by two traffic participants in each group of traffic participant data to move to a danger occurrence point to obtain the danger reaction time corresponding to each group of traffic participant data; the dangerous occurrence point comprises a position point when two traffic participants form a dangerous driving scene, and in the dangerous driving scene, the distance between the two traffic participants is smaller than a set distance threshold value;
obtaining target traffic participant data at least by selecting traffic participant data with the dangerous reaction time less than a set time threshold from each group of traffic participant data;
and generating an automatic driving test scene according to the target traffic participant data.
2. The method of claim 1, further comprising:
calculating the difference between each group of traffic participant data and the common scene data corresponding to each group of traffic participant data, and determining the naturalness deviation degree of each group of traffic participant data;
common scene data corresponding to the traffic participant data comprise data of a target natural traffic scene, wherein the probability of occurrence in the natural traffic scene exceeds a set probability threshold; wherein the target natural traffic scene is a natural traffic scene formed by two traffic participants in the traffic participant data;
the obtaining of the target traffic participant data by at least selecting traffic participant data with a dangerous reaction time less than a set time threshold from the traffic participant data groups comprises:
selecting traffic participant data with the dangerous reaction time less than a set time threshold from each group of traffic participant data;
and selecting the traffic participant data with the naturalness deviation degree smaller than the set deviation threshold value from the traffic participant data with the danger reaction time smaller than the set time threshold value as the target traffic participant data.
3. The method of claim 1, further comprising:
calculating the difference between each group of traffic participant data and the common scene data corresponding to each group of traffic participant data, and determining the naturalness deviation degree of each group of traffic participant data;
common scene data corresponding to the traffic participant data comprise data of a target natural traffic scene, wherein the probability of occurrence in the natural traffic scene exceeds a set probability threshold; wherein the target natural traffic scene is a natural traffic scene formed by two traffic participants in the traffic participant data;
the obtaining of the target traffic participant data by at least selecting traffic participant data with a dangerous reaction time less than a set time threshold from the traffic participant data groups comprises:
respectively calculating the weighted sum of the danger reaction time and the naturalness deviation degree of each group of traffic participant data to obtain the evaluation value of each group of traffic participant data;
and selecting the traffic participant data with the dangerous reaction time smaller than a set time threshold and the evaluation value smaller than a set evaluation value threshold from all the groups of traffic participant data as target traffic participant data.
4. The method according to any one of claims 1 to 3, wherein calculating time consumed by the two traffic participants in each group of traffic participant data to move to the danger occurrence point respectively, to obtain the danger reaction time corresponding to each group of traffic participant data, comprises:
respectively calculating the longest consumed time of the two traffic participants in each group of traffic participant data moving to the intersection point of the motion tracks of the two traffic participants, as the danger reaction time corresponding to each group of traffic participant data; the intersection point of the motion tracks of the two traffic participants is the position point, determined by prediction, at which the two traffic participants collide, or the predicted intersection point of the travel routes of the two traffic participants.
5. The method of claim 4, wherein calculating the longest elapsed time from the movement of the two traffic participants to the intersection point of the movement tracks of the two traffic participants in each set of traffic participant data as the corresponding danger response time of each set of traffic participant data comprises:
predicting the motion track intersection point of a first traffic participant and a second traffic participant according to the position information, the motion speed information and the motion direction information of the first traffic participant and the second traffic participant in the traffic participant data;
and calculating and determining the longest consumed time of the first traffic participant and the second traffic participant moving to the motion track intersection point according to the position information and the motion speed information of the first traffic participant and the second traffic participant, and taking the longest consumed time as the dangerous reaction time corresponding to the traffic participant data.
6. The method of claim 5, wherein predicting the intersection point of the movement tracks of the first traffic participant and the second traffic participant according to the position information, the movement speed information and the movement direction information of the first traffic participant and the second traffic participant in the traffic participant data comprises:
if a first traffic participant and a second traffic participant in traffic participant data are located in the same lane, predicting a position point where the first traffic participant and the second traffic participant collide according to position information, motion speed information and motion direction information of the first traffic participant and the second traffic participant, and taking the position point as a motion track intersection point of the first traffic participant and the second traffic participant;
if a first traffic participant and a second traffic participant in the traffic participant data are located in adjacent lanes, predicting an intersection point of the traveling routes of the first traffic participant and the second traffic participant according to the position information and the movement direction information of the first traffic participant and the second traffic participant, and taking the intersection point as a movement track intersection point of the first traffic participant and the second traffic participant.
7. The method of claim 6, wherein predicting the location point of the collision of the first traffic participant and the second traffic participant according to the location information, the movement speed information and the movement direction information of the first traffic participant and the second traffic participant comprises:
according to the position information, the movement speed information and the movement direction information of the first traffic participant and the second traffic participant, calculating and determining the collision time of the first traffic participant and the second traffic participant;
and calculating and determining the position point of the collision of the first traffic participant and the second traffic participant according to the position information, the movement speed information, the movement direction information and the collision time of at least one of the first traffic participant and the second traffic participant.
8. The method of claim 6, wherein predicting the intersection of the travel routes of the first and second traffic participants based on the position information and the movement direction information of the first and second traffic participants comprises:
offsetting the movement direction of the first traffic participant and/or the second traffic participant towards a lane where the opposite side is located by a set angle to obtain an updated movement direction of the first traffic participant and/or the second traffic participant;
and determining the intersection point of the traveling routes of the first traffic participant and the second traffic participant according to the position information and the updated motion direction information of the first traffic participant and the second traffic participant.
9. The method according to claim 2 or 3, wherein the calculating the difference between each group of traffic participant data and the common scene data corresponding to each group of traffic participant data, and the determining the naturalness deviation degree of each group of traffic participant data comprises:
and calculating the root mean square error of each group of traffic participant data and the common scene data corresponding to each group of traffic participant data as the naturalness deviation degree of each group of traffic participant data.
10. The method of claim 3, wherein the target traffic participant data is in a plurality of sets; generating an automatic driving test scene according to the target traffic participant data, wherein the automatic driving test scene comprises the following steps:
clustering multiple groups of target traffic participant data to obtain at least one target traffic participant data cluster;
selecting a target traffic participant data cluster with the minimum evaluation value mean value from all the target traffic participant data clusters;
and generating an automatic driving test scene according to the target traffic participant data cluster with the minimum mean value of the evaluation values.
11. The method of claim 2 or 3, wherein the obtaining at least one set of traffic participant data comprises:
and searching at least one group of traffic participant data from the common scene data.
12. The method according to claim 2 or 3, wherein, when the two traffic participants in the traffic participant data form a front-vehicle braking scene, the common scene data corresponding to the traffic participant data are obtained by:
retrieving front vehicle braking scene data from a natural scene traffic flow data set, and determining the relative distance and relative speed information of a main vehicle and a braking vehicle in each group of front vehicle braking scene data;
establishing a two-dimensional density distribution histogram model of the relative distance and the relative speed based on the relative distance and the relative speed information of the host vehicle and the braking vehicle in each group of front vehicle braking scene data;
and extracting the front vehicle braking scene data with the probability density larger than a preset probability density threshold value from each group of front vehicle braking scene data according to the two-dimensional density distribution histogram model of the relative distance and the relative speed, and taking the front vehicle braking scene data as common scene data corresponding to the front vehicle braking scene data.
13. The method of claim 12, wherein retrieving leading vehicle braking scenario data from a natural scenario traffic flow data set comprises:
retrieving and obtaining braking vehicle information from a natural scene traffic flow data set;
according to the braking vehicle information, vehicle braking scene data are retrieved from the natural scene traffic flow data set;
retrieving, from the vehicle braking scene data, host vehicle information corresponding to the braking vehicle information; wherein the host vehicle is the vehicle located behind the braking vehicle and immediately adjacent to it;
and according to the braking vehicle information and the host vehicle information corresponding to the braking vehicle information, searching the vehicle braking scene data to obtain the front vehicle braking scene data.
14. The method according to claim 2 or 3, wherein, when the two traffic participants in the traffic participant data form a front-vehicle cut-in scene, the common scene data corresponding to the traffic participant data are obtained by:
retrieving and obtaining front vehicle cut-in scene data from a natural scene traffic flow data set, and determining the relative distance and relative speed information between a host vehicle and a cut-in vehicle in each group of front vehicle cut-in scene data;
establishing a two-dimensional density distribution histogram model of the relative distance and the relative speed based on the relative distance and the relative speed information of the host vehicle and the cut-in vehicle in each group of the previous vehicle cut-in scene data;
and extracting the front vehicle cut-in scene data with the probability density larger than a preset probability density threshold value from each group of front vehicle cut-in scene data according to the two-dimensional density distribution histogram model of the relative distance and the relative speed, and taking the front vehicle cut-in scene data as common scene data corresponding to the front vehicle cut-in scene data.
15. The method of claim 14, wherein retrieving leading-in scene data from a natural scene traffic stream data set comprises:
retrieving and obtaining lane-changing vehicle information from a natural scene traffic flow data set;
according to the lane-changing vehicle information, vehicle lane-changing scene data are obtained by retrieving from the natural scene traffic flow data set;
retrieving host vehicle information corresponding to the lane-change vehicle information from the vehicle lane-change scene data; wherein the host vehicle is the vehicle located behind the lane-change position of the lane-changing vehicle and immediately adjacent to that position;
and according to the lane-changing vehicle information and the host vehicle information corresponding to the lane-changing vehicle information, searching and obtaining the front vehicle cut-in scene data from the vehicle lane-changing scene data.
16. An automatic driving test scenario generation apparatus, comprising:
the data acquisition unit is used for acquiring at least one group of traffic participant data, wherein each group of traffic participant data respectively comprises position information, movement speed information and movement direction information of two traffic participants;
the calculating unit is used for calculating the time consumed by two traffic participants in each group of traffic participant data to move to the danger occurrence point respectively to obtain the danger reaction time corresponding to each group of traffic participant data; the dangerous occurrence point comprises a position point when two traffic participants form a dangerous driving scene, and in the dangerous driving scene, the distance between the two traffic participants is smaller than a set distance threshold value;
the data screening unit is used for selecting traffic participant data with the dangerous reaction time smaller than a set time threshold from all the groups of traffic participant data to obtain target traffic participant data;
and the scene generation unit is used for generating an automatic driving test scene according to the target traffic participant data.
17. An electronic device, comprising:
a memory and a processor;
the memory is connected with the processor and used for storing programs;
the processor is configured to implement the automatic driving test scenario generation method according to any one of claims 1 to 15 by executing a program in the memory.
18. A storage medium having stored thereon a computer program which, when executed by a processor, implements the automated driving test scenario generation method of any one of claims 1 to 15.
CN202210941004.4A 2022-08-08 2022-08-08 Automatic driving test scene generation method, device, equipment and storage medium Active CN115017742B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210941004.4A CN115017742B (en) 2022-08-08 2022-08-08 Automatic driving test scene generation method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210941004.4A CN115017742B (en) 2022-08-08 2022-08-08 Automatic driving test scene generation method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115017742A true CN115017742A (en) 2022-09-06
CN115017742B CN115017742B (en) 2022-12-13

Family

ID=83065957

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210941004.4A Active CN115017742B (en) 2022-08-08 2022-08-08 Automatic driving test scene generation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115017742B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101308586A (en) * 2008-05-05 2008-11-19 戴宏 Motor vehicle day and night running observing recorder
CN105740510A (en) * 2016-01-22 2016-07-06 山东师范大学 Simulation system and method of evacuation crowd behavior based on grid-density-relation
US20200310403A1 (en) * 2019-03-29 2020-10-01 TuSimple, Inc. Operational testing of autonomous vehicles
CN112069643A (en) * 2019-05-24 2020-12-11 北京车和家信息技术有限公司 Automatic driving simulation scene generation method and device
CN111723470A (en) * 2020-05-26 2020-09-29 同济大学 Automatic driving control method based on calibration optimization RSS model
CN112578683A (en) * 2020-10-16 2021-03-30 襄阳达安汽车检测中心有限公司 Optimized in-loop simulation test method for automobile auxiliary driving controller
CN112345272A (en) * 2021-01-11 2021-02-09 北京赛目科技有限公司 Automatic driving simulation test method and device for scene library
CN113642114A (en) * 2021-09-14 2021-11-12 吉林大学 Modeling method for humanoid random car following driving behavior capable of making mistakes
CN114647954A (en) * 2022-04-12 2022-06-21 广州文远知行科技有限公司 Simulation scene generation method and device, computer equipment and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Sina Alighanbari et al.: "Deep Reinforcement Learning With NMPC Assistance Nash Switching for Urban Autonomous Driving", IEEE Transactions on Intelligent Vehicles (Early Access) *
Xiu Hailin: "Research on Testing and Comprehensive Evaluation of Conditionally Automated Vehicles", China Masters' Theses Full-text Database (Electronic Journal), Engineering Science and Technology II *
Liu Kang: "Research on Key Lane-Changing Scenarios of Autonomous Vehicles Based on Functional Testing", China Masters' Theses Full-text Database (Electronic Journal), Engineering Science and Technology II *
Zhou Wenshuai et al.: "Test Case Generation Method for Autonomous Driving in Highway Vehicle Cut-in Scenarios", Automobile Technology *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115563020A (en) * 2022-12-05 2023-01-03 深圳慧拓无限科技有限公司 Method and system for generating danger test scene, electronic device and storage medium
CN115993257A (en) * 2023-03-23 2023-04-21 禾多科技(北京)有限公司 Reliability determination method for automatic driving system
CN115993257B (en) * 2023-03-23 2023-05-30 禾多科技(北京)有限公司 Reliability determination method for automatic driving system
CN116046417A (en) * 2023-04-03 2023-05-02 西安深信科创信息技术有限公司 Automatic driving perception limitation testing method and device, electronic equipment and storage medium
CN116046417B (en) * 2023-04-03 2023-11-24 安徽深信科创信息技术有限公司 Automatic driving perception limitation testing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN115017742B (en) 2022-12-13

Similar Documents

Publication Publication Date Title
CN115017742B (en) Automatic driving test scene generation method, device, equipment and storage medium
Essa et al. Simulated traffic conflicts: do they accurately represent field-measured conflicts?
CN108062095A Object tracking using sensor fusion within a probabilistic framework
WO2018161037A1 (en) Systems and methods for quantitatively assessing collision risk and severity
US20220048533A1 (en) Method and system for validating autonomous control software for a self-driving vehicle
Nguyen et al. Traffic conflict assessment for non-lane-based movements of motorcycles under congested conditions
CN110716529A (en) Automatic generation method and device for automatic driving test case
CN110827326B (en) Method, device, equipment and storage medium for generating simulation man-vehicle conflict scene model
CN104875740B Method for managing a following space, host vehicle, and following space management unit
CN114076631A (en) Overload vehicle identification method, system and equipment
EP4083959A1 (en) Traffic flow machine-learning modeling system and method applied to vehicles
CN116194350A (en) Generating multiple simulated edge condition driving scenarios
Platho et al. Predicting velocity profiles of road users at intersections using configurations
CN113867367B (en) Processing method and device for test scene and computer program product
Yuan et al. Harnessing machine learning for next-generation intelligent transportation systems: a survey
CN114444208A (en) Method, device, equipment and medium for determining reliability of automatic driving system
US20220383736A1 (en) Method for estimating coverage of the area of traffic scenarios
CN117128979A (en) Multi-sensor fusion method and device, electronic equipment and storage medium
CN110696828B (en) Forward target selection method and device and vehicle-mounted equipment
Das et al. Why slammed the brakes on? Auto-annotating driving behaviors from adaptive causal modeling
Song et al. Remote estimation of free-flow speeds
CN111881121B (en) Automatic driving data filling method and device
Becker et al. Blurring the border between real and virtual parking environments
CN114684197A (en) Detection method, device and equipment for obstacle avoidance scene and storage medium
CN114492544A (en) Model training method and device and traffic incident occurrence probability evaluation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Room 533, 5th Floor, Building A3A4, Phase I, Zhong'an Chuanggu Science and Technology Park, No. 900 Wangjiang West Road, High tech Zone, Hefei City, Anhui Province, 230031

Patentee after: Anhui Xinxin Science and Technology Innovation Information Technology Co.,Ltd.

Address before: 2nd Floor, Building B2, Yunhui Valley, No. 156, Tiangu 8th Road, Software New Town, Yuhua Street Office, High-tech Zone, Xi'an City, Shaanxi Province 710000

Patentee before: Xi'an Xinxin Information Technology Co.,Ltd.