CN111599181A - Typical natural driving scene recognition and extraction method for intelligent driving system test - Google Patents


Info

Publication number
CN111599181A
CN111599181A (application CN202010707394.XA); granted as CN111599181B
Authority
CN
China
Prior art keywords
scene
vehicle
cut
driving
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010707394.XA
Other languages
Chinese (zh)
Other versions
CN111599181B (en)
Inventor
陈华
熊英志
梁黎明
陈龙
李鹏辉
赵树廉
陈涛
夏芹
夏利红
Current Assignee
Cas Intelligent Network Technology Co ltd
China Academy Of Automobile Technology Co ltd
China Automotive Engineering Research Institute Co Ltd
Original Assignee
Cas Intelligent Network Technology Co ltd
China Academy Of Automobile Technology Co ltd
China Automotive Engineering Research Institute Co Ltd
Priority date
Filing date
Publication date
Application filed by Cas Intelligent Network Technology Co ltd, China Academy Of Automobile Technology Co ltd, China Automotive Engineering Research Institute Co Ltd
Priority to CN202010707394.XA
Publication of CN111599181A
Application granted
Publication of CN111599181B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125 Traffic data processing
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01M TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M17/00 Testing of vehicles
    • G01M17/007 Wheeled or endless-tracked vehicles
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B23/00 Testing or monitoring of control systems or parts thereof
    • G05B23/02 Electric testing or monitoring
    • G05B23/0205 Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults
    • G05B23/0208 Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterized by the configuration of the monitoring system
    • G05B23/0213 Modular or universal configuration of the monitoring system, e.g. monitoring system having modules that may be combined to build monitoring program; monitoring system that can be applied to legacy systems; adaptable monitoring system; using different communication protocols
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125 Traffic data processing
    • G08G1/0133 Traffic data processing for classifying traffic situation
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0137 Measuring and analyzing of parameters relative to traffic conditions for specific applications

Abstract

The invention discloses a typical natural driving scene recognition and extraction method for testing an intelligent driving system, implemented on a system comprising a data extraction and calculation module and a typical scene type recognition module. The data extraction and calculation module obtains key parameters of the driving scene from a vehicle driving database through screening, matching, and calculation. The typical scene type recognition module recognizes typical scenes from these key parameters, the driving characteristics of the host vehicle, and the relative state of the host vehicle and the target objects; the typical scenes comprise: dangerous scenes, host lane-change scenes, following scenes, adjacent-vehicle cut-in scenes, front-vehicle cut-out scenes, and line-patrol driving scenes. The method comprehensively extracts typical natural driving scenes, realizes offline application of typical natural driving scenes, and has a complete structure and strong applicability.

Description

Typical natural driving scene recognition and extraction method for intelligent driving system test
Technical Field
The invention relates to the field of testing, analysis, and evaluation of automotive intelligent driving systems, and in particular to a typical natural driving scene recognition and extraction method for intelligent driving system testing.
Background
In recent years, with advances in science and technology, automobiles have become increasingly intelligent, and L2 and L3 intelligent driving systems are gradually entering the market. Before market entry, these vehicles must undergo sufficient testing and verification to ensure that the intelligent control system can operate safely within its operational design domain.
A natural driving scene refers to the real traffic conditions a driver encounters on actual roads. Research on typical natural driving scenes is of great significance for the development of intelligent driving systems, the compilation of test cases, and the formulation of evaluation standards. Extraction of typical natural driving scenes is the basis of scene research, and researchers have proposed various extraction methods: the German PEGASUS project proposed a research workflow for dangerous-driving-scene extraction and derived a general standard for it; a Chinese study, "Vehicle longitudinal driving assistance system based on a driver-characteristic self-learning method", proposed an extraction standard for continuous car-following conditions; and "Lane-change cut-in behavior analysis based on natural driving data" by Wang Xiang proposed an identification method for adjacent-vehicle cut-in conditions.
Clearly, this research is limited to extracting single-function scenes; no standardized, integrated scene-extraction framework has yet been formed to support comprehensive research on typical natural driving scenes.
Disclosure of Invention
Aiming at the single-function nature of natural driving scene extraction in the prior art, the invention provides a typical natural driving scene recognition and extraction method for intelligent driving system testing. The method comprehensively extracts typical natural driving scenes and aims to realize their offline standardized storage and application, forming a basic tool for natural driving scene research.
The technical scheme adopted by the invention is as follows: a typical natural driving scene recognition and extraction method for testing an intelligent driving system, characterized in that the method is realized on a system comprising a data extraction and calculation module and a typical scene type identification module;
the data extraction and calculation module obtains driving scene key parameters from a vehicle driving database through screening, matching, and calculation, wherein the driving scene key parameters comprise: acquisition time t, sampling step length T, host vehicle speed egoV, host vehicle acceleration egoAcc, host-to-lane-line distance distanceLane, target object number objectID, target object driving lane mark laneID, target object relative longitudinal distance distanceX, target object relative longitudinal speed relVX, target object relative longitudinal acceleration relAccX, target object collision time TTC, and target object headway THW;
the typical scene type recognition module recognizes typical scenes according to the driving scene key parameters provided by the data extraction and calculation module, the driving characteristics of the host vehicle, and the relative state of the host vehicle and the target objects, wherein the typical scenes comprise: dangerous scenes, host lane-change scenes, following scenes, adjacent-vehicle cut-in scenes, front-vehicle cut-out scenes, and line-patrol driving scenes, which are provided for typical-scene research and application.
Further, the data extraction and calculation module first screens raw data from the vehicle driving database, wherein the raw data comprises: acquisition time t, sampling step length T, host vehicle speed egoV, host vehicle acceleration egoAcc, host-to-lane-line distance distanceLane, target object number objectID, target object driving lane mark laneID, target object relative longitudinal distance distanceX, target object relative longitudinal speed relVX, and target object relative longitudinal acceleration relAccX;
then, the data extraction and calculation module matches the speed egoV of the host vehicle, the number objectID of the target object, the relative longitudinal distance distanceX of the target object and the relative longitudinal speed relVX of the target object at each moment through matching the acquisition time t corresponding to each group of original data, and obtains the relative speed and the relative position of each target object and the host vehicle at each moment;
finally, the data extraction and calculation module calculates the target object collision time TTC and the target object headway THW, wherein the method for calculating the target object collision time TTC comprises the following steps:
TTC=distanceX/relVX (1);
the method for calculating the headway THW of the target object comprises the following steps:
THW = distanceX/egoV (2).
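Formulas (1) and (2) can be sketched in a few lines of code. Variable names follow the patent's key parameters; the guards against zero relative speed or zero host speed are added assumptions, not part of the patent:

```python
def compute_ttc_thw(distanceX: float, relVX: float, egoV: float):
    """Derive target collision time TTC (formula (1)) and headway THW (formula (2))."""
    # Formula (1): time to collision of host and target object.
    ttc = distanceX / relVX if relVX != 0 else float("inf")
    # Formula (2): target object headway relative to host speed.
    thw = distanceX / egoV if egoV != 0 else float("inf")
    return ttc, thw
```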
further, the typical scene type identification module comprises a dangerous scene identification module, a main vehicle lane changing scene identification module, a following vehicle running scene identification module, an adjacent vehicle cut-in and front vehicle cut-out scene identification module and a line patrol running scene identification module;
the dangerous scene identification module identifies a dangerous scene;
the main lane changing scene recognition module recognizes a main lane changing scene;
the following vehicle driving scene identification module identifies a following vehicle driving scene;
the adjacent vehicle cut-in and front vehicle cut-out scene recognition module recognizes an adjacent vehicle cut-in scene and a front vehicle cut-out scene;
and the line patrol driving scene identification module identifies a line patrol driving scene.
Further, the dangerous scene recognition module recognizes a dangerous scene by judging whether the host vehicle acceleration egoAcc, the target object collision time TTC, and the target object headway THW meet the danger condition;
the danger condition is: any one of the host vehicle acceleration egoAcc, the target object collision time TTC, and the target object headway THW exceeds its correspondingly set safety scene boundary value;
when the danger condition is met, the vehicle is judged to have entered a dangerous scene, and the time when the danger condition is met is recorded as t_danger.
Still further, when the host acceleration egoAcc < a set host acceleration safety scene boundary value, a hazard condition is satisfied;
when the target object collision time TTC is smaller than a set target object collision time safety scene boundary value and the main vehicle is braked, a danger condition is met;
and when the target object headway THW is smaller than the set target object headway safety scene boundary value and the main vehicle is braked, the danger condition is met.
Still further, the dangerous scene recognition module extends the time t_danger at which the danger condition is satisfied by a forward reserved time t_dangerForward and a backward reserved time t_dangerBackward, and records the dangerous scene time interval as [t_danger - t_dangerForward, t_danger + t_dangerBackward].
Further, the host lane-change scene recognition module recognizes a host lane-change scene by judging whether the distance distanceLane from the host vehicle to the lane line meets the host lane-change condition;
the host lane-change condition is: the change in distance from the host vehicle to a given lane line between adjacent sampling times is larger than a set distance threshold;
when the host lane-change condition is met, a host lane-change scene is judged to have been entered, and the time when the condition is met is recorded as t_lane. Still further, with the total duration of the lane-change process set as t_laneProcess, the host lane-change scene time interval is recorded as [t_lane - t_laneProcess/2, t_lane + t_laneProcess/2].
Further, the following vehicle driving scene recognition module firstly recognizes a preselected following vehicle driving scene by judging whether the main vehicle speed egoV, the target object number ObjectID, the target object driving lane mark laneID and the target object relative longitudinal distance distanceX meet a following vehicle driving preselection condition, and then recognizes the following vehicle driving scene by judging whether the preselected following vehicle driving scene meets a following vehicle driving scene time condition;
the following vehicle running preselection condition is that a target object closest to the longitudinal distance of the main vehicle exists in a lane where the main vehicle is located in a certain time period, and the speed of the main vehicle is not zero;
setting the starting time of the pre-selected following vehicle as tpreFollowStartEnd time tpreFollowEndAnd recording the time interval of the preselected following driving scene as follows: [ t ] ofpreFollowStart, tpreFollowEnd];
The following driving scene time conditions are as follows: preselection following driving scene time interval [ t ]preFollowStart,tpreFollowEnd]Is larger than the set following driving scene time threshold value.
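The preselection and time condition above can be roughly illustrated as follows. The sample layout and tuple format are assumptions for this sketch, not the patent's data schema:

```python
def following_scenes(samples, min_duration):
    """Find preselected following intervals and keep those whose length
    exceeds the following-scene time threshold min_duration.
    samples: sequence of (t, egoV, nearest_same_lane_objectID or None)."""
    scenes = []
    start = end = cur = None
    for t, egoV, obj in samples:
        ok = obj is not None and egoV > 0  # preselection condition
        if ok and obj == cur:
            end = t  # same nearest target keeps the interval open
        else:
            if cur is not None and end - start > min_duration:
                scenes.append((cur, start, end))
            start, end, cur = (t, t, obj) if ok else (None, None, None)
    if cur is not None and end - start > min_duration:
        scenes.append((cur, start, end))
    return scenes
```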
Still further, the adjacent vehicle cut-in and front vehicle cut-out scene recognition module recognizes an adjacent vehicle cut-in scene and a front vehicle cut-out scene through the following steps:
step 241): the adjacent-vehicle cut-in and front-vehicle cut-out scene recognition module identifies candidate adjacent-vehicle cut-in or front-vehicle cut-out scenes by judging whether the host lane-change scene processes, the preselected following scene processes, the target object relative longitudinal distance distanceX, the target object relative longitudinal speed relVX, and the target object relative longitudinal acceleration relAccX satisfy adjacent-vehicle cut-in or front-vehicle cut-out condition 1;
the adjacent vehicle cut-in or front vehicle cut-out condition 1 is as follows: there are two adjacent following different object objectsi、objectjIn the pre-selection following vehicle driving scene process, the interval time of the two scene processes is less than a set time threshold, and the interval time does not intersect with any main vehicle lane changing scene process;
recording the time interval of the candidate scene of the adjacent vehicle cut-in or the front vehicle cut-out as tpreFollowEndi, tpreFollowStartj]Wherein t ispreFollowEndiFor the previous object during the adjacent pre-selected following vehicle driving sceneiEnd of time, t, of the following travelpreFollowStartjFor the next object in the process of adjacent pre-selected following vehicle driving scenejA time starting point of a following driving process;
step 242): the adjacent-vehicle cut-in and front-vehicle cut-out scene recognition module distinguishes an adjacent-vehicle cut-in scene from a front-vehicle cut-out scene by judging the change in relative distance between the host vehicle and the targets during the candidate scene; the judging steps are:
1) according to a preset inter-vehicle kinematics calculation method and the relative motion state of the host vehicle and object_i at time t_preFollowEndi, calculate the relative distance distanceX_i between the host vehicle and object_i at time t_preFollowStartj;
2) if the relative distance distanceX_j between the host vehicle and object_j at time t_preFollowStartj is greater than distanceX_i, the scene is judged to be a front-vehicle cut-out scene, and the module records the front-vehicle cut-out scene time as t_out|objectID, where t_out = (t_preFollowEndi + t_preFollowStartj)/2 and objectID is the number of object_i; if distanceX_j is less than or equal to distanceX_i, the scene is judged to be an adjacent-vehicle cut-in scene, and the module records the adjacent-vehicle cut-in scene time as t_in|objectID, where t_in = (t_preFollowEndi + t_preFollowStartj)/2 and objectID is the number of object_j;
step 243): the adjacent-vehicle cut-in and front-vehicle cut-out scene recognition module screens all time start points t_preFollowStart of preselected following scene processes that do not satisfy condition 1 and do not intersect any host lane-change scene process, and further identifies whether an adjacent-vehicle cut-in scene exists by judging whether adjacent-vehicle cut-in condition 2 is met;
adjacent-vehicle cut-in condition 2 is: there exists such a time start point t_preFollowStart, and during the period [t_preFollowStart - nT, t_preFollowStart] the target object travels outside the lane of the host vehicle, where T is the sampling step and n is any non-zero natural number;
the adjacent-vehicle cut-in time is recorded as t_in|objectID, where t_in = t_preFollowStart and objectID is the number of the cutting-in adjacent vehicle;
step 244): the adjacent-vehicle cut-in and front-vehicle cut-out scene recognition module screens all time end points t_preFollowEnd of preselected following scene processes that do not satisfy condition 1 and do not intersect any host lane-change scene process, and further identifies whether a front-vehicle cut-out scene exists by judging whether front-vehicle cut-out condition 2 is met;
front-vehicle cut-out condition 2 is: there exists such a time end point t_preFollowEnd, and during the period [t_preFollowEnd, t_preFollowEnd + nT] the target object travels outside the lane of the host vehicle, where T is the sampling step and n is any non-zero natural number;
the front-vehicle cut-out time is recorded as t_out|objectID, where t_out = t_preFollowEnd and objectID is the number of the cutting-out front vehicle.
Still further, in step 243), the adjacent-vehicle cut-in scene time interval includes a settable forward reserved time t_inForward and a backward reserved time t_inBackward, so the cut-in scene time interval is [t_in - t_inForward, t_in + t_inBackward];
in step 244), the front-vehicle cut-out scene time interval includes a settable forward reserved time t_outForward and a backward reserved time t_outBackward, so the cut-out scene time interval is [t_out - t_outForward, t_out + t_outBackward].
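Step 242) can be sketched as follows, using constant-acceleration extrapolation as one possible instance of the patent's "preset inter-vehicle kinematics calculation method" (the function name and argument layout are assumptions for illustration):

```python
def classify_transition(t_end_i, t_start_j, distX_i_end, relVX_i, relAccX_i,
                        distX_j_start):
    """Distinguish a front-vehicle cut-out from an adjacent-vehicle cut-in.
    Extrapolates object_i's relative distance to t_preFollowStartj, then
    compares it with object_j's observed relative distance there."""
    dt = t_start_j - t_end_i
    # Constant-acceleration extrapolation of object_i's relative distance
    # (assumed form of the inter-vehicle kinematics).
    distX_i = distX_i_end + relVX_i * dt + 0.5 * relAccX_i * dt ** 2
    t_event = (t_end_i + t_start_j) / 2  # t_out or t_in per the patent
    if distX_j_start > distX_i:
        return "cut_out", t_event  # new target is farther: front vehicle left the lane
    return "cut_in", t_event       # new target is closer: adjacent vehicle cut in
```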
Further, the line-patrol driving scene identification module identifies a line-patrol driving scene by judging whether the host lane-change scene processes, preselected following scene processes, adjacent-vehicle cut-in scene processes, and front-vehicle cut-out scene processes satisfy the line-patrol driving condition; if the condition is satisfied, a line-patrol driving scene exists;
the line-patrol driving condition is: within a certain time period, there is no host lane-change process, preselected following process, adjacent-vehicle cut-in process, or front-vehicle cut-out process.
Compared with the prior art, the invention has the following notable beneficial effects: it realizes comprehensive judgment of typical vehicle driving scenes based on multiple scene key parameters, including time information, sampling step length, host vehicle speed, host vehicle acceleration, host-to-lane-line distance, target object number, target object driving lane mark, target object relative longitudinal distance, target object relative longitudinal speed, target object relative longitudinal acceleration, target object collision time, and target object headway; according to the driving characteristics of the host vehicle and the relative state of the host vehicle and the target objects, vehicle driving scenes are reasonably divided into six types, realizing full coverage of dynamic scenes and greatly improving the recognition accuracy of typical scenes.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention.
Drawings
Fig. 1 is a system architecture diagram of a typical natural driving scene recognition and extraction method provided by the present invention.
Fig. 2 is a flowchart of an implementation process of a typical natural driving scene recognition and extraction method provided by the present invention.
Detailed Description
The technical solution of the present invention is further explained in detail by the accompanying drawings and the specific embodiments.
The embodiment provides a typical natural driving scene recognition and extraction method, which is implemented based on a system architecture shown in fig. 1, and the system comprises a data extraction and calculation module 1 and a typical scene type recognition module 2; wherein, the typical scene type recognition module 2 further comprises: the system comprises a dangerous scene recognition module 21, a main vehicle lane changing scene recognition module 22, a following vehicle driving scene recognition module 23, an adjacent vehicle cut-in and front vehicle cut-out scene recognition module 24 and a line patrol driving scene recognition module 25. Each module is provided with a computer running program and a computer readable storage medium for realizing the functions thereof.
The data extraction and calculation module 1 screens raw data in the vehicle driving database and calculates and acquires the driving scene key parameters. In implementation, the raw data comprises raw video information and raw sensor data; module 1 obtains the driving scene key parameters through screening, matching, and calculation, where each group of data corresponds to at least the parameters: acquisition time t, sampling step length T, host vehicle speed egoV, host vehicle acceleration egoAcc, host-to-lane-line distance distanceLane, target object number objectID, target object driving lane mark laneID, target object relative longitudinal distance distanceX, target object relative longitudinal speed relVX, target object relative longitudinal acceleration relAccX, target object collision time TTC, and target object headway THW.
The data extraction and calculation module 1 transmits the obtained driving scene key parameters to the typical scene type identification module 2 for use.
And the typical scene type identification module 2 is used for identifying a typical scene according to the driving scene key parameters obtained by the data extraction and calculation module 1. In an embodiment, typical scenarios may be divided into: and the dangerous scene, the main vehicle lane changing scene, the following vehicle running scene, the adjacent vehicle cut-in scene, the front vehicle cut-out scene and the line patrol running scene are classified into six types. Therefore, the typical scene type recognition module 2 may include a dangerous scene recognition module 21, a main lane-changing scene recognition module 22, a following-vehicle driving scene recognition module 23, an adjacent-vehicle cut-in and previous-vehicle cut-out scene recognition module 24 (the same recognition standard is used when recognizing candidate scenes of adjacent vehicle cut-in or previous vehicle cut-out, so the same recognition module is used), and a line-patrol driving scene recognition module 25, which are respectively used for realizing recognition of corresponding scenes.
The typical scene type recognition module 2 realizes overall judgment of the vehicle driving scene based on the key parameters: acquisition time t, sampling step length T, host vehicle speed egoV, host vehicle acceleration egoAcc, host-to-lane-line distance distanceLane, target object number objectID, target object driving lane mark laneID, target object relative longitudinal distance distanceX, target object relative longitudinal speed relVX, target object relative longitudinal acceleration relAccX, target object collision time TTC, and target object headway THW. According to the driving characteristics of the host vehicle and the relative state of the host vehicle and the target objects, vehicle driving scenes are reasonably divided into six types, realizing full coverage of dynamic scenes and greatly improving the recognition accuracy of typical scenes.
On the basis of the system architecture, the method for extracting the typical natural driving scene is implemented, as shown in fig. 2, and comprises the following steps:
step 1): first, the data extraction and calculation module 1 screens raw data from the vehicle driving database, wherein the driving scene key parameter information contained in the raw data comprises: acquisition time t, sampling step length T, host vehicle speed egoV, host vehicle acceleration egoAcc, host-to-lane-line distance distanceLane, target object number objectID, target object driving lane mark laneID, target object relative longitudinal distance distanceX, target object relative longitudinal speed relVX, and target object relative longitudinal acceleration relAccX; each set of collected raw data contains these driving scene key parameters;
then, the data extraction and calculation module 1 matches the host vehicle speed egoV, the object number objectID, the object relative longitudinal distance distanceX and the object relative longitudinal speed relVX at each moment by matching the acquisition time t corresponding to each group of original data, and obtains the relative speed and the relative position of each object and the host vehicle at each moment;
finally, the data extraction and calculation module 1 calculates the target object collision time TTC and the target object headway THW, wherein the method for calculating the target object collision time TTC is as follows:
TTC=distanceX/relVX (1);
the method for calculating the headway THW of the target object comprises the following steps:
THW = distanceX/egoV (2).
step 2): the identification of various types of scenes is carried out, and the method comprises the following steps:
step 21): the dangerous scene recognition module 21 recognizes a dangerous scene by judging whether the host vehicle acceleration egoAcc, the target object collision time TTC, and the target object headway THW meet the danger condition: any one of them exceeds its correspondingly set safety scene boundary value. When the danger condition is satisfied, a dangerous scene is judged, and the dangerous scene recognition module 21 records the time t_danger at which the danger condition is satisfied.
In the embodiment, the safety scene boundary value of the host vehicle acceleration egoAcc is set to f(egoV):
f(egoV) = -5, when egoV < 50 km/h;
f(egoV) = egoV/20 - 7.5, when 50 km/h <= egoV < 90 km/h;
f(egoV) = -3, when egoV >= 90 km/h;
the risk conditions for egoAcc are then:
egoAcc<f(egoV) (4);
setting the safe scene boundary value of the target object collision time TTC to be 2.5s, wherein the dangerous conditions are as follows:
TTC < 2.5 s and the host vehicle is braking (5);
setting the safety scene boundary value of the headway THW of the target object to be 1s, wherein the danger conditions are as follows:
THW < 1 s and the host vehicle is braking (6).
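The boundary function f(egoV) and danger conditions (4), (5), (6) of this embodiment translate directly into code (function names are for illustration only):

```python
def danger_boundary(egoV_kmh: float) -> float:
    """Safety scene boundary f(egoV) for host acceleration (embodiment values)."""
    if egoV_kmh < 50:
        return -5.0
    if egoV_kmh < 90:
        return egoV_kmh / 20 - 7.5
    return -3.0

def is_dangerous(egoAcc, egoV_kmh, ttc, thw, host_braking):
    """A scene is dangerous if any of conditions (4), (5), (6) holds."""
    return (egoAcc < danger_boundary(egoV_kmh)      # condition (4)
            or (ttc < 2.5 and host_braking)         # condition (5)
            or (thw < 1.0 and host_braking))        # condition (6)
```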
Further, the dangerous scene recognition module 21 records the dangerous scene process according to a dangerous scene time interval, which extends the time t_danger at which the danger condition is satisfied by a forward reserved time t_dangerForward and a backward reserved time t_dangerBackward, both self-set, so that the dangerous scene time interval is [t_danger - t_dangerForward, t_danger + t_dangerBackward]; the dangerous scene process is the full scene process within this time interval.
For example, in an embodiment, the forward reserved time t_dangerForward may be set to 10 s and the backward reserved time t_dangerBackward to 5 s, so the total dangerous scene process interval is [t_danger - 10, t_danger + 5].
Dangerous scene identification is carried out throughout the whole scene type identification process, in parallel with and independently of the other scene identifications.
Step 22): the main vehicle lane-change scene recognition module 22 identifies a main vehicle lane-change scene by judging whether the distance distanceLane from the host vehicle to a lane line meets the main vehicle lane-change condition: the change in the distance from the host vehicle to a given lane line between adjacent sampling times is greater than a set distance threshold. When the lane-change condition is met, a main vehicle lane change is judged to have occurred, and the main vehicle lane-change scene recognition module 22 records the time t_lane at which the condition is met.
For example, in one embodiment the threshold on the distance from the host vehicle to a side lane line is set to 3 m, and the lane-change condition is that the change in this distance over one sampling step T exceeds the threshold, i.e. |distanceLane(t_lane) - distanceLane(t_lane - T)| > 3 m.
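As a sketch of this threshold test (illustrative only; the function name and the list-of-pairs input shape are assumptions), a lane-change instant can be flagged wherever consecutive distanceLane samples jump by more than the threshold:

```python
# Illustrative sketch of step 22 (hypothetical names): flag t_lane wherever
# consecutive distanceLane samples, spaced by the sampling step T, jump by
# more than the 3 m threshold.

def lane_change_times(samples, threshold=3.0):
    """samples: time-ordered (t, distanceLane) pairs taken at step T;
    returns every t_lane with
    |distanceLane(t_lane) - distanceLane(t_lane - T)| > threshold."""
    return [t1 for (t0, d0), (t1, d1) in zip(samples, samples[1:])
            if abs(d1 - d0) > threshold]
```

The jump arises because when the host crosses the line, the measured distance to that line switches sign and magnitude to the far side of the new lane, far exceeding the small per-step drift of normal lane keeping.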
Further, the main vehicle lane-change scene recognition module 22 records the main vehicle lane-change scene process over a lane-change scene time interval: with the total duration of the lane-change process set to t_laneProcess, the interval is [t_lane - t_laneProcess/2, t_lane + t_laneProcess/2], and the main vehicle lane-change scene process is the full scene process within this time interval.
For example, in an embodiment the module sets the lane-change process duration to t_laneProcess = 20 s and accordingly records the main vehicle lane-change scene time interval as [t_lane - 10, t_lane + 10].
Step 23): the following driving scene recognition module 23 first identifies a preselected following driving scene by judging whether the host vehicle speed egoV, the target object number objectID, the target object driving lane marker laneID and the target object relative longitudinal distance distanceX meet the following driving preselection condition.
The following driving preselection condition is: there is a time period during which the lane of the host vehicle contains a target object closest to the host vehicle in longitudinal distance, and the host vehicle speed is not identically zero over that period. With the preselected following start time denoted t_preFollowStart and the end time t_preFollowEnd, the preselected following driving scene time interval is recorded as [t_preFollowStart, t_preFollowEnd], and the corresponding preselected following driving scene process as [t_preFollowStart, t_preFollowEnd]|objectID, where objectID is the number of the followed target object.
Then, the following driving scene recognition module 23 identifies a following driving scene by judging whether the preselected following driving scene process meets the following driving scene time condition: the duration of the preselected following driving scene time interval [t_preFollowStart, t_preFollowEnd] is greater than the set following driving scene time threshold. When the condition is met, the module 23 records the following driving scene time interval as [t_FollowStart, t_FollowEnd] and the following driving scene process as [t_FollowStart, t_FollowEnd]|objectID, where t_FollowStart is the following start time and t_FollowEnd is the following end time.
For example, in the embodiment the following driving scene recognition module 23 sets the following driving scene time threshold to 10 s and selects as following driving scenes those preselected processes whose duration exceeds the threshold, i.e. those with t_preFollowEnd - t_preFollowStart > 10 s.
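The two-stage selection above, first maximal runs of a single nearest in-lane target, then a minimum-duration filter, might be sketched as follows (illustrative only; the frame tuple layout and names are assumptions):

```python
# Illustrative sketch of step 23 (hypothetical names): a preselected
# following segment is a maximal run of frames sharing the same nearest
# in-lane target with nonzero host speed; formal following scenes are the
# segments longer than the 10 s threshold.

def following_scenes(frames, min_duration=10.0):
    """frames: time-ordered (t, egoV, nearest_in_lane_objectID or None).
    Returns [(t_FollowStart, t_FollowEnd, objectID)]."""
    scenes, run = [], None           # run = (t_start, t_end, objectID)

    def flush():
        # keep the finished run only if it exceeds the duration threshold
        if run is not None and run[1] - run[0] > min_duration:
            scenes.append(run)

    for t, ego_v, obj in frames:
        if obj is not None and ego_v > 0:
            if run is not None and run[2] == obj:
                run = (run[0], t, obj)   # extend the current run
            else:
                flush()
                run = (t, t, obj)        # new target: start a new run
        else:
            flush()
            run = None                   # no in-lane target or host stopped
    flush()
    return scenes
```

Runs shorter than the threshold are dropped here but, as the text notes next, the full preselected segments remain available to the cut-in and cut-out logic.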
The preselected following driving scenes are identified first and the formal following driving scenes second because the preselected results are reused in the adjacent vehicle cut-in and preceding vehicle cut-out identification: following segments of very short duration are discarded from the formal following driving scenes, yet those discarded segments are still needed when judging adjacent vehicle cut-ins and preceding vehicle cut-outs.
Step 24): the adjacent vehicle cut-in and preceding vehicle cut-out scene recognition module 24 identifies adjacent vehicle cut-in scenes and preceding vehicle cut-out scenes through the following steps:
Step 241): the adjacent vehicle cut-in and preceding vehicle cut-out scene recognition module 24 identifies candidate adjacent vehicle cut-in or preceding vehicle cut-out scenes by judging whether the main vehicle lane-change scene processes, the preselected following driving scene processes, the target relative longitudinal distance distanceX, the target relative longitudinal velocity relVX and the target relative longitudinal acceleration relAccX meet the adjacent vehicle cut-in or preceding vehicle cut-out condition 1; a process that meets condition 1 becomes a candidate scene.
The adjacent vehicle cut-in or preceding vehicle cut-out condition 1 is: there are two adjacent preselected following driving scene processes following different targets, the interval between them is shorter than a set time threshold, and that interval intersects a main vehicle lane-change scene process. The module 24 records each scene process meeting condition 1 as an adjacent vehicle cut-in or preceding vehicle cut-out candidate scene process.
The time interval of a candidate scene process is recorded as [t_preFollowEndi, t_preFollowStartj], and the candidate scene process within this interval as [t_preFollowEndi|object_i, t_preFollowStartj|object_j], where t_preFollowEndi and object_i are the time end point and target object number of the earlier following process in the adjacent pair of preselected following driving scene processes, and t_preFollowStartj and object_j are the time start point and target object number of the later following process.
For example, in the embodiment the interval threshold for adjacent vehicle cut-in or preceding vehicle cut-out candidates is set to 1 s; the module screens for pairs of adjacent preselected following driving scene processes satisfying t_preFollowStartj - t_preFollowEndi < 1 s whose interval [t_preFollowEndi, t_preFollowStartj] intersects some main vehicle lane-change process, and records each qualifying candidate scene process as [t_preFollowEndi|object_i, t_preFollowStartj|object_j].
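A sketch of this condition-1 screening, assuming following segments and lane-change intervals are already available as simple tuples (shapes and names are assumptions, not the patent's data model):

```python
# Illustrative sketch of condition 1 in step 241 (hypothetical names):
# keep each gap between consecutive preselected following segments that
# follows different targets, is shorter than gap_max, and intersects some
# host lane-change interval, as the text requires.

def candidate_gaps(follow_segments, lane_change_intervals, gap_max=1.0):
    """follow_segments: time-ordered (t_start, t_end, objectID) triples;
    lane_change_intervals: (start, end) pairs. Returns candidate gaps as
    (t_preFollowEndi, t_preFollowStartj, object_i, object_j)."""
    gaps = []
    for (s0, e0, o0), (s1, e1, o1) in zip(follow_segments, follow_segments[1:]):
        if o0 != o1 and 0 <= s1 - e0 < gap_max:
            # interval intersection test: [e0, s1] vs [lc_s, lc_e]
            if any(lc_s <= s1 and e0 <= lc_e
                   for lc_s, lc_e in lane_change_intervals):
                gaps.append((e0, s1, o0, o1))
    return gaps
```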
Step 242): the adjacent vehicle cut-in and preceding vehicle cut-out scene recognition module 24 distinguishes adjacent vehicle cut-in scenes from preceding vehicle cut-out scenes by examining how the relative distance between the host vehicle and the targets changes over the candidate scene process, as follows:
(1) using a preset inter-vehicle kinematics calculation method and the relative motion state of the host vehicle and object_i at time t_preFollowEndi, calculate the relative distance distanceX_i between the host vehicle and object_i at time t_preFollowStartj;
(2) if the relative distance distanceX_j between the host vehicle and object_j at t_preFollowStartj is greater than distanceX_i, the scene is judged to be a preceding vehicle cut-out scene, and the adjacent vehicle cut-in and preceding vehicle cut-out scene recognition module 24 records the cut-out scene time as t_out|objectID, where t_out = (t_preFollowEndi + t_preFollowStartj)/2 and objectID is the number of object_i;
if distanceX_j is less than or equal to distanceX_i, the scene is judged to be an adjacent vehicle cut-in scene, and the module 24 records the cut-in scene time as t_in|objectID, where t_in = (t_preFollowEndi + t_preFollowStartj)/2 and objectID is the number of object_j.
For example, in the embodiment the host vehicle and the target vehicles (i.e. object_i and object_j) are assumed to move at constant speed over the short interval, i.e. the relative velocity between the host vehicle and each target stays unchanged, and the judgment proceeds as follows:
(1) integrating the constant relative velocity up to t = t_preFollowStartj gives the relative distance between the host vehicle and object_i:
distanceX_i(t_preFollowStartj) = distanceX_i(t_preFollowEndi) + relVX_i(t_preFollowEndi) · (t_preFollowStartj - t_preFollowEndi);
(2) if the relative distance between the host vehicle and object_j satisfies distanceX_j(t_preFollowStartj) > distanceX_i(t_preFollowStartj), the process is judged to be a preceding vehicle cut-out, and the adjacent vehicle cut-in and preceding vehicle cut-out scene recognition module records the cut-out scene time as t_out|object_i, where t_out = (t_preFollowEndi + t_preFollowStartj)/2;
if distanceX_j(t_preFollowStartj) ≤ distanceX_i(t_preFollowStartj), the process is judged to be an adjacent vehicle cut-in scene, and the module records the cut-in scene time as t_in|object_j, where t_in = (t_preFollowEndi + t_preFollowStartj)/2.
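Under the embodiment's constant-relative-speed assumption, the step 242 discrimination reduces to one extrapolation and one comparison; a sketch (names are hypothetical):

```python
# Illustrative sketch of step 242 (hypothetical names): extrapolate the old
# target object_i's relative distance, at constant relative velocity, to
# t_preFollowStartj and compare it with object_j's measured distance there.

def classify_gap(t_end_i, dist_i_end, rel_vx_i, t_start_j, dist_j_start):
    """Returns ('cut_out', t_out) when the new target object_j is farther
    than the extrapolated old target (the old leader left the lane), else
    ('cut_in', t_in); the event time is the midpoint of the gap."""
    dist_i_pred = dist_i_end + rel_vx_i * (t_start_j - t_end_i)
    t_event = (t_end_i + t_start_j) / 2
    return ('cut_out' if dist_j_start > dist_i_pred else 'cut_in', t_event)
```

Intuitively, a cut-out reveals a farther vehicle that was previously hidden behind the leader, so the new gap is larger than the extrapolated old one; a cut-in inserts a vehicle into the gap, so the new distance is smaller or equal.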
Step 243): the adjacent vehicle cut-in and preceding vehicle cut-out scene recognition module 24 screens all time start points t_preFollowStart of preselected following driving scene processes that do not meet the adjacent vehicle cut-in or preceding vehicle cut-out condition 1 and do not intersect any main vehicle lane-change scene process, and further identifies whether an adjacent vehicle cut-in scene exists by judging whether the adjacent vehicle cut-in condition 2 is met.
The adjacent vehicle cut-in condition 2 is: such a time start point t_preFollowStart exists, and during the period [t_preFollowStart - nT, t_preFollowStart] the target object drives outside the lane of the host vehicle (i.e. the target vehicle and the host vehicle are not in the same lane), where T is the sampling time and n is any nonzero natural number. The module 24 records the adjacent vehicle cut-in time as t_in|objectID, where t_in = t_preFollowStart and objectID is the number of the cutting-in adjacent vehicle.
Further, the adjacent vehicle cut-in and preceding vehicle cut-out scene recognition module 24 records the adjacent vehicle cut-in scene process over an adjacent vehicle cut-in scene time interval, which includes a settable forward reserved time t_inForward and a settable backward reserved time t_inBackward: the cut-in scene time interval is [t_in - t_inForward, t_in + t_inBackward], and the cut-in scene process is [t_in - t_inForward, t_in + t_inBackward]|objectID.
For example, in the embodiment the adjacent vehicle cut-in scene time interval uses a forward reserved time t_inForward = 10 s and a backward reserved time t_inBackward = 10 s, so the cut-in scene process is recorded as [t_in - 10, t_in + 10]|objectID.
Step 244): the adjacent vehicle cut-in and preceding vehicle cut-out scene recognition module 24 screens all time end points t_preFollowEnd of preselected following driving scene processes that do not meet the adjacent vehicle cut-in or preceding vehicle cut-out condition 1 and do not intersect any main vehicle lane-change scene process, and further identifies whether a preceding vehicle cut-out scene exists by judging whether the preceding vehicle cut-out condition 2 is met.
The preceding vehicle cut-out condition 2 is: such a time end point t_preFollowEnd exists, and during the period [t_preFollowEnd, t_preFollowEnd + nT] the target object drives outside the lane of the host vehicle, where T is the sampling time and n is any nonzero natural number. The module 24 records the preceding vehicle cut-out time as t_out|objectID, where t_out = t_preFollowEnd and objectID is the number of the cutting-out preceding vehicle.
Further, the adjacent vehicle cut-in and preceding vehicle cut-out scene recognition module 24 records the preceding vehicle cut-out scene process over a preceding vehicle cut-out scene time interval, which includes a settable forward reserved time t_outForward and a settable backward reserved time t_outBackward: the cut-out scene time interval is [t_out - t_outForward, t_out + t_outBackward], and the cut-out scene process is [t_out - t_outForward, t_out + t_outBackward]|objectID.
For example, in the embodiment the preceding vehicle cut-out scene time interval uses a forward reserved time t_outForward = 10 s and a backward reserved time t_outBackward = 10 s, so the cut-out scene process is recorded as [t_out - 10, t_out + 10]|objectID.
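The two condition-2 checks in steps 243 and 244 test the same thing, a window of laneID samples outside the host's lane, on opposite sides of a following-segment boundary; a sketch (the dict-of-samples shape and names are assumptions):

```python
# Illustrative sketch of the condition-2 checks (hypothetical names): was
# the target outside the host's lane over the n*T window before a follow
# start (cut-in, step 243) or after a follow end (cut-out, step 244)?

def outside_host_lane(lane_id_samples, t_bound, n, T, host_lane, before=True):
    """lane_id_samples: {sample time: target laneID}. True when the target
    drove outside the host's lane over [t_bound - n*T, t_bound) for the
    cut-in check (before=True), or (t_bound, t_bound + n*T] for the
    cut-out check (before=False)."""
    ks = range(1, n + 1)
    window = ([t_bound - k * T for k in ks] if before
              else [t_bound + k * T for k in ks])
    # a missing sample counts as "in the host's lane", i.e. fails the check
    return all(lane_id_samples.get(t, host_lane) != host_lane for t in window)
```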
Step 25): the line patrol driving scene recognition module 25 identifies a line patrol driving scene by judging whether the main vehicle lane-change scene processes, the preselected following driving scene processes, the adjacent vehicle cut-in scene processes and the preceding vehicle cut-out scene processes meet the line patrol driving condition; if the condition is met, a line patrol driving scene exists.
The line patrol driving condition is: there is a time period containing no main vehicle lane-change process, no preselected following driving process, no adjacent vehicle cut-in process and no preceding vehicle cut-out process.
The line patrol driving scene recognition module 25 records the line patrol driving time interval as [t_freeStart, t_freeEnd] and the corresponding line patrol driving scene process as [t_freeStart, t_freeEnd]|ego, where ego is the host vehicle number.
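Since the line patrol condition is simply the absence of every other scene type, it can be sketched as interval subtraction: the complement, within the recording span, of the union of all other scene intervals (illustrative only; names are assumptions):

```python
# Illustrative sketch of step 25 (hypothetical names): line patrol
# intervals are the complement, within the recording span, of the union
# of all lane-change, preselected-following, cut-in and cut-out intervals.

def free_driving_intervals(total, busy):
    """total: (t0, t1) recording span; busy: (start, end) scene intervals.
    Returns the complementary [t_freeStart, t_freeEnd] intervals."""
    merged = []
    for s, e in sorted(busy):            # merge overlapping busy intervals
        if merged and s <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], e)
        else:
            merged.append([s, e])
    free, cursor = [], total[0]
    for s, e in merged:                  # emit the gaps between them
        if s > cursor:
            free.append((cursor, s))
        cursor = max(cursor, e)
    if cursor < total[1]:
        free.append((cursor, total[1]))
    return free
```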
With the above system and method, the operating conditions encountered by the vehicle while driving can be partitioned comprehensively, the identification efficiency is greatly improved, and other data processing systems can obtain the time ranges and key information of typical natural driving scenes for further data analysis.

Claims (12)

1. A typical natural driving scene recognition and extraction method for testing an intelligent driving system is characterized by comprising the following steps: the method is realized on the basis of a system comprising a data extraction and calculation module (1) and a typical scene type identification module (2);
the data extraction and calculation module (1) obtains driving scene key parameters from a vehicle driving database through screening, matching and calculation, wherein the driving scene key parameters comprise: the acquisition time t, the sampling step length T, the host vehicle speed egoV, the host vehicle acceleration egoAcc, the host-vehicle-to-lane-line distance distanceLane, the target object number objectID, the target object driving lane marker laneID, the target object relative longitudinal distance distanceX, the target object relative longitudinal velocity relVX, the target object relative longitudinal acceleration relAccX, the target object collision time TTC and the target object headway THW;
the typical scene type recognition module (2) identifies typical scenes from the driving scene key parameters provided by the data extraction and calculation module (1), according to the driving characteristics of the host vehicle and the relative state of the host vehicle and the target objects, wherein the typical scenes comprise: dangerous scenes, main vehicle lane-change scenes, following driving scenes, adjacent vehicle cut-in scenes, preceding vehicle cut-out scenes and line patrol driving scenes, and provides them for use in intelligent driving system testing.
2. The typical natural driving scene recognition and extraction method for intelligent driving system testing according to claim 1, characterized in that: the data extraction and calculation module (1) first screens original data from the vehicle driving database, wherein the original data comprises: the acquisition time t, the sampling step length T, the host vehicle speed egoV, the host vehicle acceleration egoAcc, the host-vehicle-to-lane-line distance distanceLane, the target object number objectID, the target object driving lane marker laneID, the target object relative longitudinal distance distanceX, the target object relative longitudinal velocity relVX and the target object relative longitudinal acceleration relAccX;
then, the data extraction and calculation module (1) uses the acquisition time t attached to each group of original data to match, at each moment, the host vehicle speed egoV, the target object number objectID, the target object relative longitudinal distance distanceX and the target object relative longitudinal velocity relVX, thereby obtaining the relative speed and relative position of each target object with respect to the host vehicle at each moment;
finally, the data extraction and calculation module (1) calculates the target object collision time TTC and the target object headway THW, wherein the method for calculating the target object collision time TTC comprises the following steps:
TTC=distanceX/relVX (1);
the method for calculating the headway THW of the target object comprises the following steps:
THW = distanceX/egoV (2).
3. the typical natural driving scene recognition extraction method for the intelligent driving system test according to claim 1, characterized in that: the typical scene type recognition module (2) comprises a dangerous scene recognition module (21), a main vehicle lane changing scene recognition module (22), a following driving scene recognition module (23), an adjacent vehicle cut-in and front vehicle cut-out scene recognition module (24) and a line patrol driving scene recognition module (25);
the danger scene recognition module (21) recognizes a danger scene;
the main lane changing scene recognition module (22) recognizes a main lane changing scene;
the following vehicle driving scene recognition module (23) recognizes a following vehicle driving scene;
the adjacent vehicle cut-in and front vehicle cut-out scene recognition module (24) recognizes an adjacent vehicle cut-in scene and a front vehicle cut-out scene;
a cruising scene recognition module (25) recognizes a cruising scene.
4. The typical natural driving scene recognition and extraction method for intelligent driving system testing according to claim 3, characterized in that: the dangerous scene recognition module (21) identifies a dangerous scene by judging whether the host vehicle acceleration egoAcc, the target object collision time TTC and the target object headway THW meet a danger condition;
the danger condition is: any one of the host vehicle acceleration egoAcc, the target object collision time TTC and the target object headway THW exceeds its correspondingly set safe scene boundary value;
when the danger condition is met, the vehicle is judged to have entered a dangerous scene, and the time at which the danger condition is met is recorded as t_danger.
5. The typical natural driving scene recognition extraction method for the intelligent driving system test according to claim 4, wherein the method comprises the following steps:
when the acceleration egoAcc of the main vehicle is less than a set safety scene boundary value of the acceleration of the main vehicle, a danger condition is met;
when the target object collision time TTC is smaller than a set target object collision time safety scene boundary value and the main vehicle is braked, a danger condition is met;
and when the target object headway THW is smaller than the set target object headway safety scene boundary value and the main vehicle is braked, the danger condition is met.
6. The typical natural driving scene recognition and extraction method for intelligent driving system testing according to claim 4 or 5, characterized in that: from the time t_danger at which the danger condition is met, a forward reserved time t_dangerForward and a backward reserved time t_dangerBackward, the dangerous scene recognition module (21) records the dangerous scene time interval as [t_danger - t_dangerForward, t_danger + t_dangerBackward].
7. The typical natural driving scene recognition extraction method for the intelligent driving system test according to claim 3, characterized in that: the main lane changing scene recognition module (22) recognizes a main lane changing scene by judging whether the distance distanceLane from the main vehicle to the lane line meets the main lane changing condition;
the main lane changing conditions are as follows: the distance variation from the main car to a certain lane line at the adjacent sampling time is larger than a set distance threshold;
when the main vehicle lane-change condition is met, the vehicle is judged to have entered a main vehicle lane-change scene, and the time at which the condition is met is recorded as t_lane.
8. The typical natural driving scene recognition and extraction method for intelligent driving system testing according to claim 7, characterized in that: with the total duration of the lane-change process set to t_laneProcess, the main vehicle lane-change scene time interval is recorded as [t_lane - t_laneProcess/2, t_lane + t_laneProcess/2].
9. The typical natural driving scene recognition extraction method for the intelligent driving system test according to claim 3, characterized in that: the following vehicle driving scene recognition module (23) firstly recognizes a preselected following vehicle driving scene by judging whether the main vehicle speed egoV, the target object number ObjectID, the target object driving lane mark laneID and the target object relative longitudinal distance distanceX meet following vehicle driving preselection conditions, and then recognizes the following vehicle driving scene by judging whether the preselected following vehicle driving scene meets following vehicle driving scene time conditions;
the following vehicle running preselection condition is that a target object closest to the longitudinal distance of the main vehicle exists in a lane where the main vehicle is located in a certain time period, and the speed of the main vehicle is not zero;
the start time of the preselected following scene is denoted t_preFollowStart and the end time t_preFollowEnd, and the preselected following driving scene time interval is recorded as [t_preFollowStart, t_preFollowEnd];
the following driving scene time condition is: the duration of the preselected following driving scene time interval [t_preFollowStart, t_preFollowEnd] is greater than the set following driving scene time threshold.
10. The typical natural driving scene recognition extraction method for the intelligent driving system test according to claim 9, wherein: the adjacent vehicle cut-in and front vehicle cut-out scene recognition module (24) recognizes an adjacent vehicle cut-in scene and a front vehicle cut-out scene through the following steps:
step 241): the adjacent vehicle cut-in and front vehicle cut-out scene recognition module (24) recognizes an adjacent vehicle cut-in scene or a front vehicle cut-out candidate scene by judging whether a main vehicle lane change scene process, a pre-selection following vehicle driving scene process, a target relative longitudinal distance distanceX, a target relative longitudinal speed relVX and a target relative longitudinal acceleration relAccX meet an adjacent vehicle cut-in or front vehicle cut-out condition 1 or not;
the adjacent vehicle cut-in or preceding vehicle cut-out condition 1 is: there are two adjacent preselected following driving scene processes following different targets object_i and object_j, the interval between the two scene processes is shorter than a set time threshold, and that interval intersects a main vehicle lane-change scene process;
the time interval of the adjacent vehicle cut-in or preceding vehicle cut-out candidate scene is recorded as [t_preFollowEndi, t_preFollowStartj], wherein t_preFollowEndi is the time end point of the earlier following process (following object_i) in the adjacent pair of preselected following driving scene processes, and t_preFollowStartj is the time start point of the later following process (following object_j);
step 242): the adjacent vehicle cut-in and front vehicle cut-out scene recognition module (24) distinguishes an adjacent vehicle cut-in scene from a front vehicle cut-out scene by judging the relative distance change characteristic between a main vehicle and a target in the process of cutting in the adjacent vehicle or cutting out the candidate scene from the front vehicle, and the judging step is as follows:
1) using a preset inter-vehicle kinematics calculation method and the relative motion state of the host vehicle and object_i at time t_preFollowEndi, calculate the relative distance distanceX_i between the host vehicle and object_i at time t_preFollowStartj;
2) if the relative distance distanceX_j between the host vehicle and object_j at t_preFollowStartj is greater than distanceX_i, the scene is judged to be a preceding vehicle cut-out scene, and the adjacent vehicle cut-in and preceding vehicle cut-out scene recognition module (24) records the cut-out scene time as t_out|objectID, wherein t_out = (t_preFollowEndi + t_preFollowStartj)/2 and objectID is the number of object_i;
if distanceX_j is less than or equal to distanceX_i, the scene is judged to be an adjacent vehicle cut-in scene, and the module (24) records the cut-in scene time as t_in|objectID, wherein t_in = (t_preFollowEndi + t_preFollowStartj)/2 and objectID is the number of object_j;
step 243): the adjacent vehicle cut-in and preceding vehicle cut-out scene recognition module (24) screens all time start points t_preFollowStart of preselected following driving scene processes that do not meet the adjacent vehicle cut-in or preceding vehicle cut-out condition 1 and do not intersect any main vehicle lane-change scene process, and further identifies whether an adjacent vehicle cut-in scene exists by judging whether the adjacent vehicle cut-in condition 2 is met;
the adjacent vehicle cut-in condition 2 is: such a time start point t_preFollowStart exists, and during the period [t_preFollowStart - nT, t_preFollowStart] the target object drives outside the lane of the host vehicle, wherein T is the sampling time and n is any nonzero natural number;
the adjacent vehicle cut-in time is recorded as t_in|objectID, wherein t_in = t_preFollowStart and objectID is the number of the cutting-in adjacent vehicle;
step 244): the adjacent vehicle cut-in and preceding vehicle cut-out scene recognition module (24) screens all time end points t_preFollowEnd of preselected following driving scene processes that do not meet the adjacent vehicle cut-in or preceding vehicle cut-out condition 1 and do not intersect any main vehicle lane-change scene process, and further identifies whether a preceding vehicle cut-out scene exists by judging whether the preceding vehicle cut-out condition 2 is met;
the preceding vehicle cut-out condition 2 is: such a time end point t_preFollowEnd exists, and during the period [t_preFollowEnd, t_preFollowEnd + nT] the target object drives outside the lane of the host vehicle, wherein T is the sampling time and n is any nonzero natural number;
the preceding vehicle cut-out time is recorded as t_out|objectID, wherein t_out = t_preFollowEnd and objectID is the number of the cutting-out preceding vehicle.
11. The typical natural driving scene recognition extraction method for the intelligent driving system test according to claim 10, wherein:
in step 243), the adjacent vehicle cut-in scene time interval includes a settable forward reserved time t_inForward and a backward reserved time t_inBackward, and the adjacent vehicle cut-in scene time interval is [t_in - t_inForward, t_in + t_inBackward];
in step 244), the preceding vehicle cut-out scene time interval includes a settable forward reserved time t_outForward and a backward reserved time t_outBackward, and the preceding vehicle cut-out scene time interval is [t_out - t_outForward, t_out + t_outBackward].
12. The typical natural driving scene recognition extraction method for the intelligent driving system test according to claim 10 or 11, wherein: the line patrol driving scene recognition module (25) recognizes a line patrol driving scene by judging whether the main vehicle lane change scene process, the preselected following driving scene process, the adjacent vehicle cut-in scene process and the preceding vehicle cut-out scene process meet line patrol driving conditions, and if the line patrol driving conditions are met, the line patrol driving scene exists;
the line patrol driving conditions are as follows: and in a certain time period, no main vehicle lane changing process, pre-selection following vehicle running process, adjacent vehicle cut-in process or front vehicle cut-out process exists.
CN202010707394.XA 2020-07-22 2020-07-22 Typical natural driving scene recognition and extraction method for intelligent driving system test Active CN111599181B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010707394.XA CN111599181B (en) 2020-07-22 2020-07-22 Typical natural driving scene recognition and extraction method for intelligent driving system test

Publications (2)

Publication Number Publication Date
CN111599181A true CN111599181A (en) 2020-08-28
CN111599181B CN111599181B (en) 2020-10-27

Family

ID=72188230

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010707394.XA Active CN111599181B (en) 2020-07-22 2020-07-22 Typical natural driving scene recognition and extraction method for intelligent driving system test

Country Status (1)

Country Link
CN (1) CN111599181B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110020471A (en) * 2019-03-28 2019-07-16 上海工程技术大学 A kind of functional simulation detection system of autonomous driving vehicle
US20190265712A1 (en) * 2018-02-27 2019-08-29 Nauto, Inc. Method for determining driving policy
CN110750311A (en) * 2019-10-18 2020-02-04 北京汽车研究总院有限公司 Data classification method, device and equipment
CN110942671A (en) * 2019-12-04 2020-03-31 北京京东乾石科技有限公司 Vehicle dangerous driving detection method and device and storage medium
US10636295B1 (en) * 2019-01-30 2020-04-28 StradVision, Inc. Method and device for creating traffic scenario with domain adaptation on virtual driving environment for testing, validating, and training autonomous vehicle
CN111324120A (en) * 2020-02-26 2020-06-23 中汽研汽车检验中心(天津)有限公司 Cut-in and cut-out scene extraction method for automatic driving front vehicle
CN111338973A (en) * 2020-05-19 2020-06-26 中汽院汽车技术有限公司 Scene-based automatic driving simulation test evaluation service cloud platform and application method
CN111401414A (en) * 2020-02-29 2020-07-10 同济大学 Natural driving data-based dangerous scene extraction and classification method


Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112277958A (en) * 2020-10-27 2021-01-29 武汉光庭信息技术股份有限公司 Driver braking behavior analysis method
CN112513951A (en) * 2020-10-28 2021-03-16 华为技术有限公司 Scene file acquisition method and device
CN112346998A (en) * 2021-01-11 2021-02-09 北京赛目科技有限公司 Automatic driving simulation test method and device based on scene
CN113487874A (en) * 2021-05-27 2021-10-08 中汽研(天津)汽车工程研究院有限公司 System and method for collecting, identifying and classifying following behavior scene data
CN113408061A (en) * 2021-07-08 2021-09-17 中汽院智能网联科技有限公司 Virtual driving scene element recombination method based on improved Latin hypercube sampling
CN113408061B (en) * 2021-07-08 2023-05-05 中汽院智能网联科技有限公司 Virtual driving scene element recombination method based on improved Latin hypercube sampling
CN113581172B (en) * 2021-08-04 2022-11-29 武汉理工大学 Method for identifying driving scene cut into by intelligent driving vehicle facing target vehicle
CN113581172A (en) * 2021-08-04 2021-11-02 武汉理工大学 Method for identifying driving scene cut into by intelligent driving vehicle facing target vehicle
CN113867367A (en) * 2021-11-30 2021-12-31 腾讯科技(深圳)有限公司 Processing method and device for test scene and computer program product
CN113867367B (en) * 2021-11-30 2022-02-22 腾讯科技(深圳)有限公司 Processing method and device for test scene and computer program product
CN114283579A (en) * 2021-12-20 2022-04-05 招商局检测车辆技术研究院有限公司 C-V2X-based key scene generation method, risk assessment method and system
CN115249408A (en) * 2022-06-21 2022-10-28 重庆长安汽车股份有限公司 Scene classification extraction method for automatic driving test data
CN114863689A (en) * 2022-07-08 2022-08-05 中汽研(天津)汽车工程研究院有限公司 Method and system for collecting, identifying and extracting data of on-off ramp behavior scene
CN115848371A (en) * 2023-02-13 2023-03-28 智道网联科技(北京)有限公司 ACC system control method, device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111599181B (en) Typical natural driving scene recognition and extraction method for intelligent driving system test
CN110155046B (en) Automatic emergency braking hierarchical control method and system
CN113487874B (en) System and method for collecting, identifying and classifying following behavior scene data
CN110304074B (en) Hybrid driving method based on layered state machine
CN111815959B (en) Vehicle violation detection method and device and computer readable storage medium
Feng et al. Analysis of driver brake behavior under critical cut-in scenarios
CN112950811B (en) New energy automobile region operation risk assessment and early warning system integrating whole automobile safety
CN116466644B (en) Vehicle performance supervision system and method based on PLC control
WO2023151227A1 (en) Method and device for determining working condition of vehicle, and storage medium and processor
CN113722835A (en) Modeling method for anthropomorphic random lane change driving behavior
CN113570747B (en) Driving safety monitoring system and method based on big data analysis
CN115279643A (en) On-board active learning method and apparatus for training a perception network of an autonomous vehicle
Ma et al. Naturalistic driving behavior analysis under typical normal cut-in scenarios
CN114528253A (en) Intelligent internet automobile public road dangerous scene extraction method and device, dangerous scene construction method and device and computing equipment
CN110667597B (en) Driving style state identification method based on vehicle controller local area network data information
CN112686127A (en) GM-HMM-based driver overtaking intention identification method
CN112406871A (en) Intelligent driving system and method
Yuan et al. Analysis of normal stopping behavior of drivers at urban intersections in China
CN113581172B (en) Method for identifying driving scene cut into by intelligent driving vehicle facing target vehicle
CN114120625B (en) Vehicle information integration system, method, and storage medium
CN115966100B (en) Driving safety control method and system
CN114274938B (en) Vehicle braking scene determining method, device, equipment and storage medium
CN117272690B (en) Method, equipment and medium for extracting dangerous cut-in scene of automatic driving vehicle
CN114407918B (en) Takeover scene analysis method, takeover scene analysis device, takeover scene analysis equipment and storage medium
Gu et al. Autonomous driving hazard scenario extraction and safety assessment based on crash reports and carla simulation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant