CN114782926A - Driving scene recognition method, device, equipment, storage medium and program product - Google Patents

Driving scene recognition method, device, equipment, storage medium and program product

Info

Publication number
CN114782926A
CN114782926A (application CN202210694071.0A)
Authority
CN
China
Prior art keywords
driving
behavior
target vehicle
time period
crossing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210694071.0A
Other languages
Chinese (zh)
Other versions
CN114782926B (en)
Inventor
赵树廉
段剑犁
吴思宇
王红
高博麟
李克强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN202210694071.0A priority Critical patent/CN114782926B/en
Publication of CN114782926A publication Critical patent/CN114782926A/en
Application granted granted Critical
Publication of CN114782926B publication Critical patent/CN114782926B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/04 — Architecture, e.g. interconnection topology

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Traffic Control Systems (AREA)

Abstract

The method first obtains driving data of a target vehicle within a preset time period, determines driving behavior primitives of the target vehicle within the preset time period according to the driving data, and then determines the driving scene of the target vehicle within the preset time period according to those driving behavior primitives and a preset driving scene model. With this method, multiple kinds of driving scenes can be identified automatically and in a unified manner.

Description

Driving scene recognition method, device, equipment, storage medium and program product
Technical Field
The present application relates to the field of vehicle automatic driving technologies, and in particular, to a driving scenario recognition method, apparatus, device, storage medium, and program product.
Background
As technology advances, vehicles are becoming increasingly intelligent, and autonomous driving technology is gradually reaching the market. Before autonomous vehicles are introduced to the market, autonomous driving systems must be adequately tested and validated to ensure that the vehicles can operate safely.
A driving scene, as the final input reflecting the real operating environment of an autonomous vehicle, is a core element supporting autonomous-vehicle development and test-evaluation technology. However, existing driving scene recognition methods can only recognize single-function scenes and are difficult to adapt to the complex real driving environments of autonomous vehicles.
Therefore, how to realize the automatic identification of multiple driving scenes is an urgent problem to be solved.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a driving scenario identification method, device, apparatus, storage medium, and program product capable of automatically identifying a plurality of driving scenarios.
In a first aspect, the present application provides a driving scenario recognition method, including:
acquiring driving data of a target vehicle in a preset time period;
determining a driving behavior primitive of the target vehicle within the preset time period according to the driving data; the driving behavior primitive represents behavior characteristics common to the vehicle's driving process across a plurality of driving scenes;
determining a driving scene of the target vehicle within the preset time period according to the driving behavior primitives of the target vehicle within the preset time period and a preset driving scene model; the driving scene model comprises the correspondence between driving behavior primitive groups and driving scenes.
In one embodiment, determining the driving behavior primitive of the target vehicle within the preset time period according to the driving data comprises:
dividing a preset time period into a plurality of time windows according to the time sequence;
and according to the time sequence of each time window, performing behavior primitive identification on the driving data in each time window to obtain driving behavior primitives of the target vehicle in a preset time period.
In one embodiment, the behavior primitive identification comprises straight-going behavior primitive identification; the driving behavior primitives comprise a straight-going behavior primitive;
performing behavior primitive recognition on the driving data in each time window to obtain driving behavior primitives of the target vehicle in a preset time period, wherein the behavior primitives comprise:
aiming at any time window, performing straight-going behavior primitive recognition on the driving data in the time window, and if the driving data in the time window belongs to the straight-going behavior primitive, continuing performing straight-going behavior primitive recognition on the driving data in the next time window of the time window;
and if the driving data in the time window does not belong to the straight-going behavior primitive, deleting the driving data at the starting moment of the time window, then subdividing the time window, and identifying the straight-going behavior primitive of the driving data in the subdivided time window.
In one embodiment, the driving data comprises a transverse distance from the target vehicle to the lane line and a driving included angle between the target vehicle and a tangent of the lane line;
performing straight-ahead behavior primitive recognition on driving data in a time window, comprising:
acquiring a transverse distance corresponding to each moment in a time window and a driving included angle corresponding to each moment;
determining a distance difference between the transverse distance of each moment in the time window and the transverse distance of the starting moment of the time window according to the transverse distance corresponding to each moment in the time window;
and if the absolute value of the distance difference at each moment in the time window is less than or equal to the straight-going behavior distance threshold, and the absolute value of the driving included angle at each moment in the time window is less than or equal to the straight-going behavior angle threshold, determining that the driving data in the time window belongs to the straight-going behavior primitive.
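The check described above can be sketched as a simple predicate. The function name and the threshold values below are illustrative assumptions; the text does not fix concrete numbers:

```python
def is_straight_window(lateral_dists, headings,
                       dist_thresh=0.3, angle_thresh=5.0):
    """Return True if one window's samples form a straight-going primitive.

    lateral_dists: transverse distance to the lane line at each moment (m)
    headings: included angle between the vehicle and the lane-line tangent (deg)
    dist_thresh, angle_thresh: illustrative straight-going thresholds
    """
    d0 = lateral_dists[0]  # transverse distance at the window's starting moment
    within_dist = all(abs(d - d0) <= dist_thresh for d in lateral_dists)
    within_angle = all(abs(a) <= angle_thresh for a in headings)
    return within_dist and within_angle
```

A window fails the check as soon as any sample drifts more than `dist_thresh` from the starting-moment distance or turns more than `angle_thresh` from the lane-line tangent.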
In one embodiment, the driving data comprises a left transverse distance from the target vehicle to the left lane line and a left driving included angle between the target vehicle and the tangential direction of the left lane line;
the method for identifying the driving data in each time window to obtain the driving behavior elements of the target vehicle in the preset time period comprises the following steps:
sequentially acquiring left transverse distance variable quantities of target vehicles at all adjacent moments in a preset time period;
determining a time period of the crossing behavior of the target vehicle according to the variable quantity of each left transverse distance;
and acquiring driving behavior primitives of the target vehicle in the crossing behavior time period.
In one embodiment, determining the time period of the target vehicle for the crossing behavior according to the left lateral distance variation comprises:
obtaining left transverse distance variable quantities of which absolute values are larger than a lane width threshold value in all left transverse distance variable quantities to obtain candidate left transverse distance variable quantities;
determining, for each candidate left transverse distance variable quantity, the earlier of its two corresponding adjacent moments as a crossing moment; a crossing behavior of the target vehicle occurs at each crossing moment;
and determining the crossing time period of the target vehicle for each crossing behavior according to each crossing time.
In one embodiment, determining the crossing time period of each crossing behavior of the target vehicle according to the crossing moments comprises:
for any crossing moment, determining the moment meeting a preset condition before the crossing moment as the starting moment of the crossing behavior corresponding to that crossing moment, and determining the moment meeting the preset condition after the crossing moment as the ending moment of that crossing behavior; the preset condition is that the absolute value of the left transverse distance variation relative to the starting moment of the preset time period is less than or equal to the crossing behavior distance threshold, and the absolute value of the left driving included angle is less than or equal to the crossing behavior angle threshold;
and determining the time period between the starting time and the ending time as the time period of the crossing behavior corresponding to the crossing time of the target vehicle.
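The detection of crossing moments from jumps in the left transverse distance can be sketched as follows. The lane width threshold of 2.5 m and the function name are assumptions for illustration:

```python
def find_crossing_moments(left_dists, lane_width_thresh=2.5):
    """Return indices at which a lane-line crossing occurs.

    When the vehicle crosses a lane line, the reference lane line changes,
    so the left transverse distance jumps by roughly one lane width between
    two adjacent samples. Per the description, the earlier index of each
    such adjacent pair is taken as the crossing moment.
    """
    return [i for i in range(len(left_dists) - 1)
            if abs(left_dists[i + 1] - left_dists[i]) > lane_width_thresh]
```

The full crossing time period would then be obtained by expanding outward from each returned index until the preset condition above is met.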
In one embodiment, the driving behavior primitives include a left ride-through behavior primitive and a right ride-through behavior primitive;
acquiring driving behavior primitives of a target vehicle in a crossing behavior time period, comprising:
acquiring a reference left transverse distance variable quantity of a target vehicle in a crossing behavior time period;
if the reference left transverse distance variation is larger than the lane width threshold, determining that the driving behavior primitive of the target vehicle in the crossing behavior time period belongs to the right crossing behavior primitive;
and if the reference left transverse distance variation is smaller than the lane width threshold value, determining that the driving behavior primitive of the target vehicle in the crossing behavior time period belongs to the left crossing behavior primitive.
In one embodiment, determining the driving scene of the target vehicle within the preset time period according to the driving behavior primitive of the target vehicle within the preset time period and the preset driving scene model comprises:
splicing the driving behavior primitives within the preset time period according to their time sequence to obtain a plurality of candidate splicing primitive groups of the target vehicle within the preset time period;
matching each candidate splicing primitive group with a driving behavior primitive group in a driving scene model respectively;
determining the driving scene corresponding to the driving behavior primitive group successfully matched in the driving scene model as the driving scene corresponding to the candidate splicing primitive group; and the driving scene of each candidate splicing primitive group is the driving scene of the target vehicle in a preset time period.
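The matching step can be sketched as a lookup in a correspondence table. The primitive names and scene labels below are hypothetical, not taken from the text:

```python
# Hypothetical driving scene model: a correspondence between driving
# behavior primitive groups and driving scenes (entries are illustrative).
SCENE_MODEL = {
    ("straight", "left_crossing", "straight"): "left lane change",
    ("straight", "right_crossing", "straight"): "right lane change",
    ("straight",): "lane keeping",
}

def match_scene(primitive_group):
    """Match a candidate splicing primitive group against the scene model;
    returns the driving scene on success, None if no group matches."""
    return SCENE_MODEL.get(tuple(primitive_group))
```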
In one embodiment, the splicing processing of the driving behavior primitives in a preset time period includes:
splicing driving behavior elements with time correlation of a target vehicle within a preset time period; temporal associations represent temporal adjacency or temporal overlap;
and splicing the driving behavior primitives with time association of the target vehicle in a preset time period with the candidate behavior primitives with time association and driving behavior association in the vehicles around the target vehicle.
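The temporal-association test (adjacency or overlap of two primitives' time spans) can be sketched as a small predicate; the representation of primitives as `(start, end)` tuples is an assumption:

```python
def temporally_associated(a, b):
    """True if two primitives' time spans are adjacent or overlapping.

    a, b: (start, end) time spans in seconds; adjacency means the spans
    touch at a shared boundary moment.
    """
    return a[0] <= b[1] and b[0] <= a[1]
```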
In one embodiment, the construction process of the driving scene model comprises the following steps:
acquiring a plurality of sample driving scenes; each sample driving scene comprises the driving behaviors of the vehicle to which the scene belongs and of surrounding vehicles having temporal and driving-behavior association with that vehicle;
respectively carrying out driving behavior primitive analysis on each driving scene, and determining a driving behavior primitive group corresponding to each driving scene;
and determining the corresponding relation between each driving behavior primitive group and the corresponding driving scene as a driving scene model.
In a second aspect, the present application further provides a driving scenario recognition apparatus, including:
the acquisition module is used for acquiring driving data of the target vehicle within a preset time period;
the first determining module is used for determining driving behavior primitives of the target vehicle within a preset time period according to the driving data; the driving behavior primitive represents behavior characteristics common to the vehicle's driving process across a plurality of driving scenes;
the second determining module is used for determining a driving scene of the target vehicle within the preset time period according to the driving behavior primitives of the target vehicle within the preset time period and a preset driving scene model; the driving scene model comprises the correspondence between driving behavior primitive groups and driving scenes.
In a third aspect, an embodiment of the present application provides a computer device, which includes a memory and a processor, where the memory stores a computer program, and the processor implements, when executing the computer program, the steps of any one of the methods provided in the embodiment of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of any one of the methods provided in the embodiments of the first aspect.
In a fifth aspect, the present application provides a computer program product, which includes a computer program that, when executed by a processor, implements the steps of any one of the methods provided in the embodiments of the first aspect.
According to the driving scene recognition method, apparatus, device, storage medium and program product, driving data of the target vehicle within a preset time period is first acquired, driving behavior primitives of the target vehicle within the preset time period are determined according to the driving data, and the driving scene of the target vehicle within the preset time period is then determined according to those driving behavior primitives and a preset driving scene model. In this method, a driving behavior primitive represents behavior characteristics common to the vehicle's driving process across a plurality of driving scenes; determining driving behavior primitives from the target vehicle's driving data to recognize its driving scenes supports comprehensive research on typical driving scenes. Because driving behavior primitives are behavior characteristics common to many driving scenes, the method is applicable to driving scene recognition for any vehicle, providing a universal and easy-to-use means of automatically identifying and extracting driving scenes.
Drawings
FIG. 1 is a diagram of an exemplary driving scenario recognition method;
FIG. 2 is a flow diagram illustrating a driving scenario identification method in one embodiment;
FIG. 3 is a flow chart illustrating a driving scenario recognition method according to another embodiment;
FIG. 4 is a flow chart illustrating a driving scenario recognition method according to another embodiment;
FIG. 5 is a flow chart illustrating a driving scenario recognition method according to another embodiment;
FIG. 6 is a flow chart illustrating a driving scenario recognition method according to another embodiment;
FIG. 7 is a schematic diagram of a coordinate system of a driving scenario identification method in one embodiment;
FIG. 8 is a flowchart illustrating a driving scenario recognition method according to another embodiment;
FIG. 9 is a flowchart illustrating a driving scenario recognition method according to another embodiment;
FIG. 10 is a flowchart illustrating a driving scenario recognition method according to another embodiment;
FIG. 11 is a flow chart illustrating a driving scenario identification method in another embodiment;
FIG. 12 is a schematic diagram of a vehicle position of a driving scenario identification method in one embodiment;
FIG. 13 is a flow chart illustrating a driving scenario identification method in another embodiment;
FIG. 14 is a flow chart illustrating a driving scenario identification method in another embodiment;
FIG. 15 is a block diagram showing the construction of a driving scene recognition apparatus according to an embodiment;
FIG. 16 is a diagram of an internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The driving scene recognition method provided by the embodiment of the application can be applied to the application environment shown in fig. 1. Where the target vehicle 102 communicates with the server 104 over a network. The data storage system may store data that the server 104 needs to process. The data storage system may be integrated on the server 104, or may be located on the cloud or other network server. The target vehicle 102 includes various autonomous vehicles and manually driven vehicles, among others. The server 104 may be implemented as a stand-alone server or a server cluster comprised of multiple servers.
The target vehicle can be provided with a vehicle-mounted system which is used for acquiring driving data of the target vehicle in the driving process and uploading the driving data to the server, and the server receives the driving data of the target vehicle and then carries out driving scene recognition on the driving data.
At present, researchers have proposed various methods for extracting typical driving scenes. In one approach, a general standard for extracting dangerous driving scenes is determined through research on dangerous driving scenes; in another, driver characteristic data obtained from real-road experiments and an abnormal-driving-behavior scale are analyzed to derive an extraction standard for continuous car-following conditions; in a third, adjacent-vehicle cut-in conditions are identified by analyzing lane-change cut-in behaviors in natural driving data.
However, these research works all adopt relatively fixed classification rules to classify driving scene data and are limited to the extraction of single-function scenes. Such fixed, single-function extraction rules are difficult to adapt to the increasingly complex real driving environments of autonomous vehicles, and they hinder the subsequent formation of a standardized, integrated scene extraction framework that could support comprehensive research on typical natural driving scenes.
Based on this, the embodiment of the application provides a driving scene recognition method, a driving scene recognition device, a driving scene recognition storage medium and a program product, which can automatically recognize a plurality of driving scenes and realize comprehensive research on the driving scenes.
The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application.
In one embodiment, a driving scene recognition method is provided, described here using the application environment of fig. 1 as an example. The embodiment relates to a specific process of acquiring driving data of a target vehicle within a preset time period, determining driving behavior primitives of the target vehicle within the preset time period according to the driving data, and then determining the driving scene corresponding to those primitives according to a preset driving scene model. As shown in fig. 2, the embodiment includes the following steps:
s201, driving data of the target vehicle in a preset time period is acquired.
The target vehicle is a vehicle needing to identify the driving scene, the preset time period is a time period needing to identify the driving scene of the target vehicle, and the driving data represent driving data of the target vehicle in a natural driving process within the preset time period.
The driving data may include driving data of the target vehicle and of vehicles around it, including but not limited to vehicle speed data, lane line data, vehicle position data (such as the distance of the host vehicle from the two lane lines and from the lane center), vehicle driving direction data, and traffic environment data (such as road type, weather, speed limit, road congestion degree, and traffic lights); in short, data capable of reflecting the driving scene of the target vehicle.
In this embodiment, a vehicle-mounted platform may be installed on the target vehicle, and the vehicle-mounted platform is provided with various sensors, for example: cameras, radars, inertial measurement units, and the like.
In the driving process of the target vehicle, the original driving data of the target vehicle are acquired in real time through various sensors on the vehicle-mounted platform, and the driving data can be sampled according to a preset fixed frequency in the mode of acquiring the original driving data of the target vehicle.
When the driving scene of the target vehicle is identified, the scene of the target vehicle may be identified in a time period manner, for example, driving data of the target vehicle within a preset time period is obtained, where the preset time period may be any time period during which the target vehicle is driving.
It should be noted that the driving data is natural driving data, that is, the driving data of the target vehicle in a preset time period is the driving data of the target vehicle in any time period during natural driving.
S202, determining driving behavior primitives of the target vehicle within the preset time period according to the driving data; the driving behavior primitive represents behavior characteristics common to the vehicle's driving process across a plurality of driving scenes.
The driving behavior primitive is a local behavior on a driving scene and is a basic structural unit of the driving scene, and when the driving scene of the target vehicle within a preset time period is identified, the driving behavior primitive of the target vehicle within the preset time period can be determined firstly.
Alternatively, the driving behavior primitive of the target vehicle within the preset time period may be determined according to a preset neural network model, specifically, the driving data of the target vehicle within the preset time period is input into the preset neural network model, and the driving behavior primitive of the target vehicle within the preset time period is output through analysis of the neural network model.
S203, determining a driving scene of the target vehicle in a preset time period according to the driving behavior element of the target vehicle in the preset time period and a preset driving scene model; the driving scene model comprises the corresponding relation between the driving behavior primitive group and the driving scene.
Because driving behavior primitives are behavior characteristics common to the vehicle across a plurality of driving scenes, combining driving behavior primitives yields the corresponding driving scene; that is, there is a correspondence between driving behavior primitive groups and driving scenes.
The driving scene model comprises the corresponding relation between the driving behavior primitive group and the driving scene, so that the driving scene of the target vehicle in the preset time period can be determined according to the driving behavior primitive and the driving scene model of the target vehicle in the preset time period.
Specifically, the driving behavior primitives of the target vehicle within the preset time period may be combined according to a preset combination rule to obtain a driving behavior primitive group, and then the driving scene of the target vehicle within the preset time period may be determined according to the driving behavior primitive group of the target vehicle within the preset time period and the corresponding relationship between the driving behavior primitive group and the driving scene in the driving scene model.
According to the driving scene recognition method, driving data of the target vehicle within a preset time period is first acquired, driving behavior primitives of the target vehicle within the preset time period are determined according to the driving data, and the driving scene of the target vehicle within the preset time period is then determined according to those driving behavior primitives and a preset driving scene model. In this method, a driving behavior primitive represents behavior characteristics common to the vehicle's driving process across a plurality of driving scenes; determining driving behavior primitives from the target vehicle's driving data to recognize its driving scenes supports comprehensive research on typical driving scenes. Because driving behavior primitives are behavior characteristics common to many driving scenes, the method is applicable to driving scene recognition for any vehicle, providing a universal and easy-to-use means of automatically identifying and extracting driving scenes.
Based on the above embodiments, how to obtain the driving behavior primitive of the target vehicle within the preset time period is described below by an embodiment, in one embodiment, as shown in fig. 3, determining the driving behavior primitive of the target vehicle within the preset time period according to the driving data includes the following steps:
s301, dividing a preset time period into a plurality of time windows according to the time sequence.
When driving scene recognition is performed on a driving process of a target vehicle in a preset time period, the preset time period can be firstly divided into a plurality of time windows, and then scene recognition is performed on driving data in each time window according to a preset recognition rule.
The manner of dividing the plurality of time windows may be to divide the time windows by a preset time length according to a time sequence, or to divide a preset time period into a plurality of time windows on average, and it should be noted that the number of the time windows is not limited in this application.
Optionally, when driving data is actually sampled, the sampling intervals may be uneven; that is, the time interval between two adjacent driving data samples can differ, while each sample still corresponds to a moment within the preset time period. The time windows may therefore also be divided according to the number of driving data samples within the preset time period: for example, if the preset time period contains 1000 driving data samples, every 100 samples, in order of sampling time, form one time window.
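The count-based division described above can be sketched as follows, using the 100-samples-per-window figure from the example (the function name is an assumption):

```python
def split_by_sample_count(samples, per_window=100):
    """Divide driving data into windows of a fixed number of samples,
    in sampling-time order; useful when sampling intervals are uneven.
    per_window=100 follows the example in the text; the last window may
    be shorter if the total is not a multiple of per_window."""
    return [samples[i:i + per_window]
            for i in range(0, len(samples), per_window)]
```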
In one embodiment, the preset time period is divided using a sliding time window of width T (in seconds). For any time window, let the starting moment within the window be t_j and the ending moment be t_k; then:

t_k − t_j = T    (1)

where t_k and t_j are the moments corresponding to the k-th and j-th driving data samples, respectively.

Optionally, T = 3 s.
s302, according to the time sequence of each time window, performing behavior primitive identification on the driving data in each time window to obtain driving behavior primitives of the target vehicle in a preset time period.
Because the time windows are divided in time order, each time window has its place in the sequence; behavior primitive identification is performed on the driving data in each time window in turn, in that order, finally yielding the driving behavior primitives of the target vehicle within the preset time period.
The manner of performing behavior primitive recognition on the driving data in each time window may be to perform behavior primitive recognition on the driving data in each time window according to a preset recognition algorithm, specifically, to use the driving data in each time window as an input of the recognition algorithm, and to obtain driving behavior primitives in each time window by running the recognition algorithm.
According to the driving scene recognition method above, the preset time period is divided into a plurality of time windows in time order, and behavior primitive identification is performed on the driving data in each time window, in that order, to obtain the driving behavior primitives of the target vehicle within the preset time period. By recognizing behavior primitives window by window, the method secures the effectiveness and accuracy of the subsequent driving scene recognition for the target vehicle within the preset time period.
In one embodiment, as shown in fig. 4, the behavior primitive recognition is performed on the driving data in each time window to obtain the driving behavior primitive of the target vehicle in the preset time period, which includes the following steps:
s401, aiming at any time window, straight-going behavior primitive recognition is carried out on the driving data in the time window, and if the driving data in the time window belongs to the straight-going behavior primitive, the straight-going behavior primitive recognition is continuously carried out on the driving data in the next time window of the time window.
Based on the plurality of time windows into which the preset time period is divided, straight-going behavior primitive recognition is first performed, in chronological order, on the driving data in the earliest time window. If that driving data belongs to the straight-going behavior primitive, recognition continues with the driving data in the next time window, and so on, until the driving data in every time window has been checked.
For example, suppose the preset time period contains 5 time windows, numbered 1 to 5 in chronological order, window No. 1 being the earliest. Straight-going behavior primitive recognition is first performed on the driving data in window No. 1; if it belongs to the straight-going behavior primitive, recognition proceeds to window No. 2, then to windows No. 3, No. 4 and No. 5 in turn, as long as each window's driving data belongs to the straight-going behavior primitive.
S402, if the driving data in the time window does not belong to the straight-going behavior primitives, deleting the driving data at the initial moment of the time window, then dividing the time window again, and identifying the straight-going behavior primitives of the driving data in the newly divided time window.
That is, if the driving data in the time window does not belong to the straight-going behavior primitive, the driving data at the start time of the window is removed, the time windows are re-divided starting from the time after that start time, and straight-going behavior primitive recognition is then performed on the driving data in the re-divided time windows.
Following the above example, for time windows 1 to 5: if the driving data in window No. 1 belongs to the straight-going behavior primitive, recognition continues with window No. 2; if the driving data in window No. 2 does not belong to the straight-going behavior primitive, the driving data corresponding to the start time of window No. 2 is removed, and windows 2 to 5 are divided again to determine new time windows. The dividing manner may be the same as the original window division and is not repeated here.
Then, the driving data in each of the subdivided time windows is continuously subjected to the straight-going behavior primitive recognition, and the recognition mode is the same as the mode of performing the straight-going behavior primitive recognition on the driving data in each of the time windows in the above embodiment, which is not described herein again.
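Steps S401 and S402 can be sketched as the following scan. This is a hedged illustration only: the data layout, the callable `is_straight`, and all names are assumptions rather than the patent's implementation.

```python
def scan_straight_primitives(data, width, is_straight):
    """Walk through (time, measurement) driving data window by window
    (S401/S402): if a window is classified as straight-going, continue
    with the next window; otherwise drop the sample at the window's
    start time and re-divide from the following sample."""
    primitives = []
    start = 0
    n = len(data)
    while start < n:
        # grow the window until it spans the configured width
        end = start
        while end < n and data[end][0] - data[start][0] < width:
            end += 1
        if end >= n:
            break  # not enough data left for a full window
        window = data[start:end + 1]
        if is_straight(window):
            primitives.append((data[start][0], data[end][0]))
            start = end  # continue with the next window (S401)
        else:
            start += 1   # delete the start-time sample, re-divide (S402)
    return primitives
```

Passing a classifier that rejects the first window shows the re-division behavior: the scan advances one sample and tries again rather than discarding the whole window.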
In the driving scene recognition method above, for any time window, straight-going behavior primitive recognition is performed on the driving data in that window. If the data belongs to the straight-going behavior primitive, recognition continues with the next time window; if it does not, the driving data at the start time of the window is deleted, the time windows are divided again, and recognition is performed on the driving data in the newly divided windows. By recognizing the straight-going behavior primitives of the target vehicle in each time window within the preset time period, the method guarantees the accuracy of the straight-going behavior primitives of the target vehicle within the preset time period.
The above embodiment performs straight-going behavior primitive recognition on the driving data in a time window. When the driving data includes the lateral distance from the target vehicle to the lane line and the driving angle between the target vehicle and the lane line tangent, the recognition can proceed as described in detail in the following embodiment. In one embodiment, as shown in fig. 5, performing straight-going behavior primitive recognition on the driving data in a time window includes the following steps:
s501, acquiring a transverse distance corresponding to each time and a driving included angle corresponding to each time in a time window.
When performing straight-going behavior primitive recognition on the driving data in a time window, the lateral distance from the target vehicle to the lane line and the driving angle between the target vehicle and the lane line tangent corresponding to each time in the window must first be acquired.
During driving, the target vehicle acquires its driving data in real time through sensors and stores the data in a database, with each piece of driving data corresponding to one time. The lateral distance from the target vehicle to the lane line and the driving angle between the target vehicle and the lane line tangent at each time within the time window can therefore be obtained directly from the database.
Here, the lateral distance from the target vehicle to the lane line is the distance from the vehicle to the lane line of the lane it occupies, and the driving angle between the target vehicle and the lane line tangent represents the angle between the vehicle heading of the target vehicle and the tangential direction of the lane line.
And S502, determining a distance difference between the transverse distance of each moment in the time window and the transverse distance of the starting moment of the time window according to the transverse distance corresponding to each moment in the time window.
According to the transverse distance from the target vehicle to the lane line corresponding to each time in the time window, the distance difference between the transverse distance of each time in the time window and the transverse distance of the starting time of the time window is calculated, and the distance difference can be expressed by an expression (2).
Δd_i = d_i − d_j,  j ≤ i ≤ k (2)

where j and k are respectively the start time and the end time of the time window, i is any time within the time window, d_i is the lateral distance from the target vehicle to the lane line at the i-th time within the window, d_j is the lateral distance from the target vehicle to the lane line at the start time of the window, and Δd_i is the distance difference between the lateral distance at the i-th time and the lateral distance at the start time of the window.
And S503, if the absolute value of the distance difference value at each moment in the time window is smaller than or equal to the straight-going behavior distance threshold value, and the absolute value of the driving included angle at each moment in the time window is smaller than or equal to the straight-going behavior angle threshold value, determining that the driving data in the time window belongs to a straight-going behavior primitive.
If the absolute value of the distance difference at each time in the time window is smaller than or equal to the straight-going behavior distance threshold, and the absolute value of the driving angle at each time in the window is smaller than or equal to the straight-going behavior angle threshold, then the heading deviation between every time in the window and the start time of the window is small, and the driving data in the window is determined to belong to the straight-going behavior primitive.
It should be noted that the straight-going behavior primitive in this embodiment represents the dynamic driving behavior of a vehicle that does not change lanes, whether on a straight lane or on a curve.
Based on the above embodiment, the conditions for satisfying the straight-line behavior primitive are:
|Δd_i| ≤ D_s and |θ_i| ≤ θ_s,  for all j ≤ i ≤ k (3)

where θ_i is the angle between the target vehicle and the lane line tangent at the i-th time within the time window, D_s is the straight-going behavior distance threshold in the straight-going behavior primitive, and θ_s is the straight-going behavior angle threshold in the straight-going behavior primitive.

Optionally, D_s = 0.3 m.
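Conditions (2)-(3) can be checked with a short sketch. The 0.3 m distance threshold follows the text above; the angle threshold default is an assumed placeholder (the patent's optional value is not reproduced here), and all names are illustrative:

```python
def is_straight_window(lateral_dists, headings, d_thresh=0.3, a_thresh=5.0):
    """Return True when every lateral-distance change relative to the
    window start (eq. 2) stays within d_thresh, and every driving angle
    to the lane-line tangent stays within a_thresh (eq. 3)."""
    d0 = lateral_dists[0]
    return (all(abs(d - d0) <= d_thresh for d in lateral_dists)
            and all(abs(a) <= a_thresh for a in headings))
```

A single sample drifting more than d_thresh from the window's starting lateral distance, or a single heading sample beyond a_thresh, is enough to reject the window.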
according to the driving scene identification method, the transverse distance corresponding to each time in the time window and the driving included angle corresponding to each time are obtained, the distance difference between the transverse distance of each time in the time window and the transverse distance of the starting time of the time window is determined according to the transverse distance corresponding to each time in the time window, and if the absolute value of the distance difference of each time in the time window is smaller than or equal to the straight-going behavior distance threshold value and the absolute value of the driving included angle of each time in the time window is smaller than or equal to the straight-going behavior angle threshold value, it is determined that the driving data in the time window belongs to straight-going behavior primitive identification. The method determines the conditions meeting the straight-going behavior elements, and improves the accuracy of straight-going behavior element identification.
The following describes how to determine the driving behavior primitives of the target vehicle within the preset time period according to the left lateral distance from the target vehicle to the left lane line. In one embodiment, as shown in fig. 6, performing behavior primitive recognition on the driving data in each time window to obtain the driving behavior primitives of the target vehicle within the preset time period includes the following steps:
s601, sequentially obtaining the left transverse distance variable quantity of the target vehicle at each adjacent moment in a preset time period.
The driving data of the target vehicle in the preset time period includes the left lateral distance from the target vehicle to the left lane line at each moment in the preset time period, so that the left lateral distance variation from the target vehicle to the left lane line at each adjacent moment in the preset time period can be sequentially calculated, as shown in formula (4).
ΔL_i = L_{i+1} − L_i (4)

where L_{i+1} is the left lateral distance from the target vehicle to the left lane line at the (i+1)-th time, L_i is the left lateral distance from the target vehicle to the left lane line at the i-th time, and ΔL_i is the change in the left lateral distance between the (i+1)-th time and the i-th time.
Alternatively, the left lateral distance of the vehicle from the left lane line may be collected by establishing a rectangular coordinate system according to the left-hand rule with the position of the camera sensor on the vehicle as the origin of the coordinate system, as shown in fig. 7.
The adjacent times corresponding to a left lateral distance variation should satisfy the condition that the time difference between them is smaller than the minimum time difference determination threshold. This guarantees that the interval between adjacent times is not too long, and in turn guarantees the accuracy of the scene recognition algorithm, as shown in formulas (5) and (6).
Δt_i = t_{i+1} − t_i (5)

Δt_i < T_min (6)

where t_{i+1} denotes the (i+1)-th time, t_i denotes the i-th time, Δt_i denotes the time difference between the (i+1)-th time and the i-th time, and T_min denotes the minimum time difference determination threshold.
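Equations (4)-(6) together can be sketched as follows. The 0.2 s default for the minimum time difference threshold is an assumed illustration value, as is every name used:

```python
def left_lateral_variations(times, left_dists, max_dt=0.2):
    """Compute the eq. (4) change in left lateral distance for each pair
    of adjacent samples, keeping only pairs whose time difference
    satisfies eqs. (5)-(6), i.e. dt < max_dt."""
    variations = []
    for i in range(len(times) - 1):
        dt = times[i + 1] - times[i]
        if dt < max_dt:  # adjacency check of eqs. (5)-(6)
            variations.append((i, left_dists[i + 1] - left_dists[i]))
    return variations
```

Pairs separated by a data gap longer than max_dt are skipped, so a recording dropout cannot masquerade as a lane crossing.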
And S602, determining the time period of the crossing behavior of the target vehicle according to the variable quantity of each left transverse distance.
According to the obtained left transverse distance variation between the adjacent moments, the time period of the crossing behavior of the target vehicle can be determined, and the crossing behavior represents the behavior of the target vehicle crossing the lane line.
In one embodiment, as shown in fig. 8, the determining the time period of the target vehicle having the crossing behavior according to the left lateral distance variation includes the following steps:
and S801, acquiring left lateral distance variation with an absolute value larger than a lane width threshold value in each left lateral distance variation to obtain candidate left lateral distance variation.
The left lateral distance variations from the above embodiment whose absolute values are larger than the lane width threshold are determined as candidate left lateral distance variations, which can be represented by formula (7):

|ΔL_i| > D (7)

where D is the lane width threshold. Optionally, the value of D can be determined according to the width of the lane; optionally, D = 3 m.
And S802, determining the earlier of the adjacent times corresponding to each candidate left lateral distance variation as a crossing time.
And each crossing moment corresponds to one crossing behavior of the target vehicle.
In the above embodiment, the earlier of the two adjacent times corresponding to a candidate left lateral distance variation is determined as the crossing time. For example, if the adjacent times corresponding to any candidate left lateral distance variation are t_{i+1} and t_i, then t_i is determined as the crossing time; the crossing time represents the time when the target vehicle crosses the lane line.
It should be noted that there may be a plurality of candidate left lateral distance variations, and the target vehicle may exhibit crossing behavior a plurality of times; that is, there may be more than one crossing time.
And S803, determining a crossing time period for the target vehicle to generate crossing behaviors each time according to each crossing time.
According to each crossing time, i.e., each time the target vehicle crosses the lane line, the time period of each crossing behavior can be determined, namely from the time the target vehicle starts to cross the lane line to the time it finishes crossing.
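The detection of crossing times in steps S801-S802 can be illustrated as below (a sketch under assumed names; `variations` is the index/variation pairing produced by eq. (4)):

```python
def find_crossing_times(times, variations, lane_width=3.0):
    """Keep variations whose magnitude exceeds the lane width threshold D
    (eq. 7) and return the earlier of each pair's adjacent times as the
    crossing time (S802)."""
    return [times[i] for i, dv in variations if abs(dv) > lane_width]
```

Each returned time marks one crossing behavior; a trip with several lane changes therefore yields several crossing times.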
And S603, acquiring driving behavior primitives of the target vehicle in the crossing behavior time period.
And further determining a driving behavior primitive in the crossing behavior time period based on the determined time period of the crossing behavior of the target vehicle in the preset time period.
The manner of determining the driving behavior primitive of the target vehicle in the time period of the crossing behavior may be determined according to a preset recognition algorithm, specifically, the driving data of the target vehicle in the time period of the crossing behavior is used as the input of the recognition algorithm, and by running the recognition algorithm, the driving behavior primitive of the target vehicle in the time period of the crossing behavior is finally output.
In the driving scene recognition method, left transverse distance variable quantities of the target vehicle at each adjacent moment in a preset time period are sequentially acquired, a time period of the crossing behavior of the target vehicle is determined according to the left transverse distance variable quantities, and then driving behavior elements of the target vehicle in the time period of the crossing behavior are acquired. The method determines the time period of the crossing behavior of the target vehicle according to the left transverse distance variation of the target vehicle at each adjacent moment in the preset time period, and then further determines the driving behavior elements of the target vehicle in the time period of the crossing behavior, so that the automatic identification of the driving behavior elements of the target vehicle in the preset time period is realized.
Further, the driving data also includes a left driving angle between the target vehicle and the tangential direction of the left lane line. The following embodiment describes in detail how to determine the crossing time period of each crossing behavior of the target vehicle according to this left driving angle. In one embodiment, as shown in fig. 9, determining the crossing time period of each crossing behavior of the target vehicle according to each crossing time includes:
s901, for any crossing time, determining a time before the crossing time and meeting a preset condition as a start time of a crossing behavior corresponding to the crossing time, and determining a time after the crossing time and meeting the preset condition as an end time of the crossing behavior corresponding to the crossing time.
The preset condition is that the absolute value of the left transverse distance variation with the starting moment of the preset time period is smaller than or equal to the crossing behavior distance threshold, and the absolute value of the left driving included angle is smaller than or equal to the crossing behavior angle threshold.
Based on the crossing times obtained within the preset time period, where a crossing time is the time when the target vehicle crosses the lane line, determining the time period of a crossing behavior requires backtracking from the crossing time to find the start time of the crossing behavior, and iterating forward from the crossing time to find the end time of the crossing behavior.
Specifically, backtrack from the crossing time until a time meeting the preset condition is found, and determine that time as the start time of the crossing behavior corresponding to the crossing time. That is, the time closest to the crossing time, searching backward, at which the absolute value of the left lateral distance variation relative to the start time of the preset time period is smaller than or equal to the crossing behavior distance threshold and the absolute value of the left driving angle is smaller than or equal to the crossing behavior angle threshold, is the start time of the crossing behavior.

Similarly, iterate forward from the crossing time until a time meeting the preset condition is found, and determine that time as the end time of the crossing behavior corresponding to the crossing time. That is, the time closest to the crossing time, searching forward, that satisfies the same two threshold conditions is the end time of the crossing behavior.
The preset condition is shown as formula (8).
|L_i − L_0| ≤ d_c ∧ |φ_i| ≤ θ_c (8)

where ∧ indicates that the two conditions are met simultaneously at the same time, L_0 is the left lateral distance at the start time of the preset time period, L_i is the left lateral distance at the i-th time within the preset time period (i.e., at the candidate start time or end time of the crossing behavior), φ_i is the left driving angle of the target vehicle at the i-th time, and d_c and θ_c are respectively the crossing behavior distance threshold and the crossing behavior angle threshold in the driving behavior primitive.

Optionally, d_c = 0.1 m.
and S902, determining the time period between the starting time and the ending time as the time period of the crossing behavior of the target vehicle.
And determining the time period between the starting time and the ending time of the crossing behavior corresponding to each crossing time as the time period of the crossing behavior corresponding to each crossing time of the target vehicle.
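A hedged sketch of steps S901-S902 under assumed names: the 0.1 m distance threshold follows the text, while the 3-degree angle threshold default is an assumed placeholder for the value condition (8) leaves unspecified here.

```python
def crossing_period(times, left_dists, left_angles, cross_idx,
                    d_thresh=0.1, a_thresh=3.0):
    """Scan backward from the crossing time for the start time and
    forward for the end time, each being the nearest sample satisfying
    condition (8): |L_i - L_0| <= d_thresh and |phi_i| <= a_thresh."""
    def satisfies(i):
        return (abs(left_dists[i] - left_dists[0]) <= d_thresh
                and abs(left_angles[i]) <= a_thresh)
    start = next(i for i in range(cross_idx, -1, -1) if satisfies(i))
    end = next(i for i in range(cross_idx, len(times)) if satisfies(i))
    return times[start], times[end]
```

The two `next(...)` scans implement the backtracking and forward iteration described above, stopping at the first qualifying sample on each side of the crossing time.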
In the driving scene recognition method above, for any crossing time, the time before it that meets the preset condition is determined as the start time of the corresponding crossing behavior, and the time after it that meets the preset condition is determined as the end time. The preset condition is that the absolute value of the left lateral distance variation relative to the start time of the preset time period is smaller than or equal to the crossing behavior distance threshold and the absolute value of the left driving angle is smaller than or equal to the crossing behavior angle threshold; the time period between the start time and the end time is determined as the time period of the crossing behavior. By bounding the time period of the crossing behavior in this way, the method improves the accuracy of driving behavior primitive recognition and thus guarantees the accuracy of driving scene recognition for the target vehicle within the preset time period.
Further, the driving behavior primitives include a left crossing behavior primitive and a right crossing behavior primitive, and in one embodiment, as shown in fig. 10, the step of obtaining the driving behavior primitives of the target vehicle within the time period of the crossing behavior includes the following steps:
s1001, a reference left lateral distance variation amount of the target vehicle in the time period of the crossing behavior is acquired.
The reference left lateral distance variation amount of the target vehicle within the time period of the crossing behavior represents the left lateral distance variation amount at the corresponding crossing time.
The left lateral distance variation corresponding to the crossing time is acquired directly from the database and determined as the reference left lateral distance variation.
And S1002, if the reference left transverse distance variation is larger than the lane width threshold, determining that the driving behavior element of the target vehicle in the crossing behavior time period belongs to the right crossing behavior element.
And S1003, if the reference left transverse distance variation is smaller than the lane width threshold, determining that the driving behavior primitive of the target vehicle in the crossing behavior time period belongs to a left crossing behavior primitive.
The left lateral distance variation here is the difference between the left lateral distance from the target vehicle to the left lane line at the time immediately after the crossing time (within the crossing behavior time period) and the left lateral distance at the crossing time.
And the left transverse distance is acquired by establishing a rectangular coordinate system according to a left-hand rule by taking the position of the camera sensor as the origin of the coordinate system, and the left transverse distance from the target vehicle to the left lane line is the distance from the vehicle to the left lane line of the lane to which the target vehicle belongs.
For example, if the lane width is 3.5 m: when the target vehicle performs a left crossing behavior, its left lateral distance changes from 0 to −3.5 m, so the reference left lateral distance variation is −3.5 m; when the target vehicle performs a right crossing behavior, its left lateral distance changes from −3.5 m to 0, so the reference left lateral distance variation is 3.5 m.
Therefore, if the reference left lateral distance variation is larger than the lane width threshold, that is, a positive value, it is determined that the driving behavior primitive of the target vehicle within the time period of the crossing behavior belongs to the right crossing behavior primitive; if the reference left lateral distance variation is smaller than the lane width threshold value, namely a negative value, it is determined that the driving behavior primitive of the target vehicle within the crossing behavior time period belongs to the left crossing behavior primitive.
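Under the signed convention of the worked example (left lateral distance negative, lane width threshold D), the classification of S1002-S1003 reduces to a sign test. This sketch reads "smaller than the lane width threshold" as "below −D", an interpretation drawn from the ±3.5 m example above; names are assumptions:

```python
def classify_crossing(ref_variation, lane_width=3.0):
    """A reference variation above +D means the left lateral distance
    jumped from about -lane_width back toward 0: a right crossing
    behavior primitive. A variation below -D means the opposite jump:
    a left crossing behavior primitive."""
    if ref_variation > lane_width:
        return "right"
    if ref_variation < -lane_width:
        return "left"
    return "none"  # no lane-width-sized jump: not a crossing
```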
It should be noted that, in the embodiment of the present application, the left lateral distance from the target vehicle to the left lane line is a negative value, so a reference left lateral distance variation greater than the lane width threshold corresponds to the right crossing behavior primitive, and a reference left lateral distance variation smaller than the lane width threshold corresponds to the left crossing behavior primitive. In practical applications, however, the embodiments of the present application do not limit this convention.
In the driving scene recognition method, a reference left transverse distance variation of the target vehicle in the time period of the crossing behavior is obtained, if the reference left transverse distance variation is larger than a lane width threshold, it is determined that a driving behavior primitive of the target vehicle in the time period of the crossing behavior belongs to a right crossing behavior primitive, and if the reference left transverse distance variation is smaller than the lane width threshold, it is determined that the driving behavior primitive of the target vehicle in the time period of the crossing behavior belongs to a left crossing behavior primitive. The method determines whether the driving behavior primitive of the target vehicle in the crossing behavior time period belongs to the left crossing behavior primitive or the right crossing behavior primitive, and ensures the diversity of the driving scene of the target vehicle.
Based on the driving behavior primitives of the target vehicle within the preset time period obtained above, which include straight-going behavior primitives, left crossing behavior primitives and right crossing behavior primitives, in one embodiment, as shown in fig. 11, the driving scene of the target vehicle within the preset time period is determined according to those driving behavior primitives and a preset driving scene model, including the following steps:
and S1101, splicing the driving behavior primitives in the preset time period according to the time sequence of the driving behavior primitives in the preset time period to obtain a plurality of candidate spliced primitive groups of the target vehicle in the preset time period.
The driving behavior primitives are spliced according to their chronological order within the preset time period, and the spliced driving behavior primitives are determined as candidate spliced primitive groups.
In one embodiment, the splicing processing of the driving behavior primitives within the preset time period splices those driving behavior primitives of the target vehicle that have a time association, where a time association represents time adjacency or time overlap.

Specifically, a primitive combination time difference threshold T_c is defined. For any two chronologically adjacent driving behavior primitives within the preset time period, if the absolute value of the time difference between the end time of the earlier primitive and the start time of the later primitive is smaller than T_c, i.e.

|t_end − t_start| < T_c

then the adjacent driving behavior primitives are determined to have a time association, and the two primitives are spliced in combination order, where T_c is the primitive combination time difference threshold that satisfies the splicing condition.
Therefore, by splicing the time-associated driving behavior primitives among the driving behavior primitives within the preset time period, a plurality of candidate spliced primitive groups can be obtained.
For example, if the driving behavior primitives of the target vehicle within the preset time period include straight-going behavior primitive No. 1, left crossing behavior primitive No. 1 and straight-going behavior primitive No. 2, and there is a time association between straight-going primitive No. 1 and left crossing primitive No. 1 as well as between left crossing primitive No. 1 and straight-going primitive No. 2, the three primitives are spliced in chronological order to obtain a candidate spliced primitive group.
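The time-association splicing can be sketched as grouping chronologically ordered primitives whose boundary gap stays below the combination threshold; the 0.5 s default and all names are assumptions, not the patent's values:

```python
def stitch_primitives(primitives, max_gap=0.5):
    """Group chronologically ordered (label, start, end) primitives whose
    boundary time difference is below the primitive combination time
    difference threshold (time adjacency or overlap)."""
    groups, current = [], [primitives[0]]
    for prim in primitives[1:]:
        prev_end = current[-1][2]
        if abs(prim[1] - prev_end) < max_gap:
            current.append(prim)  # time-associated: splice into the group
        else:
            groups.append(current)
            current = [prim]      # gap too large: start a new group
    groups.append(current)
    return groups
```

Each returned group is one candidate spliced primitive group; a long gap between primitives starts a new group.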
In another embodiment, the time-associated driving behavior primitives of the target vehicle within the preset time period are spliced together with candidate behavior primitives of the vehicles surrounding the target vehicle that have both a time association and a driving behavior association with them.
Optionally, the driving data obtained for the target vehicle further includes driving data of the vehicles around it. As shown in fig. 12, which is a position distribution diagram of the target vehicle and its surrounding vehicles, the numbers 1 to 8 in fig. 12 denote the surrounding vehicles.
Therefore, when the driving behavior primitives of the target vehicle within the preset time period are obtained, the driving behavior primitives of its surrounding vehicles within the same period are also determined. When the time-associated driving behavior primitives of the target vehicle are spliced, candidate behavior primitives of the surrounding vehicles are spliced in as well; a candidate behavior primitive is a driving behavior primitive of a surrounding vehicle that has both a time association and a driving behavior association with a driving behavior primitive of the target vehicle.
A driving behavior association indicates that the driving behavior primitive of the surrounding vehicle can influence the driving behavior of the target vehicle; for example, for the target vehicle, the driving behaviors of vehicles No. 1, 2, 3, 4 and 5 around it need to be judged. The meaning of time association is the same as in the above embodiments and is not repeated here.
For example, with continued reference to fig. 12, consider whether there is a vehicle directly in front of the target vehicle and what its driving state is, that is, the driving behavior primitive of vehicle No. 2, which is in the same lane as and ahead of the target vehicle. If the target vehicle has a straight-going behavior primitive within the preset time period, denoted straight-going behavior primitive No. 1, and the driving behavior primitives of vehicle No. 2 contain a candidate behavior primitive that has a time association with straight-going behavior primitive No. 1 and is itself a straight-going behavior primitive, then straight-going behavior primitive No. 1 and the corresponding straight-going behavior primitive of vehicle No. 2 are spliced to obtain a candidate spliced primitive group.
And S1102, matching each candidate splicing primitive group with a driving behavior primitive group in the driving scene model respectively.
Since the driving scene model contains the correspondence between driving behavior primitive groups and driving scenes, each candidate spliced primitive group obtained can be matched against the driving behavior primitive groups in the driving scene model, to detect whether the model contains a driving behavior primitive group identical to the candidate spliced primitive group.
S1103, determining the driving scene corresponding to the successfully matched driving behavior primitive group in the driving scene model as the driving scene corresponding to the candidate spliced primitive group; the driving scenes of the candidate spliced primitive groups are the driving scenes of the target vehicle within the preset time period.
If a candidate spliced primitive group is successfully matched with a driving behavior primitive group in the driving scene model, the driving scene corresponding to the matched driving behavior primitive group is determined to be the driving scene of that candidate spliced primitive group, that is, a driving scene of the target vehicle within the preset time period.
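Steps S1102 and S1103 amount to looking each candidate group up in a table of primitive-group-to-scene correspondences. The sketch below is a toy stand-in for the patent's driving scene model; the table contents and the function name `identify` are illustrative assumptions.

```python
# Toy driving scene model: primitive group (as a tuple) -> driving scene.
SCENE_MODEL = {
    ("straight",): "free driving",
    ("straight", "front_straight"): "car following",
    ("straight", "left_cross", "straight"): "left lane change",
    ("straight", "right_cross", "straight"): "right lane change",
}

def identify(candidate_groups):
    """Match each candidate spliced primitive group; None if no group matches."""
    return [SCENE_MODEL.get(tuple(group)) for group in candidate_groups]

print(identify([["straight", "left_cross", "straight"], ["straight"]]))
# ['left lane change', 'free driving']
```

A group with no identical counterpart in the model simply yields no scene, which mirrors the "successfully matched" condition in S1103.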
In the above driving scene identification method, the driving behavior primitives within the preset time period are spliced according to their chronological order to obtain a plurality of candidate spliced primitive groups of the target vehicle; each candidate spliced primitive group is then matched against the driving behavior primitive groups in the driving scene model, and the driving scene corresponding to each successfully matched driving behavior primitive group is determined as the driving scene of the corresponding candidate spliced primitive group, the driving scenes of the candidate spliced primitive groups being the driving scenes of the target vehicle within the preset time period. By splicing the driving behavior primitives of the target vehicle and then determining the driving scene of each spliced candidate group from the preset driving scene model, the method realizes automatic identification of multiple types of driving scenes.
Based on the driving scene model above, the following embodiment describes its construction process in detail. In one embodiment, as shown in fig. 13, the construction process of the driving scene model includes the following steps:
S1301, acquiring a plurality of sample driving scenes; the sample driving scenes include driving scenes in which the sample vehicle to which a scene belongs has a time association and a driving behavior association with its surrounding vehicles.
Before the driving scene model is constructed, a plurality of sample driving scenes need to be obtained; the sample driving scenes include a free driving scene, a car following scene, a lane change scene, an overtaking scene, a vehicle cut-in scene and a vehicle cut-out scene.
The sample driving scenes include scenes in which the sample vehicle has a time association and a driving behavior association with its surrounding vehicles. For example, the car following scene is one in which the speed and longitudinal position of the vehicle are influenced by a vehicle ahead, that is, a vehicle driving straight ahead exists in the lane of the sample vehicle. The vehicle cut-in scene indicates that, while the sample vehicle drives straight, a surrounding vehicle enters the sample vehicle's lane ahead of it; the vehicle cut-out scene indicates that a surrounding vehicle in the same lane as and ahead of the sample vehicle switches to another lane.
And S1302, respectively performing driving behavior primitive analysis on each driving scene, and determining a driving behavior primitive group corresponding to each driving scene.
The common behavior characteristics of the driving scenes are analyzed and mined to determine the driving behavior primitives, namely a straight-going behavior primitive, a left crossing behavior primitive and a right crossing behavior primitive. Each driving scene is then analyzed in terms of these primitives, the driving behavior primitives corresponding to each scene are determined, and those primitives are combined and spliced to determine the driving behavior primitive group corresponding to each driving scene.
Optionally, different driving scenarios correspond to different driving behavior primitive groups, e.g., a free-driving scenario is mainly composed of straight-going behavior primitives of sample vehicles.
The car following scene mainly consists of a straight-going behavior primitive of the sample vehicle and a straight-going behavior primitive of a surrounding vehicle whose lateral position is the sample vehicle's lane and whose longitudinal position is ahead of the sample vehicle.
The lane changing scene comprises a left lane changing scene and a right lane changing scene, wherein the left lane changing scene mainly comprises a straight-going behavior element of a sample vehicle, a left crossing behavior element of the sample vehicle and a straight-going behavior element of the sample vehicle; the right lane changing scene mainly comprises a straight-going behavior element of a sample vehicle, a right crossing behavior element of the sample vehicle and a straight-going behavior element of the sample vehicle.
The overtaking scenes comprise a left overtaking scene and a right overtaking scene.
The left overtaking scene mainly consists of, for the sample vehicle: a straight-going behavior primitive, a left crossing behavior primitive, a straight-going behavior primitive, a right crossing behavior primitive and a further straight-going behavior primitive. The first straight-going behavior primitive of the sample vehicle coexists with a straight-going behavior primitive of a surrounding vehicle whose lateral position is the sample vehicle's lane and whose longitudinal position is ahead of the sample vehicle; the second coexists with a straight-going behavior primitive of a surrounding vehicle whose lateral position is the right adjacent lane of the sample vehicle's lane and whose longitudinal position is level with the sample vehicle; and the third coexists with a straight-going behavior primitive of a surrounding vehicle whose lateral position is the sample vehicle's lane and whose longitudinal position is behind the sample vehicle.

The right overtaking scene mainly consists of, for the sample vehicle: a straight-going behavior primitive, a right crossing behavior primitive, a straight-going behavior primitive, a left crossing behavior primitive and a further straight-going behavior primitive. The first straight-going behavior primitive of the sample vehicle coexists with a straight-going behavior primitive of a surrounding vehicle whose lateral position is the sample vehicle's lane and whose longitudinal position is ahead of the sample vehicle; the second coexists with a straight-going behavior primitive of a surrounding vehicle whose lateral position is the left adjacent lane of the sample vehicle's lane and whose longitudinal position is level with the sample vehicle; and the third coexists with a straight-going behavior primitive of a surrounding vehicle whose lateral position is the sample vehicle's lane and whose longitudinal position is behind the sample vehicle.
The vehicle cut-out scene includes a left cut-out scene and a right cut-out scene.
The left cut-out scene mainly consists of a straight-going behavior primitive of the sample vehicle, a left crossing behavior primitive of a surrounding vehicle and a straight-going behavior primitive of that surrounding vehicle. Before the left crossing behavior primitive, the surrounding vehicle's lateral position is the sample vehicle's lane and its longitudinal position is ahead of the sample vehicle; thereafter, during the straight-going behavior primitive, its lateral position is the left adjacent lane of the sample vehicle's lane and its longitudinal position is level with the sample vehicle.

The right cut-out scene mainly consists of a straight-going behavior primitive of the sample vehicle, a right crossing behavior primitive of a surrounding vehicle and a straight-going behavior primitive of that surrounding vehicle. Before the right crossing behavior primitive, the surrounding vehicle's lateral position is the sample vehicle's lane and its longitudinal position is ahead of the sample vehicle; thereafter, during the straight-going behavior primitive, its lateral position is the right adjacent lane of the sample vehicle's lane and its longitudinal position is level with the sample vehicle.
The vehicle cut-in scenario includes a left-side cut-in scenario and a right-side cut-in scenario.
The left cut-in scene mainly consists of a straight-going behavior primitive of the sample vehicle, a right crossing behavior primitive of a surrounding vehicle and a straight-going behavior primitive of that surrounding vehicle. Before the right crossing behavior primitive, the surrounding vehicle's lateral position is the left adjacent lane of the sample vehicle's lane and its longitudinal position is level with the sample vehicle; thereafter, during the straight-going behavior primitive, its lateral position is the sample vehicle's lane and its longitudinal position is ahead of the sample vehicle.

The right cut-in scene mainly consists of a straight-going behavior primitive of the sample vehicle, a left crossing behavior primitive of a surrounding vehicle and a straight-going behavior primitive of that surrounding vehicle. Before the left crossing behavior primitive, the surrounding vehicle's lateral position is the right adjacent lane of the sample vehicle's lane and its longitudinal position is level with the sample vehicle; thereafter, during the straight-going behavior primitive, its lateral position is the sample vehicle's lane and its longitudinal position is ahead of the sample vehicle.
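The scene compositions above can be summarized as a small table mapping each scene to its primitive group. This is a condensed, illustrative rendering of the descriptions in the text (ego = sample vehicle, sur = the relevant surrounding vehicle, and the concurrent surrounding-vehicle primitives of the following/overtaking scenes are omitted); the names are assumptions, not the patent's notation.

```python
# Each scene maps to an ordered primitive group: (actor, behavior) pairs.
SCENE_PRIMITIVE_GROUPS = {
    "left lane change":  [("ego", "straight"), ("ego", "left_cross"),  ("ego", "straight")],
    "right lane change": [("ego", "straight"), ("ego", "right_cross"), ("ego", "straight")],
    "left cut-out":      [("ego", "straight"), ("sur", "left_cross"),  ("sur", "straight")],
    "right cut-out":     [("ego", "straight"), ("sur", "right_cross"), ("sur", "straight")],
    # a cut-in from the left is a rightward crossing by the surrounding vehicle,
    # and vice versa
    "left cut-in":       [("ego", "straight"), ("sur", "right_cross"), ("sur", "straight")],
    "right cut-in":      [("ego", "straight"), ("sur", "left_cross"),  ("sur", "straight")],
}

for scene, group in SCENE_PRIMITIVE_GROUPS.items():
    print(scene, "->", [f"{actor}:{beh}" for actor, beh in group])
```

Note that every group starts from a straight-going primitive of the sample vehicle, which is why straight-going detection anchors the recognition pipeline.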
And S1303, determining the corresponding relation between each driving behavior primitive group and the corresponding driving scene as a driving scene model.
Optionally, a driving scene can be defined by three key elements: vehicle attribute, vehicle position and driving behavior primitive, and a mathematical, symbolic theoretical model of driving scenes and driving behavior primitives is established on this basis. Specifically, the vehicle attribute distinguishes the sample vehicle from its surrounding vehicles; the vehicle position is determined from the relative positions of the sample vehicle and the surrounding vehicles and the lanes they occupy; and the driving behavior primitives comprise three basic behavior processes: the straight-going behavior primitive, the left crossing behavior primitive and the right crossing behavior primitive.
The correspondence between each driving behavior primitive group and its driving scene is determined as the driving scene model; this correspondence can be represented by formula (9):

    Scene = P_1 ∘ P_2 ∘ … ∘ P_n,  ∘ ∈ {∪, ∩},  P = (veh, long, lateral, B, exist)   (9)

where:

Scene denotes a driving scene; ∪ denotes union and ∩ denotes intersection, the operators by which the primitives are combined;

veh ∈ {veh_ego, veh_sur} denotes the vehicle attribute, representing respectively the sample vehicle and a surrounding vehicle of the sample vehicle;

long takes the values −1, 0 and 1, representing three different longitudinal relative positions: behind the sample vehicle, level with the sample vehicle, and ahead of the sample vehicle;

lateral takes the values −1, 0 and 1, representing three different lateral relative positions: the left adjacent lane of the sample vehicle's lane, the same lane as the sample vehicle, and the right adjacent lane of the sample vehicle's lane;

(long, lateral) together give the longitudinal and lateral position relative to the sample vehicle;

B ∈ {B_s, B_l, B_r} denotes one of the three driving behavior primitives: the straight-going behavior, the left crossing behavior and the right crossing behavior;

exist takes the values 0 and 1, indicating respectively that no vehicle is present at the position and that a sample/surrounding vehicle is present;

P_j denotes the j-th driving behavior primitive of a surrounding vehicle with attribute veh_sur in the scene, j = 1, …, n, with n the total number of driving behavior primitives of that surrounding vehicle.
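The five-element primitive description used by formula (9) can be mirrored by a small record type. This is a sketch under assumed names (`ScenePrimitive` and its fields are illustrative, not the patent's symbols):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ScenePrimitive:
    veh: str       # "ego" (sample vehicle) or "sur" (surrounding vehicle)
    long: int      # -1 behind, 0 level with, 1 ahead of the sample vehicle
    lateral: int   # -1 left adjacent lane, 0 same lane, 1 right adjacent lane
    behavior: str  # "straight", "left_cross", or "right_cross"
    exist: int     # 0 no vehicle at this position, 1 vehicle present

# The sample vehicle driving straight in its own lane:
ego_straight = ScenePrimitive("ego", 0, 0, "straight", 1)
print(ego_straight)
```

A scene template is then just an ordered combination of such tuples, which is what the matching step compares against.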
In one embodiment, the left lane change scene is taken as an example and is analyzed in terms of the three key elements of vehicle attribute, vehicle position and driving behavior primitive. In the left lane change scene, the vehicle attribute involved is mainly the sample vehicle veh_ego; the vehicle position is the sample vehicle's own relative position, long = 0, lateral = 0; and the driving behavior primitives consist of a straight-going behavior primitive B_s, a left crossing behavior primitive B_l and a further straight-going behavior primitive B_s. Since the left lane change scene can be determined from the driving behavior of the sample vehicle alone, the extraction result is not affected by whether the other positions contain surrounding vehicles, so the distribution of surrounding vehicles over the various positions can be handled by the union operation. Taking the case in which no surrounding vehicles are present around the sample vehicle, the left lane change scene can be expressed as formula (10):

    Scene_left lane change = (veh_ego, 0, 0, B_s, 1) ∩ (veh_ego, 0, 0, B_l, 1) ∩ (veh_ego, 0, 0, B_s, 1)   (10)
In practical application, after the driving behavior primitives of the target vehicle are extracted, they are combined and spliced into candidate spliced primitive groups; if a candidate group satisfies the driving scene model of the left lane change scene, it is determined to be a left lane change scene.
The left lane change scene comprises three driving behavior primitives with temporal continuity: a straight-going behavior primitive of the target vehicle, a left crossing behavior primitive of the target vehicle, and another straight-going behavior primitive of the target vehicle. When the time difference between the end time of one primitive and the start time of the next is smaller than the threshold Δt, the adjacent driving behavior primitives are spliced in the combination order of the group. Optionally, Δt = 2 s.
in the driving scene identification method, a plurality of sample driving scenes are obtained; the driving scenes comprise driving scenes in which sample vehicles to which the sample driving scenes belong are in time correlation and driving behaviors are correlated with surrounding vehicles of the sample vehicles; respectively carrying out driving behavior primitive analysis on each driving scene, and determining a driving behavior primitive group corresponding to each driving scene; and determining the corresponding relation between each driving behavior primitive group and the corresponding driving scene as a driving scene model. According to the method, the driving scene is modeled, the relation between the driving scene and the driving behavior elements is determined, and the automatic identification and extraction of the typical driving scene which is high in coverage, strong in universality and easy to use and faces to the function development and test of the automatic driving vehicle are realized.
In one embodiment, the application provides a method for identifying driving behavior primitives in which the primitives with time association and driving behavior association are spliced and a driving scene model is used, so that automatic identification and extraction over large-scale natural driving data can be realized. On part of the collected natural driving data set, the extraction accuracy of the method for the free driving, car following, lane change, overtaking, vehicle cut-in and vehicle cut-out scenes can reach more than 90%. The automatic identification and extraction process can be realized in two typical modes. In the first, the typical driving scene extraction algorithm is deployed to a real-time processor on the autonomous vehicle to realize driving scene recognition for the target vehicle.
In the second mode, the method performs real-time identification and extraction in an integrated environment such as an NI PXI real-time simulator or a virtual simulation platform, for example Virtual Test Drive (VTD) integrated with Matlab/Simulink; the method can also perform offline identification and extraction on data collected by acquisition equipment or generated in a simulation environment.
In the embodiment of the application, driving scenes can be identified and extracted through driving behavior primitives. Specifically, automatic identification and extraction of typical driving scenes, with high coverage, strong generality and ease of use, for the function development and testing of autonomous vehicles can be realized through several key steps: typical driving scene analysis, driving behavior primitive modeling, driving behavior primitive combination, and driving scene extraction.
In one embodiment, as shown in fig. 14, the embodiment includes:
S1401, analyzing the common behavior characteristics of driving scenes under natural driving conditions to obtain the driving behavior primitives;
The driving scenes at least include free driving, car following, lane changing, overtaking, vehicle cut-in and vehicle cut-out, and the driving behavior primitives comprise the straight-going behavior primitive, the left crossing behavior primitive and the right crossing behavior primitive.
And S1402, defining a driving scene according to each driving behavior element.
S1403, determining the combination relation of the driving behavior primitives corresponding to each driving scene according to the driving characteristics of the different types of driving scenes and their definitions, and establishing a mathematical, symbolic theoretical model of the typical driving scenes and the driving behavior primitives, namely the driving scene model.
And S1404, modeling each driving behavior element according to the characteristics of the driving behavior element to obtain each driving behavior element model.
The modeling process of the generalized straight-going driving behavior primitive is as follows:
Straight-going behavior primitives in the natural driving data are judged with a preset time window. Starting from the first time window of the natural driving data, it is judged whether, at every moment in the window, the absolute value of the difference between the distance to the lane line at that moment and the distance to the lane line at the window's start is less than or equal to the straight-behavior distance threshold, and whether the angle between the vehicle heading and the lane-line tangent at that moment is less than or equal to the straight-behavior angle threshold. If both conditions hold, the window is judged to satisfy a straight-going behavior primitive and the data of the next time window is judged; otherwise, a new time window is constructed in turn and judged in the same way, until all the natural driving data has been judged.
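The per-window straight-going test can be sketched directly from the two conditions above. The threshold values `D_MAX` and `A_MAX` are illustrative assumptions, not values given by the patent.

```python
D_MAX = 0.3   # assumed straight-behavior distance threshold (m)
A_MAX = 3.0   # assumed straight-behavior angle threshold (deg)

def window_is_straight(lane_dist, heading_angle):
    """Judge one time window.

    lane_dist:     distance to the lane line at each moment in the window (m)
    heading_angle: angle to the lane-line tangent at each moment (deg)
    """
    d0 = lane_dist[0]  # distance at the window's start moment
    dist_ok = all(abs(d - d0) <= D_MAX for d in lane_dist)
    angle_ok = all(abs(a) <= A_MAX for a in heading_angle)
    return dist_ok and angle_ok

# A window with small lateral drift and small heading angles passes:
print(window_is_straight([1.50, 1.55, 1.45, 1.50], [0.5, -0.8, 1.0, 0.2]))
```

A window failing either condition is rejected, after which the described procedure drops the window's first sample and re-forms the window.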
The modeling process of the left crossing behavior primitive and the right crossing behavior primitive is as follows:
Firstly, it is judged whether the time difference between adjacent moments is smaller than the minimum time difference judgment threshold and whether the absolute value of the change in distance to the lane line exceeds the lane width judgment threshold; if so, the moment at which the vehicle crosses the lane is determined.
Secondly, backtracking from the moment the vehicle crosses the lane, the start time of the crossing behavior is determined: at the start time, the absolute value of the change in distance to the lane line relative to the start of the natural driving data is less than or equal to the crossing behavior distance threshold, and the absolute value of the angle between the vehicle heading and the lane-line tangent is less than or equal to the crossing behavior angle threshold.
Then, iterating forward from the moment the vehicle crosses the lane, the end time of the crossing is determined: at the end time, the change in distance to the lane line relative to the start of the natural driving data must again be less than or equal to the crossing behavior distance threshold, and the angle to the lane-line tangent less than or equal to the crossing behavior angle threshold.
Finally, if the change in distance to the lane line is greater than the lane width judgment threshold, the time period from the start time to the end time of the crossing behavior is judged to be a right crossing behavior primitive; otherwise, it is judged to be a left crossing behavior primitive.
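The first step, locating the crossing moment, can be sketched as a scan for a jump in the signed distance to the lane line. The threshold `LANE_W` and the sign-to-direction mapping are illustrative assumptions; the backtracking for start time and the forward iteration for end time are omitted for brevity.

```python
LANE_W = 2.5  # assumed lane width judgment threshold (m)

def find_crossing(times, lane_dist):
    """Return (crossing moment, direction) for the first lane crossing, or None.

    lane_dist is the signed distance to the lane line at each moment; when the
    vehicle crosses the line, the reference line changes and the value jumps
    by roughly a lane width.
    """
    for i in range(1, len(times)):
        delta = lane_dist[i] - lane_dist[i - 1]
        if abs(delta) > LANE_W:
            # the sign of the jump distinguishes left from right crossing
            # (which sign maps to which side is an assumption here)
            return times[i], ("right" if delta > 0 else "left")
    return None

t = [0.0, 0.1, 0.2, 0.3]
d = [1.6, 1.7, -1.6, -1.5]   # signed distance flips as the line is crossed
print(find_crossing(t, d))
```

In the full procedure, the adjacent-moment time difference would additionally be checked against the minimum time difference judgment threshold before accepting the jump.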
S1405, according to the driving behavior primitive model and the driving scene model, recognizing and extracting the driving data of the vehicle, and determining the driving scene of the vehicle.
For specific limitations of the driving scenario identification method provided in this embodiment, reference may be made to step limitations of each embodiment in the driving scenario identification method, which is not described herein again.
It should be understood that, although the steps in the flowcharts of the above embodiments are displayed in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least some of the steps in these flowcharts may comprise multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; their execution order is likewise not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the application also provides a driving scene recognition device for realizing the driving scene recognition method. The implementation scheme for solving the problem provided by the device is similar to the implementation scheme recorded in the method, so the specific limitations in one or more embodiments of the driving scenario recognition device provided below can be referred to the limitations on the driving scenario recognition method in the foregoing, and details are not repeated here.
In one embodiment, as shown in fig. 15, there is provided a driving scenario recognition apparatus 1500 including: an acquisition module 1501, a first determination module 1502, and a second determination module 1503, wherein:
the acquisition module 1501 is used for acquiring driving data of a target vehicle within a preset time period;
a first determining module 1502, configured to determine, according to the driving data, a driving behavior primitive of the target vehicle within a preset time period; the driving behavior primitive represents the common behavior characteristics of the vehicle in the driving process of a plurality of driving scenes;
the second determination module 1503 is used for determining a driving scene of the target vehicle within a preset time period according to the driving behavior primitive of the target vehicle within the preset time period and the preset driving scene model; the driving scene model comprises the corresponding relation between the driving behavior primitive group and the driving scene.
In one embodiment, the first determination module 1502 includes:
the dividing unit is used for dividing the preset time period into a plurality of time windows according to the time sequence;
and the identification unit is used for performing behavior primitive identification on the driving data in each time window according to the time sequence of each time window to obtain the driving behavior primitive of the target vehicle in the preset time period.
In one embodiment, the identification unit comprises:
the first identification subunit is used for identifying the straight-ahead behavior primitive of the driving data in the time window aiming at any time window, and if the driving data in the time window belongs to the straight-ahead behavior primitive, continuing to identify the straight-ahead behavior primitive of the driving data in the next time window of the time window;
and the second identification subunit is used for deleting the driving data at the starting moment of the time window and then subdividing the time window if the driving data in the time window does not belong to the straight-going behavior primitives, and identifying the straight-going behavior primitives of the driving data in the subdivided time window.
In one embodiment, the first identification subunit comprises:
the first acquiring subunit is used for acquiring the lateral distance and the driving included angle corresponding to each moment in the time window;
the first determining subunit is used for determining, according to the lateral distance corresponding to each moment in the time window, the distance difference between the lateral distance at each moment and the lateral distance at the starting moment of the time window;
and the second determining subunit is used for determining that the driving data in the time window belongs to the straight-going behavior primitive if the absolute value of the distance difference at each moment in the time window is less than or equal to the straight-going behavior distance threshold and the absolute value of the driving included angle at each moment is less than or equal to the straight-going behavior angle threshold.
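The straight-going check performed by these subunits can be sketched as follows. This is a minimal illustration, not the patent's implementation: samples are modeled as (lateral distance, driving included angle) pairs, and the window size and both thresholds are placeholder values.

```python
from typing import List, Tuple

def is_straight_going(window: List[Tuple[float, float]],
                      dist_thresh: float = 0.3,
                      angle_thresh: float = 5.0) -> bool:
    """A window of (lateral_distance, driving_angle) samples belongs to
    the straight-going primitive when every sample's lateral offset from
    the window's first sample and its driving included angle both stay
    within their thresholds (threshold values here are illustrative)."""
    d0 = window[0][0]                       # lateral distance at window start
    return all(abs(d - d0) <= dist_thresh and abs(a) <= angle_thresh
               for d, a in window)

def extract_straight_segments(samples, win=5, dist_thresh=0.3, angle_thresh=5.0):
    """Scan with fixed-size windows; on failure, delete the window's start
    sample and re-divide, as the second identification subunit describes."""
    segments, i = [], 0
    while i + win <= len(samples):
        if is_straight_going(samples[i:i + win], dist_thresh, angle_thresh):
            segments.append((i, i + win))
            i += win                        # continue with the next window
        else:
            i += 1                          # drop start sample, re-divide
    return segments
```

A passing window advances the scan by a whole window; a failing one advances by a single sample, which matches the "delete the start moment, then re-divide" rule.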
In one embodiment, the identification unit comprises:
the second acquiring subunit is used for sequentially acquiring the left lateral distance variation of the target vehicle between each pair of adjacent moments within the preset time period;
the third determining subunit is used for determining, according to the left lateral distance variations, the time periods during which the target vehicle performs a crossing behavior;
and the third acquiring subunit is used for acquiring the driving behavior primitive of the target vehicle within each crossing behavior time period.
In one embodiment, the third determining subunit includes:
the fourth determining subunit is configured to select, from the left lateral distance variations, those whose absolute value is greater than the lane width threshold, obtaining candidate left lateral distance variations;
the fifth determining subunit is configured to determine, as a crossing moment, the earlier of the two adjacent moments corresponding to each candidate left lateral distance variation; each crossing moment corresponds to one crossing behavior of the target vehicle;
and the sixth determining subunit is used for determining, according to each crossing moment, the crossing time period of each crossing behavior of the target vehicle.
In one embodiment, the sixth determining subunit includes:
the seventh determining subunit is configured to determine, for any crossing moment, the moment satisfying a preset condition before the crossing moment as the starting moment of the corresponding crossing behavior, and the moment satisfying the preset condition after the crossing moment as the ending moment of the corresponding crossing behavior; the preset condition is that the absolute value of the left lateral distance variation relative to the starting moment of the preset time period is less than or equal to the crossing behavior distance threshold, and the absolute value of the left driving included angle is less than or equal to the crossing behavior angle threshold;
and the eighth determining subunit is used for determining the time period between the starting moment and the ending moment as a time period during which the target vehicle performs the crossing behavior.
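Reading the crossing-moment and crossing-period rules literally, a minimal sketch might look like the following. The per-moment data layout (parallel lists of left lateral distance and left driving included angle) and all threshold values are assumptions for illustration.

```python
def find_crossing_moments(left_dist, lane_width_thresh=3.0):
    """Indices t where the left lateral distance jumps by more than one
    lane width between adjacent moments, i.e. a lane line is crossed.
    The earlier moment of the adjacent pair is taken as the crossing moment."""
    return [t for t in range(len(left_dist) - 1)
            if abs(left_dist[t + 1] - left_dist[t]) > lane_width_thresh]

def crossing_period(t_cross, left_dist, left_angle,
                    dist_thresh=0.3, angle_thresh=3.0):
    """Expand around one crossing moment: the start (end) is the nearest
    earlier (later) moment satisfying the preset condition, read literally
    as a small left lateral distance variation relative to the start of
    the preset period plus a small left driving included angle."""
    def steady(t):
        return (abs(left_dist[t] - left_dist[0]) <= dist_thresh
                and abs(left_angle[t]) <= angle_thresh)
    start = next((t for t in range(t_cross - 1, -1, -1) if steady(t)), 0)
    end = next((t for t in range(t_cross + 1, len(left_dist)) if steady(t)),
               len(left_dist) - 1)
    return start, end
```

If no steady moment exists on one side, the sketch falls back to the boundary of the preset period; the patent leaves this case unspecified.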
In one embodiment, the third acquisition subunit includes:
the fourth acquiring subunit is configured to acquire the reference left lateral distance variation of the target vehicle over the crossing behavior time period;
the ninth determining subunit is configured to determine that the driving behavior primitive of the target vehicle within the crossing behavior time period belongs to the right crossing behavior primitive if the reference left lateral distance variation is greater than the lane width threshold;
and the tenth determining subunit is configured to determine that the driving behavior primitive of the target vehicle within the crossing behavior time period belongs to the left crossing behavior primitive if the reference left lateral distance variation is smaller than the lane width threshold.
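The left/right classification of a crossing period reduces to one signed comparison. The threshold value below is illustrative, and the primitive names are placeholders.

```python
def classify_crossing(ref_variation: float,
                      lane_width_thresh: float = 3.0) -> str:
    """Following the rule above: a reference left lateral distance
    variation above the lane width threshold maps to the right crossing
    primitive; otherwise to the left crossing primitive. (Candidates are
    pre-filtered by absolute value, so in practice the 'otherwise' branch
    is reached with values below -lane_width_thresh.)"""
    return ("right_crossing" if ref_variation > lane_width_thresh
            else "left_crossing")
```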
In one embodiment, the second determination module 1503 includes:
the splicing unit is used for splicing the driving behavior primitives within the preset time period in their chronological order to obtain a plurality of candidate spliced primitive groups of the target vehicle within the preset time period;
the matching unit is used for matching each candidate spliced primitive group against the driving behavior primitive groups in the driving scene model;
and the determining unit is used for determining the driving scene corresponding to the successfully matched driving behavior primitive group in the driving scene model as the driving scene corresponding to the candidate spliced primitive group; the driving scenes of the candidate spliced primitive groups together constitute the driving scenes of the target vehicle within the preset time period.
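The splicing-and-matching flow can be sketched with primitives modeled as (name, start, end) tuples. The scene model contents, the primitive names, and the adjacency gap are hypothetical; the patent does not prescribe these values.

```python
def splice(primitives, gap=0.5):
    """Group temporally associated primitives (adjacent within `gap`
    seconds, or overlapping) into candidate spliced primitive groups."""
    prims = sorted(primitives, key=lambda p: p[1])   # sort by start time
    groups, cur = [], [prims[0]]
    for p in prims[1:]:
        if p[1] <= cur[-1][2] + gap:                 # adjacent or overlapping
            cur.append(p)
        else:
            groups.append(cur)
            cur = [p]
    groups.append(cur)
    return groups

# Hypothetical scene model: primitive-name sequence -> driving scene.
SCENE_MODEL = {
    ("straight", "left_crossing", "straight"): "left lane change",
    ("straight", "right_crossing", "straight"): "right lane change",
}

def match_scenes(groups, model=SCENE_MODEL):
    """Look up each candidate spliced group in the scene model; groups
    with no matching primitive group yield None."""
    return [model.get(tuple(name for name, *_ in g)) for g in groups]
```

A group whose name sequence matches a stored primitive group inherits that group's driving scene; unmatched groups are left unlabeled.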
In one embodiment, the splicing unit includes:
the first splicing subunit is used for splicing the temporally associated driving behavior primitives of the target vehicle within the preset time period; temporal association represents temporal adjacency or temporal overlap;
and the second splicing subunit is used for splicing the temporally associated driving behavior primitives of the target vehicle within the preset time period with candidate behavior primitives, of the surrounding vehicles of the target vehicle, that have temporal association and driving behavior association.
In one embodiment, the apparatus 1500 includes:
the sample module is used for acquiring a plurality of sample driving scenes; the sample driving scenes comprise driving scenes in which the sample vehicle to which a sample driving scene belongs has temporal association and driving behavior association with surrounding vehicles of the sample vehicle;
the analysis module is used for performing driving behavior primitive analysis on each sample driving scene and determining the driving behavior primitive group corresponding to each sample driving scene;
and the correspondence module is used for determining the correspondences between the driving behavior primitive groups and their driving scenes as the driving scene model.
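Under the assumption that primitive analysis yields one labeled primitive group per sample scene, building the model is a simple correspondence table; the pair layout below is an illustrative choice, not the patent's data format.

```python
def build_scene_model(sample_scenes):
    """Build the correspondence between driving behavior primitive groups
    and driving scene labels from analyzed sample scenes.

    sample_scenes: iterable of (scene_label, primitive_group) pairs,
    where primitive_group is the sequence produced by driving behavior
    primitive analysis of that sample scene."""
    return {tuple(group): label for label, group in sample_scenes}
```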
The modules in the driving scene recognition apparatus can be implemented wholly or partially by software, hardware, or a combination thereof. Each module may be embedded in, or independent of, a processor of the computer device in hardware form, or stored in the memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a terminal; its internal structure may be as shown in fig. 16. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for their operation. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless communication can be realized through Wi-Fi, a mobile cellular network, NFC (near field communication), or other technologies. The computer program, when executed by a processor, implements a driving scene recognition method. The display screen of the computer device can be a liquid crystal display or an electronic ink display, and the input device can be a touch layer covering the display screen, a key, a trackball, or a touchpad arranged on the housing of the computer device, or an external keyboard, touchpad, or mouse.
It will be appreciated by those skilled in the art that the configuration shown in fig. 16 is a block diagram of only a portion of the configuration associated with the present application and does not limit the computer device to which the present application may be applied; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In an embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In the steps implemented by the processor in this embodiment, the implementation principle and technical effect are similar to those of the driving scene recognition method described above, and are not described herein again.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
In the present embodiment, the implementation principle and the technical effect of each step implemented when the computer program is executed by the processor are similar to the principle of the driving scene recognition method, and are not described herein again.
In an embodiment, a computer program product is provided, comprising a computer program which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
In the present embodiment, the implementation principle and the technical effect of each step implemented when the computer program is executed by the processor are similar to the principle of the driving scene recognition method, and are not described herein again.
It should be noted that the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, displayed data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by a computer program instructing the relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, databases, or other media used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory can include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational and non-relational databases; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processing units, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like, without limitation.
For the sake of brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction between the combinations of these technical features, they should be considered within the scope of the present disclosure.
The above-mentioned embodiments express only several implementations of the present application, and their description is specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these all fall within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (15)

1. A driving scenario recognition method, characterized in that the method comprises:
acquiring driving data of a target vehicle in a preset time period;
determining driving behavior primitives of the target vehicle within the preset time period according to the driving data; a driving behavior primitive represents a behavior characteristic common to vehicles driving in a plurality of driving scenes;
determining a driving scene of the target vehicle within the preset time period according to the driving behavior primitives of the target vehicle within the preset time period and a preset driving scene model; the driving scene model comprises correspondences between driving behavior primitive groups and driving scenes.
2. The method of claim 1, wherein the determining a driving behavior primitive of the target vehicle within the preset time period according to the driving data comprises:
dividing the preset time period into a plurality of time windows in chronological order;
and performing behavior primitive identification on the driving data in each time window, in chronological order of the time windows, to obtain driving behavior primitives of the target vehicle within the preset time period.
3. The method of claim 2, wherein the behavior primitive identification comprises straight-going behavior primitive identification, and the driving behavior primitives comprise a straight-going behavior primitive;
the performing behavior primitive identification on the driving data in each time window to obtain the driving behavior primitive of the target vehicle in the preset time period includes:
for any time window, performing straight-going behavior primitive identification on the driving data in the time window, and if the driving data in the time window belongs to a straight-going behavior primitive, continuing performing straight-going behavior primitive identification on the driving data in the next time window of the time window;
and if the driving data in the time window do not belong to the straight-going behavior primitive, deleting the driving data at the starting moment of the time window, then re-dividing the time window, and carrying out straight-going behavior primitive identification on the driving data in the re-divided time window.
4. The method of claim 3, wherein the driving data includes a lateral distance of the target vehicle to a lane line and an included angle of travel between the target vehicle and a lane line tangent;
the straight-going behavior primitive identification of the driving data in the time window comprises the following steps:
acquiring the transverse distance corresponding to each moment in the time window and the driving included angle corresponding to each moment;
determining a distance difference between the transverse distance of each moment in the time window and the transverse distance of the starting moment of the time window according to the transverse distance corresponding to each moment in the time window;
and if the absolute value of the distance difference value at each moment in the time window is less than or equal to the straight-going behavior distance threshold value, and the absolute value of the driving included angle at each moment in the time window is less than or equal to the straight-going behavior angle threshold value, determining that the driving data in the time window belongs to the straight-going behavior elements.
5. The method according to any one of claims 2-4, wherein the driving data includes a left lateral distance of the target vehicle to a left lane line;
the performing behavior primitive identification on the driving data in each time window to obtain the driving behavior primitive of the target vehicle in the preset time period includes:
sequentially acquiring left lateral distance variations of the target vehicle between adjacent moments within the preset time period;
determining, according to the left lateral distance variations, a time period during which the target vehicle performs a crossing behavior;
and acquiring a driving behavior primitive of the target vehicle within the crossing behavior time period.
6. The method according to claim 5, wherein the determining, according to each of the left lateral distance variations, a time period during which the target vehicle performs the crossing behavior comprises:
selecting, from the left lateral distance variations, those whose absolute value is greater than a lane width threshold, obtaining candidate left lateral distance variations;
determining the earlier of the two adjacent moments corresponding to each candidate left lateral distance variation as a crossing moment; each crossing moment corresponds to one crossing behavior of the target vehicle;
and determining a crossing time period of the target vehicle for each crossing behavior according to each crossing time.
7. The method of claim 6, wherein the driving data further comprises a left driving included angle between the target vehicle and the tangent direction of the left lane line; and the determining, according to each crossing moment, the crossing time period of each crossing behavior of the target vehicle comprises:
for any crossing moment, determining the moment satisfying a preset condition before the crossing moment as the starting moment of the crossing behavior corresponding to the crossing moment, and determining the moment satisfying the preset condition after the crossing moment as the ending moment of the crossing behavior corresponding to the crossing moment; the preset condition is that the absolute value of the left lateral distance variation relative to the starting moment of the preset time period is less than or equal to a crossing behavior distance threshold, and the absolute value of the left driving included angle is less than or equal to a crossing behavior angle threshold;
and determining the time period between the starting time and the ending time as the time period of the crossing behavior of the target vehicle.
8. The method of claim 5, wherein the driving behavior primitives comprise a left cross behavior primitive and a right cross behavior primitive;
the acquiring of the driving behavior primitive of the target vehicle in the crossing behavior time period comprises:
acquiring a reference left lateral distance variation of the target vehicle in the crossing behavior time period;
if the reference left lateral distance variation is larger than a lane width threshold, determining that a driving behavior primitive of the target vehicle in the crossing behavior time period belongs to a right crossing behavior primitive;
and if the reference left lateral distance variation is smaller than the lane width threshold, determining that a driving behavior primitive of the target vehicle in the crossing behavior time period belongs to a left crossing behavior primitive.
9. The method according to any one of claims 1-4, wherein the determining the driving scene of the target vehicle within the preset time period according to the driving behavior primitive of the target vehicle within the preset time period and a preset driving scene model comprises:
splicing the driving behavior primitives within the preset time period in their chronological order to obtain a plurality of candidate spliced primitive groups of the target vehicle within the preset time period;
matching each candidate spliced primitive group against the driving behavior primitive groups in the driving scene model;
and determining the driving scene corresponding to the successfully matched driving behavior primitive group in the driving scene model as the driving scene corresponding to the candidate spliced primitive group; the driving scenes of the candidate spliced primitive groups together constitute the driving scenes of the target vehicle within the preset time period.
10. The method according to claim 9, wherein the splicing processing of the driving behavior primitives within the preset time period comprises:
splicing the temporally associated driving behavior primitives of the target vehicle within the preset time period; temporal association represents temporal adjacency or temporal overlap;
and splicing the temporally associated driving behavior primitives of the target vehicle within the preset time period with candidate behavior primitives, of the surrounding vehicles of the target vehicle, that have temporal association and driving behavior association.
11. The method of claim 1, wherein the construction process of the driving scenario model comprises:
acquiring a plurality of sample driving scenes; the sample driving scenes comprise driving scenes in which the sample vehicle to which a sample driving scene belongs has temporal association and driving behavior association with surrounding vehicles of the sample vehicle;
respectively carrying out driving behavior primitive analysis on each driving scene, and determining a driving behavior primitive group corresponding to each driving scene;
and determining the corresponding relation between each driving behavior primitive group and the corresponding driving scene as the driving scene model.
12. A driving scenario recognition apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring driving data of the target vehicle within a preset time period;
the first determining module is used for determining driving behavior primitives of the target vehicle within the preset time period according to the driving data; a driving behavior primitive represents a behavior characteristic common to vehicles driving in a plurality of driving scenes;
and the second determining module is used for determining a driving scene of the target vehicle within the preset time period according to the driving behavior primitives of the target vehicle within the preset time period and a preset driving scene model; the driving scene model comprises correspondences between driving behavior primitive groups and driving scenes.
13. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor realizes the steps of the method of any one of claims 1 to 11 when executing the computer program.
14. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 11.
15. A computer program product comprising a computer program, characterized in that the computer program realizes the steps of the method of any one of claims 1 to 11 when executed by a processor.
CN202210694071.0A 2022-06-17 2022-06-17 Driving scene recognition method, device, equipment, storage medium and program product Active CN114782926B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210694071.0A CN114782926B (en) 2022-06-17 2022-06-17 Driving scene recognition method, device, equipment, storage medium and program product


Publications (2)

Publication Number Publication Date
CN114782926A true CN114782926A (en) 2022-07-22
CN114782926B CN114782926B (en) 2022-08-26

Family

ID=82421100

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210694071.0A Active CN114782926B (en) 2022-06-17 2022-06-17 Driving scene recognition method, device, equipment, storage medium and program product

Country Status (1)

Country Link
CN (1) CN114782926B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115617880A (en) * 2022-12-19 2023-01-17 北京百度网讯科技有限公司 Mining method, device and equipment for automatic driving scene and storage medium

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1170652A1 (en) * 2000-07-06 2002-01-09 Renault Process of adaptive regulation of the distance between two mobile vehicles
US20140142867A1 (en) * 2012-11-16 2014-05-22 National University Corporation Kanazawa University Driver state estimation device
WO2019047651A1 * 2017-09-05 2019-03-14 Baidu Online Network Technology (Beijing) Co., Ltd. Driving behavior prediction method and device, and unmanned vehicle
CN109491375A (en) * 2017-09-13 2019-03-19 百度(美国)有限责任公司 The path planning based on Driving Scene for automatic driving vehicle
CN109520744A (en) * 2018-11-12 2019-03-26 百度在线网络技术(北京)有限公司 The driving performance test method and device of automatic driving vehicle
US20190265712A1 (en) * 2018-02-27 2019-08-29 Nauto, Inc. Method for determining driving policy
JP2019182425A (en) * 2018-09-07 2019-10-24 百度在線網絡技術(北京)有限公司 Control method and control device for automatic drive vehicle, and computer readable storage medium
CN111645670A * 2020-05-23 2020-09-11 Southeast University Heavy-duty vehicle roll state detection method based on support vector machine
US20210042642A1 (en) * 2019-08-08 2021-02-11 Toyota Jidosha Kabushiki Kaisha Driving action evaluating device, driving action evaluating method, and recording medium storing driving action evaluating program
CN112525551A * 2020-12-10 2021-03-19 Beijing Baidu Netcom Science Technology Co., Ltd. Drive test method, device, equipment and storage medium for automatic driving vehicle
CN113997940A * 2021-12-20 2022-02-01 Mgjia (Beijing) Technology Co., Ltd. Driving behavior monitoring method and device
WO2022056564A1 (en) * 2020-09-15 2022-03-24 Avl List Gmbh Method and system for testing a driver assistance system
WO2022078077A1 * 2020-10-15 2022-04-21 Tencent Technology (Shenzhen) Co., Ltd. Driving risk early warning method and apparatus, and computing device and storage medium
US20220121213A1 (en) * 2020-10-21 2022-04-21 Automotive Research & Testing Center Hybrid planning method in autonomous vehicle and system thereof

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
FENG GAO et al., "Automatic Virtual Test Technology for Intelligent Driving Systems Considering Both Coverage and Efficiency", IEEE Transactions on Vehicular Technology *
GUO J H et al., "Review on the research of motion control for intelligent vehicles", Journal of Automotive Safety and Energy *
LOU Yan et al., "Research and implementation of an intelligent driving behavior model for virtual autonomous vehicles", Journal of System Simulation *
LI Guofa et al., "Key technologies of driving behavior semantic parsing in intelligent vehicle decision-making", Journal of Automotive Safety and Energy *
GUO Jinghua et al., "Research on driver car-following model based on deep reinforcement learning", Automotive Engineering *
HUO Dongchao et al., "Effects of experience and lane type on drivers' lane-changing decisions", Chinese Journal of Ergonomics *


Also Published As

Publication number Publication date
CN114782926B (en) 2022-08-26

Similar Documents

Publication Publication Date Title
CN107038478B (en) Road condition prediction method and device, computer equipment and readable medium
EP3186662B1 (en) Measuring traffic speed in a road network
US20230138650A1 (en) Test method for automatic driving, and electronic device
CN108027243A (en) For operating the control error correction planing method of automatic driving vehicle
US11693417B2 (en) Generating training data using simulated environments and training machine learning models for vehicle guidance
CN109685898B (en) Layering method and device of point cloud data, computer equipment and storage medium
WO2022156553A1 (en) Method and apparatus for presenting road information
US11302065B2 (en) Systems and methods for filtering sensor data to remove data points associated with ephemeral objects
CN114782926B (en) Driving scene recognition method, device, equipment, storage medium and program product
Gao et al. A data-driven lane-changing behavior detection system based on sequence learning
CN111062405A (en) Method and device for training image recognition model and image recognition method and device
EP4202365A1 (en) Method, apparatus, and computer program product for identifying and correcting lane geometry in map data
CN114120650A (en) Method and device for generating test result
CN114360239A (en) Traffic prediction method and system for multilayer space-time traffic knowledge map reconstruction
CN114625744A (en) Updating method and device of electronic map
WO2017031382A2 (en) Movement intelligence using satellite imagery
CN114743395B (en) Signal lamp detection method, device, equipment and medium
CN116152637A (en) Evaluation method of automatic driving perception model, computer equipment and storage medium
EP3872669A1 (en) Systems and methods for reconstructing a trajectory from anonymized data
EP3872519A1 (en) Systems and methods for reconstructing a trajectory from anonymized data
CN114872718A (en) Vehicle trajectory prediction method, vehicle trajectory prediction device, computer equipment and storage medium
CN114169247A (en) Method, device and equipment for generating simulated traffic flow and computer readable storage medium
US20230194298A1 (en) Method, apparatus, and computer program product for correcting lane geometry in map data
CN116010538A (en) Traffic thermodynamic diagram generation method, device, equipment, storage medium and program product
CN115392391A (en) Training method of road type recognition model and road type recognition method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant