CN112085077A - Method and device for determining lane change of vehicle, storage medium and electronic equipment - Google Patents

Method and device for determining lane change of vehicle, storage medium and electronic equipment

Info

Publication number
CN112085077A
CN112085077A (application CN202010889856.4A)
Authority
CN
China
Prior art keywords
relative motion
track
lane change
vehicle
data set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010889856.4A
Other languages
Chinese (zh)
Other versions
CN112085077B (en)
Inventor
牟童
孟健
何光宇
程万军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neusoft Corp
Original Assignee
Neusoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neusoft Corp filed Critical Neusoft Corp
Priority to CN202010889856.4A
Publication of CN112085077A
Application granted
Publication of CN112085077B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/23 - Clustering techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 - Geographical information databases
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/044 - Recurrent networks, e.g. Hopfield networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Remote Sensing (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present disclosure relates to a method, an apparatus, a storage medium, and an electronic device for determining a lane change of a vehicle. The method, applied to a first vehicle, includes: acquiring a track data set and a relative motion data set; inputting the track data set into a pre-trained track recognition model and a pre-trained track clustering model, and determining a track lane change probability according to the track recognition result output by the track recognition model and the track clustering result output by the track clustering model; inputting the relative motion data set into a pre-trained relative motion recognition model and a pre-trained relative motion clustering model, and determining a relative motion lane change probability according to the relative motion recognition result output by the relative motion recognition model and the relative motion clustering result output by the relative motion clustering model; determining the lane change probability of a second vehicle according to the track lane change probability and the relative motion lane change probability; and determining whether the second vehicle is about to change lanes according to the lane change probability and a preset lane change threshold.

Description

Method and device for determining lane change of vehicle, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of vehicle control technologies, and in particular, to a method and an apparatus for determining a lane change of a vehicle, a storage medium, and an electronic device.
Background
With the increasing number of automobiles, driving safety is receiving more and more attention. At present, traffic accidents caused by a driver failing to notice a lane change of a vehicle ahead in time are frequent and seriously endanger the lives and property of drivers and pedestrians. Therefore, to ensure safe driving, it is necessary to accurately determine whether the vehicle ahead will change lanes.
In general, lane change determination methods can be divided into two types. The first determines whether the vehicle ahead will change lanes from the curvature between its forward direction and the lane line direction; this method is susceptible to driver misoperation, transient changes in the vehicle's motion state, and the like, so its accuracy is unstable. The second, based on a Markov model, determines whether the vehicle ahead will change lanes from operation data inside that vehicle (such as its steering wheel angle, turn signal status, clutch and brake status, and driver status). For the following vehicle, such in-vehicle operation data is difficult to acquire accurately, so this method's accuracy is not high; moreover, because the Markov model relies on the homogeneous Markov assumption and the observation independence assumption, the result depends mainly on the data at the current time point, which further reduces the accuracy.
Disclosure of Invention
The purpose of the present disclosure is to provide a method, an apparatus, a storage medium, and an electronic device for determining a lane change of a vehicle, so as to solve the problem of low accuracy of lane change determination in the prior art.
In order to achieve the above object, according to a first aspect of embodiments of the present disclosure, there is provided a method for determining a lane change of a vehicle, applied to a first vehicle, the method including:
acquiring a track data set and a relative motion data set, wherein the track data set comprises track data of a second vehicle acquired at a plurality of acquisition moments within a preset time length, the relative motion data set comprises relative motion data between the second vehicle and a first vehicle acquired at the plurality of acquisition moments within the preset time length, the plurality of acquisition moments comprise current acquisition moments, and the second vehicle is a vehicle which runs on a lane adjacent to a lane where the first vehicle is located and is positioned in front of the first vehicle;
inputting the track data set into a pre-trained track recognition model and a pre-trained track clustering model respectively, and determining track lane change probability according to a track recognition result output by the track recognition model and a track clustering result output by the track clustering model;
inputting the relative motion data set into a pre-trained relative motion recognition model and a pre-trained relative motion clustering model respectively, and determining the lane change probability of the relative motion according to a relative motion recognition result output by the relative motion recognition model and a relative motion clustering result output by the relative motion clustering model;
determining the lane change probability of the second vehicle according to the track lane change probability and the relative motion lane change probability;
and determining whether the second vehicle is to change the lane according to the lane change probability and a preset lane change threshold value.
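The last two steps of the method can be sketched in Python for illustration. The claim does not fix how the two probabilities are combined, so the weighted average, the `w_track` parameter, and the function name below are assumptions:

```python
def lane_change_decision(p_track, p_rel, w_track=0.5, threshold=0.5):
    """Fuse the track lane change probability and the relative motion
    lane change probability (a weighted average is one plausible rule;
    the claim only says the fusion uses both), then compare the result
    with the preset lane change threshold."""
    p = w_track * p_track + (1.0 - w_track) * p_rel
    return p, p >= threshold
```

For example, `lane_change_decision(0.8, 0.6)` fuses the two probabilities and, with the default 0.5 threshold, reports an impending lane change.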
Optionally, the acquiring the trajectory data set and the relative motion data set includes:
acquiring the track data and the relative motion data acquired at each acquisition time within the preset time length, wherein the track data comprises: a lateral position and a longitudinal position of the second vehicle, the relative motion data comprising: a relative longitudinal speed and a relative longitudinal distance of the first vehicle from the second vehicle;
according to the trajectory data acquired at each acquisition time, determining supplementary trajectory data corresponding to the acquisition time, wherein the supplementary trajectory data comprises: a lateral velocity and a lateral acceleration of the second vehicle;
taking the trajectory data acquired at each acquisition moment and the corresponding supplementary trajectory data as the trajectory data set;
and processing the relative motion data acquired at each acquisition moment according to a preset rule to obtain the relative motion data set.
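The supplementary track data can be illustrated with a small sketch. The patent does not state how the lateral velocity and acceleration are derived from the sampled lateral positions; first-order finite differences over the acquisition period T, as below, are one natural choice (the function name and numbers are illustrative):

```python
def supplement_trajectory(lateral_positions, T=0.1):
    """Derive (lateral velocities, lateral accelerations) from sampled
    lateral positions by first-order finite differences over the
    acquisition period T."""
    v = [(lateral_positions[i + 1] - lateral_positions[i]) / T
         for i in range(len(lateral_positions) - 1)]
    a = [(v[i + 1] - v[i]) / T for i in range(len(v) - 1)]
    return v, a

# Illustrative call: four lateral positions sampled every 0.1 s
vx, ax = supplement_trajectory([0.0, 0.05, 0.15, 0.30], T=0.1)
```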
Optionally, the respectively inputting the trajectory data set into a pre-trained trajectory recognition model and a pre-trained trajectory clustering model to determine a trajectory lane change probability according to a trajectory recognition result output by the trajectory recognition model and a trajectory clustering result output by the trajectory clustering model includes:
inputting the track data set into the track recognition model to obtain a track recognition result, and inputting the track data set into the track clustering model to obtain a track clustering result, wherein the track recognition result is used for indicating straight movement or lane change, and the track clustering result is used for indicating the category of the track data acquired at each acquisition time;
if the track identification result indicates straight going, determining that the track lane change probability is zero;
and if the track identification result indicates lane change, determining the track lane change probability according to the track clustering result.
Optionally, the trajectory recognition model is an Attention-LSTM model, and the inputting the trajectory data set into the trajectory recognition model to obtain the trajectory recognition result includes:
inputting the track data set into the Attention-LSTM model to obtain the track recognition result output by the Attention-LSTM model and the Attention value of each acquisition moment;
the determining the track lane change probability according to the track clustering result comprises the following steps:
determining the initial track lane change probability at the acquisition time according to the category to which the track data acquired at each acquisition time belongs and the corresponding relation between the preset category and the initial track lane change probability in the track clustering result;
and determining the track lane change probability according to the initial track lane change probability at each acquisition time and the attention value at each acquisition time.
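The attention-weighted combination described above might be sketched as follows. This is an assumption: the claim says the track lane change probability is determined from the per-time initial probabilities and attention values but gives no formula, and a normalized weighted sum is one plausible reading:

```python
def track_lane_change_probability(initial_probs, attention_values):
    """Combine the initial track lane change probabilities (one per
    acquisition time, from the clustering result) using the attention
    values the Attention-LSTM produced for those times. The sum is
    normalized in case the attention values do not sum to 1."""
    total = sum(attention_values)
    return sum(p * a for p, a in zip(initial_probs, attention_values)) / total
```

With initial probabilities [0.0, 1.0, 1.0] and attention values [0.2, 0.3, 0.5], the later (lane-change-like) times dominate and the result is 0.8.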
Optionally, the inputting the relative motion data set into a pre-trained relative motion recognition model and a pre-trained relative motion clustering model respectively to determine a relative motion lane change probability according to a relative motion recognition result output by the relative motion recognition model and a relative motion clustering result output by the relative motion clustering model includes:
inputting the relative motion data set into the relative motion recognition model to obtain the relative motion recognition result, and inputting the relative motion data set into the relative motion clustering model to obtain a relative motion clustering result, wherein the relative motion recognition result is used for indicating straight-going or lane changing, and the relative motion clustering result is used for indicating the category to which the relative motion data acquired at each acquisition moment belong;
if the relative motion identification result indicates straight going, determining that the relative motion lane change probability is zero;
and if the relative motion identification result indicates lane change, determining the relative motion lane change probability according to the relative motion clustering result.
Optionally, the relative motion recognition model is an Attention-GRU model, and the inputting the relative motion data set into the relative motion recognition model to obtain the relative motion recognition result includes:
inputting the relative motion data set into the Attention-GRU model to obtain the relative motion recognition result output by the Attention-GRU model and the Attention value at each acquisition moment;
the determining the relative motion lane change probability according to the relative motion clustering result comprises:
determining the initial relative motion lane change probability at the acquisition time according to the category to which the relative motion data acquired at each acquisition time belongs and the corresponding relation between the preset category and the initial relative motion lane change probability in the relative motion clustering result;
and determining the relative motion lane change probability according to the initial relative motion lane change probability at each acquisition time and the attention value at each acquisition time.
Optionally, the acquiring the trajectory data set and the relative motion data set includes:
acquiring the track data and the relative motion data acquired at each acquisition time within a specified time;
dividing the specified duration into a specified number of sliding windows, wherein the length of each sliding window is the preset duration;
taking the track data and the relative motion data collected in each sliding window as the track data set and the relative motion data set corresponding to the sliding window;
the determining whether the second vehicle is to change lanes according to the lane change probability and a preset lane change threshold value comprises the following steps:
weighting and summing the lane change probabilities determined according to the track data set and the relative motion data set corresponding to each sliding window to determine a total lane change probability, wherein the weight corresponding to each sliding window is inversely proportional to the distance between the sliding window and the current moment;
if the total lane change probability is larger than or equal to the lane change threshold value, determining that the second vehicle is about to change lanes;
and if the total lane change probability is smaller than the lane change threshold value, determining that the second vehicle keeps going straight.
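A minimal sketch of the weighted summation over sliding windows follows. The weights 1/(n-k) are one choice that is inversely proportional to each window's distance from the current moment, as the claim requires; the exact weighting is not specified there:

```python
def total_lane_change_probability(window_probs, threshold=0.5):
    """window_probs[k]: lane change probability of the k-th sliding
    window, ordered oldest first. Each weight is inversely proportional
    to the window's distance from the current moment; the result is
    normalized and compared with the preset lane change threshold."""
    n = len(window_probs)
    weights = [1.0 / (n - k) for k in range(n)]   # most recent window: 1.0
    total = sum(p * w for p, w in zip(window_probs, weights)) / sum(weights)
    return total, total >= threshold
```

For example, with per-window probabilities [0.0, 0.0, 1.0], the most recent window's lane change evidence is weighted most heavily and the total (6/11 ≈ 0.545) exceeds the default threshold.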
Optionally, the track recognition result is used for indicating straight running, left lane changing or right lane changing; after the determining whether the second vehicle is about to change lane according to the lane change probability and a preset lane change threshold, the method further includes:
if it is determined that the second vehicle is about to change lanes, counting a first number corresponding to left lane change, a second number corresponding to right lane change, and a third number corresponding to going straight among the track recognition results determined from the track data set corresponding to each sliding window, wherein the sum of the first number, the second number, and the third number equals the specified number;
if the first number is larger than the second number, determining that the second vehicle is about to change lanes to the left;
and if the first number is smaller than the second number, determining that the second vehicle is going to change lanes to the right.
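The direction vote over the sliding windows can be sketched as below; the tie case is not specified in the claim, so it is returned as undecided here:

```python
from collections import Counter

def lane_change_direction(window_results):
    """window_results: the track recognition result of each sliding
    window, e.g. 'left', 'right', or 'straight'. Per the claim, the
    direction with the larger count between left and right wins."""
    counts = Counter(window_results)
    if counts['left'] > counts['right']:
        return 'left'
    if counts['left'] < counts['right']:
        return 'right'
    return None  # tie between left and right: the claim leaves this open
```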
Optionally, after determining whether the second vehicle is about to change lane according to the lane change probability and a preset lane change threshold, the method further includes:
if it is determined that the second vehicle is about to change lanes, determining the lane change time of the second vehicle according to the lateral position, the lateral speed, and the heading angle of the second vehicle included in the track data acquired at the current acquisition time;
determining the ending track data of the second vehicle at the end of the lane change according to the lane change time and the longitudinal speed included in the track data acquired at the current acquisition time;
and fitting the lane change track of the second vehicle with a Bezier function according to the ending track data and the track data acquired at the current acquisition time.
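A sketch of the Bezier-based lane change track fitting. The patent only says a Bezier function is fitted from the current and ending track data; the cubic form and the placement of the control points below are assumptions chosen to produce the typical S-shaped lane change curve:

```python
def bezier_point(p0, p1, p2, p3, t):
    """Cubic Bezier: B(t) = (1-t)^3*p0 + 3(1-t)^2*t*p1 + 3(1-t)*t^2*p2 + t^3*p3."""
    u = 1.0 - t
    return tuple(u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

def lane_change_trajectory(start, end, n=5):
    """start/end: (lateral, longitudinal) positions at the current
    acquisition time and at the end of the lane change. Each control
    point keeps one endpoint's lateral position, giving an S-shaped
    curve; how the patent places its control points is not stated."""
    (x0, y0), (x1, y1) = start, end
    p1 = (x0, y0 + (y1 - y0) / 3.0)
    p2 = (x1, y0 + 2.0 * (y1 - y0) / 3.0)
    return [bezier_point(start, p1, p2, end, k / (n - 1)) for k in range(n)]
```

For a 3.5 m lateral offset completed over 30 m of longitudinal travel, the curve starts at the current position, ends at the ending position, and crosses half the lateral offset at its midpoint.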
Optionally, the trajectory recognition model is trained by:
obtaining a first sample input set and a first sample output set, wherein each first sample input in the first sample input set comprises a plurality of sample track data, each first sample output set comprises a first sample output corresponding to each first sample input, each first sample output comprises a sample track identification result marked by the corresponding plurality of sample track data, and the sample track identification result is used for indicating straight running or lane change;
taking the first sample input set as the input of the track recognition model, and taking the first sample output set as the output of the track recognition model, so as to train the track recognition model;
the relative motion recognition model is trained by the following steps:
acquiring a second sample input set and a second sample output set, wherein each second sample input in the second sample input set comprises a plurality of sample relative motion data, each second sample output in the second sample output set comprises a second sample output corresponding to each second sample input, each second sample output comprises a sample relative motion identification result marked by the corresponding plurality of sample relative motion data, and the sample relative motion identification result is used for indicating straight running or lane change;
and taking the second sample input set as the input of the relative motion recognition model, and taking the second sample output set as the output of the relative motion recognition model, so as to train the relative motion recognition model.
Optionally, the trajectory recognition model is an LSTM model, and the relative motion recognition model is a GRU model;
the taking the first sample input set as the input of the trajectory recognition model and the first sample output set as the output of the trajectory recognition model to train the trajectory recognition model includes:
inputting the first sample input set into an initial LSTM model according to a first input weight, and taking the first sample output set as the output of the initial LSTM model to train the initial LSTM model;
updating the first input weight according to the trained initial LSTM model;
repeatedly executing the steps of inputting the first sample input set into the initial LSTM model according to the first input weight, taking the first sample output set as the output of the initial LSTM model to train the initial LSTM model, and updating the first input weight according to the trained initial LSTM model; the initial LSTM model and the first input weight obtained after a preset number of iterations are taken as the track recognition model and the input weight corresponding to the track recognition model;
the training the relative motion recognition model by using the second sample input set as the input of the relative motion recognition model and the second sample output set as the output of the relative motion recognition model comprises:
inputting the second sample input set into the initial GRU model according to a second input weight, and taking the second sample output set as the output of the initial GRU model to train the initial GRU model;
updating the second input weight according to the trained initial GRU model;
repeatedly executing the steps of inputting the second sample input set into the initial GRU model according to the second input weight, taking the second sample output set as the output of the initial GRU model to train the initial GRU model, and updating the second input weight according to the trained initial GRU model; the initial GRU model and the second input weight obtained after the preset number of iterations are taken as the relative motion recognition model and the input weight corresponding to the relative motion recognition model.
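The alternating train-then-update procedure above can be expressed as a skeleton. The `train_step` and `derive_weight` callables stand in for the LSTM/GRU training and the attention-based input-weight update, which the patent does not detail; the test below exercises the loop with dummy callables:

```python
def train_with_input_weight(train_step, derive_weight, sample_inputs,
                            sample_outputs, init_weight, iterations=3):
    """Skeleton of the alternating scheme in the claim: train the model
    with the current input weight, derive an updated input weight from
    the trained model, and repeat for a preset number of iterations.
    Returns the final model and its corresponding input weight."""
    weight = init_weight
    model = None
    for _ in range(iterations):
        model = train_step(sample_inputs, sample_outputs, weight)
        weight = derive_weight(model)
    return model, weight
```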
According to a second aspect of the embodiments of the present disclosure, there is provided a vehicle lane change determination apparatus applied to a first vehicle, the apparatus comprising:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a track data set and a relative motion data set, the track data set comprises track data of a second vehicle acquired at a plurality of acquisition moments within a preset time length, the relative motion data set comprises relative motion data between the second vehicle and a first vehicle acquired at the plurality of acquisition moments within the preset time length, the plurality of acquisition moments comprise current acquisition moments, and the second vehicle is a vehicle which runs on a lane adjacent to a lane where the first vehicle is located and is positioned in front of the first vehicle;
the first processing module is used for respectively inputting the track data set into a pre-trained track recognition model and a pre-trained track clustering model so as to determine track lane change probability according to a track recognition result output by the track recognition model and a track clustering result output by the track clustering model;
the second processing module is used for respectively inputting the relative motion data set into a pre-trained relative motion recognition model and a pre-trained relative motion clustering model so as to determine the lane change probability of the relative motion according to a relative motion recognition result output by the relative motion recognition model and a relative motion clustering result output by the relative motion clustering model;
the first determining module is used for determining the lane change probability of the second vehicle according to the track lane change probability and the relative motion lane change probability;
and the second determining module is used for determining whether the second vehicle is to change the lane according to the lane change probability and a preset lane change threshold.
According to a third aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of the first aspect of embodiments of the present disclosure.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method of the first aspect of an embodiment of the disclosure.
Through the above technical solution, the first vehicle first acquires the track data set and the relative motion data set collected at a plurality of acquisition times within a preset duration. It then inputs the track data set into the track recognition model and the track clustering model to determine the track lane change probability from the track recognition result and the track clustering result, and inputs the relative motion data set into the relative motion recognition model and the relative motion clustering model to determine the relative motion lane change probability from the relative motion recognition result and the relative motion clustering result. Finally, it determines whether the second vehicle is about to change lanes from the track lane change probability and the relative motion lane change probability, where the second vehicle is a vehicle ahead of the first vehicle in an adjacent lane. Because the first vehicle collects the track data set and the relative motion data set itself, operation data inside the second vehicle does not need to be obtained, which improves data accuracy. Determining whether the second vehicle is about to change lanes from both data sets takes into account the spatial and temporal continuity of the second vehicle during driving, which improves the accuracy of the lane change determination.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
FIG. 1 is a flow chart illustrating a method of determining a lane change for a vehicle in accordance with an exemplary embodiment;
FIG. 2 is a flow chart illustrating another method of determining a lane change for a vehicle in accordance with an exemplary embodiment;
FIG. 3 is a flow chart illustrating another method of determining a vehicle lane change in accordance with an exemplary embodiment;
FIG. 4 is a flow chart illustrating another method of determining a vehicle lane change in accordance with an exemplary embodiment;
FIG. 5 is a flow chart illustrating another method of determining a vehicle lane change in accordance with an exemplary embodiment;
FIG. 6 is a flow chart illustrating another method of determining a vehicle lane change in accordance with an exemplary embodiment;
FIG. 7 is a flow chart illustrating another method of determining a vehicle lane change in accordance with an exemplary embodiment;
FIG. 8 is a scatter plot of a linear regression shown in accordance with an exemplary embodiment;
FIG. 9 is a schematic diagram illustrating a lane-change trajectory in accordance with an exemplary embodiment;
FIG. 10 is a block diagram illustrating a vehicle lane change determination device in accordance with an exemplary embodiment;
FIG. 11 is a block diagram illustrating another vehicle lane change determination device in accordance with an exemplary embodiment;
FIG. 12 is a block diagram illustrating another vehicle lane change determination device in accordance with an exemplary embodiment;
FIG. 13 is a block diagram illustrating another vehicle lane change determination device in accordance with an exemplary embodiment;
FIG. 14 is a block diagram illustrating another vehicle lane change determination device in accordance with an exemplary embodiment;
FIG. 15 is a block diagram illustrating another vehicle lane change determination device in accordance with an exemplary embodiment;
FIG. 16 is a block diagram illustrating another vehicle lane change determination device in accordance with an exemplary embodiment;
FIG. 17 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Before describing the method, apparatus, storage medium, and electronic device for determining a lane change of a vehicle provided by the present disclosure, an application scenario of the embodiments of the present disclosure is first described. The application scenario includes a first vehicle and a second vehicle traveling on a road, where the first vehicle travels in a first lane, the second vehicle travels in a second lane adjacent to the first lane, and the second vehicle is located ahead of the first vehicle. Furthermore, the road may further include a third vehicle located ahead of the first vehicle and traveling in the first lane, and a fourth vehicle located ahead of the second vehicle and traveling in the second lane. The execution subject of the embodiments provided by the present disclosure is the first vehicle, on which a data acquisition device may be arranged to collect the trajectory data and the relative motion data mentioned below. The data acquisition device may include, for example: GNSS (Global Navigation Satellite System) devices, inertial navigation devices, cameras, infrared devices, radars, and the like, which are not specifically limited in this disclosure.
FIG. 1 is a flow chart illustrating a method for determining a lane change for a vehicle, as shown in FIG. 1, applied to a first vehicle, including the steps of:
step 101, a track data set and a relative motion data set are obtained, wherein the track data set comprises track data of a second vehicle collected at a plurality of collection times within a preset time length, the relative motion data set comprises relative motion data between the second vehicle and a first vehicle collected at the plurality of collection times within the preset time length, the plurality of collection times comprise current collection times, and the second vehicle is a vehicle which runs on a lane adjacent to the lane where the first vehicle is located and is located in front of the first vehicle.
For example, while the first vehicle is traveling, the trajectory data P_i of the second vehicle and the relative motion data E_i between the second vehicle and the first vehicle may be acquired at each acquisition time according to a preset acquisition period T, where i denotes the sequence number of the acquisition time; that is, the time difference between the i-th acquisition time and the (i+1)-th acquisition time is T. P_i can be understood as data reflecting the travel trajectory of the second vehicle and may include a plurality of trajectory features, such as the lateral position x_i, longitudinal position y_i, velocity v_i, and acceleration a_i of the second vehicle. E_i can be understood as data reflecting the relative motion between the first vehicle and the second vehicle and may include a plurality of relative motion features, such as the relative longitudinal speed Δv_i and relative longitudinal distance Δy_i of the first vehicle and the second vehicle. The trajectory data and the relative motion data can be acquired directly by the data acquisition device provided on the first vehicle, so there is no need to obtain operating data from inside the second vehicle (for example, its steering wheel angle, turn-signal status, clutch/brake status, driver state, etc.), which ensures the accuracy of the trajectory data and the relative motion data. Then, the trajectory data acquired at the multiple acquisition times within the preset duration can be used as the trajectory data set, and the relative motion data acquired at those acquisition times can be used as the relative motion data set. The preset duration may be understood as a time range that contains the current acquisition time and includes a plurality of acquisition times.
For example, if the preset duration is 30s and the acquisition period is 0.1s, the trajectory data set is {P_1, P_2, …, P_i, …, P_300}, containing 300 trajectory data, and the relative motion data set is {E_1, E_2, …, E_i, …, E_300}, containing 300 relative motion data.
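As a minimal illustration of how such data sets might be maintained on the first vehicle, the following Python sketch keeps the most recent N samples in rolling buffers; the field layout, constants, and callback name are illustrative assumptions, not part of the disclosure.

```python
from collections import deque

# Rolling buffers for the data sets, assuming the 0.1 s acquisition period
# and 30 s preset duration from the example above (field layout is illustrative).
PERIOD_S = 0.1
WINDOW_S = 30.0
N = int(WINDOW_S / PERIOD_S)  # 300 samples per data set

trajectory_set = deque(maxlen=N)       # holds P_i = (x_i, y_i, v_i, a_i)
relative_motion_set = deque(maxlen=N)  # holds E_i = (dv_i, dy_i)

def on_acquisition(p_i, e_i):
    """Called once per acquisition period; old samples fall out automatically."""
    trajectory_set.append(p_i)
    relative_motion_set.append(e_i)

# Simulate 400 acquisition cycles; only the latest 300 are retained.
for i in range(400):
    on_acquisition((float(i), 0.0, 20.0, 0.0), (0.5, 15.0))
```

Because `deque(maxlen=N)` discards the oldest sample on each append past N, the buffers always cover exactly the preset duration ending at the current acquisition time.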
Step 102, the trajectory data set is input into a pre-trained trajectory recognition model and a pre-trained trajectory clustering model respectively, and the trajectory lane change probability is determined according to the trajectory recognition result output by the trajectory recognition model and the trajectory clustering result output by the trajectory clustering model.
Step 103, the relative motion data set is input into a pre-trained relative motion recognition model and a pre-trained relative motion clustering model respectively, and the relative motion lane change probability is determined according to the relative motion recognition result output by the relative motion recognition model and the relative motion clustering result output by the relative motion clustering model.
For example, the trajectory recognition model and the trajectory clustering model may be trained in advance for the trajectory data set. The trajectory recognition model can recognize whether the trajectory of the second vehicle reflected by the input trajectory data set belongs to lane change or straight movement; that is, the trajectory recognition result output by the trajectory recognition model indicates whether the trajectory data set belongs to lane change or straight movement. The trajectory clustering model can cluster the input trajectory data set and determine which category each trajectory data belongs to; that is, the trajectory clustering result output by the trajectory clustering model indicates the category to which each trajectory data belongs. Therefore, the trajectory data set can be input into the trajectory recognition model and the trajectory clustering model respectively to obtain the trajectory recognition result and the trajectory clustering result, and the trajectory lane change probability is then determined from them. The trajectory lane change probability may be understood as the probability, determined from the trajectory data set, that the second vehicle will change lanes. In one implementation, if the trajectory recognition result indicates going straight, the trajectory lane change probability may be directly determined to be zero; if it indicates a lane change, the trajectory lane change probability may be determined according to the trajectory clustering result. In another implementation, a first trajectory lane change probability may be determined from the trajectory recognition result, a second trajectory lane change probability from the trajectory clustering result, and their product used as the trajectory lane change probability.
Similarly, a relative motion recognition model and a relative motion clustering model may be trained in advance for the relative motion data set, where the relative motion recognition model is capable of recognizing whether the relative motion between the first vehicle and the second vehicle reflected by the input relative motion data set belongs to lane change or straight movement, that is, the relative motion recognition result output by the relative motion recognition model is used to indicate that the relative motion data set belongs to lane change or straight movement. The relative motion clustering model can cluster the input relative motion data sets and determine which category each relative motion data belongs to, i.e. the relative motion clustering result output by the relative motion clustering model is used for indicating the category each relative motion data belongs to. Therefore, the relative motion data set can be respectively input into the relative motion recognition model and the relative motion clustering model to obtain a relative motion recognition result and a relative motion clustering result, and then the relative motion lane change probability is determined according to the relative motion recognition result and the relative motion clustering result. The relative motion lane change probability may be understood as a probability that a second vehicle will change lanes determined from the relative motion data set. In one implementation, if the relative motion recognition result indicates a straight line, the relative motion lane change probability may be directly determined to be zero, and if the relative motion recognition result indicates a lane change, the relative motion lane change probability may be determined according to the relative motion clustering result. 
In another implementation manner, a first relative motion lane change probability may be determined according to the relative motion recognition result, then a second relative motion lane change probability may be determined according to the relative motion clustering result, and then a product of the first relative motion lane change probability and the second relative motion lane change probability may be used as the relative motion lane change probability.
Step 104, the lane change probability of the second vehicle is determined according to the trajectory lane change probability and the relative motion lane change probability.
Step 105, whether the second vehicle is about to change lanes is determined according to the lane change probability and a preset lane change threshold.
For example, after the trajectory lane change probability and the relative motion lane change probability are determined, they may be weighted and summed to obtain the lane change probability of the second vehicle. For example, if the lane change probability is S, the trajectory lane change probability is S_P, and the relative motion lane change probability is S_E, then S = α·S_P + β·S_E may be used, where α is the weight corresponding to the trajectory lane change probability (which may be set to 0.5) and β is the weight corresponding to the relative motion lane change probability (which may be set to 0.5). Finally, the lane change probability is compared with a preset lane change threshold to determine whether the second vehicle is about to change lanes. For example, if the lane change threshold is 0.5 and the lane change probability is greater than 0.5, it is determined that the second vehicle is about to change lanes; if the lane change probability is less than or equal to 0.5, it is determined that the second vehicle will remain going straight. After determining that the second vehicle is about to change lanes, the first vehicle may also issue a prompt message to inform the driver of the first vehicle, so that the driver can make a decision in advance to avoid the second vehicle.
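The weighted fusion and threshold comparison of steps 104 and 105 can be sketched as follows; the function names are illustrative.

```python
def lane_change_probability(s_p, s_e, alpha=0.5, beta=0.5):
    """S = alpha * S_P + beta * S_E, the weighted sum described above."""
    return alpha * s_p + beta * s_e

def will_change_lane(s, threshold=0.5):
    """The second vehicle is judged about to change lanes if S exceeds the threshold."""
    return s > threshold
```

For instance, with S_P = 0.8 and S_E = 0.6 the fused probability is 0.7, which exceeds the 0.5 threshold, so a lane change would be predicted.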
Because the trajectory data set and the relative motion data set include data (trajectory data and relative motion data) acquired at a plurality of acquisition times, they can reflect both the travel trajectory of the second vehicle and the relative motion between the first vehicle and the second vehicle, comprehensively taking into account the spatial and temporal continuity of the second vehicle's travel. Therefore, combining the trajectory lane change probability determined from the trajectory data set with the relative motion lane change probability determined from the relative motion data set makes it possible to determine more accurately whether the second vehicle is about to change lanes, improving the accuracy of vehicle lane change determination.
In summary, in the present disclosure, the first vehicle first obtains a trajectory data set and a relative motion data set acquired at a plurality of acquisition times within a preset duration. The trajectory data set is then input into the trajectory recognition model and the trajectory clustering model to determine the trajectory lane change probability from the trajectory recognition result and the trajectory clustering result, and the relative motion data set is input into the relative motion recognition model and the relative motion clustering model to determine the relative motion lane change probability from the relative motion recognition result and the relative motion clustering result. Finally, whether the second vehicle is about to change lanes is determined according to the trajectory lane change probability and the relative motion lane change probability, where the second vehicle is a vehicle ahead of the first vehicle in an adjacent lane. In the present disclosure, the first vehicle collects the trajectory data set and the relative motion data set itself, without needing to obtain operating data from inside the second vehicle, which improves the accuracy of the data; determining whether the second vehicle is about to change lanes from both data sets comprehensively considers the spatial and temporal continuity of the second vehicle's travel, improving the accuracy of vehicle lane change determination.
Fig. 2 is a flowchart illustrating another method for determining a lane change of a vehicle according to an exemplary embodiment, and as shown in fig. 2, the implementation of step 101 may include:
step 1011, acquiring trajectory data and relative motion data acquired at each acquisition time within a preset time length, wherein the trajectory data comprises: a lateral position and a longitudinal position of the second vehicle, the relative motion data comprising: a relative longitudinal speed and a relative longitudinal distance of the first vehicle from the second vehicle.
Step 1012, determining supplementary trajectory data corresponding to each acquisition time according to the trajectory data acquired at each acquisition time, where the supplementary trajectory data includes: lateral velocity and lateral acceleration of the second vehicle.
Step 1013, the trajectory data acquired at each acquisition time and the corresponding supplementary trajectory data are used as the trajectory data set.
In one application scenario, the trajectory data set may be obtained by acquiring trajectory data at each acquisition time within the preset duration and then supplementing the trajectory data for each acquisition time based on what was acquired. For example, the trajectory data acquired at an acquisition time may include two trajectory features: the lateral position x_i and the longitudinal position y_i of the second vehicle. On this basis, the trajectory data may also include the velocity v_i and the acceleration a_i of the second vehicle, so that the trajectory data includes four trajectory features in total: x_i, y_i, v_i, a_i.
Further, the lateral velocity v_xi = (x_i - x_(i-1))/T and the lateral acceleration a_xi = (v_xi - v_x(i-1))/T of the second vehicle at the acquisition time may be determined from the trajectory data at that acquisition time and used as the supplementary trajectory data corresponding to the acquisition time. On this basis, the longitudinal velocity v_yi = (y_i - y_(i-1))/T, the longitudinal acceleration a_yi = (v_yi - v_y(i-1))/T, and the heading angle Ang_i = tan⁻¹((y_i - y_(i-1))/(x_i - x_(i-1))) of the second vehicle at the acquisition time can also be determined, and v_yi, a_yi, and Ang_i can be added to the supplementary trajectory data corresponding to the acquisition time. Thus, the supplementary trajectory data includes five trajectory features in total: v_xi, v_yi, a_xi, a_yi, Ang_i. The trajectory data acquired at each acquisition time and the corresponding supplementary trajectory data are then spliced into the total trajectory data for that acquisition time (including nine trajectory features), and the total trajectory data are used as the trajectory data set. That is, the trajectory data set is {P_1', P_2', …, P_i', …, P_N'}, where N is the number of acquisition times within the preset duration, and the total trajectory data at the i-th acquisition time is P_i' = {x_i, y_i, v_i, a_i, v_xi, v_yi, a_xi, a_yi, Ang_i}. Further, the trajectory features in each total trajectory data may be subjected to maximum-minimum normalization to map each trajectory feature into the [0, 1] interval. Taking the trajectory feature x_i as an example, the normalized value is x_i* = (x_i - x_min)/(x_max - x_min), where x_max and x_min are the largest and smallest lateral positions in the trajectory data set.
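A sketch of the supplementary-feature derivation and maximum-minimum normalization of step 1012 might look as follows. Here `atan2` is used for the heading angle instead of a bare arctangent to keep the correct quadrant, a minor implementation choice; since the first sample has no predecessor, the derived features start one index later.

```python
import math

def supplement(xs, ys, T):
    """Derive lateral/longitudinal velocities, accelerations and heading angles
    from successive positions sampled every T seconds."""
    vx = [(xs[i] - xs[i - 1]) / T for i in range(1, len(xs))]
    vy = [(ys[i] - ys[i - 1]) / T for i in range(1, len(ys))]
    ax = [(vx[i] - vx[i - 1]) / T for i in range(1, len(vx))]
    ay = [(vy[i] - vy[i - 1]) / T for i in range(1, len(vy))]
    ang = [math.atan2(ys[i] - ys[i - 1], xs[i] - xs[i - 1])
           for i in range(1, len(ys))]
    return vx, vy, ax, ay, ang

def min_max_normalize(values):
    """Map a trajectory feature into the [0, 1] interval."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]
```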
Step 1014, the relative motion data acquired at each acquisition time are processed according to a preset rule to obtain the relative motion data set.
Correspondingly, the relative motion data set may be obtained by acquiring relative motion data at each acquisition time within the preset duration. Besides the relative longitudinal speed Δv_1i and relative longitudinal distance Δy_1i of the first vehicle and the second vehicle, the relative motion data may also include the relative longitudinal speed Δv_2i and relative longitudinal distance Δy_2i of the first vehicle and the third vehicle, and the relative longitudinal speed Δv_3i and relative longitudinal distance Δy_3i of the second vehicle and the fourth vehicle, for six relative motion features in total. The relative motion data acquired at the i-th acquisition time is then E_i = {Δy_1i, Δy_2i, Δy_3i, Δv_1i, Δv_2i, Δv_3i}. The relative motion data acquired at each acquisition time are then processed according to the preset rule to obtain the relative motion data set {E_1, E_2, …, E_i, …, E_N}, where N is the number of acquisition times within the preset duration.
The preset rule may include at least one of the following. Rule 1: if a relative motion feature acquired at an acquisition time exceeds the normal range corresponding to that feature, the feature acquired at that acquisition time is deleted. Rule 2: if the difference between a relative motion feature acquired at an acquisition time and the same feature acquired at the preceding acquisition time exceeds the fluctuation range corresponding to that feature (i.e., the value is a jump), the feature acquired at that acquisition time is deleted. Rule 3: if a relative motion feature is missing at an acquisition time (either because it was not acquired, or because it was deleted under Rule 1 or Rule 2), the feature at that acquisition time is filled in by fitting a Gaussian distribution. Similarly, maximum-minimum normalization may be performed on the relative motion features in each relative motion data to map each feature into the [0, 1] interval.
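One way to realize the three preset rules for a single relative motion feature is sketched below. The Gaussian parameters are assumed to be given (in practice they would be fitted from the retained values), and all names are illustrative.

```python
import random

def clean_feature(series, valid_range, max_jump, mu, sigma, seed=0):
    """Apply the preset rules to one relative motion feature:
    Rule 1 drops values outside the normal range, Rule 2 drops jumps
    relative to the previously retained value, Rule 3 fills missing
    values by sampling from a Gaussian fit (mu, sigma assumed known)."""
    rng = random.Random(seed)
    lo, hi = valid_range
    cleaned, prev = [], None
    for v in series:
        if v is not None and not (lo <= v <= hi):
            v = None                      # Rule 1: out of normal range
        if v is not None and prev is not None and abs(v - prev) > max_jump:
            v = None                      # Rule 2: sudden jump
        if v is None:
            v = rng.gauss(mu, sigma)      # Rule 3: Gaussian fill-in
        cleaned.append(v)
        prev = v
    return cleaned
```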
FIG. 3 is a flowchart illustrating another method of determining a lane change of a vehicle according to an exemplary embodiment. As shown in FIG. 3, step 102 may include:
and step 1021, inputting the track data set into the track recognition model to obtain a track recognition result, and inputting the track data set into the track clustering model to obtain a track clustering result, wherein the track recognition result is used for indicating straight-going or lane changing, and the track clustering result is used for indicating the category of the track data acquired at each acquisition moment.
For example, the trajectory recognition model may be an Attention-LSTM (Attention Long Short-Term Memory) model, which can be understood as an LSTM whose hidden layer is augmented with attention weights. After the trajectory data set is input into the Attention-LSTM model, the trajectory recognition result output by the model and the attention value at each acquisition time can be obtained. Denote the trajectory data set as {P_1, P_2, …, P_s, …, P_t}, where t is the number of acquisition times within the preset duration. The attention value at each acquisition time, which can be understood as the attention between that acquisition time and the current acquisition time (i.e., the t-th acquisition time), can be obtained by the following formulas:

e_t = (h_1, h_2, …, h_s, …, h_t)^T · h_t

a_ts = exp(e_ts) / Σ_{k=1}^{t} exp(e_tk)

where e_t denotes the attention values between the current acquisition time and all acquisition times, a_ts is the attention value for the s-th acquisition time output by the Attention-LSTM model, obtained by normalizing e_t with the softmax function, e_ts denotes the s-th element of e_t, and h_s denotes the hidden-layer output of the Attention-LSTM model at the s-th acquisition time under the self-attention mechanism.
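The dot-product attention and softmax normalization described above can be sketched numerically as follows, in plain Python and independent of any particular LSTM implementation:

```python
import math

def attention_weights(hidden_states):
    """Given hidden-layer outputs h_1..h_t (one vector per acquisition time),
    compute e_ts = h_s . h_t and normalize with softmax to get a_t1..a_tt."""
    h_t = hidden_states[-1]
    e = [sum(a * b for a, b in zip(h_s, h_t)) for h_s in hidden_states]
    m = max(e)                              # subtract max for numerical stability
    exp_e = [math.exp(v - m) for v in e]
    z = sum(exp_e)
    return [v / z for v in exp_e]
```

Acquisition times whose hidden state aligns with the current one receive larger weights; the weights always sum to one.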
The trajectory clustering model may be a Kohonen neural network, and after the trajectory data set is input into the Kohonen neural network, the Kohonen neural network can cluster the trajectory data in the trajectory data set, so as to determine the category to which each trajectory data belongs according to the clustering result. The Kohonen neural network includes an input layer and a competition layer, the number of clustering neurons in the competition layer may be preset, for example, 10, and then the Kohonen neural network may classify the trajectory data into 10 categories.
Step 1022, if the trajectory recognition result indicates going straight, the trajectory lane change probability is determined to be zero.
Step 1023, if the trajectory recognition result indicates a lane change, the trajectory lane change probability is determined according to the trajectory clustering result.
For example, if the track recognition result indicates straight-going, the track lane change probability may be directly determined to be zero, and if the track recognition result indicates lane change, the track lane change probability may be further determined according to the track clustering result. Specifically, the trajectory lane change probability may be determined by:
Step 1) determining the initial trajectory lane change probability at each acquisition time according to the category, indicated by the trajectory clustering result, to which the trajectory data acquired at that acquisition time belong, and a preset correspondence between categories and initial trajectory lane change probabilities.
For example, the initial trajectory lane change probability at each acquisition time may be determined according to the category to which the trajectory data acquired at each acquisition time belongs and the corresponding relationship between the preset category and the initial trajectory lane change probability, and the initial trajectory lane change probability may be understood as the probability determined according to the trajectory clustering result. Wherein, the corresponding relation between the category and the initial trajectory lane change probability can be determined when the Kohonen neural network is established. For example, a large amount of track data that has been marked as lane-change, and a large amount of track data that has been marked as straight-going, may be input into the Kohonen neural network, resulting in the number of track data that are clustered into M categories that are marked as lane-change, and the number of track data that are clustered into M categories that are marked as straight-going, as shown in table 1:
TABLE 1

Category:      1     2     …     I     …     M
Lane change:  n_1   n_2    …    n_I    …    n_M
Straight:     m_1   m_2    …    m_I    …    m_M

where n_I denotes the number of trajectory data labeled as lane change that are clustered into category I, and m_I denotes the number of trajectory data labeled as straight that are clustered into category I. The correspondence between categories and initial trajectory lane change probabilities can then be as shown in Table 2:
TABLE 2

Category:       1     2     …     I     …     M
Probability:   q_1   q_2    …    q_I    …    q_M

where q_I denotes the initial trajectory lane change probability corresponding to category I, q_I = n_I/(n_I + m_I). Taking the trajectory data set {P_1, P_2, …, P_s, …, P_t} as an example, where t is the number of acquisition times within the preset duration, the trajectory data set is input into the Kohonen neural network and the resulting trajectory clustering result is {O_1, O_2, …, O_s, …, O_t}, where O_s denotes the category to which the trajectory data at the s-th acquisition time belong. Then, according to Table 2, the initial trajectory lane change probabilities at the acquisition times are determined: {S_P1, S_P2, …, S_Ps, …, S_Pt}, where S_Ps denotes the initial trajectory lane change probability at the s-th acquisition time.
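The construction of the category-to-probability table and the lookup of initial probabilities can be sketched as follows (0-based category indices; names are illustrative):

```python
def build_initial_probabilities(n_counts, m_counts):
    """q_I = n_I / (n_I + m_I): the fraction of samples in each cluster
    category that carried a lane-change label (Table 2)."""
    return [n / (n + m) for n, m in zip(n_counts, m_counts)]

def lookup_initial_probabilities(cluster_result, q):
    """Map a clustering result O_1..O_t to initial probabilities S_P1..S_Pt."""
    return [q[o] for o in cluster_result]
```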
Step 2) determining the trajectory lane change probability according to the initial trajectory lane change probability at each acquisition time and the attention value at each acquisition time.
For example, the products of the initial trajectory lane change probability and the attention value at each acquisition time may be summed to obtain the trajectory lane change probability, that is:

S_P = Σ_{s=1}^{t} a_ts · S_Ps
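The attention-weighted sum reduces to a single line of code:

```python
def trajectory_lane_change_probability(attention, initial_probs):
    """S_P = sum over s of a_ts * S_Ps, combining the clustering-derived
    initial probabilities with the attention values from the recognition model."""
    return sum(a * p for a, p in zip(attention, initial_probs))
```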
Fig. 4 is a flowchart illustrating another method for determining a lane change of a vehicle according to an exemplary embodiment, and as shown in fig. 4, the implementation of step 103 may include:
Step 1031, the relative motion data set is input into the relative motion recognition model to obtain a relative motion recognition result, and into the relative motion clustering model to obtain a relative motion clustering result, where the relative motion recognition result indicates going straight or changing lanes, and the relative motion clustering result indicates the category to which the relative motion data acquired at each acquisition time belong.
For example, the relative motion recognition model may be an Attention-GRU (Attention Gated Recurrent Unit) model, which can be understood as a GRU whose hidden layer is augmented with attention weights. After the relative motion data set is input into the Attention-GRU model, the relative motion recognition result output by the model and the attention value at each acquisition time can be obtained. Denote the relative motion data set as {E_1, E_2, …, E_s, …, E_t}, where t is the number of acquisition times within the preset duration. The attention value at each acquisition time, which can be understood as the attention between that acquisition time and the current acquisition time (i.e., the t-th acquisition time), can be obtained by the following formulas:

e_t = (h_1, h_2, …, h_s, …, h_t)^T · h_t

a_ts = exp(e_ts) / Σ_{k=1}^{t} exp(e_tk)

where e_t denotes the attention values between the current acquisition time and all acquisition times, a_ts is the attention value for the s-th acquisition time output by the Attention-GRU model, obtained by normalizing e_t with the softmax function, e_ts denotes the s-th element of e_t, and h_s denotes the hidden-layer output of the Attention-GRU model at the s-th acquisition time under the self-attention mechanism.
The relative motion clustering model may also be a Kohonen neural network, and after the relative motion data set is input into the Kohonen neural network, the Kohonen neural network can cluster the relative motion data in the relative motion data set, so as to determine the category to which each piece of relative motion data belongs according to the clustering result. The Kohonen neural network includes an input layer and a competition layer, the number of clustering neurons in the competition layer may be preset, for example, 10, and then the Kohonen neural network may classify the relative motion data into 10 categories.
Step 1032, if the relative motion recognition result indicates going straight, the relative motion lane change probability is determined to be zero.
Step 1033, if the relative motion recognition result indicates a lane change, the relative motion lane change probability is determined according to the relative motion clustering result.
For example, if the relative motion recognition result indicates straight-going, the relative motion lane change probability may be directly determined to be zero, and if the relative motion recognition result indicates lane change, the relative motion lane change probability may be further determined according to the relative motion clustering result. Specifically, the relative motion lane change probability may be determined by:
and 3) determining the initial relative motion lane change probability at the acquisition time according to the category to which the relative motion data acquired at each acquisition time belongs and the corresponding relation between the preset category and the initial relative motion lane change probability in the relative motion clustering result.
For example, the initial relative motion lane change probability at each acquisition time may be determined according to the category to which the relative motion data acquired at each acquisition time belongs and the corresponding relationship between the preset category and the initial relative motion lane change probability, where the initial relative motion lane change probability may be understood as the probability determined according to the relative motion clustering result. The corresponding relationship between the category and the initial relative motion lane change probability is the same as the establishment method of the corresponding relationship between the category and the initial trajectory lane change probability, and is not described herein again.
Taking the relative motion data set {E_1, E_2, …, E_s, …, E_t} as an example, where t is the number of acquisition times within the preset duration, the relative motion data set is input into the Kohonen neural network, and the resulting relative motion clustering result is {Q_1, Q_2, …, Q_s, …, Q_t}, where Q_s denotes the category to which the relative motion data at the s-th acquisition time belong. Then, according to the correspondence between categories and initial relative motion lane change probabilities, the initial relative motion lane change probabilities at the acquisition times are determined: {S_E1, S_E2, …, S_Es, …, S_Et}, where S_Es denotes the initial relative motion lane change probability at the s-th acquisition time.
Step 4) determining the relative motion lane change probability according to the initial relative motion lane change probability at each acquisition time and the attention value at each acquisition time.
For example, the products of the initial relative motion lane change probability and the attention value at each acquisition time may be summed to obtain the relative motion lane change probability, that is:

S_E = Σ_{s=1}^{t} a_ts · S_Es
FIG. 5 is a flow chart illustrating another method of determining a lane change for a vehicle, according to an exemplary embodiment, as shown in FIG. 5, step 101 may be accomplished by:
step 1015, acquiring the trajectory data and the relative motion data acquired at each acquisition time within the specified duration.
Step 1016, dividing the specified duration into a specified number of sliding windows, where the length of each sliding window is a preset duration.
Step 1017, the trajectory data and the relative motion data collected in each sliding window are used as the trajectory data set and the relative motion data set corresponding to the sliding window.
In another implementation, multiple groups of trajectory data sets and relative motion data sets may be obtained. Steps 102 to 104 are then performed on each group of trajectory data set and relative motion data set to obtain the lane change probability determined from each group; the total lane change probability is then obtained and compared with the lane change threshold to determine whether the second vehicle is about to change lanes.
Specifically, the trajectory data and the relative motion data acquired at each acquisition time within a specified time period may be acquired, where the specified time period is longer than a preset time period. And then dividing the specified duration into a specified number of sliding windows according to a preset step length, wherein the length of each sliding window is the preset duration. For example, if the specified duration is 2min, the preset duration is 30s, and the step size can be set to 15s, then 7 (i.e., the specified number of) sliding windows can be obtained: [0, 30s ], [15s, 45s ], [30s, 60s ], [45s, 75s ], [60s, 90s ], [75s, 105s ], and [90s, 120s ]. Then, the trajectory data and the relative motion data collected in each sliding window are used as the trajectory data set and the relative motion data set corresponding to the sliding window, so that a specified number of groups of trajectory data sets and relative motion data sets are obtained.
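The sliding-window division of step 1016 can be sketched as:

```python
def sliding_windows(total_s, window_s, step_s):
    """Split the specified duration into fixed-length windows advanced by step_s."""
    windows, start = [], 0
    while start + window_s <= total_s:
        windows.append((start, start + window_s))
        start += step_s
    return windows
```

With `sliding_windows(120, 30, 15)` this reproduces the seven windows listed above.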
Accordingly, step 105 may be implemented by:
step 1051, weighting and summing the lane change probability determined according to the track data set and the relative motion data set corresponding to each sliding window to determine the total lane change probability, wherein the weight corresponding to each sliding window is inversely proportional to the distance between the sliding window and the current moment.
Step 1052, determining that the second vehicle is about to change lane if the total lane change probability is greater than or equal to the lane change threshold.
Step 1053, determining that the second vehicle will remain straight if the total lane change probability is less than the lane change threshold.
For example, steps 102 to 104 may be performed on the trajectory data set and the relative motion data set corresponding to each sliding window, respectively, to obtain the lane change probability determined according to the trajectory data set and the relative motion data set corresponding to each sliding window. And then, carrying out weighted summation on the lane change probability determined according to the track data set and the relative motion data set corresponding to each sliding window to obtain the total lane change probability. And the weight value corresponding to each sliding window is inversely proportional to the distance between the sliding window and the current moment. And if the total lane change probability is greater than or equal to the lane change threshold value, determining that the second vehicle is to change lanes, and if the total lane change probability is smaller than the lane change threshold value, determining that the second vehicle is to keep going straight.
Taking the trajectory data sets and relative motion data sets corresponding to the N sliding windows determined in step 1017 as an example, N lane change probabilities may be obtained: S_1, S_2, …, S_i, …, S_N, where S_i is the lane change probability determined according to the trajectory data set and the relative motion data set corresponding to the i-th sliding window. The total lane change probability can then be determined by the following equations:

S_total = Σ_{i=1}^{N} W_i · S_i

W_i = (D − d_i) / Σ_{j=1}^{N} (D − d_j)

wherein S_total indicates the total lane change probability, W_i represents the weight corresponding to the i-th sliding window, D represents the specified duration, and d_i indicates the distance of the i-th sliding window from the current time. For example, if D = 120s and there are 7 sliding windows [0, 30s], [15s, 45s], [30s, 60s], [45s, 75s], [60s, 90s], [75s, 105s], and [90s, 120s], then d_1 = 120 − 30 = 90, d_2 = 120 − 45 = 75, and so on, so that windows closer to the current moment receive larger weights.
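The weighted summation can be sketched as follows; since the weight formula itself is only an image in the source, the normalization below is one plausible reading in which windows nearer the current moment receive larger weights:

```python
def total_lane_change_probability(probs, distances, duration):
    """Weighted sum of per-window lane change probabilities.

    probs[i] is S_i for the i-th sliding window and distances[i] is d_i
    (how far the window lies from the current moment). Weights decrease
    with distance and are normalized to sum to one; this normalization
    is our assumption, not the patent's exact formula.
    """
    raw = [duration - d for d in distances]   # larger for recent windows
    norm = sum(raw)
    weights = [r / norm for r in raw]
    return sum(w * s for w, s in zip(weights, probs))

# 7 windows over D = 120 s, with d_i = 120 - window_end as in the text
distances = [90, 75, 60, 45, 30, 15, 0]
probs = [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
print(round(total_lane_change_probability(probs, distances, 120), 3))  # → 0.58
```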
Fig. 6 is a flowchart illustrating another method of determining a lane change of a vehicle according to an exemplary embodiment, where the track recognition result is used to indicate straight travel, a left lane change, or a right lane change. As shown in fig. 6, after step 105, the method may further comprise:
and 106, if the second vehicle is determined to be lane-changed, counting a first number corresponding to left lane change, a second number corresponding to right lane change and a third number corresponding to straight line in a track identification result determined according to the track data set corresponding to each sliding window, wherein the sum of the first number, the second number and the third number is equal to the designated number.
And step 107, if the first number is larger than the second number, determining that the second vehicle is going to change lanes to the left.
And step 108, if the first number is smaller than the second number, determining that the second vehicle is going to change lanes to the right.
For example, if the track recognition result can indicate that the track of the second vehicle belongs to a left lane change, a right lane change or a straight line (i.e. the scene indicating the lane change is divided into a left lane change and a right lane change), after the second vehicle is determined to be lane change in step 105, it can also be determined whether the second vehicle is to be left lane change or right lane change. Specifically, step 102 may be performed on the trajectory data sets corresponding to the specified number of sliding windows, respectively, so as to obtain the specified number of trajectory identification results determined according to the trajectory data set corresponding to each sliding window. Then, counting the track identification results with the specified number, wherein the track identification results indicate a first number corresponding to left lane changing, indicate a second number corresponding to right lane changing and indicate a third number corresponding to straight lines, and the sum of the first number, the second number and the third number is equal to the specified number.
If the first number is greater than the second number, it may be determined that the second vehicle is about to change lanes to the left. If the first number is less than the second number, then it may be determined that the second vehicle is about to right lane change. If the first number is equal to the second number, the determination may be performed again after new trajectory data is acquired at the next acquisition time.
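The counting logic of steps 106 to 108 can be sketched as a simple vote over the per-window recognition results (names are illustrative):

```python
def lane_change_direction(window_results):
    """Vote over per-window track recognition results.

    window_results holds one of 'left', 'right', 'straight' per sliding
    window. Returns the direction, or None when the left/right counts
    tie (the text defers the decision to the next acquisition time).
    """
    left = window_results.count('left')    # first number
    right = window_results.count('right')  # second number
    if left > right:
        return 'left'
    if right > left:
        return 'right'
    return None  # tie: wait for new trajectory data

print(lane_change_direction(['left', 'left', 'straight', 'right', 'left', 'straight', 'left']))
# prints "left": first number 4 > second number 1 (third number 2 is straight)
```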
FIG. 7 is a flow chart illustrating another method for determining a lane change for a vehicle, according to an exemplary embodiment, as shown in FIG. 7, after step 105, the method may further comprise:
and step 109, if the second vehicle is determined to be lane-changing, determining lane-changing time of the second vehicle according to the transverse position, the transverse speed and the course angle of the second vehicle, which are included in the track data acquired at the current acquisition time.
For example, after determining in step 105 that the second vehicle is going to change lanes, the lane change trajectory of the second vehicle may be further determined according to the trajectory data collected at the current collection time in the trajectory data set. Specifically, the lane change time of the second vehicle may be determined according to the lateral position, the lateral speed, and the heading angle of the second vehicle included in the trajectory data acquired at the current acquisition time. Taking the current acquisition time as i and the correspondingly acquired track data as P_i, which includes the lateral position x_i, the lateral speed vx_i, and the heading angle Ang_i, the lane change time of the second vehicle can be obtained by the following formula:

time_pre,i = ξ · tan(Ang_i) + ω · (x_tar − x_i)/vx_i + λ · (x_i − x_tar) + θ

wherein time_pre,i indicates the lane change time, x_tar indicates the center position of the target lane to which the second vehicle is to change, ξ, ω, and λ are preset correlation coefficients, and θ is a preset constant. It should be noted that ξ, ω, λ, and θ can be obtained by constructing a correlation coefficient matrix according to a predetermined training data set and then using a linear regression method. For example, the training data set is {T_1, T_2, …, T_j, …, T_N}, where N is the number of training data contained in the training data set and T_j = {x_j, vx_j, Ang_j}. The lane change time is calculated for each training data according to the above formula, and a correlation matrix is constructed, as shown in Table 3:

TABLE 3 (correlation coefficient matrix of the regression features)

Each entry of the matrix is the correlation coefficient ρ_XY = cov(X, Y)/(σX · σY) between a pair of features, where, for example, cov(X, Y) denotes the covariance between tan(Ang_j) and (x_tar − x_j)/vx_j, σX represents the standard deviation of tan(Ang_j), and σY represents the standard deviation of (x_tar − x_j)/vx_j; the other correlation coefficients are obtained in the same manner and are not described herein again. Linear regression is then performed on tan(Ang_j), (x_tar − x_j)/vx_j, and (x_j − x_tar) according to the correlation matrix, as shown in fig. 8, to obtain ξ, ω, λ, and θ.
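Given labeled lane change times for a training set, the coefficients could be fitted by ordinary least squares; a sketch under our reading of the formula above, using the three regression features named in the text (the function name and synthetic-data setup are ours):

```python
import numpy as np

def fit_lane_change_time_coeffs(x, vx, ang, x_tar, t_observed):
    """Least-squares fit of xi, omega, lambda and the constant theta.

    The design matrix stacks the three features from the text:
    tan(Ang_j), (x_tar - x_j)/vx_j and (x_j - x_tar), plus a constant
    column for theta. t_observed holds measured lane change times.
    """
    f1 = np.tan(ang)
    f2 = (x_tar - x) / vx
    f3 = x - x_tar
    A = np.column_stack([f1, f2, f3, np.ones_like(f1)])
    coeffs, *_ = np.linalg.lstsq(A, t_observed, rcond=None)
    return coeffs  # [xi, omega, lambda, theta]
```

On synthetic data generated from known coefficients, the fit recovers them, which is a quick sanity check of the feature construction.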
And step 110, determining the ending track data of the second vehicle at the end of lane change according to the lane change time and the longitudinal speed included in the track data acquired at the current acquisition time.
And step 111, fitting the ending track data and the track data acquired at the current acquisition time with a Bezier function to obtain the lane change track of the second vehicle.
For example, after determining the lane change time, the longitudinal displacement of the second vehicle during the lane change process may be determined according to the lane change time and the longitudinal speed included in the trajectory data collected at the current collection time, and the ending track data of the second vehicle at the end of the lane change may be further determined according to the longitudinal displacement and the center position of the target lane to which the second vehicle is to change. Taking the current acquisition time as i and the correspondingly acquired track data as P_i, which includes the longitudinal speed vy_i, the longitudinal displacement is y_pre,i = vy_i · time_pre,i. The lateral position, longitudinal position, speed, and heading angle of the second vehicle at the end of the lane change are then determined according to the longitudinal displacement and the center position of the target lane, and are taken as the ending track data.
And finally, the ending track data and the track data acquired at the current acquisition time are fitted with a Bezier function to obtain the lane change track of the second vehicle. For example, the trajectory data acquired at the current acquisition time may be used as the starting point P_0 and the ending track data as the end point P_3. P_0 is then extended by one vehicle length along the current driving direction of the second vehicle to obtain the extension point P_1, and P_3 is extended by one vehicle length in the direction opposite to the current driving direction of the second vehicle to obtain the reverse extension point P_2. Fitting P_0, P_1, P_2, and P_3 according to a third-order (cubic) Bezier function: B(t) = (1 − t)^3 · P_0 + 3t(1 − t)^2 · P_1 + 3t^2(1 − t) · P_2 + t^3 · P_3, where t lies within the interval [0, 1], yields the fitted lane change trajectory, as shown in fig. 9. After the lane change track of the second vehicle is determined, it can be displayed on the central control display screen of the first vehicle, so that the driver of the first vehicle can make a decision in advance to avoid the second vehicle.
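The control-point construction and cubic Bezier fit can be sketched as follows; the heading-angle axis convention (0 rad meaning straight along the longitudinal axis) is an assumption for illustration:

```python
import numpy as np

def bezier_lane_change(p0, heading, length, p3, n=50):
    """Cubic Bezier sketch of the lane change trajectory.

    p0: (lateral, longitudinal) position at the current acquisition time;
    p3: position at the end of the lane change; heading: current driving
    direction in radians; length: vehicle length used for the extension
    points. P1 extends P0 forward one vehicle length and P2 extends P3
    backward one vehicle length, as described in the text.
    """
    p0, p3 = np.asarray(p0, float), np.asarray(p3, float)
    d = length * np.array([np.sin(heading), np.cos(heading)])  # assumed axes
    p1, p2 = p0 + d, p3 - d
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * t * (1 - t) ** 2 * p1
            + 3 * t ** 2 * (1 - t) * p2 + t ** 3 * p3)

traj = bezier_lane_change((0.0, 0.0), 0.0, 4.5, (3.5, 40.0))
print(traj[0], traj[-1])  # curve starts at P0 and ends at P3
```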
It should be noted that, the trajectory recognition model in the above embodiment is trained through the following steps:
Step A, a first sample input set and a first sample output set are obtained. Each first sample input in the first sample input set comprises a plurality of sample track data; the first sample output set comprises a first sample output corresponding to each first sample input; and each first sample output comprises the sample track identification result labeled for the corresponding plurality of sample track data, the sample track identification result being used for indicating straight movement or lane change.
For example, before training the trajectory recognition model, a first sample input set and a first sample output set for training the trajectory recognition model may be obtained, where the first sample input set includes a plurality of first sample inputs, each first sample input includes a plurality of sample trajectory data, that is, each first sample input is a set of sample trajectory data, and the set of sample trajectory data is labeled with a sample trajectory recognition result for indicating that the set of sample trajectory data is a straight line or a lane change (which may also be used for indicating that the set of sample trajectory data is a straight line, a left lane change or a right lane change). Thus, each first sample inputs a corresponding sample trajectory recognition result, and forms a first sample output set. The rule for determining the sample track recognition result may be to label, as the lane change, sample track data acquired when the second vehicle passes through the lane line and sample track data acquired within 4 seconds before the second vehicle passes through the lane line. And marking the sample track data acquired at the rest time as straight lines.
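The 4-second labeling rule can be sketched as follows (the `(timestamp, data)` sample layout and names are assumptions for illustration):

```python
def label_trajectory_samples(samples, crossing_time, window=4.0):
    """Label sample trajectory data as 'lane_change' or 'straight'.

    Implements the rule from the text: samples collected when the second
    vehicle passes the lane line, and within 4 s before that crossing,
    are labeled lane change; all other samples are labeled straight.
    samples is a list of (timestamp, data) pairs.
    """
    labels = []
    for t, _data in samples:
        if crossing_time - window <= t <= crossing_time:
            labels.append('lane_change')
        else:
            labels.append('straight')
    return labels

samples = [(t, None) for t in range(0, 12)]  # one sample per second
print(label_trajectory_samples(samples, crossing_time=8.0))
# samples at t = 4..8 s fall in the lane change window
```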
Each sample trajectory data may include nine track features: lateral position, longitudinal position, speed, acceleration, lateral speed, longitudinal speed, lateral acceleration, longitudinal acceleration, and heading angle. For example, the first sample input set includes N first sample inputs, which may be {A_1, A_2, …, A_n, …, A_N}, where A_n comprises L sample trajectory data, which can be {a_n1, a_n2, …, a_nl, …, a_nL}.
And step B, taking the first sample input set as the input of the track recognition model, and taking the first sample output set as the output of the track recognition model so as to train the track recognition model.
After the first sample input set and the first sample output set are obtained, the first sample input set can be used as the input of the trajectory recognition model, and the first sample output set can be used as the output of the trajectory recognition model, so as to train the trajectory recognition model, and when any first sample input is input, the output of the trajectory recognition model is matched with the first sample output corresponding to the first sample input.
In the following, taking the trajectory recognition model as an LSTM model as an example, the training process of the trajectory recognition model is specifically described:
For example, an input weight may be added to the input layer of the LSTM model; that is, when any first sample input is taken as the input of the LSTM model, the first sample input is multiplied by the input weight. The forget gate model of the LSTM model is then:

f_t = σ(W_f · [h_{t-1}, w̃ · n_t] + b_f)

wherein n_t denotes the t-th first sample input, f_t denotes the output of the forget gate, h_{t-1} denotes the output of the LSTM model at the previous time step, W_f denotes the weight of the forget gate, b_f denotes the bias of the forget gate, and w̃ denotes the input weight.
The input gate model of the LSTM model is:

i_t = σ(W_i · [h_{t-1}, w̃ · n_t] + b_i)

wherein i_t denotes the output of the input gate, W_i denotes the weight of the input gate, and b_i denotes the bias of the input gate.
The candidate gate model of the LSTM model is:

C̃_t = tanh(W_C · [h_{t-1}, w̃ · n_t] + b_C)

wherein C̃_t denotes the candidate vector, W_C denotes the weight of the candidate gate, and b_C denotes the bias of the candidate gate. Accordingly, the cell state is:

C_t = f_t · C_{t-1} + i_t · C̃_t
The output gate model of the LSTM model is:

o_t = σ(W_o · [h_{t-1}, w̃ · n_t] + b_o)

wherein o_t denotes the output of the output gate, W_o denotes the weight of the output gate, and b_o denotes the bias of the output gate.
Using the tanh activation function to control the memory cell, the output of the LSTM model is:
h_t = o_t · tanh(C_t)
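The gate equations above can be sketched as a single forward step in NumPy; a minimal illustration of the weighted-input LSTM cell, not the patent's implementation (shapes and the scalar input weight are assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def weighted_lstm_step(n_t, h_prev, c_prev, in_w, params):
    """One forward step of an LSTM cell with the extra input weight.

    Per the text, the sample input n_t is multiplied by its input weight
    in_w before entering the gates. params holds (W_f, b_f, W_i, b_i,
    W_C, b_C, W_o, b_o); each W acts on the concatenation
    [h_prev, in_w * n_t].
    """
    W_f, b_f, W_i, b_i, W_C, b_C, W_o, b_o = params
    z = np.concatenate([h_prev, in_w * n_t])
    f_t = sigmoid(W_f @ z + b_f)          # forget gate
    i_t = sigmoid(W_i @ z + b_i)          # input gate
    c_tilde = np.tanh(W_C @ z + b_C)      # candidate vector
    c_t = f_t * c_prev + i_t * c_tilde    # cell state
    o_t = sigmoid(W_o @ z + b_o)          # output gate
    h_t = o_t * np.tanh(c_t)              # hidden output
    return h_t, c_t
```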
specifically, the input weight of LSTM may be determined by:
And step B1, inputting the first sample input set into the initial LSTM model according to the first input weight, and taking the first sample output set as the output of the initial LSTM model, so as to train the initial LSTM model.
Step B2, updating the first input weights according to the trained initial LSTM model.
And step B3, repeatedly executing the steps B1 to B2, and taking the initial LSTM model and the first input weight obtained after a preset number of iterations as the track recognition model and the input weight corresponding to the track recognition model.
Let K denote the current iteration number and K_max denote the preset number of iterations, and take the first sample input set {A_1, A_2, …, A_n, …, A_N}, where A_n is {a_n1, a_n2, …, a_nl, …, a_nL}, as an example. In the first iteration, K = 1, and the initial value of the first input weight corresponding to the first sample input set may be W^K = {W_1^K, W_2^K, …, W_n^K, …, W_N^K}, where W_n^K denotes the input weight corresponding to A_n and W_n^K = {W_n1^K, W_n2^K, …, W_nl^K, …, W_nL^K}, with W_nl^K denoting the input weight corresponding to a_nl in the K-th iteration; initially, let W_nl^K = 1/L.
Then let n = 1; that is, input A_n into the initial LSTM model according to the input weight W_n^K, and take the first sample output corresponding to A_n as the output of the initial LSTM model, so as to train the initial LSTM model and obtain the trained initial LSTM model. A new first input weight corresponding to A_n, namely W_n^{K+1} = {W_n1^{K+1}, W_n2^{K+1}, …, W_nl^{K+1}, …, W_nL^{K+1}}, can then be obtained according to the trained initial LSTM model:

W_nl^{K+1} = (W_nl^K / Z_K) · exp(−α_K · G_K(W_nl^K, A_n))

wherein Z_K = Σ_l W_nl^K · exp(−α_K · G_K(W_nl^K, A_n)) can be understood as a normalization factor, α_K is a preset parameter, and G_K(W_nl^K, A_n) is the output of the initial LSTM model when A_n is input into the initial LSTM model according to the input weight W_n^K.
Then let n = n + 1 and repeat the above steps in sequence until n = N, so as to obtain W^{K+1} = {W_1^{K+1}, W_2^{K+1}, …, W_n^{K+1}, …, W_N^{K+1}}. Then let K = K + 1, and continue until K = K_max. Through this two-layer loop, W^{K_max} = {W_1^{K_max}, W_2^{K_max}, …, W_n^{K_max}, …, W_N^{K_max}} is obtained, where W_n^{K_max} = {W_n1^{K_max}, W_n2^{K_max}, …, W_nl^{K_max}, …, W_nL^{K_max}}.
The relative motion recognition model is trained by the following steps:
And step C, acquiring a second sample input set and a second sample output set, wherein each second sample input in the second sample input set comprises a plurality of sample relative motion data; the second sample output set comprises a second sample output corresponding to each second sample input; and each second sample output comprises the sample relative motion identification result labeled for the corresponding plurality of sample relative motion data, the sample relative motion identification result being used for indicating straight movement or lane change.
And D, taking the second sample input set as the input of the relative motion recognition model, and taking the second sample output set as the output of the relative motion recognition model so as to train the relative motion recognition model.
Similarly, a second sample input set and a second sample output set for training the relative motion recognition model may be obtained before training the relative motion recognition model, where the second sample input set includes a plurality of second sample inputs, each second sample input includes a plurality of sample relative motion data, that is, each second sample input is a set of sample relative motion data, and the set of sample relative motion data is labeled with a sample relative motion recognition result for indicating that the set of sample relative motion data is a straight line or a lane change (and may also be used for indicating that the set of sample relative motion data is a straight line, a left lane change or a right lane change). Thus, each second sample inputs the corresponding sample relative motion recognition result, and forms a second sample output set. The rule for determining the result of identifying the relative movement of the sample may be to label the relative movement data of the sample acquired when the second vehicle passes through the lane line and the relative movement data of the sample acquired within 4 seconds before the second vehicle passes through the lane line as lane change. And the relative motion data of the samples collected at other times are marked as a straight line.
Each sample relative motion data may include six relative motion features: the relative longitudinal speed and relative longitudinal distance between the first vehicle and the second vehicle, the relative longitudinal speed and relative longitudinal distance between the first vehicle and the third vehicle, and the relative longitudinal speed and relative longitudinal distance between the second vehicle and the fourth vehicle. For example, the second sample input set includes N second sample inputs, which may be {B_1, B_2, …, B_n, …, B_N}, where B_n includes L sample relative motion data, which may be {b_n1, b_n2, …, b_nl, …, b_nL}.
After the second sample input set and the second sample output set are obtained, the second sample input set may be used as an input of the relative motion recognition model, and the second sample output set may be used as an output of the relative motion recognition model, so as to train the relative motion recognition model, so that when any one second sample input is input, the output of the relative motion recognition model matches with a second sample output corresponding to the second sample input.
In the following, taking the relative motion recognition model as a GRU model as an example, the training process of the relative motion recognition model is specifically described:
For example, an input weight may be added to the input layer of the GRU model; that is, when any second sample input is used as the input of the GRU model, the second sample input is multiplied by the input weight. The update gate model of the GRU model is then:

r_t = σ(W_r · [h'_{t-1}, w̃' · n'_t])

wherein n'_t denotes the t-th second sample input, r_t denotes the output of the update gate, W_r denotes the weight of the update gate, h'_{t-1} denotes the output of the GRU model at the previous time step, and w̃' denotes the input weight.
The reset gate model of the GRU model is:

z_t = σ(W_z · [h'_{t-1}, w̃' · n'_t])

wherein z_t denotes the output of the reset gate and W_z denotes the weight of the reset gate.
The candidate gate model of the GRU model is:

h̃_t = tanh(W_h · [z_t · h'_{t-1}, w̃' · n'_t])

wherein h̃_t is the candidate vector and W_h is the weight of the candidate gate. The corresponding output of the GRU model is:

h'_t = (1 − r_t) · h'_{t-1} + r_t · h̃_t
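Analogously, the GRU step with an input weight can be sketched in NumPy, keeping the text's gate naming (r_t as the update gate, z_t as the reset gate); shapes and the scalar input weight are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def weighted_gru_step(n_t, h_prev, in_w, params):
    """One forward step of a GRU cell with the extra input weight.

    params holds (W_r, W_z, W_h); each weight acts on the concatenation
    of the previous hidden state (gated for the candidate) and the
    input-weighted sample in_w * n_t.
    """
    W_r, W_z, W_h = params
    x = in_w * n_t
    r_t = sigmoid(W_r @ np.concatenate([h_prev, x]))            # update gate
    z_t = sigmoid(W_z @ np.concatenate([h_prev, x]))            # reset gate
    h_tilde = np.tanh(W_h @ np.concatenate([z_t * h_prev, x]))  # candidate
    return (1.0 - r_t) * h_prev + r_t * h_tilde                 # new hidden state
```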
specifically, the input weight of the GRU may be determined by:
and D1, inputting the second sample input set into the initial GRU model according to the second input weight, and taking the second sample output set as the output of the initial GRU model to train the initial GRU model.
Step D2, updating the second input weight according to the trained initial GRU model.
And D3, repeating the steps D1 to D2, and taking the initial GRU model and the second input weight obtained after a preset number of iterations as the relative motion recognition model and the input weight corresponding to the relative motion recognition model.
The input weights of the GRUs in the above embodiment are determined in the same manner as the input weights of the LSTM in the above embodiment, and will not be described in detail here.
In summary, in the present disclosure, a first vehicle first acquires a trajectory data set and a relative motion data set acquired at a plurality of acquisition times within a preset time period. And then inputting the track data set into a track identification model and a track clustering model to determine track lane changing probability according to the track identification result and the track clustering result, and then inputting the relative motion data set into a relative motion identification model and a relative motion clustering model to determine the relative motion lane changing probability according to the relative motion identification result and the relative motion clustering result. And finally, determining whether the second vehicle is to change the lane according to the lane changing probability of the track and the relative motion lane changing probability, wherein the second vehicle is a vehicle which is ahead of the first vehicle and is positioned in an adjacent lane. According to the lane changing judgment method and device, the first vehicle collects the track data set and the relative motion data set, operation data in the second vehicle do not need to be obtained, the accuracy of the data is improved, whether the second vehicle is to change the lane or not is determined according to the track data set and the relative motion data set, the continuity of the second vehicle in the driving process in space and time is comprehensively considered, and the accuracy of the lane changing judgment of the vehicle is improved.
Fig. 10 is a block diagram illustrating a vehicle lane change determination apparatus according to an exemplary embodiment, and as shown in fig. 10, the apparatus 200 is applied to a first vehicle, and includes:
the acquisition module 201 is configured to acquire a track data set and a relative motion data set, where the track data set includes track data of a second vehicle acquired at multiple acquisition times within a preset time period, the relative motion data set includes relative motion data between the second vehicle and a first vehicle acquired at multiple acquisition times within the preset time period, the multiple acquisition times include a current acquisition time, and the second vehicle is a vehicle that travels on an adjacent lane of a lane where the first vehicle is located and is located in front of the first vehicle.
The first processing module 202 is configured to input the trajectory data set into a pre-trained trajectory recognition model and a pre-trained trajectory clustering model, respectively, so as to determine a trajectory lane change probability according to a trajectory recognition result output by the trajectory recognition model and a trajectory clustering result output by the trajectory clustering model.
And the second processing module 203 is configured to input the relative motion data set into a pre-trained relative motion recognition model and a pre-trained relative motion clustering model respectively, so as to determine the lane change probability of the relative motion according to the relative motion recognition result output by the relative motion recognition model and the relative motion clustering result output by the relative motion clustering model.
The first determining module 204 is configured to determine a lane change probability of the second vehicle according to the trajectory lane change probability and the relative motion lane change probability.
The second determining module 205 is configured to determine whether the second vehicle is to change the lane according to the lane change probability and a preset lane change threshold.
Fig. 11 is a block diagram illustrating another vehicle lane change determination apparatus according to an exemplary embodiment, and as shown in fig. 11, the obtaining module 201 includes:
the first obtaining sub-module 2011 is configured to obtain track data and relative motion data collected at each collection time within a preset time period, where the track data includes: a lateral position and a longitudinal position of the second vehicle, the relative motion data comprising: a relative longitudinal speed and a relative longitudinal distance of the first vehicle from the second vehicle.
The processing sub-module 2012 is configured to determine, according to the trajectory data acquired at each acquisition time, supplementary trajectory data corresponding to the acquisition time, where the supplementary trajectory data includes: lateral velocity and lateral acceleration of the second vehicle. And taking the track data acquired at each acquisition moment and the corresponding supplementary track data as a track data set. And processing the relative motion data acquired at each acquisition moment according to a preset rule to obtain a relative motion data set.
Fig. 12 is a block diagram illustrating another vehicle lane change determination apparatus according to an exemplary embodiment, and as shown in fig. 12, the first processing module 202 includes:
the first input sub-module 2021 is configured to input the trajectory data set into the trajectory recognition model to obtain a trajectory recognition result, and input the trajectory data set into the trajectory clustering model to obtain a trajectory clustering result, where the trajectory recognition result is used to indicate a straight line or lane change, and the trajectory clustering result is used to indicate a category to which the trajectory data acquired at each acquisition time belongs.
The first determining sub-module 2022 is configured to determine that the track lane change probability is zero if the track recognition result indicates a straight-ahead operation. And if the track identification result indicates lane change, determining the track lane change probability according to the track clustering result.
In one embodiment, the trajectory recognition model is an Attention-LSTM model, and the first input submodule 2021 is configured to:
inputting the track data set into the Attention-LSTM model to obtain the track recognition result output by the Attention-LSTM model and the Attention value at each acquisition moment.
The first judging sub-module 2022 is configured to implement the following steps:
step 1) determining the initial track lane change probability at the acquisition time according to the category of the track data acquired at each acquisition time included in the track clustering result and the corresponding relation between the preset category and the initial track lane change probability.
And 2) determining the track lane change probability according to the initial track lane change probability at each acquisition time and the attention value at each acquisition time.
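The combination in steps 1) and 2) can be sketched as follows; the exact combination rule is not given in the text, so a normalized attention-weighted average of the per-acquisition-time initial probabilities is assumed:

```python
def attention_weighted_probability(initial_probs, attention):
    """Combine per-acquisition-time initial lane change probabilities
    with the attention values output by the recognition model.

    initial_probs[k] is the initial lane change probability for the k-th
    acquisition time (looked up from its cluster category); attention[k]
    is the model's attention value for that time. The attention values
    are normalized to sum to one before weighting (our assumption).
    """
    total = sum(attention)
    return sum(a / total * p for a, p in zip(attention, initial_probs))

# Three acquisition times; the last one carries twice the attention.
print(attention_weighted_probability([0.2, 0.5, 0.9], [1.0, 1.0, 2.0]))
```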
Fig. 13 is a block diagram illustrating another vehicle lane change determination apparatus according to an exemplary embodiment, and as shown in fig. 13, the second processing module 203 includes:
the second input sub-module 2031 is configured to input the relative motion data set into the relative motion recognition model to obtain a relative motion recognition result, and input the relative motion data set into the relative motion clustering model to obtain a relative motion clustering result, where the relative motion recognition result is used to indicate a straight line or a lane change, and the relative motion clustering result is used to indicate a category to which the relative motion data acquired at each acquisition time belongs.
The second determining sub-module 2032 is configured to determine that the relative motion lane change probability is zero if the relative motion recognition result indicates a straight line. And if the relative motion identification result indicates lane change, determining the lane change probability of the relative motion according to the relative motion clustering result.
Optionally, the relative motion recognition model is an Attention-GRU model, and the second input sub-module 2031 is configured to:
and inputting the relative motion data set into the Attention-GRU model to obtain the relative motion recognition result output by the Attention-GRU model and the Attention value at each acquisition moment.
The second determination submodule 2032 is configured to implement the following steps:
and 3) determining the initial relative motion lane change probability at the acquisition time according to the category to which the relative motion data acquired at each acquisition time belongs and the corresponding relation between the preset category and the initial relative motion lane change probability in the relative motion clustering result.
And 4) determining the lane change probability of the relative motion according to the initial relative motion lane change probability of each acquisition time and the attention value of each acquisition time.
Fig. 14 is a block diagram illustrating another vehicle lane change determination apparatus according to an exemplary embodiment, and as shown in fig. 14, the obtaining module 201 includes:
and the second obtaining submodule 2013 is used for obtaining the track data and the relative motion data which are collected at each collecting moment in a specified time length.
The dividing sub-module 2014 is configured to divide the specified duration into a specified number of sliding windows, where the length of each sliding window is a preset duration. And taking the track data and the relative motion data collected in each sliding window as a track data set and a relative motion data set corresponding to the sliding window.
The second determination module 205 includes:
the first determining submodule 2051 is configured to perform weighted summation on the lane change probabilities determined according to the trajectory data set and the relative motion data set corresponding to each sliding window to determine a total lane change probability, where a weight corresponding to each sliding window is inversely proportional to a distance between the sliding window and the current time.
The second determining sub-module 2052 is configured to determine that the second vehicle is about to change lanes if the total lane change probability is greater than or equal to the lane change threshold, and that the second vehicle will keep going straight if the total lane change probability is less than the lane change threshold.
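The behavior of submodules 2051 and 2052 can be sketched as below. The patent only states that each window's weight is inversely proportional to its distance from the current time, so the `1 / distance` form and the example threshold are illustrative assumptions:

```python
def total_lane_change_prob(window_probs):
    """Weighted sum of per-window lane change probabilities.

    window_probs is ordered from the oldest to the most recent sliding
    window; the most recent window has distance 1 from the current
    time, the oldest has distance n, and weights are normalized."""
    n = len(window_probs)
    raw = [1.0 / (n - i) for i in range(n)]        # inverse distances
    s = sum(raw)
    return sum((w / s) * p for w, p in zip(raw, window_probs))

def decide(total_prob, lane_change_threshold=0.5):
    """Compare the total lane change probability against the threshold."""
    return "lane change" if total_prob >= lane_change_threshold else "straight"
```

With three windows `[0.2, 0.4, 0.9]` the most recent window dominates, giving a total probability of 7/11 ≈ 0.636 and a "lane change" decision at a 0.5 threshold.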
Fig. 15 is a block diagram illustrating another vehicle lane change determination apparatus according to an exemplary embodiment. As shown in fig. 15, the trajectory recognition result is used to indicate straight travel, a left lane change, or a right lane change. The apparatus 200 further comprises:
A third determining module 206 is configured to, after determining whether the second vehicle is about to change lanes according to the lane change probability and the preset lane change threshold, and if it is determined that the second vehicle is about to change lanes, count a first number corresponding to left lane changes, a second number corresponding to right lane changes, and a third number corresponding to straight travel among the trajectory recognition results determined from the trajectory data sets corresponding to the sliding windows, where the sum of the first, second, and third numbers equals the specified number. If the first number is greater than the second number, it is determined that the second vehicle is about to change lanes to the left; if the first number is less than the second number, it is determined that the second vehicle is about to change lanes to the right.
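The direction vote of module 206 amounts to counting the per-window recognition results. A minimal sketch follows; the string labels and the handling of an equal count are assumptions, since the patent leaves the tie case unspecified:

```python
from collections import Counter

def lane_change_direction(window_results):
    """window_results: trajectory recognition result of each sliding
    window, each one of 'left', 'right', or 'straight'."""
    counts = Counter(window_results)
    left, right = counts.get("left", 0), counts.get("right", 0)
    if left > right:
        return "left"      # first number greater than second number
    if left < right:
        return "right"     # first number less than second number
    return None            # tie: not specified in the patent
```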
Fig. 16 is a block diagram illustrating another vehicle lane change determination apparatus according to an exemplary embodiment, and as shown in fig. 16, the apparatus 200 further includes:
The trajectory generation module 207 is configured to, after determining whether the second vehicle is about to change lanes according to the lane change probability and the preset lane change threshold, and if it is determined that the second vehicle is about to change lanes, determine the lane change duration of the second vehicle according to the lateral position, lateral speed, and heading angle of the second vehicle included in the trajectory data acquired at the current acquisition time; determine the end trajectory data of the second vehicle at the end of the lane change according to the lane change duration and the longitudinal speed included in the trajectory data acquired at the current acquisition time; and fit the lane change trajectory of the second vehicle with a Bezier function according to the end trajectory data and the trajectory data acquired at the current acquisition time.
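The trajectory generation of module 207 can be sketched with a cubic Bézier curve. The patent gives neither the control point placement nor the end point construction, so the assumptions below (constant longitudinal speed over the lane change duration, one lane width of lateral offset, one-third/two-thirds control points so the curve leaves and enters each lane roughly parallel to it) are purely illustrative:

```python
import numpy as np

def bezier_lane_change(x0, y0, v_lon, t_lc, lane_width=3.5, n=50):
    """Sample a cubic Bezier lane change trajectory from the current
    position (x0, y0) to the estimated end-of-lane-change position.

    v_lon : longitudinal speed from the current trajectory data
    t_lc  : lane change duration
    """
    # end trajectory point: travel v_lon * t_lc longitudinally and
    # one lane width laterally
    x3, y3 = x0 + v_lon * t_lc, y0 + lane_width
    p0 = np.array([x0, y0])
    p1 = np.array([x0 + (x3 - x0) / 3.0, y0])        # depart parallel to lane
    p2 = np.array([x0 + 2.0 * (x3 - x0) / 3.0, y3])  # arrive parallel to lane
    p3 = np.array([x3, y3])
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)
```

The returned (n, 2) array of longitudinal/lateral positions starts exactly at (x0, y0) and ends at (x0 + v_lon·t_lc, y0 + lane_width).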
It should be noted that the trajectory recognition model in the above embodiment is trained through the following steps:
Step A: Obtain a first sample input set and a first sample output set. Each first sample input in the first sample input set comprises a plurality of sample trajectory data; the first sample output set comprises a first sample output corresponding to each first sample input; and each first sample output comprises a sample trajectory recognition result labeled for the corresponding plurality of sample trajectory data, the result indicating straight travel or a lane change.
Step B: Take the first sample input set as the input of the trajectory recognition model and the first sample output set as the output of the trajectory recognition model, so as to train the trajectory recognition model.
The relative motion recognition model is trained through the following steps:
Step C: Obtain a second sample input set and a second sample output set. Each second sample input in the second sample input set comprises a plurality of sample relative motion data; the second sample output set comprises a second sample output corresponding to each second sample input; and each second sample output comprises a sample relative motion recognition result labeled for the corresponding plurality of sample relative motion data, the result indicating straight travel or a lane change.
Step D: Take the second sample input set as the input of the relative motion recognition model and the second sample output set as the output of the relative motion recognition model, so as to train the relative motion recognition model.
In one application scenario, the trajectory recognition model is an LSTM model and the relative motion recognition model is a GRU model.
Step B may be achieved by:
Step B1: Input the first sample input set into the initial LSTM model according to the first input weight, and take the first sample output set as the output of the initial LSTM model, so as to train the initial LSTM model.
Step B2: Update the first input weight according to the trained initial LSTM model.
Step B3: Repeat steps B1 and B2 for a preset number of iterations, and take the resulting initial LSTM model and first input weight as the trajectory recognition model and its corresponding input weight.
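Steps B1-B3 describe an alternating loop: train the model on inputs scaled by the input weight, then update the input weight from the trained model, and repeat. The patent specifies neither the model internals nor the update rule, so the sketch below substitutes a simple logistic regression for the initial LSTM and uses normalized feature importance as the update; both substitutions are assumptions made only to keep the sketch self-contained:

```python
import numpy as np

def train_model(x, y, epochs=200, lr=0.1):
    """Stand-in for the initial model of step B1: logistic regression
    trained by gradient descent. x: (n_samples, n_features), y in {0, 1}."""
    w, b = np.zeros(x.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))   # sigmoid predictions
        w -= lr * x.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def train_with_input_weights(x, y, iterations=3):
    """Steps B1-B3: train on inputs scaled by the input weight (B1),
    update the input weight from the trained model (B2), and repeat
    for a preset number of iterations (B3)."""
    input_weight = np.ones(x.shape[1])
    model = None
    for _ in range(iterations):
        model = train_model(x * input_weight, y)         # B1
        importance = np.abs(model[0]) + 1e-8
        input_weight = importance / importance.mean()    # B2 (assumed rule)
    return model, input_weight
```

Steps D1-D3 follow the same pattern with the initial GRU model and the second input weight.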
Step D may be achieved by:
Step D1: Input the second sample input set into the initial GRU model according to the second input weight, and take the second sample output set as the output of the initial GRU model, so as to train the initial GRU model.
Step D2: Update the second input weight according to the trained initial GRU model.
Step D3: Repeat steps D1 and D2 for the preset number of iterations, and take the resulting initial GRU model and second input weight as the relative motion recognition model and its corresponding input weight.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
In summary, in the present disclosure, a first vehicle first acquires a trajectory data set and a relative motion data set collected at a plurality of acquisition times within a preset duration. The trajectory data set is then input into a trajectory recognition model and a trajectory clustering model to determine a trajectory lane change probability from the trajectory recognition result and the trajectory clustering result, and the relative motion data set is input into a relative motion recognition model and a relative motion clustering model to determine a relative motion lane change probability from the relative motion recognition result and the relative motion clustering result. Finally, whether the second vehicle is about to change lanes is determined according to the trajectory lane change probability and the relative motion lane change probability, where the second vehicle is a vehicle ahead of the first vehicle in an adjacent lane. Because the first vehicle itself collects the trajectory data set and the relative motion data set, no operating data needs to be obtained from the second vehicle, which improves the accuracy of the data. Determining whether the second vehicle is about to change lanes from both data sets comprehensively considers the spatial and temporal continuity of the second vehicle's movement and improves the accuracy of the lane change determination.
FIG. 17 is a block diagram illustrating an electronic device 300 in accordance with an example embodiment. As shown in fig. 17, the electronic device 300 may include: a processor 301 and a memory 302. The electronic device 300 may also include one or more of a multimedia component 303, an input/output (I/O) interface 304, and a communication component 305.
The processor 301 is configured to control the overall operation of the electronic device 300 so as to complete all or part of the steps of the above-described method for determining a lane change of a vehicle. The memory 302 is used to store various types of data to support operation of the electronic device 300, such as instructions for any application or method operating on the electronic device 300 and application-related data, such as contact data, transmitted and received messages, pictures, audio, video, and the like. The memory 302 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk. The multimedia component 303 may include a screen and an audio component, where the screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals; a received audio signal may further be stored in the memory 302 or transmitted through the communication component 305. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 304 provides an interface between the processor 301 and other interface modules, such as a keyboard, a mouse, or buttons, which may be virtual or physical. The communication component 305 is used for wired or wireless communication between the electronic device 300 and other devices. Wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, 4G, NB-IoT, eMTC, or 5G, or a combination of one or more of them, which is not limited herein.
Accordingly, the communication component 305 may include a Wi-Fi module, a Bluetooth module, an NFC module, and the like.
In an exemplary embodiment, the electronic Device 300 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above-described method for determining vehicle lane changes.
In another exemplary embodiment, there is also provided a computer-readable storage medium comprising program instructions which, when executed by a processor, implement the steps of the above-described method of determining a lane change of a vehicle. For example, the computer-readable storage medium may be the memory 302 including program instructions executable by the processor 301 of the electronic device 300 to perform the method for determining a lane change of a vehicle described above.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-mentioned method of determining a lane change of a vehicle when executed by the programmable apparatus.
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings. However, the present disclosure is not limited to the specific details of the above embodiments; various simple modifications may be made to the technical solution of the present disclosure within its technical concept, and these simple modifications all fall within the protection scope of the present disclosure.
It should also be noted that the various features described in the foregoing embodiments may be combined in any suitable manner; to avoid unnecessary repetition, the possible combinations are not described again.
In addition, any combination of the various embodiments of the present disclosure may be made, and such combinations should likewise be regarded as part of this disclosure, provided they do not depart from the spirit of the present disclosure.

Claims (10)

1. A method of determining a lane change for a vehicle, applied to a first vehicle, the method comprising:
acquiring a track data set and a relative motion data set, wherein the track data set comprises track data of a second vehicle acquired at a plurality of acquisition moments within a preset time length, the relative motion data set comprises relative motion data between the second vehicle and the first vehicle acquired at the plurality of acquisition moments within the preset time length, the plurality of acquisition moments comprise the current acquisition moment, and the second vehicle is a vehicle which runs on a lane adjacent to a lane where the first vehicle is located and is positioned in front of the first vehicle;
inputting the track data set into a pre-trained track recognition model and a pre-trained track clustering model respectively, and determining track lane change probability according to a track recognition result output by the track recognition model and a track clustering result output by the track clustering model;
inputting the relative motion data set into a pre-trained relative motion recognition model and a pre-trained relative motion clustering model respectively, and determining the lane change probability of the relative motion according to a relative motion recognition result output by the relative motion recognition model and a relative motion clustering result output by the relative motion clustering model;
determining the lane change probability of the second vehicle according to the track lane change probability and the relative motion lane change probability;
and determining whether the second vehicle is to change the lane according to the lane change probability and a preset lane change threshold value.
2. The method of claim 1, wherein the acquiring a trajectory data set and a relative motion data set comprises:
acquiring the track data and the relative motion data acquired at each acquisition time within the preset time length, wherein the track data comprises: a lateral position and a longitudinal position of the second vehicle, the relative motion data comprising: a relative longitudinal speed and a relative longitudinal distance of the first vehicle from the second vehicle;
according to the trajectory data acquired at each acquisition time, determining supplementary trajectory data corresponding to the acquisition time, wherein the supplementary trajectory data comprises: a lateral velocity and a lateral acceleration of the second vehicle;
taking the trajectory data acquired at each acquisition moment and the corresponding supplementary trajectory data as the trajectory data set;
and processing the relative motion data acquired at each acquisition moment according to a preset rule to obtain the relative motion data set.
3. The method of claim 1, wherein inputting the trajectory data set into a pre-trained trajectory recognition model and a pre-trained trajectory clustering model respectively to determine a trajectory lane change probability according to a trajectory recognition result output by the trajectory recognition model and a trajectory clustering result output by the trajectory clustering model comprises:
inputting the track data set into the track recognition model to obtain a track recognition result, and inputting the track data set into the track clustering model to obtain a track clustering result, wherein the track recognition result is used for indicating straight movement or lane change, and the track clustering result is used for indicating the category of the track data acquired at each acquisition time;
if the track identification result indicates straight going, determining that the track lane change probability is zero;
and if the track identification result indicates lane change, determining the track lane change probability according to the track clustering result.
4. The method of claim 3, wherein the trajectory recognition model is an Attention-LSTM model, and the inputting the trajectory data set into the trajectory recognition model to obtain the trajectory recognition result comprises:
inputting the track data set into the Attention-LSTM model to obtain the track recognition result output by the Attention-LSTM model and the Attention value of each acquisition moment;
the determining the track lane change probability according to the track clustering result comprises the following steps:
determining the initial track lane change probability at the acquisition time according to the category to which the track data acquired at each acquisition time belongs and the corresponding relation between the preset category and the initial track lane change probability in the track clustering result;
and determining the track lane change probability according to the initial track lane change probability at each acquisition time and the attention value at each acquisition time.
5. The method of claim 1, wherein the inputting the relative motion data set into a pre-trained relative motion recognition model and a pre-trained relative motion clustering model respectively to determine a relative motion lane change probability according to a relative motion recognition result output by the relative motion recognition model and a relative motion clustering result output by the relative motion clustering model comprises:
inputting the relative motion data set into the relative motion recognition model to obtain the relative motion recognition result, and inputting the relative motion data set into the relative motion clustering model to obtain a relative motion clustering result, wherein the relative motion recognition result is used for indicating straight-going or lane changing, and the relative motion clustering result is used for indicating the category to which the relative motion data acquired at each acquisition moment belong;
if the relative motion identification result indicates straight going, determining that the relative motion lane change probability is zero;
and if the relative motion identification result indicates lane change, determining the relative motion lane change probability according to the relative motion clustering result.
6. The method of claim 5, wherein the relative motion recognition model is an Attention-GRU model, and the inputting the relative motion data set into the relative motion recognition model to obtain the relative motion recognition result comprises:
inputting the relative motion data set into the Attention-GRU model to obtain the relative motion recognition result output by the Attention-GRU model and the Attention value at each acquisition moment;
the determining the relative motion lane change probability according to the relative motion clustering result comprises:
determining the initial relative motion lane change probability at the acquisition time according to the category to which the relative motion data acquired at each acquisition time belongs and the corresponding relation between the preset category and the initial relative motion lane change probability in the relative motion clustering result;
and determining the relative motion lane change probability according to the initial relative motion lane change probability at each acquisition time and the attention value at each acquisition time.
7. The method of any one of claims 1-6, wherein the acquiring a trajectory data set and a relative motion data set comprises:
acquiring the track data and the relative motion data acquired at each acquisition time within a specified time;
dividing the specified duration into a specified number of sliding windows, wherein the length of each sliding window is the preset duration;
taking the track data and the relative motion data collected in each sliding window as the track data set and the relative motion data set corresponding to the sliding window;
the determining whether the second vehicle is to change lanes according to the lane change probability and a preset lane change threshold value comprises the following steps:
weighting and summing the lane change probabilities determined according to the track data set and the relative motion data set corresponding to each sliding window to determine a total lane change probability, wherein the weight corresponding to each sliding window is inversely proportional to the distance between the sliding window and the current moment;
if the total lane change probability is larger than or equal to the lane change threshold value, determining that the second vehicle is about to change lanes;
and if the total lane change probability is smaller than the lane change threshold value, determining that the second vehicle keeps going straight.
8. A vehicle lane change determination apparatus, for use with a first vehicle, the apparatus comprising:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a track data set and a relative motion data set, the track data set comprises track data of a second vehicle acquired at a plurality of acquisition moments within a preset time length, the relative motion data set comprises relative motion data between the second vehicle and a first vehicle acquired at the plurality of acquisition moments within the preset time length, the plurality of acquisition moments comprise current acquisition moments, and the second vehicle is a vehicle which runs on a lane adjacent to a lane where the first vehicle is located and is positioned in front of the first vehicle;
the first processing module is used for respectively inputting the track data set into a pre-trained track recognition model and a pre-trained track clustering model so as to determine track lane change probability according to a track recognition result output by the track recognition model and a track clustering result output by the track clustering model;
the second processing module is used for respectively inputting the relative motion data set into a pre-trained relative motion recognition model and a pre-trained relative motion clustering model so as to determine the lane change probability of the relative motion according to a relative motion recognition result output by the relative motion recognition model and a relative motion clustering result output by the relative motion clustering model;
the first determining module is used for determining the lane change probability of the second vehicle according to the track lane change probability and the relative motion lane change probability;
and the second determining module is used for determining whether the second vehicle is to change the lane according to the lane change probability and a preset lane change threshold.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
10. An electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to carry out the steps of the method of any one of claims 1 to 7.
CN202010889856.4A 2020-08-28 2020-08-28 Method and device for determining lane change of vehicle, storage medium and electronic equipment Active CN112085077B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010889856.4A CN112085077B (en) 2020-08-28 2020-08-28 Method and device for determining lane change of vehicle, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN112085077A true CN112085077A (en) 2020-12-15
CN112085077B CN112085077B (en) 2023-10-31

Family

ID=73729247

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010889856.4A Active CN112085077B (en) 2020-08-28 2020-08-28 Method and device for determining lane change of vehicle, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN112085077B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100023265A1 (en) * 2008-07-24 2010-01-28 Gm Global Technology Operations, Inc. Adaptive vehicle control system with integrated driving style recognition
US20100023223A1 (en) * 2008-07-24 2010-01-28 Gm Global Technology Operations, Inc. Adaptive vehicle control system with driving style recognition
CN102855638A (en) * 2012-08-13 2013-01-02 苏州大学 Detection method for abnormal behavior of vehicle based on spectrum clustering
CN103605362A (en) * 2013-09-11 2014-02-26 天津工业大学 Learning and anomaly detection method based on multi-feature motion modes of vehicle traces
CN111079590A (en) * 2019-12-04 2020-04-28 东北大学 Peripheral vehicle behavior pre-judging method of unmanned vehicle
CN111104969A (en) * 2019-12-04 2020-05-05 东北大学 Method for pre-judging collision possibility between unmanned vehicle and surrounding vehicle

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
刘凯;李?东;林伟鹏;: "A Survey of Vehicle Re-identification Technology" (车辆再识别技术综述), Journal of Intelligent Science and Technology (智能科学与技术学报), no. 01 *
谢辉;高斌;熊硕;王悦;: "Trajectory Prediction of Dynamic Vehicles on Structured Roads" (结构化道路中动态车辆的轨迹预测), Journal of Automotive Safety and Energy (汽车安全与节能学报), no. 04 *
赵治国;冯建翔;周良杰;王凯;胡昊锐;张海山;宁忠麟;: "Improved K-means Clustering and Recognition of Drivers' Collision-Avoidance Steering Behavior" (驾驶员避撞转向行为的改进K-means聚类与识别), Automotive Engineering (汽车工程), no. 01 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113044042A (en) * 2021-06-01 2021-06-29 禾多科技(北京)有限公司 Vehicle predicted lane change image display method and device, electronic equipment and readable medium
CN113386775A (en) * 2021-06-16 2021-09-14 杭州电子科技大学 Driver intention identification method considering human-vehicle-road characteristics
CN113386775B (en) * 2021-06-16 2022-06-17 杭州电子科技大学 Driver intention identification method considering human-vehicle-road characteristics
CN114019497A (en) * 2022-01-05 2022-02-08 南京楚航科技有限公司 Target lane change identification method based on millimeter wave radar variance statistics
CN114019497B (en) * 2022-01-05 2022-03-18 南京楚航科技有限公司 Target lane change identification method based on millimeter wave radar variance statistics
CN114212110A (en) * 2022-01-28 2022-03-22 中国第一汽车股份有限公司 Obstacle trajectory prediction method, obstacle trajectory prediction device, electronic device, and storage medium
CN114212110B (en) * 2022-01-28 2024-05-03 中国第一汽车股份有限公司 Obstacle trajectory prediction method and device, electronic equipment and storage medium
CN114771539A (en) * 2022-06-16 2022-07-22 小米汽车科技有限公司 Vehicle lane change decision method, device, storage medium and vehicle
CN114771539B (en) * 2022-06-16 2023-02-28 小米汽车科技有限公司 Vehicle lane change decision method and device, storage medium and vehicle
CN117874479A (en) * 2024-03-11 2024-04-12 西南交通大学 Heavy-duty locomotive coupler force identification method based on data driving
CN117874479B (en) * 2024-03-11 2024-05-24 西南交通大学 Heavy-duty locomotive coupler force identification method based on data driving

Also Published As

Publication number Publication date
CN112085077B (en) 2023-10-31

Similar Documents

Publication Publication Date Title
CN112085077B (en) Method and device for determining lane change of vehicle, storage medium and electronic equipment
Hou et al. Interactive trajectory prediction of surrounding road users for autonomous driving using structural-LSTM network
Zhang et al. Vehicle motion prediction at intersections based on the turning intention and prior trajectories model
Altché et al. An LSTM network for highway trajectory prediction
KR20180068511A (en) Apparatus and method for generating training data for training neural network determining information related to road included in an image
US20170016734A1 (en) Turn predictions
CN110597086A (en) Simulation scene generation method and unmanned system test method
US11501449B2 (en) Method for the assessment of possible trajectories
Wirthmüller et al. Predicting the time until a vehicle changes the lane using LSTM-based recurrent neural networks
Schulz et al. Learning interaction-aware probabilistic driver behavior models from urban scenarios
Li et al. Development and evaluation of two learning-based personalized driver models for pure pursuit path-tracking behaviors
Camara et al. Predicting pedestrian road-crossing assertiveness for autonomous vehicle control
Amsalu et al. Driver behavior modeling near intersections using hidden Markov model based on genetic algorithm
Cheng et al. Modeling mixed traffic in shared space using lstm with probability density mapping
US20210200229A1 (en) Generating trajectory labels from short-term intention and long-term result
JP4420512B2 (en) Moving object motion classification method and apparatus, and image recognition apparatus
Hui et al. Deep encoder–decoder-NN: A deep learning-based autonomous vehicle trajectory prediction and correction model
Sun et al. Vehicle turning behavior modeling at conflicting areas of mixed-flow intersections based on deep learning
CN114127810A (en) Vehicle autonomous level function
Toledo-Moreo et al. Maneuver prediction for road vehicles based on a neuro-fuzzy architecture with a low-cost navigation unit
Hori et al. Driver confusion status detection using recurrent neural networks
Jeong Predictive lane change decision making using bidirectional long short-term memory for autonomous driving on highways
Gross et al. Route and stopping intent prediction at intersections from car fleet data
Peng et al. Driving maneuver detection via sequence learning from vehicle signals and video images
CN114368387A (en) Attention mechanism-based driver intention identification and vehicle track prediction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant