CN114926823A - WGCN-based vehicle driving behavior prediction method


Info

Publication number
CN114926823A
Authority
CN
China
Prior art keywords
matrix
edge
gcn
vehicle
vehicles
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210494451.XA
Other languages
Chinese (zh)
Other versions
CN114926823B (en)
Inventor
李可
杨玲
张宏浩
王小宁
罗寿西
Current Assignee
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date
Filing date
Publication date
Application filed by Southwest Jiaotong University
Priority to CN202210494451.XA
Publication of CN114926823A
Application granted
Publication of CN114926823B
Legal status: Active

Classifications

    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06V10/764 Image or video recognition using classification, e.g. of video objects
    • G06V10/774 Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/82 Image or video recognition using neural networks
    • Y02T10/40 Engine management systems


Abstract

The invention relates to the technical field of vehicle driving behavior prediction, and in particular to a WGCN-based vehicle driving behavior prediction method comprising the following steps: firstly, generating a feature matrix and a local map for each vehicle; secondly, constructing a graph from the weighted feature matrix and the CNN-encoded local map, and inputting the graph into the GCN; thirdly, extracting features from the input data through the GCN, using the GCN's edge-enhanced attention mechanism to increase the dimensionality of the edge features and improve the accuracy of weight-coefficient assignment, and using the GCN's feature-passing mechanism to propagate the vehicles' interaction features in graph form, fully representing the changes in the interaction relationships between vehicles; fourthly, inputting the interaction features output by the GCN into a Transformer for training; and fifthly, obtaining the vehicle driving behavior prediction result through a fully connected layer. The invention gives vehicle driving behavior prediction higher accuracy.

Description

WGCN-based vehicle driving behavior prediction method
Technical Field
The invention relates to the technical field of vehicle driving behavior prediction, and in particular to a WGCN-based vehicle driving behavior prediction method.
Background
With the development of Internet of Vehicles and machine learning technology, traditional driver-assistance functions in the field of vehicle driving, such as Adaptive Cruise Control (ACC) and Lane Keeping (LK), can no longer satisfy the demand for vehicle intelligence, and richer, more intelligent driver-assistance systems are urgently needed. Autonomous driving technology has therefore gradually come into view. To realize autonomous driving, a vehicle first needs to be equipped with multiple sensors, such as millimeter-wave radar, on-board image sensors, and the Global Positioning System (GPS), to acquire accurate real-time information about the vehicle and its surroundings. Second, the vehicle requires a powerful independent computing unit, such as Tesla's FSD (Full Self-Driving) computer or Huawei's MDC (Mobile Data Center), to perform fast and accurate computation on massive data. Finally, modern communication technology is needed so that the information collected by vehicles, and the responses to vehicle requests, can be transmitted rapidly. However, a conventional autonomous vehicle is affected by the number of connected vehicles, the road environment, traffic conditions, the computing capability of its computing unit, and so on, making it difficult to provide high-quality autonomous driving service. In addition, the communication resources available to an autonomous vehicle are limited; particularly in complex traffic scenes, the vehicle sends and receives data with high delay and cannot process real-time data, which causes safety problems.
The emergence of mobile edge computing and deep learning helps to address the shortage of computing and communication resources in autonomous driving and can improve the intelligence of autonomous vehicles. Specifically, the individual tasks in autonomous driving, such as vehicle identification and vehicle behavior prediction, can be realized through deep learning with high accuracy. One work in the literature proposes a hidden-Markov-model-based method to predict lane-change behavior within 0.5-0.7 s; another acquires throttle, steering-wheel, and vehicle yaw-angle information through sensors and obtains good results by predicting driving behavior on the ACT-R framework; yet another builds a fuzzy neural network and combines a risk coefficient with a lane-change feasibility coefficient to judge the driver's behavioral intention. A neural network model obtained through deep learning can be placed on an edge server, and a vehicle can obtain low-delay, high-precision autonomous driving service by sending requests to the edge server. The literature also proposes an algorithm for real-time traffic prediction based on an LSTM network, which extracts vehicle behavior features by learning the vehicle's movement and interaction information and achieves high prediction accuracy; and another work represents the traffic scene with a multi-channel grid map and predicts the vehicle's trajectory with a CNN, obtaining a good prediction effect.
Despite the progress in current research on vehicle driving behavior prediction, two issues remain. First, existing research gives little consideration to the interaction information between vehicles; this key information about mutual influence is missing, which affects prediction accuracy. Second, many current studies consider only the temporal or only the spatial characteristics of the vehicle, and studies combining the two are lacking, so the features extracted from vehicle driving information are insufficient.
Disclosure of Invention
It is an object of the present invention to provide a WGCN-based vehicle driving behavior prediction method that overcomes at least some of the deficiencies of the prior art.
According to the invention, the WGCN-based vehicle driving behavior prediction method comprises the following steps:
firstly, generating a feature matrix and a local map for each vehicle;
secondly, constructing a graph from the weighted feature matrix and the CNN-encoded local map, and inputting the graph into the GCN;
thirdly, extracting features from the input data through the GCN, using the GCN's edge-enhanced attention mechanism to increase the dimensionality of the edge features and improve the accuracy of weight-coefficient assignment, and using the GCN's feature-passing mechanism to propagate the vehicles' interaction features in graph form, fully representing the changes in the interaction relationships between vehicles;
fourthly, inputting the interaction features output by the GCN into a Transformer for training;
and fifthly, obtaining the vehicle driving behavior prediction result through a fully connected layer.
Preferably, in the first step, a feature matrix X = [P, M] is generated, where P is the node feature matrix containing each vehicle's position (x, y), velocity (v_x, v_y), and heading angle θ, and M is the local map; for n vehicles, P stacks one row p_i = (x_i, y_i, v_xi, v_yi, θ_i) per vehicle.
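As a minimal sketch of this step (the array shapes, field names, and the toy local map below are illustrative assumptions, not the patent's data format), the node feature matrix P can be stacked from per-vehicle state tuples:

```python
import numpy as np

def build_feature_matrix(states, local_map):
    """Stack each vehicle's (x, y, v_x, v_y, theta) into the node feature
    matrix P; the feature matrix is X = [P, M], with M the local map."""
    P = np.array([[s["x"], s["y"], s["vx"], s["vy"], s["theta"]]
                  for s in states])
    return P, local_map

# Toy example: two vehicles and a dummy 2x2 grid standing in for the local map.
states = [
    {"x": 0.0,  "y": 1.5, "vx": 20.0, "vy": 0.1,  "theta": 0.0},
    {"x": 12.0, "y": 1.4, "vx": 18.5, "vy": -0.2, "theta": 0.01},
]
P, M = build_feature_matrix(states, np.zeros((2, 2)))
```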
preferably, in the second step, the edge-enhancement in the edge-enhancement attention mechanism is to increase the dimension of the edge feature, so that the edge feature can express more information; the attention mechanism is that different vertexes are assigned with different weight coefficients, the vertexes with different weights have different priorities during processing, and the higher the weight of the vertexes is, the more information representing the vertexes is rich, and the influence is larger.
Preferably, in the edge-enhanced attention mechanism, the edge feature vectors between vertex n and its surrounding vertices are used to compute the weight values of the surrounding vehicles; the final goal is to generate a weighted adjacency matrix representing the magnitude of influence between different vehicles. The matrices are computed as follows:
A = E′ W_a
A′ = softmax(A)
These formulas describe how the attention matrix A′ is obtained: first the edge feature matrix E is normalized to obtain E′; the normalized edge feature matrix is then multiplied by the trainable attention parameter matrix W_a to obtain the attention matrix A, and softmax is applied to A to obtain A′, so that the elements of A′ lie between 0 and 1 and represent different weights. Finally, the weighted adjacency matrix is obtained as
A_adj = E′ A′.
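A numerical sketch of this computation follows. The patent does not fix the matrix shapes, so this assumes n vehicles, a d-dimensional edge-feature tensor E of shape (n, n, d), L2 normalization for E′, a d-dimensional parameter vector W_a, row-wise softmax, and a broadcast reading of A_adj = E′A′; all of these are assumptions made only to make the shapes work:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def weighted_adjacency(E, w_a):
    """E: (n, n, d) edge-feature tensor; w_a: (d,) trainable attention params.
    Returns the weighted adjacency A_adj and the attention matrix A'."""
    E_norm = E / (np.linalg.norm(E, axis=-1, keepdims=True) + 1e-8)  # E'
    A = np.einsum('ijd,d->ij', E_norm, w_a)   # A = E' W_a (attention logits)
    A_prime = softmax(A, axis=1)              # A' = softmax(A), rows sum to 1
    A_adj = E_norm * A_prime[..., None]       # A_adj = E' A' (broadcast)
    return A_adj, A_prime

rng = np.random.default_rng(0)
n, d = 3, 4
A_adj, A_prime = weighted_adjacency(rng.normal(size=(n, n, d)),
                                    rng.normal(size=d))
```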
Preferably, in the feature-passing mechanism, the adjacency relations and interaction features between vehicles are fed to the neural network in graph form, and the information is updated using the feature-passing scheme of the graph convolutional network, so that the network can fully extract the intrinsic relations between vehicles.
The feature matrix X and the weighted adjacency matrix A_adj are propagated as update information through the constructed interaction model; the update proceeds as
H^(k+1) = σ(A_adj H^k W^k)
where H^k is the hidden matrix; when k = 0, H^0 = g, with g = [P′, M′], P′ being the node feature matrix weighted by coefficients α_i (i = 1, 2, ..., m) and M′ = CNN(M); that is, g is the graph constructed from the weighted feature matrix and the CNN-encoded local map. The index k indicates that the computation is currently performed at the k-th layer of the GCN, and W^k is a trainable weight matrix that is updated during training. Finally, the product of the three matrices is passed through the activation function σ to obtain H^(k+1). Performing feature passing with the graph convolutional network captures the relational features of the input vehicle feature matrix and the weighted adjacency matrix.
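The update rule can be sketched in a few lines; the dimensions below and the choice of tanh as the activation function σ are illustrative assumptions:

```python
import numpy as np

def gcn_layer(A_adj, H, W, activation=np.tanh):
    """One feature-passing step: H^(k+1) = sigma(A_adj @ H^k @ W^k).
    A_adj: (n, n) weighted adjacency; H: (n, f_in) hidden matrix;
    W: (f_in, f_out) trainable weight matrix."""
    return activation(A_adj @ H @ W)

rng = np.random.default_rng(1)
n, f_in, f_out = 4, 5, 8
A_adj = np.abs(rng.normal(size=(n, n)))   # stand-in weighted adjacency
H0 = rng.normal(size=(n, f_in))           # H^0 = g in the notation above
W0 = rng.normal(size=(f_in, f_out))       # trainable weights for layer 0
H1 = gcn_layer(A_adj, H0, W0)
```

Stacking several such calls gives a multi-layer GCN; each layer mixes every vehicle's features with those of its weighted neighbors.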
Aiming at the problem of vehicle driving behavior prediction in an Internet of Vehicles scenario, the invention provides a vehicle driving behavior prediction method based on WGCN (weighted graph convolutional network). First, the feature matrix is generated, and a graph is constructed from the weighted feature matrix and the CNN-encoded local map; the GCN's edge-enhanced attention mechanism then increases the dimensionality of the edge features, improving the accuracy of weight-coefficient assignment and enriching the extracted interaction features. Second, the GCN's feature-passing mechanism propagates the vehicles' interaction features in graph form, fully representing the changes in the interaction relationships between vehicles, so that vehicle driving behavior prediction achieves higher accuracy. Finally, the features are input to a Transformer for training, and the prediction result is obtained through the fully connected layer, so that vehicle driving behavior can be predicted better.
Drawings
FIG. 1 is a flowchart of the WGCN-based vehicle driving behavior prediction method in the embodiment;
FIG. 2 is a schematic diagram of the network model for predicting vehicle driving behavior in the embodiment;
FIG. 3 is a schematic diagram of the edge-enhanced attention mechanism in the embodiment.
Detailed Description
For a further understanding of the present invention, reference is made to the following detailed description taken in conjunction with the accompanying drawings and examples. It is to be understood that the examples are illustrative of the invention and not restrictive.
Examples
As shown in fig. 1, the present embodiment provides a WGCN-based vehicle driving behavior prediction method, which comprises the following steps:
firstly, generating a feature matrix and a local map for each vehicle;
secondly, constructing a graph from the weighted feature matrix and the CNN-encoded local map, and inputting the graph into the GCN;
thirdly, extracting features from the input data through the GCN, using the GCN's edge-enhanced attention mechanism to increase the dimensionality of the edge features and improve the accuracy of weight-coefficient assignment, and using the GCN's feature-passing mechanism to propagate the vehicles' interaction features in graph form, fully representing the changes in the interaction relationships between vehicles;
fourthly, inputting the interaction features output by the GCN into a Transformer for training;
and fifthly, obtaining the vehicle driving behavior prediction result through a fully connected layer.
In the edge-enhanced attention mechanism, the edge feature vectors between vertex n and its surrounding vertices are used to compute the weight values of the surrounding vehicles; the final goal is to generate a weighted adjacency matrix representing the magnitude of influence between different vehicles. The matrices are computed as follows:
A = E′ W_a
A′ = softmax(A)
These formulas describe how the attention matrix A′ is obtained: first the edge feature matrix E is normalized to obtain E′; the normalized edge feature matrix is then multiplied by the trainable attention parameter matrix W_a to obtain the attention matrix A, and softmax is applied to A to obtain A′, so that the elements of A′ lie between 0 and 1 and represent different weights. Finally, the weighted adjacency matrix is obtained as
A_adj = E′ A′.
In the feature-passing mechanism, the adjacency relations and interaction features between vehicles are fed to the neural network in graph form, and the information is updated using the feature-passing scheme of the graph convolutional network, so that the network can fully extract the intrinsic relations between vehicles.
The feature matrix X and the weighted adjacency matrix A_adj are propagated as update information through the constructed interaction model; the update proceeds as
H^(k+1) = σ(A_adj H^k W^k)
where H^k is the hidden matrix; when k = 0, H^0 = g, with g = [P′, M′], P′ being the node feature matrix weighted by coefficients α_i (i = 1, 2, ..., m) and M′ = CNN(M); that is, g is the graph constructed from the weighted feature matrix and the CNN-encoded local map. The index k indicates that the computation is currently performed at the k-th layer of the GCN, and W^k is a trainable weight matrix that is updated during training. Finally, the product of the three matrices is passed through the activation function σ to obtain H^(k+1). Performing feature passing with the graph convolutional network captures the relational features of the input vehicle feature matrix and the weighted adjacency matrix.
The Transformer's overall execution speed is faster: for the same task, it converges nearly ten times faster than the LSTM, and its training is also faster than the LSTM's.
The Transformer does not use recurrence; it achieves an unlimited attention span by using global context. It does not need to process each element in order; instead it processes the entire sequence at once and builds an attention matrix in which each output is a weighted sum of the inputs. For example, in natural language processing, the French word "accord" can be expressed as "The" (weight 0) + "agreement" (weight 1) + ... over the English input words.
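The weighted-sum view of attention can be seen in a toy computation (the embeddings and scores below are made up purely for illustration): nearly all of the weight lands on "agreement", so the output essentially reproduces that input's vector.

```python
import numpy as np

tokens = ["The", "agreement"]
values = np.array([[1.0, 0.0],      # toy embedding of "The"
                   [0.0, 1.0]])     # toy embedding of "agreement"
scores = np.array([-8.0, 8.0])      # "accord" attends mostly to "agreement"
weights = np.exp(scores - scores.max())
weights /= weights.sum()            # softmax -> attention weights
output = weights @ values           # each output is a weighted sum of inputs
```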
However, the Transformer lacks explicit modeling in the time dimension; even with positional encoding, it falls short of the LSTM's inherently sequential modeling.
This problem can be solved with the historical weighted graph: since the historical weighted graph already contains the temporal information, the Transformer does not need to model it.
Network model
FIG. 2 is a schematic diagram of a network model for predicting vehicle driving behavior on a highway, depicting vehicles traveling in multiple lanes. The Cloud Server in the figure is located in the cloud and serves as the data and computing center for prediction tasks, while the MEC Server is located at the edge and assists the Cloud Server with storage and computation. The ego vehicle and surrounding vehicles exchange information via V2V, and vehicles can also communicate with the MEC Server and the Cloud Server via V2I. When the ego vehicle issues a prediction request, it must acquire the current driving information and the historical trajectory data of the surrounding vehicles, and then predict the surrounding vehicles' future driving behaviors from the acquired data, such as keeping speed and going straight, accelerating, decelerating, turning left, or turning right. In the scenario of this embodiment, multiple MEC servers are deployed at different geographic locations; these servers perform data collection and deep learning tasks together with the vehicles. The advantage of this arrangement is that the computing and storage capacity of each MEC server and vehicle can be fully utilized, and all information can be combined to obtain more complete data.
Edge-enhanced graph convolutional network model
To better extract the adjacency and interaction features between vehicles in complex dynamic scenes, this embodiment designs a graph convolutional network model based on an edge-enhanced attention mechanism. The model has two important mechanisms: the edge-enhanced attention mechanism and a feature-passing mechanism based on the graph convolutional network. The edge-enhanced attention mechanism improves the accuracy of weight-coefficient assignment by increasing the dimensionality of the edge features, so that the extracted interaction features are richer; the feature-passing mechanism introduces a node feature matrix and a weighted adjacency matrix, and passes and updates the node interaction features in the data form of a dynamic graph, fully describing the changing interaction relationships between the ego vehicle and the surrounding vehicles.
Edge-enhanced attention mechanism
"Edge enhancement" in the edge-enhanced attention mechanism means increasing the dimensionality of the edge features so that they can express more information; the attention mechanism assigns different weight coefficients to different vertices, giving them different priorities during processing. The higher a vertex's weight, the richer the information it carries and the greater its influence.
In an actual traffic scene, vehicle driving behavior is highly complex, so the vehicles' interaction features must be fully extracted to obtain a good prediction effect. Traditional graph neural network models such as the Graph Attention Network (GAT) and the Graph Convolutional Network (GCN) cannot meet this requirement well: the GCN only considers whether an edge exists between two vertices, and although the GAT can place a weight on each edge to represent each vertex's influence, that edge feature is only a single real number, i.e., the information contained in the edge is not rich enough. In summary, these two traditional graph neural network models cannot sufficiently express the edge features and therefore cannot effectively extract the vehicles' interaction features.
The attention mechanism is embodied in the relative states between vehicles. For example, when a certain vehicle is close to the ego vehicle, a collision is likely, so that vehicle is given a higher weight and its influence becomes large. The improvement of the edge-enhanced attention mechanism is that an edge can be characterized multi-dimensionally rather than by a single real number, so that the edge contains more information and the weight coefficients are assigned more accurately. As shown in fig. 3, the edge feature vectors between vertex n and its surrounding vertices are used to compute the weight values of the surrounding vehicles; the final goal of this step is to generate a weighted adjacency matrix representing the influence between different vehicles, computed as follows:
A = E′ W_a
A′ = softmax(A)
These formulas describe how the attention matrix A′ is obtained: first the edge feature matrix E is normalized to obtain E′; the normalized edge feature matrix is then multiplied by the trainable attention parameter matrix W_a to obtain the attention matrix A, and softmax is applied to A to obtain A′, so that the elements of A′ lie between 0 and 1 and represent different weights. Finally, the weighted adjacency matrix is obtained as
A_adj = E′ A′.
Feature-passing mechanism of the graph convolutional network
The adjacency relations and interaction features between vehicles are fed to the neural network in graph form, and the information is updated using the feature-passing scheme of the graph convolutional network, so that the network can more fully extract the intrinsic relations between vehicles. From the above, the feature matrix X and the weighted adjacency matrix A_adj are obtained; these two matrices are propagated as update information through the constructed interaction model, and the update proceeds as
H^(k+1) = σ(A_adj H^k W^k)
where H^k is the hidden matrix; when k = 0, H^0 = g, with g = [P′, M′], P′ being the node feature matrix weighted by coefficients α_i (i = 1, 2, ..., m) and M′ = CNN(M); that is, g is the graph constructed from the weighted feature matrix and the CNN-encoded local map. The index k indicates that the computation is currently performed at the k-th layer of the GCN, and W^k is a trainable weight matrix that is updated during training. Finally, the product of the three matrices is passed through the activation function σ to obtain H^(k+1). Thus, feature passing with the graph convolutional network captures the relational features of the input vehicle feature matrix and the weighted adjacency matrix; in effect, information is extracted from the complex traffic graph for processing in the subsequent steps. In summary, the GCN is essentially an information extractor.
Problem definition and modeling
A vehicle driving normally on a highway produces different driving behaviors, such as lane changing and going straight, so this embodiment first defines vehicle driving behavior: the behavior by which a vehicle takes a corresponding action on the road according to different traffic states, thereby changing its driving state. The behaviors can first be divided into three major categories: Keep Lane (KL), Turn Left (TL), and Turn Right (TR), written Act_3 = {KL, TL, TR}. On the basis of these three categories, the driving behaviors can be further divided into keep lane and accelerate (KLA), keep lane at constant speed (KLS), keep lane and decelerate (KLD), Turn Left (TL), and Turn Right (TR), written Act_5 = {KLA, KLS, KLD, TL, TR}.
Therefore, the problem to be solved by this embodiment can be described as follows: given the ego vehicle X_0, the surrounding vehicles X_k (k ∈ [1, n]), the node feature matrix X, and the historical features S over time T_his, predict the vehicles' driving behaviors after time T_fut:
Predict:
Y_pre = {y_0, y_1, ..., y_n}
Subject to:
[constraint formula given as an image in the original]
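The two label sets and their relationship can be written down directly; the helper below is a small sketch (the mapping of the three keep-lane variants onto KL follows from the definitions above):

```python
ACT3 = ("KL", "TL", "TR")
ACT5 = ("KLA", "KLS", "KLD", "TL", "TR")

def coarse_label(fine_label: str) -> str:
    """Map an Act_5 behavior to its Act_3 parent: the three keep-lane
    variants (accelerate, constant speed, decelerate) collapse into KL."""
    return "KL" if fine_label in ("KLA", "KLS", "KLD") else fine_label
```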
center vehicle with lane change behavior acquisition
Because vehicles generally tend to keep driving in their original lane and lane-change behaviors are comparatively rare, vehicles with lane-change behavior must be selected as ego vehicles so that predicting the driving behavior of surrounding vehicles is meaningful. Lane-changing vehicle data are therefore screened from the raw data with the following algorithm:
Algorithm 1: lane-change vehicle selection
[Algorithm 1 pseudocode given as an image in the original]
The main function of Algorithm 1 is to select the vehicle data F with lane-change behavior from the NGSIM I-80 data S. The key point is that records with the same Vehicle_ID must be arranged in time order, and the data of vehicles whose Lane_ID changes are added to F. Subsequent steps operate on this data.
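Since the pseudocode itself survives only as an image, the following is a sketch of the selection logic as described in the text (the record format mirrors the NGSIM column names mentioned above; the toy data are made up):

```python
from collections import defaultdict

def select_lane_change_vehicles(records):
    """records: iterable of dicts with Vehicle_ID, Frame_ID, Lane_ID.
    Returns the records of vehicles whose Lane_ID changes at least once
    when their frames are arranged in time order."""
    by_vehicle = defaultdict(list)
    for r in records:
        by_vehicle[r["Vehicle_ID"]].append(r)
    F = []
    for vid, rows in by_vehicle.items():
        rows.sort(key=lambda r: r["Frame_ID"])  # arrange in time order
        lanes = [r["Lane_ID"] for r in rows]
        if any(a != b for a, b in zip(lanes, lanes[1:])):  # lane change found
            F.extend(rows)
    return F

S = [
    {"Vehicle_ID": 1, "Frame_ID": 0, "Lane_ID": 2},
    {"Vehicle_ID": 1, "Frame_ID": 1, "Lane_ID": 3},  # vehicle 1 changes lane
    {"Vehicle_ID": 2, "Frame_ID": 0, "Lane_ID": 1},
    {"Vehicle_ID": 2, "Frame_ID": 1, "Lane_ID": 1},  # vehicle 2 keeps lane
]
F = select_lane_change_vehicles(S)
```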
Obtaining surrounding vehicles of a central vehicle
Algorithm 2: selecting the surrounding vehicles of a center vehicle
[Algorithm 2 pseudocode given as an image in the original]
The sensors deployed on vehicles have a limited range, and inter-vehicle interaction is also distance-limited: the farther apart two vehicles are, the smaller their mutual influence. It is therefore necessary to select the vehicles within a certain distance of the center vehicle as its surrounding vehicles. As shown in Algorithm 2, the center vehicle and all vehicles present in the current frame are determined from the Vehicle_ID and Frame_ID, the distance of each vehicle from the center vehicle is calculated, and the vehicles within distance dis are selected as valid surrounding vehicles.
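The distance filter can be sketched as follows (Euclidean distance and the per-frame record format are assumptions; the original pseudocode is not legible):

```python
import math

def surrounding_vehicles(center_id, frame, dis):
    """frame: list of dicts with Vehicle_ID, x, y for a single Frame_ID.
    Returns the IDs of vehicles within Euclidean distance dis of the
    center vehicle."""
    center = next(v for v in frame if v["Vehicle_ID"] == center_id)
    out = []
    for v in frame:
        if v["Vehicle_ID"] == center_id:
            continue
        d = math.hypot(v["x"] - center["x"], v["y"] - center["y"])
        if d <= dis:  # keep only valid surrounding vehicles
            out.append(v["Vehicle_ID"])
    return out

frame = [
    {"Vehicle_ID": 1, "x": 0.0,   "y": 0.0},
    {"Vehicle_ID": 2, "x": 10.0,  "y": 0.0},
    {"Vehicle_ID": 3, "x": 100.0, "y": 0.0},
]
near = surrounding_vehicles(1, frame, dis=30.0)
```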
Obtaining edge feature matrices
From the above, the vehicle interaction model designed in this embodiment is a graph structure with feature matrix X and edge features E. The feature matrix X is relatively simple to obtain and follows directly from the formula; the edge feature matrix E is more complex, and Algorithm 3 describes how it is obtained. The key step is to process the diagonal elements so that no element of the edge feature matrix E is 0.
Algorithm 3: computing the edge feature matrix
[Algorithm 3 pseudocode is shown as a figure in the original]
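Since Algorithm 3 also appears only as a figure, the sketch below illustrates the one property the text does state: after building pairwise edge features (here an inverse-distance weight, which is an illustrative assumption), the diagonal elements are processed so that no element of E is 0.

```python
import math

def edge_feature_matrix(positions, self_loop=1.0):
    """Build an n x n edge feature matrix E from vehicle positions.

    positions: list of (x, y) tuples. Off-diagonal entries use an
    inverse-distance feature (an illustrative choice, not taken from
    the original figure); diagonal entries are set to `self_loop` so
    that no element of E is 0, as the description requires.
    """
    n = len(positions)
    e = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                # Key step: process the diagonal so every element is nonzero.
                e[i][j] = self_loop
            else:
                (xi, yi), (xj, yj) = positions[i], positions[j]
                e[i][j] = 1.0 / (1.0 + math.hypot(xi - xj, yi - yj))
    return e
```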
The present invention and its embodiments have been described above schematically, and the description is not limiting; what is shown in the drawings is only one of the embodiments of the present invention, and the actual structure is not limited thereto. Therefore, if a person skilled in the art, enlightened by this teaching and without departing from the spirit of the invention, devises without inventive effort structural modes and embodiments similar to this technical solution, they shall fall within the protection scope of the invention.

Claims (5)

1. A WGCN-based vehicle driving behavior prediction method, characterized by comprising the following steps:
firstly, generating a feature matrix and a local map for each vehicle;
secondly, constructing a graph from the weighted feature matrix and the CNN-encoded local map, and inputting the graph into the GCN;
thirdly, extracting features of the input data through the GCN, using the edge-enhanced attention mechanism of the GCN to increase the dimensionality of the edge features and improve the accuracy of weight coefficient assignment, and using the feature propagation mechanism of the GCN to propagate the interaction features of the vehicles in graph form, fully characterizing the changing interaction relationships between vehicles;
fourthly, inputting the interaction features output by the GCN into a Transformer for training;
and fifthly, obtaining the prediction result of the vehicle driving behavior through a fully connected layer.
2. The WGCN-based vehicle driving behavior prediction method according to claim 1, characterized in that: in the first step, a feature matrix X = [P, M] is generated, where P is a node feature matrix including the position (x, y), the velocity (v_x, v_y) and the heading angle θ, and M is the local map;
[formula shown as a figure in the original]
3. The WGCN-based vehicle driving behavior prediction method according to claim 1, characterized in that: in the second step, the edge-enhancement in the edge-enhanced attention mechanism increases the dimensionality of the edge features so that they can express more information; the attention mechanism assigns different weight coefficients to different vertices, so that vertices have different priorities during processing: the higher a vertex's weight, the richer the information it represents and the greater its influence.
4. The WGCN-based vehicle driving behavior prediction method according to claim 3, characterized in that: in the edge-enhanced attention mechanism, the edge feature vectors between vertex n and the surrounding vertices are used to calculate the weight values of the surrounding vehicles; the final purpose is to generate a weighted adjacency matrix that represents the magnitude of the influence between different vehicles, expressed as follows:
A = E′W_a
A′=softmax(A)
The above formulas describe the computation of the attention matrix A′: first, the edge feature matrix E is normalized to obtain E′; E′ is then multiplied by the trainable attention parameter matrix W_a to obtain the attention matrix A; softmax is then applied to A to obtain A′, so that the elements of A′ lie in the range 0 to 1 and represent different weights; finally, the weighted adjacency matrix A_adj is obtained:
A_adj = E′A′.
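The three formulas of claim 4 can be sketched as follows, treating the edge features as scalars and using division by the maximum entry as the normalization of E (both simplifying assumptions; the claim leaves the normalization and the edge-feature dimensionality open):

```python
import math

def softmax_rows(m):
    """Row-wise softmax, mapping each row to weights in (0, 1) summing to 1."""
    out = []
    for row in m:
        mx = max(row)
        exps = [math.exp(v - mx) for v in row]
        s = sum(exps)
        out.append([v / s for v in exps])
    return out

def matmul(a, b):
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][t] * b[t][j] for t in range(inner))
             for j in range(cols)] for i in range(rows)]

def weighted_adjacency(e, w_a):
    """Compute A_adj = E'A' with A' = softmax(A) and A = E'W_a.

    e: n x n edge feature matrix; w_a: n x n trainable attention
    parameter matrix (here just given as numbers for illustration).
    """
    # Normalize E to obtain E' (max normalization is an assumption).
    mx = max(max(row) for row in e) or 1.0
    e_norm = [[v / mx for v in row] for row in e]
    a = matmul(e_norm, w_a)          # A = E'W_a
    a_prime = softmax_rows(a)        # A' = softmax(A)
    return matmul(e_norm, a_prime)   # A_adj = E'A'
```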
5. The WGCN-based vehicle driving behavior prediction method according to claim 4, characterized in that: in the feature propagation mechanism, the adjacency relations and interaction features between vehicles are fed to the neural network in graph form, and the information is updated using the feature propagation mode of the graph convolutional network, so that the network can fully extract the intrinsic relations between vehicles;
the feature matrix X and the weighted adjacency matrix A_adj serve as the update information propagated through the constructed interaction model; the specific update process is as follows:
H^{k+1} = σ(A_adj H^k W^k)
where H^k is the hidden matrix, and when k = 0, H^0 = g; a is a weight coefficient, i = 1, 2, …, m; M′ = CNN(M), g = [P′, M′], H^0 = g, i.e. the graph constructed from the feature matrix weighted by the coefficients a_i and the CNN-encoded local map; k indicates that the computation is currently at the k-th layer of the GCN; W^k represents a trainable weight matrix, which is updated during training; finally, the product of the three is passed through an activation function to obtain H^{k+1}; performing feature propagation with the graph convolutional network can capture the relational features of the input vehicle feature matrix and the weighted adjacency matrix.
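Under the reading that the update multiplies A_adj, H^k and W^k and passes the product through an activation function, one propagation step can be sketched as follows (ReLU as the activation σ is an assumption; the claim only says "an activation function"):

```python
def matmul(a, b):
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][t] * b[t][j] for t in range(inner))
             for j in range(cols)] for i in range(rows)]

def relu(x):
    return x if x > 0 else 0.0

def gcn_layer(a_adj, h, w):
    """One propagation step H^{k+1} = sigma(A_adj H^k W^k).

    a_adj: weighted adjacency matrix; h: hidden matrix H^k (H^0 = g);
    w: trainable weight matrix W^k for this layer.
    """
    z = matmul(matmul(a_adj, h), w)  # product of the three matrices
    return [[relu(v) for v in row] for row in z]
```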
CN202210494451.XA 2022-05-07 2022-05-07 WGCN-based vehicle driving behavior prediction method Active CN114926823B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210494451.XA CN114926823B (en) 2022-05-07 2022-05-07 WGCN-based vehicle driving behavior prediction method

Publications (2)

Publication Number Publication Date
CN114926823A true CN114926823A (en) 2022-08-19
CN114926823B CN114926823B (en) 2023-04-18

Family

ID=82809419

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210494451.XA Active CN114926823B (en) 2022-05-07 2022-05-07 WGCN-based vehicle driving behavior prediction method

Country Status (1)

Country Link
CN (1) CN114926823B (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002140714A (en) * 2000-10-31 2002-05-17 Konica Corp Feature variable accuracy judging method and image processor
CN112215337A (en) * 2020-09-30 2021-01-12 江苏大学 Vehicle trajectory prediction method based on environment attention neural network model
CN112906720A (en) * 2021-03-19 2021-06-04 河北工业大学 Multi-label image identification method based on graph attention network
WO2021108919A1 (en) * 2019-12-06 2021-06-10 The Governing Council Of The University Of Toronto System and method for generating a protein sequence
CN113299354A (en) * 2021-05-14 2021-08-24 中山大学 Small molecule representation learning method based on Transformer and enhanced interactive MPNN neural network
CN113362491A (en) * 2021-05-31 2021-09-07 湖南大学 Vehicle track prediction and driving behavior analysis method
EP3896581A1 (en) * 2020-04-14 2021-10-20 Naver Corporation Learning to rank with cross-modal graph convolutions
CN113954864A (en) * 2021-09-22 2022-01-21 江苏大学 Intelligent automobile track prediction system and method fusing peripheral vehicle interaction information
CN113989495A (en) * 2021-11-17 2022-01-28 大连理工大学 Vision-based pedestrian calling behavior identification method
CN114091450A (en) * 2021-11-19 2022-02-25 南京通达海科技股份有限公司 Judicial domain relation extraction method and system based on graph convolution network


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
刘文: "A Survey of Trajectory Prediction Methods for Moving Targets", Journal of Intelligent Science and Technology *
张立: "Research on Target-Specific Sentiment Classification Models Based on Deep Learning", China Master's Theses Full-text Database, Information Science and Technology *
李可: "Study on the Solar Temperature Field Effect of Round-Ended Hollow Tall Piers for High-Speed Railways", China Master's Theses Full-text Database, Engineering Science and Technology II *
陈佳丽等: "An Event Detection Method Fusing Dependency and Semantic Information via a Gating Mechanism", Journal of Chinese Information Processing *
高文靖: "Research on Image Sentiment Analysis Based on Attribute Learning", China Doctoral Dissertations Full-text Database, Information Science and Technology *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118025203A (en) * 2023-07-12 2024-05-14 江苏大学 Automatic driving vehicle behavior prediction method and system integrating complex network and graph converter
CN116959260A (en) * 2023-09-20 2023-10-27 东南大学 Multi-vehicle driving behavior prediction method based on graph neural network
CN116959260B (en) * 2023-09-20 2023-12-05 东南大学 Multi-vehicle driving behavior prediction method based on graph neural network
CN118289006A (en) * 2024-06-05 2024-07-05 浙江大学 Tunnel driving risk level assessment method and system based on vehicle bus data

Also Published As

Publication number Publication date
CN114926823B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
WO2022052406A1 (en) Automatic driving training method, apparatus and device, and medium
US11537134B1 (en) Generating environmental input encoding for training neural networks
US20230280702A1 (en) Hybrid reinforcement learning for autonomous driving
CN110850861B (en) Attention-based hierarchical lane-changing depth reinforcement learning
CN114926823B (en) WGCN-based vehicle driving behavior prediction method
CN110796856B (en) Vehicle lane change intention prediction method and training method of lane change intention prediction network
Li et al. Survey on artificial intelligence for vehicles
US11555706B1 (en) Processing graph representations of tactical maps using neural networks
EP4152204A1 (en) Lane line detection method, and related apparatus
Cai et al. Environment-attention network for vehicle trajectory prediction
US20200160104A1 (en) Binary Feature Compression for Autonomous Devices
US20200160117A1 (en) Attention Based Feature Compression and Localization for Autonomous Devices
US20200379461A1 (en) Methods and systems for trajectory forecasting with recurrent neural networks using inertial behavioral rollout
CN110356412B (en) Method and apparatus for automatic rule learning for autonomous driving
CN107229973A (en) The generation method and device of a kind of tactful network model for Vehicular automatic driving
Wang et al. End-to-end self-driving using deep neural networks with multi-auxiliary tasks
CN115056798A (en) Automatic driving vehicle lane change behavior vehicle-road cooperative decision algorithm based on Bayesian game
CN115303297B (en) Urban scene end-to-end automatic driving control method and device based on attention mechanism and graph model reinforcement learning
CN114368387B (en) Attention mechanism-based driver intention recognition and vehicle track prediction method
CN114030485A (en) Automatic driving automobile man lane change decision planning method considering attachment coefficient
CN114267191B (en) Control system, method, medium, equipment and application for relieving traffic jam of driver
CN116205024A (en) Self-adaptive automatic driving dynamic scene general generation method for high-low dimension evaluation scene
US20230196749A1 (en) Training Neural Networks for Object Detection
Khanum et al. Involvement of deep learning for vision sensor-based autonomous driving control: a review
Garlick et al. Real-time optimal trajectory planning for autonomous vehicles and lap time simulation using machine learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant