CN114926823A - WGCN-based vehicle driving behavior prediction method - Google Patents

WGCN-based vehicle driving behavior prediction method

Info

Publication number
CN114926823A
CN114926823A (application CN202210494451.XA; granted as CN114926823B)
Authority
CN
China
Prior art keywords
matrix
edge
feature
vehicles
gcn
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210494451.XA
Other languages
Chinese (zh)
Other versions
CN114926823B (en)
Inventor
李可
杨玲
张宏浩
王小宁
罗寿西
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Jiaotong University filed Critical Southwest Jiaotong University
Priority to CN202210494451.XA priority Critical patent/CN114926823B/en
Publication of CN114926823A publication Critical patent/CN114926823A/en
Application granted granted Critical
Publication of CN114926823B publication Critical patent/CN114926823B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems


Abstract

The invention relates to the technical field of vehicle driving behavior prediction, and in particular to a WGCN-based vehicle driving behavior prediction method comprising the following steps. First, a feature matrix and a local map are generated for each vehicle. Second, a graph is constructed from the weighted feature matrix and the CNN-encoded local map and input into the GCN. Third, the GCN extracts features from the input data: its edge-enhanced attention mechanism increases the dimensionality of the edge features, improving the accuracy of weight coefficient assignment, while its feature transfer mechanism lets the interaction features of the vehicles propagate in graph form, fully representing changes in the interaction relations between vehicles. Fourth, the interaction features output by the GCN are input into a Transformer for training. Fifth, the prediction of the vehicle's driving behavior is obtained through a fully connected layer. The invention achieves higher accuracy in vehicle driving behavior prediction.

Description

WGCN-based vehicle driving behavior prediction method

Technical Field

The invention relates to the technical field of vehicle driving behavior prediction, and in particular to a WGCN-based vehicle driving behavior prediction method.

Background Art

With the development of Internet of Vehicles and machine learning technology, traditional driver-assistance functions such as adaptive cruise control (ACC) and lane keeping (LK) can no longer satisfy the demand for vehicle intelligence, and a richer, more intelligent driver-assistance system is urgently needed; autonomous driving technology has therefore gradually come into view. To realize autonomous driving, a vehicle must first be equipped with many sensors, such as millimeter-wave radar, on-board image sensors, and the Global Positioning System (GPS), to obtain accurate real-time information about the vehicle itself and its surroundings. Second, the vehicle needs a powerful independent computing unit, such as Tesla's FSD (Full Self-Driving Computer) or Huawei's MDC (Mobile Data Center), to compute on massive amounts of data quickly and accurately. Finally, modern communication technology is needed so that the information collected by the vehicle and the vehicle's requests can be transmitted and answered quickly.

However, traditional autonomous vehicles are affected by factors such as the number of connected vehicles, the road environment, traffic conditions, and the computing power of the computing unit, which makes it difficult to provide a high quality of autonomous driving service. In addition, autonomous vehicles have limited communication resources; especially in complex traffic scenarios, this causes high latency in receiving and sending data, prevents real-time data from being processed, and raises safety problems.

The emergence of mobile edge computing and deep learning helps address the shortage of computing and communication resources for autonomous driving and can raise the intelligence of autonomous vehicles. Specifically, the various tasks in autonomous driving, such as vehicle recognition and vehicle behavior prediction, can be implemented through deep learning with high accuracy. One study proposed a hidden-Markov-chain-based method that can predict lane-changing behavior within 0.5-0.7 s; another obtained throttle, steering-wheel, and vehicle deflection angle information from sensors and achieved good driving behavior prediction results on the ACT-R architecture; yet another built a fuzzy neural network that combines a risk coefficient with a lane-change feasibility coefficient to judge the driver's behavioral intention. A neural network model obtained through deep learning can be placed on an edge server, and a vehicle can obtain low-latency, high-accuracy autonomous driving services by querying that server. One work proposed a real-time traffic prediction algorithm based on an LSTM network, which extracts vehicle behavior features by learning vehicle movement and interaction information and achieves high prediction accuracy; another represented traffic scenes with multi-channel grid maps and used a CNN to predict vehicle driving trajectories, with good results.

Although research on vehicle driving behavior prediction has made some progress, two problems still urgently need to be solved. First, existing studies give little consideration to the interaction information between vehicles, so the key information about how vehicles influence one another is missing, which affects prediction accuracy. Second, many current studies consider only the temporal or only the spatial characteristics of vehicles in isolation; research combining the two is lacking, so the features of vehicle driving information are insufficient.

Summary of the Invention

The object of the present invention is to provide a WGCN-based vehicle driving behavior prediction method that can overcome one or more defects of the prior art.

The WGCN-based vehicle driving behavior prediction method according to the present invention comprises the following steps:

1. Generate a feature matrix and a local map for each vehicle.

2. Construct a graph from the weighted feature matrix and the CNN-encoded local map, and input it into the GCN.

3. Extract features from the input data through the GCN: use the GCN's edge-enhanced attention mechanism to increase the dimensionality of the edge features and improve the accuracy of weight coefficient assignment, and use the GCN's feature transfer mechanism so that the interaction features of the vehicles propagate in graph form, fully representing changes in the interaction relations between vehicles.

4. Input the interaction features output by the GCN into a Transformer for training.

5. Obtain the prediction of the vehicle's driving behavior through a fully connected layer.

Preferably, in step 1, a feature matrix X is generated, X = [P, M], where P is the node feature matrix, containing position (x, y), velocity (vx, vy), and heading angle θ, and M is the local map;

P = [p1; p2; …; pm], where pi = (xi, yi, vxi, vyi, θi) is the state of vehicle i.
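As a minimal sketch of step 1, the per-vehicle state rows and a local grid map could be assembled as follows. The number of vehicles, the value ranges, and the 16x16 occupancy-grid encoding of M are illustrative assumptions, not specified in the patent.

```python
import numpy as np

m = 4                      # number of vehicles in the scene (assumed)
rng = np.random.default_rng(0)

# One row per vehicle: position (x, y), velocity (vx, vy), heading angle theta.
P = np.stack([
    np.concatenate([pos, vel, [theta]])
    for pos, vel, theta in zip(
        rng.uniform(0, 100, (m, 2)),   # (x, y) in metres (assumed range)
        rng.uniform(-5, 5, (m, 2)),    # (vx, vy) in m/s (assumed range)
        rng.uniform(-np.pi, np.pi, m)  # heading angle theta
    )
])

# Local map M per vehicle, here a binary occupancy grid (assumed encoding).
M = rng.integers(0, 2, (m, 16, 16)).astype(float)

print(P.shape)  # (4, 5)
```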

Preferably, the edge enhancement in the edge-enhanced attention mechanism increases the dimensionality of the edge features so that they can express more information; the attention mechanism assigns different weight coefficients to different vertices, and vertices with different weights have different priorities during processing: the higher a vertex's weight, the richer its information and the greater its influence.

Preferably, in the edge-enhanced attention mechanism, the weight values of the surrounding vehicles are computed from the edge feature vectors between vertex n and its surrounding vertices; the final goal is to generate a weighted adjacency matrix that represents the magnitude of the influence between different vehicles. The adjacency matrix is expressed as follows:

A = E′Wa

A′ = softmax(A)

The above formulas describe the computation of the attention matrix A′: the edge feature matrix E is first normalized to obtain E′, and E′ is multiplied by the trainable attention parameter matrix Wa to obtain the attention matrix A; a softmax is then applied to A to obtain A′, so that the elements of A′ lie between 0 and 1 and can represent different weights. Finally, the weighted adjacency matrix Aadj is obtained:

Aadj = E′A′.
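The attention computation described above (normalize E, multiply by Wa, apply softmax, then weight) could look like the sketch below. The patent does not give shapes, so the edge-feature dimension d, the row-wise normalization, and the reduction of the edge tensor to a scalar matrix before the final product are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
m, d = 4, 3                      # vehicles, edge-feature dimension (assumed)
E = rng.uniform(size=(m, m, d))  # multi-dimensional edge features ("enhancement")

# Normalize E to E' (here: scale each edge vector to unit sum; assumption).
E_norm = E / E.sum(axis=-1, keepdims=True)

# A = E' Wa: collapse edge features with the trainable attention parameters.
Wa = rng.normal(size=(d, 1))                 # trainable in the real model
A = (E_norm @ Wa).squeeze(-1)                # (m, m) attention scores

# A' = softmax(A) row-wise, so entries lie between 0 and 1.
expA = np.exp(A - A.max(axis=1, keepdims=True))
A_prime = expA / expA.sum(axis=1, keepdims=True)

# Aadj = E' A': edge strengths modulated by attention. Reducing E' to a
# scalar-per-edge matrix via a mean is an assumption for illustration.
E_scalar = E_norm.mean(axis=-1)
A_adj = E_scalar @ A_prime
print(A_adj.shape)  # (4, 4)
```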

Preferably, in the feature transfer mechanism, the adjacency relations and interaction features between vehicles are fed to the neural network in the form of a graph, and information is updated through the feature propagation of the graph convolutional network, so that the network can fully extract the intrinsic relations between vehicles.

The feature matrix X and the weighted adjacency matrix Aadj are propagated as update information in the constructed interaction model. The update process is as follows:

Hk+1 = σ(Aadj Hk Wk)

where Hk is the hidden matrix. When k = 0, H0 = g, with P′i = αiPi (i = 1, 2, …, m), where αi is the weight coefficient, M′ = CNN(M), and g = [P′, M′]; that is, H0 is the graph constructed from the weighted feature matrix and the CNN-encoded local map. k indicates that the computation takes place at the k-th layer of the GCN, and Wk is a trainable weight matrix that is updated during training. Finally, the product of the three matrices is passed through the activation function to obtain Hk+1. Propagating features with the graph convolutional network captures the relational features between the input vehicle feature matrix and the weighted adjacency matrix.

For the problem of vehicle driving behavior prediction in Internet of Vehicles scenarios, the present invention proposes a vehicle driving behavior prediction method based on WGCN (Weighted Graph Neural Network). The method first generates a feature matrix, then constructs a graph from the weighted feature matrix and the CNN-encoded local map, and uses the GCN's edge-enhanced attention mechanism to increase the dimensionality of the edge features, improving the accuracy of weight coefficient assignment and enriching the extracted interaction features. It then uses the feature transfer mechanism of the GCN so that the interaction features of the vehicles propagate in graph form, fully representing changes in the interaction relations between vehicles and thereby raising the accuracy of driving behavior prediction. Finally, the features are input into a Transformer for training, and the prediction is obtained through a fully connected layer, which predicts vehicle driving behavior well.

Brief Description of the Drawings

Fig. 1 is a flowchart of the WGCN-based vehicle driving behavior prediction method in the embodiment;

Fig. 2 is a schematic diagram of the vehicle driving behavior prediction network model in the embodiment;

Fig. 3 is a schematic diagram of the edge-enhanced attention mechanism in the embodiment.

Detailed Description of the Embodiments

To further explain the content of the present invention, the invention is described in detail below with reference to the accompanying drawings and the embodiment. It should be understood that the embodiment only explains the invention and does not limit it.

Embodiment

As shown in Fig. 1, this embodiment provides a WGCN-based vehicle driving behavior prediction method comprising the following steps:

1. Generate a feature matrix and a local map for each vehicle.

2. Construct a graph from the weighted feature matrix and the CNN-encoded local map, and input it into the edge-enhanced graph convolutional network (GCN).

3. Extract features from the input data through the GCN: use the GCN's edge-enhanced attention mechanism to increase the dimensionality of the edge features and improve the accuracy of weight coefficient assignment, and use the GCN's feature transfer mechanism so that the interaction features of the vehicles propagate in graph form, fully representing changes in the interaction relations between vehicles.

4. Input the interaction features output by the GCN into a Transformer for training.

5. Obtain the prediction of the vehicle's driving behavior through a fully connected layer.
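The five steps can be sketched end-to-end with stand-ins for the learned components: a mean over the grid replaces the CNN encoder, a single linear map replaces the Transformer and fully connected layer, and all shapes, weights, and the per-vehicle weighting α are illustrative assumptions rather than the patent's implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
m = 4

# Step 1: feature matrix and local maps.
P = rng.normal(size=(m, 5))                    # (x, y, vx, vy, theta)
M = rng.integers(0, 2, (m, 16, 16)).astype(float)

# Step 2: weight the features, "encode" the map, and build the graph input g.
alpha = rng.uniform(0.5, 1.5, size=(m, 1))     # per-vehicle weights (assumed)
g = np.hstack([alpha * P, M.reshape(m, -1).mean(axis=1, keepdims=True)])
A_adj = rng.uniform(size=(m, m))               # weighted adjacency (assumed given)

# Step 3: one GCN propagation step H^1 = relu(Aadj g W^0).
W0 = rng.normal(size=(g.shape[1], 16))
H1 = np.maximum(0.0, A_adj @ g @ W0)

# Steps 4-5: Transformer + fully connected layer, here collapsed into one
# linear map onto the five behaviour classes Act5 = {KLA, KLS, KLD, TL, TR}.
W_fc = rng.normal(size=(16, 5))
logits = H1 @ W_fc
pred = logits.argmax(axis=1)                   # one behaviour class per vehicle
print(pred.shape)  # (4,)
```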

In the edge-enhanced attention mechanism, the weight values of the surrounding vehicles are computed from the edge feature vectors between vertex n and its surrounding vertices; the final goal is to generate a weighted adjacency matrix that represents the magnitude of the influence between different vehicles. The adjacency matrix is expressed as follows:

A = E′Wa

A′ = softmax(A)

The above formulas describe the computation of the attention matrix A′: the edge feature matrix E is first normalized to obtain E′, and E′ is multiplied by the trainable attention parameter matrix Wa to obtain the attention matrix A; a softmax is then applied to A to obtain A′, so that the elements of A′ lie between 0 and 1 and can represent different weights. Finally, the weighted adjacency matrix Aadj is obtained:

Aadj = E′A′.

In the feature transfer mechanism, the adjacency relations and interaction features between vehicles are fed to the neural network in the form of a graph, and information is updated through the feature propagation of the graph convolutional network, so that the network can fully extract the intrinsic relations between vehicles.

The feature matrix X and the weighted adjacency matrix Aadj are propagated as update information in the constructed interaction model. The update process is as follows:

Hk+1 = σ(Aadj Hk Wk)

where Hk is the hidden matrix. When k = 0, H0 = g, with P′i = αiPi (i = 1, 2, …, m), where αi is the weight coefficient, M′ = CNN(M), and g = [P′, M′]; that is, H0 is the graph constructed from the weighted feature matrix and the CNN-encoded local map. k indicates that the computation takes place at the k-th layer of the GCN, and Wk is a trainable weight matrix that is updated during training. Finally, the product of the three matrices is passed through the activation function to obtain Hk+1. Propagating features with the graph convolutional network captures the relational features between the input vehicle feature matrix and the weighted adjacency matrix.

The Transformer executes faster overall: on the same task it converges in nearly ten times fewer rounds than the LSTM, and its training is also faster than the LSTM's.

The Transformer does not use recursion; it achieves an unlimited attention span through global comparison. Instead of processing each element of the sequence in order, it processes the whole sequence at once and creates an attention matrix in which each output is a weighted sum of the inputs. For example, in natural language processing the French word "accord" can be represented as "The(0) + agreement(1) + …", and the neural network learns the weights of the attention matrix.
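The "whole sequence at once, each output a weighted sum of the inputs" behavior described above can be sketched with self-attention over a toy sequence. The scaled dot-product form and all sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
seq_len, dim = 6, 8
X = rng.normal(size=(seq_len, dim))            # the whole input sequence

scores = X @ X.T / np.sqrt(dim)                # compare every pair at once
w = np.exp(scores - scores.max(axis=1, keepdims=True))
attn = w / w.sum(axis=1, keepdims=True)        # attention matrix; rows sum to 1

out = attn @ X                                 # each output: weighted sum of inputs
print(out.shape)  # (6, 8)
```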

However, the Transformer lacks modeling of the time dimension; even with position encoding there is still a gap between it and a naturally sequential network such as the LSTM.

This problem can be solved by the historical weighted graph: since the historical weighted graph already contains the temporal information, the Transformer does not need to model it.

Network Model

Fig. 2 is a schematic diagram of the network model for predicting vehicle driving behavior on a highway, depicting vehicles traveling in multiple lanes. The Cloud Server in the figure is a server located in the cloud that acts as the data and computing center for the prediction task, while the MEC servers are edge servers that assist the cloud server with storage and computation. The ego vehicle and the surrounding vehicles can exchange information via V2V, and a vehicle can also communicate with the MEC servers and the Cloud Server via V2I. When the ego vehicle issues a prediction request, it needs to obtain the current driving information and historical trajectory data of the surrounding vehicles, and then predict their future driving behaviors from these data, such as keeping speed and going straight, accelerating, decelerating, turning left, or turning right. In the scenario of this embodiment, multiple MEC servers at different geographic locations are also deployed; these servers perform data collection and deep learning tasks together with the vehicles. The advantage of this arrangement is that the computing and storage capacity of each MEC server and of the vehicles can be fully utilized, and all the information can be combined to obtain more complete data.

Edge-Enhanced Graph Convolutional Neural Network Model

To better extract the adjacency and interaction features between vehicles in complex dynamic scenes, this embodiment designs a graph convolutional neural network model based on an edge-enhanced attention mechanism. The model has two important mechanisms: the edge-enhanced attention mechanism and a feature transfer mechanism based on the graph convolutional network. The edge-enhanced attention mechanism increases the dimensionality of the edge features and improves the accuracy of weight coefficient assignment, making the extracted interaction features richer; the feature transfer mechanism transmits and updates node interaction features in the form of a dynamic graph by introducing the node feature matrix and the weighted adjacency matrix, thereby fully describing changes in the interaction relations between the ego vehicle and the surrounding vehicles.

Edge-Enhanced Attention Mechanism

The edge enhancement in the edge-enhanced attention mechanism refers to increasing the dimensionality of the edge features so that they can express more information; the attention mechanism refers to assigning different weight coefficients to different vertices, so that vertices with different weights have different priorities during processing. The higher a vertex's weight, the richer its information and the greater its influence.

In real traffic scenes, vehicle driving behavior is highly complex, so achieving good prediction requires fully extracting the interaction features of the vehicles. Traditional graph neural network models, such as the graph attention network (GAT) and the graph convolutional network (GCN), cannot satisfy this requirement well: the GCN considers only whether an edge exists between two vertices, and although the GAT can attach a weight to an edge to indicate the influence of each vertex, the edge feature is still only a single real number, meaning the features an edge carries are not rich enough. In short, these two traditional graph neural network models cannot sufficiently express edge features and therefore cannot effectively extract vehicle interaction features.

The attention mechanism reflects the relative state between vehicles: for example, a vehicle very close to the ego vehicle is prone to collision, so it is given a higher weight and its influence becomes larger. The improvement of the edge-enhanced attention mechanism is that an edge feature can be multi-dimensional rather than a single real number, so that the edge carries more information and the weight coefficients are assigned more accurately. As shown in Fig. 3, the weight values of the surrounding vehicles are computed from the edge feature vectors between vertex n and its surrounding vertices; the final purpose of this step is to generate a weighted adjacency matrix that represents the magnitude of the influence between different vehicles. The adjacency matrix is expressed as follows:

A = E′Wa

A′ = softmax(A)

The above formulas describe the computation of the attention matrix A′: the edge feature matrix E is first normalized to obtain E′, and E′ is multiplied by the trainable attention parameter matrix Wa to obtain the attention matrix A; a softmax is then applied to A to obtain A′, so that the elements of A′ lie between 0 and 1 and can represent different weights. Finally, the weighted adjacency matrix Aadj is obtained:

Aadj = E′A′.

Feature Transfer Mechanism of the Graph Convolutional Neural Network

车辆间的邻接关系与交互特征以图的形式作为神经网络的输入,利用图卷积神经网络的特征传递方式进行信息更新,使得网络能更加充分的提取车辆之间的内在关系。由上可以得到特征矩阵X与加权邻接矩阵Aadj,这两个矩阵作为更新信息在构建的交互模型中进行传播,具体更新过程如下:The adjacency relationship and interaction features between vehicles are used as the input of the neural network in the form of a graph, and the feature transfer method of the graph convolutional neural network is used to update the information, so that the network can more fully extract the internal relationship between the vehicles. From the above, the feature matrix X and the weighted adjacency matrix A adj can be obtained. These two matrices are used as update information to propagate in the constructed interaction model. The specific update process is as follows:

Hk+1=σ(AadjHkWk)

where Hk is the hidden matrix. When k=0, P′=[α1p1, α2p2, ..., αmpm], where α is the weight coefficient and i=1, 2, ..., m; M′=CNN(M); g=[P′, M′]; and H0=g, i.e., a graph built from the attention-weighted feature matrix and the CNN-encoded local map. k denotes the k-th GCN layer at which the computation is performed, and Wk denotes a trainable weight matrix that is updated during training. Finally, the product of the three matrices is passed through the activation function to obtain Hk+1. Feature transfer with a graph convolutional network can therefore capture well the relationship between the input vehicle feature matrix and the weighted adjacency matrix; in effect, it extracts information from a complex traffic graph for the subsequent processing steps. In short, the essence of the GCN is an information extractor.

Problem Definition and Modeling

A vehicle driving normally on a highway produces different driving behaviors, such as changing lanes or going straight, so this embodiment first defines vehicle driving behavior. Vehicle driving behavior refers to the actions a vehicle takes on the road in response to different traffic states, thereby changing its driving state. The behaviors can first be divided into three categories: keep lane (Keep Lane, KL), change lane to the left (Turn Left, TL), and change lane to the right (Turn Right, TR), denoted Act3={KL, TL, TR}. On top of these three categories, driving behavior can be further divided into: keep lane and accelerate (Keep Lane And Accelerate, KLA), keep lane at constant speed (Keep Lane And Speed, KLS), keep lane and slow down (Keep Lane And Slow Down, KLD), change lane to the left (Turn Left, TL), and change lane to the right (Turn Right, TR), denoted Act5={KLA, KLS, KLD, TL, TR}.
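The two label sets above can be encoded concretely as follows. The mapping from a lane-change direction and a longitudinal acceleration to an Act5 label, the sign convention for lane changes, and the threshold eps are all illustrative assumptions, not values taken from the embodiment.

```python
ACT3 = ("KL", "TL", "TR")
ACT5 = ("KLA", "KLS", "KLD", "TL", "TR")

def label_act5(d_lane, accel, eps=0.1):
    """Map a lane change (-1 = left, 0 = none, +1 = right; assumed convention)
    and a longitudinal acceleration to an Act5 behavior label."""
    if d_lane < 0:
        return "TL"
    if d_lane > 0:
        return "TR"
    if accel > eps:     # accelerating while keeping the lane
        return "KLA"
    if accel < -eps:    # slowing down while keeping the lane
        return "KLD"
    return "KLS"        # roughly constant speed
```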

The problem to be solved by this embodiment can therefore be described as follows: given the node feature matrix X of the ego vehicle X0 and the surrounding vehicles Xk (k∈[1, n]), and the historical features S over the time window This, predict the vehicle driving behavior after time Tfut:

Predict:

Ypre={y0, y1, ..., yn}

Subject to:

(The constraint is given by a formula presented as an image in the original publication.)

Obtaining the Center Vehicle with Lane-Changing Behavior

Since vehicles generally tend to keep driving in their current lane, lane-changing behavior occurs comparatively rarely. When selecting the ego vehicle, it is therefore necessary to choose vehicles that exhibit lane-changing behavior, so that predicting the driving behavior of the surrounding vehicles is meaningful. The data of lane-changing vehicles are thus filtered out of the raw data with the following algorithm:

Algorithm 1: Lane-Change Vehicle Selection

(Algorithm 1 is presented as an image in the original publication.)

The main function of Algorithm 1 is to select, from the NGSIM I-80 data S, the set F of vehicles with lane-changing behavior. The key point is that the records of each Vehicle_ID must be arranged in chronological order, and the data of vehicles whose Lane_ID changes are added to F. Subsequent steps operate on this data.
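A pandas sketch of Algorithm 1 might look as follows. The column names Vehicle_ID, Frame_ID, and Lane_ID are the NGSIM fields named in the text; the function name and the dataframe layout are assumptions of this example, not the patent's code.

```python
import pandas as pd

def select_lane_change_vehicles(S: pd.DataFrame) -> pd.DataFrame:
    """Return F: the records of all vehicles whose Lane_ID changes at least once."""
    # Arrange each Vehicle_ID's records in chronological order first.
    S = S.sort_values(["Vehicle_ID", "Frame_ID"])
    # A vehicle changed lanes iff its records span more than one Lane_ID.
    lane_counts = S.groupby("Vehicle_ID")["Lane_ID"].nunique()
    ids = lane_counts[lane_counts > 1].index
    return S[S["Vehicle_ID"].isin(ids)].reset_index(drop=True)
```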

Obtaining the Surrounding Vehicles of the Center Vehicle

Algorithm 2: Selecting the Surrounding Vehicles of the Center Vehicle

(Algorithm 2 is presented as an image in the original publication.)

The sensors deployed on a vehicle have a limited range, and the mutual influence between vehicles is likewise limited by distance: the farther apart two vehicles are, the weaker their influence on each other. It is therefore necessary to select only vehicles within a certain distance of the center vehicle as its surrounding vehicles. As shown in Algorithm 2, the center vehicle and all vehicles present in the current frame are determined from Vehicle_ID and Frame_ID, the distance between each vehicle and the center vehicle is computed, and the vehicles within distance dis are selected as the effective surrounding vehicles.
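In the same sketch style, Algorithm 2 reduces to a per-frame distance filter. Local_X and Local_Y are assumed NGSIM-style position columns; the function name and signature are likewise assumptions of this example.

```python
import numpy as np
import pandas as pd

def surrounding_vehicles(S, center_id, frame_id, dis):
    """Select vehicles within distance `dis` of the center vehicle in one frame."""
    frame = S[S["Frame_ID"] == frame_id]
    center = frame[frame["Vehicle_ID"] == center_id].iloc[0]
    # Euclidean distance from every vehicle in the frame to the center vehicle.
    d = np.hypot(frame["Local_X"] - center["Local_X"],
                 frame["Local_Y"] - center["Local_Y"])
    mask = (d <= dis) & (frame["Vehicle_ID"] != center_id)
    return frame[mask].reset_index(drop=True)
```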

Obtaining the Edge Feature Matrix

As described above, the vehicle interaction model designed in this embodiment is a graph structure with a feature matrix X and edge features E. Obtaining the feature matrix X is relatively simple, as it follows directly from the formula; the edge feature matrix E is more complex, and its computation is described by Algorithm 3. The key step is the handling of the diagonal elements, which ensures that no element of the edge feature matrix E is 0.

Algorithm 3: Computing the Edge Feature Matrix

(Algorithm 3 is presented as an image in the original publication.)
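A NumPy sketch of what Algorithm 3 might compute: here each edge feature is taken to be the relative position and relative velocity between a pair of vehicles (an assumed choice of features, since the patent does not spell them out in this passage), and the diagonal self-edges are filled with a small constant so that, as required above, the diagonal of E is not 0.

```python
import numpy as np

def edge_feature_matrix(pos, vel, eps=1e-3):
    """pos, vel: (n, 2) arrays of vehicle positions and velocities.
    Returns E of shape (n, n, 4): relative position and relative velocity per pair."""
    n = pos.shape[0]
    rel_pos = pos[None, :, :] - pos[:, None, :]   # (n, n, 2): pos[j] - pos[i]
    rel_vel = vel[None, :, :] - vel[:, None, :]   # (n, n, 2): vel[j] - vel[i]
    E = np.concatenate([rel_pos, rel_vel], axis=-1)
    # Key step from the text: replace the all-zero self-edges on the diagonal.
    idx = np.arange(n)
    E[idx, idx, :] = eps
    return E
```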

The present invention and its embodiments have been described above schematically; the description is not restrictive, and what is shown in the accompanying drawings is only one embodiment of the present invention, the actual structure not being limited thereto. Therefore, any structures and embodiments similar to this technical solution that a person of ordinary skill in the art, inspired by it, designs without creative effort and without departing from the purpose of the present invention shall fall within the protection scope of the present invention.

Claims (5)

1. A WGCN-based vehicle driving behavior prediction method, characterized by comprising the following steps: (1) generating a feature matrix and a local map for each vehicle; (2) building the weighted feature matrix and the CNN-encoded local map into one graph, which is input into the GCN; (3) extracting the features of the input data through the GCN, using the edge-enhanced attention mechanism of the GCN to increase the dimensionality of the edge features and improve the accuracy of the weight coefficient assignment, and using the feature transfer mechanism of the GCN so that the interaction features of the vehicles are transferred in the form of a graph, fully representing the changes in the interaction relationships between vehicles; (4) inputting the interaction features output by the GCN into a Transformer for training; (5) obtaining the prediction result of the vehicle driving behavior through a fully connected layer.

2. The WGCN-based vehicle driving behavior prediction method according to claim 1, characterized in that: in step (1), the feature matrix X is generated, X=[P, M], where P is the node feature matrix, including position (x, y), speed (vx, vy), and heading angle θ, and M is the local map;
(The node feature matrix P is given by a formula presented as an image in the original publication.)
3. The WGCN-based vehicle driving behavior prediction method according to claim 1, characterized in that: in step (2), the edge-enhancement in the edge-enhanced attention mechanism increases the dimensionality of the edge features so that they can express more information; the attention mechanism assigns different weight coefficients to different vertices, and vertices with different weights have different priorities during processing: the higher the weight of a vertex, the richer its information and the greater its influence.

4. The WGCN-based vehicle driving behavior prediction method according to claim 3, characterized in that: in the edge-enhanced attention mechanism, the weight values of the surrounding vehicles are computed from the edge feature vectors between vertex n and its neighboring vertices, the ultimate purpose being to generate a weighted adjacency matrix that represents the magnitude of influence between different vehicles; the adjacency matrix is expressed as follows:
E′=Normalize(E), A=E′Wa
A′=softmax(A)

The formulas above describe how the attention matrix A′ is obtained: first, the edge feature matrix E is normalized to obtain E′; then E′ is multiplied by the trainable attention parameter matrix Wa to obtain the attention matrix A; then a softmax is applied to A to obtain A′, so that the elements of A′ lie in the range 0 to 1 and can serve as weights; finally, the weighted adjacency matrix Aadj is obtained: Aadj=E′A′.
5. The WGCN-based vehicle driving behavior prediction method according to claim 4, characterized in that: in the feature transfer mechanism, the adjacency relations and interaction features between vehicles are fed to the neural network in the form of a graph, and information is updated through the feature transfer mechanism of the graph convolutional network, so that the network can fully extract the intrinsic relationships between vehicles; the feature matrix X and the weighted adjacency matrix Aadj propagate as update information through the constructed interaction model, and the update process is as follows:
Hk+1=σ(AadjHkWk)
where Hk is the hidden matrix; when k=0, P′=[α1p1, α2p2, ..., αmpm], where α is the weight coefficient and i=1, 2, ..., m; M′=CNN(M); g=[P′, M′]; and H0=g, i.e., a graph built from the attention-weighted feature matrix and the CNN-encoded local map; k denotes the k-th GCN layer at which the computation is performed, and Wk denotes a trainable weight matrix that is updated during training; finally, the product of the three matrices is passed through the activation function to obtain Hk+1; feature transfer with the graph convolutional network can capture the relationship features between the input vehicle feature matrix and the weighted adjacency matrix.
CN202210494451.XA 2022-05-07 2022-05-07 WGCN-based vehicle driving behavior prediction method Active CN114926823B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210494451.XA CN114926823B (en) 2022-05-07 2022-05-07 WGCN-based vehicle driving behavior prediction method


Publications (2)

Publication Number Publication Date
CN114926823A true CN114926823A (en) 2022-08-19
CN114926823B CN114926823B (en) 2023-04-18

Family

ID=82809419

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210494451.XA Active CN114926823B (en) 2022-05-07 2022-05-07 WGCN-based vehicle driving behavior prediction method

Country Status (1)

Country Link
CN (1) CN114926823B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116959260A (en) * 2023-09-20 2023-10-27 东南大学 A multi-vehicle driving behavior prediction method based on graph neural network
CN118025203A (en) * 2023-07-12 2024-05-14 江苏大学 Automatic driving vehicle behavior prediction method and system integrating complex network and graph converter
CN118289006A (en) * 2024-06-05 2024-07-05 浙江大学 Tunnel driving risk level assessment method and system based on vehicle bus data

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002140714A (en) * 2000-10-31 2002-05-17 Konica Corp Feature variable accuracy judging method and image processor
CN112215337A (en) * 2020-09-30 2021-01-12 江苏大学 A Vehicle Trajectory Prediction Method Based on Environmental Attention Neural Network Model
CN112906720A (en) * 2021-03-19 2021-06-04 河北工业大学 Multi-label image identification method based on graph attention network
WO2021108919A1 (en) * 2019-12-06 2021-06-10 The Governing Council Of The University Of Toronto System and method for generating a protein sequence
CN113299354A (en) * 2021-05-14 2021-08-24 中山大学 Small molecule representation learning method based on Transformer and enhanced interactive MPNN neural network
CN113362491A (en) * 2021-05-31 2021-09-07 湖南大学 Vehicle track prediction and driving behavior analysis method
EP3896581A1 (en) * 2020-04-14 2021-10-20 Naver Corporation Learning to rank with cross-modal graph convolutions
CN113954864A (en) * 2021-09-22 2022-01-21 江苏大学 Intelligent automobile track prediction system and method fusing peripheral vehicle interaction information
CN113989495A (en) * 2021-11-17 2022-01-28 大连理工大学 Vision-based pedestrian calling behavior identification method
CN114091450A (en) * 2021-11-19 2022-02-25 南京通达海科技股份有限公司 Judicial domain relation extraction method and system based on graph convolution network


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Liu Wen: "A Survey of Trajectory Prediction Methods for Moving Targets", Chinese Journal of Intelligent Science and Technology *
Zhang Li: "Research on Deep-Learning-Based Aspect-Level Sentiment Classification Models", China Master's Theses Full-text Database, Information Science and Technology *
Li Ke: "Research on the Sunlight-Induced Temperature Field Effect of Round-Ended Hollow High Piers of High-Speed Railways", China Master's Theses Full-text Database, Engineering Science and Technology II *
Chen Jiali et al.: "An Event Detection Method Fusing Dependency and Semantic Information via a Gating Mechanism", Journal of Chinese Information Processing *
Gao Wenjing: "Research on Image Sentiment Analysis Based on Attribute Learning", China Doctoral Dissertations Full-text Database, Information Science and Technology *


Also Published As

Publication number Publication date
CN114926823B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
WO2022052406A1 (en) Automatic driving training method, apparatus and device, and medium
US11899411B2 (en) Hybrid reinforcement learning for autonomous driving
CN110647839B (en) Method, device and computer-readable storage medium for generating automatic driving strategy
CN110796856B (en) Vehicle lane change intention prediction method and training method of lane change intention prediction network
CN114926823B (en) WGCN-based vehicle driving behavior prediction method
CN107229973B (en) Method and device for generating strategy network model for automatic vehicle driving
CN112232490B (en) Visual-based depth simulation reinforcement learning driving strategy training method
CN112550314B (en) Embedded optimization control method suitable for unmanned driving, its driving control module and automatic driving control system
CN115331460A (en) Large-scale traffic signal control method and device based on deep reinforcement learning
CN113552883A (en) A method and system for autonomous driving of ground unmanned vehicles based on deep reinforcement learning
CN118861965A (en) Cascaded deep reinforcement learning safety decision-making method based on multimodal spatiotemporal representation
CN119283896A (en) A trajectory planning method for autonomous driving vehicles
CN116729433A (en) End-to-end automatic driving decision planning method and equipment combining element learning multitask optimization
CN119418583B (en) Intelligent driving skill training method and system based on behavior cloning and reinforcement learning
CN114997048A (en) Automatic driving vehicle lane keeping method based on TD3 algorithm improved by exploration strategy
CN114267191A (en) Control system, method, medium, equipment and application for relieving traffic jam of driver
CN118928464A (en) Method and device for generating automatic driving decision based on hybrid expert model
Chen et al. Empowering IoT-Based Autonomous Driving via Federated Instruction Tuning With Feature Diversity
CN115909733A (en) Driving intention prediction method based on cross-domain perception and mental theory
CN114779764A (en) Vehicle reinforcement learning motion planning method based on driving risk analysis
Wu et al. Federated learning-based driving strategies optimization for intelligent connected vehicles
Cao et al. The Design of Vehicle Profile Based on Multivehicle Collaboration for Autonomous Vehicles in Roundabouts
CN116863430B (en) Point cloud fusion method for automatic driving
CN119129641B (en) Multi-agent cooperative control method for traffic scenarios
CN116360429B (en) An autonomous driving decision-making method based on adaptive discriminator

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant