CN112434735B - A method, system and device for constructing dynamic driving conditions - Google Patents

A method, system and device for constructing dynamic driving conditions

Info

Publication number
CN112434735B
Authority
CN
China
Prior art keywords
input
clustering
cluster
segment
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011320811.1A
Other languages
Chinese (zh)
Other versions
CN112434735A (en)
Inventor
康宇
裴丽红
许镇义
赵振怡
刘斌琨
曹洋
吕文君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN202011320811.1A priority Critical patent/CN112434735B/en
Publication of CN112434735A publication Critical patent/CN112434735A/en
Application granted granted Critical
Publication of CN112434735B publication Critical patent/CN112434735B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/80 Technologies aiming to reduce greenhouse gasses emissions common to all road transportation technologies
    • Y02T10/84 Data processing systems or methods, management, administration

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a method for constructing dynamic driving conditions, comprising the following steps: acquiring vehicle speed data and preprocessing it to generate input segments X; constructing a joint learning framework based on a deep neural network and a bidirectional long short-term memory network, and feeding the input segments into the framework to obtain a feature space Z; realizing soft-assignment clustering of the feature space Z with a regularization term based on relative entropy, and obtaining a clustering result after iterative updating; and classifying the input segments according to the correspondence between the clustering result and the input segments to obtain segment libraries of various types, from which input segments are selected to form the driving condition.

Description

A Method, System and Device for Constructing Dynamic Driving Conditions

Technical Field

The present invention relates to the technical field of environmental detection, and in particular to a method, system and device for constructing dynamic driving conditions.

Background Art

The "Technical Policy for the Prevention and Control of Motor Vehicle Pollution", revised under the organization of the Ministry of Environmental Protection, points out that a motor vehicle pollution prevention and control system should be built with further improvement of environmental quality at its core, and that the prevention and control of motor vehicle pollution should be made more systematic, scientific and information-based. The Technical Policy makes clear that emission limits for pollutants from newly produced motor vehicles, such as carbon monoxide (CO), total hydrocarbons (THC), nitrogen oxides (NOx) and particulate matter (PM), will be tightened step by step. According to the "2020-2025 China Motor Vehicle Industry Market Prospects and Investment Opportunities Research Report", the national motor vehicle fleet reached 348 million vehicles by the end of 2019; China is the world's largest automobile consumption market and producer, and its motor vehicle fleet has ranked among the largest in the world for years. With the rapid growth of vehicle ownership, urban traffic congestion and vehicle exhaust pollution have become increasingly serious. Motor vehicle pollutant emissions are mainly affected by driving conditions: long idling times and frequent acceleration and deceleration in traffic jams, for example, lead to higher exhaust emissions. Driving condition construction is a method of building vehicle driving profiles from typical traffic conditions and plays an important role in evaluating vehicle emissions, fuel economy and driving range.

At present, driving condition construction methods fall mainly into two categories: Markov analysis and cluster analysis. The Markov analysis method treats the speed-time relationship of the driving process as a stochastic process and, exploiting the property that the state at time t depends only on the state at time t-1 (i.e., the process has no after-effect), combines different modal events to form the complete driving profile. The cluster analysis method divides all micro-trip segments into several classes according to their similarity and then selects segments from each class library according to certain principles to form the final driving cycle curve. Compared with Markov analysis, cluster analysis yields distinct classes of operating conditions, is closer to actual road conditions, and is simple to implement.

Because actual traffic conditions and road characteristics differ across urban areas, they all affect a vehicle's driving cycle; meanwhile, with the development of new energy vehicles, vehicle-model data are becoming ever richer. Traditional methods use hand-designed features to represent the spatial speed-time distribution of driving data and treat the driving data as static, so their inherent dynamic characteristics and temporal dependencies are often ignored, resulting in low accuracy and insufficient robustness.

Summary of the Invention

To solve the above technical problems, the present invention provides a method, system and device for constructing dynamic driving conditions.

To solve the above technical problems, the present invention adopts the following technical solution:

A method for constructing dynamic driving conditions comprises the following steps:

Step 1: acquire the speed data of a vehicle and preprocess it to generate input segments X;

Step 2: construct a joint learning framework based on a deep neural network and a bidirectional long short-term memory network, and feed the input segments into the joint learning framework to obtain a feature space Z;

Step 3: realize soft-assignment clustering of the feature space Z using a regularization term based on relative entropy, and obtain the clustering result after iterative updating;

Step 4: according to the correspondence between the clustering result and the input segments, classify the input segments into several types of segment libraries, and select input segments from the various libraries to form the driving condition.

Specifically, in Step 1, when the speed data is preprocessed, invalid data are removed and missing values are filled; micro-trip segments are extracted from the speed data to generate a micro-trip segment library; the micro-trip segment library is interpolated to obtain a library of equal-length sequences, and the equal-length sequence library is normalized to obtain the input segments.

Specifically, in Step 2, when the input segments are fed into the joint learning framework, the framework includes an autoencoder consisting of an encoder and a decoder; the encoder processes the input segments successively through a deep neural network and a bidirectional long short-term memory network;

the deep neural network learns short-time-scale waveforms in the input segments and extracts their local features;

the bidirectional long short-term memory network learns the temporal connections between waveforms across time scales in the input segments and extracts their global features, thereby forming the feature space Z;

the decoder reconstructs the feature space using upsampling and deconvolution to form reconstructed segments X';

the autoencoder is pre-trained so that the reconstructed segments X' output by the decoder have the minimum mean square error with respect to the input segments,

Loss_ae = (1/n) Σ_{i=1}^{n} ‖x_i - x'_i‖².

Specifically, in Step 3, when soft-assignment clustering of the feature sequences is realized using the relative-entropy-based regularization term, the joint learning framework further includes a temporal clustering layer for clustering the feature space; the encoder and the temporal clustering layer are updated iteratively until a stable result is obtained, and the input segments are finally clustered into k_0 types of segment libraries, where k_0 is the optimal number of clusters. This comprises the following steps:

Step 41: use the Euclidean distance ED to compute the distance d_ij from each element z_i of the feature space to the cluster center c_j;

Step 42: normalize the distances d_ij into a probability assignment using Student's t distribution; the probability that feature vector z_i belongs to the j-th cluster is

q_ij = (1 + d_{ij}²/α)^(-(α+1)/2) / Σ_{j'} (1 + d_{ij'}²/α)^(-(α+1)/2),

where the larger the value of q_ij, the closer the feature vector z_i is to the cluster center and the more likely it belongs to the j-th cluster, and α is the number of degrees of freedom of the Student's t distribution;

Step 43: the target distribution p_ij is set to a delta distribution over the data points whose assignment probability exceeds a confidence threshold (p_ij = 1 for the cluster with the largest q_ij of such a point and p_ij = 0 otherwise), and the remaining values are ignored;

Step 44: the objective of iterative training is set to minimizing the relative entropy loss between the probability assignment q_ij and the target distribution p_ij,

Loss_C = (1/n) Σ_i Σ_j p_ij log(p_ij / q_ij);

Step 45: the total loss is Loss_total = Loss_C + λ·Loss_ae, where λ is a proportionality coefficient and Loss_C serves as a regularization term that prevents the encoder feature-extraction process from overfitting.

Specifically, the optimal number of clusters is selected according to the Davies-Bouldin index (DBI), comprising the following steps:

set a value of k and use it to train the encoder-decoder and clustering network;

compute the DBI value of the clustering result for each value of k:

DBI = (1/k) Σ_{i=1}^{k} max_{j≠i} [ (S_i + S_j) / ‖c_i - c_j‖_2 ],

where k is the number of clusters; ‖c_i - c_j‖_2 is the Euclidean distance between the centroid of cluster i and the centroid of cluster j; S_i is the average distance from the feature vectors in cluster i to its centroid and represents the dispersion of the data in cluster i, and S_j is the average distance from the feature vectors in cluster j to its centroid and represents the dispersion of the data in cluster j, with

S_i = ( (1/M_i) Σ_{s=1}^{M_i} ‖X_is - c_i‖^p )^{1/p} and S_j = ( (1/M_j) Σ_{s=1}^{M_j} ‖X_js - c_j‖^p )^{1/p};

M_i is the number of data points in cluster i and M_j the number in cluster j; X_is is the s-th data point in cluster i and X_js the s-th data point in cluster j; c_i is the centroid of cluster i and c_j the centroid of cluster j; p is usually taken as 2;

the value of k at which the DBI first reaches a local minimum is selected as the optimal number of clusters k_0.

Specifically, in Step 3, when soft-assignment clustering of the feature sequences is realized using the relative-entropy-based regularization term, the K-means algorithm is used to initialize the cluster centers.

Specifically, in Step 4, when input segments are selected from the various segment libraries to form the driving condition, the feature vectors in the feature space of the clustering result carry class labels; the feature vectors under each class label are sorted by the ratio of the within-class distance to the between-class distance, which determines the priority of the feature vectors under each label; the number of segments selected from each segment library is determined by the proportion of time that library occupies in the overall segment library, and input segments are selected according to the priority of the feature vectors under each label to form the driving condition.

Specifically, the constructed driving condition is evaluated by two methods: relative error and the joint speed-acceleration distribution.

A system for constructing dynamic driving conditions comprises:

a data acquisition module, which acquires the speed data of a vehicle and preprocesses it to generate input segments X;

an encoding module, which constructs a joint learning framework based on a deep neural network and a bidirectional long short-term memory network and feeds the input segments into the joint learning framework to obtain a feature space Z;

a clustering module, which realizes soft-assignment clustering of the feature space Z using a regularization term based on relative entropy and obtains a clustering result after iterative updating;

a driving condition construction module, which classifies the input segments according to the correspondence between the clustering result and the input segments to obtain several types of segment libraries and selects input segments from the various libraries to form the driving condition.

A computer device comprises a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the construction method described above when executing the computer program.

Compared with the prior art, the beneficial technical effects of the present invention are:

Unlike traditional driving condition construction methods, the present invention adopts an unsupervised joint feature learning and clustering framework that exploits the continuity of driving data and takes the time dependence of the dynamic data into account; the construction process uses no hand-designed feature representations and no manual segment selection, and it achieves more accurate and more robust condition model construction on real driving data.

Brief Description of the Drawings

Fig. 1 is a schematic flowchart of the driving condition construction method of the present invention;

Fig. 2 is a schematic structural diagram of the joint learning framework of the present invention;

Fig. 3 is a visualization of the clustering results of the present invention;

Fig. 4 shows the driving condition model constructed by the present invention;

Fig. 5 is a visualization of the driving speed of the test vehicle of the present invention;

Fig. 6 is a visualization of the estimated CO emission status of the present invention;

Fig. 7 lists the calculation coefficients for each pollutant.

Detailed Description of the Embodiments

A preferred embodiment of the present invention is described in detail below with reference to the accompanying drawings.

As shown in Figs. 1 and 2, a method for constructing dynamic driving conditions includes the following steps.

S1: acquire the speed data of the vehicle and preprocess it to generate input segments X.

Specifically, in Step S1, when the speed data is preprocessed, invalid data are removed and missing values are filled; micro-trip segments are extracted from the speed data to generate a micro-trip segment library; the micro-trip segment library is interpolated to obtain a library of equal-length sequences, and the equal-length sequence library is normalized to obtain the input segments.

When interpolating the micro-trip segment library, both cubic spline interpolation and linear interpolation are applied, so that each single-column sequence is combined into a two-column input.

A micro-trip segment consists of an idle segment of no more than 180 s (speed held at 0) and a kinematic segment (speed always greater than 0); each extracted micro-trip segment runs from one idle state to the end of the next idle state.
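
As an illustration of this preprocessing step, the following Python sketch extracts micro-trip segments from a cleaned 1 Hz speed trace and resamples each of them to equal length with cubic-spline and linear interpolation. The function names, the 180 s idle cap, the 200-sample target length and the normalization rule are assumptions made for the example, not values fixed by this specification.

import numpy as np
from scipy.interpolate import CubicSpline

def extract_micro_trips(speed, max_idle=180):
    """Split a 1 Hz speed trace into micro-trips: an idle stretch (v == 0)
    followed by a kinematic stretch (v > 0), ending at the next idle state."""
    trips, i, n = [], 0, len(speed)
    while i < n:
        idle_start = i
        while i < n and speed[i] <= 0:          # idle part, speed held at 0
            i += 1
        kin_start = i
        while i < n and speed[i] > 0:           # kinematic part, speed > 0
            i += 1
        if i > kin_start:                       # keep only segments containing motion
            idle_from = max(idle_start, kin_start - max_idle)
            trips.append(np.asarray(speed[idle_from:i], dtype=float))
    return trips

def to_equal_length(trip, length=200):
    """Resample a micro-trip to a fixed length; cubic-spline and linear
    interpolation give the two-column input described above."""
    t_old = np.linspace(0.0, 1.0, len(trip))
    t_new = np.linspace(0.0, 1.0, length)
    linear = np.interp(t_new, t_old, trip)
    spline = CubicSpline(t_old, trip)(t_new) if len(trip) >= 4 else linear
    x = np.stack([spline, linear], axis=1)      # shape (length, 2)
    return x / (np.abs(x).max() + 1e-8)         # simple per-segment normalization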

S2: construct a joint learning framework based on a deep neural network and a bidirectional long short-term memory network, and feed the input segments into the joint learning framework to obtain the feature space Z.

Specifically, in Step S2, when the input segments are fed into the joint learning framework, the framework includes an autoencoder consisting of an encoder and a decoder; the encoder processes the input segments successively through a deep neural network and a bidirectional long short-term memory network;

the deep neural network learns short-time-scale waveforms in the input segments and extracts their local features;

the bidirectional long short-term memory network learns the temporal connections between waveforms across time scales in the input segments and extracts their global features, thereby forming the feature space Z;

the decoder reconstructs the feature space using upsampling and deconvolution to form reconstructed segments X';

the autoencoder is pre-trained so that the reconstructed segments X' output by the decoder have the minimum mean square error with respect to the input segments,

Loss_ae = (1/n) Σ_{i=1}^{n} ‖x_i - x'_i‖².

The purpose of this step is to handle the time dependence of the dynamic data and to achieve nonlinear dimensionality reduction along the time dimension.
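
A minimal PyTorch sketch of such an encoder-decoder is given below, assuming 2-channel input segments of 200 samples; the layer sizes, kernel widths and the 32-dimensional feature space are illustrative choices rather than the exact architecture of the patent.

import torch
import torch.nn as nn

class CycleAutoencoder(nn.Module):
    """1-D CNN + bidirectional LSTM encoder with an upsampling/deconvolution decoder."""

    def __init__(self, in_ch=2, feat_dim=32, seq_len=200):
        super().__init__()
        self.t_reduced = seq_len // 4              # length after two stride-2 convolutions
        # deep (convolutional) network: local, short-time-scale features
        self.cnn = nn.Sequential(
            nn.Conv1d(in_ch, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        )
        # bidirectional LSTM: global features across time scales
        self.bilstm = nn.LSTM(32, feat_dim // 2, batch_first=True, bidirectional=True)
        # decoder: upsampling + transposed convolutions back to the input shape
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2),
            nn.ConvTranspose1d(feat_dim, 16, kernel_size=5, stride=2,
                               padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, in_ch, kernel_size=5, stride=1, padding=2),
        )

    def forward(self, x):                          # x: (batch, in_ch, seq_len)
        h = self.cnn(x).transpose(1, 2)            # (batch, seq_len/4, 32)
        out, _ = self.bilstm(h)                    # (batch, seq_len/4, feat_dim)
        z = out.mean(dim=1)                        # feature vector z: (batch, feat_dim)
        dec_in = z.unsqueeze(-1).repeat(1, 1, self.t_reduced)
        x_rec = self.decoder(dec_in)               # reconstructed segment X'
        return z, x_rec

# Pre-training minimizes the mean squared reconstruction error Loss_ae.
model = CycleAutoencoder()
x = torch.randn(8, 2, 200)                         # dummy mini-batch of input segments
z, x_rec = model(x)
loss_ae = torch.mean((x - x_rec) ** 2)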

Specifically, in Step S3, when soft-assignment clustering of the feature sequences is realized using the relative-entropy-based regularization term, the K-means algorithm is used to initialize the cluster centers.

S3: realize soft-assignment clustering of the feature space Z using the regularization term based on relative entropy, and obtain the clustering result after iterative updating.

Specifically, in Step S3, when soft-assignment clustering of the feature sequences is realized using the relative-entropy-based regularization term, the joint learning framework further includes a temporal clustering layer for clustering the feature space; the encoder and the temporal clustering layer are updated iteratively until a stable result is obtained, and the input segments are finally clustered into k_0 types of segment libraries, where k_0 is the optimal number of clusters. This comprises the following steps:

Step 41: use the Euclidean distance ED to compute the distance d_ij from each element z_i of the feature space to the cluster center c_j;

Step 42: normalize the distances d_ij into a probability assignment using Student's t distribution; the probability that feature vector z_i belongs to the j-th cluster is

q_ij = (1 + d_{ij}²/α)^(-(α+1)/2) / Σ_{j'} (1 + d_{ij'}²/α)^(-(α+1)/2),

where the larger the value of q_ij, the closer the feature vector z_i is to the cluster center and the more likely it belongs to the j-th cluster, and α is the number of degrees of freedom of the Student's t distribution;

Step 43: the target distribution p_ij is set to a delta distribution over the data points whose assignment probability exceeds a confidence threshold (p_ij = 1 for the cluster with the largest q_ij of such a point and p_ij = 0 otherwise), and the remaining values are ignored;

Step 44: the objective of iterative training is set to minimizing the relative entropy loss between the probability assignment q_ij and the target distribution p_ij,

Loss_C = (1/n) Σ_i Σ_j p_ij log(p_ij / q_ij);

Step 45: the total loss is Loss_total = Loss_C + λ·Loss_ae, where λ is a proportionality coefficient and Loss_C serves as a regularization term that prevents the encoder feature-extraction process from overfitting. Since the autoencoder has already been pre-trained, only fine-tuning is needed; in this embodiment the proportionality coefficient is taken as the constant 0.01.
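
Steps 41 to 45 can be sketched in Python as follows. The hard, thresholded target distribution used here is one reasonable reading of the "delta distribution above a confidence threshold" described above, and the helper names and the 0.9 threshold are assumptions.

import torch

def soft_assign(z, centers, alpha=1.0):
    """Student-t soft assignment q_ij of feature vectors z to cluster centers (Steps 41-42)."""
    d2 = torch.cdist(z, centers) ** 2                        # squared distances d_ij^2
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(dim=1, keepdim=True)

def target_distribution(q, threshold=0.9):
    """One-hot (delta) target p_ij for points whose top assignment exceeds the threshold (Step 43)."""
    conf, idx = q.max(dim=1)
    p = torch.zeros_like(q)
    keep = conf > threshold
    p[keep, idx[keep]] = 1.0
    return p, keep

def total_loss(x, x_rec, q, p, keep, lam=0.01):
    """Loss_total = Loss_C + lambda * Loss_ae (Steps 44-45); lambda = 0.01 in this embodiment."""
    loss_ae = torch.mean((x - x_rec) ** 2)
    kl = (p[keep] * torch.log(p[keep].clamp_min(1e-8) / q[keep].clamp_min(1e-8))).sum(dim=1)
    loss_c = kl.mean() if keep.any() else torch.zeros((), device=q.device)
    return loss_c + lam * loss_ae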

The joint learning framework is trained on the speed data; that is, the encoder and the temporal clustering layer are updated iteratively until a stable result is obtained. The procedure is as follows.

Input:

the set of speed values of the vehicle micro-trip segments, i.e. the input segments X; the training sample size n; the number of pre-training iterations iteration0; the number of optimization iterations iteration1; the number of clusters k.

Output: the trained encoder-decoder network θ; the cluster centers C.

The specific procedure is as follows:

1: initialize the learning parameters θ, the learning rate η and the momentum v;

2: for the i-th random selection of n training samples x (1 ≤ i ≤ iteration0) do {

3: the encoder network outputs Z;

4: the decoder network outputs X';

5: compute the loss function Loss_ae(i) according to the formula;

6: update the weight parameters θ_i by gradient descent with momentum on Loss_ae; }

7: end for

8: initialize the cluster centers C, the learning rate η, the exponential decay rate β1 of the first-moment estimate, the exponential decay rate β2 of the second-moment estimate, the constant ε, the first-order momentum term m_0 and the second-order momentum term v_0;

9: for the i-th random selection of n training samples x (1 ≤ i ≤ iteration1) do {

10: compute the KL divergence Loss_C according to the formula;

11: compute the total loss function Loss_total according to the formula;

12: compute the corrected first-order momentum term, m_i = β1·m_{i-1} + (1 - β1)·g_i and m̂_i = m_i / (1 - β1^i), where g_i is the gradient of Loss_total;

13: compute the corrected second-order momentum term, v_i = β2·v_{i-1} + (1 - β2)·g_i² and v̂_i = v_i / (1 - β2^i);

14: update the encoder-decoder network weight parameters, θ_i ← θ_{i-1} - η·m̂_i / (√v̂_i + ε);

15: update the cluster centers C using the gradient of Loss_total with respect to C in the same manner;

16: end for

17: return the trained encoder-decoder network θ and the cluster centers C.

The above procedure describes the training process of the joint learning framework in pseudocode, where "return" denotes the output values, "for A do {B}" means that B is executed once for each iteration over A, and "end for" closes the loop.
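
Read as ordinary Python, the pseudocode corresponds to a two-phase loop such as the sketch below, which reuses the CycleAutoencoder and the loss helpers from the earlier sketches; the optimizer choices (momentum SGD for pre-training, Adam for joint fine-tuning) mirror the parameters named in the pseudocode, while the learning rates and epoch counts are assumed values.

import torch
from sklearn.cluster import KMeans

def train_joint(model, loader, k, pretrain_epochs=50, finetune_epochs=100, lam=0.01, device="cpu"):
    """Two-phase training: autoencoder pre-training, then joint fine-tuning with clustering."""
    model.to(device)

    # Phase 1 (pseudocode lines 1-7): pre-train on Loss_ae with momentum SGD.
    opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    for _ in range(pretrain_epochs):
        for x in loader:                                   # mini-batches of shape (B, 2, 200)
            x = x.to(device)
            z, x_rec = model(x)
            loss_ae = torch.mean((x - x_rec) ** 2)
            opt.zero_grad(); loss_ae.backward(); opt.step()

    # Pseudocode line 8: initialize the cluster centers with K-means on the learned features.
    with torch.no_grad():
        feats = torch.cat([model(x.to(device))[0] for x in loader])
    km = KMeans(n_clusters=k, n_init=10).fit(feats.cpu().numpy())
    centers = torch.tensor(km.cluster_centers_, dtype=torch.float32,
                           device=device, requires_grad=True)

    # Phase 2 (pseudocode lines 9-16): jointly fine-tune the network and the centers with Adam.
    opt = torch.optim.Adam(list(model.parameters()) + [centers],
                           lr=1e-4, betas=(0.9, 0.999), eps=1e-8)
    for _ in range(finetune_epochs):
        for x in loader:
            x = x.to(device)
            z, x_rec = model(x)
            q = soft_assign(z, centers)
            p, keep = target_distribution(q)
            loss = total_loss(x, x_rec, q, p, keep, lam)
            opt.zero_grad(); loss.backward(); opt.step()
    return model, centers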

Specifically, the optimal number of clusters is selected according to the Davies-Bouldin index (DBI), comprising the following steps:

set a value of k and use it to train the encoder-decoder and clustering network;

compute the DBI value of the clustering result for each value of k:

DBI = (1/k) Σ_{i=1}^{k} max_{j≠i} [ (S_i + S_j) / ‖c_i - c_j‖_2 ],

where k is the number of clusters; ‖c_i - c_j‖_2 is the Euclidean distance between the centroid of cluster i and the centroid of cluster j; S_i is the average distance from the feature vectors in cluster i to its centroid and represents the dispersion of the data in cluster i, and S_j is the average distance from the feature vectors in cluster j to its centroid and represents the dispersion of the data in cluster j, with

S_i = ( (1/M_i) Σ_{s=1}^{M_i} ‖X_is - c_i‖^p )^{1/p} and S_j = ( (1/M_j) Σ_{s=1}^{M_j} ‖X_js - c_j‖^p )^{1/p};

M_i is the number of data points in cluster i and M_j the number in cluster j; X_is is the s-th data point in cluster i and X_js the s-th data point in cluster j; c_i is the centroid of cluster i and c_j the centroid of cluster j; p is usually taken as 2.

The value of k at which the DBI first reaches a local minimum is selected as the optimal number of clusters k_0.

A smaller DBI value means a smaller within-class distance and a larger between-class distance, i.e., a better clustering result.
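
In practice the DBI can be computed for a range of candidate k values on the learned features; scikit-learn provides this index as davies_bouldin_score, so a plausible selection loop looks like the sketch below, where plain K-means stands in for retraining the full encoder-decoder and clustering network at each k, and the candidate range 2-10 is an assumption.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

def choose_k(features, k_range=range(2, 11)):
    """Return the first k whose DBI is a local minimum over the candidate range."""
    scores = []
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(features)
        scores.append(davies_bouldin_score(features, labels))
    ks = list(k_range)
    for i in range(1, len(scores) - 1):
        if scores[i] <= scores[i - 1] and scores[i] <= scores[i + 1]:
            return ks[i]                      # first local minimum of the DBI curve
    return ks[int(np.argmin(scores))]         # fall back to the global minimum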

S4: according to the correspondence between the clustering result and the input segments, classify the input segments into several types of segment libraries, and select input segments from the various libraries to form the driving condition.

Specifically, in Step S4, when input segments are selected from the various segment libraries to form the driving condition, the feature vectors in the feature space of the clustering result carry class labels; the feature vectors under each class label are sorted by the ratio of the within-class distance to the between-class distance, which determines the priority of the feature vectors under each label; the number of segments selected from each segment library is determined by the proportion of time that library occupies in the overall segment library, and input segments are selected according to the priority of the feature vectors under each label to form the driving condition.

Each feature vector corresponds to one input segment, and feature vectors and input segments are matched by their indices. After the clustering operation, each feature vector carries a class label; since each feature vector corresponds to one input segment, the input segments can be divided into the various segment libraries through the class labels. Sorting the feature vectors according to a given rule is, in essence, assigning priorities to the input segments within each segment library.

The number of segments to be selected from each library is determined by the proportion of time that the input segments in that library occupy among all input segments; the order in which input segments are selected from each library is determined by the priorities of the input segments within that library, as illustrated by the sketch below.
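
One plausible realization of this selection rule is sketched below: segments inherit the label of their feature vector, each class library receives a time budget proportional to its share of the total duration, and segments are taken in priority order within each library, here approximated by the ratio of the distance to a segment's own centroid to the distance to the nearest other centroid. The 1200 s target duration and the helper name are assumptions.

import numpy as np

def build_cycle(segments, labels, features, centers, target_duration=1200):
    """Pick segments per class, proportionally to class duration and in priority order."""
    durations = np.array([len(s) for s in segments], dtype=float)
    cycle = []
    for j in np.unique(labels):
        idx = np.where(labels == j)[0]
        # priority: within-class distance divided by distance to the nearest other centroid
        d_own = np.linalg.norm(features[idx] - centers[j], axis=1)
        other = np.delete(centers, j, axis=0)
        d_other = np.linalg.norm(features[idx, None, :] - other[None, :, :], axis=2).min(axis=1)
        order = idx[np.argsort(d_own / (d_other + 1e-8))]
        # time budget of this class, proportional to its share of the total duration
        budget = target_duration * durations[idx].sum() / durations.sum()
        used = 0.0
        for i in order:
            if used + durations[i] > budget:
                break
            cycle.append(segments[i])
            used += durations[i]
    return np.concatenate(cycle) if cycle else np.array([])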

Specifically, the constructed driving condition is evaluated by two methods: the relative error (RE) and the speed-acceleration joint probability distribution (SAPD).

The present invention uses the COPERT model to estimate single-vehicle emissions. Specifically:

the COPERT III emission model is used to calculate the exhaust emission factor of a single vehicle type,

Ef_jw = (a_w + c_w·v_j + e_w·v_j²) / (1 + b_w·v_j + d_w·v_j²);

where v_j is the average speed of the driving cycle of the j-th vehicle type, and a_w, b_w, c_w, d_w and e_w are the calculation coefficients of the w-th pollutant, as shown in Fig. 7.

The emissions of the vehicle's main pollutants are estimated as E = Ef_jw × len × f, the main pollutants being carbon monoxide (CO), hydrocarbons (HC) and nitrogen oxides (NOx); len is the length of the driving distance and f is the traffic flow, and f is taken as 1 when estimating single-vehicle emissions.
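
As a worked example of the COPERT III relation above, the helpers below compute the emission factor and the resulting emission estimate; the coefficient tuple passed in stands in for the values tabulated in Fig. 7 and is purely illustrative.

def copert_emission_factor(v_avg, a, b, c, d, e):
    """Ef = (a + c*v + e*v^2) / (1 + b*v + d*v^2), with v the mean cycle speed."""
    return (a + c * v_avg + e * v_avg ** 2) / (1.0 + b * v_avg + d * v_avg ** 2)

def estimate_emission(v_avg, length_km, coeffs, flow=1):
    """E = Ef * len * f; the flow f is 1 when estimating a single vehicle."""
    ef = copert_emission_factor(v_avg, *coeffs)
    return ef * length_km * flow

# Example: CO estimate for one vehicle over a 10 km trip at an average speed of 35 km/h,
# with placeholder coefficients (a, b, c, d, e) standing in for the Fig. 7 values.
co_estimate = estimate_emission(35.0, 10.0, coeffs=(1.0, 0.01, 0.02, 1e-4, 1e-3))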

Combined with the vehicle's GPS data, the emissions of a single vehicle can be estimated and visualized, providing suggestions for urban road planning.

Unlike traditional driving condition construction methods, the present invention adopts an unsupervised joint feature learning and clustering framework that exploits the continuity of driving data and takes the time dependence of the dynamic data into account; the construction process uses no hand-designed feature representations and no manual segment selection, and it achieves more accurate and more robust condition model construction on real driving data.

The method of the present invention was verified using OBD data of light-duty vehicles in Fuzhou, including speed data and GPS data; the clustering results demonstrate the advantages of the method, the constructed driving condition model is presented, and an application case of the condition model is demonstrated.

Fig. 3 shows the clustering results of this embodiment: the coupling between classes is low, the cohesion within classes is high, and the clustering effect is good.

Fig. 4 shows the driving condition model constructed in this embodiment; the driving condition period is set to 1200-1300 s.

Figs. 5 and 6 visualize the estimated pollutant emissions of a single test vehicle in Fuzhou: Fig. 5 visualizes the driving speed of the test vehicle, and Fig. 6 visualizes the estimated CO emissions.

As can be seen from Figs. 5 and 6, vehicle pollutant emissions are proportional to speed, and emissions are higher on high-speed sections and at some intersections. Improving the traffic efficiency at intersections with high emissions can therefore reduce emissions.

A system for constructing dynamic driving conditions comprises:

a data acquisition module, which acquires the speed data of a vehicle and preprocesses it to generate input segments X;

an encoding module, which constructs a joint learning framework based on a deep neural network and a bidirectional long short-term memory network and feeds the input segments into the joint learning framework to obtain a feature space Z;

a clustering module, which realizes soft-assignment clustering of the feature space Z using a regularization term based on relative entropy and obtains a clustering result after iterative updating;

a driving condition construction module, which classifies the input segments according to the correspondence between the clustering result and the input segments to obtain several types of segment libraries and selects input segments from the various libraries to form the driving condition.

A computer device comprises a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the construction method described above when executing the computer program.

In the present invention, the speed data and GPS data come from vehicle driving data recorded by the on-board diagnostic (OBD) system.

It is obvious to those skilled in the art that the present invention is not limited to the details of the above exemplary embodiments and that it can be implemented in other specific forms without departing from its spirit or essential characteristics. The embodiments should therefore be regarded in all respects as exemplary and non-restrictive; the scope of the present invention is defined by the appended claims rather than by the above description, and all changes falling within the meaning and range of equivalents of the claims are intended to be embraced by the present invention. No reference sign in the claims shall be construed as limiting the claim concerned.

In addition, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only one independent technical solution; this manner of presentation is adopted only for the sake of clarity. Those skilled in the art should regard the specification as a whole, and the technical solutions in the embodiments may also be appropriately combined to form other embodiments understandable to those skilled in the art.

Claims (8)

1. A method for constructing dynamic driving conditions, comprising the following steps:
Step 1: acquiring speed data of a vehicle and preprocessing the speed data to generate input segments X;
Step 2: constructing a joint learning framework based on a deep neural network and a bidirectional long short-term memory network, and feeding the input segments into the joint learning framework to obtain a feature space Z;
Step 3: realizing soft-assignment clustering of the feature space Z using a regularization term based on relative entropy, and obtaining a clustering result after iterative updating;
Step 4: classifying the input segments according to the correspondence between the clustering result and the input segments to obtain segment libraries of various types, and selecting input segments from the segment libraries to form a driving condition;
wherein in Step 1, when the speed data is preprocessed, invalid data in the speed data is removed and missing values are filled; micro-trip segments are extracted from the speed data to generate a micro-trip segment library; the micro-trip segment library is interpolated to obtain a library of equal-length sequences, and the equal-length sequence library is normalized to obtain the input segments;
wherein in Step 2, when the input segments are fed into the joint learning framework, the joint learning framework comprises an autoencoder, the autoencoder comprises an encoder and a decoder, and the encoder processes the input segments successively through the deep neural network and the bidirectional long short-term memory network;
the deep neural network learns short-time-scale waveforms in the input segments and extracts their local features;
the bidirectional long short-term memory network learns the temporal connections between waveforms across time scales in the input segments and extracts their global features, thereby forming the feature space Z;
the decoder reconstructs the feature space using upsampling and deconvolution to form reconstructed segments X';
the autoencoder is pre-trained so that the reconstructed segments X' output by the decoder have the minimum mean square error with respect to the input segments, Loss_ae = (1/n) Σ_{i=1}^{n} ‖x_i - x'_i‖².
2. The method for constructing dynamic driving conditions according to claim 1, wherein in Step 3, when soft-assignment clustering of the feature sequences is realized using the relative-entropy-based regularization term, the joint learning framework further comprises a temporal clustering layer for clustering the feature space; the encoder and the temporal clustering layer are updated iteratively until a stable result is obtained, and the input segments are finally clustered into k_0 types of segment libraries, wherein k_0 is the optimal number of clusters, comprising the following steps:
Step 41: computing the distance d_ij from each element z_i of the feature space to the cluster center c_j using the Euclidean distance ED;
Step 42: normalizing the distances d_ij into a probability assignment using Student's t distribution, the probability that feature vector z_i belongs to the j-th cluster being q_ij = (1 + d_{ij}²/α)^(-(α+1)/2) / Σ_{j'} (1 + d_{ij'}²/α)^(-(α+1)/2), wherein the larger the value of q_ij, the closer the feature vector z_i is to the cluster center and the higher the possibility that it belongs to the j-th cluster, and α is the number of degrees of freedom of the Student's t distribution;
Step 43: setting the target distribution p_ij to a delta distribution over the data points whose assignment probability is above a confidence threshold and ignoring the remaining values;
Step 44: setting the objective of iterative training to minimizing the relative entropy loss between the probability assignment q_ij and the target distribution p_ij, Loss_C = (1/n) Σ_i Σ_j p_ij log(p_ij / q_ij), wherein n is the number of micro-trip segments;
Step 45: the total loss being Loss_total = Loss_C + λ·Loss_ae, wherein λ is a proportionality coefficient and Loss_C serves as a regularization term that prevents the encoder feature-extraction process from overfitting.
3. The method for constructing dynamic driving conditions according to claim 2, wherein the optimal number of clusters is selected according to the Davies-Bouldin index DBI, comprising the following steps:
setting a value of k and using it to train the encoder-decoder and clustering network;
computing the DBI value of the clustering result for each value of k, DBI = (1/k) Σ_{i=1}^{k} max_{j≠i} [ (S_i + S_j) / ‖c_i - c_j‖_2 ],
wherein k represents the number of clusters; ‖c_i - c_j‖_2 represents the Euclidean distance between the centroid of cluster i and the centroid of cluster j; S_i represents the average distance from the feature vectors in cluster i to its centroid and represents the dispersion of the data in cluster i; S_j represents the average distance from the feature vectors in cluster j to its centroid and represents the dispersion of the data in cluster j; S_i = ( (1/M_i) Σ_{s=1}^{M_i} ‖X_is - c_i‖^p )^{1/p} and S_j = ( (1/M_j) Σ_{s=1}^{M_j} ‖X_js - c_j‖^p )^{1/p}; M_i represents the number of data points in cluster i; M_j represents the number of data points in cluster j; X_is represents the s-th data point in cluster i and X_js the s-th data point in cluster j; c_i represents the centroid of cluster i and c_j the centroid of cluster j; p is 2;
selecting the value of k at which the DBI first reaches a local minimum as the optimal number of clusters k_0.
4. The method for constructing dynamic driving conditions according to claim 1, wherein in Step 3, when soft-assignment clustering of the feature sequences is realized using the relative-entropy-based regularization term, a K-means algorithm is used to initialize the cluster centers.
5. The method for constructing dynamic driving conditions according to claim 1, wherein in Step 4, when the input segments are selected from the segment libraries to form the driving condition, the feature vectors in the feature space of the clustering result carry class labels; the feature vectors under each class label are sorted by the ratio of the within-class distance to the between-class distance, which determines the priority of the feature vectors under each label; the number of segments selected from each segment library is determined by the proportion of time that library occupies in the overall segment library, and input segments are selected according to the priority of the feature vectors under each label to form the driving condition.
6. The method for constructing dynamic driving conditions according to claim 1, wherein the constructed driving condition is evaluated by two methods: relative error and the joint speed-acceleration distribution.
7. A system for constructing dynamic driving conditions, comprising:
a data acquisition module, which acquires speed data of a vehicle and preprocesses the speed data to generate input segments X;
an encoding module, which constructs a joint learning framework based on a deep neural network and a bidirectional long short-term memory network and feeds the input segments into the joint learning framework to obtain a feature space Z;
a clustering module, which realizes soft-assignment clustering of the feature space Z using a regularization term based on relative entropy and obtains a clustering result after iterative updating;
a driving condition construction module, which classifies the input segments according to the correspondence between the clustering result and the input segments to obtain segment libraries of various types and selects input segments from the segment libraries to form a driving condition;
wherein, when the speed data is preprocessed, invalid data in the speed data is removed and missing values are filled; micro-trip segments are extracted from the speed data to generate a micro-trip segment library; the micro-trip segment library is interpolated to obtain a library of equal-length sequences, and the equal-length sequence library is normalized to obtain the input segments;
wherein, when the input segments are fed into the joint learning framework, the joint learning framework comprises an autoencoder, the autoencoder comprises an encoder and a decoder, and the encoder processes the input segments successively through the deep neural network and the bidirectional long short-term memory network;
the deep neural network learns short-time-scale waveforms in the input segments and extracts their local features;
the bidirectional long short-term memory network learns the temporal connections between waveforms across time scales in the input segments and extracts their global features, thereby forming the feature space Z;
the decoder reconstructs the feature space using upsampling and deconvolution to form reconstructed segments X';
the autoencoder is pre-trained so that the reconstructed segments X' output by the decoder have the minimum mean square error with respect to the input segments, Loss_ae = (1/n) Σ_{i=1}^{n} ‖x_i - x'_i‖².
8. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the construction method according to any one of claims 1-6 when the computer program is executed.
CN202011320811.1A 2020-11-23 2020-11-23 A method, system and device for constructing dynamic driving conditions Active CN112434735B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011320811.1A CN112434735B (en) 2020-11-23 2020-11-23 A method, system and device for constructing dynamic driving conditions

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011320811.1A CN112434735B (en) 2020-11-23 2020-11-23 A method, system and device for constructing dynamic driving conditions

Publications (2)

Publication Number Publication Date
CN112434735A CN112434735A (en) 2021-03-02
CN112434735B true CN112434735B (en) 2022-09-06

Family

ID=74692957

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011320811.1A Active CN112434735B (en) 2020-11-23 2020-11-23 A method, system and device for constructing dynamic driving conditions

Country Status (1)

Country Link
CN (1) CN112434735B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113221975B (en) * 2021-04-26 2023-07-11 中国科学技术大学先进技术研究院 Working condition construction method and storage medium based on improved Markov analysis method
CN113469240B (en) * 2021-06-29 2024-04-02 中国科学技术大学 A driving condition construction method and storage medium based on shape similarity
CN113627610B (en) * 2021-08-03 2022-07-05 北京百度网讯科技有限公司 Deep learning model training method for meter box prediction and meter box prediction method
CN114021617B (en) * 2021-09-29 2024-09-17 中国科学技术大学 Mobile source driving condition construction method and equipment based on short-range feature clustering
CN113962359A (en) * 2021-09-30 2022-01-21 华东师范大学 A self-balancing model training method based on federated learning

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2420204A2 (en) * 2010-08-19 2012-02-22 Braun GmbH Method for operating an electric appliance and electric appliance
CN107914714A (en) * 2017-11-16 2018-04-17 北京经纬恒润科技有限公司 The display methods and device of a kind of vehicle running state
CN109711459A (en) * 2018-12-24 2019-05-03 广东德诚科教有限公司 User individual action estimation method, apparatus, computer equipment and storage medium
CN110985651A (en) * 2019-12-04 2020-04-10 北京理工大学 Automatic transmission multi-parameter fusion gear shifting strategy based on prediction
CN111832225A (en) * 2020-07-07 2020-10-27 重庆邮电大学 A method for constructing driving conditions of automobiles

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11731612B2 (en) * 2019-04-30 2023-08-22 Baidu Usa Llc Neural network approach for parameter learning to speed up planning for complex driving scenarios

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2420204A2 (en) * 2010-08-19 2012-02-22 Braun GmbH Method for operating an electric appliance and electric appliance
CN107914714A (en) * 2017-11-16 2018-04-17 北京经纬恒润科技有限公司 The display methods and device of a kind of vehicle running state
CN109711459A (en) * 2018-12-24 2019-05-03 广东德诚科教有限公司 User individual action estimation method, apparatus, computer equipment and storage medium
CN110985651A (en) * 2019-12-04 2020-04-10 北京理工大学 Automatic transmission multi-parameter fusion gear shifting strategy based on prediction
CN111832225A (en) * 2020-07-07 2020-10-27 重庆邮电大学 A method for constructing driving conditions of automobiles

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Label-Based Trajectory Clustering in Complex Road Networks; Xinzheng Niu et al.; IEEE Transactions on Intelligent Transportation Systems; 2019-12-13; Vol. 21, No. 10; full text *
Research on vehicle driving state recognition based on a neural network algorithm; Shi Jun; Computer & Digital Engineering; 2017-12-31; Vol. 45, No. 12; full text *
Development and accuracy study of vehicle driving cycles; Gao Jianping et al.; Journal of Zhejiang University (Engineering Science); 2017-10-15; No. 10; full text *

Also Published As

Publication number Publication date
CN112434735A (en) 2021-03-02

Similar Documents

Publication Publication Date Title
CN112434735B (en) A method, system and device for constructing dynamic driving conditions
Zhang et al. Constructing a PM2. 5 concentration prediction model by combining auto-encoder with Bi-LSTM neural networks
CN111832225B (en) Method for constructing driving condition of automobile
Shen et al. TTPNet: A neural network for travel time prediction based on tensor decomposition and graph embedding
CN111709292B (en) Compressor vibration fault detection method based on recursion diagram and deep convolution network
CN109035779A (en) Freeway traffic flow prediction technique based on DenseNet
CN113326981A (en) Atmospheric environment pollutant prediction model based on dynamic space-time attention mechanism
CN111598325A (en) Traffic Speed Prediction Method Based on Hierarchical Clustering and Hierarchical Attention Mechanism
CN113239753A (en) Improved traffic sign detection and identification method based on YOLOv4
CN117152427A (en) Remote sensing image semantic segmentation method and system based on diffusion model and knowledge distillation
CN108447057A (en) SAR image change detection based on conspicuousness and depth convolutional network
CN112001293A (en) Remote sensing image ground object classification method combining multi-scale information and coding and decoding network
CN116415200A (en) A method and system for abnormal vehicle trajectory detection based on deep learning
CN115310674A (en) Long-time sequence prediction method based on parallel neural network model LDformer
CN114021617B (en) Mobile source driving condition construction method and equipment based on short-range feature clustering
CN116776269A (en) Traffic anomaly detection method based on graph convolution neural network self-encoder
CN115422747A (en) Calculation method and calculation device for motor vehicle exhaust pollutant emission
CN115063972A (en) Traffic speed prediction method and system based on graph convolution and gated recurrent unit
CN117726939A (en) Hyperspectral image classification method based on multi-feature fusion
Pei et al. UJ-FLAC: Unsupervised joint feature learning and clustering for dynamic driving cycles construction
CN118439034B (en) Driving style recognition method, driving style recognition device, computer equipment and storage medium
CN107492129B (en) Non-convex compressive sensing optimization reconstruction method based on sketch representation and structured clustering
Li et al. ADDGCN: A Novel Approach with Down-Sampling Dynamic Graph Convolution and Multi-Head Attention for Traffic Flow Forecasting
CN106960225A (en) A kind of sparse image classification method supervised based on low-rank
CN115345257B (en) Flight trajectory classification model training method, classification method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant