CN112434735A - Dynamic driving condition construction method, system and equipment - Google Patents

Dynamic driving condition construction method, system and equipment Download PDF

Info

Publication number
CN112434735A
CN112434735A
Authority
CN
China
Prior art keywords
input
clustering
cluster
data
fragment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011320811.1A
Other languages
Chinese (zh)
Other versions
CN112434735B (en)
Inventor
康宇
裴丽红
许镇义
赵振怡
刘斌琨
曹洋
吕文君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN202011320811.1A priority Critical patent/CN112434735B/en
Publication of CN112434735A publication Critical patent/CN112434735A/en
Application granted granted Critical
Publication of CN112434735B publication Critical patent/CN112434735B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/80Technologies aiming to reduce greenhouse gasses emissions common to all road transportation technologies
    • Y02T10/84Data processing systems or methods, management, administration

Abstract

The invention discloses a method for constructing a dynamic driving condition, which comprises the following steps: acquiring speed data of a vehicle, preprocessing the speed data, and generating input segments X; constructing a joint learning framework based on a deep neural network and a bidirectional long short-term memory network, and feeding the input segments into the joint learning framework to obtain a feature space Z; realizing soft assignment clustering of the feature space Z by using a regularization term based on relative entropy, and obtaining a clustering result after iterative updating; and classifying the input segments according to the correspondence between the clustering result and the input segments to obtain segment libraries of multiple classes, and selecting input segments from the segment libraries to form the driving condition.

Description

Dynamic driving condition construction method, system and equipment
Technical Field
The invention relates to the technical field of environment detection, in particular to a method, a system and equipment for constructing a dynamic driving condition.
Background
According to the Technical Policy on the Prevention and Control of Motor Vehicle Pollution revised by the Ministry of Environmental Protection, further improvement of environmental quality is identified as the core around which a motor vehicle pollution prevention and control system is to be built, and the systematization, scientification and informatization of pollution prevention and control work are to be promoted. The policy makes clear that the emission limits for motor vehicle pollutants such as carbon monoxide (CO), total hydrocarbons (THC), nitrogen oxides (NOx) and particulate matter (PM) will be tightened step by step. According to the "China Motor Vehicle Industry Market Prospect and Investment Opportunity Research Report 2020-2025", by the end of 2019 the number of motor vehicles in China had reached 348 million; China is the world's largest automobile consumer market and producer, and its vehicle population has remained among the highest in the world year after year. With the rapid growth of the vehicle population, the problems of urban traffic congestion and vehicle exhaust emission have become increasingly serious. Pollutant emission from motor vehicles is mainly influenced by vehicle operating conditions; for example, long idling time and frequent acceleration and deceleration under traffic congestion lead to higher exhaust emission. Driving condition construction is a method of building a vehicle driving profile from typical traffic conditions, and plays an important role in the evaluation of vehicle emissions, fuel economy and driving range.
Current driving condition construction methods fall mainly into two classes: Markov analysis and cluster analysis. The Markov analysis method treats the speed-time relation of the driving process as a stochastic process and, exploiting the property that the state at time t depends only on the state at time t-1 (i.e., the absence of after-effect), combines different modal events into a complete driving process. The cluster analysis method divides all micro-trip segments into several classes according to their degree of similarity, and selects segments from each class library according to certain rules to form the final condition curve. Compared with Markov analysis, cluster analysis can obtain operating conditions of different types, is closer to actual road conditions, and is simple to implement.
Actual traffic conditions and road characteristics differ across the regions of a city and thus influence the driving cycle; meanwhile, with the development of new energy vehicles, vehicle-type data are increasingly rich. Traditional methods use manually designed features to characterize the speed-time distribution of driving data, treating the data as static and ignoring their inherent dynamic characteristics and time dependence, which results in low accuracy and insufficient robustness.
Disclosure of Invention
In order to solve the technical problems, the invention provides a method, a system and equipment for constructing a dynamic driving condition.
In order to solve the technical problems, the invention adopts the following technical scheme:
A dynamic driving condition construction method comprises the following steps:
step one: acquiring speed data of a vehicle, preprocessing the speed data, and generating input segments X;
step two: constructing a joint learning framework based on a deep neural network and a bidirectional long short-term memory network, and feeding the input segments into the joint learning framework to obtain a feature space Z;
step three: realizing soft assignment clustering of the feature space Z by using a regularization term based on relative entropy, and obtaining a clustering result after iterative updating;
step four: classifying the input segments according to the correspondence between the clustering result and the input segments to obtain segment libraries of multiple classes, and selecting input segments from the segment libraries to form the driving condition.
Specifically, in step one, when the speed data are preprocessed, invalid data are removed and missing values are filled; micro-trip segments are extracted from the speed data to generate a micro-trip segment library; the micro-trip segment library is interpolated to obtain a library of equal-length sequences, which is normalized to obtain the input segments.
Specifically, in step two, the joint learning framework comprises an autoencoder consisting of an encoder and a decoder, the encoder processing the input segments successively through a deep neural network and a bidirectional long short-term memory (Bi-LSTM) network;
the deep neural network learns the short-time-scale waveforms in the input segments and extracts their local features;
the Bi-LSTM network learns the temporal relations between waveforms across time scales in the input segments and extracts their global features, thereby forming the feature space Z;
the decoder reconstructs the feature space by up-sampling and deconvolution to form reconstructed segments X';
the autoencoder is pre-trained so that the mean square error between the reconstructed segments X' output by the decoder and the input segments is minimized:

Loss_ae = (1/n) Σ_{i=1}^{n} ||x_i − x'_i||^2.
Specifically, in step three, when soft assignment clustering of the feature sequences is realized by using the regularization term based on relative entropy, the joint learning framework further comprises a temporal clustering layer for clustering the feature space; the encoder and the temporal clustering layer are updated iteratively until a stable result is obtained, and the input segments are finally clustered into segment libraries of multiple classes {S_1, S_2, ..., S_{k_0}}, where k_0 is the optimal cluster number. The procedure comprises the following steps:
step 41: computing, with the Euclidean distance ED, the distance d_ij from each element z_i of the feature space to each cluster center c_j:

d_ij = ||z_i − c_j||_2;

step 42: normalizing the distances d_ij into a probability distribution with the Student's t distribution, the probability that feature vector z_i belongs to the j-th cluster being

q_ij = (1 + d_ij^2/α)^(−(α+1)/2) / Σ_{j'} (1 + d_{ij'}^2/α)^(−(α+1)/2),

where a larger q_ij means that the feature vector z_i is closer to the cluster center and has a higher probability of belonging to the j-th cluster, and α is the degree of freedom of the Student's t distribution;
step 43: setting the target distribution p_ij to a delta-like distribution concentrated on data points above a confidence threshold and ignoring the remaining values, where

p_ij = (q_ij^2 / Σ_i q_ij) / Σ_{j'} (q_{ij'}^2 / Σ_i q_{ij'});

step 44: setting the objective of iterative training to minimize the relative entropy loss between the probability distribution q_ij and the target distribution p_ij:

Loss_C = KL(P||Q) = Σ_i Σ_j p_ij log(p_ij / q_ij);

step 45: the total loss is Loss_total = Loss_C + λ Loss_ae, where λ is a proportionality coefficient and Loss_C, as a regularization term, prevents the encoder feature extraction process from overfitting.
Specifically, the optimal cluster number is selected according to the Davies-Bouldin index (DBI), comprising the following steps:
setting values of k, and substituting each k into the training of the encoder-decoder and clustering networks;
calculating the DBI value of the clustering result for each k:

DBI = (1/k) Σ_{i=1}^{k} max_{j≠i} (S_i + S_j) / ||c_i − c_j||_2,

where k denotes the cluster number; ||c_i − c_j||_2 denotes the Euclidean distance between the centroids of cluster i and cluster j; S_i denotes the average distance of the feature vectors in cluster i to its centroid, characterizing the dispersion of the data within cluster i; and S_j denotes the average distance of the feature vectors in cluster j to its centroid, characterizing the dispersion of the data within cluster j:

S_i = ((1/M_i) Σ_{s=1}^{M_i} ||X_is − c_i||^p)^{1/p},
S_j = ((1/M_j) Σ_{s=1}^{M_j} ||X_js − c_j||^p)^{1/p},

where M_i denotes the number of data points in cluster i; X_is denotes the s-th data point in cluster i and X_js the s-th data point in cluster j; c_i and c_j denote the centroids of clusters i and j; and p is usually taken as 2;
selecting the value of k at which the DBI first reaches a local minimum as the optimal cluster number k_0.
Specifically, in step three, when soft assignment clustering of the feature sequences is realized by using the regularization term based on relative entropy, the K-means algorithm is used to initialize the cluster centers.
Specifically, in step four, when input segments are selected from the segment libraries to form the driving condition, the feature vectors of the feature space in the clustering result carry class labels; the feature vectors under each label are sorted by the ratio of intra-class distance to inter-class distance, determining their priority; the number of segments selected from each library is determined by the ratio of that library's total time to the total time of all libraries, and input segments are then selected according to the priority of the feature vectors under each label to form the driving condition.
Specifically, the constructed driving condition is evaluated by two methods: relative error and speed-acceleration joint distribution.
A dynamic driving condition construction system comprises:
a data acquisition module for acquiring speed data of a vehicle, preprocessing the speed data and generating input segments X;
an encoding module for constructing a joint learning framework based on a deep neural network and a bidirectional long short-term memory network, and feeding the input segments into the joint learning framework to obtain a feature space Z;
a clustering module for realizing soft assignment clustering of the feature space Z by using a regularization term based on relative entropy, and obtaining a clustering result after iterative updating;
and a driving condition construction module for classifying the input segments according to the correspondence between the clustering result and the input segments to obtain segment libraries of multiple classes, and selecting input segments from the segment libraries to form the driving condition.
A computer device comprises a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the above construction method when executing the computer program.
Compared with the prior art, the invention has the beneficial technical effects that:
different from the traditional working condition construction method, the invention adopts an unsupervised combined feature learning and clustering framework, takes the continuity of the driving data into consideration of the time dependence of the dynamic data, does not use any manual design feature expression and fragment selection in the driving working condition construction process, and can realize the working condition model construction with higher precision and robustness on the real driving data.
Drawings
FIG. 1 is a schematic flow chart of the condition construction method of the present invention;
FIG. 2 is a schematic diagram of the structure of the joint learning framework of the present invention;
FIG. 3 is a visualization of the clustering results of the present invention;
FIG. 4 is a diagram of the condition model constructed by the present invention;
FIG. 5 is a visualization of the driving speed of a test vehicle according to the present invention;
FIG. 6 is a visualization of the estimated CO emission of the present invention;
FIG. 7 lists the calculation coefficients for each pollutant.
Detailed Description
A preferred embodiment of the present invention will be described in detail below with reference to the accompanying drawings.
As shown in fig. 1 and 2, a dynamic driving condition construction method includes the following steps:
s1: and acquiring the speed data of the vehicle, preprocessing the speed data and generating an input fragment X.
Specifically, in step one, when the speed data are preprocessed, invalid data are removed and missing values are filled; micro-trip segments are extracted from the speed data to generate a micro-trip segment library; the library is interpolated to obtain equal-length sequences, which are normalized to obtain the input segments.
When the micro-trip segment library is interpolated, both cubic spline interpolation and linear interpolation are applied, and the resulting single-column sequences are combined into a two-column input.
A micro-trip segment consists of an idle portion (speed held at 0) and a kinematic portion (speed always greater than 0) and does not exceed 180 s; each extracted micro-trip segment starts at the beginning of one idle state and ends at the start of the next idle state.
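The micro-trip extraction described above can be sketched as follows. This is a minimal illustration under the stated convention (a segment runs from the start of one idle state to the start of the next, bounded by 180 s); the function name, the 1 Hz sampling assumption and the idle threshold `idle_eps` are hypothetical, not taken from the patent.

```python
import numpy as np

def extract_micro_trips(speed, idle_eps=0.5, max_len=180):
    """Split a 1 Hz speed series into micro-trip segments.

    A micro-trip starts at the beginning of an idle run (speed ~ 0)
    and ends where the next idle run begins. Segments longer than
    max_len samples are discarded, mirroring the 180 s bound.
    """
    idle = speed <= idle_eps
    # indices where an idle run starts (idle point preceded by motion, or t = 0)
    starts = [i for i in range(len(speed))
              if idle[i] and (i == 0 or not idle[i - 1])]
    segments = []
    for a, b in zip(starts, starts[1:]):
        if b - a <= max_len:
            segments.append(speed[a:b])
    return segments
```

Each returned segment then goes through interpolation to a common length and normalization before entering the encoder.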
S2: constructing a joint learning framework based on a deep neural network and a bidirectional long short-term memory network, and feeding the input segments into the joint learning framework to obtain the feature space Z.
Specifically, in step two, the joint learning framework comprises an autoencoder consisting of an encoder and a decoder, the encoder processing the input segments successively through a deep neural network and a bidirectional long short-term memory (Bi-LSTM) network;
the deep neural network learns the short-time-scale waveforms in the input segments and extracts their local features;
the Bi-LSTM network learns the temporal relations between waveforms across time scales in the input segments and extracts their global features, thereby forming the feature space Z;
the decoder reconstructs the feature space by up-sampling and deconvolution to form reconstructed segments X';
the autoencoder is pre-trained so that the mean square error between the reconstructed segments X' output by the decoder and the input segments is minimized:

Loss_ae = (1/n) Σ_{i=1}^{n} ||x_i − x'_i||^2.

The purpose of this step is to handle the time dependence of the dynamic data and to realize nonlinear temporal dimensionality reduction.
Specifically, in step three, when soft assignment clustering of the feature sequences is realized by using the regularization term based on relative entropy, the K-means algorithm is used to initialize the cluster centers.
S3: realizing soft assignment clustering of the feature space Z by using the regularization term based on relative entropy, and obtaining a clustering result after iterative updating.
Specifically, in step three, when soft assignment clustering of the feature sequences is realized by using the regularization term based on relative entropy, the joint learning framework further comprises a temporal clustering layer for clustering the feature space; the encoder and the temporal clustering layer are updated iteratively until a stable result is obtained, and the input segments are finally clustered into segment libraries of multiple classes {S_1, S_2, ..., S_{k_0}}, where k_0 is the optimal cluster number. The procedure comprises the following steps:
step 41: computing, with the Euclidean distance ED, the distance d_ij from each element z_i of the feature space to each cluster center c_j:

d_ij = ||z_i − c_j||_2;

step 42: normalizing the distances d_ij into a probability distribution with the Student's t distribution, the probability that feature vector z_i belongs to the j-th cluster being

q_ij = (1 + d_ij^2/α)^(−(α+1)/2) / Σ_{j'} (1 + d_{ij'}^2/α)^(−(α+1)/2),

where a larger q_ij means that the feature vector z_i is closer to the cluster center and has a higher probability of belonging to the j-th cluster, and α is the degree of freedom of the Student's t distribution;
step 43: setting the target distribution p_ij to a delta-like distribution concentrated on data points above a confidence threshold and ignoring the remaining values, where

p_ij = (q_ij^2 / Σ_i q_ij) / Σ_{j'} (q_{ij'}^2 / Σ_i q_{ij'});

step 44: setting the objective of iterative training to minimize the relative entropy loss between the probability distribution q_ij and the target distribution p_ij:

Loss_C = KL(P||Q) = Σ_i Σ_j p_ij log(p_ij / q_ij);

step 45: the total loss is Loss_total = Loss_C + λ Loss_ae, where λ is a proportionality coefficient and Loss_C, as a regularization term, prevents the encoder feature extraction process from overfitting. Since the autoencoder has been pre-trained, only fine-tuning is required here; in this embodiment the proportionality coefficient may be fixed at 0.01.
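Steps 41-44 can be sketched in a few lines of NumPy. The concrete sharpening form of the target distribution p_ij below is the one standard in deep embedded clustering and is an assumption consistent with the formulas above; the function names are illustrative.

```python
import numpy as np

def soft_assign(Z, C, alpha=1.0):
    """Student's t soft assignment q_ij of feature vectors Z to centers C."""
    d2 = ((Z[:, None, :] - C[None, :, :]) ** 2).sum(-1)   # squared distances d_ij^2
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)               # normalize per sample

def target_distribution(q):
    """Sharpened target p_ij: squares q_ij and renormalizes, so that
    high-confidence assignments dominate while the rest are de-emphasized."""
    w = q ** 2 / q.sum(axis=0)                            # q_ij^2 / f_j, f_j = sum_i q_ij
    return w / w.sum(axis=1, keepdims=True)

def kl_loss(p, q):
    """Relative-entropy clustering loss Loss_C = KL(P || Q)."""
    return float((p * np.log(p / q)).sum())
```

In training, q is recomputed each iteration while p is held fixed for a number of steps, which is what makes the KL term act as a self-supervised target.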
The joint learning framework is trained with the speed data; the flow of iteratively updating the encoder and the temporal clustering layer until a stable result is obtained is as follows.
Input: the set of speed sequences of the vehicle micro-trip segments, i.e., the input segments X; the training sample size n; the number of pre-training iterations iteration0; the number of optimization iterations iteration1; the number of clusters k.
Output: the trained encoder-decoder network θ; the cluster centers C.
The specific process is:
1: initialize the learning parameters θ, the learning rate η and the momentum v;
2: for the i-th random selection of n training samples x (1 ≤ i ≤ iteration0) do {
3:   the encoder network outputs Z;
4:   the decoder network outputs X';
5:   compute the loss function Loss_ae(i) by the formula;
6:   update the weight parameters with momentum: v_i ← γ v_{i−1} + η ∇_θ Loss_ae, θ_i ← θ_{i−1} − v_i; }
7: end for
8: initialize the cluster centers C, the learning rate η, the exponential decay rate β1 of the first-moment estimate, the exponential decay rate β2 of the second-moment estimate, the constant ε, the first-order momentum term m_0 and the second-order momentum term v_0;
9: for the i-th random selection of n training samples x (1 ≤ i ≤ iteration1) do {
10:  compute the KL divergence Loss_C by the formula;
11:  compute the total loss function Loss_total by the formula;
12:  first-order momentum correction: m̂_i = m_i / (1 − β1^i);
13:  second-order momentum correction: v̂_i = v_i / (1 − β2^i);
14:  update the encoder-decoder network weights: θ_i ← θ_{i−1} − η m̂_i / (√v̂_i + ε);
15:  update the cluster centers: c_j ← c_j − η ∂Loss_total/∂c_j; }
16: end for
17: return the trained encoder-decoder network θ; the cluster centers C.
The above describes the training process of the joint learning framework in pseudocode, where return denotes the output value; for A do { B } means that B is executed once for each element iterated over in A, and end for denotes the end of the loop.
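The bias-corrected moment updates in the optimization loop are the standard Adam optimizer. The following minimal sketch applies one Adam step at a time to a toy one-dimensional objective; the function name, hyperparameter values and toy problem are illustrative, not from the patent.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, eta=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: moment accumulation, bias correction, parameter step."""
    m = beta1 * m + (1 - beta1) * grad           # first-order momentum term
    v = beta2 * v + (1 - beta2) * grad ** 2      # second-order momentum term
    m_hat = m / (1 - beta1 ** t)                 # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)                 # bias-corrected second moment
    theta = theta - eta * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# toy problem: minimize (theta - 3)^2, whose gradient is 2 * (theta - 3)
theta, m, v = 0.0, 0.0, 0.0
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2 * (theta - 3.0), m, v, t, eta=0.05)
```

In the patent's setting the same update is applied to the encoder-decoder weights θ and, jointly, to the cluster centers C.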
Specifically, the optimal cluster number is selected according to the Davies-Bouldin index (DBI), comprising the following steps:
setting values of k, and substituting each k into the training of the encoder-decoder and clustering networks;
calculating the DBI value of the clustering result for each k:

DBI = (1/k) Σ_{i=1}^{k} max_{j≠i} (S_i + S_j) / ||c_i − c_j||_2,

where k denotes the cluster number; ||c_i − c_j||_2 denotes the Euclidean distance between the centroids of cluster i and cluster j; S_i denotes the average distance of the feature vectors in cluster i to its centroid, characterizing the dispersion of the data within cluster i; and S_j denotes the average distance of the feature vectors in cluster j to its centroid, characterizing the dispersion of the data within cluster j:

S_i = ((1/M_i) Σ_{s=1}^{M_i} ||X_is − c_i||^p)^{1/p},
S_j = ((1/M_j) Σ_{s=1}^{M_j} ||X_js − c_j||^p)^{1/p},

where M_i denotes the number of data points in cluster i; X_is denotes the s-th data point in cluster i and X_js the s-th data point in cluster j; c_i and c_j denote the centroids of clusters i and j; and p is usually taken as 2;
selecting the value of k at which the DBI first reaches a local minimum as the optimal cluster number k_0.
A smaller DBI value indicates smaller intra-class distances and larger inter-class distances, i.e., a better clustering result.
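The DBI computation above can be sketched in NumPy as follows; this is a minimal version under the stated convention (p = 2 by default), and the function name is illustrative.

```python
import numpy as np

def davies_bouldin(X, labels, centers, p=2):
    """Davies-Bouldin index: mean over clusters of the worst
    (S_i + S_j) / ||c_i - c_j||_2 ratio; lower is better."""
    k = len(centers)
    S = np.empty(k)
    for i in range(k):
        # dispersion S_i: p-norm average distance of cluster members to centroid
        d = np.linalg.norm(X[labels == i] - centers[i], axis=1)
        S[i] = np.mean(d ** p) ** (1.0 / p)
    dbi = 0.0
    for i in range(k):
        ratios = [(S[i] + S[j]) / np.linalg.norm(centers[i] - centers[j])
                  for j in range(k) if j != i]
        dbi += max(ratios)
    return dbi / k
```

Running this for each candidate k and taking the first local minimum gives k_0 as described.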
S4: classifying the input segments according to the correspondence between the clustering result and the input segments to obtain segment libraries of multiple classes, and selecting input segments from the segment libraries to form the driving condition.
Specifically, in step four, when input segments are selected from the segment libraries to form the driving condition, the feature vectors of the feature space in the clustering result carry class labels; the feature vectors under each label are sorted by the ratio of intra-class distance to inter-class distance, determining their priority; the number of segments selected from each library is determined by the ratio of that library's total time to the total time of all libraries, and input segments are then selected according to the priority of the feature vectors under each label to form the driving condition.
Each feature vector corresponds to one input segment, matched through their sequence numbers; after clustering, each feature vector carries a class label, so the input segments can be divided into class libraries through those labels; sorting the feature vectors thus essentially determines the priority of the input segments within each class library.
The number of segments to select from each class library is determined by the ratio of the time of the input segments in that library to the time of all input segments; the order in which a library's input segments are selected is determined by their priority within that library.
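The selection rule (a time-ratio budget per class library, filled in priority order) might be sketched as below. This is an illustrative reading of the description: it assumes 1 Hz segments so that segment length equals duration, and that a smaller intra/inter distance ratio means higher priority; all names are hypothetical.

```python
def select_segments(libraries, total_duration):
    """Pick segments from each class library, best priority first, until each
    library's share of the target duration (its time ratio) is filled.

    libraries: list of class libraries, each a list of (priority, segment)
    pairs; priority is the intra/inter-class distance ratio of the segment's
    feature vector (smaller = better).
    """
    total_time = sum(len(seg) for lib in libraries for _, seg in lib)
    cycle = []
    for lib in libraries:
        lib_time = sum(len(seg) for _, seg in lib)
        budget = total_duration * lib_time / total_time   # this library's share
        picked = 0.0
        for _, seg in sorted(lib, key=lambda item: item[0]):
            if picked >= budget:
                break
            cycle.append(seg)
            picked += len(seg)
    return cycle
```

Concatenating the returned segments in order yields the candidate driving condition curve.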
Specifically, the constructed driving condition is evaluated by two methods: relative error (RE) and speed-acceleration joint distribution (SAPD).
The present invention uses the COPERT model to estimate the emissions of a single vehicle. Specifically, the exhaust emission factor of a single vehicle type is calculated with the COPERT III emission model:

Ef_jw = (a_w + c_w v_j + e_w v_j^2) / (1 + b_w v_j + d_w v_j^2),

where v_j is the average speed of the driving condition of the j-th vehicle type, and a_w, b_w, c_w, d_w, e_w are the calculation coefficients for the w-th pollutant, given in detail in FIG. 7.
The emission of a main pollutant is estimated as E = Ef_jw × len × f, the main pollutants being carbon monoxide (CO), hydrocarbons (HC) and nitrogen oxides (NOx), where len denotes the length of the driving route and f denotes the traffic flow; f = 1 when estimating the emission of a single vehicle.
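The two formulas above translate directly into code. The coefficient values in the usage example are hypothetical placeholders (the real ones are tabulated in FIG. 7), and the function names are illustrative.

```python
def emission_factor(v, a, b, c, d, e):
    """COPERT III speed-dependent emission factor:
    Ef = (a + c*v + e*v**2) / (1 + b*v + d*v**2), v in km/h."""
    return (a + c * v + e * v ** 2) / (1 + b * v + d * v ** 2)

def total_emission(ef, route_len, flow=1):
    """E = Ef * len * f; traffic flow f = 1 for a single vehicle."""
    return ef * route_len * flow

# usage with made-up coefficients for one pollutant at 40 km/h
ef = emission_factor(40.0, a=1.0, b=0.01, c=0.02, d=0.0, e=0.0)
emission = total_emission(ef, route_len=10.0)
```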
The emission of a single vehicle is then estimated and visualized in combination with the vehicle's GPS data, providing suggestions for urban road planning.
Unlike traditional condition construction methods, the invention adopts an unsupervised joint feature learning and clustering framework that treats driving data as dynamic data, taking their continuity and time dependence into account; no manually designed feature representation or segment selection is used in the construction process, so that a condition model of higher accuracy and robustness can be built from real driving data.
The method is verified with OBD data of light-duty vehicles in Fuzhou, including speed data and GPS data; the clustering effect demonstrates the advancement of the method, the constructed condition model is presented, and an application case of the condition model is demonstrated.
Fig. 3 shows the clustering result of this embodiment, with low inter-class coupling and high intra-class cohesion, i.e., a good clustering effect.
Fig. 4 shows the condition model constructed in this embodiment; the duration of the driving condition is set to 1200-1300 s.
Figs. 5 and 6 visualize the pollutant emission estimation for a single test vehicle in Fuzhou: fig. 5 visualizes the driving speed of the test vehicle, and fig. 6 visualizes the estimated CO emission.
As can be seen from figs. 5 and 6, vehicle pollutant emission is positively related to speed; high-speed road sections and some intersections show larger emissions. Improving the throughput of the intersections with larger emissions could therefore reduce emissions.
A dynamic driving condition construction system comprising:
the data acquisition module is used for acquiring the speed data of the vehicle, preprocessing the speed data and generating an input fragment X;
the coding module is used for constructing a joint learning framework based on a deep neural network and a bidirectional long short-term memory (BiLSTM) network, and inputting the input fragments into the joint learning framework to obtain a feature space Z;
the clustering module is used for realizing soft-assignment clustering of the feature space Z by utilizing a regularization term based on relative entropy, and obtaining a clustering result after iterative updating;
and the driving condition construction module is used for classifying the input fragments according to the corresponding relation between the clustering result and the input fragments to obtain a plurality of types of fragment libraries, and selecting the input fragments from the various fragment libraries to form the driving condition.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the construction method when executing the computer program.
The speed data and the GPS data in the present invention are derived from the vehicle driving data of the on-board diagnostic system.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein, and any reference signs in the claims are not intended to be construed as limiting the claim concerned.
Furthermore, it should be understood that although the present description refers to embodiments, not every embodiment may contain only a single embodiment, and such description is for clarity only, and those skilled in the art should integrate the description, and the embodiments may be combined as appropriate to form other embodiments understood by those skilled in the art.

Claims (10)

1. A dynamic driving condition construction method comprises the following steps:
the method comprises the following steps: acquiring speed data of a vehicle, preprocessing the speed data, and generating an input fragment X;
step two: constructing a joint learning framework based on a deep neural network and a bidirectional long short-term memory network, and inputting the input segments into the joint learning framework to obtain a feature space Z;
step three: soft distribution clustering of the characteristic space Z is realized by utilizing a regularization item based on relative entropy, and a clustering result is obtained after iterative updating;
step four: and classifying the input fragments according to the corresponding relation between the clustering result and the input fragments to obtain various fragment libraries, and selecting the input fragments from the various fragment libraries to form the driving working condition.
2. The dynamic running condition construction method according to claim 1, characterized in that: in the first step, when the speed data is preprocessed, invalid data is removed and missing values are filled; micro-trip segments are extracted from the speed data to generate a micro-trip library; the micro-trip library is interpolated to obtain a library of equal-length sequences, and the equal-length sequences are normalized to obtain the input fragments.
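A minimal sketch of the preprocessing in claim 2 (micro-trip extraction, interpolation to equal length, normalization); the idle threshold, minimum trip length, and resampled length below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def extract_micro_trips(speed, idle_thresh=0.5, min_len=20):
    """Split a speed trace into micro-trips: segments starting and ending at idle.

    idle_thresh (speed units) and min_len (samples) are illustrative parameters.
    """
    idle = speed <= idle_thresh
    trips, start = [], None
    for t in range(1, len(speed)):
        if idle[t - 1] and not idle[t]:      # idle -> moving: a trip starts
            start = t - 1
        elif start is not None and idle[t]:  # moving -> idle: the trip ends
            if t - start >= min_len:
                trips.append(speed[start:t + 1])
            start = None
    return trips

def to_equal_length(trips, n_points=128):
    """Linearly interpolate every micro-trip onto a common length."""
    grid = np.linspace(0.0, 1.0, n_points)
    return np.stack([np.interp(grid, np.linspace(0.0, 1.0, len(tr)), tr)
                     for tr in trips])

def normalize(segments):
    """Scale each segment to [0, 1] to form the input fragments X."""
    lo = segments.min(axis=1, keepdims=True)
    hi = segments.max(axis=1, keepdims=True)
    return (segments - lo) / np.maximum(hi - lo, 1e-9)
```

Invalid-data removal and missing-value filling would precede `extract_micro_trips` and depend on the OBD data format, so they are omitted here.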
3. The dynamic running condition construction method according to claim 1, characterized in that: in step two, when the input segments are input into the joint learning framework, the framework comprises an autoencoder consisting of an encoder and a decoder, and the encoder processes the input segments sequentially through a deep neural network and a bidirectional long short-term memory network;
the deep neural network learns the short-time-scale waveforms in the input segment and extracts their local features;
the bidirectional long short-term memory network learns the temporal relations between waveforms across time scales in the input segment and extracts the global features of the segment, forming the feature space Z;
the decoder reconstructs the feature space by upsampling and deconvolution to form the reconstructed segment X';
the autoencoder is pre-trained so that the mean square error between the reconstructed segment X' output by the decoder and the input segment is minimized:

$$\mathrm{Loss}_{ae} = \frac{1}{n} \sum_{i=1}^{n} \left\| x_i - x_i' \right\|^{2}.$$
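The pretraining objective can be illustrated with a toy linear autoencoder trained by gradient descent on the same mean-squared-error loss; this is a stand-in for the patent's DNN + BiLSTM architecture, not a reproduction of it:

```python
import numpy as np

# Toy stand-in for the DNN + BiLSTM autoencoder: a linear encoder/decoder
# pair trained on Loss_ae = (1/n) * sum_i ||x_i - x_i'||^2, the mean squared
# error between input fragments X and reconstructions X'.
rng = np.random.default_rng(0)
n, d, h = 64, 32, 8                          # fragments, fragment length, latent size
X = rng.normal(size=(n, d))
W_enc = rng.normal(scale=0.1, size=(d, h))   # encoder weights -> feature space Z
W_dec = rng.normal(scale=0.1, size=(h, d))   # decoder weights -> reconstruction X'

def loss_ae(X, W_enc, W_dec):
    X_rec = X @ W_enc @ W_dec
    return float(np.mean(np.sum((X - X_rec) ** 2, axis=1)))

loss_before = loss_ae(X, W_enc, W_dec)
lr = 1e-3
for _ in range(200):
    Z = X @ W_enc                    # feature space Z
    X_rec = Z @ W_dec                # reconstructed fragments X'
    G = 2.0 * (X_rec - X) / n        # gradient of the loss w.r.t. X_rec
    W_dec = W_dec - lr * (Z.T @ G)
    W_enc = W_enc - lr * (X.T @ (G @ W_dec.T))
loss_after = loss_ae(X, W_enc, W_dec)
```

With a small learning rate the reconstruction loss decreases steadily, which is all the pretraining stage requires before the clustering layer is attached.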
4. The method of constructing a dynamic running condition according to claim 3, characterized in that: in the third step, when soft-assignment clustering of the feature sequences is realized using the regularization term based on relative entropy, the joint learning framework further comprises a temporal clustering layer for clustering the feature space; the encoder and the temporal clustering layer are updated iteratively until a stable result is obtained, and the input segments are finally clustered into the segment libraries

$$\{ D_1, D_2, \dots, D_{k_0} \},$$

where $k_0$ is the optimal cluster number; the clustering comprises the following steps:
step 41: computing the Euclidean distance $d_{ij} = \| z_i - c_j \|_2$ from each element $z_i$ of the feature space to the cluster center $c_j$;
step 42: normalizing the distances $d_{ij}$ into a probability distribution with the Student's t-distribution, the probability that feature vector $z_i$ belongs to the j-th cluster being

$$q_{ij} = \frac{\left( 1 + d_{ij}^{2}/\alpha \right)^{-\frac{\alpha+1}{2}}}{\sum_{j'} \left( 1 + d_{ij'}^{2}/\alpha \right)^{-\frac{\alpha+1}{2}}},$$

wherein a larger $q_{ij}$ means that the feature vector $z_i$ is closer to the cluster center and more likely to belong to the j-th cluster, and $\alpha$ is the degree of freedom of the Student's t-distribution;
step 43: setting the target distribution $p_{ij}$ to a delta distribution for data points above a confidence threshold and ignoring the remaining values, the sharpened target being

$$p_{ij} = \frac{q_{ij}^{2} / \sum_{i} q_{ij}}{\sum_{j'} \left( q_{ij'}^{2} / \sum_{i} q_{ij'} \right)};$$

step 44: setting the objective of iterative training to minimize the relative entropy between the probability distribution $q_{ij}$ and the target distribution $p_{ij}$:

$$\mathrm{Loss}_C = \mathrm{KL}(P \,\|\, Q) = \sum_i \sum_j p_{ij} \log \frac{p_{ij}}{q_{ij}};$$

step 45: the total loss is $\mathrm{Loss}_{total} = \mathrm{Loss}_C + \lambda \, \mathrm{Loss}_{ae}$, where $\lambda$ is a proportionality coefficient and $\mathrm{Loss}_C$ acts as a regularization term preventing the encoder's feature extraction from overfitting.
5. The dynamic running condition construction method according to claim 4, characterized in that: the optimal cluster number is selected according to the Davies-Bouldin index (DBI), comprising the following steps:
setting a value of k and using it to train the encoding-decoding and clustering networks;
calculating the DBI of the clustering result for each value of k:

$$\mathrm{DBI} = \frac{1}{k} \sum_{i=1}^{k} \max_{j \neq i} \frac{\bar{S}_i + \bar{S}_j}{\left\| c_i - c_j \right\|_2},$$

wherein k denotes the number of clusters and $\|c_i - c_j\|_2$ denotes the Euclidean distance between the centroids of cluster i and cluster j;

$$\bar{S}_i = \left( \frac{1}{M_i} \sum_{s=1}^{M_i} \left\| X_{is} - c_i \right\|^{p} \right)^{1/p}$$

denotes the average distance from the feature vectors in cluster i to its centroid, i.e. the dispersion of the data in cluster i, and

$$\bar{S}_j = \left( \frac{1}{M_j} \sum_{s=1}^{M_j} \left\| X_{js} - c_j \right\|^{p} \right)^{1/p}$$

denotes the average distance from the feature vectors in cluster j to its centroid, i.e. the dispersion of the data in cluster j;
$M_i$ denotes the number of data points in cluster i; $X_{is}$ denotes the s-th data point in cluster i and $X_{js}$ the s-th data point in cluster j; $c_i$ denotes the centroid of cluster i and $c_j$ the centroid of cluster j; p is usually taken as 2;
selecting the value of k at which the DBI first reaches a local minimum as the optimal cluster number $k_0$.
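A minimal NumPy sketch of the Davies-Bouldin selection, assuming the feature vectors, labels, and centroids have already been produced by the clustering network:

```python
import numpy as np

def davies_bouldin(Z, labels, centers, p=2):
    """Davies-Bouldin index of one clustering; lower is better."""
    k = len(centers)
    # S_i: p-norm average scatter of each cluster around its centroid
    S = np.array([
        np.mean(np.linalg.norm(Z[labels == i] - centers[i], axis=1) ** p) ** (1.0 / p)
        for i in range(k)
    ])
    dbi = 0.0
    for i in range(k):
        ratios = [(S[i] + S[j]) / np.linalg.norm(centers[i] - centers[j])
                  for j in range(k) if j != i]
        dbi += max(ratios)
    return dbi / k

def first_local_minimum(ks, dbis):
    """Pick the k at the first local minimum of the DBI curve (the patent's rule)."""
    for t in range(1, len(dbis) - 1):
        if dbis[t] <= dbis[t - 1] and dbis[t] <= dbis[t + 1]:
            return ks[t]
    return ks[int(np.argmin(dbis))]   # fallback: global minimum
```

In practice one would train the networks once per candidate k, collect the DBI values, and feed them to `first_local_minimum`.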
6. The method of constructing a dynamic driving pattern according to claim 1, wherein: in the third step, when soft-assignment clustering of the feature sequences is realized using the regularization term based on relative entropy, the cluster centers are initialized with the K-means algorithm.
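Claim 6's K-means initialization of the cluster centers can be sketched with plain Lloyd iterations (a minimal version; in practice k-means++ seeding or a library implementation would be typical):

```python
import numpy as np

def kmeans_centers(Z, k, n_iter=50, seed=0):
    """Plain Lloyd's k-means on the feature space Z, used only to
    initialize the cluster centers c_j before soft-assignment training."""
    rng = np.random.default_rng(seed)
    C = Z[rng.choice(len(Z), size=k, replace=False)]   # pick k distinct points
    for _ in range(n_iter):
        # assign each feature vector to its nearest center
        labels = np.argmin(((Z[:, None] - C[None]) ** 2).sum(-1), axis=1)
        # move each center to the mean of its assigned vectors
        for j in range(k):
            if np.any(labels == j):
                C[j] = Z[labels == j].mean(axis=0)
    return C, labels
```

The returned centers seed the temporal clustering layer; the soft-assignment training then refines them jointly with the encoder.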
7. The dynamic running condition construction method according to claim 1, characterized in that: in step four, when input fragments are selected from the fragment libraries to form a driving condition, the feature vectors in the clustered feature space carry class labels, and the feature vectors under each label are sorted by the ratio of intra-class distance to inter-class distance to determine their priority; the number of fragments selected from each library is determined by the ratio of that library's duration to the total duration of all libraries, and input fragments are then selected according to the feature-vector priorities to form the driving condition.
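The proportional selection of claim 7 can be sketched as follows; the class labels, priority ranks (smaller is better), durations, and the default target length are all illustrative assumptions:

```python
def select_fragments(libs, cycle_len=1250.0):
    """Assemble a driving cycle from per-class fragment libraries.

    libs: {label: [(fragment, priority_rank, duration_s), ...]} where a
    smaller priority rank means a better intra/inter-class distance ratio.
    Each class receives a time budget proportional to its share of the
    total library duration; within a class, best-ranked fragments go first.
    """
    totals = {c: sum(dur for _, _, dur in v) for c, v in libs.items()}
    grand = sum(totals.values())
    cycle = []
    for c, v in libs.items():
        budget = cycle_len * totals[c] / grand
        used = 0.0
        for frag, _, dur in sorted(v, key=lambda t: t[1]):  # best rank first
            if used + dur > budget:
                break
            cycle.append(frag)
            used += dur
    return cycle
```

The greedy fill is one simple way to honor both the time proportions and the priorities; the patent does not prescribe a particular packing strategy.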
8. The dynamic running condition construction method according to claim 1, characterized in that: the constructed running condition is evaluated by two methods, the relative error and the joint speed-acceleration distribution.
9. A dynamic driving condition construction system, characterized by comprising:
the data acquisition module is used for acquiring the speed data of the vehicle, preprocessing the speed data and generating an input fragment X;
the coding module is used for constructing a joint learning framework based on a deep neural network and a bidirectional long short-term memory (BiLSTM) network, and inputting the input segments into the joint learning framework to obtain a feature space Z;
the clustering module is used for realizing soft-assignment clustering of the feature space Z by utilizing a regularization term based on relative entropy, and obtaining a clustering result after iterative updating;
and the driving condition construction module is used for classifying the input fragments according to the corresponding relation between the clustering result and the input fragments to obtain a plurality of types of fragment libraries, and selecting the input fragments from the various fragment libraries to form the driving condition.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the building method according to any one of claims 1 to 8 when executing the computer program.
CN202011320811.1A 2020-11-23 2020-11-23 Dynamic driving condition construction method, system and equipment Active CN112434735B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011320811.1A CN112434735B (en) 2020-11-23 2020-11-23 Dynamic driving condition construction method, system and equipment


Publications (2)

Publication Number Publication Date
CN112434735A true CN112434735A (en) 2021-03-02
CN112434735B CN112434735B (en) 2022-09-06

Family

ID=74692957


Country Status (1)

Country Link
CN (1) CN112434735B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113221975A (en) * 2021-04-26 2021-08-06 中国科学技术大学先进技术研究院 Working condition construction method based on improved Markov analysis method and storage medium
CN113469240A (en) * 2021-06-29 2021-10-01 中国科学技术大学 Driving condition construction method based on shape similarity and storage medium
CN113627610A (en) * 2021-08-03 2021-11-09 北京百度网讯科技有限公司 Deep learning model training method for meter box prediction and meter box prediction method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2420204A2 (en) * 2010-08-19 2012-02-22 Braun GmbH Method for operating an electric appliance and electric appliance
CN107914714A (en) * 2017-11-16 2018-04-17 北京经纬恒润科技有限公司 The display methods and device of a kind of vehicle running state
CN109711459A (en) * 2018-12-24 2019-05-03 广东德诚科教有限公司 User individual action estimation method, apparatus, computer equipment and storage medium
CN110985651A (en) * 2019-12-04 2020-04-10 北京理工大学 Automatic transmission multi-parameter fusion gear shifting strategy based on prediction
CN111832225A (en) * 2020-07-07 2020-10-27 重庆邮电大学 Method for constructing driving condition of automobile
US20200348676A1 (en) * 2019-04-30 2020-11-05 Baidu Usa Llc Neural network approach for parameter learning to speed up planning for complex driving scenarios


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XINZHENG NIU ET AL.: "Label-Based Trajectory Clustering in Complex Road Networks", 《IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS》 *
SHI Jun: "Research on Vehicle Driving Recognition Based on a Neural Network Algorithm", 《Computer and Digital Engineering》 *
GAO Jianping et al.: "Development and Accuracy Study of Vehicle Driving Cycles", 《Journal of Zhejiang University (Engineering Science)》 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113221975A (en) * 2021-04-26 2021-08-06 中国科学技术大学先进技术研究院 Working condition construction method based on improved Markov analysis method and storage medium
CN113469240A (en) * 2021-06-29 2021-10-01 中国科学技术大学 Driving condition construction method based on shape similarity and storage medium
CN113469240B (en) * 2021-06-29 2024-04-02 中国科学技术大学 Driving condition construction method based on shape similarity and storage medium
CN113627610A (en) * 2021-08-03 2021-11-09 北京百度网讯科技有限公司 Deep learning model training method for meter box prediction and meter box prediction method

Also Published As

Publication number Publication date
CN112434735B (en) 2022-09-06

Similar Documents

Publication Publication Date Title
CN112434735B (en) Dynamic driving condition construction method, system and equipment
CN110163439A (en) A kind of city size taxi trajectory predictions method based on attention mechanism
CN111709549B (en) SVD-PSO-LSTM-based short-term traffic flow prediction navigation reminding method
CN111832814A (en) Air pollutant concentration prediction method based on graph attention machine mechanism
CN111832225A (en) Method for constructing driving condition of automobile
CN109239669B (en) Self-evolution radar target detection algorithm based on deep learning
CN111598325A (en) Traffic speed prediction method based on hierarchical clustering and hierarchical attention mechanism
CN115071762B (en) Pedestrian trajectory prediction method, model and storage medium under urban scene
CN112884014A (en) Traffic speed short-time prediction method based on road section topological structure classification
CN114493191A (en) Driving behavior modeling analysis method based on network appointment data
CN115422747A (en) Method and device for calculating discharge amount of pollutants in tail gas of motor vehicle
Motallebiaraghi et al. High-fidelity modeling of light-duty vehicle emission and fuel economy using deep neural networks
CN105890600A (en) Subway passenger position inferring method based on mobile phone sensors
Pei et al. Uj-flac: Unsupervised joint feature learning and clustering for dynamic driving cycles construction
Wang et al. Predictability of Vehicle Fuel Consumption Using LSTM: Findings from Field Experiments
CN110944295B (en) Position prediction method, position prediction device, storage medium and terminal
Li et al. Traffic accident analysis based on C4.5 algorithm in WEKA
CN116663742A (en) Regional capacity prediction method based on multi-factor and model fusion
Guo et al. Application of PCA-K-means++ combination model to construction of light vehicle driving conditions in intelligent traffic
CN112991765B (en) Method, terminal and storage medium for updating road high-emission source recognition model
CN115909717A (en) Expressway short-term traffic flow prediction method based on deep learning
Ehlers Traffic queue length and pressure estimation for road networks with geometric deep learning algorithms
Kang et al. Vehicle Trajectory Clustering in Urban Road Network Environment Based on Doc2Vec Model
CN113051808A (en) Method and apparatus for testing a machine
Ranjbar et al. Scene novelty prediction from unsupervised discriminative feature learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant