CN112434735B - Dynamic driving condition construction method, system and equipment - Google Patents

Dynamic driving condition construction method, system and equipment

Info

Publication number: CN112434735B (granted publication of application CN202011320811.1A)
Authority: CN (China)
Prior art keywords: input, clustering, cluster, segment, fragment
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202011320811.1A
Other languages: Chinese (zh)
Other versions: CN112434735A
Inventors: 康宇, 裴丽红, 许镇义, 赵振怡, 刘斌琨, 曹洋, 吕文君
Current and original assignee: University of Science and Technology of China (USTC) (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by University of Science and Technology of China (USTC)
Priority to CN202011320811.1A
Publication of CN112434735A; application granted and published as CN112434735B

Classifications

    • G06F18/23213 — Non-hierarchical clustering techniques using statistics or function optimisation with a fixed number of clusters, e.g. K-means clustering
    • G06F18/213 — Feature extraction, e.g. by transforming the feature space; summarisation; mappings, e.g. subspace methods
    • G06F18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/24 — Classification techniques
    • G06N3/044 — Recurrent networks, e.g. Hopfield networks
    • G06N3/045 — Combinations of networks
    • Y02T10/84 — Data processing systems or methods, management, administration (technologies aiming to reduce greenhouse-gas emissions in road transport)


Abstract

The invention discloses a method for constructing a dynamic driving condition, comprising the following steps: acquiring the speed data of a vehicle, preprocessing the speed data, and generating input segments X; constructing a joint learning framework based on a deep neural network and a bidirectional long short-term memory network, and feeding the input segments into the joint learning framework to obtain a feature space Z; realizing soft-assignment clustering of the feature space Z using a regularization term based on relative entropy, and obtaining a clustering result after iterative updating; and classifying the input segments according to the correspondence between the clustering result and the input segments to obtain segment libraries of multiple classes, and selecting input segments from each class of segment library to form the driving condition.

Description

Dynamic driving condition construction method, system and equipment
Technical Field
The invention relates to the technical field of environmental monitoring, and in particular to a method, a system and a device for constructing a dynamic driving condition.
Background
According to the Technical Policy for the Prevention and Control of Motor Vehicle Pollution revised under the organization of the Ministry of Environmental Protection, further improvement of environmental quality is to be taken as the core of building a motor-vehicle pollution prevention and control system, promoting the systematization, scientification and informatization of motor-vehicle pollution prevention and control work. The technical policy makes clear that the emission limits for motor-vehicle pollutants such as carbon monoxide (CO), total hydrocarbons (THC), nitrogen oxides (NOx) and particulate matter (PM) will be tightened step by step. According to the China Motor Vehicle Industry Market Prospect and Investment Opportunity Research Report 2020-2025, by the end of 2019 the number of motor vehicles in China had reached 348 million; China is the world's largest automobile consumption market and producer, and its motor-vehicle population has remained at the forefront of the world year after year. With the rapid growth of the vehicle population, the problems of urban traffic congestion and vehicle exhaust emission are becoming increasingly serious. Motor-vehicle pollutant emission is mainly influenced by the vehicle's driving conditions; for example, long idling times and overly frequent acceleration and deceleration under traffic congestion cause higher exhaust emissions. Driving-condition construction is a method of building an automobile driving profile based on typical traffic conditions, and it plays an important role in the evaluation of automobile emissions, economy and driving range.
Current driving-condition construction methods fall mainly into two classes: Markov analysis and cluster analysis. The Markov analysis method regards the speed-time relation of the driving process as a random process and, exploiting the property that the state at time t depends only on the state at time t-1 (i.e. the absence of after-effect, the Markov property), combines different model events into a complete driving process. The cluster analysis method divides all micro-trip segments into several classes according to their degree of similarity and selects segments from each class of segment library according to certain principles to form the final condition curve. Compared with Markov analysis, cluster analysis can obtain conditions of different types, is closer to actual road conditions, and is simple and feasible.
Actual traffic conditions and road characteristics differ across the regions of a city, which influences the driving cycle of a vehicle; meanwhile, the development of new-energy vehicles makes vehicle-type data ever richer. The manually designed features adopted by traditional methods characterize the spatial speed-time distribution of the driving data and treat it as static data, ignoring its inherent dynamic characteristics and time dependence, which leads to low precision and insufficient robustness.
Disclosure of Invention
In order to solve the technical problems, the invention provides a method, a system and equipment for constructing a dynamic driving condition.
In order to solve the technical problems, the invention adopts the following technical scheme:
a dynamic driving condition construction method comprises the following steps:
the method comprises the following steps: acquiring speed data of a vehicle, preprocessing the speed data and generating an input fragment X;
step two: constructing a joint learning framework based on a deep neural network and a bidirectional long-term and short-term memory network, and inputting input segments into the joint learning framework to obtain a feature space Z;
step three: soft distribution clustering of the characteristic space Z is realized by utilizing a regularization item based on relative entropy, and a clustering result is obtained after iterative updating;
step four: and classifying the input fragments according to the corresponding relation between the clustering result and the input fragments to obtain various fragment libraries, and selecting the input fragments from the various fragment libraries to form the driving working condition.
Specifically, in step one, preprocessing the speed data comprises removing invalid data and filling missing values; extracting micro-trip segments from the speed data to generate a micro-trip segment library; interpolating the micro-trip segment library to obtain a library of equal-length sequences; and normalizing the equal-length sequence library to obtain the input segments.
Specifically, in step two, when the input segments are fed into the joint learning framework, the joint learning framework comprises an autoencoder consisting of an encoder and a decoder, and the encoder processes the input segments sequentially through a deep neural network and a bidirectional long short-term memory (Bi-LSTM) network;
the deep neural network learns the short-time-scale waveforms in the input segments and extracts their local features;
the Bi-LSTM network learns the temporal connections between waveforms across time scales in the input segments and extracts the global features of the input segments, thereby forming the feature space Z;
the decoder reconstructs the feature space using upsampling and deconvolution to form a reconstructed segment X';
the autoencoder is pre-trained so that the mean square error between the reconstructed segment X' output by the decoder and the input segment is minimized:

$$Loss_{ae} = \frac{1}{n}\sum_{i=1}^{n}\left\|x_i - x'_i\right\|^2$$
Specifically, in step three, when soft-assignment clustering of the feature space is realized using the relative-entropy regularization term, the joint learning framework further comprises a temporal clustering layer that clusters the feature space; the encoder and the temporal clustering layer are updated iteratively until a stable result is obtained, and the input segments are finally clustered into a segment library of multiple classes

$$\{X_1, X_2, \ldots, X_{k_0}\}$$

where k_0 is the optimal cluster number. This comprises the following steps:
step 41: compute the Euclidean distance d_ij from each element z_i of the feature space to each cluster center c_j;
step 42: normalize the distances d_ij into a probability distribution using the Student's t-distribution, so that the probability that feature vector z_i belongs to the j-th cluster is

$$q_{ij} = \frac{\left(1 + \|z_i - c_j\|^2/\alpha\right)^{-\frac{\alpha+1}{2}}}{\sum_{j'}\left(1 + \|z_i - c_{j'}\|^2/\alpha\right)^{-\frac{\alpha+1}{2}}}$$

where a larger q_ij means that the feature vector z_i is closer to the cluster center c_j and has a higher probability of belonging to the j-th cluster, and α is the degree of freedom of the Student's t-distribution;
step 43: set the target distribution p_ij to a delta distribution over the data points whose assignment confidence exceeds a threshold δ_0, and ignore the remaining values:

$$p_{ij} = \begin{cases}1, & j = \arg\max_{j'} q_{ij'}\ \text{and}\ q_{ij} > \delta_0\\[2pt] 0, & \text{otherwise}\end{cases}$$

step 44: set the objective of iterative training to minimizing the relative entropy between the probability distribution q_ij and the target distribution p_ij:

$$Loss_C = KL(P\,\|\,Q) = \sum_i\sum_j p_{ij}\log\frac{p_{ij}}{q_{ij}}$$

step 45: the total loss is Loss_total = Loss_C + λ·Loss_ae, where λ is the proportionality coefficient and Loss_C acts as a regularization term that prevents the encoder's feature extraction from overfitting.
Specifically, the optimal cluster number is selected according to the Davies-Bouldin index (DBI), as follows:
set a value of k and substitute it into the encoder-decoder and clustering networks for training;
compute the DBI of the clustering result for each value of k:

$$DBI = \frac{1}{k}\sum_{i=1}^{k}\max_{j\neq i}\frac{S_i + S_j}{\|c_i - c_j\|_2}$$

where k is the number of clusters and ‖c_i − c_j‖_2 is the Euclidean distance between the centroid of cluster i and the centroid of cluster j;

$$S_i = \left(\frac{1}{M_i}\sum_{s=1}^{M_i}\|X_{is} - c_i\|^p\right)^{1/p}$$

is the average distance from the feature vectors in cluster i to its centroid and represents the degree of dispersion of the data in cluster i; S_j, defined analogously, is the average distance from the feature vectors in cluster j to its centroid and represents the degree of dispersion of the data in cluster j;
M_i is the number of data points in cluster i; X_is is the s-th data point in cluster i, X_js is the s-th data point in cluster j, c_i is the centroid of cluster i, and c_j is the centroid of cluster j; p is typically 2;
select the value of k at which the DBI first reaches a local minimum as the optimal cluster number k_0.
Specifically, in step three, when soft-assignment clustering of the feature space is realized using the relative-entropy regularization term, the cluster centers are initialized with the K-means algorithm.
Specifically, in step four, when input segments are selected from each class of segment library to form the driving condition, every feature vector of the feature space in the clustering result carries a class label; the feature vectors under each label are sorted by the ratio of their intra-class distance to their inter-class distance, which determines the priority of the feature vectors under each label; the number of segments selected from each segment library is determined by the ratio of that library's total duration to the duration of the whole segment library, and input segments are then selected according to the priority of the feature vectors under each label to form the driving condition.
Specifically, the constructed driving condition is evaluated by two methods: relative error and the speed-acceleration joint distribution.
A dynamic driving condition construction system comprising:
the data acquisition module, which acquires and preprocesses the speed data of a vehicle to generate input segments X;
the encoding module, which constructs a joint learning framework based on a deep neural network and a bidirectional long short-term memory network and feeds the input segments into the joint learning framework to obtain a feature space Z;
the clustering module, which realizes soft-assignment clustering of the feature space Z using a relative-entropy regularization term and obtains a clustering result after iterative updating;
and the driving condition construction module, which classifies the input segments according to the correspondence between the clustering result and the input segments to obtain segment libraries of multiple classes, and selects input segments from each class of segment library to form the driving condition.
A computer device comprises a memory, a processor and a computer program stored in the memory and executable on the processor; the processor implements the construction method described above when executing the computer program.
Compared with the prior art, the invention has the beneficial technical effects that:
different from the traditional working condition construction method, the invention adopts an unsupervised combined feature learning and clustering framework, takes the continuity of the driving data into consideration of the time dependence of the dynamic data, does not use any manual design feature expression and fragment selection in the driving working condition construction process, and can realize the working condition model construction with higher precision and robustness on the real driving data.
Drawings
FIG. 1 is a schematic flow chart of a method for constructing the working conditions according to the present invention;
FIG. 2 is a schematic diagram of the structure of the joint learning framework of the present invention;
FIG. 3 is a graph of a visualization of the results of the clustering of the present invention;
FIG. 4 is a diagram of a model of operating conditions constructed in accordance with the present invention;
FIG. 5 is a visual representation of the driving speed of a test vehicle according to the present invention;
FIG. 6 is a visualization of the estimated CO emission status of the present invention;
FIG. 7 shows the calculation coefficients for each pollutant.
Detailed Description
A preferred embodiment of the present invention will be described in detail below with reference to the accompanying drawings.
As shown in fig. 1 and 2, a dynamic driving condition construction method includes the following steps:
s1: and acquiring the speed data of the vehicle, preprocessing the speed data and generating an input fragment X.
Specifically, in the first step, when the speed data is preprocessed, invalid data in the speed data is removed and missing values are filled; extracting micro-stroke fragments from the speed data to generate a micro-stroke fragment library; and carrying out interpolation processing on the micro-stroke fragment library to obtain an equal-length sequence library, and carrying out normalization processing on the equal-length sequence library to obtain an input fragment.
When interpolating the micro-trip segment library, each one-column sequence is expanded into a two-column input by applying two methods: cubic spline interpolation and linear interpolation.
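The two-channel resampling step can be sketched as below. The target length `n = 128`, the min-max normalisation scheme and the function name are illustrative assumptions, not taken from the patent:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def to_equal_length(seg, n=128):
    """Resample one micro-trip to n points twice -- once with cubic-spline and
    once with linear interpolation -- and stack the two resampled series as a
    two-column input; finally min-max normalise (assumed normalisation)."""
    t = np.arange(len(seg), dtype=float)
    t_new = np.linspace(0.0, t[-1], n)
    cubic = CubicSpline(t, seg)(t_new)        # cubic-spline column
    linear = np.interp(t_new, t, seg)         # linear-interpolation column
    x = np.stack([cubic, linear], axis=1)     # shape (n, 2)
    return (x - x.min()) / (x.max() - x.min() + 1e-12)
```

A segment of any original length thus becomes a fixed-size (n, 2) array ready for batching.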
A micro-trip segment consists of an idle portion (speed held at 0) and a kinematic portion (speed always greater than 0) and does not exceed 180 s; each extracted micro-trip runs from one idle state to the end of the next idle state.
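A minimal sketch of this segmentation on a 1 Hz speed trace is shown below; applying the 180 s limit to the whole trip (rather than to the idle portion only) and the exact boundary handling are assumptions:

```python
import numpy as np

def extract_micro_trips(speed, max_len=180):
    """Cut a 1 Hz speed trace into micro-trips, each running from the start of
    an idle (v = 0) state to the beginning of the next idle state; trips longer
    than max_len samples are discarded (assumed interpretation of the limit)."""
    trips, start = [], None
    for t in range(1, len(speed)):
        if speed[t] > 0 and speed[t - 1] == 0 and start is None:
            start = t - 1                      # keep the leading idle sample
        elif speed[t] == 0 and speed[t - 1] > 0 and start is not None:
            trip = np.asarray(speed[start:t + 1], dtype=float)
            if len(trip) <= max_len:
                trips.append(trip)
            start = None
    return trips
```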
S2: construct a joint learning framework based on a deep neural network and a bidirectional long short-term memory network, and feed the input segments into the joint learning framework to obtain the feature space Z.
Specifically, in step two, when the input segments are fed into the joint learning framework, the joint learning framework comprises an autoencoder consisting of an encoder and a decoder, and the encoder processes the input segments sequentially through a deep neural network and a bidirectional long short-term memory (Bi-LSTM) network;
the deep neural network learns the short-time-scale waveforms in the input segments and extracts their local features;
the Bi-LSTM network learns the temporal connections between waveforms across time scales in the input segments and extracts the global features of the input segments, thereby forming the feature space Z;
the decoder reconstructs the feature space using upsampling and deconvolution to form a reconstructed segment X';
the autoencoder is pre-trained so that the mean square error between the reconstructed segment X' output by the decoder and the input segment is minimized:

$$Loss_{ae} = \frac{1}{n}\sum_{i=1}^{n}\left\|x_i - x'_i\right\|^2$$

The purpose of this step is to handle the time dependence of the dynamic data and to realize nonlinear temporal dimensionality reduction.
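A minimal PyTorch sketch of such a CNN + Bi-LSTM autoencoder is shown below; the layer sizes, kernel widths and pooling factor are illustrative assumptions, since the patent does not specify them:

```python
import torch
import torch.nn as nn

class TemporalAutoencoder(nn.Module):
    """Hypothetical sketch: 1-D CNN for local features, Bi-LSTM for global
    features, then upsampling + transposed convolution to reconstruct."""
    def __init__(self, in_channels=2, hidden=32):
        super().__init__()
        self.cnn = nn.Sequential(               # short-time-scale local features
            nn.Conv1d(in_channels, 16, kernel_size=5, padding=2),
            nn.LeakyReLU(),
            nn.MaxPool1d(2),                    # halves the time axis
        )
        self.lstm = nn.LSTM(16, hidden, batch_first=True,
                            bidirectional=True)  # cross-time-scale structure
        self.decoder = nn.Sequential(            # upsample + deconvolution
            nn.Upsample(scale_factor=2),
            nn.ConvTranspose1d(2 * hidden, in_channels, kernel_size=5, padding=2),
        )

    def forward(self, x):                        # x: (batch, channels, time)
        h = self.cnn(x)                          # (batch, 16, time/2)
        z, _ = self.lstm(h.transpose(1, 2))      # (batch, time/2, 2*hidden)
        x_rec = self.decoder(z.transpose(1, 2))  # back to (batch, channels, time)
        return z, x_rec

model = TemporalAutoencoder()
x = torch.randn(4, 2, 128)                       # 4 segments, 2 channels, 128 steps
z, x_rec = model(x)
loss = nn.functional.mse_loss(x_rec, x)          # pre-training objective Loss_ae
```

The Bi-LSTM output `z` plays the role of the feature space Z, and `loss` is the mean-square reconstruction error used for pre-training.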
Specifically, in step three, when soft-assignment clustering of the feature space is realized using the relative-entropy regularization term, the cluster centers are initialized with the K-means algorithm.
S3: realize soft-assignment clustering of the feature space Z using the relative-entropy regularization term, and obtain a clustering result after iterative updating.
Specifically, in step three, when soft-assignment clustering of the feature space is realized using the relative-entropy regularization term, the joint learning framework further comprises a temporal clustering layer that clusters the feature space; the encoder and the temporal clustering layer are updated iteratively until a stable result is obtained, and the input segments are finally clustered into a segment library of multiple classes

$$\{X_1, X_2, \ldots, X_{k_0}\}$$

where k_0 is the optimal cluster number. This comprises the following steps:
step 41: compute the Euclidean distance d_ij from each element z_i of the feature space to each cluster center c_j;
step 42: normalize the distances d_ij into a probability distribution using the Student's t-distribution, so that the probability that feature vector z_i belongs to the j-th cluster is

$$q_{ij} = \frac{\left(1 + \|z_i - c_j\|^2/\alpha\right)^{-\frac{\alpha+1}{2}}}{\sum_{j'}\left(1 + \|z_i - c_{j'}\|^2/\alpha\right)^{-\frac{\alpha+1}{2}}}$$

where a larger q_ij means that the feature vector z_i is closer to the cluster center c_j and has a higher probability of belonging to the j-th cluster, and α is the degree of freedom of the Student's t-distribution;
step 43: set the target distribution p_ij to a delta distribution over the data points whose assignment confidence exceeds a threshold δ_0, and ignore the remaining values:

$$p_{ij} = \begin{cases}1, & j = \arg\max_{j'} q_{ij'}\ \text{and}\ q_{ij} > \delta_0\\[2pt] 0, & \text{otherwise}\end{cases}$$

step 44: set the objective of iterative training to minimizing the relative entropy between the probability distribution q_ij and the target distribution p_ij:

$$Loss_C = KL(P\,\|\,Q) = \sum_i\sum_j p_{ij}\log\frac{p_{ij}}{q_{ij}}$$

step 45: the total loss is Loss_total = Loss_C + λ·Loss_ae, where λ is the proportionality coefficient and Loss_C acts as a regularization term that prevents the encoder's feature extraction from overfitting. Because the autoencoder has already been pre-trained, only fine-tuning is required; in this embodiment the proportionality coefficient λ may be the constant 0.01.
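Steps 42-44 can be sketched in NumPy as follows; the threshold value 0.9 and the function names are assumptions, and the target distribution implements the delta-above-threshold rule described in step 43:

```python
import numpy as np

def soft_assign(Z, C, alpha=1.0):
    # Step 42: Student-t kernel over squared distances, row-normalised so each
    # row of q is a probability distribution over clusters
    d2 = ((Z[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)

def delta_target(q, threshold=0.9):
    # Step 43: one-hot (delta) target at the argmax cluster for confident
    # points; rows below the confidence threshold stay all-zero (ignored)
    p = np.zeros_like(q)
    rows = np.where(q.max(axis=1) > threshold)[0]
    p[rows, q[rows].argmax(axis=1)] = 1.0
    return p

def kl_loss(p, q):
    # Step 44: relative entropy KL(P || Q); 0*log(0) terms contribute nothing
    mask = p > 0
    return float((p[mask] * np.log(p[mask] / q[mask])).sum())
```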
The joint learning framework is trained with the speed data; that is, the encoder and the temporal clustering layer are updated iteratively until a stable result is obtained, as follows.

Input: the set of speed values of the vehicle micro-trip segments, i.e. the input segments X; training sample size n; number of pre-training iterations iteration0; number of optimization iterations iteration1; number of clusters k.

Output: the trained encoder-decoder network θ; the cluster centers C.

The specific procedure is:

1: initialize the learning parameters θ, the learning rate η and the momentum v;
2: for the i-th random selection of n training samples x (1 ≤ i ≤ iteration0) do {
3:   the encoder network outputs Z;
4:   the decoder network outputs X';
5:   compute the loss function Loss_ae(i) according to its formula;
6:   update the weight parameters θ_i ← θ_{i−1} − η·v, where v is the momentum-accumulated gradient of Loss_ae; }
7: end for
8: initialize the cluster centers C, the learning rate η, the exponential decay rate β_1 of the first-moment estimate, the exponential decay rate β_2 of the second-moment estimate, the constant ε, the first-order momentum term m_0 and the second-order momentum term v_0;
9: for the i-th random selection of n training samples x (1 ≤ i ≤ iteration1) do {
10:   compute the KL divergence Loss_C according to its formula;
11:   compute the total loss function Loss_total according to its formula;
12:   first-moment bias-corrected estimate: m̂ = m / (1 − β_1^i);
13:   second-moment bias-corrected estimate: v̂ = v / (1 − β_2^i);
14:   update the encoder-decoder network weights: θ ← θ − η·m̂ / (√v̂ + ε);
15:   update the cluster centers: C ← C − η·∂Loss_total/∂C; }
16: end for
17: output the trained encoder-decoder network θ and the cluster centers C.

The above pseudocode describes the training process of the joint learning framework: "for A do { B }" means that B is executed once for each iteration over A, and "end for" ends the loop.
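The bias-corrected moment estimates and weight update in the optimization loop above correspond to one step of the Adam optimizer, which can be sketched as (function name and default hyperparameters are illustrative assumptions):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, eta=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for parameters theta given gradient grad at step t."""
    m = beta1 * m + (1.0 - beta1) * grad          # first-order momentum term
    v = beta2 * v + (1.0 - beta2) * grad ** 2     # second-order momentum term
    m_hat = m / (1.0 - beta1 ** t)                # bias-corrected first moment
    v_hat = v / (1.0 - beta2 ** t)                # bias-corrected second moment
    theta = theta - eta * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

In the procedure above, the same update is applied to the encoder-decoder weights θ (and, with the gradient of the total loss, to the cluster centers C).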
Specifically, the optimal cluster number is selected according to the Davies-Bouldin index (DBI), as follows:
set a value of k and substitute it into the encoder-decoder and clustering networks for training;
compute the DBI of the clustering result for each value of k:

$$DBI = \frac{1}{k}\sum_{i=1}^{k}\max_{j\neq i}\frac{S_i + S_j}{\|c_i - c_j\|_2}$$

where k is the number of clusters and ‖c_i − c_j‖_2 is the Euclidean distance between the centroid of cluster i and the centroid of cluster j;

$$S_i = \left(\frac{1}{M_i}\sum_{s=1}^{M_i}\|X_{is} - c_i\|^p\right)^{1/p}$$

is the average distance from the feature vectors in cluster i to its centroid and represents the degree of dispersion of the data in cluster i; S_j, defined analogously, is the average distance from the feature vectors in cluster j to its centroid and represents the degree of dispersion of the data in cluster j;
M_i is the number of data points in cluster i; X_is is the s-th data point in cluster i, X_js is the s-th data point in cluster j, c_i is the centroid of cluster i, and c_j is the centroid of cluster j; p is usually 2;
select the value of k at which the DBI first reaches a local minimum as the optimal cluster number k_0.
A smaller DBI means a smaller intra-class distance and a larger inter-class distance, i.e. a better clustering effect.
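The DBI computation above can be sketched in NumPy as follows (a sketch under the p = 2 default; the function name is an assumption):

```python
import numpy as np

def davies_bouldin(Z, labels, centroids, p=2):
    """Davies-Bouldin index: mean over clusters of the worst-case ratio of
    combined dispersions to centroid separation."""
    k = len(centroids)
    # S_i: dispersion of cluster i around its centroid
    S = [(np.mean(np.linalg.norm(Z[labels == i] - centroids[i],
                                 axis=1) ** p)) ** (1.0 / p)
         for i in range(k)]
    worst = [max((S[i] + S[j]) / np.linalg.norm(centroids[i] - centroids[j])
                 for j in range(k) if j != i)
             for i in range(k)]
    return float(np.mean(worst))
```

In practice one would evaluate this over a range of k and pick the k at the first local minimum as k_0.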
S4: classify the input segments according to the correspondence between the clustering result and the input segments to obtain segment libraries of multiple classes, and select input segments from each class of segment library to form the driving condition.
Specifically, in step four, when input segments are selected from each class of segment library to form the driving condition, every feature vector of the feature space in the clustering result carries a class label; the feature vectors under each label are sorted by the ratio of their intra-class distance to their inter-class distance, which determines the priority of the feature vectors under each label; the number of segments selected from each segment library is determined by the ratio of that library's total duration to the duration of the whole segment library, and input segments are then selected according to the priority of the feature vectors under each label to form the driving condition.
Each feature vector corresponds to one input segment, matched through their respective sequence numbers; after clustering, each feature vector carries a class label, and since each feature vector corresponds to one input segment, the input segments can be divided into segment libraries of each class through these labels; the sorting rule for the feature vectors essentially determines the priority of the input segments within each class of segment library.
The number of segments to select from each class of library is determined by the ratio of the total duration of the input segments in that library to the duration of all input segments; the order in which input segments are selected from each library is determined by their priority within that library.
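The selection rule can be sketched as below; representing priority as a per-segment rank array and the greedy time-budget fill are illustrative assumptions:

```python
import numpy as np

def assemble_cycle(segments, labels, priority, target_time=1200):
    """Assemble a driving cycle: each class contributes time in proportion to
    its share of the total library duration, taking segments in priority order
    (lower rank = higher priority). Hypothetical sketch of the rule above."""
    durations = np.array([len(s) for s in segments], dtype=float)  # 1 Hz samples
    total = durations.sum()
    chosen = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        budget = durations[idx].sum() / total * target_time        # time share
        for i in idx[np.argsort(priority[idx])]:                   # best first
            if budget < durations[i]:
                break
            chosen.append(segments[i])
            budget -= durations[i]
    return np.concatenate(chosen) if chosen else np.array([])
```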
Specifically, the constructed driving condition is evaluated by two methods: the relative error (RE) and the speed-acceleration joint distribution (SAPD).
The invention uses the COPERT model to estimate single-vehicle emissions; specifically:
the exhaust emission factor of a single vehicle type is calculated with the COPERT III emission model:

$$Ef_{jw} = \frac{a_w + c_w v_j + e_w v_j^2}{1 + b_w v_j + d_w v_j^2}$$

where v_j is the average speed of the driving cycle of the j-th vehicle type and a_w, b_w, c_w, d_w, e_w are the calculation coefficients for the w-th pollutant, shown in detail in fig. 7.
The vehicle's main-pollutant emission is estimated as E = Ef_jw × len × f, the main pollutants being, for example, carbon monoxide (CO), hydrocarbons (HC) and nitrogen oxides (NOx), where len denotes the length of the driving route and f denotes the traffic flow; f = 1 when estimating the emission of a single vehicle.
Combining the vehicle's GPS data, single-vehicle emissions are estimated and visualized, providing suggestions for urban road planning.
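The COPERT III emission-factor formula and the route-emission estimate can be sketched as (function names and coefficient ordering are assumptions; real coefficient values come from the table in fig. 7):

```python
def copert_factor(v, a, b, c, d, e):
    """COPERT III emission factor Ef = (a + c*v + e*v**2) / (1 + b*v + d*v**2),
    with v the average cycle speed and a..e the per-pollutant coefficients."""
    return (a + c * v + e * v * v) / (1.0 + b * v + d * v * v)

def route_emission(v, coeffs, route_len, flow=1.0):
    """Pollutant emission E = Ef * len * f; flow f = 1 for a single vehicle."""
    a, b, c, d, e = coeffs
    return copert_factor(v, a, b, c, d, e) * route_len * flow
```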
Unlike traditional condition construction methods, the invention adopts an unsupervised joint feature-learning and clustering framework that treats driving data as continuous, dynamic data and accounts for its time dependence; no manually designed feature representation or segment selection is used anywhere in the driving-condition construction process, so a condition model with higher precision and robustness can be built on real driving data.
The verification of the method is carried out by utilizing OBD data of light vehicles in Fuzhou city, including speed data and GPS data, the advancement of the method is shown from the clustering effect, the constructed working condition model is further shown, and an application case of the working condition model is demonstrated.
Fig. 3 shows the clustering result of this embodiment, which exhibits low inter-class coupling and high intra-class cohesion, indicating a good clustering effect.
Fig. 4 shows the operating condition model constructed in this embodiment, with the driving condition period set within 1200-1300 s.
Figs. 5 and 6 visualize the estimated pollutant emissions of a single test vehicle in Fuzhou: fig. 5 shows the driving speed of the test vehicle, and fig. 6 shows the estimated CO emissions.
As can be seen from figs. 5 and 6, vehicle pollutant emissions increase with speed; high-speed road sections and some intersections show larger emissions. Improving the passing efficiency of high-emission intersections can therefore reduce emissions.
A dynamic driving condition construction system comprises:
a data acquisition module that acquires and preprocesses vehicle speed data to generate input segments X;
an encoding module that constructs a joint learning framework based on a deep neural network and a bidirectional long short-term memory network, and feeds the input segments into the framework to obtain a feature space Z;
a clustering module that realizes soft-assignment clustering of the feature space Z using a relative-entropy regularization term and obtains a clustering result after iterative updating;
and a driving condition construction module that classifies the input segments according to the correspondence between the clustering result and the input segments to obtain several classes of segment libraries, and selects input segments from the libraries to form the driving condition.
A computer device comprises a memory, a processor, and a computer program stored on the memory and executable on the processor; the processor implements the construction method when executing the computer program.
The speed data and the GPS data in the present invention are derived from the vehicle driving data of the on-board diagnostic system.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein, and any reference signs in the claims are not to be construed as limiting the claims.
Furthermore, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only a single technical solution; the description is presented this way for clarity only, and those skilled in the art should treat the specification as a whole, with the technical solutions in the embodiments combinable as appropriate to form other embodiments understood by those skilled in the art.

Claims (8)

1. A dynamic driving condition construction method, comprising the following steps:
step one: acquiring speed data of a vehicle, preprocessing the speed data, and generating input segments X;
step two: constructing a joint learning framework based on a deep neural network and a bidirectional long short-term memory network, and feeding the input segments into the framework to obtain a feature space Z;
step three: realizing soft-assignment clustering of the feature space Z using a relative-entropy regularization term, and obtaining a clustering result after iterative updating;
step four: classifying the input segments according to the correspondence between the clustering result and the input segments to obtain several classes of segment libraries, and selecting input segments from the libraries to form the driving condition;
in step one, when the speed data are preprocessed, invalid data are removed and missing values are filled; micro-trip segments are extracted from the speed data to generate a micro-trip segment library; the micro-trip segment library is interpolated to obtain an equal-length sequence library, which is normalized to obtain the input segments;
in step two, when the input segments are fed into the joint learning framework, the framework comprises an autoencoder consisting of an encoder and a decoder, and the encoder processes the input segments through the deep neural network and the bidirectional long short-term memory network in sequence;
the deep neural network learns the short-time-scale waveforms in the input segments and extracts their local features;
the bidirectional long short-term memory network learns the temporal connections between waveforms across time scales in the input segments and extracts their global features, forming the feature space Z;
the decoder reconstructs the feature space using upsampling and deconvolution to form reconstructed segments X';
the autoencoder is pre-trained so that the mean squared error between the reconstructed segments X' output by the decoder and the input segments is minimized:
Loss_ae = (1/n) Σ_{i=1}^{n} ||x_i − x'_i||².
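The preprocessing of step one can be sketched as follows (a hedged illustration: the idle threshold, minimum micro-trip length, and resampled segment length are assumed values, not specified by the claim):

```python
import numpy as np

def micro_trips(speed, idle_thresh=0.5, min_len=20):
    """Split a 1 Hz speed trace (after invalid values are removed and gaps
    filled) into micro-trip segments separated by idling periods."""
    speed = np.asarray(speed, float)
    idle = speed < idle_thresh
    trips, start = [], None
    for i, is_idle in enumerate(idle):
        if not is_idle and start is None:
            start = i                       # micro-trip begins
        elif is_idle and start is not None:
            if i - start >= min_len:
                trips.append(speed[start:i])
            start = None
    if start is not None and len(speed) - start >= min_len:
        trips.append(speed[start:])         # trailing trip, if any
    return trips

def to_input_segment(trip, length=180):
    """Interpolate a micro-trip to equal length and min-max normalize it,
    yielding one row of the input segment matrix X."""
    seg = np.interp(np.linspace(0, 1, length),
                    np.linspace(0, 1, len(trip)), trip)
    rng = seg.max() - seg.min()
    return (seg - seg.min()) / rng if rng > 0 else np.zeros(length)
```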
2. The dynamic driving condition construction method according to claim 1, characterized in that: in step three, when soft-assignment clustering of the feature sequences is realized using the relative-entropy regularization term, the joint learning framework further comprises a temporal clustering layer for clustering the feature space; the encoder and the temporal clustering layer are updated iteratively until a stable result is obtained, and the input segments are finally clustered into segment libraries of k_0 classes {X_1, X_2, …, X_{k_0}}, where k_0 is the optimal clustering number; the clustering comprises the following steps:
step 41: computing the Euclidean distance (ED) d_ij from each element z_i of the feature space to the cluster center c_j;
step 42: normalizing the distances d_ij into a probability distribution using the Student's t-distribution; the probability that feature vector z_i belongs to the j-th cluster is
q_ij = (1 + d_ij²/α)^(−(α+1)/2) / Σ_{j'} (1 + d_{ij'}²/α)^(−(α+1)/2),
where a larger q_ij means that feature vector z_i is closer to the cluster center and more likely to belong to the j-th cluster, and α is the degree of freedom of the Student's t-distribution;
step 43: setting the target distribution p_ij to a delta distribution over the data points above a confidence threshold, i.e. p_ij = 1 for j = argmax_{j'} q_{ij'} when max_{j'} q_{ij'} exceeds the threshold, ignoring the remaining values;
step 44: setting the objective of iterative training to minimize the relative entropy loss between the probability distribution q_ij and the target distribution p_ij,
Loss_C = KL(P ‖ Q) = Σ_{i=1}^{n} Σ_{j} p_ij log(p_ij / q_ij),
where n is the number of micro-trip segments;
step 45: the total loss is Loss_total = Loss_C + λ·Loss_ae, where λ is a proportionality coefficient and Loss_C acts as a regularization term that prevents the encoder's feature extraction from overfitting.
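Steps 41-44 can be sketched in numpy as follows (the delta-style target distribution is one reading of step 43, and the confidence threshold value is an assumption):

```python
import numpy as np

def soft_assignments(z, centers, alpha=1.0):
    """Student-t similarity (step 42): q[i, j] is the probability that
    feature vector z_i belongs to cluster j."""
    d2 = ((z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # squared ED
    q = (1.0 + d2 / alpha) ** (-(alpha + 1) / 2)
    return q / q.sum(1, keepdims=True)

def target_distribution(q, thresh=0.9):
    """Delta-style target (step 43): one-hot on the assigned cluster for
    high-confidence points, unchanged q elsewhere."""
    p = q.copy()
    confident = q.max(1) > thresh
    p[confident] = np.eye(q.shape[1])[q[confident].argmax(1)]
    return p

def kl_loss(p, q, eps=1e-12):
    """Relative entropy KL(P || Q) (step 44)."""
    return float((p * np.log((p + eps) / (q + eps))).sum())
```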
3. The dynamic driving condition construction method according to claim 2, characterized in that: the optimal clustering number is selected according to the Davies-Bouldin index (DBI), comprising the following steps:
setting a value of k and substituting it into the training of the encoding-decoding and clustering networks;
calculating the DBI value of the clustering result for each value of k:
DBI = (1/k) Σ_{i=1}^{k} max_{j≠i} (S_i + S_j) / ||c_i − c_j||_2,
where k denotes the number of clusters and ||c_i − c_j||_2 is the Euclidean distance between the centroid of cluster i and the centroid of cluster j;
S_i = ((1/M_i) Σ_{s=1}^{M_i} ||X_is − c_i||^p)^{1/p}
is the average distance of the feature vectors in cluster i to its centroid and represents the degree of dispersion of the data in cluster i; S_j, defined analogously, represents the degree of dispersion of the data in cluster j; M_i and M_j denote the numbers of data points in clusters i and j; X_is denotes the s-th data point in cluster i and X_js the s-th data point in cluster j; c_i and c_j denote the centroids of clusters i and j; p = 2;
the value of k at which the DBI first reaches a local minimum is selected as the optimal clustering number k_0.
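The DBI computation with p = 2 (so that S_i is the mean Euclidean distance to the centroid) can be written compactly:

```python
import numpy as np

def davies_bouldin(z, labels, centers):
    """Davies-Bouldin index: average over clusters of the worst-case ratio
    (S_i + S_j) / ||c_i - c_j||_2; lower values mean better separation."""
    k = len(centers)
    # S[i]: mean distance of cluster i's feature vectors to its centroid
    S = np.array([np.linalg.norm(z[labels == i] - centers[i], axis=1).mean()
                  for i in range(k)])
    dbi = 0.0
    for i in range(k):
        ratios = [(S[i] + S[j]) / np.linalg.norm(centers[i] - centers[j])
                  for j in range(k) if j != i]
        dbi += max(ratios)
    return dbi / k
```

Sweeping k, training for each value, and taking the first local minimum of this index gives the optimal clustering number described in the claim.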
4. The dynamic driving condition construction method according to claim 1, characterized in that: in step three, when soft-assignment clustering of the feature sequences is realized using the relative-entropy regularization term, the cluster centers are initialized with the K-means algorithm.
5. The dynamic driving condition construction method according to claim 1, characterized in that: in step four, when input segments are selected from the segment libraries to form the driving condition, the feature vectors in the clustered feature space carry class labels; the feature vectors under each label are ranked by the ratio of intra-class distance to inter-class distance, which determines their priority; the number of segments selected from each library is determined by the ratio of that library's duration to the total duration of all libraries, and input segments are selected according to the priority of the feature vectors under each label to form the driving condition.
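The selection above can be sketched as follows (a simplified illustration: `priorities` is assumed to hold each segment's intra/inter distance ratio, smaller being better, and the time-budget handling is an assumption):

```python
import numpy as np

def build_cycle(libraries, priorities, target_duration):
    """Assemble a driving cycle: each class library contributes time in
    proportion to its share of total driving time, taking segments in
    priority order (smaller ratio first)."""
    total = sum(sum(len(s) for s in lib) for lib in libraries)
    cycle = []
    for lib, prio in zip(libraries, priorities):
        lib_time = sum(len(s) for s in lib)
        budget = target_duration * lib_time / total  # this class's time share
        used = 0
        for idx in np.argsort(prio):                 # best-ranked first
            if used >= budget:
                break
            cycle.append(lib[idx])
            used += len(lib[idx])
    return np.concatenate(cycle)
```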
6. The dynamic driving condition construction method according to claim 1, characterized in that: the constructed driving condition is evaluated using two methods: the relative error and the speed-acceleration joint distribution.
7. A dynamic driving condition construction system, characterized by comprising:
a data acquisition module that acquires and preprocesses vehicle speed data to generate input segments X;
an encoding module that constructs a joint learning framework based on a deep neural network and a bidirectional long short-term memory network, and feeds the input segments into the framework to obtain a feature space Z;
a clustering module that realizes soft-assignment clustering of the feature space Z using a relative-entropy regularization term and obtains a clustering result after iterative updating;
and a driving condition construction module that classifies the input segments according to the correspondence between the clustering result and the input segments to obtain several classes of segment libraries, and selects input segments from the libraries to form the driving condition;
when the speed data are preprocessed, invalid data are removed and missing values are filled; micro-trip segments are extracted from the speed data to generate a micro-trip segment library; the micro-trip segment library is interpolated to obtain an equal-length sequence library, which is normalized to obtain the input segments;
when the input segments are fed into the joint learning framework, the framework comprises an autoencoder consisting of an encoder and a decoder, and the encoder processes the input segments through the deep neural network and the bidirectional long short-term memory network in sequence;
the deep neural network learns the short-time-scale waveforms in the input segments and extracts their local features;
the bidirectional long short-term memory network learns the temporal connections between waveforms across time scales in the input segments and extracts their global features, forming the feature space Z;
the decoder reconstructs the feature space using upsampling and deconvolution to form reconstructed segments X';
the autoencoder is pre-trained so that the mean squared error between the reconstructed segments X' output by the decoder and the input segments is minimized:
Loss_ae = (1/n) Σ_{i=1}^{n} ||x_i − x'_i||².
8. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the construction method according to any one of claims 1-6 when executing the computer program.
CN202011320811.1A 2020-11-23 2020-11-23 Dynamic driving condition construction method, system and equipment Active CN112434735B (en)
