CN113569928A - Train running state detection data missing processing model and reconstruction method - Google Patents


Info

Publication number
CN113569928A
CN113569928A (application CN202110792198.1A; granted publication CN113569928B)
Authority
CN
China
Prior art keywords
data
missing
module
model
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110792198.1A
Other languages
Chinese (zh)
Other versions
CN113569928B (en)
Inventor
张昌凡
陈泓润
何静
曹源
杨皓楠
徐逸夫
印玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan University of Technology
Original Assignee
Hunan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan University of Technology filed Critical Hunan University of Technology
Priority to CN202110792198.1A priority Critical patent/CN113569928B/en
Publication of CN113569928A publication Critical patent/CN113569928A/en
Application granted granted Critical
Publication of CN113569928B publication Critical patent/CN113569928B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a train running state detection data missing processing model and a reconstruction method, in which a brand-new variational self-coding-generation confrontation semantic fusion network (VAE-FGAN) is constructed to reconstruct missing data. First, a GRU module is introduced into the encoder to fuse the low-level and high-level features of the data, so that the VAE-FGAN learns the correlation between measured data in an unsupervised training mode. Second, an SE-NET attention mechanism is introduced throughout the generation network to strengthen the feature extraction network's expression of the data features. Finally, parameter sharing is achieved through transfer learning and pre-training. The method maintains high reconstruction accuracy, the reconstructed data conform well to the distribution of the measured data, and the prior-art problems of poor generalization ability and unstable training are solved for the case where some fault operation and maintenance data from the running of a high-speed train are extremely scarce or missing.

Description

Train running state detection data missing processing model and reconstruction method
Technical Field
The invention relates to the technical field of data missing reconstruction, in particular to a train running state detection data missing processing model and a reconstruction method.
Background
The high-speed train plays an important role in the transportation system, and its operational safety must be guaranteed without error. High-speed train lines often pass through complex environments such as mountainous areas and tunnels, where network faults, transmission interruptions, harmonic interference and similar phenomena can occur, so the monitored data contain a large number of missing values. The fault feature information of the missing data segments cannot be obtained, which introduces large errors into later multi-source information fusion and hinders fault judgment. Traditional methods such as the EM algorithm and the KNN algorithm cannot adequately model the correlation between complex data features and between different devices in a high-speed train.
Recently, generative countermeasure networks have become very popular; application CN202011072927.8A discloses a method for reconstructing missing high-speed train measurement data based on a generative countermeasure network. However, when trained on discrete high-speed train data, such models struggle to guarantee that the samples generated from random noise follow the original data distribution, and Nash equilibrium is difficult to reach, leading to vanishing gradients. Second, deep learning techniques rely on large-scale, high-quality, complete data to train deep network structures, while the useful data produced during the actual running of a high-speed train are scarce, so the generalization ability of a deep learning model is hard to guarantee. Transfer learning enables the features and parameters learned on different data to be shared, effectively alleviating the difficulty of training a deep network model with small-sample high-speed train operation and maintenance data.
Disclosure of Invention
The invention provides a high-speed train measurement missing data processing model based on a transferred generative countermeasure network under small-sample data, which solves the problem of inaccurate reconstruction caused by the large amount of missing data in the small samples measured on a high-speed train.
Another technical problem solved by the invention is to provide a reconstruction method for missing train running state detection data.
The purpose of the invention is realized by the following technical scheme:
a train running state detection data missing processing model comprises a data acquisition module, a data preprocessing module, a variational self-coding-generation confrontation semantic fusion network (VAE-FGAN) module, a transfer learning parameter sharing module and a data missing part rebuilding module; the data acquisition module transmits data to the data preprocessing module, the data preprocessing module transmits the processed data to the variational self-coding-generation countermeasure semantic fusion network (VAE-FGAN) module, the variational self-coding-generation countermeasure semantic fusion network (VAE-FGAN) module generates samples and transmits the data to the transfer learning parameter sharing module to obtain missing data, and the transfer learning parameter sharing module transmits the obtained missing data to the data missing part reconstruction module for reasonable interpolation and outputs a complete data result.
Further, the data acquisition module comprises one or more of a current sensor, a voltage sensor, a temperature sensor, a humidity sensor, a displacement sensor and an electrical frequency sensor.
Further, the variational self-encoding-generation countermeasure semantic fusion network (VAE-FGAN) includes an encoder E, a generator G and a discriminator D; the data feature information captured by the encoder E is turned into new samples by the generator G, and the discriminator D judges the authenticity of the generated data and classifies it.
Further, the encoder E introduces a GRU network module; the GRU network's particular strength in learning data features improves the encoder E's ability to capture the deep semantics of the data, and thus the quality of the generated data.
Furthermore, the encoder E and the generator G are respectively provided with an attention mechanism SE-NET, and the importance of each channel in the next stage is represented by a weight.
Further, the attention mechanism SE-NET comprises an SE module, a Squeeze operation, an Excitation operation and feature fusion. A weight is assigned to each channel: after the Squeeze operation the network obtains a global description, and through the Excitation operation and feature fusion a fully connected layer fuses all the input feature information, with a Sigmoid function mapping the input to the 0-1 interval.
Based on the above high-speed train running state detection data missing processing model, a reconstruction method for missing high-speed train measurement data, using a transferred generative countermeasure network under small-sample data, is provided, comprising the following steps:
and S1, acquiring the operation and maintenance data set of the high-speed train, and preprocessing the acquired discrete data.
S2, applying variational self-coding to generate the data correlation characteristics for learning against semantic fusion network (VAE-FGAN):
in the training process, the encoder E extracts and compresses the features of the samples in the complete data set and encodes them through a linear network into a latent space z, where z latently captures the important feature information of the data. New samples are generated by the generator G from the description of the latent variable z; through variational reasoning the posterior distribution is kept close to the expected distribution, with the KL divergence chosen as part of the loss function to calculate the distance between the two distributions.
S3, constructing a parameter sharing model by transfer learning, and generating the data of the missing part of the small-sample feature data.
S4, interpolating the missing part of the data and outputting a complete data result.
Further, the operation and maintenance data set of step S1 includes one or more of the device alternating-current voltage, direct-current voltage, monitored output current, collected device temperature, oil level, collected device humidity, and receiver power frequency.
Further, the data preprocessing in step S1 includes space-time correction, registration and data dimension-increasing processes, and after the acquired high-speed train discrete measurement data is segmented and intercepted, the high-dimensional data is mapped into a 2-D grid matrix form.
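The preprocessing step above segments the 1-D discrete measurements and maps them into a 2-D grid matrix form. A minimal sketch, assuming an illustrative segment length of 16 mapped to a 4x4 grid (the actual grid size is not specified in the source):

```python
import numpy as np

# Hypothetical sketch: segment a 1-D discrete measurement series and map each
# segment into a 2-D grid matrix, as the preprocessing step describes.
# The segment length (16 -> 4x4 grid) is an illustrative assumption.
def to_grid(series, grid_side=4):
    seg_len = grid_side * grid_side
    n_segs = len(series) // seg_len          # drop the incomplete tail segment
    segs = np.asarray(series[:n_segs * seg_len], dtype=float)
    return segs.reshape(n_segs, grid_side, grid_side)

grids = to_grid(np.arange(40), grid_side=4)
print(grids.shape)  # (2, 4, 4)
```

Space-time correction and registration would happen before this reshaping; they are omitted here since the source does not detail them.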
Further, the transfer learning model adopts the variational self-coding-generation countermeasure semantic fusion network (VAE-FGAN) as the basic network structure, uses the sample data to pre-train the generator Gp, and transfers the trained parameters to the generator Gm of the main training network, which is fine-tuned using the data of a small number of samples.
Further, the sample data size ratio of pre-training to main training is 5-20:1; preferably, the ratio is 10:1.
Further, in the process of interpolating the missing data portion in step S4, a context similarity and the KL divergence are defined to jointly constrain the interpolation of the missing data portion, so that the final output measurement result is as close to the real part as possible.
Compared with the prior art, the beneficial effects are:
according to the method, under a missing data reconstruction method, a brand-new variational self-coding-generation countermeasure semantic fusion network (VAE-FGAN) is constructed and used for reconstructing missing data, the application of a GRU semantic fusion module in the variational self-coding-generation countermeasure semantic fusion network (VAE-FGAN) enables the bottom-layer features and the high-layer features of the data to be fused, the reconstruction accuracy of a model is effectively improved, and an SE-NET attention mechanism is introduced into the whole generation network to improve the expression of the feature extraction network on the data features; and finally, parameter sharing is achieved through transfer learning and pre-training. The migration generation confrontation network model can learn the relevant characteristics of data from the measured data of the small sample, not only can keep higher reconstruction precision under the condition of different loss rates, but also can well accord with the distribution rule of the measured data.
Drawings
FIG. 1 is a construction frame of variational self-coding-generation of confrontation network missing data under small sample data;
FIG. 2 shows the network structures of the encoder, decoder and discriminator, where k represents the size of a convolution kernel, c represents the number of channels, and h represents the number of hidden layers of the gated recurrent unit (GRU) network;
FIG. 3 shows the core of the present invention, a GRU-based semantic fusion module, which highly integrates the low-level and high-level feature information;
FIG. 4 is a SE-NET attention mechanism added to the encoder and generator;
fig. 5 shows the reconstruction effect of the missing data with a specific value, and the reconstruction result very close to the normal value can be obtained by measuring the context feature relationship of the data.
Fig. 6 is a missing data reconstruction visualization chart of the present invention, which can effectively observe the distribution characteristics of the reconstructed data and the real data.
Detailed Description
The following examples are further explained and illustrated, but the present invention is not limited in any way by the specific examples. Unless otherwise indicated, the methods and equipment used in the examples are conventional in the art and all materials used are conventional commercially available materials.
Example 1
The embodiment provides a high-speed train running state detection data missing processing model.
As shown in fig. 1, the high-speed train operation state detection data missing processing model includes five modules, namely a data acquisition module, a data preprocessing module, a variational self-coding-generation countermeasure semantic fusion network (VAE-FGAN) module, a transfer learning parameter sharing module and a data missing part reconstruction module.
The data acquisition module comprises one or more of a current sensor, a voltage sensor, a temperature sensor, a humidity sensor, a displacement sensor and an electrical frequency sensor.
The preprocessing module performs space-time correction and registration of the multi-dimensional data, and realizes data dimension increase by mapping to a higher dimension.
The variational self-coding-generation countermeasure semantic fusion network (VAE-FGAN) module comprises a VAE coder E and a generator G, the VAE coder E, the generator G and a discriminator D form a VAE-GAN backbone network structure, an attention mechanism SE-NET is respectively added to the coder E and the generator G, and the coder E is combined with a GRU network model to obtain a coder semantic fusion structure.
Referring to fig. 2, among the encoder E, generator G and discriminator D of the VAE, the ReLU function is selected as the activation function of the encoder and the generator; to improve the discrimination performance of the discriminator, its activation function differs from that of the other convolutional layers, and the LeakyReLU function is selected.
As in fig. 3, the GRU module involves three quantities: the output h_{t-1} at the previous moment, the input x_t at the current moment, and the output h_t at the current moment. In the formulas below, z_t is the update gate and r_t is the reset gate; the larger the value of the update gate, the more information from the previous moment is brought in:

z_t = σ(W_z·[h_{t-1}, x_t])

r_t = σ(W_r·[h_{t-1}, x_t])

where σ is the sigmoid function, which maps data to a value in the 0-1 range, and W_z and W_r are the corresponding weight matrices of the current state.
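The two gate formulas can be sketched directly in code; dimensions and (zero-valued) weights below are illustrative assumptions, and [h_{t-1}, x_t] denotes concatenation:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Minimal sketch of the two GRU gates. Weights and sizes are assumed values.
def gru_gates(h_prev, x_t, W_z, W_r):
    hx = np.concatenate([h_prev, x_t])    # [h_{t-1}, x_t]
    z_t = sigmoid(W_z @ hx)               # update gate: how much past info to keep
    r_t = sigmoid(W_r @ hx)               # reset gate: how much past info to discard
    return z_t, r_t

h_prev = np.zeros(3)                      # hidden size 3 (assumption)
x_t = np.ones(2)                          # input size 2 (assumption)
W_z = np.zeros((3, 5))
W_r = np.zeros((3, 5))
z_t, r_t = gru_gates(h_prev, x_t, W_z, W_r)
print(z_t)  # all 0.5, since sigmoid(0) = 0.5
```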
The GRU's ability to learn contextual data is used to extract deep features among the data. The data of the first layer serves as the previous-step output of the GRU module, and the data of the second layer as its current-step input; useful data information is retained and output. The output data undergoes feature semantic fusion with the Layer 1 and Layer 2 data and is then passed to the next GRU module. This structural design highly integrates the correlation among data, fully combining the low-level and high-level feature information, and improves the fidelity of the whole model's reconstruction of missing data.
As shown in FIG. 4, the SE-NET attention mechanism mainly comprises an SE module, a Squeeze operation, an Excitation operation and feature fusion. A weight is assigned to each channel: after the Squeeze operation the network obtains a global description, and through the Excitation operation and feature fusion a fully connected layer fuses all the input feature information, with a Sigmoid function mapping the input to the 0-1 interval.
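The Squeeze, Excitation, and channel-reweighting steps just described can be sketched as follows; the weight matrices and the channel-reduction ratio are illustrative assumptions, not the patent's trained parameters:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Hedged sketch of an SE block: global average pooling (Squeeze),
# two fully connected layers with sigmoid output (Excitation),
# then per-channel reweighting (feature fusion).
def se_block(feat, W1, W2):
    # feat: (C, H, W) feature map
    s = feat.mean(axis=(1, 2))                 # Squeeze: one scalar per channel
    e = sigmoid(W2 @ np.maximum(W1 @ s, 0))    # Excitation: FC -> ReLU -> FC -> sigmoid
    return feat * e[:, None, None]             # weight each channel by its importance

feat = np.ones((4, 2, 2))
W1 = np.eye(4)[:2]                             # reduce 4 channels to 2 (assumed ratio)
W2 = np.zeros((4, 2))
out = se_block(feat, W1, W2)
print(out[0, 0, 0])  # 0.5: the sigmoid(0) weight applied to a unit feature
```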
Example 2
In this embodiment, according to embodiment 1, the method for reconstructing the measurement missing data of the high-speed train, which generates the countermeasure network by migration under small sample data provided by the high-speed train operation state detection data missing processing model, includes the steps of:
S1, acquiring the high-speed train operation and maintenance data set through the data acquisition module, and preprocessing the acquired discrete data:
the preprocessing comprises space-time correction, registration and data dimension increasing processes, and after the acquired high-speed train discrete measurement data are segmented and intercepted, high-dimension mapping is carried out to form a 2-D grid matrix.
S2, applying variational self-coding to generate the data correlation characteristics for learning against semantic fusion network (VAE-FGAN):
in the training process, the encoder E extracts and compresses the features of the samples in the complete data set and encodes them through a linear network into a latent space z, where z latently captures the important feature information of the data. New samples are generated by the generator G from the description of the latent variable z; through variational reasoning the posterior distribution is kept constantly approaching the expected distribution, with the KL divergence selected as part of the loss function to calculate the distance between the two distributions.
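For the KL term mentioned above, the usual VAE choice is the divergence between the encoder's approximate posterior N(mu, sigma^2) over z and a standard normal prior; a sketch of that closed form, assuming diagonal Gaussians (the source does not spell out the prior):

```python
import numpy as np

# KL( N(mu, diag(sigma^2)) || N(0, I) ), the standard VAE regularizer.
def kl_to_standard_normal(mu, log_var):
    return 0.5 * np.sum(mu**2 + np.exp(log_var) - 1.0 - log_var)

# When the posterior equals the prior, the divergence is zero.
print(kl_to_standard_normal(np.zeros(8), np.zeros(8)))  # 0.0
```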
S3, generating the data of the missing part of the small-sample feature data by constructing a parameter sharing model with transfer learning: the sample data are used to pre-train the generator Gp, and the trained parameters are transferred to the generator Gm of the main training network, which is fine-tuned using a small amount of sample data. In the transfer learning, one VAE-FGAN is trained and its parameters are transferred to another VAE-FGAN by parameter freezing, then fine-tuned with a small amount of data; the two VAE-FGANs have the same structure and differ only in the data they are trained on.
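The transfer-and-freeze step can be illustrated with plain dictionaries standing in for network parameters; the layer names and the frozen/trainable split below are assumptions for illustration only:

```python
# Illustrative sketch: parameters trained in the pre-training generator G_p
# are transferred into the main-training generator G_m, the transferred
# feature layers are frozen, and only the remaining subset is fine-tuned.
pretrained = {"conv1": [0.1, 0.2], "conv2": [0.3], "head": [0.0]}

main = dict(pretrained)                  # transfer all parameters to G_m
frozen = {"conv1", "conv2"}              # freeze the transferred feature layers

def fine_tune_step(params, grads, lr=0.1):
    for name, g in grads.items():
        if name in frozen:               # frozen parameters are not updated
            continue
        params[name] = [w - lr * gi for w, gi in zip(params[name], g)]
    return params

main = fine_tune_step(main, {"conv1": [1.0, 1.0], "head": [1.0]})
print(main["conv1"], main["head"])  # conv1 unchanged, head updated
```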
The optimal ratio of the sample data size of pre-training to main training is 10:1.
S4, reasonably interpolating the missing part of the data, and outputting a complete data result:
in the process of interpolating the missing data part, a binary mask matrix M with the same dimensions as the measurement data input to the model is first established to describe the feature data containing missing samples: for the missing part of a data sample, the corresponding elements of the mask matrix M are 0, and for the intact part they are 1. The measurement data X and M are combined by the Hadamard product, so that different degrees of data loss are expressed through matrix operations.
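The masking step is simple to sketch; the data values below are illustrative:

```python
import numpy as np

# A binary mask M with the same shape as the measurement data X marks missing
# entries with 0; the Hadamard (elementwise) product X * M zeroes them out.
X = np.array([[5.0, 3.0],
              [2.0, 7.0]])
M = np.array([[1, 0],
              [1, 1]])                   # the (0, 1) entry is missing

X_observed = X * M                       # Hadamard product
missing_rate = 1.0 - M.mean()
print(X_observed)      # missing entry becomes 0, intact entries unchanged
print(missing_rate)    # 0.25
```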
Secondly, to ensure that the non-missing portion remains unchanged, the reconstructed data should be similar to the original measured data. A context-constrained similarity L_r is defined so that the generator keeps producing the data best matching the complete data portion, guaranteeing that the reconstructed data and the complete data share a consistent context:

L_r(z) = ||X ⊙ M − G(z) ⊙ M||_2

where L_r(z) is the context-constrained similarity loss function; X is the measured data containing missing values; G(z) is the generated data sample; and M is the binary mask matrix. Note that only the non-missing part of the data is calculated.
The discriminator loss is used to ensure that the reconstructed data are as close to real as possible; the loss L_d is defined as:

L_d(z) = −D(G(z))

where G(z) is the data generated by the generator and D is the discriminator network output, i.e. the KL distance between the generated reconstruction sample and the real sample.
In summary, the loss function for reconstructing missing measurement data consists of the similarity loss and the discriminator loss:

L(z) = L_r(z) + λ L_d(z)

where L(z) is the loss function for data reconstruction, L_r(z) is the similarity loss, and λ is a hyperparameter.
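The three loss terms defined above (context similarity, discriminator loss, and their weighted sum) can be tied together in a short sketch; the discriminator output is stubbed with a constant and λ is an assumed value, since neither network is trained here:

```python
import numpy as np

# Context similarity L_r(z): compare generated and measured data only on the
# non-missing entries selected by the binary mask M.
def context_loss(X, G_z, M):
    return np.linalg.norm(X * M - G_z * M)

# Total loss L(z) = L_r(z) + lambda * L_d(z), with L_d(z) = -D(G(z)).
def total_loss(X, G_z, M, d_out, lam=0.1):
    return context_loss(X, G_z, M) + lam * (-d_out)

X   = np.array([1.0, 2.0, 3.0])   # measured data (last value missing)
M   = np.array([1.0, 1.0, 0.0])
G_z = np.array([1.0, 2.0, 9.9])   # perfect match on the observed entries
print(total_loss(X, G_z, M, d_out=0.5))  # only -lambda * D(G(z)) remains
```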
Example 3
In this embodiment, 32 days of device operation and maintenance data of a certain high-speed train are used, taking five features of the same device (the maximum and minimum of the positioning direct-current voltage, and the maximum, minimum and average of the positioning alternating-current voltage); choosing the same device strengthens the correlation among the data features. The group contains 1630 complete, non-missing data records. To verify the proposed model's interpolation of missing data, the sample data are shuffled; the pre-training model is trained with the three feature samples of the alternating-current voltage maximum, minimum and average, and the main training model is fine-tuned and interpolation-verified with the direct-current voltage maximum and minimum. After the main training model receives the parameters transferred from the pre-training model, 100 sample records are used for parameter fine-tuning, and the remaining 250 sample records in the main training set undergo random deletion of different degrees and are interpolated, checking the generalization ability of the model under small-sample data. Note that the training data must be complete data without missing values, and the fine-tuning data and the verification data must not overlap, in order to evaluate the model's ability to handle missing data.
The experiments were implemented in Python. The hardware environment was an Intel(R) Xeon(R) E-2124G CPU at 3.41 GHz and an NVIDIA GeForce GTX 1660 GPU; the platform versions were Python 3.7.7 and torch 1.4.0.
In the experiment, the learning rates of the pre-training encoder E and discriminator D are set to 0.0001, the learning rate of the generator G to 0.00002, and the learning rate of the main training model to 0.00002. Because the number of experimental data samples is small, BATCH_SIZE is set to 1280 and 100 epochs are run, so that the network reaches the optimal solution of gradient descent faster and the model quickly converges to stability; with this structure the model obtains a good learning effect.
In evaluating the reconstruction of missing data, two indices, the mean absolute error (MAE) and the mean absolute percentage error (MAPE), are used to evaluate the reconstruction effect of the model. Their formulas are:

MAE = (1/n) Σ_{i=1}^{n} |x_i − x̂_i|

MAPE = (100%/n) Σ_{i=1}^{n} |(x_i − x̂_i)/x_i|

where n is the number of data points, x_i denotes the original measured data of the high-speed train, and x̂_i denotes the reconstructed complete data. The changes in these two indices determine the reconstruction effect of the missing data: the smaller the MAE and MAPE values, the better the reconstruction.
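As a quick sketch, the two evaluation indices MAE and MAPE used in this experiment can be computed as follows; the sample values are illustrative, not from the patent's data:

```python
import numpy as np

# Mean absolute error between original and reconstructed data.
def mae(x, x_hat):
    return np.mean(np.abs(x - x_hat))

# Mean absolute percentage error, in percent.
def mape(x, x_hat):
    return np.mean(np.abs((x - x_hat) / x)) * 100.0

x = np.array([10.0, 20.0, 40.0])       # original measured data (illustrative)
x_hat = np.array([11.0, 19.0, 42.0])   # reconstructed data (illustrative)
print(mae(x, x_hat))    # 4/3 ~= 1.333
print(mape(x, x_hat))   # (0.1 + 0.05 + 0.05) / 3 * 100 ~= 6.67
```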
As mentioned above, the data used in the experiment are all complete high-speed train operation and maintenance data; the purpose is to verify whether the model matches the data-loss conditions a high-speed train meets in a complex operating environment. The experiment therefore expresses missing data through the Hadamard product of a binary mask matrix with the complete data. Considering that the positions of missing measurement data during actual train operation are uncertain and uncontrollable, the mask matrix is generated randomly, where 1 denotes complete and 0 denotes missing. The amount of missing measurement data is controlled through the number of 0 entries in the mask matrix. For example, when 250 samples are used to achieve a 20% missing rate, the randomly generated mask deletes about 50 sampling points, with slight fluctuation.
As shown in fig. 5, supposing that the measured data of the measurements numbered 2, 5, 8, 9 and 14 in the high-speed train system are all lost due to communication failure, the model's reconstructed data, based on the contextual feature relationships among the data and prior knowledge of the data, are shown in fig. 6. It can be seen that for specific missing data the VAE-FGAN model of the invention obtains reconstruction results very close to the normal values by exploiting the contextual feature relationships of the measured data.
As shown in fig. 6, the model of the invention takes 250 samples for each feature, of which 50 are randomly deleted. The degree of coincidence between the curve and the points reflects the difference between the reconstructed and original data: when the error between them is zero, the curve and the points coincide. It can be seen that the reconstructions of the maximum and minimum of the positioning direct-current voltage follow the same distribution pattern as the original data; in particular, the reconstructed minimum of the positioning direct-current voltage fits the original measured data closely, with extremely small error even at the extreme values. This shows that by learning the feature regularities among data from the same device, the VAE-GAN semantic fusion model can largely restore the distribution of the original data information with high reconstruction accuracy.
It should be understood that the above-described embodiments are merely examples for clearly illustrating the present invention and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaust all embodiments here. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.

Claims (10)

1. A train running state detection data missing processing model is characterized by comprising a data acquisition module, a data preprocessing module, a variational self-coding-generation confrontation semantic fusion network module, a transfer learning parameter sharing module and a data missing part reconstruction module; the data acquisition module transmits data to the data preprocessing module, the data preprocessing module transmits the processed data to the variational self-coding and generation confrontation semantic fusion network module, the variational self-coding and generation confrontation semantic fusion network module generates samples and transmits the data to the migration learning parameter sharing module to obtain missing data, and the migration learning parameter sharing module transmits the obtained missing data to the data missing part reconstruction module for reasonable interpolation and outputs a complete data result.
2. The model for processing the lack of data detected by the train operation status according to claim 1, wherein the data acquisition module comprises one or more of a current sensor, a voltage sensor, a temperature sensor, a humidity sensor, a displacement sensor and an electrical frequency sensor.
3. The model for processing missing data of train operation status detection according to claim 1, wherein the variational self-coding-generation countermeasure semantic fusion network includes an encoder E, a generator G and a discriminator D; the data feature information captured by the encoder E is turned into new samples by the generator G, and the discriminator D judges the authenticity of the generated data and classifies it.
4. The model for processing data loss in train operation status detection according to claim 2, wherein the encoder E incorporates a GRU network module.
5. The model for processing the missing train operation status detection data according to claim 2, wherein an attention mechanism is further provided at the encoder E and the generator G.
6. A reconstruction method for detecting data loss of a train running state is characterized by comprising the following steps:
s1, collecting a high-speed train operation and maintenance data set, and preprocessing the collected discrete data;
s2, applying variational self-coding to generate the relevant characteristics among the data of the semantic fusion network learning countermeasure:
in the training process, the encoder E extracts and compresses the features of the samples in the complete data set and encodes them through a linear network into a latent space z, where z latently captures the important feature information of the data; new samples are generated by the generator G from the description of the latent variable z, which keeps the posterior distribution close to the expected distribution through variational reasoning, choosing the KL divergence as part of the loss function to calculate the distance between the two distributions.
S3, constructing a parameter sharing model by using a transfer learning model, and generating data of a small sample characteristic data missing part;
and S4, interpolating the missing partial data and outputting a complete data result.
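Steps S2 and S4 above can be illustrated with a minimal sketch: the KL divergence term for a diagonal Gaussian posterior against a standard normal prior, and mask-based interpolation that keeps observed entries and fills the missing ones with generated values. The stand-in constant generator output and all names are illustrative, not taken from the patent.

```python
import numpy as np

def kl_divergence(mu, log_var):
    """KL(N(mu, sigma^2) || N(0, I)) for a diagonal Gaussian posterior,
    the distance term in the loss described in step S2."""
    return -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))

def interpolate_missing(x_observed, mask, x_generated):
    """Step S4: keep observed entries (mask == 1) and fill missing
    entries (mask == 0) with the generated values."""
    return mask * x_observed + (1 - mask) * x_generated

# demo on a tiny 2-D grid sample with a missing 2x2 block
x = np.arange(16, dtype=float).reshape(4, 4)
mask = np.ones_like(x)
mask[1:3, 1:3] = 0                         # mark the missing block
x_gen = np.full_like(x, 99.0)              # stand-in for generator output
x_full = interpolate_missing(x, mask, x_gen)
print(x_full[0, 0], x_full[1, 1])          # 0.0 99.0
print(kl_divergence(np.zeros(3), np.zeros(3)) == 0.0)  # True: KL is zero when posterior equals prior
```

The mask-based blend guarantees that observed measurements pass through unchanged, so only the genuinely missing entries depend on the generative model.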
7. The reconstruction method for train running state detection data missing according to claim 6, wherein the operation and maintenance data set of step S1 includes one or more of a device AC voltage, a DC voltage, a monitoring output current, a device temperature, a fuel level, a device humidity and a receiver power frequency.
8. The reconstruction method for train running state detection data missing according to claim 6, wherein the data preprocessing in step S1 includes space-time correction, registration and data dimension-raising; after the acquired high-speed train discrete measurement data are segmented and intercepted, they are mapped into a high-dimensional 2-D grid matrix form.
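The segmentation and 2-D grid mapping described in claim 8 can be sketched as windowing a 1-D measurement sequence and reshaping each window into a grid matrix. The window size and grid dimensions below are assumptions for illustration only.

```python
import numpy as np

def to_grid_matrices(series, grid_h, grid_w):
    """Segment a 1-D discrete measurement sequence into windows of
    grid_h * grid_w points and map each window to a 2-D grid matrix."""
    win = grid_h * grid_w
    n = len(series) // win                       # drop the incomplete tail
    return np.asarray(series[:n * win]).reshape(n, grid_h, grid_w)

signal = np.sin(np.linspace(0, 20, 135))         # mock discrete measurements
grids = to_grid_matrices(signal, 8, 8)           # assumed 8x8 grid matrices
print(grids.shape)  # (2, 8, 8): 135 points yield two full 64-point windows
```

Reshaping into 2-D grids lets convolutional or image-style generative networks operate on what was originally a 1-D sensor stream.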
9. The reconstruction method for train running state detection data missing according to claim 6, wherein the transfer learning model adopts the variational autoencoder-generative adversarial semantic fusion network as its basic network structure, pre-trains a generator Gp with sample data, transfers the trained parameters to the generator Gm of the main training network, and fine-tunes Gm with a small data sample.
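The parameter transfer from Gp to Gm in claim 9 can be sketched as a name-and-shape-matched copy between parameter dictionaries before fine-tuning. The dictionary layout and layer names here are hypothetical.

```python
import copy

def transfer_parameters(gp_params, gm_params):
    """Copy every pre-trained Gp parameter whose name and size match
    into Gm; layers without a match keep their own initialisation."""
    out = dict(gm_params)
    for name, value in gp_params.items():
        if name in out and len(out[name]) == len(value):
            out[name] = copy.deepcopy(value)
    return out

# hypothetical parameter dictionaries (flat lists stand in for tensors)
gp = {"enc.w": [0.5, 0.1], "dec.w": [0.2, 0.9, 0.4]}
gm = {"enc.w": [0.0, 0.0], "dec.w": [0.0, 0.0, 0.0], "head.w": [0.0]}
gm_init = transfer_parameters(gp, gm)
print(gm_init["enc.w"])   # [0.5, 0.1], transferred from Gp
print(gm_init["head.w"])  # [0.0], no counterpart in Gp, kept as-is
```

Fine-tuning then updates only the small-sample network Gm, which is why the pre-training set can be many times larger than the main training set, as claim 10 specifies.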
10. The reconstruction method for train running state detection data missing according to claim 9, wherein the sample data size ratio of the pre-training to the main training is 5-20: 1; preferably, the sample data size ratio of the pre-training to the main training is 10: 1.
CN202110792198.1A 2021-07-13 2021-07-13 Train running state detection data missing processing model and reconstruction method Active CN113569928B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110792198.1A CN113569928B (en) 2021-07-13 2021-07-13 Train running state detection data missing processing model and reconstruction method

Publications (2)

Publication Number Publication Date
CN113569928A true CN113569928A (en) 2021-10-29
CN113569928B CN113569928B (en) 2024-01-30

Family

ID=78164679

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110792198.1A Active CN113569928B (en) 2021-07-13 2021-07-13 Train running state detection data missing processing model and reconstruction method

Country Status (1)

Country Link
CN (1) CN113569928B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108583624A (en) * 2018-04-12 2018-09-28 中车青岛四方机车车辆股份有限公司 Train operation state method for visualizing and device
CN109308689A (en) * 2018-10-15 2019-02-05 聚时科技(上海)有限公司 The unsupervised image repair method of confrontation network migration study is generated based on mask
WO2019035364A1 * 2017-08-16 2019-02-21 Sony Corporation Program, information processing method, and information processing device
CN110212528A (en) * 2019-06-19 2019-09-06 华北电力大学 Reconstructing method is lacked based on the power distribution network metric data for generating confrontation and dual Semantic Aware
CN110634539A (en) * 2019-09-12 2019-12-31 腾讯科技(深圳)有限公司 Artificial intelligence-based drug molecule processing method and device and storage medium
CN111462264A (en) * 2020-03-17 2020-07-28 中国科学院深圳先进技术研究院 Medical image reconstruction method, medical image reconstruction network training method and device
US20200242736A1 (en) * 2019-01-29 2020-07-30 Nvidia Corporation Method for few-shot unsupervised image-to-image translation
CN111859978A (en) * 2020-06-11 2020-10-30 南京邮电大学 Emotion text generation method based on deep learning
CN112395737A (en) * 2020-10-09 2021-02-23 湖南工业大学 Method for reconstructing measurement data loss of high-speed train based on generation countermeasure network
CN112528548A (en) * 2020-11-27 2021-03-19 东莞市汇林包装有限公司 Self-adaptive depth coupling convolution self-coding multi-mode data fusion method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHANGFAN ZHANG ET AL: "Reconstruction method for missing measurement data based on Wasserstein Generative Adversarial Network", JASIII, vol. 25, no. 2, pages 195-203 *
PENG ZHONGLIAN; WAN WEI; JING TAO; WEI JINXIA: "Research on Intrusion Detection Method Based on Improved CGANs", Netinfo Security, no. 05, pages 53-62 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114169396A * 2021-11-05 2022-03-11 Huazhong University of Science and Technology Training data generation model construction method and application for aircraft fault diagnosis
CN116506309A * 2023-06-27 2023-07-28 Beijing Shengchuan Chuangshi Technology Development Co., Ltd. Vehicle-mounted ATP communication signal comprehensive monitoring system and method
CN116506309B * 2023-06-27 2023-09-08 Xintang Xintong (Zhejiang) Technology Co., Ltd. Vehicle-mounted ATP communication signal comprehensive monitoring system and method
CN117828280A * 2024-03-05 2024-04-05 Shandong Xinke Jiangong Fire Engineering Co., Ltd. Intelligent fire information acquisition and management method based on Internet of things
CN117828280B * 2024-03-05 2024-06-07 Shandong Xinke Jiangong Fire Engineering Co., Ltd. Intelligent fire information acquisition and management method based on Internet of things

Also Published As

Publication number Publication date
CN113569928B (en) 2024-01-30

Similar Documents

Publication Publication Date Title
CN113569928A (en) Train running state detection data missing processing model and reconstruction method
CN109490814B (en) Metering automation terminal fault diagnosis method based on deep learning and support vector data description
CN110992113A (en) Neural network intelligent algorithm-based project cost prediction method for capital construction transformer substation
CN109635928A (en) A kind of voltage sag reason recognition methods based on deep learning Model Fusion
CN110414412B (en) Wide-area power grid multiple disturbance accurate identification method and device based on big data analysis
CN108399248A (en) A kind of time series data prediction technique, device and equipment
CN114297947B (en) Data-driven wind power system twinning method and system based on deep learning network
CN111401749A (en) Dynamic safety assessment method based on random forest and extreme learning regression
CN115293280A (en) Power equipment system anomaly detection method based on space-time feature segmentation reconstruction
CN113688869B (en) Photovoltaic data missing reconstruction method based on generation countermeasure network
CN114841072A (en) Differential fusion Transformer-based time sequence prediction method
CN112395737A (en) Method for reconstructing measurement data loss of high-speed train based on generation countermeasure network
CN116610998A (en) Switch cabinet fault diagnosis method and system based on multi-mode data fusion
CN113807951A (en) Transaction data trend prediction method and system based on deep learning
CN111654392A (en) Low-voltage distribution network topology identification method and system based on mutual information
CN115015683A (en) Cable production performance test method, device, equipment and storage medium
CN114116832A (en) Power distribution network abnormity identification method based on data driving
CN110084295A (en) Control method and control system are surrounded in a kind of grouping of multi-agent system
CN114357670A (en) Power distribution network power consumption data abnormity early warning method based on BLS and self-encoder
CN112215410B (en) Power load prediction method based on improved deep learning
WO2024087129A1 (en) Generative adversarial multi-head attention neural network self-learning method for aero-engine data reconstruction
CN115994605A (en) Multi-data fusion photovoltaic power prediction algorithm for comprehensive meteorological factor data
CN115359197A (en) Geological curved surface reconstruction method based on spatial autocorrelation neural network
CN114113909A (en) Power distribution network single-phase earth fault line selection method and system
CN113705695A (en) Power distribution network fault data identification method based on convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant