CN112185104B - Traffic big data restoration method based on countermeasure autoencoder - Google Patents
Traffic big data restoration method based on countermeasure autoencoder
- Publication number
- CN112185104B (application CN202010855606.9A)
- Authority
- CN
- China
- Prior art keywords
- data
- generator
- matrix
- traffic data
- discriminator
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/0104—Measuring and analyzing of parameters relative to traffic conditions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/0104—Measuring and analyzing of parameters relative to traffic conditions
- G08G1/0125—Traffic data processing
- G08G1/0129—Traffic data processing for creating historical data or processing based on historical data
Abstract
The invention discloses a traffic big data restoration method based on an adversarial autoencoder, comprising the following steps: determining a road section whose traffic data needs restoration, and collecting historical traffic data for that section; constructing a mask matrix from the historical traffic data; constructing an adversarial neural network by building an autoencoder model combined with an attention mechanism as the generator, and a discriminator likewise combined with an attention mechanism; training the adversarial neural network on the historical traffic data and the mask matrix to generate a data restoration model; and repairing traffic data collected in real time on the road section with the data restoration model. On the basis of a GAN, the invention introduces structures such as an autoencoder, a multi-head attention mechanism, and a hint matrix: the adversarial network structure effectively learns the distribution characteristics of the traffic big data, the autoencoder generates complete traffic data from the missing traffic data, and the multi-head attention mechanism and hint matrix effectively improve the accuracy of the model's data restoration.
Description
Technical Field
The invention relates to the technical field of traffic data restoration, and in particular to a traffic big data restoration method based on an adversarial autoencoder.
Background
Many practical traffic analysis deployments involve analyzing multi-sensor time series data with spatially distributed features, such as geotagged air temperature data collected by temperature sensors, air pollutant monitoring data, and traffic data collected by road traffic sensors. Owing to sensor failures, communication errors, storage loss and similar causes, sensor data inevitably suffers from missing values, which degrades downstream tasks such as data classification, regression prediction, and traffic control optimization. Because data can be missing in many different patterns, repairing it is a very challenging task: a suitable algorithm must extract the patterns of variation from multidimensional data and, in particular, uncover the interdependencies among the data.
Traditional data restoration methods tend to have their own limitations. Statistical methods that fill gaps with the mean, mode, or median ignore the interdependencies in the data, while statistics-based machine learning models impose strong constraints on the data, such as assumptions of linearity or smoothness.
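As an illustration of the limitation just described, a minimal mean-imputation baseline (a hypothetical sketch, not part of the invention) fills every gap from a single per-sensor statistic and therefore ignores any correlation between sensors or between adjacent time steps:

```python
import numpy as np

# Illustrative baseline: column-wise mean imputation on a (time x sensor) matrix.
# NaN marks missing readings; each sensor's gaps are filled with that sensor's
# own mean, ignoring cross-sensor and cross-time dependencies.
def mean_impute(x):
    x = x.astype(float).copy()
    col_mean = np.nanmean(x, axis=0)       # per-sensor mean over observed entries
    idx = np.where(np.isnan(x))
    x[idx] = np.take(col_mean, idx[1])     # fill each gap with its column mean
    return x

data = np.array([[10.0, 20.0],
                 [np.nan, 22.0],
                 [14.0, np.nan]])
print(mean_impute(data))
```

Whatever the true traffic pattern, the filled-in values here depend only on one column's average, which is exactly the weakness the adversarial approach below is meant to address.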
Dalca et al. proposed a variational approximate learning algorithm based on convolutional neural networks (CNNs) and sparsity awareness. The HI-VAE algorithm proposed by Nazábal et al., a modified variational autoencoding algorithm, can accurately fill in many kinds of missing data. Fortuin et al. proposed GP-VAE, a new deep sequential latent-variable model for dimensionality reduction and data restoration. Models based on recurrent neural networks (RNNs) generally assume sequential relationships between data; they cannot be run in parallel and have difficulty directly modeling the interdependencies between input data at different time stamps.
Disclosure of Invention
The present invention aims to overcome the defects of the prior art by providing a method for repairing missing traffic flow data with an adversarial autoencoder based on a self-attention mechanism. The method effectively captures the associations among different sensors and, through adversarial training, generates repair data that approximates the real data distribution.
The technical solution that realizes the purpose of the invention is as follows: a traffic big data restoration method based on an adversarial autoencoder, comprising the following steps:
step 1, determining a road section needing traffic data restoration, and collecting historical traffic data of the road section;
step 2, constructing a mask matrix based on the historical traffic data;
step 3, constructing an adversarial neural network, comprising: constructing an autoencoder model combined with an attention mechanism as the generator, and constructing a discriminator combined with the attention mechanism;
step 4, training the adversarial neural network based on the historical traffic data and the mask matrix to generate a data restoration model;
and step 5, repairing the traffic data acquired in real time on the road section by using the data restoration model.
Further, the step 2 of constructing a mask matrix based on the historical traffic data includes:
step 2-1, constructing a traffic data matrix, wherein the entry in the i-th row and j-th column represents the historical traffic data acquired by the j-th traffic data acquisition device at the i-th time point or period;
step 2-2, constructing a mask matrix with the same dimensions as the traffic data matrix: if data is missing at a position in the traffic data matrix, the corresponding position in the mask matrix is set to 0, and otherwise to 1.
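The two construction steps above can be sketched as follows (an illustrative Python fragment; the matrix values and the NaN encoding of missing entries are assumptions):

```python
import numpy as np

# Sketch of steps 2-1/2-2: a traffic data matrix (rows = time points,
# columns = collection devices) and a mask matrix of identical shape,
# with 1 where a value was observed and 0 where it is missing (NaN).
def build_mask(traffic_matrix):
    return (~np.isnan(traffic_matrix)).astype(np.float32)

traffic = np.array([[55.0, 48.0, np.nan],
                    [np.nan, 50.0, 61.0]])   # two time points, three devices
mask = build_mask(traffic)
print(mask)   # [[1. 1. 0.] [0. 1. 1.]]
```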
Further, the generator in step 3 comprises a position coding module followed by N groups of first modules arranged in sequence, each first module comprising a multi-head attention structure and a fully-connected neural network connected in sequence, with N ≥ 1; the input and output of the multi-head attention structure are summed and fed into the fully-connected neural network;
the position coding module uses sampling values of sine and cosine functions with different frequencies as position coding information:
where pos is the position of the input data, i represents the dimension, dmodelRepresenting the length of the time dimension of the input; PE (polyethylene)(pos,2i)、PE(pos,2i+1)Respectively representing 2i th and 2i +1 th position codes;
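A sketch of this position coding in Python, assuming the standard Transformer sinusoidal form that the symbols pos, i and d_model suggest:

```python
import numpy as np

# Sinusoidal position coding: even dimensions use sine, odd dimensions use
# cosine, with wavelengths forming a geometric series in the dimension index.
def position_encoding(n_positions, d_model):
    pe = np.zeros((n_positions, d_model))
    pos = np.arange(n_positions)[:, None]                       # positions 0..n-1
    div = np.power(10000.0, np.arange(0, d_model, 2) / d_model)
    pe[:, 0::2] = np.sin(pos / div)                             # PE(pos, 2i)
    pe[:, 1::2] = np.cos(pos / div)                             # PE(pos, 2i+1)
    return pe

pe = position_encoding(n_positions=96, d_model=8)   # e.g. 96 daily time slots
print(pe.shape)   # (96, 8)
```

The number of daily time slots here is illustrative; in the invention each acquisition moment within a day receives its own code.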
the multi-head attention structure comprises a plurality of scaling point product attention mechanisms;
the discriminator comprises M groups of first modules and two layers of fully-connected networks which are arranged in sequence, the output is a score value, and the input attribute is represented; m is more than or equal to 1.
Further, training the adversarial neural network based on the historical traffic data and the mask matrix in step 4 to generate a data restoration model specifically comprises:
step 4-1, setting the loss function of the discriminator as:

L_D = - E_{x~P_r}[ log D(x) ] - E_{x̂~P_c}[ log(1 - D(x̂)) ]

where x is the real data, P_r is the real data distribution, x̂ is the data repaired by the generator, m is the mask matrix, and P_c is the distribution of the repaired data; D is the discriminator and G is the generator; D(x) denotes the discriminator's evaluation score for the real data, and D(x̂) the evaluation score for the repaired data; during training of the discriminator the learning parameters of the generator network are frozen, and E denotes the expectation;
step 4-2, setting the loss function of the generator as:

L_G = L_rec + L_adv

wherein

L_rec = MSE(x', x̂') = E[ ||m ⊙ x - m ⊙ G(x, m)||² ],  L_adv = - E_{x̂~P_c}[ log D(x̂) ]

in the formula, L_rec is the mean squared error (MSE) between the data actually collected in the traffic data matrix and the data produced by the generator at the corresponding positions, representing the reconstruction loss of the repaired data; L_adv, built from the score value the discriminator outputs for the generator's results, represents the distribution loss of the repaired data; x̂' = m ⊙ G(x, m) is the result produced by the generator at the known true-value positions; x' = m ⊙ x is the known real data in the input; during training of the generator the learning parameters of the discriminator network are frozen; ⊙ denotes the Hadamard (element-wise) product;
step 4-3, training the generator: the real data, namely the traffic data matrix together with the mask matrix, is input to the generator, which outputs the repaired traffic data matrix;
step 4-4, training the discriminator: noise is added to the observed data in the traffic data matrix to form positive samples, the repaired traffic data matrix serves as negative samples, both are input to the discriminator, and the judgment result, namely the score value returned to the generator, is output;
steps 4-3 to 4-4 are repeated so that the generator and the discriminator continually compete with and optimize each other until the data restoration accuracy reaches a preset threshold.
Compared with the prior art, the invention has the following notable advantages: 1) the GAN structure discriminates between generated traffic data and real data, forcing the generator to produce data closer to the real distribution; 2) compared with other data restoration methods, the method achieves higher restoration accuracy under three missing-data patterns, namely time-dimension missing, space-dimension missing, and block missing; 3) the unsupervised learning mode effectively handles the incomplete and missing historical data encountered in practice; 4) the attention mechanism assigns different weights to sensors at different positions and to data at different times, achieving accurate restoration.
The present invention is described in further detail below with reference to the attached drawing figures.
Drawings
FIG. 1 is a flow chart of a method for repairing traffic big data based on a countermeasure self-encoder in one embodiment.
FIG. 2 is a diagram of an exemplary embodiment of an autoencoder model.
FIG. 3 is a diagram of a scaled dot product attention mechanism in one embodiment.
FIG. 4 is a diagram of a multi-headed attention structure in one embodiment.
Fig. 5 is a diagram showing the structure of the generator and the discriminator in one embodiment, and fig. (a) and (b) are a diagram showing the structure of the generator and the structure of the discriminator, respectively.
FIG. 6 is a graph comparing the results of the repair model in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In one embodiment, in conjunction with fig. 1, a traffic big data restoration method based on an adversarial autoencoder is provided, comprising the following steps:
step 1, determining a road section needing traffic data restoration, and collecting historical traffic data of the road section;
here, the acquisition data may be acquired at a certain time step.
Here, the historical traffic data includes road flow, speed, and occupancy.
Step 2, constructing a mask matrix based on the historical traffic data;
step 3, constructing an adversarial neural network, comprising: constructing an autoencoder model combined with an attention mechanism as the generator, and constructing a discriminator combined with the attention mechanism;
step 4, training the adversarial neural network based on the historical traffic data and the mask matrix to generate a data restoration model;
and step 5, repairing the traffic data acquired in real time on the road section by using the data restoration model.
Further, in one embodiment, with reference to fig. 2, the step 2 of constructing a mask matrix based on the historical traffic data includes:
step 2-1, constructing a traffic data matrix, wherein the entry in the i-th row and j-th column represents the historical traffic data acquired by the j-th traffic data acquisition device at the i-th time point or period;
step 2-2, constructing a mask matrix with the same dimensions as the traffic data matrix: if data is missing at a position in the traffic data matrix, the corresponding position in the mask matrix is set to 0, and otherwise to 1.
Further, in one embodiment, with reference to fig. 5(a), the generator in step 3 comprises a position coding module followed by N groups of first modules arranged in sequence, each first module comprising a multi-head attention structure and a fully-connected neural network connected in sequence, with N ≥ 1; the input and output of the multi-head attention structure are summed and fed into the fully-connected neural network;
here, N is preferably 3.
The position coding module uses sampled values of sine and cosine functions with different frequencies as the position coding information:

PE(pos, 2i) = sin(pos / 10000^(2i/d_model)),  PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model))

where pos is the position of the input data, i is the dimension index, and d_model is the length of the time dimension of the input; PE(pos, 2i) and PE(pos, 2i+1) denote the 2i-th and (2i+1)-th position codes, respectively;
this function was chosen because it allows the model to easily learn to participate in the computation by relative position, for any fixed offset k, PE(pos+k,i)Can be expressed as PE(pos,i)A function of (a). In the traffic data restoration problem, different position codes are applied to different acquisition moments in one day, so that the self-coding network can use the information of the extracted position codes for constructing missing parts of input data, the position input information can be used as one type of condition information generated by model data to be input, and the missing data generation problem is converted into a similar condition data generation problem.
With reference to fig. 4, the multi-head attention structure includes a plurality of scaled dot-product attention mechanisms; here, preferably, 6 attention functions are included. Scaled dot-product attention, shown in fig. 3, computes from the input "query" and "key" vectors of dimension d_k and "value" vectors of dimension d_v. The dot product of "query" and "key" is divided by √d_k to obtain the attention of the current input to the inputs at each time point; a softmax function normalizes these scores across time points into attention weights, and finally the weights are multiplied with the value vectors to obtain the attention output.
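A minimal Python sketch of scaled dot-product attention as described (the matrix shapes are illustrative):

```python
import numpy as np

# scores = softmax(Q K^T / sqrt(d_k)), output = scores @ V
def scaled_dot_product_attention(q, k, v):
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                 # similarity of queries to keys
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over time positions
    return weights @ v, weights

q = np.eye(3); k = np.eye(3); v = np.arange(9.0).reshape(3, 3)
out, w = scaled_dot_product_attention(q, k, v)
print(w.shape, out.shape)   # (3, 3) (3, 3)
```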
In the actual calculation, the attention over all the input vectors can be computed in parallel, expressed as the following process:
for each projected query, key and value, the model executes the attention functions in parallel, each producing a d_v-dimensional output value. Using h linear transformations, the "query", "key" and "value" are each mapped to h sets of d_k, d_k and d_v dimensions respectively; the h attention outputs are then concatenated and projected by a weight matrix back to d_model dimensions to obtain the final attention feature map. The calculation can be expressed as:
MultiHead(Q, K, V) = Concat(head_1, head_2, ..., head_h) W^O
head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)
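The multi-head computation above can be sketched as follows; the random projection matrices stand in for the learned parameters W_i^Q, W_i^K, W_i^V and W^O:

```python
import numpy as np

rng = np.random.default_rng(1)

def attention(q, k, v):
    # scaled dot-product attention (softmax over the key axis)
    w = np.exp(q @ k.T / np.sqrt(q.shape[-1]))
    return (w / w.sum(-1, keepdims=True)) @ v

def multi_head(x, h, d_k):
    d_model = x.shape[-1]
    heads = []
    for _ in range(h):                                 # one projection triple per head
        wq, wk, wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
        heads.append(attention(x @ wq, x @ wk, x @ wv))
    wo = rng.normal(size=(h * d_k, d_model))           # output projection W^O
    return np.concatenate(heads, axis=-1) @ wo

x = rng.normal(size=(5, 8))               # 5 time steps, d_model = 8
print(multi_head(x, h=4, d_k=2).shape)    # (5, 8)
```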
with reference to fig. 5(b), the discriminator includes M groups of first modules and two layers of fully connected networks, the output is a score value, which represents the attribute of the input, and the larger the output value is, the more the discriminator deems the input at this time is the more "true"; m is more than or equal to 1.
Here, preferably, M takes 3.
Further, in one embodiment, the training of the adversarial neural network based on the historical traffic data and the mask matrix in step 4 to generate a data restoration model includes:
step 4-1, the output of the generator is either completely real or completely false in the standard GAN by the discriminator, while in the data recovery problem, the output is composed of a part of real components and a part of generated components. The arbiter attempts to tell which inputs are completely true (observed) and which contain a generating component (repaired). For this feature, the loss function of the discriminator is set as:
where x is the true data, PrIn order to be able to distribute the real data,for the data after the generator is repaired, m is a mask matrix, PcDistributing the repaired data; d is a discriminator and G is a generatorD (x) represents the evaluation score of the discriminator on the real data,representing the evaluation score of the arbiter on the repair data;showing that in the training process of the discriminator, learning parameters of a neural network of the generator are frozen, and E shows an average value;
as can be seen from the formula, the goal of the discriminator is to increase as much as possible the score given by the discriminator to the real sample (maximum 1), and to decrease the score given to the false sample from the generator (minimum 0)
Step 4-2, setting the loss function of the generator as:

L_G = L_rec + L_adv

wherein

L_rec = MSE(x', x̂') = E[ ||m ⊙ x - m ⊙ G(x, m)||² ],  L_adv = - E_{x̂~P_c}[ log D(x̂) ]

in the formula, L_rec is the mean squared error (MSE) between the data actually collected in the traffic data matrix and the data produced by the generator at the corresponding positions, representing the reconstruction loss of the repaired data; L_adv, built from the score value the discriminator outputs for the generator's results, represents the distribution loss of the repaired data; x̂' = m ⊙ G(x, m) is the result produced by the generator at the known true-value positions; x' = m ⊙ x is the known real data in the input; during training of the generator the learning parameters of the discriminator network are frozen; ⊙ denotes the Hadamard (element-wise) product;
step 4-3, training the generator: inputting real data, namely a traffic data matrix and a mask matrix, into a generator, and outputting a repaired traffic data matrix;
step 4-4, training a discriminator: adding noise to data in the traffic data matrix to be used as a positive sample, using the repaired traffic data matrix as a negative sample, inputting the positive sample and the negative sample into a discriminator, and outputting a judgment result, namely a score value output to a generator;
and repeating the steps 4-3 to 4-4 to enable the generator and the discriminator to play and optimize continuously until the data restoration precision reaches a preset threshold value.
As a specific example, the model of the invention was compared with several other data restoration models (a historical average model, a K-nearest-neighbor model, a deep autoencoder, and a bidirectional recurrent neural network); the restoration accuracy is shown in fig. 6, from which it can be seen that the method of the invention achieves higher restoration accuracy than the other models across different data missing rates.
The invention provides an unsupervised, attention-based method for repairing missing traffic data, which estimates missing values in a multivariate time series with a generative adversarial network. The self-attention autoencoder, driven by the discriminator loss and the squared-error loss on the known data, generates a new complete sample closest to the original incomplete sample; the missing values can thus be estimated from the generated complete samples after a short training time. In addition, the attention mechanism effectively improves the quality of traffic flow data repair: it captures the interdependencies between input data at different time stamps in a parallel computation, while the squared-error loss on the known part of the data ensures the accuracy of the data produced by the generator.
The foregoing illustrates and describes the principles, general features, and advantages of the present invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above; the embodiments and the description merely illustrate the principle of the invention, and various changes and modifications may be made without departing from the spirit and scope of the invention, all of which fall within the scope of the claimed invention. The scope of the invention is defined by the appended claims and their equivalents.
Claims (4)
1. A traffic big data restoration method based on an adversarial autoencoder, characterized by comprising the following steps:
step 1, determining a road section needing traffic data restoration, and collecting historical traffic data of the road section;
step 2, constructing a mask matrix based on the historical traffic data;
step 3, constructing an adversarial neural network, comprising: constructing an autoencoder model combined with an attention mechanism as the generator, and constructing a discriminator combined with the attention mechanism;
step 4, training the adversarial neural network based on the historical traffic data and the mask matrix to generate a data restoration model; the specific process comprises the following steps:
step 4-1, setting the loss function of the discriminator as:

L_D = - E_{x~P_r}[ log D(x) ] - E_{x̂~P_c}[ log(1 - D(x̂)) ]

where x is the real data, P_r is the real data distribution, x̂ is the repair data generated by the generator, m is the mask matrix, and P_c is the distribution of the repaired data; D is the discriminator and G is the generator; D(x) denotes the discriminator's evaluation score for the real data, and D(x̂) the evaluation score for the repaired data; during training of the discriminator the learning parameters of the generator network are frozen, and E denotes the expectation;
step 4-2, setting the loss function of the generator as:

L_G = L_rec + L_adv

wherein

L_rec = MSE(x', x̂') = E[ ||m ⊙ x - m ⊙ G(x, m)||² ],  L_adv = - E_{x̂~P_c}[ log D(x̂) ]

in the formula, L_rec is the mean squared error (MSE) between the data actually collected in the traffic data matrix and the data produced by the generator at the corresponding positions, representing the reconstruction loss of the repaired data; L_adv, built from the score value the discriminator outputs for the generator's results, represents the distribution loss of the repaired data; x̂' = m ⊙ G(x, m) is the result produced by the generator at the known true-value positions; x' = m ⊙ x is the known real data in the input; during training of the generator the learning parameters of the discriminator network are frozen; ⊙ denotes the Hadamard (element-wise) product;
step 4-3, training the generator: the real data, namely the traffic data matrix together with the mask matrix, is input to the generator, which outputs the repaired traffic data matrix;
step 4-4, training the discriminator: noise is added to the observed data in the traffic data matrix to form positive samples, the repaired traffic data matrix serves as negative samples, both are input to the discriminator, and the judgment result, namely the score value returned to the generator, is output;
steps 4-3 to 4-4 are repeated so that the generator and the discriminator continually compete with and optimize each other until the data restoration accuracy reaches a preset threshold;
and 5, repairing the traffic data acquired in real time on the road section by using the data repairing model.
2. The traffic big data restoration method based on an adversarial autoencoder according to claim 1, characterized in that the historical traffic data in step 1 comprises road flow, speed, and occupancy.
3. The traffic big data restoration method based on an adversarial autoencoder according to claim 1 or 2, characterized in that the step 2 of constructing a mask matrix based on the historical traffic data specifically comprises:
step 2-1, constructing a traffic data matrix, wherein the entry in the i-th row and j-th column represents the historical traffic data acquired by the j-th traffic data acquisition device at the i-th time point or period;
step 2-2, constructing a mask matrix with the same dimensions as the traffic data matrix: if data is missing at a position in the traffic data matrix, the corresponding position in the mask matrix is set to 0, and otherwise to 1.
4. The traffic big data restoration method based on an adversarial autoencoder according to claim 3, characterized in that the generator in step 3 comprises a position coding module followed by N groups of first modules arranged in sequence, each first module comprising a multi-head attention structure and a fully-connected neural network connected in sequence, with N ≥ 1; the input and output of the multi-head attention structure are summed and fed into the fully-connected neural network;
the position coding module uses sampled values of sine and cosine functions with different frequencies as the position coding information:

PE(pos, 2i) = sin(pos / 10000^(2i/d_model)),  PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model))

where pos is the position of the input data, i is the dimension index, and d_model is the length of the time dimension of the input; PE(pos, 2i) and PE(pos, 2i+1) denote the 2i-th and (2i+1)-th position codes, respectively;
the multi-head attention structure comprises a plurality of scaled dot-product attention mechanisms;
the discriminator comprises M groups of first modules and a two-layer fully-connected network arranged in sequence; the output is a score value characterizing the input, with M ≥ 1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010855606.9A CN112185104B (en) | 2020-08-22 | 2020-08-22 | Traffic big data restoration method based on countermeasure autoencoder |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010855606.9A CN112185104B (en) | 2020-08-22 | 2020-08-22 | Traffic big data restoration method based on countermeasure autoencoder |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112185104A CN112185104A (en) | 2021-01-05 |
CN112185104B true CN112185104B (en) | 2021-12-10 |
Family
ID=73925361
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010855606.9A Active CN112185104B (en) | 2020-08-22 | 2020-08-22 | Traffic big data restoration method based on countermeasure autoencoder |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112185104B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112905379B (en) * | 2021-03-10 | 2023-07-18 | 南京理工大学 | Traffic big data restoration method of graph self-encoder based on self-attention mechanism |
CN113190997B (en) * | 2021-04-29 | 2023-08-01 | 贵州数据宝网络科技有限公司 | Big data terminal data restoration method and system |
CN113643564B (en) * | 2021-07-27 | 2022-08-26 | 中国科学院深圳先进技术研究院 | Parking data restoration method and device, computer equipment and storage medium |
CN114996625B (en) * | 2022-04-26 | 2024-06-14 | 西南石油大学 | Logging data complement method based on Bayesian optimization and self-encoder |
CN115019510B (en) * | 2022-06-29 | 2024-01-30 | 华南理工大学 | Traffic data restoration method based on dynamic self-adaptive generation countermeasure network |
CN115659797B (en) * | 2022-10-24 | 2023-03-28 | 大连理工大学 | Self-learning method for generating anti-multi-head attention neural network aiming at aeroengine data reconstruction |
CN116542438B (en) * | 2023-03-28 | 2024-01-30 | 大连海事大学 | Bus passenger starting and stopping point estimation and repair method based on non-reference real phase |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109492232A (en) * | 2018-10-22 | 2019-03-19 | 内蒙古工业大学 | A kind of illiteracy Chinese machine translation method of the enhancing semantic feature information based on Transformer |
CN109493599A (en) * | 2018-11-16 | 2019-03-19 | 南京航空航天大学 | A kind of Short-time Traffic Flow Forecasting Methods based on production confrontation network |
WO2019090213A1 (en) * | 2017-11-03 | 2019-05-09 | Siemens Aktiengesellschaft | Segmenting and denoising depth images for recognition applications using generative adversarial neural networks |
CN110018927A (en) * | 2019-01-28 | 2019-07-16 | 北京工业大学 | Based on the traffic data restorative procedure for generating confrontation network |
CN110597963A (en) * | 2019-09-23 | 2019-12-20 | 腾讯科技(深圳)有限公司 | Expression question-answer library construction method, expression search method, device and storage medium |
CN110838288A (en) * | 2019-11-26 | 2020-02-25 | 杭州博拉哲科技有限公司 | Voice interaction method and system and dialogue equipment |
CN110942624A (en) * | 2019-11-06 | 2020-03-31 | 浙江工业大学 | Road network traffic data restoration method based on SAE-GAN-SAD |
CN111311729A (en) * | 2020-01-18 | 2020-06-19 | 西安电子科技大学 | Natural scene three-dimensional human body posture reconstruction method based on bidirectional projection network |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10586310B2 (en) * | 2017-04-06 | 2020-03-10 | Pixar | Denoising Monte Carlo renderings using generative adversarial neural networks |
WO2019094933A1 (en) * | 2017-11-13 | 2019-05-16 | The Charles Stark Draper Laboratory, Inc. | Automated repair of bugs and security vulnerabilities in software |
CN108520503B (en) * | 2018-04-13 | 2020-12-22 | 湘潭大学 | Face defect image restoration method based on self-encoder and generation countermeasure network |
CN110288537A (en) * | 2019-05-20 | 2019-09-27 | 湖南大学 | Facial image complementing method based on the depth production confrontation network from attention |
2020-08-22: CN application CN202010855606.9A filed; granted as patent CN112185104B (status: Active)
Non-Patent Citations (2)
Title |
---|
Road network traffic flow data completion method based on generative adversarial networks; Wang Li et al.; Journal of Transportation Systems Engineering and Information Technology; 2018-12-31; Vol. 18, No. 6; full text *
Research progress on generative adversarial networks; Wang Wanliang, Li Zhuorong; Journal on Communications; 2018-02-28; Vol. 39, No. 2; full text *
Also Published As
Publication number | Publication date |
---|---|
CN112185104A (en) | 2021-01-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112185104B (en) | Traffic big data restoration method based on countermeasure autoencoder | |
Li et al. | The emerging graph neural networks for intelligent fault diagnostics and prognostics: A guideline and a benchmark study | |
CN112101480B (en) | Multivariate clustering and fused time sequence combined prediction method | |
CN108960303B (en) | Unmanned aerial vehicle flight data anomaly detection method based on LSTM | |
CN107886161A (en) | A kind of global sensitivity analysis method for improving Complex Information System efficiency | |
CN111768000A (en) | Industrial process data modeling method for online adaptive fine-tuning deep learning | |
CN110110809B (en) | Fuzzy automaton construction method based on machine fault diagnosis | |
CN115618296A (en) | Dam monitoring time sequence data anomaly detection method based on graph attention network | |
CN117784710B (en) | Remote state monitoring system and method for numerical control machine tool | |
CN115051929B (en) | Network fault prediction method and device based on self-supervision target perception neural network | |
Jia et al. | State of health prediction of lithium-ion batteries based on bidirectional gated recurrent unit and transformer | |
Chen et al. | Discovering state variables hidden in experimental data | |
Yang et al. | Remaining useful life prediction based on normalizing flow embedded sequence-to-sequence learning | |
CN113485261A (en) | CAEs-ACNN-based soft measurement modeling method | |
CN104634265A (en) | Soft measurement method for thickness of mineral floating foam layer based on multivariate image feature fusion | |
Gómez et al. | Neural network architecture selection: can function complexity help? | |
CN116720743A (en) | Carbon emission measuring and calculating method based on data clustering and machine learning | |
CN108490782A (en) | A kind of method and system being suitable for complex industrial process product quality indicator missing data completion based on selective double layer integrated study | |
Mudronja et al. | Data-based modelling of significant wave height in the Adriatic sea | |
Vo et al. | Harnessing attention mechanisms in a comprehensive deep learning approach for induction motor fault diagnosis using raw electrical signals | |
CN113984389A (en) | Rolling bearing fault diagnosis method based on multi-receptive-field and improved capsule map neural network | |
CN113705888A (en) | Industrial steam generation amount prediction method and system based on Pearson correlation and neural network | |
CN116910483A (en) | Soft measurement modeling method based on shielding convolution attention residual error shrinkage network | |
CN116403054A (en) | Image optimization classification method based on brain-like network model | |
CN116068520A (en) | Cognitive radar joint modulation recognition and parameter estimation method based on transducer |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||