CN112417000B - Time sequence missing value filling method based on bidirectional cyclic codec neural network - Google Patents
Time sequence missing value filling method based on bidirectional cyclic codec neural network
- Publication number
- CN112417000B (granted publication of application CN202011295072.5A)
- Authority
- CN
- China
- Prior art keywords
- sequence
- network
- neural network
- cyclic
- time
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2458—Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
- G06F16/2474—Sequence data queries, e.g. querying versioned data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9537—Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention provides a time series missing value filling method based on a bidirectional cyclic codec (encoder-decoder) neural network. The method combines an autoencoder with a recurrent neural network to model time series containing missing values; it measures the difference between the filled sequence and the label sequence through two training losses and updates the encoder and decoder by backpropagation in an asynchronous manner; and it amplifies the network's response to missing data through coordinated gating units. The method addresses two problems of general methods: they cannot correctly model the spatiotemporal relationships of a time series containing missing values, and their filling quality is sensitive to changes in the missing rate.
Description
Technical Field
The invention relates to the field of artificial intelligence, in particular to a time sequence missing value filling method based on a bidirectional cyclic codec neural network.
Background
In application tasks over multidimensional time series in the industrial Internet of Things, such as context recognition, predictive maintenance, and anomaly detection, a complete time series is a precondition for the task to run smoothly. However, the large number of device accesses and environmental instability make missing values prevalent in industrial IoT multidimensional time series. Existing filling means for multidimensional time series include mean filling, clustering-based filling, and regression-based filling. The effect of mean filling depends on the differences between data points; its precision is low, and it easily causes large deviations, especially under consecutive misses. Clustering-based methods, such as fuzzy C-means, cannot model the spatiotemporal relationship, and their filling quality is strongly affected by the missing rate. Regression-based methods, such as recurrent neural network regression, have a training process that is not robust to missing values and cannot correctly model the spatiotemporal relationships in a time series containing missing values. For missing value filling of multidimensional time series, a method is urgently needed that can model the spatiotemporal relationships in a time series containing missing values and whose filling quality is insensitive to changes in the missing rate.
Disclosure of Invention
The invention aims to provide a time series missing value filling method based on a bidirectional cyclic codec neural network. The method overcomes two shortcomings of existing time series filling methods: the difficulty of correctly modeling the spatiotemporal relationships in a time series containing missing values, and the strong sensitivity of the filling quality to changes in the missing rate.
In order to achieve the purpose, the technical scheme of the invention is as follows:
a time sequence missing value filling method based on a bidirectional cyclic codec neural network can effectively model the space-time relationship in a time sequence containing missing values and improve filling performance and stability. The method comprises the following steps:
step S1: and taking a continuous time sequence with a fixed length and without missing values from the historical database as a tag sequence, and taking the tag sequence as an input sequence after artificially creating missing points.
Step S2: and inputting the input sequence into a neural network of a bidirectional cyclic coder-decoder to obtain an output sequence.
Step S3: and calculating the training loss of the output sequence and the label sequence of the bidirectional cyclic codec neural network, and reversely updating the neural network.
Step S4: after the neural network model is trained, the time sequence containing the missing value is input into the neural network of the bidirectional cyclic codec, and the obtained output sequence is the filled time sequence.
In step S1, the label data is represented as X = {x_1, x_2, …, x_t, …, x_T}, x_t ∈ R^D, where T represents the time series length and D represents the number of data attributes. The method for artificially creating the missing points is:
x̃_t^d = m_t^d · x_t^d

where x̃_t^d represents the value of the d-th attribute at the t-th instant in the artificially-missing time series, and x_t^d represents the value of the d-th attribute at the t-th instant of the label data. m_t^d takes a value of either 0 or 1: m_t^d = 0 indicates that x̃_t^d is missing, and m_t^d = 1 indicates that x̃_t^d is not missing. The missing rate r = 1 − (Σ_{t,d} m_t^d) / (T·D) is used to measure the missing degree of the time series.
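The masking step above can be sketched in NumPy. The function names and the Bernoulli sampling of the mask are illustrative assumptions; the text only specifies that m_t^d ∈ {0, 1} marks which entries are artificially missing:

```python
import numpy as np

def create_artificial_missing(x, missing_rate, seed=None):
    """Mask entries of a complete label sequence x (shape T x D).

    m[t, d] = 0 marks x[t, d] as artificially missing, m[t, d] = 1 keeps it.
    Returns x_tilde = m * x (Hadamard product) and the mask m.
    """
    rng = np.random.default_rng(seed)
    m = (rng.random(x.shape) >= missing_rate).astype(x.dtype)
    return m * x, m

def missing_degree(m):
    """Fraction of entries marked missing, measuring the deletion degree."""
    return 1.0 - m.mean()
```

For a 14 x 7 label window as in the embodiment, `create_artificial_missing(x, 0.2)` would mask roughly 20% of the entries.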
In step S2, the input sequence is fed into the bidirectional cyclic codec neural network to obtain the output sequence; the calculation proceeds as follows:
step S21: obtain the input sequence X̃ = {x̃_1, x̃_2, …, x̃_T}, x̃_t ∈ R^D, where T represents the time series length and D represents the number of data attributes, and calculate the mean value m̄^d of each attribute in the input time series as follows:
m̄^d = (Σ_{t=1}^{T} m_t^d · x̃_t^d) / (Σ_{t=1}^{T} m_t^d)

where m_t^d takes a value of either 0 or 1: m_t^d = 0 indicates that x̃_t^d is missing, and m_t^d = 1 indicates that x̃_t^d is not missing;
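A minimal sketch of this per-attribute mean, under the assumption (suggested by the mask description) that the mean is taken over observed entries only:

```python
import numpy as np

def observed_mean(x_tilde, m, eps=1e-8):
    """Per-attribute mean of x_tilde (T x D) over its non-missing entries.

    m[t, d] = 1 where the value is observed, 0 where it is missing; eps
    guards against attributes with no observed entries at all.
    """
    return (m * x_tilde).sum(axis=0) / (m.sum(axis=0) + eps)
```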
step S22: from t = 1 to t = T, the output of the encoder network is obtained by iterating the following equation:
where W_s, γ, and b_s are learnable network weights: W_s has shape D × N_h, γ has shape 1 × 1, b_s has shape 1 × D, and the remaining recurrent weight has shape N_h × N_h. · denotes the matrix product, ⊙ denotes the Hadamard product, and σ(.) denotes the sigmoid function. In particular, s_0 is an all-zero matrix. f_GR(.) denotes the loop-body function containing the coordinated gating unit. Recording s_t at each time t gives S = {s_1, s_2, …, s_t, …, s_T}, where T represents the time series length and D represents the number of data attributes. Finally, the encoder network is denoted S = f_e(X̃; θ_e), where θ_e represents all trainable weights in the encoder network and X̃ represents the input sequence;
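The encoder's iteration formula appears only as an image in the original, so the sketch below is an assumption: a plain GRU stands in for the loop body f_GR, and missing inputs are replaced by the attribute means before entering the recurrence. The weight names (Wz, Wr, Wc) are illustrative, and biases are omitted:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_cell(x_t, h_prev, params):
    """Standard GRU loop body; each weight matrix has shape (D + Nh) x Nh."""
    zr_in = np.concatenate([x_t, h_prev])
    z = sigmoid(zr_in @ params["Wz"])                     # update gate
    r = sigmoid(zr_in @ params["Wr"])                     # reset gate
    c = np.tanh(np.concatenate([x_t, r * h_prev]) @ params["Wc"])
    return (1 - z) * h_prev + z * c

def encoder(x_tilde, m, mean, params):
    """Iterate t = 1..T (step S22) and collect S = {s_1, ..., s_T}."""
    h = np.zeros(params["Wz"].shape[1])                   # s_0 is all zeros
    states = []
    for t in range(x_tilde.shape[0]):
        x_in = m[t] * x_tilde[t] + (1 - m[t]) * mean      # mean substitution
        h = gru_cell(x_in, h, params)                     # stand-in for f_GR
        states.append(h)
    return np.stack(states)
```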
step S23: from t = T to t = 1, the output of the decoder network is obtained by iterating the following equation:
where W_y and b_y are learnable network weights: W_y has shape D × N_h, b_y has shape 1 × D, and the remaining recurrent weight has shape N_h × N_h. f_R(.) denotes the standard loop-body function used for the decoder network. Recording y_t at each time t gives Y = {y_1, y_2, …, y_t, …, y_T}, y_t ∈ R^D, where T represents the time series length and D represents the number of data attributes. Let ŷ_t = φ(y_t), where φ(.) represents a non-linear function. Finally, the decoder network is denoted Y = f_d(S; θ_d), where θ_d represents all trainable weights in the decoder network and S represents the input sequence from the encoder;
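A self-contained sketch of this backward pass, again under assumptions: a minimal tanh recurrence stands in for the standard loop body f_R (the embodiment uses a GRU), and only the stated shapes of W_y (D x N_h) and b_y are taken from the text:

```python
import numpy as np

def decoder(S, Wh, Wy, by):
    """Iterate from t = T down to t = 1 over encoder outputs S (T x Nh)."""
    h = np.zeros(S.shape[1])
    ys = [None] * S.shape[0]
    for t in range(S.shape[0] - 1, -1, -1):
        h = np.tanh(S[t] + h @ Wh)        # stand-in loop body; Wh is Nh x Nh
        ys[t] = Wy @ h + by               # y_t = W_y h'_t + b_y, y_t in R^D
    return np.stack(ys)                   # Y = {y_1, ..., y_T}
```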
step S24: the output of the bidirectional cyclic codec neural network is X̂ = {x̂_1, x̂_2, …, x̂_T}, x̂_t ∈ R^D, where T represents the time series length and D represents the number of data attributes, calculated by the following formula:
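The combination formula itself is an image in the original. A common reconstruction for filling methods of this kind, and an assumption here, is that observed values pass through unchanged and only missing positions are taken from the decoder output:

```python
import numpy as np

def fill_output(x_tilde, m, y):
    """x_hat_t = m_t * x_tilde_t + (1 - m_t) * y_t (Hadamard products):
    observed entries are kept, missing entries come from the decoder."""
    return m * x_tilde + (1 - m) * y
```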
In step S22, the loop-body function containing the coordinated gating unit is expressed by the following formula:
where W_g is a learnable network weight with shape (N_h + D) × N_h, and [.] represents the splicing (concatenation) operation of matrices. f_R(.) denotes the standard loop-body function used for the encoder network, such as the LSTM loop body or the GRU loop body.
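The gating formula is also an image in the original. The sketch below matches the stated ingredients (a weight W_g of shape (N_h + D) x N_h, splicing, and a sigmoid); multiplying the resulting gate into the standard loop body's output is an assumption about how f_GR combines them:

```python
import numpy as np

def coordinated_gated_step(x_t, h_prev, Wg, loop_body):
    """One f_GR step: a sigmoid gate over the spliced input and previous
    state modulates the output of the standard loop body f_R."""
    g = 1.0 / (1.0 + np.exp(-(np.concatenate([x_t, h_prev]) @ Wg)))
    return g * loop_body(x_t, h_prev)     # gate amplifies or suppresses the update
```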
In step S3, the procedure of calculating the training loss and updating the network in the reverse direction includes the following steps:
step S31: obtain the label data X = {x_1, x_2, …, x_t, …, x_T}, x_t ∈ R^D, where T represents the time series length and D represents the number of data attributes. Calculate the encoder loss ℒ_e and the decoder loss ℒ_d; the expressions are as follows:
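The exact expressions for ℒ_e and ℒ_d are images in the original; a masked mean absolute error between a network output and the label sequence is a common choice for imputation losses and is used here purely as an illustrative stand-in:

```python
import numpy as np

def masked_mae(pred, target, weight, eps=1e-8):
    """Mean absolute error restricted to entries where weight == 1."""
    return np.abs(weight * (pred - target)).sum() / (weight.sum() + eps)
```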
step S32: execute step S22 to obtain S = {s_1, s_2, …, s_T}, calculate the gradient ∂ℒ_e/∂θ_e, and update θ_e by the gradient descent method.
Step S33: execute step S23 to obtain Y, calculate the gradient ∂ℒ_d/∂θ_d, and update θ_d by the gradient descent method.
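The asynchronous update of the two sub-networks can be sketched as two independent gradient-descent steps, encoder first and decoder second; how the gradients themselves are obtained is left abstract here:

```python
def sgd_update(params, grads, lr=1e-3):
    """One gradient-descent step on a single sub-network's weight dict,
    so theta_e and theta_d can be updated at different times."""
    return {k: params[k] - lr * grads[k] for k in params}

def train_step(theta_e, theta_d, grad_e, grad_d, lr=1e-3):
    """Asynchronous order: encoder update, then decoder update."""
    theta_e = sgd_update(theta_e, grad_e, lr)   # uses dL_e/dtheta_e
    theta_d = sgd_update(theta_d, grad_d, lr)   # uses dL_d/dtheta_d
    return theta_e, theta_d
```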
In step S4, in the practical application process, the time series missing value filling process is:
step S41: split the time series to be filled into N parts {XR_1, XR_2, …, XR_i, …} according to the fixed time series length T.
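Step S41 can be sketched as simple windowing; handling of a trailing remainder shorter than T is not specified in the text, so it is dropped here:

```python
def split_series(series, T):
    """Split a series into consecutive, non-overlapping windows of length T."""
    return [series[i:i + T] for i in range(0, len(series) - T + 1, T)]
```

With the embodiment's numbers, a series of length 1400 and T = 14 yields 100 windows.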
Compared with the prior art, the invention has the following technical effects:
1. Owing to the use of the autoencoder network and the recurrent neural network, together with the iterative process described in step S22, the invention can correctly model the spatiotemporal relationships in a time series containing missing values.
2. The invention uses the coordinated gating unit to amplify the network's response to missing data, which reduces the influence of changes in the missing rate on the filling quality.
3. Owing to the asynchronous backward-update process of the encoder network and the decoder network, the invention can mitigate gradient explosion and vanishing during training and improve the interpretability of the encoder and decoder networks.
4. The invention neither restricts the characteristics of the time series to be filled nor introduces prior knowledge about it, and is therefore suitable for rapid deployment and application in the field.
Drawings
Fig. 1 is a schematic view of a scene of filling missing values in a multidimensional time series of a sensor according to an embodiment of the present application;
fig. 2 is a schematic diagram of a bi-directional cyclic codec based neural network according to an embodiment of the present application.
Fig. 3 is a schematic diagram of a cyclic network of coordinated gate units according to an embodiment of the present application.
Detailed Description
All terms relating to artificial intelligence used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
The invention provides a time series missing value filling method based on a bidirectional cyclic codec neural network, used here for filling a sensor time series. The embodiment provided by this application is the scene shown in fig. 1 for filling missing values in a multidimensional sensor time series. The acquisition equipment is connected to several monitoring devices; data are acquired at short intervals and uploaded to the server at longer intervals, so the data received by the server can be regarded as time series of equal length. If a time series contains no missing values, the server stores it directly in the historical database; if it contains missing data, the server feeds the sequence to the filling module, and the filling module stores the filled result in the historical database. The monitoring device may be a sensor or a controller. The acquisition device may be a gateway, a field controller, or the like, and the acquired data may be sensor readings or controller signals. Illustratively, an embedded programmable logic controller (ePLC) is widely used in various industrial fields owing to its easy programming and high reliability, and is suitable as an acquisition device. The server may be a stand-alone server or a service in a cloud service platform. It should be noted that although the technical solution of this application may be applied to the above scenario, the scenario does not limit the solution; it may also be applied to other scenarios requiring time series missing value filling.
The technical solution of the embodiments of this application is described in detail below, taking a specific base station implementation as an example. A base station is an important piece of mobile communication infrastructure in which a large number of sensors record the operating state of the base station. Seven sensors in total were selected from the base station: the first ballast current, first ballast temperature, second ballast current, second ballast temperature, third ballast current, third ballast temperature, and base station ambient temperature, with an acquisition period of 30 minutes. The historical database stored a total of 17520 records from February 2018 to February 2019; each record contains 7 attributes consisting of the 7 sensor readings, and 18.7% of the data attribute values are missing. The upload period of the acquisition equipment is 7 hours, i.e., 14 records.
In this embodiment, the method for filling missing values in a time series based on a neural network of a bi-directional cyclic codec includes the following steps:
step S1: and (3) taking a continuous time sequence which does not contain the missing value and has the length of 14 out of the historical database as a tag sequence, and taking the tag sequence after artificially creating the missing point as an input sequence.
Step S2: input the input sequence into the bidirectional cyclic codec neural network to obtain the output sequence.
Step S3: compute the training losses between the output sequence of the bidirectional cyclic codec neural network and the label sequence, and update the neural network by backpropagation.
Step S4: after the neural network model is trained, input the time series containing missing values into the bidirectional cyclic codec neural network; the resulting output sequence is the filled time series.
In the present embodiment, in step S1, the label data is represented as X = {x_1, x_2, …, x_t, …, x_14}, x_t ∈ R^7. The method for artificially creating the missing points is:

x̃_t^d = m_t^d · x_t^d

where x̃_t^d represents the value of the d-th attribute at the t-th instant in the artificially-missing time series, x_t^d represents the value of the d-th attribute at the t-th instant in the label sequence, and · denotes the Hadamard product. m_t^d takes a value of either 0 or 1: m_t^d = 0 indicates that x̃_t^d is missing, and m_t^d = 1 indicates that it is not missing;
in this embodiment, in step S2, the input sequence is fed into the bidirectional cyclic codec neural network to obtain the output sequence; the steps are as follows:
Step S21: calculate the mean value m̄^d of each attribute in the input time series as follows:

m̄^d = (Σ_{t=1}^{14} m_t^d · x̃_t^d) / (Σ_{t=1}^{14} m_t^d)

where m_t^d takes a value of either 0 or 1: m_t^d = 0 indicates that x̃_t^d is missing, and m_t^d = 1 indicates that x̃_t^d is not missing.
Step S22: from t = 1 to t = 14, the output of the encoder network is obtained by iterating the following equation:

where W_s, γ, and b_s are learnable network weights: W_s has shape 7 × 64, γ has shape 1 × 1, b_s has shape 1 × 7, and the remaining recurrent weight has shape 64 × 64. · denotes the matrix product, ⊙ denotes the Hadamard product, and σ(.) denotes the sigmoid function. In particular, s_0 is an all-zero matrix. f_GR(.) denotes the loop-body function containing the coordinated gating unit. Recording s_t at each time t gives S = {s_1, s_2, …, s_t, …, s_14}. Finally, the encoder network is denoted S = f_e(X̃; θ_e), where θ_e represents all trainable weights in the encoder network.
Step S23: from t = 14 to t = 1, the output of the decoder network is obtained by iterating the following equation:

where W_y and b_y are learnable network weights: W_y has shape 7 × 64, b_y has shape 1 × 7, and the remaining recurrent weight has shape 64 × 64. f_R(.) denotes the standard loop-body function used for the decoder network; in this embodiment, a GRU loop body is used. Recording y_t at each time t gives Y = {y_1, y_2, …, y_t, …, y_14}, y_t ∈ R^7. Let ŷ_t = φ(y_t), where φ(.) represents a non-linear function.
Finally, the decoder network is denoted Y = f_d(S; θ_d), where θ_d represents all trainable weights in the decoder network and S represents the input sequence from the encoder.
Step S24: the output of the bidirectional cyclic codec neural network is X̂ = {x̂_1, x̂_2, …, x̂_14}, x̂_t ∈ R^7, calculated by the following formula:
In this embodiment, in step S22, the loop-body function containing the coordinated gating unit is expressed by the following formula:

where W_g is a learnable network weight with shape 71 × 64, i.e., (N_h + D) × N_h = (64 + 7) × 64, and [.] represents the splicing (concatenation) operation of matrices. f_R(.) denotes the standard loop-body function used for the encoder network; in this embodiment, a GRU loop body is used.
In this embodiment, in the step S3, the process of calculating the training loss and updating the network in the reverse direction includes the following steps:
step S31: obtain the label data X = {x_1, x_2, …, x_t, …, x_14}, x_t ∈ R^7. Calculate the encoder loss ℒ_e and the decoder loss ℒ_d; the expressions are as follows:
step S32: execute step S22 to obtain S = {s_1, s_2, …, s_14}, calculate the gradient ∂ℒ_e/∂θ_e, and update θ_e by the gradient descent method.
Step S33: execute step S23 to obtain Y, calculate the gradient ∂ℒ_d/∂θ_d, and update θ_d by the gradient descent method.
In this embodiment, in step S4, in the practical application process, the time series missing value filling process is:
step S41: split the time series of length 1400 to be filled into 100 parts {XR_1, XR_2, …, XR_i, …, XR_100} according to the fixed time series length 14.
Specifically, the experimental comparison process and the results of this example with other time series filling methods containing deletion values are as follows:
1. Three experiments were performed, with missing rates of 10%, 20%, and 30%.
2. We compare the filling performance of this embodiment with other time series filling methods: Bidirectional Recurrent Imputation for Time Series (BRITS) and the Linear Memory Vector Recurrent Network (LIME-RNN).
3. We denote the difference in filling performance by the Mean Relative Error (MRE):

where x represents a sequence without missing values, and x̂ represents the sequence obtained by filling x after missing values are artificially created in it.
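The MRE formula itself is an image in the original. A common definition, restricted here (as an assumption) to the artificially-missing entries where the mask is 0, is:

```python
import numpy as np

def mean_relative_error(x, x_hat, mask, eps=1e-8):
    """MRE = sum |x_hat - x| / sum |x| over entries with mask == 0."""
    miss = mask == 0
    return np.abs(x_hat[miss] - x[miss]).sum() / (np.abs(x[miss]).sum() + eps)
```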
Table 1 compares the filling accuracy of the present invention with other time series filling methods under different missing rates.
TABLE 1
Missing rate | LIME-RNN | BRITS | This example |
30% | 19.11% | 15.91% | 11.97% |
20% | 14.28% | 13.26% | 10.94% |
10% | 11.85% | 11.73% | 9.94% |
The data in the table show that the time series missing value filling method based on the bidirectional cyclic codec neural network provided by this embodiment achieves higher filling precision than the existing methods at missing rates of 10%, 20%, and 30%, is less affected by changes in the missing rate, and has practical reference value and economic benefit.
The above description of the embodiments is only intended to facilitate the understanding of the method of the invention and its core idea. It should be noted that, for those skilled in the art, it is possible to make various improvements and modifications to the present invention without departing from the principle of the present invention, and those improvements and modifications also fall within the scope of the claims of the present invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (1)
1. A time sequence missing value filling method based on a bidirectional cyclic codec neural network is characterized by comprising the following steps:
step S1: taking a continuous time series of fixed length without missing values from a historical database as the label sequence, artificially creating missing points in the label sequence, and using the result as the input sequence;
step S2: inputting the input sequence into a neural network of a bidirectional cyclic coder-decoder to obtain an output sequence;
step S3: calculating the training loss of the output sequence and the label sequence of the bidirectional cyclic codec neural network, and reversely updating the neural network;
step S4: after training the neural network model, inputting the time sequence containing the missing value into a bidirectional cyclic codec neural network, wherein the obtained output sequence is the filled time sequence;
wherein, the step S2 further includes the following steps:
step S21: obtaining the input sequence X̃ = {x̃_1, x̃_2, …, x̃_T}, x̃_t ∈ R^D, where T represents the time series length, D represents the number of data attributes, and x̃_t^d represents the d-th attribute;
calculating the mean m̄ over the input sequence of length T, the mean of each attribute being:

m̄^d = (Σ_{t=1}^{T} m_t^d · x̃_t^d) / (Σ_{t=1}^{T} m_t^d)

where m̄^d is the d-th component of m̄, and m_t^d takes a value of either 0 or 1: m_t^d = 0 indicates that x̃_t^d is missing, and m_t^d = 1 indicates that it is not missing;
step S22: from t = 1 to t = T, the output of the encoder network is obtained by iterating the following equation:

where W_s, γ, and b_s are learnable network weights: W_s has shape D × N_h, γ has shape 1 × 1, b_s has shape 1 × D, and the remaining recurrent weight has shape N_h × N_h; · denotes the matrix product and ⊙ denotes the Hadamard product; s_0 is an all-zero matrix; f_GR(.) denotes the loop-body function containing the coordinated gating unit; recording s_t at each time t gives S = {s_1, s_2, …, s_t, …, s_T}, where T represents the time series length and D represents the number of data attributes; finally, the encoder network is denoted S = f_e(X̃; θ_e), where θ_e represents all trainable weights in the encoder network and X̃ represents the input sequence;
step S23: from t = T to t = 1, the output of the decoder network is obtained by iterating the following equation:

where W_y and b_y are learnable network weights: W_y has shape D × N_h, b_y has shape 1 × D, and the remaining recurrent weight has shape N_h × N_h; f_R(.) denotes the standard loop-body function used for the decoder network; recording y_t at each time t gives Y = {y_1, y_2, …, y_t, …, y_T}, y_t ∈ R^D, where T represents the time series length and D represents the number of data attributes; let ŷ_t = φ(y_t), where φ(.) represents a non-linear function; finally, the decoder network is denoted Y = f_d(S; θ_d), where θ_d represents all trainable weights in the decoder network and S represents the input sequence from the encoder;
step S24: the output of the bidirectional cyclic codec neural network is X̂ = {x̂_1, x̂_2, …, x̂_T}, x̂_t ∈ R^D, where T represents the time series length and D represents the number of data attributes, calculated by the following formula:
in step S22, the loop-body function of the coordinated gating unit is expressed by the following formula:

where W_g is a learnable network weight with shape (N_h + D) × N_h, [.] represents the splicing operation of matrices, and σ(.) denotes the sigmoid function; f_R(.) denotes the standard loop-body function used for the encoder network;
in step S3, the procedure of calculating the training loss and updating the network backwards is characterized by comprising the following steps:
step S31: obtaining the label data X = {x_1, x_2, …, x_t, …, x_T}, x_t ∈ R^D, where T represents the time series length, D represents the number of data attributes, and x_t^d represents the d-th attribute; calculating the encoder loss ℒ_e and the decoder loss ℒ_d, expressed as follows:
step S32: executing step S22 to obtain S = {s_1, s_2, …, s_T}, calculating the gradient ∂ℒ_e/∂θ_e, and updating θ_e by the gradient descent method;
Step S33: executing step S23 to obtain Y, calculating the gradient ∂ℒ_d/∂θ_d, and updating θ_d by the gradient descent method.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011295072.5A CN112417000B (en) | 2020-11-18 | 2020-11-18 | Time sequence missing value filling method based on bidirectional cyclic codec neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011295072.5A CN112417000B (en) | 2020-11-18 | 2020-11-18 | Time sequence missing value filling method based on bidirectional cyclic codec neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112417000A CN112417000A (en) | 2021-02-26 |
CN112417000B true CN112417000B (en) | 2022-01-07 |
Family
ID=74774419
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011295072.5A Active CN112417000B (en) | 2020-11-18 | 2020-11-18 | Time sequence missing value filling method based on bidirectional cyclic codec neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112417000B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112948743B (en) * | 2021-03-26 | 2022-05-03 | 重庆邮电大学 | Coal mine gas concentration deficiency value filling method based on space-time fusion |
CN113297191B (en) * | 2021-05-28 | 2022-04-05 | 湖南大学 | Stream processing method and system for network missing data online filling |
CN114530239B (en) * | 2022-01-07 | 2022-11-08 | 北京交通大学 | Energy-saving mobile health medical monitoring system based on software and hardware cooperation |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108090558A (en) * | 2018-01-03 | 2018-05-29 | 华南理工大学 | A kind of automatic complementing method of time series missing values based on shot and long term memory network |
CN109598002A (en) * | 2018-11-15 | 2019-04-09 | 重庆邮电大学 | Neural machine translation method and system based on bidirectional circulating neural network |
CN111046027A (en) * | 2019-11-25 | 2020-04-21 | 北京百度网讯科技有限公司 | Missing value filling method and device for time series data |
CN111414353A (en) * | 2020-02-29 | 2020-07-14 | 平安科技(深圳)有限公司 | Intelligent missing data filling method and device and computer readable storage medium |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110837888A (en) * | 2019-11-13 | 2020-02-25 | 大连理工大学 | Traffic missing data completion method based on bidirectional cyclic neural network |
CN111401553B (en) * | 2020-03-12 | 2023-04-18 | 南京航空航天大学 | Missing data filling method and system based on neural network |
CN111860785A (en) * | 2020-07-24 | 2020-10-30 | 中山大学 | Time sequence prediction method and system based on attention mechanism cyclic neural network |
-
2020
- 2020-11-18 CN CN202011295072.5A patent/CN112417000B/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108090558A (en) * | 2018-01-03 | 2018-05-29 | 华南理工大学 | A kind of automatic complementing method of time series missing values based on shot and long term memory network |
CN109598002A (en) * | 2018-11-15 | 2019-04-09 | 重庆邮电大学 | Neural machine translation method and system based on bidirectional circulating neural network |
CN111046027A (en) * | 2019-11-25 | 2020-04-21 | 北京百度网讯科技有限公司 | Missing value filling method and device for time series data |
CN111414353A (en) * | 2020-02-29 | 2020-07-14 | 平安科技(深圳)有限公司 | Intelligent missing data filling method and device and computer readable storage medium |
Non-Patent Citations (2)
Title |
---|
"BRITS: Bidirectional Recurrent Imputation for Time Series";Wei Cao等;《arXiv:1805.10572》;20180527;全文 * |
"电子健康记录缺失数据预测与填充方法研究";陈宣池;《中国优秀硕士学位论文全文数据库 医药卫生科技辑》;20200815;全文 * |
Also Published As
Publication number | Publication date |
---|---|
CN112417000A (en) | 2021-02-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112417000B (en) | Time sequence missing value filling method based on bidirectional cyclic codec neural network | |
CN107688871B (en) | Water quality prediction method and device | |
Jia et al. | Research on a mine gas concentration forecasting model based on a GRU network | |
CN111310965A (en) | Aircraft track prediction method based on LSTM network | |
CN110381524B (en) | Bi-LSTM-based large scene mobile flow online prediction method, system and storage medium | |
CN105427138A (en) | Neural network model-based product market share analysis method and system | |
CN111160659B (en) | Power load prediction method considering temperature fuzzification | |
CN110837888A (en) | Traffic missing data completion method based on bidirectional cyclic neural network | |
CN115758290A (en) | Fan gearbox high-speed shaft temperature trend early warning method based on LSTM | |
CN115016276B (en) | Intelligent water content adjustment and environment parameter Internet of things big data system | |
CN114839881B (en) | Intelligent garbage cleaning and environmental parameter big data Internet of things system | |
CN112948743B (en) | Coal mine gas concentration deficiency value filling method based on space-time fusion | |
CN113031555A (en) | Intelligent purification system for harmful gas in environment of livestock and poultry house | |
CN111127104A (en) | Commodity sales prediction method and system | |
CN115630742A (en) | Weather prediction method and system based on self-supervision pre-training | |
CN113807951A (en) | Transaction data trend prediction method and system based on deep learning | |
CN113722997A (en) | New well dynamic yield prediction method based on static oil and gas field data | |
CN114117599A (en) | Shield attitude position deviation prediction method | |
CN113642812B (en) | Beidou-based micro-deformation prediction method, device, equipment and readable storage medium | |
CN115128978A (en) | Internet of things environment big data detection and intelligent monitoring system | |
CN111340300A (en) | Method and system for predicting residential load based on FAF-LSTM deep neural network | |
CN116894180B (en) | Product manufacturing quality prediction method based on different composition attention network | |
CN114626115A (en) | Building hourly thermal load prediction modeling method based on transfer learning | |
CN110852415B (en) | Vegetation index prediction method, system and equipment based on neural network algorithm | |
CN117196105A (en) | People number prediction method, device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |