CN110597799A - Automatic filling method, system and equipment for missing value of time sequence data - Google Patents

Automatic filling method, system and equipment for missing value of time sequence data

Info

Publication number
CN110597799A
CN110597799A (application CN201910878109.8A; granted as CN110597799B)
Authority
CN
China
Prior art keywords
data
model
missing
time sequence
time series
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910878109.8A
Other languages
Chinese (zh)
Other versions
CN110597799B (en)
Inventor
刘建志
高冲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Instrument Electric (group) Co Ltd Central Research Institute
Original Assignee
Shanghai Instrument Electric (group) Co Ltd Central Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Instrument Electric (group) Co Ltd Central Research Institute filed Critical Shanghai Instrument Electric (group) Co Ltd Central Research Institute
Priority to CN201910878109.8A priority Critical patent/CN110597799B/en
Publication of CN110597799A publication Critical patent/CN110597799A/en
Application granted granted Critical
Publication of CN110597799B publication Critical patent/CN110597799B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/21Design, administration or maintenance of databases
    • G06F16/215Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Abstract

The invention provides a method, a system and a device for automatically filling missing values of time series data. Through the combination of random masking and neural network decomposition, the time series data are decomposed into a superposition of different modes while features are extracted automatically, yielding a more accurate and effective end-to-end method for filling missing values in time series data. The method comprises three steps: data preparation, model training and model use. The data preparation step acquires original time series data and preprocesses it, constructs a random mask according to a given missing rate, and takes the newly generated random mask together with the corresponding original data as a new data set. The model training step performs model training using the new data set generated in the data preparation step to construct a model based on neural network decomposition. The model use step constructs a corresponding mask for time series data with missing values and fills the missing values using the trained model.

Description

Automatic filling method, system and equipment for missing value of time sequence data
Technical Field
The invention relates to the technical field of machine learning, and in particular to a method, a system and a device for automatically filling missing values of time series data.
Background
With the development of deep learning and neural networks, the analysis and processing of time series data, such as weather, medical, traffic and water-supply data, is receiving more and more attention. However, real time series data inevitably contain missing values for various reasons, and the missing values must be handled and filled before the data can be analysed and used effectively. Missing values in time series data often have nonlinear and dynamic correlations with other values, which traditional filling methods such as zero filling, mean filling and the EM algorithm cannot handle effectively. Some methods based on the LSTM model treat missing-value filling as a prediction problem, but on one hand they require a large amount of prior knowledge for manual feature extraction in order to keep predictions accurate; on the other hand, time series data often contain direct or hidden patterns that such methods ignore, so they cannot exploit these patterns to obtain a more accurate and effective filling result. A new end-to-end method for filling missing values in time series data, capable of automatically extracting features and fully exploiting hidden patterns, is therefore urgently needed.
Disclosure of Invention
The invention aims to provide a method, a system and a device for automatically filling missing values of time series data based on random masking and neural network decomposition, which decompose the time series data, discover their hidden modes and automatically extract features, thereby realizing a more accurate and effective end-to-end method for filling missing values in time series data.
In order to achieve the above object, an aspect of the present invention provides an automatic padding method for missing values of time series data, including:
preparing data, namely preprocessing the acquired original time sequence data without missing values, constructing a random mask according to a given missing rate, and taking the newly generated random mask and the corresponding original data as a new data set for training a model;
model training, namely performing model training by using the new data set generated in the data preparation step to construct a model, based on neural network decomposition, for automatically filling missing values of time series data;
model use, which constructs a corresponding mask for the time series data with missing values, and fills the missing values of the time series data by using the trained model.
After the data preprocessing, a data set {X_i} (i ∈ 1,2...m) is obtained from the original time series data, wherein each X_i ∈ R^k is a k-dimensional vector.
Further, a random mask {M_ij} is constructed according to a given missing rate {p_j} (p_j ∈ (0, 0.35)); the new data set consists of the original data and a corresponding random mask, namely {(X_i, M_ij)} (i ∈ 1,2...m, j ∈ D).
Further, the neural network decomposition-based model includes an input layer that receives the newly constructed data set (X_i, M_ij) and feeds it into the model for training.
Further, the neural network decomposition-based model includes an embedding layer that performs element-by-element multiplication on the data set (X_i, M_ij) so as to mask the constructed missing values, such that the values identified as missing by the mask do not participate in the model learning process.
Further, the neural network decomposition-based model includes an intermediate layer that extends features learned from the non-missing values in the embedding layer to the missing values.
Further, the model based on neural network decomposition includes a decomposition layer which decomposes the learned features by an activation function, the activation function includes a sin activation function and a linear activation function, the sin activation function is used for obtaining a sine component, and the linear activation function is used for obtaining a linear component.
Further, the model based on the neural network decomposition comprises an output layer, which obtains the output X_out by element-by-element addition of the components in the decomposition layer.
Further, when the model is used, a mask M_y is constructed in a different way: for data Y to be filled containing q missing values, Y and the constructed mask M_y are input into the trained model to obtain the output Y_out, and finally the missing part of Y is filled with the data at the corresponding positions of Y_out.
On the other hand, the invention also provides an automatic filling system for missing values of time sequence data, which comprises the following steps:
the data preparation module is used for acquiring original time sequence data without missing values to carry out data preprocessing, then constructing a random mask according to a given missing rate, and taking a newly generated random mask and corresponding original data as a new data set for training a model;
the model training module is used for performing model training by using the new data set generated by the data preparation module to construct a model, based on neural network decomposition, for automatically filling missing values of time series data;
and the model using module is used for constructing a corresponding mask according to the time sequence data with the missing value and filling the missing value of the time sequence data by using the trained model.
On the other hand, the invention also provides an automatic filling device for missing values of time sequence data, which comprises:
a processor;
a memory to store processor-executable computer program instructions;
wherein the computer program instructions, when executed by a processor, implement the method of any one of claims 1 to 9.
According to the invention, through the technology of random mask and neural network decomposition, the time sequence data is decomposed into superposition of different modes, and meanwhile, the automatic feature extraction is realized, so that a more accurate and effective end-to-end time sequence data missing value filling method is constructed.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flowchart illustrating a method for automatically filling missing values in time series data according to an embodiment of the present invention;
FIG. 2 is an architecture diagram of a neural network decomposition model in accordance with an embodiment of the present invention;
fig. 3 is a system architecture diagram of an automatic padding system for missing values of time-series data according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A method, system and device for automatically filling missing values of time series data according to embodiments of the present invention will be described below with reference to the accompanying drawings; the method according to an embodiment of the present invention is described first.
Fig. 1 is a flowchart of a method for automatically filling missing values in time series data according to an embodiment of the present invention.
As shown in fig. 1, the method for automatically filling missing values of time series data includes:
step S101, data preparation.
Specifically, in one embodiment of the present invention, the data preparation step is to perform a necessary preprocessing process on the acquired raw data without missing values. A random mask is then constructed based on the given miss rate. The newly generated random mask and the corresponding raw data will be used as a new data set for training the model.
And step S102, training a model.
Specifically, in one embodiment of the present invention, the model training step includes building a model based on neural network decomposition and performing model training using a new data set generated during the data preparation phase.
Step S103, model use.
Specifically, in one embodiment of the present invention, the model use step constructs the corresponding mask for time series data containing missing values and fills the missing values using the model trained in the preceding stage.
Specifically, in one embodiment of the present invention, after the data preprocessing, a data set {X_i} (i ∈ 1,2...m) is obtained from the original time series data, wherein each X_i ∈ R^k is a k-dimensional vector.
Further, a random mask {M_ij} is constructed according to a given missing rate {p_j} (p_j ∈ (0, 0.35)). The missing rate is defined as the ratio of missing data points to total data points. For a given p_j, each X_i is assumed to have ceil(m × p_j) missing data points, where ceil denotes rounding up to an integer. The random mask M_ij is then a vector of the same dimension as X_i, in which ceil(m × p_j) randomly chosen positions are set to 0 and all other positions are set to 1.
Further, the new data set consists of the original data and a corresponding random mask, namely {(X_i, M_ij)} (i ∈ 1,2...m, j ∈ D), where D can have different selection ranges according to different random masking strategies.
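The data preparation above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name `build_masks` is hypothetical, and the number of masked positions is taken as ceil(k × p_j) over the vector length k (the text writes ceil(m × p_j); since each mask has the same dimension k as X_i, the vector length is used here as an assumption).

```python
import numpy as np

def build_masks(X, miss_rates, seed=0):
    """For each sample X[i] and each missing rate p_j, build a random 0/1
    mask of the same length, with ceil(k * p_j) zeros marking missing points."""
    rng = np.random.default_rng(seed)
    m, k = X.shape
    dataset = []
    for i in range(m):
        for p in miss_rates:
            n_miss = int(np.ceil(k * p))          # number of masked positions
            mask = np.ones(k)
            mask[rng.choice(k, size=n_miss, replace=False)] = 0.0
            dataset.append((X[i], mask))          # (X_i, M_ij) pair
    return dataset

# toy usage: 5 series of length 10, two missing rates drawn from (0, 0.35)
X = np.sin(np.linspace(0, 2 * np.pi, 10))[None, :] + np.zeros((5, 10))
pairs = build_masks(X, miss_rates=[0.1, 0.3])
```

With k = 10, a rate of 0.1 zeroes ceil(1) = 1 position per mask and a rate of 0.3 zeroes ceil(3) = 3 positions, so each original series yields one (X_i, M_ij) pair per rate.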
FIG. 2 is an architecture diagram of a neural network decomposition model in accordance with an embodiment of the present invention.
As shown in fig. 2, the neural network decomposition model according to an embodiment of the present invention includes an input layer 1, an embedding layer 2, an intermediate layer 3, a decomposition layer 4 and an output layer 5. In the figure, ⊙ represents element-by-element multiplication, ⊕ represents element-by-element addition, and the paired symbol represents a neural network layer, i.e. a fully connected layer followed by an activation layer, where the left-hand Σ represents the fully connected layer and the right-hand symbol represents the activation layer.
Specifically, in one embodiment, the inputs X and M of the input layer 1 during model training represent the newly constructed data set (X_i, M_ij). The embedding layer 2 performs an element-by-element multiplication X ⊙ M to mask the constructed missing values, so that the values identified as missing by the mask do not participate in the model learning process. The mask acts only in the embedding layer 2.
Specifically, in one embodiment, the middle layer 3 extends the features learned from the non-missing values in the embedding layer to the missing values; this is a process of filling in and relearning the feature values.
Specifically, in one embodiment, the previously learned features are decomposed in the decomposition layer 4 by different activation functions: typically, a sine component is obtained by a sin activation function and a linear component by a linear activation function. Different activation functions can be combined and added according to the scenario to obtain further components.
Specifically, in one embodiment, the output layer performs element-by-element addition over the components of the decomposition layer to obtain the output X_out.
Specifically, in one embodiment, the model is expected to learn to fill missing values automatically, i.e. the output X_out should be as close to X as possible. The loss function can thus be defined as F(X_out, X), where a typical choice is MSE (mean squared error).
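The forward pass and loss described above can be sketched in numpy as follows. This is an illustration under assumptions: the middle-layer width, the tanh activation in the middle layer, and names such as `decomp_forward` are not specified by the patent; only the structure — masking by element-wise multiplication, a fully connected middle layer, sin and linear decomposition components, element-wise summation, and an MSE loss — follows the text.

```python
import numpy as np

def decomp_forward(X, M, params):
    """Forward pass of the decomposition model: embedding (X ⊙ M) →
    fully connected middle layer → sin + linear components → element-wise sum."""
    W_h, b_h, W_sin, b_sin, W_lin, b_lin = params
    E = X * M                        # embedding layer: mask out missing values
    H = np.tanh(E @ W_h + b_h)       # middle layer (tanh is an assumption here)
    S = np.sin(H @ W_sin + b_sin)    # decomposition layer: sine component
    L = H @ W_lin + b_lin            # decomposition layer: linear component
    return S + L                     # output layer: element-by-element addition

def mse(X_out, X):
    # typical loss F(X_out, X): mean squared error against the full series
    return np.mean((X_out - X) ** 2)

k, h = 10, 16                        # series length and hidden width (illustrative)
rng = np.random.default_rng(0)
params = (rng.normal(0, 0.1, (k, h)), np.zeros(h),
          rng.normal(0, 0.1, (h, k)), np.zeros(k),
          rng.normal(0, 0.1, (h, k)), np.zeros(k))
X = np.sin(np.linspace(0, 2 * np.pi, k))
M = np.ones(k); M[[2, 7]] = 0.0      # two positions marked missing
X_out = decomp_forward(X, M, params)
loss = mse(X_out, X)
```

The weights here are random and untrained; in training, the loss would be minimized over the (X_i, M_ij) pairs so that X_out reproduces X at both observed and masked positions.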
The model is trained by the above steps. When the model is used for missing value filling, the mask is constructed in a different way. Assume the data to be filled, containing missing values, is Y ∈ R^k, a k-dimensional vector with q missing values. A mask M_y is constructed: M_y is a k-dimensional vector in which the q positions corresponding to the missing positions in Y are set to 0 and all other positions are set to 1. Y and the constructed mask M_y are input into the model, which outputs Y_out. Finally, the missing part of Y is filled with the data at the corresponding positions of Y_out.
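The use-time filling procedure can be sketched as follows. `fill_missing` is a hypothetical name, and NaN is assumed to mark missing entries; the `toy_model` stand-in (plain linear interpolation) only illustrates the calling convention — in practice the trained decomposition network would be passed in.

```python
import numpy as np

def fill_missing(Y, model):
    """Build M_y from the missing (NaN) positions of Y, run the model,
    and replace only the missing entries with the model output."""
    M_y = np.where(np.isnan(Y), 0.0, 1.0)   # q zeros at the missing positions
    Y_in = np.nan_to_num(Y, nan=0.0)        # placeholder values for the model input
    Y_out = model(Y_in, M_y)                # trained model: (Y, M_y) -> Y_out
    return np.where(M_y == 0.0, Y_out, Y)   # keep observed values, fill the rest

# stand-in "model" for illustration only: interpolate over the observed points
def toy_model(Y_in, M_y):
    idx = np.arange(len(Y_in))
    obs = M_y == 1.0
    return np.interp(idx, idx[obs], Y_in[obs])

Y = np.array([0.0, 1.0, np.nan, 3.0, np.nan, 5.0])
filled = fill_missing(Y, toy_model)   # → [0., 1., 2., 3., 4., 5.]
```

Note that only the positions where M_y is 0 are overwritten; observed values pass through unchanged, matching the replacement rule described above.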
Fig. 3 is a system architecture diagram of an automatic filling system for missing values of time series data according to the present invention.
As shown in fig. 3, the system for automatically filling missing values in time series data according to an embodiment of the present invention includes:
the data preparation module 201 is configured to acquire original time series data without missing values to perform data preprocessing, construct a random mask according to a given missing rate, and use the newly generated random mask and corresponding original data as a new data set for model training;
the model training module 202 is used for performing model training by using a new data set generated in the data preparation step to construct a model automatically filled with a missing value of time sequence data based on neural network decomposition;
the model using module 203 constructs a corresponding mask for the time series data with missing values, and fills the missing values of the time series data by using the trained model.
Specifically, the present invention further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
The electronic device may be provided as a terminal, server, or other form of device.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or terminal. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An automatic filling method for missing values of time series data is characterized by comprising the following steps:
preparing data, namely preprocessing the acquired original time sequence data without missing values, constructing a random mask according to a given missing rate, and taking the newly generated random mask and the corresponding original data as a new data set for training a model;
model training, namely performing model training by using a new data set generated in the data preparation step to construct a model based on neural network decomposition;
model use, which constructs a corresponding mask for the time series data with missing values, and fills the missing values of the time series data by using the trained model.
2. The method of claim 1, wherein the missing value of the time series data is filled automatically,
after the data preprocessing, a data set {X_i} (i ∈ 1,2...m) is obtained from the original time series data, wherein each X_i ∈ R^k is a k-dimensional vector.
3. The method as claimed in claim 2, wherein the missing value of the time series data is automatically padded,
a random mask {M_ij} is constructed according to a given missing rate {p_j} (p_j ∈ (0, 0.35)); the new data set consists of the original data and a corresponding random mask, namely {(X_i, M_ij)} (i ∈ 1,2...m, j ∈ D).
4. The method as claimed in claim 3, wherein the missing value of the time series data is automatically padded,
the neural network decomposition-based model includes an input layer that receives the newly constructed data set (X_i, M_ij) and feeds it into the model for training.
5. The method of claim 4, wherein the missing value of the time series data is automatically padded,
the neural network decomposition-based model includes an embedding layer that performs element-by-element multiplication on the data set (X_i, M_ij) so as to mask the constructed missing values, such that the values identified as missing by the mask do not participate in the model learning process.
6. The method of claim 5, wherein the missing value of the time series data is filled automatically,
the neural network decomposition-based model includes an intermediate layer that extends features learned from the non-missing values in the embedding layer to the missing values.
7. The method of claim 6, wherein the missing value of the time series data is automatically padded,
the neural network decomposition-based model includes a decomposition layer that decomposes the learned features by activation functions, including a sin activation function to obtain a sinusoidal component and a linear activation function to obtain a linear component.
8. The method of claim 7, wherein the missing value of the time series data is automatically padded,
the model based on neural network decomposition comprises an output layer, which obtains the output X_out by element-by-element addition of the components in the decomposition layer;
when the model is used, a mask M_y is constructed in a different way: for data Y to be filled containing q missing values, Y and the constructed mask M_y are input into the trained model to obtain the output Y_out, and finally the missing part of Y is filled with the data at the corresponding positions of Y_out.
9. An automatic filling system for missing values of time series data, comprising:
the data preparation module is used for acquiring original time sequence data without missing values to carry out data preprocessing, then constructing a random mask according to a given missing rate, and taking a newly generated random mask and corresponding original data as a new data set for training a model;
the model training module is used for carrying out model training by utilizing a new data set generated in the data preparation step so as to construct a model automatically filled with a missing value of time sequence data based on neural network decomposition;
and the model using module is used for constructing a corresponding mask according to the time sequence data with the missing value and filling the missing value of the time sequence data by using the trained model.
10. An automatic filling device for missing values of time series data is characterized by comprising:
a processor;
a memory to store processor-executable computer program instructions;
wherein the computer program instructions, when executed by a processor, implement the method of any one of claims 1 to 9.
CN201910878109.8A 2019-09-17 2019-09-17 Automatic filling method, system and equipment for missing value of time sequence data Active CN110597799B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910878109.8A CN110597799B (en) 2019-09-17 2019-09-17 Automatic filling method, system and equipment for missing value of time sequence data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910878109.8A CN110597799B (en) 2019-09-17 2019-09-17 Automatic filling method, system and equipment for missing value of time sequence data

Publications (2)

Publication Number Publication Date
CN110597799A true CN110597799A (en) 2019-12-20
CN110597799B CN110597799B (en) 2023-01-24

Family

ID=68860551

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910878109.8A Active CN110597799B (en) 2019-09-17 2019-09-17 Automatic filling method, system and equipment for missing value of time sequence data

Country Status (1)

Country Link
CN (1) CN110597799B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111563077A (en) * 2020-05-12 2020-08-21 国网山东省电力公司泰安供电公司 Power grid voltage data missing filling method, system, terminal and storage medium
CN112101482A (en) * 2020-10-26 2020-12-18 西安交通大学 Method for detecting abnormal parameter mode of missing satellite data
CN112215238A (en) * 2020-10-29 2021-01-12 支付宝(杭州)信息技术有限公司 Method, system and device for constructing general feature extraction model
CN112527862A (en) * 2020-12-10 2021-03-19 国网河北省电力有限公司雄安新区供电公司 Time sequence data processing method and device
CN113486433A (en) * 2020-12-31 2021-10-08 上海东方低碳科技产业股份有限公司 Method for calculating energy consumption shortage number of net zero energy consumption building and filling system
CN113591954A (en) * 2021-07-20 2021-11-02 哈尔滨工程大学 Filling method of missing time sequence data in industrial system
CN114153829A (en) * 2021-11-30 2022-03-08 中国电力工程顾问集团华东电力设计院有限公司 Cross-space-time bidirectional data missing value filling method and device for energy big data
CN114826988A (en) * 2021-01-29 2022-07-29 中国电信股份有限公司 Method and device for anomaly detection and parameter filling of time sequence data

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108090558A (en) * 2018-01-03 2018-05-29 华南理工大学 A kind of automatic complementing method of time series missing values based on shot and long term memory network
CN109815223A (en) * 2019-01-21 2019-05-28 北京科技大学 A kind of complementing method and complementing device for industry monitoring shortage of data
US20190188562A1 (en) * 2017-12-15 2019-06-20 International Business Machines Corporation Deep Neural Network Hardening Framework
US20190228763A1 (en) * 2019-03-29 2019-07-25 Krzysztof Czarnowski On-device neural network adaptation with binary mask learning for language understanding systems


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Songlan et al.: "Research on Missing-Value Data Processing Based on Statistical Correlation", Statistics & Decision *
Ma Qian et al.: "Order-Sensitive Imputation Techniques for Multi-Source Sensory Data", Journal of Software *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111563077A (en) * 2020-05-12 2020-08-21 State Grid Shandong Electric Power Company Tai'an Power Supply Company Power grid voltage data missing filling method, system, terminal and storage medium
CN111563077B (en) * 2020-05-12 2023-04-25 State Grid Shandong Electric Power Company Tai'an Power Supply Company Power grid voltage data missing filling method, system, terminal and storage medium
CN112101482A (en) * 2020-10-26 2020-12-18 Xi'an Jiaotong University Method for detecting abnormal parameter patterns in satellite data with missing values
CN112215238A (en) * 2020-10-29 2021-01-12 Alipay (Hangzhou) Information Technology Co., Ltd. Method, system and device for constructing a general feature extraction model
CN112527862A (en) * 2020-12-10 2021-03-19 State Grid Hebei Electric Power Co., Ltd. Xiong'an New Area Power Supply Company Time-series data processing method and device
CN113486433A (en) * 2020-12-31 2021-10-08 Shanghai Oriental Low Carbon Technology Industry Co., Ltd. Method for calculating missing energy-consumption values of net-zero-energy buildings, and filling system
CN114826988A (en) * 2021-01-29 2022-07-29 China Telecom Corporation Limited Method and device for anomaly detection and parameter filling of time-series data
CN113591954A (en) * 2021-07-20 2021-11-02 Harbin Engineering University Filling method for missing time-series data in industrial systems
CN113591954B (en) * 2021-07-20 2023-10-27 Harbin Engineering University Filling method for missing time-series data in industrial systems
CN114153829A (en) * 2021-11-30 2022-03-08 East China Electric Power Design Institute Co., Ltd. of China Power Engineering Consulting Group Cross-space-time bidirectional data missing value filling method and device for energy big data
CN114153829B (en) * 2021-11-30 2023-01-20 East China Electric Power Design Institute Co., Ltd. of China Power Engineering Consulting Group Cross-space-time bidirectional data missing value filling method and device for energy big data

Also Published As

Publication number Publication date
CN110597799B (en) 2023-01-24

Similar Documents

Publication Publication Date Title
CN110597799B (en) Automatic filling method, system and equipment for missing value of time sequence data
Mei et al. Compressive-sensing-based structure identification for multilayer networks
CN112527273A (en) Code completion method, device and related equipment
CN111428848B (en) Molecular intelligent design method based on self-encoder and 3-order graph convolution
CN113487024A (en) Alternate sequence generation model training method and method for extracting graph from text
CN113783874A (en) Network security situation assessment method and system based on security knowledge graph
Hu et al. A spatial image steganography method based on nonnegative matrix factorization
CN112463989A (en) Knowledge graph-based information acquisition method and system
Yuan et al. Clifford algebra method for network expression, computation, and algorithm construction
Hirata et al. Reconstructing state spaces from multivariate data using variable delays
CN111768326B (en) High-capacity data protection method based on GAN (generative adversarial network) amplified image foreground objects
CN113194493A (en) Wireless network data missing attribute recovery method and device based on graph neural network
CN115730156B (en) Matching system and method for newly detected ship tracks
CN107622201B (en) A hardening-resistant rapid detection method for cloned Android platform applications
Bosma et al. Estimating solar and wind power production using computer vision deep learning techniques on weather maps
CN115146292A (en) Tree model construction method and device, electronic equipment and storage medium
CN115408535A (en) Accident knowledge graph construction method and device, storage medium and electronic equipment
CN114818548A (en) Aquifer parameter field inversion method based on convolution generated confrontation network
CN113641791A (en) Expert recommendation method, electronic device and storage medium
CN116226434B (en) Multi-element heterogeneous model training and application method, equipment and readable storage medium
CN115994668B (en) Intelligent community resource management system
Galdino Interval continuous-time Markov chains simulation
CN116485501B (en) Graph neural network session recommendation method based on graph embedding and attention mechanism
Quirce et al. Cuckoo search algorithm approach for the IFS Inverse Problem of 2D binary fractal images
Boroujeny et al. Multi-Bit Distortion-Free Watermarking for Large Language Models

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant