CN115422264B - Time sequence data processing method, device, equipment and readable storage medium - Google Patents

Time sequence data processing method, device, equipment and readable storage medium

Info

Publication number
CN115422264B
CN115422264B (application CN202211361310.7A)
Authority
CN
China
Prior art keywords
data
target
self
time sequence
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211361310.7A
Other languages
Chinese (zh)
Other versions
CN115422264A (en)
Inventor
Zhang Xiaolan (张潇澜)
Li Feng (李峰)
Yin Tao (殷涛)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd filed Critical Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202211361310.7A priority Critical patent/CN115422264B/en
Publication of CN115422264A publication Critical patent/CN115422264A/en
Application granted granted Critical
Publication of CN115422264B publication Critical patent/CN115422264B/en
Priority to PCT/CN2023/095897 priority patent/WO2024093207A1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2458Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F16/2474Sequence data queries, e.g. querying versioned data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Fuzzy Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The application belongs to the field of computer applications and specifically discloses a time series data processing method, apparatus, device and readable storage medium. The method comprises the following steps: segmenting time series data according to several different window sizes to obtain several training sets; inputting the training sets into several autoencoders for training, where each autoencoder corresponds to one window size; after training, performing preferential selection among the models output by the autoencoders to obtain a target model; and acquiring target time series data and extracting its target features with the target model. In this way, the application achieves automated feature extraction for time series data with multiple indices, each index having multiple dimensions, while automatically selecting the optimal window.

Description

Time sequence data processing method, device, equipment and readable storage medium
Technical Field
The present disclosure relates to the field of computer applications, and in particular, to a method, an apparatus, a device, and a readable storage medium for processing time series data.
Background
Time series data analysis is an important research direction in artificial intelligence and is widely applied in fields such as intelligent operation and maintenance, natural language processing, video analysis, and speech recognition. Concrete applications include performance monitoring of hardware devices, anomaly detection, capacity prediction, fault diagnosis, natural language analysis and understanding, and pattern analysis and recognition in video and speech.
In practical applications, time series data is typically described by multiple indices, each index comprising multiple dimensions, and future trends of the data are predicted by analysing these index data. However, the dimensions within the index data are often redundant and of varying importance; the large amount of redundant information interferes with the accuracy of an algorithm and increases its time complexity, reducing both its prediction quality and its processing efficiency.
Therefore, how to automatically extract key features of data in multiple dimensions of different indexes is a technical problem that needs to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to provide a time sequence data processing method, a device, equipment and a readable storage medium, which can automatically select an optimal window and a model, so that a finally obtained target model can automatically extract key characteristics of time sequence data in a plurality of dimensions of different indexes.
In order to solve the technical problems, the application provides the following technical scheme:
a feature extraction method, comprising:
dividing the time sequence data according to a plurality of different window sizes to obtain a plurality of training sets;
Respectively inputting the training sets into a plurality of self-encoders for training; wherein one of the self-encoders corresponds to one of the window sizes;
after training, performing preferential selection on the models output by the self-encoders to obtain target models;
and acquiring target time sequence data, and extracting target characteristics of the target time sequence data by utilizing the target model.
Preferably, performing preferential selection on the models output by the self-encoders to obtain the target model includes:
dividing the test time sequence data according to a plurality of different window sizes to obtain a plurality of test sets;
respectively inputting the test sets into the corresponding models for testing to obtain reconstruction errors corresponding to the models;
and utilizing the reconstruction error to perform preferential selection on a plurality of models to obtain the target model.
Preferably, using the reconstruction error, a preferential selection is performed on a plurality of models to obtain the target model, including:
separately computing a sum of squares of all of said reconstruction errors for each of said models;
and determining the model with the minimum square sum as the target model.
Preferably, the dividing the time sequence data according to a plurality of different window sizes to obtain a plurality of training sets includes:
acquiring window range parameters;
generating a plurality of different window sizes by using the window range parameters;
and dividing the time sequence data according to different window sizes to obtain a plurality of training sets.
Preferably, generating a number of different window sizes using the window range parameter includes:
and inputting the window range parameters into a grid search algorithm for calculation to obtain a plurality of different window sizes.
Preferably, the training set is respectively input into a plurality of self encoders for training, including:
inputting the training set to the self-encoder;
encoding input data using the trained feature extraction network in the self-encoder;
decoding the encoded data by utilizing an LSTM network in the self-encoder;
and calculating a loss value corresponding to the decoded data, and adjusting a model by using the loss value.
Preferably, the method for encoding input data by using the trained feature extraction network in the self-encoder comprises:
and encoding the input data by using the trained residual network in the self-encoder.
Preferably, the method for encoding input data by using the trained feature extraction network in the self-encoder comprises:
and if the time sequence data is video time sequence data, performing feature extraction on the input data by utilizing the trained CNN network in the self-encoder to obtain a feature map, and performing dimension reduction on the feature map to obtain one-dimensional data.
Preferably, the dimension reduction is performed on the feature map to obtain one-dimensional data, including:
and converting the feature map by using a full connection layer to obtain the one-dimensional data.
Preferably, the calculating the loss value corresponding to the decoded data and adjusting the model by using the loss value includes:
calculating a reconstruction error of each index by using the input data and the decoded data;
and determining the average value of the square sum of the reconstruction errors as the loss value.
Preferably, determining the mean value of the sum of squares of the reconstruction errors as the loss value includes:
if the input data of a plurality of continuous windows are processed simultaneously, respectively calculating the loss of each window data;
And determining the average value of all the losses of the window data as the loss value.
Preferably, extracting the target feature of the target time series data by using the target model includes:
dividing the target time sequence data according to the window size corresponding to the target model;
and sequentially inputting the time sequence data obtained after the segmentation into the target model to perform feature extraction, so as to obtain the target features.
A time series data processing apparatus comprising:
the data preprocessing module is used for respectively dividing the time sequence data according to a plurality of different window sizes to obtain a plurality of training sets;
the model training module is used for respectively inputting the training sets into a plurality of self-encoders for training; wherein one of the self-encoders corresponds to one of the window sizes;
the model selection module is used for carrying out preferential selection on the models output by the self-encoders after training is completed to obtain target models;
and the feature extraction module is used for acquiring target time sequence data and extracting target features of the target time sequence data by utilizing the target model.
An electronic device, comprising:
a memory for storing a computer program;
And the processor is used for realizing the steps of the time sequence data processing method when executing the computer program.
A readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the above-described time series data processing method.
By applying the method provided by the embodiment of the application, the time sequence data are respectively segmented according to a plurality of different window sizes, so as to obtain a plurality of training sets; respectively inputting the training sets into a plurality of self-encoders for training; wherein a self-encoder corresponds to a window size; after training, performing preferential selection on the models output by each self-encoder to obtain target models; and acquiring target time sequence data, and extracting target characteristics of the target time sequence data by utilizing a target model.
In the application, in order to effectively extract the target characteristics of target time sequence data, the time sequence data is divided based on a plurality of different window sizes, so as to obtain a plurality of training sets corresponding to the different window sizes respectively; these training sets are then input to the self-encoder for training, respectively. Since one self-encoder corresponds to one window size, after the training of each self-encoder is finished, a plurality of models corresponding to different window sizes can be obtained. By preferentially selecting the models, an optimal model with the optimal window size, namely a target model, can be obtained. Thus, after the target time sequence data is acquired, the target characteristics of the target time sequence data can be extracted directly based on the target model. That is, the present application achieves automated feature extraction of time series data for a plurality of indices, each having a plurality of dimensions, with automatic selection of an optimal window.
Accordingly, the embodiments of the present application further provide a time-series data processing apparatus, a device, and a readable storage medium corresponding to the above time-series data processing method, which have the above technical effects and are not described herein again.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the related art, the drawings that are required to be used in the embodiments or the related technical descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to the drawings without inventive effort for a person having ordinary skill in the art.
FIG. 1 is a flowchart illustrating an implementation of a method for processing time series data according to an embodiment of the present application;
FIG. 2 is a schematic diagram of video data according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a training set in an embodiment of the present application;
FIG. 4 is a schematic diagram of model training in a self-encoder according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a model according to an embodiment of the present application;
FIG. 6 is a schematic diagram illustrating an embodiment of a method for processing time series data according to the present disclosure;
FIG. 7 is a schematic diagram of a timing data processing apparatus according to an embodiment of the present disclosure;
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 9 is a schematic diagram of a specific structure of an electronic device in an embodiment of the application.
Detailed Description
To provide a better understanding of the present application, it is described in further detail below with reference to the drawings and specific embodiments. Clearly, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained from the present disclosure by a person of ordinary skill in the art without inventive effort fall within its scope of protection.
In order to facilitate understanding of the time series data processing method provided in the embodiments of the present application, the following description is made of related technical terms and techniques:
CNN network: convolutional neural network Convolutional Neural Network.
LSTM network: long Short-Term Memory network, long Short-Term Memory, a time recurrent neural network.
VGG network: one of the convolutional neural networks, visual Geometry Group.
ResNet network: residual network, one of convolutional neural networks.
In time series prediction or anomaly detection, it is common to first analyse the pattern of historical data and then use that pattern to predict data at a future time. The basic process is: set the window size of the historical data to be selected, and construct a pattern of the time series data with a machine learning or deep learning method (parametric/non-parametric estimation, a network, etc.); this pattern is then used to predict data that may occur in the future, or the predicted data is compared with the current data to decide whether the current data is abnormal. In such a scheme, the window size of the historical data must be designed in advance and its effect tested; if it is unsuitable, it is modified manually and other values are tried. This makes the tests blind and the optimal window uncertain, lowers the efficiency of selecting and evaluating the optimal window, and adds high labour cost, so automatic testing and optimisation of the window data cannot be achieved.
Machine learning and deep learning are both common for feature extraction from time series data. In machine learning, a domain expert analyses the dimensions of the data, removes redundant dimensions, keeps the key features, constructs new dimensional features, and then trains a model. In deep learning, a neural network is built that takes all dimensions of the historical data as input and outputs key features after processing and compressing them. In such schemes, extracting key features of time series data with machine learning requires an expert with sufficient domain knowledge; for high-dimensional data this is often time- and labour-consuming, and completeness of the analysis cannot be guaranteed. When deep learning is used to extract features, existing work focuses more on the multiple dimensions of a single index.
Aiming at the shortcomings of the above schemes, the application discloses a time series data processing method that, on the one hand, achieves automated feature extraction for time series data with multiple indices and multiple dimensions per index and, on the other hand, automatically selects the data of the optimal window. The method can be applied to feature extraction in various fields, such as anomaly detection, natural language processing, and speech or video data analysis. Referring to fig. 1, which is a flowchart of a time series data processing method in an embodiment of the present application, the method includes the following steps:
S101, respectively dividing time sequence data according to a plurality of different window sizes to obtain a plurality of training sets.
Time series data is data ordered by time, such as video data. As shown in fig. 2, video data has multiple indices, each with multiple dimensions, and the data are ordered by time.
In the application, a plurality of different window sizes can be set, and then the time sequence data is divided according to the different window sizes respectively, so that a plurality of training sets are obtained.
Within one training set, every sample has the same size, namely the corresponding window size. That is, with several different window sizes applied to a piece of time series data, as many training sets are finally obtained as there are window sizes.
For example, if the window sizes 3, 4 and 5 are used: segmenting time series data A with window size 3 yields training set A1 with window size 3; segmenting A with window size 4 yields training set A2 with window size 4; and segmenting A with window size 5 yields training set A3 with window size 5.
In this embodiment of the present application there are at least 2 different window sizes; exactly how many there are can be set and generated according to actual requirements.
In a specific embodiment of the present application, step S101 divides the time series data according to a plurality of different window sizes, to obtain a plurality of training sets, including:
step one, acquiring window range parameters;
generating a plurality of different window sizes by utilizing window range parameters;
specifically, the window range parameter may be input to a grid search algorithm for calculation to obtain a plurality of different window sizes.
And thirdly, dividing the time sequence data according to different window sizes to obtain a plurality of training sets.
For convenience of description, the above three steps are described together below.
First, the window range parameter is acquired from user input. Then, according to the window range parameter, a grid search algorithm (Grid Search) automatically generates a finite window set window = {w1, w2, …, wk}.
The window range parameter comprises (min, max, λ), where min is the minimum selectable window, max is the maximum selectable window, and λ is the step size; all three are positive integers. Stepping through [min, max] with step λ generates the k window parameters window = {w1, w2, …, wk}. The window size indicates, for continuous time series data, the number of data points selected; for example, for continuous video time series data, wk is the number of pictures selected.
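The window-generation step above can be sketched in Python. This is a minimal illustration of a one-dimensional grid search over the window parameter; the function name `make_window_set` and its argument names are hypothetical, not from the patent.

```python
def make_window_set(w_min: int, w_max: int, step: int) -> list[int]:
    """Enumerate candidate window sizes over [w_min, w_max] with the given
    step, mimicking a one-dimensional grid search over the window parameter."""
    if not (0 < w_min <= w_max and step > 0):
        raise ValueError("min, max and step must be positive with min <= max")
    return list(range(w_min, w_max + 1, step))

# Example: range parameter (min=3, max=9, step=2) yields four candidate windows.
windows = make_window_set(3, 9, 2)
```

With these parameters the window set is {3, 5, 7, 9}, i.e. k = 4 candidate window sizes.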
When segmenting the time series data, a window overlap parameter (overlap) may be set, so that a separate training data set is generated for each window size by segmenting the video time series data. The overlap parameter characterises the degree of overlap between two adjacent window samples, with 0 ≤ overlap < 1. For example: for a segment of video time series data segmented with window wi, the training data set generated after sampling is denoted Dwi = {d1^wi, d2^wi, …, dn^wi}, where each sample dj^wi (j = 1, 2, …, n) consists of wi consecutive data points. Fig. 3 shows the form of the training data sets for four different sampling windows. In each training set, each sample is a data packet containing wi pieces of time series data.
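The overlap-controlled segmentation can be sketched as follows. This is a simplified illustration: the function name `split_series` and the stride formula `window * (1 - overlap)` are assumptions, since the patent does not spell out how the stride is derived from the overlap parameter.

```python
def split_series(series, window: int, overlap: float = 0.0):
    """Split a time-ordered sequence into samples of `window` consecutive
    points; `overlap` in [0, 1) controls how much adjacent samples share.
    The stride formula is an assumed interpretation of the overlap parameter."""
    if not 0.0 <= overlap < 1.0:
        raise ValueError("overlap must lie in [0, 1)")
    stride = max(1, int(window * (1.0 - overlap)))
    return [series[i:i + window]
            for i in range(0, len(series) - window + 1, stride)]

# One training set per window size, as in step S101.
data = list(range(10))
train_sets = {w: split_series(data, w, overlap=0.5) for w in (3, 4, 5)}
```

Each entry of `train_sets` is the training set for one window size; every sample in it is a packet of `w` consecutive points, matching the form shown in fig. 3.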
S102, respectively inputting training sets into a plurality of self-encoders for training.
Wherein one self-encoder corresponds to one window size.
An autoencoder (Auto Encoder, AE) is an artificial neural network (Artificial Neural Network, ANN) used in semi-supervised and unsupervised learning; its function is to perform representation learning on the input information by taking the input itself as the learning target.
In this embodiment, one self-encoder corresponds to one window size, so there are as many self-encoders as there are different window sizes in step S101.
In the embodiment of the application, each training set corresponds to a different window size, so during training each training set is input into the self-encoder of the same window size. For example, if there are 5 training sets, there are 5 corresponding self-encoders: training set 1 trains self-encoder 1, training set 2 trains self-encoder 2, and so on up to training set 5 and self-encoder 5, each pair sharing the same window size.
Since different window sizes correspond to different self-encoders, the training process can be performed in parallel without having to iteratively adjust the window size to find the optimal window size. For how the training set is used to train in the self-encoder to obtain the model of the corresponding window size, reference may be made to the relevant definition and implementation scheme of the self-encoder, which is not limited in this embodiment.
In one embodiment of the present application, referring to fig. 4, step S102 inputs training sets into a plurality of self-encoders for training, including:
step one, inputting a training set to a self-encoder;
step two, coding input data by utilizing a trained characteristic extraction network in a self-coder;
thirdly, decoding the encoded data by utilizing an LSTM network in the self-encoder;
and step four, calculating a loss value corresponding to the data obtained by decoding, and adjusting the model by using the loss value.
For convenience of description, the following description will be given by combining the above four steps.
The self-encoder comprises an encoding part and a decoding part, namely, after the training set is input into the self-encoder, the input data can be encoded by utilizing the trained characteristic extraction network in the self-encoder, and the encoded data can be decoded by utilizing the LSTM network in the self-encoder. And then calculating a loss value corresponding to the decoded data, adjusting the model based on the loss value, and obtaining the trained model after training is finished. For example, a model as shown in fig. 5 may be obtained, where the input of the model is time series data and the output is the key feature of the time series data.
Wherein the coding part may be embodied as a residual network or VGG. The second step encodes the input data by using the trained feature extraction network in the self-encoder, and may specifically encode the input data by using the trained residual network in the self-encoder.
In one embodiment of the present application, the step two encodes the input data by using the trained feature extraction network in the self-encoder, including: if the time sequence data is video time sequence data, the trained CNN network in the self-encoder is utilized to conduct feature extraction on the input data, a feature map is obtained, and dimension reduction is conducted on the feature map, so that one-dimensional data is obtained.
The feature map is subjected to dimension reduction to obtain one-dimensional data, and the method comprises the following steps: and converting the feature map by using the full connection layer to obtain one-dimensional data.
Taking the training set of video time series data as an example: video time series data is in fact a sequence of pictures, so in the encoding stage the CNN mainly encodes the multi-dimensional data of the three indices of each picture (the RGB channels) to obtain a feature map, converts the feature map through a fully connected layer (e.g. an Affine layer) to flatten it into 1-dimensional data, and inputs the 1-dimensional data into the decoder. The decoder may consist of several LSTM networks and restores the input 1-dimensional data.
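The encode-flatten-decode data flow can be illustrated with a toy NumPy sketch. The linear maps below merely stand in for the CNN encoder, affine layer and LSTM decoder, so only the tensor shapes, not the learned behaviour, match the description; all sizes and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(window, w_affine):
    # Stand-in for the CNN encoder + affine layer: flatten each frame's
    # H x W x 3 pixels and project to one 1-D feature vector per frame.
    flat = window.reshape(window.shape[0], -1)          # (w_i, H*W*3)
    return flat @ w_affine                              # (w_i, feat_dim)

def decode(features, w_out, frame_shape):
    # Stand-in for the LSTM decoder: map each 1-D feature vector back to
    # a reconstructed frame of the original size.
    return (features @ w_out).reshape((features.shape[0],) + frame_shape)

w_i, H, W, feat_dim = 5, 4, 4, 8                        # toy sizes
window = rng.normal(size=(w_i, H, W, 3))                # one training sample
w_affine = rng.normal(size=(H * W * 3, feat_dim))
w_out = rng.normal(size=(feat_dim, H * W * 3))

features = encode(window, w_affine)                     # key features
recon = decode(features, w_out, (H, W, 3))              # reconstruction
```

The shape contract is the essential point: the encoder compresses each of the `w_i` frames into a low-dimensional vector, and the decoder restores a tensor of exactly the input's shape so a reconstruction error can be computed.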
In one specific embodiment of the present application, the calculating the loss value corresponding to the decoded data in the fourth step and adjusting the model using the loss value may specifically include:
step 1, calculating reconstruction errors of various indexes by using input data and decoded data;
and 2, determining the square and average value of the reconstruction error as a loss value.
For the data of one window, the loss function is defined as the sum of squares of the reconstruction errors over all indices of the input and output. For an input sample dj^wi, the loss function is:

Loss = (1/wi) · Σ_{t=1}^{wi} Σ_i ( x_t^{index_i} − x̂_t^{index_i} )²

where x^{index_i} denotes all dimension values of the i-th index of the input data and x̂^{index_i} the corresponding output. Specifically: Loss measures the error between the true and reconstructed values; wi is the size of the i-th window, i.e. the number of data points in one training sample; index denotes an index (for example a channel), and index_i the i-th index; dj^wi denotes the j-th input sample, generated with window size wi, and d̂j^wi the model's reconstruction of that sample.
The square of the difference between the input sample and the reconstruction is the error. The errors of all dimensions (channels) are summed, and the average over the wi data points is the final loss value Loss.
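The per-window loss described above — squared errors summed over all dimensions/channels, then averaged over the wi points of the window — can be sketched as follows; `window_loss` is an illustrative name.

```python
import numpy as np

def window_loss(x, x_hat):
    """Reconstruction loss for one window: for each of the w_i time steps,
    sum the squared errors over all indices/dimensions (e.g. RGB channels),
    then average over the w_i steps."""
    assert x.shape == x_hat.shape
    w_i = x.shape[0]
    per_step = ((x - x_hat) ** 2).reshape(w_i, -1).sum(axis=1)
    return per_step.mean()

# Toy check: w_i = 2 steps, 2 channels, every value off by 1,
# so each step contributes 1 + 1 = 2 and the mean over 2 steps is 2.
loss = window_loss(np.zeros((2, 2)), np.ones((2, 2)))
```

For mini-batch training the same function would simply be averaged over all windows in the batch.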
Further, determining the mean of the sum of squares of the reconstruction errors as the loss value includes: if the input data of several consecutive windows are processed simultaneously, computing the loss of each window separately and taking the average of all window losses as the loss value. That is, with mini-batch processing, i.e. when the data of multiple consecutive windows are processed at once, the loss function is the average of the loss values of all windows in the batch.
In practical applications, the encoder CNN may use mature networks such as VGG or ResNet, with weights pre-trained on public data sets, so as to obtain a good encoding effect and extract better features.
And S103, after training is completed, the models output by the self-encoders are preferentially selected, and a target model is obtained.
After training is completed, a plurality of models corresponding to different window sizes can be obtained. At this time, these models may be preferentially selected, and the resulting model is referred to as a target model.
Specifically, in one specific embodiment of the present application, step S103 performs preferential selection on the models output from the encoders to obtain target models, including:
step one, respectively dividing test time sequence data according to a plurality of different window sizes to obtain a plurality of test sets;
Step two, respectively inputting the test sets into corresponding models for testing to obtain reconstruction errors corresponding to the models;
and thirdly, utilizing the reconstruction error to perform preferential selection on a plurality of models to obtain a target model.
For convenience of description, the above three steps are described together below.
That is, after the models are trained, several test sets can be generated from the test time series data, in the same way as the training sets were generated.
For each model, the corresponding test set is input into that model for processing, yielding the reconstruction error corresponding to the model. As for how a model's reconstruction error is calculated, any method of computing reconstruction errors may be referred to; this embodiment does not limit the calculation method.
After the reconstruction error of each model is obtained, the model with the minimum reconstruction error can be selected as the target model on that basis.
Specifically, for the third step, a plurality of models are preferentially selected by using the reconstruction error to obtain a target model, which comprises the following steps: respectively calculating the square sum of all reconstruction errors of each model; the model with the smallest sum of squares is determined as the target model. That is, the reconstruction error for each index may be involved in the choice of the final target model.
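A minimal sketch of this selection rule, assuming each model's reconstruction errors on its test set have been collected into a list (function and variable names hypothetical):

```python
def select_target_model(reconstruction_errors):
    """reconstruction_errors: dict mapping a model identifier to the list
    of reconstruction errors observed on that model's test set."""
    # Sum of squares of all reconstruction errors, per model.
    sum_sq = {m: sum(e * e for e in errs)
              for m, errs in reconstruction_errors.items()}
    # The model with the smallest sum of squares is the target model.
    return min(sum_sq, key=sum_sq.get)

errors = {"w=2": [0.1, 0.2], "w=4": [0.05, 0.05], "w=8": [0.3]}
print(select_target_model(errors))  # -> "w=4"
```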
That is, after training, k models are first output, corresponding to the k training data sets with different window sizes. A verification set (valid set, i.e. the test time-series data) is divided according to each window size and input into the corresponding k models for testing, and all reconstruction errors are calculated. The model with the smallest sum of squared reconstruction errors is taken as the optimal model, and its corresponding optimal window size is output. The result output by the encoder section of the optimal model constitutes the key features of the input video time-series data.
S104, acquiring target time sequence data, and extracting target characteristics of the target time sequence data by using a target model.
After the target model is obtained, when the key feature extraction is required to be carried out on the target time sequence data, the target model can be directly utilized to extract the target feature of the target time sequence data.
Specifically, extracting the target feature of the target time sequence data by using the target model includes:
dividing target time sequence data according to the window size corresponding to the target model;
and step two, sequentially inputting the time sequence data obtained after the segmentation into a target model for feature extraction to obtain target features.
That is, when the target model is used to extract key features, the target time-series data is first segmented according to the window size corresponding to the target model; the segmented time-series data are then sequentially input into the target model for extraction, yielding the corresponding key features (i.e. the target features).
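A minimal sketch of this two-step extraction, with a toy stand-in for the target model's encoder (names hypothetical):

```python
def segment(series, window_size):
    # Step one: split the target time-series data into consecutive,
    # non-overlapping pieces of the target model's window size.
    return [series[i:i + window_size]
            for i in range(0, len(series) - window_size + 1, window_size)]

def extract_features(series, window_size, encoder):
    # Step two: feed each piece to the target model in order; the
    # collected encoder outputs are the target features.
    return [encoder(piece) for piece in segment(series, window_size)]

# Toy encoder standing in for the trained target model's encoder part.
mean_encoder = lambda piece: sum(piece) / len(piece)
print(extract_features([1, 2, 3, 4, 5, 6], 2, mean_encoder))  # [1.5, 3.5, 5.5]
```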
The key features can further be used in place of the target time-series data in subsequent analysis and processing. Depending on the content of the target time-series data, this enables performance monitoring, anomaly detection, capacity prediction and fault diagnosis of hardware devices, natural language analysis and understanding, video and speech pattern analysis and recognition, and the like. For example, if the target time-series data corresponds to performance-related data of a hardware device, performance monitoring, anomaly detection, capacity prediction and fault diagnosis can be performed on that device based on the key features.
Because the target features carry little of the redundant information contained in the target time-series data, they do not interfere with the accuracy of the related processing algorithms, and they reduce the time complexity of those algorithms, improving their prediction effect and processing efficiency.
By applying the method provided by the embodiment of the application, the time sequence data are respectively segmented according to a plurality of different window sizes, so as to obtain a plurality of training sets; respectively inputting the training sets into a plurality of self-encoders for training; wherein a self-encoder corresponds to a window size; after training, performing preferential selection on the models output by each self-encoder to obtain target models; and acquiring target time sequence data, and extracting target characteristics of the target time sequence data by utilizing a target model.
In the application, in order to effectively extract the target characteristics of target time sequence data, the time sequence data is divided based on a plurality of different window sizes, so as to obtain a plurality of training sets corresponding to the different window sizes respectively; these training sets are then input to the self-encoder for training, respectively. Since one self-encoder corresponds to one window size, after the training of each self-encoder is finished, a plurality of models corresponding to different window sizes can be obtained. By preferentially selecting the models, an optimal model with the optimal window size, namely a target model, can be obtained. Thus, after the target time sequence data is acquired, the target characteristics of the target time sequence data can be extracted directly based on the target model. That is, the present application achieves automated feature extraction of time series data for a plurality of indices, each having a plurality of dimensions, with automatic selection of an optimal window.
In order to facilitate a person skilled in the art to better understand the time series data processing method provided in the embodiments of the present application, the time series data processing method is described in detail below with reference to a specific application scenario as an example.
Referring to fig. 6, fig. 6 is a schematic diagram illustrating an implementation of a time-series data processing method according to an embodiment of the present application; the first part in fig. 6 corresponds to time sequence data, the second part is a grid search algorithm, the third part is different time window data, and the fourth part is a self-encoder.
Video data is typical time-series data, as shown in fig. 2. For a given piece of video data, sampling at a specific time interval yields a discrete sequence of data, each datum being one acquired frame of picture information. Each datum includes I indexes (Index), and each index is characterized by multiple dimensions (Dim), as shown in fig. 2. For the i-th index, the data format is shown in the following table.
(Table omitted: for the i-th index, each sampling timestamp is listed together with the dimension values dim1, dim2, … of that index.)
Each index contains several dimension values, index = {dim1, dim2, ……}. For video time-series data, each datum is a picture whose indexes are the 3 channels red, green and blue (RGB), and the dimensions of each index are the pixel values of the corresponding channel. Since each channel has many pixel values, its dimensionality is high.
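Assuming a toy frame layout (the actual pixel format is not specified here), the three-channel index representation might be sketched as:

```python
def frame_to_indexes(frame):
    # frame: H x W list of (R, G, B) pixel tuples (hypothetical layout).
    # Each channel becomes one index whose dimensions are the flattened
    # pixel values of that channel.
    channels = {"R": [], "G": [], "B": []}
    for row in frame:
        for r, g, b in row:
            channels["R"].append(r)
            channels["G"].append(g)
            channels["B"].append(b)
    return channels

frame = [[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (255, 255, 255)]]
idx = frame_to_indexes(frame)
print(len(idx), len(idx["R"]))  # 3 indexes, each with 4 dimensions here
```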
The key features in the video data are extracted by adopting the time sequence data processing method provided by the embodiment of the application, and the implementation is as follows:
First, a set of collected video time-series data is given: a training raw data set {t1, t2, t3, ……, tn} and a verification raw data set {v1, v2, v3, …… vm}, where each ti or vi is a picture; each picture covers three indexes (i.e. the three RGB channels), and the data of each index corresponds to the values of its pixels. In both sets, all pictures are arranged in time order, and every picture has the same size.
Initial window range parameters are set, and a window size set window = {w1, w2, w3, w4, w5} is obtained using a grid search algorithm; there are 5 window cases in total.
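A minimal stand-in for this window-generation step (the actual range parameters and grid-search details are not given in the text; the evenly spaced grid below is an assumption):

```python
def grid_search_windows(w_min, w_max, k):
    # Generate k candidate window sizes evenly spaced over [w_min, w_max],
    # as a simple grid over the window range parameters.
    step = (w_max - w_min) / (k - 1)
    return sorted({int(round(w_min + i * step)) for i in range(k)})

print(grid_search_windows(2, 10, 5))  # [2, 4, 6, 8, 10]
```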
The overlap parameter is set to overlap = 0, and the raw data is preprocessed for each window size wi to obtain the corresponding training set (train) and verification set (valid). The window wi = 2 and its training set are taken as an example below; the generation flow for the other windows, and for the corresponding verification sets, is the same.
The training raw data set is segmented using the window wi = 2 to obtain the set { (t1, t2), (t3, t4), ……, (t(n-1), tn) }, which is the training data set; (t1, t2) is the first piece of data, denoted dwi1, and the remaining pieces follow in this order.
That is, in the data preprocessing stage, a segment of the collected video time-series data is input, and a set {(w1, data1), (w2, data2), ……, (wk, datak)} is output, where datai is a data set in which each element has a length equal to the window size wi. Specifically, a Grid Search algorithm may be used to automatically generate a limited number of different window sizes window = {w1, w2, …, wk}. Then a window overlap parameter overlap is set, and a training data set (wi, datai) is generated for each window size. The overlap parameter controls the degree of overlap between adjacent data pieces when the video data is divided by window size, and this parameter can be changed dynamically. That is, for the historical multi-index multi-dimensional video time-series data, given the overlap parameter overlap and the window size wi, the corresponding training data set is trainwi = {dwi1, dwi2, ……, dwin}, where each dwij is a consecutive segment of the raw sequence with length wi.
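The windowing with an overlap parameter can be sketched as follows (names hypothetical; overlap = 0 reproduces the non-overlapping split (t1, t2), (t3, t4), … used in the example):

```python
def make_training_set(raw, window_size, overlap=0):
    # Divide the raw sequence into pieces of length window_size;
    # adjacent pieces share `overlap` elements.
    stride = window_size - overlap
    return [tuple(raw[i:i + window_size])
            for i in range(0, len(raw) - window_size + 1, stride)]

raw = ["t1", "t2", "t3", "t4", "t5", "t6"]
print(make_training_set(raw, 2, overlap=0))  # [('t1','t2'), ('t3','t4'), ('t5','t6')]
print(make_training_set(raw, 2, overlap=1))  # adjacent pieces share one element
```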
In the model training phase, k parallel experiments can be constructed; each experiment trains one model using the training data train of one window size, with the reconstruction error used as the loss function to update the model parameters. Specifically, a trained residual network (ResNet) may be used in the encoder of the self-encoder, and an LSTM network in the decoder. The batch size is set to 1, i.e. one piece of data is processed at a time, and the number of training epochs is set to epochs = 50. After each piece of data is processed, the loss value is calculated as the mean over the indexes of the squared reconstruction errors, where the index here takes the three channel values R, G and B, and wi = 2. The self-encoder is trained using the training data set, and the training process is stopped when the loss value flattens and large fluctuations no longer occur.
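The training loop can be sketched with a deliberately tiny stand-in model — a single scalar encoder weight and decoder weight instead of the pretrained ResNet encoder and LSTM decoder — to show the reconstruction-error loss and the plateau-based stopping rule (all names and hyperparameters here are illustrative):

```python
def train_self_encoder(data, lr=0.05, epochs=200, tol=1e-9):
    # Tiny linear stand-in for the self-encoder: encode x as z = e*x,
    # reconstruct as d*z, and minimise the squared reconstruction error
    # by gradient descent. Only the loop structure mirrors the text.
    e, d = 0.3, 0.3
    losses = []
    prev = float("inf")
    for _ in range(epochs):
        total = 0.0
        for x in data:                 # batch size 1: one piece at a time
            z = e * x                  # encode
            xh = d * z                 # decode (reconstruct)
            err = xh - x
            total += err * err
            d -= lr * 2 * err * z      # gradient step on decoder weight
            e -= lr * 2 * err * d * x  # gradient step on encoder weight
        loss = total / len(data)
        losses.append(loss)
        if abs(prev - loss) < tol:     # loss has flattened: stop training
            break
        prev = loss
    return losses

hist = train_self_encoder([1.0, 2.0])
print(hist[-1] < hist[0])  # reconstruction loss decreased during training -> True
```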
Model training with different window sizes can be performed in parallel or sequentially, and the sequence of the model training is not limited.
After model training is completed, a verification set (valid set) can be respectively input into k models, and the size of a reconstruction error is calculated. And taking the model with the minimum reconstruction error as an optimal model, wherein the result output by the encoder is the key characteristic of the video time sequence data, and the window size corresponding to the optimal model is wi.
For example: 5 parallel trials can be started, executed as parallel processes, with 5 models obtained after training. A trial is defined as a mapping from an input data set x and configuration parameters config to a model. Here x is the preprocessed data set corresponding to a certain window size; in this application there are two cases: the data set of the training phase and the data set of the verification phase. The configuration parameters config include the window size wi, the number of training epochs, the learning rate lr, the batch size, and so on. For the training sets of the k window sizes, k parallel trials are started, each aiming to minimize the reconstruction error. The 5 generated verification sets are respectively input into the corresponding 5 models to obtain 5 loss values, and the model with the smallest loss value is selected as the optimal model, corresponding to the optimal window. The output of the encoder in the optimal model is the feature of the video time-series data. Therefore, in the present application, once the optimal model is determined, the optimal window is also confirmed, so there is no need to repeatedly tune the window size of a single model, which makes the method particularly convenient to implement.
Corresponding to the above method embodiments, the embodiments of the present application further provide a time-series data processing device, where the time-series data processing device described below and the time-series data processing method described above may be referred to correspondingly.
Referring to fig. 7, the apparatus includes the following modules:
the data preprocessing module 101 is configured to divide the time sequence data according to a plurality of different window sizes, so as to obtain a plurality of training sets;
the model training module 102 is used for respectively inputting training sets into a plurality of self-encoders for training; wherein a self-encoder corresponds to a window size;
the model selection module 103 is used for performing preferential selection on the models output by the self-encoders after training is completed to obtain target models;
the feature extraction module 104 is configured to obtain target time series data, and extract target features of the target time series data by using a target model.
By applying the device provided by the embodiment of the application, the time sequence data are respectively segmented according to a plurality of different window sizes, so as to obtain a plurality of training sets; respectively inputting the training sets into a plurality of self-encoders for training; wherein a self-encoder corresponds to a window size; after training, performing preferential selection on the models output by each self-encoder to obtain target models; and acquiring target time sequence data, and extracting target characteristics of the target time sequence data by utilizing a target model.
In the application, in order to effectively extract the target characteristics of target time sequence data, the time sequence data is divided based on a plurality of different window sizes, so as to obtain a plurality of training sets corresponding to the different window sizes respectively; these training sets are then input to the self-encoder for training, respectively. Since one self-encoder corresponds to one window size, after the training of each self-encoder is finished, a plurality of models corresponding to different window sizes can be obtained. By preferentially selecting the models, an optimal model with the optimal window size, namely a target model, can be obtained. Thus, after the target time sequence data is acquired, the target characteristics of the target time sequence data can be extracted directly based on the target model. That is, the present application achieves automated feature extraction of time series data for a plurality of indices, each having a plurality of dimensions, with automatic selection of an optimal window.
In a specific embodiment of the present application, the model selection module 103 is specifically configured to divide the test time sequence data according to a plurality of different window sizes, so as to obtain a plurality of test sets;
respectively inputting the test sets into corresponding models for testing to obtain reconstruction errors corresponding to the models;
And utilizing the reconstruction error to perform preferential selection on the plurality of models to obtain a target model.
In one embodiment of the present application, the model selection module 103 is specifically configured to calculate the sum of squares of all reconstruction errors of each model separately;
the model with the smallest sum of squares is determined as the target model.
In one embodiment of the present application, the data preprocessing module 101 is specifically configured to obtain window range parameters;
generating a plurality of different window sizes by using window range parameters;
and respectively dividing the time sequence data according to different window sizes to obtain a plurality of training sets.
In a specific embodiment of the present application, the data preprocessing module 101 is specifically configured to input the window range parameter to the grid search algorithm for calculation, so as to obtain a plurality of different window sizes.
In one embodiment of the present application, the model training module 102 is specifically configured to input a training set to the self-encoder;
encoding the input data using the trained feature extraction network in the self-encoder;
decoding the encoded data using the LSTM network in the self-encoder;
and calculating a loss value corresponding to the decoded data, and adjusting the model by using the loss value.
In one embodiment of the present application, the model training module 102 is specifically configured to encode the input data using a residual network trained in the self-encoder.
In a specific embodiment of the present application, the model training module 102 is specifically configured to, if the time-series data is video time-series data, perform feature extraction on the input data by using a trained CNN network in the self-encoder to obtain a feature map, and perform dimension reduction on the feature map to obtain one-dimensional data.
In a specific embodiment of the present application, the model training module 102 is specifically configured to convert the feature map by using the full connection layer to obtain one-dimensional data.
In one embodiment of the present application, the model training module 102 is specifically configured to calculate a reconstruction error of each index by using the input data and the decoded data;
the sum of squares mean of the reconstruction errors is determined as the loss value.
In one embodiment of the present application, the model training module 102 is specifically configured to calculate a loss of each window data if input data of a plurality of consecutive windows are processed simultaneously;
the average of the losses of all window data is determined as a loss value.
In a specific embodiment of the present application, the feature extraction module 104 is configured to segment the target time sequence data according to a window size corresponding to the target model;
and sequentially inputting the time sequence data obtained after the segmentation into a target model for feature extraction to obtain target features.
Corresponding to the above method embodiments, the embodiments of the present application further provide an electronic device, where an electronic device described below and a time-series data processing method described above may be referred to correspondingly.
Referring to fig. 8, the electronic device includes:
a memory 332 for storing a computer program;
a processor 322 for implementing the steps of the time series data processing method of the above method embodiment when executing the computer program.
Specifically, referring to fig. 9, fig. 9 is a schematic diagram of a specific structure of an electronic device according to the present embodiment, where the electronic device may have a relatively large difference due to different configurations or performances, and may include one or more processors (central processing units, CPU) 322 (e.g., one or more processors) and a memory 332, where the memory 332 stores one or more computer applications 342 or data 344. Wherein the memory 332 may be transient storage or persistent storage. The program stored in memory 332 may include one or more modules (not shown), each of which may include a series of instruction operations in the data processing apparatus. Still further, the central processor 322 may be configured to communicate with the memory 332 and execute a series of instruction operations in the memory 332 on the electronic device 301.
The electronic device 301 may also include one or more power supplies 326, one or more wired or wireless network interfaces 350, one or more input/output interfaces 358, and/or one or more operating systems 341.
The steps in the above-described time series data processing method may be implemented by the structure of the electronic device.
Corresponding to the above method embodiments, the embodiments of the present application further provide a readable storage medium, where a readable storage medium described below and a time-series data processing method described above may be referred to correspondingly.
A readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the time-series data processing method of the above-described method embodiments.
The readable storage medium may be a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different point from other embodiments, so that the same or similar parts between the embodiments are referred to each other. For the device disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and the relevant points refer to the description of the method section.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Those skilled in the art may implement the described functionality using different approaches for each particular application, but such implementation should not be considered to be beyond the scope of this application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may be disposed in Random Access Memory (RAM), memory, read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Finally, it should also be noted that the terms "include", "comprise", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus.
The principles and embodiments of the present application are described herein with specific examples; the above examples are provided only to assist in understanding the method of the present application and its core ideas. Meanwhile, since those skilled in the art may make changes to the specific embodiments and the scope of application according to the ideas of the present application, the contents of this specification should not be construed as limiting the present application.

Claims (13)

1. A time-series data processing method, characterized by comprising:
dividing the time sequence data according to a plurality of different window sizes to obtain a plurality of training sets;
respectively inputting the training sets into a plurality of self-encoders for training; wherein one of the self-encoders corresponds to one of the window sizes;
After training, performing preferential selection on the models output by the self-encoders to obtain target models;
acquiring target time sequence data, and extracting target characteristics of the target time sequence data by utilizing the target model;
extracting the target feature of the target time sequence data by using the target model comprises the following steps:
dividing the target time sequence data according to the window size corresponding to the target model;
sequentially inputting the time sequence data obtained after segmentation into the target model to perform feature extraction to obtain the target features;
the training sets are respectively input into a plurality of self-encoders for training, and the training sets comprise:
inputting the training set to the self-encoder;
encoding input data using the trained feature extraction network in the self-encoder;
decoding the encoded data by utilizing an LSTM network in the self-encoder;
and calculating a loss value corresponding to the decoded data, and adjusting a model by using the loss value.
2. The method according to claim 1, wherein the selecting the model output from each of the encoders to obtain the target model comprises:
Dividing the test time sequence data according to a plurality of different window sizes to obtain a plurality of test sets;
respectively inputting the test sets into the corresponding models for testing to obtain reconstruction errors corresponding to the models;
and utilizing the reconstruction error to perform preferential selection on a plurality of models to obtain the target model.
3. The time series data processing method according to claim 2, wherein using the reconstruction error to preferentially select a plurality of the models to obtain the target model includes:
separately computing a sum of squares of all of said reconstruction errors for each of said models;
and determining the model with the minimum square sum as the target model.
4. The method for processing time series data according to claim 1, wherein the dividing the time series data according to a plurality of different window sizes to obtain a plurality of training sets includes:
acquiring window range parameters;
generating a plurality of different window sizes by using the window range parameters;
and dividing the time sequence data according to different window sizes to obtain a plurality of training sets.
5. The method of time series data processing according to claim 4, wherein generating a number of different window sizes using the window range parameter comprises:
and inputting the window range parameters into a grid search algorithm for calculation to obtain a plurality of different window sizes.
6. The method of time series data processing according to claim 1, wherein encoding the input data using the trained feature extraction network in the self-encoder comprises:
and encoding the input data by utilizing the trained residual error network in the self-encoder.
7. The method of time series data processing according to claim 1, wherein encoding the input data using the trained feature extraction network in the self-encoder comprises:
and if the time sequence data is video time sequence data, performing feature extraction on the input data by utilizing the trained CNN network in the self-encoder to obtain a feature map, and performing dimension reduction on the feature map to obtain one-dimensional data.
8. The method of time series data processing according to claim 7, wherein the step of performing dimension reduction on the feature map to obtain one-dimensional data includes:
And converting the feature map by using a full connection layer to obtain the one-dimensional data.
9. The method according to claim 1, wherein calculating a loss value corresponding to the decoded data and adjusting a model using the loss value, comprises:
calculating a reconstruction error of each index by using the input data and the decoded data;
and determining the average value of the square sum of the reconstruction errors as the loss value.
10. The time series data processing method according to claim 9, wherein determining the sum of squares mean of the reconstruction errors as the loss value includes:
if the input data of a plurality of continuous windows are processed simultaneously, respectively calculating the loss of each window data;
and determining the average value of all the losses of the window data as the loss value.
11. A time series data processing apparatus, comprising:
the data preprocessing module is used for respectively dividing the time sequence data according to a plurality of different window sizes to obtain a plurality of training sets;
the model training module is used for respectively inputting the training sets into a plurality of self-encoders for training; wherein one of the self-encoders corresponds to one of the window sizes;
The model selection module is used for carrying out preferential selection on the models output by the self-encoders after training is completed to obtain target models;
the feature extraction module is used for acquiring target time sequence data and extracting target features of the target time sequence data by utilizing the target model;
the feature extraction module is specifically configured to segment the target time sequence data according to a window size corresponding to the target model; sequentially inputting the time sequence data obtained after segmentation into the target model to perform feature extraction to obtain the target features;
the model training module is specifically configured to input the training set to the self-encoder; encoding input data using the trained feature extraction network in the self-encoder; decoding the encoded data by utilizing an LSTM network in the self-encoder; and calculating a loss value corresponding to the decoded data, and adjusting a model by using the loss value.
12. An electronic device, comprising:
a memory for storing a computer program;
processor for implementing the steps of the time series data processing method according to any one of claims 1 to 10 when executing said computer program.
13. A readable storage medium, characterized in that the readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the time-series data processing method according to any of claims 1 to 10.
CN202211361310.7A 2022-11-02 2022-11-02 Time sequence data processing method, device, equipment and readable storage medium Active CN115422264B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211361310.7A CN115422264B (en) 2022-11-02 2022-11-02 Time sequence data processing method, device, equipment and readable storage medium
PCT/CN2023/095897 WO2024093207A1 (en) 2022-11-02 2023-05-23 Time series data processing method and apparatus, device, and nonvolatile readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211361310.7A CN115422264B (en) 2022-11-02 2022-11-02 Time sequence data processing method, device, equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN115422264A CN115422264A (en) 2022-12-02
CN115422264B true CN115422264B (en) 2023-05-05

Family

ID=84207989

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211361310.7A Active CN115422264B (en) 2022-11-02 2022-11-02 Time sequence data processing method, device, equipment and readable storage medium

Country Status (2)

Country Link
CN (1) CN115422264B (en)
WO (1) WO2024093207A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112702329A (en) * 2020-12-21 2021-04-23 四川虹微技术有限公司 Traffic data anomaly detection method and device and storage medium
CN113850916A (en) * 2021-09-26 2021-12-28 浪潮电子信息产业股份有限公司 Model training and point cloud missing completion method, device, equipment and medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111798018A (en) * 2019-04-09 2020-10-20 Oppo广东移动通信有限公司 Behavior prediction method, behavior prediction device, storage medium and electronic equipment
CN113157760A (en) * 2020-01-22 2021-07-23 阿里巴巴集团控股有限公司 Target data determination method and device
CN112461537B (en) * 2020-10-16 2022-06-17 浙江工业大学 Wind power gear box state monitoring method based on long-time and short-time neural network and automatic coding machine
CN113435124A (en) * 2021-06-29 2021-09-24 北京工业大学 Water quality space-time correlation prediction method based on long-time and short-time memory and radial basis function neural network
CN113837812B (en) * 2021-10-09 2023-01-17 广东电力交易中心有限责任公司 Joint probability prediction method and device for node electricity price
CN114492826A (en) * 2021-11-22 2022-05-13 杭州电子科技大学 Unsupervised anomaly detection analysis solution method based on multivariate time sequence flow data

Also Published As

Publication number Publication date
CN115422264A (en) 2022-12-02
WO2024093207A1 (en) 2024-05-10

Similar Documents

Publication Publication Date Title
CN109587713B (en) Network index prediction method and device based on ARIMA model and storage medium
CN110210658B (en) Prophet and Gaussian process user network flow prediction method based on wavelet transformation
CN109816221A (en) Decision of Project Risk method, apparatus, computer equipment and storage medium
CN110726898B (en) Power distribution network fault type identification method
CN115293280A (en) Power equipment system anomaly detection method based on space-time feature segmentation reconstruction
CN111782491B (en) Disk failure prediction method, device, equipment and storage medium
CN111738521B (en) Non-invasive power load monitoring sequence generation method, system, equipment and medium
CN113284001B (en) Power consumption prediction method and device, computer equipment and storage medium
CN112288137A (en) LSTM short-term load prediction method and device considering electricity price and Attention mechanism
CN110619427A (en) Traffic index prediction method and device based on sequence-to-sequence learning model
CN111260082B (en) Spatial object motion trail prediction model construction method based on neural network
CN115422264B (en) Time sequence data processing method, device, equipment and readable storage medium
CN113988156A (en) Time series clustering method, system, equipment and medium
CN117132089B (en) Power utilization strategy optimization scheduling method and device
CN114444811A (en) Aluminum electrolysis mixing data superheat degree prediction method based on attention mechanism
CN109599123B (en) Audio bandwidth extension method and system based on genetic algorithm optimization model parameters
CN115713044B (en) Method and device for analyzing residual life of electromechanical equipment under multi-condition switching
CN115883424A (en) Method and system for predicting traffic data between high-speed backbone networks
CN115904916A (en) Hard disk failure prediction method and device, electronic equipment and storage medium
Lin et al. A CNN-based quality model for image interpolation
CN113158134A (en) Method and device for constructing non-invasive load identification model and storage medium
CN116491115A (en) Rate controlled machine learning model with feedback control for video coding
CN117094451B (en) Power consumption prediction method, device and terminal
CN117009752A (en) Power consumption demand prediction method based on POA-VMD-SSA-KELM model
CN117892073A (en) Irrigation area water metering system and water metering method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant