WO2023147227A1 - Systems and methods for online time series forcasting - Google Patents

Systems and methods for online time series forcasting

Info

Publication number
WO2023147227A1
Authority
WO
WIPO (PCT)
Prior art keywords
layer
adaptation
convolutional layer
time series
convolutional
Prior art date
Application number
PCT/US2023/060618
Other languages
French (fr)
Inventor
Hong-Quang PHAM
Chenghao Liu
Doyen Sahoo
Chu Hong Hoi
Original Assignee
Salesforce, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US17/871,819 external-priority patent/US20230244943A1/en
Application filed by Salesforce, Inc. filed Critical Salesforce, Inc.
Publication of WO2023147227A1 publication Critical patent/WO2023147227A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/045 Combinations of networks
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06N3/09 Supervised learning

Definitions

  • TFCL (Aljundi et al., Task-free continual learning, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11254-11263, 2019)
  • MIR (Aljundi et al., Online continual learning with maximal interfered retrieval, Advances in Neural Information Processing Systems, 32:11849-11860, 2019) replaces the random sampling in ER by selecting samples that cause the most forgetting.
  • DER++ (Buzzega et al., Dark experience for general continual learning: a strong, simple baseline, in 34th Conference on Neural Information Processing Systems (NeurIPS 2020), 2020) augments the standard ER with a knowledge distillation strategy (described in Hinton et al., Distilling the knowledge in a neural network, arXiv preprint arXiv:1503.02531, 2015).
  • ER and its variants are strong baselines in the online setting since they enjoy the benefits of training on mini-batches, which greatly reduces noise from single samples and offers faster, better convergence (see Bottou et al., Online learning and stochastic approximations, Online learning in neural networks, 17(9):142, 1998).
  • Another baseline includes the Experience Replay (ER) strategy (described in Chaudhry et al., On tiny episodic memories in continual learning, arXiv preprint arXiv:1902.10486, 2019), where a buffer is employed to store previous data and interleave old samples during the learning of newer ones.
  • Another baseline includes DER++ (Buzzega et al., Dark experience for general continual learning: a strong, simple baseline, in Proceedings of the 34th Conference on Neural Information Processing Systems (NeurIPS 2020), 2020), which further adds a knowledge distillation loss (described in Hinton et al., Distilling the knowledge in a neural network, arXiv preprint arXiv:1503.02531, 2015) to ER.
  • ER and DER++ are strong baselines in the online setting since they enjoy the benefits of training on mini-batches, which greatly reduces noise from single samples and offers faster, better convergence.
  • FIG. 7 reports cumulative mean-squared errors (MSE) and mean-absolute errors (MAE) at the end of training. It is observed that ER and DER++ are strong competitors and can significantly improve over the OGD strategies. However, such methods still cannot work well under multiple task switches (S-Abrupt). Moreover, the absence of clear task boundaries (S-Gradual) presents an even more challenging problem and increases most models' errors. On the other hand, FSNet shows promising results on all datasets and outperforms most competing baselines across different forecasting horizons. Moreover, the improvements are significant on the synthetic benchmarks, indicating that FSNet can quickly adapt to the non-stationary environment and recall previous knowledge, even without clear task boundaries.
  • MSE mean-squared errors
  • MAE mean-absolute errors
  • FIG. 8 reports the convergence behavior of the considered methods.
  • the results show the benefits of ER by offering faster convergence during learning compared to OGD.
  • storing the original data may not apply in many domains.
  • On S-Abrupt, most baselines demonstrate an inability to quickly recover from concept drifts, indicated by the increasing error curves.
  • FSNet shows promising results on most datasets, with significant improvements over the baselines on the ETT, WTH, and S-Abrupt datasets.
  • The ECL dataset is more challenging, with missing values (Li et al., 2019) and large magnitude variation within and across dimensions, which may require a better data normalization.
  • While FSNet achieved encouraging results on ECL, handling the above challenges can further improve its performance.
  • the results shed light on the challenges of online time series forecasting and demonstrate promising results of FSNet.
  • the model’s prediction quality on S-Abrupt is visualized as shown in FIG. 8, as it is a univariate time series.
  • the remaining real-world datasets are multivariate and are challenging to visualize.
  • the standard online optimization collapsed to a naive solution of predicting random noise around zero.
  • FSNet can successfully capture the time series’ patterns and provide better predictions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Embodiments provide a framework combining fast and slow learning Networks (referred to as "FSNet") to train deep neural forecasters on the fly for online time-series forecasting. FSNet is built on a deep neural network backbone (slow learner) with two complementary components to facilitate fast adaptation to both new and recurrent concepts. To this end, FSNet employs a per-layer adapter to monitor each layer's contribution to the forecasting loss via its partial derivative. The adapter transforms each layer's weight and feature at each step based on its recent gradient, allowing a fine-grained per-layer fast adaptation to optimize the current loss. In addition, FSNet employs a second and complementary associative memory component to store important, recurring patterns observed during training. The adapter interacts with the memory to store, update, and retrieve the previous transformations, facilitating fast learning of such patterns.

Description

SYSTEMS AND METHODS FOR ONLINE TIME SERIES FORCASTING
Inventors: Hong-Quang Pham, Chenghao Liu, Doyen Sahoo, Chu Hong Hoi
CROSS REFERENCE(S)
[0001] The present disclosure claims priority to U.S. Non-Provisional patent application no. 17/871,819, filed July 22, 2022, and U.S. Provisional application no. 63/305,145, filed January 31, 2022, which are hereby expressly incorporated by reference herein in their entirety.
TECHNICAL FIELD
[0002] The embodiments relate generally to machine learning systems, and more specifically to online time series forecasting.
BACKGROUND
[0003] Deep neural network models have been widely used in time series forecasting. For example, learning models may be used to forecast time series data such as continuous market data over a period of time in the future, weather data, and/or the like. Existing deep models adopt batch-learning for time series forecasting tasks. Such models often randomly sample look-back and forecast windows during training and freeze the model during evaluation, breaking the time-varying (non-stationary) nature of time series.
[0004] Therefore, there is a need for an efficient and adaptive deep learning framework for online time series forecasting.
BRIEF DESCRIPTION OF THE DRAWINGS
[0005] FIG. 1 is a simplified diagram illustrating an example structure of the FSNet framework for forecasting a time series, according to embodiments described herein.
[0006] FIG. 2 is a simplified diagram illustrating an example structure of a TCN layer (block) of the FSNet framework described in FIG. 1, according to embodiments described herein.
[0007] FIG. 3 is a simplified diagram illustrating an example structure of the dilated convolution layer in the TCN layer (block) described in FIG. 2, according to embodiments described herein.
[0008] FIG. 4 is a simplified diagram of a computing device that implements the FSNet framework, according to some embodiments described herein.
[0009] FIG. 5 is a simplified pseudo code segment for a fast and slow learning network implemented at the FSNet framework described in FIGS. 1-3, according to embodiments described here.
[0010] FIG. 6 is a simplified logic flow diagram illustrating an example process corresponding to the pseudo code algorithm in FIG. 5, according to embodiments described herein.
[0011] FIGS. 7-9 are example data charts and plots illustrating performance of the FSNet in example data experiments, according to embodiments described herein.
[0012] In the figures, elements having the same designations have the same or similar functions.
DETAILED DESCRIPTION
[0013] As used herein, the term “network” may comprise any hardware or software-based framework that includes any artificial intelligence network or system, neural network or system and/or any training or learning models implemented thereon or therewith.
[0014] As used herein, the term “module” may comprise hardware or software-based framework that performs one or more functions. In some embodiments, the module may be implemented on one or more neural networks.
[0015] A time series is a set of values that correspond to a parameter of interest at different points in time. Examples of the parameter can include prices of stocks, temperature measurements, and the like. Time series forecasting is the process of determining a future datapoint or a set of future datapoints beyond the set of values in the time series. Time series forecasting of dynamic data via deep learning remains challenging.
[0016] Embodiments provide a framework combining fast and slow learning Networks (referred to as “FSNet”) to train deep neural forecasters on the fly for online time-series forecasting. FSNet is built on a deep neural network backbone (slow learner) with two complementary components to facilitate fast adaptation to both new and recurrent concepts. To this end, FSNet employs a per-layer adapter to monitor each layer’s contribution to the forecasting loss via its partial derivative. The adapter transforms each layer’s weight and feature at each step based on its recent gradient, allowing a fine-grained per-layer fast adaptation to optimize the current loss. In addition, FSNet employs a second and complementary associative memory component to store important, recurring patterns observed during training. The adapter interacts with the memory to store, update, and retrieve the previous transformations, facilitating fast learning of such patterns.
[0017] In this way, the FSNet framework can adapt to the fast-changing and the long-recurring patterns in time series. Specifically, in FSNet, the deep neural network plays the role of the neocortex while the adapter and its memory act as a hippocampus component.
FSNet Framework Overview
[0018] FIG. 1 is a simplified diagram illustrating an example structure of the FSNet framework 100 for forecasting a time series, according to embodiments described herein.
[0019] The FSNet framework 100 comprises a plurality of convolution blocks 104a-n connected to a regressor 105. The FSNet framework 100 may receive time series data 102, denoted by

X = (x_1, ..., x_T) ∈ R^{T×n},

a time series of T observations each having n dimensions, from an input interface such as a memory or a network adapter. In some embodiments, the time series data 102 may be data in a look-back window of length e starting at time i: X_{i,e} = (x_i, ..., x_{i+e}). The model 100 may use a look-back window based on the availability of memory (such as GPU memory), based on the size of the time series data, or based on the seasonality of the data, and the like. The model 100 may generate an online forecast 106 predicting the next H steps of the time series based on the input time series data 102, e.g., f_ω(X_{i,e}, H) = (x̂_{i+e+1}, ..., x̂_{i+e+H}), where ω denotes the parameters of the forecasting model. Here, a pair of look-back window and forecast window data is considered as a training sample. For multiple-step forecasting (H > 1), a linear regressor 105 is employed to forecast all H steps in the horizon simultaneously.

[0020] In one embodiment, FSNet framework 100 may include a temporal convolutional neural network (TCN) backbone having L layers (e.g., the blocks 1-L 104a-n) with parameters θ = {θ_l}_{l=1}^L. The TCN backbone 104a-n may implement a deep learning algorithm (that learns slowly online and is a deep neural network) which receives an input such as the time series data 102, assigns importance (learnable weights and biases) to various aspects/objects in the time series data 102, and differentiates those aspects/objects in the time series data 102 from the other aspects/objects. The TCN backbone 104a-n may extract a time-series feature representation from the time series data 102.

[0021] Based on the TCN backbone 104a-n, the FSNet framework 100 further includes two complementary components: a per-layer adapter φ_l (shown at 315 in FIG. 3) for each TCN layer 104a-n and a per-layer associative memory M_l (shown at 318 in FIG. 3) for each TCN layer 104a-n. Thus, the total trainable parameters for the framework are ω = {θ_l, φ_l}_{l=1}^L and the total associative memory is M = {M_l}_{l=1}^L.
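By way of illustration only, the following is a minimal sketch, in PyTorch, of how the overall structure of FIG. 1 could be organized: a stack of convolution blocks feeding a linear regressor that forecasts all H steps at once. The class and argument names are hypothetical, and the per-layer adapters and associative memories of FIG. 3 are omitted here for brevity.

```python
import torch
import torch.nn as nn


class FSNetForecaster(nn.Module):
    """Sketch of FIG. 1: convolution blocks (104a-n) feeding a linear regressor (105)."""

    def __init__(self, n_features: int, hidden: int, n_blocks: int, horizon: int):
        super().__init__()
        # Stand-ins for the TCN blocks; each real block would also carry a per-layer
        # adapter and an associative memory (FIG. 3).
        self.blocks = nn.ModuleList(
            [nn.Conv1d(n_features if i == 0 else hidden, hidden, kernel_size=3, padding=1)
             for i in range(n_blocks)]
        )
        self.regressor = nn.Linear(hidden, horizon * n_features)  # forecasts all H steps at once
        self.horizon, self.n_features = horizon, n_features

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, look-back length e, n features) -> forecast: (batch, H, n features)
        h = x.transpose(1, 2)                  # (batch, channels, time)
        for block in self.blocks:
            h = torch.relu(block(h))
        h_last = h[..., -1]                    # feature representation at the end of the window
        return self.regressor(h_last).view(-1, self.horizon, self.n_features)


# Example: forecast H = 24 steps of a 7-dimensional series from a look-back window of 60.
model = FSNetForecaster(n_features=7, hidden=64, n_blocks=3, horizon=24)
forecast = model(torch.randn(1, 60, 7))        # shape (1, 24, 7)
```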
[0022] FIG. 2 is a simplified diagram illustrating an example structure of a TCN layer (block) 104a of the FSNet framework described in FIG. 1, according to embodiments described herein. At each TCN layer (block), e.g., 104a, the block input 202 may be processed by a number of dilated convolution layers 204, 206, and the convoluted output is added to the original block input 202 to generate block output 208. It is noted that while two dilated convolution layers 204 and 206 are shown in FIG. 2 for illustrative purposes only, any other number of dilated convolution layers may be used in a TCN block.
[0023] In one embodiment, each TCN block 104a may rely on its adapter 315 and associative memory 318 to quickly adapt to the changes in time series data 102 or learn more efficiently with limited data. Each block or layer 104a-104n may adapt independently rather than restricting the adaptation to the depth of the network, i.e., gradient descent over the depth of the network 104a-n. The partial derivative ∇_{θ_l} ℓ for each layer 104a-n characterizes the contribution of the convolutional layer θ_l 104a-n to the forecasting loss ℓ. The ∇_{θ_l} ℓ may be used to update the l-th layer θ_l. In some embodiments, a gradient associated with each convolutional layer may be computed based on the partial derivative ∇_{θ_l} ℓ. Such a gradient may be further smoothed using the exponential moving average (EMA) within the dilated convolution 204 or 206, as described in relation to FIG. 3.

[0024] Therefore, each convolution filter stack is accompanied by an adapter and an associative memory. At each layer, the adapter receives the gradient EMA and interacts with the memory and convolution filter accordingly, as further illustrated in relation to FIG. 3.
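As a hedged illustration of the block structure in FIG. 2 (dilated convolutions whose output is added back to the block input), a causal dilated convolution block might be sketched as follows; the kernel size, channel counts, and ReLU activations are assumptions rather than requirements of the embodiments.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DilatedConvBlock(nn.Module):
    """Sketch of FIG. 2: block input 202 -> two dilated causal convolutions (204, 206)
    -> residual addition of the block input -> block output 208."""

    def __init__(self, channels: int, kernel_size: int = 3, dilation: int = 1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation     # left padding needed for causality
        self.conv1 = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def _causal(self, conv: nn.Conv1d, h: torch.Tensor) -> torch.Tensor:
        # Pad only on the left so no output position looks at future time steps.
        return conv(F.pad(h, (self.pad, 0)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, channels, time)
        h = torch.relu(self._causal(self.conv1, x))       # dilated convolution layer 204
        h = torch.relu(self._causal(self.conv2, h))       # dilated convolution layer 206
        return x + h                                      # add the original block input 202


block = DilatedConvBlock(channels=64, dilation=2)
out = block(torch.randn(8, 64, 60))                       # same shape as the input: (8, 64, 60)
```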
[0025] FIG. 3 is a simplified diagram illustrating an example structure of the dilated convolution layer 204 (or 206) in the TCN layer (block) 104a described in FIG. 2, according to embodiments described herein. The dilated convolution layer 204 may comprise convolution filters 310, a per-layer adapter 315, and a per-layer memory 318. Input 202 to the dilated convolution layer 204 may be fed to the convolution filters 310, which in turn computes the exponential moving average (EMA) 313 of the TCN backbone's gradients. Specifically, because a gradient of a single sample can highly fluctuate and introduce noise to the adaptation parameters, EMA is used to smooth out online training's noise by:

ĝ_l ← γ ĝ_l + (1 − γ) g_l^t,    (1)

where g_l^t denotes the gradient of the l-th layer at time t and ĝ_l denotes the EMA gradient. In this way, the fast adapter 315 may receive the EMA gradient ĝ_l as input and map it to the adaptation coefficients u_l, as shown at 316.
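A small sketch of the EMA smoothing in Eq. (1), assuming the layer gradient has already been flattened into a tensor; the coefficient values shown are the example settings mentioned later in the description.

```python
import torch


def update_grad_ema(g_ema: torch.Tensor, g_t: torch.Tensor, gamma: float = 0.9) -> torch.Tensor:
    """Eq. (1): g_ema <- gamma * g_ema + (1 - gamma) * g_t, smoothing single-sample noise."""
    return gamma * g_ema + (1.0 - gamma) * g_t


# Example: maintain the adapter's EMA (coefficient gamma) and a second EMA with a smaller
# coefficient gamma' < gamma, which is used later for the trigger in Eq. (6).
g_t = torch.randn(256)                                    # flattened gradient of one layer at time t
g_ema = update_grad_ema(torch.zeros(256), g_t, gamma=0.9)
g_ema_prime = update_grad_ema(torch.zeros(256), g_t, gamma=0.3)
```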
[0026] In some embodiments, the fast adapter 315 may use the element-wise transformation as the adaptation process due to its efficiency for continual learning. The resulting adaptation parameter u_l 316 may include two components: (i) a weight adaptation parameter α_l; and (ii) a feature adaptation parameter β_l, concatenated together as u_l = [α_l; β_l]. In some embodiments, the fast adapter 315 may absorb the bias transformation parameter into α_l for brevity.
[0027] In one embodiment, the adaptation for a layer θ_l may involve a weight adaptation and a feature adaptation, as shown at 319. First, the weight adaptation parameter α_l acts on the corresponding weight of the backbone network via an element-wise multiplication as

θ̃_l = tile(α_l) ⊙ θ_l,    (2)

wherein θ_l is a stack of I feature maps of C channels and length L, θ̃_l denotes the adapted weight, tile(α_l) denotes that the weight adaptor is applied per-channel on all filters via a tile function, and ⊙ denotes the element-wise multiplication.

[0028] Similarly, the feature adaptation parameter β_l changes the convolutional layer's feature map based on an element-wise multiplication between the feature adaptation component and the feature map. For example, the feature adaptation interacts with the output feature map h_l to generate the output 322 as

h̃_l = tile(β_l) ⊙ h_l.    (3)

[0029] In this way, the convolutional layer θ_l may be updated based on the weight adaptation component α_l and the feature adaptation component β_l.
[0030] In some embodiments, the gradient may be directly mapped to the per-element adaptation parameter and this may result in a very high dimensional mapping.
[0031] In some embodiments, a chunking operation, denoted as Ω(·; φ_l), may be implemented to split the gradient into equal-size chunks and then map each chunk to an element of the adaptation parameter. Specifically, the chunking operation may be implemented as (1) flattening the gradient EMA of a corresponding block of the TCN model 120 into a vector; (2) splitting the gradient vector into d chunks; (3) mapping each chunk to a hidden representation; and (4) mapping each hidden representation to a coordinate of the target adaptation parameter u. For example, by using a vectorizing operation vec(·) that flattens a tensor into a vector and a splitting operation split(e, B) that splits a vector e into B segments, each of size dim(e)/B, the adapter maps the backbone layer's EMA gradient 313 of the TCN backbone to an adaptation coefficient u_l ∈ R^d via the chunking process as:

h_i = W_1 · split(vec(ĝ_l), d)[i],    [u_l]_i = W_2 · h_i,    (4)

where W_1 and W_2 are the first and second weight matrices of the adapter. In this way, the adaptation may be applied per-channel, which greatly reduces the memory overhead and offers compression and generalization.
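A sketch of the chunking operation Ω under the assumptions that the adaptation coefficient has d coordinates, that W_1 and W_2 are realized as two linear layers, and that the flattened gradient is zero-padded so it splits evenly; all class and variable names are illustrative.

```python
import torch
import torch.nn as nn


class ChunkingAdapter(nn.Module):
    """Sketch of Eq. (4): flatten the gradient EMA, split it into d chunks,
    map each chunk to a hidden representation (W1), then to one coordinate of u (W2)."""

    def __init__(self, grad_numel: int, d: int, hidden: int = 32):
        super().__init__()
        self.d = d
        self.chunk = -(-grad_numel // d)                       # ceiling division: size of each chunk
        self.W1 = nn.Linear(self.chunk, hidden, bias=False)    # chunk -> hidden representation
        self.W2 = nn.Linear(hidden, 1, bias=False)             # hidden -> one coordinate of u

    def forward(self, g_ema: torch.Tensor) -> torch.Tensor:
        flat = g_ema.reshape(-1)                               # (1) vectorize
        pad = self.d * self.chunk - flat.numel()
        flat = torch.cat([flat, flat.new_zeros(pad)])          # pad so the split is even
        chunks = flat.view(self.d, self.chunk)                 # (2) split into d chunks
        hidden = self.W1(chunks)                               # (3) chunk -> hidden representation
        return self.W2(hidden).squeeze(-1)                     # (4) hidden -> u_l, shape (d,)


# For a conv weight with 64 filters, produce one alpha and one beta entry per channel.
adapter = ChunkingAdapter(grad_numel=64 * 32 * 3, d=2 * 64)
u = adapter(torch.randn(64, 32, 3))            # u = [alpha; beta] with 128 coordinates
```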
[0032] In summary, let ⊛ denote the convolution operation. At step t, the fast adaptation procedure of the FSNet adapter for the l-th layer is summarized as:

[α_l; β_l] = u_l = Ω(ĝ_l; φ_l),
θ̃_l = tile(α_l) ⊙ θ_l,
h̃_l = tile(β_l) ⊙ h_l,  where h_l = θ̃_l ⊛ h̃_{l−1}.    (5)
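Tying Eqs. (2), (3), and (5) together, the following sketch applies a per-channel weight adaptation and feature adaptation around a causal dilated convolution. The per-output-channel shape of α_l and β_l reflects the per-channel tiling described above and is an assumption about how tile(·) is realized; the function name is hypothetical.

```python
import torch
import torch.nn.functional as F


def fast_adapt_forward(weight: torch.Tensor,   # conv weight theta_l: (C_out, C_in, K)
                       h_prev: torch.Tensor,   # adapted feature map from layer l-1: (B, C_in, T)
                       u: torch.Tensor,        # adaptation coefficient [alpha; beta]: (2 * C_out,)
                       dilation: int = 1) -> torch.Tensor:
    c_out, _, k = weight.shape
    alpha, beta = u[:c_out], u[c_out:]
    # Eq. (2): per-channel weight adaptation, tile(alpha) broadcast over each filter.
    weight_adapted = weight * alpha.view(c_out, 1, 1)
    # h_l = adapted weight (*) adapted feature map of the previous layer (causal dilated conv).
    pad = (k - 1) * dilation
    h = F.conv1d(F.pad(h_prev, (pad, 0)), weight_adapted, dilation=dilation)
    # Eq. (3): per-channel feature adaptation of the output feature map.
    return h * beta.view(1, c_out, 1)


theta = torch.randn(64, 32, 3)                 # 64 filters over 32 input channels, kernel size 3
h_out = fast_adapt_forward(theta, torch.randn(8, 32, 60), torch.randn(2 * 64), dilation=2)
print(h_out.shape)                             # torch.Size([8, 64, 60])
```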
[0033] In one embodiment, in time series, old patterns may reappear in the future, and it is beneficial to recall similar knowledge from the past to facilitate further learning. While storing the original data can alleviate this problem, it might not be applicable in many domains due to privacy concerns. Therefore, an associative memory 318 may be implemented to store the adaptation coefficients of repeating events encountered during learning. While the adapter 315 can handle fast recent changes over a short time scale, recurrent patterns are stored in the memory 318 and then retrieved when they reappear in the future. For this purpose, each adapter 315 is equipped with an associative memory 318, denoted by M_l ∈ R^{N×d}, where d denotes the dimensionality of u_l and N denotes the number of elements. The associative memory 318 only sparsely interacts with the adapter to store, retrieve, and update such important events.
[0034] Specifically, as interacting with the memory 318 at every step can be expensive and susceptible to noise, memory interaction may be triggered only when a substantial change in the representation is detected. Interference between the current and past representations can be characterized in terms of a dot product between the gradients. Therefore, a cosine similarity between the recent and longer-term gradients may be computed and monitored to trigger the memory interaction when their interference falls below a threshold, which could indicate that the pattern has changed significantly. To this end, in addition to computing the gradient EMA ĝ_l (313), a second gradient EMA ĝ'_l with a smaller coefficient γ' < γ is computed, and their cosine similarity triggers the memory interaction as:

Trigger ⟺ cos(ĝ_l, ĝ'_l) < −τ,  where cos(a, b) = a⊤b / (‖a‖ ‖b‖),    (6)

where τ > 0 is a hyper-parameter determining the significant degree of interference. Moreover, τ may be set to a relatively high value (e.g., 0.7) so that the memory only remembers significantly changing patterns, which could be important and may reappear. For example, the EMA hyperparameters may be set as: adapter's EMA coefficient γ = 0.9, gradient EMA coefficient for triggering the memory interaction γ' = 0.3, and memory triggering threshold τ = 0.75.

[0035] In one embodiment, since the current adaptation parameter may not capture the whole event, which could span over a few samples, memory read and write operations may be performed using the adaptation parameter's EMA û_l (with coefficient γ') to fully capture the current pattern. The EMA of u_l is calculated in the same manner as ĝ_l. When a memory interaction is triggered, the adapter queries and retrieves the most similar transformations in the past via an attention read operation, which is a weighted sum over the memory items:
1. Attention calculation: r_l = softmax(M_l û_l);
2. Top-k selection: r_l^(k) = TopK(r_l);
3. Retrieval: ũ_l = Σ_{i=1}^{k} r_l^(k)[i] · M_l[i],

where r_l^(k)[i] denotes the i-th element of r_l^(k) and M_l[i] denotes the i-th row of M_l. As the memory could store conflicting patterns, sparse attention is applied by retrieving the top-k most relevant memory items, e.g., k = 2. The retrieved adaptation parameter characterizes old experiences in adapting to the current pattern in the past and can improve learning at the present time by a weighted sum with the current parameters as

u_l ← τ u_l + (1 − τ) ũ_l,    (7)

where the same threshold value τ can be used to determine the sparse memory interaction and the weighted sum of the adaptation parameter. Then a write operation is performed to update and accumulate the knowledge stored in M_l:

M_l ← τ M_l + (1 − τ) û_l ⊗ r_l^(k),    (8)

where ⊗ denotes the outer-product operator, which allows efficiently writing the new knowledge to the most relevant locations indicated by r_l^(k). The memory is then normalized to avoid its values scaling exponentially.
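The following sketch walks through the sparse memory interaction of Eqs. (6)-(8): the trigger is evaluated from the two gradient EMAs, the adaptation parameter's EMA attends over the N memory rows, the top-k rows are retrieved and blended into u_l, and the touched rows are updated with an outer product and re-normalized. The exact form of the write and of the normalization is an assumption consistent with the "update and accumulate" description above; names are illustrative.

```python
import torch
import torch.nn.functional as F


def memory_interaction(M: torch.Tensor,            # associative memory M_l: (N, d)
                       u: torch.Tensor,            # current adaptation parameter u_l: (d,)
                       u_ema: torch.Tensor,        # EMA of u_l (coefficient gamma'): (d,)
                       g_ema: torch.Tensor,        # gradient EMA of Eq. (1)
                       g_ema_prime: torch.Tensor,  # second gradient EMA with gamma' < gamma
                       tau: float = 0.75, k: int = 2):
    # Eq. (6): interact only when the interference between the two EMAs is significant.
    if F.cosine_similarity(g_ema, g_ema_prime, dim=0) > -tau:
        return M, u                                          # no memory interaction this step

    r = torch.softmax(M @ u_ema, dim=0)                      # attention over memory items
    top_vals, top_idx = torch.topk(r, k)                     # sparse top-k selection
    u_tilde = (top_vals.unsqueeze(1) * M[top_idx]).sum(0)    # retrieval: weighted sum of rows
    u_new = tau * u + (1.0 - tau) * u_tilde                  # Eq. (7): blend old and current

    M = M.clone()
    M[top_idx] = tau * M[top_idx] + (1.0 - tau) * torch.outer(top_vals, u_ema)  # Eq. (8)-style write
    M = M / M.norm().clamp(min=1.0)                          # normalize to keep values bounded
    return M, u_new


M, u = memory_interaction(torch.randn(32, 128), torch.randn(128), torch.randn(128),
                          torch.randn(256), torch.randn(256))
```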
[0036] In one embodiment, the FSNet framework described in relation to FIGS. 1-3 is suitable for the task-free, online continual learning scenario because there is no need to detect when tasks switch explicitly. Instead, the task boundaries definition can be relaxed to allow the model to improve its learning on current samples continuously.
Computing Environment
[0037] FIG. 4 is a simplified diagram of a computing device that implements the FSNet framework, according to some embodiments described herein. As shown in FIG. 4, computing device 400 includes a processor 410 coupled to memory 420. Operation of computing device 400 is controlled by processor 410. And although computing device 400 is shown with only one processor 410, it is understood that processor 410 may be representative of one or more central processing units, multi-core processors, microprocessors, microcontrollers, digital signal processors, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), graphics processing units (GPUs) and/or the like in computing device 400. Computing device 400 may be implemented as a stand-alone subsystem, as a board added to a computing device, and/or as a virtual machine.
[0038] Memory 420 may be used to store software executed by computing device 400 and/or one or more data structures used during operation of computing device 400. Memory 420 may include one or more types of machine readable media. Some common forms of machine readable media may include floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH- EPROM, any other memory chip or cartridge, and/or any other medium from which a processor or computer is adapted to read.
[0039] Processor 410 and/or memory 420 may be arranged in any suitable physical arrangement. In some embodiments, processor 410 and/or memory 420 may be implemented on a same board, in a same package (e.g., system-in-package), on a same chip (e.g., system- on-chip), and/or the like. In some embodiments, processor 410 and/or memory 420 may include distributed, virtualized, and/or containerized computing resources. Consistent with such embodiments, processor 410 and/or memory 420 may be located in one or more data centers and/or cloud computing facilities.
[0040] In some examples, memory 420 may include non-transitory, tangible, machine readable media that includes executable code that when run by one or more processors (e.g., processor 410) may cause the one or more processors to perform the methods described in further detail herein. For example, as shown, memory 420 includes instructions for an online time series forecasting module 430 that may be used to implement and/or emulate the systems and models, and/or to implement any of the methods described further herein. In some examples, the online time series forecasting module 430 may receive an input 440, e.g., such as time-series data in a lookback window, via a data interface 415. The data interface 415 may be any of a user interface that receives uploaded time series data, or a communication interface that may receive or retrieve a previously stored sample of lookback window and forecasting window from the database. The time series forecasting module 430 may generate an output 450, such as a forecast to the input 440.
[0041] In some embodiments, the time series forecasting module 430 may further include a series of TCN blocks 431a-n (similar to 104a-n shown in FIG. 1) and a regressor 432 (similar to 105 shown in FIG. 1). In one implementation, the time series forecasting module 430 and its submodules 431-432 may be implemented via software, hardware and/or a combination thereof.
[0042] Some examples of computing devices, such as computing device 400 may include non-transitory, tangible, machine readable media that include executable code that when run by one or more processors (e.g., processor 410) may cause the one or more processors to perform the processes of methods discussed throughout the disclosure. Some common forms of machine-readable media that may include the processes of methods are, for example, floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, any other memory chip or cartridge, and/or any other medium from which a processor or computer is adapted to read.
Example Workflows
[0043] FIG. 5 is a simplified pseudo code segment for a fast and slow learning network implemented at the FSNet framework described in FIGS. 1-3, according to embodiments described herein. For example, for the stack of L layers (e.g., 104a-n in FIG. 1), forward computation may be performed to compute the adaptation parameter comprising the weight adaptation component α_l and the feature adaptation component β_l at each layer. Memory read and write operations may be performed via the chunking process, and the adaptation parameter may be updated by a weighted sum of the current and past adaptation parameters.
[0044] Next, the weight adaptation and feature adaptation may be performed according to Eq. (5). After updating the adaptation parameters through forward computation over L layers, forecast data can be generated via the regressor (e.g., 105 in FIG. 1). The forecast data is then compared with the ground-truth future data from the training sample to compute the forecast loss, which is then used to update the stack of L layers via backpropagation. The regressor may also be updated via stochastic gradient descent (SGD). The adaptation parameters and EMA adaptation parameters are then updated backwardly.
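A hedged sketch of the online loop summarized in FIG. 5: at each step the model forecasts the next H values from the current look-back window, the forecast loss against the newly revealed ground truth drives one backpropagation update of the layers and the regressor, and any per-layer EMA/adapter bookkeeping is then refreshed. The `update_adapters` hook is hypothetical; only the optimizer and loss follow the description.

```python
import torch
import torch.nn as nn


def online_train(model: nn.Module, series: torch.Tensor, lookback: int = 60,
                 horizon: int = 1, lr: float = 1e-3) -> None:
    """series: (T, n_features) revealed one step at a time (epoch and batch size of one)."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for t in range(lookback, series.shape[0] - horizon + 1):
        x = series[t - lookback:t].unsqueeze(0)        # current look-back window
        y = series[t:t + horizon].unsqueeze(0)         # ground truth revealed online
        forecast = model(x)                            # forward pass: L blocks + regressor
        loss = loss_fn(forecast, y)                    # forecast loss
        optimizer.zero_grad()
        loss.backward()                                # backpropagate through the stack
        optimizer.step()                               # update layers and regressor
        if hasattr(model, "update_adapters"):          # hypothetical hook: refresh gradient EMAs,
            model.update_adapters()                    # adaptation parameters, and memory
```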
[0045] FIG. 6 is a simplified logic flow diagram illustrating an example process 600 corresponding to the pseudo code algorithm in FIG. 5, according to embodiments described herein. One or more of the processes of method 600 may be implemented, at least in part, in the form of executable code stored on non-transitory, tangible, machine-readable media that when run by one or more processors may cause the one or more processors to perform one or more of the processes. In some embodiments, method 600 corresponds to the operation of the FSNet framework 100 (FIG. 1) for forecasting time series data at future timestamps in a dynamic system.
[0046] At step 602, a time series dataset that includes a plurality of datapoints corresponding to a plurality of timestamps within a lookback time window (e.g., 102 in FIG. 1) may be received via a data interface (e.g., 415 in FIG. 4).
[0047] At step 604, a convolutional layer (e.g., block 104a in FIGS. 1-2) from a stack of convolutional layers (e.g., Blocks 104a-n in FIG. 1) may compute a first gradient based on exponential moving average of gradients corresponding to the respective convolutional layer, e.g., according to Eq. (1).
[0048] At step 606, a first adaptation parameter u_l corresponding to the convolutional layer may be determined by mapping portions of the first gradient to elements of the first adaptation parameter. For example, the first adaptation parameter comprises a first weight adaptation component α_l and a first feature adaptation component β_l.
[0049] At step 608, for at least one convolutional layer in a temporal convolutional neural network, a layer forecasting loss indicative of a loss contribution of the respective convolutional layer to an overall forecasting loss according to the plurality of datapoints may be optionally determined, based on the plurality of datapoints. For example, the layer forecasting loss may be computed via the partial derivative ∇_{θ_l} ℓ.
[0050] At step 610, the at least one convolutional layer may be optionally updated based on the layer forecasting loss. In this way, each layer may be monitored and modified independently to learn the current loss by learning through the layer forecasting loss.
[0051] At step 612, a cosine similarity between the first gradient of the updated convolutional layer and a longer-term gradient associated with the at least one first convolutional layer may be computed, e.g., according to Eq. (6).
[0052] At step 614, when the cosine similarity is greater than a predefined threshold, method 600 proceeds to step 616 to perform a chunking process for memory read and write. Specifically, at step 616, a current adaptation parameter is retrieved from an indexed memory (e.g., 318 in FIG. 3) corresponding to the convolutional layer. At step 618, content stored at the indexed memory (e.g., 318 in FIG. 3) is updated based on the current adaptation parameter and the first adaptation parameter. At step 620, the first adaptation parameter is updated by taking a weighted average with the retrieved current adaptation parameter.
[0053] At step 622, an adapted layer parameter θ̃_l is computed based on the first weight adaptation component α_l and a layer parameter corresponding to the first layer, e.g., according to Eq. (5).
[0054] At step 624, a feature map hL of the first convolutional layer is generated with the first feature adaptation component pL. For example, the first feature map is a convolution of the adapted layer parameter and a previous adapted feature map from a preceding layer. At step 626, an adapted feature map ht is computed based on the first feature adaptation component
Figure imgf000014_0002
and a first feature map hL of the first convolutional layer.
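As a non-limiting illustration of steps 622-626, the following sketch adapts the kernel with α_l, convolves it with the previous adapted feature map, and then adapts the resulting feature map with β_l; the per-channel (tiled) form of the adaptation is an assumption consistent with the element-wise calibration described above:

```python
import numpy as np

def adapt_and_convolve(theta_l, alpha_l, beta_l, h_prev):
    """Sketch of steps 622-626: weight adaptation, causal convolution, feature adaptation.

    theta_l: convolution kernel, shape (out_ch, in_ch, k).
    alpha_l: assumed per-output-channel weight adaptation coefficients, shape (out_ch,).
    beta_l:  assumed per-channel feature adaptation coefficients, shape (out_ch,).
    h_prev:  adapted feature map from the preceding layer, shape (in_ch, T).
    """
    theta_tilde = theta_l * alpha_l[:, None, None]        # step 622: adapted layer parameter
    out_ch, in_ch, k = theta_tilde.shape
    T = h_prev.shape[1]
    padded = np.pad(h_prev, ((0, 0), (k - 1, 0)))         # causal (left) padding
    h_l = np.zeros((out_ch, T))
    for t in range(T):                                    # step 624: naive causal convolution
        h_l[:, t] = np.einsum('oik,ik->o', theta_tilde, padded[:, t:t + k])
    h_tilde = beta_l[:, None] * h_l                       # step 626: adapted feature map
    return h_tilde
```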
[0055] At step 628, a regressor (e.g., 105 in FIG. 1) may generate time series forecast data corresponding to a future time window based on a final feature map output from the stack of convolutional layers corresponding to the time series data within the lookback time window.
[0056] At step 630, a forecast loss may be computed based on the generated time series forecast data and ground-truth data corresponding to the future time window.
[0057] The stack of convolutional layers and the regressor may then be updated based on the forecast loss via backpropagation. At step 632, the regressor may be updated via stochastic gradient descent. At step 634, the gradient and the adaptation parameter of each layer of the stack may then be updated in a backward pass through the stack.
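As a non-limiting illustration, one online round combining steps 628-634 may look like the following sketch; the backbone, regressor, learning rate, and shapes are assumptions, and the per-layer EMA/adaptation bookkeeping of steps 604-626 is assumed to be handled inside the backbone's forward pass:

```python
import torch
import torch.nn as nn

# Assumed modules; in the framework the backbone would be the stack of adaptive
# convolutional layers, and the regressor maps the final feature map to an
# H-step forecast (H = 1 here).
backbone = nn.Sequential(nn.Conv1d(7, 16, kernel_size=3, padding=1), nn.ReLU())
regressor = nn.Sequential(nn.Flatten(), nn.Linear(16 * 60, 7))
optimizer = torch.optim.SGD(
    list(backbone.parameters()) + list(regressor.parameters()), lr=1e-3)

def online_round(x_lookback, y_future):
    """x_lookback: (1, 7, 60) lookback window; y_future: (1, 7) next-step target."""
    features = backbone(x_lookback)                       # final feature map
    forecast = regressor(features)                        # step 628: forecast the future window
    loss = nn.functional.mse_loss(forecast, y_future)     # step 630: forecast loss
    optimizer.zero_grad()
    loss.backward()                                       # backpropagate through stack + regressor
    optimizer.step()                                      # steps 632-634: gradient updates
    return loss.item()
```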
Example Performance

[0058] Data experiments have been carried out to verify the following hypotheses: (i) FSNet facilitates faster adaptation to both new and recurring concepts compared to existing strategies; (ii) FSNet achieves faster and better convergence than other methods; and (iii) modeling the partial derivative is a key ingredient for fast adaptation.
[0059] Specifically, a wide range of time series forecasting datasets have been used in the data experiments: (i) ETT (Zhou et al., Informer: Beyond efficient transformer for long sequence time-series forecasting, in Proceedings of AAAI, 2021) records the target value of "oil temperature" and 6 power load features over a period of two years; the ETTh2 and ETTm1 benchmarks are used, where the observations are recorded hourly and at 15-minute intervals, respectively. (ii) The ECL (Electricity Consuming Load) dataset collects the electricity consumption of 321 clients from 2012 to 2014. (iii) The Traffic dataset records the road occupancy rates at San Francisco Bay Area freeways. (iv) The Weather dataset records 11 climate features from nearly 1,600 locations in the U.S. at one-hour intervals from 2010 to 2013.
[0060] In addition, two synthetic datasets are constructed to explicitly test the model's ability to deal with new and recurring concept drifts. A task may be synthesized by sampling 1,000 samples from a first-order autoregressive process AR_φ(1) with coefficient φ, where different tasks correspond to different φ values. The first synthetic dataset, S-Abrupt, contains abrupt and recurrent concepts, where the samples abruptly switch from one AR process to another in the following order: AR_0.1(1), AR_0.4(1), AR_0.6(1), AR_0.1(1), AR_0.3(1), AR_0.6(1). The second dataset, S-Gradual, contains gradual, incremental shifts, where the shift starts at the last 20% of each task. In this scenario, the last 20% of samples of a task are averaged from the two AR processes in the order above.
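As a non-limiting illustration, the synthetic tasks may be generated as in the following sketch; the noise scale, random seeds, and the blending scheme for S-Gradual are assumptions not fixed by the description above:

```python
import numpy as np

def ar1(phi, n=1000, sigma=0.3, seed=0):
    """Sample n points from a first-order autoregressive process AR_phi(1)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(scale=sigma)
    return x

# S-Abrupt: concatenate 1,000-sample tasks whose AR coefficient switches abruptly.
phis = [0.1, 0.4, 0.6, 0.1, 0.3, 0.6]
s_abrupt = np.concatenate([ar1(p, seed=i) for i, p in enumerate(phis)])

# S-Gradual: the last 20% of each task is an average of the current and next AR process.
def s_gradual(phis, n=1000, blend=0.2):
    parts = []
    for i, (a, b) in enumerate(zip(phis[:-1], phis[1:])):
        x_a, x_b = ar1(a, n, seed=i), ar1(b, n, seed=i + 1)
        m = int(n * blend)
        x_a[-m:] = 0.5 * (x_a[-m:] + x_b[:m])   # incremental shift into the next concept
        parts.append(x_a)
    parts.append(ar1(phis[-1], n, seed=len(phis) - 1))
    return np.concatenate(parts)
```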
[0061] At implementation, data is split into warm-up and online training phases by a ratio of 25:75, and the TCN backbone is used for the experiments, except for the Informer baseline. Optimization follows the details in Zhou et al., Informer: Beyond efficient transformer for long sequence time-series forecasting, in Proceedings of AAAI, 2021, optimizing the ℓ2 (MSE) loss with the AdamW optimizer. Both the epoch and batch size are set to one to follow the online learning setting. A fair comparison is implemented by making sure that all baselines use the same total memory budget as FSNet, which includes three times the network size: one working model and two EMAs of its gradient. Thus, for ER, MIR, and DER++, an episodic memory is employed to store previous samples to meet this budget. For the remaining baselines, the backbone size can be increased instead. Lastly, in the warm-up phase, the mean and standard deviation are calculated to normalize online training samples and to perform hyperparameter cross-validation. For all benchmarks, the look-back window length is set to 60 and the forecast horizon to H = 1. The model's ability to forecast longer horizons is tested by varying H ∈ {1, 24, 48}.
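As a non-limiting illustration, the 25:75 warm-up/online split and the warm-up-based normalization may be implemented as follows; the epsilon term and the per-dimension treatment are assumptions:

```python
import numpy as np

def split_and_normalize(series, warmup_ratio=0.25, eps=1e-8):
    """Split a (T, d) series into warm-up and online phases and normalize both
    using statistics computed on the warm-up phase only."""
    n_warm = int(len(series) * warmup_ratio)
    warm, online = series[:n_warm], series[n_warm:]
    mu, sd = warm.mean(axis=0), warm.std(axis=0) + eps    # warm-up statistics
    return (warm - mu) / sd, (online - mu) / sd
```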
[0062] A suite of training strategies from both continual learning and time series forecasting is adopted for comparison. First, the OnlineTCN strategy simply trains continuously (described in Zinkevich, Online convex programming and generalized infinitesimal gradient ascent, in Proceedings of the 20th International Conference on Machine Learning (ICML-03), pages 928-936, 2003). Second, the Experience Replay (ER) strategy (described in Lin, Self-improving reactive agents based on reinforcement learning, planning and teaching, Machine Learning, 8(3-4):293-321, 1992) employs a buffer to store previous data and interleaves old samples during the learning of newer ones. Three recent advanced variants of ER are also considered. First, TFCL (Aljundi et al., Task-free continual learning, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11254-11263, 2019) introduces a task-boundary detection mechanism and a knowledge consolidation strategy that regularizes the networks' outputs. Second, MIR (Aljundi et al., Online continual learning with maximal interfered retrieval, Advances in Neural Information Processing Systems, 32:11849-11860, 2019) replaces the random sampling in ER by selecting samples that cause the most forgetting. Lastly, DER++ (Buzzega et al., Dark experience for general continual learning: a strong, simple baseline, in 34th Conference on Neural Information Processing Systems (NeurIPS 2020), 2020) augments the standard ER with a knowledge distillation strategy (described in Hinton et al., Distilling the knowledge in a neural network, arXiv preprint arXiv:1503.02531, 2015). ER and its variants are strong baselines in the online setting since they enjoy the benefits of training on mini-batches, which greatly reduces noise from single samples and offers faster, better convergence (see Bottou et al., Online learning and stochastic approximations, Online Learning in Neural Networks, 17(9):142, 1998). While the aforementioned baselines use a TCN backbone, Informer, the time series forecasting method based on the transformer architecture (Vaswani et al., Attention is all you need, Advances in Neural Information Processing Systems, 30, 2017), is also included.

[0063] In addition, the Online Gradient Descent (OGD) strategy (described in Zinkevich, Online convex programming and generalized infinitesimal gradient ascent, in Proceedings of the 20th International Conference on Machine Learning, pages 928-936, 2003), which simply trains continuously, is considered. OGD (L), a large variant of OGD with twice the TCN's filters per layer, resulting in roughly twice the number of parameters, is also included. Another baseline is the Experience Replay strategy (described in Chaudhry et al., On tiny episodic memories in continual learning, arXiv preprint arXiv:1902.10486, 2019), where a buffer is employed to store previous data and interleave old samples during the learning of newer ones. Another baseline is DER++ (Buzzega et al., Dark experience for general continual learning: a strong, simple baseline, in Proceedings of the 34th Conference on Neural Information Processing Systems (NeurIPS 2020), 2020), which further adds a knowledge distillation loss (described in Hinton et al., Distilling the knowledge in a neural network, arXiv preprint arXiv:1503.02531, 2015) to ER. ER and DER++ are strong baselines in the online setting since they enjoy the benefits of training on mini-batches, which greatly reduces noise from single samples and offers faster, better convergence.
[0064] FIG. 7 reports cumulative mean-squared errors (MSE) and mean-absolute errors (MAE) at the end of training. It is observed that ER and DER++ are strong competitors and can significantly improve over the OGD strategies. However, such methods still cannot work well under multiple task switches (S-Abrupt). Moreover, the absence of clear task boundaries (S-Gradual) presents an even more challenging problem and increases most models' errors. On the other hand, FSNet shows promising results on all datasets and outperforms most competing baselines across different forecasting horizons. Moreover, the improvements are significant on the synthetic benchmarks, indicating that FSNet can quickly adapt to the non-stationary environment and recall previous knowledge, even without clear task boundaries.
[0065] FIG. 8 reports the convergence behaviors of the considered methods. The results show the benefit of ER in offering faster convergence during learning compared to OGD. However, it is important to note that storing the original data may not be applicable in many domains. On S-Abrupt, most baselines demonstrate an inability to quickly recover from concept drifts, as indicated by the increasing error curves. Promising results of FSNet are also observed on most datasets, with significant improvements over the baselines on the ETT, WTH, and S-Abrupt datasets. The ECL dataset is more challenging, with missing values (Li et al., 2019) and magnitudes varying widely within and across dimensions, which may require a better data normalization. While FSNet achieved encouraging results on ECL, handling the above challenges can further improve its performance. Overall, the results shed light on the challenges of online time series forecasting and demonstrate promising results of FSNet.
[0066] The model's prediction quality on S-Abrupt is visualized as shown in FIG. 8, as it is a univariate time series. The remaining real-world datasets are multivariate and are challenging to visualize. Particularly, the model's forecasting at two time points is plotted: at t = 900 and at the end of learning, t = 5900, in FIG. 9. With the limited samples per task and the presence of multiple concept drifts, the standard online optimization collapsed to a naive solution of predicting random noise around zero. However, FSNet can successfully capture the time series' patterns and provide better predictions.
[0067] This description and the accompanying drawings that illustrate inventive aspects, embodiments, implementations, or applications should not be taken as limiting. Various mechanical, compositional, structural, electrical, and operational changes may be made without departing from the spirit and scope of this description and the claims. In some instances, well-known circuits, structures, or techniques have not been shown or described in detail in order not to obscure the embodiments of this disclosure. Like numbers in two or more figures represent the same or similar elements.
[0068] In this description, specific details are set forth describing some embodiments consistent with the present disclosure. Numerous specific details are set forth in order to provide a thorough understanding of the embodiments. It will be apparent, however, to one skilled in the art that some embodiments may be practiced without some or all of these specific details. The specific embodiments disclosed herein are meant to be illustrative but not limiting. One skilled in the art may realize other elements that, although not specifically described here, are within the scope and the spirit of this disclosure. In addition, to avoid unnecessary repetition, one or more features shown and described in association with one embodiment may be incorporated into other embodiments unless specifically described otherwise or if the one or more features would make an embodiment non-functional.
[0069] Although illustrative embodiments have been shown and described, a wide range of modification, change and substitution is contemplated in the foregoing disclosure and in some instances, some features of the embodiments may be employed without a corresponding use of other features. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. Thus, the scope of the invention should be limited only by the following claims, and it is appropriate that the claims be construed broadly and in a manner consistent with the scope of the embodiments disclosed herein.

Claims

WHAT IS CLAIMED IS:
1. A method of forecasting time series data at future timestamps in a dynamic system, the method comprising: receiving, via a data interface, a time series dataset that includes a plurality of datapoints corresponding to a plurality of timestamps within a lookback time window; computing, at a first convolutional layer from a stack of convolutional layers, a first gradient based on exponential moving average of gradients corresponding to the first convolutional layer; determining first adaptation parameters corresponding to the first convolutional layer based on mapping portions of the first gradient to elements of the first adaptation parameters; computing an adapted feature map based at least in part on the first adaptation parameters and a previous adapted feature map from a preceding convolutional layer; generating, via a regressor, time series forecast data corresponding to a future time window based on a final feature map output from the stack of convolutional layers corresponding to the time series data within the lookback time window; computing a forecast loss based on the generated time series forecast data and ground-truth data corresponding to the future time window; and updating the stack of convolutional layers based on the forecast loss via backpropagation.
2. The method of claim 1, wherein the first adaptation parameters comprise a first weight adaptation component and a first feature adaptation component.
3. The method of claim 2, further comprising: for at least one convolutional layer in a temporal convolutional neural network: determining, based on the plurality of datapoints, a layer forecasting loss indicative of a loss contribution of the respective convolutional layer to an overall forecasting loss according to the plurality of datapoints; and updating the at least one convolutional layer based on the layer forecasting loss.
4. The method of claim 3, further comprising: computing a cosine similarity between the first gradient of the updated convolutional layer and a longer-term gradient associated with the at least one convolutional layer; in response to determining that the cosine similarity is greater than a predefined threshold: retrieving, from an indexed memory corresponding to the first convolutional layer, a current adaptation parameter; updating content stored at the indexed memory based on the current adaptation parameter and the first adaptation parameter; and updating the first adaptation parameters by taking a weighted average with the retrieved current adaptation parameter.
5. The method of claim 4, further comprising: computing an adapted layer parameter based on generating a first adapted weight based on the first weight adaptation component and a layer parameter corresponding to the first layer; and generating a feature map of the first convolutional layer with the first feature adaptation component.
6. The method of claim 5, wherein the adapted feature map is computed based on the first feature adaptation component and a first feature map of the first convolutional layer, and wherein the first feature map is a convolution of the adapted layer parameter and a previous adapted feature map from a preceding layer.
7. The method of claim 6, wherein the stack of convolutional layers and the regressor are updated by: updating the regressor via stochastic gradient descent; and updating, at the first convolutional layer, the first gradient and the first adaptation parameter.
8. The method of claim 4, further comprising: in response to determining that the cosine similarity is greater than a predefined threshold: triggering a memory read or write operation that captures a current pattern of gradients.
9. The method of claim 8, wherein the current pattern is captured by: computing attentions based on a current content of the memory and a current adaptation parameter; selecting a set of top relevant attentions from the computed attentions; and updating the current adaptation parameter by taking a weighted sum of the current content of the memory weighted by the set of top relevant attentions.
10. The method of claim 9, further comprising: performing a write operation to update and accumulate the current content of the memory based on the updated current adaptation parameter.
11. A system for forecasting time series data at future timestamps in a dynamic system, the system comprising: a data interface that receives a time series dataset that includes a plurality of datapoints corresponding to a plurality of timestamps within a lookback time window; a memory that stores a plurality of processor-executable instructions; and a processor that reads from the memory and executes the instructions to perform operations comprising: computing, at a first convolutional layer from a stack of convolutional layers, a first gradient based on exponential moving average of gradients corresponding to the first convolutional layer; determining first adaptation parameters corresponding to the first convolutional layer based on mapping portions of the first gradient to elements of the first adaptation parameters; computing an adapted feature map based at least in part on the first adaptation parameters and a previous adapted feature map from a preceding convolutional layer; generating, via a regressor, time series forecast data corresponding to a future time window based on a final feature map output from the stack of convolutional layers corresponding to the time series data within the lookback time window; computing a forecast loss based on the generated time series forecast data and ground-truth data corresponding to the future time window; and updating the stack of convolutional layers based on the forecast loss via backpropagation.
12. The system of claim 11, wherein the first adaptation parameters comprise a first weight adaptation component and a first feature adaptation component.
13. The system of claim 12, wherein the operations further comprise: for at least one convolutional layer in a temporal convolutional neural network: determining, based on the plurality of datapoints, a layer forecasting loss indicative of a loss contribution of the respective convolutional layer to an overall forecasting loss according to the plurality of datapoints; and updating the at least one convolutional layer based on the layer forecasting loss.
14. The system of claim 13, wherein the operations further comprise: computing a cosine similarity between the first gradient of the updated convolutional layer and a longer-term gradient associated with the at least one convolutional layer; in response to determining that the cosine similarity is greater than a predefined threshold: retrieving, from an indexed memory corresponding to the first convolutional layer, a current adaptation parameter; updating content stored at the indexed memory based on the current adaptation parameter and the first adaptation parameter; and updating the first adaptation parameters by taking a weighted average with the retrieved current adaptation parameter.
15. The system of claim 14, wherein the operations further comprise: computing an adapted layer parameter based on generating a first adapted weight based on the first weight adaptation component and a layer parameter corresponding to the first layer; and generating a feature map of the first convolutional layer with the first feature adaptation component.
16. The system of claim 15, wherein the adapted feature map is computed based on the first feature adaptation component and a first feature map of the first convolutional layer, and wherein the first feature map is a convolution of the adapted layer parameter and a previous adapted feature map from a preceding layer.
17. The system of claim 16, wherein the stack of convolutional layers and the regressor are updated by: updating the regressor via stochastic gradient descent; and updating, at the first convolutional layer, the first gradient and the first adaptation parameter.
18. The system of claim 14, wherein the operations further comprise: in response to determining that the cosine similarity is greater than a predefined threshold: triggering a memory read or write operation that captures a current pattern of gradients.
19. The system of claim 18, wherein the current pattern is captured by: computing attentions based on a current content of the memory and a current adaptation parameter; selecting a set of top relevant attentions from the computed attentions; and updating the current adaptation parameter by taking a weighted sum of the current content of the memory weighted by the set of top relevant attentions.
20. A non-transitory processor-readable storage medium storing processor-readable instructions for forecasting time series data at future timestamps in a dynamic system, the instructions being executed by a processor to perform operations comprising: receiving, via a data interface, a time series dataset that includes a plurality of datapoints corresponding to a plurality of timestamps within a lookback time window; computing, at a first convolutional layer from a stack of convolutional layers, a first gradient based on exponential moving average of gradients corresponding to the first convolutional layer; determining first adaptation parameters corresponding to the first convolutional layer based on mapping portions of the first gradient to elements of the first adaptation parameters; computing an adapted feature map based at least in part on the first adaptation parameters and a previous adapted feature map from a preceding convolutional layer; generating, via a regressor, time series forecast data corresponding to a future time window based on a final feature map output from the stack of convolutional layers corresponding to the time series data within the lookback time window; computing a forecast loss based on the generated time series forecast data and ground-truth data corresponding to the future time window; and updating the stack of convolutional layers based on the forecast loss via backpropagation.
PCT/US2023/060618 2022-01-31 2023-01-13 Systems and methods for online time series forcasting WO2023147227A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202263305145P 2022-01-31 2022-01-31
US63/305,145 2022-01-31
US17/871,819 2022-07-22
US17/871,819 US20230244943A1 (en) 2022-01-31 2022-07-22 Systems and methods for online time series forcasting

Publications (1)

Publication Number Publication Date
WO2023147227A1 true WO2023147227A1 (en) 2023-08-03

Family

ID=85222133

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2023/060618 WO2023147227A1 (en) 2022-01-31 2023-01-13 Systems and methods for online time series forcasting

Country Status (1)

Country Link
WO (1) WO2023147227A1 (en)

Non-Patent Citations (13)

* Cited by examiner, † Cited by third party
Title
ALJUNDI ET AL.: "Online continual learning with maximal interfered retrieval", ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS, vol. 32, 2019, pages 11849 - 11860
ALJUNDI ET AL.: "Task-free continual learning", PROCEEDINGS OF THE IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, vol. 325, 2019, pages 11254 - 11263
ARANI ELAHE ET AL: "LEARNING FAST, LEARNING SLOW: A GENERAL CONTINUAL LEARNING METHOD BASED ON COMPLE- MENTARY LEARNING SYSTEM", 29 January 2022 (2022-01-29), XP093034801, Retrieved from the Internet <URL:https://arxiv.org/pdf/2201.12604v1.pdf> [retrieved on 20230327] *
BOTTOU ET AL.: "Online learning and stochastic approximations", ONLINE LEARNING IN NEURAL NETWORKS, vol. 17, no. 9, 1998, pages 142
BUZZEGA ET AL.: "Dark experience for general continual learning: a strong, simple baseline", 34TH CONFERENCE ON NEURAL INFORMATION PROCESSING SYSTEMS (NEURIPS 2020, 2020
BUZZEGA ET AL.: "Dark experience for general continual learning: a strong, simple baseline", PROCEEDINGS OF 34TH CONFERENCE ON NEURAL INFORMATION PROCESSING SYSTEMS (NEURIPS 2020, 2020
CHAUDHRY ET AL.: "On tiny episodic memories in continual learning", ARXIV PREPRINT ARXIV: 1902.10486, 2019
HINTON ET AL.: "Distilling the knowledge in a neural network", ARXIV PREPRINT ARXIV: 1503.02531, 2015
HINTON ET AL.: "Distilling the knowledge in a neural network", ARXIV PREPRINT ARXIV:1503.02531, 2015
SOOD SRIJAN SRIJAN SOOD@JPMORGAN COM ET AL: "Visual time series forecasting an image-driven approach", PROCEEDINGS OF THE 27TH ACM SYMPOSIUM ON VIRTUAL REALITY SOFTWARE AND TECHNOLOGY, ACMPUB27, NEW YORK, NY, USA, 3 November 2021 (2021-11-03), pages 1 - 9, XP058790326, ISBN: 978-1-4503-9148-1, DOI: 10.1145/3490354.3494387 *
VASWANI ET AL.: "Attention is all you need", ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS, vol. 30, 2017
ZHANG ET AL.: "Informer: Beyond efficient transformer for long sequence time-series forecasting", PROCEEDINGS OF AAAI, 2021
ZINKEVICH: "Online convex and generalized infinitesimal gradient ascent", PROCEEDINGS OF THE 20TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING, 2003, pages 928 - 936

Similar Documents

Publication Publication Date Title
US11593611B2 (en) Neural network cooperation
Rangapuram et al. Deep state space models for time series forecasting
Corchado et al. Hybrid artificial intelligence methods in oceanographic forecast models
Behera et al. Multiscale deep bidirectional gated recurrent neural networks based prognostic method for complex non-linear degradation systems
US11418029B2 (en) Method for recognizing contingencies in a power supply network
US10902311B2 (en) Regularization of neural networks
EP3422517B1 (en) A method for recognizing contingencies in a power supply network
US10783452B2 (en) Learning apparatus and method for learning a model corresponding to a function changing in time series
Heye et al. Precipitation nowcasting: Leveraging deep recurrent convolutional neural networks
US20180144266A1 (en) Learning apparatus and method for learning a model corresponding to real number time-series input data
Challu et al. Deep generative model with hierarchical latent factors for time series anomaly detection
Bhanja et al. Deep learning-based integrated stacked model for the stock market prediction
Tang et al. Short-term travel speed prediction for urban expressways: Hybrid convolutional neural network models
CN116021981A (en) Method, device, equipment and storage medium for predicting ice coating faults of power distribution network line
Qiao et al. Effective ensemble learning approach for SST field prediction using attention-based PredRNN
KR20220145007A (en) Data Processing Method of Detecting and Recovering Missing Values, Outliers and Patterns in Tensor Stream Data
CN117540336A (en) Time sequence prediction method and device and electronic equipment
Jiang et al. A timeseries supervised learning framework for fault prediction in chiller systems
US20230244943A1 (en) Systems and methods for online time series forcasting
WO2023147227A1 (en) Systems and methods for online time series forcasting
De et al. Forecasting chaotic weather variables with echo state networks and a novel swing training approach
WO2022165602A1 (en) Method, system and computer readable medium for probabilistic spatiotemporal forecasting
WO2022212031A1 (en) Controlling asynchronous fusion of spatio-temporal multimodal data
CN114004421B (en) Traffic data missing value interpolation method based on space-time integrated learning
Shi et al. An Attention-based Context Fusion Network for Spatiotemporal Prediction of Sea Surface Temperature

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23704676

Country of ref document: EP

Kind code of ref document: A1