WO2021255514A1 - Padding method for convolutional neural network layers adapted to perform multivariate time series analysis - Google Patents

Info

Publication number
WO2021255514A1
Authority
WO
WIPO (PCT)
Application number
PCT/IB2020/061237
Other languages
French (fr)
Inventor
Rui Jorge PEREIRA GONÇALVES
Fernando Manuel FERREIRA LOBO PEREIRA
Original Assignee
Universidade Do Porto
Application filed by Universidade Do Porto
Publication of WO2021255514A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks



Abstract

The present invention is enclosed in the field of Convolutional Neural Networks configured to perform Multivariate Time Series analysis, and relates to a padding method specially conceived for configuring convolutional neural network layers adapted to perform Multivariate Time Series analysis on a bidimensional input map (1). The padding method involves mapping the input map (1) onto a cylinder (2) by adding a predefined number of wrapped rows, depending on the vertical size of the convolutional kernel (3) to be used in the subsequent convolutional operation.

Description

DESCRIPTION
PADDING METHOD FOR CONVOLUTIONAL NEURAL NETWORK LAYERS ADAPTED TO PERFORM MULTIVARIATE TIME SERIES ANALYSIS
FIELD OF THE INVENTION
The present invention is enclosed in the area of Convolutional Neural Networks. In particular, the present invention relates to the field of Convolutional Neural Networks configured to perform Multivariate Time Series analysis.
PRIOR ART
A time series is a continuous sequence of observations taken repeatedly, normally at equal intervals, over time. The relationship between past and future observations can be stochastic or non-deterministic. In some Multivariate Time Series (MTS) studies, it is normal for the training and test datasets to be composed of observations/examples of independent time series segments with all the available time-distributed information, such that the context changes from example to example. Although the Long Short-Term Memory network, which is a particular type of Recurrent Neural Network, is more suitable for MTS analysis from a theoretical point of view, Convolutional Neural Networks (CNNs) have gained popularity for analysing this particular type of problem (i.e. segmented MTS). Some studies have applied CNNs in this domain, and a good example is WaveNet from Google DeepMind [1], applied to audio signal generation.
When the convolution kernel is larger than (1,1), the convolution result is smaller than the original input. Usually, this is not a concern for inputs with large dimensions, i.e. images, and small filters. However, it can constitute a major problem with small input dimensions or when considering a high number of stacked convolutional layers. As such, the practical effect of large filter sizes and/or very deep CNNs on the size of the resulting feature map entails a loss of information such that the model can simply run out of data upon which it operates.
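The shrinkage described above follows directly from the arithmetic of a convolution without padding. A minimal sketch (the function name and the three-layer loop are illustrative, not part of the patent):

```python
def valid_output_size(input_size: int, kernel_size: int, stride: int = 1) -> int:
    """Spatial extent of a convolution output when no padding is applied."""
    return (input_size - kernel_size) // stride + 1

# A variables dimension of size 8 collapses quickly under stacked
# kernels of vertical size 3 when no padding is used:
size = 8
sizes = [size]
for _ in range(3):  # three stacked convolutional layers
    size = valid_output_size(size, 3)
    sizes.append(size)
print(sizes)  # [8, 6, 4, 2]
```

One more such layer would leave no room for a size-3 kernel at all, which is the "run out of data" effect described above.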
An important mechanism used in convolution operations to tackle this issue is padding, which is a process used for the border treatment of data before the convolution operation in a CNN. Through this mechanism, the borders of the input space are pre-processed in order to retain as much original meaningful information as possible at the output level. Currently, the standard procedure to avoid the border effect problem consists of including zeros outside of the input map, in a number of rows above the top row and below the bottom row, and in a number of columns on the left of the first column and on the right of the last column of the input map. In this way, the convolution output will have the same spatial extent as the input.
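The standard zero-padding procedure above can be sketched with NumPy (the array sizes are illustrative):

```python
import numpy as np

# A small input map: 4 variables (rows) x 6 time-steps (columns)
x = np.arange(24, dtype=float).reshape(4, 6)

# Surround the map with one row/column of zeros on every side, so a
# (3, 3) kernel produces an output with the same spatial extent.
padded = np.pad(x, pad_width=1, mode="constant", constant_values=0.0)

print(padded.shape)  # (6, 8)
```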
Note, however, that if the goal is to analyse a MTS problem, the input feature map has a relatively small size in the variables dimension. Therefore, the inclusion of zeros through the same padding approach implies a weaker learning capability, since the learned kernel is affected by the dot product operation with the included zero values, thus potentially promoting an erroneous generalization. In this regard, other known padding methods are commonly provided in image processing environments, which make use of the information in the input map to fill in the borders. Examples of padding mechanisms are Valid padding (or no padding), Same padding, Causal padding, Constant 'n' padding, Reflection padding or Tile 2 padding.
Solutions exist in the art, such as document US2020285963, which describes a padding method for a convolutional neural network. The method includes receiving concentrically configured data of an object, the concentrically configured data correlating with an image which was recorded concentrically to the object, deconvolving the concentrically configured data to form a data array including real-coherent data on opposite sides of the data array, and carrying out a convolution operation by using ring padding. In the case of ring padding, the real-coherent data of one side of the data array is utilized for padding the real-coherent data of the opposite side of the data array, and/or vice versa. However, the method described in the referred document is applied only to concentrically configured data and not to MTS analysis, which is non-concentric information by nature.
Document CN106447030A describes a method for optimizing a computational resource of a convolutional neural network, wherein a padding method is used to repeat information of the bidimensional input.
Document CN104794548A describes a parallelization calculation method based on ARIMA (Autoregressive Integrated Moving Average) model load prediction, applicable to data correlation of time series data. A multithreading technology is proposed, able to analyse the characteristics of the power load data and to use the ARIMA model for parallelization processing.
However, all the existing solutions are silent regarding the applicability of the respective padding methods to MTS analyses, and how the accuracy of the CNN architecture can be improved in the subsequent convolution operation.
The present solution is intended to overcome such issues in an innovative way.
SUMMARY OF THE INVENTION
The present application describes a new padding mechanism, specially conceived for MTS analysis, wherein the bidimensional input map is defined by a variables dimension and a time-steps dimension. Such a padding mechanism, when included in a standard CNN architecture, plays an important role in the overall performance of a CNN-based model for MTS classification, improving its accuracy.
It is therefore an object of the present invention to provide a padding method for configuring convolutional neural network layers adapted to perform MTS analysis on a bidimensional input map defined by a variables dimension and a time-steps dimension.
It is also an object of the present invention to provide a processing system operable to execute the padding method developed.
A method for operating such processing system is also provided.
DESCRIPTION OF FIGURES
Figure 1 - representation of a MTS bidimensional input map (1), defined by a variables dimension and a time-steps dimension.
Figure 2 - representation of the roll padding scheme of the invention applied to MTS analysis, wherein the reference signs represent: 1 - MTS bidimensional input map;
2 - mapped bidimensional input map onto a cylinder;
3 - convolutional kernel matrix with a horizontal size KH and a vertical size Kw.
Figure 3 - representation of the roll padding method of the invention, wherein the time-steps dimension remains with no padding (i.e. valid padding).
Figure 4 - representation of the roll padding method of the invention, wherein causal padding is applied to the time-steps dimension.
Figure 5 - representation of the application of the roll padding method of the invention to a MTS bidimensional input map.
DETAILED DESCRIPTION
The more general and advantageous configurations of the present invention are described in the Summary of the invention. Such configurations are detailed below in accordance with other advantageous and/or preferred embodiments of implementation of the present invention.
The present invention relates to a padding method, designated for the purpose of the present description as roll padding, specially conceived for configuring convolutional neural network layers adapted to perform MTS analysis on a MTS bidimensional input map (1) defined by a variables dimension and a time-steps dimension, as can be seen in figure 1.
In particular, the characteristics defining the roll padding method now developed create a significant difference from the known padding methods of the state of the art, by allowing the MTS bidimensional input map (1) to be mapped onto a cylinder (2). The configuration of CNN layers with this padding method allows a maximization of the levels of intercorrelation between input variables during the convolution operation, providing an improvement in the accuracy of standard CNN architectures in performing MTS analysis.
Figures 2 and 3 help to understand how the roll padding method is applicable to the MTS bidimensional input map (1). The roll padding method copies information from the opposite sides of the bidimensional input map (1), but only in one dimension, which is the variables dimension in the MTS bidimensional input map (1), effectively mapping the MTS bidimensional input map (1) onto a cylinder (2). The wrapped rows above the top of the MTS bidimensional input map (1) are a copy of the bottom rows of the MTS bidimensional input map (1), and vice-versa. On the right and left columns of the MTS bidimensional input map (1), no padding, i.e. valid padding, is applied. The reduction of the time-steps dimension after several convolutions is not problematic, due to the frequent presence of a high number of time-steps in this type of MTS problem. Nevertheless, for the time-steps dimension, the roll padding method can be combined with other types of padding methods, such as causal padding, as can be seen in figure 4.
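Under the convention of figure 1 (variables as rows, time-steps as columns), the roll padding described above amounts to wrap-around padding on the variables axis only; a sketch using NumPy's `wrap` mode (the function name is ours, not from the patent):

```python
import numpy as np

def roll_pad(x: np.ndarray, n_rows: int) -> np.ndarray:
    """Roll padding sketch: wrap the variables axis (rows) of an
    input map shaped (variables, time_steps); the time-steps axis
    is left unpadded, i.e. valid padding in the time dimension.
    """
    # mode="wrap": rows added above the top are copies of the bottom
    # rows, and vice-versa; the map is effectively a cylinder.
    return np.pad(x, pad_width=((n_rows, n_rows), (0, 0)), mode="wrap")

x = np.arange(8 * 72, dtype=float).reshape(8, 72)  # 8 variables x 72 steps
padded = roll_pad(x, 2)
print(padded.shape)  # (12, 72)
```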
The use of the roll padding method of the invention avoids the variables dimension reduction that would occur throughout a deep neural network with convolution layers if no padding were applied, and correctly addresses the issue of convolution kernel correlations on the borders of the MTS bidimensional input map (1). Therefore, applying the roll padding method in the variables dimension before the convolution operation enables the convolution kernel (3) to search for patterns that correlate the first time series variables with the last variables in the MTS bidimensional input map (1). This roll padding method not only provides a way to apply deep convolution-based neural network architectures to MTS problems, but also ensures an accurate performance in the analysis.
In figure 5 an example of the application of the roll padding method of the invention is sketched. In this example the MTS bidimensional input map has a size of 72 time-steps x 8 variables. By applying the roll padding method, it is possible to increase the variables dimension from 8 to 12, by copying the information contained in the two bottom rows of the map (1) into two wrapped rows added above the top row of the map (1), and vice-versa, since the kernel (3) in the next convolutional layer to be processed is of size (5,5). Note that the final output, after the convolution, has been reduced to size 8 in the variables dimension, the same as the original input.
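The sizes in this example can be checked end-to-end; the sketch below performs a valid 2-D convolution with NumPy only (the random data and the `sliding_window_view` implementation are illustrative, not the patent's own code):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 72))        # 8 variables x 72 time-steps
kernel = rng.standard_normal((5, 5))    # kernel of size (5, 5)

# Roll padding: (5 - 1) // 2 = 2 wrapped rows per side -> 12 x 72.
padded = np.pad(x, ((2, 2), (0, 0)), mode="wrap")

# Valid convolution: every (5, 5) window, dot-multiplied by the kernel.
windows = sliding_window_view(padded, kernel.shape)
out = np.einsum("ijkl,kl->ij", windows, kernel)

print(out.shape)  # (8, 68): variables preserved, time-steps reduced
```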
EMBODIMENTS
The roll padding method of the present invention comprises the steps of mapping a MTS bidimensional input map (1) onto a cylinder (2), by generating at least one wrapped row above the top row of the MTS bidimensional input map (1) and at least one wrapped row below the bottom row of the MTS bidimensional input map (1). More particularly, the wrapped rows added above the top row of the MTS bidimensional input map (1) are padded with copies of information contained in the variables dimension of bottom rows of the MTS bidimensional input map (1). The wrapped rows added below the bottom row of the MTS bidimensional input map (1) are padded with copies of information contained in the variables dimension of top rows of the MTS bidimensional input map (1).
In a particular embodiment of the padding method developed, the wrapped rows added above the top row of the MTS bidimensional input map (1) are padded with a copy of the bottom rows of the MTS bidimensional input map (1). The wrapped rows added below the bottom row of the MTS bidimensional input map (1) are padded with a copy of the top rows of the MTS bidimensional input map (1). Alternatively, in a particular embodiment of the padding method developed, the wrapped rows added above the top row of the MTS bidimensional input map (1) are a copy in the inverse order of the bottom rows of the MTS bidimensional input map (1). The wrapped rows added below the bottom row of the MTS bidimensional input map (1) are padded with a copy in the inverse order of the top rows of the MTS bidimensional input map (1).
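The "inverse order" embodiment above differs from plain wrap padding only in that the copied rows are flipped; a sketch (function name ours):

```python
import numpy as np

def roll_pad_reversed(x: np.ndarray, n_rows: int) -> np.ndarray:
    """Wrapped rows taken from the opposite side of the variables
    axis, copied in inverse order; x is shaped (variables, time_steps)."""
    top = x[-n_rows:][::-1]     # bottom rows, inverse order, placed on top
    bottom = x[:n_rows][::-1]   # top rows, inverse order, placed below
    return np.concatenate([top, x, bottom], axis=0)

x = np.arange(8 * 4, dtype=float).reshape(8, 4)
padded = roll_pad_reversed(x, 2)
print(padded.shape)  # (12, 4)
```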
In another particular embodiment of the padding method developed, the number of wrapped rows added above the top row and below the bottom row of the MTS bidimensional input map (1) is dependent on the vertical size, Kw, of a convolutional kernel (3) to be applied to the convolutional neural network layer. More particularly, and in another embodiment of the padding method, the number of wrapped rows is equal to ⌊(Kw − 1)/2⌋ (consistent with the example of figure 5, where a kernel of vertical size 5 adds two wrapped rows on each side). In another particular embodiment of the padding method developed, no columns are added on the left of the first column of the MTS bidimensional input map (1), and no columns are added on the right of the last column of the MTS bidimensional input map (1). Alternatively, and in another embodiment of the padding method developed, a padding mechanism is applied to fill a number of columns added on the left of the first column of the MTS bidimensional input map (1) and a number of columns added on the right of the last column of the MTS bidimensional input map (1), using information contained in the time-steps dimension. The padding mechanism that can be used is one of the following: Same padding, Causal padding, Constant 'n' padding, Reflection padding or Tile 2 padding.
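For the alternative embodiment in which the time-steps dimension is also padded, roll padding on the variables axis can be combined with, for example, causal padding (zeros before the first time-step only); a sketch under those assumptions (function name ours):

```python
import numpy as np

def roll_causal_pad(x: np.ndarray, k_w: int, k_h: int) -> np.ndarray:
    """Roll padding on the variables axis (vertical kernel size k_w)
    combined with causal padding on the time-steps axis (horizontal
    kernel size k_h); x is shaped (variables, time_steps)."""
    n = (k_w - 1) // 2                             # wrapped rows per side
    wrapped = np.pad(x, ((n, n), (0, 0)), mode="wrap")
    # Causal: k_h - 1 zero columns before the first time-step only.
    return np.pad(wrapped, ((0, 0), (k_h - 1, 0)), mode="constant")

x = np.arange(8 * 72, dtype=float).reshape(8, 72)
padded = roll_causal_pad(x, 5, 5)
print(padded.shape)  # (12, 76)
```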
The present invention also relates to a processing system comprising processing means programmed to execute the padding method developed, in order to configure convolutional neural network layers adapted to perform MTS analysis on a MTS bidimensional input map (1).
In one embodiment of the processing system, it further comprises processing means adapted to implement a neural network architecture configured to execute a convolution operation based on convolutional network layers pre-processed using the padding method developed. The convolution operation is executed according to the following parameters: convolutional kernel (3) size (KH, KW); and stride. The present invention also relates to a method for operating the processing system described above. The referred method comprises the steps of: i. configuring convolutional neural network layers adapted to perform MTS analysis on a MTS bidimensional input map (1), by executing the roll padding method developed; and ii. implementing a convolution operation in each layer.
In one embodiment of the method the convolution operation is performed according to the following parameters: convolutional kernel (3) size (KH, KW); and stride.
In another embodiment of the method the roll padding method is dependent on the vertical size, Kw, of the convolutional kernel.
As will be clear to one skilled in the art, the present invention should not be limited to the embodiments described herein, and a number of changes are possible which remain within the scope of the present invention.
Of course, the preferred embodiments shown above are combinable in the different possible forms, the repetition of all such combinations being herein avoided.
REFERENCES
[1] Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. WaveNet: A generative model for raw audio, 2016.

Claims

1. Padding method for convolutional neural network layers adapted to perform Multivariate Time Series analysis on a Multivariate Time Series bidimensional input map (1) defined by a variables dimension and a time-steps dimension; the padding method characterised by mapping the Multivariate Time Series bidimensional input map (1) onto a cylinder (2), through the following step: i. generating at least one wrapped row above the top row of the Multivariate Time Series bidimensional input map (1) and at least one wrapped row below the bottom row of the Multivariate Time Series bidimensional input map (1); wherein, wrapped rows added above the top row of the Multivariate Time Series bidimensional input map (1) are padded with copies of information contained in the variables dimension of bottom rows of the Multivariate Time Series bidimensional input map (1); and wrapped rows added below the bottom row of the Multivariate Time Series bidimensional input map (1) are padded with copies of information contained in the variables dimension of top rows of the Multivariate Time Series bidimensional input map (1).
2. Padding method according to claim 1, wherein the wrapped rows added above the top row of the Multivariate Time Series bidimensional input map (1) are padded with a copy of the bottom rows of the Multivariate Time Series bidimensional input map (1); and the wrapped rows added below the bottom row of the Multivariate Time Series bidimensional input map (1) are padded with a copy of the top rows of the Multivariate Time Series bidimensional input map (1).
3. Padding method according to claim 1, wherein the wrapped rows added above the top row of the Multivariate Time Series bidimensional input map (1) are a copy in the inverse order of the bottom rows of the Multivariate Time Series bidimensional input map (1); and the wrapped rows added below the bottom row of the Multivariate Time Series bidimensional input map (1) are padded with a copy in the inverse order of the top rows of the Multivariate Time Series bidimensional input map (1).
4. Padding method according to any of the previous claims, wherein the number of wrapped rows added above the top row and below the bottom row of the Multivariate Time Series bidimensional input map (1) is dependent on the vertical size, Kw, of a convolutional kernel (3) to be applied in the convolutional neural network layer.
5. Padding method according to claim 4, wherein the number of wrapped rows is equal to:

[formula reproduced only as an image in the original publication]
6. Padding method according to any of the previous claims, wherein no columns are added on the left of the first column of the Multivariate Time Series bidimensional input map (1), and no columns are added on the right of the last column of the Multivariate Time Series bidimensional input map (1).
7. Padding method according to any of the previous claims 1 to 5, wherein a padding mechanism is applied to fill a number of columns added on the left of the first column of the Multivariate Time Series bidimensional input map (1), and a number of columns added on the right of the last column of the Multivariate Time Series bidimensional input map (1), using information contained in the time-steps dimension.
8. Padding method according to claim 7, wherein the padding mechanism is one of the following: Same padding, Causal padding, Constant n' padding, Reflection padding or Tile 2 padding.
9. Processing system comprising processing means programmed to execute the padding method of any of claims 1 to 8, in order to configure convolutional neural network layers adapted to perform Multivariate Time Series analysis on a Multivariate Time Series bidimensional input map (1) defined by a variables dimension and a time-steps dimension.
10. Processing system according to claim 9, further comprising processing means adapted to implement a neural network architecture configured to execute a convolution operation based on convolutional network layers pre-processed by the padding method; said convolution operation being executed according to the following parameters: convolutional kernel (3) size (KH, KW); and stride.
11. Method for operating the processing system of claims 9 and 10, comprising the following steps: i. configuring a convolutional neural network layer adapted to perform Multivariate Time Series analysis on a Multivariate Time Series bidimensional input map (1) defined by a variables dimension and a time-steps dimension, by executing the padding method of any of claims 1 to 8; ii. applying a convolution operation to that layer.
12. Method according to claim 11, wherein the convolution operation is performed according to the following parameters: convolutional kernel (3) size (KH, KW); and stride.
13. Method according to claims 11 and 12, wherein the padding method is dependent on the vertical size, Kw, of the convolutional kernel.
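A minimal end-to-end sketch of claims 10 to 12 — cylindrical row padding followed by a valid 2-D convolution — is shown below. It assumes a wrap amount of (kernel_height − 1) // 2, since the patent's exact expression is reproduced only as an image; all names are illustrative, and the kernel's vertical size (which the claims denote Kw) is written `kh` here:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv2d_valid(x, kernel, stride=1):
    """Plain 'valid'-mode 2-D convolution (cross-correlation) with stride."""
    windows = sliding_window_view(x, kernel.shape)[::stride, ::stride]
    return np.einsum("ijkl,kl->ij", windows, kernel)

kh, kw = 3, 3                   # kernel size (vertical, horizontal)
x = np.random.rand(5, 8)        # 5 variables (rows) x 8 time steps (columns)
n_wrap = (kh - 1) // 2          # assumed wrap amount, not the patent's formula
xp = np.pad(x, ((n_wrap, n_wrap), (0, 0)), mode="wrap")
y = conv2d_valid(xp, np.ones((kh, kw)))
# The variables dimension is preserved (5 rows); the time-steps
# dimension shrinks exactly as an unpadded 'valid' convolution would.
```

Note that only the rows are padded, so the output keeps every variable's position while the time axis behaves as if no padding were applied, matching claim 6.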
PCT/IB2020/061237 2020-06-15 2020-11-27 Padding method for convolutional neural network layers adapted to perform multivariate time series analysis WO2021255514A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
PT116493 2020-06-15
PT11649320 2020-06-15

Publications (1)

Publication Number Publication Date
WO2021255514A1 true WO2021255514A1 (en) 2021-12-23

Family

ID=73856200

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2020/061237 WO2021255514A1 (en) 2020-06-15 2020-11-27 Padding method for convolutional neural network layers adapted to perform multivariate time series analysis

Country Status (1)

Country Link
WO (1) WO2021255514A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104794548A (en) 2015-05-11 2015-07-22 中国科学技术大学 ARIMA (Autoregressive integrated moving average) model load prediction based parallelization computing method
CN106447030A (en) 2016-08-30 2017-02-22 深圳市诺比邻科技有限公司 Computing resource optimization method and system of convolutional neural network
US20180218497A1 (en) * 2017-01-27 2018-08-02 Arterys Inc. Automated segmentation utilizing fully convolutional networks
US20200285963A1 (en) 2019-03-06 2020-09-10 Robert Bosch Gmbh Padding method for a convolutional neural network

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ALISHA SHARMA ET AL: "Unsupervised Learning of Depth and Ego-Motion from Cylindrical Panoramic Video", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 4 January 2019 (2019-01-04), XP081515219 *
ANONYMOUS: "What is padding in a neural network? - MachineCurve", 9 February 2020 (2020-02-09), XP055787597, Retrieved from the Internet <URL:https://www.machinecurve.com/index.php/2020/02/07/what-is-padding-in-a-neural-network/#reflection-padding> [retrieved on 20210319] *
CARLOS ESTEVES ET AL: "Polar Transformer Networks", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 6 September 2017 (2017-09-06), XP081318635 *
ERIC LALOY ET AL: "Approaching geoscientific inverse problems with adversarial vector-to-image domain transfer networks", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 6 May 2020 (2020-05-06), XP081666545 *

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20828307

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20828307

Country of ref document: EP

Kind code of ref document: A1