WO2014203039A1 - System and method for implementing reservoir computing using cellular automata - Google Patents

System and method for implementing reservoir computing using cellular automata

Info

Publication number
WO2014203039A1
WO2014203039A1 (PCT/IB2013/055042)
Authority
WO
WIPO (PCT)
Prior art keywords
cellular
automaton
reservoir
neural network
recurrent neural
Prior art date
Application number
PCT/IB2013/055042
Other languages
French (fr)
Inventor
Ozgur Yilmaz
Original Assignee
Aselsan Elektronik Sanayi Ve Ticaret Anonim Sirketi
Priority date
Filing date
Publication date
Application filed by Aselsan Elektronik Sanayi Ve Ticaret Anonim Sirketi
Priority to PCT/IB2013/055042
Publication of WO2014203039A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/04 — Architecture, e.g. interconnection topology
    • G06N3/044 — Recurrent networks, e.g. Hopfield networks



Abstract

An implementation of a reservoir computing based recurrent neural network is disclosed. A cellular automaton is used as the dynamical reservoir. The input is projected onto the initial conditions of the automaton cells, and nonlinear computation is performed on the input by applying an automaton rule for a period of time. The evolution of the automaton creates a space-time volume of the automaton state space, which is used as the output of the reservoir. This output is further processed according to reservoir computing principles to achieve the assigned task. The reservoir is trained for the specific task and dataset by optimizing the rule of the automaton.

Description

DESCRIPTION
SYSTEM AND METHOD FOR IMPLEMENTING RESERVOIR COMPUTING USING CELLULAR AUTOMATA
Field of the invention
The present invention relates to a system and method for implementing a specific class of a recurrent neural network algorithm called reservoir computing using cellular automata.
Background of the invention
Recurrent Neural Networks (RNNs) are connectionist computational models that utilize distributed representation and nonlinear dynamics of their units. Information in RNNs is propagated and processed in time through the states of their hidden units, which makes them appropriate tools for sequential information processing. There are two broad types of RNNs: stochastic energy-based models with symmetric connections, and deterministic models with directed connections.
RNNs are known to be Turing complete computational models (Siegelmann and Sontag, 1995) and universal approximators of dynamical systems (Funahashi and Nakamura, 1993). They are especially appealing for problems that require remembering long-range statistical relationships, such as speech, natural language processing, video processing, and financial data analysis. Additionally, RNNs have been shown to be very successful generative models for data completion tasks (Salakhutdinov and Hinton, 2012). Despite their immense potential as universal computers, training RNNs is hampered by the inherent difficulty of learning long-term dependencies (Hochreiter, 1991; Bengio et al., 1994; and see Hochreiter and Schmidhuber, 1997) and by convergence issues (Doya, 1992). However, recent advances suggest promising approaches for overcoming these issues, such as utilizing a reservoir of coupled oscillators (Maass et al., 2002; Jaeger, 2001).
Reservoir computing (echo state networks or liquid state machines) alleviates the problem of training a recurrent network by using a static dynamical reservoir of coupled oscillators operating at the edge of chaos. Many dynamical systems of this type are claimed to possess high computational power (Bertschinger and Natschlager, 2004; Legenstein and Maass, 2007). In this approach, due to the rich dynamics already provided by the reservoir, there is no need to train many recurrent layers, and learning takes place only at the output (or read-out) layer. This simplification enables the use of recurrent neural networks in complicated tasks that require memory for long-range (both spatially and temporally) statistical relationships.
The essential feature of the network in the reservoir is called the echo state property (Jaeger, 2001). In networks with this property, the effect of previous states and previous inputs dissipates gradually in the network without getting amplified. For specific network architectures with tanh node nonlinearities, this corresponds to the weight matrix having a spectral radius less than 1. In classical echo state networks the network is generated randomly and sparsely, subject to the spectral radius requirement on the weight matrix. Even though the spectral radius constraint ensures stability of the network to some extent, it says nothing about the short-term memory capacity of the network. Knowledge of this capacity is essential for proper design of the reservoir for the given task. The reservoir is expected to operate at the edge of chaos because dynamical systems have been shown to exhibit high computational power in this mode (Bertschinger and Natschlager, 2004; Legenstein and Maass, 2007). High memory capacity has also been shown for reservoirs at the edge of chaos. The Lyapunov exponent is a measure of edge-of-chaos operation in a dynamical system, and it can be empirically computed for a reservoir network (Legenstein and Maass, 2007). However, this computation is not trivial or automatic, and needs expert intervention (Lukosevicius and Jaeger, 2009).
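To make the two properties above concrete, the following is a minimal echo state network sketch in Python (not part of the patent): the recurrent weight matrix is rescaled to a spectral radius below 1, and only the linear read-out is trained. All names, sizes, and the toy delay task are illustrative assumptions.

```python
# A minimal echo state network sketch: reservoir weights are fixed and
# scaled to spectral radius 0.9 < 1 (echo state property for tanh units);
# learning happens only at the read-out, via ridge regression.
import numpy as np

rng = np.random.default_rng(0)
n_res, T = 200, 1000

# Sparse random reservoir, rescaled so its spectral radius is 0.9.
W = rng.normal(size=(n_res, n_res)) * (rng.random((n_res, n_res)) < 0.1)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, n_res)

u = rng.uniform(-1, 1, T)   # toy scalar input sequence
y = np.roll(u, 3)           # toy target: input delayed by 3 steps

x = np.zeros(n_res)
X = np.empty((T, n_res))
for t in range(T):          # the reservoir itself is never trained
    x = np.tanh(W @ x + W_in * u[t])
    X[t] = x

# Read-out stage: ridge regression is the only learned component.
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```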
It has been empirically shown that there is an optimum Lyapunov exponent of the reservoir network, related to the amount of memory needed for the task (Verstraeten et al., 2007). Thus, fine-tuning the connections in the reservoir toward this optimal Lyapunov exponent is crucial for achieving good performance. Many types of learning methods have been proposed for tuning the reservoir connections (see Lukosevicius and Jaeger, 2009 for a review); however, optimization over the weight matrix is prone to getting stuck at a local optimum due to high curvature in the weight space. It was thought that the problem could be eased by imposing structural constraints on the connectivity instead of random initial weights, but this was shown to be ineffective in improving the performance of the reservoir (Liebald, 2004).
The input in a complex task is generated by multiple different processes, whose dynamics and spatio-temporal correlations might be very different. One important shortcoming of the classical reservoir computing approach is its inability to deal with multiple spatio-temporal scales simultaneously. Modular reservoirs have been proposed that contain many decoupled sub-reservoirs operating at different scales; however, fine-tuning the sub-reservoirs according to the task is not trivial.
A cellular automaton is a discrete computational model consisting of a regular grid of cells, each in one of a finite number of states. The state of an individual cell evolves in time according to a fixed rule, depending on the current state and the states of its neighbors. The information presented as the initial states of a grid of cells is processed in the state transitions of the cellular automaton. Cellular automata governed by some of the rules are proven to be computationally universal, i.e. capable of simulating a Turing machine (Cook, 2004).
The rules of cellular automata are classified (Wolfram, 2002) according to their behavior: attractor, oscillating, chaotic, and edge of chaos. Some of the rules in the last class have been shown to be Turing complete (rule 110, Conway's Game of Life). The Lyapunov exponent of a cellular automaton can be computed, and it has been shown to be a good indicator of the computational power of the automaton (Baetens and De Baets, 2010). A spectrum of Lyapunov exponent values can be achieved using different cellular automata rules. Therefore, a dynamical system with a specific memory capacity (i.e. Lyapunov exponent value) can be constructed by using a corresponding cellular automaton.
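As an illustration of the rule-based evolution described above, this Python sketch evolves an elementary (1D, binary, radius-1) cellular automaton under a given Wolfram rule number, such as the Turing-complete rule 110. The function and variable names are our own, not the patent's.

```python
# Evolve an elementary cellular automaton from a Wolfram rule number.
import numpy as np

def evolve(state: np.ndarray, rule: int, steps: int) -> np.ndarray:
    """Evolve a binary 1D state for `steps` steps; return all snapshots."""
    # Bit i of the rule number gives the next state for neighborhood pattern i.
    table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    history = [state.copy()]
    for _ in range(steps):
        left, right = np.roll(state, 1), np.roll(state, -1)
        # Each (left, center, right) neighborhood indexes the rule table.
        state = table[4 * left + 2 * state + right]
        history.append(state.copy())
    return np.array(history)

init = np.zeros(64, dtype=np.uint8)
init[32] = 1                        # single seed cell
spacetime = evolve(init, rule=110, steps=32)
print(spacetime.shape)              # (33, 64): a space-time volume of states
```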
Cellular automata have previously been used for associative memory and classification tasks. Tzionas et al. (1994) proposed a cellular automaton based classification algorithm. Their algorithm clusters 2D data using cellular automata, creating boundaries between different seeds in the 2D lattice. The partitioned 2D space creates a geometrical structure resembling a Voronoi diagram. Different data points belonging to the same class fall into the same island of the Voronoi structure, hence are attracted to the same basin. The clustering property of cellular automata is exploited in a family of approaches, using rules that form attractors in lattice space (Chady and Poli, 1997; Ganguly et al., 2003; Ganguly, 2004). The attractor dynamics of cellular automata resemble Hopfield network architectures (Hopfield, 1982). These approaches have two major problems: low dimensionality and low computational power. The first problem is due to the need for representing data in 2D space and the need for non-trivial algorithms in higher dimensions. The second problem is due to limiting the computational representation of cellular automata activity to attractor dynamics and clustering. The time evolution of cellular automata activity provides a very rich computational representation, especially for edge-of-chaos dynamics, but this is not exploited if the presented data are classified only according to the converged basin in 2D space. Another approach is the cellular neural network (Chua and Yang, 1988a and 1988b; Austin et al., 1997), which emulates memory formation in a neural network but is incapable of the chaotic behavior essential for high computational power.
Cellular neural network architectures are patented in EP0649099 B1 and EP0797165 A1. A cellular automaton is used for audio compression in patent US6567781 B1. Patent WO1997012330 A1 defines a method for encoding and decoding data using cellular automata. There are also patents that implement reservoir computing in software for specific purposes: patents US20130060772 A1 and US8301628 B2 suggest using an echo state network for ontology generation, and patent EP2389669 A1 proposes using a reservoir computing based neural network for geodatabase information processing.
Objects of the invention
The object of the invention is to provide a method for implementing a reservoir computing based recurrent neural network using cellular automata. Cellular automata replace the echo state neural network in classical reservoir computing. A cellular automata rule search is executed for reservoir training, instead of tuning of echo state network connections.
Detailed description of the invention
In our reservoir computing method, data are passed to a cellular automaton instead of an echo state network, and the nonlinear dynamics of the cellular automaton provide the necessary projection of the input data onto a nonlinear space. In this configuration, instead of non-trivial fine-tuning of network connections, a search is performed in the rule space of cellular automata for the rule with the optimal Lyapunov exponent for the task. Additionally, utilization of 'edge of chaos' automaton rules ensures Turing complete computation in the reservoir, which is very hard to achieve using classical reservoir computing approaches. Similar to the usage of modular reservoirs, a hybrid automaton (Sipper et al., 1997) or a hierarchical automaton (Sikdar et al., 2001) is used to handle different spatio-temporal scales in the input. A non-deterministic rule is used for the automaton to introduce randomness into the reservoir.
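Putting the pieces together, the sketch below shows one hedged end-to-end reading of this method (stage numbers 101-106 refer to Figure 1, described in the next paragraph), assuming a binary 1D elementary automaton. The thresholding encoder and the logistic-regression read-out at the task stage are illustrative choices, not mandated by the patent.

```python
# End-to-end CA reservoir sketch: encode -> evolve -> vectorize -> classify.
import numpy as np
from sklearn.linear_model import LogisticRegression

def evolve(state, rule, steps):
    """Elementary CA evolution (same helper as in the earlier sketch)."""
    table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    out = [state.copy()]
    for _ in range(steps):
        state = table[4 * np.roll(state, 1) + 2 * state + np.roll(state, -1)]
        out.append(state.copy())
    return np.array(out)

def ca_reservoir(x, rule=110, T=16):
    init = (x > np.median(x)).astype(np.uint8)   # 102: encode input as initial states
    return evolve(init, rule, T).ravel()          # 103-104: evolve, record, vectorize

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = (X[:, :32].sum(1) > X[:, 32:].sum(1)).astype(int)     # toy labels

features = np.array([ca_reservoir(x) for x in X])
clf = LogisticRegression(max_iter=1000).fit(features, y)  # 105: task processing
print("train accuracy:", clf.score(features, y))          # 106: system output
```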
Algorithmic flow of our method is shown in Figure 1. The reservoir computing system receives the input data (101). The encoding stage (102) translates the input into the initial states of a multidimensional cellular automaton. In the reservoir computing stage (103), the cellular automaton rules are executed for a fixed period of time (T) to evolve the initial states. The evolution of the cellular automaton is recorded such that at each time step a snapshot of the states in the cellular automaton is saved into a data structure. Then, in the decoding stage (104), the recorded cellular automaton activity is processed to output a cellular automaton representation of the given input. This output (data vector, 303) is a projection of the input onto a nonlinear cellular automata state space. The decoded cellular automaton output is then used for further processing (105) according to the task (e.g. classification, compression, clustering etc.). The system output (106) comes from this stage, and it is communicated to the outside world.
The sub-steps of the encoding stage (102) are given in Figure 2. The input data instance is first pre-processed (201); this can be a sequence of operations (filter, whiten, reduce dimensionality, transformations such as Fourier or wavelet) that modifies and transforms the data. At the end of pre-processing the data have an inherent number of dimensions, K. In the mapping stage (202), these dimensions are separately mapped onto the cellular automata cells. K can be much larger than the number of cellular automata dimensions (P), in which case there are two alternatives proposed for the mapping algorithm:
1. The mapping algorithm exploits spatial and temporal partitioning. An input dimension can be encoded in a specific spatial region of the cellular automata at a specific epoch (time). In Figure 3, a representative 2D cellular automaton is depicted to illustrate the mapping sub-step. Suppose we have a K-dimensional pre-processed data vector $X$, and $X_k$ is the k-th dimension of the vector. $C_e^m$ denotes the initial states of the cells in the m-th spatial region (partition) of the cellular automata at epoch $e$. This is depicted in Figure 3 as arrow 303. Spatial regions can possibly live in any subspace of the ambient space. Then, in the mapping stage, a function $f$ maps the k-th component of the input (which can be integer or binary) into the initial states of cells:
$C_e^m = f(X_k)$ (305)
Here, $f$ can simply be binary coding for a binary cellular automaton, or any complex quantization function. The same specific spatial region, m, can represent another dimension of the input at another time epoch, e+1, as shown by 304 in Figure 3. Therefore, the input is translated into a spatial-temporal code in the initial states (203) of the cellular automaton cells. As the dimensionality of the input data (201) increases, the complexity of the mapping algorithm and the need for dimensionality-reducing preprocessing steps (i.e. principal component analysis) also increase. Each epoch (301 and 302) of the cellular automaton is considered an independent initialization and evolved separately in the reservoir computing stage (103), but with the same cellular automaton rule. Figure 4 gives the time evolution of a representative 2D cellular automaton, initialized at epoch e. 401 and 402 are the states of cells at times t and t+1 of the cellular automaton evolution, and $c_i^t$ (403) is the value at cell i at time t. The evolution of the cellular automaton is saved at each time instant into a separate data structure for each epoch:
$C_e = [c_e^{1}, c_e^{2}, \ldots, c_e^{T}]$
The union of each epoch automaton's evolution gives the data structure output of the cellular automaton for a given input:
$C = [C_1, C_2, \ldots, C_E]$
In this configuration, the feature space of the pre-processed input is partitioned into subspaces and each subspace is processed separately. This separation limits the feature interactions in the cellular automaton processing, but decorrelation and whitening pre-processing steps will reduce the detrimental effect of this lack of interactions.
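The sketch below shows one way alternative 1 just described could be realized, assuming a 1D binary automaton for simplicity: each input dimension is quantized by an illustrative function f into one spatial region at one epoch. Region width, epoch count, and f itself are assumptions for illustration.

```python
# Spatio-temporal mapping sketch (202, 303-305): dimension k goes to
# spatial region m of epoch e; each epoch is later evolved separately.
import numpy as np

def f(value, width):
    """Illustrative quantizer f: binary-code a small non-negative integer."""
    return np.array([(int(value) >> i) & 1 for i in range(width)], dtype=np.uint8)

def map_to_epochs(X, n_epochs, n_cells, region):
    """Return initial CA states, one grid row per epoch."""
    epochs = np.zeros((n_epochs, n_cells), dtype=np.uint8)
    per_epoch = n_cells // region              # regions available per epoch
    for k, x_k in enumerate(X):                # dimension k -> (epoch e, region m)
        e, m = divmod(k, per_epoch)
        epochs[e, m * region:(m + 1) * region] = f(x_k, region)
    return epochs

states = map_to_epochs(X=np.arange(12), n_epochs=3, n_cells=32, region=8)
print(states.shape)   # (3, 32): three epochs, each evolved with the same rule
```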
2. In a more holistic approach, each cell of a real-valued cellular automaton receives weighted input from every feature dimension (or a subset of them) of the pre-processed input (Figure 5). A representative real-valued 2D cellular automaton is shown (501). Each cell (504) receives a weighted (503) sum of initial excitation from the pre-processed input vector (502). The initial value of a cell i (504) is given by:
$c_i^{0} = \sum_{k=1}^{K} w_{ik} X_k$
where $w_{ik}$ is the weight (503) connecting input dimension k to cell i.
Alternatively, instead of receiving input from the whole set of feature dimensions, a single cell can receive input from a subset of feature dimensions. In that case, the weight vector for a cell is sparse and a subspace of the input is processed by specific cells. The weights (503) can be set randomly as in echo state networks.
The evolution of the cellular automaton is appended at each time instant to get the data structure output of the cellular automaton for a given input:
$C = [c^{1}, c^{2}, \ldots, c^{T}]$
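A minimal sketch of this weighted initialization (502-504), assuming a flattened grid of real-valued cells; the sparsity level and shapes are illustrative.

```python
# Holistic mapping sketch: each real-valued cell is initialized with a
# weighted sum of the pre-processed input, with random (optionally sparse)
# weights, as in echo state networks.
import numpy as np

rng = np.random.default_rng(0)
K, n_cells, sparsity = 16, 100, 0.8

X = rng.normal(size=K)                       # pre-processed input vector (502)
W = rng.uniform(-1, 1, (n_cells, K))         # random weights (503)
W *= rng.random((n_cells, K)) > sparsity     # sparse variant: each cell sees a feature subset

init_states = W @ X                          # initial value of each cell i (504)
print(init_states.shape)                     # (100,)
```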
In the decoding stage (Figure 6, 104), the multidimensional data structure C (404, 405) is first post-processed (601). This can be filtering (e.g. low-pass, high-pass, denoising), normalization, whitening operations, or a combination of these. Then, dimensionality reduction methods (e.g. pooling, linear or kernel PCA, manifold learning) are applied to the post-processed data structure values (602). This step can be skipped altogether if the subsequent stage (105) is able to handle the dimensionality of the data. The values of the cells in the data structure, or the output of the dimensionality reduction method, are vectorized and assigned as the output (603) of the decoding stage, hence of the reservoir.
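A hedged sketch of this decoding stage follows, with per-feature normalization standing in for the post-processing (601) and PCA for the dimensionality reduction (602); the component count is an arbitrary assumption.

```python
# Decoding sketch (104, Figure 6): post-process the recorded space-time
# volumes, reduce dimensionality, and vectorize as the reservoir output.
import numpy as np
from sklearn.decomposition import PCA

def decode(spacetime_volumes, n_components=32):
    # 601: simple post-processing, e.g. per-feature normalization.
    V = np.array([v.ravel() for v in spacetime_volumes], dtype=float)
    V = (V - V.mean(0)) / (V.std(0) + 1e-8)
    # 602: dimensionality reduction (skippable if stage 105 can cope).
    reduced = PCA(n_components=n_components).fit_transform(V)
    return reduced                            # 603: one vector per input

volumes = [np.random.randint(0, 2, (17, 64)) for _ in range(100)]
print(decode(volumes).shape)                  # (100, 32)
```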
The rule of the cellular automaton determines the computational power of the reservoir. A totally random approach, as in the case of classical echo state networks, can be adopted, such that a hybrid or a hierarchical cellular automaton with random rules is used. However, this network will be optimal neither for the data nor for the task assigned to the reservoir. An optimization method (Figure 7, 700) is proposed for estimating the best cellular automaton rule for a given dataset and task. In the first sub-step, the rule of the automaton is initialized (701), based on whether it is an elementary, hybrid, hierarchical or any other type of automaton. The performance (i.e. classification accuracy, reconstruction error etc.) of the automaton is evaluated (702) by using the cellular automaton reservoir for nonlinear projection, processing (105) the reservoir output (603), and computing an error metric on the overall system output (106). If a stopping criterion (703) for the optimization procedure is not met, the rule is modified (704); otherwise the optimization stops and outputs the final automaton rule (705). The modification sub-step (704) can be a search (any search algorithm) in the rule space of an elementary cellular automaton. Alternatively, evolutionary/genetic algorithms (Ganguly, 2004) can be used to evolve the rules of a hybrid automaton. Complexity metrics of the automaton rules, such as the Lyapunov exponent, Z parameter, G-density, in-degree length (Wuensche, 1999), entropy, and mutual information (Ganguly, 2004), can be computed to guide the rule search sub-step (704).
In Figure 8, the sub-steps of 704 are shown. It accepts the current automaton rule (801) and generates candidate rules (802) according to the current rule. This candidate generation can follow genetic algorithm principles or draw from a set of rules with pre-computed complexity metrics (Lyapunov exponent, Z parameter etc.). Next, the available complexity metrics are computed (803) and one of the rules is selected (804) based on the complexity of the current rule and of the candidate rules. The rule selection (804) can be memoryless, or with memory of the previous rule selections. The memoryless approach selects a rule with a different degree of complexity, i.e., higher or lower complexity than the current rule. Selection with memory draws a decision based on previous rule selections as well as the performances of those selected rules.
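The loop below sketches this optimization procedure (Figures 7-8) under simplifying assumptions: performance evaluation is abstracted into a callable, candidate generation is a plain random sample of the 256 elementary rules, and selection keeps the best-performing candidate. None of these choices are prescribed by the patent.

```python
# Rule-search sketch (700): evaluate, generate candidates, select, repeat.
import random

def optimize_rule(evaluate, candidates_of, init_rule, max_iters=50, target=0.95):
    rule, best = init_rule, evaluate(init_rule)       # 701, 702
    for _ in range(max_iters):
        if best >= target:                            # 703: stopping criterion
            break
        for cand in candidates_of(rule):              # 704/802: candidate rules
            score = evaluate(cand)                    # 803/804 stand-in: score candidates
            if score > best:
                rule, best = cand, score              # 805: replace current rule
    return rule, best                                 # 705: final automaton rule

# Toy usage: search the 256 elementary rules with a random candidate generator.
toy_scores = {r: random.Random(r).random() for r in range(256)}
rule, score = optimize_rule(
    evaluate=lambda r: toy_scores[r],
    candidates_of=lambda r: random.sample(range(256), 8),
    init_rule=30,
)
print(rule, round(score, 3))
```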
The proposed system utilizes a more structured reservoir, the cellular automaton, while exploiting the simplicity and power of reservoir computing principles (Lukosevicius and Jaeger, 2009; Maass, 2010). Cellular automata are easier to analyze and offer guarantees of Turing completeness. For training the reservoir, a search in the rule space of automata is a more stable optimization procedure than gradient-descent weight tuning in echo state networks. Additionally, cellular automata are extremely easy to implement in parallel hardware such as FPGAs, GPUs or VLSI.
References:
Siegelmann, H., and Sontag, E. (1995). On the computational power of neural nets. J. Comput. Systems Sci., 50, 132-150.
Funahashi, K., and Nakamura, Y. (1993). Approximation of dynamical systems by continuous time recurrent neural networks. Neural Networks, 6, 801-806.
Salakhutdinov, R. and Hinton, G. E. (2012). An efficient learning procedure for deep Boltzmann machines. Neural Computation, 24, 1967-2006.
Hochreiter, S. (1991). Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, T.U. Munich.
Bengio, Y., Simard, P., and Frasconi, P. (1994). Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2).
Hochreiter S., Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9, 1735-1780.
Doya, K. (1992). Bifurcations in the learning of recurrent neural networks. Proceedings of IEEE International Symposium on Circuits and Systems, 6, 2777-2780.
Maass, W., Natschlager, T., and Markram, H. (2002). Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Computation, 14(11), 2531-2560.
Jaeger, H. (2001). The echo state approach to analysing and training recurrent neural networks. Technical Report GMD Report 148, German National Research Center for Information Technology.
Lukosevicius, M., and Jaeger, H. (2009). Reservoir computing approaches to recurrent neural network training. Computer Science Review, 3(3), 127-149.
Maass, W. (2010). Liquid state machines: motivation, theory, and applications. In Computability and Context: Computation and Logic in the Real World, B. Cooper and A. Sorbi, Eds. Imperial College Press.
Nils Bertschinger and Thomas Natschlager (2004). Real-time computation at the edge of chaos in recurrent neural networks. Neural Computation, 16(7).
Robert A. Legenstein and Wolfgang Maass (2007). Edge of chaos and prediction of computational performance for neural circuit models. Neural Networks, 20(3), 323-334.
Chrisantha Fernando and Sampsa Sojakka (2003). Pattern recognition in a bucket. In Proceedings of the 7th European Conference on Advances in Artificial Life (ECAL 2003), volume 2801 of LNCS, pages 588-597.
Adamatzky A. (2001). Computing in nonlinear media: make waves, study collisions. Lecture Notes in Artificial Intelligence. 2159, 1-11.
Adamatzky A. (2002). Experimental logical gates in a reaction-diffusion medium: The XOR gate and beyond. Physical Review E. 66, 046112.
Walmsley, I. (2001). Computing with interference: All-optical single-query 50- element database search. Conference on Lasers and Electro-Optics/Quantum Electronics and Laser Science.
Duport, F., Schneider, B., Smerieri, A., Haelterman, M., Massar, S. (2012). All optical reservoir computing. Opt. Express, 20, 22783.
Adamatzky (2004). Computing with Waves in Chemical Media: Massively Parallel Reaction-Diffusion Processors
Hinton, G.E., Osindero, S., and Teh, Y. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18, 1527-1554.
Benjamin Liebald (2004). Exploration of effects of different network topologies on the ESN signal crosscorrelation matrix spectrum. Bachelor's thesis, Jacobs University Bremen.
Yanbo Xue, Le Yang, and Simon Haykin (2007). Decoupled echo state networks with lateral inhibition. Neural Networks, 20(3), 365-376.
Wolfgang Maass, Robert A. Legenstein, and Nils Bertschinger (2004). Methods for estimating the computational power and generalization capability of neural microcircuits. In Advances in Neural Information Processing Systems.
David Verstraeten, Benjamin Schrauwen, Michiel D'Haene, and Dirk Stroobandt (2007). An experimental unification of reservoir computing methods. Neural Networks, 20(3), 391-403.
Cook, Matthew (2004). Universality in Elementary Cellular Automata. Complex Systems, 15, 1-40.
Zenil, Hector (2010). Compression-based investigation of the dynamical properties of cellular automata and other systems. Complex Systems, 19(1).
Wolfram, Stephen (2002). A New Kind of Science. Wolfram Media.
M. Chady and R. Poli (1997). Evolution of Cellular-Automaton-based Associative Memories. Technical Report no. CSRP-97-15.
P. Tzionas, P. Tsalides, and A. Thanailakis (1994). A New Cellular Automaton-based Nearest Neighbor Pattern Classifier and its VLSI Implementation. IEEE Trans. on VLSI Systems, 2(3), 343-353.
J. J. Hopfield (1982). Neural Networks and Physical Systems with Emergent Collective Computational Abilities. Proceedings of the National Academy of Sciences, 79, 2554-2558.
N. Ganguly, B.K. Sikdar, A. Deutsch, G. Canright, and P.P. Chaudhuri (2003). A survey on cellular automata. Technical Report 9, Centre for High Performance Computing, Dresden University of Technology.
Niloy Ganguly (2004). Cellular Automata Evolution: Theory and Applications in Pattern Recognition and Classification. Ph.D. thesis, CST Dept., BECDU, India.
L. O. Chua and L. Yang (1988). Cellular Neural Networks: Application. IEEE Trans. on Circuits and Systems, 35(10), 1273-1290.
L. O. Chua and L. Yang (1988). Cellular Neural Networks: Theory. IEEE Trans. on Circuits and Systems, 35(10), 1257-1272.
J. Austin, J. Kennedy, S. Buckle, A. Moulds, and R. Pack (1997). The Cellular Neural Network Associative Processor, C-NNAP. IEEE Monograph on Associative Computers.
J. M. Baetens and B. De Baets (2010). Phenomenological Study of Irregular Cellular Automata Based on Lyapunov Exponents and Jacobians, Chaos, 20, 033112.
J. M. Baetens and B. De Baets (2011). On the Topological Sensitivity of Cellular Automata. Chaos.
F. Bagnoli, R. Rechtman, and S. Ruffo (1992). Damage Spreading and Lyapunov Exponents in Cellular Automata. Physics Letters A, 172, 34-38.
Sipper, M., Tomassini, M., and Capcarrere, M.S. (1997). Evolving asynchronous and scalable non-uniform cellular automata. In Smith, G.D., Steele, N.C., Albrecht, R.F., eds.: Proceedings of the International Conference on Artificial Neural Networks and Genetic Algorithms (ICANNGA97).
B. K. Sikdar, P. Majumder, M. Mukherjee, N. Ganguly, D. K. Das, and P. Pal Chaudhuri (2001). Hierarchical Cellular Automata as An On-Chip Test Pattern Generator. In Proc. Intl. Conf. on VLSI Design, India, 403-408.
A. Wuensche (1999). Classifying Cellular Automata Automatically. Complexity, 4(3), 47-66.
De Garis, H., and Hemmi, H. (1998). European Patent No. EP0649099 B1.
Manganaro, G., Lavorgna, M., Lo, P.M., and Fortuna, L. (1997). European Patent No. EP0797165 A1.
Olurinde, E.L. (2003). US Patent No. US6567781 B1.
Olurinde, E.L. (1997). WIPO Patent No. WO1997012330 A1.
Clark, D., Pieslak, B., Gipson, B., and Walton, Z. (2013). US Patent No. US20130060772 A1.
Clark, D., Gipson, B., Pieslak, B., and Walton, Z. (2012). US Patent No. US8301628 B2.
Bellens, R., and Gautama, S. (2011). European Patent No. EP2389669 A1.

Claims

1. A method (100) for implementing a reservoir computing based recurrent neural network algorithm using cellular automata, that comprises the steps of:
- receiving input using a memory, communication or computing device (101)
- encoding the input to translate it into the initial states of a multidimensional cellular automaton (102)
- computing a nonlinear representation of the encoded input via executing the cellular automaton rules for a fixed period of time (T), recording the evolution of the cellular automaton by saving the state configuration of the automata at each time step, and assigning this data structure as the reservoir representation of the processed input (103)
- decoding the cellular automaton representation via filtering, pooling, dimensionality reduction and transformation techniques to get the reservoir output (104)
- processing the reservoir output according to reservoir computing principles for achieving the assigned task (105)
- transmitting the output of the whole system (106).
2. A method (100) for implementing a reservoir computing based recurrent neural network algorithm using cellular automata characterized in that the step 102 further comprises the steps of:
- pre-processing the received input to denoise, whiten, normalize and generally transform the data in order to make it more appropriate for the upcoming stages of processing (201)
- mapping the pre-processed input dimensions onto the cellular automata cells (202)
- initializing the cellular automata cells with the values computed from the mapping function and the input values (203).
3. A method (100) for implementing a reservoir computing based recurrent neural network algorithm using cellular automata characterized in that the sub-step 202 of step 102 comprises the steps of:
- employing multiple epochs of cellular automaton initial conditions for temporally partitioning the cellular automata processing (301, 302)
- spatially partitioning the cellular automata cells into non- overlapping regions (303, 304)
- mapping an input dimension, onto a specific spatial region of the cellular automata at a specific epoch (305).
4. A method (100) for implementing a reservoir computing based recurrent neural network algorithm using cellular automata characterized in that the sub-step 202 of step 102 comprises the steps of:
- initializing a weight vector for each cell in the real-valued cellular automaton, with the same size as the input, either having all nonzero values or being sparse (503)
- linearly projecting the input onto the initial state of real-valued cells using the weight vectors (505).
5. A method (100) for implementing a reservoir computing based recurrent neural network algorithm using cellular automata characterized in that the step 103 further comprises the steps of:
- evolving the cellular automaton for a fixed period of time (401, 402)
- saving the evolution of the cellular automaton by taking a snapshot of cell states at each time instant and appending the snapshots to form a data structure (404, 405).
6. A method (100) for implementing a reservoir computing based recurrent neural network algorithm using cellular automata characterized in that the step 104 further comprises the steps of:
- post-processing the data structure output of the reservoir by filtering, normalization, whitening operations or a combination of these (601)
- reducing the dimensionality of the post-processed data structure by known methods (602)
- vectorizing the reduced data structure and sending the data vector output of the reservoir for further processing according to the task (603).
7. A method (100) for implementing a reservoir computing based recurrent neural network algorithm using cellular automata characterized in that, the training/optimization of the cellular automaton reservoir (700) for a given dataset and task comprises the steps of:
- initializing the automaton rule, either randomly or by manual selection (701)
- evaluating the predefined performance metrics of the cellular automata based reservoir computing algorithm for a given dataset and task (702)
- checking whether the stopping criterion is met (703)
- stopping the training if the criterion is met, and outputting the current automaton rule (705)
- modifying the automaton rule to improve task performance by using search algorithms based on complexity measures of the cellular automaton rule, or genetic algorithms (704).
8. A method (100) for implementing a reservoir computing based recurrent neural network algorithm using cellular automata characterized in that, step 704 of training/optimization procedure (700) of the cellular automaton reservoir for a given dataset and task further comprises the steps of:
- accepting the current cellular automaton rule (801)
- generating a set of candidate rules, using either a search algorithm on a set of rules based on complexity measures of the cellular automata, or a genetic algorithm (802)
- computing the complexity measures of the candidate cellular automata, which are a measure of the computational capacity of the reservoir (803)
- selecting a rule, based on the computational capacities of the current rule, the candidate rules, and previous rule selection performance results (804)
- replacing the current rule with the selected rule (805)
9. A system and method for building a reservoir computing based recurrent neural network, replacing echo state network with an elementary cellular automaton.
10. A system and method for building a reservoir computing based recurrent neural network, replacing echo state network with a Turing complete cellular automaton.
11. A system and method for building a reservoir computing based recurrent neural network, replacing echo state network with a hybrid cellular automaton.
12. A system and method for building a reservoir computing based recurrent neural network, replacing echo state network with a hierarchical cellular automaton.
13. A system and method for building a reservoir computing based recurrent neural network, replacing echo state network with a non-deterministic cellular automaton.
14. A system and method for mapping the dimensions of input data onto a multidimensional cellular automaton, by first defining epochs of computation in the automaton and by assigning each dimension to the initial states of cells residing in a specific spatial region in the automaton at a specific epoch of computation.
15. A system and method for mapping the dimensions of input data onto a multidimensional real-valued cellular automaton, by linearly projecting a weighted combination of feature subspaces onto specific automaton cells.
16. A system and method for projecting the input data onto a nonlinear cellular automata state space by evolving the cellular automata representation of the input data for a period of time and concatenating the snapshots of each time step into a space-time volume of automaton state space.
17. A system and method for building a reservoir computing based recurrent neural network replacing echo state network with a cellular automaton, tailored for a specific dataset and task through training procedures defined as searching for the optimum automaton rule.
18. A system and method for building a reservoir computing based recurrent neural network replacing echo state network with a cellular automaton, in which cellular automaton is implemented in FPGA, GPU or VLSI.
PCT/IB2013/055042 2013-06-19 2013-06-19 System and method for implementing reservoir computing using cellular automata WO2014203039A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/IB2013/055042 WO2014203039A1 (en) 2013-06-19 2013-06-19 System and method for implementing reservoir computing using cellular automata

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IB2013/055042 WO2014203039A1 (en) 2013-06-19 2013-06-19 System and method for implementing reservoir computing using cellular automata

Publications (1)

Publication Number Publication Date
WO2014203039A1 true WO2014203039A1 (en) 2014-12-24

Family

ID=49080922

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2013/055042 WO2014203039A1 (en) 2013-06-19 2013-06-19 System and method for implementing reservoir computing using cellular automata

Country Status (1)

Country Link
WO (1) WO2014203039A1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106375136A (en) * 2016-11-17 2017-02-01 北京智芯微电子科技有限公司 Optical access network service flow sensing method and optical access network service flow sensing device
CN107329733A (en) * 2016-04-29 2017-11-07 北京中科寒武纪科技有限公司 Apparatus and method for performing pooling computings
WO2018197689A1 (en) * 2017-04-28 2018-11-01 Another Brain Automated method and associated device for the non-volatile storage, retrieval and management of message/label associations and vice versa, with maximum likelihood
CN109241489A (en) * 2018-01-11 2019-01-18 西安科技大学 A kind of new method of mining settlement dynamic
WO2020005353A1 (en) * 2018-06-27 2020-01-02 Ohio State Innovation Foundation Rapid time-series prediction with hardware-based reservoir computer
CN112116160A (en) * 2020-09-25 2020-12-22 国网新疆电力有限公司电力科学研究院 Important power transmission channel disaster monitoring method based on optimized neural network improved cellular automaton
CN112434957A (en) * 2020-11-27 2021-03-02 广东电网有限责任公司肇庆供电局 Cellular automaton-based distribution network line inspection area grid division method
CN114202032A (en) * 2021-12-15 2022-03-18 中国科学院深圳先进技术研究院 Gait detection method and device based on reservoir model and computer storage medium
CN116151487A (en) * 2023-04-19 2023-05-23 中国石油大学(华东) Physical knowledge and data hybrid-driven prediction algorithm for predicting sea surface oil spill track
CN116634344A (en) * 2023-07-24 2023-08-22 云天智能信息(深圳)有限公司 Intelligent remote monitoring method, system and storage medium based on hearing aid equipment
CN117077987A (en) * 2023-10-16 2023-11-17 湖南省通晓信息科技有限公司 Environmental sanitation management method based on cellular automaton and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1997012330A1 (en) 1995-09-29 1997-04-03 Innovative Computing Group, Inc. Method and apparatus for information processing using cellular automata transform
EP0797165A1 (en) 1996-03-21 1997-09-24 STMicroelectronics S.r.l. Cellular neural network to obtain the so-called unfolded Chua's circuit
EP0649099B1 (en) 1993-10-06 1998-06-03 Atr Human Information Processing Research Laboratories Neural cellular automaton and optimizer employing the same
US6567781B1 (en) 1999-12-30 2003-05-20 Quikcat.Com, Inc. Method and apparatus for compressing audio data using a dynamical system having a multi-state dynamical rule set and associated transform basis function
US20100179935A1 (en) * 2009-01-13 2010-07-15 Gm Global Technology Operations, Inc. Spiking dynamical neural network for parallel prediction of multiple temporal events
EP2389669A1 (en) 2009-01-21 2011-11-30 Universiteit Gent Geodatabase information processing
US8301628B2 (en) 2005-01-12 2012-10-30 Metier, Ltd. Predictive analytic method and apparatus

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0649099B1 (en) 1993-10-06 1998-06-03 Atr Human Information Processing Research Laboratories Neural cellular automaton and optimizer employing the same
WO1997012330A1 (en) 1995-09-29 1997-04-03 Innovative Computing Group, Inc. Method and apparatus for information processing using cellular automata transform
EP0797165A1 (en) 1996-03-21 1997-09-24 STMicroelectronics S.r.l. Cellular neural network to obtain the so-called unfolded Chua's circuit
US6567781B1 (en) 1999-12-30 2003-05-20 Quikcat.Com, Inc. Method and apparatus for compressing audio data using a dynamical system having a multi-state dynamical rule set and associated transform basis function
US8301628B2 (en) 2005-01-12 2012-10-30 Metier, Ltd. Predictive analytic method and apparatus
US20130060772A1 (en) 2005-01-12 2013-03-07 Metier, Ltd. Predictive analytic method and apparatus
US20100179935A1 (en) * 2009-01-13 2010-07-15 Gm Global Technology Operations, Inc. Spiking dynamical neural network for parallel prediction of multiple temporal events
EP2389669A1 (en) 2009-01-21 2011-11-30 Universiteit Gent Geodatabase information processing

Non-Patent Citations (47)

* Cited by examiner, † Cited by third party
Title
A. WUENSCHE, CLASSIFYING CELLULAR AUTOMATA AUTOMATICALLY. COMPLEXITY, vol. 4, no. 3, 1999, pages 47 - 66
ADAMATZKY A.: "Computing in nonlinear media: make waves, study collisions", LECTURE NOTES IN ARTIFICIAL INTELLIGENCE, vol. 2159, 2001, pages 1 - 11
ADAMATZKY A.: "Experimental logical gates in a reaction-diffusion medium: The XOR gate and beyond", PHYSICAL REVIEW E, vol. 66, 2002, pages 046112
ADAMATZKY, COMPUTING WITH WAVES IN CHEMICAL MEDIA: MASSIVELY PARALLEL REACTION-DIFFUSION PROCESSORS, 2004
B. K. SIKDAR; P. MAJUMDER; M. MUKHERJEE; N. GANGULY; D. K. DAS; P. PAL CHAUDHURI: "Hierarchical Cellular Automata as An On-Chip Test Pattern Generator", PROC. INTL. CONF. ON VLSI DESIGN, INDIA, 2001, pages 403 - 408
BENGIO, Y.; SIMARD, P.; FRASCONI, P.: "Learning long-term dependencies with gradient descent is difficult", IEEE T. NEURAL NETWORKS, vol. 5, no. 2, 1994
CHRISANTHA FERNANDO; SAMPSA SOJAKKA: "Proceedings of the 7th European Conference on Advances in Artificial Life", vol. 2801, 2003, LNCS, article "Pattern recognition in a bucket", pages: 588 - 597
COOK; MATTHEW: "Universality in Elementary Cellular Automata", COMPLEX SYSTEMS, vol. 15, 2004, pages 1 - 40
DAVID VERSTRAETEN; BENJAMIN SCHRAUWEN; MICHIEL D'HAENE; DIRK STROOBANDT: "An experimental unification of reservoir computing methods", NEURAL NETWORKS, vol. 20, no. 3, 2007, pages 391 - 403
DOYA, K.: "Bifurcations in the learning of recurrent neural networks", PROCEEDINGS OF IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, vol. 6, 1992, pages 2777 - 2780
DUPORT, F.; SCHNEIDER, B.; SMERIERI, A.; HAELTERMAN, M.; MASSAR, S.: "All optical reservoir computing", OPT. EXPRESS, vol. 20, 2012, pages 22783
F. BAGNOLI; R. RECHTMAN; S. RUFFO: "Damage Spreading and Lyapunov Exponents in Cellular Automata", PHYSICS LETTERS A, vol. 172, 1992, pages 34 - 38
FRANÇOIS RHÉAUME ET AL: "Multistate combination approaches for liquid state machine in supervised spatiotemporal pattern classification", NEUROCOMPUTING, ELSEVIER SCIENCE PUBLISHERS, AMSTERDAM, NL, vol. 74, no. 17, 23 March 2011 (2011-03-23), pages 2842 - 2851, XP028292597, ISSN: 0925-2312, [retrieved on 20110525], DOI: 10.1016/J.NEUCOM.2011.03.033 *
FUNAHASHI, K.; NAKAMURA, Y.: "Approximation of dynamical systems by continuous time recurrent neural networks", NEURAL NETWORKS, vol. 6, 1993, pages 801 - 806
HINTON, G.E.; OSINDERO, S.; TEH, Y.: "A fast learning algorithm for deep belief nets", NEURAL COMPUTATION, vol. 18, 2006, pages 1527 - 1554
HISHIKI T ET AL: "A Novel Rotate-and-Fire Digital Spiking Neuron and its Neuron-Like Bifurcations and Responses", IEEE TRANSACTIONS ON NEURAL NETWORKS, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 22, no. 5, 1 May 2011 (2011-05-01), pages 752 - 767, XP011373911, ISSN: 1045-9227, DOI: 10.1109/TNN.2011.2116802 *
HOCHREITER S.; SCHMIDHUBER, J.: "Long short-term memory", NEURAL COMPUTATION, vol. 9, 1997, pages 1735 - 1780
HOCHREITER, S.: "Untersuchungen zu dynamischen neuronalen Netzen", DIPLOMA THESIS, 1991
J. AUSTIN; J. KENNEDY; S. BUCKLE; A. MOULDS; R. PACK: "The Cellular Neural Network Associative Processor", IEEE MONOGRAPH ON ASSOCIATIVE COMPUTERS, 1997
J. J. HOPFIELD: "Neural Networks and Physical Systems with Emergent Collective Computational Abilities", PROC. OF NATIONAL ACADEMY OF SCIENCES, vol. 79, 1982, pages 2554 - 2558
J. M. BAETENS; B. DE BAETS: "On the Topological Sensitivity of Cellular Automata", CHAOS, 2011
J. M. BAETENS; B. DE BAETS: "Phenomenological Study of Irregular Cellular Automata Based on Lyapunov Exponents and Jacobians", CHAOS, vol. 20, 2010, pages 033112
JAEGER, H.: "Technical Report GMD Report 148", 2001, GERMAN NATIONAL RESEARCH CENTER FOR INFORMATION TECHNOLOGY, article "The echo state approach to analysing and training recurrent neural networks"
L. O. CHUA; L. YANG: "Cellular Neural Networks : Application", IEEE TRANS. ON CIRCUITS AND SYSTEMS, vol. 35, no. 10, 1988, pages 1273 - 1290
L. O. CHUA; L. YANG: "Cellular Neural Networks : Theory", IEEE TRANS. ON CIRCUITS AND SYSTEMS, vol. 35, no. 10, 1988, pages 1257 - 1272
LUKOSEVICIUS M ET AL: "Reservoir computing approaches to recurrent neural network training", COMPUTER SCIENCE REVIEW, ELSEVIER, AMSTERDAM, NL, vol. 3, no. 3, 1 August 2009 (2009-08-01), pages 127 - 149, XP026470818, ISSN: 1574-0137, [retrieved on 20090513], DOI: 10.1016/J.COSREV.2009.03.005 *
LUKOSEVICIUS, M.; JAEGER, H.: "Reservoir computing approaches to recurrent neural network training", COMPUTER SCIENCE REVIEW, vol. 3, no. 3, 2009, pages 127 - 149
M. CHADY; R. POLI: "Evolution of Cellular-Automaton-based Associative Memories", TECHNICAL REPORT NO. CSRP-97-15, 1997
MAASS, W.: "Computability in Context: Computation and Logic in the Real World", 2010, IMPERIAL COLLEGE PRESS, article "Liquid state machines: motivation, theory, and applications"
MAASS, W.; NATSCHLAGER, T.; MARKRAM, H.: "Real-time computing without stable states: a new framework for neural computation based on perturbations", NEURAL COMPUTATION, vol. 14, no. 11, 2002, pages 2531 - 2560
N. GANGULY; B.K. SIKDAR; A. DEUTSCH; G. CANRIGHT; P.P. CHAUDHURI: "A survey on cellular automata", 2003
NILOY GANGULY: "Cellular Automata Evolution: Theory and Applications in Pattern Recognition and Classification", PH.D THESIS, 2004
NILS BERTSCHINGER; THOMAS NATSCHLAGER: "Real-time computation at the edge of chaos in recurrent neural networks", NEURAL COMPUTATION, vol. 16, no. 7, 2004
P. TZIONAS; P. TSALIDES; A. THANAILAKIS: "A New Cellular Automaton-based Nearest Neighbor Pattern Classifier and its VLSI Implementation", IEEE TRANS. ON VLSI IMPLEMENTATION, vol. 2, no. 3, 1993, pages 343 - 353
ROBERT A. LEGENSTEIN; WOLFGANG MAASS: "Edge of chaos and prediction of computational performance for neural circuit models", NEURAL NETWORKS, vol. 20, no. 3, 2007, pages 323 - 334
SALAKHUTDINOV, R.; HINTON, G. E.: "An efficient learning procedure for deep Boltzmann machines", NEURAL COMPUTATION, vol. 24, 2012, pages 1967 - 2006
SIEGELMANN, H.; SONTAG, E.: "On the computational power of neural nets", J. COMPUT. SYSTEMS SCI., vol. 50, 1995, pages 132 - 150
SIPPER, M.; TOMASSINI, M.; CAPCARRERE, M.S.: "Proceedings of International Conference on Artificial Neural Networks and Genetic Algorithms", 1997, article "Evolving asynchronous and scalable non-uniform cellular automata"
TAKASHI MATSUBARA ET AL: "A Generalized Rotate-and-Fire Digital Spiking Neuron Model and Its On-FPGA Learning", IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: EXPRESS BRIEFS, IEEE, US, vol. 58, no. 10, 1 October 2011 (2011-10-01), pages 677 - 681, XP011363371, ISSN: 1549-7747, DOI: 10.1109/TCSII.2011.2161705 *
TAKASHI MATSUBARA ET AL: "A novel asynchronous digital spiking neuron model and its various neuron-like bifurcations and responses", NEURAL NETWORKS (IJCNN), THE 2011 INTERNATIONAL JOINT CONFERENCE ON, IEEE, 31 July 2011 (2011-07-31), pages 741 - 748, XP031970755, ISBN: 978-1-4244-9635-8, DOI: 10.1109/IJCNN.2011.6033295 *
TAKASHI MATSUBARA ET AL: "A Novel Bifurcation-Based Synthesis of Asynchronous Cellular Automaton Based Neuron", 11 September 2012, ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING ICANN 2012, SPRINGER BERLIN HEIDELBERG, BERLIN, HEIDELBERG, PAGE(S) 231 - 238, ISBN: 978-3-642-33268-5, XP047019242 *
TAKASHI MATSUBARA ET AL: "Asynchronous Cellular Automaton-Based Neuron: Theoretical Analysis and On-FPGA Learning", IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, IEEE, PISCATAWAY, NJ, USA, vol. 24, no. 5, 1 May 2013 (2013-05-01), pages 736 - 748, XP011496853, ISSN: 2162-237X, DOI: 10.1109/TNNLS.2012.2230643 *
WALMSLEY, I.: "Computing with interference: All-optical single-query 50-element database search", CONFERENCE ON LASERS AND ELECTRO-OPTICS/QUANTUM ELECTRONICS AND LASER SCIENCE, 2001
WOLFGANG MAASS; ROBERT A. LEGENSTEIN; NILS BERTSCHINGER: "Methods for estimating the computational power and generalization capability of neural microcircuits", ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS, 2004
WOLFRAM, STEPHEN: "A New Kind of Science", WOLFRAM MEDIA, 2002
YANBO XUE; LE YANG; SIMON HAYKIN: "Decoupled echo state networks with lateral inhibition", NEURAL NETWORKS, vol. 20, no. 3, 2007, pages 365 - 376
ZENIL, HECTOR: "Compression-based investigation of the dynamical properties of cellular automata and other systems", COMPLEX SYSTEMS, vol. 19, no. 1, 2010

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107329733A (en) * 2016-04-29 2017-11-07 北京中科寒武纪科技有限公司 Apparatus and method for performing pooling operations
CN107329733B (en) * 2016-04-29 2020-10-02 中科寒武纪科技股份有限公司 Apparatus and method for performing pooling operations
CN106375136A (en) * 2016-11-17 2017-02-01 北京智芯微电子科技有限公司 Optical access network service flow sensing method and optical access network service flow sensing device
CN106375136B (en) * 2016-11-17 2019-01-25 北京智芯微电子科技有限公司 Optical access network service flow sensing method and device
WO2018197689A1 (en) * 2017-04-28 2018-11-01 Another Brain Automated method and associated device for the non-volatile storage, retrieval and management of message/label associations and vice versa, with maximum likelihood
FR3065826A1 (en) * 2017-04-28 2018-11-02 Patrick Pirim Automated method and associated device for the non-volatile storage, retrieval and management of message/label associations and vice versa, with maximum likelihood
US11526741B2 (en) 2017-04-28 2022-12-13 Another Brain Automated method and associated device for the non-volatile storage, retrieval and management of message/label associations and vice versa, with maximum likelihood
CN109241489A (en) * 2018-01-11 2019-01-18 西安科技大学 Novel method for dynamic prediction of mining subsidence
CN109241489B (en) * 2018-01-11 2023-04-11 西安科技大学 Novel method for dynamic prediction of mining subsidence
WO2020005353A1 (en) * 2018-06-27 2020-01-02 Ohio State Innovation Foundation Rapid time-series prediction with hardware-based reservoir computer
CN112116160A (en) * 2020-09-25 2020-12-22 国网新疆电力有限公司电力科学研究院 Disaster monitoring method for important power transmission channels based on a cellular automaton improved by an optimized neural network
CN112434957A (en) * 2020-11-27 2021-03-02 广东电网有限责任公司肇庆供电局 Cellular automaton-based distribution network line inspection area grid division method
CN112434957B (en) * 2020-11-27 2022-09-06 广东电网有限责任公司肇庆供电局 Cellular automaton-based distribution network line inspection area grid division method
CN114202032A (en) * 2021-12-15 2022-03-18 中国科学院深圳先进技术研究院 Gait detection method and device based on reservoir model and computer storage medium
CN114202032B (en) * 2021-12-15 2023-07-18 中国科学院深圳先进技术研究院 Gait detection method and device based on reservoir model and computer storage medium
CN116151487A (en) * 2023-04-19 2023-05-23 中国石油大学(华东) Hybrid physical-knowledge and data-driven algorithm for predicting sea surface oil spill tracks
CN116151487B (en) * 2023-04-19 2023-07-07 中国石油大学(华东) Hybrid physical-knowledge and data-driven algorithm for predicting sea surface oil spill tracks
CN116634344A (en) * 2023-07-24 2023-08-22 云天智能信息(深圳)有限公司 Intelligent remote monitoring method, system and storage medium based on hearing aid equipment
CN116634344B (en) * 2023-07-24 2023-10-27 云天智能信息(深圳)有限公司 Intelligent remote monitoring method, system and storage medium based on hearing aid equipment
CN117077987A (en) * 2023-10-16 2023-11-17 湖南省通晓信息科技有限公司 Environmental sanitation management method based on cellular automaton and storage medium
CN117077987B (en) * 2023-10-16 2024-01-02 湖南省通晓信息科技有限公司 Environmental sanitation management method based on cellular automaton and storage medium

Similar Documents

Publication Title
WO2014203039A1 (en) System and method for implementing reservoir computing using cellular automata
He et al. Structured pruning for deep convolutional neural networks: A survey
Kipf et al. Semi-supervised classification with graph convolutional networks
Zareapoor et al. Kernelized support vector machine with deep learning: an efficient approach for extreme multiclass dataset
Birdal et al. Intrinsic dimension, persistent homology and generalization in neural networks
Georgousis et al. Graph deep learning: State of the art and challenges
Marinó et al. Deep neural networks compression: A comparative survey and choice recommendations
Yilmaz Symbolic computation using cellular automata-based hyperdimensional computing
Zhu et al. Spiking graph convolutional networks
Sekanina Neural architecture search and hardware accelerator co-search: A survey
Yilmaz Machine learning using cellular automata based feature expansion and reservoir computing
Yilmaz Reservoir computing using cellular automata
Yang et al. Featurenorm: L2 feature normalization for dynamic graph embedding
Keller et al. Topographic vaes learn equivariant capsules
Saxe et al. The neural race reduction: Dynamics of abstraction in gated networks
Zheng et al. M-GWNN: Multi-granularity graph wavelet neural networks for semi-supervised node classification
Cantú-Paz Pruning neural networks with distribution estimation algorithms
Tang et al. Joint learning of graph representation and node features in graph convolutional neural networks
Yilmaz Connectionist-symbolic machine intelligence using cellular automata based reservoir-hyperdimensional computing
Chakraborty et al. Heterogeneous neuronal and synaptic dynamics for spike-efficient unsupervised learning: Theory and design principles
Cinque et al. Pooling strategies for simplicial convolutional networks
Dutta et al. Data-driven reduced order modeling of environmental hydrodynamics using deep autoencoders and neural ODEs
Chen et al. Sampling and recovery of graph signals based on graph neural networks
Balwani et al. Zeroth-order topological insights into iterative magnitude pruning
Hoang et al. Pydmobilenet: improved version of mobilenets with pyramid depthwise separable convolution

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 13753686; Country of ref document: EP; Kind code of ref document: A1
WWE Wipo information: entry into national phase
    Ref document number: P1692/2015; Country of ref document: AE
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 13753686; Country of ref document: EP; Kind code of ref document: A1