WO2023210816A1 - Information processing device, information processing method, and program - Google Patents

Information processing device, information processing method, and program

Info

Publication number
WO2023210816A1
WO2023210816A1 (PCT/JP2023/016886)
Authority
WO
WIPO (PCT)
Prior art keywords
learning
circuit
spatio-temporal
information processing
Prior art date
Application number
PCT/JP2023/016886
Other languages
French (fr)
Japanese (ja)
Inventor
Minoru Tsukada (稔 塚田)
Original Assignee
Tamagawa Gakuen (学校法人玉川学園)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tamagawa Gakuen (学校法人玉川学園)
Publication of WO2023210816A1 publication Critical patent/WO2023210816A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/088Non-supervised learning, e.g. competitive learning

Definitions

  • the present invention relates to an information processing device, an information processing method, and a program.
  • the present invention has been made in view of these circumstances, and aims to improve convenience in recognizing spatio-temporal context in information processing.
  • an information processing device according to one embodiment includes a one-layer neural network comprising a feedforward circuit having one or more connection weights and a feedback circuit having one or more recursive connection weights, wherein, when performing learning for memorizing spatio-temporal context, learning applying a spatio-temporal learning rule is performed on the one or more connection weights of the feedforward circuit, and learning applying the Hebbian learning rule is performed on the one or more recursive connection weights of the feedback circuit.
  • An information processing method and program according to one embodiment of the present invention are methods and programs corresponding to an information processing apparatus according to one embodiment of the present invention.
  • FIG. 1 is a diagram showing an overview of a configuration example of an embodiment of an information processing device of the present invention.
  • FIG. 2 is a diagram showing an example of input data and output data in the information processing device of FIG. 1.
  • FIG. 3 is a diagram showing an overview of the characteristics of the feedforward circuit and the feedback circuit shown in FIG. 1.
  • FIG. 4 is a diagram showing a configuration focusing on the feedforward circuit out of the configuration of FIG. 1.
  • FIG. 5 is a schematic diagram of a neuron showing the characteristics of the spatio-temporal learning rule in the feedforward circuit of FIG. 4.
  • FIG. 8 is a diagram showing a configuration focusing on the feedback recursive circuit out of the configuration of FIG. 1.
  • FIG. 9 is a diagram illustrating the characteristics of the Hebbian learning rule in the feedback recursive circuit of FIG. 8.
  • FIG. 10 is a diagram illustrating a physiological experiment in which the Hebbian learning rule and the spatio-temporal learning rule of FIG. 9 coexist.
  • FIG. 11 is a diagram showing changes in long-term potentiation of synaptic weight in the physiological experiment of FIG. 10.
  • FIG. 12 is a diagram showing an example of a computer simulation using the one-layer feedforward circuit and feedback recursive circuit of FIG. 1, and its results.
  • FIG. 13 is a diagram showing an example of the relationship between learning speed parameters and learning results in the computer simulation of FIG. 12.
  • the information processing device 1 includes an input unit 11, a neural network 12 to which one aspect of the present invention is applied (hereinafter referred to as “this neural network 12”), and an output unit 13.
  • this neural network 12 is a one-layer neural network composed of a feedforward circuit 121 having one or more connection weights WS and a feedback recursive circuit 122 having one or more recursive connection weights WK. By using this neural network 12, spatio-temporal contexts, described later, can be memorized efficiently.
  • the input unit 11 inputs the input data ID into this neural network 12.
  • the output unit 13 outputs output data OD from the neural network 12 when the input data ID is input.
  • the input data ID is the following data.
  • at a predetermined timing (instant), one unit of data DA consisting of a plurality of bits (hereinafter referred to as "unit data DA") is input simultaneously from the input unit 11 to this neural network 12.
  • in the following, a 10 × 12 matrix of data, with 12 bits of data per row and 10 rows, is adopted as the unit data DA.
  • a plurality of unit data DA, each with different contents, are input sequentially in time from the input unit 11 to this neural network 12.
  • K distinct unit data DA form one set: each of the K unit data DA is placed in a predetermined order, and the unit data DA are input one by one, at predetermined time intervals, in that order from the input unit 11 to this neural network 12. A pattern in which K distinct unit data DA are arranged in the order of their input in time is called a "temporal pattern" (K = 5 in the examples below).
  • L such temporal patterns are prepared (L = 24 in the examples below). That is, each of the L temporal patterns is input sequentially from the input unit 11 to this neural network 12.
  • taking the time direction as the horizontal direction, placing the temporal pattern input first in time at the top, arranging subsequent temporal patterns from top to bottom in the order in which they are input to this neural network 12, and placing the temporal pattern input last at the bottom, yields a matrix of L rows with K unit data DA (one temporal pattern) per row. The data in this L × K matrix is the input data ID.
  • in this way, the input data ID (L × K unit data DA) can be understood as being composed of L temporal patterns and K spatial patterns. A plurality of such input data ID patterns can be prepared (patterns in which the unit data DA forming each element of the matrix differ). The pattern of the input data ID (L × K unit data DA) is therefore called a "spatiotemporal pattern."
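  • As a concrete illustration of this structure, the following minimal sketch (Python; the names are illustrative, and K = 5, L = 24 and the 120-bit unit data are the example values used in this document) builds one spatio-temporal pattern as an L × K arrangement of 120-bit vectors:

      import numpy as np

      # Sketch of the input data ID: L temporal patterns, each a sequence of
      # K unit data DA of 120 bits each (example values from this document).
      rng = np.random.default_rng(0)
      N_BITS, K, L = 120, 5, 24

      # Each element of the L x K matrix is one 120-bit unit data DA.
      input_id = rng.integers(0, 2, size=(L, K, N_BITS), dtype=np.uint8)

      # One temporal pattern: the ordered sequence of K unit data, presented
      # one per time step, with all 120 bits of each arriving simultaneously.
      first_temporal_pattern = input_id[0]   # shape (K, 120)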
  • FIG. 2 is a diagram illustrating an example of input data and output data in the information processing apparatus of FIG. 1.
  • the input data ID is the following data.
  • the input data ID is composed of L ⁇ K unit data DA, and has L temporal patterns and K spatial patterns.
  • the unit data DA (data A3 in the example of FIG. 2) is data consisting of 120 bits. This 120-bit data is simultaneously input from the input unit 11 to the neural network 12 at a predetermined timing.
  • the unit data DA are separated from one another by a Hamming distance H.
  • the Hamming distance H is the number of positions at which two strings (here, 120-bit bit strings) differ.
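  • As a small sketch of this count (the 8 flipped bits below mirror the H = 8 example given later in this document; the code and names are illustrative, not from the patent):

      import numpy as np

      # Hamming distance: the number of positions at which two bit strings differ.
      rng = np.random.default_rng(1)
      a = rng.integers(0, 2, size=120, dtype=np.uint8)
      b = a.copy()
      b[rng.choice(120, size=8, replace=False)] ^= 1   # flip 8 bits, so H = 8

      hamming = int(np.sum(a != b))
      print(hamming)   # 8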
  • this temporal pattern is what the network perceives as a so-called context.
  • since input data composed of L temporal patterns and K spatial patterns has a spatio-temporal pattern, the structure of the input data ID is appropriately called a spatio-temporal context pattern.
  • the feedforward circuit 121 is a circuit that applies the spatio-temporal learning rule. Although details will be described later, the spatio-temporal learning rule has the property shown in equation (1) below.
  • the feedback recursive circuit 122 is a circuit that applies the Hebbian learning rule. Although details will be described later, the Hebbian learning rule has the property shown in equation (2) below.
  • the feedforward circuit 121 and the feedback recursive circuit 122 are coupled at a cooperation ratio α, as shown in equation (3) below.
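  • Equations (1) to (3) are rendered as images in the original publication, so the following is only a hedged sketch of the stated combination: it assumes, based on the α : 1 − α description later in this document, that the two weight-change terms are mixed linearly at the cooperation ratio α.

      import numpy as np

      def combined_update(delta_w_stlr: np.ndarray,
                          delta_w_hebb: np.ndarray,
                          alpha: float = 0.9) -> np.ndarray:
          # Mix the spatio-temporal (feedforward) and Hebbian (feedback)
          # weight-change terms at the cooperation ratio alpha : 1 - alpha.
          return alpha * delta_w_stlr + (1.0 - alpha) * delta_w_hebb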
  • the spatio-temporal context pattern of the input data ID is thus expressed as a difference in the spatial pattern of the output data OD.
  • the present neural network 12 is a neural network that recognizes spatio-temporal context patterns and can output different output data OD depending on the spatio-temporal context patterns of input data ID.
  • each of the 120 bits constituting the unit data DA is input from the input unit 11 to this neural network 12 at the same time. That is, by analogy with the human brain, the input unit 11 can be understood as corresponding to a region made up of 120 input cells.
  • the connection weights WS and the recursive connection weights WK in this neural network 12 can be understood as corresponding to feedforward synaptic weights and feedback synaptic weights, respectively.
  • the output unit 13 can likewise be understood as corresponding to a region made up of 120 output cells.
  • FIG. 3 is a diagram showing an overview of the characteristics of the feedforward circuit and feedback circuit shown in FIG. 1.
  • the feedforward circuit 121 has an excellent pattern separation function (ability) shown in FIG. 3A. That is, as shown in FIG. 3A, there is an overlapping region between the input A pattern and the A' pattern. This indicates that the bit strings corresponding to the A pattern and the A' pattern at the time of input are similar to a predetermined degree. On the other hand, the A pattern and the A' pattern of the output in FIG. 3A do not overlap and are separated from each other. This indicates that the bit strings corresponding to the A pattern and the A' pattern at the time of output are not similar. In this way, the feedforward circuit 121 applies the spatio-temporal learning rule, and even if patterns that are similar to a predetermined degree are input, dissimilar outputs are produced.
  • the feedback recursive circuit 122 has an excellent pattern complementation function (ability) shown in FIG. 3B. That is, as shown in FIG. 3B, there is an overlapping region between the input A pattern and the A' pattern. This indicates that the bit strings corresponding to the A pattern and the A' pattern at the time of input are similar to a predetermined degree. On the other hand, the A pattern and the A' pattern of the output in FIG. 3B overlap more than the predetermined degree of the input. This indicates that the bit strings corresponding to the A pattern and the A' pattern at the time of output are extremely similar. In this way, the feedback recursive circuit 122 applies Hebb's learning law, and even if patterns that are similar to a predetermined degree are input, extremely similar outputs will be produced.
  • this neural network 12 realizes excellent spatio-temporal context pattern recognition by coupling, in a single layer, a feedforward circuit 121 with an excellent pattern separation function and a feedback recursive circuit 122 with an excellent pattern completion function.
  • the characteristics of the feedforward circuit 121 having an excellent pattern separation function and the feedback recursive circuit 122 having an excellent pattern complementation function will be explained in more detail.
  • the characteristics of the feedforward circuit 121 will be explained in more detail using the results of a physiological experiment of the spatio-temporal learning rule with reference to FIGS. 4 to 7.
  • FIG. 4 is a diagram showing the configuration of FIG. 1, focusing on the feedforward circuit.
  • certain unit data DA of the input data ID are input simultaneously to the feedforward circuit 121.
  • the simultaneously input unit data DA are subjected, in the feedforward circuit 121, to the spatio-temporal learning rule for ΔW^S_ij shown in equation (1) above.
  • the parameter ΔW^S_ij is the amount of change in the connection weight of the synapse W_ij.
  • in the following, the circuits within this neural network 12 and the connections between them are referred to, where appropriate, using the term "synapse," by analogy with the human brain.
  • FIG. 5 is a schematic diagram of a neuron showing the characteristics of the spatio-temporal learning rule in the feedforward circuit of FIG. 4.
  • the spatio-temporal learning rule induces synaptic weight changes (plasticity) depending on the synchronization rate I_ij(t) between input cells (synapses).
  • a signal weighted by the synapse W_ij is input from the i-th input x_i at the predetermined timings shown as times t, ..., t−m.
  • similarly, a signal weighted by the synapse W_kj is input from x_k.
  • the spatio-temporal learning shown in equation (1) above is performed on the signals input in this way. Note that in equation (1), h(x) is as shown in equation (4) below.
  • the spatio-temporal learning rule induces changes in synaptic weights based on the synchrony of firing between input cells, and has an excellent ability to separate patterns in spatio-temporal context. Furthermore, unlike the Hebbian rule in the feedback recursive circuit 122 described later, the spatio-temporal learning rule is not directly related to the firing of the output cell. That is, the feedforward circuit 121 senses differences between input spatio-temporal patterns and exhibits the pattern separation function (ability) shown in FIG. 3A.
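  • Since equations (1), (4) and (14) to (16) are images in the original, the sketch below only illustrates the properties stated above, under assumptions: the synchronization I_ij(t) is approximated by the coincidence rate of input cells i and j over a short history window, and h(.) is modeled as a two-threshold function (strong synchrony potentiates, intermediate synchrony depresses, weak synchrony does nothing). It is not the patented rule itself.

      import numpy as np

      def stlr_update(x_hist: np.ndarray, w: np.ndarray, gamma_s: float = 0.1,
                      theta_low: float = 0.3, theta_high: float = 0.7) -> np.ndarray:
          # x_hist: (T, N) recent binary input history; w: (N, N) feedforward weights.
          # Approximate the synchronization I_ij between input cells i and j by
          # their coincidence rate over the history window.
          sync = (x_hist.T @ x_hist) / x_hist.shape[0]
          # Two-threshold stand-in for h(.); note that the output cell's firing
          # plays no role here, matching the property stated above.
          h = np.where(sync >= theta_high, 1.0,
                       np.where(sync >= theta_low, -1.0, 0.0))
          return w + gamma_s * h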
  • the characteristics of the feedforward circuit 121 have been explained in more detail using the physiological experiment results of the spatio-temporal learning rule.
  • the characteristics of the feedback recursive circuit 122 will be explained in more detail using the results of a physiological experiment in which the Hebbian learning law and the spatio-temporal learning law coexist with reference to FIGS. 8 and 9.
  • FIG. 8 is a diagram showing a configuration focusing on the feedback recursive circuit among the configurations of FIG. 1.
  • the feedback recursive circuit 122 includes the following two types of circuits.
  • in type 1, a recursive connection weight WK runs from the output cell side back to the input cell side.
  • in type 2, the firing of the output cell propagates directly back to the input side of the dendrite, with a connection weight WK.
  • in both types, the Hebbian learning rule applies: feedback from the output cell to the input side causes a weight change ΔW^H_ij that depends on the timing of output firing.
  • the unit data DA of the input data ID are input at the same time.
  • the feedforward weight change (spatio-temporal learning rule) and the feedback weight change (Hebbian rule) are added at a ratio of α : 1 − α.
  • FIG. 9 is a diagram illustrating the characteristics of the Hebbian learning rule in the feedback recursive circuit of FIG. 8.
  • FIG. 9(A) shows a schematic diagram of a neuron showing the characteristics of the Hebbian learning rule in the feedback recursive circuit 122 of FIG. 8.
  • FIG. 9B shows an example of the characteristics of the Hebbian learning rule in the feedback recursive circuit 122 of FIG. 8, that is, long-term enhancement and long-term suppression of spike timing.
  • the feedback recursive circuit 122 also exhibits spike-timing-dependent long-term potentiation and long-term depression (spike-timing-dependent LTP and LTD) caused by the firing of output cells. Specifically, spike-timing-dependent long-term potentiation and depression occur as shown in FIG. 9(B).
  • the Hebbian learning rule self-organizes the input information at that moment, depending on the firing of the output cell (postsynaptic cell). Note that if the output cell does not fire, nothing happens; that is, neither long-term potentiation nor long-term depression occurs.
  • the memory information processing of this circuit network is known to have the ability to represent similar items as a single common pattern, and a pattern completion function. In other words, a circuit network that follows the Hebbian learning rule recalls the entire original information even when partially missing information is input. For details on the pattern completion function of circuit networks that follow the Hebbian learning rule, see Amari, S. (1972), cited in full later in this description.
  • the Hebbian learning rule induces synaptic weight changes based on the timing of firing spikes between input cells (presynaptic cells) and output cells (postsynaptic cells), and has excellent pattern completion ability. The spatio-temporal learning rule, by contrast, separates similar patterns without being directly related to the firing of output cells. That is, the feedback recursive circuit 122 using the Hebbian learning rule exhibits the convergence to a common pattern and the pattern completion function (ability) shown in FIG. 3B.
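  • A minimal sketch of this behaviour, assuming the textbook outer-product form of the Hebbian rule (equation (2) is an image in the original, so the exact patented form is not reproduced here):

      import numpy as np

      def hebb_update(x: np.ndarray, y: np.ndarray, w: np.ndarray,
                      gamma_h: float = 0.1) -> np.ndarray:
          # x: (N,) input firing; y: (M,) output firing; w: (M, N) feedback weights.
          # The outer product changes w_ij only where output cell i fires, so a
          # silent output cell leaves its weights untouched, as stated above.
          return w + gamma_h * np.outer(y, x)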
  • FIG. 10 is a diagram illustrating a physiological experiment in which the Hebbian learning law and the spatio-temporal learning law of FIG. 9 coexist.
  • FIG. 11 is a diagram showing changes in long-term enhancement of synaptic load in the physiological experiment of FIG. 10.
  • FIGS. 10 and 11 show the results of a physiological experiment in which long-term potentiation of synaptic weight based on the synchrony of input stimuli, according to the spatio-temporal learning rule, coexists with long-term potentiation due to the firing of output cells, according to the Hebbian learning rule.
  • stimulation A and stimulation B are applied synchronously to two points 200 µm apart on the axon of a neuron.
  • physiological experiments measured changes in the long-term potentiation of synaptic weight under two conditions, differing in whether back-propagating spikes accompanying the firing of output cells were blocked.
  • the graph in FIG. 11 plots the measurement results on axes of long-term potentiation of synaptic weight (%) (vertical) against elapsed time (minutes) (horizontal); each marker indicates the mean and the standard error of the mean (S.E.M.).
  • the black markers shown in FIG. 11 are the results of measurements performed by blocking back propagation in the firing of output cells.
  • the white markers shown in FIG. 11 are the results of measurements in which back-propagation in the firing of output cells was not blocked.
  • the long-term enhancement of synaptic load after stimulation is approximately 120%. That is, as a result of the measurement performed by blocking the back propagation in the firing of the output cell shown in FIG. 10, it can be seen that a long-term enhancement of the synaptic load based on the synchrony of the input stimulus occurs.
  • for the white markers, the long-term potentiation of synaptic weight after stimulation was approximately 150%.
  • this long-term potentiation is larger than that measured with back-propagation blocked. This shows that, in the measurement in which back-propagation accompanying output-cell firing was not blocked, long-term potentiation due to the firing of output cells occurred in addition to the long-term potentiation based on the synchrony of the input stimuli.
  • that is, the long-term potentiation based on the synchrony of the input stimuli coexists with the long-term potentiation due to output-cell firing under the Hebbian learning rule.
  • in other words, the spatio-temporal learning rule in the feedforward circuit 121 and the Hebbian learning rule (spike-timing-dependent long-term potentiation) in the feedback recursive circuit 122 coexist.
  • Tsukada, M., Yamazaki, Y., Kojima, H.: Interaction between the spatiotemporal learning rule (STLR) and Hebb type (HEBB) in single pyramidal cells in the hippocampal CA1 area. Cogn. Neurodyn., 1, 157-167, 2007.
  • the characteristics of the feedback recursive circuit 122 have been described above in more detail with reference to FIGS. 8 and 9, using the results of physiological experiments in which the Hebbian learning rule and the spatio-temporal learning rule coexist. Physiological experiments have thus shown that the properties of the spatio-temporal learning rule and the Hebbian learning rule coexist in a single real cell. The inventor therefore conceived of coupling (combining) the feedforward circuit 121 and the feedback recursive circuit 122 in a one-layer structure, adding their respective weight contributions with a balance set by the cooperation ratio α, and thereby memorizing (recognizing) spatio-temporal context patterns.
  • FIG. 12 is a diagram illustrating an example of a computer simulation using the one-layer structure feedforward circuit and feedback recursive circuit of FIG. 1 and its results.
  • as shown in the center of FIG. 12, each 120-bit unit data DA explained using FIGS. 1 and 2 is treated as one 12 × 10 pixel image and used as an input vector for the computer simulation; the difference (Hamming distance) between unit data was set to 10 bits.
  • 24 spatio-temporal context patterns were prepared, differing in the arrangement of a plurality of (here, five) unit data DA. The firing sequence was then verified by inputting the 120 bits of each unit data DA to the feedforward circuit 121 and the feedback recursive circuit 122, each consisting of 120 neurons.
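  • A sketch of this input construction (the particular choice of 24 orderings is an assumption; the document states only that the arrangement of the five unit data differs between patterns):

      from itertools import permutations
      import numpy as np

      rng = np.random.default_rng(2)
      units = rng.integers(0, 2, size=(5, 120), dtype=np.uint8)  # five unit data DA

      # 24 distinct orderings of the same five unit data.
      orders = list(permutations(range(5)))[:24]
      patterns = np.stack([units[list(o)] for o in orders])      # (24, 5, 120)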
  • the firing sequence of the 120 cells is shown in the graph on the right side of FIG. 12.
  • the vertical axis of this graph is the serial number (#neuron) of each of the 120 neurons.
  • the horizontal axis is the time step over which the spatio-temporal context patterns are repeatedly input.
  • in the region where Time (step) is between 20 and 30 (the framed area), it can be seen that the firing pattern (whether or not each of the 120 neurons fires at a given time step) is stable.
  • the degree of stability can be evaluated by a convergence determination using equation (5) below.
  • the stability (convergence) in the region where Time (step) is 20 to 30, shown in the graph on the right side of FIG. 12, was set to 0.5% or less.
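  • A hedged sketch of such a stability check (equation (5) is an image in the original, so this specific measure, the fraction of the 120 cells whose firing changes between successive presentations, is an assumption):

      import numpy as np

      def is_converged(prev_firing: np.ndarray, curr_firing: np.ndarray,
                       tol: float = 0.005) -> bool:
          # prev_firing, curr_firing: (120,) binary firing patterns.
          # Converged (stable) when at most 0.5% of the cells change.
          return float(np.mean(prev_firing != curr_firing)) <= tol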
  • it was thus verified that the circuit in which the feedforward circuit 121 and the feedback recursive circuit 122 cooperate in a single-layer structure (the neural network 12 of FIG. 1) can memorize (recognize) spatio-temporal context patterns extremely stably.
  • the main points of the results of the computer simulation shown in FIG. 12 will be explained below.
  • it is important to introduce the cooperation ratio α between the feedforward circuit 121 using the spatio-temporal learning rule and the feedback recursive circuit 122 using the Hebbian rule. For creating a spatio-temporal context memory, α is suitably in the range of 0.6 to 0.95 inclusive, and more preferably in the range of 0.75 to 0.95 inclusive.
  • in other words, it is preferable to weight the feedforward circuit 121 using the spatio-temporal learning rule more strongly than the feedback recursive circuit 122 using the Hebbian rule.
  • the balance of the learning speed parameters, which is the next most important factor after the cooperation ratio α, is explained below.
  • depending on the balance between the learning speed γS of the spatio-temporal learning rule in the feedforward circuit 121 and the learning speed γH of the Hebbian rule in the feedback recursive circuit 122, two types of memory property arise: unstructured memory and structure-dependent memory (fractal structure). As a reference example, in the domain of equation (5) below, the property of unstructured memory arises as the first region RA.
  • in the domain of equation (6) below, the property of structure-dependent memory arises as the second region RB.
  • FIG. 13 is a diagram showing an example of the relationship between learning speed parameters and learning results in the computer simulation of FIG. 12 as a reference example.
  • the vertical axis represents the learning speed coefficient γS of the spatio-temporal learning rule (ηSTLR in FIG. 13),
  • and the horizontal axis represents the learning speed coefficient γH of the Hebbian rule (ηHEB in FIG. 13).
  • each graph in FIG. 13 shows the learning results when the domains of the learning speed coefficient γS of the spatio-temporal learning rule in the feedforward circuit 121 and the learning speed coefficient γH of the Hebbian rule in the feedback recursive circuit 122 are varied. Note that in obtaining the relationship shown in FIG. 13, the cooperation ratio α was set to 0.9.
  • the graph in the upper left of FIG. 13 shows the degree of pattern completion based on the Hebbian rule.
  • the degree of pattern completion in the upper left of FIG. 13 changes depending on the learning speed coefficients γS and γH.
  • the graph in the lower left of FIG. 13 divides the upper-left graph into two regions according to whether the value is below the threshold θ_error of 0.5%.
  • the graph in the upper right of FIG. 13 shows the degree of pattern separation based on the spatio-temporal learning rule.
  • the degree of pattern separation in the upper right of FIG. 13 varies depending on the learning speed coefficients γS and γH.
  • the graph in the lower right of FIG. 13 divides the upper-right graph into two regions according to whether the value is below the threshold θ_variety of 80%.
  • the lower center graph in FIG. 13 is the result of multiplying the two regions: the convergence to a common pattern and pattern completion function of the Hebbian learning rule, and the pattern separation function of the spatio-temporal learning rule.
  • the multiplication result is divided into two regions, region RA and region RB.
  • the area RA is the domain of the above-mentioned equation (5), and the property of unstructured memory occurs.
  • the region RB is a domain of the above-mentioned equation (6), and the property of structure-dependent memory (fractal structure) occurs.
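  • A sketch of the parameter sweep behind FIG. 13, with placeholder scoring functions (how completion and separation are actually scored is not given here; the thresholds θ_error = 0.5% and θ_variety = 80% are taken from the text):

      import numpy as np

      def sweep(score_completion, score_separation,
                gammas_s=np.linspace(0.01, 1.0, 20),
                gammas_h=np.linspace(0.01, 1.0, 20),
                theta_error=0.005, theta_variety=0.80):
          # score_completion / score_separation stand for running the network
          # at (gamma_s, gamma_h) and scoring recall error and output variety.
          ok = np.zeros((len(gammas_s), len(gammas_h)), dtype=bool)
          for i, gs in enumerate(gammas_s):
              for j, gh in enumerate(gammas_h):
                  completes = score_completion(gs, gh) < theta_error
                  separates = score_separation(gs, gh) >= theta_variety
                  ok[i, j] = completes and separates
          return ok   # True where completion and separation coexist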
  • the Hebbian learning rule is described by expressing the internal state of neuron i, the input vector, and the weight change vector as shown in equation (8).
  • each parameter of equation (10) below is the normalized input vector at times t_h and t_{h+1}.
  • each parameter of equation (11) below is the normalized internal state of neuron i at times t_h and t_{h+1}.
  • the parameter of equation (12) below is the magnitude of the synaptic weight change vector.
  • the output of Hebb's rule has the characteristic of converging to common elements of the spatiotemporal context of the input. This property is incorporated into the feedback recursion circuit 122 and contributes to the feature of pattern completion.
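  • A small illustrative demonstration of this convergence property (the normalization of equations (10) to (12) is simplified to a plain vector norm; the data are synthetic):

      import numpy as np

      rng = np.random.default_rng(3)
      common = rng.integers(0, 2, size=120).astype(float)   # bits shared by all inputs
      w = np.zeros(120)
      for _ in range(50):
          x = common.copy()
          idx = rng.choice(120, size=8, replace=False)      # context-specific bits
          x[idx] = rng.integers(0, 2, size=8)
          w += 0.1 * x                                      # Hebbian accumulation (y = 1)
      w /= np.linalg.norm(w)
      # Weights on the always-on common bits come to dominate the varying bits.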
  • the parameter of equation (14) below is the strength of the degree of synchronization between synaptic input cells, which governs the plasticity of the synapse W_ij under the spatio-temporal learning rule.
  • each parameter in equation (15) below is a threshold value that distinguishes the strength of the degree of synchronization.
  • the parameters in equation (14) satisfy the relationship shown in equation (16) below.
  • as described above, the spatio-temporal learning rule, which has high pattern separation ability, is applied to the connection weights WS of the feedforward circuit 121 of this neural network 12, while the Hebbian learning rule, which has high pattern completion ability, is applied to the recursive connection weights WK of the feedback recursive circuit 122.
  • memory of spatio-temporal context is thus realized by this neural network 12, which performs memory and learning in a single layer consisting of the feedforward circuit 121 and the feedback recursive circuit 122.
  • a spatio-temporal learning rule with a high spatio-temporal context pattern separation function is applied to the feedforward circuit 121. Its function was demonstrated by the combined, three-way evidence of the physiological experiment results explained using FIGS. 6, 7A and 7B, the computer simulation results using the one-layer neural network explained using FIGS. 12 and 13, and the theoretical model results.
  • the Hebbian learning rule, which has high convergence to a common pattern and a high pattern completion function, is applied to the recursive connection weights WK of the feedback recursive circuit 122. Its function has been established by physiological experiments and theoretical studies (Amari, S.: Learning patterns and pattern sequences by self-organizing nets of threshold elements. IEEE Trans. Computers, C-21(11), 1197-1206, 1972; Nakano, K.: Associatron - a model of associative memory. IEEE Trans. SMC-2, 380-388, 1972; Hopfield, J.J.: Neural networks and physical systems with emergent collective computational abilities.).
  • it was confirmed by the physiological experiment explained using FIGS. 10 and 11 that the spatio-temporal learning rule and the Hebbian learning rule coexist (Tsukada, M., Yamazaki, Y., Kojima, H.: Interaction between the spatiotemporal learning rule (STLR) and Hebb type (HEBB) in single pyramidal cells in the hippocampal CA1 area. Cogn. Neurodyn., 1, 157-167, 2007.).
  • the balance condition of the learning speed parameters (the learning speed coefficient γS of the spatio-temporal learning rule and the learning speed coefficient γH of the Hebbian learning rule) is important.
  • by balancing these learning speed coefficients, the internal state of the output cell can be controlled.
  • the condition of equation (17) above reduces sensitivity to the common elements of the spatio-temporal context and increases sensitivity to the differing elements. That is, it yields high sensitivity to patterns whose sequence contexts differ.
  • the output of the Hebbian learning rule converges to the common elements of the input spatiotemporal context. This is because the Hebbian learning rule of the feedback recursive circuit 122 has the characteristic of pattern completion.
  • the present neural network 12 can realize storage (recognition) of spatio-temporal context patterns with one layer of the feedforward circuit 121 and the feedback recursive circuit 122.
  • the features of the present neural network 12 will be explained while comparing with the conventional technology.
  • in conventional multilayer learning/memory circuits, learning requires a long calculation time and consumes a huge amount of energy. This cost is incurred for each application target, and when such circuits are applied to various fields, in particular to relearning during automatic driving, for example, the energy problem becomes significant.
  • the present neural network 12 has the following advantages.
  • the present neural network 12 has a one-layer structure in which a feedforward circuit 121 using a spatio-temporal learning rule and a feedback recursive circuit 122 using a Hebbian learning rule are combined. This makes it possible to solve the above-mentioned drawbacks of multi-layer information processing in current AI technologies (such as deep learning).
  • the present neural network 12 since the present neural network 12 has a one-layer structure, the drawback of the energy problem can be solved. Specifically, with conventional AI technology (learning using deep learning, etc.), learning is repeated several million times in order to memorize information, which requires a large amount of energy consumption.
  • the present neural network 12 is extremely economical in terms of energy consumption because it stores data as a one-layer neural network that combines pattern separation and pattern completion.
  • the present neural network 12 keeps the two functions of pattern separation and pattern completion separate and couples them through the cooperation ratio α.
  • the structure and function of this neural network 12 are closely related. More specifically, in this neural network 12, the structure and function of the neural network of the brain, as well as information representation, are closely connected. This enables system control that links structure, function, and information representation.
  • this neural network 12 can be utilized for essential understanding of the brain. More specifically, current applications of AI technology (for example, diagnosis of brain diseases) have a certain degree of accuracy and have been put into practical use. However, the AI model is unrelated to the brain's anatomical structure and function, as well as information representation. Therefore, it is not useful for elucidating the etiology or treating brain diseases. However, since the present neural network 12 is based on the structure and function of the actual physiological neural network of the brain, it can be said to be useful for investigating and treating the causes of brain diseases.
  • since AI technology using this neural network 12 has a structure and function similar to those of the human brain, communication and control between humans and machines become possible; this is useful not only for the development of human-friendly robots, but can also be said to prevent runaway behavior.
  • the configuration shown in FIG. 1 is merely an example for achieving the object of the present invention, and is not particularly limiting. In other words, it is sufficient for the information processing system to have a function capable of executing the above-described processing as a whole; the types of functional blocks and databases used to realize this function are not particularly limited to the example of FIG. 1.
  • the locations of the functional blocks and the database are not limited to those shown in FIG. 1, and may be arbitrary. In the example of FIG. 1, all processing is performed by the information processing device 1 of FIG. 1, but the configuration is not limited thereto.
  • another information processing device may include at least a portion of the functional blocks and database (not shown) arranged on the information processing device 1 side in FIG. 1 .
  • each circuit may be constituted by an electric element.
  • each circuit may be a program for calculating.
  • each circuit may be implemented using a GPU (Graphics Processing Unit), an FPGA (Field-Programmable Gate Array), a specially designed ASIC (Application Specific Integrated Circuit), or other elements functioning in a predetermined form, or by a combination of these.
  • a program constituting the software is installed on a computer or the like from a network or a recording medium.
  • the computer may be a computer built into dedicated hardware. Further, the computer may be a computer that can execute various functions by installing various programs, such as a server, a general-purpose smartphone, or a personal computer.
  • recording media containing such programs include not only removable media (not shown) distributed separately from the device body in order to provide the program to the user, but also recording media provided to the user pre-installed in the device body.
  • in this specification, the steps describing the program recorded on a recording medium include not only processing performed chronologically in the stated order, but also processing executed in parallel or individually rather than necessarily chronologically.
  • an information processing device to which the present invention is applied only needs to have the following configuration, and can take various embodiments.
  • that is, a one-layer neural network (for example, the neural network 12 in FIG. 1) comprising a feedforward circuit (for example, the feedforward circuit 121 in FIG. 1) having one or more connection weights (for example, the 120 connection weights WS in FIG. 1) and a feedback circuit (for example, the feedback recursive circuit 122 in FIG. 1) having one or more recursive connection weights (for example, the recursive connection weights WK in FIG. 1).

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Neurology (AREA)
  • Image Analysis (AREA)

Abstract

The present invention addresses the problem of improving convenience when recognizing spatio-temporal context in information processing. A single-layer neural network 12 comprises a feedforward circuit 121 having one or more connection weights WS and a feedback recursive circuit 122 having one or more recursive connection weights WK. When learning to memorize spatio-temporal context, learning using a spatio-temporal learning rule is applied to the one or more connection weights WS of the feedforward circuit 121, while learning using a Hebbian learning rule is applied to the one or more recursive connection weights WK of the feedback recursive circuit 122. This solves the above-described problem.

Description

Information processing device, information processing method, and program
The present invention relates to an information processing device, an information processing method, and a program.
Conventionally, there exists a technology that uses a multilayer neural network to generate (train) a model that performs image recognition (see, for example, Patent Document 1).
JP 2015-095215 A (Japanese Unexamined Patent Application Publication No. 2015-095215)
In the prior art such as the above-mentioned Patent Document 1 and in AI technology using deep learning, memory (recognition) of spatio-temporal context has been performed by an approach using multilayer neural networks with feature extraction by Hebbian learning and machine learning based on statistical optimization. However, such conventional recognition of spatio-temporal context has not sufficiently met needs such as energy cost and the separation (accuracy) of similar spatio-temporal contexts, and improved convenience has been desired.
The present invention has been made in view of these circumstances, and aims to improve convenience in recognizing spatio-temporal context in information processing.
To achieve the above object, an information processing device according to one embodiment of the present invention includes:
a one-layer neural network comprising a feedforward circuit having one or more connection weights and a feedback circuit having one or more recursive connection weights,
wherein, when performing learning for memorizing spatio-temporal context,
learning applying a spatio-temporal learning rule is performed on the one or more connection weights of the feedforward circuit, and
learning applying the Hebbian learning rule is performed on the one or more recursive connection weights of the feedback circuit.
An information processing method and a program according to one embodiment of the present invention are a method and a program corresponding to the information processing device according to one embodiment of the present invention.
According to the present invention, it is possible to improve convenience in recognizing spatio-temporal context in information processing.
FIG. 1 is a diagram showing an overview of a configuration example of an embodiment of an information processing device of the present invention.
FIG. 2 is a diagram showing an example of input data and output data in the information processing device of FIG. 1.
FIG. 3 is a diagram showing an overview of the characteristics of the feedforward circuit and the feedback circuit shown in FIG. 1.
FIG. 4 is a diagram showing a configuration focusing on the feedforward circuit out of the configuration of FIG. 1.
FIG. 5 is a schematic diagram of a neuron showing the characteristics of the spatio-temporal learning rule in the feedforward circuit of FIG. 4.
FIG. 8 is a diagram showing a configuration focusing on the feedback recursive circuit out of the configuration of FIG. 1.
FIG. 9 is a diagram illustrating the characteristics of the Hebbian learning rule in the feedback recursive circuit of FIG. 8.
FIG. 10 is a diagram illustrating a physiological experiment in which the Hebbian learning rule and the spatio-temporal learning rule of FIG. 9 coexist.
FIG. 11 is a diagram showing changes in long-term potentiation of synaptic weight in the physiological experiment of FIG. 10.
FIG. 12 is a diagram showing an example of a computer simulation using the one-layer feedforward circuit and feedback recursive circuit of FIG. 1, and its results.
FIG. 13 is a diagram showing an example of the relationship between learning speed parameters and learning results in the computer simulation of FIG. 12.
Embodiments of the present invention will be described below with reference to the drawings.
FIG. 1 is a diagram showing an overview of a configuration example of an embodiment of an information processing device of the present invention.
The information processing device 1 includes an input unit 11, a neural network 12 to which one aspect of the present invention is applied (hereinafter referred to as "this neural network 12"), and an output unit 13.
Although details will be described later, this neural network 12 is a one-layer neural network composed of a feedforward circuit 121 having one or more connection weights WS and a feedback recursive circuit 122 having one or more recursive connection weights WK.
By using this neural network 12, spatio-temporal contexts, described later, can be memorized efficiently.
The input unit 11 inputs the input data ID into this neural network 12.
The output unit 13 outputs the output data OD produced by this neural network 12 when the input data ID is input.
Here, the input data ID is the following data.
First, at a predetermined timing (instant), one unit of data DA consisting of a plurality of bits (hereinafter referred to as "unit data DA") is input simultaneously from the input unit 11 to this neural network 12.
In the following, a 10 × 12 matrix of data, with 12 bits of data per row and 10 rows, is adopted as the unit data DA.
Then, a plurality of unit data DA, each with different contents, are input sequentially in time from the input unit 11 to this neural network 12.
Here, K distinct unit data DA form one set: each of the K unit data DA is placed in a predetermined order, and the unit data DA are input one by one, at predetermined time intervals, in that order from the input unit 11 to this neural network 12.
Hereinafter, a pattern in which K distinct unit data DA are arranged in the order in which they are input in time is referred to as a "temporal pattern." For convenience of explanation, K = 5 below.
L such temporal patterns are prepared.
That is, each of the L temporal patterns is input sequentially from the input unit 11 to this neural network 12.
Here, as shown in FIG. 1, taking the time direction as the horizontal direction, placing the temporal pattern input first in time at the top, arranging subsequent temporal patterns from top to bottom in the order in which they are input to this neural network 12, and placing the temporal pattern input last in time at the bottom, yields a matrix of L rows with K unit data DA (one temporal pattern) per row. The data in this L × K matrix is the input data ID.
Fixing the time (temporal order) and focusing on the arrangement of the L unit data DA in the vertical direction, the pattern of the L vertical unit data DA differs at each time (temporal order). Hereinafter, this vertical direction is provisionally called the "spatial direction," and the pattern of L unit data DA in this spatial direction is called a "spatial pattern." For convenience of explanation, L = 24 below.
In this way, the input data ID (L × K unit data DA) can be understood as being composed of L temporal patterns and K spatial patterns. A plurality of such input data ID patterns can be prepared (patterns in which the unit data DA forming each element of the matrix differ). The pattern of the input data ID (L × K unit data DA) is therefore called a "spatio-temporal pattern."
In summary, as shown in FIG. 2, this neural network 12 outputs output data OD based on the input data ID.
An example of the above-mentioned input data ID and output data OD is explained below using FIG. 2.
FIG. 2 is a diagram showing an example of input data and output data in the information processing device of FIG. 1.
Here, the input data ID is the following data.
The input data ID is composed of L × K unit data DA, and has L temporal patterns and K spatial patterns.
The unit data DA (data A3 in the example of FIG. 2) consists of 120 bits. These 120 bits are input simultaneously from the input unit 11 to this neural network 12 at a predetermined timing.
Then, a plurality of unit data DA, each with different contents, are input sequentially in time from the input unit 11 to this neural network 12. Here, the unit data are separated from one another by a Hamming distance H. The Hamming distance H is the number of positions at which two strings (here, 120-bit bit strings) differ.
This neural network 12 can recognize the time-series context of input data ID consisting of five unit data DA with minute differences, for example a Hamming distance of H = 8 bits out of 120 bits.
In this way, the input data ID (L × K unit data DA) can be understood as being composed of L temporal patterns and K spatial patterns, and its pattern is called a "spatio-temporal pattern," as described above.
A temporal pattern thus means a pattern in which K = 5 unit data DA, each with different contents (meanings), are arranged in the order in which they are input. For the human brain (in this embodiment, this neural network 12), this temporal pattern is perceived as a so-called context.
Since input data composed of L temporal patterns and K spatial patterns has a spatio-temporal pattern, the structure of the input data ID is appropriately called a spatio-temporal context pattern.
When one temporal pattern (five unit data DA) is input from the input unit 11 to this neural network 12, data DP in units of 120 bits (hereinafter, unit data DP) is output.
That is, when the L = 24 temporal patterns are input sequentially in the arrangement order of FIG. 2, L = 24 unit data DP are arranged in the spatial direction as shown in FIG. 2, and this becomes the output data OD. In this way, the output data OD forms a spatial pattern corresponding to the spatio-temporal pattern of the input data ID.
Here, the feedforward circuit 121 and the feedback recursive circuit 122 included in this neural network 12 are briefly explained.
The feedforward circuit 121 is a circuit that applies the spatio-temporal learning rule.
Although details will be described later, the spatio-temporal learning rule has the property shown in equation (1) below.
[Equation (1) is rendered as an image in the original publication.]
The feedback recursive circuit 122 is a circuit that applies the Hebbian learning rule.
Although details will be described later, the Hebbian learning rule has the property shown in equation (2) below.
[Equation (2) is rendered as an image in the original publication.]
The feedforward circuit 121 and the feedback recursive circuit 122 are coupled at a cooperation ratio α, as shown in equation (3) below.
[Equation (3) is rendered as an image in the original publication.]
As a result, the spatio-temporal context pattern of the input data ID is expressed as a difference in the spatial pattern of the output data OD. In other words, this neural network 12 is a neural network that recognizes spatio-temporal context patterns and can output different output data OD depending on the spatio-temporal context pattern of the input data ID.
Focusing on an instant (a given time), each of the 120 bits constituting the unit data DA is input from the input unit 11 to this neural network 12 at the same time. That is, by analogy with the human brain, the input unit 11 can be understood as corresponding to a region made up of 120 input cells.
The connection weights WS and the recursive connection weights WK in this neural network 12 can be understood as corresponding to feedforward synaptic weights and feedback synaptic weights, respectively.
The output unit 13 can likewise be understood as corresponding to a region made up of 120 output cells.
Next, the feedforward circuit 121 having one or more connection weights WS and the feedback recursive circuit 122 having one or more recursive connection weights WK will be described with reference to FIGS. 3 to 9.
First, the characteristics of the feedforward circuit 121 and the feedback recursive circuit 122 will be briefly described.
FIG. 3 is a diagram showing an overview of the characteristics of the feedforward circuit and the feedback circuit shown in FIG. 1.
The feedforward circuit 121 excels at the pattern separation function (ability) shown in FIG. 3A.
That is, as shown in FIG. 3A, the input A pattern and A' pattern have an overlapping region. This indicates that the bit strings corresponding to the A pattern and the A' pattern at the time of input are similar to a predetermined degree.
In contrast, the output A pattern and A' pattern in FIG. 3A do not overlap and are separated from each other. This indicates that the bit strings corresponding to the A pattern and the A' pattern at the time of output are not similar.
In this way, the spatio-temporal learning rule is applied in the feedforward circuit 121, so that even when patterns that are similar to a predetermined degree are input, dissimilar outputs are produced.
The feedback recursive circuit 122 excels at the pattern completion function (ability) shown in FIG. 3B.
That is, as shown in FIG. 3B, the input A pattern and A' pattern have an overlapping region. This indicates that the bit strings corresponding to the A pattern and the A' pattern at the time of input are similar to a predetermined degree.
In contrast, the output A pattern and A' pattern in FIG. 3B overlap even more than at the input. This indicates that the bit strings corresponding to the A pattern and the A' pattern at the time of output are extremely similar.
In this way, Hebb's learning rule is applied in the feedback recursive circuit 122, so that even when patterns that are only similar to a predetermined degree are input, extremely similar outputs are produced.
The present neural network 12 realizes excellent recognition of spatio-temporal context patterns by combining, in a single layer, the feedforward circuit 121 with its excellent pattern separation function and the feedback recursive circuit 122 with its excellent pattern completion function.
Hereinafter, the properties of the feedforward circuit 121 (pattern separation) and of the feedback recursive circuit 122 (pattern completion) will be described in more detail.
First, the characteristics of the feedforward circuit 121 will be described in more detail with reference to FIGS. 4 to 7, using physiological experiment results on the spatio-temporal learning rule.
FIG. 4 is a diagram showing the configuration of FIG. 1, focusing on the feedforward circuit.
As described with reference to FIG. 1, in FIG. 4 as well, each unit data DA of the input data ID is input to the feedforward circuit 121 simultaneously.
In the feedforward-connected feedforward circuit 121, the spatio-temporal learning rule of ΔW^S_ij shown in equation (1) above is applied to the simultaneously input unit data DA. As will be described in detail later, the parameter ΔW^S_ij is the amount of change in the connection weight WS of the synapse W_ij. Hereinafter, the circuits in the neural network 12 and the connections between them will be referred to, where appropriate, using the term synapse, in correspondence with the human brain.
The characteristics of the feedforward circuit 121, that is, of the spatio-temporal learning rule, will now be described.
FIG. 5 is a schematic diagram of a neuron showing the characteristics of the spatio-temporal learning rule in the feedforward circuit of FIG. 4.
The spatio-temporal learning rule induces synaptic weight changes (plasticity) depending on the synchronization rate I_ij(t) between input cells (synapses).
Specifically, as shown in FIG. 5, a signal weighted by the synapse W_ij is input from the i-th input x_i at the predetermined timings t, ..., t_{-m}. At the same time, as also shown in FIG. 5, a signal weighted by the synapse W_kj is input from, for example, x_k.
The spatio-temporal learning shown in equation (1) above is applied to the signals input in this way.
Note that, in equation (1), h(x) is as shown in equation (4) below.
[Equation (4): the threshold function h(x) used in the spatio-temporal learning rule; presented as a math image in the original publication] ...(4)
As a result, as shown in FIG. 5, three types of weight change occur depending on the rate at which inputs from a plurality of synapses are synchronized. When the synchronization rate is high (x is θ1 or more), a positive weight change occurs; when the synchronization rate is low (x is θ2 or less), a negative weight change occurs; and when the synchronization rate is intermediate (x is smaller than θ1 and larger than θ2), no change occurs.
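A minimal Python sketch of this three-regime behavior; the piecewise form and the default threshold values are assumptions, since equation (4) itself is presented as an image in the original publication:

    def h(x, theta1=0.8, theta2=0.2):
        # Piecewise weight change as a function of the synchronization rate x.
        if x >= theta1:
            return 1.0   # high synchrony: positive weight change (LTP-like)
        if x <= theta2:
            return -1.0  # low synchrony: negative weight change (LTD-like)
        return 0.0       # intermediate synchrony: no change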
In this way, the spatio-temporal learning rule induces synaptic weight changes based on the synchrony of firing between input cells, and has an excellent ability to separate patterns of spatio-temporal context. In addition, unlike the feedback recursive circuit 122 described later, the spatio-temporal learning rule has the property of not being directly related to the firing of the output cells.
That is, the feedforward circuit 121 senses differences between input spatio-temporal patterns and exhibits the pattern separation function (ability) shown in FIG. 3A.
Next, physiological experiment results regarding the spatio-temporal learning rule applied to the feedforward circuit 121 will be explained.
The details of this experiment will be described with reference to the figures in Tsukada M, Aihara T, Kobayashi Y, Shimazaki H: Spatial analysis of spike-timing dependent LTP and LTD in the CA1 area of hippocampal slices using optical imaging. Hippocampus, 15, 104-109, 2005.
The paper shows the results of a physiological experiment using the hippocampal CA1 circuit (a hippocampal slice experiment): long-term potentiation of synaptic weights by synchronous firing between input cells and long-term depression by asynchronous firing.
Specifically, the paper illustrates a slice diagram of the hippocampal CA1 circuit and the positional relationship between Stim.A and Stim.B.
Examples of the stimulation timing at Stim.A and Stim.B shown in the paper are as follows.
At Stim.A, stimulation is applied every 2 s (seconds).
At Stim.B, stimulation is applied every 2 s (seconds), offset (shifted) from Stim.A by a time difference τ.
That is, a stimulus with τ = 0 is a "synchronous" stimulus, and a stimulus with τ ≠ 0 is an "asynchronous" stimulus.
When synchronous stimulation was applied, LTP/LTD was approximately 200%, indicating long-term potentiation (LTP).
When asynchronous stimulation was applied with τ = +50 ms, +10 ms, -10 ms, or -50 ms, LTP/LTD was approximately 100%, indicating that neither long-term potentiation (LTP) nor long-term depression (LTD) occurred.
Furthermore, when asynchronous stimulation was applied with τ = +20 ms or -20 ms, LTP/LTD was approximately 70%, indicating long-term depression (LTD).
As described above, experiments using the hippocampal CA1 circuit (hippocampal slice experiments) have revealed the existence of long-term potentiation (LTP) of synaptic weight changes (plasticity) due to synchronous stimulation.
The characteristics of the feedforward circuit 121 have been described above in more detail using physiological experiment results on the spatio-temporal learning rule.
Next, the characteristics of the feedback recursive circuit 122 will be described in more detail with reference to FIGS. 8 and 9, using the results of a physiological experiment in which the Hebbian learning rule and the spatio-temporal learning rule coexist.
FIG. 8 is a diagram showing a configuration focusing on the feedback recursive circuit in the configuration of FIG. 1.
The feedback recursive circuit 122 includes the following two types of circuit. Type 1 has recursive connection weights WK from the output-cell side to the input-cell side. In type 2, the firing of the output cell propagates back directly to the input side of the dendrite, where it acts on the connection weight.
In both cases, Hebb's learning rule is applied in that feedback from the output cell to the input side causes a weight change ΔW^H_ij that depends on the timing of output firing.
Note that, as described with reference to FIG. 1, in FIG. 8 as well the unit data DA of the input data ID are input simultaneously. Also, as described with reference to FIG. 1, the feedback recursive circuit works in cooperation with the feedforward circuit, and their balance is the cooperation ratio α: the weight change of the feedforward circuit and the weight change of the feedback circuit are added in the ratio α : 1 - α.
The characteristics of the feedback recursive circuit 122, that is, of Hebb's learning rule, will now be described.
FIG. 9 is a diagram illustrating the characteristics of Hebb's learning rule in the feedback recursive circuit of FIG. 8.
FIG. 9(A) shows a schematic diagram of a neuron illustrating the characteristics of Hebb's learning rule in the feedback recursive circuit 122 of FIG. 8.
FIG. 9(B) shows the characteristics of Hebb's learning rule in the feedback recursive circuit 122 of FIG. 8, that is, an example of spike-timing-dependent long-term potentiation and long-term depression.
The feedback recursive circuit 122 also includes the phenomenon of spike-timing-dependent long-term potentiation and long-term depression (spike-timing-dependent LTP and LTD, STD-LTP/LTD) due to the firing of output cells.
Specifically, the long-term potentiation and long-term depression of spike timing shown in FIG. 9(B) take place.
In this way, Hebb's learning rule self-organizes the input information at a given time depending on the firing of the output cells (downstream cells). Note that if the output cell does not fire, nothing happens; that is, neither long-term potentiation nor long-term depression occurs. The memory information processing of this network is known to have the ability to represent similar items by a single representative pattern, together with a pattern completion function. That is, a network that follows Hebb's learning rule recalls the original whole information even when partially missing information is input. For details on the pattern completion function of networks that follow Hebb's learning rule, see Amari, S.: Learning patterns and pattern sequences by self-organizing nets of threshold elements. IEEE Transactions on Computers, C-21, 1197-1206, 1972, and Hopfield, J. J.: Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences (USA), 79, 2554-2558, 1982.
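As an illustration of this pattern completion property, the following sketch implements a Hopfield-style associative memory of the kind analyzed in the Amari (1972) and Hopfield (1982) papers cited above; it illustrates the cited networks, not the publication's own circuit:

    import numpy as np

    def store(patterns):
        # patterns: array of shape (P, N) with entries +1/-1.
        n = patterns.shape[1]
        w = patterns.T @ patterns / n  # Hebbian outer-product learning
        np.fill_diagonal(w, 0.0)       # no self-connections
        return w

    def recall(w, cue, steps=10):
        # Iterate the recurrent dynamics from a partial cue; the state
        # settles onto the stored pattern closest to the cue.
        s = cue.copy()
        for _ in range(steps):
            s = np.where(w @ s >= 0, 1, -1)
        return s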
In this way, Hebb's learning rule induces synaptic weight changes based on the spike timing of firing between input cells (upstream cells) and output cells (downstream cells), and has an excellent pattern completion ability. The spatio-temporal learning rule, unlike Hebb's learning rule, has the ability to separate similar patterns without being directly related to the firing of the output cells.
That is, the feedback recursive circuit 122 using Hebb's learning rule exhibits the convergence to a common pattern and the pattern completion function (ability) shown in FIG. 3B.
Next, the results of a physiological experiment in which the Hebbian learning rule and the spatio-temporal learning rule applied to the feedback recursive circuit 122 coexist will be described with reference to FIG. 10.
FIG. 10 is a diagram illustrating a physiological experiment in which the Hebbian learning rule and the spatio-temporal learning rule of FIG. 9 coexist.
FIG. 11 is a diagram showing changes in the long-term potentiation of synaptic weights in the physiological experiment of FIG. 10.
FIGS. 10 and 11 show the results of a physiological experiment in which long-term potentiation of synaptic weights based on the synchrony of input stimuli (the spatio-temporal learning rule) and long-term potentiation due to the firing of output cells (Hebb's learning rule) coexist.
Specifically, as shown in FIG. 10, stimulus A and stimulus B are applied synchronously to two points 200 μm apart on the axon of a neuron.
Physiological experiments were then conducted to measure changes in the long-term potentiation of synaptic weights in two settings that differed in whether back-propagating spikes accompanying the firing of the output cell were blocked.
In the graph shown in FIG. 11, the vertical axis is the long-term potentiation of synaptic weights (%) and the horizontal axis is the elapsed time (minutes); the mean (Mean) and standard error of the mean (S.E.M.) of the measurement results are indicated by the markers.
The filled markers in FIG. 11 are the results of measurements in which back-propagation accompanying the firing of the output cells was blocked.
The open markers in FIG. 11 are the results of measurements in which back-propagation accompanying the firing of the output cells was not blocked.
As shown in FIG. 11, according to the results of the measurements in which back-propagation accompanying output-cell firing was blocked, the long-term potentiation of synaptic weights after stimulation is approximately 120%.
That is, the blocked-back-propagation measurements of FIG. 10 show that long-term potentiation of synaptic weights based on the synchrony of the input stimuli occurs.
Also, as shown in FIG. 11, according to the results of the measurements in which back-propagation accompanying output-cell firing was not blocked, the long-term potentiation of synaptic weights after stimulation is approximately 150%. This long-term potentiation is larger than in the blocked-back-propagation measurements of FIG. 10 described above.
This shows that in the measurements of FIG. 10 in which back-propagation was not blocked, long-term potentiation due to the firing of the output cells occurs in addition to the long-term potentiation based on the synchrony of the input stimuli.
That is, comparing the two measurements of FIG. 10 shows that long-term potentiation of synaptic weights based on the synchrony of the input stimuli (the spatio-temporal learning rule) and long-term potentiation due to the firing of the output cells (Hebb's learning rule) coexist.
More specifically, it can be seen that, in the hippocampal CA1 network (hippocampal slice experiment), the spatio-temporal learning rule of the feedforward circuit 121 and the Hebbian learning rule (spike-timing-dependent long-term potentiation) of the feedback recursive circuit 122 coexist.
The details of this experiment are described in Tsukada M, Yamazaki Y, Kojima H: Interaction between the spatiotemporal learning rule (STLR) and Hebb type (HEBB) in single pyramidal cells in the hippocampal CA1 area. Cogn. Neurodyn., 1, 157-167, 2007.
In this way, the physiological experiments show that the feedforward circuit 121 and the feedback recursive circuit 122 coexist in the actual hippocampal CA1 network.
The characteristics of the feedback recursive circuit 122 have been described above in more detail with reference to FIGS. 8 and 9, using the results of a physiological experiment in which the Hebbian learning rule and the spatio-temporal learning rule coexist.
Thus, physiological experiments show that the properties of the spatio-temporal learning rule and of the Hebbian learning rule coexist in a single actual cell. The inventor therefore conceived of coupling (combining) the feedforward circuit 121 and the feedback recursive circuit 122 in a one-layer structure, adding their respective weight changes in a balance given by the cooperation ratio α, and thereby storing (recognizing) spatio-temporal context patterns.
The inventor conducted a computer simulation to verify whether spatio-temporal context patterns can be stored (recognized) by making the feedforward circuit 121 and the feedback recursive circuit 122 cooperate in a one-layer structure.
FIG. 12 is a diagram showing an example of a computer simulation using the one-layer feedforward circuit and feedback recursive circuit of FIG. 1, and of its results.
As shown on the left side of FIG. 12, as input vectors for the computer simulation, the 120 bits of each unit data DA described with reference to FIGS. 1 and 2 were treated as one image of 12 x 10 pixels, and the difference (Hamming distance) between unit data DA was set to 10 bits.
As shown in the center of FIG. 12, 24 series of spatio-temporal context patterns were prepared, each with a different arrangement of a plurality of (here, five) unit data DA.
The 120 bits of each unit data DA were then input simultaneously to the feedforward circuit 121 and the feedback recursive circuit 122, each with 120 neurons, and the firing sequences were examined.
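A hedged reconstruction of this setup in Python; the exact bit patterns and the choice of the 24 orderings in the publication are not specified here, so the construction below is an assumption for illustration:

    import itertools
    import numpy as np

    rng = np.random.default_rng(1)
    base = rng.integers(0, 2, size=120)  # one 12 x 10 = 120-bit pattern

    def variant(pattern, n_flips=10):
        v = pattern.copy()
        idx = rng.choice(v.size, size=n_flips, replace=False)
        v[idx] ^= 1  # flip 10 bits: Hamming distance 10 from the base
        return v

    units = [variant(base) for _ in range(5)]             # five unit data DA
    orders = list(itertools.permutations(range(5)))[:24]  # 24 orderings
    sequences = [[units[i] for i in order] for order in orders]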
The firing sequences of the 120 cells are shown in the graph on the right side of FIG. 12.
The vertical axis of the graph on the right side of FIG. 12 is the serial number (#neuron) of each of the 120 neurons.
The horizontal axis of the graph on the right side of FIG. 12 is the number of time steps (Time step) over which the spatio-temporal context patterns are repeatedly input.
As shown in the graph on the right side of FIG. 12, when one series of spatio-temporal context patterns is input repeatedly, the firing pattern (whether each of the 120 neurons fires at a given time step) is stable in the region where Time (step) is 20 to 30 (the region enclosed by the frame).
Such a degree of stability (stability) can be evaluated by the convergence determination of equation (5) below.
[Equation (5): the convergence criterion used to evaluate the stability of the firing pattern; presented as a math image in the original publication] ...(5)
The stability (convergence) in the region where Time (step) is 20 to 30 in the graph on the right side of FIG. 12 (the region enclosed by the frame) was 0.5% or less.
In this way, it was verified that the circuit in which the feedforward circuit 121 and the feedback recursive circuit 122 cooperate in a one-layer structure (the neural network 12 of FIG. 1) can store (recognize) spatio-temporal context patterns extremely stably.
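A minimal sketch of such a convergence check, interpreting the 0.5% figure as the tolerated fraction of cells whose firing state changes between successive presentations; equation (5) itself is an image in the original publication, so this reading is an assumption:

    import numpy as np

    def converged(prev_firing, curr_firing, tol=0.005):
        # Fraction of the 120 cells whose firing state changed between
        # successive presentations; converged when at most 0.5%.
        return np.mean(prev_firing != curr_firing) <= tol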
The main points of the results of the computer simulation shown in FIG. 12 will be explained below.
First, the introduction of the cooperation ratio α between the feedforward circuit 121 using the spatio-temporal learning rule and the feedback recursive circuit 122 using Hebb's rule is of primary importance. Setting α in the range of 0.6 or more and 0.95 or less is suitable for creating spatio-temporal context memory. Furthermore, setting the cooperation ratio α in the range of 0.75 or more and 0.95 or less is even more suitable.
That is, in the neural network 12, which is a combined network of the feedforward circuit 121 and the feedback recursive circuit 122 using Hebb's rule, it is preferable to weight the feedforward circuit 121 using the spatio-temporal learning rule more strongly than the feedback recursive circuit 122 using Hebb's rule.
Next, the balance of the learning speed parameters, which is the second most important factor after the cooperation ratio α, will be described.
Second, depending on the learning speed parameters, there exist regions of unstructured memory and of structure-dependent memory (fractal structure).
That is, depending on the balance between the learning speed η_S of the spatio-temporal learning rule in the feedforward circuit 121 and the learning speed η_H of Hebb's rule in the feedback recursive circuit 122, two kinds of property arise: unstructured memory and structure-dependent memory (fractal structure).
As a reference example, in the domain of equation (6) below, the property of unstructured memory arises as the first region RA.
[Equation (6): the domain of the learning speeds η_S and η_H giving the unstructured-memory region RA; presented as a math image in the original publication] ...(6)
Also, as a reference example, in the domain of equation (7) below, the property of structure-dependent memory (fractal structure) arises as the second region RB.
[Equation (7): the domain of the learning speeds η_S and η_H giving the structure-dependent-memory region RB; presented as a math image in the original publication] ...(7)
FIG. 13 is a diagram showing, as a reference example, the relationship between the learning speed parameters and the learning results in the computer simulation of FIG. 12.
In each graph of FIG. 13, the vertical axis is the training speed coefficient η_S of the spatio-temporal learning rule (η_STLR in FIG. 13), and the horizontal axis is the training speed coefficient η_H of Hebb's rule (η_HEB in FIG. 13).
Each graph of FIG. 13 shows the learning results when the domains of the training speed coefficient η_S of the spatio-temporal learning rule in the feedforward circuit 121 and of the training speed coefficient η_H of Hebb's rule in the feedback recursive circuit 122 are varied.
Note that, in obtaining the relationships of FIG. 13, the cooperation ratio α was set to 0.9.
The graph in the upper left of FIG. 13 shows the degree of pattern completion by Hebb's rule.
The degree of pattern completion in the upper left of FIG. 13 varies depending on the training speed coefficients η_S and η_H.
The graph in the lower left of FIG. 13 divides the graph in the upper left of FIG. 13 into two regions according to whether the threshold θ_error is less than 0.5%.
The graph in the upper right of FIG. 13 shows the degree of pattern separation by the spatio-temporal learning rule.
The degree of pattern separation in the upper right of FIG. 13 varies depending on the training speed coefficients η_S and η_H.
The graph in the lower right of FIG. 13 divides the graph in the upper right of FIG. 13 into two regions according to whether the threshold θ_variety is less than 80%.
The graph in the lower center of FIG. 13 is the product, over the two regions of each, of the Hebbian learning rule's convergence to a common pattern and pattern completion function and the spatio-temporal learning rule's pattern separation function.
The result of this product is divided into two regions, region RA and region RB.
Here, region RA is the domain of equation (6) above, in which the property of unstructured memory arises.
Region RB is the domain of equation (7) above, in which the property of structure-dependent memory (fractal structure) arises.
In this way, the results of the computer simulation of FIG. 12 shown in FIG. 13 indicate that, depending on the balance of the learning speed parameters, learning with different properties, namely unstructured memory and structure-dependent memory (fractal structure), takes place.
The theoretical basis of the information-processing characteristics of the spatio-temporal learning rule and of Hebb's rule will now be described.
First, Hebb's learning rule will be described, expressing the internal state of neuron i, the input vector, and the weight change vector as in equation (8).
[Equation (8): definitions of the internal state of neuron i, the input vector, and the weight change vector; presented as a math image in the original publication] ...(8)
The time transition of the internal state of neuron i according to the Hebbian learning rule is given by the following equation (9).
[Equation (9): the time transition of the internal state of neuron i under the Hebbian learning rule; presented as a math image in the original publication] ...(9)
Here, the parameters in equation (10) below are the normalized input vectors at times t_h and t_{h+1}, respectively.
[Equation (10): the normalized input vectors at times t_h and t_{h+1}; presented as a math image in the original publication] ...(10)
Further, the parameters in equation (11) below are the normalized internal states of neuron i at times t_h and t_{h+1}, respectively.
[Equation (11): the normalized internal states of neuron i at times t_h and t_{h+1}; presented as a math image in the original publication] ...(11)
Further, the parameter in equation (12) below is the magnitude of the synaptic weight change vector.
[Equation (12): the magnitude of the synaptic weight change vector; presented as a math image in the original publication] ...(12)
From this result, the output of Hebb's rule has the characteristic of converging to the common elements of the spatio-temporal context of the input. This property is incorporated into the feedback recursive circuit 122 and contributes to its pattern completion characteristic.
Next, the spatio-temporal learning rule will be described, expressing the internal state of neuron i as in equation (8).
The time transition of the internal state of neuron i is given by equation (13) below.
[Equation (13): the time transition of the internal state of neuron i under the spatio-temporal learning rule; presented as a math image in the original publication] ...(13)
Here, the parameter in equation (14) below is the strength of the degree of synchronization between synaptic input cells, which governs the plasticity of the synapse W_ij under the spatio-temporal learning rule.
[Equation (14): the synchronization strength between synaptic input cells governing the plasticity of synapse W_ij; presented as a math image in the original publication] ...(14)
Furthermore, each parameter in equation (15) shown below is a threshold value that distinguishes the strength of the degree of synchronization.
[Equation (15): the thresholds distinguishing the strength of the degree of synchronization; presented as a math image in the original publication] ...(15)
The parameters of equation (14) also satisfy the relationship shown in equation (16) below.
[Equation (16): the relationship among the parameters of equation (14); presented as a math image in the original publication] ...(16)
This result shows that the internal state of the output cells can be controlled by the parameters of equation (14), that is, by the thresholds on the degree of synchronization of input-cell firing in the spatio-temporal learning rule.
Here, the sum of the weight changes of output cell i at time t_k is denoted as in equation (17) below.
[Equation (17): the sum of the weight changes of output cell i at time t_k; presented as a math image in the original publication] ...(17)
Now consider the condition of equation (18) below.
[Equation (18): a condition on the sum of weight changes of equation (17); presented as a math image in the original publication] ...(18)
Under this condition, the training vector (input vector) is learned and, at the same time, its inverse vector is also strongly learned, so that the sensitivity to the common elements of the spatio-temporal context decreases and the sensitivity to differing elements increases. This is why the spatio-temporal learning rule further enhances the ability to separate context patterns.
In this way, in the present embodiment, the spatio-temporal learning rule, with its high pattern separation ability, is applied to the connection weights WS of the feedforward circuit 121 of the neural network 12, and the Hebbian learning rule, with its high pattern completion ability, is applied to the recursive connection weights WK of the feedback recursive circuit 122. As a result, storage of spatio-temporal context is realized by the present one-layer storing and learning neural network 12, which consists of the feedforward circuit 121 and the feedback recursive circuit 122.
The method of realizing the storage (recognition) of spatio-temporal context described above, and its basis, are summarized below.
First, the spatio-temporal learning rule, with its high ability to separate spatio-temporal context patterns, is applied to the feedforward circuit 121. This function was demonstrated by the three-way combination of the physiological experiment results described with reference to FIGS. 6, 7A, and 7B, the computer simulation results of the one-layer neural network described with reference to FIGS. 12 and 13, and the corresponding theoretical model results.
Second, the Hebbian learning rule, with its high convergence to a common pattern and high pattern completion function, is applied to the recursive connection weights WK of the feedback recursive circuit 122. This function is evident from the literature (Amari, S.: Learning patterns and pattern sequences by self-organizing nets of threshold elements. IEEE Transactions on Computers, C-21(11), 1197-1206, 1972; Nakano, K.: Associatron, a model of associative memory. IEEE Transactions on Systems, Man, and Cybernetics, SMC-2, 380-388, 1972; Hopfield, J. J.: Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, 79, 2554-2558, 1982), from the theoretical literature (Nakazawa K, McHugh TJ, Wilson MA, Tonegawa S: NMDA receptors, place cells and hippocampal spatial memory. Nature Reviews Neuroscience, 5, 361-372, 2004), and from the theoretical basis of the information-processing characteristics of the spatio-temporal learning rule and Hebb's rule described above.
Third, regarding the realization of spatio-temporal context storage by the present one-layer neural network 12 consisting of the feedforward circuit 121 and the feedback recursive circuit 122, the physiological experiments described with reference to FIGS. 10 and 11 showed that the spatio-temporal learning rule and the Hebbian learning rule coexist (Tsukada M, Yamazaki Y, Kojima H: Interaction between the spatiotemporal learning rule (STLR) and Hebb type (HEBB) in single pyramidal cells in the hippocampal CA1 area. Cogn. Neurodyn., 1, 157-167, 2007).
Furthermore, as described with reference to FIGS. 12 and 13, the computer simulation clarified the following points.
First, the balance of the cooperation ratio α between the feedforward circuit 121 using the spatio-temporal learning rule and the feedback recursive circuit 122 using Hebb's rule is important.
Next, the balance condition of the learning speed parameters (the training speed coefficient η_S of the spatio-temporal learning rule and the training speed coefficient η_H of the Hebbian learning rule) is important.
Furthermore, as shown in the theoretical basis of the information-processing characteristics of the spatio-temporal learning rule and Hebb's rule above, the internal state of the output cells can be controlled by the thresholds (θ1, θ2) on the degree of synchronization of input-cell firing in the spatio-temporal learning rule.
In particular, under the condition of equation (18) above, the sensitivity to the common elements of the spatio-temporal context is reduced and the sensitivity to differing elements is increased. That is, the network is highly sensitive to patterns that differ in the context of the series.
On the other hand, the output of the Hebbian learning rule converges to the common elements of the spatio-temporal context of the input. This is why the Hebbian learning rule of the feedback recursive circuit 122 has the pattern completion characteristic.
In this way, the present neural network 12 can realize the storage (recognition) of spatio-temporal context patterns with a single layer consisting of the feedforward circuit 121 and the feedback recursive circuit 122.
The features of the present neural network 12 will now be described in comparison with the prior art.
Conventional AI technology using deep learning realized spatio-temporal context memory with a multilayer neural network using feature extraction by Hebbian learning and machine learning by statistical optimization.
This resulted in the following drawbacks.
First, because the neural networks are coupled in multiple layers, the learning and memory circuits required long computation times and enormous energy consumption. This is required for every application target, and when applying the technology in various fields, in particular to relearning during automated driving, for example, the energy problem becomes a significant drawback.
Second, image processing in conventional AI technology has been based on image feature extraction using multiple Hebbian-learning layers, since Hebbian learning is strong in the pattern completion function. However, Hebbian learning alone, strong as it is in pattern completion, has the drawback that similar spatio-temporal contexts are difficult to separate.
Third, in deep learning models, the theoretical causality between the inputs and outputs of problem solving and the relationship between structure and function are not clear. As a result, there was the drawback that control linking function and structure was not possible.
Fourth, as an example of a conventional AI application, in disease-diagnosis applications it is possible to create a model that reproduces an expert, or a model of optimal processing, by using the expert's existing data as training data. However, because of the drawback described above that the causal relationships among structure, function, and information representation cannot be linked, even if such a model could be used for optimal processing, it had the drawback of not being useful for elucidating the essential etiology or for treatment.
In contrast to the drawbacks of these conventional AI technologies, the present neural network 12 has the following advantages.
First, as described above, the present neural network 12 has a one-layer structure in which the feedforward circuit 121 using the spatio-temporal learning rule and the feedback recursive circuit 122 using the Hebbian learning rule are combined. This solves the drawbacks of the multilayer information processing of the current AI technologies (deep learning and the like) described above.
That is, because the present neural network 12 has a one-layer structure, the drawback of the energy problem is eliminated.
Specifically, conventional AI technology (learning by deep learning and the like) repeats learning several million times in order to store information, which forces a large energy consumption. The present neural network 12 stores information with a one-layer neural network that combines pattern separation and pattern completion, and is therefore extremely economical from the viewpoint of energy consumption.
In addition, the present neural network 12 has the two functions of pattern separation and pattern completion individually, together with their cooperation ratio α. As a result, in the present neural network 12 structure and function are closely related.
More specifically, in the present neural network 12, the structure and function of the neural circuitry of the brain, and furthermore the information representation, are closely connected. This makes possible system control that links structure, function, and information representation.
The present neural network 12 can also be used for an essential understanding of the brain.
More specifically, current applications of AI technology (for example, the diagnosis of brain diseases) have a certain degree of accuracy and have been put to practical use. However, such AI models are unrelated to the anatomical structure and function of the brain, and to its information representation. They are therefore not useful for elucidating the etiology of, or treating, brain diseases.
The present neural network 12, by contrast, is based on the structure and function of the actual physiological neural circuitry of the brain, and can therefore be said to be useful for investigating and treating the causes of brain diseases.
Furthermore, because AI technology using the present neural network 12 has a structure and function similar to those of the human brain, communication and control between humans and machines become possible; this is not only useful for developing human-friendly robots but can also prevent robots from running out of control.
Although one embodiment of the present invention has been described above, the present invention is not limited to the above-described embodiment, and modifications, improvements, and the like within a range in which the object of the present invention can be achieved are regarded as included in the present invention.
Furthermore, the system configuration shown in FIG. 1 is merely an example for achieving the purpose of the present invention, and is not particularly limited.
Furthermore, the block diagram shown in FIG. 1 is merely an example and is not particularly limited. That is, it is sufficient that the information processing system has functions capable of executing the above-described processing as a whole, and the functional blocks and databases used to realize these functions are not particularly limited to the example of FIG. 1.
The locations of the functional blocks and databases are also not limited to those in FIG. 1 and may be arbitrary.
In the example of FIG. 1, all processing is performed by the information processing device 1 of FIG. 1, but the configuration is not limited to this. For example, another information processing device (not shown) may include at least some of the functional blocks and databases (not shown) arranged on the information processing device 1 side in FIG. 1.
The series of processes described above can be executed by hardware or by software.
One functional block may be configured by hardware alone, by software alone, or by a combination of them.
That is, although the neural network 12 has been described as being composed of the feedforward circuit 121 and the feedback recursive circuit 122, these circuits can be realized in either hardware or software.
Specifically, for example, each circuit may be configured from electric elements. Alternatively, for example, each circuit may be a program that performs the computation.
That is, each circuit may function on elements of a predetermined form, such as a GPU (Graphics Processing Unit), an FPGA (Field-Programmable Gate Array), or a purpose-designed ASIC (Application Specific Integrated Circuit), or may function on a combination of these.
When the series of processes is executed by software, a program constituting the software is installed on a computer or the like from a network or a recording medium.
The computer may be a computer incorporated in dedicated hardware.
The computer may also be a computer capable of executing various functions by installing various programs, for example a server, or a general-purpose smartphone or personal computer.
A recording medium containing such a program is configured not only from removable media (not shown) distributed separately from the device body in order to provide the program to the user, but also from a recording medium or the like provided to the user in a state incorporated in the device body in advance.
Note that, in this specification, the steps describing the program recorded on the recording medium include not only processing performed chronologically in the stated order but also processing that is not necessarily performed chronologically but is executed in parallel or individually.
To summarize the above, it is sufficient for an information processing device to which the present invention is applied to have the following configuration, and it can take various kinds of embodiments.
That is, an information processing device to which the present invention is applied (for example, the information processing device 1 of FIG. 1) includes:
a one-layer neural network (for example, the neural network 12 of FIG. 1) composed of a feedforward circuit (for example, the feedforward circuit 121 of FIG. 1) having one or more connection weights (for example, 120 connection weights WS in FIG. 1) and a feedback circuit (for example, the feedback recursive circuit 122 of FIG. 1) having one or more recursive connection weights (for example, 4 recursive connection weights WK in FIG. 1), wherein,
when performing learning for the storage of spatio-temporal context,
the one-layer neural network performs learning applying a spatio-temporal learning rule (for example, the learning of equation (1) in the specification) to the one or more connection weights of the feedforward circuit, and
performs learning applying Hebb's learning rule (for example, the learning of equation (2) in the specification) to the one or more recursive connection weights of the feedback circuit.
This makes it possible to improve convenience in recognizing spatio-temporal context in information processing.
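As a rough end-to-end illustration of this configuration, the following sketch combines an STLR-style update of the feedforward weights WS (driven only by input synchrony, here by a crude proxy) with a Hebbian update of the recursive weights WK (driven by output firing), blended by the cooperation ratio α. Every modeling choice below that is not stated in the publication (the firing rule, the synchrony proxy, the parameter values) is an assumption:

    import numpy as np

    def h(x, theta1=0.6, theta2=0.2):
        # Assumed piecewise form of the STLR weight change (cf. equation (4)).
        return np.where(x >= theta1, 1.0, np.where(x <= theta2, -1.0, 0.0))

    class OneLayerSTLRHebbNet:
        # Illustrative sketch: 120 inputs/outputs, feedforward weights WS
        # trained with an STLR-style rule, recursive weights WK trained with
        # a Hebbian rule, blended by the cooperation ratio alpha.
        def __init__(self, n=120, alpha=0.9, eta_s=0.05, eta_h=0.01, seed=0):
            rng = np.random.default_rng(seed)
            self.ws = rng.normal(0.0, 0.1, (n, n))  # feedforward weights WS
            self.wk = rng.normal(0.0, 0.1, (n, n))  # recursive weights WK
            self.alpha, self.eta_s, self.eta_h = alpha, eta_s, eta_h
            self.y = np.zeros(n)                    # previous output firing

        def step(self, x):
            # Blend feedforward and feedback drive (cf. equation (3)).
            u = self.alpha * (self.ws @ x) + (1.0 - self.alpha) * (self.wk @ self.y)
            y = (u >= np.quantile(u, 0.8)).astype(float)  # simple firing rule
            sync = x * x.mean()            # crude proxy for input synchrony
            self.ws += self.eta_s * h(sync)              # STLR: ignores output y
            self.wk += self.eta_h * np.outer(y, self.y)  # Hebb: output-dependent
            self.y = y
            return y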
When performing learning for the storage of spatio-temporal context,
the coupling balance (cooperation ratio) α between the feedforward circuit, in which learning applying the spatio-temporal learning rule is performed, and the feedback circuit, in which learning applying Hebb's learning rule is performed, is of the greatest importance.
Next, when performing learning for the storage of spatio-temporal context,
the balance between the speed coefficient η_S of the spatio-temporal learning rule and the speed coefficient η_H of Hebb's learning rule is important.
A value within the range of equation (19) below can also be adopted.
(Equation (19) appears in the source only as an image, JPOXMLDOC01-appb-M000019, and is not reproduced here.)
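Since the bound of equation (19) survives only as an image, the following helper merely marks where such a constraint on the two speed coefficients would be checked; the numeric range is invented for illustration.

```python
def speed_coefficients_in_range(eta_s, eta_h, lo=0.1, hi=10.0):
    """Placeholder for the constraint of equation (19).

    The actual bound is not reproduced in this text, so `lo` and `hi`
    are illustrative stand-ins for whatever range the equation fixes
    for the ratio of the two speed coefficients.
    """
    return lo <= eta_h / eta_s <= hi
```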
1 ... information processing device, 11 ... input section, 12 ... neural network, 13 ... output section, 121 ... feedforward circuit, 122 ... feedback recursive circuit

Claims (5)

  1.  An information processing device comprising:
      a one-layer neural network composed of a feedforward circuit having one or more connection weights and a feedback circuit having one or more recursive connection weights,
      wherein, when performing learning for memory of a spatio-temporal context,
      learning applying a spatio-temporal learning rule is performed on the one or more connection weights of the feedforward circuit, and
      learning applying Hebb's learning law is performed on the one or more recursive connection weights of the feedback circuit.
  2.  The information processing device according to claim 1, wherein,
      when performing the learning for memory of the spatio-temporal context,
      a balance of the cooperation ratio between the feedforward circuit, on which the learning applying the spatio-temporal learning rule is performed, and the feedback circuit, on which the learning applying Hebb's learning law is performed, is introduced.
  3.  The information processing device according to claim 1, wherein,
      when performing the learning for memory of the spatio-temporal context,
      a balance between the speed coefficient of the spatio-temporal learning rule and the speed coefficient of Hebb's learning law is introduced.
  4.  An information processing method executed by an information processing device including a one-layer neural network composed of a feedforward circuit having one or more connection weights and a feedback circuit having one or more recursive connection weights, the method comprising,
      when performing learning for memory of a spatio-temporal context:
      performing learning applying a spatio-temporal learning rule on the one or more connection weights of the feedforward circuit; and
      performing learning applying Hebb's learning law on the one or more recursive connection weights of the feedback circuit.
  5.  A program causing a computer including a one-layer neural network composed of a feedforward circuit having one or more connection weights and a feedback circuit having one or more recursive connection weights to execute,
      as processing for performing learning for memory of a spatio-temporal context:
      learning applying a spatio-temporal learning rule on the one or more connection weights of the feedforward circuit; and
      learning applying Hebb's learning law on the one or more recursive connection weights of the feedback circuit.
PCT/JP2023/016886 2022-04-28 2023-04-28 Information processing device, information processing method, and program WO2023210816A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022-074903 2022-04-28
JP2022074903A JP2023163775A (en) 2022-04-28 2022-04-28 Information processing device, information processing method and program

Publications (1)

Publication Number Publication Date
WO2023210816A1 (en)

Family

ID=88518924

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/016886 WO2023210816A1 (en) 2022-04-28 2023-04-28 Information processing device, information processing method, and program

Country Status (2)

Country Link
JP (1) JP2023163775A (en)
WO (1) WO2023210816A1 (en)

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Advances in Cognitive Neurodynamics (VII) Proceedings of the Seventh International Conference on Cognitive Neurodynamics", 1 January 2019, SPRINGER NATURE SINGAPORE PTE LTD, ISBN: 978-981-16-0317-4, article TSUKADA, HIROMICHI AND TSUKADA, MINORU: "Context-Dependent Learning and Memory Based on Spatio-Temporal Learning Rule", pages: 89 - 94, XP009549941, DOI: 10.1007/978-981-16-0317-4_10 *

Also Published As

Publication number Publication date
JP2023163775A (en) 2023-11-10

Similar Documents

Publication Publication Date Title
Hunsberger et al. Spiking deep networks with LIF neurons
Fausett Fundamentals of neural networks: architectures, algorithms and applications
Todo et al. Unsupervised learnable neuron model with nonlinear interaction on dendrites
US11126913B2 (en) Methods and systems for implementing deep spiking neural networks
TW201426576A (en) Method and apparatus for designing emergent multi-layer spiking networks
Zeng et al. Improving multi-layer spiking neural networks by incorporating brain-inspired rules
Shon et al. Motion detection and prediction through spike-timing dependent plasticity
Kratzer et al. Neuronal network analysis of serum electrophoresis.
Zorins et al. Artificial neural networks and human brain: Survey of improvement possibilities of learning
WO2023210816A1 (en) Information processing device, information processing method, and program
Woo et al. Characterization of multiscale logic operations in the neural circuits
Song et al. Identification of short-term and long-term functional synaptic plasticity from spiking activities
Hourdakis et al. Computational modeling of cortical pathways involved in action execution and action observation
Pupezescu Pulsating Multilayer Perceptron
Świetlik et al. Artificial neural networks in nuclear medicine
CN111582470A (en) Self-adaptive unsupervised learning image identification method and system based on STDP
CA2898216C (en) Methods and systems for implementing deep spiking neural networks
JPH04501327A (en) pattern transfer neural network
Galeazzi et al. The development of hand-centered visual representations in the primate brain: a computer modeling study using natural visual scenes
Vaila et al. Spiking CNNs with PYNN and NEURON
Cheok A Deep-Learning-Based Method For Spike Sorting With Spiking Neural Network
Foreacre On Building a Mind: Replicating a Neural Network Model of a Neuron
Deco et al. Neural mechanisms of visual memory: A neurocomputational perspective
Michler Self-Organization of Spiking Neural Networks for Visual Object Recognition
Li et al. Spiking neural network with synaptic plasticity for recognition

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23796548

Country of ref document: EP

Kind code of ref document: A1