WO2002015125A9 - Neural network device for evolving appropriate connections - Google Patents
Neural network device for evolving appropriate connections
- Publication number
- WO2002015125A9 (PCT/US2001/025616)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- connections
- existing
- weight
- ratio
- incipient
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
Definitions
- the present invention relates generally to computer processing and more specifically, to a system and method for evolving appropriate connections in sparse neural networks.
- a neural network is a system composed of many simple processing elements operating in parallel which has a natural propensity for storing experiential knowledge and making it available for use. It resembles the brain in two respects: 1) knowledge is acquired by the network through a learning process, and 2) interneuron connection strengths known as synaptic weights are used to store the knowledge.
- Neural network devices are used to process information using neuron-like units connected by variable strength "synapses". These devices can be realized in either hardware or software, and they perform massively parallel computations that are useful in pattern recognition, finance and a host of other applications.
- the computations performed by the devices are determined by the strengths of the connections, which constitute the computational "program”.
- There are various procedures for setting the strengths of the connections, mostly variants of the so-called “Hebbian” rule, in which the connection strengths are determined in a training phase by the past history of activity of both input and output neurons.
- the most well known variant is "backpropagation”.
- the backpropagation algorithm compares an output (result) that was obtained with the result that was expected.
- the present invention is directed towards a system and method for efficiently evolving new connections in a neural network.
- new connections are created in the neighborhood of existing connections by adding a "mutational" component to whatever synaptic strengthening rule is used in a conventional neural network device.
- This allows a combination of the advantages of fully connected networks and of sparse networks.
- weight adjustment is only allowed to occur across connections if monitoring "K" units decree that unnecessary spread of connections is unlikely to occur.
- a method for evolving appropriate connections among units in a neural network comprising the steps of: calculating weight changes at each existing connection and at incipient connections between units for each training example; and determining a K ratio from the weight changes. If the K ratio exceeds a threshold, the method further comprises the steps of: increasing the weight of the existing connection; creating new connections at the incipient connections; and pruning weak connections between the units.
- FIG. 1 illustrates an exemplary flow diagram of a method for evolving appropriate connections in a neural network device according to an aspect of the present invention.
- FIG. 2 depicts an exemplary presynaptic relationship neighborhood illustrating a preferred circuitry for monitoring weight adjustment of connections according to an aspect of the present invention.
- FIG. 3 depicts an exemplary postsynaptic relationship neighborhood illustrating a preferred circuitry for monitoring weight adjustment of connections according to an aspect of the present invention.
- FIG. 4 illustrates an exemplary case where an errant synapse flourishes and a relay cell transfers its allegiance to a neighbor of an original layer 4 cell.
- the exemplary system modules and method steps described herein may be implemented in various forms of hardware, software, firmware, special purpose processors, or a combination thereof.
- the present invention is implemented in software as an application program tangibly embodied on one or more program storage devices.
- the application program may be executed by any machine, device or platform comprising suitable architecture.
- the constituent system modules and method steps depicted in this specification are preferably implemented in software, the actual connections between the system components (or the process steps) may differ depending upon the manner in which the present invention is programmed. Given the teachings herein, one of ordinary skill in the related art will be able to contemplate these and similar implementations or configurations of the present invention.
- a system and method for creating new connections in feedforward topological networks (i.e., networks where the units or "neuroids" have spatial relationships) comprising two components: 1) a procedure for creating new connections, and 2) a spatialised third layer of units, called K units or "nousoids", which are used to control connection adjustment.
- feedforward topological networks: networks where the units or "neuroids” have spatial relationships
- K units or "nousoids”: a spatialised third layer of units
- the network weights are preferably digitized, so weight adjustments (which can be done according to any standard procedure, such as back-propagation, Hebbian learning, etc.) correspond to the addition (or removal) of "synapses" to a connection.
- a variable "learning rate factor” (typically 1 or 0) is preferably associated with each connection, and input/output pairs of neuroids that are not linked by synapses are "unconnected".
- FIG. 1 illustrates an exemplary flow diagram of a method for evolving appropriate connections in a neural network device according to an aspect of the present invention.
- weight changes induced by each particular training sample or pattern are calculated using, for example, a conventional network training rule such as Hebb or backpropagation (step 101).
- K ratio: a ratio of the weight changes for existing connections to those for incipient connections
- a system according to the present invention preferably uses a constant value of E, which can be approximately determined, for example, by calculating:
- J and I each representing a layer of neurons in a conventional 2 layer feedforward neural network device.
- a system according to the present invention uses discrete levels of connection strength (0,1,2 etc.).
- strengthening connections corresponds to adding discrete synapses (weight) to connections.
- a mutation rule is presented in which an amount (1-E)×(total weight change of existing connections) is added to an appropriate existing connection (step 105).
- the remaining amount (E)×(total weight change of existing connections) is added to form neighboring connections (step 107). It is to be noted that if neighboring connections are not yet in existence (i.e., they are incipient connections), they can be created by this mutation rule; however, whether such new connections are created depends on the size of the weight increase computed in step 101, together with the magnitude of E.
- connections that are weaker than a certain threshold are deleted (step 109).
- This threshold corresponds, for example, to the strength of a single discrete weight level (a single synapse).
- a 0-strength connection does not exist, and thus the calculation of modifications of such non-existent connections is completely avoided. Since in conventional NNDs composed of very large numbers of neurons most of the connections are very weak, in a system according to the present invention most of the connections will be eliminated through pruning, thus greatly reducing the number of training calculations and speeding up the training phase. Thus, the net effect of such mutation and pruning is to import the Darwinian evolutionary principle directly into the activity-dependent learning phase, rather than as a prior step used to set a fixed network architecture. Following step 109, the system returns to step 101.
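- The training cycle of steps 101 through 109 can be sketched in software. The following is a minimal illustration, not the patented implementation: the `neighbors` function, the even split of the mutated amount E across neighbors, and the rounding of accumulated changes to whole discrete synapses are assumptions chosen for concreteness.

```python
def train_step(weights, delta_w, E, neighbors):
    """One training cycle of the described mutation-and-pruning rule.

    weights:   dict mapping (j, i) -> integer synapse count (existing connections only)
    delta_w:   dict mapping (j, i) -> weight change from the base rule (Hebb, backprop, ...)
    E:         mutation fraction diverted to neighboring (incipient) connections
    neighbors: assumed helper, (j, i) -> list of neighboring connection keys
    """
    updates = {}
    for conn, dw in delta_w.items():
        # Step 105: add (1 - E) x (weight change) to the existing connection.
        updates[conn] = updates.get(conn, 0.0) + (1 - E) * dw
        # Step 107: spread E x (weight change) over neighboring connections,
        # creating incipient connections where none exist yet.
        nbrs = neighbors(*conn)
        for n in nbrs:
            updates[n] = updates.get(n, 0.0) + (E / len(nbrs)) * dw
    for conn, du in updates.items():
        # Weights are digitized: round the accumulated change to whole synapses.
        weights[conn] = weights.get(conn, 0) + round(du)
    # Step 109: prune connections weaker than a single discrete synapse.
    for conn in [c for c, w in weights.items() if w < 1]:
        del weights[conn]
    return weights
```

With E = 0.5 and a weight change of 4 at an existing connection of strength 3, the existing connection gains 2 synapses and each of two neighbors gains 1, creating two new connections; the cycle then repeats with the next training example.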
- the overall effect of the addition of the mutation rule is that during the training of the device, although only a small fraction of the possible connections is available at any time (greatly decreasing the calculations that have to be done to train the device), new connections are constantly being recruited and unimportant connections eliminated, so that the final small set of connections is close to the optimum small set for the particular problem at hand. If the problem should change (for example, if new training examples become available), the set of connections can be further modified.
- a system according to the present invention includes the possibility of automatically forming new connections so that although at any one time there are only a small number of connections, there is the capability to form new connections and eliminate old or faulty ones; thus, the network constantly evolves and improves itself.
- the new connections are preferably formed randomly in the vicinity of the existing connections, and are tested to evaluate if they are "good” or "bad", which can be determined, for example, by the conventional network training rule used.
- good connections are retained, and bad ones are eliminated and/or replaced by good ones.
- the connections are shifted continuously until a best possible set of connections is found.
- the exemplary process described above can be applied to any neural network in which there exists some natural relationship between the elements of the input (as in the postsynaptic case of FIG. 3) or output (as in the presynaptic case of FIG. 2) patterns or vectors.
- Such is often the case in problems to which neural networks are applied, such as pattern recognition.
- a problem might arise wherein the new connections may propagate uncontrollably. This would be problematic since new connections could be potentially harmful; in addition, an overabundance of new connections would defeat the purpose of the present invention of having only a small number of connections to deal with at any particular time.
- FIG. 2 depicts an exemplary presynaptic relationship neighborhood illustrating a preferred circuitry for monitoring weight adjustment of connections according to an aspect of the present invention.
- Each cell in an input (J) layer 203 and output (I) layer 205 comprises a standard connectionist unit which computes a weighted sum of its inputs (input units can in turn receive inputs from other input units, not shown; these inputs could be treated as a conventional NND or as a neural network according to the present invention).
- J units provide inputs to I units via discrete synaptic weights, shown, for example, as small black dots. If an existing connection strengthens during network training as a result of the activity across it (e.g., via the Hebb rule), the added discrete increments of synaptic strength may appear, for example, at the existing connection in the amount of (1-E)×(total weight change of existing connections) or at the incipient connections (shown as small open circles and dashed lines) in the amount of (E)×(total weight change of existing connections).
- the added synaptic strength for each neighboring connection would be in the amount of (E/2)×(total weight change of existing connections).
- the strength of the (J0 to I0) connection 202 is, for example, 3 units (represented as the 3 black dots). Neighbors of the (J0 to I0) connection are shown as connections (J0 to I-1) (204) and (J0 to I1) (206). These are incipient connections (in the sense that, by the mutation rule, connections 204 and 206 may be formed if the existing connection 202 strengthens) and are shown as dotted lines and open dots. It is to be noted that if new connections are formed as a result of this mutational rule during training, they themselves undergo adjustment (including possible elimination) during further training.
- a neural network according to the present invention is sparsely connected and the number of computer calculations or hardware links needed is much less than for an equivalent NND, thus resulting in increased efficiency in training.
- the mutational rule may prevent training if E is too large or too small.
- optimal training is assured using an additional chaperone layer, marked K (201).
- a third layer (K) (201) of neuron-like units is introduced (shown as chimney triangles in FIGS. 2 and 3).
- This K layer contains 3n or 3m neurons (depending on whether a pre- or postsynaptic neighborhood rule is used). It is to be noted that in problems in which the natural neighborhood relations between input or output variables are 2- rather than 1-dimensional, larger numbers of K neurons would be needed. In FIGS. 2 and 3, only 3 of these K layer units are shown for illustrative purposes. (If the connections made or received by a J or I layer neuron involve more than one partner, a central K cell monitors the average of the activity across all the relevant J or I layer neurons.)
- a third layer of nousoids is added to the input layer J (203) and the output layer I (205).
- a dedicated “center” nousoid or K cell 207 receives input from both the input 203 and output 205 neuroids contributing to that connection.
- the center K cell 207 computes whatever quantity determines the updating of the feedforward connection (it therefore receives all signals used to calculate that update).
- Neighboring "surround" nousoids 208 and 209 of the center K cell 207 receive and compute a similar quantity for the pairs of unconnected neuroids that immediately surround the connected neuroids. That is, each K unit computes products of its J unit inputs (shown in FIGS. 2 and 3 as the inputs to the tops of the chimneys) and their I unit inputs (shown in FIGS. 2 and 3 on the bases of the chimneys). The central K unit 207 computes this product for the units contributing to the existing connection (which may include more than the one I cell shown). Flanking K units 208 and 209 compute this product for incipient connections (it is to be noted that for certain problems there may be more than the 2 flanking K units shown here).
- the central K cell 207 monitors the activity across the existing connections formed by each J or I unit, and the 2 flanking K cells 208 and 209 monitor the activity across each of the corresponding incipient connections. Interaction between center K cell 207 and flanking K cells 208 and 209 allows the computation of an "update K ratio" which is then used by the center K cell 207 to control the learning rate factor at the connection corresponding to the center K cell.
- This update ratio (“K" ratio) is calculated by the center K cell 207 by computing a ratio of its own input products to the input products of its flanking K units 208 and 209. For example, this can be done by dividing the amount by which an existing connection strengthens (i.e., its weight change) by the amount by which the incipient connections strengthen. That is:
- K ratio = (weight change of existing connections) / (weight change of incipient connections)
- the central K cell 207 computes the ratio of the existing to incipient activities, and if this ratio exceeds some threshold T (which depends on E), it sends a signal 211 to the relevant J cell (here, cell J 0 ) making the connection.
- T: a threshold which depends on E
- This signal 211 allows activity-dependent connection adjustment to occur. In the absence of this signal, the activity across the existing connection 202 caused by the training example has no effect on the existing connection 202. It is to be noted that at other connections T may be exceeded and learning allowed to occur.
- connections may gain strength not only from the activity they receive directly, but also from spillover from the strengthening of neighboring connections.
- the K cells 207, 208 and 209 act as "chaperones" for the connections between the input layer J (203) and output layer I (205), since they only allow weight adjustment to occur if it is likely that new connections formed during training will be eliminated. As a result, high E values can be used, thus allowing training to proceed rapidly.
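- The chaperoning decision of the center K cell can be sketched as a simple gate. This is a hedged illustration only: the handling of zero incipient activity and the return of a binary learning rate factor (1 or 0, matching the "learning rate factor" described above) are assumptions not fixed by the text.

```python
def k_gate(dw_existing, dw_incipient, T):
    """Center K cell: compute the update ('K') ratio and gate learning.

    Returns a learning rate factor of 1 (weight adjustment allowed) when
    K ratio = dw_existing / dw_incipient exceeds the threshold T, else 0.
    """
    if dw_incipient == 0:
        # Assumption: no incipient activity means unnecessary spread of
        # connections is unlikely, so adjustment is permitted.
        return 1
    k_ratio = dw_existing / dw_incipient
    return 1 if k_ratio > T else 0
```

Strong activity across the existing connection relative to its incipient neighbors opens the gate; the reverse keeps the connection's learning rate factor at 0, blocking uncontrolled spread.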
- the net result of this chaperoning is that the mutational spread of connections during training, which is necessary to find a final nearly optimal set of connections, is never allowed to get out of control. Overall, in FIG.
- the flanking K units 208 and 209 monitor the conjoint activity (if training is "Hebbian") of unit pairs (J0, I-1) and (J0, I1), respectively. More generally, K cells monitor whatever signal causes strengthening of existing or incipient connections (for example, the product of pre- and postsynaptic activities ("Hebb rule”), or the product of presynaptic activity and the difference between postsynaptic activity and a target value (“Delta Rule”), or, in backpropagation, a generalized Delta rule).
- Hebb rule: the product of pre- and postsynaptic activities
- Delta Rule: the product of presynaptic activity and the difference between postsynaptic activity and a target value
- FIG. 3 depicts an exemplary postsynaptic relationship neighborhood illustrating a preferred circuitry for monitoring weight adjustment of connections according to an aspect of the present invention.
- Postsynaptic mutation can also be used to convert a NND to a neural network according to the present invention, but this requires a different arrangement of chaperoning K-units.
- the neighbors of the (J0 to I0) connection 202 are the connections (J-1 to I0) (301) and (J1 to I0) (303).
- the top 2 rows of circles represent the neurons of a conventional 2 layer feedforward NND.
- the concept of "neighborhood" implies that there is some natural preferred relation between the input or output variables. Such a situation would automatically arise, for example, in a pattern recognition task, where the input variables might be image pixels.
- the central K cell 207 monitors the activity across the existing connections formed by each J or I unit, and the 2 flanking K cells 208 and 209 monitor the activity across each of the corresponding incipient connections. Interaction between center K cell 207 and flanking K cells 208 and 209 allows the computation of the update K ratio which is then used by the center K cell 207 to control the learning rate factor at the connection corresponding to the center K cell.
- the central K cell 207 computes the ratio of the existing to incipient activities, and if this ratio exceeds some threshold T, it sends a signal 305 to the relevant I cell (here, cell Io) making the connection. This signal 305 allows activity-dependent connection adjustment to occur.
- the flanking K units 208 and 209 monitor conjoint activity across unit pairs (J-1 to I0) and (J1 to I0), respectively. It is to be noted that both approaches (presynaptic and postsynaptic) to constructing neural network devices according to an aspect of the present invention could be combined in one device, though this would require two different types of K units.
- the effect of applying the chaperoning layer K is that weights are only updated if the ensuing possible creation of new connections will not seriously degrade network performance.
- a recalibration algorithm is described in "Implications of Synaptic Digitisation and Error for Neocortical Function", Neurocomputing 32-33 (2000) 673-678, authored by Kingsley J.A. Cox and Paul Adams, the disclosure of which is herein incorporated by reference.
- the recalibration algorithm comprises steps which simulate a process which occurs in the brain, for example, FIG. 4 illustrates an exemplary case where, as a result of daytime learning, an errant synapse flourishes and a relay cell transfers its allegiance to a neighbor of the original layer 4 cell (corresponds, for example, to the I cell layer).
- the connections of layer 6 cells (which correspond, for example, to the K cells) must be updated, so they can continue to limit error spread.
- This updating, which involves breaking the dotted connections 401 and 403 and making dashed connections 405 and 407, is preferably done offline, with the feedforward T-to-4 connections rendered implastic. (Layer T corresponds, for example, to the J cell layer.)
- Layer 4 to layer 6 connections, which define the columns of cortex, are permanent.
- the connections from layer T to layer 6 can be updated if the T cell fires in bursts, since this will fire the newly connected layer 4 cell, which will fire the correct layer 6 cell.
- the connection from the new layer 4 cell to its partner layer 6 cell will undergo Hebbian strengthening, and errant synapses will form onto the neighbors of that layer 6 cell, as required. If this bursting activity advances as a calibrating wave across thalamic nuclei, it will automatically update all the T-to-6 connections. Updating the return pathway to thalamus is trickier, because the new layer 6 cell must find all the relay cells that comprise the receptive field of its partner layer 4 cell.
- layer 6 cells act as correlation detectors, they can do reverse correlation analysis to determine this receptive field.
- White noise must be played into thalamus to perform offline updating of the corticothalamic connections.
- a group of relay cells fire the new layer 4 cell, so the calibrating white noise input should be in tonic mode.
- These requirements match the features of slow wave and paradoxical sleep. In the former case, traveling bursts are imposed on thalamus by its reticular nucleus, in the latter, irregularly discharging brainstem cholinergic neurons bombard relay cells with brief nicotinic epsps.
- FIG. 4 shows the start and end of the allegiance transfer, but not intermediate points, when the thalamic cell makes synapses on both layer 4 cells, which at the end of the day comprise an extended high-fitness zone. Now it is the average correlation in the high- fitness zone that should be compared to the flanking correlations, and used to control the plasticity of the thalamic neuron. This can be achieved if there is also offline updating of the lateral interactions between neighboring layer 6 cells, so that during the transfer layer 6 cells marked 0 and 1 act as a unit, comparing their average activity to that of the flanking cells (marked -1 and 2) and both feeding back to the relay cell. This (and similar updating in layers 2, 3 and 5) can again be accomplished in slow wave sleep.
- layer 6 control of postsynaptic plasticity must also loop back through thalamus, via matrix relay cells that synapse in layer 1.
- white noise input must be played into the postsynaptic cells, presumably by random matrix cell spikes fired into the apical tufts under conditions where apical epsps initiate somatic spikes.
- a system and method according to the present invention simplifies the task of setting connection strengths in NNDs by, for example, greatly reducing the set of connections that have to be adjusted.
- pointers can be used to direct the calculations which would only be performed at existing connections. These program pointers are integers which correspond to discrete memory locations. Calculations are preferably controlled by these pointers.
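- The pointer-directed sparse storage described above might look as follows in software. This is an illustrative sketch, not the patent's data layout: the `SparseLayer` class, its parallel index arrays standing in for integer pointers, and the dictionary-based output are all assumptions.

```python
from array import array

class SparseLayer:
    """Stores only existing connections, so training and inference iterate
    over the connection list rather than a full n x m weight matrix."""

    def __init__(self):
        self.src = array('i')  # integer "pointer" to input neuroid index
        self.dst = array('i')  # integer "pointer" to output neuroid index
        self.w = array('i')    # discrete synapse count per connection

    def add(self, j, i, synapses=1):
        # Register a connection from input neuroid j to output neuroid i.
        self.src.append(j)
        self.dst.append(i)
        self.w.append(synapses)

    def forward(self, x):
        # Weighted sums are computed only at existing connections;
        # unconnected input/output pairs cost nothing.
        out = {}
        for j, i, w in zip(self.src, self.dst, self.w):
            out[i] = out.get(i, 0.0) + w * x[j]
        return out
```

Because the calculations are directed by the stored indices, the work per training example scales with the (small) number of existing connections rather than with the number of possible ones.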
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2001283397A AU2001283397A1 (en) | 2000-08-16 | 2001-08-16 | Neural network device for evolving appropriate connections |
US10/333,667 US7080053B2 (en) | 2000-08-16 | 2001-08-16 | Neural network device for evolving appropriate connections |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US22558100P | 2000-08-16 | 2000-08-16 | |
US60/225,581 | 2000-08-16 |
Publications (3)
Publication Number | Publication Date |
---|---|
WO2002015125A2 WO2002015125A2 (en) | 2002-02-21 |
WO2002015125A9 true WO2002015125A9 (en) | 2003-03-27 |
WO2002015125A3 WO2002015125A3 (en) | 2004-02-26 |
Family
ID=22845434
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2001/025616 WO2002015125A2 (en) | 2000-08-16 | 2001-08-16 | Neural network device for evolving appropriate connections |
Country Status (3)
Country | Link |
---|---|
US (1) | US7080053B2 (en) |
AU (1) | AU2001283397A1 (en) |
WO (1) | WO2002015125A2 (en) |
Families Citing this family (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8010467B2 (en) * | 2003-03-24 | 2011-08-30 | Fiske Software Llc | Active element machine computation |
US8712942B2 (en) * | 2003-03-24 | 2014-04-29 | AEMEA Inc. | Active element machine computation |
US7398260B2 (en) * | 2003-03-24 | 2008-07-08 | Fiske Software Llc | Effector machine computation |
US8019705B2 (en) * | 2003-03-24 | 2011-09-13 | Fiske Software, LLC. | Register and active element machines: commands, programs, simulators and translators |
MY138544A (en) | 2003-06-26 | 2009-06-30 | Neuramatix Sdn Bhd | Neural networks with learning and expression capability |
US7519452B2 (en) * | 2004-04-15 | 2009-04-14 | Neurosciences Research Foundation, Inc. | Mobile brain-based device for use in a real world environment |
US7904398B1 (en) | 2005-10-26 | 2011-03-08 | Dominic John Repici | Artificial synapse component using multiple distinct learning means with distinct predetermined learning acquisition times |
US7953279B2 (en) | 2007-06-28 | 2011-05-31 | Microsoft Corporation | Combining online and offline recognizers in a handwriting recognition system |
US20090276385A1 (en) * | 2008-04-30 | 2009-11-05 | Stanley Hill | Artificial-Neural-Networks Training Artificial-Neural-Networks |
US10268843B2 (en) | 2011-12-06 | 2019-04-23 | AEMEA Inc. | Non-deterministic secure active element machine |
US9129222B2 (en) | 2011-06-22 | 2015-09-08 | Qualcomm Incorporated | Method and apparatus for a local competitive learning rule that leads to sparse connectivity |
US9015096B2 (en) | 2012-05-30 | 2015-04-21 | Qualcomm Incorporated | Continuous time spiking neural network event-based simulation that schedules co-pending events using an indexable list of nodes |
CN104598748B (en) * | 2015-01-29 | 2018-05-04 | 中国人民解放军军械工程学院 | A kind of computational methods of suppressive Boolean network degeneracy |
US10438112B2 (en) | 2015-05-26 | 2019-10-08 | Samsung Electronics Co., Ltd. | Method and apparatus of learning neural network via hierarchical ensemble learning |
CN107578099B (en) * | 2016-01-20 | 2021-06-11 | 中科寒武纪科技股份有限公司 | Computing device and method |
EP3436967A4 (en) * | 2016-03-30 | 2019-08-21 | C-B4 Context Based Forecasting Ltd | System, method and computer program product for data analysis |
US20170364799A1 (en) * | 2016-06-15 | 2017-12-21 | Kneron Inc. | Simplifying apparatus and simplifying method for neural network |
CN107886167B (en) * | 2016-09-29 | 2019-11-08 | 北京中科寒武纪科技有限公司 | Neural network computing device and method |
US10657426B2 (en) | 2018-01-25 | 2020-05-19 | Samsung Electronics Co., Ltd. | Accelerating long short-term memory networks via selective pruning |
US10832137B2 (en) | 2018-01-30 | 2020-11-10 | D5Ai Llc | Merging multiple nodal networks |
US11321612B2 (en) | 2018-01-30 | 2022-05-03 | D5Ai Llc | Self-organizing partially ordered networks and soft-tying learned parameters, such as connection weights |
US11461655B2 (en) | 2018-01-30 | 2022-10-04 | D5Ai Llc | Self-organizing partially ordered networks |
WO2020150415A1 (en) * | 2019-01-16 | 2020-07-23 | Nasdaq, Inc. | Systems and methods of processing diverse data sets with a neural network to generate synthesized data sets for predicting a target metric |
CN111880406B (en) * | 2020-07-14 | 2022-04-15 | 金陵科技学院 | Self-adaptive prediction control main queue management method based on Hebb learning |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5588091A (en) * | 1989-05-17 | 1996-12-24 | Environmental Research Institute Of Michigan | Dynamically stable associative learning neural network system |
GB8929146D0 (en) * | 1989-12-22 | 1990-02-28 | British Telecomm | Neural networks |
US5288645A (en) * | 1992-09-04 | 1994-02-22 | Mtm Engineering, Inc. | Hydrogen evolution analyzer |
JP2737583B2 (en) * | 1992-11-26 | 1998-04-08 | 松下電器産業株式会社 | Neural network circuit |
DE69314293T2 (en) * | 1992-12-16 | 1998-04-02 | Koninkl Philips Electronics Nv | Neural system and construction method |
US5751915A (en) * | 1993-07-13 | 1998-05-12 | Werbos; Paul J. | Elastic fuzzy logic system |
DE19611732C1 (en) * | 1996-03-25 | 1997-04-30 | Siemens Ag | Neural network weightings suitable for removal or pruning determination method |
US6424961B1 (en) * | 1999-12-06 | 2002-07-23 | AYALA FRANCISCO JOSé | Adaptive neural learning system |
-
2001
- 2001-08-16 WO PCT/US2001/025616 patent/WO2002015125A2/en active Application Filing
- 2001-08-16 US US10/333,667 patent/US7080053B2/en not_active Expired - Fee Related
- 2001-08-16 AU AU2001283397A patent/AU2001283397A1/en not_active Abandoned
Also Published As
Publication number | Publication date |
---|---|
WO2002015125A3 (en) | 2004-02-26 |
WO2002015125A2 (en) | 2002-02-21 |
US7080053B2 (en) | 2006-07-18 |
US20040128004A1 (en) | 2004-07-01 |
AU2001283397A1 (en) | 2002-02-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7080053B2 (en) | Neural network device for evolving appropriate connections | |
US11544538B2 (en) | Pulse driving apparatus for minimising asymmetry with respect to weight in synapse element, and method therefor | |
US10885424B2 (en) | Structural plasticity in spiking neural networks with symmetric dual of an electronic neuron | |
US8005773B2 (en) | System and method for cortical simulation | |
Natschläger et al. | Efficient temporal processing with biologically realistic dynamic synapses | |
US4912647A (en) | Neural network training tool | |
Sullivan et al. | Homeostatic synaptic scaling in self-organizing maps | |
US4912649A (en) | Accelerating learning in neural networks | |
Hertz et al. | Learning short synfire chains by self-organization | |
CN111882064B (en) | Method and system for realizing pulse neural network competition learning mechanism based on memristor | |
Bibbig et al. | A neural network model of the cortico-hippocampal interplay and the representation of contexts | |
US10489706B2 (en) | Discovering and using informative looping signals in a pulsed neural network having temporal encoders | |
Kratzer et al. | Neuronal network analysis of serum electrophoresis. | |
Cox et al. | Implications of synaptic digitisation and error for neocortical function | |
Pai | Fundamentals of Neural Networks | |
Nanami et al. | Spike pattern detection with close-to-biology spiking neuronal network | |
Wu et al. | Enhancing the performance of a hippocampal model by increasing variability early in learning | |
Kiselev | Statistical approach to unsupervised recognition of spatio-temporal patterns by spiking neurons | |
Koopman et al. | Dynamic neural networks, comparing spiking circuits and LSTM | |
Kim et al. | Real-time Neural Connectivity Inference with Presynaptic Spike-driven Spike Timing-Dependent Plasticity | |
Reyes et al. | Analysis of synfire chains above saturation | |
Bugmann et al. | A model for latencies in the visual system | |
Svetlik | Self-organizing maps with spiking neuron model JASTAP | |
Carota | Neural network approach to problems of static/dynamic classification | |
Zuters | Spiking Neural Networks to Detect Temporal Patterns |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A2 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A2 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
WWE | Wipo information: entry into national phase |
Ref document number: 10333667 Country of ref document: US |
|
COP | Corrected version of pamphlet |
Free format text: PAGES 1/4-4/4, DRAWINGS, REPLACED BY NEW PAGES 1/4-4/4; DUE TO LATE TRANSMITTAL BY THE RECEIVING OFFICE |
|
REG | Reference to national code |
Ref country code: DE Ref legal event code: 8642 |
|
122 | Ep: pct application non-entry in european phase | ||
NENP | Non-entry into the national phase |
Ref country code: JP |