CN115496209A - Activation normalization method, storage medium, chip and electronic device - Google Patents
- Publication number: CN115496209A
- Application number: CN202211437264.4A
- Authority
- CN
- China
- Prior art keywords
- network
- sub
- neural network
- chip
- normalization method
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/061—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using biological neurons, e.g. biological neurons connected to an integrated circuit
Abstract
The invention discloses an activation normalization method, a storage medium, a chip and an electronic device. To address the technical problem that deeper artificial neural networks cannot be effectively scaled in the prior art, the disclosed activation normalization method, applied to an artificial neural network, comprises the following steps: acquiring the output of a sub-network of the artificial neural network; acquiring the p-th percentile of that sub-network output; and dividing the synaptic weights connected to neurons in the last layer of the sub-network by the p-th percentile to update the synaptic weights. Taking a dynamically growing sub-network as the basis and updating the synaptic weights or/and biases layer by layer as the technical means, the invention solves technical problems such as accuracy degradation after conversion of a deep neural network, obtains a feedforward network that can be scaled at any depth without reducing network accuracy, and effectively improves the internal performance of a neuromorphic chip. The invention is applicable to the fields of neuromorphic chips and spiking neural networks.
Description
Technical Field
The invention relates to an activation normalization method, a storage medium, a chip and an electronic device, and in particular to an activation normalization method, a storage medium, a chip and an electronic device that can be used for constructing a deep spiking neural network.
Background
The activation function of a conventional artificial neural network (ANN) neuron is a mathematical function such as ReLU or Sigmoid, which is a high-level abstraction of biological neurons, and evaluating such functions on conventional computing platforms is a common and easy matter. Neurons in the spiking neural networks (SNNs) developed today behave more like biological neurons: when the membrane potential exceeds a threshold, a spike event is fired, much as in a biological neuron.
However, how values such as synaptic weights in a spiking neural network should be trained is a very difficult problem: if the process of spiking-neuron activation is abstracted as a mathematical function, that function is non-differentiable, so the back-propagation method — widely used in ANNs and reliant on differential calculation — becomes impractical to apply directly in SNNs.
A class of SNN schemes based on conversion from an ANN has been developed for this purpose: an ANN is trained and then converted to an SNN, in an attempt to reach or approach the performance of the ANN. Such schemes typically require the neurons in the ANN to use the ReLU activation function, which in the SNN is replaced by the firing rate of the spiking neurons (recent research results indicate that ReLU functionality can also be implemented by elaborately designed TTFS coding, which helps reduce the number of spikes and the power consumption).
Many conversion schemes normalize the output of the ANN to a maximum desired value, because the output of the SNN is similarly capped at a maximum value, being limited by the number of time steps per layer. These conversion schemes use what is proposed in prior art 1 and referred to as data-based weight normalization, a very popular scheme for normalizing network output. But this scheme degrades the performance of the normalized ANN (and hence of the converted SNN), at least because of how it scales the bias, especially when the network is deep. In other words, the scheme of prior art 1 impairs the inference accuracy of the network, in particular of a deep neural network so constructed.
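For context, the data-based weight normalization of prior art 1 rescales each layer by the maximum activation recorded on a calibration data set. A minimal sketch, assuming ReLU activations and fully connected layers; the function and variable names are illustrative, not taken from the patent:

```python
import numpy as np

def data_based_normalization(weights, biases, activations):
    """Prior-art style normalization (after Rueckauer et al., 2017):
    divide each layer by the maximum activation lambda_l recorded on a
    calibration data set; weights are also multiplied by lambda_{l-1}."""
    prev_scale = 1.0
    for l in range(len(weights)):
        scale = np.max(activations[l])               # lambda_l
        weights[l] = weights[l] * prev_scale / scale
        biases[l] = biases[l] / scale                # the bias scaling criticized here
        prev_scale = scale
    return weights, biases
```

As the background notes, it is this scaling of the bias that tends to degrade accuracy as network depth grows.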
Prior art 1: B. Rueckauer, I.-A. Lungu, Y. Hu, M. Pfeiffer, and S.-C. Liu, "Conversion of continuous-valued deep networks to efficient event-driven networks for image classification", Frontiers in Neuroscience, vol. 11, p. 682, 2017.
Therefore, a solution for improving the accuracy of the neural network, especially a neural network activation normalization solution for improving the accuracy of the deep neural network, is needed.
Disclosure of Invention
In order to solve or alleviate some or all of the technical problems, the invention is realized by the following technical scheme:
an activation normalization method applied to an artificial neural network, the activation normalization method comprising the steps of: acquiring a sub-network output of the artificial neural network; obtaining the p-th percentile of the sub-network output according to the sub-network output, wherein p is a positive number less than 100; dividing the synaptic weights connected to neurons in the last layer of the sub-network by the p-th percentile to obtain the updated synaptic weights connected to neurons in the last layer of the sub-network.
In some class of embodiments, the sub-network is the network consisting of layers 1 to l of the artificial neural network, where l is a positive integer not less than 1.
In some class of embodiments, starting with the sub-network comprising only layer 1, synaptic weights connected to neurons in the last layer of the sub-network are updated, and the sub-network is continually expanded and synaptic weights connected to neurons in the last layer of the sub-network are updated.
In some class of embodiments, starting from the sub-network comprising only layer 1, synaptic weights or/and biases connected to neurons in the last layer of the sub-network are updated, and the sub-network is continually expanded and synaptic weights or/and biases connected to neurons in the last layer of the sub-network are updated.
In one class of embodiments, the bias connected to neurons in the last layer of the sub-network is divided by s_1 × s_2 × … × s_l as the updated bias connected to neurons in the last layer of the sub-network; where s_i is the p-th percentile of the sub-network output when the sub-network comprises layers 1 to i of the artificial neural network, and 1 ≤ i ≤ l.
In certain embodiments, the artificial neural network is a feedforward type neural network.
In certain classes of embodiments, p is 99 or 99.9 or 99.99.
In some embodiments, after all layers of the artificial neural network are activated and normalized, the activated and normalized artificial neural network is converted into a spiking neural network, and the spiking neural network obtained by conversion is deployed into a neuromorphic chip.
A storage medium stores first computer instructions, or source code that can be compiled into the first computer instructions; the first computer instructions are read (directly, or after the source code is compiled) so as to execute any one of the foregoing activation normalization methods.
In certain embodiments, the storage medium further comprises second computer instructions for performing the converting step, the second computer instructions for converting the artificial neural network into a spiking neural network.
A chip is a neuromorphic chip comprising a plurality of spiking neurons, on which the spiking neural network is deployed.
In some class of embodiments, the network configuration parameters of the spiking neural network are deployed on the chip.
An electronic device is provided with the chip, and the electronic device responds according to the chip's inference result on an environmental signal.
Some or all embodiments of the invention have the following beneficial technical effects:
a feedforward network of any depth can be scaled without reducing the network's accuracy (down to numerical precision), and the internal performance of the neuromorphic chip is improved.
Further advantages will be further described in the preferred embodiments.
The technical solutions/features disclosed above are summarized in the detailed description, so their scopes may not be completely identical. The technical features disclosed in this section, together with the technical features disclosed in the subsequent detailed description and in parts of the drawings not explicitly described in the specification, disclose further aspects in reasonable mutual combination.
The technical scheme formed by combining the technical features disclosed at any position of the invention is used to support the generalization of the technical scheme, the amendment of the patent document, and the disclosure of the technical scheme.
Drawings
FIG. 1 is a schematic diagram of an artificial neural network in relation to a subnetwork;
FIG. 2 is a schematic diagram of a structure having only the first two layers of sub-networks;
FIG. 3 is a comparison graph of normalized activation effects of a 2-layer neural network;
FIG. 4 is a graph comparing normalized activation effects of a 5-layer neural network;
fig. 5 is a comparison graph of normalized activation effects of an 8-layer neural network.
Detailed Description
Since the various alternatives cannot be exhaustively described, the following clearly and completely describes the gist of the technical solution with reference to the drawings of the embodiments of the present invention. Other technical solutions and details not disclosed in detail below are generally achieved by conventional means in the art and are not described in detail herein.
Unless defined otherwise, a "/" at any position in the present disclosure means a logical "or". The ordinal numbers "first," "second," etc. in any position of the invention are used merely as distinguishing labels in description and do not imply an absolute sequence in time or space, nor that the terms in which such a number is prefaced must be read differently than the terms in which it is prefaced by the same term in another definite sentence.
The present invention may be described in terms of various elements combined into various embodiments, which may in turn be combined into various methods and articles of manufacture. In the present invention, even if a point is described only when introducing a method/product scheme, the corresponding product/method scheme is meant to explicitly include that technical feature.
The presence or inclusion of a step, module or feature at any location in the disclosure does not imply that it is the only possible presence, and those skilled in the art can derive other embodiments based on the teachings herein together with other techniques. The embodiments disclosed herein are generally preferred embodiments, but this does not imply that an embodiment opposite to a preferred one is excluded from the present invention; as long as such an opposite embodiment solves at least some technical problem of the present invention, it is intended to be covered. Based on the points described in the embodiments, those skilled in the art can apply replacement, deletion, addition, combination and reordering to some features and obtain a technical solution still following the concept of the present invention. Such configurations, which do not depart from the technical idea of the present invention, are also within its scope.
Referring to Fig. 1, an artificial neural network (ANN) to be converted into a spiking neural network (SNN) receives at least one sample input x; feature maps are obtained after various information processing, such as convolution, by the neurons of each layer of the network (each layer includes a plurality of neurons), and an inference result is finally obtained. Without loss of generality, the whole ANN is denoted N; preferably, the network N is a feedforward network. In addition, the network N, and the SNN converted from it, may be part of some larger network.
Referring to Fig. 2, a schematic diagram of the structure of the first 2 layers of the network is illustrated. For the first layer of the network (labeled 1 in the figure), the bias and synaptic weights connected to the neurons of the first layer are b_1 and w_1, respectively. The network at this point is recorded as a sub-network; the sub-network described hereafter is a dynamic, constantly changing sub-network.
The sample input x is subjected to a series of processing — weighting, biasing and activation — by the neurons in the first layer to obtain the output y_1 of the first layer (i.e., of the sub-network at that time), i.e., y_1 = f(w_1·x + b_1), where f is the neuron activation function. The sample input may be an image (such as the spike events output by an event camera), sound, vibration, physiological signals, IMU signals, and the like.
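The first-layer computation y_1 = f(w_1·x + b_1) can be sketched with ReLU as the activation f (a hypothetical minimal implementation, not code from the patent):

```python
import numpy as np

def relu(z):
    # ReLU activation: f(z) = max(z, 0)
    return np.maximum(z, 0.0)

def layer_forward(w, b, x):
    # y = f(w x + b): one fully connected layer
    return relu(w @ x + b)
```

For example, `layer_forward(np.array([[1.0, -1.0]]), np.array([0.5]), np.array([2.0, 1.0]))` yields `[1.5]`.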
The p-th percentile (p < 100) of the output y_1 of the sub-network at that time is obtained and denoted s_1. Percentile is a statistical term: for a set of n observations sorted by numerical value, the value at the p% position is called the p-th percentile; e.g., p = 99 means that s_1 is larger than 99% of the values in y_1.
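The p-th percentile itself is a one-liner in most numerical libraries; for instance, with NumPy (illustrative values):

```python
import numpy as np

y1 = np.array([0.1, 0.5, 0.9, 1.2, 3.0, 0.0, 2.4, 0.7])  # recorded outputs
s1 = np.percentile(y1, 99)  # value exceeding 99% of the entries of y1
```

With linear interpolation (NumPy's default), `s1` here is 2.958, just below the maximum 3.0 — a high percentile clips rare outliers that a plain maximum would track.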
The synaptic weights w_1 of the neurons in the first layer are normalized as w_1/s_1. Or/and (likewise below): the bias b_1 is normalized as b_1/s_1. The normalized synaptic weights and bias are recorded as w*_1 and b*_1.
The sub-network is then updated to comprise the first and second layers of the network (shown in dashed lines in Fig. 2). The sample input x again passes through the sub-network: the synaptic weights and bias before the first layer are the normalized w*_1 and b*_1, while the synaptic weights and bias before the second layer are the yet-to-be-normalized w_2 and b_2. Passing through the sub-network yields the sub-network output y_2. Preferably, the p-th percentile s_2 of the output y_2 is also obtained at this time.
The synaptic weights w_2 of the neurons in the second layer are normalized as w_2/s_2. The bias b_2 is normalized as b_2/(s_1 × s_2). The normalized synaptic weights and bias are recorded as w*_2 and b*_2.
The sub-network is then updated to further include the third layer of the network; similarly, the synaptic weights w_3 of the neurons in the third layer are normalized as w_3/s_3, and the bias b_3 as b_3/(s_1 × s_2 × s_3), where s_3 is the p-th percentile of the sub-network output y_3 at that time. The updates of the synaptic weights and biases of subsequent layers follow by analogy.
With continued reference to Fig. 1, without loss of generality, suppose the sub-network of the network includes layers 1 to l (l a positive integer not less than 1), the output of the sub-network for the sample input x is y_l, and the p-th percentile of y_l is s_l. Then the synaptic weights w_l and bias b_l of the l-th layer are normalized as w_l/s_l and b_l/(s_1 × s_2 × … × s_l), respectively (in other words, b_l is divided by the product of all s_i for 1 ≤ i ≤ l). If the l-th layer is the last layer of the network, the activation of the whole network is fully normalized by this processing.
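The full layer-by-layer procedure — grow the sub-network by one layer, take the p-th percentile s_l of its output, divide w_l by s_l and b_l by the running product s_1 × … × s_l — might be sketched as follows, assuming ReLU activations and fully connected layers; all names are illustrative:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def activation_normalize(weights, biases, x, p=99.9):
    """Sketch of the disclosed scheme: normalize layer l by the p-th
    percentile of the growing sub-network's output; earlier layers are
    already normalized when layer l's percentile is measured."""
    cum = 1.0                            # running product s_1 * ... * s_l
    for l in range(len(weights)):
        y = x                            # forward pass through layers 1..l+1
        for wl, bl in zip(weights[:l + 1], biases[:l + 1]):
            y = relu(wl @ y + bl)
        s = np.percentile(y, p)          # s_l for the current sub-network
        cum *= s
        weights[l] = weights[l] / s      # w_l <- w_l / s_l
        biases[l] = biases[l] / cum      # b_l <- b_l / (s_1 * ... * s_l)
    return weights, biases
```

In practice `x` would be a batch of calibration samples rather than a single input, and the percentile would be taken over all recorded activations of the layer.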
The value of p often depends on the learned synaptic weights or biases. Preferably, p is 99, 99.9 or 99.99. There may be many more sample inputs than one; more inputs often contribute to network accuracy.
The network after the activation normalization processing is converted into a corresponding SNN, which is then deployed into a neuromorphic chip. The network obtained by the activation normalization processing method provided by the invention supports conversion to a deeper SNN with stable performance; a deep spiking neural network can therefore be constructed in a neuromorphic chip. The invention improves the internal performance of neuromorphic chips, at least in terms of accuracy and depth. In addition, the technical means for converting the ANN into the corresponding SNN may be any reasonable manner, which the invention does not limit.
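The ANN-to-SNN conversion step itself is left open by the patent; one common choice (an assumption here, not the patent's prescription) is to map each normalized ReLU unit to an integrate-and-fire neuron whose firing rate approximates the activation:

```python
import numpy as np

def if_layer_rate(w, b, input_rates, T=1000, v_th=1.0):
    """Simulate one layer of integrate-and-fire neurons driven by constant
    input rates for T time steps; returns the output firing rates."""
    v = np.zeros(w.shape[0])             # membrane potentials
    spikes = np.zeros(w.shape[0])
    for _ in range(T):
        v += w @ input_rates + b         # constant input current per step
        fired = v >= v_th
        spikes += fired
        v[fired] -= v_th                 # reset by subtraction
    return spikes / T
```

Because the firing rate is capped at one spike per time step, activations must first be scaled into [0, 1] — which is exactly what the percentile-based normalization provides.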
Referring to Figs. 3, 4 and 5, the effects after activation normalization by the method of prior art 1 and by the present invention are compared at different network depths (layers 2, 5 and 8, respectively). The horizontal axis is the activation of the original ANN layer, and the vertical axis is the activation of the same neurons in the normalized layer. Each color represents a channel; if all activations are scaled correctly, the points should form a diagonal line.
Fig. 3 corresponds to the activation of layer 2; in such shallow networks, prior art 1 also achieves scaling well. Fig. 4 corresponds to the activation of layer 5, where prior art 1 has already begun to deviate significantly from the ideal activation result.
Fig. 5 shows the activation of the deeper 8th layer. Prior art 1 causes information loss in deeper network structures and hence larger deviations, which means it is difficult for prior art 1 to construct a deep neural network. As the figures show, the invention maintains a good scaling function regardless of network depth, thereby making the construction of a deep neural network possible.
In addition, the invention also discloses a storage medium storing first computer instructions, or source code that can be compiled into the first computer instructions; the first computer instructions are read (or read after the source code is compiled) to execute any one of the foregoing activation normalization methods for an artificial neural network.
Further, the storage medium further comprises a second computer instruction for executing the converting step, wherein the second computer instruction is used for converting the artificial neural network into the impulse neural network.
A chip is a neuromorphic chip comprising a plurality of spiking neurons, on which the spiking neural network obtained by the conversion is deployed. Specifically, the network configuration parameters of the spiking neural network are deployed on the chip.
An electronic device is configured with the chip and responds to the chip's inference result on an environmental signal. Owing to the low-power chip, the electronic device can realize always-on monitoring of the environment.
Although the present invention has been described herein with reference to particular features and embodiments thereof, various modifications, combinations and substitutions are possible without departing from its scope. The scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification; the methods and means may be practiced in association with, interdependent on, or interoperative with one or more other products or methods.
Therefore, the specification and drawings should be considered simply as a description of some embodiments of the technical solutions defined by the appended claims, and therefore the appended claims should be interpreted according to the principles of maximum reasonable interpretation and are intended to cover all modifications, variations, combinations, or equivalents within the scope of the disclosure as possible, while avoiding an unreasonable interpretation.
To achieve better technical results or for certain applications, a person skilled in the art may make further improvements on the technical solution based on the present invention. However, even if the partial improvement/design is inventive or/and advanced, the technical idea of the present invention is covered by the technical features defined in the claims, and the technical solution is also within the protection scope of the present invention.
Several technical features mentioned in the attached claims may have alternative technical features or may be rearranged with respect to the order of certain technical processes, materials organization, etc. Those skilled in the art can easily understand the alternative means, or change the sequence of the technical process and the material organization sequence, and then adopt substantially the same means to solve substantially the same technical problems to achieve substantially the same technical effects, so that even if the means or/and the sequence are explicitly defined in the claims, the modifications, changes and substitutions shall fall within the protection scope of the claims according to the equivalent principle.
The method steps or modules described in connection with the embodiments disclosed herein may be embodied in hardware, software, or a combination of both, and the steps and components of the embodiments have been described in a functional generic manner in the foregoing description for the sake of clarity in describing the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
Claims (10)
1. An activation normalization method applied to an artificial neural network is characterized in that: the activation normalization method comprises the following steps:
acquiring sub-network output of the artificial neural network;
obtaining the p-th percentile of the sub-network output according to the sub-network output, wherein p is a positive number less than 100;
dividing the synaptic weights connected to neurons in the last layer of the sub-network by the p-th percentile to obtain the updated synaptic weights connected to neurons in the last layer of the sub-network.
2. The activation normalization method according to claim 1, characterized in that: the sub-network is a network consisting of layers 1 to l of the artificial neural network, where l is a positive integer not less than 1.
3. The activation normalization method according to claim 2, characterized in that:
starting from the sub-network comprising only layer 1, the synaptic weights connected to neurons in the last layer of the sub-network are updated, and the sub-network is continuously enlarged and the synaptic weights connected to neurons in the last layer of the sub-network are updated.
4. Activation normalization method according to claim 2, characterized in that:
starting from the sub-network comprising only layer 1, updating the synaptic weights or/and biases connected to neurons in the last layer of the sub-network, and expanding the sub-network and updating the synaptic weights or/and biases connected to neurons in the last layer of the sub-network.
5. Activation normalization method according to claim 4, characterized in that:
the bias connected to neurons in the last layer of the sub-network is divided by s_1 × s_2 × … × s_l as the updated bias connected to neurons in the last layer of the sub-network; where s_i is the p-th percentile of the sub-network output when the sub-network comprises layers 1 to i of the artificial neural network, and 1 ≤ i ≤ l.
6. Activation normalization method according to any one of claims 1 to 5, characterized in that:
and after all layers of the artificial neural network are activated and normalized, converting the activated and normalized artificial neural network into a pulse neural network, and deploying the pulse neural network obtained by conversion into a neuromorphic chip.
7. A storage medium, characterized by: the storage medium stores first computer instructions, or source code that can be compiled into the first computer instructions; the first computer instructions are read directly, or after the source code is compiled, so as to execute the activation normalization method according to any one of claims 1 to 6.
8. The storage medium of claim 7, wherein: the storage medium further includes second computer instructions for performing the converting step, the second computer instructions for converting the artificial neural network into a spiking neural network.
9. A chip, the chip is a neuromorphic chip, characterized in that:
the neuromorphic chip including a plurality of spiking neurons and having the storage medium of claim 8 disposed therein.
10. An electronic device, characterized in that:
the electronic device is configured with a chip according to claim 9, the electronic device being responsive to a result of an inference of an environmental signal by said chip.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211437264.4A CN115496209B (en) | 2022-11-16 | 2022-11-16 | Activation normalization method, storage medium, chip and electronic device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115496209A (en) | 2022-12-20
CN115496209B CN115496209B (en) | 2023-08-08 |
Family
ID=85115968
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211437264.4A Active CN115496209B (en) | 2022-11-16 | 2022-11-16 | Activation normalization method, storage medium, chip and electronic device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115496209B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140379623A1 (en) * | 2013-06-19 | 2014-12-25 | Brain Corporation | Apparatus and methods for processing inputs in an artificial neuron network |
US20150269479A1 (en) * | 2014-03-24 | 2015-09-24 | Qualcomm Incorporated | Conversion of neuron types to hardware |
US9552546B1 (en) * | 2013-07-30 | 2017-01-24 | Brain Corporation | Apparatus and methods for efficacy balancing in a spiking neuron network |
US20180121802A1 (en) * | 2016-11-02 | 2018-05-03 | Samsung Electronics Co., Ltd. | Method of converting neural network and recognition apparatus using the same |
US20180174033A1 (en) * | 2016-12-20 | 2018-06-21 | Michael I. Davies | Population-based connectivity architecture for spiking neural networks |
WO2020115539A1 (en) * | 2018-12-07 | 2020-06-11 | Telefonaktiebolaget Lm Ericsson (Publ) | System, method and network node for generating at least one classification based on machine learning techniques |
CN112116010A (en) * | 2020-09-21 | 2020-12-22 | 中国科学院自动化研究所 | ANN-SNN conversion classification method based on membrane potential pretreatment |
CN114662644A (en) * | 2021-11-03 | 2022-06-24 | 北京理工大学 | Image identification method of deep pulse neural network based on dynamic threshold neurons |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11126913B2 (en) * | 2015-07-23 | 2021-09-21 | Applied Brain Research Inc | Methods and systems for implementing deep spiking neural networks |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |