CN115169547B - Neuromorphic chip and electronic device - Google Patents

Neuromorphic chip and electronic device

Info

Publication number
CN115169547B
CN115169547B (application CN202211101286.3A)
Authority
CN
China
Prior art keywords
lif
synaptic
layer
time constant
neurons
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211101286.3A
Other languages
Chinese (zh)
Other versions
CN115169547A (en)
Inventor
Hannah Bos
Dylan Richard Muir
Yannan Xing
Yalun Hu
Bo Li
Yuya Ling
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Shizhi Technology Co ltd
Original Assignee
Shenzhen Shizhi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Shizhi Technology Co ltd filed Critical Shenzhen Shizhi Technology Co ltd
Priority to CN202211101286.3A
Publication of CN115169547A
Application granted
Publication of CN115169547B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/76Architectures of general purpose stored program computers
    • G06F15/78Architectures of general purpose stored program computers comprising a single central processing unit
    • G06F15/7807System on chip, i.e. computer system on a single chip; System in package, i.e. computer system on one or more chips in a single package
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/04Inference or reasoning models

Abstract

The invention discloses a neuromorphic chip and an electronic device. To solve the technical problem of performing inference on environmental signals with low power consumption, low cost, and high accuracy, the invention places an easy-to-train, feed-forward deep spiking neural network with a pyramid of synaptic time constants in a neuromorphic chip. By arranging LIF neurons with different synaptic time constants in different layers of the network, the invention solves the problem of efficient, intelligent processing of time-domain signals and achieves low power consumption, low cost, and high accuracy. The invention is applicable to the fields of brain-inspired chips and brain-inspired computing.

Description

Neuromorphic chip and electronic device
Technical Field
The invention relates to a neuromorphic chip and an electronic device, and in particular to a neuromorphic chip and an electronic device capable of processing time-domain signals with ultra-low power consumption.
Background
When a conventional deep neural network (DNN) processes a time-domain signal, it either buffers the input data, with timing information removed, in a window of roughly 40 milliseconds and treats the entire window as a single frame, or it applies a complex recurrent dynamical network such as an LSTM. Compared with these artificial neural networks (ANNs), a spiking neural network (SNN) has dynamics over multiple time-domain-related states and configurable time scales, which gives SNNs a natural advantage in processing time-domain signals. Using these dynamics, existing schemes for extracting features from time-domain signals either project the input randomly (as in prior art 1) or build a recurrent network from carefully chosen time-domain features (as in prior art 2).
Prior art 1: blouw P, choo X, hunsberger E, et al, benchmark search functional on neurological search [ C ]// Proceedings of the 7th annual-embedded computerized projects work 2019: 1-8.
Prior art 2: voelker A, kaji \263I, eliasmith C. Legendre memory units, continuous-time representation in recurrent neural networks [ J ]. Advances in neural information processing systems, 2019, 32.
An alternative is to construct a feed-forward network in which individual spiking units are tuned to various frequency ranges by choosing the time constants of synapses and cell membranes, as in prior art 3; this has the advantage of being easier to train than the aforementioned recurrent networks.
Prior art 3: weidel P, sheik S. WaveSense: effective Temporal concentrations with Spiking Neural Networks for Keyword Spotting [ J ]. ArXiv preprinting arXiv:2111.01456, 2021.
A time-domain signal is a low-dimensional signal. Both the technology and the market require such signals to be processed with the lowest possible power consumption and resource usage and the highest possible accuracy. For example, if the location of a device can be detected accurately and at low power from sound, it becomes possible to select optimal noise-reduction filter parameters.
Based on the above, the invention discloses a neuromorphic chip and an electronic device with ultra-low power consumption, low cost, and high accuracy.
Disclosure of Invention
In order to solve or alleviate some or all of the technical problems, the invention is realized by the following technical scheme:
a neuromorphic chip comprises a plurality of LIF neurons, at least N layers are constructed by the LIF neurons, and N is a positive integer; the N layers form a feedforward network, and the front layer of the N layers sends pulses to the back layer; the LIF neuron has a synaptic state associated with the LIF neuron; in the feedforward network, a synaptic time constant set or an attenuation parameter set associated with all LIF neurons in a previous layer is a proper subset of the synaptic time constant set or the attenuation parameter set associated with all LIF neurons in a next layer; wherein the synaptic time constant or decay parameter is used to perform a decay operation on the value of the synaptic state.
"Proper subset" here means that the set of synaptic time constants or decay parameters in the preceding layer has fewer elements than the corresponding set in the succeeding layer. In other words, the set of synaptic time constants or decay parameters in the succeeding layer is larger, containing new elements not present in the preceding layer.
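For illustration only (this is not part of the claimed subject matter), the proper-subset constraint can be sketched in a few lines of Python; the millisecond values below are assumptions chosen to mirror the doubling pattern described later in the detailed description.

# Sketch of the time-constant "pyramid": each layer's set of synaptic time
# constants must be a proper subset of the next layer's set.
layer_tau_syn_ms = [
    {2, 4},                           # layer 1 (assumed values)
    {2, 4, 8, 16},                    # layer 2
    {2, 4, 8, 16, 32, 64, 128, 256},  # layer 3
]

def is_time_constant_pyramid(layers):
    """True if every layer's set is a proper subset of the following layer's set."""
    return all(prev < nxt for prev, nxt in zip(layers, layers[1:]))  # '<' tests proper subset

assert is_time_constant_pyramid(layer_tau_syn_ms)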
In some embodiments, the decay parameter and the synaptic time constant may be obtained from each other by a logarithmic or exponential conversion relationship.
In certain embodiments, the synaptic state associated with the LIF neuron is a synaptic current.
In certain embodiments, the LIF neuron has a membrane state associated with the LIF neuron, the membrane state being a membrane potential.
In some embodiments, the decay operation on the value of the synaptic state is performed by bit shifting, according to the synaptic time constant or decay parameter; when further decay of the value of the synaptic state cannot be achieved by bit shifting, linear decay is applied to the value of the synaptic state.
Bit shifting here generally refers to a right shift, i.e., the value becomes smaller; the shifted value is the value to be decayed.
In certain embodiments, the membrane time constants of the LIF neurons are the same.
The LIF neurons are neurons distributed in N layers.
In a certain class of embodiments, the neuromorphic chip is configured to process an environmental signal; the environmental signal is processed by at least a filter array, finally converted into pulses, and transmitted to a spiking neural network formed at least by the feed-forward network for inference.
In a certain class of embodiments, the elements of the synaptic time constant set associated with all LIF neurons in any layer of the feed-forward network, when arranged from small to large, follow an exponential growth relationship; and/or
the elements of the decay parameter set associated with all LIF neurons in any layer of the feed-forward network, when arranged from small to large, follow a linear growth relationship.
In a certain class of embodiments, in the feed-forward network, the number of elements in the synaptic time constant set or decay parameter set associated with all LIF neurons in the preceding layer is half the number of elements in the corresponding set associated with all LIF neurons in the succeeding layer.
The preceding layer here means any preceding layer among the N layers. If the time constants of the preceding layer are taken from {τ1, τ2}, then those of the succeeding layer are taken from {τ1, τ2, τ3, τ4}: this satisfies both the proper-subset requirement and the element-count requirement.
An electronic device, comprising the neuromorphic chip as set forth in any one of the preceding claims, for performing inference on an environmental signal to obtain an inference result; the electronic device responds according to the inference result.
Some or all embodiments of the invention have the following beneficial technical effects:
1) Time domain signal processing is realized with ultra-low power consumption;
2) By setting particular synaptic time constants and distributing them across the network, the LIF neurons are made sensitive to specific information time scales, so a feed-forward network can be used directly and a hard-to-train recurrent network is avoided;
3) High inference accuracy, low latency, and small chip area.
Further advantages will be described in the preferred embodiments.
The technical solutions/features disclosed above are summarized in the detailed description, so their scopes may not be exactly identical. The technical features disclosed in this section, together with the technical features disclosed in the subsequent detailed description and in parts of the drawings not explicitly described in the specification, disclose further aspects in mutually reasonable combinations.
The technical solution formed by combining technical features disclosed at any position of the invention is used to support generalization of the technical solution, amendment of the patent document, and disclosure of the technical solution.
Drawings
FIG. 1 is a schematic diagram of a digital LIF neuron;
FIG. 2 is a pseudo-code flow diagram of a bit-shifting low power consumption decay scheme;
FIG. 3 is a flow diagram of the overall processing of an ambient signal in certain embodiments of the invention;
fig. 4 is a delay profile of the classification.
Detailed Description
Since all alternatives cannot be described exhaustively, the following clearly and completely describes the gist of the technical solution in the embodiments of the present invention with reference to the drawings of those embodiments. It should be understood that the invention is not limited to the details disclosed herein, which may vary widely from one implementation to another.
Unless defined otherwise, a "/" at any position in the present disclosure means a logical "or". The ordinal numbers "first", "second", etc. at any position in the invention are used merely as distinguishing labels in the description and do not imply an absolute order in time or space, nor do they imply that a term preceded by such an ordinal must be the same as, or different from, the same term preceded by an ordinal in another sentence.
The present invention may be described in terms of various elements combined into various embodiments, which may in turn be combined into various methods and articles of manufacture. In the present invention, even if a point is described only when introducing a method/product scheme, this means that the corresponding product/method scheme explicitly includes the technical feature.
When a step, module, or feature is described as being present or included at any position in the present invention, it is not implied that such presence is exclusive; other embodiments can also be realized by the technical solution disclosed in the present invention together with other technical means. The embodiments disclosed herein are generally given for the purpose of disclosing preferred embodiments, but this does not imply that embodiments contrary to the preferred ones are excluded from the present invention: as long as such a contrary embodiment solves at least some of the technical problems of the present invention, it is intended to be covered by the present invention. Based on the points described in the embodiments of the present invention, those skilled in the art can apply substitution, deletion, addition, combination, and reordering to some technical features to obtain a technical solution that still follows the concept of the present invention; such solutions, which do not depart from the technical idea of the invention, are also within its scope of protection.
A neuromorphic chip comprises a number of circuits that mimic the function of neurons, preferably LIF neurons. The neurons can be realized with digital circuits, analog circuits, mixed-signal circuits, optoelectronic devices (i.e., optical spiking neural network processors), or memristors/ReRAM; the present invention does not limit the implementation of the neuromorphic chip.
As an example, the construction of a digital LIF neuron may refer to Fig. 1. Each digital LIF neuron includes an integer synaptic state (e.g., the synaptic current I_syn) and a membrane state (e.g., the membrane voltage/membrane potential V_mem of the cell membrane); for example, both are 16 bits, and the synaptic weight may be 8 bits.
Preferably, the decay of the synaptic state or/and the membrane state can be realized by bit shifting, which has the advantage of replacing multiplication operations, which are very resource- and power-consuming, with shifts and additions/subtractions. That is, the current value (value) of the synaptic state or/and membrane state is decremented by the current value right-shifted by dash bits (value >> dash), yielding the updated value: new_value = value - (value >> dash). Here dash is the result of converting the time constant τ into a decay parameter: dash = log2(τ/dt), where dt is the simulation time step.
As a further improvement of the invention, linear decay is performed once value >> dash becomes 0. For example, the current value of the synaptic state or/and membrane state is decremented by 1 to obtain the updated value. The bit-shifting method may specifically refer to the pseudo code shown in Fig. 2. The advantage of this scheme is that the parameter decay is achieved with ultra-low power consumption, which is especially significant when the number of neurons is large.
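As an illustration of the scheme above (a sketch following the description and the Fig. 2 pseudo code, not an authoritative implementation; the variable names and example values are assumptions), a Python version of the shift-based decay with linear fallback could look as follows.

import math

def dash_from_tau(tau, dt):
    """Convert a time constant tau into the shift-based decay parameter: dash = log2(tau / dt)."""
    return int(round(math.log2(tau / dt)))

def decay_step(value, dash):
    """One decay step of a non-negative integer synaptic (or membrane) state.

    new_value = value - (value >> dash); once value >> dash is 0, fall back to
    linear decay by subtracting 1, never going below 0.
    """
    if value <= 0:
        return 0
    delta = value >> dash
    if delta == 0:
        delta = 1  # linear decay once bit shifting can no longer attenuate the value
    return value - delta

# Example: tau = 16 ms with dt = 1 ms gives dash = 4
dash = dash_from_tau(16e-3, 1e-3)
state = 1000
for _ in range(5):
    state = decay_step(state, dash)  # 1000 -> 938 -> 880 -> 825 -> 774 -> 726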
Illustratively, the mathematical description of the LIF neuron is:

$\tau_{syn}\,\dot{I}_{syn}(t) = -I_{syn}(t) + x(t)$

$\tau_{mem}\,\dot{V}_{mem}(t) = -V_{mem}(t) + I_{syn}(t) + b$

$z(t) = \sum_{k}\delta(t - t_k)$

where x(t) is the weighted input event, I_syn is the value of the synaptic state, V_mem is the value of the membrane state, τ_mem is the membrane time constant, τ_syn is the synaptic time constant, θ is the pulse firing threshold, and b is the bias. z(t) is the output: a pulse (event) is emitted at time t_k whenever the membrane voltage of the LIF neuron exceeds the pulse firing threshold θ, and the dot above a symbol denotes the time derivative.
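For illustration, a discrete-time Python sketch of a single LIF update consistent with the equations above is given below (Euler discretization with step dt; the reset-to-zero after a spike is an assumption, since the reset rule is not spelled out here).

def lif_step(i_syn, v_mem, x_t, tau_syn, tau_mem, b, theta, dt=1e-3):
    """One Euler step of the LIF dynamics; returns the updated (i_syn, v_mem, spike)."""
    i_syn = i_syn + (dt / tau_syn) * (-i_syn + x_t)        # synaptic state: decay plus weighted input
    v_mem = v_mem + (dt / tau_mem) * (-v_mem + i_syn + b)  # membrane state driven by synaptic current and bias
    spike = v_mem > theta                                  # pulse emitted when the threshold is exceeded
    if spike:
        v_mem = 0.0                                        # assumed reset-to-zero after firing
    return i_syn, v_mem, spike

# Example usage with assumed parameters (2 ms membrane, 4 ms synapse)
i_syn, v_mem = 0.0, 0.0
for t in range(50):
    x_t = 1.0 if t % 10 == 0 else 0.0                      # sparse weighted input events
    i_syn, v_mem, spike = lif_step(i_syn, v_mem, x_t, 4e-3, 2e-3, 0.0, 0.5)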
As part of the overall signal processing chain, the incoming environmental signal is preferably first passed through a filter array. The filter array may convert the analog signal into pulses (spike trains) in any reasonable way. For example, the filter array is implemented with analog circuits and comprises 16 filter units whose center frequencies differ and are distributed between 50 Hz and 8000 Hz. Finally, this information is encoded by IAF neurons into pulse sequences and fed into the SNN formed by a number of the aforementioned LIF neurons. Of course, the pre-processing of the signal can also be achieved by digital signal processing.
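A hedged sketch of this front-end encoding step follows: per-channel filter energy is accumulated by an integrate-and-fire (IAF) unit that emits a pulse and subtracts the threshold whenever the accumulator crosses it. The threshold value and the subtract-threshold reset are assumptions for illustration; the actual encoder circuit is not specified here.

def iaf_encode(channel_energy, threshold=15):
    """Convert a sequence of quantized channel-energy samples (e.g. 4-bit values)
    into a pulse count per time step using an integrate-and-fire accumulator."""
    acc = 0
    spikes_per_step = []
    for e in channel_energy:
        acc += e
        n = acc // threshold      # pulses emitted during this step
        acc -= n * threshold      # subtract-threshold reset keeps the residue
        spikes_per_step.append(n)
    return spikes_per_step

# Example: one of the 16 filter channels, energies quantized to 4 bits
print(iaf_encode([3, 7, 12, 0, 9, 15]))  # -> [0, 0, 1, 0, 1, 1]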
The SNN is preferably a feed-forward network and comprises N layers. For example, N =3, but may be any number of reasonable layers such as 2, 4, or 5. The former layer of the N layers issues pulses to the latter layer, for example, the first layer (former layer) issues pulses to the second layer (latter layer), the second layer (former layer) issues pulses to the third layer (latter layer), and so on.
The SNN formed by LIF neurons in the neuromorphic chip disclosed in the present invention may be part of an information processing network or of an information processing method/apparatus; for example, it may be part of a larger SNN in the neuromorphic chip, or it may use only part of the neuron resources in the chip. The present invention is not limited to the specific embodiments disclosed herein.
Referring to Fig. 3, the overall processing flow of the environmental signal in the present invention is disclosed. After processing by the filter array, the single-channel environmental signal is divided into a plurality of channels, for example 16 channels, over a frequency range of 50 Hz to 8000 Hz. The energy in each channel is quantized to 4 bits (for example) and then fed into the SNN. The SNN comprises at least 3 layers (also referred to as 3 hidden layers), namely a first layer, a second layer, and a third layer, which in turn form a feed-forward network (or part of a larger network, so additional layers may be present). Each of the first to third layers includes a number of LIF neurons (without excluding other types of neurons).
For example, the first layer through the third layer each include 24 LIF neurons, the LIF neurons in each layer have synaptic time constants, and the synaptic weight matrices projected onto the neurons of each layer are W_1, W_2, and W_3. In certain embodiments, the membrane time constant τ_mem of each layer of LIF neurons may be fixed at 2 ms.
In an improved aspect of the present invention, in the feed-forward network, the set of synaptic time constants or decay parameters associated with all LIF neurons in the preceding layer is a proper subset of the set associated with all LIF neurons in the succeeding layer. For example, for a 3-layer network, the synaptic time constants τ_syn of the LIF neurons are set in the range of 2 ms to 256 ms (for example). Preferably, the synaptic time constants of the LIF neurons in the first layer include only two values: 2 ms and 4 ms (τ1–τ2 in Fig. 3); the synaptic time constants of the LIF neurons in the second layer include only four values: 2 ms, 4 ms, 8 ms, and 16 ms (τ1–τ4 in Fig. 3); and the synaptic time constants of the LIF neurons in the third layer include eight values: 2 ms, 4 ms, 8 ms, 16 ms, 32 ms, 64 ms, 128 ms, and 256 ms (τ1–τ8 in Fig. 3). In this preferred embodiment, the number of elements in the synaptic time constant set or decay parameter set associated with all LIF neurons in the preceding layer is half the number of elements in the corresponding set associated with all LIF neurons in the succeeding layer.
In other words, in this class of embodiments, as the layer depth increases (advancing along the pulse delivery direction), both the variety of synaptic time constants (the number of distinct values, e.g., 2, 4, and 8 in the example above) and their distribution range (the span of values, e.g., 2–4 ms, 2–16 ms, and 2–256 ms in the example above) increase progressively; for example, the synaptic time constants grow exponentially in nature. In the foregoing preferred embodiment, the number of elements in the set associated with the preceding layer is half that of the succeeding layer. Figuratively speaking, the SNN is a spiking neural network with a pyramid of synaptic time constants.
This arrangement of synaptic time constants allows multiple time scales to be sensed and processed, and avoids the use of a recurrent network and the drawbacks that it brings. In the present invention, the first layer includes LIF neurons with short synaptic time constants, the third layer includes LIF neurons with short to long synaptic time constants, and the second layer lies in between. Based on the teaching of the foregoing examples of the present invention, the arrangement of synaptic time constants can be configured in other ways as needed, and the present invention is not limited in this respect.
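For concreteness, the per-layer assignment described above can be sketched as follows (the helper names, the 24-neuron layer size, and the cyclic assignment of time constants to neurons are illustrative assumptions, not the patent's configuration interface).

def pyramid_time_constants(n_layers=3, base_ms=2.0, first_layer_count=2):
    """Per-layer synaptic time constant lists: each layer doubles the number of
    available constants, and each new constant doubles the previous one."""
    layers = []
    for i in range(n_layers):
        count = first_layer_count * (2 ** i)
        layers.append([base_ms * (2 ** k) for k in range(count)])
    return layers

def assign_to_neurons(taus, n_neurons=24):
    """Cycle the layer's time constants over its neurons."""
    return [taus[j % len(taus)] for j in range(n_neurons)]

layer_taus = pyramid_time_constants()               # [[2, 4], [2, 4, 8, 16], [2, ..., 256]]
per_neuron_taus = [assign_to_neurons(t) for t in layer_taus]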
For the readout layer, the number of neurons may be set according to the number of classes in the classification task, e.g., 4 classes correspond to 4 neurons. W_4 is the synaptic weight matrix projected onto the neurons of the readout layer, which are optionally also LIF neurons. Biases are optional and may be included in some embodiments and omitted in others.
The SNN is ultimately deployed in the neuromorphic chip, but to obtain better on-chip inference accuracy, training needs to be performed on a training device (a GPU, etc.), and the trained network configuration parameters are then deployed in the neuromorphic chip, so that the chip can perform inference on an environmental signal (in particular, a time-domain signal). An optional training scheme is the surrogate gradient method.
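As a minimal, non-authoritative sketch of the surrogate-gradient idea mentioned above (using PyTorch; the boxcar surrogate and its width of 0.5 are assumptions), the non-differentiable firing threshold can be given a smooth gradient for off-chip training before the parameters are deployed to the chip.

import torch

class SurrogateSpike(torch.autograd.Function):
    """Hard threshold in the forward pass, boxcar surrogate gradient in the backward pass."""

    @staticmethod
    def forward(ctx, v_minus_theta):
        ctx.save_for_backward(v_minus_theta)
        return (v_minus_theta > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v_minus_theta,) = ctx.saved_tensors
        window = (v_minus_theta.abs() < 0.5).float()  # assumed surrogate width of 0.5
        return grad_output * window

spike_fn = SurrogateSpike.apply  # drop-in replacement for the non-differentiable threshold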
Finally, for classifying acoustic scenes (car, street, café, and home), the training-set accuracy is 98.8%, the validation-set accuracy is 98.7%, and the test accuracy after deployment to the chip is 98%. More importantly, the power consumption of the on-chip SNN core can be as low as a few microwatts. The median latency of the whole system is 100 ms, far below the time scale on which scenes typically change; see the classification latency profile shown in Fig. 4.
An electronic device, such as a headset or a mobile phone, is provided with the neuromorphic chip; the chip classifies environmental signals, and the device responds according to the result, for example by adjusting noise-reduction filter parameters to best match the environment. The electronic product thus acquires a notable intelligent capability at low cost.
Although the present invention has been described herein with reference to particular features and embodiments, various modifications, combinations, and substitutions are possible without departing from its scope. The scope of the application is not limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods, and steps described in the specification, and the methods and means described may be practiced in association with, interdependently with, interoperably with, or after one or more other products or methods.
Therefore, the specification and drawings should be regarded simply as a description of some embodiments of the technical solutions defined by the appended claims, and the appended claims should be interpreted according to the principle of broadest reasonable interpretation, covering all modifications, variations, combinations, and equivalents within the scope of the disclosure while avoiding unreasonable interpretations.
To achieve better technical results or to meet the needs of certain applications, a person skilled in the art may further improve the technical solution on the basis of the present invention. However, even if such a partial improvement/design is inventive or/and advanced, as long as the technical solution falls within the technical features defined by the claims under the technical idea of the present invention, it is also within the protection scope of the present invention.
Several technical features mentioned in the appended claims may have alternatives, or the order of certain technical processes or the organization of materials may be rearranged. Those skilled in the art can easily conceive of such alternatives or reorderings and then use substantially the same means to solve substantially the same technical problems and achieve substantially the same technical effects; therefore, even if the means or/and order are explicitly defined in the claims, such modifications, changes, and substitutions shall fall within the protection scope of the claims according to the doctrine of equivalents.
The method steps or modules described in connection with the embodiments disclosed herein may be implemented in hardware, software, or a combination of both, and the steps and components of the embodiments have been described generically in functional terms in the foregoing description in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention as claimed.

Claims (10)

1. A neuromorphic chip comprising a plurality of LIF neurons, the LIF neurons forming at least N layers, N being a positive integer; characterized in that:
the N layers form a feedforward network, and a front layer of the N layers sends pulses to a rear layer;
the LIF neuron has a synaptic state associated with the LIF neuron;
in the feedforward network, a synapse time constant set or an attenuation parameter set associated with all LIF neurons in a previous layer is a proper subset of a synapse time constant set or an attenuation parameter set associated with all LIF neurons in a next layer;
wherein the synaptic time constant or decay parameter is used to perform a decay operation on the value of the synaptic state.
2. The neuromorphic chip of claim 1, wherein:
the decay parameter and the synaptic time constant may be related to each other by a logarithmic or exponential transformation.
3. The neuromorphic chip of claim 1, wherein:
the synaptic state associated with the LIF neuron is synaptic current.
4. The neuromorphic chip of claim 1, wherein:
the LIF neuron has a membrane state associated with the LIF neuron, the membrane state being a membrane potential.
5. The neuromorphic chip of claim 1, wherein:
implementing attenuation operation on the value of the synapse state by means of bit shifting according to a synapse time constant or an attenuation parameter; when further attenuation of the value of the synaptic state cannot be achieved by means of bit shifting, linear attenuation is performed on the value of the synaptic state.
6. The neuromorphic chip of claim 1 wherein:
the membrane time constants of the LIF neurons are the same.
7. The neuromorphic chip of claim 1 wherein:
the neural morphological chip is used for processing an environment signal, and the environment signal is finally converted into a pulse after being processed by at least a filter array and is transmitted to a pulse neural network formed by at least the feedforward network so as to carry out reasoning.
8. The neuromorphic chip of any one of claims 1-7, wherein:
elements in a synaptic time constant set associated with all LIF neurons in any layer of the feedforward network, arranged from small to large, follow an exponential growth relationship; and/or
the elements in the attenuation parameter set associated with all LIF neurons in any layer of the feedforward network, arranged from small to large, follow a linear growth relationship.
9. The neuromorphic chip of claim 8, wherein:
in the feedforward network, the number of elements in a synaptic time constant set or an attenuation parameter set associated with all LIF neurons in the previous layer is half of the number of elements in a synaptic time constant set or an attenuation parameter set associated with all LIF neurons in the next layer.
10. An electronic device, characterized in that: the electronic device comprises the neuromorphic chip of any one of claims 1-9, which is used to perform inference on environmental signals and obtain an inference result; the electronic device responds according to the inference result.
CN202211101286.3A 2022-09-09 2022-09-09 Neuromorphic chip and electronic device Active CN115169547B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211101286.3A CN115169547B (en) 2022-09-09 2022-09-09 Neuromorphic chip and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211101286.3A CN115169547B (en) 2022-09-09 2022-09-09 Neuromorphic chip and electronic device

Publications (2)

Publication Number Publication Date
CN115169547A CN115169547A (en) 2022-10-11
CN115169547B true CN115169547B (en) 2022-11-29

Family

ID=83482337

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211101286.3A Active CN115169547B (en) 2022-09-09 2022-09-09 Neuromorphic chip and electronic device

Country Status (1)

Country Link
CN (1) CN115169547B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201435757A (en) * 2012-11-20 2014-09-16 Qualcomm Inc Dynamical event neuron and synapse models for learning spiking neural networks
CN108334933A (en) * 2018-03-02 2018-07-27 广东工业大学 A kind of neuron activation functions parameter adjusting method and its device
CN113255905A (en) * 2021-07-16 2021-08-13 成都时识科技有限公司 Signal processing method of neurons in impulse neural network and network training method
CN114202068A (en) * 2022-02-17 2022-03-18 浙江大学 Self-learning implementation system for brain-like computing chip
CN114819113A (en) * 2022-07-01 2022-07-29 深圳时识科技有限公司 SNN training method and device, storage medium, chip and electronic device
CN114861892A (en) * 2022-07-06 2022-08-05 深圳时识科技有限公司 Chip on-loop agent training method and device, chip and electronic device

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101971166B (en) * 2008-03-14 2013-06-19 惠普开发有限公司 Neuromorphic circuit
US9189730B1 (en) * 2012-09-20 2015-11-17 Brain Corporation Modulated stochasticity spiking neuron network controller apparatus and methods
US9959499B2 (en) * 2013-09-25 2018-05-01 Qualcomm Incorporated Methods and apparatus for implementation of group tags for neural models
US20150278680A1 (en) * 2014-03-26 2015-10-01 Qualcomm Incorporated Training, recognition, and generation in a spiking deep belief network (dbn)
US20160042271A1 (en) * 2014-08-08 2016-02-11 Qualcomm Incorporated Artificial neurons and spiking neurons with asynchronous pulse modulation
US10878313B2 (en) * 2017-05-02 2020-12-29 Intel Corporation Post synaptic potential-based learning rule
US11763139B2 (en) * 2018-01-19 2023-09-19 International Business Machines Corporation Neuromorphic chip for updating precise synaptic weight values
WO2020049541A1 (en) * 2018-09-09 2020-03-12 Gore Mihir Ratnakar A neuromorphic computing system
US11195085B2 (en) * 2019-03-21 2021-12-07 International Business Machines Corporation Spiking synaptic elements for spiking neural networks
CN116438541A (en) * 2020-10-05 2023-07-14 因纳特拉纳米系统有限公司 Adaptation of SNN by transient synchronization
FR3119696B1 (en) * 2021-02-11 2024-02-09 Thales Sa NEUROMORPHIC CIRCUIT AND ASSOCIATED TRAINING METHOD
CN113313240B (en) * 2021-08-02 2021-10-15 成都时识科技有限公司 Computing device and electronic device
CN114997391B (en) * 2022-08-02 2022-11-29 深圳时识科技有限公司 Leakage method in electronic nervous system, chip and electronic equipment

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201435757A (en) * 2012-11-20 2014-09-16 Qualcomm Inc Dynamical event neuron and synapse models for learning spiking neural networks
CN108334933A (en) * 2018-03-02 2018-07-27 广东工业大学 A kind of neuron activation functions parameter adjusting method and its device
CN113255905A (en) * 2021-07-16 2021-08-13 成都时识科技有限公司 Signal processing method of neurons in impulse neural network and network training method
CN114202068A (en) * 2022-02-17 2022-03-18 浙江大学 Self-learning implementation system for brain-like computing chip
CN114819113A (en) * 2022-07-01 2022-07-29 深圳时识科技有限公司 SNN training method and device, storage medium, chip and electronic device
CN114861892A (en) * 2022-07-06 2022-08-05 深圳时识科技有限公司 Chip on-loop agent training method and device, chip and electronic device

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Charlotte Frenkel et al. "ReckOn: A 28nm Sub-mm2 Task-Agnostic Spiking Recurrent Neural Network Processor Enabling On-Chip Learning over Second-Long Timescales". Neural and Evolutionary Computing. 2022. *
Philipp Weidel et al. "WaveSense: Efficient Temporal Convolutions with Spiking Neural Networks for Keyword Spotting". Computer Science > Machine Learning. 2021. *
Zhang Ling et al. "Development status and prospects of artificial neuromorphic devices". Electronics & Packaging. 2021, Vol. 21 (No. 6). *
Wang Yuchen et al. "New developments in brain-inspired computing: the 'TrueNorth' neuron chip". Computer Science. 2016, 28-31+35. *
Hu Yifan et al. "A survey of research progress on spiking neural networks". Control and Decision. 2020, Vol. 36 (No. 1). *
Huang Lei. "Research on optical networks-on-chip for neuromorphic computing". China Doctoral Dissertations Full-text Database (Information Science and Technology). 2020, No. (2020)4, I135-23. *

Also Published As

Publication number Publication date
CN115169547A (en) 2022-10-11

Similar Documents

Publication Publication Date Title
CN110298663B (en) Fraud transaction detection method based on sequence wide and deep learning
Stornetta et al. A dynamical approach to temporal pattern processing
CN113313240B (en) Computing device and electronic device
Lin et al. Learning of time-frequency attention mechanism for automatic modulation recognition
Schäfer et al. From one to many: A deep learning coincident gravitational-wave search
Baltus et al. Convolutional neural network for gravitational-wave early alert: Going down in frequency
CN115169547B (en) Neuromorphic chip and electronic device
Angloher et al. Towards an automated data cleaning with deep learning in CRESST
Ai et al. Timing and characterization of shaped pulses with MHz ADCs in a detector system: a comparative study and deep learning approach
Wang et al. Time series sequences classification with inception and LSTM module
Lu et al. A parallel and modular multi-sieving neural network architecture for constructive learning
CN112434716A (en) Underwater target data amplification method and system based on conditional adversarial neural network
Bredikhin et al. Named Entity Recognition Methods in Zero Knowledge Network Traffic Analysis
Hu et al. Coal mine disaster warning Internet of Things intrusion detection system based on back propagation neural network improved by genetic algorithms
Mascione Pruning Deep Neural Networks for LHC Challenges
Ko et al. Background noise suppression for signal enhancement by novelty filtering
CN111008674B (en) Underwater target detection method based on rapid cycle unit
Yang et al. Multilayer perceptron for detection of seismic anomalies
US20230139347A1 (en) Per-embedding-group activation quantization
Mehrabi et al. An Optimized Multi-layer Spiking Neural Network implementation in FPGA Without Multipliers
AVCI et al. Implementation of an hybrid approach on FPGA for license plate detection using genetic algorithm and neural networks
Mayr et al. Pulsed multi-layered image filtering: a VLSI implementation
Haddadi et al. The hamming code performance analysis using rbf neural network
Dodia Detecting residues of cosmic events using residual neural network
CN115906978A (en) Reconfigurable optical tensor convolution acceleration method and device based on time-frequency linkage

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant