CN116796207A - Self-organizing mapping clustering method based on impulse neural network - Google Patents

Self-organizing mapping clustering method based on impulse neural network

Info

Publication number
CN116796207A
Authority
CN
China
Prior art keywords
pulse
impulse
neurons
neural network
neuron
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310784438.2A
Other languages
Chinese (zh)
Inventor
莫凌飞
曹磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN202310784438.2A priority Critical patent/CN116796207A/en
Publication of CN116796207A publication Critical patent/CN116796207A/en
Pending legal-status Critical Current


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a self-organizing map clustering method based on a spiking neural network. Spiking neurons are used as the basic units to construct a time-adaptive self-organizing map (TASOM) algorithm model. A neighborhood range is set in the TASOM model, and the winning spiking neuron adjusts the synaptic connection weights to surrounding spiking neurons according to their distance from it. An inhibition coefficient is also set in the TASOM model; the coefficient is updated as the training step advances, and the inhibition strength grows as the coefficient grows, which in turn affects the spike firing of the spiking neurons. A fully connected spiking neural network model built on this algorithm model can complete clustering and classification tasks after being trained on different data sets. The invention provides a method for realizing clustering with a spiking neural network, so that image clustering based on spiking neural networks achieves better results.

Description

Self-organizing map clustering method based on a spiking neural network
Technical Field
The invention belongs to the field of spiking neural networks, and in particular relates to a self-organizing map clustering method based on a spiking neural network.
Background
Spiking neural networks originate from brain science. As research on the structure and mechanisms of biological neural networks continues to advance, more and more of its results are being applied to computational neuroscience and brain-inspired computing. The study and development of spiking neural networks is likewise a process in which humans progress from understanding the brain, to simulating the brain, to making better use of the brain.
In the traditional self-organizing map (SOM) network model, sample data are mapped into a low-dimensional topological space in an unsupervised manner, and the final clustering result is obtained by partitioning this space and continuously adjusting an adaptive strategy. The SOM itself has certain limitations: the SOM network has no explicit objective function during training, so the training result depends mainly on the maximum number of iterations; some network parameters are relatively fixed and cannot change dynamically; during training, some neurons may never be activated; and the initial state of the network connections has a large influence on the convergence speed of the network.
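For reference, the conventional SOM update that the invention improves upon can be sketched as follows. This is an illustrative Python sketch only; the learning-rate and neighborhood schedules shown are hypothetical examples, not values taken from the patent.

```python
import numpy as np

def som_update(weights, x, t, lr0=0.5, sigma0=2.0, tau=1000.0):
    """One conventional SOM step: find the best-matching unit (BMU) and pull
    the weights of neurons near it on a 2-D grid toward the input x."""
    grid_h, grid_w, dim = weights.shape
    # Best-matching unit: the neuron whose weight vector is closest to x.
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # Fixed, time-decayed learning rate and neighborhood radius; this rigidity
    # is exactly what the time-adaptive (TASOM) variant seeks to remove.
    lr = lr0 * np.exp(-t / tau)
    sigma = sigma0 * np.exp(-t / tau)
    # Gaussian neighborhood centered on the BMU.
    ii, jj = np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij")
    grid_dist2 = (ii - bmu[0]) ** 2 + (jj - bmu[1]) ** 2
    h = np.exp(-grid_dist2 / (2.0 * sigma ** 2))
    # Pull every neuron toward x in proportion to its neighborhood value.
    weights += lr * h[..., None] * (x - weights)
    return bmu
```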
Disclosure of Invention
In order to solve the above problems, the invention discloses a self-organizing map clustering method based on a spiking neural network. It improves the classical SOM algorithm, proposes a time-adaptive self-organizing map (TASOM) algorithm, and combines the TASOM algorithm with a spiking neural network (SNN) to construct an SNN-TASOM algorithm model. The model retains the advantages of both methods while making up for the shortcomings of the traditional method. Combining the SNN with the TASOM algorithm gives the self-organizing map algorithm the time-varying character of the spiking neural network; with this improvement the training task can be completed better, and a new approach is provided for the problem of clustering high-dimensional data.
In order to achieve the above purpose, the technical scheme of the invention is as follows:
A self-organizing map clustering method based on a spiking neural network comprises: a definition of the time-adaptive self-organizing map (TASOM) network model under spiking-neural-network semantics; a time-adaptive self-organizing map network structure designed with spiking neurons as the basic units; and a fully connected spiking neural network model constructed from this network structure, which can complete clustering and classification tasks. The invention provides a method for realizing clustering with a spiking neural network, so that clustering based on spiking neural networks achieves better results.
Further, the definition of the time-adaptive self-organizing map (TASOM) network model under spiking-neural-network semantics is that clustering is realized by using the self-organizing mechanism of the SOM and adding a time-adaptive mechanism inside the spiking neural network.
Further, the spiking neuron is a neuron model whose output is a spike train, including but not limited to the Hodgkin-Huxley (HH) model, the Integrate-and-Fire (IF) model, the Leaky Integrate-and-Fire (LIF) model, the Izhikevich model, and other neuron models commonly used in the spiking-neural-network field.
Further, the fully connected spiking neural network model generally has a three-layer structure comprising, from front to back, an input layer, an excitatory layer, and an inhibitory layer, with the layers connected by synapses; the input layer consists of a group of spiking neurons carrying the pixel information of the picture. During training of the fully connected spiking neural network model, the input MNIST data set is preprocessed, i.e., each input image is encoded into a Poisson spike train carrying timing information. The input layer first performs Poisson spike encoding, converting the pixel information of the picture into a corresponding spike train. Adaptive Poisson spike encoding is used: when the input spike intensity of the SNN is too low, the encoding parameter is adaptively increased; when the input spike intensity of the SNN is too high, the encoding parameter is adaptively decreased.
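As an illustration of the layer wiring described above, a minimal Python sketch follows. The layer sizes, weight magnitudes, and the fully connected input-to-excitatory wiring are assumptions made for illustration (claim 5 describes the layer connectivity), not values disclosed in the patent.

```python
import numpy as np

n_input = 28 * 28      # one input neuron per MNIST pixel
n_exc = 100            # excitatory-layer size (illustrative choice)
n_inh = n_exc          # one inhibitory neuron per excitatory neuron

rng = np.random.default_rng(0)

# Input -> excitatory layer: plastic weights, trained with the STDP rule.
w_input_exc = rng.uniform(0.0, 0.3, size=(n_input, n_exc))

# Excitatory -> inhibitory layer: one-to-one, fixed positive weights.
w_exc_inh = np.eye(n_exc) * 10.0

# Inhibitory -> excitatory layer: each inhibitory neuron suppresses every
# excitatory neuron except its own partner (lateral inhibition).
w_inh_exc = (1.0 - np.eye(n_inh)) * -5.0
```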
Further, the layers of the fully connected spiking neural network model are connected by synapses. The encoded spike train is first fed into the spiking neurons: the input-layer neurons fire spikes, and the excitatory-layer neurons connected to them adjust the corresponding connection weights. According to the improved STDP learning rule, the synaptic weights of the excitatory-layer neurons are adjusted according to the order in which spikes are fired, and different firing orders of the pre- and post-synaptic neurons indicate different degrees of correlation between them. Because the pre-synaptic and post-synaptic STDP updates may be unbalanced, a new learning mechanism is needed to bound the range of the synaptic weights and prevent them from growing or shrinking without limit, thereby keeping the network training in a stable update regime. With this adjustment mechanism, the weights of each post-synaptic neuron are normalized after every iteration so that the sum of the synaptic weights connected to each post-synaptic neuron is a constant value. When some synaptic weights increase, other synaptic weights decrease accordingly, which makes the magnitudes of all weights more uniform.
Further, in the fully connected spiking neural network model, the inhibitory layer is assigned an inhibition coefficient that increases gradually as training time increases. During training, as the training step advances, the inhibitory layer produces lateral inhibition and the inhibition coefficient begins to be updated; differences in time and in spatial distance lead to differences in the inhibitory effect, so the responses of the spiking neurons become diversified. Each time a new sample is input, spike-firing statistics are collected for the excitatory-layer spiking neurons, and the weights of the synaptic connections are then updated, until the inhibition parameter reaches its set maximum value. As training time advances, the synaptic connection weights between spiking neurons are updated together with the inhibition parameter.
Further, in the fully connected spiking neural network model, the synaptic connection weights between spiking neurons are trained with an unsupervised learning rule, and the learning effect of the spiking neurons is evaluated through their spike-firing activity. The specific method is as follows: record the spike firing of all output-layer neurons, count the spikes fired by every spiking neuron after each batch of training samples, and label each output-layer neuron with the class for which it fired the most spikes. If the class now assigned to a spiking neuron is the same as the class assigned last time, the neuron is considered to have learned the feature corresponding to that class. All test-set samples are then input to obtain the output spike trains. Each input sample produces spikes, and the class associated with the spiking neuron that currently fires is the class assigned after network learning. Finally, the number of correctly classified samples is counted against the data-set labels to obtain the classification accuracy of the feature-learning network. Each individual output spike counts as one vote; the more votes a neuron receives, the more likely it is to be selected. Under the maximum decision rule, the winner is the spiking neuron that fires the most spikes.
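A minimal sketch of the neuron-labeling and spike-count voting procedure described above is shown below; the array names and shapes are illustrative assumptions.

```python
import numpy as np

def assign_labels(spike_counts, sample_labels, n_classes=10):
    """spike_counts: (n_samples, n_neurons) spikes fired by each excitatory
    neuron per training sample; each neuron is labelled with the class for
    which it fired the most spikes on average."""
    n_neurons = spike_counts.shape[1]
    per_class = np.zeros((n_classes, n_neurons))
    for c in range(n_classes):
        mask = sample_labels == c
        if mask.any():
            per_class[c] = spike_counts[mask].mean(axis=0)
    return per_class.argmax(axis=0)            # label of each neuron

def classify(spike_counts, neuron_labels, n_classes=10):
    """Each spike is one vote for the firing neuron's label; the class with
    the most votes (maximum decision rule) wins."""
    votes = np.zeros((spike_counts.shape[0], n_classes))
    for c in range(n_classes):
        votes[:, c] = spike_counts[:, neuron_labels == c].sum(axis=1)
    return votes.argmax(axis=1)

# accuracy = (classify(test_counts, neuron_labels) == test_labels).mean()
```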
The beneficial effects of the invention are as follows:
the invention discloses a self-organizing map clustering method based on a spiking neural network. The fully connected spiking neural network model constructed from the algorithm model can complete clustering and classification tasks after being trained on different data sets. The invention provides a method for realizing clustering with a spiking neural network, so that image clustering based on spiking neural networks achieves better results.
Drawings
Fig. 1 is a schematic diagram of the time-adaptive self-organizing map network based on a spiking neural network provided by the invention.
Fig. 2 is a schematic diagram of the fully connected spiking neural network according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of Poisson encoding in the spiking neural network according to an embodiment of the present invention.
Fig. 4 is a training flowchart of the spiking neural network according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of the neighborhood change of the time-adaptive self-organizing map network in an embodiment of the present invention.
Detailed Description
The present invention is further illustrated in the following drawings and detailed description, which are to be understood as being merely illustrative of the invention and not limiting the scope of the invention.
The invention relates to a self-organizing map clustering method based on a spiking neural network. Fig. 1 is a schematic diagram of the time-adaptive self-organizing map network based on a spiking neural network, in which a represents the state at time t and b represents the state at time t+Δt. The solid black arrows represent neighborhood reinforcement, whose strength does not change over time; the dashed arrows represent neighborhood inhibition, whose strength changes over time. If neuron 1 is activated, the activation probabilities of the surrounding neurons change: neuron 2 is reinforced to some extent, while neurons 3 and 4 are inhibited to different extents. The inhibition strength increases gradually over time up to a maximum inhibition strength. C_inh denotes the inhibition strength, C_0 denotes the initial inhibition strength, and β_inh denotes the inhibition constant. The inhibition applied to a neuron also increases with its distance from the activated neuron; the neighborhood change and the training process are described in detail below. Within the dashed circle neurons receive neighborhood reinforcement (solid lines); outside the dashed circle they receive lateral inhibition (dashed lines). The lateral inhibition strength differs at different times and grows according to the formula shown in the figure.
Referring to Fig. 2, a schematic diagram of the fully connected spiking neural network model according to an embodiment of the present invention is shown. The embodiment adopts a network structure in which the fully connected spiking neural network model generally has a three-layer structure comprising, from front to back, an input layer, an excitatory layer, and an inhibitory layer, with the layers connected by synapses; the input layer contains the group of spiking neurons carrying the pixel information of the picture. During training of the fully connected spiking neural network model, the input MNIST data set is preprocessed, i.e., each input image is encoded into a Poisson spike train carrying timing information. The input layer first performs Poisson spike encoding, converting the pixel information of the picture into a corresponding spike train. The Poisson encoding is adaptive: when the input spike intensity of the SNN is too low, the encoding parameter is adaptively increased; when the input spike intensity of the SNN is too high, the encoding parameter is adaptively decreased.
In embodiments of the present invention, the spiking neurons adopt a conductance-based model of the neuronal synapses. The dynamics of the synaptic conductances g_E and g_I are given by

τ_E · dg_E/dt = −g_E + Σ_{j=1..N_E} w_ij · Σ_k δ(t − t_j^k),
τ_I · dg_I/dt = −g_I + Σ_{j=1..N_I} w_ij · Σ_k δ(t − t_j^k),

where τ_E and τ_I are the time constants of the excitatory and inhibitory synapses, respectively. If no presynaptic spike arrives, the synaptic conductances g_E and g_I simply decay exponentially with time constant τ_E or τ_I. Here w_ij denotes the connection weight between neurons i and j, t denotes time, t_j^k denotes the k-th spike of presynaptic neuron j, N_E denotes the number of excitatory neurons, and N_I denotes the number of inhibitory neurons.
From the above equations for g_E and g_I, the membrane equation of the post-synaptic LIF neuron is obtained as

τ_m · dV_m/dt = (V_rest − V_m) + (g_E/g_L)·(V_E − V_m) + (g_I/g_L)·(V_I − V_m),

where g_L = 1 nS denotes the leak conductance, V_E and V_I denote the reversal potentials of the excitatory and inhibitory synapses, τ_m denotes the membrane time constant, dV_m/dt denotes the time derivative of the membrane voltage, V_m denotes the membrane voltage of the neuron, and V_rest denotes the resting potential.
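For illustration, one Euler integration step of the conductance-based LIF dynamics reconstructed above can be sketched as follows; apart from g_L = 1 nS, all parameter values are illustrative assumptions.

```python
import numpy as np

def lif_step(v, g_e, g_i, dt=0.5, g_l=1.0,
             tau_m=100.0, tau_e=1.0, tau_i=2.0,
             v_rest=-65.0, v_e=0.0, v_i=-100.0,
             v_thresh=-52.0, v_reset=-65.0):
    """One Euler step of the conductance-based LIF neuron described above.
    Voltages in mV, times in ms; works on scalars or numpy arrays."""
    # tau_m dV/dt = (V_rest - V) + (g_E/g_L)(V_E - V) + (g_I/g_L)(V_I - V)
    dv = ((v_rest - v) + (g_e / g_l) * (v_e - v) + (g_i / g_l) * (v_i - v)) / tau_m
    v = v + dt * dv
    # Without presynaptic spikes the conductances decay exponentially
    # with time constants tau_E and tau_I.
    g_e = g_e * np.exp(-dt / tau_e)
    g_i = g_i * np.exp(-dt / tau_i)
    # Fire and reset when the membrane voltage crosses the threshold.
    spiked = v >= v_thresh
    v = np.where(spiked, v_reset, v)
    return v, g_e, g_i, spiked
```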
Referring to Fig. 3, a schematic diagram of Poisson encoding in the spiking neural network according to an embodiment of the present invention is shown. Poisson encoding is relatively simple, its encoding parameter is easy to adjust, and it is well suited to simulating the spike-firing events of biological neurons, so this method adopts Poisson encoding for the input information. The input neurons are Poisson-distributed spike-train generators, and each input neuron corresponds to one pixel of the input image. In the encoding process, the original image is first preprocessed; each pixel value is then modeled as a spike train following a Poisson distribution, whose average rate is the pixel value multiplied by the encoding parameter λ. The input neurons send spike trains to the output layer, and after firing for a certain time a neuron enters a short resting state, i.e., a refractory period. An adaptive Poisson spike-encoding scheme is therefore adopted here: when the input spike intensity of the SNN is too low, the encoding parameter is adaptively increased; when the input spike intensity of the SNN is too high, the encoding parameter is adaptively decreased. The Poisson distribution is generally used to describe the number of random events occurring per unit time, and this mechanism is exactly suited to describing the spike-firing events of biological neurons; the specific flow for Poisson encoding of MNIST is shown in Fig. 3.
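A minimal sketch of the Poisson spike encoding and the adaptive adjustment of the encoding parameter described above is given below; the value of λ, the simulation duration, and the intensity thresholds are illustrative assumptions.

```python
import numpy as np

def poisson_encode(image, duration=350.0, dt=1.0, lam=0.25, rng=None):
    """Encode a (28, 28) grayscale image as Poisson spike trains: one input
    neuron per pixel, mean firing rate proportional to pixel value * lam.
    duration and dt are in ms."""
    if rng is None:
        rng = np.random.default_rng()
    rates = image.astype(float).reshape(-1) * lam / 1000.0   # spikes per ms
    n_steps = int(duration / dt)
    # Bernoulli approximation of a Poisson process per time step.
    return rng.random((n_steps, rates.size)) < rates * dt

def adapt_rate(lam, n_input_spikes, low=5, high=200, step=0.05):
    """Adaptive Poisson encoding: raise the encoding parameter when the input
    spike intensity is too low, lower it when too high (thresholds assumed)."""
    if n_input_spikes < low:
        return lam * (1.0 + step)
    if n_input_spikes > high:
        return lam * (1.0 - step)
    return lam
```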
In the embodiment of the invention, the synaptic connections between the input layer and the feature layer use an STDP rule based on trace variables. Three trace variables are used, in the form of one presynaptic trace and two postsynaptic traces: x_pre, y_post1, and y_post2. When a presynaptic spike occurs, the weight decrease depends on the value of the postsynaptic trace y_post1 at that moment; when a postsynaptic spike occurs, the weight increase depends not only on the value of the presynaptic trace x_pre at that moment but also on the value of the postsynaptic trace y_post2 at the previous moment. The traces x_pre, y_post1, and y_post2 decay exponentially between spikes and are incremented at the spike times t_i and t_j of the presynaptic neuron i and the postsynaptic neuron j, respectively.
The resulting change in the synaptic weight is

Δw_ij = −η_pre · y_post1   when a presynaptic spike occurs,
Δw_ij = η_post · x_pre · y_post2   when a postsynaptic spike occurs,

where η_pre = 1×10⁻⁴ and η_post = 1×10⁻² are the learning rates in the two cases.
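A minimal sketch of the trace-based STDP update is given below. The weight-change expressions and the learning rates η_pre and η_post are taken from the text (see claim 6); the trace time constants and the weight bounds are illustrative assumptions.

```python
import numpy as np

ETA_PRE, ETA_POST = 1e-4, 1e-2          # learning rates given in the text
TAU_X, TAU_1, TAU_2 = 20.0, 20.0, 40.0  # trace time constants (assumed), ms

def stdp_step(w, x_pre, y_post1, y_post2, pre_spike, post_spike, dt=1.0,
              w_min=0.0, w_max=1.0):
    """One time step of the trace-based STDP rule for a single synapse."""
    # Traces decay exponentially between spikes.
    x_pre   *= np.exp(-dt / TAU_X)
    y_post1 *= np.exp(-dt / TAU_1)
    y_post2 *= np.exp(-dt / TAU_2)
    # Presynaptic spike: depress by the postsynaptic trace y_post1.
    if pre_spike:
        w -= ETA_PRE * y_post1
        x_pre += 1.0
    # Postsynaptic spike: potentiate by x_pre times the slower postsynaptic
    # trace y_post2 taken before this spike is added to it.
    if post_spike:
        w += ETA_POST * x_pre * y_post2
        y_post1 += 1.0
        y_post2 += 1.0
    return np.clip(w, w_min, w_max), x_pre, y_post1, y_post2
```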
Because the pre-synaptic and post-synaptic STDP updates may be unbalanced, a new learning mechanism is needed to bound the range of the synaptic weights and prevent them from growing or shrinking without limit, thereby keeping the network training in a stable update regime. With this adjustment mechanism, the weights of each post-synaptic neuron are normalized after every iteration so that the sum of the synaptic weights connected to each post-synaptic neuron is a constant value. When some synaptic weights increase, other synaptic weights decrease accordingly, which makes the magnitudes of all weights more uniform. The weights are scaled as shown in the following formula.
w_ij^new = β · N_in · w_ij / Σw_j,

where w_ij is the weight before scaling, β ∈ (0, 1) is the scaling factor, N_in is the number of synapses connected to neuron j, and Σw_j is the sum of all synaptic weights connected to neuron j; after scaling, the incoming weights of neuron j sum to the constant β·N_in.
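A minimal sketch of the weight scaling is shown below, assuming (as in the formula above) that after scaling the incoming weights of each post-synaptic neuron sum to the constant β·N_in.

```python
import numpy as np

def normalize_weights(w, beta=0.1):
    """Rescale the incoming weights of every post-synaptic (excitatory) neuron.
    w has shape (n_input, n_exc); after scaling, each column sums to
    beta * N_in, so the total incoming weight per neuron stays constant."""
    n_in = w.shape[0]
    col_sums = w.sum(axis=0)
    col_sums[col_sums == 0.0] = 1.0          # avoid division by zero
    return w * (beta * n_in / col_sums)
```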
Referring to Fig. 5, a schematic diagram of the neighborhood change of the time-adaptive self-organizing map network according to an embodiment of the present invention is shown. As can be seen from Fig. 5, the range of the neighborhood N_c is continuously adjusted over time; as training proceeds, N_c shrinks toward a range centered on neuron C and finally stops at neuron C, i.e., N_c = {C}. The figure shows that the range N_c(t_{k-1}) is one ring smaller than N_c(t_{k-2}); measured by the Chebyshev distance, the neighborhood keeps shrinking over time. During learning, the learning rate gradually tends to zero as time increases, which guarantees that the learning process converges. The winning neighborhood, centered on C, is the weight-adjustment domain updated at time t; in general the initial neighborhood N_c is large, and the neighborhood distance shrinks with training time.
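A minimal sketch of the shrinking Chebyshev-distance neighborhood N_c is given below; the radius decay schedule is an illustrative assumption.

```python
import numpy as np

def neighborhood(center, grid_shape, t, r0=3, tau=500.0):
    """Return a boolean mask for the Chebyshev-distance neighborhood N_c(t)
    of the winning neuron; the radius shrinks over training time so that
    eventually N_c = {center}."""
    radius = int(round(r0 * np.exp(-t / tau)))     # assumed decay schedule
    ci, cj = center
    ii, jj = np.meshgrid(np.arange(grid_shape[0]),
                         np.arange(grid_shape[1]), indexing="ij")
    cheb = np.maximum(np.abs(ii - ci), np.abs(jj - cj))
    return cheb <= radius
```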
Referring to Fig. 4, a training flowchart of the spiking neural network according to an embodiment of the present invention is shown. In the encoding process, the original image is preprocessed and each pixel value is modeled as a spike train following a Poisson distribution. The winning neuron is found, the number of spikes it fires is recorded, and the weights of the connections between the input neurons and the excitatory neurons are corrected according to the improved STDP learning rule. The best-matching excitatory neuron is activated, and the inhibitory neuron at the corresponding location is activated at the same time. This inhibitory neuron then produces lateral inhibition: the lateral inhibition between it and the neurons surrounding the excitatory neuron grows with distance, and the inhibition parameter gradually increases over time. The level of inhibition is calculated by multiplying the inhibition parameter by the distance. When a neuron exceeds its firing threshold it does not inhibit all other neurons uniformly: adjacent neurons receive a certain reinforcement and are very likely to fire as well. This encourages neighboring neurons to learn similar input features for the same input.
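A minimal sketch of the distance-dependent lateral inhibition described above follows: the inhibition applied to each neuron is the inhibition parameter multiplied by its distance from the winner, neurons inside the winner's neighborhood are reinforced rather than inhibited, and the inhibition parameter grows up to its maximum (the growth schedule shown is an assumption).

```python
import numpy as np

def lateral_inhibition(winner, grid_shape, c_inh, neighborhood_mask):
    """Inhibition applied to each excitatory neuron on the grid: the current
    inhibition coefficient times the distance from the winner; neurons inside
    the winner's neighborhood are not inhibited."""
    ii, jj = np.meshgrid(np.arange(grid_shape[0]),
                         np.arange(grid_shape[1]), indexing="ij")
    dist = np.maximum(np.abs(ii - winner[0]), np.abs(jj - winner[1]))
    inh = c_inh * dist                     # inhibition grows with distance
    inh[neighborhood_mask] = 0.0           # neighbors are reinforced instead
    return inh

def update_inhibition(c_inh, c_max, growth=1.05):
    """The inhibition coefficient grows each training step up to its maximum
    (the multiplicative schedule is an assumption)."""
    return min(c_max, c_inh * growth)
```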
The invention discloses a self-organizing map clustering method based on a spiking neural network, which uses spiking neurons as the basic units to construct a time-adaptive self-organizing map (TASOM) algorithm model; a neighborhood range is set in the TASOM model, and the winning spiking neuron adjusts the synaptic connection weights to surrounding spiking neurons according to their distance from it; an inhibition coefficient is set in the TASOM model, updated as the training step advances, and the inhibition strength grows as the coefficient grows, which in turn affects the spike firing of the spiking neurons; the fully connected spiking neural network model constructed from this algorithm model can complete clustering and classification tasks after being trained on different data sets. The invention provides a method for realizing clustering with a spiking neural network, so that image clustering based on spiking neural networks achieves better results.
It should be noted that the foregoing merely illustrates the technical idea of the present invention and is not intended to limit its scope of protection; a person skilled in the art may make several improvements and modifications without departing from the principles of the present invention, and such improvements and modifications also fall within the scope of the claims of the present invention.

Claims (9)

1. A self-organizing map clustering method based on a spiking neural network, characterized in that: a time-adaptive self-organizing map (TASOM) algorithm model is constructed with spiking neurons as the basic units; a neighborhood range is set in the TASOM algorithm model, and the winning spiking neuron adjusts the synaptic connection weights to surrounding spiking neurons according to their distance from it; an inhibition coefficient is set in the TASOM algorithm model, the inhibition coefficient is updated as the training step advances during training, and the inhibition strength grows as the inhibition coefficient grows, thereby affecting the spike firing of the spiking neurons; the fully connected spiking neural network model constructed from the TASOM algorithm model can complete clustering and classification tasks after being trained on different data sets.
2. The self-organizing map clustering method based on a spiking neural network according to claim 1, characterized in that: the spiking neuron is a neuron model whose output is spikes; the spike-firing pattern of the spiking neuron carries the output information, and different kinds of input are distinguished by the output spike-firing pattern.
3. The self-organizing map clustering method based on a spiking neural network according to claim 1, characterized in that: the neighborhood range is a circular area centered on the winning spiking neuron; it shrinks continuously and finally stops at the winning spiking neuron.
4. The self-organizing map clustering method based on a spiking neural network according to claim 1, characterized in that: the inhibition coefficient β_inh increases as the training step increases, so that the inhibition strength C_inh increases up to the maximum inhibition strength; the inhibition strength also increases with the distance between spiking neurons, thereby affecting the spike firing of the spiking neurons.
5. The self-organizing map clustering method based on a spiking neural network according to claim 1, characterized in that: the fully connected spiking neural network model is constructed from the algorithm model; the multi-layer network structure of the spiking neural network comprises, from front to back, an input layer, an excitatory layer, and an inhibitory layer, with the layers connected by synapses; in this network structure the input-layer neurons and the excitatory-layer neurons are connected in a one-to-one manner, and the excitatory-layer neurons and the inhibitory-layer neurons are likewise connected one-to-one, with the inhibitory layer laterally inhibiting the excitatory layer.
6. The self-organizing map clustering method based on a spiking neural network according to claim 5, characterized in that: in the fully connected spiking neural network model the layers are connected by synapses; the encoded spike train is first fed into the spiking neurons, the input-layer neurons fire spikes, and the excitatory-layer neurons connected to them adjust the corresponding connection weights; the synaptic weights of the excitatory-layer neurons are adjusted according to the order in which spikes are fired, using three trace variables x_pre, y_post1, and y_post2 in the form of one presynaptic trace and two postsynaptic traces; when a presynaptic spike occurs, the weight decrease depends on the value of the postsynaptic trace y_post1 at that moment; when a postsynaptic spike occurs, the weight increase depends not only on the value of the presynaptic trace x_pre at that moment but also on the value of the postsynaptic trace y_post2 at the previous moment; if a presynaptic spike is generated, the connection weight between the spiking neurons changes by Δw_ij = −η_pre · y_post1; if a postsynaptic spike is generated, the connection weight between the spiking neurons changes by Δw_ij = η_post · x_pre · y_post2.
7. The self-organizing map clustering method based on a spiking neural network according to claim 5, characterized in that: in the fully connected spiking neural network model the pre-synaptic and post-synaptic STDP updates may become unbalanced, and in order to prevent the synaptic weights from growing or shrinking without limit, a new learning mechanism is needed to bound the range of the synaptic weights, thereby keeping the network training in a stable update regime; with this adjustment mechanism the weights are scaled as w_ij^new = β · N_in · w_ij / Σw_j, where w_ij is the weight before scaling, β ∈ (0, 1) is the scaling factor, N_in is the number of synapses connected to the spiking neuron j, and Σw_j is the sum of all synaptic weights connected to the spiking neuron j; the weights of each post-synaptic neuron are normalized after every iteration so that the sum of the synaptic weights connected to each post-synaptic neuron is a constant value; when some synaptic weights increase, other synaptic weights decrease accordingly, which makes the magnitudes of all weights more uniform.
8. The self-organizing map clustering method based on a spiking neural network according to claim 5, characterized in that: in the fully connected spiking neural network model the inhibitory layer is assigned an inhibition coefficient, which increases gradually as training time increases; during training, as the training step advances, the inhibitory layer produces lateral inhibition and the inhibition coefficient begins to be updated; differences in time and in spatial distance lead to differences in the inhibitory effect, so the responses of the spiking neurons become diversified; during training, each time a new sample is input, spike-firing statistics are collected for the excitatory-layer spiking neurons, and the weights of the synaptic connections between spiking neurons are then updated, until the inhibition parameter reaches its set maximum value; as training time advances, the synaptic connection weights between spiking neurons are updated together with the inhibition parameter.
9. The self-organizing map clustering method based on a spiking neural network according to claim 5, characterized in that: in the fully connected spiking neural network model the synaptic connection weights between spiking neurons are trained with an unsupervised learning rule, and the learning effect of the spiking neurons is evaluated through their spike-firing activity; the specific method is as follows: record the spike firing of all output-layer neurons, count the spikes fired by every spiking neuron after each batch of training samples, and label each output-layer neuron with the class for which it fired the most spikes; if the class now assigned to a spiking neuron is the same as the class assigned last time, the neuron is considered to have learned the feature corresponding to that class; all test-set samples are input to obtain the output spike trains; each input sample produces spikes, and the class associated with the spiking neuron that currently fires is the class assigned after network learning; finally, the number of correctly classified samples is counted against the data-set labels to obtain the classification accuracy of the feature-learning network.
CN202310784438.2A 2023-06-29 2023-06-29 Self-organizing mapping clustering method based on impulse neural network Pending CN116796207A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310784438.2A CN116796207A (en) 2023-06-29 2023-06-29 Self-organizing mapping clustering method based on impulse neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310784438.2A CN116796207A (en) 2023-06-29 2023-06-29 Self-organizing mapping clustering method based on impulse neural network

Publications (1)

Publication Number Publication Date
CN116796207A true CN116796207A (en) 2023-09-22

Family

ID=88046594

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310784438.2A Pending CN116796207A (en) 2023-06-29 2023-06-29 Self-organizing mapping clustering method based on impulse neural network

Country Status (1)

Country Link
CN (1) CN116796207A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117311516A (en) * 2023-11-28 2023-12-29 北京师范大学 Motor imagery electroencephalogram channel selection method and system
CN117311516B (en) * 2023-11-28 2024-02-20 北京师范大学 Motor imagery electroencephalogram channel selection method and system
CN117437382A (en) * 2023-12-19 2024-01-23 成都电科星拓科技有限公司 Updating method and system for data center component
CN117437382B (en) * 2023-12-19 2024-03-19 成都电科星拓科技有限公司 Updating method and system for data center component

Similar Documents

Publication Publication Date Title
CN116796207A (en) Self-organizing mapping clustering method based on impulse neural network
Papageorgiou et al. Fuzzy cognitive map learning based on nonlinear Hebbian rule
Shrestha et al. Stable spike-timing dependent plasticity rule for multilayer unsupervised and supervised learning
CN112633497A (en) Convolutional pulse neural network training method based on reweighted membrane voltage
CN111858989A (en) Image classification method of pulse convolution neural network based on attention mechanism
CN112906828A (en) Image classification method based on time domain coding and impulse neural network
CN114266351A (en) Pulse neural network training method and system based on unsupervised learning time coding
CN109635938B (en) Weight quantization method for autonomous learning impulse neural network
CN111091815A (en) Voice recognition method of aggregation label learning model based on membrane voltage driving
Zilouchian Fundamentals of neural networks
Jin et al. Evolutionary multi-objective optimization of spiking neural networks
Zhang et al. Introduction to artificial neural network
CN110874629A (en) Structure optimization method of reserve pool network based on excitability and inhibition STDP
CN116403054A (en) Image optimization classification method based on brain-like network model
Wu et al. Echo state network prediction based on backtracking search optimization algorithm
CN113628615B (en) Voice recognition method and device, electronic equipment and storage medium
KR20200094354A (en) Method for generating spiking neural network based on burst spikes and inference apparatus based on spiking neural network
CN115412332A (en) Internet of things intrusion detection system and method based on hybrid neural network model optimization
CN114118378A (en) Hardware-friendly STDP learning method and system based on threshold self-adaptive neurons
CN111582470B (en) Self-adaptive unsupervised learning image identification method and system based on STDP
CN111582461B (en) Neural network training method and device, terminal equipment and readable storage medium
CN112232494A (en) Method for constructing pulse neural network for feature extraction based on frequency induction
Reid et al. Forecasting natural events using axonal delay
CN113408611A (en) Multilayer image classification method based on delay mechanism
Wang et al. A supervised learning algorithm to binary classification problem for spiking neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination