CN111582460B - Hidden layer neuron self-adaptive activation method and device and terminal equipment - Google Patents

Hidden layer neuron self-adaptive activation method and device and terminal equipment Download PDF

Info

Publication number
CN111582460B
CN111582460B CN202010438305.6A CN202010438305A CN111582460B CN 111582460 B CN111582460 B CN 111582460B CN 202010438305 A CN202010438305 A CN 202010438305A CN 111582460 B CN111582460 B CN 111582460B
Authority
CN
China
Prior art keywords
hidden layer
activation
layer neuron
current
representing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010438305.6A
Other languages
Chinese (zh)
Other versions
CN111582460A (en
Inventor
李楠
李清江
刘森
李纪伟
徐晖
刁节涛
陈长林
宋兵
王义楠
刘海军
于红旗
李智炜
王伟
王玺
步凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN202010438305.6A priority Critical patent/CN111582460B/en
Publication of CN111582460A publication Critical patent/CN111582460A/en
Application granted granted Critical
Publication of CN111582460B publication Critical patent/CN111582460B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Neurology (AREA)
  • Semiconductor Memories (AREA)

Abstract

The embodiment of the invention discloses a hidden layer neuron self-adaptive activation method, a hidden layer neuron self-adaptive activation device and terminal equipment, which are applied to a memristor pulse neural network, wherein each synapse in the memristor pulse neural network only comprises a memristor, and the method comprises the following steps: calculating an average input current according to the input current of each hidden layer neuron in the same layer; injecting the average input current into the individual hidden layer neurons; controlling hidden layer neurons with corresponding input currents larger than the average input current to discharge in sequence according to a preset discharge sequence rule; determining an activation value of each discharged hidden layer neuron according to the discharge sequence of each discharged hidden layer neuron; and utilizing the activation value to activate the hidden layer neurons of each discharge. The hidden layer neuron self-adaptive activation technology provided by the scheme realizes the lateral inhibition between hidden layer neurons; and each synapse can only contain one memristor, avoiding the construction of complex hardware connection networks.

Description

Hidden layer neuron self-adaptive activation method and device and terminal equipment
Technical Field
The invention relates to the field of artificial intelligence, in particular to a hidden layer neuron self-adaptive activation method, a hidden layer neuron self-adaptive activation device and terminal equipment.
Background
In the big data era, artificial Intelligence (AI) technology has rapidly developed. A Memristor impulse Neural network (SNNs) based on a Memristor as a physical basis and an SNNs (Neural Networks) as an algorithm basis provides a high-efficiency brain-like calculation solution. A memristor is a two-port nonvolatile memory device whose resistance state can be changed by a voltage applied across it or a current flowing through it, has a high integration density and extremely low operating power consumption; the impulse neural network is a novel biological inspired network, and has great potential in network efficiency by transmitting information between neurons through impulse signals with space-time information. As proved to have the characteristics of Spike-timing-dependent plasticity (STDP) similar to those of the biological brain, the memristor can be used as a synaptic unit to be applied to SNNs, and a brain-like computing system close to the efficacy and performance of the biological brain is realized.
At present, an additional inhibition layer neuron is required to be introduced for realizing lateral inhibition between hidden layer neurons, so that the calculation process is more complicated; and two memristors are used as synapses to generate positive and negative pulses to simulate positive and negative weights, so that hardware connection is complicated, and the implementation cost is increased to a certain extent.
Disclosure of Invention
In view of the above problems, the present invention provides a hidden layer neuron adaptive activation method, apparatus and terminal device, which can achieve lateral inhibition without introducing an additional inhibition layer neuron, and each synapse may only contain one memristor, avoiding constructing a complex hardware connection network.
A first embodiment of the invention provides a hidden layer neuron adaptive activation method, which is applied to a memristor pulse neural network, wherein each synapse in the memristor pulse neural network only contains one memristor, and the method comprises the following steps:
calculating an average input current according to the input current of each hidden layer neuron in the same layer;
injecting the average input current into the individual hidden layer neurons;
controlling hidden layer neurons with corresponding input currents larger than the average input current to sequentially discharge according to a preset discharge sequence rule;
determining an activation value of each discharged hidden layer neuron according to the discharge sequence of each discharged hidden layer neuron;
and utilizing the activation value to activate the hidden layer neurons of each discharge.
In the method for adaptively activating a hidden layer neuron according to the above embodiment, the preset firing order rule includes: the larger the input current, the earlier the discharge.
In the method for adaptively activating a hidden layer neuron according to the above embodiment, the activation value is obtained according to the following formula:
Figure BDA0002503124710000021
represents the corresponding activation value of the mth hidden layer neuron in the ith discharge, N max Represents a preset total number of neurons allowed to discharge, and τ represents a preset constant.
The method for adaptive activation of hidden neurons according to the second embodiment of the present invention further comprises:
and when the current-time activation state of the discharged hidden layer neuron is less than or equal to a preset activation state threshold value, updating the current-time activation state of the discharged hidden layer neuron.
The hidden layer neuron adaptive activation method described in the above embodiment determines the activation state at the current time according to the following formula:
Figure BDA0002503124710000031
v active,1 representing said current moment of activation state, v active,0 Representing the activation state at said previous moment, δ representing a predetermined incremental parameter, δ>0, sigma represents a predetermined attenuation parameter, 0<σ<1,/>
Figure BDA0002503124710000032
Represents the corresponding activation value of the mth hidden layer neuron in the lth discharge, and/or the corresponding judgment value of the hidden layer neuron in the lth discharge>
Figure BDA0002503124710000033
Represents the corresponding activation value of the (m-1) th undischarged hidden layer neuron in the l layer, t refractory,1 Representing the refractory period time at the current time.
The method for adaptive activation of hidden neurons according to the above embodiment further includes:
and when the current-time activation state of the discharged hidden layer neuron is larger than a preset activation state threshold value, updating the current-time refractory period of the discharged hidden layer neuron.
The adaptive hidden layer neuron activation method described in the above embodiment updates the refractory period time at the current time according to the following formula:
Figure BDA0002503124710000034
t refractory,1 representing the refractory period time, t, of said current moment refractory,0 Represents the refractory period time of the previous moment; v. of active,th A threshold value representing an activation state; tau is max Representing a preset maximum time of inactivity, Δ τ r Represents a presetThe corresponding decrement value of the due period time.
A third embodiment of the present invention provides an adaptive hidden layer neuron activation device applied to a memristor impulse neural network, wherein each synapse in the memristor impulse neural network only contains one memristor, the device comprising:
the initial module is used for calculating the average input current according to the input current of each hidden layer neuron in the same layer;
a cancellation module for injecting the average input current into the hidden neurons;
the discharging module is used for controlling the hidden layer neurons of which the corresponding input currents are larger than the average input current to sequentially discharge according to a preset discharging sequence rule;
the calculation module is used for determining the activation value of each discharged hidden layer neuron according to the discharge sequence of each discharged hidden layer neuron;
an activation module for using the activation value to activate the respective discharged hidden neurons.
The above embodiments of the present invention relate to a terminal device, including a memory for storing a computer program and a processor for executing the computer program to cause the terminal device to execute the hidden layer neuron adaptive activation method according to the above embodiments.
The above-described embodiments of the present invention relate to a readable storage medium storing a computer program that, when run on a processor, executes the hidden layer neuron adaptive activation method according to the above-described embodiments.
The hidden layer neuron self-adaptive activation method disclosed by the technical scheme injects an average input current into all hidden layer neurons in the same layer, controls the hidden layer neurons with the corresponding input currents larger than the average input current to sequentially discharge according to a preset discharge sequence rule so as to enable only one memristor to be used as a synapse to generate a positive pulse simulation positive weight value, and then calculates an activation value through the discharge sequence of the hidden layer neurons so as to activate the corresponding hidden layer neurons by using the activation value. On one hand, the technical scheme can realize the lateral inhibition between hidden layer neurons without additionally introducing an inhibition layer neuron, so that the calculation process is simpler; on the other hand, the technical scheme avoids adopting two memristors as synapses to generate positive and negative pulses to simulate positive and negative weights, hardware connection is simpler, and implementation cost is reduced.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings required to be used in the embodiments will be briefly described below, and it should be understood that the following drawings only illustrate some embodiments of the present invention, and therefore should not be considered as limiting the scope of the present invention. Like components are numbered similarly in the various figures.
FIG. 1 illustrates a hidden layer neuron adaptive activation method provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram showing the spatial interaction in hidden neuron adaptive activation provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating temporal interactions in adaptive activation of hidden layer neurons provided by embodiments of the present invention;
fig. 4 illustrates a hidden layer neuron adaptive activation apparatus according to an embodiment of the present invention.
Description of the main element symbols:
1-hidden layer neuron adaptive activation means; 100-initial module; 200-a cancellation module; 300-a discharge module; 400-a calculation module; 500-activation module.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
Hereinafter, the terms "including", "having", and their derivatives, which may be used in various embodiments of the present invention, are intended to indicate only specific features, numerals, steps, operations, elements, components, or combinations of the foregoing, and should not be construed as first excluding the presence of or adding to one or more other features, numerals, steps, operations, elements, components, or combinations of the foregoing.
Furthermore, the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which various embodiments of the present invention belong. The terms (such as those defined in commonly used dictionaries) should be interpreted as having a meaning that is consistent with their contextual meaning in the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein in various embodiments of the present invention.
First, it is clear that the way neurons are activated in the present invention is derived from the classical leaky-integrate-and-fire (LIF) neuron model. In addition, considering that the elements in the output vector of the first layer of neurons are distributed as plus or minus 1 (determined by the input sample) in a non-uniform way, the polarity of the current input by the second layer of neurons is uncertain and varies in a large range, so that the neuron membrane potential threshold is difficult to determine. In other researches, two memristors are adopted as synapses to realize a method of positive and negative weights, the weights in the method have no negative weights, and w belongs to (0,1), namely each synapse only contains one memristor.
Example 1
This example, referring to fig. 1, shows a hidden layer neuron adaptive activation method, which comprises the following steps:
step S100: and calculating the average input current according to the input current of each hidden layer neuron in the same layer.
The hidden layer neuron activation mode in the embodiment is derived from a classical leaky-integrate-and-fire (LIF) neuron model, and on the basis, the average input current is calculated according to the input current of each hidden layer neuron in the same layer
Figure BDA0002503124710000071
Wherein +>
Figure BDA0002503124710000072
An input current representing the mth hidden layer neuron of the l-th layer, M representing the total number of hidden layer neurons in the l-th layer in the neural network, and +>
Figure BDA00025031247100000711
Representing the average input current.
Step S200: injecting the average input current into the individual hidden layer neurons.
Exemplarily, referring to fig. 2, the average input current obtained by calculating the input current of each hidden layer neuron is injected into each hidden layer neuron to cancel the original input current. I.e. the classical LIF neuron model is here modified to:
Figure BDA0002503124710000073
at time t, the current->
Figure BDA0002503124710000074
Input current ^ equivalent to mth hidden layer neuron of l layer>
Figure BDA0002503124710000075
And the membrane potential->
Figure BDA0002503124710000076
Is then currentIs calculated.
Step S300: and controlling the hidden layer neurons with the corresponding input currents larger than the average input current to discharge in sequence according to a preset discharge sequence rule.
Can be combined with
Figure BDA0002503124710000077
Is set to 0, i.e. the input current of the mth hidden layer neuron->
Figure BDA0002503124710000078
Upon exceeding the average input current>
Figure BDA0002503124710000079
The discharge is allowed to occur. Exemplarily, in a sample period, the total number of hidden layer neurons that are discharged in the l-th layer may be M, where the discharge time of a certain neuron is t f The above activation principle can be defined as: />
Figure BDA00025031247100000710
Wherein n is the discharge sequence number of the mth hidden layer neuron of the l layer. For example, n =4 indicates that the mth hidden layer neuron in the l-th layer is discharged at the 4 th in the l-th layer.
Preferably, the preset discharging sequence rule comprises: the hidden layer neurons in the first layer discharge earlier the larger the input current.
Mth hidden layer neuron in the first layer at the time of discharge t f A pulse signal is generated AND connected by wire-AND input to a temporal relationship analysis (TRE) unit. TRE integrates these pulse signals generated at different times using a bus to indicate the sequencing of their respective pulse generation and then back into the individual neurons. The initial pulse of each neuron is anded (anded) with this bus signal to determine the firing order n of the corresponding neuron.
Step S400: and determining the activation value of each discharged hidden layer neuron according to the discharge sequence of each discharged hidden layer neuron.
Exemplarily, the activation value of a hidden layer neuron is obtained according to the following formula:
Figure BDA0002503124710000081
represents the corresponding activation value of the mth hidden layer neuron in the ith discharge, N max Represents a preset total number of neurons allowed to discharge, and τ represents a preset constant.
N max The total number of neurons allowed to fire can be set according to specific classification conditions, and it can be understood that the activation value of the 1 st firing hidden layer neuron is the largest and is 1, and the activation value of the subsequent firing hidden layer neurons is correspondingly reduced. By adjusting the preset constant τ, the rate of the corresponding decrease in the activation value of the hidden neurons can be adjusted.
Preferably, in a specific hardware implementation, the activation values may be simulated by pulse trains of different densities. Exemplarily, the pulse train may comprise two consecutive portions, namely a negative pulse and a positive pulse having the same pulse width. It will be appreciated that, based on the initial driving pulse train of the input neuron, the corresponding hidden layer neuron will generate a corresponding pulse train after being activated.
Step S500: and utilizing the activation value to activate the hidden layer neurons of each discharge.
Calculating the activation value obtained by the steps
Figure BDA0002503124710000082
Corresponding to the activation of the mth hidden layer neuron of the l layer. For example, the activation value ^ of the first firing neuron of the l-th layer>
Figure BDA0002503124710000083
With activation values>
Figure BDA0002503124710000084
Deactivating the corresponding first discharged hidden layer neuron of layer i.
The hidden layer neuron self-adaptive activation method disclosed by the technical scheme injects an average input current into all hidden layer neurons in the same layer, controls the hidden layer neurons with the corresponding input currents larger than the average input current to sequentially discharge according to a preset discharge sequence rule so as to enable only one memristor to be used as a synapse to generate a positive pulse simulation positive weight value, and then calculates an activation value through the discharge sequence of the hidden layer neurons so as to activate the corresponding hidden layer neurons by using the activation value. On one hand, the activation value is controlled through the discharging sequence, so that one hidden layer neuron in the same layer can be enabled to be uniquely activated when discharging, other hidden layer neurons are inhibited, and the other hidden layer neurons are prevented from generating interference on the discharged hidden layer neurons. According to the technical scheme, lateral inhibition among hidden layer neurons can be realized without additionally introducing an inhibition layer neuron, so that the calculation process is simpler; on the other hand, the technical scheme avoids adopting two memristors as synapses to generate positive and negative pulses to simulate positive and negative weights, hardware connection is simpler, and implementation cost is reduced.
Example 2
In the training phase, the hidden neurons' activation is affected not only by other hidden neurons in parallel with them, but also by themselves in the time dimension. This example, see fig. 3, shows the interaction of the hidden neurons themselves in the time dimension in the hidden neuron adaptive activation method.
It can be understood that after the mth hidden layer neuron of the l layer enters the activation state, the current-time activation state of the discharged hidden layer neuron is compared with a preset activation state threshold, and when the current-time activation state of the discharged hidden layer neuron is less than or equal to the preset activation state threshold, the current-time activation state of the discharged hidden layer neuron is updated; when the current-time activation state of the discharged hidden layer neuron is larger than a preset activation state threshold value, the discharged hidden layer neuron enters a refractory period, and the current-time refractory period time of the discharged hidden layer neuron is updated, so that the discharged hidden layer neuron enters the refractory period and does not react to the input current within the refractory period time. The hidden layer neurons which are excessively activated in the refractory period are inhibited, so that a self-adjusting effect is achieved, and the activation times of all the hidden layer neurons are controlled to be equal.
It will be appreciated that the activation state, as described above, may reflect the liveness of hidden neurons. The number of times of activation of hidden layer neurons in the l-th layer should be as equal as possible in order to ensure that the features included in the weights are as diverse as possible. When the mth hidden layer neuron of the l-th layer fires first (first discharge), i.e., n =1, the activation state increases by δ (δ > 0), otherwise, the activation state decays by a coefficient σ (0 < σ < 1).
Further, the current time activation state v of the draining hidden neurons active,1 Less than or equal to a preset activation state threshold v active,th And updating the current-time activation state of the discharged hidden layer neuron. Correspondingly, the active state update rule at the current time is as follows:
from the time perspective, the activation state is divided into two states, namely the activation state comprises the activation state at the previous moment and the activation state at the current moment, and the activation state at the current moment is determined according to the following formula:
Figure BDA0002503124710000101
v active,1 representing said current moment of activation state, v active,0 Representing the activation state at the previous moment, δ representing a preset incremental parameter, δ>0, sigma represents a predetermined attenuation parameter, 0<σ<1,/>
Figure BDA0002503124710000102
Represents the corresponding activation value of the mth hidden layer neuron in the lth discharge, and/or the corresponding judgment value of the hidden layer neuron in the lth discharge>
Figure BDA0002503124710000103
Represents the corresponding activation value of the (m-1) th undischarged hidden layer neuron in the l layer, t refractory,1 Representing the refractory period time at the current time.
Exemplarily, in the present embodiment, only one hidden layer neuron in the l-th layer is allowed to discharge,i.e. the total number N of hidden neurons that are allowed to discharge max 1, calculating formula according to the activation value
Figure BDA0002503124710000104
The activation value of the mth hidden layer neuron that is only activated in layer i can be calculated as ^ based on the activation value of the mth hidden layer neuron>
Figure BDA0002503124710000105
The remaining m-1 unactivated hidden layer neurons in layer I have an activation value of 0.
In layer l, only one hidden layer neuron is allowed to discharge. Active state v at the present time active,1 And the previous moment activation state v active,0 The method comprises the following steps of presetting an increment parameter delta, a preset attenuation parameter sigma, and a corresponding activation value of the mth hidden layer neuron of the l layer during the nth discharge
Figure BDA0002503124710000106
Activation value ^ corresponding to m-1 th undischarged hidden layer neuron on ith layer>
Figure BDA0002503124710000107
And the refractory period time t at the current moment refractory,1 It is relevant.
Exemplary, the activation state v at the previous moment is preset active,0 =1, δ =1, σ =0.4. If, at the current time, the only activated m hidden layer neuron in the l layer corresponds to the activated value
Figure BDA0002503124710000111
And, the current time refractory period time t refractory,1 =0, then the state v is activated at the current moment active,1 =v active,0 + delta, i.e. v active,1 =2; if, at the current time, the activation value corresponding to m-1 hidden layer neurons that are not activated in layer I->
Figure BDA0002503124710000112
And, the current time refractory period time t refractory,1 =0, thenActive state v at the present time active,1 =σ·v active,0 I.e. v active,1 =0.4; if so, the refractory period time t at the current moment refractory,1 >0, then the current moment activates the state v active,1 =0。
It should be appreciated that at the current time, the only mth hidden layer neuron in layer I that is activated corresponds to the activation value
Figure BDA0002503124710000113
And, the current time refractory period time t refractory,1 =0, then the state v is activated at the current moment active,1 =v active,0 + δ, active state v at the current time active,1 Increasing according to a preset increment parameter delta, and activating the state v when increasing to the current moment active,1 Greater than the activation state threshold v active,th In time, the current-time refractory period time of the discharged hidden layer neuron should be updated so that the discharged hidden layer neuron enters the refractory period and does not react to the input current within the refractory period time, thereby inhibiting the excessively activated hidden layer neuron. Correspondingly, the current refractory period time update rule is as follows:
the refractory period time comprises refractory period time of the previous moment and refractory period time of the current moment; the refractory period time at the current moment is updated according to the following formula:
Figure BDA0002503124710000114
t refractory,1 is the refractory period time of the current time, t refractory,0 Is the refractory period time of the previous time; v. of active,th Is a threshold for the active state; τ is the maximum time of inactivity, Δ τ r Is a corresponding decrement value of the preset refractory period time.
Exemplarily, when the mth hidden layer neuron in the l-th layer is activated multiple times, the activation state v at the current moment active,1 Increasing according to a preset increment parameter delta, and activating the state v when increasing to the current moment active,1 Greater than the activation state threshold v active,th In the first layerCurrent time refractory period time t of mth hidden layer neuron refractory,1 Updating to a preset maximum inactivity time τ max To ensure that t + tau starts from the current time max At that time, the mth hidden layer neuron in the lth layer does not respond to the input current. With the change of time, the refractory period time t of the previous moment refractory,0 According to the corresponding decrement value delta tau of the preset refractory period time r Gradually decreasing until the number decreases to 0, and the mth hidden layer neuron in the l layer can enter a response period and respond to the input current.
According to the technical scheme, when the hidden layer neuron is continuously activated to an activated state and reaches an activated state threshold value, the hidden layer neuron enters a refractory period and does not react to input current in the refractory period, so that the hidden layer neuron is prevented from being excessively activated, and the hidden layer neuron is prevented from being excessively activated by entering the refractory period. Furthermore, the activation times of hidden layer neurons in the same layer should be made as equal as possible to ensure that the features contained in the weight are as diverse as possible.
Example 3
In this embodiment, referring to fig. 4, a schematic structural diagram of the hidden layer neuron adaptive activation device 1 is shown, and the device includes an initialization module 100, a cancellation module 200, a discharge module 300, a calculation module 400, and an activation module 500.
An initial module 100, configured to calculate an average input current according to input currents of all hidden layer neurons in the same layer; a cancellation module 200 for injecting the average input current into the respective hidden layer neurons; the discharging module 300 is configured to control hidden layer neurons with corresponding input currents larger than the average input current to sequentially discharge according to a preset discharging sequence rule; a calculation module 400, configured to determine, according to a discharging order of each discharged hidden layer neuron, an activation value of each discharged hidden layer neuron; an activation module 500 configured to activate the respective discharged hidden neurons using the activation values.
The hidden layer neuron adaptive activation device 1 of this embodiment is configured to execute the hidden layer neuron adaptive activation method according to the foregoing embodiment through the cooperative use of the initial module 100, the cancellation module 200, the discharging module 300, the calculation module 400, and the activation module 500, and the implementation and beneficial effects related to the foregoing embodiment are also applicable in this embodiment, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative and, for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, each functional module or unit in each embodiment of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention or a part thereof which contributes to the prior art in essence can be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a smart phone, a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention.

Claims (6)

1. A hidden layer neuron adaptive activation method is applied to a memristor pulse neural network, each synapse in the memristor pulse neural network only comprises one memristor, and the method comprises the following steps:
calculating an average input current according to the input current of each hidden layer neuron in the same layer;
injecting the average input current into the individual hidden layer neurons;
controlling hidden layer neurons with corresponding input currents larger than the average input current to discharge in sequence according to a preset discharge sequence rule;
determining an activation value of each discharged hidden layer neuron according to the discharge sequence of each discharged hidden layer neuron;
using the activation values to stress activate the respective discharged hidden layer neurons;
when the current-time activation state of the discharged hidden layer neuron is less than or equal to a preset activation state threshold value, updating the current-time activation state of the discharged hidden layer neuron;
when the current-time activation state of the discharged hidden layer neuron is larger than a preset activation state threshold value, updating the current-time refractory period of the discharged hidden layer neuron;
wherein the current moment activation state is determined according to the following formula:
Figure QLYQS_1
v active,1 representing the active state at the present moment, v active,0 Representing the activation state at the previous moment, δ representing a predetermined incremental parameter, δ>0, sigma represents a predetermined attenuation parameter, 0<σ<1,/>
Figure QLYQS_2
Represents the corresponding activation value of the mth hidden layer neuron in the lth discharge, and/or the corresponding judgment value of the hidden layer neuron in the lth discharge>
Figure QLYQS_3
Represents the corresponding activation value of the (m-1) th undischarged hidden layer neuron in the l layer, t refractory,1 Representing the refractory period time at the current moment;
and updating the refractory period time at the current moment according to the following formula:
Figure QLYQS_4
t refractory,1 representing the refractory period time, t, of said current moment refractory,0 Representing the refractory period time of the previous moment; v. of active,th A threshold value representing an activation state; tau is max Representing a preset maximum time of inactivity, Δ τ r Representing a corresponding decrement value for a preset refractory period time.
2. The hidden layer neuron adaptive activation method according to claim 1, wherein the preset firing order rule comprises: the larger the input current, the earlier the discharge.
3. The hidden neuron adaptive activation method according to claim 1, wherein the activation value is obtained according to the following formula:
Figure QLYQS_5
Figure QLYQS_6
represents the corresponding activation value of the mth hidden layer neuron in the ith discharge, N max Represents a preset total number of neurons allowed to discharge, and τ represents a preset constant.
4. An adaptive hidden layer neuron activation device applied to a memristor pulse neural network, wherein each synapse in the memristor pulse neural network only contains one memristor, the device comprising:
the initial module is used for calculating the average input current according to the input current of each hidden layer neuron in the same layer;
a cancellation module for injecting the average input current into the hidden neurons;
the discharging module is used for controlling the hidden layer neurons of which the corresponding input currents are larger than the average input current to sequentially discharge according to a preset discharging sequence rule;
the calculation module is used for determining the activation value of each discharged hidden layer neuron according to the discharge sequence of each discharged hidden layer neuron;
an activation module for using the activation values to stress activate the respective discharged hidden layer neurons;
the updating module is used for updating the current-time activation state of the discharged hidden layer neuron when the current-time activation state of the discharged hidden layer neuron is less than or equal to a preset activation state threshold value; when the current-time activation state of the discharged hidden layer neuron is larger than a preset activation state threshold value, updating the current-time refractory period of the discharged hidden layer neuron;
wherein the current moment activation state is determined according to the following formula:
Figure QLYQS_7
v active,1 representing the active state at the present moment, v active,0 Representing the activation state at the previous moment, δ representing a preset incremental parameter, δ>0, σ represents a predetermined attenuation parameter, 0<σ<1,/>
Figure QLYQS_8
Represents the corresponding activation value of the mth hidden layer neuron in the lth discharge, and/or the corresponding judgment value of the hidden layer neuron in the lth discharge>
Figure QLYQS_9
Represents the corresponding activation value of the (m-1) th undischarged hidden layer neuron in the l layer, t refractory,1 Representing the refractory period time at the current moment;
and updating the refractory period time at the current moment according to the following formula:
Figure QLYQS_10
t refractory,1 representing the refractory period time, t, of said current moment refractory,0 Representing the refractory period time at the previous moment; v. of active,th A threshold value representing an activation state; tau. max Representing a preset maximum time of inactivity, Δ τ r A corresponding decrement value representing a preset refractory period time.
5. A terminal device comprising a memory for storing a computer program and a processor for executing the computer program to cause the terminal device to perform the hidden layer neuron adaptive activation method according to any one of claims 1 to 3.
6. A readable storage medium characterized by storing a computer program which, when run on a processor, performs the hidden layer neuron adaptive activation method according to any one of claims 1 to 3.
CN202010438305.6A 2020-05-21 2020-05-21 Hidden layer neuron self-adaptive activation method and device and terminal equipment Active CN111582460B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010438305.6A CN111582460B (en) 2020-05-21 2020-05-21 Hidden layer neuron self-adaptive activation method and device and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010438305.6A CN111582460B (en) 2020-05-21 2020-05-21 Hidden layer neuron self-adaptive activation method and device and terminal equipment

Publications (2)

Publication Number Publication Date
CN111582460A CN111582460A (en) 2020-08-25
CN111582460B true CN111582460B (en) 2023-04-14

Family

ID=72125238

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010438305.6A Active CN111582460B (en) 2020-05-21 2020-05-21 Hidden layer neuron self-adaptive activation method and device and terminal equipment

Country Status (1)

Country Link
CN (1) CN111582460B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1656472A (en) * 2001-11-16 2005-08-17 陈垣洋 Plausible neural network with supervised and unsupervised cluster analysis
WO2013000940A1 (en) * 2011-06-30 2013-01-03 Commissariat A L'energie Atomique Et Aux Energies Alternatives Network of artificial neurones based on complementary memristive devices
CN106815636A (en) * 2016-12-30 2017-06-09 华中科技大学 A kind of neuron circuit based on memristor

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10171084B2 (en) * 2017-04-24 2019-01-01 The Regents Of The University Of Michigan Sparse coding with Memristor networks

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1656472A (en) * 2001-11-16 2005-08-17 陈垣洋 Plausible neural network with supervised and unsupervised cluster analysis
WO2013000940A1 (en) * 2011-06-30 2013-01-03 Commissariat A L'energie Atomique Et Aux Energies Alternatives Network of artificial neurones based on complementary memristive devices
CN106815636A (en) * 2016-12-30 2017-06-09 华中科技大学 A kind of neuron circuit based on memristor

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Sheng-Yang Sun 等.Quaternary synapses network for memristor-based spiking convolutional neural networks.《IEICE Electronics Express》.2019,第16卷(第4期),全文. *
李清江 等.基于忆阻器的感存算一体技术研究进展.《微纳电子与智能制造》.2019,第1卷(第4期),全文. *

Also Published As

Publication number Publication date
CN111582460A (en) 2020-08-25

Similar Documents

Publication Publication Date Title
KR101793011B1 (en) Efficient hardware implementation of spiking networks
EP3136304A1 (en) Methods and systems for performing reinforcement learning in hierarchical and temporally extended environments
KR20160136364A (en) Cold neuron spike timing back propagation
US20150134581A1 (en) Method for training an artificial neural network
WO2015112262A1 (en) Configuring sparse neuronal networks
KR20170031695A (en) Decomposing convolution operation in neural networks
KR101790909B1 (en) Method and apparatus for producing programmable probability distribution function of pseudo-random numbers
KR20160084401A (en) Implementing synaptic learning using replay in spiking neural networks
KR101819880B1 (en) Blink and averted gaze avoidance in photographic images
KR20160136381A (en) Differential encoding in neural networks
KR20160123309A (en) Event-based inference and learning for stochastic spiking bayesian networks
KR20150087266A (en) Piecewise linear neuron modeling
WO2015112643A1 (en) Monitoring neural networks with shadow networks
KR20160125967A (en) Method and apparatus for efficient implementation of common neuron models
KR20160058825A (en) Methods and apparatus for implementation of group tags for neural models
KR20160047581A (en) Methods and apparatus for implementing a breakpoint determination unit in an artificial nervous system
TW201533668A (en) Short-term synaptic memory based on a presynaptic spike
EP3108413A2 (en) Dynamic spatial target selection
KR20160135206A (en) Analog signal reconstruction and recognition via sub-threshold modulation
KR101825937B1 (en) Plastic synapse management
Mobus Toward a theory of learning and representing causal inferences in neural networks
WO2015057302A2 (en) Congestion avoidance in networks of spiking neurons
CN111582460B (en) Hidden layer neuron self-adaptive activation method and device and terminal equipment
KR101825933B1 (en) Phase-coding for coordinate transformation
WO2015023441A2 (en) Post ghost plasticity

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant