WO2018133568A1 - Compound-mode neuron information processing method and system, and computer device - Google Patents


Info

Publication number
WO2018133568A1
WO2018133568A1 (PCT/CN2017/114661, CN2017114661W)
Authority
WO
WIPO (PCT)
Prior art keywords
neuron
information
current
pulse
output information
Application number
PCT/CN2017/114661
Other languages
French (fr)
Chinese (zh)
Inventor
裴京
邓磊
施路平
吴臻志
李国齐
Original Assignee
清华大学
Application filed by 清华大学
Publication of WO2018133568A1 publication Critical patent/WO2018133568A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/049 - Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N3/06 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/061 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using biological neurons, e.g. biological neurons connected to an integrated circuit

Definitions

  • the present invention relates to the field of artificial neural network technology, and in particular to a composite mode neuron information processing method, system and computer device.
  • the neurons of the traditional neuromorphic system are limited to support a single information processing and transmission mode: artificial neural network or pulsed neural network, resulting in a high cost of neural network construction for a single task, and low efficiency of neural networks for multiple tasks.
  • Reading a neuron working mode configuration parameter, the neuron working mode configuration parameter comprising an artificial neuron working mode configuration parameter or a pulsed neuron working mode configuration parameter;
  • the current neuron working mode includes an artificial neuron working mode or a pulsed neuron working mode
  • front-end neuron output information includes front-end artificial neuron output information or front-end pulse neuron output information;
  • Reading current neuron information including current artificial neuron information or current pulsed neuron information;
  • Calculating current neuron output information according to the front-end neuron output information and the current neuron information, including calculating current artificial neuron output information according to the front-end artificial neuron output information and the current artificial neuron information, or calculating current pulsed neuron output information according to the front-end pulse neuron output information and the current pulsed neuron information;
  • the front end artificial neuron output information includes: membrane potential information output by the front end artificial neuron, and a connection weight index of the front end artificial neuron and the current artificial neuron;
  • the current artificial neuron information includes: current artificial neuron offset information;
  • then calculating the current artificial neuron output information includes:
  • according to the connection weight index of the front-end artificial neuron and the current artificial neuron, reading the connection weight of the front-end artificial neuron and the current artificial neuron;
  • calculating the current artificial neuron output information through a preset artificial neuron activation function according to the membrane potential information output by the front-end artificial neuron, the connection weight of the front-end artificial neuron and the current artificial neuron, and the current artificial neuron bias information; after this step, the method further includes:
  • the outputting the current artificial neuron output information includes:
  • the front-end pulse neuron output information includes: pulse tip information output by the front-end pulse neuron, and a connection weight index of the front-end pulse neuron and the current pulse neuron;
  • the current pulse neuron information includes: a current time window width, a pulse tip information sequence in a current time window, historical membrane potential information, and membrane potential leakage information;
  • then calculating the current pulse neuron output information includes:
  • according to the connection weight index of the front-end pulse neuron and the current pulse neuron, reading the connection weight of the front-end pulse neuron and the current pulse neuron;
  • calculating the current pulse neuron output information through a pulse neuron calculation model according to the front-end pulse neuron input information, the connection weight of the front-end pulse neuron and the current pulse neuron, the historical membrane potential information, and the membrane potential leakage information; after the step of calculating the current pulse neuron output information and before the step of outputting the current pulse neuron output information, the method further includes:
  • the issuance trigger flag information includes issuance triggered or issuance not triggered; and when the issuance trigger flag information is issuance triggered,
  • the refractory period timer is reset, and the historical membrane potential information is updated to a preset reset membrane potential information.
  • the method further includes:
  • when the issuance trigger flag information is issuance not triggered, the refractory period width and the current time step of the refractory period timer are read;
  • the obtaining a threshold potential includes: reading a random threshold mask potential, a threshold offset, and a random threshold; performing a bitwise AND operation on the random threshold and the random threshold mask potential to obtain a threshold random superposition amount; and determining the threshold potential based on the threshold random superposition amount and the threshold offset.
  • the outputting the current pulse neuron output information comprises:
  • Reading a second issuance enablement identifier where the second issuance enablement identifier includes allowing data to be issued or not allowing data to be issued, and when the second issuance enablement identifier is allowed to issue data,
  • the artificial neuron output information is compared with the potential extremum, and if the artificial neuron output information is greater than or equal to the potential extremum, the potential extremum is updated to the artificial neuron output information; this enables the neural network to complete the pooling step directly in the convolutional layer, without allocating physical space for pooling-layer neurons, greatly saving neuron resources and thereby reducing the construction cost of the neural network.
  • by setting the release enable identifier and the artificial neuron release data type parameter, the current artificial neuron output information is determined, making the output of the artificial neuron more controllable; the release enable flag can configure neurons that are not allowed to issue data and serve only as intermediate auxiliary computational neurons, which is necessary for some functions that require multiple neurons to work together; the release enable flag and the release data type can also work together to reduce the neuron output information of a set region to a single current neuron output information (the maximum value among the neuron output information of the set region), completing a direct maximum pooling operation.
  • the pulse tip information sequence in the current time window is updated according to the pulse tip information output by the front-end pulse neuron and the existing pulse tip information sequence in the current time window, yielding the pulse tip information update sequence for the current time window; by calculating the front-end pulse neuron input information through the attenuation function according to the current time window width, the connection weight of the front-end pulse neuron and the current pulse neuron, and this update sequence, a spatiotemporal pulse neural network model with time depth can be supported; compared with neural network schemes having only one time depth, this greatly improves the spatiotemporal information coding ability of the pulse neural network and enriches its application space.
  • the threshold potential is determined by reading a random threshold mask potential and a threshold offset and by receiving a configuration value given by a configuration register, so that the neuron issues pulse tip information with a degree of randomness: regardless of whether the membrane potential exceeds the fixed threshold bias, the presence of a positive or negative threshold random superposition amount means the neuron cell body may still issue pulses, which improves the computational power and information processing capability of the pulse neural network model.
  • the present invention also provides a computer device comprising a memory, a processor, and a computer program stored on the memory and operable on the processor, the processor executing the computer program to implement the steps of the method of any of the above embodiments.
  • the invention also provides a composite mode neuron information processing system, comprising:
  • a neuron working mode reading module configured to read a neuron working mode configuration parameter, where the neuron working mode configuration parameter comprises an artificial neuron working mode configuration parameter or a pulsed neuron working mode configuration parameter;
  • a neuron working mode configuration module configured to configure a current neuron working mode according to the neuron working mode configuration parameter, where the current neuron working mode includes an artificial neuron working mode or a pulsed neuron working mode;
  • the front-end neuron output information receiving module is configured to receive front-end neuron output information, where the front-end neuron output information includes front-end artificial neuron output information or front-end pulse neuron output information;
  • a current neuron information reading module configured to read current neuron information, where the current neuron information includes current artificial neuron information or current pulsed neuron information;
  • the current neuron output information calculation module is configured to calculate current neuron output information according to the front-end neuron output information and the current neuron information, including calculating current artificial neuron output information according to the front-end artificial neuron output information and the current artificial neuron information, or calculating current pulse neuron output information according to the front-end pulse neuron output information and the current pulsed neuron information;
  • the current neuron output information output module is configured to output the current artificial neuron output information or the current pulse neuron output information.
  • the composite mode neuron information processing method, the computer device and the system provided by the present invention configure a corresponding artificial neuron working mode or a pulsed neuron working mode according to a preset neuron working mode configuration parameter, and the corresponding neuron In the working mode, the current artificial neuron output information or the current pulsed neuron output information is calculated and outputted by receiving the front-end neuron output information and reading the current neuron information.
  • the composite mode neuron information processing system provided by the invention can configure the corresponding neuron working mode according to the requirements of a task, and only the neuron working mode configuration parameters need to be modified when switching between tasks in different neural network working modes; unlike neural network accelerator schemes that support only the artificial neural network mode, or neuromorphic schemes that support only the pulse neural network, the present invention can, on the same architecture, support both machine learning applications based on artificial neural networks and computational neuroscience applications based on pulsed neural networks, which enriches the types of information processed by a brain-like computing platform, reduces the cost of multi-task execution in different neural network working modes, and improves the efficiency of multi-task execution in different neural network working modes.
  • FIG. 1 is a schematic flow chart of a multi-mode neural network information processing method according to an embodiment
  • FIG. 2 is a schematic flow chart of a multi-mode neural network information processing method according to another embodiment
  • FIG. 3 is a schematic flow chart of a multi-mode neural network information processing method according to still another embodiment
  • FIG. 4 is a schematic flow chart of a multi-mode neural network information processing method according to still another embodiment
  • FIG. 5 is a schematic flowchart diagram of a multi-mode neural network information processing method according to still another embodiment
  • FIG. 6 is a schematic flow chart of a multi-mode neural network information processing method according to still another embodiment
  • FIG. 7 is a schematic structural diagram of a multi-mode neural network information processing system according to an embodiment
  • FIG. 8 is a schematic structural diagram of a multi-mode neural network information processing system according to another embodiment.
  • FIG. 9 is a schematic structural diagram of a multi-mode neural network information processing system according to still another embodiment.
  • FIG. 10 is a schematic structural diagram of a multi-mode neural network information processing system according to still another embodiment.
  • FIG. 11 is a schematic structural diagram of a multi-mode neural network information processing system according to still another embodiment
  • FIG. 12 is a schematic structural diagram of a multi-mode neural network information processing system according to still another embodiment.
  • FIG. 1 is a schematic flowchart of a multi-mode neural network information processing method according to an embodiment, and the multi-mode neural network information processing method shown in FIG. 1 includes:
  • Step S100 reading a neuron working mode configuration parameter, where the neuron working mode configuration parameter comprises an artificial neuron working mode configuration parameter or a pulsed neuron working mode configuration parameter.
  • the present invention provides a multi-mode and multi-functional composite neuromorphic cell unit based on an all-digital circuit, integrating the two information processing and transmission modes of an artificial neural network (ANN) and a pulsed neural network (SNN).
  • the neuron cell unit can be set to the working mode of the ANN or SNN according to the processing requirements of different tasks.
  • the neuron working mode configuration parameter specifies the predetermined working mode of the neural network.
  • Step S200 Configure a current neuron working mode according to the neuron working mode configuration parameter, where the current neuron working mode includes an artificial neuron working mode or a pulsed neuron working mode.
  • the neuron cell unit may be set to an ANN or SNN working mode.
  • Step S300 receiving front-end neuron output information, where the front-end neuron output information includes front-end artificial neuron output information or front-end pulse neuron output information.
  • Different front-end neuron output information is received depending on the configured working mode: if configured in the artificial neuron working mode, front-end artificial neuron output information is received; if configured in the pulsed neuron working mode, front-end pulsed neuron output information is received.
  • Step S400 reading current neuron information, the current neuron information including current artificial neuron information or current pulsed neuron information.
  • Different current neuron information is read depending on the configured working mode: if configured in the artificial neuron working mode, current artificial neuron information is read; if configured in the pulsed neuron working mode, current pulsed neuron information is read.
  • Step S500 calculating current neuron output information according to the front-end neuron output information and the current neuron information, including calculating current artificial neurons according to the front-end artificial neuron output information and the current artificial neuron information Outputting information, or calculating current pulse neuron output information according to the front end pulse neuron output information and the current pulse neuron information.
  • Depending on the configured working mode, the calculation process of the artificial neuron or the calculation process of the pulse neuron is performed, and the current artificial neuron output information or the current pulse neuron output information is obtained.
  • Step S600 outputting the current artificial neuron output information or the current pulse neuron output information.
  • According to the configured neuron working mode: when working in the artificial neuron mode, the current artificial neuron output information is output in the output manner of the artificial neural network; when working in the pulsed neuron mode, the current pulsed neuron output information is output in the output manner of the pulsed neural network.
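For illustration of steps S100 to S600, the following is a minimal Python sketch of a composite-mode neuron cell unit that dispatches to an ANN or SNN computation path according to its working mode configuration parameter; all names (CompositeNeuron, the mode constants, the activation f) are hypothetical and are not taken from the patent's implementation.

```python
# Minimal sketch (hypothetical names) of the composite-mode dispatch in steps S100-S600.
ANN_MODE, SNN_MODE = "artificial", "pulsed"

class CompositeNeuron:
    def __init__(self, mode_config, bias=0.0, leak=0.0, f=lambda x: max(0.0, x)):
        self.mode = mode_config   # S100/S200: read and configure the working mode
        self.bias = bias          # ANN: membrane-potential bias information
        self.leak = leak          # SNN: membrane-potential leakage information
        self.v_hist = 0.0         # SNN: historical membrane potential
        self.f = f                # activation function / neuron computation model

    def step(self, front_outputs, weights):
        # S300/S400: receive front-end outputs and read current-neuron information,
        # then S500/S600: compute and return the current output along the configured path.
        v_input = sum(w * x for w, x in zip(weights, front_outputs))
        if self.mode == ANN_MODE:
            return self.f(v_input + self.bias)             # V_ANN = f(V_input + V_bias)
        v = self.f(self.v_hist + v_input + self.leak)      # V_SNN = f(V + V_input + V_leak)
        self.v_hist = v
        return v

neuron = CompositeNeuron(mode_config=ANN_MODE, bias=0.1)   # configured for one task
print(neuron.step([0.5, 0.2], [0.4, 0.6]))                 # one processing cycle
```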
  • the composite mode neuron information processing method provided by the invention provides a corresponding artificial neuron working mode or a pulsed neuron working mode according to a preset neuron working mode configuration parameter, and passes the corresponding neuron working mode. Receiving front-end neuron output information, and reading current neuron information, calculating current artificial neuron output information or current pulsed neuron output information, and outputting.
  • the composite mode neuron information processing method provided by the invention can configure the corresponding neuron working mode according to the requirements of a task, and only the neuron working mode configuration parameters need to be modified when switching between tasks in different neural network working modes; unlike neural network accelerator schemes that support only the artificial neural network mode, or neuromorphic schemes that support only the pulse neural network, the present invention can, on the same architecture, support both machine learning applications based on artificial neural networks and computational neuroscience applications based on pulsed neural networks, which enriches the types of information processed by a brain-like computing platform, reduces the cost of multi-task execution in different neural network working modes, and improves the efficiency of multi-task execution in different neural network working modes.
  • FIG. 2 is a schematic flowchart of a multi-mode neural network information processing method according to another embodiment; the method shown in FIG. 2 is a refinement of steps S300 to S500 of the method shown in FIG. 1 when the neuron working mode is configured as the artificial neuron working mode, and includes:
  • Step S100a receiving the membrane potential information output by the front end artificial neuron, and the connection weight index of the front end artificial neuron and the current artificial neuron.
  • connection weight index of the front end artificial neuron and the current artificial neuron is a weight index sent by the front end neuron together with the front end artificial neuron output information, and is used to indicate the extraction of the current neuron weight.
  • step S200a the current artificial neuron offset information is read.
  • the offset information of the artificial neuron is a membrane potential bias value.
  • Step S300a The connection weight of the front end artificial neuron and the current artificial neuron is read according to the connection weight index of the front end artificial neuron and the current artificial neuron.
  • The connection weight index of the front-end artificial neuron and the current artificial neuron is address information; according to the received connection weight index, the current neuron reads the connection weight of the front-end artificial neuron and the current artificial neuron, and this connection weight lets the output information of the front-end neuron participate in the calculation of the current neuron output information in a way that more accurately reflects its weight.
  • Step S400a according to the membrane potential information output by the front end artificial neuron, the connection weight of the front end artificial neuron and the current artificial neuron, and the current artificial neuron bias information, by a preset artificial neuron activation function , calculate the current artificial neuron output information.
  • V_ANN = f(V_input + V_bias), where f is the preset artificial neuron activation function, V_input is the accumulated input of the current cycle (the membrane potential information output by the front-end artificial neurons weighted by the corresponding connection weights), and V_bias is the current artificial neuron bias information.
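Steps S100a to S400a can be sketched as follows, assuming the connection weight index is simply an address into a local weight table; the data layout and names are illustrative assumptions, not the patent's storage format.

```python
# Sketch of steps S100a-S400a (hypothetical data layout): each front-end artificial neuron
# sends its membrane potential together with a connection-weight index, which is used as an
# address to fetch the stored connection weight before applying the activation function.
def ann_neuron_step(front_events, weight_table, v_bias, f):
    # front_events: list of (membrane_potential, weight_index) pairs from front-end neurons
    v_input = 0.0
    for v_front, w_idx in front_events:   # S100a: receive output info and weight index
        w = weight_table[w_idx]           # S300a: read the connection weight by its index
        v_input += w * v_front            # accumulate the weighted input of this cycle
    return f(v_input + v_bias)            # S400a: V_ANN = f(V_input + V_bias)
```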
  • the composite mode neuron information processing system provided by the embodiment is configured as an artificial neuron working mode according to the requirements of the task, and can realize most of the current artificial neural network models and support the application of the brain computing platform in the field of machine learning.
  • After the current artificial neuron output information is calculated from the membrane potential information output by the front-end artificial neuron, the connection weight of the front-end artificial neuron and the current artificial neuron, and the current artificial neuron bias information through the preset artificial neuron activation function, the current artificial neuron output information is compared with the potential extremum; if the current artificial neuron output information is greater than or equal to the potential extremum, the potential extremum is updated to the current artificial neuron output information. This enables the neural network to complete the pooling step directly in the convolutional layer, without allocating physical space for pooling-layer neurons, greatly saving neuron resources and thereby reducing the construction cost of the neural network.
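The extremum-based pooling described above might look like the following sketch; representing the shared extremum register as a small dictionary is an assumption about the bookkeeping only.

```python
# Sketch of the in-layer max-pooling mechanism: each artificial neuron of a pooling region
# compares its output with a shared potential extremum and updates it when larger or equal.
def update_potential_extremum(extremum_state, neuron_id, v_out):
    # extremum_state: {"value": current potential extremum, "neuron": extremum neuron id}
    if v_out >= extremum_state["value"]:
        extremum_state["value"] = v_out        # potential extremum <- current output
        extremum_state["neuron"] = neuron_id   # extremum neuron identifier <- this neuron
    return extremum_state

region = {"value": float("-inf"), "neuron": None}
for nid, v in [(0, 0.3), (1, 0.9), (2, 0.5)]:  # outputs of one pooling region (toy values)
    region = update_potential_extremum(region, nid, v)
# region["value"] now holds the maximum, completing the pooling step in the convolutional layer.
```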
  • FIG. 3 is a schematic flowchart of a multi-mode neural network information processing method according to still another embodiment.
  • the multi-mode neural network information processing method shown in FIG. 3 is a refinement process of step S600 of FIG. 1, and includes:
  • Step S610a reading the first issuance enablement identifier, where the first issuance enablement identifier includes allowing data to be issued or not allowing data to be issued, when the first issuance enable identifier is allowed to issue data.
  • the first issuance enablement identifier is control information that is determined in the task to determine whether the final neuron output information is issued.
  • If the first issuance enable identifier does not allow data to be issued, the calculated artificial neuron output information is not allowed to be issued and the process ends; if it allows data to be issued, step S620a follows.
  • Step S620a the artificial neuron issuance data type parameter is read; the artificial neuron issuance data type includes one of: issuing the current artificial neuron output information, issuing the potential extremum, or issuing the extremum neuron identifier corresponding to the potential extremum.
  • The artificial neuron issuance data type parameter is read, and different data types may be selected according to subsequent calculation requirements; for example, the maximum pooling operation in a subsequent convolutional neural network requires output of the potential extremum.
  • Step S630a determining the final output information of the current artificial neuron according to the artificial neuron issuance data type parameter.
  • Step S640a outputting the final output information of the current artificial neuron.
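A sketch of the output selection in steps S610a to S640a follows; the string constants used for the release data types are placeholders for whatever encoding the configuration registers actually use.

```python
# Sketch of steps S610a-S640a: the first issuance-enable identifier gates output entirely,
# and the release data type selects which value is finally issued.
def ann_output(enable, data_type, v_out, extremum_state):
    if not enable:                  # S610a: data issuance not allowed -> process ends
        return None
    if data_type == "output":       # issue the current artificial neuron output information
        return v_out
    if data_type == "extremum":     # issue the potential extremum (max pooling)
        return extremum_state["value"]
    if data_type == "extremum_id":  # issue the extremum neuron identifier
        return extremum_state["neuron"]
    raise ValueError("unknown release data type")
```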
  • By setting the release enable identifier and the artificial neuron release data type parameter, the current artificial neuron output information is determined, making the output of the artificial neuron more controllable. The release enable flag can configure neurons that are not allowed to issue data and serve only as intermediate auxiliary computational neurons, which is necessary for some functions that require multiple neurons to work together. The release enable flag and the release data type can also work together to reduce the neuron output information of a set region to a single current neuron output information (the maximum value among the neuron output information of the set region), completing a direct maximum pooling operation.
  • FIG. 4 is a schematic flowchart of a multi-mode neural network information processing method according to still another embodiment; the method shown in FIG. 4 is a refinement of steps S300 to S500 of the method shown in FIG. 1 when the neuron working mode is configured as the pulsed neuron working mode, and includes:
  • Step S100b Receive pulse tip information output by the front end pulse neuron, and a connection weight index of the front end pulse neuron and the current pulse neuron.
  • connection weight index of the front-end pulse neuron and the current pulse neuron is a weight index sent by the front-end neuron together with the output information of the front-end pulse neuron, and is used to indicate the extraction of the current neuron weight.
  • the pulse tip information output by the front end pulse neuron is a pulse tip signal sent by the front end pulse neuron.
  • Step S200b reading the current time window width, the pulse tip information sequence in the current time window, the historical membrane potential information, and the membrane potential leakage information.
  • the pulse tip information sequence in the current time window refers to a sequence of information in which the pulse tip information received in a time step within a certain range in the current time window width is sequentially buffered in chronological order.
  • Step S300b reading the connection weight of the front-end pulse neuron and the current pulse neuron according to the connection weight index of the front-end pulse neuron and the current pulse neuron.
  • The connection weight index of the front-end pulse neuron and the current pulse neuron is address information; according to the received connection weight index, the current neuron reads the connection weight of the front-end pulse neuron and the current pulse neuron, and this connection weight lets the output information of the front-end neuron participate in the calculation of the current neuron output information in a way that more accurately reflects its weight and carries richer information.
  • Step S400b updating the pulse tip information sequence in the current time window according to the pulse tip information output by the front-end pulse neuron and the pulse tip information sequence in the current time window, and acquiring the pulse tip information update sequence in the current time window.
  • In each operation step of the pulse neuron, the pulse tip information sequence stores a new piece of pulse tip information and deletes the pulse tip information at the tail of the sequence, so the pulse tip sequence is updated once per step.
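Step S400b amounts to a fixed-length shift of the spike buffer once per operation step; the sketch below assumes the buffer stores one entry (spike or no spike) per time step of the window, which is one plausible reading of the description.

```python
from collections import deque

# Sketch of step S400b: per operation step the newest pulse-tip information is stored and
# the entry at the tail of the sequence is dropped, so the buffer always spans T_w steps.
T_W = 8                                              # current time window width (example)
window = deque([0] * T_W, maxlen=T_W)                # pulse-tip information sequence, newest first

def update_spike_window(window, spike_received):
    window.appendleft(1 if spike_received else 0)    # new entry in; maxlen evicts the tail
    return window
```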
  • Step S500b Calculate the front-end pulse neuron input information by using the attenuation function according to the current time window width and the pulse tip information update sequence in the current time window.
  • T_w is the time window width; τ_j is the time step, within the pulse tip information update sequence of the current time window, at which front-end neuron j issued a spike; K(Δt) is an attenuation function that decreases rapidly as Δt increases.
  • Step S600b calculating, according to the front-end pulse neuron input information, the connection weight of the front-end pulse neuron and the current pulsed neuron, the historical membrane potential information, and the membrane potential leakage information, by calculating a pulse neuron calculation model Current pulsed neuron output information.
  • the calculation of the input information of the front-end pulse neuron uses the following quantities:
  • W ij is the connection weight of the front-end pulse neuron j and the current pulse neuron i
  • T w is the time window width
  • ⁇ j is the front-end neuron j after the spike is issued in the current time window, at the current The time step within the time window of the pulse tip information update sequence.
  • K( ⁇ t) is an attenuation function that decreases rapidly as ⁇ t increases.
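The formula itself appears as an image in the original filing and is not reproduced in the extracted text; a form consistent with the variable definitions above (and with common time-window spiking-neuron models) would be, as an assumption:

```latex
V_{\mathrm{input}} = \sum_{j} W_{ij} \sum_{\tau_j \in [\,t - T_w,\; t\,]} K(t - \tau_j)
```

where the inner sum runs over the spike time steps recorded in the pulse tip information update sequence of the current time window.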
  • V_SNN = f(V + V_input + V_leak), where V is the historical membrane potential information stored in memory, V_input is the accumulated input of the current cycle (the front-end pulse neuron input information weighted as above), and V_leak is the membrane potential leakage information.
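Combining step S500b with step S600b, the following is a minimal Python sketch of one pulsed-neuron update; the exponential form of K(Δt) and all names are assumptions chosen only to match the stated property that K decreases rapidly as Δt increases.

```python
import math

# Sketch of steps S500b-S600b: attenuated, weighted accumulation of windowed spikes,
# followed by the membrane update V_SNN = f(V + V_input + V_leak).
def snn_neuron_step(windows, weights, v_hist, v_leak, t_w, f=lambda x: x):
    # windows[j][dt] == 1 if front-end neuron j spiked dt time steps ago (dt < t_w)
    k = lambda dt: math.exp(-dt)                     # assumed attenuation function K(dt)
    v_input = sum(
        w_ij * sum(k(dt) for dt, s in enumerate(list(win)[:t_w]) if s)
        for w_ij, win in zip(weights, windows)
    )
    return f(v_hist + v_input + v_leak)              # pulse neuron calculation model
```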
  • The pulse tip information sequence in the current time window is updated according to the pulse tip information output by the front-end pulse neuron and the existing pulse tip information sequence in the current time window, yielding the pulse tip information update sequence for the current time window; by calculating the front-end pulse neuron input information through the attenuation function according to the current time window width, the connection weight of the front-end pulse neuron and the current pulse neuron, and this update sequence, a spatiotemporal pulse neural network model with time depth can be supported; compared with neural network schemes having only one time depth, this greatly improves the spatiotemporal information coding ability of the pulse neural network and enriches its application space.
  • FIG. 5 is a schematic flowchart of a multi-mode neural network information processing method according to still another embodiment; the method shown in FIG. 5 is performed after all the steps shown in FIG. 4 and before step S600 of FIG. 1, and includes:
  • step S100c a threshold potential is acquired.
  • the acquiring a threshold potential includes: reading a random threshold mask potential, a threshold offset, and a random threshold; performing a bitwise AND operation on the random threshold and the random threshold mask potential to obtain a threshold random superposition amount; and determining the threshold potential according to the threshold random superposition amount and the threshold offset.
  • The pseudo-random number generator generates a random threshold V_rand; a bitwise AND of the random threshold and the preset random threshold mask potential V_mask produces the threshold random superposition amount, which is then added to the preset threshold offset V_th0 to produce the actual threshold potential V_th.
  • the initial seed of the pseudo random number generator is given by the configuration register V seed .
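The threshold construction of step S100c can be sketched directly from the description above; using Python's random module as the pseudo-random number generator, and the bit width, are implementation assumptions.

```python
import random

# Sketch of step S100c: V_th = V_th0 + (V_rand & V_mask), with the PRNG seeded by V_seed.
def threshold_potential(v_seed, v_mask, v_th0, bits=16):
    rng = random.Random(v_seed)      # initial seed from configuration register V_seed
    v_rand = rng.getrandbits(bits)   # random threshold V_rand
    superposition = v_rand & v_mask  # threshold random superposition amount
    # The description says the superposition may be positive or negative; interpreting the
    # masked value as a signed integer would be one way to achieve that (assumption).
    return v_th0 + superposition     # actual threshold potential V_th
```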
  • Step S200c Comparing the current pulse neuron output information with the threshold potential, and determining the issuance trigger flag information according to the comparison result, where the issuing trigger flag information includes: issuing a trigger or issuing a non-trigger.
  • When the issuance trigger flag information is issuance triggered, the process proceeds to step S300c; when the issuance trigger flag information is issuance not triggered, the process proceeds to step S400c.
  • The current pulse neuron output information is compared with the threshold potential, and the issuance trigger flag information is determined according to the comparison result; only when the current pulse neuron output information is greater than or equal to the threshold potential may the current pulse neuron output information be transmitted.
  • Step S300c resetting the refractory period timer, and updating the historical membrane potential information to a preset reset membrane potential information.
  • When the issuance trigger flag information is issuance triggered, the current pulse neuron output information may be sent, the refractory period timer is reset, and the historical membrane potential information is updated to a preset reset membrane potential; according to the configured reset type Reset_type, the membrane potential is selectively reset to the current membrane potential, the difference between the current membrane potential and the threshold potential, or a fixed reset voltage.
  • Step S400c reading the current time step of the refractory period width and the refractory period timer.
  • When issuance is not triggered, the current pulse neuron output information is not sent, and it is further determined whether the current time is within the refractory period.
  • The refractory period timer counts in time steps; the current time is within the refractory period as long as the refractory period width is greater than the value of the refractory period timer.
  • Step S500c Determine whether the current time is within the refractory period according to the refractory period width and the current time step of the refractory period timer. If the current time is within the refractory period, step S600c is followed, and if the current time is outside the refractory period, step S700c is followed.
  • By accumulating the refractory period timer over time steps, it can be determined whether the current time step is still within the refractory period.
  • Step S600c accumulating the refractory period timer for one time step, and not updating the historical membrane potential information.
  • The historical membrane potential information is the information that the pulsed neuron of the next time step needs to read; that is, during the refractory period, the pulsed neuron output information calculated this time does not participate in the calculation of the next time step.
  • Step S700c accumulating the refractory period timer for one time step, and updating the historical membrane potential information as the current pulse neuron output information.
  • the historical membrane potential information is updated to the current pulsed neuron output information to participate in the calculation of the next time step.
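A compact sketch of the issuance and refractory logic of steps S200c to S700c is given below; the state dictionary and the single reset value stand in for the configuration registers and the Reset_type options described above.

```python
# Sketch of steps S200c-S700c: compare the membrane potential with the threshold, fire and
# reset when it is reached, otherwise advance the refractory-period timer.
def fire_or_refract(state, v_snn, v_th, v_reset, refractory_width):
    # state: {"timer": refractory period timer, "v_hist": historical membrane potential}
    if v_snn >= v_th:                          # S200c: issuance triggered
        state["timer"] = 0                     # S300c: reset the refractory period timer
        state["v_hist"] = v_reset              # update history to the reset membrane potential
        return True                            # a pulse is issued
    if state["timer"] < refractory_width:      # S500c: still within the refractory period
        state["timer"] += 1                    # S600c: accumulate timer, keep old history
    else:
        state["timer"] += 1                    # S700c: accumulate timer and update history
        state["v_hist"] = v_snn                #        to the current pulsed-neuron output
    return False
```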
  • The threshold potential is determined by reading the random threshold mask potential and the threshold offset and by receiving the configuration value given by the configuration register, so that the neuron issues pulse tip information with a degree of randomness: regardless of whether the membrane potential exceeds the fixed threshold bias, the presence of a positive or negative threshold random superposition amount means the neuron cell body may still issue pulses, which improves the computational power and information processing capability of the pulse neural network model.
  • FIG. 6 is a schematic flowchart of a multi-mode neural network information processing method according to still another embodiment; the method shown in FIG. 6 is a refinement of step S600 of the method shown in FIG. 1 when the neuron working mode is configured as the pulsed neuron working mode, and includes:
  • Step S610b the second issuance enable identifier is read; the second issuance enable identifier includes allowing data to be issued or not allowing data to be issued.
  • the second issuance enablement identifier is control information that determines whether the final neuron output information is issued.
  • If the second issuance enable identifier does not allow data to be issued, the calculated pulse neuron output information is not allowed to be issued and the process ends; if it allows data to be issued, step S620b follows.
  • Step S620b the issuance trigger flag information is read; when the second issuance enable identifier allows data to be issued and the issuance trigger flag information is issuance triggered, step S630b follows.
  • Step S630b outputting the current pulse neuron output information.
  • By setting the release enable identifier and the issuance trigger flag, the current pulse neuron output information is determined, so that the output of the pulse neuron is more controllable; the release enable flag can configure neurons that are not allowed to issue data and serve only as intermediate auxiliary computational neurons, which is necessary for some functions that require multiple neurons to work together.
  • an embodiment of the present invention further provides a computer device, including a memory, a processor, and a computer program stored on the memory and operable on the processor, wherein the processor executes the computer program to implement the steps of the method mentioned in the above embodiments.
  • Non-volatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM) or external cache memory.
  • RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
  • FIG. 7 is a schematic structural diagram of a multi-mode neural network information processing system according to an embodiment.
  • the multi-mode neural network information processing system shown in FIG. 7 includes:
  • the neuron working mode reading module 100 is configured to read a neuron working mode configuration parameter, where the neuron working mode configuration parameter comprises an artificial neuron working mode configuration parameter or a pulsed neuron working mode configuration parameter.
  • the neuron working mode configuration module 200 is configured to configure a current neuron working mode according to the neuron working mode configuration parameter, where the current neuron working mode includes an artificial neuron working mode or a pulsed neuron working mode.
  • the front-end neuron output information receiving module 300 is configured to receive front-end neuron output information, where the front-end neuron output information includes front-end artificial neuron output information or front-end pulse neuron output information.
  • the current neuron information reading module 400 is configured to read current neuron information, where the current neuron information includes current artificial neuron information or current pulsed neuron information.
  • the current neuron output information calculation module 500 is configured to calculate current neuron output information according to the front-end neuron output information and the current neuron information, including calculating current artificial neuron output information according to the front-end artificial neuron output information and the current artificial neuron information, or calculating current pulse neuron output information according to the front-end pulse neuron output information and the current pulsed neuron information.
  • the current neuron output information output module 600 is configured to output the current artificial neuron output information or the current pulse neuron output information.
  • the composite mode neuron information processing system provided by the invention provides a corresponding artificial neuron working mode or a pulsed neuron working mode according to a preset neuron working mode configuration parameter, and passes the corresponding neuron working mode. Receiving front-end neuron output information, and reading current neuron information, calculating current artificial neuron output information or current pulsed neuron output information, and outputting.
  • the composite mode neuron information processing system provided by the invention can configure the corresponding neuron working mode according to the requirements of a task, and only the neuron working mode configuration parameters need to be modified when switching between tasks in different neural network working modes; unlike neural network accelerator schemes that support only the artificial neural network mode, or neuromorphic schemes that support only the pulse neural network, the present invention can, on the same architecture, support both machine learning applications based on artificial neural networks and computational neuroscience applications based on pulsed neural networks, which enriches the types of information processed by a brain-like computing platform, reduces the cost of multi-task execution in different neural network working modes, and improves the efficiency of multi-task execution in different neural network working modes.
  • FIG. 8 is a schematic structural diagram of a multi-mode neural network information processing system according to another embodiment.
  • the multi-mode neural network information processing system shown in FIG. 8 includes:
  • the front-end artificial neuron output information receiving unit 100a is configured to receive the membrane potential information output by the front-end artificial neuron, and the connection weight index of the front-end artificial neuron and the current artificial neuron.
  • the current artificial neuron information reading unit 200a is configured to read current artificial neuron offset information.
  • the artificial neuron connection weight reading unit 300a is configured to read the connection weight of the front-end artificial neuron and the current artificial neuron according to the connection weight index of the front-end artificial neuron and the current artificial neuron.
  • the current artificial neuron output information calculation unit 400a is configured to: according to the membrane potential information output by the front end artificial neuron, the connection weight of the front end artificial neuron and the current artificial neuron, and the current artificial neuron offset information, The current artificial neuron output information is calculated by a preset artificial neuron activation function.
  • the composite mode neuron information processing system provided by the embodiment is configured as an artificial neuron working mode according to the requirements of the task, and can realize most of the current artificial neural network models and support the application of the brain computing platform in the field of machine learning.
  • FIG. 9 is a schematic structural diagram of a multi-mode neural network information processing system according to still another embodiment.
  • the multi-mode neural network information processing system shown in FIG. 9 includes:
  • a potential extremum reading unit 500a for reading a potential extremum and an extremum neuron identifier corresponding to the potential extremum;
  • the potential extreme value comparison unit 600a is configured to compare the current artificial neuron output information with the potential extreme value, and if the current artificial neuron output information is greater than or equal to the potential extreme value,
  • the potential extremum updating unit 700a is configured to update the potential extremum to the current artificial neuron output information, and update the extremum neuron identifier to an identifier of the current artificial neuron.
  • the current artificial neuron output information is compared with the potential extremum, and if the current artificial neuron output information is greater than or equal to the potential extremum, the potential extremum is updated to the current artificial neuron output information, so that the neural network completes the pooling step directly in the convolutional layer, without allocating physical space for pooling-layer neurons, greatly saving neuron resources and thereby reducing the construction cost of the neural network.
  • FIG. 10 is a schematic structural diagram of a multi-mode neural network information processing system according to still another embodiment.
  • the multi-mode neural network information processing system shown in FIG. 10 includes:
  • the first issuance enable reading unit 800a is configured to read the first issuance enable identifier, where the first issuance enable identifier includes allowing data to be issued or not allowing data to be issued.
  • the artificial neuron issuance data type reading unit 900a is configured to read the artificial neuron release data type parameter; the artificial neuron issuance data type includes one of: issuing the current artificial neuron output information, issuing the potential extremum, or issuing the extremum neuron identifier corresponding to the potential extremum.
  • the artificial neuron issuance data type determining unit 910a is configured to determine the final output information of the current artificial neuron according to the artificial neuron issuing data type parameter.
  • the current artificial neuron output information output unit 920a is configured to output the final output information of the current artificial neuron.
  • By setting the release enable identifier and the artificial neuron release data type parameter, the current artificial neuron output information is determined, making the output of the artificial neuron more controllable. The release enable flag can configure neurons that are not allowed to issue data and serve only as intermediate auxiliary computational neurons, which is necessary for some functions that require multiple neurons to work together. The release enable flag and the release data type can also work together to reduce the neuron output information of a set region to a single current neuron output information (the maximum value among the neuron output information of the set region), completing a direct maximum pooling operation.
  • FIG. 11 is a schematic structural diagram of a multi-mode neural network information processing system according to still another embodiment.
  • the multi-mode neural network information processing system shown in FIG. 11 includes:
  • the front end pulse neuron output information receiving unit 100b is configured to receive pulse tip information output by the front end pulse neuron, and a connection weight index of the front end pulse neuron and the current pulse neuron.
  • the current pulse neuron information reading unit 200b is configured to read the current time window width, the pulse tip information sequence in the current time window, the historical membrane potential information, and the membrane potential leakage information.
  • the pulse neuron connection weight reading unit 300b is configured to read the connection weight of the front-end pulse neuron and the current pulse neuron according to the connection weight index of the front-end pulse neuron and the current pulse neuron.
  • the time-window pulse tip information sequence updating unit 400b is configured to update the current time window intra-pulse tip information sequence according to the pulse tip information output by the front-end pulse neuron and the current time-window pulse tip information sequence. Get the pulse tip information update sequence in the current time window.
  • the front-end pulse neuron input information calculation unit 500b is configured to calculate front-end pulse neuron input information by using an attenuation function according to the current time window width and the pulse tip information update sequence in the current time window.
  • the pulse neuron output information calculation unit 600b is configured to calculate the current pulsed neuron output information through a pulsed neuron calculation model according to the front-end pulse neuron input information, the connection weight of the front-end pulse neuron and the current pulse neuron, the historical membrane potential information, and the membrane potential leakage information.
  • The pulse tip information sequence in the current time window is updated according to the pulse tip information output by the front-end pulse neuron and the existing pulse tip information sequence in the current time window, yielding the pulse tip information update sequence for the current time window; by calculating the front-end pulse neuron input information through the attenuation function according to the current time window width, the connection weight of the front-end pulse neuron and the current pulse neuron, and this update sequence, a spatiotemporal pulse neural network model with time depth can be supported; compared with neural network schemes having only one time depth, this greatly improves the spatiotemporal information coding ability of the pulse neural network and enriches its application space.
  • FIG. 12 is a schematic structural diagram of a multi-mode neural network information processing system according to still another embodiment.
  • the multi-mode neural network information processing system shown in FIG. 12 includes:
  • a threshold potential obtaining unit 700b configured to acquire a threshold potential, comprising: a threshold information receiving subunit for reading a random threshold mask potential, a threshold offset, and a random threshold; a threshold random superposition amount obtaining subunit configured to perform a bitwise AND operation on the random threshold and the random threshold mask potential to obtain a threshold random superposition amount; and a threshold potential determination subunit configured to determine the threshold potential according to the threshold random superposition amount and the threshold offset.
  • the issue trigger determination unit 800b is configured to compare the current pulse neuron output information with the threshold potential, and determine the issue trigger flag information according to the comparison result, where the issue trigger flag information includes: issuing a trigger or issuing a non-trigger.
  • the triggering action unit 910b is configured to reset the refractory period timer and update the historical membrane potential information to a preset reset membrane potential information.
  • a non-trigger action unit 920b, comprising: a refractory period reading subunit for reading the refractory period width and the current time step of the refractory period timer; a refractory period judging subunit for determining, according to the refractory period width and the current time step of the refractory period timer, whether the current time is within the refractory period; a within-refractory-period subunit which, if the refractory period judging subunit determines that the current time is within the refractory period, accumulates the refractory period timer by one time step and does not update the historical membrane potential information; and an outside-refractory-period subunit which, if the refractory period judging subunit determines that the current time is not within the refractory period, accumulates the refractory period timer by one time step and updates the historical membrane potential information to the current pulsed neuron output information.
  • the second issuance enable reading unit 930b is configured to read the second issuance enable identifier, where the second issuance enable identifier includes allowing data to be issued or not allowing data to be issued; when the second issuance enable identifier allows data to be issued, the issuance trigger flag information reading unit is configured to read the issuance trigger flag information; and when the issuance trigger flag information is issuance triggered, the current pulse neuron output information output unit is configured to output the current pulsed neuron output information.
  • By setting the release enable identifier and the issuance trigger flag, the current pulse neuron output information is determined, so that the output of the pulse neuron is more controllable; the release enable flag can configure neurons that are not allowed to issue data and serve only as intermediate auxiliary computational neurons, which is necessary for some functions that require multiple neurons to work together.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Neurology (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Feedback Control In General (AREA)

Abstract

A compound-mode neuron information processing method and system, and a computer device. The method comprises: reading a neuron working mode configuration parameter, wherein the neuron working mode configuration parameter comprises an artificial neuron working mode configuration parameter or a pulse neuron working mode configuration parameter; configuring the current neuron working mode according to the neuron working mode configuration parameter; receiving front-end neuron output information; reading the current neuron information; calculating the current neuron output information according to the front-end neuron output information and the current neuron information; and outputting the current artificial neuron output information or the current pulse neuron output information. The method simultaneously supports an artificial neural network and a pulse neural network under the same architecture, enriches the information processing type of a brain-like computing platform, reduces the cost of multi-task execution in different neural network working modes, and improves the multi-task execution efficiency in different neural network working modes.

Description

复合模式神经元信息处理方法、系统及计算机设备Composite mode neuron information processing method, system and computer equipment
相关申请Related application
本申请要求2017年01月20日申请的，申请号为201710041892.3，名称为"复合模式神经元信息处理方法和系统"的中国专利申请的优先权，在此将其全文引入作为参考。This application claims priority to Chinese Patent Application No. 201710041892.3, filed on January 20, 2017 and entitled "Composite Mode Neuron Information Processing Method and System", the entire content of which is incorporated herein by reference.
技术领域Technical field
本发明涉及人工神经网络技术领域,特别是涉及复合模式神经元信息处理方法、系统和计算机设备。The present invention relates to the field of artificial neural network technology, and in particular to a composite mode neuron information processing method, system and computer device.
背景技术Background technique
如今的人工神经网络研究绝大多数仍是在冯·诺依曼计算机软件并搭配高性能GPGPU(General Purpose Graphic Processing Units通用图形处理单元)平台中实现的，整个过程的硬件开销、能耗和信息处理速度都不容乐观。为此，近几年神经形态计算领域迅猛发展，即采用硬件电路直接构建神经网络从而模拟大脑的功能，试图实现大规模并行、低能耗、可支撑复杂模式学习的计算平台。Most artificial neural network research today is still carried out in von Neumann computer software paired with high-performance GPGPU (General Purpose Graphics Processing Unit) platforms, and the hardware overhead, energy consumption, and information processing speed of the whole process are far from satisfactory. For this reason, the field of neuromorphic computing has developed rapidly in recent years: hardware circuits are used to directly construct neural networks that simulate the function of the brain, in an attempt to realize massively parallel, low-energy computing platforms that can support complex pattern learning.
然而,传统的神经形态系统的神经元局限于支持单一的信息处理和传输模式:人工神经网络或脉冲神经网络,从而导致单个任务的神经网络构建成本高,多个任务的神经网络执行效率低。However, the neurons of the traditional neuromorphic system are limited to support a single information processing and transmission mode: artificial neural network or pulsed neural network, resulting in a high cost of neural network construction for a single task, and low efficiency of neural networks for multiple tasks.
Summary of the invention
In view of this, it is necessary to provide a compound-mode neuron information processing method and system to address the problem that neural networks with different working modes must be built for tasks that run in different neural network working modes. The method comprises:
reading a neuron working mode configuration parameter, the neuron working mode configuration parameter comprising an artificial neuron working mode configuration parameter or a pulsed neuron working mode configuration parameter;
configuring a current neuron working mode according to the neuron working mode configuration parameter, the current neuron working mode comprising an artificial neuron working mode or a pulsed neuron working mode;
receiving front-end neuron output information, the front-end neuron output information comprising front-end artificial neuron output information or front-end pulsed neuron output information;
reading current neuron information, the current neuron information comprising current artificial neuron information or current pulsed neuron information;
calculating current neuron output information according to the front-end neuron output information and the current neuron information, which comprises calculating current artificial neuron output information according to the front-end artificial neuron output information and the current artificial neuron information, or calculating current pulsed neuron output information according to the front-end pulsed neuron output information and the current pulsed neuron information; and
outputting the current artificial neuron output information or the current pulsed neuron output information.
In one embodiment,
the front-end artificial neuron output information comprises: membrane potential information output by the front-end artificial neuron, and a connection weight index of the front-end artificial neuron and the current artificial neuron;
the current artificial neuron information comprises: current artificial neuron bias information; and
calculating the current artificial neuron output information according to the membrane potential information output by the front-end artificial neuron and the current artificial neuron information then comprises:
reading a connection weight of the front-end artificial neuron and the current artificial neuron according to the connection weight index of the front-end artificial neuron and the current artificial neuron; and
calculating the current artificial neuron output information through a preset artificial neuron activation function according to the membrane potential information output by the front-end artificial neuron, the connection weight of the front-end artificial neuron and the current artificial neuron, and the current artificial neuron bias information.
In one embodiment,
after the step of calculating the current artificial neuron output information through the preset artificial neuron activation function according to the membrane potential information output by the front-end artificial neuron, the connection weight of the front-end artificial neuron and the current artificial neuron, and the current artificial neuron bias information, the method further comprises:
reading a potential extremum and an extremum neuron identifier corresponding to the potential extremum; and
comparing the current artificial neuron output information with the potential extremum, and, if the current artificial neuron output information is greater than or equal to the potential extremum,
updating the potential extremum to the current artificial neuron output information and updating the extremum neuron identifier to the identifier of the current artificial neuron.
In one embodiment,
outputting the current artificial neuron output information comprises:
reading a first issuance enable flag, the first issuance enable flag indicating either that data is allowed to be issued or that data is not allowed to be issued;
when the first issuance enable flag indicates that data is allowed to be issued, reading an artificial neuron issuance data type parameter, the artificial neuron issuance data type comprising one of: issuing the current artificial neuron output information, issuing the potential extremum, and issuing the extremum neuron identifier corresponding to the potential extremum;
determining final output information of the current artificial neuron according to the artificial neuron issuance data type parameter; and
outputting the final output information of the current artificial neuron.
In one embodiment, the front-end pulsed neuron output information comprises: pulse tip information output by the front-end pulsed neuron, and a connection weight index of the front-end pulsed neuron and the current pulsed neuron;
the current pulsed neuron information comprises: a current time window width, a pulse tip information sequence within the current time window, historical membrane potential information, and membrane potential leakage information; and
calculating the current pulsed neuron output information according to the front-end pulsed neuron output information and the current pulsed neuron information then comprises:
reading a connection weight of the front-end pulsed neuron and the current pulsed neuron according to the connection weight index of the front-end pulsed neuron and the current pulsed neuron;
updating the pulse tip information sequence within the current time window according to the pulse tip information output by the front-end pulsed neuron and the pulse tip information sequence within the current time window, to obtain a pulse tip information update sequence within the current time window;
calculating front-end pulsed neuron input information through an attenuation function according to the current time window width and the pulse tip information update sequence within the current time window; and
calculating the current pulsed neuron output information through a pulsed neuron computation model according to the front-end pulsed neuron input information, the connection weight of the front-end pulsed neuron and the current pulsed neuron, the historical membrane potential information, and the membrane potential leakage information.
In one embodiment, after the step of calculating the current pulsed neuron output information through the pulsed neuron computation model according to the front-end pulsed neuron input information, the connection weight of the front-end pulsed neuron and the current pulsed neuron, the historical membrane potential information, and the membrane potential leakage information, and before the step of outputting the current pulsed neuron output information, the method further comprises:
obtaining a threshold potential;
comparing the current pulsed neuron output information with the threshold potential and determining issuance trigger flag information according to the comparison result, the issuance trigger flag information comprising: issuance triggered or issuance not triggered; and, when the issuance trigger flag information is issuance triggered,
resetting a refractory period timer and updating the historical membrane potential information to preset reset membrane potential information.
In one embodiment, after the step of comparing the current pulsed neuron output information with the threshold potential and determining the issuance trigger flag information according to the comparison result, the method further comprises:
when the issuance trigger flag information is issuance not triggered, reading a refractory period width and a current time step of the refractory period timer;
judging whether the current time is within the refractory period according to the refractory period width and the current time step of the refractory period timer; if the current time is within the refractory period, incrementing the refractory period timer by one time step without updating the historical membrane potential information; and
if the current time is not within the refractory period, incrementing the refractory period timer by one time step and updating the historical membrane potential information to the current pulsed neuron output information.
In one embodiment, obtaining the threshold potential comprises:
reading a random threshold mask potential, a threshold bias, and a random threshold;
performing a bitwise AND operation on the random threshold and the random threshold mask potential to obtain a threshold random superposition amount; and
determining the threshold potential according to the threshold random superposition amount and the threshold bias.
In one embodiment, outputting the current pulsed neuron output information comprises:
reading a second issuance enable flag, the second issuance enable flag indicating either that data is allowed to be issued or that data is not allowed to be issued; when the second issuance enable flag indicates that data is allowed to be issued,
reading the issuance trigger flag information; and, when the issuance trigger flag information is issuance triggered,
outputting the current pulsed neuron output information.
In one embodiment, the artificial neuron output information is compared with the potential extremum, and if the artificial neuron output information is greater than or equal to the potential extremum, the potential extremum is updated to the artificial neuron output information. This allows the neural network to complete the pooling step directly in the convolutional layer without allocating physical space for pooling-layer neurons, which greatly saves neuron resources and thereby reduces the construction cost of the neural network.
In one embodiment, the current artificial neuron output information is determined by setting the issuance enable flag and the artificial neuron issuance data type parameter, which makes the output of the artificial neuron more controllable. The issuance enable flag can be configured so that some neurons are not allowed to issue data and serve only as intermediate auxiliary computation neurons, which is necessary for functions that require the cooperation of multiple neurons. In addition, the issuance enable flag and the issuance type can work together to reduce the neuron output information of a set region to a single current neuron output information (that output being the maximum among the output information of all neurons in the set region), completing a direct max-pooling operation.
In one embodiment, the pulse tip information sequence within the current time window is updated according to the pulse tip information output by the front-end pulsed neuron and the pulse tip information sequence within the current time window, to obtain a pulse tip information update sequence within the current time window, and the front-end pulsed neuron input information is calculated through an attenuation function according to the current time window width and the connection weight of the front-end pulsed neuron and the current pulsed neuron. This supports spatiotemporal pulsed neural network models with temporal depth; compared with neural network solutions whose temporal depth is only one, it can greatly improve the spatiotemporal information coding capability of the pulsed neural network and enrich its application space.
In one embodiment, the threshold potential is determined by reading the random threshold mask potential and the threshold bias and receiving a configuration value given by a configuration register, so that the issuance of pulse tip information by the neuron has a certain probability of randomness. Whether or not the membrane potential exceeds the fixed threshold bias, the existence of a threshold random superposition amount that can be positive or negative means that the neuron cell body may still issue a pulse, which improves the computational power and information processing capability of the pulsed neural network model.
The present invention also provides a computer device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method of any of the above embodiments.
The present invention also provides a compound-mode neuron information processing system, comprising:
a neuron working mode reading module, configured to read a neuron working mode configuration parameter, the neuron working mode configuration parameter comprising an artificial neuron working mode configuration parameter or a pulsed neuron working mode configuration parameter;
a neuron working mode configuration module, configured to configure a current neuron working mode according to the neuron working mode configuration parameter, the current neuron working mode comprising an artificial neuron working mode or a pulsed neuron working mode;
a front-end neuron output information receiving module, configured to receive front-end neuron output information, the front-end neuron output information comprising front-end artificial neuron output information or front-end pulsed neuron output information;
a current neuron information reading module, configured to read current neuron information, the current neuron information comprising current artificial neuron information or current pulsed neuron information;
a current neuron output information calculation module, configured to calculate current neuron output information according to the front-end neuron output information and the current neuron information, including calculating current artificial neuron output information according to the front-end artificial neuron output information and the current artificial neuron information, or calculating current pulsed neuron output information according to the front-end pulsed neuron output information and the current pulsed neuron information; and
a current neuron output information output module, configured to output the current artificial neuron output information or the current pulsed neuron output information.
In the compound-mode neuron information processing method, computer device, and system provided by the present invention, a corresponding artificial neuron working mode or pulsed neuron working mode is configured according to a preset neuron working mode configuration parameter, and in that working mode the front-end neuron output information is received, the current neuron information is read, the current artificial neuron output information or current pulsed neuron output information is calculated, and the result is output. The compound-mode neuron information processing system provided by the present invention can configure the corresponding neuron working mode according to the requirements of a task; when switching between tasks that run in different neural network working modes, only the neuron working mode configuration parameter needs to be modified. Unlike neural network accelerator solutions that support only the artificial neural network mode, and unlike neuromorphic solutions that support only the pulsed neural network, the present invention can, under the same architecture, simultaneously support machine learning applications based on the artificial neural network mode and computational neuroscience applications based on the pulsed neural network, enriching the types of information a brain-like computing platform can process, reducing the cost of multi-task execution in different neural network working modes, and improving the execution efficiency of multiple tasks in different neural network working modes.
Description of the drawings
In order to clearly explain the technical solutions of the present application, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of a multi-mode neural network information processing method according to an embodiment;
FIG. 2 is a schematic flowchart of a multi-mode neural network information processing method according to another embodiment;
FIG. 3 is a schematic flowchart of a multi-mode neural network information processing method according to yet another embodiment;
FIG. 4 is a schematic flowchart of a multi-mode neural network information processing method according to still another embodiment;
FIG. 5 is a schematic flowchart of a multi-mode neural network information processing method according to still another embodiment;
FIG. 6 is a schematic flowchart of a multi-mode neural network information processing method according to still another embodiment;
FIG. 7 is a schematic structural diagram of a multi-mode neural network information processing system according to an embodiment;
FIG. 8 is a schematic structural diagram of a multi-mode neural network information processing system according to another embodiment;
FIG. 9 is a schematic structural diagram of a multi-mode neural network information processing system according to yet another embodiment;
FIG. 10 is a schematic structural diagram of a multi-mode neural network information processing system according to still another embodiment;
FIG. 11 is a schematic structural diagram of a multi-mode neural network information processing system according to still another embodiment;
FIG. 12 is a schematic structural diagram of a multi-mode neural network information processing system according to still another embodiment.
Detailed description
In order to make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are intended only to explain the present invention and are not intended to limit it.
FIG. 1 is a schematic flowchart of a multi-mode neural network information processing method according to an embodiment. The multi-mode neural network information processing method shown in FIG. 1 comprises:
Step S100: reading a neuron working mode configuration parameter, the neuron working mode configuration parameter comprising an artificial neuron working mode configuration parameter or a pulsed neuron working mode configuration parameter.
Specifically, the present invention provides a multi-mode, multi-functional compound neuromorphic cell body unit. The unit is based on an all-digital circuit and integrates two information processing and transmission modes: the artificial neural network (ANN) and the pulsed neural network (Spiking Neural Network, SNN). According to the processing requirements of different tasks, the neuron cell body unit can be set to the ANN or the SNN working mode.
The neuron working mode configuration parameter is simply the working mode of the neural network that has been determined for a given task.
Step S200: configuring a current neuron working mode according to the neuron working mode configuration parameter, the current neuron working mode comprising an artificial neuron working mode or a pulsed neuron working mode.
Specifically, according to the read neuron working mode configuration parameter, the neuron cell body unit is set to the ANN or the SNN working mode.
Step S300: receiving front-end neuron output information, the front-end neuron output information comprising front-end artificial neuron output information or front-end pulsed neuron output information.
Specifically, different front-end neuron output information is received according to the configured neuron working mode: if the artificial neuron working mode is configured, front-end artificial neuron output information is received; if the pulsed neuron working mode is configured, front-end pulsed neuron output information is received.
Step S400: reading current neuron information, the current neuron information comprising current artificial neuron information or current pulsed neuron information.
Specifically, different current neuron information is read according to the configured neuron working mode: if the artificial neuron working mode is configured, current artificial neuron information is read; if the pulsed neuron working mode is configured, current pulsed neuron information is read.
Step S500: calculating current neuron output information according to the front-end neuron output information and the current neuron information, including calculating current artificial neuron output information according to the front-end artificial neuron output information and the current artificial neuron information, or calculating current pulsed neuron output information according to the front-end pulsed neuron output information and the current pulsed neuron information.
Specifically, according to the configured neuron working mode, the computation process of an artificial neuron or of a pulsed neuron is executed, yielding the current artificial neuron output information or the current pulsed neuron output information.
Step S600: outputting the current artificial neuron output information or the current pulsed neuron output information.
Specifically, again according to the configured neuron working mode, when working in the artificial neuron mode the current artificial neuron output information is output in the output manner of an artificial neural network, and when working in the pulsed neuron mode the current pulsed neuron output information is output in the output manner of a pulsed neural network.
In the compound-mode neuron information processing method provided by the present invention, a corresponding artificial neuron working mode or pulsed neuron working mode is configured according to a preset neuron working mode configuration parameter, and in that working mode the front-end neuron output information is received, the current neuron information is read, the current artificial neuron output information or current pulsed neuron output information is calculated, and the result is output. The method can configure the corresponding neuron working mode according to the requirements of a task; when switching between tasks that run in different neural network working modes, only the neuron working mode configuration parameter needs to be modified. Unlike neural network accelerator solutions that support only the artificial neural network mode, and unlike neuromorphic solutions that support only the pulsed neural network, the present invention can, under the same architecture, simultaneously support machine learning applications based on the artificial neural network mode and computational neuroscience applications based on the pulsed neural network, enriching the types of information a brain-like computing platform can process, reducing the cost of multi-task execution in different neural network working modes, and improving the execution efficiency of multiple tasks in different neural network working modes.
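To make the mode dispatch of steps S100 to S600 concrete, the following is a minimal Python sketch; the function and key names (compound_neuron_step, ann_step, snn_step, the dictionary layout) are illustrative assumptions and not the implementation described in the patent.

```python
def compound_neuron_step(config, frontend_outputs, neuron_state, ann_step, snn_step):
    """Dispatch one processing step to ANN or SNN computation based on the configured mode."""
    mode = config["working_mode"]            # S100/S200: read and apply the mode parameter
    if mode == "ANN":
        inputs = frontend_outputs["ann"]     # S300: front-end artificial neuron outputs
        state = neuron_state["ann"]          # S400: current artificial neuron information
        output = ann_step(inputs, state)     # S500: weighted sum plus activation
    elif mode == "SNN":
        inputs = frontend_outputs["snn"]     # S300: front-end pulse tip (spike) information
        state = neuron_state["snn"]          # S400: time window, history, leakage, etc.
        output = snn_step(inputs, state)     # S500: leaky integration with time-window decay
    else:
        raise ValueError(f"unknown working mode: {mode}")
    return output                            # S600: output in the mode's own format
```

Switching a task between the two network types then amounts to changing config["working_mode"], which mirrors the point that only the working mode configuration parameter needs to be modified.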
FIG. 2 is a schematic flowchart of a multi-mode neural network information processing method according to another embodiment. The method shown in FIG. 2 details steps S300 to S500 of the method shown in FIG. 1 for the case where the neuron working mode configuration parameter is an artificial neuron working mode configuration parameter, and comprises:
Step S100a: receiving membrane potential information output by a front-end artificial neuron, and a connection weight index of the front-end artificial neuron and the current artificial neuron.
Specifically, the connection weight index of the front-end artificial neuron and the current artificial neuron is a weight index sent by the front-end neuron together with the front-end artificial neuron output information, and is used to indicate which connection weight the current neuron should fetch.
Step S200a: reading current artificial neuron bias information.
Specifically, the bias information of the artificial neuron is a membrane potential bias value.
Step S300a: reading a connection weight of the front-end artificial neuron and the current artificial neuron according to the connection weight index of the front-end artificial neuron and the current artificial neuron.
Specifically, the connection weight index of the front-end artificial neuron and the current artificial neuron is a piece of address information. According to the received connection weight index, the current neuron reads the connection weight of the front-end artificial neuron and the current artificial neuron from the memory inside the current neuron. With this connection weight information, the output information of the front-end neuron can be weighted more accurately when it participates in the calculation of the current neuron output information.
Step S400a: calculating current artificial neuron output information through a preset artificial neuron activation function according to the membrane potential information output by the front-end artificial neuron, the connection weight of the front-end artificial neuron and the current artificial neuron, and the current artificial neuron bias information.
Specifically,

Y_i = f( Σ_j W_ji · X_j + V_bias )

where Y_i is the membrane potential information output by the current neuron, X_j is the output information of front-end neuron j, W_ji is the connection weight of front-end artificial neuron j and current artificial neuron i, V_bias is the current artificial neuron bias, and f(·) is the preset artificial neuron activation function. Commonly used activation functions include the ReLU function

f(x) = max(0, x),

the Sigmoid function

f(x) = 1 / (1 + e^(-x)),

the tanh function

f(x) = (e^x - e^(-x)) / (e^x + e^(-x)),

and so on. When applied to chip design, the basic model at the cell body can be simplified to V_ANN = f(V_input + V_bias), where V_input is the input accumulated in the current cycle, equivalent to the term Σ_j W_ji · X_j above.
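For illustration, the following Python sketch evaluates the simplified cell-body model V_ANN = f(V_input + V_bias) with ReLU chosen as the activation function f; the function name and argument layout are assumptions made for the example only.

```python
def ann_cell_body(frontend_potentials, weights, v_bias):
    """ANN-mode cell body: Y = f(sum_j W_j * X_j + V_bias), with ReLU as f."""
    v_input = sum(w * x for w, x in zip(weights, frontend_potentials))  # accumulated input
    return max(0.0, v_input + v_bias)                                   # ReLU activation


# Example: two front-end membrane potentials and the weights fetched via the weight index.
y = ann_cell_body([0.5, -1.2], weights=[0.8, 0.3], v_bias=0.1)
```

Swapping ReLU for Sigmoid or tanh only changes the last line of the function, which matches the point that f(·) is a preset, configurable activation function.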
The compound-mode neuron information processing system provided by this embodiment, when configured into the artificial neuron working mode according to the requirements of a task, can implement most of the artificial neural network models in use today and thus supports the application of a brain-like computing platform in the field of machine learning.
In one embodiment, after the step of calculating the artificial neuron output information through the preset artificial neuron activation function according to the membrane potential information output by the front-end artificial neuron, the connection weight of the front-end artificial neuron and the current artificial neuron, and the current artificial neuron bias information, the method further comprises:
reading a potential extremum and an extremum neuron identifier corresponding to the potential extremum; comparing the current artificial neuron output information with the potential extremum; and, if the current artificial neuron output information is greater than or equal to the potential extremum, updating the potential extremum to the current artificial neuron output information and updating the extremum neuron identifier to the identifier of the current artificial neuron.
Specifically, in ANN mode each cell body compares the current artificial neuron output information, i.e. the current membrane potential V_ANN, with the stored potential extremum, i.e. the lateral maximum potential value V_max across cell bodies. If V_ANN ≥ V_max, the lateral maximum potential value across cell bodies is updated to the current membrane potential V_ANN (V_max = V_ANN), and the maximum-value cell body number N_max is updated to the identification number of the current cell body.
In this embodiment, the current artificial neuron output information is compared with the potential extremum, and if the current artificial neuron output information is greater than or equal to the potential extremum, the potential extremum is updated to the current artificial neuron output information. This allows the neural network to complete the pooling step directly in the convolutional layer, without allocating physical space for pooling-layer neurons, which greatly saves neuron resources and thereby reduces the construction cost of the neural network.
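A minimal sketch of this in-place max-pooling bookkeeping is given below, assuming the extremum and its neuron identifier are kept in a small record shared by the cell bodies of one pooling region; the names and the dictionary layout are illustrative, not the patent's storage format.

```python
def update_extremum(v_ann, neuron_id, extremum):
    """Maintain the lateral maximum potential V_max and its cell body identifier N_max."""
    if v_ann >= extremum["v_max"]:
        extremum["v_max"] = v_ann       # V_max <- V_ANN
        extremum["n_max"] = neuron_id   # N_max <- identifier of the current cell body
    return extremum


# Example: three convolutional-layer neurons of one pooling region update the same record.
shared = {"v_max": float("-inf"), "n_max": None}
for nid, v in [(0, 0.2), (1, 0.7), (2, 0.4)]:
    update_extremum(v, nid, shared)
# shared now holds the max-pooled value 0.7 and its neuron identifier 1.
```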
FIG. 3 is a schematic flowchart of a multi-mode neural network information processing method according to yet another embodiment. The method shown in FIG. 3 details step S600 of FIG. 1 and comprises:
Step S610a: reading a first issuance enable flag, the first issuance enable flag indicating either that data is allowed to be issued or that data is not allowed to be issued.
Specifically, the first issuance enable flag is control information, set in the task, that determines whether the final neuron output information is issued. When the first issuance enable flag indicates that data is not allowed to be issued, the calculated artificial neuron output information is not issued and the flow ends. When the first issuance enable flag indicates that data is allowed to be issued, the flow proceeds to step S620a.
Step S620a: reading an artificial neuron issuance data type parameter, the artificial neuron issuance data type comprising one of: issuing the current artificial neuron output information, issuing the potential extremum, and issuing the extremum neuron identifier corresponding to the potential extremum.
Specifically, when the first issuance enable flag indicates that data is allowed to be issued, the artificial neuron issuance data type parameter is read, and a different data type can be selected for issuance according to subsequent computation requirements; for example, to support a subsequent max-pooling operation in a convolutional neural network, the potential extremum needs to be output.
Step S630a: determining final output information of the current artificial neuron according to the artificial neuron issuance data type parameter.
Step S640a: outputting the final output information of the current artificial neuron.
In this embodiment, the current artificial neuron output information is determined by setting the issuance enable flag and the artificial neuron issuance data type parameter, which makes the output of the artificial neuron more controllable. The issuance enable flag can be configured so that some neurons are not allowed to issue data and serve only as intermediate auxiliary computation neurons, which is necessary for functions that require the cooperation of multiple neurons. In addition, the issuance enable flag and the issuance type can work together to reduce the neuron output information of a set region to a single current neuron output information (that output being the maximum among the output information of all neurons in the set region), completing a direct max-pooling operation.
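The output selection of steps S610a to S640a can be sketched as follows; the enumeration strings and the use of None for "no issuance" are assumptions chosen only to keep the example readable.

```python
def ann_output(enable, data_type, v_ann, extremum):
    """Select the final ANN output under the first issuance enable flag and data type."""
    if not enable:                       # S610a: data not allowed to be issued
        return None
    if data_type == "current_output":    # S620a/S630a: issue the current output information
        return v_ann
    if data_type == "extremum":          # issue the potential extremum (max-pooled value)
        return extremum["v_max"]
    if data_type == "extremum_id":       # issue the identifier of the extremum neuron
        return extremum["n_max"]
    raise ValueError(f"unknown issuance data type: {data_type}")
```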
FIG. 4 is a schematic flowchart of a multi-mode neural network information processing method according to still another embodiment. The method shown in FIG. 4 details steps S300 to S500 of the method shown in FIG. 1 for the case where the neuron working mode configuration parameter is a pulsed neuron working mode configuration parameter, and comprises:
Step S100b: receiving pulse tip information output by a front-end pulsed neuron, and a connection weight index of the front-end pulsed neuron and the current pulsed neuron.
Specifically, the connection weight index of the front-end pulsed neuron and the current pulsed neuron is a weight index sent by the front-end neuron together with the front-end pulsed neuron output information, and is used to indicate which connection weight the current neuron should fetch. The pulse tip information output by the front-end pulsed neuron is the spike signal sent by the front-end pulsed neuron.
Step S200b: reading a current time window width, a pulse tip information sequence within the current time window, historical membrane potential information, and membrane potential leakage information.
Specifically, the pulse tip information sequence within the current time window is an information sequence in which the pulse tip information received at the time steps of a certain past range within the current time window width is buffered in chronological order.
Step S300b: reading a connection weight of the front-end pulsed neuron and the current pulsed neuron according to the connection weight index of the front-end pulsed neuron and the current pulsed neuron.
Specifically, the connection weight index of the front-end pulsed neuron and the current pulsed neuron is a piece of address information. According to the received connection weight index, the current neuron reads the connection weight of the front-end pulsed neuron and the current pulsed neuron from the memory inside the current neuron. With this connection weight information, the output information of the front-end neuron can be weighted more accurately when it participates in the calculation of the current neuron output information and can carry richer information.
Step S400b: updating the pulse tip information sequence within the current time window according to the pulse tip information output by the front-end pulsed neuron and the pulse tip information sequence within the current time window, to obtain a pulse tip information update sequence within the current time window.
Specifically, at each operation step of the pulsed neuron, a new piece of pulse tip information is stored at the head of the sequence and the pulse tip information at the tail position of the sequence is deleted, updating the pulse tip sequence once.
Step S500b: calculating front-end pulsed neuron input information through an attenuation function according to the current time window width and the pulse tip information update sequence within the current time window.
Specifically, the front-end pulsed neuron input information is calculated using

X_j = Σ_{δ_j} K(δ_j), 0 ≤ δ_j ≤ T_w,

where T_w is the time window width, δ_j ranges over the time steps, within the pulse tip information update sequence of the current time window, at which front-end neuron j issued a spike, and K(Δt) is an attenuation function that decreases rapidly as Δt increases.
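As one possible concrete choice of K, the sketch below uses an exponential kernel K(Δt) = exp(-Δt/τ) and represents the update sequence simply as a list of spike ages; both choices are assumptions, since the patent only requires K to decrease rapidly with Δt.

```python
import math


def decayed_input(spike_ages, t_window, tau=4.0):
    """Sum K(delta) over the ages (in time steps) at which one front-end neuron
    spiked within the current time window, with K(delta) = exp(-delta / tau)."""
    return sum(math.exp(-d / tau) for d in spike_ages if 0 <= d <= t_window)


# Example: the front-end neuron spiked 1 and 3 time steps ago within a window of width 8.
x_j = decayed_input([1, 3], t_window=8)
```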
Step S600b: calculating current pulsed neuron output information through a pulsed neuron computation model according to the front-end pulsed neuron input information, the connection weight of the front-end pulsed neuron and the current pulsed neuron, the historical membrane potential information, and the membrane potential leakage information.
Specifically, the calculation of the front-end pulsed neuron input information can be expressed by the following formula:

V_input = Σ_j W_ij · Σ_{δ_j} K(δ_j), 0 ≤ δ_j ≤ T_w,

where W_ij is the connection weight of front-end pulsed neuron j and current pulsed neuron i, T_w is the time window width, δ_j ranges over the time steps, within the pulse tip information update sequence of the current time window, at which front-end neuron j issued a spike, and K(Δt) is an attenuation function that decreases rapidly as Δt increases. The basic model at the cell body can be simplified to:

V_SNN = f(V + V_input + V_leak)

The issuance model and the reset model remain unchanged, where V is the historical membrane potential information stored in the memory, V_input is the input accumulated in the current cycle, equivalent to the expression above, and V_leak is the leakage value information.
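Putting the two formulas together, a hedged Python sketch of the simplified cell-body model follows; taking f as the identity, K as an exponential kernel, and the per-synapse spike lists as plain Python lists are all assumptions made for the example.

```python
import math


def snn_cell_body(spike_ages_per_input, weights, v_hist, v_leak, t_window, tau=4.0):
    """SNN-mode cell body: V_SNN = f(V + V_input + V_leak), with
    V_input = sum_j W_ij * sum_{delta_j} K(delta_j) over the current time window."""
    v_input = 0.0
    for w, ages in zip(weights, spike_ages_per_input):
        v_input += w * sum(math.exp(-d / tau) for d in ages if 0 <= d <= t_window)
    return v_hist + v_input + v_leak   # f taken as the identity here


# Example: two front-end pulsed neurons, one with spikes 1 and 3 steps old, one 2 steps old.
v_snn = snn_cell_body([[1, 3], [2]], weights=[0.6, -0.2], v_hist=0.1,
                      v_leak=-0.05, t_window=8)
```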
In this embodiment, the pulse tip information sequence within the current time window is updated according to the pulse tip information output by the front-end pulsed neuron and the pulse tip information sequence within the current time window, to obtain a pulse tip information update sequence within the current time window, and the front-end pulsed neuron input information is calculated through the attenuation function according to the current time window width and the connection weight of the front-end pulsed neuron and the current pulsed neuron. This supports spatiotemporal pulsed neural network models with temporal depth; compared with neural network solutions whose temporal depth is only one, it can greatly improve the spatiotemporal information coding capability of the pulsed neural network and enrich its application space.
FIG. 5 is a schematic flowchart of a multi-mode neural network information processing method according to still another embodiment. The method shown in FIG. 5 takes place after all the steps shown in FIG. 4 and before step S600 of FIG. 1, and comprises:
Step S100c: obtaining a threshold potential.
Specifically, obtaining the threshold potential comprises: reading a random threshold mask potential, a threshold bias, and a random threshold; performing a bitwise AND operation on the random threshold and the random threshold mask potential to obtain a threshold random superposition amount; and determining the threshold potential according to the threshold random superposition amount and the threshold bias.
A pseudo-random number generator produces a random threshold V_rand. A bitwise AND operation between the random threshold and a preset random threshold mask potential V_mask produces the threshold random superposition amount, which is then added to a preset threshold bias V_th0 to produce the actual threshold potential V_th. The initial seed of the pseudo-random number generator is given by a configuration register V_seed.
The mask potential V_mask is used to limit the range of the threshold increment: if V_mask = 0, the threshold random superposition amount is also 0, and the issuance mode degenerates to fixed-threshold issuance with the fixed threshold V_th0; if V_mask ≠ 0, the issuance mode is partially probabilistic threshold issuance. In the extreme case V_th0 = 0, the issuance mode is fully probabilistic threshold issuance.
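The threshold randomization can be sketched as below. A simple linear congruential generator stands in for the pseudo-random number generator seeded by V_seed; the generator, its constants, and the bit width are assumptions made only to keep the example self-contained, since the patent does not specify them.

```python
def random_threshold(v_seed, v_mask, v_th0, width_bits=16):
    """V_th = V_th0 + (V_rand & V_mask): fixed-threshold issuance when V_mask == 0,
    partially (or, with V_th0 == 0, fully) probabilistic issuance otherwise."""
    v_rand = (1103515245 * v_seed + 12345) % (1 << width_bits)  # stand-in PRNG step
    v_increment = v_rand & v_mask       # bitwise AND with the random threshold mask potential
    return v_th0 + v_increment, v_rand  # also return the new state to use as the next seed


# Example: a non-zero mask yields a different threshold on each call.
v_th, seed = random_threshold(v_seed=0x1234, v_mask=0x00FF, v_th0=100)
```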
Step S200c: comparing the current pulsed neuron output information with the threshold potential, and determining issuance trigger flag information according to the comparison result, the issuance trigger flag information comprising: issuance triggered or issuance not triggered. When the issuance trigger flag information is issuance triggered, the flow proceeds to step S300c; when the issuance trigger flag information is issuance not triggered, the flow proceeds to step S400c.
Specifically, the obtained threshold potential is compared with the current pulsed neuron output information, and the issuance trigger flag information is determined according to the comparison result. Only when the current pulsed neuron output information is greater than or equal to the threshold potential can the current pulsed neuron output information be sent.
Step S300c: resetting the refractory period timer, and updating the historical membrane potential information to preset reset membrane potential information.
Specifically, when the issuance trigger flag information is issuance triggered, the current pulsed neuron output information may be sent, the refractory period timer is reset, and the historical membrane potential information is updated to preset membrane potential information. In this update of the historical membrane potential information, according to the configured reset type Reset_type, the membrane potential is selectively reset to the current membrane potential, to the difference between the current membrane potential and the threshold potential, or to a fixed reset voltage.
Step S400c: reading the refractory period width and the current time step of the refractory period timer.
Specifically, when the issuance trigger flag information is issuance not triggered, the current pulsed neuron output information is not sent, and it is further judged whether the current time is within the refractory period. The current time is within the refractory period while the refractory period width is greater than the value of the refractory period timer, and the refractory period timer counts in time steps.
Step S500c: judging whether the current time is within the refractory period according to the refractory period width and the current time step of the refractory period timer. If the current time is within the refractory period, the flow proceeds to step S600c; if the current time is outside the refractory period, the flow proceeds to step S700c.
Specifically, from the accumulated count of the current time step of the refractory period timer, it can be judged whether the current time step is still within the refractory period.
Step S600c: incrementing the refractory period timer by one time step without updating the historical membrane potential information.
Specifically, if the current time is within the refractory period, then in keeping with the biomimetic character of the pulsed neural network, no response is made to the pulsed neuron output information and the historical membrane potential information is not updated. The historical membrane potential information is the information that the pulsed neuron needs to read at the next time step; that is, during the refractory period the pulsed neuron output information calculated this time does not participate in the calculation of the next time step.
Step S700c: incrementing the refractory period timer by one time step, and updating the historical membrane potential information to the current pulsed neuron output information.
Specifically, if the current time is outside the refractory period, the historical membrane potential information is updated to the current pulsed neuron output information, which then participates in the calculation of the next time step.
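Steps S200c to S700c together form the firing and refractory-period bookkeeping, sketched below under the assumption that the neuron state is a plain dictionary with illustrative field names.

```python
def fire_and_refractory(v_snn, v_th, state, v_reset=0.0):
    """Return the issuance trigger flag and maintain the refractory period timer."""
    if v_snn >= v_th:                        # S200c: compare with the threshold potential
        state["refractory_timer"] = 0        # S300c: reset the refractory period timer
        state["v_hist"] = v_reset            #         and reset the membrane potential
        return True                          # issuance triggered
    # S400c to S700c: issuance not triggered
    in_refractory = state["refractory_timer"] < state["refractory_width"]
    state["refractory_timer"] += 1           # count one time step in either branch
    if not in_refractory:                    # S700c: outside the refractory period
        state["v_hist"] = v_snn              # keep V_SNN for the next time step
    return False                             # issuance not triggered
```

The fixed-reset branch shown here corresponds to one of the configurable Reset_type options; resetting to the current membrane potential or to the difference from the threshold would only change the S300c assignment.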
In this embodiment, the threshold potential is determined by reading the random threshold mask potential and the threshold bias and receiving the configuration value given by the configuration register, so that the issuance of pulse tip information by the neuron has a certain probability of randomness. Whether or not the membrane potential exceeds the fixed threshold bias, the existence of a threshold random superposition amount that can be positive or negative means that the neuron cell body may still issue a pulse, which improves the computational power and information processing capability of the pulsed neural network model.
FIG. 6 is a schematic flowchart of a multi-mode neural network information processing method according to still another embodiment. The method shown in FIG. 6 details step S600 of the method shown in FIG. 1 for the case where the neuron working mode configuration parameter is a pulsed neuron working mode configuration parameter, and comprises:
Step S610b: reading a second issuance enable flag, the second issuance enable flag indicating either that data is allowed to be issued or that data is not allowed to be issued.
Specifically, the second issuance enable flag is control information that determines whether the final neuron output information is issued. When the second issuance enable flag indicates that data is not allowed to be issued, the calculated pulsed neuron output information is not issued and the flow ends. When the second issuance enable flag indicates that data is allowed to be issued, the flow proceeds to step S620b.
Step S620b: reading the issuance trigger flag information; when the issuance trigger flag information is issuance triggered, the flow proceeds to step S630b.
Specifically, when the second issuance enable flag indicates that data is allowed to be issued, it is still necessary to further judge whether the issuance trigger flag is issuance triggered.
Step S630b: outputting the current pulsed neuron output information.
In this embodiment, the current pulsed neuron output information is determined by setting the issuance enable flag and the issuance trigger flag, which makes the output of the pulsed neuron more controllable. The issuance enable flag can be configured so that some neurons are not allowed to issue data and serve only as intermediate auxiliary computation neurons, which is necessary for functions that require the cooperation of multiple neurons.
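A short sketch of the SNN output gating of steps S610b to S630b; representing the two flags as booleans and returning None when nothing is issued are assumptions for the example.

```python
def snn_output(enable, fire_triggered, v_snn):
    """Issue the current pulsed neuron output only when the second issuance enable
    flag allows it and the issuance trigger flag indicates 'issuance triggered'."""
    if enable and fire_triggered:
        return v_snn
    return None
```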
基于同样的发明思想,本发明一个实施例还提供一种计算机设备,包括存储器、处理器, 及存储在存储器上并可在处理器上运行的计算机程序,其中,所述处理器执行所述计算机程序时实现上述实施例所提及方法的步骤。Based on the same inventive concept, an embodiment of the present invention further provides a computer device, including a memory and a processor. And a computer program stored on the memory and operable on the processor, wherein the processor executes the computer program to implement the steps of the method mentioned in the above embodiments.
A person of ordinary skill in the art will understand that all or part of the flows of the methods in the above embodiments may be implemented by a computer program, or by a computer program or instructions directing the relevant hardware, and the program may be stored in a computer-readable storage medium. When executed, the program may include the flows of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
FIG. 7 is a schematic structural diagram of a multi-mode neural network information processing system according to an embodiment. The multi-mode neural network information processing system shown in FIG. 7 includes:
The neuron working mode reading module 100 is configured to read a neuron working mode configuration parameter, the neuron working mode configuration parameter including an artificial neuron working mode configuration parameter or a pulsed neuron working mode configuration parameter.
The neuron working mode configuration module 200 is configured to configure a current neuron working mode according to the neuron working mode configuration parameter, the current neuron working mode including an artificial neuron working mode or a pulsed neuron working mode.
The front-end neuron output information receiving module 300 is configured to receive front-end neuron output information, the front-end neuron output information including front-end artificial neuron output information or front-end pulsed neuron output information.
The current neuron information reading module 400 is configured to read current neuron information, the current neuron information including current artificial neuron information or current pulsed neuron information.
The current neuron output information calculation module 500 is configured to calculate current neuron output information according to the front-end neuron output information and the current neuron information, including calculating current artificial neuron output information according to the front-end artificial neuron output information and the current artificial neuron information, or calculating current pulsed neuron output information according to the front-end pulsed neuron output information and the current pulsed neuron information.
The current neuron output information output module 600 is configured to output the current artificial neuron output information or the current pulsed neuron output information.
The composite-mode neuron information processing system provided by the present invention configures the corresponding artificial neuron working mode or pulsed neuron working mode according to a preset neuron working mode configuration parameter and, in the corresponding working mode, calculates and outputs the current artificial neuron output information or the current pulsed neuron output information by receiving the front-end neuron output information and reading the current neuron information. The system can configure the corresponding neuron working mode according to the requirements of a task; when switching between tasks under different neural network working modes, only the neuron working mode configuration parameter needs to be modified. Unlike neural network accelerator solutions that support only the artificial neural network mode, and unlike neuromorphic solutions that support only the pulsed neural network, the present invention can, within the same architecture, simultaneously support machine learning applications based on the artificial neural network mode and computational neuroscience applications based on the pulsed neural network. This enriches the types of information a brain-inspired computing platform can process, reduces the cost of multi-task execution under different neural network working modes, and improves the execution efficiency of multiple tasks under different neural network working modes.
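Purely as a toy illustration of what "one architecture, two working modes" can mean, the sketch below selects between an ANN-style update and a spiking-style update with a single mode parameter. The sigmoid activation, the reset-to-zero firing rule, and all names are simplifying assumptions for the example, not the configuration scheme of the application.

    import math

    class CompositeModeNeuron:
        """Toy neuron whose update rule is selected by a working-mode parameter."""

        def __init__(self, mode, weights, bias=0.0, threshold=1.0):
            assert mode in ("artificial", "pulsed")
            self.mode = mode
            self.weights = weights
            self.bias = bias
            self.threshold = threshold
            self.membrane = 0.0          # only used in the pulsed mode

        def step(self, front_end_outputs):
            if self.mode == "artificial":
                # ANN-style update: weighted sum plus bias through an activation function.
                s = sum(w * x for w, x in zip(self.weights, front_end_outputs)) + self.bias
                return 1.0 / (1.0 + math.exp(-s))
            # SNN-style update: integrate weighted spikes, fire on threshold crossing.
            self.membrane += sum(w * x for w, x in zip(self.weights, front_end_outputs))
            if self.membrane >= self.threshold:
                self.membrane = 0.0
                return 1
            return 0

    # Switching tasks only requires changing the mode configuration parameter.
    ann_neuron = CompositeModeNeuron("artificial", weights=[0.5, -0.2], bias=0.1)
    snn_neuron = CompositeModeNeuron("pulsed", weights=[0.6, 0.6], threshold=1.0)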
FIG. 8 is a schematic structural diagram of a multi-mode neural network information processing system according to another embodiment. The multi-mode neural network information processing system shown in FIG. 8 includes:
The front-end artificial neuron output information receiving unit 100a is configured to receive the membrane potential information output by the front-end artificial neuron and the connection weight index of the front-end artificial neuron and the current artificial neuron.
The current artificial neuron information reading unit 200a is configured to read current artificial neuron bias information.
The artificial neuron connection weight reading unit 300a is configured to read the connection weight of the front-end artificial neuron and the current artificial neuron according to the connection weight index of the front-end artificial neuron and the current artificial neuron.
The current artificial neuron output information calculation unit 400a is configured to calculate current artificial neuron output information through a preset artificial neuron activation function, according to the membrane potential information output by the front-end artificial neuron, the connection weight of the front-end artificial neuron and the current artificial neuron, and the current artificial neuron bias information.
The composite-mode neuron information processing system provided by this embodiment, when configured in the artificial neuron working mode according to the requirements of a task, can implement most current artificial neural network models and supports the application of brain-inspired computing platforms in the field of machine learning.
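As an illustration only, the ANN-mode computation performed by unit 400a amounts to a weighted sum of front-end membrane potentials plus a bias, passed through an activation function. The sketch below assumes a simple weight table indexed by the connection weight index and uses ReLU as a stand-in for the preset activation function; every name in it is hypothetical.

    def artificial_neuron_output(front_potentials, weight_indices, weight_table, bias):
        """Weighted sum of front-end membrane potentials plus bias, through an activation."""
        # Resolve each connection-weight index into its stored weight value.
        weights = [weight_table[i] for i in weight_indices]
        weighted_sum = sum(w * v for w, v in zip(weights, front_potentials)) + bias
        # ReLU stands in for whatever preset activation function is configured.
        return max(0.0, weighted_sum)

    # Two front-end neurons, whose weights are looked up by index.
    out = artificial_neuron_output(front_potentials=[0.4, 0.9],
                                   weight_indices=[2, 0],
                                   weight_table=[0.1, -0.3, 0.7],
                                   bias=0.05)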
FIG. 9 is a schematic structural diagram of a multi-mode neural network information processing system according to yet another embodiment. The multi-mode neural network information processing system shown in FIG. 9 includes:
a potential extremum reading unit 500a, configured to read a potential extremum and the extremum neuron identifier corresponding to the potential extremum;
a potential extremum comparison unit 600a, configured to compare the current artificial neuron output information with the potential extremum; and, if the current artificial neuron output information is greater than or equal to the potential extremum,
a potential extremum updating unit 700a, configured to update the potential extremum to the current artificial neuron output information and update the extremum neuron identifier to the identifier of the current artificial neuron.
In this embodiment, the current artificial neuron output information is compared with the potential extremum, and if the current artificial neuron output information is greater than or equal to the potential extremum, the potential extremum is updated to the current artificial neuron output information. This allows the neural network to complete the pooling step directly in the convolution layer without allocating physical space for pooling-layer neurons, which greatly saves neuron resources and thereby reduces the construction cost of the neural network.
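A minimal sketch of this running-maximum update, with illustrative names and a plain Python loop standing in for the hardware units, might look as follows.

    def update_extremum(current_output, current_id, extremum, extremum_id):
        """Keep a running maximum (and its neuron identifier) over a pooling region."""
        if current_output >= extremum:
            return current_output, current_id
        return extremum, extremum_id

    # Folding every output of a pooling window through the update yields the maximum
    # without allocating physical neurons for a separate pooling layer.
    extremum, extremum_id = float("-inf"), None
    for nid, out in [("n0", 0.2), ("n1", 0.9), ("n2", 0.5)]:
        extremum, extremum_id = update_extremum(out, nid, extremum, extremum_id)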
FIG. 10 is a schematic structural diagram of a multi-mode neural network information processing system according to still another embodiment. The multi-mode neural network information processing system shown in FIG. 10 includes:
The first firing enable flag reading unit 800a is configured to read a first firing enable flag, the first firing enable flag indicating either that data is allowed to be issued or that data is not allowed to be issued; when the first firing enable flag indicates that data is allowed to be issued,
the artificial neuron firing data type reading unit 900a is configured to read an artificial neuron firing data type parameter, the artificial neuron firing data type being one of: issuing the current artificial neuron output information, issuing the potential extremum, or issuing the extremum neuron identifier corresponding to the potential extremum.
The artificial neuron firing data type determination unit 910a is configured to determine the final output information of the current artificial neuron according to the artificial neuron firing data type parameter.
The current artificial neuron output information output unit 920a is configured to output the final output information of the current artificial neuron.
In this embodiment, the current artificial neuron output information is determined by setting the firing enable flag and the artificial neuron firing data type parameter, which makes the output of the artificial neuron more controllable. With the firing enable flag, certain neurons can be configured so that they are not allowed to issue data and serve only as intermediate auxiliary computation neurons, which is essential for functions that require the cooperation of multiple neurons. In addition, the firing enable flag and the firing data type can work together to reduce the neuron output information of a set region to a single current neuron output information (that output being the maximum of the output information of the neurons in the set region), thereby completing a direct max-pooling operation.
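For illustration, the selection among the three firing data types can be pictured as a simple dispatch on a configuration value; the string labels and the use of None for "nothing issued" are assumptions made for this sketch.

    def final_artificial_output(firing_enable, firing_data_type,
                                current_output, extremum, extremum_id):
        """Select what an enabled artificial neuron finally issues."""
        if not firing_enable:
            return None                      # helper neuron: computes, never issues
        if firing_data_type == "current_output":
            return current_output
        if firing_data_type == "extremum":
            return extremum                  # e.g. the maximum of a pooling region
        if firing_data_type == "extremum_id":
            return extremum_id               # argmax-style output
        raise ValueError(f"unknown firing data type: {firing_data_type}")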
FIG. 11 is a schematic structural diagram of a multi-mode neural network information processing system according to still another embodiment. The multi-mode neural network information processing system shown in FIG. 11 includes:
The front-end pulsed neuron output information receiving unit 100b is configured to receive the spike information output by the front-end pulsed neuron and the connection weight index of the front-end pulsed neuron and the current pulsed neuron.
The current pulsed neuron information reading unit 200b is configured to read the current time window width, the spike information sequence within the current time window, the historical membrane potential information, and the membrane potential leakage information.
The pulsed neuron connection weight reading unit 300b is configured to read the connection weight of the front-end pulsed neuron and the current pulsed neuron according to the connection weight index of the front-end pulsed neuron and the current pulsed neuron.
The in-window spike information sequence updating unit 400b is configured to update the spike information sequence within the current time window according to the spike information output by the front-end pulsed neuron and the existing spike information sequence within the current time window, to obtain an updated spike information sequence for the current time window.
The front-end pulsed neuron input information calculation unit 500b is configured to calculate front-end pulsed neuron input information through an attenuation function, according to the current time window width and the updated spike information sequence for the current time window.
The pulsed neuron output information calculation unit 600b is configured to calculate current pulsed neuron output information through a pulsed neuron computation model, according to the front-end pulsed neuron input information, the connection weight of the front-end pulsed neuron and the current pulsed neuron, the historical membrane potential information, and the membrane potential leakage information.
In this embodiment, the spike information sequence within the current time window is updated according to the spike information output by the front-end pulsed neuron and the existing spike information sequence within the current time window, to obtain an updated spike information sequence; the front-end pulsed neuron input information is then calculated through an attenuation function according to the current time window width and the connection weight of the front-end pulsed neuron and the current pulsed neuron. This supports spatio-temporal pulsed neural network models with temporal depth; compared with neural network solutions whose temporal depth is only one, it greatly improves the spatio-temporal information coding capability of the pulsed neural network and enriches its application space.
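A small sketch of the idea, assuming an exponential decay as the attenuation function and a very simple leaky integration as the pulsed neuron computation model; the window layout (newest spike first), the decay constant, and all names are illustrative assumptions.

    import math

    def front_end_input(spike_window, window_width, tau=4.0):
        """Exponentially decayed sum of the spikes in the current time window.

        spike_window[k] is 1 if the front-end neuron spiked k steps ago (k = 0 is
        the newest slot); older spikes contribute less.
        """
        return sum(s * math.exp(-k / tau)
                   for k, s in enumerate(spike_window[:window_width]))

    def pulsed_neuron_output(front_input, weight, history_potential, leak):
        """Leaky integration: weighted input plus the retained history, minus a leak."""
        return history_potential + weight * front_input - leak

    # Shifting the newest spike into the window first corresponds to the
    # "updated spike information sequence" step above.
    window = [1, 0, 1, 0]                    # newest spike first
    v = pulsed_neuron_output(front_end_input(window, window_width=4),
                             weight=0.8, history_potential=0.3, leak=0.05)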
FIG. 12 is a schematic structural diagram of a multi-mode neural network information processing system according to still another embodiment. The multi-mode neural network information processing system shown in FIG. 12 includes:
The threshold potential acquisition unit 700b is configured to acquire a threshold potential, and includes: a threshold information receiving subunit, configured to read a random threshold mask potential, a threshold bias, and a random threshold; a random threshold superposition amount acquisition subunit, configured to perform a bitwise AND operation on the random threshold and the random threshold mask potential to obtain a random threshold superposition amount; and a threshold potential determination subunit, configured to determine the threshold potential according to the random threshold superposition amount and the threshold bias.
The firing trigger determination unit 800b is configured to compare the current pulsed neuron output information with the threshold potential and determine firing trigger flag information according to the comparison result, the firing trigger flag information including: firing triggered or firing not triggered.
When the firing trigger flag information indicates that firing is triggered,
the firing-triggered action unit 910b is configured to reset the refractory period timer and update the historical membrane potential information to preset reset membrane potential information.
When the firing trigger flag information indicates that firing is not triggered,
the firing-not-triggered action unit 920b includes: a refractory period judgment subunit, configured to read the refractory period width and the current time step of the refractory period timer and to judge, according to the refractory period width and the current time step of the refractory period timer, whether the current time is within the refractory period; an in-refractory-period operation subunit, used when the refractory period judgment subunit determines that the current time is within the refractory period, configured to increment the refractory period timer by one time step without updating the historical membrane potential information; and an out-of-refractory-period operation subunit, used when the refractory period judgment subunit determines that the current time is not within the refractory period, configured to increment the refractory period timer by one time step and update the historical membrane potential information to the current pulsed neuron output information.
The second firing enable flag reading unit 930b is configured to read a second firing enable flag, the second firing enable flag indicating either that data is allowed to be issued or that data is not allowed to be issued. When the second firing enable flag indicates that data is allowed to be issued, the firing trigger flag information reading unit is configured to read the firing trigger flag information; and when the firing trigger flag information indicates that firing is triggered, the current pulsed neuron output information output unit is configured to output the current pulsed neuron output information.
In this embodiment, the current pulsed neuron output information is determined by setting the firing enable flag and the firing trigger flag, which makes the output of the pulsed neuron more controllable. With the firing enable flag, certain neurons can be configured so that they are not allowed to issue data and serve only as intermediate auxiliary computation neurons, which is essential for functions that require the cooperation of multiple neurons.
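The post-decision bookkeeping of units 910b and 920b (reset on firing; otherwise advance the refractory timer and update the history only outside the refractory period) can be pictured with the following sketch; all names, the timer-as-integer convention, and the default reset potential are assumptions made for the example.

    def apply_firing_result(fired, in_refractory, history_potential, pulsed_output,
                            refractory_timer, reset_potential=0.0):
        """Update the membrane history and refractory timer after the firing decision.

        Returns (new_history_potential, new_refractory_timer).
        """
        if fired:
            # Firing trigger: reset the refractory timer and the membrane history.
            return reset_potential, 0
        if in_refractory:
            # No trigger, still refractory: keep the old history, advance the timer.
            return history_potential, refractory_timer + 1
        # No trigger, outside the refractory period: the computed value becomes history.
        return pulsed_output, refractory_timer + 1

    state = apply_firing_result(fired=False, in_refractory=False,
                                history_potential=0.3, pulsed_output=0.45,
                                refractory_timer=7)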
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered to fall within the scope of this specification.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that a person of ordinary skill in the art may make several variations and improvements without departing from the concept of the present invention, and these all belong to the protection scope of the present invention. Therefore, the protection scope of the patent for this invention shall be subject to the appended claims.

Claims (11)

  1. A composite-mode neuron information processing method, characterized in that the method comprises:
    reading a neuron working mode configuration parameter, the neuron working mode configuration parameter comprising an artificial neuron working mode configuration parameter or a pulsed neuron working mode configuration parameter;
    configuring a current neuron working mode according to the neuron working mode configuration parameter, the current neuron working mode comprising an artificial neuron working mode or a pulsed neuron working mode;
    receiving front-end neuron output information, the front-end neuron output information comprising front-end artificial neuron output information or front-end pulsed neuron output information;
    reading current neuron information, the current neuron information comprising current artificial neuron information or current pulsed neuron information;
    calculating current neuron output information according to the front-end neuron output information and the current neuron information, including calculating current artificial neuron output information according to the front-end artificial neuron output information and the current artificial neuron information, or calculating current pulsed neuron output information according to the front-end pulsed neuron output information and the current pulsed neuron information; and
    outputting the current artificial neuron output information or the current pulsed neuron output information.
  2. The composite-mode neuron information processing method according to claim 1, characterized in that:
    the front-end artificial neuron output information comprises: membrane potential information output by the front-end artificial neuron, and a connection weight index of the front-end artificial neuron and the current artificial neuron;
    the current artificial neuron information comprises: current artificial neuron bias information; and
    the calculating current artificial neuron output information according to the membrane potential information output by the front-end artificial neuron and the current artificial neuron information comprises:
    reading a connection weight of the front-end artificial neuron and the current artificial neuron according to the connection weight index of the front-end artificial neuron and the current artificial neuron; and
    calculating the current artificial neuron output information through a preset artificial neuron activation function, according to the membrane potential information output by the front-end artificial neuron, the connection weight of the front-end artificial neuron and the current artificial neuron, and the current artificial neuron bias information.
  3. The composite-mode neuron information processing method according to claim 2, characterized in that, after the step of calculating the current artificial neuron output information through the preset artificial neuron activation function according to the membrane potential information output by the front-end artificial neuron, the connection weight of the front-end artificial neuron and the current artificial neuron, and the current artificial neuron bias information, the method further comprises:
    reading a potential extremum and an extremum neuron identifier corresponding to the potential extremum;
    comparing the current artificial neuron output information with the potential extremum; and, if the current artificial neuron output information is greater than or equal to the potential extremum,
    updating the potential extremum to the current artificial neuron output information, and updating the extremum neuron identifier to an identifier of the current artificial neuron.
  4. The composite-mode neuron information processing method according to claim 3, characterized in that the outputting the current artificial neuron output information comprises:
    reading a first firing enable flag, the first firing enable flag indicating either that data is allowed to be issued or that data is not allowed to be issued;
    when the first firing enable flag indicates that data is allowed to be issued, reading an artificial neuron firing data type parameter, the artificial neuron firing data type being one of: issuing the current artificial neuron output information, issuing the potential extremum, or issuing the extremum neuron identifier corresponding to the potential extremum;
    determining final output information of the current artificial neuron according to the artificial neuron firing data type parameter; and
    outputting the final output information of the current artificial neuron.
  5. The composite-mode neuron information processing method according to claim 1, characterized in that:
    the front-end pulsed neuron output information comprises: spike information output by the front-end pulsed neuron, and a connection weight index of the front-end pulsed neuron and the current pulsed neuron;
    the current pulsed neuron information comprises: a current time window width, a spike information sequence within the current time window, historical membrane potential information, and membrane potential leakage information; and
    the calculating current pulsed neuron output information according to the front-end pulsed neuron output information and the current pulsed neuron information comprises:
    reading a connection weight of the front-end pulsed neuron and the current pulsed neuron according to the connection weight index of the front-end pulsed neuron and the current pulsed neuron;
    updating the spike information sequence within the current time window according to the spike information output by the front-end pulsed neuron and the spike information sequence within the current time window, to obtain an updated spike information sequence for the current time window;
    calculating front-end pulsed neuron input information through an attenuation function, according to the current time window width and the updated spike information sequence for the current time window; and
    calculating the current pulsed neuron output information through a pulsed neuron computation model, according to the front-end pulsed neuron input information, the connection weight of the front-end pulsed neuron and the current pulsed neuron, the historical membrane potential information, and the membrane potential leakage information.
  6. The composite-mode neuron information processing method according to claim 5, characterized in that, after the step of calculating the current pulsed neuron output information through the pulsed neuron computation model according to the front-end pulsed neuron input information, the connection weight of the front-end pulsed neuron and the current pulsed neuron, the historical membrane potential information, and the membrane potential leakage information, and before the step of outputting the current pulsed neuron output information, the method further comprises:
    acquiring a threshold potential;
    comparing the current pulsed neuron output information with the threshold potential, and determining firing trigger flag information according to the comparison result, the firing trigger flag information including: firing triggered or firing not triggered; and, when the firing trigger flag information indicates that firing is triggered,
    resetting a refractory period timer and updating the historical membrane potential information to preset reset membrane potential information.
  7. The composite-mode neuron information processing method according to claim 6, characterized in that, after the step of comparing the current pulsed neuron output information with the threshold potential and determining the firing trigger flag information according to the comparison result, the method further comprises:
    when the firing trigger flag information indicates that firing is not triggered, reading a refractory period width and a current time step of the refractory period timer;
    judging, according to the refractory period width and the current time step of the refractory period timer, whether the current time is within the refractory period; if the current time is within the refractory period, incrementing the refractory period timer by one time step without updating the historical membrane potential information; and
    if the current time is not within the refractory period, incrementing the refractory period timer by one time step and updating the historical membrane potential information to the current pulsed neuron output information.
  8. The composite-mode neuron information processing method according to claim 6, characterized in that the acquiring a threshold potential comprises:
    reading a random threshold mask potential, a threshold bias, and a random threshold;
    performing a bitwise AND operation on the random threshold and the random threshold mask potential to obtain a random threshold superposition amount; and
    determining the threshold potential according to the random threshold superposition amount and the threshold bias.
  9. The composite-mode neuron information processing method according to claim 6, characterized in that the outputting the current pulsed neuron output information comprises:
    reading a second firing enable flag, the second firing enable flag indicating either that data is allowed to be issued or that data is not allowed to be issued; when the second firing enable flag indicates that data is allowed to be issued,
    reading the firing trigger flag information; and, when the firing trigger flag information indicates that firing is triggered,
    outputting the current pulsed neuron output information.
  10. A composite-mode neuron information processing system, characterized by comprising:
    a neuron working mode reading module, configured to read a neuron working mode configuration parameter, the neuron working mode configuration parameter comprising an artificial neuron working mode configuration parameter or a pulsed neuron working mode configuration parameter;
    a neuron working mode configuration module, configured to configure a current neuron working mode according to the neuron working mode configuration parameter, the current neuron working mode comprising an artificial neuron working mode or a pulsed neuron working mode;
    a front-end neuron output information receiving module, configured to receive front-end neuron output information, the front-end neuron output information comprising front-end artificial neuron output information or front-end pulsed neuron output information;
    a current neuron information reading module, configured to read current neuron information, the current neuron information comprising current artificial neuron information or current pulsed neuron information;
    a current neuron output information calculation module, configured to calculate current neuron output information according to the front-end neuron output information and the current neuron information, including calculating current artificial neuron output information according to the front-end artificial neuron output information and the current artificial neuron information, or calculating current pulsed neuron output information according to the front-end pulsed neuron output information and the current pulsed neuron information; and
    a current neuron output information output module, configured to output the current artificial neuron output information or the current pulsed neuron output information.
  11. A computer device, characterized by comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 9.
PCT/CN2017/114661 2017-01-20 2017-12-05 Compound-mode neuron information processing method and system, and computer device WO2018133568A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710041892.3 2017-01-20
CN201710041892.3A CN106875004B (en) 2017-01-20 2017-01-20 Composite mode neuronal messages processing method and system

Publications (1)

Publication Number Publication Date
WO2018133568A1 true WO2018133568A1 (en) 2018-07-26

Family

ID=59158426

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/114661 WO2018133568A1 (en) 2017-01-20 2017-12-05 Compound-mode neuron information processing method and system, and computer device

Country Status (2)

Country Link
CN (1) CN106875004B (en)
WO (1) WO2018133568A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111082949A (en) * 2019-10-29 2020-04-28 广东工业大学 Method for efficiently transmitting pulse data packets in brain-like computer

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106875004B (en) * 2017-01-20 2019-09-10 北京灵汐科技有限公司 Composite mode neuronal messages processing method and system
CN107563503A (en) * 2017-09-14 2018-01-09 胡明建 A kind of codified selects the design method that threshold values selects function artificial neuron
CN107545304A (en) * 2017-09-16 2018-01-05 胡明建 A kind of design method for changing activation primitive artificial neuron according to network demand
CN107578096A (en) * 2017-09-21 2018-01-12 胡明建 A kind of voltage-frequency formula selects the design method of end artificial neuron
CN107578097A (en) * 2017-09-25 2018-01-12 胡明建 A kind of design method of more threshold values polygamma function feedback artificial neurons
CN107609640A (en) * 2017-10-01 2018-01-19 胡明建 A kind of threshold values selects the design method of end graded potential formula artificial neuron
CN108171326B (en) * 2017-12-22 2020-08-04 清华大学 Data processing method, device, chip, equipment and storage medium of neural network
CN108764464B (en) * 2018-04-12 2020-10-16 清华大学 Neuron information sending method, device and storage medium
CN109685252B (en) * 2018-11-30 2023-04-07 西安工程大学 Building energy consumption prediction method based on cyclic neural network and multi-task learning model
WO2021114133A1 (en) * 2019-12-11 2021-06-17 Autonym Pte. Ltd. Method and system for informed decision making
CN114254106A (en) * 2020-09-25 2022-03-29 北京灵汐科技有限公司 Text classification method, device, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831476A (en) * 2012-08-22 2012-12-19 中国科学院上海光学精密机械研究所 Pattern detecting device and pattern detecting method for pulse neural network
CN105095961A (en) * 2015-07-16 2015-11-25 清华大学 Mixing system with artificial neural network and impulsive neural network
CN105095966A (en) * 2015-07-16 2015-11-25 清华大学 Hybrid computing system of artificial neural network and impulsive neural network
CN105303235A (en) * 2015-10-26 2016-02-03 清华大学 Construction method of large-scale hierarchical neural network
CN106875004A (en) * 2017-01-20 2017-06-20 清华大学 Composite mode neuronal messages processing method and system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1381721A (en) * 2002-06-04 2002-11-27 复旦大学 Portable intelligent electronic nose and its preparing process
US9111225B2 (en) * 2012-02-08 2015-08-18 Qualcomm Incorporated Methods and apparatus for spiking neural computation
US9477926B2 (en) * 2012-11-20 2016-10-25 Qualcomm Incorporated Piecewise linear neuron modeling
US9418331B2 (en) * 2013-10-28 2016-08-16 Qualcomm Incorporated Methods and apparatus for tagging classes using supervised learning

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831476A (en) * 2012-08-22 2012-12-19 中国科学院上海光学精密机械研究所 Pattern detecting device and pattern detecting method for pulse neural network
CN105095961A (en) * 2015-07-16 2015-11-25 清华大学 Mixing system with artificial neural network and impulsive neural network
CN105095966A (en) * 2015-07-16 2015-11-25 清华大学 Hybrid computing system of artificial neural network and impulsive neural network
CN105303235A (en) * 2015-10-26 2016-02-03 清华大学 Construction method of large-scale hierarchical neural network
CN106875004A (en) * 2017-01-20 2017-06-20 清华大学 Composite mode neuronal messages processing method and system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111082949A (en) * 2019-10-29 2020-04-28 广东工业大学 Method for efficiently transmitting pulse data packets in brain-like computer
CN111082949B (en) * 2019-10-29 2022-01-28 广东工业大学 Method for efficiently transmitting pulse data packets in brain-like computer

Also Published As

Publication number Publication date
CN106875004A (en) 2017-06-20
CN106875004B (en) 2019-09-10

Similar Documents

Publication Publication Date Title
WO2018133568A1 (en) Compound-mode neuron information processing method and system, and computer device
US10521729B2 (en) Neural architecture search for convolutional neural networks
TWI503761B (en) Apparatus and methods for synaptic update in a pulse-coded network
KR20200088475A (en) Simultaneous training of functional networks of neural networks
JP6275868B2 (en) Neural watchdog
US20210304005A1 (en) Trace-based neuromorphic architecture for advanced learning
CN109034371B (en) Deep learning model reasoning period acceleration method, device and system
CN106875003B (en) Adaptive leakage value neuron information processing method and system
US11308395B2 (en) Method and system for performing machine learning
EP3567523B1 (en) Blink and averted gaze avoidance in photographic images
TWI729345B (en) Event prediction method and device, electronic equipment
CN114781272A (en) Carbon emission prediction method, device, equipment and storage medium
WO2018133570A1 (en) Self-adaptive threshold neuron information processing method, self-adaptive leakage value neuron information processing method and system, and computer device and readable storage medium
EP3685266A1 (en) Power state control of a mobile device
KR20160068823A (en) Congestion avoidance in networks of spiking neurons
US9542645B2 (en) Plastic synapse management
KR101782760B1 (en) Dynamically assigning and examining synaptic delay
WO2024074072A1 (en) Spiking neural network accelerator learning method and apparatus, terminal, and storage medium
CN106815638B (en) Input weight expanded neuron information processing method and system
WO2020019780A1 (en) Event prediction method and apparatus, and electronic device
CN110730971B (en) Neuron information processing method and system with depth time window
CN113269313A (en) Synapse weight training method, electronic device and computer readable medium
CN108764464B (en) Neuron information sending method, device and storage medium
CN108171326B (en) Data processing method, device, chip, equipment and storage medium of neural network
WO2018133567A1 (en) Neuron weight information processing method and system, neuron information processing method and system, and computer device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17892997

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17892997

Country of ref document: EP

Kind code of ref document: A1