CN105095967A - Multi-modal neuromorphic network core - Google Patents

Multi-modal neuromorphic network core

Info

Publication number
CN105095967A
CN105095967A (application CN201510419465.5A)
Authority
CN
China
Prior art keywords
dendrite
unit
modal
network core
axon
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510419465.5A
Other languages
Chinese (zh)
Other versions
CN105095967B (en)
Inventor
裴京
施路平
王栋
邓磊
徐海峥
李国齐
马骋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Ling Xi Technology Co. Ltd.
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201510419465.5A priority Critical patent/CN105095967B/en
Publication of CN105095967A publication Critical patent/CN105095967A/en
Application granted granted Critical
Publication of CN105095967B publication Critical patent/CN105095967B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention provides a multi-modal neuromorphic network core comprising a mode register, an axon input unit, a synaptic weight storage unit, a dendrite unit and a neuron computing unit. The core can perform both artificial neural network computation and spiking neural network computation, and can switch between the artificial neural network operating mode and the spiking neural network operating mode on demand.

Description

Multi-modal neuromorphic network core
Technical field
The present invention relates to a neuromorphic network core.
Background art
A neural network is a computing system that mimics the synapse-neuron structure of the biological brain to process data. It consists of multiple layers of computing nodes and the connections between layers. Each node simulates a neuron and performs a specific operation, such as an activation function; the connections between nodes simulate synapses, and the weight attached to each connection from the previous layer represents a synaptic weight. Neural networks have powerful nonlinear, adaptive information-processing capabilities.
In an artificial neural network, a neuron accumulates its weighted inputs and passes the accumulated value through an activation function to produce its output. Depending on the network topology, neuron model and learning rule, artificial neural networks include dozens of network models such as the perceptron, the Hopfield network and the Boltzmann machine; they realize diverse functions and are applied in pattern recognition, complex control, signal processing, optimization and other fields. A traditional artificial neural network can be regarded as encoding data in the firing-rate information of neuron pulses, with the neurons of each layer running serially, layer by layer. Artificial neural networks simulate the hierarchical structure of the biological nervous system, but fail to fully match the information-processing architecture of the cortex, for example the influence of temporal sequence on learning: a real biological cortex does not learn from static, independent data, but from data whose context evolves over time.
The spiking neural network is a newer class of neural network that has emerged in recent decades and is known as the third generation of neural networks. Data in a spiking neural network are encoded in the spatio-temporal information of neuron spike signals; information passed into, out of, and within the network takes the form of the spikes that neurons emit and the times at which they are emitted, and the neurons must run in parallel. Compared with traditional artificial neural networks, spiking neural networks differ considerably in information-processing style, neuron model and concurrency, and their mode of operation is closer to a real biological system. A spiking neural network encodes and processes neural information with precisely timed spike trains; this computational model, which contains temporal computing elements, has greater biological plausibility, is an effective tool for processing complex spatio-temporal information, can handle multi-modal information, and processes information in closer to real time. However, the discontinuity of spiking neuron models, the complexity of spatio-temporal coding and the uncertainty of the network structure make it difficult to give a complete mathematical description of the whole network. It is therefore difficult to construct effective, general supervised learning algorithms, which limits the scale and accuracy of spiking neural network computation.
Summary of the invention
In view of this, it is necessary to provide a neuromorphic network core that can perform both artificial neural network computation and spiking neural network computation.
A multi-modal neuromorphic network core comprises: a mode register, an axon input unit, a synaptic weight storage unit, a dendrite unit and a neuron computing unit.
The mode register is connected to the axon input unit, the dendrite unit and the neuron computing unit, and controls whether these units operate in artificial neural network mode or in spiking neural network mode.
The axon input unit is connected to the dendrite unit, and receives and stores the axon inputs.
The synaptic weight storage unit is connected to the dendrite unit and stores the synaptic weight matrix.
The dendrite unit is connected to the neuron computing unit and comprises a dendrite multiply-add unit and a dendrite accumulation unit. In artificial neural network mode, the axon input vector and the synaptic weight matrix are fed into the dendrite multiply-add unit for multiply-add operations; in spiking neural network mode, they are fed into the dendrite accumulation unit for accumulation operations.
The neuron computing unit comprises a first computing unit and a second computing unit. In artificial neural network mode, the multiply-add results from the dendrite multiply-add unit are fed into the first computing unit for artificial neural network computation; in spiking neural network mode, the accumulation results from the dendrite accumulation unit are fed into the second computing unit for spiking neural network computation.
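As a rough software illustration of this mode-switched data path (this is a sketch, not the patented hardware; the tanh activation, leak factor 0.9 and threshold 1.0 are assumptions for the example):

```python
import numpy as np

class MultiModalCore:
    """Toy software model of the mode-switched data path (illustrative only)."""

    def __init__(self, weights, mode="ann"):
        self.weights = weights          # synaptic weight matrix, shape (a, b)
        self.mode = mode                # mode register: "ann" or "snn"
        self.potential = np.zeros(weights.shape[1])  # membrane potentials (SNN mode)

    def step(self, axon_input):
        if self.mode == "ann":
            # Dendrite multiply-add unit: vector-matrix multiply of numeric inputs,
            # then the first computing unit applies a nonlinear activation.
            dendrite_out = axon_input @ self.weights
            return np.tanh(dendrite_out)            # activation chosen for illustration
        else:
            # Dendrite accumulation unit: inputs are 0/1 spikes, so the vector-matrix
            # multiply reduces to summing the weight rows of the spiking axons.
            dendrite_out = self.weights[axon_input.astype(bool)].sum(axis=0)
            # Second computing unit: a leaky integrate-and-fire update.
            self.potential = 0.9 * self.potential + dendrite_out
            spikes = self.potential >= 1.0
            self.potential[spikes] = 0.0            # reset the fired neurons
            return spikes.astype(np.uint8)

w = np.full((4, 2), 0.5)                            # 4 axons, 2 neurons
core = MultiModalCore(w, mode="snn")
print(core.step(np.array([1, 0, 1, 0])))            # prints [1 1]: both neurons fire
```

Switching the `mode` field between `"ann"` and `"snn"` reuses the same weight matrix with the two different dendrite and neuron computations, mirroring how the mode register reconfigures the core.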
Compared with the prior art, the multi-modal neuromorphic network core provided by the invention can perform both artificial neural network computation and spiking neural network computation, and can switch between the two operating modes as required; it can therefore perform real-time, multi-modal or complex spatio-temporal signal computation while guaranteeing computational accuracy.
Brief description of the drawings
Fig. 1 is a structural diagram of the basic computing units in the hybrid artificial neural network and spiking neural network system provided by the first embodiment of the invention.
Fig. 2 is a schematic diagram of the series structure of the present invention.
Fig. 3 is a schematic diagram of the parallel structure of the present invention.
Fig. 4 is a schematic diagram of the concurrent structure of the present invention.
Fig. 5 is a schematic diagram of the learning structure of the present invention.
Fig. 6 is a schematic diagram of the feedback structure of the present invention.
Fig. 7 is a schematic diagram of the hierarchical structure of computing units in the hybrid system provided by the invention.
Fig. 8 shows a hybrid artificial neural network and spiking neural network system provided by the invention.
Fig. 9 is a schematic diagram of converting the numeric output of an artificial neural network into a spike train in the second embodiment of the invention.
Fig. 10 is a schematic diagram of converting a rate-coded spike train output by a spiking neural network into a numeric value in the second embodiment of the invention.
Fig. 11 is a schematic diagram of converting a population-coded spike train output by a spiking neural network into a numeric value in the second embodiment of the invention.
Fig. 12 is a schematic diagram of converting a time-coded spike train output by a spiking neural network into a numeric value in the second embodiment of the invention.
Fig. 13 is a schematic diagram of converting a binary-coded spike train output by a spiking neural network into a numeric value in the second embodiment of the invention.
Fig. 14 is a structural block diagram of the multi-modal neuromorphic network core provided by the third embodiment of the invention.
Fig. 15 is a structural block diagram of the multi-modal neuromorphic network core of the third embodiment operating in artificial neural network mode.
Fig. 16 is the operation flowchart of one time step of the multi-modal neuromorphic network core in artificial neural network mode.
Fig. 17 is a structural block diagram of the multi-modal neuromorphic network core of the third embodiment operating in spiking neural network mode.
Fig. 18 is the operation flowchart of one time step of the multi-modal neuromorphic network core in spiking neural network mode.
Fig. 19 shows the hybrid artificial neural network and spiking neural network system provided by the fourth embodiment of the invention.
Fig. 20 is a structural block diagram of a routing node in the fourth embodiment of the invention.
Fig. 21 shows the composition of a routing data packet in the fourth embodiment of the invention.
Fig. 22 is the workflow diagram of a routing node in the fourth embodiment of the invention.
Description of main element symbols
Hybrid system 100                    Mode register 211
Basic computing unit 110             Axon input unit 212
First basic computing unit 110a      Synaptic weight storage unit 213
Second basic computing unit 110b     Dendrite unit 214
Learning unit 111                    Dendrite multiply-add unit 214a
Neuron 115                           Dendrite accumulation unit 214b
Synapse 116                          Neuron computing unit 215
Composite computing unit 120         First computing unit 215a
Series composite unit 120a           Second computing unit 215b
Parallel composite unit 120b         Dendrite extension storage unit 2151
Concurrent composite unit 120c       Parameter storage unit 2152
Learning composite unit 120d         Integrate-leak computing unit 2153
Feedback composite unit 120e         Trigger signal counter 216
Hybrid system 200                    Controller 217
Neuromorphic network core 210        Routing node 220
Multi-modal neuromorphic network core 210a
The following embodiments further illustrate the present invention with reference to the above drawings.
Detailed description of the embodiments
The multi-modal neuromorphic network core provided by the invention is described in further detail below with reference to the drawings and specific embodiments.
The first embodiment of the invention provides a hybrid artificial neural network and spiking neural network system 100 comprising at least two basic computing units 110. Among these, at least one is an artificial neural network computing unit responsible for artificial neural network computation, and at least one is a spiking neural network computing unit responsible for spiking neural network computation. The basic computing units 110 are interconnected according to a topological structure and jointly realize the neural network computing function.
Referring to Fig. 1, the at least one artificial neural network computing unit and the at least one spiking neural network computing unit can each be regarded as an independent neural network comprising multiple neurons 115 connected by synapses 116 in a single-layer or multi-layer structure. A synaptic weight represents the weight with which a postsynaptic neuron receives the output of a presynaptic neuron.
The at least one spiking neural network computing unit performs spiking neural network computation on the data it receives. Its input data, output data and the data transmitted between its neurons 115 are spike trains, and its neurons 115 follow a spike-based neuron model, which may be, but is not limited to, at least one of the leaky integrate-and-fire (LIF) model, the spike response model and the Hodgkin-Huxley model.
The at least one artificial neural network computing unit performs artificial neural network computation on the data it receives. Its input data, output data and the data transmitted between its neurons 115 are numeric values. Depending on the neuron model, network structure and learning algorithm, the artificial neural network computing unit may be at least one of a perceptron, BP, Hopfield, adaptive resonance theory, deep belief or convolutional neural network computing unit.
The at least one artificial neural network computing unit and the at least one spiking neural network computing unit are topologically connected to form a composite neural network computing unit.
The topological structure comprises at least one of a series structure, a parallel structure, a concurrent structure, a learning structure and a feedback structure.
Referring to Fig. 2, two basic computing units 110 are connected in series to form a series composite unit 120a. The two units are a first basic computing unit 110a and a second basic computing unit 110b, with the output of the first connected to the input of the second; one of them is an artificial neural network computing unit and the other a spiking neural network computing unit. The system input is first processed by the first basic computing unit 110a, the result serves as the input of the second basic computing unit 110b, and the result processed by the second unit is the system output.
Referring to Fig. 3, two basic computing units 110 are connected in parallel to form a parallel composite unit 120b. The two units are a first basic computing unit 110a and a second basic computing unit 110b; their inputs are connected together and their outputs are connected together, and one of them is an artificial neural network computing unit while the other is a spiking neural network computing unit. The system input is fed simultaneously into both units for parallel processing, and the results obtained by the two units are merged as the system output.
Referring to Fig. 4, two basic computing units 110 are joined side by side to form a concurrent composite unit 120c. The two units are a first basic computing unit 110a and a second basic computing unit 110b; their inputs are independent of each other while their outputs are connected, and one of them is an artificial neural network computing unit while the other is a spiking neural network computing unit. The system input is divided into input 1 and input 2: input 1 is fed into and processed by the first basic computing unit 110a, input 2 is fed into and processed by the second basic computing unit 110b, and the results of the two units are then merged as the system output.
Referring to Fig. 5, two basic computing units 110 and a learning unit 111 form a learning composite unit 120d. The two units are a first basic computing unit 110a and a second basic computing unit 110b, one being an artificial neural network computing unit and the other a spiking neural network computing unit. The system input is processed by the first basic computing unit 110a to produce the actual output; the difference between this actual output and the target output is fed into the learning unit 111, which adjusts parameters such as the network structure and synaptic weights of the second basic computing unit 110b according to this difference. The learning algorithm in the learning unit may be the delta rule, the BP algorithm, simulated annealing, a genetic algorithm, etc.; the algorithm adopted in this embodiment is the BP algorithm. The output of the second basic computing unit 110b may serve as parameters such as the network structure and synaptic weights of the first basic computing unit 110a, or those parameters of the first unit may be adjusted according to the output of the second.
Referring to Fig. 6, in one embodiment of the invention, two basic computing units 110 form a feedback composite unit 120e. The two units are a first basic computing unit 110a and a second basic computing unit 110b; the output of the first is connected to the input of the second, and the computation result of the second is fed back to the first. One of them is an artificial neural network computing unit and the other a spiking neural network computing unit. The system input is processed and output by the first basic computing unit 110a, this output serves as the input of the second basic computing unit 110b, and the output of the second unit is fed back into the first as a feedback value.
In each of the examples above, two basic computing units 110 are combined under a certain topological structure to form a composite computing unit. Further, a larger number of basic computing units 110 can be combined under a topological structure into composite computing units, and composite computing units can in turn be combined under a topological structure into more complex hybrid computing structures, yielding a rich variety of hybrid computing structures. Referring to Fig. 7, the first-layer composite computing unit 120 is a serial hybrid computing structure that decomposes at the second layer into two composite computing units 120 in series; a second-layer composite unit in turn decomposes at the third layer into two composite units 120 in parallel. This decomposition can continue until the last layer decomposes into basic computing units 110, the smallest computing-unit structure. Fig. 8 shows a concrete hybrid artificial neural network and spiking neural network computing structure obtained by this hierarchical design method, comprising series, parallel and feedback structures.
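The hierarchical composition described above, in which basic units form composite units that are themselves recombined, can be sketched with simple combinators. The function names and merge rules below are invented for illustration and stand in for arbitrary computing units:

```python
def series(f, g):
    """Series structure: the output of f feeds the input of g."""
    return lambda x: g(f(x))

def parallel(f, g, merge):
    """Parallel structure: the same input drives f and g; results are merged."""
    return lambda x: merge(f(x), g(x))

def concurrent(f, g, merge):
    """Concurrent structure: independent inputs for f and g; results are merged."""
    return lambda x1, x2: merge(f(x1), g(x2))

# Compose composites out of composites, mirroring the layering of Fig. 7.
double = lambda x: 2 * x
inc = lambda x: x + 1
layer2 = parallel(double, inc, merge=lambda a, b: a + b)   # inner composite unit
layer1 = series(layer2, double)                            # outer composite unit
print(layer1(3))  # (2*3 + (3+1)) * 2 = 20
```

Any of the placeholder lambdas could itself be a `series`, `parallel` or `concurrent` composite, which is exactly the recursive decomposition the patent describes.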
The hybrid artificial neural network and spiking neural network system 100 further comprises at least one format conversion unit arranged between the artificial neural network computing unit and the spiking neural network computing unit; this unit realizes data transmission between computing units of different types. The format conversion unit can convert the numeric output of an artificial neural network computing unit into a spike train, or convert the spike train output by a spiking neural network computing unit into numeric values, ensuring that data can be transmitted between basic computing units 110 of different types.
Further, the hybrid artificial neural network and spiking neural network system 100 may comprise multiple artificial neural network units and multiple spiking neural network units, topologically connected using the topological structures described above.
The hybrid system 100 provided by the first embodiment combines the computing paradigms of the two kinds of neural networks: artificial neural networks are used for computing units that need precise information processing or a complete mathematical description, while spiking neural networks are used for computing units that need fast information processing, complex spatio-temporal signal processing, or simultaneous processing of multi-modal signals (such as audio-visual signals). The result is a system that can perform real-time, multi-modal or complex spatio-temporal signal computation while guaranteeing computational accuracy. For example, in integrated real-time audio-visual processing with this system, the multi-modal, complex spatio-temporal signals comprising images (video) and sound can be fed into a spiking neural network computing unit for preprocessing, quickly reducing the spatio-temporal complexity of the signals or extracting the required spatio-temporal features in real time. The preprocessed data then feed into an artificial neural network computing unit, where a relatively precise artificial neural network can be built by constructing a complete mathematical model or a supervised learning algorithm, guaranteeing the accuracy of the output.
The second embodiment of the invention provides a hybrid artificial neural network and spiking neural network communication method, comprising: judging whether the data types of the sending and receiving basic computing units 110 are consistent; if consistent, transmitting the data directly; if inconsistent, performing a data format conversion that converts the sender's data type into the receiver's data type before transmission. The data type of an artificial neural network is numeric; the data type of a spiking neural network is a spike train.
Specifically, in performing the data format conversion: if the sender is an artificial neural network computing unit and the receiver a spiking neural network computing unit, the numeric output of the artificial neural network is converted into a spike train and fed into the spiking neural network; if the sender is a spiking neural network computing unit and the receiver an artificial neural network computing unit, the spike-train output of the spiking neural network is converted into numeric values and fed into the artificial neural network.
Basic computing units 110 of different types differ in the format of their data inputs and outputs. An artificial neural network operates on numeric values, so its inputs and outputs are numeric; a spiking neural network operates on spike trains, so its inputs and outputs are spike trains. Integrating basic computing units 110 of different types into the same hybrid computing structure requires solving the problem of communication between them. That is, a hybrid communication method is needed so that the numeric output of an artificial neural network computing unit can be accepted by a spiking neural network computing unit, and the spike-train output of a spiking neural network computing unit can be accepted by an artificial neural network computing unit.
Referring to Fig. 9, in one embodiment of the invention the artificial neural network unit is the data sender and the spiking neural network unit is the data receiver. The communication process is: convert the numeric output of the artificial neural network computing unit into a spike train of corresponding frequency and use this spike train as the input of the spiking neural network computing unit, where "corresponding frequency" means that the frequency of the converted spike train is proportional to the magnitude of the numeric value.
In another embodiment of the invention, the spiking neural network unit is the data sender and the artificial neural network unit is the data receiver. The communication process is: convert the spike-train output of the spiking neural network into a numeric value of corresponding magnitude and use this value as the input of the artificial neural network. Depending on the coding scheme of the spiking neural network's spike trains, four cases can be distinguished:
Referring to Fig. 10, when the spike train uses rate coding, the effective information of the network output is represented only by the spike frequency. When the spiking neural network adopts this coding scheme, the rate-coded spike train is converted into a numeric value whose magnitude is proportional to the frequency of the spike train, i.e. the inverse of the artificial-to-spiking communication process described above.
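A minimal sketch of this rate-coded conversion and its inverse follows; the 16-step window, the deterministic spike placement and the scaling are assumptions made for the example:

```python
def value_to_rate_spikes(value, n_steps=16, v_max=1.0):
    """Numeric -> spikes: the number of spikes in the window is proportional to the value."""
    n_spikes = round(n_steps * min(max(value / v_max, 0.0), 1.0))
    spikes = [0] * n_steps
    for i in range(n_spikes):                   # spread the spikes evenly over the window
        spikes[i * n_steps // max(n_spikes, 1)] = 1
    return spikes

def rate_spikes_to_value(spikes, v_max=1.0):
    """Spikes -> numeric: the value is proportional to the spike frequency (inverse process)."""
    return v_max * sum(spikes) / len(spikes)

train = value_to_rate_spikes(0.75)              # 12 spikes in 16 time steps
print(rate_spikes_to_value(train))              # 0.75
```

With this pair of functions the round trip is lossy only through the quantization of the window length, which matches the intuition that rate coding trades precision for window size.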
Referring to Fig. 11, when the spike train uses population coding, the outputs of multiple neurons characterize the same information, and the effective information is the number of neurons emitting a spike at the same instant. When the spiking neural network adopts this coding scheme, the corresponding communication method is: convert the number of neurons emitting a spike at the same instant into a numeric value whose magnitude is proportional to that number.
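A sketch of the population-coded conversion; the proportionality factor and population size are assumptions:

```python
def population_spikes_to_value(spike_frame, scale=1.0):
    """Spikes -> numeric: the value is proportional to the number of neurons
    that emit a spike at the same time step."""
    return scale * sum(spike_frame)

# One time step across an 8-neuron population: 5 neurons fire together.
frame = [1, 0, 1, 1, 0, 1, 0, 1]
print(population_spikes_to_value(frame))  # 5.0
```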
Referring to Fig. 12, when the spike train uses time coding, the effective information is the time at which a neuron emits a spike. When the spiking neural network adopts this coding scheme, the corresponding communication method is: convert the spike emission time into a numeric value whose magnitude is an exponential function of the emission time.
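A sketch of the time-coded conversion. The patent states only an exponential relationship, so the decaying form (earlier spikes map to larger values, a common convention) and the constants here are assumptions:

```python
import math

def spike_time_to_value(t_spike, v_max=1.0, tau=4.0):
    """Spikes -> numeric: the value decays exponentially with the spike emission
    time, so an earlier spike encodes a larger value (convention assumed)."""
    return v_max * math.exp(-t_spike / tau)

for t in (0, 4, 8):
    print(round(spike_time_to_value(t), 3))    # 1.0, then 0.368, then 0.135
```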
Referring to Fig. 13, when the spike train uses binary coding, the effective information is whether a neuron emits a spike within a given period. When the spiking neural network adopts this coding scheme, the corresponding communication method is: if a spike is emitted within the time limit, the numeric value is 1; otherwise the numeric value is 0.
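The binary-coded conversion reduces to testing whether any spike occurred within the window; a sketch:

```python
def binary_spikes_to_value(spikes):
    """Spikes -> numeric: 1 if the neuron emitted any spike within the window, else 0."""
    return 1 if any(spikes) else 0

print(binary_spikes_to_value([0, 0, 1, 0]))  # 1
print(binary_spikes_to_value([0, 0, 0, 0]))  # 0
```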
The hybrid communication method provided by the second embodiment achieves direct communication between artificial neural networks and spiking neural networks, on the basis of which hybrid operation of the two kinds of neural networks can be realized.
Referring to Fig. 14, the third embodiment of the invention provides a multi-modal neuromorphic network core 210a comprising: a mode register 211, an axon input unit 212, a synaptic weight storage unit 213, a dendrite unit 214 and a neuron computing unit 215.
In this embodiment, the multi-modal neuromorphic network core 210a has a input axons, b dendrites and b somas. Each axon is connected to the b dendrites, each connection point is a synapse (a × b synapses in total), and the connection weight is the synaptic weight. Each dendrite corresponds to one soma, so the largest network this core can carry is a inputs × b neurons, where a and b are integers greater than 0 whose values are generally determined by the application. The minimum value of a is 1; its maximum is the largest number of axon inputs a single neuromorphic network core can realize, limited by hardware resources such as storage and logic. The minimum value of b is 1; its maximum is limited both by hardware resources and by the total number of neuron computations each core can perform within a single trigger-signal cycle. In this embodiment, a and b are both 256. It will be understood that in practice the numbers of input axons, dendrites and somas can be adjusted to the specific situation.
The mode register 211 controls the operating mode of the multi-modal neuromorphic network core 210a. It is connected to the axon input unit 212, the dendrite unit 214 and the neuron computing unit 215, and controls whether these units operate in artificial neural network mode or spiking neural network mode. The mode register 211 is 1 bit wide, and its value can be configured by the user.
The axon input unit 212 is connected to the dendrite unit 214; it receives and stores the inputs of the a axons. In this embodiment, each axon has a 16-bit storage unit. In spiking neural network mode, each axon input is a 1-bit spike, and the unit stores the axon inputs of 16 time steps. In artificial neural network mode, each axon input is an 8-bit signed number, and the unit stores the 8-bit axon input of one time step.
Described synapse weight storage unit 213 is connected with described dendron unit 214, and described synapse weight storage unit 213 stores a × b synapse weight, and in the present embodiment, described synapse weight is the signed number of 8.
The input of the dendrite unit 214 is connected to the axon input unit 212 and the synapse weight storage unit 213, and its output is connected to the neuron computing unit 215. The dendrite unit 214 performs the vector-matrix multiplication of the a-dimensional axon input vector with the a × b synapse weight matrix; the b results of this operation are the dendrite inputs of the b neurons. The dendrite unit 214 comprises a dendrite multiply-add unit 214a and a dendrite accumulation unit 214b. When operating in ANN mode, the axon input vector and the synapse weight matrix are fed into the dendrite multiply-add unit 214a for multiply-add operations, and the vector-matrix multiplication is realized with multipliers and adders. When operating in SNN mode, the axon input vector and the synapse weight matrix are fed into the dendrite accumulation unit 214b for accumulation; since each axon input is then a single bit, the vector-matrix multiplication is realized with data selectors and adders.
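The two dendrite operating modes can be sketched as follows. This is an illustrative Python model only, not the hardware implementation; the function names are invented here, and a 4 × 4 weight matrix stands in for the embodiment's 256 × 256.

```python
# Illustrative model of the dendrite unit's two modes (hypothetical names;
# a = b = 4 here instead of the embodiment's 256).

def dendrite_ann(axon_inputs, weights):
    """ANN mode: multiply-add of the a-dim signed input vector with the
    a x b weight matrix, as done by multipliers and adders."""
    a, b = len(weights), len(weights[0])
    return [sum(axon_inputs[i] * weights[i][j] for i in range(a))
            for j in range(b)]

def dendrite_snn(spikes, weights):
    """SNN mode: axon inputs are 1-bit spikes, so a data selector plus an
    adder suffices -- accumulate only the weights of activated synapses."""
    a, b = len(weights), len(weights[0])
    return [sum(weights[i][j] for i in range(a) if spikes[i])
            for j in range(b)]

weights = [[1, -2, 3, 0],
           [0,  1, 1, 2],
           [2,  0, 1, 1],
           [1,  1, 0, 3]]
print(dendrite_ann([5, -1, 2, 0], weights))  # per-neuron multiply-add results
print(dendrite_snn([1, 0, 1, 1], weights))   # sums of weights on active axons
```

Both functions produce one value per dendrite (i.e. per neuron), matching the vector-matrix product described above.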
The neuron computing unit 215 performs the neuron computations and comprises a first computing unit 215a and a second computing unit 215b. When operating in ANN mode, the multiply-add results from the dendrite multiply-add unit 214a are fed into the first computing unit 215a for artificial neural network computation. In the present embodiment the neuron realizes an arbitrary nonlinear activation function through a 1024 × 8 look-up table, and the output is an 8-bit value. When operating in SNN mode, the accumulation results from the dendrite accumulation unit 214b are fed into the second computing unit 215b for spiking neural network computation. The neuron in the present embodiment is a leaky integrate-and-fire (LIF) model whose outputs are a spike signal and the current membrane potential.
The multi-modal neuromorphic network core 210a further comprises a trigger signal counter 216, which receives the trigger signal and records the trigger signal count, i.e. the current time step. The trigger signal is a fixed-period clock signal; in the present embodiment the period is 1 ms.
The multi-modal neuromorphic network core 210a further comprises a controller 217. The controller 217 is connected to the axon input unit 212, the synapse weight storage unit 213, the dendrite unit 214 and the neuron computing unit 215, and controls the operation timing of these units. The controller 217 is also responsible for starting and stopping the multi-modal neuromorphic network core 210a: it starts the computation of the core 210a and controls the computation flow on the rising edge of the trigger signal.
The spiking neural network operating mode and the artificial neural network operating mode of the multi-modal neuromorphic network core 210a are introduced below.
Referring to Figure 15, the multi-modal neuromorphic network core 210a operates in ANN mode. In this mode each axon input received by the axon input unit 212 is a 16-bit data packet comprising an 8-bit target axon index (0 to 255) and an 8-bit input (a signed number, −128 to 127). The memory of the axon input unit 212 is then 256 × 8 bits, recording the 8-bit inputs of the 256 axons.
The synapse weight storage unit 213 stores 256 × 256 synapse weights, each an 8-bit signed number.
The dendrite multiply-add unit 214a comprises multipliers and adders. In every time step it computes, for each neuron, the product of the 256-dimensional synapse weight vector on its dendrite with the 256-dimensional axon input vector, and passes the result to the first computing unit 215a as its input.
The first computing unit 215a computes the nonlinear or linear activation function of the artificial neural network neurons. Before use, the 8-bit output corresponding to every input value from 0 to 1023 is computed according to the activation function used by the neuromorphic network, and the results are stored in the look-up table of the neuron computing unit. In operation, the dendrite multiply-add result from the dendrite multiply-add unit 214a is used as the address input of the look-up table, and the 8-bit output stored at that address is the neuron output.
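The precompute-then-look-up scheme can be sketched like this. The mapping from table address to activation argument and the clamping of the multiply-add result to the table range are assumptions made for illustration; the patent only states that the 1024-entry table is filled before operation and addressed by the multiply-add result.

```python
import math

def build_lut(activation, size=1024):
    """Precompute an 8-bit output for every address 0..1023, as described
    above.  The address-to-input scaling below is an assumption."""
    lut = []
    for addr in range(size):
        x = (addr - size // 2) / 128.0            # assumed scaling
        y = activation(x)
        lut.append(max(-128, min(127, int(round(y * 127)))))
    return lut

lut = build_lut(math.tanh)                        # any nonlinear function works

def neuron_output(mac_result, lut):
    """Use the dendrite multiply-add result as the LUT address (clamped to
    the table range, an assumption) and return the stored 8-bit output."""
    addr = max(0, min(len(lut) - 1, mac_result))
    return lut[addr]
```

Because the table is filled offline, the runtime cost per neuron is a single memory read, regardless of how complex the activation function is.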
Referring to Figure 16, in ANN mode one time step of the multi-modal neuromorphic network core 210a comprises the following steps:
S11: upon detecting the trigger signal, the trigger signal counter 216 cyclically increments by 1 and the controller 217 starts the multi-modal neuromorphic network core 210a;
S12: the axon input unit 212 reads the axon input vector of the current time step according to the value of the trigger signal counter 216 and sends it to the dendrite multiply-add unit 214a; the synapse weight storage unit 213 reads the dendrite synapse weight vectors of neurons No. 1 to No. 256 in turn and sends them to the dendrite multiply-add unit 214a;
S13: the dendrite multiply-add unit 214a computes, in input order, the products of the axon input vector with the dendrite synapse weight vectors of neurons No. 1 to No. 256, and sends them to the first computing unit 215a;
S14: the first computing unit 215a uses the output of the dendrite multiply-add unit 214a as the look-up table address and looks up the neuron output;
S15: the controller 217 stops the multi-modal neuromorphic network core 210a, and the flow returns to step S11.
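Steps S12 to S14 can be condensed into one sketch (illustrative only; names are invented, the clamping of the multiply-add result to the table range is an assumption, and a toy 8-entry table stands in for the 1024-entry one):

```python
def ann_time_step(axon_inputs, weight_columns, lut):
    """One ANN-mode time step: for each neuron in order, multiply-add its
    dendrite weight vector with the axon input vector (S12-S13), then use
    the result as the LUT address to get the neuron output (S14)."""
    outputs = []
    for w in weight_columns:                     # one weight vector per neuron
        mac = sum(x * wi for x, wi in zip(axon_inputs, w))
        addr = max(0, min(len(lut) - 1, mac))    # assumed clamping
        outputs.append(lut[addr])
    return outputs

toy_lut = [v * 10 for v in range(8)]             # toy 8-entry table
print(ann_time_step([1, 2], [[1, 1], [2, 0]], toy_lut))
```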
Referring to Figure 17, the multi-modal neuromorphic network core 210a operates in SNN mode. In this mode each axon input received by the axon input unit 212 is a 12-bit data packet comprising an 8-bit target axon index (0 to 255) and 4 bits of delay data (0 to 15). The delay data represents the difference between the time step at which the input takes effect and the current time step. The memory of the axon input unit 212 is then 256 × 16 bits, recording the inputs of the 256 axons for the current and following time steps, 16 in total. If a bit is 1, the corresponding axon is activated at the corresponding time step, i.e. its input is 1; if a bit is 0, the corresponding axon is not activated at that time step, i.e. its input is 0.
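The 256 × 16-bit memory with per-packet delay can be modeled as a circular buffer over 16 time-step slots. This is an illustrative sketch; the class name is invented, and treating the 16 slots as wrapping around and clearing each slot after it is consumed are assumptions not stated in the patent.

```python
class AxonSpikeBuffer:
    """Sketch of the SNN-mode axon input memory: one bit per (axon, time
    step); a packet (axon, delay) sets the bit 'delay' steps ahead."""
    def __init__(self, n_axons=256, n_steps=16):
        self.n_steps = n_steps
        self.bits = [[0] * n_steps for _ in range(n_axons)]
        self.t = 0                               # current time-step slot

    def receive(self, axon, delay):
        # Mark the axon active 'delay' time steps from now (0 = this step).
        self.bits[axon][(self.t + delay) % self.n_steps] = 1

    def current_inputs(self):
        # The 1-bit spike vector for the current time step.
        return [row[self.t] for row in self.bits]

    def advance(self):
        for row in self.bits:                    # clear the consumed slot
            row[self.t] = 0
        self.t = (self.t + 1) % self.n_steps
```

For example, a packet with delay 3 received now only appears in `current_inputs()` after three `advance()` calls.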
The synapse weight storage unit 213 stores 256 × 256 synapse weights, each an 8-bit signed number.
The dendrite accumulation unit 214b comprises data selectors and adders. It computes in turn, for every dendrite, the sum of the weights of all synapses activated in the current time step, and passes the result to the second computing unit 215b of the neuron computing unit 215 as its input.
The second computing unit 215b performs the spiking neural network computation; the neuron in the present embodiment is a leaky integrate-and-fire (LIF) model. The second computing unit 215b further comprises a dendrite expansion storage unit 2151, a parameter storage unit 2152 and an integrate-leak computing unit 2153. In each time step the second computing unit 215b runs 256 times in succession, realizing the 256 neuron computations by time multiplexing. The dendrite expansion storage unit 2151 comprises 256 storage cells and receives externally sent dendrite expansion input packets, each comprising the membrane potential value of a sending neuron and a target neuron index; the membrane potential value is stored into the cell corresponding to the neuron index. The parameter storage unit 2152 comprises 256 storage cells holding the membrane potentials, thresholds and leak values of the 256 neurons. The integrate-leak computing unit 2153 performs the integrate-leak-fire operation for each neuron in turn, and outputs a spike signal and the current membrane potential value when the membrane potential exceeds the positive threshold. The integrate-leak-fire operation is as follows:
membrane potential = previous membrane potential + dendrite input + dendrite expansion input − leak value
If the membrane potential is greater than the positive threshold, a spike signal and the membrane potential value are emitted and the membrane potential is reset. If the membrane potential is less than the negative threshold, no signal is emitted and the membrane potential is reset.
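The integrate-leak-fire rule above can be written out directly. This is a sketch under one stated assumption: the patent says the membrane potential "is reset" without giving the reset value, so a configurable `reset` value (default 0) is assumed here.

```python
def lif_step(v, dendrite_in, expand_in, leak, pos_thresh, neg_thresh, reset=0):
    """One leaky integrate-and-fire update, per the rule above.
    Returns (new_membrane_potential, spike)."""
    v = v + dendrite_in + expand_in - leak       # integrate and leak
    if v > pos_thresh:
        return reset, 1          # fire: emit spike, reset membrane potential
    if v < neg_thresh:
        return reset, 0          # below negative threshold: reset, no spike
    return v, 0
```

In the embodiment this update would be applied to each of the 256 neurons in turn, reading each neuron's potential, thresholds and leak value from the parameter storage unit.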
Referring to Figure 18, in SNN mode one time step of the multi-modal neuromorphic network core 210a comprises the following steps:
S21: upon detecting the trigger signal, the trigger signal counter 216 cyclically increments by 1 and the controller 217 starts the multi-modal neuromorphic network core 210a;
S22: the axon input unit 212 reads the axon input vector of the current time step according to the value of the trigger signal counter 216 and sends it to the dendrite accumulation unit 214b; the synapse weight storage unit 213 reads the dendrite synapse weight vectors of neurons No. 1 to No. 256 in turn and sends them to the dendrite accumulation unit 214b;
S23: the dendrite accumulation unit 214b computes, in input order, the products of the axon input vector with the dendrite synapse weight vectors of neurons No. 1 to No. 256, and sends them to the second computing unit 215b;
S24: the second computing unit 215b reads the parameters of neurons No. 1 to No. 256 in turn from the parameter storage unit 2152, computes with the input values from the dendrite accumulation unit 214b, and outputs the spike signals and current membrane potentials;
S25: after the neuron computation finishes, the controller 217 stops the multi-modal neuromorphic network core 210a, and the flow returns to step S21.
Further, two kinds of single-mode neuromorphic network cores can be obtained on the basis of the multi-modal neuromorphic network core 210a provided by the present embodiment. A single-mode neuromorphic network core operating in ANN mode comprises: the axon input unit 212, the synapse weight storage unit 213, the dendrite multiply-add unit 214a and the first computing unit 215a. A single-mode neuromorphic network core operating in SNN mode comprises: the axon input unit 212, the synapse weight storage unit 213, the dendrite accumulation unit 214b and the second computing unit 215b. Both single-mode cores are corresponding simplifications of the third embodiment; for the concrete connection structure and function of their internal units, reference may be made to the third embodiment of the invention.
It can be appreciated that the multi-modal neuromorphic network core 210a and the two single-mode neuromorphic network cores provided by the third embodiment of the invention can serve as the basic computing elements 110 in the hybrid system 100 of artificial and spiking neural networks of the first embodiment of the invention.
The multi-modal neuromorphic network core 210a provided by the third embodiment of the invention can perform both artificial neural network computation and spiking neural network computation, and can switch between the ANN and SNN operating modes as required, so that real-time, multi-modal or complex spatio-temporal signal computations can be carried out while computational accuracy is guaranteed.
Referring to Figure 19, the fourth embodiment of the invention provides a hybrid system 200 of artificial and spiking neural networks, comprising multiple neuromorphic network cores 210 and multiple routing nodes 220 in one-to-one correspondence with them. The routing nodes 220 form an m × n mesh routing network, where m and n are integers greater than 0. The row direction of this m × n array is defined as the X direction and the column direction as the Y direction, and every pair of neuromorphic network core 210 and routing node 220 has a unique local XY coordinate.
The one-to-one correspondence between neuromorphic network cores 210 and routing nodes 220 means that each neuromorphic network core 210 corresponds to one routing node 220 and each routing node 220 corresponds to one neuromorphic network core 210. For a mutually corresponding pair, the neuromorphic network core 210 is called the local neuromorphic network core of the routing node 220, and the routing node 220 is called the local routing node of the neuromorphic network core 210. All inputs of a neuromorphic network core 210 come from its local routing node, and its neuron computation results are likewise sent to this local routing node, from which they are output or forwarded through the routing network to a target neuromorphic network core.
In the present embodiment there are 9 neuromorphic network cores 210 and 9 routing nodes 220, the routing nodes 220 forming a 3 × 3 mesh routing network.
A neuromorphic network core 210 can be a single-mode neuromorphic network core or a multi-modal neuromorphic network core, such as the multi-modal neuromorphic network core 210a provided by the third embodiment of the invention. A single-mode neuromorphic network core can operate only in ANN mode or only in SNN mode; a multi-modal neuromorphic network core has both operating modes and can switch between them by configuring an internal parameter, each multi-modal core being able to configure its operating mode independently.
A neuromorphic network core 210 has multiple neuron units and multiple axon inputs. In the present embodiment each neuromorphic network core 210 has 256 neuron units and 256 axon inputs and can carry a neural network computation of up to 256 neurons. If the scale of the neural network to be realized exceeds 256 neurons, multiple neuromorphic network cores 210 must be topologically connected through the routing network, each core 210 carrying part of the computation and all of them together composing one large neural network. The hybrid system 200 provided by the present embodiment has 9 neuromorphic network cores 210 and can carry neural network computations of up to 2304 neurons; these computations can be artificial neural network computations, spiking neural network computations, or hybrid computations mixing the two.
When a neuromorphic network core 210 operates in ANN mode, the computation results of its 256 neuron units are 8-bit values; when it operates in SNN mode, the computation results of its 256 neuron units are 1-bit spikes. In either mode the neuron unit computation results can be sent directly to the local routing node of the neuromorphic network core 210.
The routing nodes 220 form a mesh routing network that carries the data transmission of the system. This transmission comprises system input, system output and transmission between neuromorphic network cores 210; the latter further divides into transmission between cores of the same mode and transmission between cores of different modes. Any routing node 220 in the routing network can be denoted by a unique, determined XY coordinate. In the present embodiment the 9 routing nodes 220 form a 3 × 3 array whose row direction is defined as the X direction and whose column direction as the Y direction; each routing node 220 can exchange data directly with its neighbors in the positive X, negative X, positive Y and negative Y directions, forming a mesh topology. It can be appreciated that besides this mesh structure, other common structures such as a star or bus topology can also be used.
The data transmitted by the routing network comprises system input data, system output data and data transmitted between neuromorphic network cores 210, all transmitted according to a preset routing rule. In the present embodiment the routing rule is: data travels along the X direction first, and only after reaching the routing node with the target X coordinate does it travel along the Y direction, until it reaches the routing node with the target XY coordinate. If (x0, y0) denotes the coordinate of the starting routing node and (x1, y1) that of the target routing node, the route is: (x0, y0) → (x1, y0) → (x1, y1). It can be appreciated that in practical applications a different routing rule can be set as circumstances require.
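The X-first, then-Y routing rule (dimension-ordered routing) can be sketched as a function that enumerates the nodes a packet traverses; the function name is illustrative.

```python
def xy_route(src, dst):
    """Dimension-ordered route (x0,y0) -> (x1,y0) -> (x1,y1): travel along
    X until the target X coordinate is reached, then along Y.
    Returns the full list of node coordinates traversed."""
    (x0, y0), (x1, y1) = src, dst
    path = [(x0, y0)]
    x, y = x0, y0
    while x != x1:                      # X direction first
        x += 1 if x1 > x else -1
        path.append((x, y))
    while y != y1:                      # then Y direction
        y += 1 if y1 > y else -1
        path.append((x, y))
    return path

print(xy_route((0, 0), (2, 1)))   # path (0,0) -> (1,0) -> (2,0) -> (2,1)
```

Note that exactly one path exists between any source and target under this rule, which keeps the per-node routing decision trivial.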
The realization of system input, system output and transmission between neuromorphic network cores 210 is introduced below.
System input data is first fed into any routing node on the outermost edge of the routing network, and is then sent through the routing network to the target neuromorphic network core according to the routing rule above.
For system output, the computation result of a neuromorphic network core 210 is first sent to its local routing node, travels through the routing network according to the routing rule above to any routing node on the outermost edge, and is sent by that routing node out of the system, completing the system output.
For transmission between neuromorphic network cores 210, the computation result of a core 210 is first sent to its local routing node, travels through the routing network according to the routing rule above to the target routing node, and is then sent by that target routing node to its local neuromorphic network core, completing the data transmission between neuromorphic network cores 210.
Referring to Figure 20, a routing node 220 comprises a routing table with multiple storage cells, each cell corresponding to one neuron unit of the local neuromorphic network core. Each storage cell stores the XY coordinate address of the destination neuromorphic network core of its neuron unit's output, the target axon index and the delay data. In the present embodiment a neuromorphic network core 210 comprises 256 neuron units, so the routing table also has 256 storage cells.
In the present embodiment, the system input data, the system output data and the data communicated between neuromorphic network cores 210 are all transmitted between routing nodes 220 in the form of 32-bit route data packets. Figure 21 shows the format of such a route data packet: a 6-bit target core X-direction address, a 6-bit target core Y-direction address, a 4-bit axon delay, an 8-bit target axon index and 8 bits of data, 32 bits in total. The 4-bit axon delay is effective when the target neuromorphic network core operates in SNN mode, and the 8 bits of data are the neuron output in ANN mode.
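Packing and unpacking of the 32-bit route data packet can be sketched with shifts and masks. The field widths are as stated above; the bit ordering (X address in the most significant bits) is an assumption, since the patent lists the fields without fixing their positions in the word.

```python
def pack_packet(x, y, delay, axon, data):
    """Pack a 32-bit route packet: 6-bit X address, 6-bit Y address,
    4-bit axon delay, 8-bit target axon index, 8-bit data.
    Bit layout (X in the high bits) is an assumption."""
    assert 0 <= x < 64 and 0 <= y < 64 and 0 <= delay < 16
    assert 0 <= axon < 256 and 0 <= data < 256
    return (x << 26) | (y << 20) | (delay << 16) | (axon << 8) | data

def unpack_packet(word):
    """Recover (x, y, delay, axon, data) from a packed 32-bit word."""
    return ((word >> 26) & 0x3F, (word >> 20) & 0x3F,
            (word >> 16) & 0xF, (word >> 8) & 0xFF, word & 0xFF)
```

The 20-bit packet delivered to a local core (delay, axon index, data) then corresponds to the low 20 bits of this word.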
The data packet a routing node 220 delivers to its local neuromorphic network core consists of the 4-bit axon delay, the 8-bit target axon index and the 8 bits of data extracted from the 32-bit route data packet, 20 bits in total. A neuromorphic network core 210 operating in ANN mode, upon receiving a packet from its local routing node, takes the 8 bits of data as the input of the axon with the corresponding index. A core 210 operating in SNN mode, upon receiving a packet from its local routing node, sets the input of the axon with the corresponding index to 1 after the corresponding delay.
Referring to Figure 22, the workflow of a routing node 220 comprises:
S1: the routing node 220 receives a neuron computation result from its local neuromorphic network core;
S2: the routing node 220 reads the routing information of the corresponding neuron from the routing table and combines this routing information with the neuron computation result into a route data packet;
S3: the routing node 220 judges the sending direction of the route data packet and sends it according to the judgment.
In step S1, in ANN mode the neuron computation result is output data; in SNN mode the neuron computation result is a spike.
In step S3, the route data packet contains the relative address x of the target neuromorphic network core in the X direction and its relative address y in the Y direction. The routing node 220 judges the sending direction from the values of x and y, specifically: when x > 0 the packet is sent to the adjacent routing node in the positive X direction; when x < 0, to the adjacent node in the negative X direction; when y > 0, to the adjacent node in the positive Y direction; when y < 0, to the adjacent node in the negative Y direction; when x = y = 0 the packet is delivered directly to the local neuromorphic network core of this routing node 220. When the next routing node receives a route data packet, it revises the relative address in the packet, specifically: if x < 0, the revised relative address is x' = x + 1; if x > 0, x' = x − 1; if y < 0, y' = y + 1; if y > 0, y' = y − 1. The relative address is thus revised once at every routing node 220 traversed, until x = y = 0.
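The per-hop decision and relative-address revision just described can be sketched as a single function (illustrative name; direction labels are shorthand for the four neighbor ports):

```python
def hop(x, y):
    """Decide the send direction from the relative address (x, y) and
    return (direction, x', y') after one hop, per the rules above.
    X is exhausted before Y, matching the X-first routing order."""
    if x > 0:
        return '+X', x - 1, y
    if x < 0:
        return '-X', x + 1, y
    if y > 0:
        return '+Y', x, y - 1
    if y < 0:
        return '-Y', x, y + 1
    return 'local', 0, 0         # x = y = 0: deliver to the local core

# A packet with relative address (2, -1) takes two +X hops, then one -Y hop,
# then is delivered locally.
```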
In the present embodiment the judgment and transmission in the X direction are carried out first; only after the X-direction transmission finishes, i.e. x' = 0, are the judgment and transmission in the Y direction carried out, until the route data packet is delivered to the target neuromorphic network core.
The hybrid system 200 of artificial and spiking neural networks provided by the fourth embodiment of the invention combines the computation schemes of both kinds of neural network: it possesses the complex spatio-temporal signal processing capability of spiking neural networks while making full use of the rich and powerful computing capability of artificial neural networks, so that real-time, multi-modal or complex spatio-temporal signal computations can be carried out while computational accuracy is guaranteed.
In addition, those skilled in the art may make other changes within the spirit of the present invention; such changes made according to the spirit of the present invention shall all be included within the scope claimed by the present invention.

Claims (11)

1. A multi-modal neuromorphic network core, characterized by comprising: a mode register, an axon input unit, a synapse weight storage unit, a dendrite unit and a neuron computing unit;
the mode register is connected to the axon input unit, the dendrite unit and the neuron computing unit, and controls whether these units operate in artificial neural network mode or spiking neural network mode;
the axon input unit is connected to the dendrite unit, and receives and stores the axon inputs;
the synapse weight storage unit is connected to the dendrite unit and stores the synapse weight matrix;
the dendrite unit is connected to the neuron computing unit and comprises a dendrite multiply-add unit and a dendrite accumulation unit; when operating in artificial neural network mode, the axon input vector and the synapse weight matrix are fed into the dendrite multiply-add unit for multiply-add operations; when operating in spiking neural network mode, the axon input vector and the synapse weight matrix are fed into the dendrite accumulation unit for accumulation operations;
the neuron computing unit comprises a first computing unit and a second computing unit; when operating in artificial neural network mode, the multiply-add results sent by the dendrite multiply-add unit are fed into the first computing unit for artificial neural network computation; when operating in spiking neural network mode, the accumulation results sent by the dendrite accumulation unit are fed into the second computing unit for spiking neural network computation.
2. The multi-modal neuromorphic network core as claimed in claim 1, characterized in that it further comprises a trigger signal counter connected to the axon input unit, which receives the trigger signal, records the trigger signal count, i.e. the current time step, and sends this count to the axon input unit.
3. The multi-modal neuromorphic network core as claimed in claim 1, characterized in that it further comprises a controller connected to the axon input unit, the synapse weight storage unit, the dendrite unit and the neuron computing unit, which controls the operation timing of these connected units and controls the startup and termination of the multi-modal neuromorphic network core.
4. The multi-modal neuromorphic network core as claimed in claim 1, characterized in that the dendrite multiply-add unit comprises multipliers and adders for computing the product of the synapse weight vector on each dendrite with the axon input vector.
5. The multi-modal neuromorphic network core as claimed in claim 1, characterized in that the first computing unit comprises a look-up table, the multiply-add result being fed into the look-up table as its address, and the output stored at that address being the neuron output.
6. The multi-modal neuromorphic network core as claimed in claim 5, characterized in that, in artificial neural network mode, one time step of the multi-modal neuromorphic network core comprises the following steps:
S11: upon detecting the trigger signal, the trigger signal counter cyclically increments by 1 and the controller starts the multi-modal neuromorphic network core;
S12: the axon input unit reads the axon input vector of the current time step according to the value of the trigger signal counter and sends it to the dendrite multiply-add unit; the synapse weight storage unit reads the dendrite synapse weight vectors of neurons 1 to n in turn and sends them to the dendrite multiply-add unit;
S13: the dendrite multiply-add unit computes, in input order, the products of the axon input vector with the dendrite synapse weight vectors of neurons 1 to n, and sends them to the first computing unit;
S14: the first computing unit uses the output of the dendrite multiply-add unit as the look-up table address and looks up the neuron output;
S15: the controller stops the multi-modal neuromorphic network core, and the flow returns to step S11.
7. The multi-modal neuromorphic network core as claimed in claim 1, characterized in that the dendrite accumulation unit comprises data selectors and adders, and computes in turn, for every dendrite, the sum of the weights of all synapses activated in the current time step.
8. The multi-modal neuromorphic network core as claimed in claim 1, characterized in that the neuron in the second computing unit is a leaky integrate-and-fire model, the second computing unit comprising: a dendrite expansion storage unit, a parameter storage unit and an integrate-leak computing unit; the dendrite expansion storage unit is connected to the integrate-leak computing unit and stores the membrane potential values of sending neurons; the parameter storage unit is connected to the integrate-leak computing unit and stores the membrane potentials, thresholds and leak values of the neurons; and the integrate-leak computing unit performs the integrate-leak-fire operation.
9. The multi-modal neuromorphic network core of claim 8, wherein, in spiking neural network mode, the operation flow of the multi-modal neuromorphic network core within one time step comprises the following steps:
S21: upon detection of a trigger signal, the trigger signal counter cyclically increments by 1, and the controller starts the multi-modal neuromorphic network core;
S22: the axon input unit reads the axon input vector of the current time step according to the value of the trigger signal counter and sends it to the dendrite summation unit, while the synapse weight storage unit reads the dendrite synapse weight vectors of neurons 1 to n in turn and sends them to the dendrite summation unit;
S23: the dendrite summation unit computes, in input order, the products of the axon input vector and the dendrite synapse weight vectors of neurons 1 to n, and sends the results to the second computing unit;
S24: the second computing unit reads the parameters of neurons 1 to n in turn from the parameter storage unit, performs the computation with the input values from the dendrite summation unit, and outputs the spike signals and the current membrane potentials;
S25: the controller stops the multi-modal neuromorphic network core and returns to step S21.
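Steps S21-S25 amount to one matrix-vector pass followed by a neuron update per time step. A software sketch under assumed data layouts (a binary axon-input matrix with one row per time step, and an n x a weight matrix with one row per neuron's dendrite); all names are illustrative and this models, rather than reproduces, the claimed hardware:

```python
import numpy as np

def snn_time_step(step_counter, axon_inputs, weights, v, leak, threshold):
    """One spiking-mode time step over n neurons (cf. steps S21-S25)."""
    step_counter += 1                                 # S21: trigger signal counter increments
    x = axon_inputs[step_counter % len(axon_inputs)]  # S22: axon input vector for this step
    dendrite_sums = weights @ x                       # S23: weight-vector products, neuron by neuron
    v = v + dendrite_sums - leak                      # S24: integrate-leak update,
    spikes = (v >= threshold).astype(int)             #      fire where the threshold is reached,
    v = np.where(spikes == 1, 0.0, v)                 #      and reset the fired neurons
    return spikes, v, step_counter                    # S25: the core then halts until the next trigger
```

One call processes all n neurons of the core for a single time step; repeated triggering advances through the rows of `axon_inputs`.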
10. The multi-modal neuromorphic network core of claim 1, wherein the multi-modal neuromorphic network core has a axon inputs, b dendrites and b neurons; each axon is connected to all b dendrites, each connection point being a synapse, for a × b synapses in total; each dendrite corresponds to one neuron, where a and b are both integers greater than 0.
11. The multi-modal neuromorphic network core of claim 8, wherein a = b = 256.
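The crossbar topology of claim 10 (a axons crossing b dendrites, one synapse per crossing, one neuron per dendrite) maps naturally onto a matrix-vector product. A sketch with small illustrative sizes and placeholder random weights; the instance of claim 11 would use a = b = 256:

```python
import numpy as np

a, b = 4, 3                          # a axon inputs, b dendrites / neurons
rng = np.random.default_rng(0)
synapses = rng.normal(size=(b, a))   # entry [j, i]: the synapse where axon i crosses dendrite j
axon_in = np.array([1, 0, 1, 1])     # spike pattern arriving on the a axons

# Each dendrite j sums the weights of its active synapses, yielding one
# summed input per neuron; the crossbar holds a * b synapses in total.
dendrite_sums = synapses @ axon_in
```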
CN201510419465.5A 2015-07-16 2015-07-16 A kind of multi-modal neuromorphic network core Active CN105095967B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510419465.5A CN105095967B (en) 2015-07-16 2015-07-16 A kind of multi-modal neuromorphic network core

Publications (2)

Publication Number Publication Date
CN105095967A true CN105095967A (en) 2015-11-25
CN105095967B CN105095967B (en) 2018-02-16

Family

ID=54576341

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510419465.5A Active CN105095967B (en) 2015-07-16 2015-07-16 A kind of multi-modal neuromorphic network core

Country Status (1)

Country Link
CN (1) CN105095967B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6269352B1 (en) * 1995-10-13 2001-07-31 Stmicroelectronics S.R.L. Low-voltage, very-low-power conductance mode neuron
CN1516070A (en) * 2003-01-08 2004-07-28 剑 王 Associative memory neural network
US20090313195A1 (en) * 2008-06-17 2009-12-17 University Of Ulster Artificial neural network architecture
CN102457419A (en) * 2010-10-22 2012-05-16 中国移动通信集团广东有限公司 Method and device for optimizing transmission network route based on neural network model


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Hu Jun et al., "A Spike-Timing-Based Integrated Model", Neural Computation *
Li Guoqi et al., "Model-Based Online Learning With Kernels", IEEE Transactions on Neural Networks and Learning Systems *

Cited By (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109242094A (en) * 2016-01-20 2019-01-18 北京中科寒武纪科技有限公司 Device and method for executing artificial neural network forward operation
CN105719000A (en) * 2016-01-21 2016-06-29 广西师范大学 Neuron hardware structure and method of simulating pulse neural network by adopting neuron hardware structure
CN105719000B (en) * 2016-01-21 2018-02-16 广西师范大学 A kind of neuron hardware unit and the method with this unit simulation impulsive neural networks
CN108475214A (en) * 2016-03-28 2018-08-31 谷歌有限责任公司 adaptive artificial neural network selection technique
WO2017168275A1 (en) * 2016-03-31 2017-10-05 International Business Machines Corporation Energy-efficient time-multiplexed neurosynaptic core for implementing neural networks
US10990872B2 (en) 2016-03-31 2021-04-27 International Business Machines Corporation Energy-efficient time-multiplexed neurosynaptic core for implementing neural networks spanning power- and area-efficiency
GB2557780B (en) * 2016-03-31 2022-02-09 Ibm Energy-efficient time-multiplexed neurosynaptic core for implementing neural networks
GB2557780A (en) * 2016-03-31 2018-06-27 Ibm Energy-efficient time-multiplexed neurosynaptic core for implementing neural networks
CN108510064B (en) * 2016-04-18 2021-12-10 中国科学院计算技术研究所 Processing system and method for artificial neural network comprising multiple core processing modules
CN108510064A (en) * 2016-04-18 2018-09-07 中国科学院计算技术研究所 The processing system and method for artificial neural network including multiple cores processing module
CN107369108A (en) * 2016-05-11 2017-11-21 耐能有限公司 Multilayer artificial neural networks and its control method
CN106022468B (en) * 2016-05-17 2018-06-01 成都启英泰伦科技有限公司 the design method of artificial neural network processor integrated circuit and the integrated circuit
CN106056211A (en) * 2016-05-25 2016-10-26 清华大学 Neuron computing unit, neuron computing module and artificial neural network computing core
CN106056211B (en) * 2016-05-25 2018-11-23 清华大学 Neuron computing unit, neuron computing module and artificial neural networks core
CN106201651A (en) * 2016-06-27 2016-12-07 鄞州浙江清华长三角研究院创新中心 The simulator of neuromorphic chip
CN109643392A (en) * 2016-09-07 2019-04-16 罗伯特·博世有限公司 The method of the neuronal layers of multilayer perceptron model is calculated using simplified activation primitive
CN109564637B (en) * 2016-09-30 2023-05-12 国际商业机器公司 Methods, systems, and media for scalable streaming synaptic supercomputers
CN109564637A (en) * 2016-09-30 2019-04-02 国际商业机器公司 Expansible stream cynapse supercomputer for extreme handling capacity neural network
CN106875005B (en) * 2017-01-20 2019-09-20 清华大学 Adaptive threshold neuronal messages processing method and system
CN106875005A (en) * 2017-01-20 2017-06-20 清华大学 Adaptive threshold neuronal messages processing method and system
CN106909969A (en) * 2017-01-25 2017-06-30 清华大学 Neural network message receiving method and system
CN106845633A (en) * 2017-01-25 2017-06-13 清华大学 Neural network information conversion method and system
CN106845632A (en) * 2017-01-25 2017-06-13 清华大学 Impulsive neural networks information is converted to the method and system of artificial neural network information
CN106845633B (en) * 2017-01-25 2021-07-09 北京灵汐科技有限公司 Neural network information conversion method and system
CN106845632B (en) * 2017-01-25 2020-10-16 清华大学 Method and system for converting impulse neural network information into artificial neural network information
CN106971229A (en) * 2017-02-17 2017-07-21 清华大学 Neural computing nuclear information processing method and system
CN106971228A (en) * 2017-02-17 2017-07-21 清华大学 Neuronal messages sending method and system
CN106971228B (en) * 2017-02-17 2020-04-07 北京灵汐科技有限公司 Method and system for sending neuron information
CN106971229B (en) * 2017-02-17 2020-04-21 清华大学 Neural network computing core information processing method and system
US10909449B2 (en) 2017-04-14 2021-02-02 Samsung Electronics Co., Ltd. Monolithic multi-bit weight cell for neuromorphic computing
CN107657312A (en) * 2017-09-18 2018-02-02 东南大学 Towards the two-value real-time performance system of voice everyday words identification
CN111971693A (en) * 2018-04-27 2020-11-20 国际商业机器公司 Central scheduler and instruction dispatcher for neuro-inference processor
CN110874633A (en) * 2018-09-03 2020-03-10 三星电子株式会社 Neuromorphic methods and apparatus with multi-site neuromorphic manipulation
WO2020134824A1 (en) * 2018-12-29 2020-07-02 北京灵汐科技有限公司 Brain-like computing system
US11461626B2 (en) 2019-02-25 2022-10-04 Lynxi Technologies Co., Ltd. Brain-like computing chip and computing device
CN109901878B (en) * 2019-02-25 2021-07-23 北京灵汐科技有限公司 Brain-like computing chip and computing equipment
CN109901878A (en) * 2019-02-25 2019-06-18 北京灵汐科技有限公司 One type brain computing chip and calculating equipment
CN110263926A (en) * 2019-05-18 2019-09-20 南京惟心光电系统有限公司 Impulsive neural networks and its system and operation method based on photoelectricity computing unit
CN110322010B (en) * 2019-07-02 2021-06-25 深圳忆海原识科技有限公司 Pulse neural network operation system and method for brain-like intelligence and cognitive computation
CN110322010A (en) * 2019-07-02 2019-10-11 深圳忆海原识科技有限公司 The impulsive neural networks arithmetic system and method calculated for class brain intelligence with cognition
CN112308219A (en) * 2019-07-26 2021-02-02 爱思开海力士有限公司 Method of performing arithmetic operation and semiconductor device performing arithmetic operation
CN111565152A (en) * 2020-03-27 2020-08-21 中国人民解放军国防科技大学 Brain-like chip routing system data communication method based on routing domain division
CN111667064A (en) * 2020-04-22 2020-09-15 南京惟心光电系统有限公司 Hybrid neural network based on photoelectric computing unit and operation method thereof
CN111667064B (en) * 2020-04-22 2023-10-13 南京惟心光电系统有限公司 Hybrid neural network based on photoelectric computing unit and operation method thereof
CN112966814A (en) * 2021-03-17 2021-06-15 上海新氦类脑智能科技有限公司 Information processing method of fused impulse neural network and fused impulse neural network
CN113554162A (en) * 2021-07-23 2021-10-26 上海新氦类脑智能科技有限公司 Axon input extension method, device, equipment and storage medium
CN113554162B (en) * 2021-07-23 2022-12-20 上海新氦类脑智能科技有限公司 Axon input extension method, device, equipment and storage medium
CN113313240B (en) * 2021-08-02 2021-10-15 成都时识科技有限公司 Computing device and electronic device
CN114861864A (en) * 2022-02-24 2022-08-05 天津大学 Neuron network modeling method and device with dendritic morphology
CN114611686A (en) * 2022-05-12 2022-06-10 之江实验室 Synapse delay implementation system and method based on programmable neural mimicry core

Also Published As

Publication number Publication date
CN105095967B (en) 2018-02-16

Similar Documents

Publication Publication Date Title
CN105095967A (en) Multi-mode neural morphological network core
CN105095961A (en) Mixing system with artificial neural network and impulsive neural network
CN105095966A (en) Hybrid computing system of artificial neural network and impulsive neural network
CN105095965A (en) Hybrid communication method of artificial neural network and impulsive neural network
Sayama et al. Modeling complex systems with adaptive networks
Stromatias et al. Scalable energy-efficient, low-latency implementations of trained spiking deep belief networks on spinnaker
CN112000108B (en) Multi-agent cluster grouping time-varying formation tracking control method and system
Ferreira et al. An approach to reservoir computing design and training
Samoilenko et al. Using Data Envelopment Analysis (DEA) for monitoring efficiency-based performance of productivity-driven organizations: Design and implementation of a decision support system
Topalli et al. A hybrid learning for neural networks applied to short term load forecasting
CN104077438B (en) Power network massive topologies structure construction method and system
CN113408743A (en) Federal model generation method and device, electronic equipment and storage medium
CN105589333B (en) Control method is surrounded in multi-agent system grouping
Zheng et al. Improving the efficiency of multi-objective evolutionary algorithms through decomposition: An application to water distribution network design
CN108241964A (en) Capital construction scene management and control mobile solution platform based on BP artificial nerve network model algorithms
Nasira et al. Vegetable price prediction using data mining classification technique
Jiang et al. A data-driven based decomposition–integration method for remanufacturing cost prediction of end-of-life products
Li et al. A recurrent neural network and differential equation based spatiotemporal infectious disease model with application to covid-19
Zong et al. Price forecasting for agricultural products based on BP and RBF Neural Network
Ferreira et al. Evolutionary strategy for simultaneous optimization of parameters, topology and reservoir weights in echo state networks
Pasam et al. Multi-objective Decision Based Available Transfer Capability in Deregulated Power System Using Heuristic Approaches
Wang et al. Convolution neural network based load model parameter selection considering short-term voltage stability
Stork et al. Surrogate-assisted learning of neural networks
Savchenko-Synyakova et al. The Tools for Intelligent Data Analysis, Modeling and Forecasting Of Social and Economic Processes
de Araujo Góes et al. NAROAS: a neural network-based advanced operator support system for the assessment of systems reliability

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20180212

Address after: Room 200-30, Floor 2, Block B, Wanghailou, No. 10 West Third Ring Road, Haidian District, Beijing 100036

Patentee after: Beijing Ling Xi Technology Co. Ltd.

Address before: P.O. Box 100084-82, Beijing 100084

Patentee before: Tsinghua University