CN105095961A - Hybrid system of an artificial neural network and a spiking neural network - Google Patents

Hybrid system of an artificial neural network and a spiking neural network

Info

Publication number
CN105095961A
CN105095961A (application CN201510419325.8A / CN201510419325A)
Authority
CN
China
Prior art keywords
network
routing node
neural networks
neuromorphic
network core
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510419325.8A
Other languages
Chinese (zh)
Other versions
CN105095961B (en)
Inventor
裴京 (Pei Jing)
施路平 (Shi Luping)
王栋 (Wang Dong)
邓磊 (Deng Lei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Ling Xi Technology Co. Ltd.
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN201510419325.8A
Publication of CN105095961A
Application granted
Publication of CN105095961B
Active legal status
Anticipated expiration

Landscapes

  • Feedback Control In General (AREA)

Abstract

The invention provides a hybrid system of an artificial neural network and a spiking neural network. The hybrid system comprises a plurality of neuromorphic network cores and a plurality of routing nodes in one-to-one correspondence with the cores. The neuromorphic network cores perform the neural network computation and use their local routing nodes for data input and output. The routing nodes form a routing network that is responsible for the data input and output of the whole system. By combining the computing modes of artificial neural networks and spiking neural networks, the hybrid system can rapidly perform real-time, multi-modal or complex spatio-temporal signal computation while guaranteeing computational precision.

Description

A hybrid system of an artificial neural network and a spiking neural network
Technical field
The present invention relates to a neural network computing system.
Background technology
A neural network is a computing system that processes data by mimicking the synapse-neuron structure of the biological brain; it is formed from computing nodes arranged in layers and from the connections between layers. Each node simulates a neuron and performs a specific operation, such as an activation function; the connections between nodes simulate synapses, and the weight applied to the input from a node of the previous layer represents the synaptic weight. Neural networks have powerful nonlinear, adaptive information-processing capabilities.
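The weighted-sum-plus-activation behavior described above can be sketched in a few lines of Python. This is an illustrative sketch, not part of the patent; the choice of tanh as the activation function is an arbitrary example.

```python
import math

def neuron_output(inputs, weights, activation=math.tanh):
    """One artificial neuron: weight each input by its synaptic weight,
    accumulate the products, then pass the sum through an activation function."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return activation(total)

# a neuron with three input synapses
y = neuron_output([1.0, 0.5, -0.2], [0.3, 0.8, 0.5])
```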
A neuron in an artificial neural network accumulates its connection inputs and, after processing the accumulated value with an activation function, uses the result as its own output. Depending on the network topology, neuron model and learning rule, artificial neural networks comprise dozens of network models such as the perceptron, the Hopfield network and the Boltzmann machine; they can realize a wide variety of functions and are applied in pattern recognition, complex control, signal processing, optimization and so on. In a traditional artificial neural network, data can be regarded as encoded by the frequency information of neuron pulses, and the neurons of each layer run serially, layer by layer. The artificial neural network simulates the hierarchical structure of the biological nervous system, but it fails to fully match the information-processing architecture of the cortex, for example with respect to the influence of temporal sequence on learning: a real biological cortex does not learn information statically and in isolation, but with contextual associations that evolve over time. The spiking neural network is a new kind of neural network that has emerged over the last decade or so, and is known as the third-generation neural network. Data in a spiking neural network are encoded as the spatio-temporal information of neuron pulse signals: the input and output of the network and the information transmitted between neurons take the form of the pulses a neuron emits and the times at which they are emitted, and the neurons must run in parallel. Compared with traditional artificial neural networks, spiking neural networks differ considerably in information-processing mode, neuron model and parallelism, and their way of operating is closer to a real biological system. A spiking neural network encodes and processes neural information with precisely timed pulse trains; this computational model, which includes time as a computing element, is more biologically interpretable and is an effective tool for processing complex spatio-temporal information: it can handle multi-modal information and process information closer to real time. However, the discontinuity of spiking neuron models, the complexity of spatio-temporal coding and the uncertainty of the network structure make it difficult to describe the whole network completely in mathematical terms, so it is difficult to build effective and general supervised learning algorithms, which limits the scale and precision of its computation.
Summary of the invention
In view of this, it is necessary to provide a neural network computing system that can perform real-time, multi-modal or complex spatio-temporal signal computation while guaranteeing computational precision.
A hybrid system of an artificial neural network and a spiking neural network comprises: a plurality of neuromorphic network cores and a plurality of routing nodes in one-to-one correspondence with the cores, where each pair of a neuromorphic network core and its routing node corresponds to an XY coordinate, and a mutually corresponding core and routing node are called each other's local neuromorphic network core and local routing node. The neuromorphic network cores perform the neural network computation and realize data input and output through their local routing nodes; among the neuromorphic network cores, at least one operates in artificial neural network mode and at least one operates in spiking neural network mode. The routing nodes form a routing network that is responsible for the data input and output of the whole system.
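The patent addresses routing nodes by XY coordinates but does not fix a routing algorithm in this chunk. A common choice for such 2D-addressed routing networks is dimension-order (X-then-Y) routing, sketched below under that assumption; the function and coordinate scheme are illustrative, not from the source.

```python
def next_hop(cur, dst):
    """Dimension-order (X-then-Y) routing between routing nodes addressed
    by XY coordinates: move along X until aligned with the destination,
    then move along Y."""
    cx, cy = cur
    dx, dy = dst
    if cx < dx:
        return (cx + 1, cy)
    if cx > dx:
        return (cx - 1, cy)
    if cy < dy:
        return (cx, cy + 1)
    if cy > dy:
        return (cx, cy - 1)
    return cur  # arrived: deliver to the local neuromorphic network core

# a packet from node (0, 0) to node (2, 1) hops through (1, 0), (2, 0), (2, 1)
```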
Compared with the prior art, the hybrid system of an artificial neural network and a spiking neural network provided by the invention combines the computing modes of the two kinds of neural networks, and can perform real-time, multi-modal or complex spatio-temporal signal computation while guaranteeing computational precision.
Brief description of the drawings
Fig. 1 is a structural diagram of a basic computing unit in the hybrid system of an artificial neural network and a spiking neural network provided by the first embodiment of the invention.
Fig. 2 is a schematic diagram of the series structure of the invention.
Fig. 3 is a schematic diagram of the parallel-connection structure of the invention.
Fig. 4 is a schematic diagram of the parallel structure of the invention.
Fig. 5 is a schematic diagram of the learning structure of the invention.
Fig. 6 is a schematic diagram of the feedback structure of the invention.
Fig. 7 is a schematic diagram of the hierarchical structure of computing units in the hybrid system provided by the invention.
Fig. 8 shows a hybrid system of an artificial neural network and a spiking neural network provided by the invention.
Fig. 9 is a schematic diagram of converting the numerical values output by the artificial neural network into a pulse train in the second embodiment of the invention.
Fig. 10 is a schematic diagram of converting the frequency-coded pulse train output by the spiking neural network into numerical values in the second embodiment of the invention.
Fig. 11 is a schematic diagram of converting the population-coded pulse train output by the spiking neural network into numerical values in the second embodiment of the invention.
Fig. 12 is a schematic diagram of converting the time-coded pulse train output by the spiking neural network into numerical values in the second embodiment of the invention.
Fig. 13 is a schematic diagram of converting the binary-coded pulse train output by the spiking neural network into numerical values in the second embodiment of the invention.
Fig. 14 is a structural block diagram of the multi-modal neuromorphic network core provided by the third embodiment of the invention.
Fig. 15 is a structural block diagram of the multi-modal neuromorphic network core of the third embodiment when operating in artificial neural network mode.
Fig. 16 is a flow diagram of one time step of a multi-modal neuromorphic network core in artificial neural network mode.
Fig. 17 is a structural block diagram of the multi-modal neuromorphic network core of the third embodiment when operating in spiking neural network mode.
Fig. 18 is a flow diagram of one time step of a multi-modal neuromorphic network core in spiking neural network mode.
Fig. 19 shows the hybrid system of an artificial neural network and a spiking neural network provided by the fourth embodiment of the invention.
Fig. 20 is a structural block diagram of a routing node in the fourth embodiment of the invention.
Fig. 21 shows the composition of a routing data packet in the fourth embodiment of the invention.
Fig. 22 is a workflow diagram of a routing node in the fourth embodiment of the invention.
Description of the main element symbols
Hybrid system 100   Mode register 211
Basic computing unit 110   Axon input unit 212
First basic computing unit 110a   Synapse weight storage unit 213
Second basic computing unit 110b   Dendrite unit 214
Learning unit 111   Dendrite multiply-accumulate unit 214a
Neuron 115   Dendrite accumulation unit 214b
Synapse 116   Neuron computing unit 215
Composite computing unit 120   First computing unit 215a
Series composite unit 120a   Second computing unit 215b
Parallel-connection composite unit 120b   Dendrite expansion storage unit 2151
Parallel composite unit 120c   Parameter storage unit 2152
Learning composite unit 120d   Integrate-and-leak computing unit 2153
Feedback composite unit 120e   Trigger signal counter 216
Hybrid system 200   Controller 217
Neuromorphic network core 210   Routing node 220
Multi-modal neuromorphic network core 210a
The following embodiments further illustrate the present invention with reference to the above drawings.
Detailed description of the embodiments
The hybrid system of an artificial neural network and a spiking neural network provided by the invention is described in further detail below with reference to the drawings and specific embodiments.
The first embodiment of the invention provides a hybrid system 100 of an artificial neural network and a spiking neural network, comprising at least two basic computing units 110. Among these at least two basic computing units 110, at least one is an artificial neural network computing unit, which carries out artificial neural network computation, and at least one is a spiking neural network computing unit, which carries out spiking neural network computation. The at least two basic computing units 110 are interconnected according to a topological structure and jointly realize the neural network computing function.
Referring to Fig. 1, the at least one artificial neural network computing unit and the at least one spiking neural network computing unit can each be regarded as an independent neural network, comprising a plurality of neurons 115 connected by synapses 116 in a single-layer or multi-layer structure. The synaptic weight represents the weight with which a postsynaptic neuron receives the output of a presynaptic neuron.
The at least one spiking neural network computing unit performs spiking neural network computation on the data it receives. Its input data, its output data and the data transmitted between its neurons 115 are spike trains; the model of its neurons 115 is a neuron model based on spike computation, which may be, but is not limited to, at least one of the leaky integrate-and-fire model, the spike response model and the Hodgkin-Huxley model.
The at least one artificial neural network computing unit performs artificial neural network computation on the data it receives. Its input data, its output data and the data transmitted between its neurons 115 are numerical values. Depending on the neuron model, network structure and learning algorithm, it may further be at least one of a perceptron neural network computing unit, a BP neural network computing unit, a Hopfield neural network computing unit, an adaptive resonance theory neural network computing unit, a deep belief neural network computing unit and a convolutional neural network computing unit.
The at least one artificial neural network computing unit and the at least one spiking neural network computing unit are topologically connected to form a composite neural network computing unit.
The topological structure of the connection comprises at least one of a series structure, a parallel-connection structure, a parallel structure, a learning structure and a feedback structure.
Referring to Fig. 2, two basic computing units 110 are connected in series to form a series composite computing unit 120a. The two basic computing units 110 are a first basic computing unit 110a and a second basic computing unit 110b, and the output of the first basic computing unit 110a is connected to the input of the second basic computing unit 110b. Of the first basic computing unit 110a and the second basic computing unit 110b, one is an artificial neural network computing unit and the other is a spiking neural network computing unit. The system input is first processed by the first basic computing unit 110a, the processed result serves as the input of the second basic computing unit 110b, and the result processed by the second basic computing unit 110b is the system output.
Referring to Fig. 3, two basic computing units 110 are connected in parallel to form a parallel-connection composite unit 120b. The two basic computing units 110 are a first basic computing unit 110a and a second basic computing unit 110b; the input of the first basic computing unit 110a is connected to the input of the second basic computing unit 110b, and the output of the first basic computing unit 110a is connected to the output of the second basic computing unit 110b. Of the first basic computing unit 110a and the second basic computing unit 110b, one is an artificial neural network computing unit and the other is a spiking neural network computing unit. The system input is fed simultaneously to the first basic computing unit 110a and the second basic computing unit 110b for parallel processing, and the results obtained by each are merged and taken as the system output.
Referring to Fig. 4, two basic computing units 110 are connected side by side to form a parallel composite unit 120c. The two basic computing units 110 are a first basic computing unit 110a and a second basic computing unit 110b; the inputs of the first basic computing unit 110a and the second basic computing unit 110b are independent of each other, while the output of the first basic computing unit 110a is connected to the output of the second basic computing unit 110b. Of the first basic computing unit 110a and the second basic computing unit 110b, one is an artificial neural network computing unit and the other is a spiking neural network computing unit. The system input is divided into two parts, input 1 and input 2: input 1 is fed to the first basic computing unit 110a and processed by it, input 2 is fed to the second basic computing unit 110b and processed by it, and the results of the two units are then merged and taken as the system output.
Referring to Fig. 5, two basic computing units 110 and a learning unit 111 form a learning composite unit 120d. The two basic computing units 110 are a first basic computing unit 110a and a second basic computing unit 110b; one is an artificial neural network computing unit and the other is a spiking neural network computing unit. The system input is processed by the first basic computing unit 110a to obtain the actual output; the difference between this actual output and the target output is fed to the learning unit 111, which adjusts parameters of the second basic computing unit 110b such as its network structure and synaptic weights according to this difference. The learning algorithm in the learning unit may be the Delta rule, the BP algorithm, simulated annealing, a genetic algorithm, etc.; the learning algorithm adopted in this embodiment is the BP algorithm. The output of the second basic computing unit 110b can be used as parameters of the first basic computing unit 110a, such as its network structure and synaptic weights, or those parameters of the first basic computing unit 110a can be adjusted according to the output of the second basic computing unit 110b.
Referring to Fig. 6, in one embodiment of the invention, two basic computing units 110 form a feedback composite unit 120e. The two basic computing units 110 are a first basic computing unit 110a and a second basic computing unit 110b. The output of the first basic computing unit 110a is connected to the input of the second basic computing unit 110b, and the computation result of the second basic computing unit 110b is output to the first basic computing unit 110a as feedback. Of the first basic computing unit 110a and the second basic computing unit 110b, one is an artificial neural network computing unit and the other is a spiking neural network computing unit. The system input is processed and output by the first basic computing unit 110a; the output serves as the input of the second basic computing unit 110b, and the output of the second basic computing unit 110b is fed back as an input value to the first basic computing unit 110a.
In each of the above examples, two basic computing units 110 are combined under a certain topological structure to form a composite computing unit. Furthermore, a larger number of basic computing units 110 can be combined under a certain topological structure to form various composite computing units, and these composite computing units can in turn be combined under a certain topological structure to form more complex hybrid computing structures, producing a rich variety of hybrid computing structures. Referring to Fig. 7, the first-layer composite computing unit 120 is a serial hybrid computing structure; at the second layer it decomposes into two composite computing units 120 connected in series, and a second-layer composite computing unit 120 in turn decomposes at the third layer into two composite computing units 120 connected in parallel. This decomposition can continue until the last layer decomposes into basic computing units 110, the basic computing unit 110 being the smallest computing-unit structure. Referring to Fig. 8, this figure shows a concrete hybrid computing structure of an artificial neural network and a spiking neural network obtained by the above hierarchical design method, comprising series, parallel-connection and feedback structures.
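The recursive composition just described can be sketched as nested objects, with composites themselves usable as stages. This is illustrative Python with the ANN/SNN stages stubbed as plain functions; none of these class names appear in the patent.

```python
class Unit:
    """A basic computing unit 110, stubbed here as a wrapped function
    standing in for an ANN or SNN stage."""
    def __init__(self, fn):
        self.fn = fn
    def run(self, x):
        return self.fn(x)

class Series:
    """A series composite unit: each stage feeds the next."""
    def __init__(self, *stages):
        self.stages = stages
    def run(self, x):
        for stage in self.stages:
            x = stage.run(x)
        return x

class Parallel:
    """A parallel-connection composite unit: both stages receive the
    same input and their outputs are merged."""
    def __init__(self, a, b, merge):
        self.a, self.b, self.merge = a, b, merge
    def run(self, x):
        return self.merge(self.a.run(x), self.b.run(x))

# composites are themselves stages, so structures nest hierarchically (as in Fig. 7)
net = Series(Unit(lambda x: x + 1),
             Parallel(Unit(lambda x: x * 2), Unit(lambda x: -x),
                      lambda p, q: p + q))
```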
The hybrid system 100 of an artificial neural network and a spiking neural network further comprises at least one format conversion unit, arranged between the artificial neural network computing unit and the spiking neural network computing unit, to realize data transmission between the different types of neural network computing units. The format conversion unit can convert the numerical values output by the artificial neural network computing unit into a pulse train, or convert the pulse train output by the spiking neural network computing unit into numerical values, so as to guarantee data transmission between different types of basic computing units 110.
Furthermore, the hybrid system 100 of an artificial neural network and a spiking neural network may comprise a plurality of artificial neural network units and a plurality of spiking neural network units, topologically connected with the topological structures described above.
The hybrid system 100 of an artificial neural network and a spiking neural network provided by the first embodiment combines the computing modes of the two kinds of neural networks: the artificial neural network is used for the computing units that require precise data processing or a complete mathematical description, and the spiking neural network is used for the computing units that require fast information processing, complex spatio-temporal signal processing, or the simultaneous processing of multi-modal signals (for example audio-visual signals), forming a system that can perform real-time, multi-modal or complex spatio-temporal signal computation while guaranteeing computational precision. For example, to perform integrated real-time audio-visual processing with this system, the multi-modal, complex spatio-temporal signals comprising images (video) and sound can be fed to the spiking neural network computing unit for preprocessing, quickly reducing the spatio-temporal complexity of the signals or extracting the required spatio-temporal features in real time; the preprocessed data then continue into the artificial neural network computing unit, where an artificial neural network capable of relatively precise data processing can be built by constructing a complete mathematical model or a supervised learning algorithm, guaranteeing the accuracy of the output.
The second embodiment of the invention provides a hybrid communication method for an artificial neural network and a spiking neural network, comprising: judging whether the data types of the sender and the receiver among the basic computing units 110 that are to communicate are consistent; if consistent, performing the data transmission; if inconsistent, performing a data format conversion that converts the data type sent by the sender into the data type of the receiver, and then performing the data transmission. The data type of the artificial neural network is numerical values, and the data type of the spiking neural network is pulse trains.
Specifically, in performing the data format conversion: if the sender is the artificial neural network computing unit and the receiver is the spiking neural network computing unit, the numerical output of the artificial neural network is converted into a pulse train and fed to the spiking neural network; if the sender is the spiking neural network computing unit and the receiver is the artificial neural network computing unit, the pulse-train output of the spiking neural network is converted into numerical values and fed to the artificial neural network.
The data input and output formats of the different types of basic computing units 110 differ. The artificial neural network is based on numerical operations, and its input and output are numerical values; the spiking neural network is based on pulse-train operations, and its input and output are pulse trains. Integrating different types of basic computing units 110 into the same hybrid computing structure requires solving the problem of communication between them, that is, providing a hybrid communication method for the artificial neural network and the spiking neural network so that the numerical output of the artificial neural network computing unit can be accepted by the spiking neural network computing unit, and the pulse-train output of the spiking neural network computing unit can be accepted by the artificial neural network computing unit.
Referring to Fig. 9, in one embodiment of the invention, the spiking neural network unit is the data receiver and the artificial neural network unit is the data sender. The communication process is: the numerical values output by the artificial neural network computing unit are converted into a pulse train of corresponding frequency, and this pulse train serves as the input of the spiking neural network computing unit; "corresponding frequency" means that the frequency of the converted pulse train is proportional to the magnitude of the numerical value.
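The numeric-to-frequency conversion can be sketched as a simple rate encoder. This is illustrative: the patent specifies only the proportionality between value and frequency, not this particular spike placement, and the clipping range [0, v_max] is an assumption.

```python
def rate_encode(value, n_steps, v_max=1.0):
    """Convert a numeric value into a binary spike train of n_steps time
    steps whose firing rate is proportional to the value."""
    value = min(max(value, 0.0), v_max)         # clip into the coding range
    n_spikes = round(n_steps * value / v_max)   # rate proportional to value
    train = [0] * n_steps
    for i in range(n_spikes):                   # spread the spikes evenly
        train[(i * n_steps) // n_spikes] = 1
    return train
```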
In one embodiment of the invention, the artificial neural network unit is the data receiver and the spiking neural network unit is the data sender. The communication process is: the pulse-train output of the spiking neural network is converted into numerical values of corresponding magnitude, and these numerical values serve as the input of the artificial neural network. Depending on the pulse-train coding scheme of the spiking neural network, four cases can be distinguished:
Referring to Figure 10, when the pulse train adopts frequency coding, the effective information of the network output is represented only by the pulse frequency. When the spiking neural network adopts this coding scheme, the method of converting the frequency-coded pulse train into numerical values is: the pulse train is converted into a numerical value whose magnitude is proportional to the frequency of the pulse train, that is, the inverse of the above communication method from the artificial neural network to the spiking neural network.
Referring to Figure 11, when the pulse train adopts population coding, the outputs of multiple neurons characterize the same information, and the effective information is the number of neurons that emit a spike at the same instant. When the spiking neural network adopts this coding scheme, the corresponding communication method is: the number of neurons emitting a spike at the same instant is converted into a numerical value whose magnitude is proportional to the number of spiking neurons.
Referring to Figure 12, when the pulse train adopts time coding, the effective information is the time at which a neuron emits a spike. When the spiking neural network adopts this coding scheme, the corresponding communication method is: the time at which the neuron emits a spike is converted into a numerical value whose magnitude is an exponential function of the spike emission time.
Referring to Figure 13, when the pulse train adopts binary coding, the effective information is whether a neuron emits a spike within a certain period. When the spiking neural network adopts this coding scheme, the corresponding communication method is: if a spike is emitted within the limited time, the numerical value is 1; otherwise it is 0.
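The four pulse-train-to-numeric conversions above can be sketched as one decoder per coding scheme. This is illustrative Python; the proportionality constants and the exponential decay constant for time coding are free parameters that the patent does not fix.

```python
import math

def decode_frequency(train, scale=1.0):
    """Frequency coding: value proportional to the firing rate."""
    return scale * sum(train) / len(train)

def decode_population(spikes_now, scale=1.0):
    """Population coding: value proportional to how many neurons
    spike at the same instant."""
    return scale * sum(spikes_now)

def decode_time(t_spike, tau=1.0):
    """Time coding: value is an exponential function of the spike
    emission time (here, earlier spikes map to larger values)."""
    return math.exp(-t_spike / tau)

def decode_binary(train):
    """Binary coding: 1 if any spike occurred within the window, else 0."""
    return 1 if any(train) else 0
```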
The hybrid communication method for an artificial neural network and a spiking neural network provided by the second embodiment achieves direct communication between the two networks, on which basis the hybrid operation of the two kinds of neural networks can be realized.
Referring to Figure 14, the third embodiment of the invention provides a multi-modal neuromorphic network core 210a, comprising: a mode register 211, an axon input unit 212, a synapse weight storage unit 213, a dendrite unit 214 and a neuron computing unit 215.
In this embodiment, the multi-modal neuromorphic network core 210a has a input axons, b dendrites and b somas; each axon is connected to each of the b dendrites, each connection point is a synapse, so there are a × b synapses in total, and the connection weights are the synaptic weights. Each dendrite corresponds to one soma. The maximum network size this multi-modal neuromorphic network core 210a can carry is therefore a inputs × b neurons, where a and b are integers greater than 0 whose values are generally determined by the practical application. The minimum value of a is 1; its maximum is the largest number of axon inputs that a single neuromorphic network core can realize, limited by hardware resources, including storage resources, logic resources, etc. The minimum value of b is 1; its maximum is limited both by hardware resources and by the total number of neuron computations each neuromorphic network core can perform within a single trigger-signal cycle. In this embodiment, a and b are both 256. It will be appreciated that in practical applications the specific numbers of input axons, dendrites and somas can be adjusted as required.
The mode register 211 controls the operating mode of the multi-modal neuromorphic network core 210a. It is connected with the axon input unit 212, the dendrite unit 214 and the neuron computing unit 215, and controls whether these units operate in artificial neural network mode or in spiking neural network mode. The mode register 211 is 1 bit wide, and its register value can be configured by the user.
The axon input unit 212 is connected with the dendrite unit 214, and receives and stores the a axon inputs. In this embodiment, each axon has a 16-bit storage cell. When operating in spiking neural network mode, each axon input is a 1-bit spike, and the cell stores the axon inputs of 16 time steps. When operating in artificial neural network mode, each axon input is an 8-bit signed number, and the cell stores the 8-bit axon input of one time step.
The synapse weight storage unit 213 is connected with the dendrite unit 214 and stores the a × b synaptic weights; in this embodiment, each synaptic weight is an 8-bit signed number.
The input of the dendrite unit 214 is connected to the axon input unit 212 and the synapse weight storage unit 213, and its output is connected to the neuron computation unit 215. The dendrite unit 214 performs the vector-matrix multiplication of the a-dimensional axon input vector with the a × b synapse weight matrix, and the b results of the computation serve as the inputs of the b neurons. The dendrite unit 214 comprises a dendrite multiply-add unit 214a and a dendrite accumulation unit 214b. In ANN mode the axon input vector and the synapse weight matrix are fed to the dendrite multiply-add unit 214a, and the vector-matrix multiplication is realized with multipliers and adders. In SNN mode they are fed to the dendrite accumulation unit 214b for accumulation: the axon inputs are 1-bit, so the vector-matrix multiplication is realized with a data selector and an adder.
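As a software sketch (not the hardware described above), the two dendrite data paths can be illustrated as follows; the function name and the list-based data representation are illustrative assumptions:

```python
def dendrite_unit(axon_in, weights, mode):
    """Vector-matrix product of an a-dim axon input with an a x b weight matrix.

    mode "ann": multi-bit inputs, a true multiply-add (multipliers + adders).
    mode "snn": 1-bit inputs, so the product degenerates into selecting and
                accumulating the weights of the active axons (selector + adder).
    """
    a, b = len(weights), len(weights[0])
    if mode == "ann":
        return [sum(axon_in[i] * weights[i][j] for i in range(a))
                for j in range(b)]
    if mode == "snn":
        return [sum(weights[i][j] for i in range(a) if axon_in[i] == 1)
                for j in range(b)]
    raise ValueError("mode must be 'ann' or 'snn'")
```

For 0/1 inputs the two paths give identical results, which is why the SNN path can drop the multipliers entirely.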
The neuron computation unit 215 performs the neuron computation and comprises a first computation unit 215a and a second computation unit 215b. In ANN mode the multiply-add result from the dendrite multiply-add unit 214a is fed to the first computation unit 215a for ANN computation; in the present embodiment the neuron realizes an arbitrary nonlinear activation function through a 1024 × 8-bit lookup table, and the output is an 8-bit value. In SNN mode the accumulation result from the dendrite accumulation unit 214b is fed to the second computation unit 215b for SNN computation; the neuron in the present embodiment is a leaky integrate-and-fire (LIF) model, and the outputs are a spike signal and the current membrane potential.
The multi-modal neuromorphic network core 210a further comprises a trigger signal counter 216, which receives the trigger signal and records the number of trigger signals received, i.e. the current time step. The trigger signal is a clock signal of fixed period; in the present embodiment the period is 1 ms.
The multi-modal neuromorphic network core 210a further comprises a controller 217. The controller 217 is connected to the axon input unit 212, the synapse weight storage unit 213, the dendrite unit 214 and the neuron computation unit 215, and controls the operating sequence of these units. The controller 217 is also responsible for starting and stopping the multi-modal neuromorphic network core 210a: it starts the core's computation and controls the computation flow on the rising edge of the trigger signal.
The SNN operating mode and the ANN operating mode of the multi-modal neuromorphic network core 210a are introduced separately below.
Referring to Figure 15, when the multi-modal neuromorphic network core 210a operates in ANN mode, the axon input received by the axon input unit 212 is a 16-bit data packet comprising an 8-bit target axon index (0–255) and an 8-bit input value (a signed number, −128 to 127). The storage of the axon input unit 212 is then 256 × 8 bits, recording the 8-bit inputs of the 256 axons.
The synapse weight storage unit 213 stores 256 × 256 synapse weights, each an 8-bit signed number.
The dendrite multiply-add unit 214a comprises multipliers and adders. At every time step it computes, for each neuron, the product of the 256-dimensional synapse weight vector on that neuron's dendrite with the 256-dimensional axon input vector, and passes the result to the first computation unit 215a as its input.
The first computation unit 215a evaluates the neuron's nonlinear or linear activation function in the ANN computation. Before use, the activation function used by the network is evaluated for every input value from 0 to 1023, and the corresponding 8-bit outputs are stored in the lookup-table storage of the neuron computation unit. At run time, the multiply-add result from the dendrite multiply-add unit 214a is used as the lookup-table address, and the 8-bit value stored at that address is the neuron output.
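A minimal sketch of this table-then-lookup scheme; how the multiply-add sum is mapped into the 10-bit address range is not fixed by the text, so the saturating clamp below is an assumption:

```python
def build_activation_lut(f, entries=1024):
    """Precompute the 1024 x 8-bit table: evaluate the activation f at every
    address 0..entries-1 and clamp each result to a signed 8-bit value."""
    return [max(-128, min(127, int(f(addr)))) for addr in range(entries)]

def neuron_output(lut, dendrite_sum):
    """Use the dendrite multiply-add result as the table address; saturating
    it into the address range is an assumed policy."""
    addr = max(0, min(len(lut) - 1, dendrite_sum))
    return lut[addr]
```

For example, `build_activation_lut(lambda a: a // 8)` tabulates a simple linear ramp; any nonlinearity fitting in 1024 entries works the same way.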
Referring to Figure 16, in ANN mode the operation of the multi-modal neuromorphic network core 210a within one time step comprises the following steps:
S11: upon detecting the trigger signal, the trigger signal counter 216 increments by 1 (wrapping around), and the controller 217 starts the multi-modal neuromorphic network core 210a;
S12: the axon input unit 212 reads the axon input vector of the current time step according to the value of the trigger signal counter 216 and sends it to the dendrite multiply-add unit 214a; the synapse weight storage unit 213 reads the dendrite synapse weight vectors of neurons No. 1 to No. 256 in turn and sends them to the dendrite multiply-add unit 214a;
S13: the dendrite multiply-add unit 214a computes, in input order, the products of the axon input vector with the dendrite synapse weight vectors of neurons No. 1 to No. 256, and sends them to the first computation unit 215a;
S14: the first computation unit 215a uses the output of the dendrite multiply-add unit 214a as the lookup-table address and looks up the neuron output;
S15: the controller 217 stops the multi-modal neuromorphic network core 210a and returns to step S11.
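The steps above can be sketched as a single time-step function — a behavioral model under assumed data layouts, not the hardware itself:

```python
def ann_time_step(axon_in, weights, lut):
    """One ANN-mode time step: for neurons 1..b in turn, multiply-add the
    axon input vector against that neuron's dendrite weight vector (S13),
    then use the sum as the lookup-table address (S14)."""
    a, b = len(weights), len(weights[0])
    outputs = []
    for j in range(b):
        s = sum(axon_in[i] * weights[i][j] for i in range(a))
        addr = max(0, min(len(lut) - 1, s))  # assumed saturation into the table
        outputs.append(lut[addr])
    return outputs
```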
Referring to Figure 17, when the multi-modal neuromorphic network core 210a operates in SNN mode, the axon input received by the axon input unit 212 is a 12-bit data packet comprising an 8-bit target axon index (0–255) and 4 bits of delay data (0–15). The delay data gives the difference between the time step at which the input takes effect and the current time step. The storage of the axon input unit 212 is then 256 × 16 bits, recording the inputs of the 256 axons for the current and following time steps, 16 in total. If a bit is 1, the corresponding axon is active at the corresponding time step, i.e. its input is 1; if a bit is 0, the corresponding axon is inactive at that time step, i.e. its input is 0.
The synapse weight storage unit 213 stores 256 × 256 synapse weights, each an 8-bit signed number.
The dendrite accumulation unit 214b comprises a data selector and an adder. For each dendrite in turn, it computes the sum of the weights of all synapses activated at the current time step, and passes the result to the second computation unit 215b of the neuron computation unit 215 as its input.
The second computation unit 215b carries out the SNN computation; the neuron in the present embodiment is a leaky integrate-and-fire (LIF) model. The second computation unit 215b further comprises a dendrite expansion storage unit 2151, a parameter storage unit 2152 and an integrate-leak computation unit 2153. At each time step the second computation unit 215b runs 256 times in sequence, realizing the 256 neuron computations by time multiplexing. The dendrite expansion storage unit 2151 comprises 256 storage cells and receives externally sent dendrite expansion input packets, each containing a sending neuron's membrane potential value and a target neuron index; the membrane potential value is stored into the storage cell corresponding to the neuron index. The parameter storage unit 2152 comprises 256 storage cells holding the membrane potentials, thresholds and leakage values of the 256 neurons. The integrate-leak computation unit 2153 performs the integrate-leak-fire operation on each neuron in turn; when the membrane potential exceeds the positive threshold, it outputs a spike signal and the current membrane potential value. The integrate-leak-fire operation is as follows:
membrane potential = previous membrane potential + dendrite input + dendrite expansion input − leakage value
If the membrane potential is greater than the positive threshold, a spike signal and the membrane potential value are emitted and the membrane potential is reset. If the membrane potential is less than the negative threshold, no signal is emitted and the membrane potential is reset.
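The update rule and the two threshold cases can be written as one step function; the reset value of 0 is an assumption, since the text only says the potential "is reset":

```python
def lif_step(v, dendrite_in, expand_in, leak, pos_th, neg_th, v_reset=0):
    """One integrate-leak-fire step.

    Returns (new_v, spike, reported_v): crossing the positive threshold emits
    a spike together with the membrane potential value and resets; crossing
    the negative threshold resets silently with no output."""
    v = v + dendrite_in + expand_in - leak  # integrate inputs, subtract leakage
    if v > pos_th:
        return v_reset, 1, v                # fire
    if v < neg_th:
        return v_reset, 0, None             # clamp: reset, no output
    return v, 0, None
```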
Referring to Figure 18, in SNN mode the operation of the multi-modal neuromorphic network core 210a within one time step comprises the following steps:
S21: upon detecting the trigger signal, the trigger signal counter 216 increments by 1 (wrapping around), and the controller 217 starts the multi-modal neuromorphic network core 210a;
S22: the axon input unit 212 reads the axon input vector of the current time step according to the value of the trigger signal counter 216 and sends it to the dendrite accumulation unit 214b; the synapse weight storage unit 213 reads the dendrite synapse weight vectors of neurons No. 1 to No. 256 in turn and sends them to the dendrite accumulation unit 214b;
S23: the dendrite accumulation unit 214b computes, in input order, the products of the axon input vector with the dendrite synapse weight vectors of neurons No. 1 to No. 256 — for 1-bit inputs, the sums of the activated synapse weights — and sends them to the second computation unit 215b;
S24: the second computation unit 215b reads the parameters of neurons No. 1 to No. 256 in turn from the parameter storage unit 2152, computes with the input values from the dendrite accumulation unit 214b, and outputs the spike signals and current membrane potentials;
S25: after the neuron computation finishes, the controller 217 stops the multi-modal neuromorphic network core 210a and returns to step S21.
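The 16-deep per-axon storage together with the 4-bit delay field can be modeled as a circular spike buffer — a software sketch with assumed names:

```python
class AxonDelayBuffer:
    """Per-axon 16-bit spike storage: an incoming (axon index, delay) packet
    sets the bit for time step t + delay; each step reads, then clears, the
    current column so its slot can be reused 16 steps later."""
    def __init__(self, n_axons=256, depth=16):
        self.depth = depth
        self.buf = [[0] * depth for _ in range(n_axons)]
        self.t = 0

    def inject(self, axon, delay):
        assert 0 <= delay < self.depth
        self.buf[axon][(self.t + delay) % self.depth] = 1

    def step(self):
        """Return the 0/1 axon input vector of the current time step."""
        slot = self.t % self.depth
        col = [row[slot] for row in self.buf]
        for row in self.buf:
            row[slot] = 0  # free the slot for future injections
        self.t += 1
        return col
```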
Further, two kinds of single-mode neuromorphic network cores can be obtained on the basis of the multi-modal neuromorphic network core 210a of the present embodiment. A single-mode core operating in ANN mode comprises the axon input unit 212, the synapse weight storage unit 213, the dendrite multiply-add unit 214a and the first computation unit 215a. A single-mode core operating in SNN mode comprises the axon input unit 212, the synapse weight storage unit 213, the dendrite accumulation unit 214b and the second computation unit 215b. Both single-mode cores are corresponding simplifications of the third embodiment; for the concrete connections and functions of their internal units, refer to the third embodiment of the invention.
It will be appreciated that the multi-modal neuromorphic network core 210a and the two single-mode neuromorphic network cores provided by the third embodiment of the invention can serve as the basic computational element 110 in the hybrid system 100 of an artificial neural network and a spiking neural network of the first embodiment of the invention.
The multi-modal neuromorphic network core 210a provided by the third embodiment of the invention can perform both ANN computation and SNN computation, and can switch between the two operating modes as required; it can therefore carry out real-time, multi-modal or complex spatio-temporal signal computation while guaranteeing computational precision.
Referring to Figure 19, the fourth embodiment of the invention provides a hybrid system 200 of an artificial neural network and a spiking neural network, comprising a plurality of neuromorphic network cores 210 and a plurality of routing nodes 220 in one-to-one correspondence with them. The routing nodes 220 form an m × n mesh route network, where m and n are integers greater than 0. The row direction of the m × n array is defined as the X direction and the column direction as the Y direction, so that every pair of a neuromorphic network core 210 and its routing node 220 has a unique local XY coordinate.
The one-to-one correspondence of neuromorphic network cores 210 and routing nodes 220 means that each neuromorphic network core 210 corresponds to one routing node 220, and each routing node 220 to one neuromorphic network core 210. For a mutually corresponding pair, the neuromorphic network core 210 is called the local neuromorphic network core of the routing node 220, and the routing node 220 the local routing node of the neuromorphic network core 210. All inputs of a neuromorphic network core 210 come from its local routing node, and its neuron computation results are likewise sent to that local routing node, from which they are output or forwarded to a target neuromorphic network core through the route network.
In the present embodiment there are 9 neuromorphic network cores 210 and 9 routing nodes 220, the 9 routing nodes 220 forming a 3 × 3 mesh route network.
A neuromorphic network core 210 may be a single-mode neuromorphic network core or a multi-modal one, for example the multi-modal neuromorphic network core 210a provided by the third embodiment of the invention. A single-mode core can operate only in ANN mode or only in SNN mode; a multi-modal core has both operating modes and can switch between them by configuring internal parameters, the operating mode of each multi-modal core being configured independently.
A neuromorphic network core 210 has a plurality of neuron units and a plurality of axon inputs. In the present embodiment each core has 256 neuron units and 256 axon inputs, and can carry a neural network computation of at most 256 neurons. If the neural network to be realized has more than 256 neurons, multiple neuromorphic network cores 210 must be connected together topologically through the route network, each bearing part of the computation and together forming one large network. The hybrid system 200 of the present embodiment has 9 neuromorphic network cores 210 and can carry a computation of at most 2304 neurons; that computation may be an ANN computation, an SNN computation, or a hybrid computation of the two.
When a neuromorphic network core 210 operates in ANN mode, the computation results of its 256 neuron units are 8-bit values; in SNN mode they are 1-bit spikes. In either mode, the neuron unit results are sent directly to the local routing node of the neuromorphic network core 210.
The routing nodes 220 form a mesh route network that bears the data transmission of the system. The transmitted data comprise system input, system output and transmission between neuromorphic network cores 210, the latter being further divided into transmission between cores of the same mode and between cores of different modes. Any routing node 220 in the route network can be represented by a unique, determinate XY coordinate. In the present embodiment the 9 routing nodes 220 form a 3 × 3 array, with the row direction defined as X and the column direction as Y; each routing node 220 can transmit data directly with its neighboring routing nodes in the positive X, negative X, positive Y and negative Y directions, forming a mesh topology. It will be appreciated that besides this mesh, other common topologies such as a star or a bus may also be used.
The data transmitted on the route network comprise system input data, system output data and data transmitted between neuromorphic network cores 210, all transmitted according to a preset routing rule. In the present embodiment the rule is: data are first transmitted along the X direction, and only after reaching the routing node with the target X coordinate are they transmitted along the Y direction, until the routing node with the target XY coordinate is reached. With (x0, y0) the coordinate of the source routing node and (x1, y1) that of the target, the rule is: (x0, y0) → (x1, y0) → (x1, y1). It will be appreciated that in practice other routing rules can be set according to circumstances.
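Under this X-then-Y rule the full hop sequence from source to target is deterministic; a sketch (function name assumed):

```python
def xy_route(src, dst):
    """Enumerate the nodes visited under (x0, y0) -> (x1, y0) -> (x1, y1):
    move along X until the target column, then along Y to the target."""
    (x0, y0), (x1, y1) = src, dst
    path = [(x0, y0)]
    step = 1 if x1 >= x0 else -1
    for x in range(x0 + step, x1 + step, step):  # X leg
        path.append((x, y0))
    step = 1 if y1 >= y0 else -1
    for y in range(y0 + step, y1 + step, step):  # Y leg
        path.append((x1, y))
    return path
```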
The implementations of system input, system output and transmission between neuromorphic network cores 210 are introduced below.
System input: the input data are first fed into any routing node on the outermost ring of the route network, and are then sent to the target neuromorphic network core through the route network according to the above routing rule.
System output: the computation result of a neuromorphic network core 210 is first sent to its local routing node, then forwarded through the route network according to the above routing rule to any routing node on the outermost ring, and from there sent out of the system, completing the system output.
Transmission between cores: the computation result of a neuromorphic network core 210 is first sent to its local routing node, then forwarded through the route network according to the above routing rule to the target routing node, which delivers it to its own local neuromorphic network core, completing the data transmission between neuromorphic network cores 210.
Referring to Figure 20, each routing node 220 comprises a routing table with a plurality of storage cells, each cell corresponding to one neuron unit of the local neuromorphic network core. A cell stores, for the output of its neuron unit, the XY coordinate address of the destination neuromorphic network core, the target axon input index and the delay data. In the present embodiment a neuromorphic network core 210 comprises 256 neuron units, so the routing table likewise has 256 storage cells.
In the present embodiment, the system input data, the system output data and the data communicated between neuromorphic network cores 210 are all transmitted between routing nodes 220 in the form of 32-bit route packets. Figure 21 shows the packet format: 6 bits of target-core X-direction address, 6 bits of target-core Y-direction address, 4 bits of axon delay, 8 bits of target axon index and 8 bits of data, for 32 bits in total. The 4-bit axon delay is valid when the target neuromorphic network core operates in SNN mode; the 8 bits of data are the neuron output in ANN mode.
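The field widths fix the packet at 32 bits; the ordering of the fields within the word is not stated, so the layout below is an assumption:

```python
def pack_packet(x, y, delay, axon, data):
    """Pack one 32-bit route packet as
    [31:26] target X | [25:20] target Y | [19:16] axon delay |
    [15:8] target axon index | [7:0] data (field order assumed)."""
    assert 0 <= x < 64 and 0 <= y < 64 and 0 <= delay < 16
    assert 0 <= axon < 256 and 0 <= data < 256
    return (x << 26) | (y << 20) | (delay << 16) | (axon << 8) | data

def unpack_local(word):
    """The 20 bits a routing node hands to its local core:
    axon delay, target axon index, and data."""
    return (word >> 16) & 0xF, (word >> 8) & 0xFF, word & 0xFF
```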
The packet that a routing node 220 delivers to its local neuromorphic network core consists of the 4-bit axon delay, the 8-bit target axon index and the 8-bit data extracted from the 32-bit route packet, 20 bits in total. A neuromorphic network core 210 operating in ANN mode, on receiving the packet from its local routing node, takes the 8-bit data as the input of the axon with the given index. A core operating in SNN mode sets the input of the axon with the given index to 1 after the specified delay.
Referring to Figure 22, the workflow of the routing node 220 comprises:
S1: the routing node 220 receives a neuron computation result from its local neuromorphic network core;
S2: the routing node 220 reads the routing information of the corresponding neuron from the routing table and combines this information with the neuron computation result into a route packet;
S3: the routing node 220 determines the sending direction of the route packet and sends it according to the determination.
In step S1, the neuron computation result is an output data value when operating in ANN mode, and a spike when operating in SNN mode.
In step S3, the route packet carries the relative address x of the target neuromorphic network core in the X direction and its relative address y in the Y direction. The routing node 220 determines the sending direction of the route packet from the values of x and y, specifically: when x > 0, the packet is sent to the neighboring routing node in the positive X direction; when x < 0, to the neighbor in the negative X direction; when y > 0, to the neighbor in the positive Y direction; when y < 0, to the neighbor in the negative Y direction; and when x = y = 0, the packet is delivered directly to the local neuromorphic network core of this routing node 220. When the next routing node receives a route packet from the previous one, it revises the relative address in the packet, specifically: if x < 0, the revised relative address is x' = x + 1; if x > 0, x' = x − 1; if y < 0, y' = y + 1; if y > 0, y' = y − 1. The relative address is thus revised once at every routing node 220 it passes, until x = y = 0.
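One hop of this relative-address rule can be sketched as follows (names assumed):

```python
def route_hop(x, y):
    """Pick a direction from the signs of the relative address, X axis first,
    and step the address one unit toward zero; 'local' means deliver the
    packet to this node's local neuromorphic network core."""
    if x > 0:
        return "+X", x - 1, y
    if x < 0:
        return "-X", x + 1, y
    if y > 0:
        return "+Y", x, y - 1
    if y < 0:
        return "-Y", x, y + 1
    return "local", 0, 0
```

Iterating `route_hop` until it returns `"local"` reproduces the X-then-Y order of the embodiment, since y is only examined once x has reached 0.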
In the present embodiment the determination and transmission in the X direction are performed first; only after the X-direction transmission finishes, i.e. x' = 0, are the determination and transmission in the other direction performed, until the route packet reaches the target neuromorphic network core.
The hybrid system 200 of an artificial neural network and a spiking neural network provided by the fourth embodiment of the invention combines the computation modes of the two kinds of neural network: it possesses the complex spatio-temporal signal processing capability of the spiking neural network while making full use of the rich and powerful computing capability of the artificial neural network, and can carry out real-time, multi-modal or complex spatio-temporal signal computation while guaranteeing computational precision.
In addition, those skilled in the art may make other variations within the spirit of the invention; such variations made according to the spirit of the invention shall all be included within the scope claimed by the invention.

Claims (10)

1. A hybrid system of an artificial neural network and a spiking neural network, characterized in that it comprises: a plurality of neuromorphic network cores and a plurality of routing nodes in one-to-one correspondence with the plurality of neuromorphic network cores, each pair of a neuromorphic network core and a routing node corresponding to one XY coordinate, and the core and routing node of a mutually corresponding pair being called each other's local neuromorphic network core and local routing node; the plurality of neuromorphic network cores are used for performing neural network computation and realize data input and output through their local routing nodes, at least one of the plurality of neuromorphic network cores operating in artificial-neural-network mode and at least one in spiking-neural-network mode; and the plurality of routing nodes form a route network that bears the data input and output of the whole system.
2. The hybrid system of an artificial neural network and a spiking neural network as claimed in claim 1, characterized in that the neuromorphic network core is a multi-modal neuromorphic network core having two operating modes, artificial-neural-network mode and spiking-neural-network mode, switching between the two operating modes being realized by configuring internal parameters.
3. The hybrid system of an artificial neural network and a spiking neural network as claimed in claim 1, characterized in that the neuromorphic network core is a single-mode neuromorphic network core operating in either artificial-neural-network mode or spiking-neural-network mode.
4. The hybrid system of an artificial neural network and a spiking neural network as claimed in claim 1, characterized in that the neuromorphic network core comprises n neuron units and n axon inputs, n being an integer greater than 0.
5. The hybrid system of an artificial neural network and a spiking neural network as claimed in claim 4, characterized in that each routing node of the plurality of routing nodes comprises a routing table with a plurality of storage cells, each cell of the routing table corresponding to one neuron unit of the local neuromorphic network core and storing the coordinate address of the destination neuromorphic network core of that neuron unit's output data, the target axon input index and the delay data.
6. The hybrid system of an artificial neural network and a spiking neural network as claimed in claim 1, characterized in that the plurality of routing nodes form an m × n array, m and n being integers greater than 0, the row direction of the m × n array being defined as the X direction and the column direction as the Y direction, and each routing node transmitting data directly with its neighboring routing nodes in the positive X, negative X, positive Y and negative Y directions.
7. The hybrid system of an artificial neural network and a spiking neural network as claimed in claim 6, characterized in that the system input data, the system output data and the data between neuromorphic network cores are transmitted between the routing nodes in the form of route packets, each route packet comprising the X-direction and Y-direction addresses of the target neuromorphic network core, an axon delay, a target axon input index and data, the axon delay being valid when the target neuromorphic network core operates in spiking-neural-network mode and the data being valid when the target neuromorphic network core operates in artificial-neural-network mode.
8. The hybrid system of an artificial neural network and a spiking neural network as claimed in claim 6, characterized in that the routing rule between the routing nodes is: with the coordinate of the source routing node being (x0, y0) and that of the target routing node being (x1, y1), data are first transmitted along the X direction, and after reaching the routing node (x1, y0) with the target X coordinate are transmitted along the Y direction until reaching the target routing node (x1, y1).
9. The hybrid system of an artificial neural network and a spiking neural network as claimed in claim 8, characterized in that the transmission of system input data is: the system input data are first fed into any routing node on the outermost ring of the route network and then sent to the target neuromorphic network core through the route network according to the routing rule; and the transmission of system output data is: the computation result of the neuromorphic network core is first sent to its local routing node, then forwarded through the route network according to the routing rule to any routing node on the outermost ring, and sent out of the system by that routing node, completing the system output.
10. The hybrid system of an artificial neural network and a spiking neural network as claimed in claim 8, characterized in that the workflow of the routing node comprises:
S1: the routing node receives a neuron computation result from its local neuromorphic network core;
S2: the routing node reads the routing information of the corresponding neuron from the routing table and combines this information with the neuron computation result into a route packet;
S3: the routing node determines the sending direction of the route packet and sends it according to the determination.
CN201510419325.8A 2015-07-16 2015-07-16 A kind of hybrid system of artificial neural network and impulsive neural networks Active CN105095961B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510419325.8A CN105095961B (en) 2015-07-16 2015-07-16 A kind of hybrid system of artificial neural network and impulsive neural networks

Publications (2)

Publication Number Publication Date
CN105095961A true CN105095961A (en) 2015-11-25
CN105095961B CN105095961B (en) 2017-09-29

Family

ID=54576335

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510419325.8A Active CN105095961B (en) 2015-07-16 2015-07-16 A kind of hybrid system of artificial neural network and impulsive neural networks

Country Status (1)

Country Link
CN (1) CN105095961B (en)

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105719000A (en) * 2016-01-21 2016-06-29 广西师范大学 Neuron hardware structure and method of simulating pulse neural network by adopting neuron hardware structure
CN106201651A (en) * 2016-06-27 2016-12-07 鄞州浙江清华长三角研究院创新中心 The simulator of neuromorphic chip
CN106845632A * 2017-01-25 2017-06-13 清华大学 Method and system for converting impulse neural network information into artificial neural network information
CN106875004A * 2017-01-20 2017-06-20 清华大学 Compound-mode neuron information processing method and system
CN106897768A * 2017-01-25 2017-06-27 清华大学 Neural network information sending method and system
CN106909969A * 2017-01-25 2017-06-30 清华大学 Neural network information receiving method and system
WO2017168275A1 (en) * 2016-03-31 2017-10-05 International Business Machines Corporation Energy-efficient time-multiplexed neurosynaptic core for implementing neural networks
CN107369108A (en) * 2016-05-11 2017-11-21 耐能有限公司 Multilayer artificial neural networks and its control method
CN107578095A (en) * 2017-09-01 2018-01-12 中国科学院计算技术研究所 Neural computing device and the processor comprising the computing device
CN108171326A (en) * 2017-12-22 2018-06-15 清华大学 Data processing method, device, chip, equipment and the storage medium of neural network
CN108268939A * 2016-12-30 2018-07-10 上海寒武纪信息科技有限公司 Device and operation method for performing LSTM neural network operations
WO2018137412A1 (en) * 2017-01-25 2018-08-02 清华大学 Neural network information reception method, sending method, system, apparatus and readable storage medium
CN108830379A * 2018-05-23 2018-11-16 电子科技大学 Neuromorphic processor based on parameter quantization sharing
CN109491956A * 2018-11-09 2019-03-19 北京灵汐科技有限公司 Heterogeneous collaborative computing system
CN109564637A * 2016-09-30 2019-04-02 国际商业机器公司 Scalable stream synaptic supercomputer for extreme throughput neural networks
CN109858620A * 2018-12-29 2019-06-07 北京灵汐科技有限公司 Brain-like computing system
CN110059800A (en) * 2019-01-26 2019-07-26 中国科学院计算技术研究所 Impulsive neural networks conversion method and related conversion chip
CN110100255A * 2017-01-06 2019-08-06 国际商业机器公司 Area efficient, reconfigurable, energy efficient, speed efficient neural network substrate
CN110163016A (en) * 2019-04-29 2019-08-23 清华大学 Hybrid system and mixing calculation method
CN110188872A * 2019-06-05 2019-08-30 北京灵汐科技有限公司 Heterogeneous cooperative system and communication method thereof
CN110213165A * 2019-06-05 2019-09-06 北京灵汐科技有限公司 Heterogeneous cooperative system and communication method thereof
CN110378469A (en) * 2019-07-11 2019-10-25 中国人民解放军国防科技大学 SCNN inference device based on asynchronous circuit, PE unit, processor and computer equipment thereof
CN111052150A (en) * 2017-08-30 2020-04-21 国际商业机器公司 Computing method for feedback in hierarchical neural networks
WO2020155741A1 (en) * 2019-01-29 2020-08-06 清华大学 Fusion structure and method of convolutional neural network and pulse neural network
CN112242963A (en) * 2020-10-14 2021-01-19 广东工业大学 Rapid high-concurrency neural pulse data packet distribution and transmission method
CN112862100A (en) * 2021-01-29 2021-05-28 网易有道信息技术(北京)有限公司 Method and apparatus for optimizing neural network model inference
CN113285875A (en) * 2021-05-14 2021-08-20 南京大学 Space route prediction method based on impulse neural network
CN113554162A (en) * 2021-07-23 2021-10-26 上海新氦类脑智能科技有限公司 Axon input extension method, device, equipment and storage medium
CN113723594A (en) * 2021-08-31 2021-11-30 绍兴市北大信息技术科创中心 Impulse neural network target identification method
CN113807511A (en) * 2021-09-24 2021-12-17 北京大学 Impulse neural network multicast router and method
WO2022143625A1 (en) * 2020-12-30 2022-07-07 北京灵汐科技有限公司 Neural network model, method, electronic device, and readable medium
US11537879B2 (en) 2017-07-03 2022-12-27 Tsinghua University Neural network weight discretizing method, system, device, and readable storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090313195A1 (en) * 2008-06-17 2009-12-17 University Of Ulster Artificial neural network architecture
CN102457419A (en) * 2010-10-22 2012-05-16 中国移动通信集团广东有限公司 Method and device for optimizing transmission network route based on neural network model

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090313195A1 (en) * 2008-06-17 2009-12-17 University Of Ulster Artificial neural network architecture
CN102457419A (en) * 2010-10-22 2012-05-16 中国移动通信集团广东有限公司 Method and device for optimizing transmission network route based on neural network model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DENG LEI ET AL.: "Ultra low power of artificial cognitive memory for brain-like computation", 2014 IEEE International Nanoelectronics Conference *
LI GUOQI ET AL.: "Hierarchical encoding of human working memory", 2015 IEEE 10th Conference on Industrial Electronics and Applications *

Cited By (58)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105719000A (en) * 2016-01-21 2016-06-29 广西师范大学 Neuron hardware structure and method of simulating pulse neural network by adopting neuron hardware structure
CN105719000B * 2016-01-21 2018-02-16 广西师范大学 Neuron hardware unit and method for simulating a spiking neural network with the unit
US10990872B2 (en) 2016-03-31 2021-04-27 International Business Machines Corporation Energy-efficient time-multiplexed neurosynaptic core for implementing neural networks spanning power- and area-efficiency
GB2557780A (en) * 2016-03-31 2018-06-27 Ibm Energy-efficient time-multiplexed neurosynaptic core for implementing neural networks
GB2557780B (en) * 2016-03-31 2022-02-09 Ibm Energy-efficient time-multiplexed neurosynaptic core for implementing neural networks
WO2017168275A1 (en) * 2016-03-31 2017-10-05 International Business Machines Corporation Energy-efficient time-multiplexed neurosynaptic core for implementing neural networks
CN107369108A (en) * 2016-05-11 2017-11-21 耐能有限公司 Multilayer artificial neural networks and its control method
CN106201651A (en) * 2016-06-27 2016-12-07 鄞州浙江清华长三角研究院创新中心 The simulator of neuromorphic chip
CN109564637B (en) * 2016-09-30 2023-05-12 国际商业机器公司 Methods, systems, and media for scalable streaming synaptic supercomputers
CN109564637A * 2016-09-30 2019-04-02 国际商业机器公司 Scalable stream synaptic supercomputer for extreme throughput neural networks
CN108268939A * 2016-12-30 2018-07-10 上海寒武纪信息科技有限公司 Device and operation method for performing LSTM neural network operations
CN110100255B (en) * 2017-01-06 2023-10-20 国际商业机器公司 Area efficient, reconfigurable, energy efficient, speed efficient neural network substrate
CN110100255A * 2017-01-06 2019-08-06 国际商业机器公司 Area efficient, reconfigurable, energy efficient, speed efficient neural network substrate
CN106875004A * 2017-01-20 2017-06-20 清华大学 Compound-mode neuron information processing method and system
WO2018133568A1 (en) * 2017-01-20 2018-07-26 清华大学 Compound-mode neuron information processing method and system, and computer device
CN106875004B * 2017-01-20 2019-09-10 北京灵汐科技有限公司 Compound-mode neuron information processing method and system
CN106897768B (en) * 2017-01-25 2020-04-21 清华大学 Neural network information sending method and system
CN106909969A * 2017-01-25 2017-06-30 清华大学 Neural network information receiving method and system
US11823030B2 (en) * 2017-01-25 2023-11-21 Tsinghua University Neural network information receiving method, sending method, system, apparatus and readable storage medium
CN106845632A * 2017-01-25 2017-06-13 清华大学 Method and system for converting impulse neural network information into artificial neural network information
CN106897768A * 2017-01-25 2017-06-27 清华大学 Neural network information sending method and system
CN106845632B (en) * 2017-01-25 2020-10-16 清华大学 Method and system for converting impulse neural network information into artificial neural network information
US20190377998A1 (en) * 2017-01-25 2019-12-12 Tsinghua University Neural network information receiving method, sending method, system, apparatus and readable storage medium
WO2018137412A1 (en) * 2017-01-25 2018-08-02 清华大学 Neural network information reception method, sending method, system, apparatus and readable storage medium
US11537879B2 (en) 2017-07-03 2022-12-27 Tsinghua University Neural network weight discretizing method, system, device, and readable storage medium
CN111052150B (en) * 2017-08-30 2023-12-29 国际商业机器公司 Calculation method for feedback in hierarchical neural network
CN111052150A (en) * 2017-08-30 2020-04-21 国际商业机器公司 Computing method for feedback in hierarchical neural networks
CN107578095A (en) * 2017-09-01 2018-01-12 中国科学院计算技术研究所 Neural computing device and the processor comprising the computing device
CN107578095B (en) * 2017-09-01 2018-08-10 中国科学院计算技术研究所 Neural computing device and processor comprising the computing device
CN108171326A (en) * 2017-12-22 2018-06-15 清华大学 Data processing method, device, chip, equipment and the storage medium of neural network
CN108171326B (en) * 2017-12-22 2020-08-04 清华大学 Data processing method, device, chip, equipment and storage medium of neural network
CN108830379A * 2018-05-23 2018-11-16 电子科技大学 Neuromorphic processor based on parameter quantization sharing
CN108830379B (en) * 2018-05-23 2021-12-17 电子科技大学 Neural morphology processor based on parameter quantification sharing
CN109491956A * 2018-11-09 2019-03-19 北京灵汐科技有限公司 Heterogeneous collaborative computing system
CN109858620A * 2018-12-29 2019-06-07 北京灵汐科技有限公司 Brain-like computing system
WO2020134824A1 (en) * 2018-12-29 2020-07-02 北京灵汐科技有限公司 Brain-like computing system
CN109858620B (en) * 2018-12-29 2021-08-20 北京灵汐科技有限公司 Brain-like computing system
CN110059800A (en) * 2019-01-26 2019-07-26 中国科学院计算技术研究所 Impulsive neural networks conversion method and related conversion chip
CN110059800B (en) * 2019-01-26 2021-09-14 中国科学院计算技术研究所 Pulse neural network conversion method and related conversion chip
WO2020155741A1 (en) * 2019-01-29 2020-08-06 清华大学 Fusion structure and method of convolutional neural network and pulse neural network
CN110163016A (en) * 2019-04-29 2019-08-23 清华大学 Hybrid system and mixing calculation method
CN110163016B (en) * 2019-04-29 2021-08-03 清华大学 Hybrid computing system and hybrid computing method
CN110213165A * 2019-06-05 2019-09-06 北京灵汐科技有限公司 Heterogeneous cooperative system and communication method thereof
CN110188872B (en) * 2019-06-05 2021-04-13 北京灵汐科技有限公司 Heterogeneous cooperative system and communication method thereof
CN110188872A * 2019-06-05 2019-08-30 北京灵汐科技有限公司 Heterogeneous cooperative system and communication method thereof
WO2020244370A1 (en) * 2019-06-05 2020-12-10 北京灵汐科技有限公司 Heterogeneous cooperative system and communication method therefor
CN110378469A (en) * 2019-07-11 2019-10-25 中国人民解放军国防科技大学 SCNN inference device based on asynchronous circuit, PE unit, processor and computer equipment thereof
CN112242963A (en) * 2020-10-14 2021-01-19 广东工业大学 Rapid high-concurrency neural pulse data packet distribution and transmission method
US11853896B2 (en) 2020-12-30 2023-12-26 Lynxi Technologies Co., Ltd. Neural network model, method, electronic device, and readable medium
WO2022143625A1 (en) * 2020-12-30 2022-07-07 北京灵汐科技有限公司 Neural network model, method, electronic device, and readable medium
CN112862100A (en) * 2021-01-29 2021-05-28 网易有道信息技术(北京)有限公司 Method and apparatus for optimizing neural network model inference
CN113285875B (en) * 2021-05-14 2022-07-29 南京大学 Space route prediction method based on impulse neural network
CN113285875A (en) * 2021-05-14 2021-08-20 南京大学 Space route prediction method based on impulse neural network
CN113554162A (en) * 2021-07-23 2021-10-26 上海新氦类脑智能科技有限公司 Axon input extension method, device, equipment and storage medium
CN113723594A (en) * 2021-08-31 2021-11-30 绍兴市北大信息技术科创中心 Impulse neural network target identification method
CN113723594B (en) * 2021-08-31 2023-12-05 绍兴市北大信息技术科创中心 Pulse neural network target identification method
CN113807511B (en) * 2021-09-24 2023-09-26 北京大学 Impulse neural network multicast router and method
CN113807511A (en) * 2021-09-24 2021-12-17 北京大学 Impulse neural network multicast router and method

Also Published As

Publication number Publication date
CN105095961B (en) 2017-09-29

Similar Documents

Publication Publication Date Title
CN105095961A (en) Mixing system with artificial neural network and impulsive neural network
CN105095967A (en) Multi-mode neural morphological network core
CN105095966A (en) Hybrid computing system of artificial neural network and impulsive neural network
CN105095965A (en) Hybrid communication method of artificial neural network and impulsive neural network
CN112000108B (en) Multi-agent cluster grouping time-varying formation tracking control method and system
Sayama et al. Modeling complex systems with adaptive networks
Stromatias et al. Scalable energy-efficient, low-latency implementations of trained spiking deep belief networks on spinnaker
Ferreira et al. An approach to reservoir computing design and training
Topalli et al. A hybrid learning for neural networks applied to short term load forecasting
CN104077438B (en) Power network massive topologies structure construction method and system
CN105589333B (en) Control method is surrounded in multi-agent system grouping
Zheng et al. Improving the efficiency of multi-objective evolutionary algorithms through decomposition: An application to water distribution network design
Nasira et al. Vegetable price prediction using data mining classification technique
CN108241964A (en) Capital construction scene management and control mobile solution platform based on BP artificial nerve network model algorithms
Jiang et al. A data-driven based decomposition–integration method for remanufacturing cost prediction of end-of-life products
Ebrahimnejad et al. A new method for solving dual DEA problems with fuzzy stochastic data
Li et al. A recurrent neural network and differential equation based spatiotemporal infectious disease model with application to covid-19
Ferreira et al. Evolutionary strategy for simultaneous optimization of parameters, topology and reservoir weights in echo state networks
JPH02136034A (en) Optimal power load distribution system by neural network
Pasam et al. Multi-objective Decision Based Available Transfer Capability in Deregulated Power System Using Heuristic Approaches
CN114036582A (en) Multiplicative neural network model and privacy calculation method
Schwenker et al. Echo state networks and neural network ensembles to predict sunspots activity
de Araujo Góes et al. NAROAS: a neural network-based advanced operator support system for the assessment of systems reliability
Sambatti et al. Self-configured neural network for data assimilation using FPGA for ocean circulation
CN106570586A (en) Field detection vehicle path planning method for electric energy metering device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20180212

Address after: Room 200-30, 2nd Floor, Block B, Wanghai Building, No. 10 West Third Ring Road, Haidian District, Beijing 100036

Patentee after: Beijing Ling Xi Technology Co. Ltd.

Address before: P.O. Box 100084-82, Beijing 100084

Patentee before: Tsinghua University