CN105095961B - Hybrid system of an artificial neural network and a spiking neural network - Google Patents

Hybrid system of an artificial neural network and a spiking neural network

Info

Publication number
CN105095961B
Authority
CN (China)
Prior art keywords
network, neural network, routing node, neuromorphic, network core
Legal status
Active
Application number
CN201510419325.8A
Other languages
Chinese (zh)
Other versions
CN105095961A
Inventors
裴京, 施路平, 王栋, 邓磊
Current Assignee
Beijing Ling Xi Technology Co. Ltd.
Original Assignee
Tsinghua University
Application filed by Tsinghua University
Priority to CN201510419325.8A
Publication of CN105095961A
Application granted
Publication of CN105095961B

Abstract

The present invention provides a hybrid system of an artificial neural network and a spiking neural network, comprising a plurality of neuromorphic network cores and a plurality of routing nodes in one-to-one correspondence with the neuromorphic network cores. The neuromorphic network cores perform neural network computation and exchange input and output data through their local routing nodes. The routing nodes form a routing network that carries the data input and output of the whole system. By combining the computation models of the two kinds of neural network, the hybrid system provided by the present invention can process real-time, multi-modal or complex spatio-temporal signals while guaranteeing computational accuracy.

Description

Hybrid system of an artificial neural network and a spiking neural network
Technical field
The present invention relates to a neural network computing system.
Background art
A neural network is a computing system that processes data by mimicking the synapse-neuron structure of the biological brain. It consists of computing nodes arranged in layers and of the connections between layers. Each node simulates a neuron and performs a specific operation, for example an activation function; the connections between nodes simulate synapses, and each connection carries a weight, the synapse weight, applied to the input received from the previous layer. Neural networks have powerful nonlinear, adaptive information-processing capabilities.
A neuron in an artificial neural network accumulates the values received over its input connections, applies an activation function, and uses the result as its output. Depending on network topology, neuron model and learning rule, artificial neural networks include dozens of models such as the perceptron, the Hopfield network and the Boltzmann machine; they can realize a wide variety of functions and have been applied to pattern recognition, complex control, signal processing and optimization. The data in a traditional artificial neural network can be regarded as encoded by the firing frequency of neuron pulses, and the layers of neurons run serially, one after another. Artificial neural networks simulate the hierarchical structure of the biological nervous system, but fail to capture information-processing features of the cortex such as the influence of temporal order on learning; in a real biological cortex, learning is not an isolated, static process but carries temporal context. The spiking neural network is a new type of neural network that has emerged over roughly the last decade and is known as the third generation of neural networks. Data in a spiking neural network are encoded in the spatio-temporal information of neuron pulse signals; the inputs and outputs of the network, and the information transferred between neurons, take the form of the pulses emitted by neurons and the times at which they are emitted, so the neurons must run concurrently. Compared with traditional artificial neural networks, spiking neural networks differ considerably in information representation, neuron model and parallelism, and their mode of operation is closer to a real biological system. A spiking neural network encodes and processes neural information with precisely timed pulse trains; this computation model, which contains time as a computational element, is more biologically plausible and is an effective tool for complex spatio-temporal information processing, able to handle multi-modal information in real time. However, the discontinuity of spiking neuron models, the complexity of spatio-temporal coding and the uncertainty of the network structure make it difficult to give a complete mathematical description of the whole network, and therefore difficult to construct effective and general supervised learning algorithms, which limits the scale and accuracy of such computation.
Summary of the invention
In view of the above, it is necessary to provide a neural network computing system that can process real-time, multi-modal or complex spatio-temporal signals while guaranteeing computational accuracy.
A hybrid system of an artificial neural network and a spiking neural network comprises a plurality of neuromorphic network cores and a plurality of routing nodes in one-to-one correspondence with the neuromorphic network cores. Each pair of a neuromorphic network core and its routing node corresponds to one XY coordinate; within such a pair, the core is called the local neuromorphic network core of the routing node and the routing node is called the local routing node of the core. The neuromorphic network cores perform neural network computation and exchange input and output data through their local routing nodes; at least one of the cores operates in artificial-neural-network mode and at least one operates in spiking-neural-network mode. The routing nodes form a routing network that carries the data input and output of the whole system.
Compared with the prior art, the hybrid system of an artificial neural network and a spiking neural network provided by the present invention combines the computation models of the two kinds of neural network, and can therefore process real-time, multi-modal or complex spatio-temporal signals while guaranteeing computational accuracy.
Brief description of the drawings
Fig. 1 is a structural diagram of a basic computing unit in the hybrid system of an artificial neural network and a spiking neural network provided by the first embodiment of the invention.
Fig. 2 is a schematic diagram of the series structure of the invention.
Fig. 3 is a schematic diagram of the parallel structure of the invention.
Fig. 4 is a schematic diagram of the juxtaposed structure of the invention.
Fig. 5 is a schematic diagram of the learning structure of the invention.
Fig. 6 is a schematic diagram of the feedback structure of the invention.
Fig. 7 is a schematic diagram of the hierarchy of computing units in the hybrid system provided by the invention.
Fig. 8 shows a hybrid system of an artificial neural network and a spiking neural network provided by the invention.
Fig. 9 is a schematic diagram of converting the numerical output of an artificial neural network into a pulse train in the second embodiment of the invention.
Fig. 10 is a schematic diagram of converting a frequency-coded pulse train output by a spiking neural network into a numerical value in the second embodiment of the invention.
Fig. 11 is a schematic diagram of converting a population-coded pulse train output by a spiking neural network into a numerical value in the second embodiment of the invention.
Fig. 12 is a schematic diagram of converting a time-coded pulse train output by a spiking neural network into a numerical value in the second embodiment of the invention.
Fig. 13 is a schematic diagram of converting a binary-coded pulse train output by a spiking neural network into a numerical value in the second embodiment of the invention.
Fig. 14 is a block diagram of the multi-modal neuromorphic network core provided by the third embodiment of the invention.
Fig. 15 is a block diagram of the multi-modal neuromorphic network core of the third embodiment operating in artificial-neural-network mode.
Fig. 16 is a flow chart of one time step of the multi-modal neuromorphic network core in artificial-neural-network mode.
Fig. 17 is a block diagram of the multi-modal neuromorphic network core of the third embodiment operating in spiking-neural-network mode.
Fig. 18 is a flow chart of one time step of the multi-modal neuromorphic network core in spiking-neural-network mode.
Fig. 19 shows the hybrid system of an artificial neural network and a spiking neural network provided by the fourth embodiment of the invention.
Fig. 20 is a block diagram of a routing node of the fourth embodiment of the invention.
Fig. 21 shows the route packet format of the fourth embodiment of the invention.
Fig. 22 is a work-flow chart of a routing node of the fourth embodiment of the invention.
Description of main element reference numerals
Hybrid system 100
Basic computing unit 110
First basic computing unit 110a
Second basic computing unit 110b
Learning unit 111
Neuron 115
Synapse 116
Composite computing unit 120
Series composite unit 120a
Parallel composite unit 120b
Juxtaposed composite unit 120c
Learning composite unit 120d
Feedback composite unit 120e
Hybrid system 200
Neuromorphic network core 210
Multi-modal neuromorphic network core 210a
Mode register 211
Axon input unit 212
Synapse weight storage unit 213
Dendrite unit 214
Dendrite multiply-accumulate unit 214a
Dendrite accumulation unit 214b
Neuron computing unit 215
First computing unit 215a
Second computing unit 215b
Dendrite expansion storage unit 2151
Parameter storage unit 2152
Integrate-and-leak computation unit 2153
Trigger signal counter 216
Controller 217
Routing node 220
The following embodiments further illustrate the present invention with reference to the above drawings.
Detailed description of the embodiments
The hybrid system of an artificial neural network and a spiking neural network provided by the present invention is described in further detail below with reference to the drawings and specific embodiments.
The first embodiment of the invention provides a hybrid system 100 of an artificial neural network and a spiking neural network, comprising at least two basic computing units 110. Among these basic computing units 110, at least one is an artificial neural network computing unit that performs artificial neural network computation, and at least one is a spiking neural network computing unit that performs spiking neural network computation. The basic computing units 110 are interconnected according to a topological structure and jointly realize the neural network computing function.
Referring to Fig. 1, each artificial neural network computing unit and each spiking neural network computing unit can be regarded as an independent neural network. Such a neural network comprises a plurality of neurons 115 connected by synapses 116 into a single-layer or multi-layer structure. The synapse weight is the weight with which a postsynaptic neuron receives the output of a presynaptic neuron.
The at least one spiking neural network computing unit performs spiking neural network computation on the data it receives. Its input data, output data and the data transferred between its neurons 115 are spike trains. The neuron model used in the spiking neural network computing unit is a spike-based neuron model, which may be, but is not limited to, at least one of the leaky integrate-and-fire model, the spike response model and the Hodgkin-Huxley model.
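As an illustration only, the following minimal Python sketch shows the leaky integrate-and-fire model named above in discrete time; the specific threshold, leak and reset values are assumptions, not the patent's implementation.

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) neuron in discrete time.
# The threshold, leak and reset values below are illustrative assumptions.
class LIFNeuron:
    def __init__(self, threshold=127, leak=1, v_reset=0):
        self.threshold = threshold  # positive firing threshold
        self.leak = leak            # amount subtracted every time step
        self.v_reset = v_reset      # membrane potential after a spike
        self.v = 0                  # current membrane potential

    def step(self, synaptic_input):
        """Integrate one time step of summed synaptic input; return 1 on a spike."""
        self.v += synaptic_input - self.leak
        if self.v >= self.threshold:
            self.v = self.v_reset
            return 1
        return 0
```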
The at least one artificial neural network computing unit performs artificial neural network computation on the data it receives. Its input data, output data and the data transferred between its neurons 115 are numerical values. According to the neuron model, network structure and learning algorithm used, the artificial neural network computing unit may be at least one of a perceptron computing unit, a BP neural network computing unit, a Hopfield neural network computing unit, an adaptive resonance theory neural network computing unit, a deep belief network computing unit and a convolutional neural network computing unit.
At least one artificial neural network computing unit and at least one spiking neural network computing unit are topologically connected to form a composite neural network computing unit.
The topological connection includes at least one of a series structure, a parallel structure, a juxtaposed structure, a learning structure and a feedback structure.
Referring to Fig. 2, two basic computing units 110 are connected in series to form a series composite unit 120a. The two basic computing units 110 are a first basic computing unit 110a and a second basic computing unit 110b, and the output of the first basic computing unit 110a is connected to the input of the second basic computing unit 110b. One of the two units is an artificial neural network computing unit and the other is a spiking neural network computing unit. The system input is first processed by the first basic computing unit 110a, the result serves as the input of the second basic computing unit 110b, and the result produced by the second basic computing unit 110b is the system output.
Referring to Fig. 3, two basic computing units 110 are connected in parallel to form a parallel composite unit 120b. The two basic computing units 110 are a first basic computing unit 110a and a second basic computing unit 110b; the input of the first basic computing unit 110a is connected to the input of the second basic computing unit 110b, and the output of the first basic computing unit 110a is connected to the output of the second basic computing unit 110b. One of the two units is an artificial neural network computing unit and the other is a spiking neural network computing unit. The system input is fed simultaneously to the first basic computing unit 110a and the second basic computing unit 110b for parallel processing, and their respective results are collected and used as the system output.
Referring to Fig. 4, two basic computing units 110 form a juxtaposed composite unit 120c. The two basic computing units 110 are a first basic computing unit 110a and a second basic computing unit 110b; their inputs are independent of each other, and the output of the first basic computing unit 110a is connected to the output of the second basic computing unit 110b. One of the two units is an artificial neural network computing unit and the other is a spiking neural network computing unit. The system input is divided into two parts, input 1 and input 2: input 1 is fed to and processed by the first basic computing unit 110a, input 2 is fed to and processed by the second basic computing unit 110b, and the results of the two units are collected and used as the system output.
Referring to Fig. 5, two basic computing units 110 and a learning unit 111 form a learning composite unit 120d. The two basic computing units 110 are a first basic computing unit 110a and a second basic computing unit 110b; one of them is an artificial neural network computing unit and the other is a spiking neural network computing unit. The system input is processed by the first basic computing unit 110a to obtain the actual output, and the difference between the actual output and the target output is fed to the learning unit 111, which adjusts parameters such as the network structure and synapse weights of the second basic computing unit 110b according to this difference. The learning algorithm of the learning unit may be the Delta rule, the BP algorithm, simulated annealing, a genetic algorithm, and so on; the learning algorithm used in this embodiment is the BP algorithm. The output of the second basic computing unit 110b may serve as parameters of the first basic computing unit 110a, such as its network structure and synapse weights, or may be used to adjust such parameters of the first basic computing unit 110a.
Referring to Fig. 6, in one embodiment of the invention two basic computing units 110 form a feedback composite unit 120e. The two basic computing units 110 are a first basic computing unit 110a and a second basic computing unit 110b. The output of the first basic computing unit 110a is connected to the input of the second basic computing unit 110b, and the computation result of the second basic computing unit 110b is fed back to the first basic computing unit 110a. One of the two units is an artificial neural network computing unit and the other is a spiking neural network computing unit. The system input is processed by the first basic computing unit 110a and then output; this output also serves as the input of the second basic computing unit 110b, whose output is fed back to the first basic computing unit 110a as a feedback value.
Each of the above examples combines two basic computing units 110 under a certain topological structure to form a composite computing unit. Furthermore, a larger number of basic computing units 110 can be combined under a certain topological structure to form composite computing units, and composite computing units can in turn be combined under a certain topological structure to form more complex hybrid computing structures, producing a rich variety of hybrid computing structures. Referring to Fig. 7, the first-layer composite computing unit 120 is a serial hybrid computing structure; in the second layer it is decomposed into two composite computing units 120 connected in series, and in the third layer each second-layer composite computing unit 120 is further decomposed into two composite computing units 120 connected in parallel. This decomposition can continue until, at the last layer, everything is decomposed into basic computing units 110, the smallest computing unit. Referring to Fig. 8, the figure shows a specific hybrid computing structure of an artificial neural network and a spiking neural network obtained by this hierarchical design method, including series, parallel and feedback structures.
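As an illustrative sketch only, the series composition of Fig. 2 can be expressed as follows in Python, assuming each unit exposes a hypothetical run(inputs) method and that a format-conversion step (introduced in the next paragraph) bridges the two data types; these interfaces are not part of the patented implementation.

```python
# Schematic series (cascade) composition of an ANN unit and an SNN unit.
# The unit interfaces and the encode_to_spikes converter are hypothetical.
def run_cascade(ann_unit, snn_unit, numeric_input, encode_to_spikes):
    numeric_out = ann_unit.run(numeric_input)    # ANN stage: numeric in, numeric out
    spike_train = encode_to_spikes(numeric_out)  # format conversion between the stages
    return snn_unit.run(spike_train)             # SNN stage: spike train in and out
```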
The hybrid system 100 of an artificial neural network and a spiking neural network further comprises at least one format conversion unit arranged between an artificial neural network computing unit and a spiking neural network computing unit. The format conversion unit realizes data transfer between different types of neural network computing units: it converts the numerical output of an artificial neural network computing unit into a pulse train, or converts the pulse-train output of a spiking neural network computing unit into numerical values, thereby ensuring that data can be transferred between basic computing units 110 of different types.
Further, the hybrid system 100 of an artificial neural network and a spiking neural network may include a plurality of artificial neural network computing units and a plurality of spiking neural network computing units, topologically connected using the topological structures described above.
The hybrid system 100 of an artificial neural network and a spiking neural network provided by the first embodiment of the invention combines the computation models of the two kinds of neural network: the artificial neural network is used for computing units that require accurate data processing or a complete mathematical description, while the spiking neural network is used for computing units that require fast information processing, complex spatio-temporal signal processing, or the simultaneous processing of multi-modal signals (such as combined audio-visual signals). Together they form a system that can process real-time, multi-modal or complex spatio-temporal signals while guaranteeing computational accuracy. For example, when the system performs integrated real-time audio-visual processing, the multi-modal, complex spatio-temporal signals containing images (video) and sound can first be fed into a spiking neural network computing unit for preprocessing, which quickly reduces the spatio-temporal complexity of the signal or extracts the required spatio-temporal features in real time; the preprocessed data are then fed into an artificial neural network computing unit, where a complete mathematical model or a supervised learning algorithm can be used to build an artificial neural network capable of more precise data processing, thereby ensuring the accuracy of the output.
The second embodiment of the invention provides a hybrid communication method between an artificial neural network and a spiking neural network, comprising: judging whether the data types of the sending and receiving basic computing units 110 of the neural networks in communication are the same; if they are the same, transmitting the data directly; if not, performing a data format conversion that converts the data sent by the sender into the data type of the receiver, and then transmitting the data. The data type of the artificial neural network is numerical values, and the data type of the spiking neural network is pulse trains.
Specifically, when performing the data format conversion, if the sender is an artificial neural network computing unit and the receiver is a spiking neural network computing unit, the numerical output of the artificial neural network is converted into a pulse train and fed into the spiking neural network; if the sender is a spiking neural network computing unit and the receiver is an artificial neural network computing unit, the pulse-train output of the spiking neural network is converted into numerical values and fed into the artificial neural network.
Different types of basic computing units 110 input and output data in different formats. An artificial neural network is based on numerical operations, so its inputs and outputs are numerical values; a spiking neural network is based on pulse-train operations, so its inputs and outputs are pulse trains. To integrate different types of basic computing units 110 into one hybrid computing structure, the problem of communication between them must be solved. A hybrid communication method between an artificial neural network and a spiking neural network is therefore needed, so that the numerical output of an artificial neural network computing unit can be received by a spiking neural network computing unit, and the pulse-train output of a spiking neural network computing unit can be received by an artificial neural network computing unit.
Referring to Fig. 9, in one embodiment of the invention the spiking neural network unit is the data receiver and the artificial neural network computing unit is the data sender. The communication process is: the numerical value output by the artificial neural network computing unit is converted into a pulse train of the corresponding frequency, and this pulse train serves as the input of the spiking neural network computing unit; "corresponding frequency" means that the frequency of the converted pulse train is proportional to the magnitude of the numerical value.
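A minimal rate-coding sketch of this direction of conversion is given below, assuming a numerical value already normalised to [0, 1] and a probabilistic spike train whose expected frequency is proportional to that value; the window length and the probabilistic scheme are illustrative choices, not the patented circuit.

```python
# Rate coding: a numeric value in [0, 1] becomes a spike train whose
# expected firing frequency per time step is proportional to the value.
import random

def encode_rate(value, n_steps=16):
    """Return a list of 0/1 spikes with expected rate proportional to value."""
    value = max(0.0, min(1.0, value))
    return [1 if random.random() < value else 0 for _ in range(n_steps)]
```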
In one embodiment of the invention, the artificial neural network unit is the data receiver and the spiking neural network unit is the data sender. The communication process is: the pulse-train output of the spiking neural network is converted into a numerical value of the corresponding magnitude, and this numerical value serves as the input of the artificial neural network. Depending on the pulse-train coding scheme used by the spiking neural network, four cases can be distinguished:
Referring to Fig. 10, when the pulse train uses frequency coding, the effective information of the network output is carried only by the pulse frequency. When the spiking neural network uses this coding scheme, the pulse train is converted into a numerical value whose magnitude is proportional to the frequency of the pulse train, i.e. the inverse of the communication method from the artificial neural network to the spiking neural network described above.
Referring to Fig. 11, when the pulse train uses population coding, the outputs of several neurons jointly represent the same piece of information, and the effective information is the number of neurons that emit a spike at the same moment. When the spiking neural network uses this coding scheme, the corresponding communication method is: the number of neurons emitting a spike at the same moment is converted into a numerical value whose magnitude is proportional to the number of spiking neurons.
Referring to Fig. 12, when the pulse train uses time coding, the effective information is the time at which a neuron emits a spike. When the spiking neural network uses this coding scheme, the corresponding communication method is: the time at which the neuron emits a spike is converted into a numerical value whose magnitude is an exponential function of the spike emission time.
Referring to Fig. 13, when the pulse train uses binary coding, the effective information is whether a neuron emits a spike within a given time window. When the spiking neural network uses this coding scheme, the corresponding communication method is: if a spike is emitted within the time limit, the numerical value is 1; otherwise it is 0.
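The following hedged sketch summarises the four decoding directions just described (frequency, population, time and binary coding); the scaling constants are illustrative assumptions rather than values from the patent.

```python
# Decoding a spike train into a numerical value under the four coding schemes.
import math

def decode_frequency(spikes):
    """Frequency coding: value proportional to the firing rate."""
    return sum(spikes) / len(spikes)

def decode_population(spikes_at_t):
    """Population coding: value proportional to how many neurons spiked at the same step."""
    return sum(spikes_at_t)

def decode_time(spike_step, tau=4.0):
    """Time coding: value is an exponential function of the firing time."""
    return math.exp(-spike_step / tau)

def decode_binary(spikes):
    """Binary coding: 1 if any spike was fired within the window, else 0."""
    return 1 if any(spikes) else 0
```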
The hybrid communication method between an artificial neural network and a spiking neural network provided by the second embodiment of the invention realizes direct communication between the two kinds of neural network, on the basis of which their hybrid operation can be realized.
Referring to Fig. 14, the third embodiment of the invention provides a multi-modal neuromorphic network core 210a, comprising: a mode register 211, an axon input unit 212, a synapse weight storage unit 213, a dendrite unit 214 and a neuron computing unit 215.
In this embodiment, the multi-modal neuromorphic network core 210a has a input axons, b dendrites and b neuron somas. Each axon is connected to the dendrites, each connection point being a synapse, for a total of a × b synapses whose connection weights are the synapse weights. Each dendrite corresponds to one neuron soma. The largest network that the multi-modal neuromorphic network core 210a can carry is therefore a inputs × b neurons, where a and b are integers greater than 0 whose values depend on the application. The minimum value of a is 1; its maximum is limited by hardware resources, including storage and logic resources, i.e. the maximum number of axon inputs a single neuromorphic network core can realize. The minimum value of b is 1; its maximum is limited both by hardware resources and by the total number of neuron operations each neuromorphic network core can perform within a single trigger-signal period. In this embodiment both a and b are 256. It will be appreciated that in practical applications the specific numbers of input axons, dendrites and neuron somas can be adjusted as required.
The mode register 211 controls the operating mode of the multi-modal neuromorphic network core 210a. The mode register 211 is connected to the axon input unit 212, the dendrite unit 214 and the neuron computing unit 215, and controls whether these units operate in artificial-neural-network mode or spiking-neural-network mode. The mode register 211 is 1 bit wide and its value can be configured by the user.
The axon input unit 212 is connected to the dendrite unit 214. The axon input unit 212 receives and stores the a axon inputs. In this embodiment, each axon has a 16-bit storage cell. When operating in spiking-neural-network mode, the input of each axon is a 1-bit spike, and the storage cell stores the axon inputs of 16 time steps. When operating in artificial-neural-network mode, the input of each axon is an 8-bit signed number, and the storage cell stores the 8-bit axon input of one time step.
The synapse weight storage unit 213 is connected to the dendrite unit 214 and stores a × b synapse weights; in this embodiment each synapse weight is an 8-bit signed number.
The input of the dendrite unit 214 is connected to the axon input unit 212 and the synapse weight storage unit 213, and its output is connected to the neuron computing unit 215. The dendrite unit 214 performs the vector-matrix multiplication of the a-element axon input vector with the a × b synapse weight matrix; each of the b results of this operation is the dendritic input of one neuron. The dendrite unit 214 comprises a dendrite multiply-accumulate unit 214a and a dendrite accumulation unit 214b. When operating in artificial-neural-network mode, the axon input vector and the synapse weight matrix are fed into the dendrite multiply-accumulate unit 214a for multiply-accumulate operations, and the vector-matrix multiplication is implemented with a multiplier and an accumulator. When operating in spiking-neural-network mode, the axon input vector and the synapse weight matrix are fed into the dendrite accumulation unit 214b for accumulation; since each axon input is 1 bit, the vector-matrix multiplication is implemented with a data selector and an accumulator.
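A software sketch of the two dendritic operations is given below, assuming an a-element axon input vector and an a × b synapse weight matrix represented as nested lists; it illustrates the arithmetic only, not the multiplier/selector hardware.

```python
# Dendrite unit arithmetic in the two modes. weights[i][j] is the weight of
# the synapse between axon i and neuron (dendrite) j.
def dendrite_ann(axon_inputs, weights):
    """ANN mode: multiply-accumulate, one weighted sum per neuron (column)."""
    b = len(weights[0])
    return [sum(axon_inputs[i] * weights[i][j] for i in range(len(axon_inputs)))
            for j in range(b)]

def dendrite_snn(axon_spikes, weights):
    """SNN mode: accumulate only the weights whose axon fired (input bit is 1)."""
    b = len(weights[0])
    return [sum(weights[i][j] for i in range(len(axon_spikes)) if axon_spikes[i])
            for j in range(b)]
```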
The neuron computing unit 215 performs the neuron computation and comprises a first computing unit 215a and a second computing unit 215b. When operating in artificial-neural-network mode, the multiply-accumulate results from the dendrite multiply-accumulate unit 214a are fed into the first computing unit 215a for artificial-neural-network computation; in this embodiment the neuron implements an arbitrary nonlinear activation function through a 1024 × 8-bit lookup table and outputs an 8-bit value. When operating in spiking-neural-network mode, the accumulation results from the dendrite accumulation unit 214b are fed into the second computing unit 215b for spiking-neural-network computation; in this embodiment the neuron is a leaky integrate-and-fire (LIF) model, which outputs a spike signal and the current membrane potential.
The multi-modal neuromorphic network core 210a further comprises a trigger signal counter 216, which receives the trigger signal and records the number of trigger signals, i.e. the current time step. The trigger signal is a clock signal with a fixed period; in this embodiment the period is 1 ms.
The multi-modal neuromorphic network core 210a further comprises a controller 217. The controller 217 is connected to the axon input unit 212, the synapse weight storage unit 213, the dendrite unit 214 and the neuron computing unit 215, and controls their operation timing. The controller 217 is also responsible for starting and stopping the multi-modal neuromorphic network core 210a: on the rising edge of the trigger signal it starts the computation of the core and controls the computation flow.
The artificial-neural-network operating mode and the spiking-neural-network operating mode of the multi-modal neuromorphic network core 210a are introduced below in turn.
Referring to Fig. 15, when the multi-modal neuromorphic network core 210a operates in artificial-neural-network mode, the axon input received by the axon input unit 212 is a 16-bit packet comprising an 8-bit target axon index (0-255) and an 8-bit input value (a signed number from -128 to 127). In this mode the memory of the axon input unit 212 is 256 × 8 bits and records the 8-bit inputs of the 256 axons.
The synapse weight storage unit 213 stores 256 × 256 synapse weights, each an 8-bit signed number.
The dendrite multiply-accumulate unit 214a comprises a multiplier and an adder. In each time step it computes, for each neuron, the product of the 256-element synapse weight vector on its dendrite and the 256-element axon input vector, and passes the result to the first computing unit 215a as its input.
The first computing unit 215a performs the nonlinear or linear activation-function computation of a neuron in the artificial neural network. Before use, the 8-bit output corresponding to each input value from 0 to 1023 is computed from the activation function used by the neuromorphic network, and these results are stored in the lookup table of the neuron computing unit. At run time, the dendritic multiply-accumulate result from the dendrite multiply-accumulate unit 214a is used as the address of the lookup table, and the 8-bit value stored at that address is the neuron output.
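The lookup-table mechanism can be sketched as follows; the sigmoid function, the mapping of the multiply-accumulate result to a table address and the output scaling are illustrative assumptions, only the 1024-entry, 8-bit-output table comes from the description above.

```python
# Build a 1024-entry activation lookup table offline, then read it at run time.
import math

def build_activation_lut(f=lambda x: 1 / (1 + math.exp(-x)), size=1024):
    """Precompute an 8-bit output for every possible table address."""
    return [int(round(f((addr - size // 2) / 128.0) * 255)) for addr in range(size)]

def lut_activation(dendrite_sum, lut):
    """Clamp the dendrite result into the table's address range and read the output."""
    addr = max(0, min(len(lut) - 1, dendrite_sum + len(lut) // 2))
    return lut[addr]
```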
Referring to Fig. 16, one time step of the multi-modal neuromorphic network core 210a in artificial-neural-network mode comprises the following steps:
S11: when a trigger signal is detected, the trigger signal counter 216 cyclically increments by 1 and the controller 217 starts the operation of the multi-modal neuromorphic network core 210a;
S12: the axon input unit 212 reads the axon input vector of the current time step according to the value of the trigger signal counter 216 and sends it to the dendrite multiply-accumulate unit 214a; the synapse weight storage unit 213 reads out the dendritic synapse weight vectors of neurons 1 to 256 in sequence and sends them to the dendrite multiply-accumulate unit 214a;
S13: the dendrite multiply-accumulate unit 214a computes, in input order, the products of the axon input vector with the dendritic synapse weight vectors of neurons 1 to 256, and sends them to the first computing unit 215a;
S14: the first computing unit 215a uses the output of the dendrite multiply-accumulate unit 214a as a lookup-table address and obtains the neuron output;
S15: the controller 217 stops the operation of the multi-modal neuromorphic network core 210a, and the flow returns to step S11.
Referring to Fig. 17, when the multi-modal neuromorphic network core 210a operates in spiking-neural-network mode, the axon input received by the axon input unit 212 is a 12-bit packet comprising an 8-bit target axon index (0-255) and 4 bits of delay information (0-15). The delay information is the difference between the time step at which the input takes effect and the current time step. In this mode the memory of the axon input unit 212 is 256 × 16 bits and records the inputs of the 256 axons for the current time step and the following ones, 16 time steps in total. If a bit is 1, the corresponding axon is activated in the corresponding time step, i.e. the input is 1; if a bit is 0, the corresponding axon is not activated in that time step, i.e. the input is 0.
The synapse weight storage unit 213 stores 256 × 256 synapse weights, each an 8-bit signed number.
The dendrite accumulation unit 214b comprises a data selector and an adder. For each dendrite in turn it computes the sum of the weights of all synapses activated in the current time step, and passes the result to the second computing unit 215b in the neuron computing unit 215 as its input.
The second computing unit 215b performs the neuron computation of the spiking neural network; in this embodiment the neuron is a leaky integrate-and-fire (LIF) model. The second computing unit 215b further comprises a dendrite expansion storage unit 2151, a parameter storage unit 2152 and an integrate-and-leak computation unit 2153. In each time step the second computing unit 215b runs 256 times in succession, realizing the computation of 256 neurons by time multiplexing. The dendrite expansion storage unit 2151 contains 256 storage cells and receives externally sent dendrite expansion packets, each containing the membrane potential value of a sending neuron and a target neuron index; the sending neuron's membrane potential is stored into the corresponding cell according to the neuron index. The parameter storage unit 2152 contains 256 storage cells and stores the membrane potential, threshold and leak value of the 256 neurons. The integrate-and-leak computation unit 2153 performs the integrate-leak-fire operation for each neuron in turn and, when the membrane potential exceeds the positive threshold, outputs a spike signal and the current membrane potential value. The integrate-leak-fire operation is as follows:
membrane potential = previous membrane potential + dendritic input + dendrite expansion input − leak value
If the membrane potential exceeds the positive threshold, a spike signal and the membrane potential value are sent out and the membrane potential is reset. If the membrane potential is below the negative threshold, no signal is sent and the membrane potential is reset.
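The update rule spelled out above can be sketched as a single function; the reset value of 0 is an assumption, the rest follows the description.

```python
# Integrate-leak-fire update with a positive firing threshold and a negative
# clamping threshold, as described above; reset-to-zero is an assumption.
def lif_update(v, dendrite_in, dendrite_expand_in, leak, pos_threshold, neg_threshold):
    """Return (new membrane potential, spike flag)."""
    v = v + dendrite_in + dendrite_expand_in - leak
    if v > pos_threshold:
        return 0, 1          # fire: emit a spike and reset the potential
    if v < neg_threshold:
        return 0, 0          # clamp: reset without firing
    return v, 0
```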
Referring to Fig. 18, one time step of the multi-modal neuromorphic network core 210a in spiking-neural-network mode comprises the following steps:
S21: when a trigger signal is detected, the trigger signal counter 216 cyclically increments by 1 and the controller 217 starts the operation of the multi-modal neuromorphic network core 210a;
S22: the axon input unit 212 reads the axon input vector of the current time step according to the value of the trigger signal counter 216 and sends it to the dendrite accumulation unit 214b; the synapse weight storage unit 213 reads out the dendritic synapse weight vectors of neurons 1 to 256 in sequence and sends them to the dendrite accumulation unit 214b;
S23: the dendrite accumulation unit 214b computes, in input order, the products of the axon input vector with the dendritic synapse weight vectors of neurons 1 to 256, and sends them to the second computing unit 215b;
S24: the second computing unit 215b reads the parameters of neurons 1 to 256 from the parameter storage unit 2152 in turn, computes with the input values from the dendrite accumulation unit 214b, and outputs the spike signals and current membrane potentials;
S25: after the neuron computation is finished, the controller 217 stops the operation of the multi-modal neuromorphic network core 210a, and the flow returns to step S21.
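A compact software sketch of one such time step is given below, assuming 256 axons and 256 time-multiplexed neurons and reusing the lif_update routine sketched earlier; the data layout is an illustrative assumption.

```python
# One SNN-mode time step (steps S21-S25): serially process all neurons.
def snn_time_step(axon_bits, weights, v, leak, pos_th, neg_th, expand_in):
    """Return the indices of the neurons that spiked in this time step."""
    spikes = []
    for j in range(len(v)):                       # neurons are time-multiplexed
        dendrite_in = sum(weights[i][j] for i in range(len(axon_bits)) if axon_bits[i])
        v[j], fired = lif_update(v[j], dendrite_in, expand_in[j],
                                 leak[j], pos_th[j], neg_th[j])
        if fired:
            spikes.append(j)
    return spikes
```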
Further, two kinds of single-mode neuromorphic network core can be obtained on the basis of the multi-modal neuromorphic network core 210a provided by this embodiment. A single-mode neuromorphic network core operating in artificial-neural-network mode comprises: the axon input unit 212, the synapse weight storage unit 213, the dendrite multiply-accumulate unit 214a and the first computing unit 215a. A single-mode neuromorphic network core operating in spiking-neural-network mode comprises: the axon input unit 212, the synapse weight storage unit 213, the dendrite accumulation unit 214b and the second computing unit 215b. Both single-mode neuromorphic network cores are corresponding simplifications of the third embodiment; the specific connections and functions of their internal units are as described in the third embodiment of the invention.
It will be appreciated that the multi-modal neuromorphic network core 210a and the two single-mode neuromorphic network cores provided by the third embodiment of the invention can serve as the basic computing units 110 of the hybrid system 100 of an artificial neural network and a spiking neural network of the first embodiment of the invention.
The multi-modal neuromorphic network core 210a provided by the third embodiment of the invention can perform both artificial-neural-network computation and spiking-neural-network computation, and can switch between the artificial-neural-network and spiking-neural-network operating modes as required; it can therefore process real-time, multi-modal or complex spatio-temporal signals while guaranteeing computational accuracy.
Referring to Fig. 19, the fourth embodiment of the invention provides a hybrid system 200 of an artificial neural network and a spiking neural network, comprising: a plurality of neuromorphic network cores 210 and a plurality of routing nodes 220 in one-to-one correspondence with them. The routing nodes 220 form a routing network with an m × n mesh structure, where m and n are integers greater than 0. The row direction of the m × n array is defined as the X direction and the column direction as the Y direction, and each pair of a neuromorphic network core 210 and its routing node 220 has a unique local XY coordinate.
The one-to-one correspondence between the neuromorphic network cores 210 and the routing nodes 220 means that each neuromorphic network core 210 corresponds to one routing node 220 and each routing node 220 corresponds to one neuromorphic network core 210. Within such a corresponding pair, the neuromorphic network core 210 is called the local neuromorphic network core of the routing node 220, and the routing node 220 is called the local routing node of the neuromorphic network core 210. The input of a neuromorphic network core 210 always comes from its local routing node, and its neuron computation results are also sent to the local routing node, from which they are output through the routing network or delivered to a target neuromorphic network core.
In this embodiment there are 9 neuromorphic network cores 210 and 9 routing nodes 220, and the 9 routing nodes 220 form a routing network with a 3 × 3 mesh structure.
A neuromorphic network core 210 may be a single-mode neuromorphic network core or a multi-modal neuromorphic network core, for example the multi-modal neuromorphic network core 210a provided by the third embodiment of the invention. A single-mode neuromorphic network core can operate only in artificial-neural-network mode or only in spiking-neural-network mode. A multi-modal neuromorphic network core has two operating modes, artificial-neural-network mode and spiking-neural-network mode, and can switch between them by configuring internal parameters; the operating mode of each multi-modal neuromorphic network core can be configured individually.
Each neuromorphic network core 210 has a plurality of neuron units and a plurality of axon inputs. In this embodiment, the neuromorphic network core 210 has 256 neuron units and 256 axon inputs and can carry a neural network of at most 256 neurons. If the neural network to be realized is larger than 256 neurons, several neuromorphic network cores 210 must be connected together topologically through the routing network, each core carrying part of the neural network computation, so that together they form one large neural network. The hybrid system 200 provided by this embodiment has 9 neuromorphic network cores 210 and can carry neural network computation of at most 2304 neurons; this computation may be artificial-neural-network computation, spiking-neural-network computation, or hybrid computation of an artificial neural network and a spiking neural network.
When a neuromorphic network core 210 operates in artificial-neural-network mode, the computation results of its 256 neuron units are all 8-bit numerical values; when it operates in spiking-neural-network mode, the computation results of its 256 neuron units are all 1-bit spikes. In either mode, the computation results of the neuron units are sent directly to the local routing node of the neuromorphic network core 210.
The routing nodes 220 form a routing network with a mesh structure, which carries the data transfer of the system. This data transfer includes system input, system output and the transfer between neuromorphic network cores 210; the latter is further divided into transfer between neuromorphic network cores of the same mode and transfer between neuromorphic network cores of different modes. Any routing node 220 in the routing network can be identified by a unique, determinate XY plane coordinate. In this embodiment the 9 routing nodes 220 form a 3 × 3 array; the row direction of the array is defined as the X direction and the column direction as the Y direction, and each routing node 220 can directly exchange data with its neighbouring routing nodes in the positive X, negative X, positive Y and negative Y directions, forming a mesh topology. It will be appreciated that, besides the mesh structure, other common topologies such as a star or bus structure may also be used.
The data transmitted by the routing network include system input data, system output data and data transferred between neuromorphic network cores 210. These data are transmitted in the routing network according to a preset routing rule. In this embodiment the routing rule is: a packet is first transmitted along the X direction, and only after reaching the routing node with the target X coordinate is it transmitted along the Y direction, until it reaches the routing node with the target XY coordinate. If (x0, y0) denotes the coordinate of the source routing node and (x1, y1) the coordinate of the target routing node, the routing rule is: (x0, y0) → (x1, y0) → (x1, y1). It will be appreciated that in practical applications a different routing rule may be set as required.
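This X-then-Y (dimension-ordered) rule can be illustrated with a short sketch that lists the nodes a packet visits; the coordinates are simply the (x, y) indices of routing nodes in the mesh.

```python
# Dimension-ordered (X first, then Y) routing path between two routing nodes.
def xy_route(src, dst):
    """Return the list of node coordinates a packet visits from src to dst."""
    (x, y), (x1, y1) = src, dst
    path = [(x, y)]
    while x != x1:                    # step 1: move along X toward the target column
        x += 1 if x1 > x else -1
        path.append((x, y))
    while y != y1:                    # step 2: move along Y toward the target row
        y += 1 if y1 > y else -1
        path.append((x, y))
    return path
```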
The transfer of the system input, the system output and the data between neuromorphic network cores 210 are described separately below.
The system input data are transmitted as follows: the system input data are first fed into any routing node on the outermost edge of the routing network, and are then delivered through the routing network, according to the routing rule above, to the target neuromorphic network core.
The system output data are transmitted as follows: the computation result of a neuromorphic network core 210 is first sent to its local routing node, then delivered through the routing network, according to the routing rule above, to any routing node on the outermost edge of the routing network, and finally sent by that routing node out of the system, completing the system output.
The transfer between neuromorphic network cores 210 proceeds as follows: the computation result of a neuromorphic network core 210 is first sent to its local routing node, then delivered through the routing network, according to the routing rule above, to the target routing node, which finally sends it to its local neuromorphic network core, completing the data transfer between neuromorphic network cores 210.
Referring to Fig. 20, each routing node 220 includes a routing table with a plurality of storage cells, each cell of the routing table corresponding to one neuron unit of the local neuromorphic network core. Each cell stores the XY coordinate address of the destination neuromorphic network core for the output of the corresponding neuron unit, the target axon index and the delay information. In this embodiment the neuromorphic network core 210 contains 256 neuron units, and the routing table accordingly contains 256 storage cells.
In this embodiment, the system input data, the system output data and the data exchanged between neuromorphic network cores 210 are transmitted between routing nodes 220 in the form of 32-bit route packets. Fig. 21 shows the format of such a route packet: 6 bits of target neuromorphic network core X-direction address, 6 bits of target neuromorphic network core Y-direction address, 4 bits of axon delay, 8 bits of target axon index and 8 bits of data, 32 bits in total. The 4-bit axon delay is effective when the target neuromorphic network core operates in spiking-neural-network mode, and the 8-bit data field carries the neuron output in artificial-neural-network mode.
The packet that a routing node 220 sends to its local neuromorphic network core is taken from the 32-bit route packet above: the 4-bit axon delay, the 8-bit target axon index and the 8-bit data, 20 bits in total. A neuromorphic network core 210 operating in artificial-neural-network mode, on receiving such a packet from its local routing node, uses the 8-bit data as the input of the axon with the corresponding index. A neuromorphic network core 210 operating in spiking-neural-network mode, on receiving such a packet from its local routing node, sets to 1 the input of the axon with the corresponding index after the corresponding delay.
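The field widths above can be sketched as a simple pack/unpack pair; the ordering of the fields within the 32-bit word is an assumption, only the widths and the 20-bit slice handed to the local core come from the description.

```python
# 32-bit route packet: 6-bit X addr | 6-bit Y addr | 4-bit delay | 8-bit axon | 8-bit data.
def pack_route_packet(x_addr, y_addr, delay, axon_index, data):
    return (((x_addr & 0x3F) << 26) | ((y_addr & 0x3F) << 20)
            | ((delay & 0xF) << 16) | ((axon_index & 0xFF) << 8) | (data & 0xFF))

def unpack_for_local_core(packet):
    """Keep only the 4-bit delay, 8-bit axon index and 8-bit data (20 bits)."""
    delay = (packet >> 16) & 0xF
    axon_index = (packet >> 8) & 0xFF
    data = packet & 0xFF
    return delay, axon_index, data
```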
Referring to Fig. 22, the workflow of a routing node 220 comprises:
S1: the routing node 220 receives a neuron computation result from its local neuromorphic network core;
S2: the routing node 220 reads the routing information of the corresponding neuron from the routing table and combines this routing information with the neuron computation result into a route packet;
S3: the routing node 220 judges the sending direction of the route packet and sends the route packet according to the judgement.
In step S1, when the core operates in artificial-neural-network mode, the neuron computation result is an output value; when it operates in spiking-neural-network mode, the neuron computation result is a spike.
In step S3, the route packet contains the relative address x of the target neuromorphic network core in the X direction and its relative address y in the Y direction. The routing node 220 judges the sending direction of the route packet from the values of x and y, as follows: if x > 0, the route packet is sent to the neighbouring routing node in the positive X direction; if x < 0, to the neighbouring routing node in the negative X direction; if y > 0, to the neighbouring routing node in the positive Y direction; if y < 0, to the neighbouring routing node in the negative Y direction; and if x = y = 0, the route packet is delivered directly to the local neuromorphic network core of the routing node 220. When the next routing node receives a route packet from the previous routing node, it corrects the relative address in the packet as follows: if x < 0, the corrected relative address is x' = x + 1; if x > 0, x' = x − 1; if y < 0, y' = y + 1; if y > 0, y' = y − 1. The relative address is corrected once at every routing node 220 it passes through, until x = y = 0.
In this embodiment, the judgement and transmission in the X direction are performed first; only after transmission in the X direction is finished, i.e. x' = 0, are the judgement and transmission in the other direction performed, until the route packet has been delivered to the target neuromorphic network core.
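The per-hop decision just described can be summarised in one small function, assuming the signed relative address (x, y) carried in the packet; direction labels are illustrative.

```python
# Per-hop forwarding decision: resolve X first, shrink the offset by one per hop,
# and deliver locally when the relative address reaches (0, 0).
def forward_one_hop(x, y):
    """Return (direction, corrected x, corrected y) for the next hop."""
    if x > 0:
        return "X+", x - 1, y
    if x < 0:
        return "X-", x + 1, y
    if y > 0:
        return "Y+", x, y - 1
    if y < 0:
        return "Y-", x, y + 1
    return "local", 0, 0     # x == y == 0: hand the packet to the local core
```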
The hybrid system 200 of an artificial neural network and a spiking neural network provided by the fourth embodiment of the invention combines the computation models of the two kinds of neural network. It has both the complex spatio-temporal signal-processing capability of the spiking neural network and the rich and powerful computing capability of the artificial neural network, and can therefore process real-time, multi-modal or complex spatio-temporal signals while guaranteeing computational accuracy.
In addition, those skilled in the art may make other variations within the spirit of the invention; such variations made in accordance with the spirit of the invention shall all be included within the scope claimed by the invention.

Claims (10)

1. A hybrid system of an artificial neural network and an impulsive neural network, characterised by comprising: a plurality of neuromorphic network cores and a plurality of routing nodes in one-to-one correspondence with the plurality of neuromorphic network cores; each pair of a neuromorphic network core and a routing node corresponds to one XY coordinate, and a mutually corresponding pair of a neuromorphic network core and a routing node are referred to as each other's local neuromorphic network core and local routing node; the plurality of neuromorphic network cores are used to perform neural network computation, and the plurality of neuromorphic network cores realize data input and output through their local routing nodes; at least one of the plurality of neuromorphic network cores operates in artificial neural network mode, and at least one operates in impulsive neural networks mode; the plurality of routing nodes constitute a routing network, which undertakes the data input and output of the whole system.
2. The hybrid system of an artificial neural network and an impulsive neural network according to claim 1, characterised in that the neuromorphic network core is a multi-modal neuromorphic network core having two operating modes, artificial neural network mode and impulsive neural networks mode, and switching between the two operating modes is realized by configuring internal parameters.
3. The hybrid system of an artificial neural network and an impulsive neural network according to claim 1, characterised in that the neuromorphic network core is a single-mode neuromorphic network core operating in either artificial neural network mode or impulsive neural networks mode.
4. The hybrid system of an artificial neural network and an impulsive neural network according to claim 1, characterised in that the neuromorphic network core includes n neuron units and n axon inputs, where n is an integer greater than 0.
5. The hybrid system of an artificial neural network and an impulsive neural network according to claim 4, characterised in that each of the plurality of routing nodes includes a routing table with a plurality of storage units, each storage unit in the routing table corresponds to one neuron unit in the local neuromorphic network core, and the storage unit stores the coordinate address of the destination neuromorphic network core of the output data of the corresponding neuron unit, the target axon input sequence number and the delay information.
6. The hybrid system of an artificial neural network and an impulsive neural network according to claim 1, characterised in that the plurality of routing nodes constitute an m×n array, where m and n are integers greater than 0; the row direction of the m×n array is defined as the X direction and the column direction as the Y direction, and each routing node directly transmits data with its adjacent routing nodes in the positive X direction, negative X direction, positive Y direction and negative Y direction.
7. The hybrid system of an artificial neural network and an impulsive neural network according to claim 6, characterised in that system input data, system output data and data between neuromorphic network cores are transmitted between the routing nodes in the form of route data packets; a route data packet comprises the X-direction and Y-direction addresses of the target neuromorphic network core, an axon delay, a target axon input sequence number and data, wherein the axon delay is valid when the target neuromorphic network core operates in impulsive neural networks mode, and the data is valid when the target neuromorphic network core operates in artificial neural network mode.
8. The hybrid system of an artificial neural network and an impulsive neural network according to claim 6, characterised in that the routing rule between the routing nodes is: if the coordinate of the starting routing node is (x0, y0) and the coordinate of the target routing node is (x1, y1), data is first transmitted along the X direction until it reaches the routing node (x1, y0) with the target X coordinate, and is then transmitted along the Y direction until it reaches the target routing node (x1, y1).
9. The hybrid system of an artificial neural network and an impulsive neural network according to claim 8, characterised in that the transmission process of the system input data is: the system input data is first input to any routing node on the outermost edge of the routing network, and is then sent to the target neuromorphic network core through the routing network according to the routing rule; and the transmission process of the system output data is: the calculation result of the neuromorphic network core is first sent to the local routing node, is then sent through the routing network according to the routing rule to any routing node on the outermost edge of the routing network, and is then sent out of the system by that routing node, completing the system output.
10. The hybrid system of an artificial neural network and an impulsive neural network according to claim 8, characterised in that the workflow of the routing node includes:
S1, the routing node receives the neuron calculation result from the local neuromorphic network core;
S2, the routing node reads the routing information of the corresponding neuron from the routing table, and combines the routing information with the neuron calculation result into a route data packet;
S3, the routing node determines the sending direction of the route data packet, and sends the route data packet according to the determination result.
CN201510419325.8A 2015-07-16 2015-07-16 A kind of hybrid system of artificial neural network and impulsive neural networks Active CN105095961B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510419325.8A CN105095961B (en) 2015-07-16 2015-07-16 A kind of hybrid system of artificial neural network and impulsive neural networks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510419325.8A CN105095961B (en) 2015-07-16 2015-07-16 A kind of hybrid system of artificial neural network and impulsive neural networks

Publications (2)

Publication Number Publication Date
CN105095961A CN105095961A (en) 2015-11-25
CN105095961B true CN105095961B (en) 2017-09-29

Family

ID=54576335

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510419325.8A Active CN105095961B (en) 2015-07-16 2015-07-16 A kind of hybrid system of artificial neural network and impulsive neural networks

Country Status (1)

Country Link
CN (1) CN105095961B (en)

Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105719000B (en) * 2016-01-21 2018-02-16 广西师范大学 A kind of neuron hardware unit and the method with this unit simulation impulsive neural networks
US10990872B2 (en) * 2016-03-31 2021-04-27 International Business Machines Corporation Energy-efficient time-multiplexed neurosynaptic core for implementing neural networks spanning power- and area-efficiency
US20170330069A1 (en) * 2016-05-11 2017-11-16 Kneron Inc. Multi-layer artificial neural network and controlling method thereof
CN106201651A (en) * 2016-06-27 2016-12-07 鄞州浙江清华长三角研究院创新中心 The simulator of neuromorphic chip
US11270193B2 (en) * 2016-09-30 2022-03-08 International Business Machines Corporation Scalable stream synaptic supercomputer for extreme throughput neural networks
CN113537481B (en) * 2016-12-30 2024-04-02 上海寒武纪信息科技有限公司 Apparatus and method for performing LSTM neural network operation
US11295204B2 (en) * 2017-01-06 2022-04-05 International Business Machines Corporation Area-efficient, reconfigurable, energy-efficient, speed-efficient neural network substrate
CN106875004B (en) * 2017-01-20 2019-09-10 北京灵汐科技有限公司 Composite mode neuronal messages processing method and system
CN106909969B (en) * 2017-01-25 2020-02-21 清华大学 Neural network information receiving method and system
CN106897768B (en) * 2017-01-25 2020-04-21 清华大学 Neural network information sending method and system
CN106845632B (en) * 2017-01-25 2020-10-16 清华大学 Method and system for converting impulse neural network information into artificial neural network information
US11823030B2 (en) * 2017-01-25 2023-11-21 Tsinghua University Neural network information receiving method, sending method, system, apparatus and readable storage medium
CN109214502B (en) 2017-07-03 2021-02-26 清华大学 Neural network weight discretization method and system
US20190065935A1 (en) * 2017-08-30 2019-02-28 International Business Machines Corporation Computational method for feedback in a hierarchical neural network
CN107578095B (en) * 2017-09-01 2018-08-10 中国科学院计算技术研究所 Neural computing device and processor comprising the computing device
CN108171326B (en) * 2017-12-22 2020-08-04 清华大学 Data processing method, device, chip, equipment and storage medium of neural network
CN108830379B (en) * 2018-05-23 2021-12-17 电子科技大学 Neural morphology processor based on parameter quantification sharing
CN109491956B (en) * 2018-11-09 2021-04-23 北京灵汐科技有限公司 Heterogeneous collaborative computing system
CN109858620B (en) * 2018-12-29 2021-08-20 北京灵汐科技有限公司 Brain-like computing system
CN110059800B (en) * 2019-01-26 2021-09-14 中国科学院计算技术研究所 Pulse neural network conversion method and related conversion chip
CN109816026B (en) * 2019-01-29 2021-09-10 清华大学 Fusion device and method of convolutional neural network and impulse neural network
CN110163016B (en) * 2019-04-29 2021-08-03 清华大学 Hybrid computing system and hybrid computing method
CN110188872B (en) * 2019-06-05 2021-04-13 北京灵汐科技有限公司 Heterogeneous cooperative system and communication method thereof
CN110213165B (en) * 2019-06-05 2021-04-13 北京灵汐科技有限公司 Heterogeneous cooperative system and communication method thereof
CN110378469B (en) * 2019-07-11 2021-06-04 中国人民解放军国防科技大学 SCNN inference device based on asynchronous circuit, PE unit, processor and computer equipment thereof
CN112242963B (en) * 2020-10-14 2022-06-24 广东工业大学 Rapid high-concurrency neural pulse data packet distribution and transmission method and system
CN112686381A (en) * 2020-12-30 2021-04-20 北京灵汐科技有限公司 Neural network model, method, electronic device, and readable medium
CN112862100B (en) * 2021-01-29 2022-02-08 网易有道信息技术(北京)有限公司 Method and apparatus for optimizing neural network model inference
CN113285875B (en) * 2021-05-14 2022-07-29 南京大学 Space route prediction method based on impulse neural network
CN113554162B (en) * 2021-07-23 2022-12-20 上海新氦类脑智能科技有限公司 Axon input extension method, device, equipment and storage medium
CN113723594B (en) * 2021-08-31 2023-12-05 绍兴市北大信息技术科创中心 Pulse neural network target identification method
CN113807511B (en) * 2021-09-24 2023-09-26 北京大学 Impulse neural network multicast router and method

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102457419A (en) * 2010-10-22 2012-05-16 中国移动通信集团广东有限公司 Method and device for optimizing transmission network route based on neural network model

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0811057D0 (en) * 2008-06-17 2008-07-23 Univ Ulster Artificial neural network architecture

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102457419A (en) * 2010-10-22 2012-05-16 中国移动通信集团广东有限公司 Method and device for optimizing transmission network route based on neural network model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Hierarchical encoding of human working memory; Li Guoqi et al.; 2015 IEEE 10th Conference on Industrial Electronics and Applications; 2015-06-15; pp. 866-871 *
Ultra low power of artificial cognitive memory for brain-like computation; Deng Lei et al.; 2014 IEEE International Nanoelectronics Conference; 2014-07-28; pp. 1-4 *

Also Published As

Publication number Publication date
CN105095961A (en) 2015-11-25

Similar Documents

Publication Publication Date Title
CN105095961B (en) A kind of hybrid system of artificial neural network and impulsive neural networks
CN105095967B (en) A kind of multi-modal neuromorphic network core
CN105095965B (en) The mixed communication method of artificial neural network and impulsive neural networks nerve
CN105095966B (en) The hybrid system of artificial neural network and impulsive neural networks
CN108717570A (en) A kind of impulsive neural networks parameter quantification method
CN107909206A (en) A kind of PM2.5 Forecasting Methodologies based on deep structure Recognition with Recurrent Neural Network
CN107423839A (en) A kind of method of the intelligent building microgrid load prediction based on deep learning
CN105589333B (en) Control method is surrounded in multi-agent system grouping
CN108346293A (en) A kind of arithmetic for real-time traffic flow Forecasting Approach for Short-term
Igiri et al. Effect of learning rate on artificial neural network in machine learning
Zong et al. Price forecasting for agricultural products based on BP and RBF Neural Network
CN104915714A (en) Predication method and device based on echo state network (ESN)
Daneshfar et al. Adaptive fuzzy urban traffic flow control using a cooperative multi-agent system based on two stage fuzzy clustering
CN107944076A (en) A kind of deployed with devices scheme acquisition methods and device
Guoqiang et al. Study of RBF neural network based on PSO algorithm in nonlinear system identification
CN105260556B (en) The overhead crane modeling method of hair clip mutation operation RNA genetic algorithm
Abbas et al. The impact of training iterations on ANN applications using BPNN algorithm
Prado et al. FPGA based implementation of a Fuzzy Neural Network modular architecture for embedded systems
Sugunnasil et al. Modelling a neural network using an algebraic method
Suykens et al. Coupled chaotic simulated annealing processes
Ann et al. Calculation of hybrid multi-layered perceptron neural network output using matrix multiplication
CN106570586A (en) Field detection vehicle path planning method for electric energy metering device
Xing et al. An optimization algorithm based on evolution rules on cellular system
Sambatti et al. Self-configured neural network for data assimilation using FPGA for ocean circulation
Kwon et al. Nonlinear mapping of interval vectors by neural networks

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20180212

Address after: 100036 Beijing city Haidian District West Sanhuan Road No. 10 wanghailou B block two layer 200-30

Patentee after: Beijing Ling Xi Technology Co. Ltd.

Address before: 100084 Beijing Beijing 100084-82 mailbox

Patentee before: Tsinghua University