CN116227588A - Large-scale visual cortex neural network simulation method based on virtual synapse thought - Google Patents

Large-scale visual cortex neural network simulation method based on virtual synapse thought

Info

Publication number
CN116227588A
Authority
CN
China
Prior art keywords
synaptic
neuron
virtual
synapse
calculation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310250311.2A
Other languages
Chinese (zh)
Inventor
韩芳
杨豪
王直杰
王青云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Donghua University
Original Assignee
Donghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Donghua University filed Critical Donghua University
Priority to CN202310250311.2A priority Critical patent/CN116227588A/en
Publication of CN116227588A publication Critical patent/CN116227588A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/10 Interfaces, programming languages or software development kits, e.g. for simulating neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/061 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using biological neurons, e.g. biological neurons connected to an integrated circuit
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

A large-scale visual cortex neural network simulation method based on the virtual-synapse idea relates to brain-like computing and to neural network modeling and simulation. A "virtual synapse" synaptic conductance calculation strategy is used; the independence of the calculation of each state variable is reduced, and neuron parameter updates are carried out as multi-thread synchronous operations on the CUDA computing platform. In the "virtual synapse" synaptic conductance calculation strategy, the synaptic inputs received by a post-synaptic neuron are integrated, so that each neuron holds n "virtual synapses". When simulating a large-scale neural network, the invention reduces the huge synaptic-current calculation workload and the independence of the calculation of each state variable, saving memory and reducing the time consumed.

Description

Large-scale visual cortex neural network simulation method based on virtual synapse thought
Technical Field
The invention relates to brain-like computing and to neural network modeling and simulation, and in particular to a large-scale visual cortex neural network simulation method based on the virtual-synapse idea.
Background
The brain is the control center of living beings, comprising on the order of 10^11 neurons and 10^14 synaptic structures; this large number of neurons and synapses forms an extremely complex biological neural network. Vision is one of the most important ways of perceiving information from the outside world. The retina receives light and converts it into electrical signals, which are transmitted layer by layer to the brain regions of the visual cortex, where deeper processing is carried out, finally forming the picture that a person perceives, characterized by neural activity. Humans have considerable knowledge of the response characteristics of neurons in the primary visual cortex (V1). These neurons simultaneously exhibit stimulus selectivity for a range of different parameters, such as the location, size, shape and color of an object.
There are two main categories of research approaches in neuroscience: performing experiments, from microscopic molecules up to macroscopic behavior and from animal brains up to human brain function, with a variety of experimental techniques; and simulating and studying the biological nervous system at different levels with methods such as mathematical analysis, numerical calculation and computer simulation, revealing the mysteries of the brain's nervous system from a computational point of view by studying and exploring the brain's information-processing mechanisms. On the one hand, an important goal of visual neurophysiology is to model these neurons so as to describe how this stimulus selectivity arises, and finally to integrate all of these models into a single theory that predicts the responses of neurons and of populations to arbitrary stimuli. On the other hand, in order to answer the question of how neural activity in the brain gives rise to visual cognition and the behavior related to it, knowledge integration based on the structural and functional relationships of the neural substrate is also one of the main targets of neuroinformatics and of data-driven computational modeling.
However, the biological neural network is a complex nonlinear dynamical system; its complex neuron and synapse dynamics, its intricate network connection topology, and the mutual coupling of massive amounts of network state data pose great challenges to its modeling and computation, in particular with respect to the timeliness and the parallel implementation of large-scale biological neural network simulation algorithms. For example, in large-scale biological neural network simulation the synaptic-current calculation accounts for an important part of the total computational resource consumption, and how to effectively reduce the resources consumed by this huge synaptic-current calculation is an important and difficult problem; how to effectively reduce the independence of the calculation of each state variable of the biological neural network so as to improve simulation efficiency is another important problem.
Disclosure of Invention
The purpose of the invention is to provide a large-scale visual cortex neural network simulation method based on the virtual-synapse idea which, when simulating a large-scale neural network, reduces the huge synaptic-current calculation workload and the independence of each state-variable calculation, thereby saving memory and reducing run time.
To achieve this purpose, the invention provides a virtual-synapse-based large-scale visual cortex network simulation algorithm. To reduce the huge synaptic-current calculation workload, the invention uses a "virtual synapse" synaptic conductance calculation strategy; to reduce the independence of the calculation of each state variable, the invention performs neuron parameter updates as multi-thread synchronous operations on the CUDA computing platform.
In the "virtual synapse" synaptic conductance calculation strategy, the synaptic inputs received by a post-synaptic neuron are integrated, so that each neuron holds n "virtual synapses" (n depends on the number of synapse types used by the network). The calculation of a neuron's total input synaptic current consists of three stages, for which this strategy is optimized:
In the first stage, at the post-synaptic neuron end, the opening of each single synaptic ion gate for the current clock step is calculated from the pre-synaptic neuron state information (firing sequence, synaptic ion channel gate opening sequence). For this stage, since the synaptic weight depends on the pre-synaptic firing sequence and not on the state at the current moment, the strategy discretizes the time course of the synaptic weight and updates the synaptic conductance of the post-synaptic neuron's "virtual synapse", according to the specific synaptic delay, only at the moments the pre-synaptic neuron fires, avoiding a conductance update at every moment.
In the second stage, a single-synapse conductance calculation is performed for each synapse according to the obtained synapse parameters (delay, maximum conductance). For this stage, synapses of the same type share the same time evolution, so the "virtual synapse" integrates same-type synaptic inputs and evolves them in time synchronously, avoiding a synapse-by-synapse computation over all synapses.
In the third stage, the synaptic current is calculated from the synaptic conductance, and all input synaptic currents are accumulated to obtain the neuron's total input synaptic current. Since this calculation depends only on the post-synaptic neuron, only that neuron's virtual synaptic weights need to be accumulated, which avoids repeated calculation of the synaptic conductance coefficients at the post-synaptic neuron end and simplifies the calculation.
Meanwhile, the synaptic conductance calculation is separated from the current calculation of the post-synaptic neuron, which reduces the coupling between the pre-synaptic and post-synaptic neuron states during the synaptic-current calculation and enhances the parallelism of the traditional clock-driven algorithm. The introduction of the virtual-synapse conceptual model therefore reduces the computational resource consumption of the traditional clock-driven algorithm and improves the parallel performance of the algorithm.
Preferably, when implemented in software, the virtual synapse is designed as a circular array: the array length is determined by the maximum synaptic delay time, and a "flag" points to the current position in the array. The array member pointed to by the flag is the virtual synaptic conductance at the current moment; at each time step the neuron calculates its input current from the current virtual synaptic conductance, updates the conductance for the next moment from the current state and the time evolution of the synaptic conductance, then resets that position to zero and moves the flag back by one position. When a pre-synaptic neuron generates an action potential, it has a corrective effect on the synaptic conductance, and the correction is recorded at the position obtained by shifting the array flag position back by the synaptic delay.
To reduce the independence of the calculation of each state variable, the invention performs neuron parameter updates as multi-thread synchronous operations on the CUDA computing platform; the algorithm structure and idea are as follows:
1) A class "Neuron" represents a neuron node in the neural network. Its member variables record the membrane potential of the neuron and the parameters required to update it, the synaptic connections received and sent by the neuron, and the structures used for data recording during the simulation.
2) A class "Synapse" represents a synapse sent by a neuron in the neural network. Its member variables record the base conductance, the delay and the post-synaptic neuron of the synapse.
3) A class "VirtualSyn" represents a virtual synapse received by a neuron in the neural network. Its member variables record the synaptic conductance circular array and the parameters of the conductance time course.
Because, with virtual synapses, the calculation of the input current and the update of the membrane potential depend only on the virtual synapses, the membrane potential of each Neuron is updated synchronously, one neuron per GPU thread. If a neuron generates an action potential, the correction at the corresponding position of the virtual-synapse weight circular array of each post-synaptic neuron is updated, using atomic operations to guarantee the safety of the writes to memory. Finally, thread synchronization ensures that all neuron states have been updated and all synapse calculations have finished before the next time step is executed.
Through the virtual synapses and the CUDA synchronous operation, the method simplifies the two most time-consuming processes in neural network simulation, traversing synapses to calculate the synaptic weights and traversing neurons to update the membrane potentials, thereby shortening the simulation time and providing a new idea for the simulation of large-scale neuron networks with more complex synaptic connection topologies.
Drawings
Fig. 1 is a schematic diagram of a virtual synaptic structure of a post-synaptic neuron j.
Fig. 2 is a schematic diagram of the algorithm for synchronous decay and local correction of the exponential-difference synaptic conductance.
FIG. 3 is a schematic diagram of the data structure of a C++/CUDA software implementation.
FIG. 4 is a schematic diagram of the operational flow of a C++/CUDA software implementation.
Detailed Description
The invention will be further illustrated with reference to specific examples. It is to be understood that these examples are illustrative of the present invention and are not intended to limit the scope of the present invention. Further, it is understood that various changes and modifications may be made by those skilled in the art after reading the teachings of the present invention, and such equivalents are intended to fall within the scope of the claims appended hereto.
The large-scale visual cortex neural network simulation method based on the virtual-synapse idea provided by the invention accelerates the simulation of the neural network from the angles of simplifying the single-synapse weight calculation and the single-neuron current input, and of synchronizing the neuron membrane potential updates. A specific visual cortex neuron network is constructed to perform the simulation.
The membrane potential V of a neuron and its rate of change dV/dt are modeled with the expIF (exponential integrate-and-fire) model, as shown in formula (1); it is computationally efficient and its membrane potential time course is more realistic than that of a simple IAF model:

τ_m dV/dt = -(V - E_L) + Δ_T exp((V - V_T)/Δ_T) - R_m g_exc (V - E_exc) - R_m g_inh (V - E_inh)    (1)

where g_exc and g_inh are the excitatory and inhibitory input synaptic conductances, R_m is the membrane resistance, τ_m is the membrane time constant, E_exc and E_inh are the excitatory and inhibitory reversal potentials, E_L is the resting (leak) potential and Δ_T is the threshold slope factor. When the membrane potential exceeds the threshold V_T, the neuron generates an action potential, the membrane potential is set to the resting value E_L, and the neuron enters a refractory period of duration t_ref, during which the membrane potential remains unchanged. The simulation time step dt is set to 0.01 ms.
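As an illustration of formula (1), the following is a minimal C++ sketch of one explicit-Euler update of the expIF membrane potential. The function name, the parameter bundle and the default parameter values are assumptions made for illustration, not values or code taken from the patent:

#include <cmath>

// Illustrative default values (not from the patent).
struct ExpIFParams {
    double E_L = -70.0, V_T = -50.0, Delta_T = 2.0;  // resting potential, threshold, slope factor (mV)
    double tau_m = 20.0, R_m = 1.0;                  // membrane time constant (ms), membrane resistance
    double E_exc = 0.0, E_inh = -80.0;               // excitatory / inhibitory reversal potentials (mV)
    double t_ref = 2.0, dt = 0.01;                   // refractory period and time step (ms)
};

// One explicit-Euler step of formula (1); returns the new membrane potential,
// sets 'fired' and counts down the remaining refractory time.
double expif_euler_step(double V, double g_exc, double g_inh,
                        const ExpIFParams& p, double& refractory_left, bool& fired) {
    fired = false;
    if (refractory_left > 0.0) {                     // membrane potential held fixed during the refractory period
        refractory_left -= p.dt;
        return V;
    }
    double dVdt = (-(V - p.E_L)
                   + p.Delta_T * std::exp((V - p.V_T) / p.Delta_T)
                   - p.R_m * g_exc * (V - p.E_exc)
                   - p.R_m * g_inh * (V - p.E_inh)) / p.tau_m;
    V += p.dt * dVdt;
    if (V > p.V_T) {                                 // spike: reset to E_L and enter the refractory period
        fired = true;
        V = p.E_L;
        refractory_left = p.t_ref;
    }
    return V;
}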
The synaptic structures considered by the model can be classified, according to their speed and type of action, into fast excitatory, slow excitatory and inhibitory types, corresponding respectively to synapses that use the three different chemical substances AMPA (α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid), NMDA (N-methyl-D-aspartic acid) and GABA (γ-aminobutyric acid) as neurotransmitters in organisms. The evolution of the weights of these synaptic structures over time t is modeled with an exponential-difference model:

g(t) = Σ_σ g_σ Σ_{m ∈ Pre_σ} Σ_{j=1..k} G(t - t_{j,m}),   G(s) = exp(-s/τ_d) - exp(-s/τ_r) for s ≥ 0    (2)

where σ ranges over all possible pre-synaptic neuron types, g_σ is the base weight when the pre-synaptic neuron is of type σ, Pre_σ is the set of σ-type pre-synaptic neurons, k is the number of action potentials already generated by pre-synaptic neuron m, and t_{j,m} is the time of the j-th action potential of pre-synaptic neuron m. G(s) describes the time course of the synaptic conductance after a pre-synaptic neuron produces an action potential at s = 0: the changing part of the synaptic weight at time t after the action potential is modeled by an exponential-difference function, with τ_r the rise time constant and τ_d the decay time constant. These two parameters differ between the different types of synaptic input.
To reduce the huge synaptic-current calculation workload, the present invention uses a "virtual synapse" synaptic conductance calculation strategy: the synaptic inputs received by a post-synaptic neuron are integrated, so that each neuron holds n "virtual synapses" (n depends on the number of synapse types used by the network). Fig. 1 schematically illustrates the structure of a virtual synapse.
The calculation of a neuron's total input synaptic current consists of three stages, for which this strategy is optimized:
In the first stage, at the post-synaptic neuron end, the opening of each single synaptic ion gate for the current clock step is calculated from the pre-synaptic neuron state information (firing sequence, synaptic ion channel gate opening sequence). For this stage, since the synaptic weight depends on the pre-synaptic firing sequence and not on the state at the current moment, the strategy discretizes the time course of the synaptic weight and updates the synaptic conductance of the post-synaptic neuron's "virtual synapse", according to the specific synaptic delay, only at the moments the pre-synaptic neuron fires, avoiding a conductance update at every moment.
In the second stage, a single-synapse conductance calculation is performed for each synapse according to the obtained synapse parameters (delay, maximum conductance). For this stage, synapses of the same type share the same time evolution, so the "virtual synapse" integrates same-type synaptic inputs and evolves them in time synchronously, avoiding a synapse-by-synapse computation over all synapses.
In the third stage, the synaptic current is calculated from the synaptic conductance, and all input synaptic currents are accumulated to obtain the neuron's total input synaptic current. Since this calculation depends only on the post-synaptic neuron, only that neuron's virtual synaptic weights need to be accumulated, which avoids repeated calculation of the synaptic conductance coefficients at the post-synaptic neuron end and simplifies the calculation.
To apply virtual synapses in the exponential-difference synapse model, the time course and the calculation of the synaptic conductances are processed as follows:
1) According to formula (2), for a single synapse the conductance contributed by its pre-synaptic neuron is

G(t) = Σ_k [exp(-(t - t_k)/τ_d) - exp(-(t - t_k)/τ_r)],

where k indexes the action potentials generated by the pre-synaptic neuron and t_k is the time of the k-th action potential. Note that the two exponential sums are independent of each other; denote them

g_d(t) = Σ_k exp(-(t - t_k)/τ_d),   g_r(t) = Σ_k exp(-(t - t_k)/τ_r).

Their instantaneous rates of change are

dg_d/dt = -g_d/τ_d,   dg_r/dt = -g_r/τ_r,

so g_r and g_d both decay exponentially with fixed time constants. When the pre-synaptic neuron generates an action potential that reaches the post-synaptic neuron at the current moment, a correction amount s is added to g_r and to g_d respectively. The conductance of the single synapse is then g(t) = g_d(t) - g_r(t).
2) Following the result of 1), when several synapses with equal τ_r and τ_d are integrated together, the sums of the two exponential components over the corresponding synapses are written Σg_d and Σg_r. They can be decayed synchronously at every moment:

d(Σg_d)/dt = -Σg_d/τ_d,   d(Σg_r)/dt = -Σg_r/τ_r.

At the same time, Σg_d and Σg_r are locally corrected according to the action potentials arriving over the synapses they integrate: if m denotes a pre-synaptic neuron that produces an action potential and s_m is the correction amount of the m-th neuron, the required total correction Σ_m s_m is superimposed simultaneously on Σg_d and Σg_r. At this point the total conductance of this class of synapses is g(t) = Σg_d - Σg_r (a code sketch of this synchronous decay and local correction is given after step 3) below).
3) Based on the result of 2), the synapse-related data structure shown in Fig. 2 is designed. The pre-synaptic neuron records, for each outgoing connection, the number of the post-synaptic neuron, the synaptic delay and the base weight; the post-synaptic neuron records Σg_d and Σg_r for every synapse type, together with an array recording the local correction of the synaptic weight at each moment. The length of this array equals the maximum synaptic delay of the synapses, a pointer indicates the position in the array of the correction that acts at the current moment, the pointer moves back by one position at every simulation step, and after the last position it is redirected to the first, i.e. the correction array can be regarded as a ring structure. Fig. 2 takes as its example all GABA-type synapses received by a post-synaptic neuron.
3-1) After neuron i generates an action potential, it traverses its list of synapse structures and updates, for each post-synaptic neuron, the correction of the corresponding synapse type.
3-2) For an i-to-j connection, the base weight of this synapse, the synaptic delay, and a reference to the GABA-type synapse structure of j are recorded in the synapse structure list of i. After i generates an action potential, the position to update in the correction array of j is found from the current-time position and the synaptic delay t_ji, and the base weight g_ji is added there.
3-3) When the membrane potential of j is updated, Σg_d and Σg_r of each synapse type are first decayed synchronously and the correction at the current-time position (the GABA-type correction n_3 in the figure) is superimposed; the synaptic conductance of this type, g(t) = Σg_d - Σg_r, is calculated and from it the synaptic current. The current-position correction is then cleared and the position pointer is moved back by one. After the delay t_ji, the effect of neuron i's action potential on j is thus reflected through the local correction.
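The synchronous decay and local correction described in steps 1) and 2) can be sketched in C++ as follows; the type and function names (SynTypeState, decay, apply_spike, conductance) and the example time constants are illustrative assumptions rather than the patent's code:

#include <cmath>

// Aggregated exponential-difference conductance of one synapse type (sketch).
struct SynTypeState {
    double sum_g_d = 0.0;   // decaying component Σg_d
    double sum_g_r = 0.0;   // rising component   Σg_r
    double tau_d = 5.0;     // decay time constant (ms), illustrative value
    double tau_r = 1.0;     // rise time constant (ms), illustrative value
};

// Synchronous decay of both sums over one time step dt (exact exponential update).
void decay(SynTypeState& s, double dt) {
    s.sum_g_d *= std::exp(-dt / s.tau_d);
    s.sum_g_r *= std::exp(-dt / s.tau_r);
}

// Local correction: a pre-synaptic spike with correction amount s_m (its base weight)
// is added to both sums at the moment it takes effect.
void apply_spike(SynTypeState& s, double s_m) {
    s.sum_g_d += s_m;
    s.sum_g_r += s_m;
}

// Total conductance of this synapse type: g(t) = Σg_d - Σg_r.
double conductance(const SynTypeState& s) {
    return s.sum_g_d - s.sum_g_r;
}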
According to the invention, the synaptic conductance calculation is separated from the current calculation of the post-synaptic neuron, which reduces the coupling between the pre-synaptic and post-synaptic neuron states during the synaptic-current calculation and enhances the parallelism of the traditional clock-driven algorithm; the introduction of the virtual-synapse conceptual model therefore reduces the computational resource consumption of the traditional clock-driven algorithm and improves the parallel performance of the algorithm.
The virtual synapse is designed as a circular array: the array length is determined by the maximum synaptic delay time, and a "flag" points to the current position in the array. The array member pointed to by the flag is the virtual synaptic conductance at the current moment; at each time step the neuron calculates its input current from the current virtual synaptic conductance, updates the conductance for the next moment from the current state and the time evolution of the synaptic conductance, then resets that position to zero and moves the flag back by one position. When a pre-synaptic neuron generates an action potential, it has a corrective effect on the synaptic conductance, and the correction is recorded at the position obtained by shifting the array flag position back by the synaptic delay.
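A minimal C++ sketch of this circular correction array follows; the class name DelayRing and its method names are illustrative assumptions under the scheme described above (length equal to the maximum delay in time steps, a flag marking the current slot, spikes written delay steps ahead of the flag):

#include <cstddef>
#include <vector>

// Ring buffer of delayed conductance corrections for one virtual synapse (sketch).
class DelayRing {
public:
    explicit DelayRing(std::size_t max_delay_steps)
        : buf_(max_delay_steps, 0.0), flag_(0) {}

    // Record a spike that takes effect 'delay_steps' time steps from now, with base weight w.
    void add_spike(std::size_t delay_steps, double w) {
        buf_[(flag_ + delay_steps) % buf_.size()] += w;
    }

    // Read the correction acting at the current time step, clear the slot,
    // and move the flag one position forward.
    double pop_current() {
        double c = buf_[flag_];
        buf_[flag_] = 0.0;
        flag_ = (flag_ + 1) % buf_.size();
        return c;
    }

private:
    std::vector<double> buf_;
    std::size_t flag_;
};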
To reduce the independence of the calculation of each state variable, the invention performs neuron parameter updates as multi-thread synchronous operations on the CUDA computing platform; the algorithm structure and idea are as follows:
1) A class "Neuron" represents a neuron node in the neural network. Its member variables record the membrane potential of the neuron and the parameters required to update it, the synaptic connections received and sent by the neuron, and the structures used for data recording during the simulation.
2) A class "Synapse" represents a synapse sent by a neuron in the neural network. Its member variables record the base conductance, the delay and the post-synaptic neuron of the synapse.
3) A class "VirtualSyn" represents a virtual synapse received by a neuron in the neural network. Its member variables record the synaptic conductance circular array and the parameters of the conductance time course.
Because, with virtual synapses, the calculation of the input current and the update of the membrane potential depend only on the virtual synapses, the membrane potential of each Neuron is updated synchronously, one neuron per GPU thread. If a neuron generates an action potential, the correction at the corresponding position of the virtual-synapse weight circular array of each post-synaptic neuron is updated, using atomic operations to guarantee the safety of the writes to memory. Finally, thread synchronization ensures that all neuron states have been updated and all synapse calculations have finished before the next time step is executed.
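A minimal CUDA sketch of this one-thread-per-neuron update is given below. The kernel name, the flattened arrays and the hard-coded constants are illustrative assumptions (this is not the patent's published code); it only demonstrates the scheme described above: each thread updates one neuron, and a firing neuron scatters its base weights into the post-synaptic correction arrays with atomicAdd.

// One thread per neuron, per time step (sketch with illustrative hard-coded parameters).
// 'corrections' is the flattened storage of all post-synaptic correction ring buffers;
// syn_target_slot[s] is the index of the slot that synapse s must correct
// (current flag position of the target ring plus the synaptic delay).
__global__ void update_neurons(int n_neurons, double dt,
                               double* V, const double* g_exc, const double* g_inh,
                               const int* syn_offset, const int* syn_count,
                               const int* syn_target_slot, const double* syn_weight,
                               double* corrections)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_neurons) return;

    // expIF membrane update as in formula (1); the constants are illustrative defaults.
    const double E_L = -70.0, V_T = -50.0, Delta_T = 2.0, tau_m = 20.0, R_m = 1.0;
    const double E_exc = 0.0, E_inh = -80.0;
    double v = V[i];
    double dv = (-(v - E_L) + Delta_T * exp((v - V_T) / Delta_T)
                 - R_m * g_exc[i] * (v - E_exc)
                 - R_m * g_inh[i] * (v - E_inh)) / tau_m;
    v += dt * dv;

    bool fired = (v > V_T);
    if (fired) v = E_L;                    // reset (refractory handling omitted in this sketch)
    V[i] = v;

    // A firing neuron writes its base weights into the correction arrays of its
    // post-synaptic neurons; atomicAdd keeps the concurrent writes safe
    // (atomicAdd on double requires compute capability 6.0 or newer).
    if (fired) {
        for (int s = syn_offset[i]; s < syn_offset[i] + syn_count[i]; ++s) {
            atomicAdd(&corrections[syn_target_slot[s]], syn_weight[s]);
        }
    }
}
// Host side: update_neurons<<<grid, block>>>(...) is followed by cudaDeviceSynchronize(),
// so that all neuron states and corrections are complete before the next time step.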
The data structures of the C++/CUDA software implementation are shown in FIG. 3. Neurons are recorded with the Neuron class, the outgoing connections of a pre-synaptic neuron are recorded with the Synapse class, and the synapse types received by a post-synaptic neuron are recorded with the VirtualSyn class; a skeleton of these classes is sketched after the list below.
1) The Neuron class applies to all neuron types in the model; its member variables store the parameters required by the expIF neuron model, so the parameters may differ between neurons. The class holds an object array of the Synapse class, whose length is the total number of synapses for which this neuron is the pre-synaptic neuron; it holds an object array of the VirtualSyn class, whose length is determined by the partitioning strategy for the synapse types, e.g. into AMPA, NMDA and GABA types, or into LGN input, intra-layer excitatory input, inter-layer excitatory input, intra-layer inhibitory input, and so on; and it holds a data storage structure that records the membrane potential, the different types of synaptic conductance and the firing times during the simulation. The class methods include initialization, data storage, memory clearing and membrane potential update.
2) The member variables of the Synapse class contain the delay and the base weight of this synapse, together with a pointer to the object of the corresponding type in the VirtualSyn array of the post-synaptic Neuron object. The class methods include initialization, memory clearing, and updating the correction of the VirtualSyn object pointed to.
3) The member variables of the VirtualSyn class include the decaying and rising components Σg_d and Σg_r of the total synaptic conductance of this type and the corresponding time constants; note that several groups of rising and decaying components are allowed to exist so as to model the superposition of multiple time courses, their respective proportions being specified when the total conductance is calculated (e.g. intra-cortical excitatory input is defined as 0.7 AMPA + 0.3 NMDA). They also include the conductance correction array, whose length is determined by the maximum delay of the synapse and which records the effect of pre-synaptic neuron firing at each moment on the conductance, and a flag pointing to the current position of the correction array; at every simulation step the correction at the current position is added to the conductance components and the flag is then moved back by one position. The class methods include initialization and memory clearing; the synchronous decay and local correction of the synaptic conductance, called each time Neuron updates the membrane potential; the calculation of the synaptic conductance, also called each time Neuron updates the membrane potential; and the update of the local correction, called by Synapse when the pre-synaptic neuron fires.
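The three classes described above can be summarized by the following C++ skeleton; the member names are illustrative assumptions that match the description, not the exact declarations of the patent's implementation:

#include <vector>

struct VirtualSyn {                        // one received synapse type of a neuron
    std::vector<double> sum_g_d, sum_g_r;  // decaying / rising conductance components
    std::vector<double> tau_d, tau_r;      // time constants (several groups allowed, e.g. 0.7 AMPA + 0.3 NMDA)
    std::vector<double> corrections;       // ring buffer, length = maximum delay in time steps
    int flag = 0;                          // current position in the correction array
};

struct Synapse {                           // one outgoing connection of a neuron
    double      base_weight;
    int         delay_steps;
    VirtualSyn* target;                    // the matching virtual synapse of the post-synaptic neuron
};

struct Neuron {                            // one neuron node
    double V;                                      // membrane potential
    double E_L, V_T, Delta_T, tau_m, R_m,          // expIF parameters
           E_exc, E_inh, t_ref;
    std::vector<Synapse>    outgoing;              // synapses sent by this neuron
    std::vector<VirtualSyn> received;              // one entry per received synapse type
    std::vector<double>     V_trace, spike_times;  // data recorded during the simulation
};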
The operational flow of the C++/CUDA software implementation is shown in FIG. 4.
1) Read the neuron structure of the network from the mat file, allocate space for the Neuron object array and initialize it;
2) Initialize the VirtualSyn object array according to the neuron type, and initialize the correction array according to the number of time steps occupied by the maximum delay;
3) Allocate space for the Synapse object array according to the number of synapses sent by each neuron, read the synapse structure from the mat file and initialize it;
4) Specify the other parameters required by the simulation, such as the duration, the simulation step length, the number of threads per thread block, and so on;
5) Read the LGN neuron input currents from the mat file and run the simulation;
6) Store the data to the mat file and clear the memory.
During the simulation run, the neuron membrane potentials are updated with the Euler method; the flow of each time step is as follows (a code sketch follows the list):
5-1) The VirtualSyn synaptic conductance components are decayed synchronously and locally corrected; the correction amount at the current moment is then set to 0 and the position flag is moved back by one;
5-2) VirtualSyn calculates the synaptic conductance of its type;
5-3) LGN neurons obtain their input current at the current moment from the mat-file data, and cortical neurons obtain their input current from the membrane potential and the synaptic conductances;
5-4) The refractory-period flag is checked, and the membrane potential is updated if the neuron is not in its refractory period; if the membrane potential exceeds the threshold, it is reset, the refractory-period flag is set, the action potential event is recorded, the Synapse object array is traversed and the VirtualSyn corrections of the post-synaptic neurons are updated;
5-5) The membrane potential and the various synaptic conductances are recorded.
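Combining the sketches above (expif_euler_step, SynTypeState with decay/apply_spike/conductance, and DelayRing are the assumed helpers introduced earlier, not the patent's code), one time step of flow 5-1) to 5-5) for a single neuron looks roughly as follows; in the real implementation this body is executed once per GPU thread:

// One simulated time step for one neuron, following flow 5-1) ... 5-5) (serial sketch).
struct NeuronSketch {
    double V = -70.0;
    double refractory_left = 0.0;
    ExpIFParams p;                           // expIF parameters from the earlier sketch
    SynTypeState exc, inh;                   // one excitatory and one inhibitory type, for brevity
    DelayRing exc_ring{100}, inh_ring{100};  // delayed corrections, length = maximum delay in steps
};

void step(NeuronSketch& n, bool& fired) {
    // 5-1) synchronous decay, then apply the correction stored in the current slot
    decay(n.exc, n.p.dt);  apply_spike(n.exc, n.exc_ring.pop_current());
    decay(n.inh, n.p.dt);  apply_spike(n.inh, n.inh_ring.pop_current());
    // 5-2) synaptic conductance of each type
    double g_exc = conductance(n.exc);
    double g_inh = conductance(n.inh);
    // 5-3) / 5-4) input current and expIF membrane update (refractory handling inside)
    n.V = expif_euler_step(n.V, g_exc, g_inh, n.p, n.refractory_left, fired);
    // 5-4) if the neuron fired, the caller scatters its base weights into the DelayRing
    //      of every post-synaptic neuron (add_spike), atomically when run on the GPU.
    // 5-5) record n.V, g_exc and g_inh as required.
}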
The invention reduces the time consumed by large-scale cortical network simulation. The invention simulates a primary visual cortex model with a two-layer structure comprising 15000 neurons and more than ten million synaptic structures in total; the simulation experiment lasts 2000 ms with a time step of 0.01 ms, and the total running time of the C++/CUDA program is within 20 minutes, of which reading the network structure takes about 8 minutes and the simulation process itself, using multi-thread operation, takes less than 12 minutes. This means that the invention has high simulation efficiency. Meanwhile, the invention reproduces phenomena that conform to physiological experiments, which shows that the simulation of the neural network is reliable.
Complex dynamical characteristics, intricate network topologies, and the correlation and exchange of massive amounts of state data make large-scale biological neural network simulation difficult. The invention provides an algorithm for generating and storing a visual cortex network, provides a conversion strategy from actual visual stimuli to neural electrical signals, and provides a simulation algorithm for large-scale neural networks with multi-thread synchronous operation. In the simulation of the given neural network, the algorithm shows high operational efficiency and obtains results that accord with biological reality, providing a useful method for the realistic simulation of large-scale biological neural networks.

Claims (5)

1. A large-scale visual cortex neural network simulation method based on the virtual-synapse idea, characterized in that: a "virtual synapse" synaptic conductance calculation strategy is used; the independence of the calculation of each state variable is reduced, and neuron parameter updates are carried out as multi-thread synchronous operations on the CUDA computing platform.
2. The large-scale visual cortex neural network simulation method according to claim 1, characterized in that: in the "virtual synapse" synaptic conductance calculation strategy, the synaptic inputs received by a post-synaptic neuron are integrated, so that each neuron holds n "virtual synapses".
3. The large-scale visual cortex neural network simulation method according to claim 2, characterized in that: the calculation of a neuron's total input synaptic current comprises three stages, namely:
in the first stage, at the post-synaptic neuron end, the opening of each single synaptic ion gate for the current clock step is calculated from the pre-synaptic neuron state information; the time course of the synaptic weight is discretized, and the synaptic conductance of the post-synaptic neuron's virtual synapse is updated according to the specific synaptic delay only at the moments the pre-synaptic neuron fires, avoiding a conductance update at every moment;
in the second stage, a single-synapse conductance calculation is performed for each synapse according to the acquired synapse parameters; synaptic inputs of the same type are integrated and evolved in time synchronously, avoiding a synapse-by-synapse calculation over all synapses;
in the third stage, the synaptic current is calculated from the synaptic conductance, and all input synaptic currents are accumulated to obtain the neuron's total input synaptic current; only the neuron's own virtual synaptic weights are traversed and accumulated, which avoids repeated calculation of the synaptic conductance coefficients at the post-synaptic neuron end and simplifies the calculation.
4. The large-scale visual cortex neural network simulation method according to claim 3, characterized in that: the virtual synapse is designed as a circular array structure; the array length is determined by the maximum synaptic delay, and a flag points to the current position of the array; the array member pointed to by the flag is the virtual synaptic conductance at the current moment; at each time step the neuron calculates the input current from the virtual synaptic conductance, updates the conductance for the next moment from the current state and the time evolution of the synaptic conductance, then resets that position and moves the flag back by one position; when the pre-synaptic neuron generates an action potential, it has a corrective effect on the synaptic conductance, and the correction is recorded at the position obtained by shifting the array flag position back by the synaptic delay.
5. The large-scale visual cortex neural network simulation method according to claim 4, characterized in that: neuron parameter updates are carried out as multi-thread synchronous operations on the CUDA (Compute Unified Device Architecture) computing platform, specifically:
1) a class "Neuron" represents a neuron node in the neural network; its member variables record the membrane potential of the neuron and the parameters required to update it, the synaptic connections received and sent by the neuron, and the structures used for data recording during the simulation;
2) a class "Synapse" represents a synapse sent by a neuron in the neural network; its member variables record the base conductance, the delay and the post-synaptic neuron of the synapse;
3) a class "VirtualSyn" represents a virtual synapse received by a neuron in the neural network; its member variables record the synaptic conductance circular array and the parameters of the conductance time course;
with virtual synapses, the calculation of the input current and the update of the membrane potential depend only on the virtual synapses, so the membrane potential of each Neuron is updated synchronously, one neuron per GPU thread; if a neuron generates an action potential, the correction at the specific position of the virtual-synapse weight circular array of the post-synaptic neuron is updated; finally, thread synchronization ensures that all neuron states have been updated and all synapse calculations have finished before the next time step is executed.
CN202310250311.2A 2023-03-16 2023-03-16 Large-scale visual cortex neural network simulation method based on virtual synapse thought Pending CN116227588A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310250311.2A CN116227588A (en) 2023-03-16 2023-03-16 Large-scale visual cortex neural network simulation method based on virtual synapse thought

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310250311.2A CN116227588A (en) 2023-03-16 2023-03-16 Large-scale visual cortex neural network simulation method based on virtual synapse thought

Publications (1)

Publication Number Publication Date
CN116227588A true CN116227588A (en) 2023-06-06

Family

ID=86576754

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310250311.2A Pending CN116227588A (en) 2023-03-16 2023-03-16 Large-scale visual cortex neural network simulation method based on virtual synapse thought

Country Status (1)

Country Link
CN (1) CN116227588A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination