WO2021000890A1 - Spiking neural network computing system and method for brain-like intelligence and cognitive computing - Google Patents
Spiking neural network computing system and method for brain-like intelligence and cognitive computing
- Publication number
- WO2021000890A1 (PCT/CN2020/099714)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- container
- model
- brain
- neuron
- intelligence
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/049—Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/088—Non-supervised learning, e.g. competitive learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/10—Interfaces, programming languages or software development kits, e.g. for simulating neural networks
- G06N3/105—Shells for specifying net layout
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
Definitions
- This application relates to the technical field of brain-like spiking neural network simulation and high-performance computing, and in particular to a spiking neural network computing system and method for brain-like intelligence and cognitive computing.
- Brain-like intelligence and cognitive computing are based on spiking neural networks, drawing on the rich working mechanisms of the biological brain's neurotransmitters, neuromodulators, receptors, electrical synapses, chemical synapses, dendrites, neurons, and glial cells for computational modeling.
- The neural circuits, nerve nuclei, brain regions, and whole-brain models so constructed can simulate many cognitive mechanisms and behaviors of the biological brain, such as memory and learning, simulated emotion, navigation and planning, motion control, brain-like vision and hearing, attention, and decision-making, providing a broader route for the development of artificial intelligence systems.
- Existing brain-like spiking neural network computing frameworks have the following problems:
- They often ignore the modeling of electrical synapses, the working mechanisms that support the simulation of neuromodulation, and the simulation of dendrites.
- They cannot support mechanisms in which multiple synapses exchange information and perform logical operations among themselves, nor topologies in which synapses connect directly to other synapses.
- One of the objectives of the embodiments of this application is to provide a spiking neural network computing system and method for brain-like intelligence and cognitive computing.
- The system provides a unified and flexible modeling method and a multi-level tree-structured network description method that supports full-scale modeling of the biological brain and nervous system and flexible network topology, thereby organically unifying modeling scale and modeling richness, fusing all models at various scales into a unified neural network for operation, and supporting the representation and storage of data in the form of tensors.
- This also enables the system to support spiking neural networks as well as traditional neural networks (deep learning) and other algorithms that use tensors as the main data representation.
- A spiking neural network computing system for brain-like intelligence and cognitive computing includes: a model description module, parameter database, configuration description module, configuration manager, rule manager, data manager, network builder, network manager, operation manager, scheduler, log manager, operation monitoring module, and graphic display module;
- the model description module is used to provide an interface for users to design and describe the network model
- the parameter database is used to store various parameter data of the network, including initialization parameters and runtime parameters;
- the parameter database can be a binary file or a text file;
- the text file can be in a CSV file format or a file format in which data is separated by other characters;
- the configuration description module is used to describe the configuration parameters of the current network operating environment and the conditions for initiating the pruning and regeneration of synapses and neurons;
- the configuration manager is used to read the above configuration description module to obtain system configuration parameters
- the network model object is constructed by the network builder and resides in the memory, and is used to characterize the entire network, including all containers, topological relationships and parameter data, and is the object scheduled to run by the scheduler;
- the rule manager is used to read the rules declared by the user in the model description module, and interpret these rules and arbitrate conflicts between the rules when the scheduler schedules the operation of the network model object;
- the data manager includes one or more decoders and encoders, which are used to read and parse the parameter database, convert data formats, and serialize data; the user can add custom decoders and encoders to the data manager to read and write files in a custom format;
- the network builder is used to read the model description module, analyze the topology of the network, read the data file through the data manager, and build the network model object in the memory;
- the network manager is used to construct, traverse, access and update network model objects
- the operation manager is used to manage all the operations that can run on the system; all operations together constitute the operation library; the user can specify the operations to be performed for each container in the model description module, and the scheduler schedules and executes the corresponding operations at runtime;
- the scheduler is used for allocating hardware resources and scheduling calculation processes to optimize calculation efficiency
- the log manager is used to record logs generated when the system is running, and remind users of the working status and abnormalities of the system, so as to facilitate debugging and maintenance;
- the operation monitoring module is used to receive and respond to user input and manage the operation status of the entire system, including default status, network construction status, network operation status and network suspension status;
- the graphical display module is used to read network data and display it to the user to facilitate development, monitoring and debugging.
- the model description module includes a network description unit, a convergence description unit, and a circulation description unit, which together describe the various components and topological structure of the entire network; they are preferably text files using nested syntax, in XML or JSON file format.
- the model description module adopts a network description mode of a multi-level tree structure simulating a biological brain nervous system organization mode
- the convergence description unit supports organizing nodes in the network into preset layer groups and modules, and is used to characterize the multi-level organization of neurons and related glial cells in the biological brain (e.g., nucleus -> brain region -> whole brain);
- the circulation description unit supports grouping and hierarchically arranging the edges in the network according to topological (connection-relationship) similarity, and is used to characterize the various organization methods of neural synapses in the biological brain (such as dendrites, projections in neural pathways, nerve fiber bundles, etc.) and the organization of related glial cell protrusions.
- the network description unit is used to describe containers such as Network and Param, describe the parameters and operating rules of the entire network, and point to one or more aggregation description units and circulation description units through links;
- the convergence description unit is used to describe containers such as Confluence, Module, Layer, Node, NodeParam, and Param, and is used to describe the relationship between the modules and layer groups of nodes in the network, the parameters of each container, and runtime rules and commands;
- the circulation description unit is used to describe containers such as Flow, Channel, Link, Edge, EdgeParam, and Param, and is used to describe the connection (topology) relationship of the edges in the network, the parameters of each container, and runtime rules and commands.
- the Network represents a network container, which is located at the first level (top level) of the tree structure and is used to characterize models of the whole brain and behavioral scales.
- Each Network can accommodate one or more Confluence and Flow;
- the Confluence represents the convergence container, which is located at the second level of the tree structure and can be used to characterize the model of the brain area.
- Each Confluence can contain one or more Modules;
- the Module represents a module container, which is located at the third level of the tree structure, and can be used to characterize the model of the nucleus scale.
- Each Module can contain one or more Layers;
- the Layer represents a layer group container, which is located at the fourth level of the tree structure and can be used to represent a model of the neural loop scale, and each layer can contain one or more nodes;
- the Node represents a node container, which is located at the fifth level of the tree structure. It can be used to characterize a neuron-scale or glial cell-scale model, and can also be used to characterize a group of neurons or glial cells. Each Node can contain one or more NodeParam;
- the Node can also be used to characterize input and output nodes, which interface with the system's I/O devices, such as camera input, audio input, sensor input, and control output; I/O device data is read, written, and dynamically updated through each NodeParam of the Node;
- the NodeParam represents the node parameter container, which is located at the sixth level (the lowest level) of the tree structure. It can be used to characterize models at the molecular scale, receptor scale, or neurotransmitter/neuromodulator scale, and can also be used to hold the parameter tensor of a group of neuron or glial cell models;
- the Flow represents a circulation container, which is located at the second level of the tree structure and can be used to characterize models at the scale of nerve fiber bundles connecting brain regions.
- Each Flow can contain one or more Channels;
- the Channel represents a channel container, which is located at the third level of the tree structure and can be used to characterize the model of the conduction bundle composed of axons connecting nerve nuclei.
- Each Channel can contain one or more Links;
- the Link represents a connection container, which is located at the fourth level of the tree structure and can be used to characterize the model of the neural pathway composed of axons in the neural circuit, and each Link can contain one or more Edges;
- the Edge represents the edge container, which is located at the fifth level of the tree structure. It can be used to characterize a model at the dendritic or synaptic scale, and can also be used to characterize a group of synapses or glial cell protrusions. Each Edge can contain one or more EdgeParams;
- the EdgeParam represents the edge parameter container, which is located at the sixth level (the lowest level) of the tree structure. It can be used to characterize models at the molecular scale, neurotransmitter/neuromodulator scale, or receptor scale, and can also be used to hold the parameter tensor of a group of synapse or glial cell protrusion models;
- the Param represents a general parameter container and is an auxiliary container; according to modeling needs, each container at any of the above levels may additionally have one or more Params to hold parameter data in the form of tensors, or may have none;
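The six-level container hierarchy above can be sketched as follows. This is an illustrative reconstruction, not code from the patent; all class and instance names are invented for the example.

```python
# Hypothetical sketch of the six-level container tree described above.
class Container:
    def __init__(self, number, name):
        self.number = number      # number and name are used for indexing
        self.name = name
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

# Node-side hierarchy: Network -> Confluence -> Module -> Layer -> Node -> NodeParam
net = Container(0, "whole_brain")                      # Network (level 1)
conf = net.add(Container(0, "cortex"))                 # Confluence (level 2)
mod = conf.add(Container(0, "v1_nucleus"))             # Module (level 3)
layer = mod.add(Container(0, "layer4"))                # Layer (level 4)
node = layer.add(Container(0, "pyramidal_cells"))      # Node (level 5)
node.add(Container(0, "membrane_potential"))           # NodeParam (level 6)

# Edge-side hierarchy: Network -> Flow -> Channel -> Link -> Edge -> EdgeParam
flow = net.add(Container(1, "thalamocortical_tract"))  # Flow (level 2)
chan = flow.add(Container(0, "axon_bundle"))           # Channel (level 3)
link = chan.add(Container(0, "pathway"))               # Link (level 4)
edge = link.add(Container(0, "synapses"))              # Edge (level 5)
edge.add(Container(0, "weights"))                      # EdgeParam (level 6)

def depth(c):
    return 1 if not c.children else 1 + max(depth(ch) for ch in c.children)

print(depth(net))  # 6 levels, Network down to NodeParam/EdgeParam
```

The two sub-trees share a root Network, mirroring how the patent's convergence and circulation description units are both linked from the network description unit.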
- Each of the above containers has a number and name, which are used for indexing in a multi-level tree structure
- Each of the above containers has one or more control blocks (Control Block) for storing statistics and control information, including the network's traversal order and rules, the number of traversal operations performed, whether the data is currently stored in main memory or coprocessor memory, and the frequency of reads and writes to coprocessor memory and hard disk; these are managed and updated by the rule manager and scheduler.
- the firing characteristics of the neuron model can be constructed as tonic firing, fast spiking, burst firing, spiking, or phasic firing, etc.;
- the response of the neuron model to the upstream input signal can be constructed as different neural adaptability or sensitivity curves
- the neuron model can be constructed as an excitatory, inhibitory, modulated or neutral model for downstream action mechanisms;
- the neuron model can be constructed as a spiking neuron model, or as a traditional neuron model;
- the glial cell model can be constructed as astrocyte, oligodendrocyte, microglia, Schwann cell and satellite cell model.
- the neurotransmitter or neuromodulator model can be constructed as an excitatory, inhibitory or modulated model
- the receptor model can be constructed as an ionotropic or metabotropic model
- the response effect of the receptor model to neurotransmitter or neuromodulator can be constructed as an excitatory, inhibitory, modulated or neutral model.
- the dendritic scale model can be constructed as an apical dendritic model, a basal dendritic model or a spine model;
- the synapse model can be constructed as excitatory, inhibitory, modulating or neutral.
- the molecular-scale model can be constructed as an intracellular molecular model, a cell-membrane molecular model, or an intercellular-space molecular model.
- the NodeParam, EdgeParam, and Param can hold parameters in the form of tensors (i.e., multi-dimensional matrices); the tensor may be one-dimensional or multi-dimensional, with its specific arrangement and usage specified by the user;
- the tensor can be configured in 4 dimensions, with the position of each parameter represented by coordinates (x, y, z, t); the three dimensions x, y, and z correspond to the spatial arrangement of each neural tissue model (such as neurons or synapses) represented in the parent container, while t represents the time dimension, which can buffer and delay timing information and can be used to simulate the long-term (delayed) action of neuromodulators on neurons and synapses;
- the parameters in the tensor can be shared by all or some of the neural tissue models (such as neurons or synapses) in the parent container, and can be used to simulate the large-area effect of neuromodulators on all neural tissues in a target region.
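The (x, y, z, t) parameter tensor described above can be illustrated with a small NumPy sketch. The sizes, the circular use of the t axis, and the "dopamine" name are assumptions for the example, not details from the patent.

```python
import numpy as np

# Illustrative (x, y, z, t) parameter tensor: (x, y, z) is the spatial
# position of each modeled neuron; t is treated as a circular buffer that
# holds delayed values of the parameter over time.
X, Y, Z, T = 4, 4, 2, 10           # hypothetical sizes; T = delay buffer depth
potential = np.zeros((X, Y, Z, T))

t = 0                               # current write slot on the time axis
potential[1, 2, 0, t] = -65.0       # write one neuron's current value

delay = 3                           # read a value written `delay` steps ago
delayed = potential[1, 2, 0, (t - delay) % T]

# A parameter shared by all neurons in the container can be stored once and
# broadcast, simulating a neuromodulator acting on a whole target region:
dopamine_level = np.full((X, Y, Z, 1), 0.2)   # broadcast along the t axis
modulated = potential * (1.0 + dopamine_level)
print(modulated.shape)  # (4, 4, 2, 10)
```

Broadcasting the shared tensor over the whole parent container is one way to realize the "large-area effect" mentioned above without duplicating the parameter per neuron.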
- the Flow and all its child containers may each correspond to one or more upstream containers and one or more downstream containers, which are indexed and accessed through the numbers or names of those upstream and downstream containers;
- Both the upstream container and the downstream container can be containers of any level, and the two can be the same or different containers;
- the Flow and all its child containers can form an information flow path with their upstream and downstream containers, characterizing the (one-way or two-way) flow and processing of information between two information sources (i.e., the upstream and downstream containers); any topological structure of information flow can thereby be formed among multiple containers in the network.
- the information flow and processing process can be used to realize various biological brain nerve mechanisms such as nerve impulse conduction between neurons through synapses, exchange of information between synapses and synapses, and neuron and synaptic plasticity.
- the arbitrary topological structure of information flow can be used to realize any neural circuit connection mode in the cranial nervous system, including feedback connections in which a neuron connects back to itself, connections among neurons of the same group (layer), connections between neurons of different groups (layers) (sequential/feedforward, cross-layer, feedback, etc.), and direct connections between synapses, while allowing unlimited loop computation over feedback connections.
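The arbitrary-topology idea above can be sketched by letting each flow reference its upstream and downstream containers by name, including a self-recurrent flow. The container names and update rule are invented for illustration.

```python
# Hedged sketch: flows reference upstream/downstream node groups by name,
# forming feedforward, feedback, and self-recurrent connections at once.
nodes = {"layerA": [0.0, 0.0], "layerB": [0.0]}   # node groups -> state
flows = [
    {"up": "layerA", "down": "layerB"},   # feedforward
    {"up": "layerB", "down": "layerA"},   # feedback
    {"up": "layerA", "down": "layerA"},   # self-recurrent (node -> itself)
]

def step():
    # one traversal: each flow carries the summed upstream state downstream
    deltas = {k: 0.0 for k in nodes}
    for f in flows:
        deltas[f["down"]] += sum(nodes[f["up"]])
    for k, d in deltas.items():
        nodes[k] = [v + d for v in nodes[k]]

nodes["layerA"][0] = 1.0
step()
print(nodes)
```

Because flows are plain upstream/downstream references, loops need no special casing; repeated calls to `step()` simply keep circulating information, which is the "unlimited loop computation" the text refers to.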
- the system supports a modeling design method that decomposes models of any level (or scale) into two parts: data and operation,
- the data can be accommodated by NodeParam, EdgeParam or Param, and stored by the corresponding parameter database;
- the operations are executable programs (such as functions and classes containing functions) used to access and update the aforementioned data.
- the operations can run on general-purpose CPUs, ARM, DSP, GPU, or other processors, giving the system a degree of cross-hardware-platform versatility.
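The data/operation decomposition described above can be sketched as a parameter tensor (the "data" half, as a NodeParam would hold it) plus an executable function (the "operation" half). The leaky integrate-and-fire update and all constants are illustrative assumptions, not the patent's model.

```python
import numpy as np

# Data: a parameter tensor, as a NodeParam-like container would hold it.
v = np.full(8, -70.0)              # membrane potentials of 8 neurons
threshold, reset, leak = -55.0, -70.0, 0.9

# Operation: an executable program that accesses and updates the data.
def lif_step(v, input_current):
    """One leaky integrate-and-fire update over the whole tensor."""
    v = leak * v + input_current
    spikes = v >= threshold        # boolean spike mask for this step
    v[spikes] = reset              # spiking neurons reset their potential
    return v, spikes

v, spikes = lif_step(v, input_current=20.0)
print(int(spikes.sum()))
```

Keeping data in tensors and behavior in registered operations is what lets the same framework run both spiking and traditional (deep learning) models, as the summary notes.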
- the system supports user-defined operations so that neurons in the same Node can directly access and update each other's data (without going through an Edge), realizing rapid exchange of information to simulate the electrical synapses in the biological brain nervous system.
- the system likewise supports user-defined operations so that synapses in the same Edge can directly access and update each other's data, realizing rapid exchange of information to simulate the mechanism by which synapses exchange information in the biological brain nervous system.
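As a sketch of the electrical-synapse mechanism above, a user-defined operation can let neurons within one Node read each other's state directly. The coupling rule and numbers are assumptions for illustration only.

```python
import numpy as np

# Illustrative operation: neurons in the same Node read each other's
# membrane potential directly (no Edge), mimicking gap-junction coupling.
v = np.array([-70.0, -50.0, -60.0])   # potentials of neurons in one Node
g = 0.1                                # assumed coupling strength

def gap_junction_step(v, g):
    # each neuron is pulled toward the mean potential of its Node neighbours
    mean = v.mean()
    return v + g * (mean - v)

v = gap_junction_step(v, g)
print(np.round(v, 1))  # potentials drawn toward the Node mean of -60.0
```

The same pattern applied inside an Edge, with synapses reading each other's weights or traces, gives the direct synapse-to-synapse information exchange the text describes.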
- the system supports automatic pruning and regeneration of synapses and neurons according to preset trigger conditions and execution rules
- the trigger condition can be specified by the user in the configuration description module
- the execution rule can be specified by the user in the model description module
- the execution rules can act on the entire network model object, or on sub-networks or specific containers;
- the pruning and regeneration of synapses and neurons is scheduled and executed by the scheduler, and can be executed while the network is running or while it is paused.
- the trigger condition includes one or more of the following:
- User command: the user inputs a command to the system via keyboard, mouse, or other means, and the system executes the pruning or regeneration process immediately upon receiving the command, or after a first preset time;
- Interval execution: the system automatically starts the pruning or regeneration process according to a first preset time interval or a first preset traversal period.
- the execution rules of the pruning process are divided into synaptic pruning rules and neuron pruning rules;
- the synaptic pruning rules include one or more of the following:
- if the statistic relating a synapse's parameter to the parameters of all synapses in a specified reference synapse set reaches a first preset value relationship (e.g., a synapse's weight is less than 1% of the average weight of all synapses in a specified edge), that synapse is a synapse to be pruned;
- if a synapse is marked for pruning by another operation process, it is a synapse to be pruned; any synapse to be pruned may then be pruned;
- the neuron pruning rules include one or more of the following:
- if a neuron is marked for pruning by another operation process, it is a neuron to be pruned; any neuron to be pruned may then be pruned.
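The first synaptic pruning rule can be sketched directly from its example: a synapse whose weight falls below 1% of the mean weight of a reference synapse set is marked for pruning. The weights below are invented sample values.

```python
import numpy as np

# Sketch of the weight-ratio pruning rule: mark synapses whose weight is
# below 1% of the mean weight of the reference synapse set (here, the
# whole edge). All values are illustrative.
weights = np.array([0.5, 0.8, 0.003, 1.2, 0.0004])
reference_mean = weights.mean()

to_prune = weights < 0.01 * reference_mean   # first preset value relationship
kept = weights[~to_prune]
print(to_prune.tolist())
```

In the described system this predicate would run as a scheduled operation over an Edge container's weight tensor, with the scheduler deciding when (running or paused) the marked synapses are actually removed.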
- the execution rules of the regeneration process are divided into neuron regeneration rules and synapse regeneration rules
- the neuron regeneration rules include one or more of the following:
- new neurons are generated at a second preset ratio, or a first preset number, of the container's total capacity;
- the first preset ratio and the second preset ratio may be the same or different;
- a certain node container generates new neurons at a first preset rate (that is, at a preset time interval or preset traversal period), at a third preset ratio or a second preset number of its total capacity;
- if a certain node container is marked as needing new neurons by another operation, new neurons are generated at a second preset rate (that is, at a preset time interval or preset traversal period, at a preset ratio or number of its total capacity);
- the synapse regeneration rules include one or more of the following:
- the fourth preset ratio and the fifth preset ratio may be the same or different;
- a certain edge container generates new synapses at a third preset rate (that is, at a preset time interval or preset traversal period, at a preset ratio or number of its total capacity);
- if a certain edge container is marked as needing new synapses by another operation process, new synapses are generated at a fourth preset rate (that is, at a preset time interval or preset traversal period, at a preset ratio or number of its total capacity);
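The interval-driven regeneration rules can be sketched as follows: every preset traversal period, an edge container grows new synapses equal to a preset ratio of its total capacity. All numbers here are assumed values, not from the patent.

```python
# Illustrative interval-driven synapse regeneration for one edge container.
capacity = 100
alive = 60                 # current synapse count in the edge container
ratio = 0.05               # preset ratio of total capacity per regeneration
period = 10                # preset traversal period between regenerations

def regenerate(alive, cycle):
    if cycle % period == 0:                            # period elapsed
        alive = min(capacity, alive + int(ratio * capacity))
    return alive                                       # capped at capacity

for cycle in range(1, 31):
    alive = regenerate(alive, cycle)
print(alive)  # three regeneration events: 60 -> 65 -> 70 -> 75
```

Capping at `capacity` reflects that NodeParam/EdgeParam tensors are pre-sized, so regeneration reactivates slots rather than growing the tensor.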
- the user can specify one or more rules for each container, and all the rules constitute a rule base.
- the rule manager sorts the rules in the rule base according to preset priorities; when multiple rules for a container conflict, only the rule with the highest priority is executed; when a container has no rule specified, the rule manager executes the default rule;
- the rules in the rule base include: traversal rules, memory usage rules, data I/O rules, and synapse and neuron pruning and regeneration rules;
- the traversal rules can instruct the scheduler to repeatedly traverse, or skip traversing, all or specific containers of the network according to a second preset time interval or a fourth preset traversal period, so as to concentrate computing resources on sub-networks that need intensive computation and improve data utilization efficiency;
- the memory usage rules can be used to guide the scheduler to reasonably arrange the usage of main memory and coprocessor memory
- the data I/O rules can be used to guide the scheduler to schedule the frequency of data exchange between the main memory and the coprocessor memory, and between the memory and the hard disk, so as to save I/O resources and improve overall computing efficiency.
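The priority-based arbitration described above can be sketched as a per-container rule lookup: conflicting rules are resolved by priority, and a container with no rules falls back to the default. Rule names, priorities, and container names are invented for illustration.

```python
# Sketch of priority-based rule arbitration in a rule-manager-like lookup.
DEFAULT_RULE = ("traverse_every_cycle", 0)   # used when no rule is specified

rule_base = {
    "visual_cortex": [("skip_traversal", 1), ("traverse_every_cycle", 5)],
    "hippocampus": [("traverse_every_cycle", 3)],
}

def select_rule(container):
    rules = rule_base.get(container)
    if not rules:
        return DEFAULT_RULE                      # no rule -> default rule
    return max(rules, key=lambda r: r[1])        # highest priority wins

print(select_rule("visual_cortex")[0])   # conflicting rules: priority 5 wins
print(select_rule("brainstem")[0])       # unlisted container: default rule
```

In the described system this selection happens while the scheduler traverses the network model object, so arbitration cost is paid per container per traversal rather than per neuron.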
- the scheduler manages one or more main memory pools and one or more device memory pools to reasonably allocate the usage of network model objects in main memory and each device memory;
- the main memory pool is used to manage the use of main memory
- each device memory pool corresponds to a coprocessor (which may be an ARM, GPU, DSP, ASIC, etc.) and is used to manage the use of the corresponding device memory;
- the upper and lower limits of the capacity of the main memory pool and the device memory pool are specified by the user through the configuration description module.
- the scheduler manages one or more thread pools for dynamically arranging sub-threads to participate in multi-threaded operations, so as to rationally arrange the collaboration of the main computing unit (which can be a CPU, ARM, etc.), the coprocessors (which can be ARM, GPU, DSP, etc.), and the I/O devices (hard disk, camera, audio input, control output, etc.).
- the scheduler manages one or more node data input buffers, one or more node data output buffers, one or more edge data input buffers, and one or more edge data output buffers, used for caching data that needs to be read from or written to hard disks or I/O devices, so that the scheduler can schedule hard disk and I/O device reads and writes in a timely manner according to the load of the processor, hard disks, and I/O devices, avoiding I/O blocking;
- the capacity of each buffer, the upper and lower limits of the frequency of reading and writing hard disks or I/O devices, and the upper and lower limits of throughput of reading and writing hard disks or I/O devices are specified by the user through the configuration description module.
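The buffering scheme above can be sketched with a bounded output buffer that batches writes and flushes when full, so per-write I/O never blocks the compute path. The capacity and the stand-in "disk" list are illustrative assumptions, since the patent leaves these user-configured.

```python
from collections import deque

# Hedged sketch of a node/edge data output buffer: writes accumulate and
# are flushed to the "hard disk" in batches when the buffer fills.
class OutputBuffer:
    def __init__(self, capacity):
        self.capacity = capacity   # user-specified via the configuration
        self.queue = deque()
        self.flushed = []          # stands in for the hard disk / I/O device

    def write(self, record):
        self.queue.append(record)
        if len(self.queue) >= self.capacity:
            self.flush()           # batch flush only when the buffer fills

    def flush(self):
        while self.queue:
            self.flushed.append(self.queue.popleft())

buf = OutputBuffer(capacity=4)
for step in range(10):
    buf.write(("node_state", step))
buf.flush()                        # final drain, e.g. at network pause
print(len(buf.flushed), len(buf.queue))  # 10 0
```

A real scheduler would trigger `flush()` from a sub-thread based on device load rather than only on fill level, which is the timeliness aspect the text emphasizes.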
- This application also provides a spiking neural network operation method for brain-like intelligence and cognitive computing, which uses the above-mentioned spiking neural network computing system for brain-like intelligence and cognitive computing.
- The present application discloses a spiking neural network computing system and method for brain-like intelligence and cognitive computing, which provides a unified and flexible modeling method.
- The provided multi-level tree-structured network description method supports full-scale modeling of the biological brain and nervous system and flexible network topology, thereby organically unifying modeling scale and modeling richness; all models at each scale are integrated into a unified neural network for operation, with data represented and stored in the form of tensors. This also enables the system to support spiking neural networks while remaining compatible with traditional neural networks (deep learning) and other algorithms that use tensors as the main data representation.
- The system also provides the function of automatically pruning and regenerating synapses and neurons according to preset conditions and rules, relieving neural network developers of the burden of implementing such functions themselves.
- the above-mentioned modeling design method decomposes a model of any level (or scale) into two parts: data and operation.
- data can be accommodated by NodeParam, EdgeParam or Param, and stored by the corresponding parameter database.
- Operations are executable programs (such as functions and classes containing functions) that can access and update the aforementioned data.
- FIG. 1 is a schematic diagram of the overall architecture of a spiking neural network computing system for brain-like intelligence and cognitive computing provided by this application;
- FIG. 2 is a schematic diagram of a network hierarchy of a spiking neural network computing system for brain-like intelligence and cognitive computing in an embodiment of the application;
- FIG. 3 is a schematic diagram of a system operation process flow of a spiking neural network computing system for brain-like intelligence and cognitive computing in an embodiment of the application.
- an embodiment of the present application discloses a spiking neural network computing system for brain-like intelligence and cognitive computing.
- the system includes: model description module 3, parameter database 2, configuration description module 1, configuration manager 6.
- the aforementioned model description module 3 includes: a network description unit, a convergence description unit, and a circulation description unit. Together they describe the various components and topology of the entire network. They preferably use nested syntax, in XML or JSON file format.
- the network description unit can be used to describe containers such as Network and Param, can be used to describe the parameters and operating rules of the entire network, and point to one or more convergence description units and circulation description units through links.
- the convergence description unit can be used to describe containers such as Confluence, Module, Layer, Node, NodeParam, and Param, and can be used to describe the relationship between the modules and layer groups of nodes in the network, the parameters of each container, and runtime rules and commands.
- the circulation description unit can be used to describe containers such as Flow, Channel, Link, Edge, EdgeParam, and Param, and can be used to describe the connection (topology) relationship of the edges in the network, the parameters of each container, and runtime rules and commands.
- the network description mode supported in the model description module 3 is preferably represented by a multi-level tree structure to imitate the organization of the biological brain nervous system:
- the above convergence description unit supports organizing nodes in the network into preset layer groups and modules, which can characterize the multi-level organization of neurons and related glial cells in the biological brain (e.g., nucleus -> brain region -> whole brain);
- the above-mentioned circulation description unit supports grouping and hierarchically arranging the edges in the network according to topological (connection-relationship) similarity, which can characterize the various organization methods of nerve synapses in the biological brain (such as dendrites, projections in neural pathways, nerve fiber bundles, etc.) and the organization of the protrusions of related glial cells. This makes the development, debugging, management, and scheduling of large-scale brain-like neural networks more intuitive and convenient.
- Network represents a network container, located at the first level (top level) of the tree structure, and can be used to characterize models of the whole brain and behavioral scales.
- Each Network can accommodate one or more Confluence and Flow.
- Confluence represents the confluence container, located at the second level of the tree structure, and can be used to characterize models at the brain-area scale. Each Confluence can contain one or more Modules.
- Module represents a module container, which is located at the third level of the tree structure and can be used to characterize the model of the nerve nucleus scale.
- Each Module can contain one or more Layers.
- Layer represents the layer group container, located at the fourth level of the tree structure, and can be used to represent the model of the neural loop scale.
- Each Layer can contain one or more Nodes.
- Node stands for node container, which is located at the fifth level of the tree structure. It can be used to characterize models of neuron scale or glial cell scale, and can also be used to characterize a group of neurons or glial cells.
- the firing characteristics of the neuron model can be constructed as tonic firing, fast spiking, burst firing, spiking, or phasic firing, etc.
- Its response to upstream input signals can be constructed as different neural adaptability or sensitivity curves.
- the downstream mechanism of action can be constructed as an excitatory, inhibitory, modulated or neutral model.
- Glial cell models can be constructed as astrocyte, oligodendrocyte, microglia, Schwann cell, and satellite cell models. Each Node can accommodate one or more NodeParams.
- when a Node is used to characterize a group of neurons of the same type, the number of NodeParams it accommodates is determined by the number of parameter types of the neuron model; that is, each parameter type corresponds to one NodeParam, and the parameters of that type for all neurons in the Node are arranged and saved as a tensor.
- Node can also be used to characterize input and output nodes, and is used to interface with system I/O devices, such as camera input, audio input, sensor input, control output, etc.
- the data read from and written to I/O devices is dynamically updated through the NodeParams of the Node.
- NodeParam represents the node parameter container, located at the sixth level (lowest level) of the tree structure. It can be used to characterize models at the molecular, receptor, neurotransmitter, or neuromodulator scale, and can also be used to hold the parameter tensor of a group of neuron or glial cell models.
- the neurotransmitter or neuromodulator model can be constructed as an excitatory, inhibitory or modulated model.
- the receptor model can be constructed as an ionic or metabolic model, and its response to neurotransmitter or neuromodulator can be constructed as an excitatory, inhibitory, modulated or neutral model.
- Flow stands for the circulation container, located at the second level of the tree structure, and can be used to characterize the model of the scale of the nerve fiber bundles connecting the brain areas.
- Each Flow can contain one or more Channels.
- Channel represents the channel container, located at the third level of the tree structure, and can be used to characterize the model of the conduction bundle formed by the axons connecting the nerve nuclei.
- Each Channel can contain one or more Links.
- Link represents the connection container, located at the fourth level of the tree structure, and can be used to represent the model of the neural pathway composed of axons in the neural circuit.
- Each Link can accommodate one or more Edges.
- Edge stands for edge container, located at the fifth level of the tree structure. It can be used to characterize models of dendritic scale or synaptic scale, and can also be used to characterize a group of synapses or glial cell protrusions.
- the dendritic scale model can be constructed as an apical dendrite model, a basal dendrite model, or a dendritic spine model.
- Synapse models can be constructed as excitatory, inhibitory, modulated or neutral models.
- Each Edge can accommodate one or more EdgeParams. When an Edge is used to characterize a group of synapses of the same type, the number of EdgeParams it contains is determined by the number of parameter types of the synapse model; each parameter type corresponds to one EdgeParam, and the parameters of that type for all synapses in the Edge are arranged and saved as a tensor.
- EdgeParam represents the edge parameter container, located at the sixth level (lowest level) of the tree structure. It can be used to characterize models at the molecular, neurotransmitter, neuromodulator, or receptor scale, and can also be used to hold the parameter tensor of a group of synapse or glial cell protrusion models.
- Molecular scale models can be constructed as intracellular molecular models, molecular models in cell membranes, and intercellular molecular models.
- the neurotransmitter or neuromodulator model can be constructed as an excitatory, inhibitory or modulated model.
- the receptor model can be constructed as an ionic or metabolic model, and its response to neurotransmitter or neuromodulator can be constructed as an excitatory, inhibitory, modulated or neutral model.
- Param represents a general parameter container and is an auxiliary container. According to modeling needs, each of the above-mentioned containers at any level may additionally have one or more Params, used to hold parameter data in the form of tensors; alternatively, a container may have no Param.
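The six-level tree described above (Network -> Confluence -> Module -> Layer -> Node -> NodeParam on the node side) can be sketched as nested containers indexed by number and name. A minimal illustration only; the class and instance names below are hypothetical, not from the patent:

```python
class Container:
    """Generic container with a number, a name, and child containers,
    supporting index access within the multi-level tree structure."""
    def __init__(self, number, name):
        self.number = number
        self.name = name
        self.children = []   # child containers at the next level

    def add(self, child):
        self.children.append(child)
        return child

    def find(self, name):
        # Index a direct child by name (containers are also indexable by number).
        return next(c for c in self.children if c.name == name)

# Build one branch of the node-side hierarchy:
# Network -> Confluence -> Module -> Layer -> Node -> NodeParam
net = Container(0, "Network")
conf = net.add(Container(1, "VisualCortex"))   # Confluence: brain-area scale
mod = conf.add(Container(2, "V1"))             # Module: nucleus scale
layer = mod.add(Container(3, "Layer4"))        # Layer: neural-circuit scale
node = layer.add(Container(4, "ExcNeurons"))   # Node: neuron-group scale
node.add(Container(5, "threshold"))            # NodeParam: one per parameter type
```

The edge-side hierarchy (Network -> Flow -> Channel -> Link -> Edge -> EdgeParam) would be built the same way.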
- NodeParam, EdgeParam, and Param can accommodate parameters in the form of tensors (ie, multi-dimensional matrices).
- the dimension of a tensor can be from 1 to multi-dimensional, and the specific arrangement and use method are specified by the user.
- a tensor can be 4-dimensional, and the position of each parameter in the tensor can be represented by coordinates (x, y, z, t), where the x, y, and z dimensions correspond to the spatial arrangement of each neural tissue model (such as neurons or synapses) represented in the parent container, and t represents the time dimension, which can characterize the caching and delay of timing information and can be used to simulate the long-term (delayed) mechanisms by which neuromodulators act on neurons and synapses.
- the parameters in the tensor can be shared by all or part of the neural tissue (such as neurons or synapses) models in the parent container, and can be used to simulate the large-area effect of neuromodulators on all neural tissues in the target area.
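As a concrete illustration of the (x, y, z, t) layout, a parameter tensor can be stored as a flat list with row-major coordinate indexing, the t axis serving as buffered time slots for delayed values. This is a sketch under those assumptions; the class name and layout are illustrative, not specified by the patent:

```python
class ParamTensor:
    """4-D parameter tensor addressed by (x, y, z, t): x, y, z give the
    spatial arrangement of the modeled neural tissue, t buffers time steps."""
    def __init__(self, X, Y, Z, T, fill=0.0):
        self.shape = (X, Y, Z, T)
        self.data = [fill] * (X * Y * Z * T)

    def _index(self, x, y, z, t):
        # Row-major flattening of the 4-D coordinate.
        X, Y, Z, T = self.shape
        return ((x * Y + y) * Z + z) * T + t

    def get(self, x, y, z, t):
        return self.data[self._index(x, y, z, t)]

    def set(self, x, y, z, t, value):
        self.data[self._index(x, y, z, t)] = value

# A 2 x 2 x 1 sheet of neurons, each with 3 buffered time steps.
p = ParamTensor(2, 2, 1, 3)
p.set(1, 0, 0, 2, 0.5)   # write a delayed value for the neuron at (1, 0, 0)
```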
- Each of the above containers has a number and name, which are used for indexing in a multi-level tree structure.
- Each container has one or more control blocks (Control Block) used to store statistics and control information, such as the traversal order and rules of the network, the number of traversal operations already performed, whether data currently resides in main memory or coprocessor memory, the frequency of hard disk reads and writes, etc.; this information is managed and updated by the rule manager and the scheduler.
- Flow and all its child containers can each correspond to one or more upstream containers and one or more downstream containers, which they access by index through the numbers or names of those upstream and downstream containers.
- Both the upstream container and the downstream container can be containers of any level, and they can be the same or different containers. Therefore, Flow and all its child containers can form an information flow path with its upstream and downstream containers, characterizing the (one-way or two-way) flow and processing process of information between two information sources (such as upstream and downstream containers). Any topological structure of information flow can be formed among multiple containers in the network.
- the above-mentioned information flow and processing process can be used to realize a variety of biological brain nerve mechanisms such as nerve impulse conduction between neurons through synapses, exchange of information between synapses and synapses, and neuron and synaptic plasticity.
- the above-mentioned arbitrary topological structure of information flow can be used to realize any neural circuit connection in the brain nervous system, including feedback connections in which a neuron connects back to itself, connections among neurons of the same group (layer), arbitrary connections between neurons of different groups (layers) (sequential/feedforward, cross-layer, feedback, etc.), as well as direct connections between synapses, and it allows indefinitely looping computation over feedback connections.
- when the upstream and downstream containers of an Edge are two different Nodes, the topological relationship between them can be used to realize forward/feedforward connections between neurons of different groups (layers) through synapses;
- when the upstream and downstream containers of an Edge are the same Node, the topological relationship between them (for example, Node 1 -> Edge -> Node 1) can be used to realize the interconnection of neurons within the same group (layer) through synapses, and can also be used to realize the feedback connection of a neuron connecting back to itself through an autapse;
- when each Node belongs to a different Layer, the topological relationship can be used to realize a neural circuit formed by feedforward, cross-layer, and feedback connections between neurons in different layers;
- the topological relationship can be used to realize a feedback loop composed of one or more (or one or more groups of) different neurons.
- the synapses in an Edge can access the neurons in the upstream and downstream containers to obtain their firing timing information, perform operations combining their own parameters (such as weights), and propagate the results to the neurons in the upstream and downstream containers; this realizes the conduction of nerve impulses between neurons through synapses, as well as long- and short-term plasticity mechanisms such as Hebbian, anti-Hebbian, and STDP. Neurons in a Node can undergo functional changes or shaping (a form of neuronal plasticity) according to the information they receive (transmitted through neurotransmitters or neuromodulators).
- When an Edge is used to characterize one or more synapses, and at least one of its corresponding upstream and downstream containers is also an Edge characterizing one or more synapses, the topological relationship between them (for example, Edge 1 -> Edge 2 -> Edge 3) can be used to realize direct connections and direct information exchange between synapses.
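The upstream/downstream indexing described above can be sketched as name references resolved through a registry, so that an autapse (Node 1 -> Edge -> Node 1) or an Edge -> Edge chain is just an ordinary pair of entries. A minimal sketch with hypothetical names:

```python
registry = {}  # container name -> container object

class Box:
    """Any container that can act as an information source or sink."""
    def __init__(self, name):
        self.name = name
        registry[name] = self

class Edge(Box):
    """Edge container indexing its upstream and downstream containers by name."""
    def __init__(self, name, upstream, downstream):
        super().__init__(name)
        self.upstream = upstream      # name of the upstream container
        self.downstream = downstream  # name of the downstream container

    def endpoints(self):
        # Resolve the names to the actual containers at access time.
        return registry[self.upstream], registry[self.downstream]

n1 = Box("Node1")
autapse = Edge("Edge1", "Node1", "Node1")  # feedback: a group connecting back onto itself
up, down = autapse.endpoints()
```

Because the endpoints are arbitrary names, the same mechanism expresses feedforward, cross-layer, feedback, and Edge-to-Edge topologies.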
- the above parameter database is used to store various parameter data of the network (including initialization parameters and runtime parameters).
- the parameter database can be selected as a binary file or a text file.
- the text file can be in CSV file format or a file format in which data is separated by other characters.
- Each container can have one or more corresponding parameter databases.
- the parameters contained in NodeParam, EdgeParam, or Param can be stored in one or more parameter databases, or multiple NodeParam, EdgeParam, or Param can share one or more parameter databases to store the same parameters.
- the user can place the parameter database of each container in the network in the corresponding subfolder in the model file path.
- the modeling design method supported by this system can be to decompose any level (or scale) model into two parts: data and operation.
- data can be accommodated by NodeParam, EdgeParam or Param, and stored by the corresponding parameter database.
- Operations are executable programs (such as functions and classes containing functions) that can access and update the aforementioned data.
- neuron modeling can use a traditional neuron model, designing its ReLU activation function as the operation and its threshold parameter as the data; neuron modeling can also use a spiking neuron model, designing the function of the leaky integrate-and-fire model as the operation and its parameters as the data.
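The data/operation decomposition can be illustrated with a leaky integrate-and-fire update: the membrane parameters are the data (held in parameter containers), and the update function is the operation scheduled over them. A sketch only; all parameter names and constants below are chosen for illustration:

```python
def lif_step(v, i_in, v_rest=0.0, v_thresh=1.0, v_reset=0.0, leak=0.9):
    """One leaky integrate-and-fire step (the operation); v and i_in are data.
    Returns the new membrane potential and whether a spike was fired."""
    v = v_rest + leak * (v - v_rest) + i_in   # leaky integration toward rest
    if v >= v_thresh:
        return v_reset, True                  # fire and reset
    return v, False

# Drive one neuron with constant input until it spikes.
v, spiked, steps = 0.0, False, 0
while not spiked:
    v, spiked = lif_step(v, i_in=0.2)
    steps += 1
```

A ReLU-based traditional neuron would decompose the same way, with the activation as the operation and the threshold as the data.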
- the user can define one or more operations that enable the neurons in the same Node to directly access and update each other's data (without an Edge) to achieve rapid exchange of information, which can be used to simulate electrical synapses in the biological brain nervous system.
- users can define one or more operations when modeling so that the synapses in the same Edge can directly access and update each other's data, realizing rapid exchange of information, which is used to simulate direct information exchange between synapses in the biological brain nervous system.
- the system provides a flexible and unified modeling method
- the multi-level tree structure network description method provided supports full-scale modeling of the biological brain and nervous system and flexible network topology; it organically unifies model scale and modeling richness, integrating models of all scales into a unified neural network for operation.
- data is represented and stored in the form of tensors, which enables the system to support spiking neural networks while remaining compatible with traditional neural networks (deep learning) and other algorithms that use tensors as their main data representation.
- the above-mentioned operation manager 7 is used to manage all operations that can be run on the system.
- Operations can be programs (including code segments, functions, and classes) that can run on general-purpose CPU, ARM, DSP, GPU, or other processors. All operations constitute the operation library.
- the operation manager 7 provides a programming interface for querying and recalling specified operations based on the operation number or name. The user can specify the operations to be performed for each container in the model description module, and the scheduler will schedule the corresponding operations during runtime. This ensures that the system has a certain cross-hardware platform versatility and can run on hardware platforms such as general-purpose CPU, GPU, ARM, DSP, etc.
- the above configuration description module 1 is used to describe the configuration parameters of the current network operating environment, such as the size of the memory pool available to the system, the operating mode (single, multiple, or continuous operation), the upper and lower limits of the frequency of reading and writing hard disk data, the conditions for initiating synapse and neuron pruning and regeneration processes, etc.
- the configuration manager 6 is used to read the configuration description module 1 to obtain system configuration parameters, and provide a programming interface for other components to call.
- the above-mentioned network model object is constructed by the network builder 9 and resides in the memory. It characterizes the entire network, including all containers, topological relationships and parameter data, and is the object that the scheduler schedules to run.
- the aforementioned rule manager 12 is used to read the rules declared by the user in the model description module 3, and interpret these rules when the scheduler 11 schedules the operation of the network model object.
- the user can specify one or more rules for each container in the model description module 3. All the rules constitute the rule base.
- the rule manager 12 sorts the rules in the rule base according to a preset priority. When multiple rules for a container conflict with each other, only the rule with the highest priority is executed. When a container does not specify any rules, the rule manager 12 uses the default rules to execute.
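The conflict-resolution behavior described here (highest-priority rule wins; default rule when none is specified) can be sketched as a small lookup. The rule names and priority values below are illustrative, not from the patent:

```python
DEFAULT_RULE = {"name": "default_traversal", "priority": 0}

class RuleManager:
    """Keeps per-container rule lists sorted by descending priority."""
    def __init__(self):
        self.rules = {}  # container name -> list of rule dicts

    def declare(self, container, name, priority):
        self.rules.setdefault(container, []).append(
            {"name": name, "priority": priority})
        # Preset priority order: highest first.
        self.rules[container].sort(key=lambda r: -r["priority"])

    def effective_rule(self, container):
        # When multiple rules for a container conflict, only the
        # highest-priority one executes; with no rules, use the default.
        declared = self.rules.get(container)
        return declared[0] if declared else DEFAULT_RULE

rm = RuleManager()
rm.declare("Layer4", "skip_traversal", priority=1)
rm.declare("Layer4", "traverse_every_step", priority=5)
```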
- the rules in the rule base include (but are not limited to): traversal rules, memory usage rules, data I/O rules, synapse and neuron pruning and regeneration rules, etc.
- Traversal rules can be used to instruct the scheduler to repeatedly traverse, or skip traversing, all or specific containers of the network according to the second preset time interval or the fourth preset traversal period, so as to concentrate computing resources on sub-networks that need intensive computation and improve data utilization efficiency;
- memory usage rules can be used to guide the scheduler to rationally arrange the use of main memory and coprocessor memory
- data I/O rules can be used to guide the scheduler in scheduling the frequency of data exchange between main memory and coprocessor memory, and between memory and hard disk, saving I/O resources and improving overall computing efficiency.
- the aforementioned data manager 8 includes one or more decoders and encoders.
- the decoder is used to read and parse the data file in the format specified by the user and convert its content into a data type that can be calculated by the computer.
- the encoder is used to serialize the data in the memory in a user-specified format for writing back to the hard disk.
- the file type of the data file can be a binary file or a text file (in Unicode or ASCII format). Users can add custom decoders and encoders in the data manager 8 to read and write files in custom formats.
- the above-mentioned network builder 9 reads the model description module 3, analyzes the topology of the network, and reads the data file through the data manager 8 to construct the network model object in the memory.
- the above-mentioned network manager 10 provides a programming interface for constructing a network model object, and the interface calls the network builder 9 to construct a network model object.
- the network manager 10 also provides a programming interface for accessing, traversing, and operating network model objects, including support for querying and updating arbitrary containers, neurons, synapses, parameters, etc. by number or name.
- the supported traversal sequence includes (but is not limited to):
- traversal can include (but is not limited to):
- the aforementioned scheduler 11 is used for allocating hardware resources and scheduling calculation processes to ensure optimal calculation efficiency.
- the scheduler 11 manages one or more main memory pools and one or more device memory pools to reasonably allocate the usage of network model objects in the main memory and each device memory.
- the main memory pool is used to manage the use of main memory; each coprocessor (which can be ARM, GPU, DSP, ASIC) has one or more corresponding device memory pools to manage the use of corresponding device memory.
- the upper and lower limits of its capacity are specified by the user through the configuration description module 1.
- the aforementioned scheduler 11 manages one or more thread pools for dynamically arranging sub-threads to participate in multi-threaded operations, so as to rationally arrange the computing load of the main computing unit (which can be a CPU, ARM, etc.), the co-processors (which can be ARM, GPU, DSP, etc.), and the I/O devices (hard disk, camera, audio input, control output, etc.).
- the above-mentioned scheduler 11 manages one or more node data input buffers, one or more node data output buffers, one or more edge data input buffers, and one or more edge data output buffers, for buffering data that needs to be read from or written to hard disks or I/O devices. These buffers preferably use a circular-queue data structure.
- the capacity of each buffer, the upper and lower limits of the frequency of reading and writing hard disks or I/O devices, and the upper and lower limits of the throughput of reading and writing hard disks or I/O devices are specified by the user through the configuration description module 1. According to the load of the processor, the hard disk and the I/O device, the scheduler 11 arranges the hard disk and I/O device to read and write in a timely manner to avoid I/O blocking.
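The circular-queue buffers mentioned above can be sketched as a fixed-capacity ring, so that the producing side (network computation) and the consuming side (hard disk or I/O device) exchange data without reallocation. A minimal sketch with illustrative names:

```python
class RingBuffer:
    """Fixed-capacity circular queue for staging data between the network
    and hard-disk or I/O-device reads and writes."""
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = 0    # index of the next item to read
        self.count = 0   # number of items currently buffered

    def push(self, item):
        if self.count == len(self.buf):
            raise OverflowError("buffer full; scheduler should drain it first")
        self.buf[(self.head + self.count) % len(self.buf)] = item
        self.count += 1

    def pop(self):
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        self.count -= 1
        return item

rb = RingBuffer(3)
for chunk in ("a", "b", "c"):
    rb.push(chunk)
first = rb.pop()   # frees one slot
rb.push("d")       # write wraps around to the freed slot
```

The user-specified capacity limits from the configuration description module would bound `capacity` and the drain frequency.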
- since the scheduler 11 is used to reasonably allocate the use of hardware resources such as processors and coprocessors, memory, hard disks, and I/O devices, this system is suitable for efficient operation on embedded devices with relatively limited hardware resources (such as memory).
- This system provides the function of automatically pruning and regenerating synapses and neurons according to certain trigger conditions and execution rules.
- the user can specify in the configuration description module 1 the trigger conditions for starting the pruning or regeneration process, and in the model description module 3 the execution rules of the pruning or regeneration process.
- Execution rules can act on the entire network model object, or on sub-networks or specific containers.
- the pruning or regeneration process is scheduled and executed by the scheduler 11, and can be executed while the network is running or while the network is suspended.
- the trigger conditions for starting the pruning or regeneration process may include (but are not limited to) one or more of the following:
- User command: the user inputs a command to the system through the keyboard, mouse, or other means, and the system executes the pruning or regeneration process immediately after receiving the command, or after the first preset time;
- Interval execution: the system automatically starts the pruning or regeneration process according to the first preset time interval or the first preset traversal period.
- Synapse pruning rules can include (but are not limited to) one or more of the following:
- if the statistic of a certain synapse's parameter relative to all synapse parameters in a specified reference synapse set reaches the first preset value relationship (for example, the weight of the synapse is less than 1% of the average weight of all synapses in a specified Edge), then that synapse is a synapse to be pruned;
- if the parameter of a certain synapse reaches the second preset value relationship with a specified threshold (for example, the weight of the synapse is less than 10.0), then that synapse is a synapse to be pruned;
- if a certain synapse is marked for pruning by another computation process, then that synapse is a synapse to be pruned. A synapse identified as to-be-pruned can then be pruned.
- Neuron pruning rules can include (but are not limited to) one or more of the following:
- if a neuron has no input or output synapses, then that neuron is a neuron to be pruned;
- if the statistic of a certain neuron's parameters relative to all neuron parameters in a specified reference neuron set reaches the third preset value relationship (for example, the threshold of the neuron is greater than the maximum threshold of all neurons in a specified Node), then that neuron is a neuron to be pruned;
- if the parameter of a certain neuron reaches the fourth preset value relationship with a specified threshold (for example, the threshold of the neuron is greater than 1000.0), then that neuron is a neuron to be pruned;
- if a certain neuron is marked for pruning by another computation process, then that neuron is a neuron to be pruned. A neuron identified as to-be-pruned can then be pruned.
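The synapse-pruning rules above (statistic relative to a reference set, absolute threshold, external mark) can be combined into one predicate. A sketch only: the 1%-of-average and 10.0 examples come from the text, while the function name and parameterization are illustrative:

```python
def synapse_to_prune(weight, marked, reference_weights,
                     rel_fraction=0.01, abs_threshold=10.0):
    """Return True if a synapse should be pruned under any rule:
    weight below rel_fraction of the reference-set average weight,
    weight below an absolute threshold, or marked by another process."""
    avg = sum(reference_weights) / len(reference_weights)
    if weight < rel_fraction * avg:   # first preset value relationship
        return True
    if weight < abs_threshold:        # second preset value relationship
        return True
    return marked                     # marked for pruning externally

ref = [100.0, 200.0, 300.0]                 # weights of all synapses in the Edge
keep = synapse_to_prune(150.0, False, ref)  # healthy synapse -> False
weak = synapse_to_prune(50.0 * 0.03, False, ref)  # 1.5 < 1% of avg (2.0) -> True
```

Neuron-pruning predicates would have the same shape, adding the "no input or output synapses" condition.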
- Neuron regeneration rules can include (but are not limited to) one or more of the following:
- when the ratio of the number of existing neurons in a certain Node container to the total capacity of that container reaches the first preset ratio or the fifth preset value, new neurons are generated at the second preset ratio of, or in the first preset number relative to, its total capacity; the first preset ratio and the second preset ratio may be the same or different;
- a certain Node container generates new neurons at the first preset rate (that is, according to a preset time interval or a preset traversal period), at the third preset ratio of, or in the second preset number relative to, its total capacity;
- when a certain Node container is marked by another computation process as needing new neurons, new neurons are generated at the second preset rate (that is, according to a preset time interval or preset traversal period, at a preset ratio of, or in a preset number relative to, its total capacity).
- Synapse regeneration rules can include (but are not limited to) one or more of the following:
- when the ratio of the number of existing synapses in a certain Edge container to the total capacity of that container reaches the fourth preset ratio or the sixth preset value, new synapses are generated at the fifth preset ratio of, or in the third preset number relative to, its total capacity; the fourth preset ratio and the fifth preset ratio may be the same or different;
- a certain Edge container generates new synapses at the third preset rate (that is, according to a preset time interval or a preset traversal period, at a preset ratio of, or in a preset number relative to, its total capacity);
- when a certain Edge container is marked by another computation process as needing new synapses, new synapses are generated at the fourth preset rate (that is, according to a preset time interval or preset traversal period, at a preset ratio of, or in a preset number relative to, its total capacity);
- the above scheduler is responsible for scheduling the execution of synapse and neuron pruning and regeneration.
- the scheduler allocates one or more sub-threads from the thread pool managed by it, each responsible for some areas or specific containers in the network model object.
- the child thread will traverse each container in the area under its jurisdiction and execute the neuron and/or synapse trimming and/or regeneration process according to the specified rules.
- the regeneration process of a neuron or synapse can be to allocate the required memory space in the container and create a corresponding object (new/construct object); the clipping process of a neuron or synapse can be to destruct the corresponding object in the container (delete/destruct object) and release the occupied memory space.
- this system provides the function of automatically executing synapse and neuron pruning and regeneration according to certain conditions and rules, and provides a variety of flexible trigger conditions and execution rules for starting those processes, relieving neural network developers of the burden of writing their own synapse and neuron pruning and regeneration programs and increasing the flexibility and efficiency of development.
- the pruning process for synapses and neurons can be used in alternation with the regeneration process, which can optimize the coding efficiency of the neural network, greatly compress the size of the neural network and the required storage space, save memory, and improve computing efficiency, making this system suitable for running on embedded devices with limited hardware resources.
- this system is conducive to simulating rich mechanisms in the biological brain nervous system (such as synapse and neuron apoptosis and regeneration in the hippocampus), and can better support brain-like intelligence and cognitive computing.
- the above-mentioned log manager 5 is used to record the logs generated when the system is running, and the logs are used to remind the user of the working status and abnormality of the system, so as to facilitate debugging and maintenance.
- the log consists of a series of strings and timestamps, which can be displayed in a command line environment or saved in a file and displayed with a text browser.
- the log manager consists of a log recording programming interface and a log management service.
- the log recording programming interface is called by the user in the program and transmits the log data to the log management service.
- the log management service runs in an independent thread to avoid blocking network operations. It sorts the received log data uniformly by timestamp and caches it in memory; when the amount of cached data reaches a certain level, it saves the data to the hard disk in order and clears the cache.
- the above-mentioned operation monitoring module 13 is used to receive and respond to user input and manage the operating state of the entire system. It adopts a state-machine design with a default state, a network construction state, a network running state, and a network paused state. It includes a message queue for receiving and buffering user input commands, and an independent thread for responding to the commands in the queue in time, switching the state machine between states. Users can input commands through the keyboard, mouse, programming interface, or other means. Commands include (but are not limited to): build-network commands, start-running commands, pause-running commands, end-running commands, synapse and neuron pruning commands, and synapse and neuron regeneration commands.
- step S1: the system starts and initializes the operating environment;
- step S10: judge whether a start command has been received; if not, return to step S9 to wait for command input again; if yes, go to the next step;
- step S13: judge whether a pause-running command has been received; if yes, go to step S14; if no, go to step S17;
- step S16: judge whether a start-running command has been received; if yes, return to step S11; if no, return to step S15;
- step S17: judge whether the specified stop condition has been reached (including receiving an end-running command, etc.); if no, return to step S12; if yes, end the run.
- when the system is initialized, the above state machine is in the default state, starts the message queue, and begins receiving user input; when a build-network command is received, the state machine switches to the network construction state and constructs the network model object; when a start-running command is received, the state machine switches to the network running state and performs network operations; when a pause-running command is received, the state machine switches to the network paused state and pauses network operation; when an end-running command is received, the state machine saves the network data to the hard disk, and the system ends and exits.
- when the state machine is in the network running state or the network paused state, if there is a synapse and neuron pruning command in the message queue, the scheduler starts the synapse and neuron pruning process; if there is a synapse and neuron regeneration command in the message queue, the scheduler starts the synapse and neuron regeneration process. Since this system uses an operation monitoring module to manage the working state of the system, the system can be switched to the network paused state when the application environment does not require network operations, saving power consumption and making this system suitable for embedded systems.
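The operation monitoring module's state machine, driven by a message queue of user commands, can be sketched as a transition table. The state and command names below paraphrase the text and are otherwise illustrative:

```python
from collections import deque

# (current state, command) -> next state
TRANSITIONS = {
    ("default", "build"): "built",      # build-network command
    ("built", "start"): "running",      # start-running command
    ("running", "pause"): "paused",     # pause-running command
    ("paused", "start"): "running",     # resume running
    ("running", "end"): "default",      # end-running command
    ("paused", "end"): "default",
}

class Monitor:
    """Minimal state machine with a message queue of user commands."""
    def __init__(self):
        self.state = "default"
        self.queue = deque()

    def submit(self, command):
        self.queue.append(command)

    def drain(self):
        # An independent thread would do this; here we drain synchronously.
        # Unknown or invalid commands leave the state unchanged.
        while self.queue:
            cmd = self.queue.popleft()
            self.state = TRANSITIONS.get((self.state, cmd), self.state)

m = Monitor()
for cmd in ("build", "start", "pause", "start", "end"):
    m.submit(cmd)
m.drain()
```

Pruning and regeneration commands would be dispatched to the scheduler from `drain` when the state is running or paused, rather than changing the state.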
- the above-mentioned graphical display module 4 is used to read network data and display it to the user, which is convenient for development, monitoring and debugging.
- the graphical display module 4 can directly read the data of the network model object in memory, or read the data stored on the hard disk.
- the graphical display module 4 adopts an independent thread to avoid blocking network operations, so it can be displayed in real time during the network scheduling operation, or can be displayed after the network scheduling operation ends.
- the terms "first preset time" through "third preset time", "first preset time interval" through "second preset time interval", "first preset traversal period" through "fourth preset traversal period", "first preset numerical relation" through "sixth preset numerical relation", "first preset ratio" through "fifth preset ratio", "first preset number" through "third preset number", and "first preset rate" through "fourth preset rate" are used only to distinguish among the preset times, time intervals, traversal periods, numerical relations, ratios, numbers, and rates; the specific values or ranges can be determined according to actual needs, and the embodiments of this application do not limit them.
- each of the aforementioned preset times, preset time intervals, preset traversal periods, preset numerical relations, preset ratios, preset numbers, and preset rates may be the same as or different from the others.
- for example, the lengths of the first through third preset times may be entirely the same, entirely different, or partly the same and partly different; the embodiments of this application do not limit this.
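The run-state transitions and the pruning/regeneration gating described above can be sketched as a minimal state machine. This is an illustrative sketch, not the patented implementation: the state names, command strings, and method names are hypothetical.

```python
from enum import Enum, auto

class State(Enum):
    DEFAULT = auto()   # after initialization; message queue started, awaiting input
    BUILDING = auto()  # constructing the network model object
    RUNNING = auto()   # network computations in progress
    PAUSED = auto()    # network operation paused (e.g. to save power)
    ENDED = auto()     # network data saved to disk; system exits

class RunMonitor:
    """Toy state machine mirroring the transitions described above."""

    TRANSITIONS = {
        "build": State.BUILDING,
        "run": State.RUNNING,
        "pause": State.PAUSED,
        "end": State.ENDED,
    }

    def __init__(self):
        self.state = State.DEFAULT

    def handle(self, command):
        # Unknown commands leave the state unchanged.
        self.state = self.TRANSITIONS.get(command, self.state)
        return self.state

    def may_prune_or_regrow(self):
        # The scheduler only starts pruning/regeneration while the
        # network is in the running or paused state.
        return self.state in (State.RUNNING, State.PAUSED)

monitor = RunMonitor()
monitor.handle("build")
assert not monitor.may_prune_or_regrow()
monitor.handle("run")
assert monitor.may_prune_or_regrow()
monitor.handle("pause")
assert monitor.may_prune_or_regrow()
```

The point of the gating predicate is that structural plasticity (pruning and regeneration) is only scheduled in the running and paused states, never during construction or shutdown.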
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Data Mining & Analysis (AREA)
- Computational Linguistics (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Artificial Intelligence (AREA)
- Neurology (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
- Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)
Abstract
Description
Claims (38)
- A spiking neural network computing system for brain-like intelligence and cognitive computing, characterized by comprising: a model description module for providing an interface through which users design and describe network models, and for specifying the operations and rules to be executed for network model objects; a parameter database for storing the parameter data of the network model in the form of a parameter database; a configuration description module for describing the configuration parameters of the current network running environment and the conditions for starting synapse and/or neuron pruning and regeneration processes; a configuration manager for retrieving the relevant configuration parameters from the configuration description module; a network builder for reading the model description module, parsing the network topology, reading data files through the data manager, and constructing network model objects in memory; a network manager for constructing, traversing, accessing and/or updating network model objects; a rule manager for reading the rules declared by the user in the model description module, interpreting those rules when the scheduler schedules computation on the network model objects, and arbitrating conflicts between rules; a data manager for reading and parsing the parameter database, converting data formats, and serializing data; a scheduler for allocating hardware resources, scheduling the computation process, and dispatching the corresponding operations for execution; an operation manager for managing running operations; a log manager for recording the logs generated while the system runs, recording the system's working state, and flagging abnormal states; a run monitoring module for receiving and responding to user input commands and managing the system's running state; and a graphical display module for reading and displaying network data.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 1, wherein the model description module comprises a network description unit, a convergence description unit, and a flow description unit; the network description unit is used to describe the network container and general parameter containers, to describe the network's parameters and operating rules, and to point to one or more convergence description units and flow description units via links; the convergence description unit is used to describe at least one of a convergence container, a module container, a layer-group container, a node container, a node parameter container, and a general parameter container, and to describe the partitioning of the network's nodes into modules and layer groups as well as the parameters, runtime rules, and commands of each network model object; the flow description unit is used to describe at least one of a flow container, a channel container, a connection container, an edge container, an edge parameter container, and a general parameter container, and to describe the connection relations of the network's edges as well as the parameters, runtime rules, and commands of each network model object.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 2, wherein the model description unit adopts a multi-level, tree-structured network description that mimics the organization of the biological brain's nervous system; the convergence description unit supports organizing the network's nodes into preset layer groups and modules, representing the multi-level organization of neurons and associated glial cells in the biological brain; the flow description unit supports grouping and hierarchically organizing the network's edges by topological similarity, representing the various ways synapses are organized in the biological brain and the organization of the processes of associated glial cells.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 2, wherein the network description unit, convergence description unit, and flow description unit all use XML and/or JSON file formats with nested syntax.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 1, wherein the parameter data comprises initialization parameter data and runtime parameter data.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 1, wherein the parameter database is a binary file or a text file, the text file using the CSV format or a format delimited by other characters.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 1, wherein the network model object comprises containers, topological relations, and/or parameter data, and the network model object is the object scheduled and run by the scheduler.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 7, wherein the container has an ID and/or a name used for indexing within the multi-level tree structure.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 7, wherein the container has one or more control blocks for storing statistical and control information.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 9, wherein the control block comprises at least one of the network's traversal order and rules, the number of traversal computations already participated in, whether the data resides in main memory, whether the data resides in coprocessor memory, and the frequency of hard-disk reads and writes, and is managed and updated by the rule manager and the scheduler.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 8 or 9, wherein the containers comprise: a network container, at the first level of the tree structure, representing models at the whole-brain and behavioral scale; a convergence container, at the second level, representing models at the brain-region scale; a module container, at the third level, representing models at the scale of neural nuclei; a layer-group container, at the fourth level, representing models at the neural-circuit scale; a node container, at the fifth level, representing models at the neuron or glial-cell scale and representing a population of neurons or glial cells; a node parameter container, at the sixth level, representing models at the molecular, receptor, neurotransmitter, or neuromodulator scale, and/or representing the parameter tensors of a population of neuron or glial-cell models; a flow container, at the second level, representing models at the scale of the nerve-fiber tracts linking brain regions; a channel container, at the third level, representing models of conduction tracts formed by the axons linking neural nuclei; a connection container, at the fourth level, representing models of neural pathways formed by axons within neural circuits; an edge container, at the fifth level, representing models at the dendrite or synapse scale, and/or representing a population of synapses or glial-cell processes; an edge parameter container, at the sixth level, representing models at the molecular, neurotransmitter or neuromodulator, or receptor scale, and representing the parameter tensors of models of a population of synapses or glial-cell processes; and/or a general parameter container, which holds parameter data in tensor form; the general parameter container is an auxiliary container, and a container at any level may additionally carry one or more general parameter containers.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 11, wherein the firing characteristics of the neuron model are constructed to include tonic firing, fast spiking, burst firing, spike firing, and/or phasic firing; the neuron model's response to upstream input signals is constructed as different neural adaptation or sensitivity curves; the neuron model's mechanism of action on downstream targets is constructed as excitatory, inhibitory, modulatory, and/or neutral; the neuron model is constructed as a spiking neuron model and/or a traditional neuron model; the glial-cell model is constructed as an astrocyte model, an oligodendrocyte model, a microglia model, a Schwann cell model, and/or a satellite cell model.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 11, wherein the neurotransmitter or neuromodulator model is constructed as excitatory, inhibitory, and/or modulatory; the receptor model is constructed as ionotropic and/or metabotropic; and the receptor model's response to neurotransmitters or neuromodulators is constructed as excitatory, inhibitory, modulatory, and/or neutral.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 11, wherein the dendrite-scale model is constructed as an apical dendrite model, a basal dendrite model, and/or a dendritic spine model; and the synapse model is constructed as excitatory, inhibitory, modulatory, and/or neutral.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 11, wherein the molecular-scale model is constructed as an intracellular molecule model, a membrane molecule model, and/or an intercellular-space molecule model.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 11, wherein the node parameter container, edge parameter container, and general parameter container hold parameters internally in tensor form.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 16, wherein the tensor has one or more dimensions, and the arrangement and use of the tensor are specified by the user.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 17, wherein the tensor is configured as four-dimensional, the position of each parameter in the tensor being given by coordinates (x, y, z, t), where the x, y, and z dimensions correspond to the spatial arrangement of the neural-tissue models represented in the parent container, and t is the time dimension, representing the buffering and delay of temporal information and used to simulate the long-duration action of neuromodulators on neurons and/or synapses; the parameters in the tensor are shared by all or some of the neural-tissue models in the parent container, simulating the large-area action of neuromodulators on all neural tissue in a target region.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 11, wherein the flow container and all of its child containers each correspond to one or more upstream containers and one or more downstream containers, and are index-accessed via the IDs or names of the upstream and downstream containers.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 19, wherein the upstream and downstream containers are containers at any level, and the two may be the same container or different containers.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 11, wherein the flow container and all of its child containers form information flow paths with their upstream and downstream containers, representing the flow and processing of information between two information sources, and multiple containers in the network form arbitrary topologies of information flow.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 21, wherein the flow and processing of information is used to implement at least one neural mechanism of the biological brain.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 22, wherein the neural mechanisms of the biological brain include at least one of: conduction of neural spikes between neurons via synapses, exchange of information between synapses, and neuron and synapse plasticity.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 21, wherein the arbitrary topology of information flow is used to implement arbitrary neural-circuit connection patterns of the brain's nervous system, including at least one of: feedback connections from a neuron back to itself, connections among neurons within the same population, arbitrary connections between neurons of different populations, and direct synapse-to-synapse connections, with unlimited cyclic computation permitted over feedback connections.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 1, wherein the model description module supports a modeling design in which a model at any level is decomposed into two parts, data and operations; the data are held by node parameter containers, edge parameter containers, and/or general parameter containers and stored by the corresponding parameter databases; the operations are executable programs that access and update the data, and run on general-purpose CPUs, ARM processors, DSPs, GPUs, and/or other processors, ensuring the system's cross-hardware-platform portability.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 1, wherein the model description module supports the user defining one or more operations so that the neurons within the same node container directly access and/or update one another's data for rapid information exchange, simulating electrical synapses in the biological brain's nervous system.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 1, wherein the model description module supports the user defining one or more operations so that the synapses within the same edge container directly access and/or update one another's data for rapid information exchange, simulating cases in the biological brain's nervous system where multiple synapses on the dendrites of the same neuron exchange information and perform logical operations, including the shunting-inhibition mechanism.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 1, wherein the system supports automatically executing synapse and/or neuron pruning and regeneration according to preset trigger conditions and execution rules; the trigger conditions are specified by the user in the configuration description module; the execution rules are specified by the user in the model description module; the execution rules act on network model objects and/or on subnetworks or specific containers; and the synapse and/or neuron pruning and regeneration processes are scheduled and executed by the scheduler, while the network is in the running state and/or while it is in the paused state.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 28, wherein the trigger conditions include one or more of the following: user command — the user inputs a command to the system via keyboard, mouse, or other means, and on receiving the command the system executes the pruning or regeneration process immediately or after a first preset time; continuous execution — when the network model or a subregion thereof meets the rules for the pruning or regeneration process, the process is executed; interval execution — the system automatically starts the pruning or regeneration process at a first preset time interval or a first preset traversal period.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 28, wherein the execution rules of the pruning process comprise synapse pruning rules and/or neuron pruning rules; the synapse pruning rules comprise any one or more of the following: if a synapse's parameters reach a first preset numerical relation with a statistic over the parameters of all synapses in a specified reference synapse set, that synapse is to be pruned; if a synapse's parameters reach a second preset numerical relation with a specified threshold, that synapse is to be pruned; if a synapse has not been triggered for more than a second preset time or a second preset traversal period, that synapse is to be pruned; if a synapse is marked for pruning, that synapse is to be pruned; the neuron pruning rules comprise any one or more of the following: if a neuron has no input synapses, that neuron is to be pruned; if a neuron has no output synapses, that neuron is to be pruned; if a neuron has neither input nor output synapses, that neuron is to be pruned; if a neuron's parameters reach a third preset numerical relation with a statistic over the parameters of all neurons in a specified reference neuron set, that neuron is to be pruned; if a neuron's parameters reach a fourth preset numerical relation with a specified threshold, that neuron is to be pruned; if a neuron has not fired for more than a third preset time or a third preset traversal period, that neuron is to be pruned; if a neuron is marked for pruning, that neuron is to be pruned.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 28, wherein the execution rules of the regeneration process comprise neuron regeneration rules and/or synapse regeneration rules; the neuron regeneration rules comprise any one or more of the following: if the number of existing neurons in a node container and the container's total capacity reach a first preset ratio or a fifth preset numerical relation, new neurons are generated amounting to a second preset ratio of the total capacity or to a first preset number; a node container generates new neurons at a first preset rate, amounting to a third preset ratio of its total capacity or to a second preset number; a node container marked for neuron regeneration generates new neurons at a second preset rate; the synapse regeneration rules comprise any one or more of the following: if the number of existing synapses in an edge container and the container's total capacity reach a fourth preset ratio or a sixth preset numerical relation, new synapses are generated amounting to a fifth preset ratio of the total capacity or to a third preset number; an edge container generates new synapses at a third preset rate; an edge container marked for synapse regeneration generates new synapses at a fourth preset rate; if a node container contains neurons without input or output synapses, new input or output synapses are generated for them in the corresponding edge containers.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 1, wherein the running states of the system include a default state, a network construction state, a network running state, and/or a network paused state.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 1, wherein, in the model description module, the user specifies one or more rules for each container, and these rules constitute a rule base; the rule manager sorts the rules in the rule base by preset priority; when multiple rules acting on one container conflict with one another, only the highest-priority rule is executed; when no rule is specified for a container, the rule manager applies a default rule; the rules in the rule base comprise traversal rules, memory usage rules, data I/O rules, and/or synapse and neuron pruning and regeneration rules; the traversal rules guide the scheduler to repeatedly traverse, or to skip traversing, all or specific containers of the network at a second preset time interval or a fourth preset traversal period, so as to concentrate computing resources on compute-intensive subnetworks and improve data utilization; the memory usage rules guide the scheduler in arranging the use of main memory and/or coprocessor memory; the data I/O rules guide the scheduler in scheduling the frequency with which data is exchanged between main memory and coprocessor memory, and between memory and the hard disk.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 1, wherein the scheduler manages one or more main memory pools and one or more device memory pools; the main memory pools manage the use of main memory; the device memory pools correspond to the individual coprocessors and manage the use of the corresponding device memory; and the upper and lower capacity limits of the main memory pools and the device memory pools are specified by the user via the configuration description module.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 1 or 34, wherein the scheduler manages one or more thread pools for dynamically assigning worker threads to multithreaded computation, so as to arrange the computational load of the main computing unit, the coprocessors, and/or the I/O devices.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 35, wherein the scheduler manages one or more node data input buffers, one or more node data output buffers, one or more edge data input buffers, and one or more edge data output buffers for buffering data read from or written to the hard disk or I/O devices, so that the scheduler can arrange hard-disk and I/O-device reads and writes according to the load of the processors, the hard disk, and/or the I/O devices, avoiding I/O blocking.
- The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 36, wherein the capacity of each buffer, the upper and lower limits on the frequency of hard-disk or I/O-device reads and writes, and the upper and lower limits on hard-disk or I/O-device read/write throughput are specified by the user via the configuration description module.
- A spiking neural network computing method for brain-like intelligence and cognitive computing, characterized in that the method uses the spiking neural network computing system for brain-like intelligence and cognitive computing according to any one of claims 1 to 37.
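As an illustration of the pruning and regeneration rules recited in claims 30 and 31, the sketch below applies a few of the listed conditions. All field names, the 0.1 factor, and the choice of a mean-based statistic are hypothetical: the claims deliberately leave the preset numerical relations, ratios, and rates open.

```python
import statistics

def synapse_prunable(syn, reference_weights, threshold, idle_limit):
    """A synapse is prunable if any claim-30-style condition holds
    (illustrative relations; the claims do not fix them)."""
    mean_w = statistics.mean(reference_weights)
    return (
        syn["weight"] < 0.1 * mean_w        # relation to a reference-set statistic
        or syn["weight"] < threshold        # relation to a fixed threshold
        or syn["idle_steps"] > idle_limit   # not triggered for a preset period
        or syn.get("marked", False)         # explicitly marked for pruning
    )

def neurons_to_regrow(existing, capacity, trigger_ratio=0.5, regrow_ratio=0.1):
    """Claim-31-style rule: when occupancy falls to a preset ratio of the
    node container's total capacity, regrow a preset share of that capacity."""
    if existing <= capacity * trigger_ratio:
        return int(capacity * regrow_ratio)
    return 0

weights = [1.0, 0.9, 1.1, 0.02]
assert synapse_prunable({"weight": 0.02, "idle_steps": 3}, weights, 0.05, 100)
assert not synapse_prunable({"weight": 1.0, "idle_steps": 3}, weights, 0.05, 100)
assert neurons_to_regrow(40, 100) == 10
assert neurons_to_regrow(90, 100) == 0
```

In the system described above, predicates like these would be evaluated by the scheduler against the node and edge containers, with the concrete thresholds and ratios supplied by the user through the configuration and model description modules.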
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022500548A JP7322273B2 (ja) | 2019-07-02 | 2020-07-01 | Spiking neural network computing system and method used for brain-type intelligence and cognitive computing |
GB2200490.7A GB2601643A (en) | 2019-07-02 | 2020-07-01 | Spiking neural network computing system and method for brain-like intelligence and cognitive computing |
KR1020227003194A KR20220027199A (ko) | 2019-07-02 | 2020-07-01 | Spiking neural network computing system and method applied to brain-mimicking intelligence and cognitive computing |
EP20834339.2A EP3996004A4 (en) | 2019-07-02 | 2020-07-01 | PULSE NEURAL NETWORK COMPUTATION SYSTEM AND METHOD FOR BRAIN-LIKE INTELLIGENCE AND COGNITIVE COMPUTING |
US17/623,753 US20220253675A1 (en) | 2019-07-02 | 2020-07-01 | Firing neural network computing system and method for brain-like intelligence and cognitive computing |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910588964.5 | 2019-07-02 | ||
CN201910588964.5A CN110322010B (zh) | 2019-07-02 | 2019-07-02 | Spiking neural network computing system and method for brain-like intelligence and cognitive computing |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021000890A1 true WO2021000890A1 (zh) | 2021-01-07 |
Family
ID=68122227
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/099714 WO2021000890A1 (zh) | 2019-07-02 | 2020-07-01 | Spiking neural network computing system and method for brain-like intelligence and cognitive computing |
Country Status (7)
Country | Link |
---|---|
US (1) | US20220253675A1 (zh) |
EP (1) | EP3996004A4 (zh) |
JP (1) | JP7322273B2 (zh) |
KR (1) | KR20220027199A (zh) |
CN (1) | CN110322010B (zh) |
GB (1) | GB2601643A (zh) |
WO (1) | WO2021000890A1 (zh) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114399033A (zh) * | 2022-03-25 | 2022-04-26 | Zhejiang University | Brain-like computing system and computing method based on neuron instruction encoding |
CN114816067A (zh) * | 2022-05-06 | 2022-07-29 | Tsinghua University | Method and apparatus for implementing brain-like computing based on a vector instruction set |
WO2022177162A1 (ko) * | 2021-02-18 | 2022-08-25 | Samsung Electronics Co., Ltd. | Processor for initializing a model file of an application, and electronic device comprising the same |
CN115879544A (zh) * | 2023-02-28 | 2023-03-31 | China Electronics Technology Nanhu Research Institute | Neuron encoding method and system for distributed brain-like simulation |
Families Citing this family (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110322010B (zh) | 2019-07-02 | 2021-06-25 | Shenzhen Yihai Yuanshi Technology Co., Ltd. | Spiking neural network computing system and method for brain-like intelligence and cognitive computing |
CN112766470B (zh) * | 2019-10-21 | 2024-05-07 | Horizon (Shanghai) Artificial Intelligence Technology Co., Ltd. | Feature data processing method, instruction sequence generation method, apparatus, and device |
CN110928833B (zh) * | 2019-11-19 | 2021-01-22 | Anhui Cambricon Information Technology Co., Ltd. | Adaptive algorithm computing apparatus and adaptive algorithm computing method |
CN111552563B (zh) * | 2020-04-20 | 2023-04-07 | Nanchang Jiayan Technology Co., Ltd. | Multithreaded data system and multithreaded message-passing method and system |
CN113688981B (zh) * | 2020-05-19 | 2024-06-18 | Shenzhen Yihai Yuanshi Technology Co., Ltd. | Brain-like neural network with memory and information abstraction functions |
CN111858989B (zh) * | 2020-06-09 | 2023-11-10 | Xi'an Polytechnic University | Image classification method using an attention-based spiking convolutional neural network |
US20210406661A1 (en) * | 2020-06-25 | 2021-12-30 | PolyN Technology Limited | Analog Hardware Realization of Neural Networks |
CN112270406B (zh) * | 2020-11-11 | 2023-05-23 | Zhejiang University | Neural information visualization method for a brain-like computer operating system |
CN112270407B (zh) * | 2020-11-11 | 2022-09-13 | Zhejiang University | Brain-like computer supporting hundreds of millions of neurons |
CN112434800B (zh) * | 2020-11-20 | 2024-02-20 | Tsinghua University | Control apparatus and brain-like computing system |
CN112651504B (zh) * | 2020-12-16 | 2023-08-25 | Sun Yat-sen University | Acceleration method based on parallelized brain-like simulation compilation |
CN112987765B (zh) * | 2021-03-05 | 2022-03-15 | Beihang University | Raptor-inspired attention-allocation method for precise autonomous takeoff and landing of UAVs/unmanned boats |
CN113222134B (zh) * | 2021-07-12 | 2021-10-26 | Shenzhen Yongda Electronic Information Co., Ltd. | Brain-like computing system and method, and computer-readable storage medium |
CN113283594B (zh) * | 2021-07-12 | 2021-11-09 | Shenzhen Yongda Electronic Information Co., Ltd. | Intrusion detection system based on brain-like computing |
CN114238707B (zh) * | 2021-11-30 | 2024-07-05 | The 15th Research Institute of China Electronics Technology Group Corporation | Data processing system based on brain-like technology |
CN114492770B (zh) * | 2022-01-28 | 2024-10-15 | Zhejiang University | Brain-like computing chip mapping method for recurrent spiking neural networks |
WO2023238186A1 (ja) * | 2022-06-06 | 2023-12-14 | SoftBank Corp. | NN growth device, information processing device, method for producing neural network information, and program |
CN117709402A (zh) * | 2022-09-02 | 2024-03-15 | Shenzhen Yihai Yuanshi Technology Co., Ltd. | Model construction method, apparatus, platform, electronic device, and storage medium |
CN117709400A (zh) * | 2022-09-02 | 2024-03-15 | Shenzhen Yihai Yuanshi Technology Co., Ltd. | Hierarchical system, computing method, computing apparatus, electronic device, and storage medium |
CN117709401A (zh) * | 2022-09-02 | 2024-03-15 | Shenzhen Yihai Yuanshi Technology Co., Ltd. | Model management apparatus and hierarchical system for neural network computation |
CN115392443B (zh) * | 2022-10-27 | 2023-03-10 | Zhejiang Lab | Method and apparatus for representing spiking neural network applications in a brain-like computer operating system |
CN116542291B (zh) * | 2023-06-27 | 2023-11-21 | Beihang University | Memory-loop-inspired spiking memory image generation method and system |
CN117251275B (zh) * | 2023-11-17 | 2024-01-30 | Beijing Kapula Technology Co., Ltd. | Scheduling method and system, device, and medium for asynchronous I/O requests from multiple applications |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120109864A1 (en) * | 2010-10-29 | 2012-05-03 | International Business Machines Corporation | Neuromorphic and synaptronic spiking neural network with synaptic weights learned using simulation |
CN108182473A (zh) * | 2017-12-12 | 2018-06-19 | Institute of Automation, Chinese Academy of Sciences | Full-scale distributed whole-brain simulation system based on brain-like spiking neural networks |
CN108985447A (zh) * | 2018-06-15 | 2018-12-11 | Huazhong University of Science and Technology | Hardware spiking neural network system |
CN109858620A (zh) * | 2018-12-29 | 2019-06-07 | Beijing Lynxi Technology Co., Ltd. | Brain-like computing system |
CN110322010A (zh) * | 2019-07-02 | 2019-10-11 | Shenzhen Yihai Yuanshi Technology Co., Ltd. | Spiking neural network computing system and method for brain-like intelligence and cognitive computing |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9460387B2 (en) * | 2011-09-21 | 2016-10-04 | Qualcomm Technologies Inc. | Apparatus and methods for implementing event-based updates in neuron networks |
EP3089080A1 (en) * | 2015-04-27 | 2016-11-02 | Universität Zürich | Networks and hierarchical routing fabrics with heterogeneous memory structures for scalable event-driven computing systems |
CN105095967B (zh) * | 2015-07-16 | 2018-02-16 | Tsinghua University | Multi-modal neuromorphic network core |
CN105913119B (zh) * | 2016-04-06 | 2018-04-17 | Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences | Row-column-interconnected heterogeneous multi-core brain-like chip and method of use |
CN109816026B (zh) * | 2019-01-29 | 2021-09-10 | Tsinghua University | Fusion apparatus and method for convolutional neural networks and spiking neural networks |
-
2019
- 2019-07-02 CN CN201910588964.5A patent/CN110322010B/zh active Active
-
2020
- 2020-07-01 US US17/623,753 patent/US20220253675A1/en active Pending
- 2020-07-01 KR KR1020227003194A patent/KR20220027199A/ko unknown
- 2020-07-01 GB GB2200490.7A patent/GB2601643A/en not_active Withdrawn
- 2020-07-01 EP EP20834339.2A patent/EP3996004A4/en active Pending
- 2020-07-01 WO PCT/CN2020/099714 patent/WO2021000890A1/zh unknown
- 2020-07-01 JP JP2022500548A patent/JP7322273B2/ja active Active
Non-Patent Citations (1)
Title |
---|
See also references of EP3996004A4 |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022177162A1 (ko) * | 2021-02-18 | 2022-08-25 | Samsung Electronics Co., Ltd. | Processor for initializing a model file of an application, and electronic device comprising the same |
CN114399033A (zh) * | 2022-03-25 | 2022-04-26 | Zhejiang University | Brain-like computing system and computing method based on neuron instruction encoding |
CN114816067A (zh) * | 2022-05-06 | 2022-07-29 | Tsinghua University | Method and apparatus for implementing brain-like computing based on a vector instruction set |
CN115879544A (zh) * | 2023-02-28 | 2023-03-31 | China Electronics Technology Nanhu Research Institute | Neuron encoding method and system for distributed brain-like simulation |
CN115879544B (zh) * | 2023-02-28 | 2023-06-16 | China Electronics Technology Nanhu Research Institute | Neuron encoding method and system for distributed brain-like simulation |
Also Published As
Publication number | Publication date |
---|---|
CN110322010B (zh) | 2021-06-25 |
CN110322010A (zh) | 2019-10-11 |
KR20220027199A (ko) | 2022-03-07 |
JP2022538694A (ja) | 2022-09-05 |
GB2601643A (en) | 2022-06-08 |
JP7322273B2 (ja) | 2023-08-07 |
EP3996004A1 (en) | 2022-05-11 |
EP3996004A4 (en) | 2022-10-19 |
US20220253675A1 (en) | 2022-08-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021000890A1 (zh) | Spiking neural network computing system and method for brain-like intelligence and cognitive computing | |
CN110215189A (zh) | Cloud-platform-based big-data intelligent health monitoring system | |
Sloman | The “semantics” of evolution: Trajectories and trade-offs in design space and niche space | |
CN114238707B (zh) | Data processing system based on brain-like technology | |
Wu et al. | micros. bt: An event-driven behavior tree framework for swarm robots | |
Zeigler | Discrete event models for cell space simulation | |
CN110262275A (zh) | Smart home system and control method therefor | |
CN113222134B (zh) | Brain-like computing system and method, and computer-readable storage medium | |
Lejamble et al. | A new software architecture for the wise object framework: Multidimensional separation of concerns | |
Ahuja et al. | A connectionist processing metaphor for diagnostic reasoning | |
JP2021533517A (ja) | Data processing module, data processing system, and data processing method | |
WO2024046459A1 (zh) | Model management apparatus and hierarchical system for neural network computation | |
Li | Efficient and Practical Cluster Scheduling for High Performance Computing | |
von der Malsburg | Ordered retinotectal projections and brain organization | |
CN102346815B (zh) | Digital biological system for simulating biological competition and evolution processes | |
Freeman | Deconstruction of neural data yields biologically implausible periodic oscillations | |
Halford | Competing, or perhaps complementary, approaches to the dynamic-binding problem, with similar capacity limitations | |
Thorpe | Temporal synchrony and the speed of visual processing | |
Strong | Phase logic is biologically relevant logic | |
CN110188017A (zh) | Big-data acquisition apparatus and method for network-room servers and network devices | |
Dawson et al. | Making a middling mousetrap | |
Barnden | Time phases, pointers, rules and embedding | |
KR0136877B1 (ko) | Architecture for a cognitive system | |
Hölldobler | On the artificial intelligence paradox | |
Garson | Must we solve the binding problem in neural hardware? |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20834339 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2022500548 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 202200490 Country of ref document: GB Kind code of ref document: A Free format text: PCT FILING DATE = 20200701 |
|
ENP | Entry into the national phase |
Ref document number: 20227003194 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2020834339 Country of ref document: EP Effective date: 20220202 |