WO2021000890A1 - Spiking neural network computing system and method for brain-like intelligence and cognitive computing (用于类脑智能与认知计算的脉冲神经网络运算系统及方法)

Info

Publication number: WO2021000890A1
Application number: PCT/CN2020/099714
Authority: WO (WIPO/PCT)
Prior art keywords: container, model, brain, neuron, intelligence
Other languages: English (en), French (fr)
Inventor: 任化龙
Original Assignee: 深圳忆海原识科技有限公司
Application filed by 深圳忆海原识科技有限公司
Priority: JP2022500548A (JP7322273B2), GB2200490.7A (GB2601643A), KR1020227003194A (KR20220027199A), EP20834339.2A (EP3996004A4), US17/623,753 (US20220253675A1)
Publication: WO2021000890A1 (zh)

Classifications

    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS > G06N 3/00 Computing arrangements based on biological models > G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology > G06N 3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N 3/04 Architecture, e.g. interconnection topology > G06N 3/048 Activation functions
    • G06N 3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons > G06N 3/063 using electronic means
    • G06N 3/08 Learning methods
    • G06N 3/08 Learning methods > G06N 3/082 modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G06N 3/08 Learning methods > G06N 3/088 Non-supervised learning, e.g. competitive learning
    • G06N 3/10 Interfaces, programming languages or software development kits, e.g. for simulating neural networks > G06N 3/105 Shells for specifying net layout

Definitions

  • This application relates to the technical field of brain-like spiking neural network simulation and high-performance computing, and in particular to a spiking neural network computing system and method for brain-like intelligence and cognitive computing.
  • Brain-like intelligence and cognitive computing are based on spiking neural networks, drawing on the rich working mechanisms of neurotransmitters, neuromodulators, receptors, electrical synapses, chemical synapses, dendrites, neurons, and glial cells in the biological brain for computational modeling.
  • The neural circuits, nerve nuclei, brain regions, and whole-brain models so constructed can simulate many cognitive mechanisms and behaviors of the biological brain, such as memory and learning, simulated emotions, navigation and planning, motion control, brain-like vision and brain-like hearing, attention, and decision-making, providing a broader route for the development of artificial intelligence systems.
  • The existing brain-like spiking neural network computing frameworks have the following problems:
  • they often ignore the modeling of electrical synapses, the working mechanisms that support simulating neuromodulation, the simulation of dendrites, and the mechanisms by which multiple synapses exchange information and perform logical operations among themselves, and they cannot support topologies in which synapses connect directly to other synapses.
  • One of the objectives of the embodiments of this application is to provide a spiking neural network computing system and method for brain-like intelligence and cognitive computing.
  • The system provides a unified and flexible modeling method and a multi-level tree-structured network description method that supports full-scale modeling of the biological brain and nervous system as well as flexible network topology, thereby organically unifying modeling scale and modeling richness, fusing the models at all scales into one unified neural network for operation, and supporting the representation and storage of data in the form of tensors.
  • This also enables the system to support spiking neural networks as well as traditional neural networks (deep learning) and other algorithms that use tensors as their main data representation.
  • A spiking neural network computing system for brain-like intelligence and cognitive computing includes: a model description module, parameter database, configuration description module, configuration manager, rule manager, data manager, network builder, network manager, operation manager, scheduler, log manager, operation monitoring module, and graphical display module;
  • the model description module is used to provide an interface for users to design and describe the network model
  • the parameter database is used to store various parameter data of the network, including initialization parameters and runtime parameters;
  • the parameter database can be a binary file or a text file;
  • the text file can be in a CSV file format or a file format in which data is separated by other characters;
  • the configuration description module is used to describe the configuration parameters of the current network operating environment and the conditions for initiating the pruning and regeneration of synapses and neurons;
  • the configuration manager is used to read the above configuration description module to obtain system configuration parameters
  • the network model object is constructed by the network builder and resides in the memory, and is used to characterize the entire network, including all containers, topological relationships and parameter data, and is the object scheduled to run by the scheduler;
  • the rule manager is used to read the rules declared by the user in the model description module, and interpret these rules and arbitrate conflicts between the rules when the scheduler schedules the operation of the network model object;
  • the data manager includes one or more decoders and encoders, which are used to read and parse the parameter database, convert data formats, and serialize data; the user can add custom decoders and encoders to the data manager to read and write files in custom formats;
  • the network builder is used to read the model description module, analyze the topology of the network, read the data file through the data manager, and build the network model object in the memory;
  • the network manager is used to construct, traverse, access and update network model objects
  • the operation manager is used to manage all the operations that can run on the system; all operations constitute the operation library; the user can specify the operations to be performed for each container in the model description module, and the scheduler schedules and executes the corresponding operations at runtime;
  • the scheduler is used for allocating hardware resources and scheduling calculation processes to optimize calculation efficiency
  • the log manager is used to record logs generated when the system is running, and remind users of the working status and abnormalities of the system, so as to facilitate debugging and maintenance;
  • the operation monitoring module is used to receive and respond to user input and manage the operation status of the entire system, including default status, network construction status, network operation status and network suspension status;
  • the graphical display module is used to read network data and display it to the user to facilitate development, monitoring and debugging.
  • the model description module includes a network description unit, a convergence description unit, and a circulation description unit, which together describe the various components and topological structure of the entire network; they are preferably text files using nested syntax, in XML or JSON file format.
  • the model description module adopts a network description mode of a multi-level tree structure simulating a biological brain nervous system organization mode
  • the convergence description unit supports organizing the nodes in the network into preset layer groups and modules, and is used to characterize the multi-level organization of neurons and related glial cells in the biological brain (e.g., nucleus -> brain area -> whole brain);
  • the circulation description unit supports grouping and hierarchically arranging the edges in the network according to topological (connection) similarity, and is used to characterize the various ways neural synapses are organized in the biological brain (such as dendrites, projections in neural pathways, nerve fiber bundles, etc.) and the organization of related glial cell protrusions.
  • the network description unit is used to describe containers such as Network and Param, describe the parameters and operating rules of the entire network, and point to one or more aggregation description units and circulation description units through links;
  • the convergence description unit is used to describe containers such as Confluence, Module, Layer, Node, NodeParam, and Param, and is used to describe the relationship between the modules and layer groups of nodes in the network, the parameters of each container, and runtime rules and commands;
  • the circulation description unit is used to describe containers such as Flow, Channel, Link, Edge, EdgeParam, and Param, and is used to describe the connection (topology) relationship of the edges in the network, the parameters of each container, and runtime rules and commands.
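  • For illustration only, the following is a minimal sketch of what a nested JSON model description of this kind might look like, written as a Python dict; all container names, field names, and values here are hypothetical, since this excerpt does not specify the actual schema:

```python
import json

# Hypothetical sketch of a nested model description mirroring the
# Network -> Confluence -> Module -> Layer -> Node -> NodeParam and
# Network -> Flow -> Channel -> Link -> Edge -> EdgeParam hierarchies.
model = {
    "Network": {
        "id": 1, "name": "demo_net",
        "Confluence": [{                       # convergence description unit
            "id": 1, "name": "cortex",
            "Module": [{
                "id": 1, "name": "nucleus_a",
                "Layer": [{
                    "id": 1, "name": "layer_1",
                    "Node": [{"id": 1, "name": "exc_neurons",
                              "NodeParam": [{"name": "v_membrane"},
                                            {"name": "threshold"}]}],
                }],
            }],
        }],
        "Flow": [{                             # circulation description unit
            "id": 1, "name": "projection",
            "Channel": [{
                "id": 1,
                "Link": [{
                    "id": 1,
                    "Edge": [{"id": 1, "upstream": "exc_neurons",
                              "downstream": "exc_neurons",
                              "EdgeParam": [{"name": "weight"}]}],
                }],
            }],
        }],
    }
}
print(json.dumps(model, indent=2))   # serialize to the JSON file format
```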
  • the Network represents a network container, which is located at the first level (top level) of the tree structure and is used to characterize models of the whole brain and behavioral scales.
  • Each Network can accommodate one or more Confluence and Flow;
  • the Confluence represents the convergence container, which is located at the second level of the tree structure and can be used to characterize the model of the brain area.
  • Each Confluence can contain one or more Modules;
  • the Module represents a module container, which is located at the third level of the tree structure, and can be used to characterize the model of the nucleus scale.
  • Each Module can contain one or more Layers;
  • the Layer represents a layer group container, which is located at the fourth level of the tree structure and can be used to represent a model of the neural loop scale, and each layer can contain one or more nodes;
  • the Node represents a node container, which is located at the fifth level of the tree structure. It can be used to characterize a neuron-scale or glial cell-scale model, and can also be used to characterize a group of neurons or glial cells. Each Node can contain one or more NodeParam;
  • the Node can also be used to characterize input and output nodes, and to interface with the system's I/O devices, such as camera input, audio input, sensor input, control output, etc.; the I/O device data read and written is dynamically updated through each NodeParam of the Node;
  • the NodeParam represents the node parameter container, which is located at the sixth level (the lowest level) of the tree structure; it can be used to characterize models of molecular scale, receptor scale, or neurotransmitter/neuromodulator scale, and can also be used to hold the parameter tensors of a group of neuron or glial cell models;
  • the Flow represents a circulation container, which is located at the second level of the tree structure and can be used to characterize the model of the scale of the nerve fiber bundles connecting the brain compartments.
  • Each Flow can contain one or more Channels;
  • the Channel represents a channel container, which is located at the third level of the tree structure and can be used to characterize the model of the conduction bundle composed of axons connecting nerve nuclei.
  • Each Channel can contain one or more Links;
  • the Link represents a connection container, which is located at the fourth level of the tree structure and can be used to characterize the model of the neural pathway composed of axons in the neural circuit, and each Link can contain one or more Edges;
  • the Edge represents the edge container, which is located at the fifth level of the tree structure; it can be used to characterize a model of dendritic scale or synaptic scale, and can also be used to characterize a group of synapses or glial cell protrusions; each Edge can contain one or more EdgeParam;
  • the EdgeParam represents the edge parameter container, which is located at the sixth level (the lowest level) of the tree structure; it can be used to characterize models of molecular scale, neurotransmitter/neuromodulator scale, and receptor scale, and can also be used to hold the parameter tensors of a group of synapse or glial cell protrusion models;
  • the Param represents a general parameter container and is an auxiliary container; according to the needs of modeling, each of the containers at the levels above can additionally have one or more Param, used to hold parameter data in the form of tensors, or may have no Param at all;
  • Each of the above containers has a number and name, which are used for indexing in a multi-level tree structure
  • Each of the above containers has one or more control blocks (Control Block) for storing statistics and control information, including the traversal order and rules of the network, the number of traversal operations already performed, whether the data currently resides in main memory or coprocessor memory, the read/write frequency of the hard disk, etc.; these are managed and updated by the rule manager and the scheduler.
  • the firing characteristics of the neuron model can be constructed as tonic firing, fast spiking, burst firing, peak firing, or phasic firing, etc.;
  • the response of the neuron model to upstream input signals can be constructed as different neural adaptation or sensitivity curves;
  • the neuron model's mechanism of action on downstream targets can be constructed as excitatory, inhibitory, modulatory, or neutral;
  • the neuron model can be constructed as a spiking neuron model or as a traditional neuron model;
  • the glial cell model can be constructed as an astrocyte, oligodendrocyte, microglia, Schwann cell, or satellite cell model;
  • the neurotransmitter or neuromodulator model can be constructed as excitatory, inhibitory, or modulatory;
  • the receptor model can be constructed as ionotropic or metabotropic;
  • the response of the receptor model to a neurotransmitter or neuromodulator can be constructed as excitatory, inhibitory, modulatory, or neutral;
  • the dendritic-scale model can be constructed as an apical dendrite model, a basal dendrite model, or a dendritic spine model;
  • the synapse model can be constructed as excitatory, inhibitory, modulatory, or neutral;
  • the molecular-scale model can be constructed as an intracellular molecular model, a cell-membrane molecular model, or an intercellular molecular model.
  • the NodeParam, EdgeParam, and Param can accommodate parameters in the form of tensors (i.e., multi-dimensional matrices);
  • the tensor can be one-dimensional or multi-dimensional, with the specific arrangement and usage specified by the user;
  • the tensor can be configured as 4-dimensional, in which case the position of each parameter is represented by coordinates (x, y, z, t): the x, y, and z dimensions correspond to the spatial arrangement of each neural tissue model (such as a neuron or synapse) represented in the parent container, and t is the time dimension, which can characterize the caching and delay of timing information and can be used to simulate the delayed, long-lasting action of neuromodulators on neurons and synapses;
  • the parameters in the tensor can be shared by all or some of the neural tissue models (such as neurons or synapses) in the parent container, which can be used to simulate the large-area effect of neuromodulators on all neural tissues in a target area.
  • the Flow and all its child containers may correspond to one or more upstream containers and one or more downstream containers, which are indexed and accessed by the numbers or names of those upstream and downstream containers;
  • both the upstream container and the downstream container can be containers of any level, and the two can be the same container or different containers;
  • the Flow and all its child containers can thus form an information flow path with their upstream and downstream containers, characterizing the (one-way or two-way) flow and processing of information between two information sources (such as the upstream and downstream containers); any topological structure of information flow can be formed among multiple containers in the network;
  • the information flow and processing can be used to realize various biological brain mechanisms such as nerve impulse conduction between neurons through synapses, information exchange between synapses, and neuron and synapse plasticity;
  • the arbitrary topology of information flow can be used to realize any neural circuit connection pattern in the brain and nervous system, including feedback connections in which a neuron connects back to itself, interconnections among neurons of the same group (layer), arbitrary connections between neurons of different groups (layers) (sequential/feedforward, cross-layer, feedback, etc.), and direct connections between synapses, and it allows unrestricted loop computation over feedback connections.
  • the system supports a modeling design method that decomposes a model of any level (or scale) into two parts: data and operations;
  • the data can be held by NodeParam, EdgeParam, or Param, and stored in the corresponding parameter database;
  • the operations are executable programs (such as functions, or classes containing functions) used to access and update the aforementioned data;
  • the operations can run on general-purpose CPU, ARM, DSP, GPU, or other processors, ensuring that the system has a degree of cross-hardware-platform versatility;
  • the system supports user-defined operations that let the neurons in the same Node directly access and update each other's data (without going through an Edge) to realize rapid information exchange, which can be used to simulate the electrical synapses of the biological brain and nervous system;
  • the system likewise supports user-defined operations that let the synapses in the same Edge directly access and update each other's data to realize rapid information exchange, which can be used to simulate direct information exchange between synapses in the biological brain and nervous system.
  • the system supports automatically executing the pruning and regeneration of synapses and neurons according to preset trigger conditions and execution rules;
  • the trigger conditions can be specified by the user in the configuration description module;
  • the execution rules can be specified by the user in the model description module;
  • the execution rules can act on the entire network model object, or on sub-networks or specific containers;
  • the pruning and regeneration of synapses and neurons is scheduled and executed by the scheduler, and can be executed while the network is running or while it is paused.
  • the trigger conditions include one or any of the following:
  • user command: the user inputs a command to the system via keyboard, mouse, or other means, and the system executes the pruning or regeneration process immediately upon receiving the command or after a first preset time;
  • interval execution: the system automatically initiates the pruning or regeneration process according to a first preset time interval or a first preset traversal period.
  • the execution rules of the pruning process are divided into synapse pruning rules and neuron pruning rules;
  • the synapse pruning rules include one or any of the following:
  • if a statistic over a synapse's parameter and all synapse parameters in a specified reference synapse set reaches a first preset value relationship (e.g., the weight of the synapse is less than 1% of the average weight of all synapses in a specified edge), the synapse is a synapse to be pruned;
  • if a synapse's parameter and a specified threshold reach a second preset value relationship (e.g., the weight of the synapse is less than 10.0), the synapse is a synapse to be pruned;
  • if a synapse is marked for pruning by another computation process, the synapse is a synapse to be pruned; synapses to be pruned can then be pruned;
  • the neuron pruning rules include one or any of the following:
  • if a neuron has no input or output synapses, the neuron is a neuron to be pruned;
  • if a statistic over a neuron's parameter and all neuron parameters in a specified reference neuron set reaches a third preset value relationship (e.g., the threshold of the neuron is greater than the maximum threshold of all neurons in a specified node), the neuron is a neuron to be pruned;
  • if a neuron's parameter and a specified threshold reach a fourth preset value relationship (e.g., the threshold of the neuron is greater than 1000.0), the neuron is a neuron to be pruned;
  • if a neuron is marked for pruning by another computation process, the neuron is a neuron to be pruned; neurons to be pruned can then be pruned.
  • the execution rules of the regeneration process are divided into neuron regeneration rules and synapse regeneration rules;
  • the neuron regeneration rules include one or more of the following:
  • if the relationship between the number of existing neurons in a node container and the container's total capacity reaches a first preset ratio or a fifth preset value, new neurons are generated at a second preset ratio or a first preset number of the total capacity; the first preset ratio and the second preset ratio may be the same or different;
  • a node container generates new neurons at a first preset rate (that is, at a preset ratio or number of its total capacity per preset time interval or preset traversal period);
  • a node container marked by another computation process as needing new neurons generates new neurons at a second preset rate (that is, at a preset ratio or number of its total capacity per preset time interval or preset traversal period);
  • the synapse regeneration rules include one or any of the following:
  • if the relationship between the number of existing synapses in an edge container and the container's total capacity reaches a fourth preset ratio or a sixth preset value, new synapses are generated at a fifth preset ratio or a third preset number of the total capacity; the fourth preset ratio and the fifth preset ratio may be the same or different;
  • an edge container generates new synapses at a third preset rate (that is, at a preset ratio or number of its total capacity per preset time interval or preset traversal period);
  • an edge container marked by another computation process as needing new synapses generates new synapses at a fourth preset rate (that is, at a preset ratio or number of its total capacity per preset time interval or preset traversal period).
  • the user can specify one or more rules for each container, and all the rules constitute a rule base.
  • the rule manager sorts the rules in the rule base by preset priority; when multiple rules for a container conflict with each other, only the rule with the highest priority is executed; when no rule is specified for a container, the rule manager executes the default rule;
  • the rules in the rule base include: traversal rules, memory usage rules, data I/O rules, and synapse and neuron pruning and regeneration rules;
  • the traversal rules can be used to instruct the scheduler to repeatedly traverse, or skip traversing, all or specific containers of the network according to a second preset time interval or a fourth preset traversal period, so as to concentrate computing resources on the sub-networks requiring intensive computation and improve data utilization efficiency;
  • the memory usage rules can be used to guide the scheduler to reasonably arrange the usage of main memory and coprocessor memory
  • the data I/O rules can be used to guide the scheduler to schedule the frequency of data exchange between the main memory and the coprocessor memory, and between the memory and the hard disk, so as to save I/O resources and improve overall computing efficiency.
  • the scheduler manages one or more main memory pools and one or more device memory pools, to reasonably allocate the usage of network model objects in main memory and each device memory;
  • the main memory pool is used to manage the use of main memory;
  • each device memory pool corresponds to a coprocessor (which may be ARM, GPU, DSP, ASIC, etc.) and is used to manage the use of the corresponding device memory;
  • the upper and lower limits of the capacity of the main memory pool and the device memory pool are specified by the user through the configuration description module.
  • the scheduler manages one or more thread pools for dynamically arranging sub-threads to participate in multi-threaded operations, so as to rationally balance the computing load of the main computing unit (which can be a CPU, ARM, etc.), the coprocessors (which can be ARM, GPU, DSP, etc.), and the I/O devices (hard disk, camera, audio input, control output, etc.).
  • the scheduler manages one or more node data input buffers, one or more node data output buffers, one or more edge data input buffers, and one or more edge data output buffers, used to cache data that needs to be read from or written to hard disks or I/O devices, so that the scheduler can schedule hard disk and I/O device reads and writes in a timely manner according to the load of the processors, hard disks, and I/O devices, and thus avoid I/O blocking;
  • the capacity of each buffer, the upper and lower limits of the frequency of reading and writing hard disks or I/O devices, and the upper and lower limits of the read/write throughput are specified by the user through the configuration description module.
  • This application also provides a spiking neural network computing method for brain-like intelligence and cognitive computing, which uses the above spiking neural network computing system for brain-like intelligence and cognitive computing.
  • In summary, the present application discloses a spiking neural network computing system and method for brain-like intelligence and cognitive computing, which provides a unified and flexible modeling method.
  • The multi-level tree-structured network description method it provides supports full-scale modeling of the biological brain and nervous system and flexible network topology, thereby organically unifying modeling scale and modeling richness, fusing the models at all scales into one unified neural network for operation, and supporting the representation and storage of data in the form of tensors; this also enables the system to support spiking neural networks while remaining compatible with traditional neural networks (deep learning) and other algorithms that use tensors as their main data representation.
  • The system also provides the function of automatically performing synapse and neuron pruning and regeneration according to certain conditions and rules, relieving neural network developers of the burden of implementing these functions themselves.
  • The above modeling design method decomposes a model of any level (or scale) into two parts: data and operations.
  • Data can be held by NodeParam, EdgeParam, or Param, and stored in the corresponding parameter database.
  • Operations are executable programs (such as functions, or classes containing functions) that access and update the aforementioned data.
  • FIG. 1 is a schematic diagram of the overall architecture of a spiking neural network computing system for brain-like intelligence and cognitive computing provided by this application;
  • FIG. 2 is a schematic diagram of a network hierarchy of a spiking neural network computing system for brain-like intelligence and cognitive computing in an embodiment of the application;
  • FIG. 3 is a schematic diagram of a system operation process flow of a spiking neural network computing system for brain-like intelligence and cognitive computing in an embodiment of the application.
  • An embodiment of the present application discloses a spiking neural network computing system for brain-like intelligence and cognitive computing.
  • The system includes: a configuration description module 1, a parameter database 2, a model description module 3, a configuration manager 6, an operation manager 7, a data manager 8, a network builder 9, a network manager 10, a scheduler 11, and a rule manager 12, among other components.
  • The aforementioned model description module 3 includes a network description unit, a convergence description unit, and a circulation description unit; together they describe the various components and topology of the entire network, preferably as text files using nested syntax in XML or JSON format.
  • the network description unit can be used to describe containers such as Network and Param, can be used to describe the parameters and operating rules of the entire network, and point to one or more convergence description units and circulation description units through links.
  • the convergence description unit can be used to describe containers such as Confluence, Module, Layer, Node, NodeParam, and Param, and can be used to describe the relationship between the modules and layer groups of nodes in the network, the parameters of each container, and runtime rules and commands.
  • the circulation description unit can be used to describe containers such as Flow, Channel, Link, Edge, EdgeParam, and Param, and can be used to describe the connection (topology) relationship of the edges in the network, the parameters of each container, and runtime rules and commands.
  • the network description mode supported in the model description module 3 is preferably represented by a multi-level tree structure to imitate the organization of the biological brain nervous system:
  • the above convergence description unit supports organizing the nodes in the network into preset layer groups and modules, which can characterize the multi-level organization of neurons and related glial cells in the biological brain (e.g., nucleus -> brain area -> whole brain);
  • the above-mentioned circulation description unit supports the grouping and hierarchical arrangement of edges in the network according to the similarity of topology (connection relationship), which can characterize the various organization methods of nerve synapses in the biological brain (such as dendrites, projections in neural pathways, Nerve fiber bundles, etc.) and the organization of the protrusions of related glial cells. This makes the development, debugging, management and scheduling of large-scale brain-like neural networks more intuitive and convenient.
  • Network represents a network container, located at the first level (top level) of the tree structure, and can be used to characterize models of the whole brain and behavioral scales.
  • Each Network can accommodate one or more Confluence and Flow.
  • Confluence stands for Confluence Container, which is located at the second level of the tree structure and can be used to characterize models of brain scales. Each Confluence can contain one or more Modules.
  • Module represents a module container, which is located at the third level of the tree structure and can be used to characterize the model of the nerve nucleus scale.
  • Each Module can contain one or more Layers.
  • Layer represents the layer group container, located at the fourth level of the tree structure, and can be used to represent the model of the neural loop scale.
  • Each Layer can contain one or more Nodes.
  • Node stands for node container, which is located at the fifth level of the tree structure. It can be used to characterize models of neuron scale or glial cell scale, and can also be used to characterize a group of neurons or glial cells.
  • The firing characteristics of the neuron model can be constructed as tonic firing, fast spiking, burst firing, peak firing, or phasic firing, etc.
  • Its response to upstream input signals can be constructed as different neural adaptation or sensitivity curves.
  • Its mechanism of action on downstream targets can be constructed as excitatory, inhibitory, modulatory, or neutral.
  • Glial cell models can be constructed as astrocyte, oligodendrocyte, microglia, Schwann cell, and satellite cell models. Each Node can accommodate one or more NodeParams.
  • When a Node is used to characterize a group of neurons of the same type, the number of NodeParams it accommodates is determined by the number of parameter types of the neuron model; each parameter type corresponds to one NodeParam, and the parameters of that type for all neurons in the Node are arranged and saved as a tensor.
  • Node can also be used to characterize input and output nodes, and to interface with the system's I/O devices, such as camera input, audio input, sensor input, control output, etc.
  • The I/O device data read and written is dynamically updated through each NodeParam of the Node.
  • NodeParam represents the node parameter container, located at the sixth level (lowest level) of the tree structure. It can be used to characterize models of molecular scale, receptor scale, or neurotransmitter/neuromodulator scale, and can also be used to hold the parameter tensors of a group of neuron or glial cell models.
  • The neurotransmitter or neuromodulator model can be constructed as excitatory, inhibitory, or modulatory.
  • The receptor model can be constructed as ionotropic or metabotropic, and its response to a neurotransmitter or neuromodulator can be constructed as excitatory, inhibitory, modulatory, or neutral.
  • Flow stands for the circulation container, located at the second level of the tree structure, and can be used to characterize the model of the scale of the nerve fiber bundles connecting the brain areas.
  • Each Flow can contain one or more Channels.
  • Channel represents the channel container, located at the third level of the tree structure, and can be used to characterize the model of the conduction bundle formed by the axons connecting the nerve nuclei.
  • Each Channel can contain one or more Links.
  • Link represents the connection container, located at the fourth level of the tree structure, and can be used to represent the model of the neural pathway composed of axons in the neural circuit.
  • Each Link can accommodate one or more Edge.
  • Edge stands for edge container, located at the fifth level of the tree structure. It can be used to characterize models of dendritic scale or synaptic scale, and can also be used to characterize a group of synapses or glial cell protrusions.
  • The dendritic-scale model can be constructed as an apical dendrite model, a basal dendrite model, or a dendritic spine model.
  • Synapse models can be constructed as excitatory, inhibitory, modulatory, or neutral models.
  • Each Edge can accommodate one or more EdgeParam. When an Edge is used to characterize a group of synapses of the same type, the number of EdgeParams it contains is determined by the number of parameter types of the synapse model; each parameter type corresponds to one EdgeParam, and the parameters of that type for all synapses in the Edge are arranged and saved as a tensor.
  • EdgeParam represents the edge parameter container, located at the sixth level (lowest level) of the tree structure. It can be used to characterize models of molecular scale, neurotransmitter/neuromodulator scale, and receptor scale, and can also be used to hold the parameter tensors of a group of synapse or glial cell protrusion models.
  • Molecular scale models can be constructed as intracellular molecular models, molecular models in cell membranes, and intercellular molecular models.
  • the neurotransmitter or neuromodulator model can be constructed as an excitatory, inhibitory or modulated model.
  • The receptor model can be constructed as ionotropic or metabotropic, and its response to a neurotransmitter or neuromodulator can be constructed as excitatory, inhibitory, modulatory, or neutral.
  • Param represents a general parameter container and is an auxiliary container. According to the needs of modeling, each of the containers at the levels above may additionally have one or more Param, used to hold parameter data in the form of tensors, or may have no Param at all.
  • NodeParam, EdgeParam, and Param can accommodate parameters in the form of tensors (i.e., multi-dimensional matrices).
  • The tensor can be one-dimensional or multi-dimensional, and its specific arrangement and usage are specified by the user.
  • A tensor can be 4-dimensional, in which case the position of each parameter is represented by coordinates (x, y, z, t): the x, y, and z dimensions correspond to the spatial arrangement of each neural tissue model (such as a neuron or synapse) represented in the parent container, and t is the time dimension, which can characterize the caching and delay of timing information and can be used to simulate the delayed, long-lasting action of neuromodulators on neurons and synapses, as sketched below.
  • The parameters in the tensor can be shared by all or some of the neural tissue models (such as neurons or synapses) in the parent container, which can be used to simulate the large-area effect of neuromodulators on all neural tissues in a target area.
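  • The following minimal sketch illustrates the (x, y, z, t) layout described above; the array names, shapes, and constants are invented for the example and are not taken from the application:

```python
import numpy as np

# Hypothetical 4-D parameter tensor for a NodeParam: a 4x4x2 spatial grid of
# neurons with an 8-step time dimension used as a circular delay buffer.
X, Y, Z, T = 4, 4, 2, 8
membrane_v = np.zeros((X, Y, Z, T), dtype=np.float32)

t_now = 3                               # current slot in the time dimension
membrane_v[1, 2, 0, t_now] = -55.0      # parameter of the neuron at (1, 2, 0)

# Reading a delayed slot simulates the delayed, long-lasting action of a
# neuromodulator (here a delay of 5 steps, wrapping around the buffer).
delay = 5
v_delayed = membrane_v[1, 2, 0, (t_now - delay) % T]

# A single shared scalar can stand in for a parameter shared by all neural
# tissue models in the parent container, e.g., a neuromodulator level acting
# on the whole target area at once.
dopamine_level = np.float32(0.2)
modulated = membrane_v[..., t_now] * (1.0 + dopamine_level)
```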
  • Each of the above containers has a number and name, which are used for indexing in a multi-level tree structure.
  • Each container has one or more control blocks (Control Block) used to store statistics and control information, such as the traversal order and rules of the network, the number of traversal operations already performed, whether the data currently resides in main memory or coprocessor memory, and the frequency of hard disk reads and writes; these are managed and updated by the rule manager and the scheduler.
  • Flow and all its child containers can correspond to one or more upstream containers and one or more downstream containers, which are indexed and accessed by the numbers or names of those upstream and downstream containers.
  • Both the upstream container and the downstream container can be containers of any level, and they can be the same or different containers. Therefore, Flow and all its child containers can form an information flow path with their upstream and downstream containers, characterizing the (one-way or two-way) flow and processing of information between two information sources (such as the upstream and downstream containers). Any topological structure of information flow can be formed among multiple containers in the network.
  • The above information flow and processing can be used to realize a variety of biological brain mechanisms such as nerve impulse conduction between neurons through synapses, information exchange between synapses, and neuron and synapse plasticity.
  • The above arbitrary topology of information flow can be used to realize any neural circuit connection in the brain and nervous system, including feedback connections in which a neuron connects back to itself, interconnections among neurons of the same group (layer), arbitrary connections between neurons of different groups (layers) (sequential/feedforward, cross-layer, feedback, etc.), and direct connections between synapses, and it allows unrestricted loop computation over feedback connections.
  • When the upstream and downstream containers of an Edge are two different Nodes, the topological relationship between them can be used to realize forward/feedforward connections between neurons of different groups (layers) through synapses;
  • when the upstream and downstream containers of an Edge are the same Node, the topological relationship between them (for example, Node 1 -> Edge -> Node 1) can be used to realize interconnection of neurons within the same group (layer) through synapses, and also feedback connections in which a neuron connects back to itself through an autapse;
  • when the Nodes involved belong to different Layers, the topological relationship can be used to realize neural circuits formed by feedforward, cross-layer, and feedback connections between neurons of different layers;
  • more generally, the topological relationship can be used to realize feedback loops composed of one or more (or one or more groups of) different neurons.
  • The synapses in an Edge can access the neurons in the upstream and downstream containers to obtain their firing timing information, combine it with their own parameters (such as weights) to perform operations, and propagate the results to the neurons in the upstream and downstream containers; this realizes the conduction of nerve impulses between neurons through synapses, as well as long- and short-term plasticity mechanisms such as Hebbian, anti-Hebbian, and STDP; the neurons in a Node can undergo functional change or shaping (a type of neuronal plasticity) according to the information received (conveyed by neurotransmitters or neuromodulators).
  • When an Edge is used to characterize one or more synapses, and at least one of its corresponding upstream or downstream containers is also an Edge characterizing one or more synapses, the topological relationship between them (for example, Edge 1 -> Edge 2 -> Edge 3) can be used to realize direct connection and direct information exchange between synapses, as sketched below.
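  • A minimal sketch of the upstream/downstream wiring just described; the Container class and all names here are illustrative stand-ins, not the application's actual classes:

```python
# Hypothetical sketch: any container can name upstream and downstream
# containers of any level, so feedforward, feedback, autapse-style, and
# Edge-to-Edge topologies can all be expressed the same way.
class Container:
    def __init__(self, name):
        self.name = name
        self.upstream = []      # references to upstream containers
        self.downstream = []    # references to downstream containers

    def connect(self, downstream):
        self.downstream.append(downstream)
        downstream.upstream.append(self)

node1 = Container("Node1")
node2 = Container("Node2")
edge_ff = Container("Edge_ff")   # feedforward: Node1 -> Edge_ff -> Node2
edge_fb = Container("Edge_fb")   # feedback/autapse: Node1 -> Edge_fb -> Node1
edge2 = Container("Edge2")       # Edge -> Edge: direct synapse-to-synapse link

node1.connect(edge_ff); edge_ff.connect(node2)
node1.connect(edge_fb); edge_fb.connect(node1)   # loops back to itself
edge_ff.connect(edge2); edge2.connect(edge_fb)   # synapses exchanging information
```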
  • the above parameter database is used to store various parameter data of the network (including initialization parameters and runtime parameters).
  • the parameter database can be selected as a binary file or a text file.
  • the text file can be in CSV file format or a file format in which data is separated by other characters.
  • Each container can have one or more corresponding parameter databases.
  • the parameters contained in NodeParam, EdgeParam, or Param can be stored in one or more parameter databases, or multiple NodeParam, EdgeParam, or Param can share one or more parameter databases to store the same parameters.
  • the user can place the parameter database of each container in the network in the corresponding subfolder in the model file path.
  • the modeling design method supported by this system can be to decompose any level (or scale) model into two parts: data and operation.
  • data can be accommodated by NodeParam, EdgeParam or Param, and stored by the corresponding parameter database.
  • Operations are executable programs (such as functions and classes containing functions) that can access and update the aforementioned data.
  • For example, neuron modeling can use a traditional neuron model, designing its ReLU activation function as the operation and its threshold parameter as the data; neuron modeling can also use a spiking neuron model such as the leaky integrate-and-fire (LIF) model, whose update function is designed as the operation and whose parameters are designed as the data, as sketched below.
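  • Following that data/operation decomposition, here is a minimal Python sketch of a leaky integrate-and-fire node; the parameter names, constants, and array shapes are invented for illustration and are not the application's actual code:

```python
import numpy as np

# "Data": parameter tensors that would live in NodeParam containers.
n = 100
v    = np.zeros(n, dtype=np.float32)       # membrane potentials
v_th = np.full(n, 1.0, dtype=np.float32)   # firing thresholds
leak = np.float32(0.9)                     # leak factor per step

# "Operation": an executable function that accesses and updates the data.
def lif_step(v, v_th, leak, i_syn):
    v[:] = leak * v + i_syn                # leaky integration of input current
    spikes = v >= v_th                     # threshold crossing
    v[spikes] = 0.0                        # reset the neurons that fired
    return spikes

spikes = lif_step(v, v_th, leak,
                  i_syn=np.random.rand(n).astype(np.float32))
```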
  • When modeling, the user can define one or more operations that let the neurons in the same Node (without going through an Edge) directly access and update each other's data to realize rapid information exchange, which can be used to simulate the electrical synapses of the biological brain and nervous system.
  • Likewise, the user can define one or more operations that let the synapses in the same Edge directly access and update each other's data to realize rapid information exchange, which can be used to simulate direct information exchange between synapses in the biological brain and nervous system.
  • the system provides a flexible and unified modeling method
  • Since the multi-level tree-structured network description method supports full-scale modeling of the biological brain and nervous system and flexible network topology, it organically unifies modeling scale and modeling richness, and fuses the models at all scales into one unified neural network for operation.
  • Data is represented and stored in the form of tensors, which also enables the system to support spiking neural networks while remaining compatible with traditional neural networks (deep learning) and other algorithms that use tensors as their main data representation.
  • the above-mentioned operation manager 7 is used to manage all operations that can be run on the system.
  • Operations can be programs (including code segments, functions, and classes) that can run on general-purpose CPU, ARM, DSP, GPU, or other processors. All operations constitute the operation library.
  • the operation manager 7 provides a programming interface for querying and recalling specified operations based on the operation number or name. The user can specify the operations to be performed for each container in the model description module, and the scheduler will schedule the corresponding operations during runtime. This ensures that the system has a certain cross-hardware platform versatility and can run on hardware platforms such as general-purpose CPU, GPU, ARM, DSP, etc.
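  • A minimal sketch of what such an operation library with lookup by number or name might look like; the class and method names here are hypothetical, not the application's actual interface:

```python
# Hypothetical operation manager: registers executable operations into an
# operation library and recalls them by number or by name at schedule time.
class OperationManager:
    def __init__(self):
        self._by_id, self._by_name = {}, {}

    def register(self, op_id, name, fn):
        self._by_id[op_id] = fn
        self._by_name[name] = fn

    def query(self, key):
        # key may be a numeric id or a name, mirroring the described interface
        return self._by_id.get(key) or self._by_name.get(key)

ops = OperationManager()
ops.register(1, "lif_step", lambda v: v * 0.9)   # toy operation body
update = ops.query("lif_step")                   # recall by name
assert update is ops.query(1)                    # or by number
```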
  • The above configuration description module 1 is used to describe the configuration parameters of the current network operating environment, such as the size of the memory pools available to the system, the operation mode (single, multiple, or continuous operation), the upper and lower limits of hard disk read/write frequency, and the conditions for initiating the synapse and neuron pruning and regeneration processes.
  • the configuration manager 6 is used to read the configuration description module 1 to obtain system configuration parameters, and provide a programming interface for other components to call.
  • the above-mentioned network model object is constructed by the network builder 9 and resides in the memory. It characterizes the entire network, including all containers, topological relationships and parameter data, and is the object that the scheduler schedules to run.
  • the aforementioned rule manager 12 is used to read the rules declared by the user in the model description module 3, and interpret these rules when the scheduler 11 schedules the operation of the network model object.
  • the user can specify one or more rules for each container in the model description module 3. All the rules constitute the rule base.
  • the rule manager 12 sorts the rules in the rule base according to a preset priority. When multiple rules for a container conflict with each other, only the rule with the highest priority is executed. When a container does not specify any rules, the rule manager 12 uses the default rules to execute.
  • The rules in the rule base include (but are not limited to): traversal rules, memory usage rules, data I/O rules, synapse and neuron pruning and regeneration rules, etc.
  • Traversal rules can be used to instruct the scheduler to repeatedly traverse, or skip traversing, all or specific containers of the network according to a second preset time interval or a fourth preset traversal period, so as to concentrate computing resources on the sub-networks requiring intensive computation and improve data utilization efficiency.
  • Memory usage rules can be used to guide the scheduler in rationally arranging the use of main memory and coprocessor memory.
  • Data I/O rules can be used to guide the scheduler in scheduling the frequency of data exchange between main memory and coprocessor memory, and between memory and hard disk, so as to save I/O resources and improve overall computing efficiency.
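  • As an illustration of the priority arbitration performed by the rule manager 12, here is a minimal sketch; the rule fields, names, and priorities are invented for the example:

```python
# Hedged sketch: rules are sorted by preset priority; when several rules
# bound to one container conflict, only the highest-priority rule runs, and
# a container with no rule falls back to the default rule.
DEFAULT_RULE = {"name": "default_traversal", "priority": 0}

def pick_rule(rules_for_container):
    if not rules_for_container:
        return DEFAULT_RULE
    return max(rules_for_container, key=lambda r: r["priority"])

rules = [
    {"name": "traverse_every_step", "priority": 1},
    {"name": "skip_traversal",      "priority": 5},  # conflicts with the above
]
print(pick_rule(rules)["name"])   # -> skip_traversal (highest priority wins)
print(pick_rule([])["name"])      # -> default_traversal (no rule specified)
```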
  • the aforementioned data manager 8 includes one or more decoders and encoders.
  • the decoder is used to read and parse the data file in the format specified by the user and convert its content into a data type that can be calculated by the computer.
  • the encoder is used to serialize the data in the memory in a user-specified format for writing back to the hard disk.
  • the file type of the data file can be a binary file or a text file (in Unicode or ASCII format). Users can add custom decoders and encoders in the data manager 8 to read and write files in custom formats.
  • the above-mentioned network builder 9 reads the model description module 3, analyzes the topology of the network, and reads the data file through the data manager 8 to construct the network model object in the memory.
  • the above-mentioned network manager 10 provides a programming interface for constructing a network model object, and the interface calls the network builder 9 to construct a network model object.
  • The network manager 10 also provides a programming interface for accessing, traversing, and operating on network model objects, including support for querying and updating arbitrary containers, neurons, synapses, parameters, etc. by number or name.
  • the supported traversal sequence includes (but is not limited to):
  • traversal can include (but is not limited to):
  • the aforementioned scheduler 11 is used for allocating hardware resources and scheduling calculation processes to ensure optimal calculation efficiency.
  • the scheduler 11 manages one or more main memory pools and one or more device memory pools to reasonably allocate the usage of network model objects in the main memory and each device memory.
  • the main memory pool is used to manage the use of main memory; each coprocessor (which can be ARM, GPU, DSP, ASIC) has one or more corresponding device memory pools to manage the use of corresponding device memory.
  • the upper and lower limits of its capacity are specified by the user through the configuration description module 1.
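  • A hedged sketch of a capacity-bounded memory pool of the kind described; the class, units, and limits are invented for illustration:

```python
# Hypothetical memory pool with user-specified capacity bounds, as would be
# read from the configuration description module; one instance per main
# memory pool and per coprocessor device memory pool.
class MemoryPool:
    def __init__(self, lower_mb, upper_mb):
        self.lower_mb, self.upper_mb = lower_mb, upper_mb
        self.used_mb = 0

    def alloc(self, mb):
        if self.used_mb + mb > self.upper_mb:
            raise MemoryError("pool capacity exceeded; scheduler must evict")
        self.used_mb += mb

    def free(self, mb):
        self.used_mb = max(0, self.used_mb - mb)

main_pool = MemoryPool(lower_mb=256, upper_mb=4096)   # main memory pool
gpu_pool  = MemoryPool(lower_mb=128, upper_mb=2048)   # one device memory pool
main_pool.alloc(512)   # e.g., place one container's parameter tensors
```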
  • The aforementioned scheduler 11 manages one or more thread pools for dynamically arranging sub-threads to participate in multi-threaded operations, so as to rationally balance the computing load of the main computing unit (which can be a CPU, ARM, etc.), the coprocessors (which can be ARM, GPU, DSP, etc.), and the I/O devices (hard disk, camera, audio input, control output, etc.).
  • The above scheduler 11 manages one or more node data input buffers, one or more node data output buffers, one or more edge data input buffers, and one or more edge data output buffers, for caching data that needs to be read from or written to hard disks or I/O devices. These buffers preferably use a circular queue data structure (see the sketch below).
  • the capacity of each buffer, the upper and lower limits of the frequency of reading and writing hard disks or I/O devices, and the upper and lower limits of the throughput of reading and writing hard disks or I/O devices are specified by the user through the configuration description module 1. According to the load of the processor, the hard disk and the I/O device, the scheduler 11 arranges the hard disk and I/O device to read and write in a timely manner to avoid I/O blocking.
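  • A minimal circular-queue sketch for such a buffer; the capacity and payloads are invented, and in the described system they would come from the configuration description module 1:

```python
# Hedged sketch of a node/edge data I/O buffer as a circular queue.
class RingBuffer:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = self.tail = self.count = 0

    def push(self, item):                 # producer: compute thread
        if self.count == len(self.buf):
            return False                  # full: scheduler defers the write
        self.buf[self.tail] = item
        self.tail = (self.tail + 1) % len(self.buf)
        self.count += 1
        return True

    def pop(self):                        # consumer: hard-disk / I/O thread
        if self.count == 0:
            return None                   # empty: nothing to flush yet
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        self.count -= 1
        return item

out_buffer = RingBuffer(capacity=1024)    # e.g., a node data output buffer
out_buffer.push(("NodeParam:v", b"serialized tensor bytes"))
```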
  • Since the scheduler 11 rationally allocates the use of hardware resources such as processors, coprocessors, memory, hard disks, and I/O devices, this system is suitable for efficient operation on embedded devices with relatively limited hardware resources (such as memory).
  • This system provides the function of automatically pruning and regenerating synapses and neurons according to certain trigger conditions and execution rules.
  • The user can specify in the configuration description module 1 the trigger conditions for starting the pruning or regeneration process, and in the model description module 3 the execution rules of the pruning or regeneration process.
  • Execution rules can act on the entire network model object, or on sub-networks or specific containers.
  • The pruning or regeneration process is scheduled and executed by the scheduler 11, and can be executed while the network is running or while it is paused.
  • The trigger conditions for starting the pruning or regeneration process may include (but are not limited to) one or more of the following:
  • user command: the user inputs a command to the system via keyboard, mouse, or other means, and the system executes the pruning or regeneration process immediately upon receiving the command or after a first preset time;
  • interval execution: the system automatically initiates the pruning or regeneration process according to a first preset time interval or a first preset traversal period.
  • Synapse pruning rules can include (but are not limited to) one or more of the following:
  • if a statistic over a synapse's parameter and all synapse parameters in a specified reference synapse set reaches a first preset value relationship (for example, the weight of the synapse is less than 1% of the average weight of all synapses in a specified edge), the synapse is a synapse to be pruned;
  • if a synapse's parameter and a specified threshold reach a second preset value relationship (for example, the weight of the synapse is less than 10.0), the synapse is a synapse to be pruned;
  • if a synapse is marked for pruning by another computation process, the synapse is a synapse to be pruned; synapses to be pruned can then be pruned, as in the sketch below.
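  • A hedged sketch of the synapse pruning checks just listed; the weights and mask are invented example data, with rule (a) using the 1%-of-mean relationship and rule (b) the absolute threshold of 10.0 given in the text:

```python
import numpy as np

weights = np.array([0.02, 5.0, 12.0, 0.001, 9.5], dtype=np.float32)

rule_a = weights < 0.01 * weights.mean()   # relative to the reference set
rule_b = weights < 10.0                    # absolute threshold relationship
marked = np.array([False, False, False, False, True])  # flagged by another
                                                       # computation process
to_prune = rule_a | rule_b | marked
print(np.flatnonzero(to_prune))            # indices of synapses to prune
```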
  • Neuron pruning rules can include (but are not limited to) one or more of the following (see the sketch after this list):
  • a neuron has no input synapses, in which case the neuron is a neuron to be pruned;
  • a neuron has no output synapses, in which case the neuron is a neuron to be pruned;
  • a neuron has neither input nor output synapses, in which case the neuron is a neuron to be pruned;
  • a parameter of a neuron and a statistic over all neuron parameters in a specified reference neuron set reach a third preset numerical relationship (for example, the threshold of the neuron is greater than the maximum threshold of all neurons in a specified node), in which case the neuron is a neuron to be pruned;
  • a parameter of a neuron and a specified threshold reach a fourth preset numerical relationship (for example, the threshold of the neuron is greater than 1000.0), in which case the neuron is a neuron to be pruned;
  • a neuron has not fired for more than a third preset time or a third preset traversal period, or a neuron is marked as prunable by another computation process, in which case the neuron is a neuron to be pruned. A neuron to be pruned may then be pruned.
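A minimal C++ sketch of these predicates, directly mirroring the rules above; the struct layout and the idle/silent counters are illustrative assumptions, and the 1% and 10.0 / 1000.0 constants merely echo the examples in the text:

```cpp
#include <cstddef>
#include <vector>

struct Synapse {
    float weight;
    std::size_t idleTraversals;  // traversals since last activation
    bool markedForPruning;       // set by another computation process
};

bool shouldPruneSynapse(const Synapse& s,
                        const std::vector<Synapse>& referenceSet,
                        std::size_t maxIdleTraversals) {
    float sum = 0.0f;
    for (const Synapse& r : referenceSet) sum += r.weight;
    const float mean =
        referenceSet.empty() ? 0.0f : sum / referenceSet.size();
    return s.weight < 0.01f * mean               // relative-statistic rule
        || s.weight < 10.0f                      // absolute-threshold rule
        || s.idleTraversals > maxIdleTraversals  // staleness rule
        || s.markedForPruning;                   // externally marked
}

struct Neuron {
    std::size_t numInputSynapses;
    std::size_t numOutputSynapses;
    float threshold;
    std::size_t silentTraversals;  // traversals since last firing
    bool markedForPruning;
};

bool shouldPruneNeuron(const Neuron& n, float maxThresholdInNode,
                       std::size_t maxSilentTraversals) {
    return n.numInputSynapses == 0 || n.numOutputSynapses == 0
        || n.threshold > maxThresholdInNode       // relative-statistic rule
        || n.threshold > 1000.0f                  // absolute-threshold rule
        || n.silentTraversals > maxSilentTraversals
        || n.markedForPruning;
}
```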
  • Neuron regeneration rules can include (but are not limited to) one or more of the following:
  • the number of existing neurons in a node container and the total capacity of that node container reach a first preset ratio or a fifth preset numerical relationship, in which case new neurons are generated at a second preset ratio of the total capacity, or in a first preset quantity; the first and second preset ratios may be the same or different;
  • a node container generates new neurons at a first preset rate, that is, at every preset time interval or preset traversal period it adds neurons at a third preset ratio of its total capacity, or in a second preset quantity;
  • a node container is marked by another computation process as needing new neurons, and generates them at a second preset rate (that is, at every preset time interval or preset traversal period, at a preset ratio of its total capacity or in a preset quantity).
  • Synapse regeneration rules can include (but are not limited to) one or more of the following (a capacity-based sketch follows this list):
  • the number of existing synapses in an edge container and the total capacity of that edge container reach a fourth preset ratio or a sixth preset numerical relationship, in which case new synapses are generated at a fifth preset ratio of the total capacity, or in a third preset quantity; the fourth and fifth preset ratios may be the same or different;
  • an edge container generates new synapses at a third preset rate (that is, at every preset time interval or preset traversal period, at a preset ratio of its total capacity or in a preset quantity);
  • an edge container is marked by another computation process as needing new synapses, and generates them at a fourth preset rate (that is, at every preset time interval or preset traversal period, at a preset ratio of its total capacity or in a preset quantity);
  • a node container contains neurons that have no input or output synapses, in which case input or output synapses are regenerated for them in the corresponding edge containers.
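A short C++ sketch of the capacity-based regeneration rule, assuming a trigger ratio and a growth ratio (the text allows the two to differ); the struct and parameter names are illustrative:

```cpp
#include <algorithm>
#include <cstddef>

struct ContainerStats {
    std::size_t existing;  // current neurons or synapses
    std::size_t capacity;  // total capacity of the Node/Edge container
};

// Returns how many objects to regenerate in this pass: when the fill
// level falls to the trigger ratio, grow by a fixed ratio of total
// capacity, never exceeding the container's capacity.
std::size_t regenerationCount(const ContainerStats& c,
                              double triggerRatio,   // e.g. 0.5
                              double growthRatio) {  // e.g. 0.1
    const double fill =
        c.capacity == 0 ? 1.0
                        : static_cast<double>(c.existing) / c.capacity;
    if (fill > triggerRatio) return 0;  // still above the trigger level
    const std::size_t wanted =
        static_cast<std::size_t>(growthRatio * c.capacity);
    return std::min(wanted, c.capacity - c.existing);
}
```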
  • The scheduler described above is responsible for scheduling the execution of synapse and neuron pruning and regeneration. When the conditions that start pruning or regeneration are triggered, the scheduler allocates one or more worker threads from the thread pool it manages, each responsible for some region or for specific containers of the network model object.
  • Each worker thread traverses the containers in the region under its charge and executes the neuron and/or synapse pruning and/or regeneration processes according to the specified rules.
  • The regeneration of a neuron or synapse can consist of allocating the required memory in its container and constructing the corresponding object (new/construct object); the pruning of a neuron or synapse can consist of destructing the corresponding object in its container (delete/destruct object) and releasing the memory it occupied. A sketch of this division of work appears below.
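The following C++ sketch shows one way the fan-out could look, assuming one contiguous slice of containers per worker thread; the Container type and its prune()/regenerate() methods are illustrative stand-ins for the construct/destruct work described above:

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct Container {
    void prune() { /* destruct marked objects, release their memory */ }
    void regenerate() { /* allocate memory, construct new objects */ }
};

// Partition the containers into slices and let each worker thread
// traverse only its own slice. numThreads must be at least 1.
void runPruningPass(std::vector<Container>& containers,
                    std::size_t numThreads) {
    std::vector<std::thread> workers;
    const std::size_t slice =
        (containers.size() + numThreads - 1) / numThreads;
    for (std::size_t t = 0; t < numThreads; ++t) {
        const std::size_t begin = t * slice;
        const std::size_t end = std::min(containers.size(), begin + slice);
        if (begin >= end) break;
        workers.emplace_back([&containers, begin, end] {
            for (std::size_t i = begin; i < end; ++i) {
                containers[i].prune();
                containers[i].regenerate();
            }
        });
    }
    for (std::thread& w : workers) w.join();
}
```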
  • Because this system can automatically execute synapse and neuron pruning and regeneration according to specified conditions and rules, and offers a variety of flexible trigger conditions and execution rules for these processes, neural network developers are relieved of the burden of writing their own pruning and regeneration programs, which increases the flexibility and efficiency of development.
  • The pruning and regeneration processes can be used alternately and in combination, which can optimize the coding efficiency of the neural network, greatly compress its size and the storage space it requires, save memory and improve computational efficiency, making this system suitable for running on embedded devices with limited hardware resources.
  • By supporting synapse and neuron pruning and regeneration, this system also lends itself to simulating rich mechanisms of the biological brain's nervous system (such as apoptosis and regeneration of synapses and neurons in the hippocampus), and can therefore better support brain-like intelligence and cognitive computing.
  • The log manager 5 described above records the logs generated while the system runs; the logs inform the user of the system's working status and of anomalies, which facilitates debugging and maintenance.
  • A log consists of a series of strings and timestamps, and can be displayed in a command-line environment, or saved to a file and viewed with a text browser.
  • The log manager consists of a logging programming interface and a log management service.
  • The logging programming interface is called by the user's program and passes the log data to the log management service.
  • The log management service runs on an independent thread to avoid blocking network computation. It sorts the received log data by timestamp and caches it in memory; when the cached data reaches a certain volume, it is saved to the hard disk in order and the cache is cleared. A sketch of such a service follows.
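A minimal C++ sketch of the two-part logger, assuming a flush threshold in entries and a fixed file name; all names are illustrative, not taken from the patent:

```cpp
#include <algorithm>
#include <chrono>
#include <condition_variable>
#include <cstddef>
#include <fstream>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

class LogManager {
public:
    explicit LogManager(std::size_t flushThreshold)
        : flushThreshold_(flushThreshold),
          service_(&LogManager::serviceLoop, this) {}

    ~LogManager() {
        { std::lock_guard<std::mutex> lock(mutex_); stopping_ = true; }
        cv_.notify_one();
        service_.join();
    }

    // Logging programming interface: cheap, never touches the disk.
    void log(std::string message) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            pending_.push_back({std::chrono::system_clock::now(),
                                std::move(message)});
        }
        cv_.notify_one();
    }

private:
    struct Entry {
        std::chrono::system_clock::time_point stamp;
        std::string text;
    };

    // Log management service: runs on its own thread, sorts a batch by
    // timestamp, appends it to disk, and clears the in-memory cache.
    void serviceLoop() {
        std::ofstream file("run.log", std::ios::app);  // illustrative name
        std::unique_lock<std::mutex> lock(mutex_);
        while (!stopping_ || !pending_.empty()) {
            cv_.wait(lock, [this] {
                return stopping_ || pending_.size() >= flushThreshold_;
            });
            std::vector<Entry> batch;
            batch.swap(pending_);
            lock.unlock();
            std::sort(batch.begin(), batch.end(),
                      [](const Entry& a, const Entry& b) {
                          return a.stamp < b.stamp;
                      });
            for (const Entry& e : batch) file << e.text << '\n';
            file.flush();
            lock.lock();
        }
    }

    std::size_t flushThreshold_;
    std::mutex mutex_;
    std::condition_variable cv_;
    std::vector<Entry> pending_;
    bool stopping_ = false;
    std::thread service_;  // independent thread; callers never block on disk
};
```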
  • The operation monitoring module 13 described above receives and responds to user input and manages the running state of the entire system. It is designed as a state machine with a default state, a network construction state, a network running state and a network paused state. It contains a message queue that receives and buffers user commands, and an independent thread that responds to the queued commands promptly so that the state machine can switch between states. Users can input commands through the keyboard, the mouse, a programming interface or other means. Commands include (but are not limited to): build-network, start-run, pause-run, end-run, synapse and neuron pruning, and synapse and neuron regeneration commands.
  • The running procedure of the system is, in outline:
  • S1: the system starts and initializes the running environment;
  • S2: the running environment enters the default state;
  • S3: the configuration manager reads the configuration description module to obtain the configuration parameters;
  • S4: wait for command input;
  • S5: judge whether a build-network command has been received; once the judgment is yes, go to the next step;
  • S6: on receiving the build-network command, the running environment switches to the network construction state;
  • S7: initialize the network manager and the rule manager;
  • S8: the network builder reads the model description module and constructs the network model object, and the data manager reads the parameter databases;
  • S9: wait for command input;
  • S10: judge whether a start-run command has been received; if no, return to step S9 and wait for input again; if yes, go to the next step;
  • S11: the running environment enters the network running state;
  • S12: scheduled execution;
  • S13: judge whether a pause-run command has been received; if yes, go to step S14; if no, go to step S17;
  • S14: the running environment enters the network paused state;
  • S15: wait for command input;
  • S16: judge whether a start-run command has been received; if yes, return to step S11; if no, return to step S15;
  • S17: judge whether a specified stop condition has been reached (including receipt of an end-run command); if no, return to step S12; if yes, end the run.
  • When the system initializes, the state machine above is in the default state, starts the message queue, and begins to accept user input. On receiving a build-network command it switches to the network construction state and constructs the network model object; on receiving a start-run command it switches to the network running state and performs network computation; on receiving a pause-run command it switches to the network paused state and suspends network computation; on receiving an end-run command it saves the network data to the hard disk, and the system terminates and exits. A sketch of such a state machine follows this item.
  • While the state machine is in the network running state or the network paused state, if a synapse and neuron pruning command is in the message queue, the scheduler starts the pruning process; if a synapse and neuron regeneration command is in the queue, the scheduler starts the regeneration process. Because this system uses the operation monitoring module to manage its working state, the system can be switched to the network paused state when the application environment does not require network computation, saving power and making the system suitable for embedded use.
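A compact C++ sketch of the four-state machine and its command queue; the real module dispatches on an independent thread, which is omitted here for brevity, and the command strings are illustrative:

```cpp
#include <queue>
#include <string>

enum class State { Default, Building, Running, Paused };

class RunMonitor {
public:
    void post(std::string cmd) { queue_.push(std::move(cmd)); }

    void dispatchOne() {
        if (queue_.empty()) return;
        std::string cmd = std::move(queue_.front());
        queue_.pop();
        if (cmd == "build" && state_ == State::Default) {
            state_ = State::Building;  // construct the network model object
        } else if (cmd == "start" &&
                   (state_ == State::Building || state_ == State::Paused)) {
            state_ = State::Running;   // perform network computation
        } else if (cmd == "pause" && state_ == State::Running) {
            state_ = State::Paused;    // suspend network computation
        } else if (cmd == "end") {
            // save network data to the hard disk, then terminate and exit
            state_ = State::Default;
        } else if ((cmd == "prune" || cmd == "regenerate") &&
                   (state_ == State::Running || state_ == State::Paused)) {
            // legal only while running or paused; handed to the scheduler
        }
    }

    State state() const { return state_; }

private:
    State state_ = State::Default;
    std::queue<std::string> queue_;
};
```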
  • The graphical display module 4 described above reads network data and displays it to the user, which is convenient for development, monitoring and debugging.
  • The graphical display module 4 can read the network model object's data directly from memory, or read data saved on the hard disk.
  • The graphical display module 4 uses an independent thread to avoid blocking network computation, so it can display in real time while the network is being scheduled and run, or display after the scheduled run has ended.
  • It should be noted that the expressions first to third preset time, first to second preset time interval, first to fourth preset traversal period, first to sixth preset numerical relationship, first to fifth preset ratio, first to third preset quantity, and first to fourth preset rate serve only to distinguish between the various preset times, preset time intervals, preset traversal periods, preset numerical relationships, preset ratios, preset quantities and preset rates; the specific values or ranges can be determined according to actual needs, and the embodiments of this application do not limit them.
  • Each of the aforementioned preset times, preset time intervals, preset traversal periods, preset numerical relationships, preset ratios, preset quantities and preset rates may be the same as or different from the others.
  • For example, the lengths of the first to third preset times may be completely the same or completely different, or some of them may be the same while others differ. The embodiments of this application do not limit this either.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Neurology (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

A spiking neural network computing system and method for brain-like intelligence and cognitive computing. The system comprises a model description module (3), a parameter database (2), a configuration description module (1), a configuration manager (6), a rule manager (12), a data manager (8), a network builder (9), a network manager (10), an operation manager (7), a scheduler (11), a log manager (5), an operation monitoring module (13) and a graphical display module (4). The system provides the ability to automatically execute synapse and neuron pruning and regeneration according to specified conditions and rules, and offers a variety of flexible trigger conditions and execution rules for starting these processes, relieving neural network developers of the burden of writing their own pruning and regeneration programs, and thereby effectively solving several problems of existing brain-like spiking neural network computing frameworks.

Description

Spiking neural network computing system and method for brain-like intelligence and cognitive computing
This application claims priority to the Chinese patent application No. 201910588964.5, entitled "Spiking neural network computing system and method for brain-like intelligence and cognitive computing", filed with the Chinese Patent Office on July 2, 2019, the entire contents of which are incorporated herein by reference.
Technical field
This application relates to the technical fields of brain-like spiking neural network simulation and high-performance computing, and in particular to a spiking neural network computing system and method for brain-like intelligence and cognitive computing.
Background
At present, brain-like intelligence and cognitive computing take spiking neural networks as their foundation, building computational models that combine the rich working mechanisms of the biological brain's many neurotransmitters, neuromodulators, receptors, electrical synapses, chemical synapses, dendrites, neurons and glial cells. The neural circuits, nuclei, brain regions and whole-brain models so constructed can simulate many cognitive mechanisms and behaviors of the biological brain, such as memory and learning, simulated emotion, navigation and planning, motor control, brain-like vision and brain-like hearing, attention, and decision-making, offering a broader route for the development of artificial intelligence systems.
However, existing brain-like spiking neural network computing frameworks have the following problems:
1. They provide no unified and flexible modeling approach and cannot support flexible network topologies, and therefore cannot reconcile modeling scale with modeling richness. In terms of modeling scale, for example, most existing frameworks cannot support and unify modeling at the molecular scale, the neurotransmitter and neuromodulator scale, the receptor scale, the synapse scale, the dendrite scale, the neuron scale, the glial-cell scale, the neural-circuit scale, the nucleus scale, the nerve-fiber-tract scale, the brain-region scale, the whole-brain scale and the behavioral scale. In terms of modeling richness, existing frameworks often neglect support for modeling electrical synapses, for simulating the working mechanisms of neuromodulators, and for simulating the mechanism by which multiple synapses on a dendrite exchange information and perform logical operations; nor can they support topologies in which synapses connect directly to synapses.
2. They provide no description method that imitates the way the biological brain's nervous system is organized.
3. They do not integrate sufficiently rich built-in functions and flexible user interfaces; for example, they cannot automatically execute synapse and neuron pruning and regeneration, requiring users to write programs for such functions themselves.
4. They cannot effectively unify spiking neural networks, traditional neural networks, and other algorithms whose main data representation is the tensor for hybrid computation.
5. They are not compatible with CPU, ARM, GPU, DSP and other chip architectures, and their use of hardware resources is mostly unoptimized, making them unsuitable for running medium- to large-scale brain-like neural networks efficiently on general-purpose computers and embedded devices.
These problems leave existing brain-like spiking neural network computing frameworks with a limited modeling range, poor compatibility, low computational efficiency and inconvenient development and use, making it difficult to deploy brain-like intelligence and cognitive computing at low cost and large scale in products such as smart toys, robots, drones, IoT devices, smart homes and in-vehicle systems.
Therefore, how to provide a flexible and efficient spiking neural network computing system and method for brain-like intelligence and cognitive computing is a problem that those skilled in the art urgently need to solve.
Technical problem
One object of the embodiments of this application is to provide a spiking neural network computing system and method for brain-like intelligence and cognitive computing. The system provides a unified and flexible modeling approach, and its multi-level tree-structured network description supports full-scale modeling of the biological brain's nervous system and flexible network topologies, organically unifying modeling scale with modeling richness and fusing the models of all scales into a single neural network for computation. It supports representing and storing data in tensor form, so the system can support spiking neural networks while remaining compatible with traditional neural networks (deep learning) and other algorithms whose main data representation is the tensor. It also provides the ability to automatically execute synapse and neuron pruning and regeneration according to specified conditions and rules, relieving neural network developers of the burden of implementing these functions themselves, and thereby effectively solves the above problems of existing brain-like spiking neural network computing frameworks.
Technical solution
To solve the above technical problems, the embodiments of this application adopt the following technical solution:
A spiking neural network computing system for brain-like intelligence and cognitive computing, the system comprising: a model description module, a parameter database, a configuration description module, a configuration manager, a rule manager, a data manager, a network builder, a network manager, an operation manager, a scheduler, a log manager, an operation monitoring module and a graphical display module;
the model description module provides the interface through which the user designs and describes the network model;
the parameter database stores the network's parameter data, including initialization parameters and runtime parameters; a parameter database may be a binary file or a text file; a text file may use the CSV format or another character-delimited format;
the configuration description module describes the configuration parameters of the current network running environment and the conditions for starting synapse and neuron pruning and regeneration;
the configuration manager reads the configuration description module to obtain the system configuration parameters;
the network model object is constructed by the network builder and resides in memory; it represents the entire network, including all containers, topological relationships and parameter data, and is the object that the scheduler schedules and runs;
the rule manager reads the rules declared by the user in the model description module, interprets them when the scheduler schedules the computation of the network model object, and arbitrates conflicts between rules;
the data manager comprises one or more decoders and encoders for reading and parsing parameter databases, converting data formats and serializing data; users can add custom decoders and encoders to the data manager to read and write files in custom formats;
the network builder reads the model description module, parses the network's topology, reads data files through the data manager, and constructs the network model object in memory;
the network manager constructs, traverses, accesses and updates the network model object;
the operation manager manages all operations that can run on this system; all operations constitute an operation library; the user can specify in the model description module the operations each container should execute, and at runtime the scheduler schedules their execution;
the scheduler allocates hardware resources and schedules the computation process to optimize computational efficiency;
the log manager records the logs generated while the system runs, informing the user of the system's working status and anomalies to facilitate debugging and maintenance;
the operation monitoring module receives and responds to user input and manages the running state of the entire system, which includes a default state, a network construction state, a network running state and a network paused state;
the graphical display module reads network data and displays it to the user to facilitate development, monitoring and debugging.
Optionally, the model description module comprises a network description unit, a confluence description unit and a flow description unit, which together describe the components and topology of the entire network; they may be text files using nested syntax, in the XML or JSON file format.
Optionally, the model description module uses a multi-level tree-structured network description that imitates the organization of the biological brain's nervous system;
the confluence description unit supports arranging and organizing the network's nodes by preset layer groups and modules, representing the multi-level organization of neurons and associated glial cells in the biological brain (for example, nucleus -> brain region -> whole brain);
the flow description unit supports grouping and hierarchically organizing the network's edges by topological (connection) similarity, representing the many ways synapses are organized in the biological brain (such as dendrites, projections in neural pathways, and nerve fiber tracts) and the organization of the processes of associated glial cells.
Optionally, the network description unit describes containers such as Network and Param, describes the parameters and running rules of the entire network, and points by link to one or more confluence description units and flow description units;
the confluence description unit describes containers such as Confluence, Module, Layer, Node, NodeParam and Param, and describes the module and layer-group division of the network's nodes and each container's parameters, runtime rules and commands;
the flow description unit describes containers such as Flow, Channel, Link, Edge, EdgeParam and Param, and describes the connection (topological) relationships of the network's edges and each container's parameters, runtime rules and commands.
Optionally, Network denotes the network container, at the first (topmost) level of the tree structure, representing models at the whole-brain and behavioral scales; each Network can hold one or more Confluences and Flows;
Confluence denotes the confluence container, at the second level of the tree, which can represent models at the brain-region scale; each Confluence can hold one or more Modules;
Module denotes the module container, at the third level, which can represent models at the nucleus scale; each Module can hold one or more Layers;
Layer denotes the layer-group container, at the fourth level, which can represent models at the neural-circuit scale; each Layer can hold one or more Nodes;
Node denotes the node container, at the fifth level, which can represent models at the neuron or glial-cell scale, or a population of neurons or glial cells; each Node can hold one or more NodeParams;
a Node can also represent an input or output node used to interface with the system's I/O devices, such as camera input, audio input, sensor input and control output; the data read from or written to the I/O device is dynamically updated through the Node's NodeParams;
NodeParam denotes the node parameter container, at the sixth (lowest) level, which can represent models at the molecular scale, the receptor scale, or the neurotransmitter or neuromodulator scale, or the parameter tensor of a population model of neurons or glial cells;
Flow denotes the flow container, at the second level, which can represent models at the scale of the nerve fiber tracts linking brain regions; each Flow can hold one or more Channels;
Channel denotes the channel container, at the third level, which can represent a model of a conduction tract composed of axons linking nuclei; each Channel can hold one or more Links;
Link denotes the link container, at the fourth level, which can represent a model of a neural pathway composed of axons within a neural circuit; each Link can hold one or more Edges;
Edge denotes the edge container, at the fifth level, which can represent models at the dendrite or synapse scale, or a population of synapses or of glial-cell processes; each Edge can hold one or more EdgeParams;
EdgeParam denotes the edge parameter container, at the sixth (lowest) level, which can represent models at the molecular scale, the neurotransmitter or neuromodulator scale, or the receptor scale, or the parameter tensor of a population model of synapses or glial-cell processes;
Param denotes the general parameter container, an auxiliary container. As modeling requires, a container at any of the above levels may carry one or more additional Params that hold parameter data in tensor form, or may carry none;
each of the above containers has a number and a name used for indexing within the multi-level tree;
each container has one or more control blocks (Control Block) that store statistics and control information, including the network's traversal order and rules, the number of traversal passes the container has taken part in, whether its data is resident in main memory, whether its data is resident in coprocessor memory, and the frequency of hard-disk reads and writes; these are managed and updated by the rule manager and the scheduler.
Optionally, the firing characteristics of a neuron model can be built as tonic firing, fast spiking, burst firing, peak firing or phasic firing;
a neuron model's response to upstream input signals can be built as different neural adaptation or sensitivity curves;
a neuron model's mechanism of action on its downstream can be built as an excitatory, inhibitory, modulatory or neutral model;
a neuron model can be built as a spiking neuron model or as a traditional neuron model;
a glial cell model can be built as an astrocyte, oligodendrocyte, microglia, Schwann cell or satellite cell model.
Optionally, a neurotransmitter or neuromodulator model can be built as an excitatory, inhibitory or modulatory model;
a receptor model can be built as an ionotropic or metabotropic model;
a receptor model's response to a neurotransmitter or neuromodulator can be built as excitatory, inhibitory, modulatory or neutral.
Optionally, a dendrite-scale model can be built as an apical dendrite, basal dendrite or dendritic spine model;
a synapse model can be built as excitatory, inhibitory, modulatory or neutral.
Optionally, a molecular-scale model can be built as an intracellular molecule model, a cell-membrane molecule model or an intercellular-space molecule model.
Optionally, NodeParam, EdgeParam and Param hold their parameters internally in tensor (that is, multidimensional matrix) form;
a tensor may have one or more dimensions, with the specific arrangement and usage specified by the user;
a tensor can be configured as four-dimensional, with each parameter's position in the tensor given by coordinates (x, y, z, t), where the x, y and z dimensions correspond to the spatial arrangement of the neural tissue models (such as neurons or synapses) represented in the parent container, and t is the time dimension, which can represent the caching and delay of temporal information and can be used to simulate the long-duration (delayed) action of neuromodulators on neurons and synapses;
the parameters in a tensor can be shared by all or some of the neural tissue models (such as neurons or synapses) in the parent container, which can be used to simulate the large-area action of a neuromodulator on all neural tissue in a target region.
Optionally, a Flow and all of its child containers can each correspond to one or more upstream containers and one or more downstream containers, which are indexed and accessed by their numbers or names;
the upstream and downstream containers can be containers at any level, and may be the same container or different containers;
a Flow and all of its child containers can form information flow paths with their upstream and downstream containers, representing the (unidirectional or bidirectional) flow and processing of information between two information sources (such as an upstream container and a downstream container); multiple containers in the network can form arbitrary topologies of information flow.
Optionally, this flow and processing of information can implement many biological brain mechanisms, such as the conduction of neural impulses between neurons through synapses, the exchange of information between synapses, and neuron and synapse plasticity.
Optionally, the arbitrary topology of information flow can implement any neural-circuit connection pattern in the brain's nervous system, including feedback connections from a neuron back to itself, connections among neurons of the same population (layer), arbitrary connections between neurons of different populations (layers) (sequential/feedforward, cross-layer, feedback, and so on), and direct synapse-to-synapse connections, and it permits unlimited cyclic computation over feedback connections.
Optionally, the system supports a modeling design in which a model at any level (or scale) is decomposed into two parts, data and operation;
the data can be held by a NodeParam, EdgeParam or Param and stored by the corresponding parameter database;
an operation is an executable program (such as a function, or a class containing functions) that accesses and updates that data; operations can run on general-purpose CPUs, ARM, DSP, GPU or other processors, giving the system a degree of cross-hardware-platform portability.
Optionally, the system lets the user define one or more operations through which the neurons in the same Node directly access and update one another's data (without going through an Edge) for fast information exchange, simulating the electrical synapses of the biological brain's nervous system.
Optionally, the system lets the user define one or more operations through which the synapses in the same Edge directly access and update one another's data for fast information exchange, simulating the case in the biological brain where multiple synapses on one neuron's dendrites exchange information and perform logical operations, including mechanisms such as shunting inhibition.
Optionally, the system supports automatically executing synapse and neuron pruning and regeneration according to preset trigger conditions and execution rules;
the trigger conditions can be specified by the user in the configuration description module;
the execution rules can be specified by the user in the model description module;
the execution rules can act on the entire network model object, or on sub-networks or specific containers;
the pruning and regeneration of synapses and neurons is scheduled and executed by the scheduler, and can run while the network is running or while it is paused.
Optionally, the trigger conditions include one or any several of the following:
user command: the user inputs a command to the system through the keyboard, mouse or other means, and the system executes the pruning or regeneration process immediately after receiving it or after a first preset time;
continuous execution: whenever the network model or one of its sub-regions satisfies a rule of the pruning or regeneration process, that process is executed;
interval execution: the system automatically starts the pruning or regeneration process at a first preset time interval or a first preset traversal period.
Optionally, the execution rules of the pruning process divide into synapse pruning rules and neuron pruning rules;
the synapse pruning rules include one or any several of the following:
a parameter of a synapse and a statistic over all synapse parameters in a specified reference synapse set reach a first preset numerical relationship (for example, the synapse's weight is less than 1% of the average weight of all synapses in a specified edge), in which case the synapse is a synapse to be pruned;
a parameter of a synapse and a specified threshold reach a second preset numerical relationship (for example, the synapse's weight is less than 10.0), in which case the synapse is a synapse to be pruned;
a synapse has not been triggered for more than a second preset time or a second preset traversal period, in which case the synapse is a synapse to be pruned;
a synapse is marked by another computation process as prunable, in which case the synapse is a synapse to be pruned; a synapse to be pruned may be pruned;
the neuron pruning rules include one or any several of the following:
a neuron has no input synapses, in which case the neuron is a neuron to be pruned;
a neuron has no output synapses, in which case the neuron is a neuron to be pruned;
a neuron has neither input nor output synapses, in which case the neuron is a neuron to be pruned;
a parameter of a neuron and a statistic over all neuron parameters in a specified reference neuron set reach a third preset numerical relationship (for example, the neuron's threshold is greater than the maximum threshold of all neurons in a specified node), in which case the neuron is a neuron to be pruned;
a parameter of a neuron and a specified threshold reach a fourth preset numerical relationship (for example, the neuron's threshold is greater than 1000.0), in which case the neuron is a neuron to be pruned;
a neuron has not fired for more than a third preset time or a third preset traversal period, in which case the neuron is a neuron to be pruned; a neuron is marked by another computation process as prunable, in which case the neuron is a neuron to be pruned; a neuron to be pruned may be pruned.
Optionally, the execution rules of the regeneration process divide into neuron regeneration rules and synapse regeneration rules;
the neuron regeneration rules include one or any several of the following:
the number of existing neurons in a node container and the total capacity of that node container reach a first preset ratio or a fifth preset numerical relationship, in which case neurons are regenerated at a second preset ratio of the total capacity or in a first preset quantity; the first and second preset ratios may be the same or different;
a node container regenerates neurons at a first preset rate, that is, at every preset time interval or preset traversal period, at a third preset ratio of its total capacity or in a second preset quantity;
a node container is marked by another computation process as needing new neurons, and regenerates neurons at a second preset rate (that is, at every preset time interval or preset traversal period, at a preset ratio of its total capacity or in a preset quantity);
the synapse regeneration rules include one or any several of the following:
the number of existing synapses in an edge container and the total capacity of that edge container reach a fourth preset ratio or a sixth preset numerical relationship, in which case synapses are regenerated at a fifth preset ratio of the total capacity or in a third preset quantity; the fourth and fifth preset ratios may be the same or different;
an edge container regenerates synapses at a third preset rate (that is, at every preset time interval or preset traversal period, at a preset ratio of its total capacity or in a preset quantity);
an edge container is marked by another computation process as needing new synapses, and regenerates synapses at a fourth preset rate (that is, at every preset time interval or preset traversal period, at a preset ratio of its total capacity or in a preset quantity);
a node container contains neurons that have no input or output synapses, in which case input or output synapses are regenerated for them in the corresponding edge containers.
Optionally, in the model description module the user can specify one or more rules for each container; all rules constitute a rule library;
the rule manager sorts the rules in the rule library by preset priority; when several rules acting on one container conflict with one another, only the highest-priority rule is executed; when a container has no rule specified, the rule manager applies a default rule;
the rules in the rule library include: traversal rules, memory usage rules, data I/O rules, and synapse and neuron pruning and regeneration rules;
traversal rules can guide the scheduler to repeatedly traverse, or skip, all or specific containers of the network at a second preset time interval or a fourth preset traversal period, concentrating computing resources on computation-intensive sub-networks and improving data utilization;
memory usage rules can guide the scheduler in arranging the use of main memory and coprocessor memory;
data I/O rules can guide the scheduler in scheduling the frequency with which data is exchanged between main memory and coprocessor memory, and between memory and the hard disk, saving I/O resources and improving overall computational efficiency.
Optionally, the scheduler manages one or more main memory pools and one or more device memory pools to allocate the network model object's use of main memory and of each device memory;
a main memory pool manages the use of main memory;
a device memory pool corresponds to a coprocessor (which may be ARM, GPU, DSP, ASIC, and so on) and manages the use of that device's memory;
the upper and lower capacity limits of the main memory pools and device memory pools are specified by the user through the configuration description module.
Optionally, the scheduler manages one or more thread pools for dynamically assigning worker threads to multithreaded computation, balancing the computational load across the main computing units (such as CPUs or ARM), the coprocessors (such as ARM, GPU or DSP) and the I/O devices (hard disk, camera, audio input, control output, and so on).
Optionally, the scheduler manages one or more node data input buffers, one or more node data output buffers, one or more edge data input buffers and one or more edge data output buffers, which cache the data to be read from or written to the hard disk or I/O devices, so that the scheduler can schedule those reads and writes in good time, according to the load on the processors, the hard disk and the I/O devices, to avoid I/O blocking;
the capacity of each buffer, the upper and lower limits on the frequency of hard-disk or I/O-device reads and writes, and the upper and lower limits on their throughput are specified by the user through the configuration description module.
This application also provides a spiking neural network computing method for brain-like intelligence and cognitive computing, which uses the spiking neural network computing system for brain-like intelligence and cognitive computing described above.
As the above technical solution shows, compared with the prior art, this application discloses a spiking neural network computing system and method for brain-like intelligence and cognitive computing. The system provides a unified and flexible modeling approach; its multi-level tree-structured network description supports full-scale modeling of the biological brain's nervous system and flexible network topologies, organically unifying modeling scale with modeling richness and fusing the models of all scales into one neural network for computation. It supports representing and storing data in tensor form, so it can support spiking neural networks while remaining compatible with traditional neural networks (deep learning) and other algorithms whose main data representation is the tensor, and it provides the ability to automatically execute synapse and neuron pruning and regeneration according to specified conditions and rules, relieving neural network developers of the burden of implementing these functions themselves.
The modeling design described above may decompose a model at any level (or scale) into two parts, data and operation. As stated above, the data can be held by a NodeParam, EdgeParam or Param and stored in the corresponding parameter database; an operation is an executable program (such as a function, or a class containing functions) that can access and update that data.
Description of drawings
To explain the technical solutions in the embodiments of this application more clearly, the drawings needed for the embodiments or the exemplary descriptions are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; those of ordinary skill in the art can derive other drawings from them without creative effort.
Figure 1 is a schematic diagram of the overall architecture of a spiking neural network computing system for brain-like intelligence and cognitive computing provided by this application;
Figure 2 is a schematic diagram of the network hierarchy of a spiking neural network computing system for brain-like intelligence and cognitive computing in an embodiment of this application;
Figure 3 is a schematic flowchart of the running procedure of a spiking neural network computing system for brain-like intelligence and cognitive computing in an embodiment of this application.
Embodiments of the invention
To make the objects, technical solutions and advantages of this application clearer, this application is described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain this application and do not limit it.
To explain the technical solution of this application, a detailed description follows with specific drawings and embodiments.
Referring to Figure 1, an embodiment of this application discloses a spiking neural network computing system for brain-like intelligence and cognitive computing, comprising: a model description module 3, a parameter database 2, a configuration description module 1, a configuration manager 6, a rule manager 12, a data manager 8, a network builder 9, a network manager 10, an operation manager 7, a scheduler 11, a log manager 5, an operation monitoring module 13 and a graphical display module 4.
The model description module 3 comprises a network description unit, a confluence description unit and a flow description unit, which together describe the components and topology of the entire network. They preferably use nested syntax and may adopt the XML or JSON file format.
The network description unit can describe containers such as Network and Param, can describe the parameters and running rules of the entire network, and points by link to one or more confluence description units and flow description units.
The confluence description unit can describe containers such as Confluence, Module, Layer, Node, NodeParam and Param, and can describe the module and layer-group division of the network's nodes and each container's parameters, runtime rules and commands.
The flow description unit can describe containers such as Flow, Channel, Link, Edge, EdgeParam and Param, and can describe the connection (topological) relationships of the network's edges and each container's parameters, runtime rules and commands.
Referring to Figure 2 (which shows the network hierarchy), the network description supported by the model description module 3 is preferably expressed as a multi-level tree structure that imitates the organization of the biological brain's nervous system:
1. The confluence description unit supports arranging and organizing the network's nodes by preset layer groups and modules, and can represent the multi-level organization of neurons and associated glial cells in the biological brain (for example, nucleus -> brain region -> whole brain);
2. The flow description unit supports grouping and hierarchically organizing the network's edges by topological (connection) similarity, and can represent the many ways synapses (such as dendrites, projections in neural pathways and nerve fiber tracts) and the processes of associated glial cells are organized in the biological brain. This makes developing, debugging, managing and scheduling large-scale brain-like neural networks more intuitive and convenient.
Specifically, in the multi-level tree structure above:
Network denotes the network container, at the first (topmost) level of the tree, and can represent models at the whole-brain and behavioral scales. Each Network can hold one or more Confluences and Flows.
Confluence denotes the confluence container, at the second level of the tree, and can represent models at the brain-region scale. Each Confluence can hold one or more Modules.
Module denotes the module container, at the third level of the tree, and can represent models at the nucleus scale. Each Module can hold one or more Layers.
Layer denotes the layer-group container, at the fourth level of the tree, and can represent models at the neural-circuit scale. Each Layer can hold one or more Nodes.
Node denotes the node container, at the fifth level of the tree, and can represent models at the neuron or glial-cell scale, or a population of neurons or glial cells. A neuron model's firing characteristics can be built as tonic firing, fast spiking, burst firing, peak firing or phasic firing; its response to upstream input signals can be built as different neural adaptation or sensitivity curves; and its mechanism of action on its downstream can be built as an excitatory, inhibitory, modulatory or neutral model. Glial cell models can be built as astrocyte, oligodendrocyte, microglia, Schwann cell or satellite cell models, among others. Each Node can hold one or more NodeParams. When a Node represents a population of neurons of the same kind, the number of NodeParams it holds is determined by the number of parameter kinds of the neuron model: each kind corresponds to one NodeParam, which stores that parameter for all neurons in the Node in a tensor arrangement.
A Node can also represent an input or output node for interfacing with the system's I/O devices, such as camera input, audio input, sensor input and control output. Data read from and written to the I/O device is dynamically updated through the Node's NodeParams.
NodeParam denotes the node parameter container, at the sixth (lowest) level of the tree, and can represent models at the molecular scale, the receptor scale, or the neurotransmitter or neuromodulator scale, or the parameter tensor of a population model of neurons or glial cells. Neurotransmitter and neuromodulator models can be built as excitatory, inhibitory or modulatory. Receptor models can be built as ionotropic or metabotropic, and their response to a neurotransmitter or neuromodulator can be built as excitatory, inhibitory, modulatory or neutral.
Flow denotes the flow container, at the second level of the tree, and can represent models at the scale of the nerve fiber tracts linking brain regions. Each Flow can hold one or more Channels.
Channel denotes the channel container, at the third level of the tree, and can represent a model of a conduction tract composed of axons linking nuclei. Each Channel can hold one or more Links.
Link denotes the link container, at the fourth level of the tree, and can represent a model of a neural pathway composed of axons within a neural circuit. Each Link can hold one or more Edges.
Edge denotes the edge container, at the fifth level of the tree, and can represent models at the dendrite or synapse scale, or a population of synapses or of glial-cell processes. Dendrite-scale models can be built as apical dendrite, basal dendrite or dendritic spine models. Synapse models can be built as excitatory, inhibitory, modulatory or neutral. Each Edge can hold one or more EdgeParams. When an Edge represents a population of synapses of the same kind, the number of EdgeParams it holds is determined by the number of parameter kinds of the synapse model: each kind corresponds to one EdgeParam, which stores that parameter for all synapses in the Edge in a tensor arrangement.
EdgeParam denotes the edge parameter container, at the sixth (lowest) level of the tree, and can represent models at the molecular scale, the neurotransmitter or neuromodulator scale, or the receptor scale, or the parameter tensor of a population model of synapses or glial-cell processes. Molecular-scale models can be built as intracellular molecule models, cell-membrane molecule models and intercellular-space molecule models. Neurotransmitter and neuromodulator models can be built as excitatory, inhibitory or modulatory. Receptor models can be built as ionotropic or metabotropic, and their response to a neurotransmitter or neuromodulator can be built as excitatory, inhibitory, modulatory or neutral.
Param denotes the general parameter container, an auxiliary container. As modeling requires, a container at any of the above levels may carry one or more additional Params that hold parameter data in tensor form, or may carry none. A hypothetical fragment of the nested description syntax follows.
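As a concrete illustration of the nested syntax, the XML fragment below sketches how such a container tree might be declared. The patent names XML/JSON as the format but fixes no schema, so every element attribute here (id, name, count, dims, upstream, downstream, and the sample container names) is an assumption for illustration only:

```xml
<!-- Hypothetical sketch only: no concrete schema is specified. -->
<Network id="0" name="whole_brain">
  <Confluence id="0" name="hippocampus">          <!-- brain-region scale -->
    <Module id="0" name="CA3">                    <!-- nucleus scale -->
      <Layer id="0" name="pyramidal_layer">       <!-- neural-circuit scale -->
        <Node id="0" name="pyramidal_cells" count="1024">
          <NodeParam name="membrane_potential" dims="32,32,1,4"/>
          <NodeParam name="threshold"          dims="32,32,1,1"/>
        </Node>
      </Layer>
    </Module>
  </Confluence>
  <Flow id="0" name="perforant_path">             <!-- fiber-tract scale -->
    <Channel id="0">
      <Link id="0">
        <Edge id="0" name="input_to_ca3"
              upstream="hippocampus/CA3/pyramidal_layer/pyramidal_cells"
              downstream="hippocampus/CA3/pyramidal_layer/pyramidal_cells">
          <EdgeParam name="weight" dims="1024,1024,1,1"/>
        </Edge>
      </Link>
    </Channel>
  </Flow>
</Network>
```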
NodeParam, EdgeParam and Param hold their parameters internally in tensor (that is, multidimensional matrix) form. A tensor may have one to many dimensions, with the specific arrangement and usage specified by the user. For example, a tensor can be four-dimensional, with each parameter's position given by coordinates (x, y, z, t), where the x, y and z dimensions correspond to the spatial arrangement of the neural tissue models (such as neurons or synapses) represented in the parent container, and t is the time dimension, which can represent the caching and delay of temporal information and can be used to simulate the long-duration (delayed) action of neuromodulators on neurons and synapses. As another example, the parameters in a tensor can be shared by all or some of the neural tissue models (such as neurons or synapses) in the parent container, which can be used to simulate the large-area action of a neuromodulator on all neural tissue in a target region. A sketch of such a tensor follows.
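A minimal C++ sketch of a 4-D parameter tensor with the (x, y, z, t) indexing just described; the class name, the row-major layout and the circular treatment of t are illustrative assumptions:

```cpp
#include <cstddef>
#include <vector>

// (x, y, z) is the spatial position of a neuron/synapse model in its
// parent container; t is a circular time dimension used as a delay line,
// e.g. for slow neuromodulator effects.
class ParamTensor4D {
public:
    ParamTensor4D(std::size_t nx, std::size_t ny, std::size_t nz,
                  std::size_t nt)
        : nx_(nx), ny_(ny), nz_(nz), nt_(nt),
          data_(nx * ny * nz * nt, 0.0f) {}

    float& at(std::size_t x, std::size_t y, std::size_t z, std::size_t t) {
        // Row-major layout; t wraps so old slots are reused.
        return data_[((x * ny_ + y) * nz_ + z) * nt_ + (t % nt_)];
    }

    // Record a neuromodulator level that takes effect `delay` steps ahead.
    void scheduleModulation(std::size_t x, std::size_t y, std::size_t z,
                            std::size_t now, std::size_t delay, float v) {
        at(x, y, z, now + delay) += v;
    }

private:
    std::size_t nx_, ny_, nz_, nt_;
    std::vector<float> data_;
};
```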
Each of the containers above has a number and a name used for indexing within the multi-level tree. Each container has one or more control blocks (Control Block) that store statistics and control information, such as the network's traversal order and rules, the number of traversal passes the container has taken part in, whether its data is resident in main memory, whether its data is resident in coprocessor memory, and the frequency of hard-disk reads and writes; these are managed and updated by the rule manager and the scheduler.
A Flow and all of its child containers can each correspond to one or more upstream containers and one or more downstream containers, which are indexed and accessed by their numbers or names. The upstream and downstream containers can be containers at any level, and may be the same container or different containers. A Flow and all of its child containers can therefore form information flow paths with their upstream and downstream containers, representing the (unidirectional or bidirectional) flow and processing of information between two information sources (such as an upstream container and a downstream container). Multiple containers in the network can form arbitrary topologies of information flow.
This flow and processing of information can implement many biological brain mechanisms, such as the conduction of neural impulses between neurons through synapses, the exchange of information between synapses, and neuron and synapse plasticity.
The arbitrary topology of information flow can implement any neural-circuit connection pattern in the brain's nervous system, including feedback connections from a neuron back to itself, connections among neurons of the same population (layer), arbitrary connections between neurons of different populations (layers) (sequential/feedforward, cross-layer, feedback, and so on), and direct synapse-to-synapse connections, and it permits unlimited cyclic computation over feedback connections.
Detailed examples follow:
When an Edge represents one or more synapses and its corresponding upstream and downstream containers are both Nodes representing one or more neurons, then:
1. if the Edge's upstream and downstream containers are different Nodes, their topological relationship (for example, Node 1 -> Edge -> Node 2) can implement sequential/feedforward connections between different populations (layers) of neurons through synapses;
2. if the Edge's upstream and downstream containers are the same Node, their topological relationship (for example, Node 1 -> Edge -> Node 1) can implement mutual connections among neurons of the same population (layer) through synapses, or a feedback connection in which a neuron connects back to itself through an autapse;
3. if the Edge's upstream and downstream containers are Nodes from different Layers, their topological relationship can implement cross-layer connections between neurons of different layers through synapses.
When several Edges each represent one or more synapses, and several Nodes each represent one or more neurons, and they form a topology such as Node 1 -> Edge 1 -> Node N -> Edge N -> Node 1, then:
1. if the Nodes belong to different Layers, the topology can implement a neural circuit formed by feedforward, cross-layer and feedback connections between neurons of different layers;
2. if the Nodes belong to the same Layer, the topology can implement a feedback loop composed of one or more (or one to many populations of) different neurons.
In the preceding examples, the synapses in an Edge can access the neurons in the upstream and downstream containers to obtain their firing-time information, compute on it together with their own parameters (such as weights), and propagate the results to the neurons in the upstream and downstream containers. This can implement the conduction of neural impulses between neurons through synapses, as well as long- and short-term plasticity mechanisms such as Hebbian, anti-Hebbian and STDP; the neurons in a Node can change function or remodel (one kind of neuron plasticity) according to the information they receive (conveyed by neurotransmitters or neuromodulation).
When an Edge represents one or more synapses and at least one of its corresponding upstream and downstream containers is itself an Edge representing one or more synapses, their topological relationship (for example, Edge 1 -> Edge 2 -> Edge 3) can implement direct synapse-to-synapse connection and direct information exchange. A sketch of this upstream/downstream indexing follows.
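A small C++ sketch of the indexing scheme just illustrated: an edge container only stores references to its upstream and downstream containers, so feedforward, recurrent/autapse and synapse-to-synapse topologies need no special cases. The struct names and path strings are illustrative assumptions:

```cpp
#include <string>
#include <vector>

struct ContainerRef {
    std::string path;  // e.g. "confluence0/module0/layer1/node1"
};

struct EdgeContainer {
    std::string name;
    std::vector<ContainerRef> upstream;    // any level; may equal downstream
    std::vector<ContainerRef> downstream;
};

// Feedforward:        Node 1 -> Edge -> Node 2
EdgeContainer feedforward{"ff", {{"layer1/node1"}}, {{"layer2/node2"}}};
// Recurrent/autapse:  Node 1 -> Edge -> Node 1
EdgeContainer recurrent{"rec", {{"layer1/node1"}}, {{"layer1/node1"}}};
// Synapse-to-synapse: Edge 1 -> Edge 2
EdgeContainer synOnSyn{"mod", {{"link0/edge1"}}, {{"link0/edge2"}}};
```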
The parameter database described above stores the network's parameter data (including initialization parameters and runtime parameters). A parameter database may be a binary file or a text file; text files may use the CSV format or another character-delimited format. Every container can have one or more corresponding parameter databases: the parameters held by a NodeParam, EdgeParam or Param can be stored by one or more parameter databases, and several NodeParams, EdgeParams or Params can share one or more parameter databases that store the same parameters. The user can place each container's parameter databases in the corresponding subfolder of the model file path.
The modeling design this system supports can decompose a model at any level (or scale) into two parts, data and operation. As stated above, the data can be held by a NodeParam, EdgeParam or Param and stored by the corresponding parameter database; an operation is an executable program (such as a function, or a class containing functions) that can access and update that data.
For example, a neuron can be modeled with a traditional neuron model, designing its ReLU activation function as the operation and its threshold parameter as the data; or with a spiking neuron model, designing the function of its leaky integrate-and-fire model as the operation and its parameters as the data. A sketch of the latter follows.
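A minimal C++ sketch of the data/operation split for a leaky integrate-and-fire model: the data would live in NodeParam-style tensors, while the operation is a function that updates them in place. The struct layout, the leak factor and the vector sizes (assumed equal) are illustrative:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct LifData {                   // the "data" half, held by NodeParams
    std::vector<float> potential;  // membrane potential per neuron
    std::vector<float> threshold;  // firing threshold per neuron
    float leak = 0.95f;            // leak factor per time step
    float resetValue = 0.0f;
};

// The "operation" half: one simulation step over every neuron in a Node.
void lifStep(LifData& d, const std::vector<float>& inputCurrent,
             std::vector<std::uint8_t>& spikes) {
    for (std::size_t i = 0; i < d.potential.size(); ++i) {
        d.potential[i] = d.leak * d.potential[i] + inputCurrent[i];
        spikes[i] = d.potential[i] >= d.threshold[i];
        if (spikes[i]) d.potential[i] = d.resetValue;  // reset after firing
    }
}
```

Because the operation is an ordinary function, the same data could equally be updated by a CPU, ARM, DSP or GPU variant of it, which is the portability point made above.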
As another example, the user can define one or more operations through which the neurons in the same Node directly access and update one another's data (without going through an Edge) for fast information exchange, which can simulate the electrical synapses of the biological brain's nervous system.
As yet another example, when modeling, the user can define one or more operations through which the synapses in the same Edge directly access and update one another's data for fast information exchange, simulating the case in the biological brain where multiple synapses on one neuron's dendrites exchange information and perform logical operations, including mechanisms such as shunting inhibition.
In summary, because this system provides a flexible and unified modeling approach, and its multi-level tree-structured network description supports full-scale modeling of the biological brain's nervous system and flexible network topologies, modeling scale and modeling richness are organically unified and the models of all scales are fused into one neural network for computation. With data represented and stored in tensor form, the system can support spiking neural networks while remaining compatible with traditional neural networks (deep learning) and other algorithms whose main data representation is the tensor.
The operation manager 7 described above manages all operations that can run on this system. An operation can be a program (including code segments, functions and classes) that runs on a general-purpose CPU, ARM, DSP, GPU or another processor. All operations constitute the operation library. The operation manager 7 provides a programming interface for querying and retrieving a specified operation by its number or name. The user can specify in the model description module the operations each container should execute; at runtime the scheduler schedules them. This gives the system a degree of cross-hardware-platform portability: it can run on general-purpose CPU, GPU, ARM, DSP and other hardware platforms.
The configuration description module 1 described above describes the configuration parameters of the current network running environment, such as the memory pool sizes available to the system, the run mode (single pass, multiple passes, or continuous), the upper and lower limits on the frequency of hard-disk reads and writes, and the conditions for starting the synapse and neuron pruning and regeneration processes.
The configuration manager 6 described above reads the configuration description module 1 to obtain the system configuration parameters, and provides a programming interface for other components to call.
The network model object described above is constructed by the network builder 9 and resides in memory. It represents the entire network, including all containers, topological relationships and parameter data, and is the object that the scheduler schedules and runs.
The rule manager 12 described above reads the rules the user declares in the model description module 3 and interprets them when the scheduler 11 schedules computation on the network model object. The user can specify one or more rules for each container in the model description module 3; all rules constitute the rule library. The rule manager 12 sorts the rules in the rule library by preset priority; when several rules acting on one container conflict with one another, only the highest-priority rule is executed. When a container has no rule specified, the rule manager 12 applies a default rule.
The rules in the rule library include (but are not limited to): traversal rules, memory usage rules, data I/O rules, and synapse and neuron pruning and regeneration rules. Traversal rules can guide the scheduler to repeatedly traverse, or skip, all or specific containers of the network at a second preset time interval or a fourth preset traversal period, concentrating computing resources on computation-intensive sub-networks and improving data utilization. Memory usage rules can guide the scheduler in arranging the use of main memory and coprocessor memory. Data I/O rules can guide the scheduler in scheduling the frequency with which data is exchanged between main memory and coprocessor memory, and between memory and the hard disk, saving I/O resources and improving overall computational efficiency.
The data manager 8 described above comprises one or more decoders and encoders. A decoder reads and parses data files in a user-specified format and converts their content into data types the computer can compute on; an encoder serializes in-memory data into a user-specified format for writing back to the hard disk. Data files may be binary files or text files (in Unicode or ASCII). The user can add custom decoders and encoders to the data manager 8 to read and write files in custom formats.
The network builder 9 described above reads the model description module 3, parses the network's topology, reads the data files through the data manager 8, and constructs the network model object in memory.
The network manager 10 described above provides a programming interface for constructing the network model object, which calls the network builder 9 to do so. The network manager 10 also provides programming interfaces for accessing, traversing and manipulating the network model object, including support for querying and updating any container, neuron, synapse or parameter by number or name. Supported traversal orders include (but are not limited to):
1. depth-first traversal;
2. breadth-first traversal;
3. traversal according to the rules specified in the model description module.
Traversal can be implemented by (but is not limited to):
1. iterative (loop-based) traversal;
2. recursive traversal.
The scheduler 11 described above allocates hardware resources and schedules the computation process to keep computational efficiency optimal. The scheduler 11 manages one or more main memory pools and one or more device memory pools to allocate the network model object's use of main memory and of each device memory. A main memory pool manages the use of main memory; each coprocessor (which may be ARM, GPU, DSP or ASIC) has one or more corresponding device memory pools that manage the use of that device's memory. Their upper and lower capacity limits are specified by the user through the configuration description module 1.
The scheduler 11 manages one or more thread pools for dynamically assigning worker threads to multithreaded computation, balancing the computational load across the main computing units (such as CPUs or ARM), the coprocessors (such as ARM, GPU or DSP) and the I/O devices (hard disk, camera, audio input, control output, and so on).
The scheduler 11 manages one or more node data input buffers, one or more node data output buffers, one or more edge data input buffers and one or more edge data output buffers, which cache the data to be read from or written to the hard disk or I/O devices. They preferably use a ring-queue data structure. The capacity of each buffer, the upper and lower limits on the frequency of hard-disk or I/O-device reads and writes, and the upper and lower limits on their throughput are specified by the user through the configuration description module 1. According to the load on the processors, the hard disk and the I/O devices, the scheduler 11 schedules hard-disk and I/O-device reads and writes in good time to avoid I/O blocking.
Because the scheduler 11 is used to allocate the use of hardware resources such as processors and coprocessors, memory, hard disks and I/O devices in a balanced way, this system is suitable for efficient operation on embedded devices whose hardware resources (such as memory) are relatively limited.
This system provides the ability to automatically execute synapse and neuron pruning and regeneration according to specified trigger conditions and execution rules. The user can specify the trigger conditions that start a pruning or regeneration process in the configuration description module 1, and the execution rules of that process in the model description module 3. Execution rules can act on the entire network model object, or on sub-networks or specific containers. The pruning or regeneration process is scheduled and executed by the scheduler 11, and can run while the network is running or while it is paused.
The trigger conditions for starting a pruning or regeneration process can include (but are not limited to) one or more of the following:
1. user command: the user inputs a command to the system through the keyboard, mouse or other means, and the system executes the pruning or regeneration process immediately after receiving the command or after a first preset time;
2. continuous execution: whenever the network model or one of its sub-regions satisfies a rule of the pruning or regeneration process, that process is executed;
3. interval execution: the system automatically starts the pruning or regeneration process at a first preset time interval or a first preset traversal period.
The execution rules of the pruning process divide into synapse pruning rules and neuron pruning rules. The synapse pruning rules can include (but are not limited to) one or more of the following:
1. a parameter of a synapse and a statistic over all synapse parameters in a specified reference synapse set reach a first preset numerical relationship (for example, the synapse's weight is less than 1% of the average weight of all synapses in a specified edge), in which case the synapse is a synapse to be pruned;
2. a parameter of a synapse and a specified threshold reach a second preset numerical relationship (for example, the synapse's weight is less than 10.0), in which case the synapse is a synapse to be pruned;
3. a synapse has not been triggered for more than a second preset time or a second preset traversal period, in which case the synapse is a synapse to be pruned;
4. a synapse is marked by another computation process as prunable, in which case the synapse is a synapse to be pruned. A synapse to be pruned may then be pruned.
The neuron pruning rules can include (but are not limited to) one or more of the following:
1. a neuron has no input synapses, in which case the neuron is a neuron to be pruned;
2. a neuron has no output synapses, in which case the neuron is a neuron to be pruned;
3. a neuron has neither input nor output synapses, in which case the neuron is a neuron to be pruned;
4. a parameter of a neuron and a statistic over all neuron parameters in a specified reference neuron set reach a third preset numerical relationship (for example, the neuron's threshold is greater than the maximum threshold of all neurons in a specified node), in which case the neuron is a neuron to be pruned;
5. a parameter of a neuron and a specified threshold reach a fourth preset numerical relationship (for example, the neuron's threshold is greater than 1000.0), in which case the neuron is a neuron to be pruned;
6. a neuron has not fired for more than a third preset time or a third preset traversal period, in which case the neuron is a neuron to be pruned;
7. a neuron is marked by another computation process as prunable, in which case the neuron is a neuron to be pruned. A neuron to be pruned may then be pruned.
The execution rules of the regeneration process divide into neuron regeneration rules and synapse regeneration rules. The neuron regeneration rules can include (but are not limited to) one or more of the following:
1. the number of existing neurons in a node container and the total capacity of that node container reach a first preset ratio or a fifth preset numerical relationship, in which case neurons are regenerated at a second preset ratio of the total capacity or in a first preset quantity; the first and second preset ratios may be the same or different;
2. a node container regenerates neurons at a first preset rate, that is, at every preset time interval or preset traversal period, at a third preset ratio of its total capacity or in a second preset quantity;
3. a node container is marked by another computation process as needing new neurons, and regenerates neurons at a second preset rate (that is, at every preset time interval or preset traversal period, at a preset ratio of its total capacity or in a preset quantity).
The synapse regeneration rules can include (but are not limited to) one or more of the following:
1. the number of existing synapses in an edge container and the total capacity of that edge container reach a fourth preset ratio or a sixth preset numerical relationship, in which case synapses are regenerated at a fifth preset ratio of the total capacity or in a third preset quantity; the fourth and fifth preset ratios may be the same or different;
2. an edge container regenerates synapses at a third preset rate (that is, at every preset time interval or preset traversal period, at a preset ratio of its total capacity or in a preset quantity);
3. an edge container is marked by another computation process as needing new synapses, and regenerates synapses at a fourth preset rate (that is, at every preset time interval or preset traversal period, at a preset ratio of its total capacity or in a preset quantity);
4. a node container contains neurons that have no input or output synapses, in which case input or output synapses are regenerated for them in the corresponding edge containers.
The scheduler described above is responsible for scheduling the execution of synapse and neuron pruning and regeneration. When the conditions that start pruning or regeneration are triggered, the scheduler allocates one or more worker threads from the thread pool it manages, each responsible for some region or for specific containers of the network model object. Each worker thread traverses the containers in the region under its charge and executes the neuron and/or synapse pruning and/or regeneration processes according to the specified rules.
The regeneration of a neuron or synapse can consist of allocating the required memory in its container and constructing the corresponding object (new/construct object); the pruning of a neuron or synapse can consist of destructing the corresponding object in its container (delete/destruct object) and releasing the memory it occupied.
Because this system can automatically execute synapse and neuron pruning and regeneration according to specified conditions and rules, and offers a variety of flexible trigger conditions and execution rules for these processes, neural network developers are relieved of the burden of writing their own pruning and regeneration programs, which improves the flexibility and efficiency of development. The pruning and regeneration processes can be used alternately and in combination, which can optimize the coding efficiency of the neural network, greatly compress its size and the storage space it requires, save memory and improve computational efficiency, making this system suitable for running on embedded devices with limited hardware resources. By supporting synapse and neuron pruning and regeneration, this system lends itself to simulating rich mechanisms of the biological brain's nervous system (such as the apoptosis and regeneration of synapses and neurons in the hippocampus), and can better support brain-like intelligence and cognitive computing.
The log manager 5 described above records the logs generated while the system runs; the logs inform the user of the system's working status and anomalies, which facilitates debugging and maintenance. A log consists of a series of strings and timestamps, and can be displayed in a command-line environment or saved to a file and viewed with a text browser. The log manager consists of a logging programming interface and a log management service. The logging programming interface is called by the user's program and passes the log data to the log management service. The log management service runs on an independent thread to avoid blocking network computation; it sorts the received log data by timestamp and caches it in memory, and when the cached data reaches a certain volume it is saved to the hard disk in order and the cache is cleared.
The operation monitoring module 13 described above receives and responds to user input and manages the running state of the entire system. It is designed as a state machine with a default state, a network construction state, a network running state and a network paused state. It contains a message queue that receives and buffers user commands, and an independent thread that responds to the queued commands promptly so that the state machine can switch between states. Users can input commands through the keyboard, the mouse, a programming interface or other means. Commands include (but are not limited to): build-network, start-run, pause-run, end-run, synapse and neuron pruning, and synapse and neuron regeneration commands.
The running principle of the system is briefly described below with reference to Figure 3:
S1: the system starts and initializes the running environment;
S2: the running environment enters the default state;
S3: the configuration manager reads the configuration description module to obtain the configuration parameters;
S4: wait for command input;
S5: judge whether a build-network command has been received; once the judgment is yes, go to the next step;
S6: on receiving the build-network command, the running environment switches to the network construction state;
S7: initialize the network manager and the rule manager;
S8: the network builder reads the model description module and constructs the network model object, and the data manager reads the parameter databases;
S9: wait for command input;
S10: judge whether a start-run command has been received; if no, return to step S9 and wait for input again; if yes, go to the next step;
S11: the running environment enters the network running state;
S12: scheduled execution;
S13: judge whether a pause-run command has been received; if yes, go to step S14; if no, go to step S17;
S14: the running environment enters the network paused state;
S15: wait for command input;
S16: judge whether a start-run command has been received; if yes, return to step S11; if no, return to step S15;
S17: judge whether a specified stop condition has been reached (including receipt of an end-run command); if no, return to step S12; if yes, end the run.
When the system initializes, the state machine described above is in the default state, starts the message queue, and begins to accept user input. On receiving a build-network command, the state machine switches to the network construction state and constructs the network model object; on receiving a start-run command, the state machine switches to the network running state and performs network computation; on receiving a pause-run command, the state machine switches to the network paused state and suspends network computation; on receiving an end-run command, the state machine saves the network data to the hard disk, and the system terminates and exits. While the state machine is in the network running state or the network paused state, if a synapse and neuron pruning command is in the message queue, the pruning process is started through the scheduler; if a synapse and neuron regeneration command is in the message queue, the regeneration process is started through the scheduler. Because this system uses the operation monitoring module to manage its working state, the system can be switched to the network paused state when the application environment does not require network computation, saving power and making the system suitable for embedded systems.
The graphical display module 4 described above reads network data and displays it to the user, facilitating development, monitoring and debugging. The graphical display module 4 can read the network model object's data directly from memory, or read data saved on the hard disk. The graphical display module 4 uses an independent thread to avoid blocking network computation, so it can display in real time while the network is being scheduled and run, or display after the scheduled run has ended.
It should be noted that the expressions first to third preset time, first to second preset time interval, first to fourth preset traversal period, first to sixth preset numerical relationship, first to fifth preset ratio, first to third preset quantity, and first to fourth preset rate serve only to distinguish between the various preset times, preset time intervals, preset traversal periods, preset numerical relationships, preset ratios, preset quantities and preset rates; the specific values or ranges can be determined according to actual needs, and the embodiments of this application do not limit them. In addition, each of the aforementioned preset times, preset time intervals, preset traversal periods, preset numerical relationships, preset ratios, preset quantities and preset rates may be the same as or different from the others. For example, the lengths of the first to third preset times may be completely the same or completely different, or some may be the same while others differ. The embodiments of this application do not limit this either.
The above are only optional embodiments of this application and do not limit it. Those skilled in the art may make various modifications and variations to this application. Any modification, equivalent replacement or improvement made within the spirit and principles of this application shall be included within the scope of the claims of this application.

Claims (38)

  1. A spiking neural network computing system for brain-like intelligence and cognitive computing, characterized in that it comprises:
    a model description module, configured to provide the interface through which a user designs and describes the network model, and to specify the operations and rules to be executed for the network model object;
    a parameter database, configured to store the parameter data of the network model in the form of a parameter database;
    a configuration description module, configured to describe the configuration parameters of the current network running environment and the conditions for starting synapse and/or neuron pruning and regeneration processes;
    a configuration manager, configured to retrieve the relevant configuration parameters from the configuration description module;
    a network builder, configured to read the model description module, parse the topology of the network, read data files through the data manager, and construct the network model object in memory;
    a network manager, configured to construct, traverse, access and/or update the network model object;
    a rule manager, configured to read the rules declared by the user in the model description module, interpret the user-declared rules when the scheduler schedules computation on the network model object, and arbitrate conflicts between rules;
    a data manager, configured to read and parse the parameter database, convert data formats, and serialize data;
    a scheduler, configured to allocate hardware resources, schedule the computation process, and schedule the execution of the corresponding operations;
    an operation manager, configured to manage the operations that are run;
    a log manager, configured to record the logs generated while the system runs, record the working status of the system, and give prompts for abnormal states;
    an operation monitoring module, configured to receive and respond to user input commands and manage the running state of the system; and
    a graphical display module, configured to read and display network data.
  2. The spiking neural network computing system for brain-like intelligence and cognitive computing according to claim 1, wherein the model description module comprises a network description unit, a confluence description unit and a flow description unit;
    the network description unit is used to describe the network container and general parameter containers, describe the network's parameters and running rules, and point by link to one or more confluence description units and flow description units;
    the confluence description unit is used to describe at least one of the confluence container, the module container, the layer-group container, the node container, the node parameter container and the general parameter container, and to describe the module and layer-group division of the network's nodes and each network model object's parameters, runtime rules and commands;
    the flow description unit is used to describe at least one of the flow container, the channel container, the link container, the edge container, the edge parameter container and the general parameter container, and to describe the connection relationships of the network's edges and each network model object's parameters, runtime rules and commands.
  3. The system according to claim 2, wherein the model description module uses a multi-level tree-structured network description imitating the organization of the biological brain's nervous system; the confluence description unit supports arranging and organizing the network's nodes by preset layer groups and modules, representing the multi-level organization of neurons and associated glial cells in the biological brain; and the flow description unit supports grouping and hierarchically organizing the network's edges by topological similarity, representing the many ways synapses and the processes of associated glial cells are organized in the biological brain.
  4. The system according to claim 2, wherein the network description unit, the confluence description unit and the flow description unit all use the XML and/or JSON file format and adopt nested syntax.
  5. The system according to claim 1, wherein the parameter data comprises initialization parameter data and runtime parameter data.
  6. The system according to claim 1, wherein the parameter database is a binary file or a text file, the text file using the CSV file format or another character-delimited file format.
  7. The system according to claim 1, wherein the network model object comprises containers, topological relationships and/or parameter data, and the network model object is the object that the scheduler schedules and runs.
  8. The system according to claim 7, wherein a container comprises a number and/or a name used for indexing within the multi-level tree structure.
  9. The system according to claim 7, wherein a container has one or more control blocks for storing statistics and control information.
  10. The system according to claim 9, wherein the control block comprises at least one of: the network's traversal order and rules, the number of traversal passes already taken part in, whether data is resident in main memory, whether data is resident in coprocessor memory, and the frequency of hard-disk reads and writes; and the control block is managed and updated by the rule manager and the scheduler.
  11. The system according to claim 8 or 9, wherein the containers comprise:
    a network container, at the first level of the tree structure, for representing models at the whole-brain and behavioral scales;
    a confluence container, at the second level of the tree structure, for representing models at the brain-region scale;
    a module container, at the third level of the tree structure, for representing models at the nucleus scale;
    a layer-group container, at the fourth level of the tree structure, for representing models at the neural-circuit scale;
    a node container, at the fifth level of the tree structure, for representing models at the neuron or glial-cell scale, and for representing a population of neurons or glial cells;
    a node parameter container, at the sixth level of the tree structure, for representing models at the molecular scale, the receptor scale, or the neurotransmitter or neuromodulator scale, and/or for representing the parameter tensor of a population model of neurons or glial cells;
    a flow container, at the second level of the tree structure, for representing models at the scale of the nerve fiber tracts linking brain regions;
    a channel container, at the third level of the tree structure, for representing models of conduction tracts composed of axons linking nuclei;
    a link container, at the fourth level of the tree structure, for representing models of neural pathways composed of axons within neural circuits;
    an edge container, at the fifth level of the tree structure, for representing models at the dendrite or synapse scale, and/or for representing a population of synapses or of glial-cell processes;
    an edge parameter container, at the sixth level of the tree structure, for representing models at the molecular scale, the neurotransmitter or neuromodulator scale, or the receptor scale, and for representing the parameter tensor of a population model of synapses or glial-cell processes; and/or,
    a general parameter container, for holding parameter data in tensor form;
    the general parameter container is an auxiliary container, and a container at any level may carry one or more additional general parameter containers.
  12. The system according to claim 11, wherein the firing characteristics of the neuron model are built to include tonic firing, fast spiking, burst firing, peak firing and/or phasic firing;
    the neuron model's response to upstream input signals is built as different neural adaptation or sensitivity curves;
    the neuron model's mechanism of action on its downstream is built as an excitatory, inhibitory, modulatory and/or neutral model;
    the neuron model is built as a spiking neuron model and/or a traditional neuron model;
    the glial cell model is built as an astrocyte model, an oligodendrocyte model, a microglia model, a Schwann cell model and/or a satellite cell model.
  13. The system according to claim 11, wherein the neurotransmitter or neuromodulator model is built as an excitatory, inhibitory and/or modulatory model;
    the receptor model is built as an ionotropic and/or metabotropic model;
    the receptor model's response to a neurotransmitter or neuromodulator is built as an excitatory, inhibitory, modulatory and/or neutral model.
  14. The system according to claim 11, wherein the dendrite-scale model is built as an apical dendrite model, a basal dendrite model and/or a dendritic spine model;
    the synapse model is built as an excitatory, inhibitory, modulatory and/or neutral model.
  15. The system according to claim 11, wherein the molecular-scale model is built as an intracellular molecule model, a cell-membrane molecule model and/or an intercellular-space molecule model.
  16. The system according to claim 11, wherein the node parameter container, the edge parameter container and the general parameter container hold parameters internally in tensor form.
  17. The system according to claim 16, wherein the tensor has one or more dimensions, and the arrangement and use of the tensor are specified by the user.
  18. The system according to claim 17, wherein the tensor is configured as four-dimensional, and the position of each parameter in the tensor is given by coordinates (x, y, z, t), where the x, y and z dimensions correspond to the spatial arrangement of the neural tissue models represented in the parent container, and t denotes the time dimension, representing the caching and delay of temporal information and used to simulate the long-duration action of neuromodulators on neurons and/or synapses;
    the parameters in the tensor are shared by all or some of the neural tissue models in the parent container, and are used to simulate the large-area action of a neuromodulator on all neural tissue in a target region.
  19. The system according to claim 11, wherein the flow container and all of its child containers each correspond to one or more upstream containers and one or more downstream containers, which are indexed and accessed by the numbers or names of the upstream and downstream containers.
  20. The system according to claim 19, wherein the upstream and downstream containers are containers at any level, and are the same container or different containers.
  21. The system according to claim 11, wherein the flow container and all of its child containers form information flow paths with their upstream and downstream containers, representing the flow and processing of information between two information sources, and multiple containers in the network form arbitrary topologies of information flow.
  22. The system according to claim 21, wherein the flow and processing of information is used to implement at least one biological brain neural mechanism.
  23. The system according to claim 22, wherein the biological brain neural mechanism comprises at least one of: the conduction of neural impulses between neurons through synapses, the exchange of information between synapses, and neuron and synapse plasticity.
  24. The system according to claim 21, wherein the arbitrary topology of information flow is used to implement any neural-circuit connection pattern in the brain's nervous system, including at least one of: feedback connections from a neuron back to itself, connections among neurons of the same population, arbitrary connections between neurons of different populations, and direct synapse-to-synapse connections; and it permits unlimited cyclic computation over feedback connections.
  25. The system according to claim 1, wherein the model description module supports a modeling design that decomposes a model at any level into two parts, data and operation;
    the data is held by a node parameter container, an edge parameter container and/or a general parameter container, and stored by the corresponding parameter database;
    the operation is an executable program used to access and update the data, and runs on general-purpose CPUs, ARM, DSP, GPU and/or other processors, ensuring that the system is portable across hardware platforms.
  26. The system according to claim 1, wherein the model description module supports the user defining one or more operations through which the neurons in the same node container directly access and/or update one another's data for fast information exchange, simulating the electrical synapses of the biological brain's nervous system.
  27. The system according to claim 1, wherein the model description module supports the user defining one or more operations through which the synapses in the same edge container directly access and/or update one another's data for fast information exchange, simulating the case in the biological brain where multiple synapses on one neuron's dendrites exchange information and perform logical operations, including the shunting inhibition mechanism.
  28. The system according to claim 1, wherein the system supports automatically executing synapse and/or neuron pruning and regeneration according to preset trigger conditions and execution rules;
    the trigger conditions are specified by the user in the configuration description module;
    the execution rules are specified by the user in the model description module;
    the execution rules act on the network model object, and/or on sub-networks or specific containers;
    the synapse and/or neuron pruning and regeneration processes are scheduled and executed by the scheduler, and execute while the network is running and/or while the network is paused.
  29. The system according to claim 28, wherein the trigger conditions include one or more of the following:
    user command: the user inputs a command to the system through the keyboard, mouse or other means, and the system executes the pruning or regeneration process immediately after receiving the command or after a first preset time;
    continuous execution: when the network model or a sub-region of it satisfies a rule of the pruning or regeneration process, the pruning or regeneration process is executed;
    interval execution: the system automatically starts the pruning or regeneration process at a first preset time interval or a first preset traversal period.
  30. The system according to claim 28, wherein the execution rules of the pruning process comprise synapse pruning rules and/or neuron pruning rules;
    the synapse pruning rules include any one or more of the following:
    a parameter of a synapse and a statistic over all synapse parameters in a specified reference synapse set reach a first preset numerical relationship, in which case the synapse is a synapse to be pruned;
    a parameter of a synapse and a specified threshold reach a second preset numerical relationship, in which case the synapse is a synapse to be pruned;
    a synapse has not been triggered for more than a second preset time or a second preset traversal period, in which case the synapse is a synapse to be pruned;
    a synapse is marked for pruning, in which case the synapse is a synapse to be pruned;
    the neuron pruning rules include any one or more of the following:
    a neuron has no input synapses, in which case the neuron is a neuron to be pruned;
    a neuron has no output synapses, in which case the neuron is a neuron to be pruned;
    a neuron has neither input nor output synapses, in which case the neuron is a neuron to be pruned;
    a parameter of a neuron and a statistic over all neuron parameters in a specified reference neuron set reach a third preset numerical relationship, in which case the neuron is a neuron to be pruned;
    a parameter of a neuron and a specified threshold reach a fourth preset numerical relationship, in which case the neuron is a neuron to be pruned;
    a neuron has not fired for more than a third preset time or a third preset traversal period, in which case the neuron is a neuron to be pruned;
    a neuron is marked for pruning, in which case the neuron is a neuron to be pruned.
  31. The system according to claim 28, wherein the execution rules of the regeneration process comprise neuron regeneration rules and/or synapse regeneration rules;
    the neuron regeneration rules include any one or more of the following:
    the number of existing neurons in a node container and the total capacity of that node container reach a first preset ratio or a fifth preset numerical relationship, in which case neurons are regenerated at a second preset ratio of the total capacity or in a first preset quantity;
    a node container regenerates neurons at a first preset rate, at a third preset ratio of its total capacity or in a second preset quantity;
    a node container is marked as requiring regenerated neurons, and regenerates neurons at a second preset rate;
    the synapse regeneration rules include any one or more of the following:
    the number of existing synapses in an edge container and the total capacity of that edge container reach a fourth preset ratio or a sixth preset numerical relationship, in which case synapses are regenerated at a fifth preset ratio of the total capacity or in a third preset quantity;
    an edge container regenerates synapses at a third preset rate;
    an edge container is marked as requiring regenerated synapses, and regenerates synapses at a fourth preset rate;
    a node container contains neurons that have no input or output synapses, in which case input or output synapses are regenerated for them in the corresponding edge containers.
  32. The system according to claim 1, wherein the running states of the system include a default state, a network construction state, a network running state and/or a network paused state.
  33. The system according to claim 1, wherein in the model description module the user specifies one or more rules for each container, the one or more rules constituting a rule library;
    the rule manager sorts the rules in the rule library by preset priority; when several rules acting on one container conflict with one another, only the highest-priority rule is executed; when a container has no rule specified, the rule manager applies a default rule;
    the rules in the rule library include: traversal rules, memory usage rules, data I/O rules, and/or synapse and neuron pruning and regeneration rules;
    the traversal rules guide the scheduler to repeatedly traverse, or skip, all or specific containers of the network at a second preset time interval or a fourth preset traversal period, concentrating computing resources on computation-intensive sub-networks and improving data utilization;
    the memory usage rules guide the scheduler in arranging the use of main memory and/or coprocessor memory;
    the data I/O rules guide the scheduler in scheduling the frequency with which data is exchanged between main memory and coprocessor memory, and between memory and the hard disk.
  34. The system according to claim 1, wherein the scheduler manages one or more main memory pools and one or more device memory pools;
    the main memory pool manages the use of main memory;
    the device memory pool corresponds to a coprocessor and manages the use of that device's memory;
    the upper and lower capacity limits of the main memory pools and device memory pools are specified by the user through the configuration description module.
  35. The system according to claim 1 or 34, wherein the scheduler manages one or more thread pools for dynamically assigning worker threads to multithreaded computation, arranging the computational load of the main computing units, the coprocessors and/or the I/O devices.
  36. The system according to claim 35, wherein the scheduler manages one or more node data input buffers, one or more node data output buffers, one or more edge data input buffers and one or more edge data output buffers, which cache the data read from and written to the hard disk or I/O devices, so that the scheduler arranges hard-disk and I/O-device reads and writes according to the load on the processors, the hard disk and/or the I/O devices, avoiding I/O blocking.
  37. The system according to claim 36, wherein the capacity of each buffer, the upper and lower limits on the frequency of hard-disk or I/O-device reads and writes, and the upper and lower limits on hard-disk or I/O-device throughput are specified by the user through the configuration description module.
  38. A spiking neural network computing method for brain-like intelligence and cognitive computing, characterized in that the method uses the spiking neural network computing system for brain-like intelligence and cognitive computing according to any one of claims 1 to 37.
PCT/CN2020/099714 2019-07-02 2020-07-01 Spiking neural network computing system and method for brain-like intelligence and cognitive computing WO2021000890A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
JP2022500548A JP7322273B2 (ja) 2019-07-02 2020-07-01 脳型知能とコグニティブコンピューティングに使用されるスパイキングニューラルネットワーク演算システムおよび方法
GB2200490.7A GB2601643A (en) 2019-07-02 2020-07-01 Spiking neural network computing system and method for brain-like intelligence and cognitive computing
KR1020227003194A KR20220027199A (ko) 2019-07-02 2020-07-01 두뇌 모방 인텔리전트 및 인지 컴퓨팅에 적용하는 스파이킹 신경망 컴퓨팅 시스템 및 방법
EP20834339.2A EP3996004A4 (en) 2019-07-02 2020-07-01 PULSE NEURAL NETWORK COMPUTATION SYSTEM AND METHOD FOR BRAIN-LIKE INTELLIGENCE AND COGNITIVE COMPUTING
US17/623,753 US20220253675A1 (en) 2019-07-02 2020-07-01 Firing neural network computing system and method for brain-like intelligence and cognitive computing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910588964.5 2019-07-02
CN201910588964.5A CN110322010B (zh) 2019-07-02 2019-07-02 Spiking neural network computing system and method for brain-like intelligence and cognitive computing

Publications (1)

Publication Number Publication Date
WO2021000890A1 true WO2021000890A1 (zh) 2021-01-07

Family

ID=68122227

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/099714 WO2021000890A1 (zh) 2019-07-02 2020-07-01 Spiking neural network computing system and method for brain-like intelligence and cognitive computing

Country Status (7)

Country Link
US (1) US20220253675A1 (zh)
EP (1) EP3996004A4 (zh)
JP (1) JP7322273B2 (zh)
KR (1) KR20220027199A (zh)
CN (1) CN110322010B (zh)
GB (1) GB2601643A (zh)
WO (1) WO2021000890A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114399033A (zh) * 2022-03-25 2022-04-26 浙江大学 基于神经元指令编码的类脑计算系统和计算方法
CN114816067A (zh) * 2022-05-06 2022-07-29 清华大学 一种基于向量指令集实现类脑计算的方法及装置
WO2022177162A1 (ko) * 2021-02-18 2022-08-25 삼성전자주식회사 어플리케이션의 모델 파일을 초기화하는 프로세서 및 이를 포함하는 전자 장치
CN115879544A (zh) * 2023-02-28 2023-03-31 中国电子科技南湖研究院 一种针对分布式类脑仿真的神经元编码方法及系统

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110322010B (zh) * 2019-07-02 2021-06-25 深圳忆海原识科技有限公司 用于类脑智能与认知计算的脉冲神经网络运算系统及方法
CN112766470B (zh) * 2019-10-21 2024-05-07 地平线(上海)人工智能技术有限公司 特征数据处理方法、指令序列生成方法、装置及设备
CN110928833B (zh) * 2019-11-19 2021-01-22 安徽寒武纪信息科技有限公司 自适应算法运算装置以及自适应算法运算方法
CN111552563B (zh) * 2020-04-20 2023-04-07 南昌嘉研科技有限公司 一种多线程数据系统、多线程消息传递方法及系统
CN113688981B (zh) * 2020-05-19 2024-06-18 深圳忆海原识科技有限公司 具有记忆与信息抽象功能的类脑神经网络
CN111858989B (zh) * 2020-06-09 2023-11-10 西安工程大学 一种基于注意力机制的脉冲卷积神经网络的图像分类方法
US20210406661A1 (en) * 2020-06-25 2021-12-30 PolyN Technology Limited Analog Hardware Realization of Neural Networks
CN112270406B (zh) * 2020-11-11 2023-05-23 浙江大学 一种类脑计算机操作系统的神经信息可视化方法
CN112270407B (zh) * 2020-11-11 2022-09-13 浙江大学 支持亿级神经元的类脑计算机
CN112434800B (zh) * 2020-11-20 2024-02-20 清华大学 控制装置及类脑计算系统
CN112651504B (zh) * 2020-12-16 2023-08-25 中山大学 一种基于并行化的类脑仿真编译的加速方法
CN112987765B (zh) * 2021-03-05 2022-03-15 北京航空航天大学 一种仿猛禽注意力分配的无人机/艇精准自主起降方法
CN113222134B (zh) * 2021-07-12 2021-10-26 深圳市永达电子信息股份有限公司 一种类脑计算系统、方法及计算机可读存储介质
CN113283594B (zh) * 2021-07-12 2021-11-09 深圳市永达电子信息股份有限公司 一种基于类脑计算的入侵检测系统
CN114238707B (zh) * 2021-11-30 2024-07-05 中国电子科技集团公司第十五研究所 一种基于类脑技术的数据处理系统
CN114492770B (zh) * 2022-01-28 2024-10-15 浙江大学 一种面向循环脉冲神经网络的类脑计算芯片映射方法
WO2023238186A1 (ja) * 2022-06-06 2023-12-14 ソフトバンク株式会社 Nn成長装置、情報処理装置、ニューラル・ネットワーク情報の生産方法、およびプログラム
CN117709402A (zh) * 2022-09-02 2024-03-15 深圳忆海原识科技有限公司 模型构建方法、装置、平台、电子设备及存储介质
CN117709400A (zh) * 2022-09-02 2024-03-15 深圳忆海原识科技有限公司 层次化系统、运算方法、运算装置、电子设备及存储介质
CN117709401A (zh) * 2022-09-02 2024-03-15 深圳忆海原识科技有限公司 模型管理装置及用于神经网络运算的层次化系统
CN115392443B (zh) * 2022-10-27 2023-03-10 之江实验室 类脑计算机操作系统的脉冲神经网络应用表示方法及装置
CN116542291B (zh) * 2023-06-27 2023-11-21 北京航空航天大学 一种记忆环路启发的脉冲记忆图像生成方法和系统
CN117251275B (zh) * 2023-11-17 2024-01-30 北京卡普拉科技有限公司 多应用异步i/o请求的调度方法及系统、设备及介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120109864A1 (en) * 2010-10-29 2012-05-03 International Business Machines Corporation Neuromorphic and synaptronic spiking neural network with synaptic weights learned using simulation
CN108182473A (zh) * 2017-12-12 2018-06-19 中国科学院自动化研究所 基于类脑脉冲神经网络的全尺度分布式全脑模拟系统
CN108985447A (zh) * 2018-06-15 2018-12-11 华中科技大学 一种硬件脉冲神经网络系统
CN109858620A (zh) * 2018-12-29 2019-06-07 北京灵汐科技有限公司 一种类脑计算系统
CN110322010A (zh) * 2019-07-02 2019-10-11 深圳忆海原识科技有限公司 用于类脑智能与认知计算的脉冲神经网络运算系统及方法

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9460387B2 (en) * 2011-09-21 2016-10-04 Qualcomm Technologies Inc. Apparatus and methods for implementing event-based updates in neuron networks
EP3089080A1 (en) * 2015-04-27 2016-11-02 Universität Zürich Networks and hierarchical routing fabrics with heterogeneous memory structures for scalable event-driven computing systems
CN105095967B (zh) * 2015-07-16 2018-02-16 清华大学 一种多模态神经形态网络核
CN105913119B (zh) * 2016-04-06 2018-04-17 中国科学院上海微系统与信息技术研究所 行列互联的异构多核心类脑芯片及其使用方法
CN109816026B (zh) * 2019-01-29 2021-09-10 清华大学 卷积神经网络和脉冲神经网络的融合装置及方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120109864A1 (en) * 2010-10-29 2012-05-03 International Business Machines Corporation Neuromorphic and synaptronic spiking neural network with synaptic weights learned using simulation
CN108182473A (zh) * 2017-12-12 2018-06-19 中国科学院自动化研究所 基于类脑脉冲神经网络的全尺度分布式全脑模拟系统
CN108985447A (zh) * 2018-06-15 2018-12-11 华中科技大学 一种硬件脉冲神经网络系统
CN109858620A (zh) * 2018-12-29 2019-06-07 北京灵汐科技有限公司 一种类脑计算系统
CN110322010A (zh) * 2019-07-02 2019-10-11 深圳忆海原识科技有限公司 用于类脑智能与认知计算的脉冲神经网络运算系统及方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3996004A4

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022177162A1 (ko) * 2021-02-18 2022-08-25 삼성전자주식회사 어플리케이션의 모델 파일을 초기화하는 프로세서 및 이를 포함하는 전자 장치
CN114399033A (zh) * 2022-03-25 2022-04-26 浙江大学 基于神经元指令编码的类脑计算系统和计算方法
CN114816067A (zh) * 2022-05-06 2022-07-29 清华大学 一种基于向量指令集实现类脑计算的方法及装置
CN115879544A (zh) * 2023-02-28 2023-03-31 中国电子科技南湖研究院 一种针对分布式类脑仿真的神经元编码方法及系统
CN115879544B (zh) * 2023-02-28 2023-06-16 中国电子科技南湖研究院 一种针对分布式类脑仿真的神经元编码方法及系统

Also Published As

Publication number Publication date
CN110322010B (zh) 2021-06-25
CN110322010A (zh) 2019-10-11
KR20220027199A (ko) 2022-03-07
JP2022538694A (ja) 2022-09-05
GB2601643A (en) 2022-06-08
JP7322273B2 (ja) 2023-08-07
EP3996004A1 (en) 2022-05-11
EP3996004A4 (en) 2022-10-19
US20220253675A1 (en) 2022-08-11

Similar Documents

Publication Publication Date Title
WO2021000890A1 (zh) 用于类脑智能与认知计算的脉冲神经网络运算系统及方法
CN110215189A (zh) 一种基于云平台的大数据智能健康监护系统
Sloman The “semantics” of evolution: Trajectories and trade-offs in design space and niche space
CN114238707B (zh) 一种基于类脑技术的数据处理系统
Wu et al. micros. bt: An event-driven behavior tree framework for swarm robots
Zeigler Discrete event models for cell space simulation
CN110262275A (zh) 一种智能家居系统及其控制方法
CN113222134B (zh) 一种类脑计算系统、方法及计算机可读存储介质
Lejamble et al. A new software architecture for the wise object framework: Multidimensional separation of concerns
Ahuja et al. A connectionist processing metaphor for diagnostic reasoning
JP2021533517A (ja) データ処理モジュール、データ処理システム、およびデータ処理方法
WO2024046459A1 (zh) 模型管理装置及用于神经网络运算的层次化系统
Li Efficient and Practical Cluster Scheduling for High Performance Computing
von der Malsburg Ordered retinotectal projections and brain organization
CN102346815B (zh) 用于模拟生物竞争和进化过程的数字生物系统
Freeman Deconstruction of neural data yields biologically implausible periodic oscillations
Halford Competing, or perhaps complementary, approaches to the dynamic-binding problem, with similar capacity limitations
Thorpe Temporal synchrony and the speed of visual processing
Strong Phase logic is biologically relevant logic
CN110188017A (zh) 网络机房服务器与网络设备大数据采集装置及方法
Dawson et al. Making a middling mousetrap
Barnden Time phases, pointers, rules and embedding
KR0136877B1 (ko) 인지 시스템용 아키텍처
Hölldobler On the artificial intelligence paradox
Garson Must we solve the binding problem in neural hardware?

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20834339

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022500548

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 202200490

Country of ref document: GB

Kind code of ref document: A

Free format text: PCT FILING DATE = 20200701

ENP Entry into the national phase

Ref document number: 20227003194

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2020834339

Country of ref document: EP

Effective date: 20220202