WO2021233179A1 - Brain-like visual neural network with forward learning and meta-learning functions - Google Patents

Brain-like visual neural network with forward learning and meta-learning functions

Info

Publication number
WO2021233179A1
Authority
WO
WIPO (PCT)
Prior art keywords
neurons
neuron
connection
information
weight
Prior art date
Application number
PCT/CN2021/093354
Other languages
English (en)
French (fr)
Inventor
任化龙
Original Assignee
深圳忆海原识科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳忆海原识科技有限公司 filed Critical 深圳忆海原识科技有限公司
Publication of WO2021233179A1 publication Critical patent/WO2021233179A1/zh
Priority to US17/991,143 priority Critical patent/US20230079847A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/048 Activation functions
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06N3/08 Learning methods
    • G06N3/092 Reinforcement learning
    • G06N3/0985 Hyperparameter optimisation; Meta-learning; Learning-to-learn

Definitions

  • the embodiments of the present application relate to the technical fields of brain-like vision algorithms and spiking neural networks, and in particular to a brain-like visual neural network with forward learning and meta-learning functions.
  • the existing deep learning vision algorithms suffer from several problems.
  • the visual nervous system of the biological brain provides an excellent reference blueprint for designing brain-like visual neural networks.
  • the brain-like visual neural network should include at least two position encoding methods, namely, implicit position encoding (Implicit Position Encoding) and explicit position encoding (Explicit Position Encoding).
  • the implicit position encoding gives the feature-encoding neurons of each layer corresponding receptive fields through the corresponding connections from the picture to those neurons, instead of using a dedicated neural circuit to encode position information.
  • this method is not flexible enough: visual features cannot be flexibly combined at arbitrary positions, position information cannot itself be combined, abstracted, or processed, and the generalization ability of recognition is weak.
  • the explicit position encoding uses a dedicated neural circuit to encode position information; it can flexibly combine various visual features at arbitrary positions, can also combine, abstract, and process the position information itself, and can encode richer shape-position relationships. Its generalization ability in recognition is strong, and it can accurately recognize situations with strong shape-position relationship constraints.
  • the biological visual nervous system also has bottom-up and top-down bidirectional neural pathways, which have a priming effect and can help the visual search process. Brain-like visual neural networks should also learn from this feature.
  • the biological visual nervous system takes plasticity mechanisms as its core and supports multiple learning paradigms such as reinforcement learning, forward learning, and meta-learning. If a brain-like visual neural network adopts biologically plausible plasticity mechanisms, it can dispense with the training paradigm of error back-propagation and gradient descent, avoiding a large number of partial-differential calculations; it can then be expected to break through the von Neumann architecture and be more suitable for deployment in firmware or neuromorphic chips. In addition, a brain-like visual neural network should also have forward learning and meta-learning functions, so that it can quickly learn and encode the visual features of the pictures or video streams it sees, perform information abstraction, and find common representations between objects, thereby improving generalization ability, reducing the data required for training, and shortening the training period.
  • One of the purposes of the embodiments of this application is to provide a brain-like visual neural network with forward learning and meta-learning functions, aiming to solve the problems that existing machine vision neural networks cannot be compatible with multiple learning paradigms at the same time, that their generalization ability is relatively poor, that the training process requires a large number of partial-differential calculations, and that the training cycle is long.
  • the embodiment of the application provides a brain-like visual neural network with forward learning and meta-learning functions, including: several primary feature coding modules and several composite feature coding modules;
  • Each module includes multiple neurons
  • the neuron includes a primary feature coding neuron, a concrete feature coding neuron, and an abstract feature coding neuron;
  • the primary feature encoding module includes a plurality of the primary feature encoding neurons to encode primary visual feature information
  • the compound feature coding module includes a concrete feature coding unit and an abstract feature coding unit;
  • the concrete feature encoding unit includes a plurality of the concrete feature encoding neurons, which encodes concrete visual feature information;
  • the abstract feature encoding unit includes a plurality of the abstract feature encoding neurons, which encode abstract visual feature information
  • if a unidirectional connection is formed between neuron A and neuron B, it is denoted A->B; if a bidirectional connection is formed between neuron A and neuron B, it is denoted A<->B (that is, both A->B and A<-B);
  • neuron A is called the directly upstream neuron of neuron B
  • neuron B is called the directly downstream neuron of neuron A
  • if neuron A and neuron B have a bidirectional connection A<->B, then neuron A and neuron B are each other's direct upstream and direct downstream neurons;
  • if neuron A connects to neuron B through one or more intermediate neurons, neuron A is called an indirect upstream neuron of neuron B, and neuron B an indirect downstream neuron of neuron A;
  • the excitatory connection is: when the upstream neuron of the excitatory connection fires, the excitatory connection provides a non-negative input to the downstream neuron;
  • the inhibitory connection is: when the upstream neuron of the inhibitory connection fires, the inhibitory connection provides a non-positive input to the downstream neuron;
  • several of the primary feature encoding neurons each form one-way or two-way excitatory/inhibitory connections with several other primary feature encoding neurons;
  • several of the primary feature encoding neurons each form one-way or two-way excitatory/inhibitory connections with several of the concrete feature encoding neurons or several of the abstract feature encoding neurons located in at least one of the composite feature encoding modules;
  • several of the concrete feature encoding neurons and several of the abstract feature encoding neurons located in the same composite feature encoding module form one-way or two-way excitatory/inhibitory connections with each other;
  • several of the concrete feature encoding neurons and abstract feature encoding neurons in several of the composite feature encoding modules each form one-way or two-way excitatory/inhibitory connections with several of the concrete feature encoding neurons and abstract feature encoding neurons of several other composite feature encoding modules;
  • the neural network buffers and encodes information through the firing of the neurons, and encodes, stores, and transmits information through the connections between the neurons;
  • a picture or a video stream is input, and several pixel values of several pixels of each frame are multiplied by weights and input to several of the primary feature encoding neurons, so as to activate those primary feature encoding neurons;
  • each of the neurons accumulates membrane potential from its inputs and determines whether to fire; if it fires, each of its downstream neurons accumulates membrane potential in turn and then determines whether to fire, so that firing propagates through the neural network. The weight of the connection between an upstream neuron and a downstream neuron is either a constant value or is dynamically adjusted through a synaptic plasticity process;
  • the working process of the neural network includes: a forward memory process, a memory triggering process, an information aggregation process, a directional information aggregation process, an information transfer process, a memory forgetting process, a memory self-consolidation process, an information component adjustment process, a reinforcement learning process, a novelty signal modulation process, and a supervised learning process;
  • the synaptic plasticity process includes a unipolar upstream-firing-dependent synaptic plasticity process, a unipolar downstream-firing-dependent synaptic plasticity process, a unipolar upstream-and-downstream-firing-dependent synaptic plasticity process, a unipolar upstream-pulse-dependent synaptic plasticity process, a unipolar downstream-pulse-dependent synaptic plasticity process, a unipolar pulse-time-dependent synaptic plasticity process, an asymmetric bipolar pulse-time-dependent synaptic plasticity process, and a symmetric bipolar pulse-time-dependent synaptic plasticity process;
  • a number of the neurons are mapped to corresponding labels as output.
  • the several neurons of the neural network adopt impulse neurons or non-impulse neurons.
  • the embodiments of the present application include the following advantages:
  • the embodiments of the application provide a brain-like visual neural network with forward learning and meta-learning functions, comprising primary feature encoding modules and composite feature encoding modules, including active and automatic attention mechanisms and neural circuits that explicitly encode the position information of visual features. It has forward and reverse neural pathways, supports bottom-up and top-down bidirectional information processing, and adopts a variety of biologically plausible plasticity processes. It can perform forward learning, quickly encoding the visual representation information in an input picture or video stream into memory information, and carries out information abstraction and information component modulation to obtain the common feature information and difference feature information between objects, forming information channels with multiple information dimensions and levels of information abstraction. This improves generalization ability while retaining detailed information. It also supports reinforcement learning, supervised learning, and novelty signal modulation processes, and does not rely on the end-to-end training paradigm of error back-propagation and gradient descent. It breaks through bottlenecks of the existing deep learning theoretical system and provides a basis for neuromorphic design and application.
  • FIG. 1 is an overall block diagram of a brain-like visual neural network with forward learning and meta-learning functions provided by an embodiment of this application;
  • FIG. 2 is a schematic diagram of an input-side attention control unit and an output-side attention control unit in a composite feature encoding module of a brain-like visual neural network with forward learning and meta-learning functions in an embodiment of the application;
  • FIG. 3 is a schematic diagram of a position coding unit in a composite feature coding module of a brain-like visual neural network with forward learning and meta-learning functions in an embodiment of the application;
  • FIG. 4 is a topological schematic diagram of the input-side attention control unit, the concrete feature coding unit, and the abstract feature coding unit of a brain-like visual neural network with forward learning and meta-learning functions in an embodiment of this application;
  • FIG. 5 is a schematic diagram of the input-side attention control unit, the concrete feature coding unit, the abstract feature coding unit, and the output-side attention control unit of a brain-like visual neural network with forward learning and meta-learning functions in an embodiment of the application;
  • FIG. 6 is a schematic diagram of the position-encoding neuron topology of a corresponding subspace of a brain-like visual neural network with forward learning and meta-learning functions in an embodiment of the application;
  • FIG. 7 is a schematic diagram of a position-encoding neuron topology of a corresponding region of a brain-like visual neural network with forward learning and meta-learning functions in an embodiment of the application;
  • FIG. 8 is a schematic diagram of the projection relationship of the receptive fields of a brain-like visual neural network with forward learning and meta-learning functions in an embodiment of the application;
  • FIG. 9 is a schematic diagram of a forward neural pathway and a reverse neural pathway of a brain-like visual neural network with forward learning and meta-learning functions in an embodiment of the application;
  • FIG. 10 is a schematic diagram of the center-periphery topology structure of a brain-like visual neural network with forward learning and meta-learning functions in an embodiment of the application.
  • the embodiment of the application discloses a brain-like visual neural network with forward learning and meta-learning functions, including: several (such as 1 to 2) primary feature coding modules 1 and several (such as 3 to 3000) composite feature coding modules 2.
  • Each module includes multiple neurons.
  • the neurons include a primary feature encoding neuron 10, a concrete feature encoding neuron 210, and an abstract feature encoding neuron 220.
  • the primary feature encoding module 1 includes a plurality (for example, 2 million) of the primary feature encoding neurons 10, which encode primary visual feature information.
  • the composite feature encoding module 2 includes a concrete feature encoding unit 21 and an abstract feature encoding unit 22.
  • the concrete feature encoding unit 21 includes a plurality (for example, 100,000) of the concrete feature encoding neuron 210, which encodes the concrete visual feature information.
  • the abstract feature encoding unit 22 includes a plurality (such as 100,000) of the abstract feature encoding neurons 220, which encode abstract visual feature information.
  • if a unidirectional connection is formed between neuron A and neuron B, it is denoted A->B; if a bidirectional connection is formed between neuron A and neuron B, it is denoted A<->B (that is, both A->B and A<-B).
  • neuron A is called the directly upstream neuron of neuron B
  • neuron B is called the directly downstream neuron of neuron A
  • if neuron A and neuron B have a bidirectional connection A<->B, then neuron A and neuron B are each other's direct upstream and direct downstream neurons.
  • if neuron A connects to neuron B through one or more intermediate neurons, neuron A is called an indirect upstream neuron of neuron B, and neuron B is called an indirect downstream neuron of neuron A.
  • for example, if neuron A connects to neuron B through an intermediate neuron D (A->D->B), then neuron D is called a direct upstream neuron of neuron B.
  • the excitatory connection is: when the upstream neuron of the excitatory connection fires, the excitatory connection provides a non-negative input to the downstream neuron.
  • the inhibitory connection is: when the upstream neuron of the inhibitory connection fires, the inhibitory connection provides a non-positive input to the downstream neuron.
  • several of the primary feature encoding neurons 10 each form one-way or two-way excitatory/inhibitory connections with several other (such as 1 to 20) primary feature encoding neurons 10.
  • several of the primary feature encoding neurons 10 each form one-way or two-way excitatory/inhibitory connections with several (such as 10 to 1000) of the concrete feature encoding neurons 210 or abstract feature encoding neurons 220 located in at least one composite feature encoding module 2.
  • several (e.g., 50,000) of the concrete feature encoding neurons 210 and abstract feature encoding neurons 220 located in the same composite feature encoding module 2 form one-way or two-way excitatory/inhibitory connections with each other.
  • several of the concrete feature encoding neurons 210 and abstract feature encoding neurons 220 in several of the composite feature encoding modules 2 each form one-way or two-way excitatory/inhibitory connections with several other (e.g., 1 to 300) concrete feature encoding neurons 210 and abstract feature encoding neurons 220 of a number (for example, 2000) of other composite feature encoding modules 2.
  • the neural network buffers and encodes information through the firing of the neurons, and encodes, stores, and transmits information through the connections between the neurons.
  • a picture or video stream is input, and the R, G, and B pixel values of each pixel of each frame are multiplied by weights and input to several (such as 2 to 30) of the primary feature coding neurons 10, so as to activate those primary feature coding neurons 10.
  • each neuron accumulates membrane potential from its inputs and determines whether to fire; if it fires, each of its downstream neurons accumulates membrane potential in turn and then determines whether to fire, so that firing propagates through the neural network. The weight of the connection between an upstream neuron and a downstream neuron is either a constant value or is dynamically adjusted through a synaptic plasticity process.
  • the working process of the neural network includes: a forward memory process, a memory triggering process, an information aggregation process, a directional information aggregation process, an information transfer process, a memory forgetting process, a memory self-consolidation process, an information component adjustment process, a reinforcement learning process, a novelty signal modulation process, and a supervised learning process.
  • the synaptic plasticity process includes a unipolar upstream-firing-dependent synaptic plasticity process, a unipolar downstream-firing-dependent synaptic plasticity process, a unipolar upstream-and-downstream-firing-dependent synaptic plasticity process, a unipolar upstream-pulse-dependent synaptic plasticity process, a unipolar downstream-pulse-dependent synaptic plasticity process, a unipolar pulse-time-dependent synaptic plasticity process, an asymmetric bipolar pulse-time-dependent synaptic plasticity process, and a symmetric bipolar pulse-time-dependent synaptic plasticity process.
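As an illustration of the pulse-time-dependent variants listed above, an asymmetric bipolar pulse-time-dependent update can be sketched as a classic STDP-style rule. The function name, learning rates, and time constant below are illustrative assumptions, not values taken from the application:

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau=20.0, w_min=0.0, w_max=1.0):
    """One asymmetric bipolar pulse-time-dependent weight update.

    t_pre / t_post are the spike times (ms) of the upstream / downstream
    neuron. An upstream spike shortly before a downstream spike
    potentiates the connection; the reverse order depresses it.
    All constants here are illustrative, not from the application.
    """
    dt = t_post - t_pre
    if dt > 0:                      # pre before post -> potentiate
        w += a_plus * math.exp(-dt / tau)
    elif dt < 0:                    # post before pre -> depress
        w -= a_minus * math.exp(dt / tau)
    return min(max(w, w_min), w_max)
```

A symmetric bipolar variant would apply the same sign of change for both spike orders, while the unipolar variants condition the update on only one side's activity.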
  • a number of the neurons are mapped to corresponding labels as output. For example, 100,000 abstract feature coding neurons 220 of the high-level information channel are mapped to corresponding labels as output.
  • the several neurons of the neural network adopt impulse neurons or non-impulse neurons.
  • all primary feature encoding neurons 10, concrete feature encoding neurons 210, abstract feature encoding neurons 220, and interneurons use impulse neurons.
  • an impulse neuron may be realized as a leaky integrate-and-fire neuron (the LIF neuron model); a non-impulse neuron may be realized as an artificial neuron from a deep neural network (for example, one using a ReLU activation function).
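A minimal sketch of the leaky integrate-and-fire model named above, reusing the resting potential and threshold values given later in this description; the class name, time constant, and reset-to-rest behavior are assumptions:

```python
class LIFNeuron:
    """Leaky integrate-and-fire neuron: the membrane potential leaks
    toward the resting potential each step, integrates the input, and
    on reaching the threshold the neuron fires and resets."""

    def __init__(self, v_rest=-70.0, threshold=-25.0, tau=20.0, dt=1.0):
        self.v_rest = v_rest        # resting potential (mV)
        self.threshold = threshold  # firing threshold (mV)
        self.tau = tau              # membrane time constant (steps)
        self.dt = dt
        self.v = v_rest             # current membrane potential (mV)

    def step(self, input_current):
        self.v += (self.v_rest - self.v) * self.dt / self.tau  # leak
        self.v += input_current                                # integrate
        if self.v >= self.threshold:                           # fire
            self.v = self.v_rest
            return True
        return False
```

Driving such a neuron with a constant input makes it fire periodically, which is how firing propagates downstream in the description above.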
  • several neurons of the neural network are self-excited neurons; the self-excited neurons include conditional self-excited neurons and unconditional self-excited neurons.
  • if a conditional self-excited neuron is not excited by external input within a first preset time interval, it self-excites with probability P.
  • the unconditional self-excited neuron automatically gradually accumulates the membrane potential without external input.
  • when the unconditional self-excited neuron fires, it restores the membrane potential to the resting potential and restarts the accumulation process.
  • Step m2: take the weighted sum of all inputs and superimpose it on Vm;
  • where Vm is the membrane potential, Vc is the accumulation constant, Vrest is the resting potential, and threshold is the firing threshold; for example, Vc = 5 mV, Vrest = -70 mV, threshold = -25 mV.
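The accumulation rule above, with the example constants Vc = 5 mV, Vrest = -70 mV, and threshold = -25 mV, can be sketched as follows (the function name and the firing-time bookkeeping are illustrative):

```python
def run_unconditional_self_excited(steps, inputs=None,
                                   vc=5.0, v_rest=-70.0, threshold=-25.0):
    """Unconditional self-excited neuron: each step, Vm grows by the
    accumulation constant Vc plus any weighted external input (step m2);
    on reaching the threshold the neuron fires and Vm returns to the
    resting potential, restarting the accumulation."""
    vm = v_rest
    fired_at = []
    for t in range(steps):
        vm += vc + (inputs[t] if inputs else 0.0)
        if vm >= threshold:
            fired_at.append(t)
            vm = v_rest
    return fired_at
```

With no external input and these constants, the neuron fires every (threshold - Vrest) / Vc = 9 steps.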
  • if the conditional self-excited neuron is not excited by an external input within the first preset time interval (for example, configured as 10 minutes), it self-excites with probability P.
  • the conditionally self-excited neuron records any one or more of the following information:
  • the calculation rule of the probability P includes any one or more of the following:
  • P is positively related to the total number of all output connections.
  • the calculation rules for the activation intensity or firing rate Fs of the conditional self-excited neurons during self-excitation include any one or more of the following:
  • Fs is positively correlated with the duration of the most recent excitation
  • Fs is positively correlated with the total number of executions of the synaptic plasticity process of each input connection recently
  • Fs is positively correlated with the total number of executions of the synaptic plasticity process of each output connection recently
  • Fs is positively correlated with the total number of all input connections
  • Fs is positively correlated with the total number of all output connections.
  • if the conditional self-excited neuron is a pulse neuron, P is the probability of firing a series of pulses; if it fires, the firing rate is Fs, and if it does not fire, the firing rate is 0.
  • if the conditional self-excited neuron is a non-pulse neuron, P is the probability of becoming activated; if it is activated, the activation intensity is Fs, and if it is not activated, the activation intensity is 0.
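The conditional case can be sketched as a single probabilistic draw once the first preset time interval has elapsed without external input. Here P and Fs are fixed illustrative constants, although the rules above compute both from connection counts and recent activity:

```python
import random

def conditional_self_excite(seconds_since_input, first_interval=600.0,
                            p=0.1, fs=5.0, rng=random.random):
    """Conditional self-excited neuron: silent while external input is
    recent; after the first preset interval (e.g. 600 s = 10 minutes)
    it self-excites with probability P, producing activation intensity
    or firing rate Fs, and otherwise 0."""
    if seconds_since_input < first_interval:
        return 0.0
    return fs if rng() < p else 0.0
```

Passing a deterministic `rng` makes the draw reproducible for testing.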
  • for example, 500,000 to 1 million of the primary feature encoding neurons 10, 10 million of the concrete feature encoding neurons 210, 10 million of the abstract feature encoding neurons 220, and 1 million of the input-side attention regulating neurons 230 each adopt the conditional self-excited neuron; 500,000 to 1 million of the primary feature encoding neurons 10 adopt unconditional self-excited neurons.
  • each neuron and each connection can be represented by a vector or matrix, and the operation of the neural network is expressed as a vector or matrix operation.
  • parameters of the same kind across the neurons and connections (such as neuron firing rates and connection weights) can be organized into vectors, so that the signal propagation of the neural network can be expressed as the dot product of a neuron firing-rate vector with the weight vector of the connections (that is, a weighted sum of the inputs).
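A one-line sketch of this vectorized propagation, with plain Python lists standing in for the firing-rate and weight vectors:

```python
def weighted_sum(firing_rates, weights):
    """Signal propagation as a dot product: the upstream firing-rate
    vector dotted with the connection-weight vector gives the
    weighted-sum input of a downstream neuron."""
    return sum(r * w for r, w in zip(firing_rates, weights))
```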
  • each neuron and each connection can also be realized as an object (an Object in object-oriented programming); the operation of the neural network is then manifested as calls on objects and the transfer of information between objects.
  • the neural network may also be implemented in firmware (for example, FPGA) or ASIC (for example, neuromorphic chip).
  • the connections of the neural network can be replaced by a convolution operation; for example, all connections between each primary feature coding neuron 10 and each concrete feature coding neuron 210 can be replaced by a convolution operation.
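One way such a bank of connections collapses into a convolution: a single kernel of shared weights slides over the grid of primary feature coding neurons. The plain "valid" cross-correlation below is a sketch of that idea, not the application's specific operation:

```python
def conv2d_valid(image, kernel):
    """2D 'valid' convolution (cross-correlation form): each output
    neuron takes the weighted sum of the input patch under the shared
    kernel, replacing individually stored connection weights."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            row.append(sum(image[y + dy][x + dx] * kernel[dy][dx]
                           for dy in range(kh) for dx in range(kw)))
        out.append(row)
    return out
```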
  • the composite feature encoding module 2 may further include an input-side attention control unit and an output-side attention control unit.
  • the neurons further include input-side attention-regulating neuron 230 and output-side attention-regulating neuron 240.
  • the input-side attention control unit includes several (such as 100,000) input-side attention control neurons 230.
  • the output-side attention control unit includes several (for example, 100,000) output-side attention control neurons 240.
  • Several (e.g., 50,000) of the input-side attention regulating neurons 230 can each accept one-way or two-way excitatory/inhibitory connections from a number (e.g., 10,000 to 100,000) of the primary feature encoding neurons 10.
  • Each input-side attention control neuron 230 forms a one-way or two-way excitatory connection with several (e.g., 1 to 1,000) of the concrete feature encoding neurons 210 / abstract feature encoding neurons 220 of the composite feature encoding module 2 where it is located.
  • Each input-side attention control neuron 230 receives one-way or two-way excitatory connections from several (such as 10,000 to 100,000) of the concrete feature encoding neurons 210 / abstract feature encoding neurons 220 / output-side attention regulating neurons 240 of other composite feature encoding modules 2.
  • Each input-side attention-regulating neuron 230 may also form a unidirectional or bidirectional excitatory connection with several other (eg, 1,000) input-side attention-regulating neurons 230, respectively.
  • the attention-regulated neuron 230 forms a unidirectional or bidirectional excitatory connection.
  • Each of the output-side attention regulating neurons 240 respectively receives one-way or two-way excitatory connections from several (e.g., 1 to 1,000) of the concrete feature encoding neurons 210 / abstract feature encoding neurons 220 of the composite feature encoding module 2 where it is located.
  • Each of the output-side attention-regulating neurons 240 may also form a one-way or two-way excitatory connection with several other (for example, 1,000) of the output-side attention-regulating neurons 240, respectively.
  • Each input side attention control neuron 230 may have an input side attention control terminal 31; each output side attention control neuron 240 may have an output side attention control terminal 32.
  • the working process of the neural network also includes an active attention process and an automatic attention process.
  • the active attention process is: the strength of the attention control signal applied at the input-side attention control terminal 31 (its amplitude can be positive, negative, or 0) adjusts the activation intensity, firing rate, or pulse firing phase of each input-side attention regulating neuron 230, which in turn controls the information entering the corresponding concrete feature encoding unit 21 and abstract feature encoding unit 22 and adjusts the size and proportion of the various information components; or, the strength of the attention control signal applied at the output-side attention control terminal 32 (its amplitude can be positive, negative, or 0) adjusts the activation intensity, firing rate, or pulse firing phase of each output-side attention regulating neuron 240, which in turn controls the information output from the corresponding concrete feature encoding unit 21 and abstract feature encoding unit 22 and adjusts the size and proportion of the various information components.
  • the automatic attention process is: when several neurons connected to the input-side attention-regulating neuron 230 are activated, these input-side attention-regulating neurons 230 are more easily activated, so that relevant information components are easier to input To the corresponding concrete feature encoding unit 21 and abstract feature encoding unit 22; or, when several neurons connected to the output-side attention-regulating neuron 240 are activated, these output-side attention-regulating neurons 240 are more easily activated, thereby It makes it easier for relevant information components to be output from the corresponding concrete feature encoding unit 21 and abstract feature encoding unit 22.
  • the neural network includes one or more information channels.
  • the working process of the neural network also includes the process of automatically forming information channels.
  • the automatic formation process of the information channels is: by performing any one or more of the forward memory process, the memory triggering process, the information aggregation process, the directional information aggregation process, the information transfer process, the memory forgetting process, the memory self-consolidation process, the information component adjustment process, the active attention process, and the automatic attention process, the connection relationships and weights between the neurons are adjusted so that the neural network forms one or more information channels, each encoding one or more information components; the information channels may overlap one another.
  • the neural network can also be made to form one or more information channels by presetting the initial connection relationships and initial parameters (such as connection weights, neuron thresholds, initial membrane potentials of the neurons, and initial time constants of the neurons); each such information channel encodes one or more preset information components.
  • the information channel includes a primary information channel.
  • the primary information channel is: all the primary feature coding neurons 10 and their connections constitute the primary information channel.
  • the primary information channel includes a primary contrast information channel, a primary orientation information channel, a primary edge information channel, and a primary color block information channel.
• the primary contrast information channel is: several adjacent pixels in the input image are selected as center-area pixels, and several pixels around them are selected as surround-area pixels; the pixel values of each center-area pixel and each surround-area pixel are respectively multiplied by weights and input to several of the primary feature encoding neurons 10 to form a center-surround topology. These primary feature encoding neurons 10 and their connections form one to many primary contrast information channels.
• by selecting one or more groups of adjacent pixels covering various positions and areas of the picture space, and multiplying the pixel values of these pixels by one or more weights, several primary orientation information channels, primary edge information channels, primary color patch information channels, or combinations thereof, each with one or more receptive fields, can also be configured.
• for example, the R, G, and B values of each pixel are multiplied by a negative weight, a negative weight, and a positive weight, respectively, and input to a number (such as 1 to 2) of the primary feature encoding neurons 10; these primary feature encoding neurons 10 and their connections constitute a blue-sensitive color patch information channel.
  • the primary contrast information channel includes a light-dark contrast information channel, a dark-light contrast information channel, a red-green contrast information channel, a green-red contrast information channel, a yellow-blue contrast information channel, and a blue-yellow contrast information channel.
• the light-dark contrast information channel is: the R, G, and B pixel values of each (such as 9) center-area pixel are respectively multiplied by a positive weight (such as +1) and input to several (such as 1 to 10) of the primary feature encoding neurons 10; the R, G, and B pixel values of each (such as 72) surround-area pixel are respectively multiplied by a negative weight (such as -1) and input to these primary feature encoding neurons 10; these primary feature encoding neurons 10 and their connections constitute a light-dark contrast information channel.
• the dark-light contrast information channel is: the R, G, and B pixel values of each (for example, 9) center-area pixel are respectively multiplied by a negative weight (for example, -1) and input to several (for example, 1 to 10) of the primary feature encoding neurons 10; the R, G, and B pixel values of each (such as 72) surround-area pixel are respectively multiplied by a positive weight (such as +1) and input to these primary feature encoding neurons 10; these primary feature encoding neurons 10 and their connections constitute a dark-light contrast information channel.
• the red-green contrast information channel is: the R, G, and B pixel values of each (such as 4) center-area pixel are respectively multiplied by a positive weight (such as +2), a negative weight (such as -2), and a positive weight (such as +1) and input to several (such as 1 to 10) of the primary feature encoding neurons 10; the R, G, and B pixel values of each (such as 32) surround-area pixel are respectively multiplied by a negative weight (such as -2), a positive weight (such as +2), and a positive weight (such as +1) and input to these primary feature encoding neurons 10; these primary feature encoding neurons 10 and their connections constitute a red-green contrast information channel.
• the green-red contrast information channel is: the R, G, and B pixel values of each (such as 4) center-area pixel are respectively multiplied by a negative weight (such as -2), a positive weight (such as +2), and a positive weight (such as +1) and input to several (such as 1 to 10) of the primary feature encoding neurons 10; the R, G, and B pixel values of each (such as 32) surround-area pixel are respectively multiplied by a positive weight (such as +2), a negative weight (such as -2), and a positive weight (such as +1) and input to these primary feature encoding neurons 10; these primary feature encoding neurons 10 and their connections form a green-red contrast information channel.
• the yellow-blue contrast information channel is: the R, G, and B pixel values of each (such as 4) center-area pixel are respectively multiplied by a positive weight (such as +2), a positive weight (such as +2), and a negative weight (such as -4) and input to several (such as 1 to 10) of the primary feature encoding neurons 10; the R, G, and B pixel values of each (such as 32) surround-area pixel are respectively multiplied by a negative weight (such as -2), a negative weight (such as -2), and a positive weight (such as +4) and input to these primary feature encoding neurons 10; these primary feature encoding neurons 10 and their connections form a yellow-blue contrast information channel.
• the blue-yellow contrast information channel is: the R, G, and B pixel values of each (such as 4) center-area pixel are respectively multiplied by a negative weight (such as -2), a negative weight (such as -2), and a positive weight (such as +4) and input to several (such as 1 to 10) of the primary feature encoding neurons 10; the R, G, and B pixel values of each (such as 32) surround-area pixel are respectively multiplied by a positive weight (such as +2), a positive weight (such as +2), and a negative weight (such as -4) and input to these primary feature encoding neurons 10; these primary feature encoding neurons 10 and their connections constitute a blue-yellow contrast information channel.
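The center-surround weighting described above can be sketched as a single weighted sum per primary feature-encoding neuron. The function below is a minimal illustration, not part of the patent text; the weight vectors follow the examples given (light-dark: center (+1,+1,+1) and surround (-1,-1,-1); red-green: center (+2,-2,+1) and surround (-2,+2,+1)).

```python
import numpy as np

def channel_input(center_px, surround_px, w_center, w_surround):
    """Total weighted input delivered to one primary feature-encoding neuron
    (10) by a center-surround contrast channel: each pixel's (R, G, B) values
    are multiplied by the channel's per-color weights and summed."""
    center_px = np.asarray(center_px, dtype=float)      # shape (Nc, 3)
    surround_px = np.asarray(surround_px, dtype=float)  # shape (Ns, 3)
    return float((center_px @ np.asarray(w_center, dtype=float)).sum()
                 + (surround_px @ np.asarray(w_surround, dtype=float)).sum())

# Example weight pairs (center, surround) from the text:
LIGHT_DARK = ([1.0, 1.0, 1.0], [-1.0, -1.0, -1.0])
RED_GREEN = ([2.0, -2.0, 1.0], [-2.0, 2.0, 1.0])
```

With a bright center on a dark surround, the light-dark channel responds strongly; a uniform field drives it negative because the larger surround dominates.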
• the primary visual feature information includes light-dark contrast information, dark-light contrast information, red-green contrast information, green-red contrast information, yellow-blue contrast information, blue-yellow contrast information, primary edge information, primary orientation information, receptive field information, and color block information.
  • the primary information channel further includes a primary optical flow information channel.
• the primary optical flow information channel is: optical flow is calculated for each pixel of the input image to obtain the direction value and velocity value of the optical flow motion; the different direction values and velocity values are combined, multiplied by weights, and input to several (for example, 1 to 10) of the primary feature encoding neurons 10; these primary feature encoding neurons 10 and their connections constitute a primary optical flow information channel.
  • the primary visual feature information also includes optical flow information.
  • the composite feature encoding module 2 may further include several (for example, 1 to 10) position encoding units 25.
  • the neuron also includes a position-encoding neuron 250.
  • the position coding unit 25 includes several (for example, 1,000 to 10,000) of the position coding neurons 250 to encode position information (visual features relative to the picture space or relative to other visual features).
  • Each of the position coding units 25 corresponds to several subspaces in the picture space, and each subspace may have an intersection.
• Each position coding neuron 250 corresponds to a region at the corresponding position in each subspace corresponding to the position coding unit 25 where it is located, and accepts unidirectional or bidirectional excitatory connections from several (such as 1 to 10,000) neurons (such as primary feature-encoding neurons 10) whose receptive fields lie in these regions; the projection relationship of the receptive fields can be seen in Figure 8.
  • a number (such as each) of the position coding neurons 250 respectively form a unidirectional or bidirectional excitatory connection with other position coding neurons 250 corresponding to the same region.
• the position coding neurons 250 can also respectively form unidirectional or bidirectional excitatory connections with several (e.g., 1 to 1,000) of the input-side attention-regulating neurons 230/output-side attention-regulating neurons 240/concrete feature coding neurons 210/abstract feature coding neurons 220 located in the composite feature coding module 2 where they are located.
• a number (e.g., 1,000 to 50,000) of the position coding neurons 250 can also respectively form unidirectional or bidirectional excitatory connections with several (e.g., 1 to 1,000) of the input-side attention-regulating neurons 230/concrete feature encoding neurons 210/abstract feature encoding neurons 220 located in other composite feature coding modules 2.
  • the position coding neurons 250A, 250B, 250C, and 250D respectively correspond to some subspaces/regions in the picture space, and their corresponding subspaces/regions have an intersection with each other; the positions The coding neurons 250A, 250B, 250C, and 250D respectively correspond to the same regions as the position coding neurons 250E, 250F, 250G, and 250H, and they respectively form a bidirectional excitatory connection.
  • the position-coding neuron 250Y accepts the excitatory connection of the primary feature-coding neurons 10E, 10F, etc., in each region corresponding to the position in the four subspaces.
  • the information channel further includes an intermediate information channel.
  • the intermediate information channel includes an intermediate position information channel.
• the intermediate position information channel is: through the automatic formation process of the information channel, or through preset initial connection relationships and initial parameters, for some or all of the input-side attention-regulating neurons 230/output-side attention-regulating neurons 240/concrete feature coding neurons 210/abstract feature coding neurons 220 of several (such as 1 to 10) of the composite feature encoding modules 2, the proportion of the total weight of their input connections that comes from the position coding neurons 250 and other neurons encoding position information reaches or exceeds the first preset ratio (for example, configured as 30%), and the connection weights from the position coding neurons 250 and the neurons encoding position information are combined in various proportions, so that these input-side attention-regulating neurons 230/output-side attention-regulating neurons 240/concrete feature coding neurons 210/abstract feature coding neurons 220 each have one or more receptive fields and each encode one or more kinds of position information.
• since the intermediate position information channel includes neurons that explicitly encode the position information of visual features, an explicit position coding method is adopted.
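The membership criterion above (position-derived input weight reaching the first preset ratio) can be stated as a small check. This is a sketch of the criterion only; the flag array marking which connections come from position-coding sources is an illustrative assumption.

```python
def encodes_position(input_weights, from_position_neuron, preset_ratio=0.30):
    """Intermediate-position-channel criterion: does the share of this
    neuron's total input-connection weight contributed by position-coding
    neurons (250) reach the first preset ratio (30% in the text's example)?
    `from_position_neuron[i]` flags whether connection i originates from a
    position-coding source."""
    total = sum(abs(w) for w in input_weights)
    from_pos = sum(abs(w) for w, is_pos
                   in zip(input_weights, from_position_neuron) if is_pos)
    return total > 0 and from_pos / total >= preset_ratio
```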
  • the intermediate information channel also includes an intermediate visual feature information channel.
• the intermediate visual feature information channel is: through the automatic formation process of the information channel, or through preset initial connection relationships and initial parameters, for some or all (such as 80% of all) of the input-side attention-regulating neurons 230/output-side attention-regulating neurons 240/concrete feature encoding neurons 210/abstract feature encoding neurons 220 of several (such as 10 to 2,000) of the composite feature encoding modules 2, the proportion of the total weight of their input connections that comes from neurons of the primary information channel reaches or exceeds the second preset ratio (for example, configured as 60%), and the connection weights from the neurons of the primary information channel and the intermediate information channel corresponding to each region and position of the picture space are combined in various proportions, so that these input-side attention-regulating neurons 230/output-side attention-regulating neurons 240/concrete feature coding neurons 210/abstract feature coding neurons 220 each have one or more kinds of receptive fields and each encode one or more kinds of intermediate visual feature information.
  • the intermediate visual feature information includes composite color contrast information, composite light-dark contrast information, composite orientation information, composite edge information, area information, and motion information.
• for example, an abstract feature coding neuron 220 receives unidirectional excitatory connections from the neurons of two primary orientation information channels, so the neurons of the composite feature encoding module 2 encode the combination of the primary orientation information encoded by those two channels (one horizontal to the right, the other in the 10° direction from top-left to bottom-right, each with a 9x9 receptive field), that is, comprehensive orientation information (the interval from horizontal-right to the top-left-to-bottom-right 10° direction, with a 9x9 receptive field).
• the neurons of the intermediate visual feature information channel directly accept connections from the neurons of the primary information channel and, through these connection relationships, correspond to various regions and positions in the picture space to form receptive fields; that is, an implicit position coding method is adopted.
  • the information channel also includes an advanced information channel.
• the high-level information channel is: through the automatic formation process of the information channel, or through preset initial connection relationships and initial parameters, for some or all (such as 80% of all) of the input-side attention-regulating neurons 230/output-side attention-regulating neurons 240/concrete feature coding neurons 210/abstract feature coding neurons 220 of several (such as 10 to 2,000) of the composite feature encoding modules 2, the proportion of the total weight of their input connections that comes from neurons of the intermediate information channel reaches or exceeds the third preset ratio (for example, configured as 40%), and the connection weights from the neurons of the primary information channel, the intermediate information channel, and the advanced information channel corresponding to each region and position of the picture space are combined in various proportions, so that these input-side attention-regulating neurons 230/output-side attention-regulating neurons 240/concrete feature coding neurons 210/abstract feature coding neurons 220 each have one or more kinds of receptive fields and each encode one or more kinds of high-level visual feature information.
  • the high-level visual feature information includes contour information, texture information, brightness information, transparency information, shape and position information, compound motion information, and objectification information.
• the objectification information is the identified object (which can be an instance or a category), and each object can have a name, such as "apple", "banana", and "car".
• for example, an abstract feature coding neuron 220 respectively accepts unidirectional excitatory connections from the neurons of a plurality of intermediate information channels; these intermediate information channels encode composite edge information and position information, so the neurons of the composite feature encoding module 2 encode shape and position information.
• a basic working process of the neural network is: respectively selecting several vibrating neurons, source neurons, and target neurons from several candidate neurons (in a certain module or sub-module), and making a number of the vibrating neurons generate firing distributions and maintain activation for a preset time or number of operation cycles, so that the connections between the neurons involved in the working process adjust their weights through the synaptic plasticity process.
  • the firing distribution is: a plurality of the neurons respectively generate the same or different activation intensity, firing rate, and pulse phase.
• for example, neuron A, neuron B, and neuron C generate activation intensities with amplitudes of 2, 5, and 9, respectively, or generate firing rates of 0.4 Hz, 50 Hz, and 20 Hz, or generate pulse phases of 100 ms, 300 ms, and 150 ms, respectively.
• the process of selecting a vibrating neuron, source neuron, or target neuron among several candidate neurons includes any one or more of the following: select the first Kf1 neurons whose weights of some or all input connections have the smallest total modulus length; select the first Kf2 neurons whose weights of some or all output connections have the smallest total modulus length; select the first Kf3 neurons whose weights of some or all input connections have the largest total modulus length; select the first Kf4 neurons whose weights of some or all output connections have the largest total modulus length; select the first Kf5 neurons with the largest activation intensity or firing rate, or that fire earliest; select the first Kf6 neurons with the smallest activation intensity or firing rate, or that fire latest (including not firing); select the first Kf7 neurons with the longest time since their last firing; select the first Kf8 neurons with the most recent firing time; or select the neurons whose input or output connections have gone the longest without undergoing the synaptic plasticity process.
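All of the Kf1-Kf8 criteria above reduce to ranking candidate neurons by a scalar score and taking the first K. A minimal sketch (not from the patent text):

```python
import numpy as np

def select_top_k(scores, k, largest=True):
    """Generic top-K selection behind the Kf1-Kf8 criteria: `scores` can be
    the total modulus length of each neuron's input/output connection
    weights (Kf1-Kf4), its activation intensity or firing rate (Kf5/Kf6),
    or the time since its last firing (Kf7/Kf8). Returns the indices of
    the K selected neurons, sorted for stable comparison."""
    order = np.argsort(np.asarray(scores, dtype=float), kind="stable")
    if largest:
        order = order[::-1]
    return sorted(order[:k].tolist())
```

For Kf1 ("smallest total modulus length of input weights"), `scores` would be the per-neuron L1/L2 norm of input weights with `largest=False`; for Kf5, the activation intensities with `largest=True`.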
• the ways to make several of the neurons generate firing distributions and maintain activation for a preset period include: inputting samples (pictures or video streams), directly activating several neurons in the neural network, making several neurons in the neural network self-excite, or propagating the existing activation state of several neurons through the neural network, so that the several neurons (for example, the vibrating neurons) are activated.
  • the neural network includes a forward neural pathway and a reverse neural pathway.
• the forward neural pathway and the reverse neural pathway are respectively: a number of the primary feature encoding modules 1/composite feature encoding modules 2 are cascaded in a first preset order; the neural pathway formed by cascading several of the neurons in sequence along the first preset order is used as the forward neural pathway, while the neural pathway formed by cascading several of the neurons against the first preset order is used as the reverse neural pathway.
  • the first preset order is primary information channel, intermediate information channel, and advanced information channel;
• the forward neural pathway is composed of neurons of the primary information channel, the intermediate information channel, and the advanced information channel cascaded from bottom to top (that is, in the first preset order), and mainly participates in the recognition of external input information (pictures or video streams) and the forward learning process; the reverse neural pathway is composed of neurons of the advanced information channel, the intermediate information channel, and the primary information channel cascaded from top to bottom (that is, against the first preset order), and mainly participates in the pattern completion process, the directional activation process, the association process, or the imagination process.
• in each primary feature encoding module 1/composite feature encoding module 2, several neurons constituting the forward neural pathway can respectively form unidirectional or bidirectional excitatory/inhibitory connections with a number of neurons constituting the reverse neural pathway.
  • the working process of the neural network also includes a directional startup process.
  • the directional start process includes a forward start process and a reverse start process.
  • the forward start process is:
  • Step o1 Select several neurons in the forward neural pathway as the vibrating neurons.
• Step o2 Make each of the vibrating neurons generate a firing distribution and maintain activation for the third preset period Tfprime.
• Step o3 Several of the neurons in the reverse neural pathway that receive excitatory connections from the vibrating neurons receive non-negative input, making them easier to activate.
• Step o4 Several of the neurons in the reverse neural pathway that receive inhibitory connections from the vibrating neurons receive non-positive input, making them harder to activate.
  • the reverse start process is:
• Step n1 Select a number of neurons in the reverse neural pathway as the vibrating neurons.
• Step n2 Make each of the vibrating neurons generate a firing distribution and maintain activation for the tenth preset period Tbprime.
• Step n3 Several of the neurons in the forward neural pathway that receive excitatory connections from the vibrating neurons receive non-negative input, making them easier to activate.
• Step n4 Several of the neurons in the forward neural pathway that receive inhibitory connections from the vibrating neurons receive non-positive input, making them harder to activate.
  • Tfprime and Tbprime are configured as 5 seconds.
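Steps o3/o4 (and symmetrically n3/n4) amount to biasing downstream neurons through the signed connection weights of the active vibrating neurons. A minimal sketch of that bias computation, assuming rate-coded non-negative vibrating activity:

```python
import numpy as np

def priming_bias(conn_weights, vibrating_activity):
    """Bias each target-pathway neuron receives during directional startup:
    the vibrating neurons' activity propagated through their connections.
    With non-negative vibrating activity, positive (excitatory) weights
    yield a non-negative bias (easier to activate, step o3/n3) and negative
    (inhibitory) weights yield a non-positive bias (harder to activate,
    step o4/n4)."""
    W = np.asarray(conn_weights, dtype=float)        # (n_vibrating, n_target)
    a = np.asarray(vibrating_activity, dtype=float)  # non-negative rates
    return W.T @ a
```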
  • the directional activation process can be used for visual search.
• for example, a number of the neurons encoding the searched information component in the reverse neural pathway are used as vibrating neurons and generate a firing distribution that characterizes the searched information component, maintaining activation for the tenth preset period Tbprime (e.g., configured as 5 seconds); the neurons in the forward neural pathway that encode the searched information component become easier to activate, while the neurons that do not encode it are suppressed, so that when the searched information component appears in the external input information (picture or video stream) it is more easily recognized, and irrelevant information components are filtered out.
• the neural network can be configured to be composed of a primary feature encoding module 1A, a composite feature encoding module 2A, a composite feature encoding module 2B, and a composite feature encoding module 2C; these four modules respectively contain the primary feature encoding neurons 10A, 10B, and 10C, the concrete feature encoding neurons 210A, 210B, 210C, the concrete feature encoding neurons 210D, 210E, 210F, and the concrete feature encoding neurons 210G, 210H, 210I.
• the first preset order may be configured as the order of the primary feature encoding module 1A, the composite feature encoding module 2A, the composite feature encoding module 2B, and the composite feature encoding module 2C; the primary feature encoding neuron 10A, the concrete feature encoding neuron 210A, the concrete feature encoding neuron 210D, and the concrete feature encoding neuron 210G are cascaded through unidirectional excitatory connections along the first preset order to form the forward neural pathway A; the primary feature encoding neuron 10B, the concrete feature encoding neuron 210B, the concrete feature encoding neuron 210E, and the concrete feature encoding neuron 210H are cascaded through unidirectional excitatory connections along the first preset order to form the forward neural pathway B; the primary feature encoding neuron 10C, the concrete feature encoding neuron 210C, the concrete feature encoding neuron 210F, and the concrete feature encoding neuron 210I are cascaded through unidirectional excitatory connections against the first preset order to form the reverse neural pathway C.
  • the primary feature coding neuron 10C forms a bidirectional excitatory connection and a bidirectional inhibitory connection with the primary feature coding neurons 10A and 10B, respectively.
  • the specific feature encoding neuron 210C and the specific feature encoding neurons 210A and 210B form a bidirectional excitatory connection and a bidirectional inhibitory connection, respectively.
  • the specific feature encoding neuron 210F and the specific feature encoding neurons 210D and 210E respectively form a bidirectional excitatory connection and a bidirectional inhibitory connection.
  • the specific feature encoding neuron 210I and the specific feature encoding neurons 210G, 210H form a bidirectional excitatory connection and a bidirectional inhibitory connection, respectively.
  • the reverse nerve pathway C promotes the forward nerve pathway A and inhibits the forward nerve pathway B.
  • the neuron further includes an interneuron.
• the primary feature encoding module 1 and the composite feature encoding module 2 respectively include a number (such as 1,000 to 10,000) of the interneurons; the interneurons respectively form unidirectional inhibitory connections with several (such as 1 to 10,000) corresponding neurons in their modules, and corresponding neurons in each of the modules respectively form unidirectional excitatory connections with several (for example, 1 to 10,000) corresponding interneurons.
  • the corresponding two or more groups of neurons in each of the modules form an inter-group competition (lateral inhibition) through the interneurons.
• when the competing groups of the neurons produce different responses, the suppressed neurons (or groups) fire with a time difference, ensuring that the information coding of the neurons in each group is independent, mutually decoupled, and automatically grouped; thus the input information during the memory triggering process can trigger the memory information most correlated with it, and the neurons participating in the directional information aggregation process can be automatically grouped into the groups Ga1, Ga2, Ga3, and Ga4 according to their responses (activation intensity, firing rate, or firing time sequence).
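The inter-group competition (lateral inhibition) above can be sketched as an iterative suppression rule in which each group is inhibited in proportion to the other groups' summed activity, so the strongest group survives and weaker groups fall silent. The gain value and update rule are illustrative assumptions:

```python
import numpy as np

def compete(group_activity, inhibition=0.5, steps=5):
    """Lateral inhibition between neuron groups via interneurons: each
    group's activity is reduced by `inhibition` times the total activity
    of the other groups, clipped at zero. The strongest group keeps firing
    while weaker groups are silenced, decoupling the groups' codes."""
    a = np.asarray(group_activity, dtype=float).copy()
    for _ in range(steps):
        others = a.sum() - a                    # inhibition from other groups
        a = np.clip(a - inhibition * others, 0.0, None)
    return a
```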
  • the neuron further includes a differential information decoupling neuron
  • the working process of the neural network also includes a differential information decoupling process
  • the differential information decoupling process is:
  • a number of the input-side attention-regulating neuron 230/output-side attention-regulating neuron 240/concrete feature coding neuron 210/abstract feature coding neuron 220 are selected as target neurons.
  • a number of neurons that have unidirectional/bidirectional excitatory connections with the target neuron are selected as concrete information source neurons.
• Each concrete information source neuron may have several matched differential information decoupling neurons; each concrete information source neuron forms a unidirectional excitatory connection with each matched differential information decoupling neuron; the differential information decoupling neuron forms a unidirectional inhibitory connection with the target neuron, or forms a unidirectional inhibitory synapse-synapse connection onto the connection from the information source neuron to the target neuron, so that the signal input from the concrete information source neuron to the target neuron is subject to the inhibitory regulation of the matched differential information decoupling neurons; the abstract information source neurons also form unidirectional excitatory connections with the differential information decoupling neurons.
  • Each of the differential information decoupling neurons may have a decoupling control signal input terminal; the degree of information decoupling can be adjusted by adjusting the magnitude of the signal applied to the decoupling control signal input terminal (which can be positive, negative, or 0).
  • the weight of the unidirectional excitatory connection between the concrete information source neuron/abstract information source neuron and the matched differential information decoupling neuron is a constant value, or is dynamically adjusted through the synaptic plasticity process.
• for example, when a connection Sconn1 accepts the input of several other connections (denoted as Sconn2) and the upstream neuron of Sconn1 fires, the value that the connection Sconn1 transmits to the downstream neuron is the weight of the connection Sconn1 superimposed with the input value of each connection Sconn2.
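The Sconn1/Sconn2 superposition rule can be written directly; the function below is a minimal sketch, where a negative Sconn2 input plays the role of the decoupling neuron's inhibitory synapse-synapse connection damping the concrete-feature signal.

```python
def transmitted_value(w_sconn1, presyn_fired, sconn2_inputs=()):
    """Value the connection Sconn1 delivers downstream when its upstream
    neuron fires: Sconn1's own weight superimposed with the inputs of the
    Sconn2 connections that terminate on it. Nothing is transmitted if the
    upstream neuron does not fire."""
    if not presyn_fired:
        return 0.0
    return w_sconn1 + sum(sconn2_inputs)
```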
• for example, a certain composite feature encoding module 2 is selected to perform the forward learning process: one group of the input-side attention-regulating neurons 230 is selected as the vibrating neurons, and one group of the concrete feature encoding neurons 210 is selected as the target neurons; when a novel sample (picture or video stream) is input, a plurality of the target neurons are activated, and the visual feature information in the sample is encoded through the forward learning process into concrete feature information (a kind of memory information, that is, the original characteristic information components of each object) and stored.
• then the composite feature encoding module 2 performs the directional information aggregation process: the same group of input-side attention-regulating neurons 230 is selected as the vibrating neurons as before, and the same group of concrete feature encoding neurons 210 is selected as the source neurons, so that abstract feature information (a type of memory information, that is, the common feature information components shared between objects) is formed and stored.
• then the composite feature encoding module 2 performs the differential information decoupling process: the same group of concrete feature encoding neurons 210 as before is selected as the concrete information source neurons, the same group of abstract feature encoding neurons 220 as before is selected as the abstract information source neurons, and a plurality of the output-side attention-regulating neurons 240 of the composite feature encoding module 2 is selected as the target neurons; when the same sample is input again, a plurality of the concrete information source neurons are activated and trigger their encoded concrete feature information, while multiple abstract information source neurons are activated at the same time and trigger their encoded abstract feature information; these abstract information source neurons activate the differential information decoupling neurons, thereby inhibiting the signal input from the concrete information source neurons to each target neuron, so that abstract feature information replaces the original concrete feature information as input to each target neuron; that is, the information finally output through the composite feature encoding module 2 to the other composite feature encoding modules 2 is abstract feature information.
• then the composite feature encoding module 2 can be made to perform the information component adjustment process, selecting the same group of input-side attention-regulating neurons 230 as the vibrating neurons as before, and selecting the same group of concrete feature encoding neurons 210 as the source neurons as before.
• the Kb1 takes a smaller value (for example, 1).
• the characteristic information encoded by each target neuron becomes differential feature information (a piece of memory information, that is, the information components that characterize the differences between objects); at this time, the signals output by these concrete feature encoding neurons 210 to the output-side attention-regulating neurons 240 are no longer subject to the inhibition of the differential information decoupling neurons and can be transmitted to the downstream neural network.
• the whole process can be performed one or more times, so that concrete feature information is gradually abstracted into abstract feature information while differential feature information is retained, forming a sparser code that saves coding and signal transmission bandwidth; it also gives the neural network's representations better generalization ability (because abstract feature information is formed) and avoids losing details in the process of forming higher-level representations (because the differential feature information is retained).
  • the forward learning process is:
• Step a1 Select a number of said neurons as initiating neurons.
• Step a2 Select a number of said neurons as target neurons.
• Step a3 The unidirectional excitatory connection between each activated initiating neuron and a plurality of the target neurons adjusts its weight through the synaptic plasticity process.
• Step a4 Each activated target neuron may establish one-way or two-way excitatory connections with several other target neurons, or may establish a self-circulating excitatory connection with itself, and these connections adjust their weights through the synaptic plasticity process.
• Step a5 After the weights of the input/output connections of each target neuron are adjusted through the synaptic plasticity process, the weights of part or all of the input or output connections may or may not be normalized.
• For example, 10,000 of the input-side attention-regulating neurons 230 may be selected as the initiating neurons, and 10,000 of the concrete feature encoding neurons 210 / abstract feature encoding neurons 220 may be selected as target neurons.
• The forward learning process can quickly encode and store the visual feature information of each object in the current input picture/video stream in the primary feature encoding module 1 / concrete feature encoding unit 21 / abstract feature encoding unit 22, so that the same or similar objects can be quickly identified later. It also provides processing material for the information aggregation process / directional information aggregation process / information component adjustment process and for finding the cluster centers (that is, common features) and differential features of multiple similar objects; it is the foundation of meta-learning.
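Steps a1 to a5 above can be sketched as a simple Hebbian weight update. The rate constant `lr`, the outer-product form of the update, and the normalization scheme below are illustrative assumptions standing in for "the synaptic plasticity process", not values from this disclosure.

```python
import numpy as np

def forward_learning_step(w, init_act, tgt_act, lr=0.01, normalize=True):
    """One forward-learning iteration (steps a3 and a5, sketched).

    w        : (n_init, n_tgt) weights of the unidirectional excitatory
               connections from initiating neurons to target neurons.
    init_act : firing rates of the initiating neurons.
    tgt_act  : firing rates of the target neurons.
    """
    # Step a3: connections between co-active initiating and target neurons
    # strengthen (a Hebbian stand-in for the synaptic plasticity process).
    w = w + lr * np.outer(init_act, tgt_act)
    # Step a5 (optional): normalize each target neuron's input weights.
    if normalize:
        norms = np.linalg.norm(w, axis=0, keepdims=True)
        norms[norms == 0] = 1.0
        w = w / norms
    return w

w = np.zeros((4, 3))
init_act = np.array([1.0, 0.0, 1.0, 0.0])  # initiating neurons 0 and 2 fire
tgt_act = np.array([0.0, 1.0, 0.0])        # target neuron 1 fires
w = forward_learning_step(w, init_act, tgt_act)
```

After one step, only the connections from the two firing initiating neurons onto the firing target neuron carry weight, and that target's input-weight vector is unit-normalized.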
• The memory triggering process is: input information (a picture or video stream), or directly activate several of the neurons in the neural network, or let several of the neurons be triggered by self-excitation, or let the existing activation state of several of the neurons propagate through the neural network. If this causes several of the neurons in the target area to fire within the second preset period (such as 1 s), the characterization of each fired neuron in the target area, combined with its activation intensity or firing rate, can be taken as the result of the memory triggering process.
  • the target area may be any sub-network in the neural network (for example, all abstract feature coding neurons 220 of a certain composite feature coding module 2).
• The memory triggering process can be embodied as a process of recognizing input information (a picture or video stream): each fired neuron in the target area can be mapped, through a number of readout-layer neurons, to several labels that serve as the recognition result. Each neuron in the target area forms a unidirectional excitatory or inhibitory connection with several readout-layer neurons; each readout-layer neuron corresponds to a label, and the higher its activation intensity or firing rate, or the earlier it starts firing, the higher the correlation between the input information and its corresponding label, and vice versa. For example, the labels can be "apple", "car", "grassland", etc.
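The readout mapping described above can be sketched as follows. The label set, the signed readout weights, and the target-area firing rates are invented for illustration; argmax over readout activation stands in for "highest activation intensity or firing rate".

```python
import numpy as np

# Readout-layer sketch: each target-area neuron projects to readout neurons
# through excitatory (positive) or inhibitory (negative) weights; the most
# active readout neuron's label is taken as the recognition result.
labels = ["apple", "car", "grassland"]        # one label per readout neuron

def recognize(target_rates, readout_w, labels):
    """target_rates: firing rates in the target area, shape (n,);
       readout_w:    (n, n_labels) signed connection weights."""
    readout_act = np.maximum(target_rates @ readout_w, 0.0)  # rectified drive
    return labels[int(np.argmax(readout_act))]

readout_w = np.array([[ 0.9, -0.2,  0.0],
                      [-0.1,  0.8,  0.1],
                      [ 0.0,  0.1,  0.7]])
rates = np.array([0.1, 0.9, 0.2])             # target-area neuron 1 dominates
result = recognize(rates, readout_w, labels)  # readout drive is highest for "car"
```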
  • the information aggregation process is:
• Step g1 Select a number of said neurons as initiating neurons.
• Step g2 Select a number of said neurons as source neurons.
• Step g3 Select a number of said neurons as target neurons.
• Step g4 Make each of the initiating neurons fire and keep them activated for the eighth preset period Tk.
• Step g5 In the eighth preset period Tk, let the unidirectional or bidirectional excitatory/inhibitory connection between each activated initiating neuron and several of the target neurons adjust its weight through the synaptic plasticity process.
• Step g6 In the eighth preset period Tk, let the unidirectional or bidirectional excitatory/inhibitory connection between each activated source neuron and several of the target neurons adjust its weight through the synaptic plasticity process.
• Step g7 Each execution of the process from step g1 to step g6 is recorded as one iteration; one or more iterations are executed.
• For example, the eighth preset period Tk is configured to be 100 ms to 2 s; 10,000 input-side attention-regulating neurons 230 of any one of the composite feature encoding modules 2 are selected as initiating neurons, 10,000 of the concrete feature encoding neurons 210 of that composite feature encoding module 2 are selected as source neurons, and 10,000 of the abstract feature encoding neurons 220 of that composite feature encoding module 2 are selected as target neurons.
  • the directional information aggregation process is:
• Step h1 Select a number of said neurons as initiating neurons.
• Step h2 Select a number of said neurons as source neurons.
• Step h3 Select a number of said neurons as target neurons.
• Step h4 Make each of the initiating neurons fire and keep them activated for the ninth preset period Ta.
• Step h5 In the ninth preset period Ta, Ma1 of the source neurons and Ma2 of the target neurons are activated.
• Step h6 In the ninth preset period Ta, the first Ka1 source neurons with the highest activation intensity, or the highest firing rate, or that fire first are recorded as Ga1, and the remaining Ma1-Ka1 activated source neurons are recorded as Ga2.
• Step h7 In the ninth preset period Ta, the first Ka2 target neurons with the highest activation intensity, or the highest firing rate, or that fire first are recorded as Ga3, and the remaining Ma2-Ka2 activated target neurons are recorded as Ga4.
• Step h8 In the ninth preset period Ta, the unidirectional or bidirectional excitatory/inhibitory connection between each source neuron in Ga1 and several target neurons in Ga3 performs the synaptic weight enhancement process one or more times.
• Step h9 In the ninth preset period Ta, the unidirectional or bidirectional excitatory/inhibitory connection between each source neuron in Ga1 and several target neurons in Ga4 performs the synaptic weight weakening process one or more times.
• Step h10 In the ninth preset period Ta, the unidirectional or bidirectional excitatory/inhibitory connection between each source neuron in Ga2 and several target neurons in Ga3 may or may not perform the synaptic weight weakening process one or more times.
• Step h11 In the ninth preset period Ta, the unidirectional or bidirectional excitatory/inhibitory connection between each source neuron in Ga2 and several target neurons in Ga4 may or may not perform the synaptic weight enhancement process one or more times.
• Step h12 In the ninth preset period Ta, the unidirectional or bidirectional excitatory/inhibitory connection between each activated initiating neuron and several target neurons in Ga3 performs the synaptic weight enhancement process one or more times.
• Step h13 In the ninth preset period Ta, the unidirectional or bidirectional excitatory/inhibitory connection between each activated initiating neuron and several target neurons in Ga4 performs the synaptic weight weakening process one or more times.
• Step h14 Each execution of the process from step h1 to step h13 is recorded as one iteration; one or more iterations are executed.
• In step h8 to step h13, after performing one or more synaptic weight enhancement processes or synaptic weight weakening processes, the weights of part or all of the input or output connections of each of the source neurons or target neurons may or may not be normalized.
  • the synaptic weight enhancement process may adopt the unipolar upstream and downstream firing-dependent synaptic enhancement process, or the unipolar pulse time-dependent synaptic enhancement process.
  • the synaptic weight weakening process may adopt the unipolar upstream and downstream firing dependent synaptic weakening process, or the unipolar pulse time dependent synaptic weakening process.
  • the synaptic weight enhancement process and the synaptic weight weakening process may also adopt the asymmetric bipolar pulse time-dependent synaptic plasticity process or the symmetric bipolar pulse time-dependent synaptic plasticity process, respectively.
• The characterization of each target neuron may be taken as the result of the directional information aggregation process applied to the characterizations of the source neurons, and mapped to a corresponding label as an output.
  • the Ma1 and Ma2 are positive integers, Ka1 is a positive integer not exceeding Ma1, and Ka2 is a positive integer not exceeding Ma2.
• For example, the ninth preset period Ta is configured to be 200 ms to 2 s; 10,000 input-side attention-regulating neurons 230 of any one of the composite feature encoding modules 2 are selected as initiating neurons, 10,000 of the concrete feature encoding neurons 210 of that composite feature encoding module 2 are selected as source neurons, and 10,000 of the abstract feature encoding neurons 220 of that composite feature encoding module 2 are selected as target neurons.
• Each of the target neurons represents an abstract, peer-level, or concrete version of the representations of the source neurons connected to it; the connection weight from a certain source neuron to each of the target neurons characterizes the degree of correlation between the representation of that source neuron and the representation of each target neuron: the greater the weight, the greater the correlation, and vice versa.
• When the source neurons represent concrete information (such as a subcategory or an instance) and the target neurons represent abstract information (such as a parent category):
• Each of the target neurons represents the cluster center of the source neurons connected to it (the former represents the common information component in the latter). The connection weight from a certain source neuron to each target neuron characterizes the correlation (or the distance in representation space) between the source neuron and the information represented by each target neuron (that is, the cluster center): the larger the weight, the higher the correlation (that is, the closer the distance). This directional information abstraction process is a clustering process, that is, a meta-learning process.
• The directional information aggregation process is executed repeatedly, and such iterations can continuously form higher-level abstract information representations.
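Steps h1 to h14 can be read as a competitive Hebbian clustering rule: connections from the most active source neurons (Ga1) to the most active target neurons (Ga3) are strengthened, and connections from Ga1 to the remaining activated targets (Ga4) are weakened. The sketch below implements steps h5 to h9 only (the optional steps h10 to h13 are omitted); the firing rates, the weight step `dw`, and clamping weights at zero are illustrative assumptions.

```python
import numpy as np

def directed_aggregation_iter(w, src_rates, tgt_rates, ka1, ka2, dw=0.05):
    """One iteration of steps h5-h9 (optional steps h10-h13 omitted).

    w: (n_src, n_tgt) weights of the source-to-target connections.
    """
    ga1 = np.argsort(src_rates)[::-1][:ka1]        # Ka1 most active sources
    order = np.argsort(tgt_rates)[::-1]
    ga3 = order[:ka2]                              # Ka2 most active targets
    rest = order[ka2:]
    ga4 = rest[tgt_rates[rest] > 0]                # remaining activated targets
    for s in ga1:
        w[s, ga3] += dw                            # step h8: strengthen
        w[s, ga4] = np.maximum(w[s, ga4] - dw, 0)  # step h9: weaken (clamped)
    return w

w = np.full((3, 3), 0.1)
src = np.array([0.9, 0.5, 0.0])   # source firing rates (assumed)
tgt = np.array([0.8, 0.4, 0.0])   # target firing rates (assumed)
w = directed_aggregation_iter(w, src, tgt, ka1=1, ka2=1)
```

Iterating this over many samples pulls each most-active target neuron's input weights toward the inputs that co-occur with it, which is the cluster-center behavior described above.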
  • the information transfer process is:
• Step f1 Select a number of said neurons as initiating neurons.
• Step f2 Select several direct downstream neurons or indirect downstream neurons of the initiating neurons as source neurons.
• Step f3 Select several direct downstream neurons or indirect downstream neurons of the initiating neurons as target neurons.
• Step f4 Make each of the initiating neurons fire and keep them activated for the seventh preset period Tj.
• Step f5 In the seventh preset period Tj, a number of the source neurons are activated.
• Step f6 In the seventh preset period Tj, if a certain initiating neuron is a direct upstream neuron of a certain target neuron, let the unidirectional or bidirectional excitatory/inhibitory connection between the two adjust its weight through the synaptic plasticity process; if an initiating neuron is an indirect upstream neuron of a target neuron, let the unidirectional or bidirectional excitatory/inhibitory connection between the target neuron and its direct upstream neuron in the connecting pathway between the two adjust its weight through the synaptic plasticity process.
• Step f7 In the seventh preset period Tj, each of the target neurons may establish connections with several other target neurons, and the weights may be adjusted through the synaptic plasticity process.
• Step f8 In the seventh preset period Tj, if there is a unidirectional or bidirectional excitatory connection between a certain source neuron and a certain target neuron, its weight may be adjusted through the synaptic plasticity process.
• For example, the seventh preset period Tj is configured to be 20 ms to 500 ms; 10,000 input-side attention-regulating neurons 230 of any one of the composite feature encoding modules 2 are selected as initiating neurons, 10,000 of the concrete feature encoding neurons 210 of that composite feature encoding module 2 are selected as source neurons, and 10,000 of the abstract feature encoding neurons 220 of that composite feature encoding module 2 are selected as target neurons.
• In this way, the information represented by part or all of the input connection weights of each activated source neuron is approximately coupled into part or all of the input connection weights of each target neuron, that is, the information is transcribed from the former into the latter. The coupling is "approximate" because the transcribed information component is also coupled with the firing distribution of each initiating neuron, and is influenced by the connection pathways between the initiating neurons and the activated source neurons, and by the connections and firing of each neuron in the connection pathways between the initiating neurons and the target neurons.
• If some activated initiating neurons are the direct upstream neurons of some activated source neurons or of some target neurons, the input connection weights of those source neurons will be added to the connection weights between these initiating neurons and these target neurons in approximately equal proportions, eventually making the latter approach the former; conversely, if some activated initiating neurons are the indirect upstream neurons of some activated source neurons or of some target neurons, then the connection weights between these initiating neurons and these target neurons will eventually also include the influence of the connection pathways between the initiating neurons and the activated source neurons.
• The memory forgetting process includes an upstream-firing-dependent memory forgetting process, a downstream-firing-dependent memory forgetting process, and an upstream-and-downstream-firing-dependent memory forgetting process.
• The upstream-firing-dependent memory forgetting process is: for a certain connection, if its upstream neuron does not fire throughout the fourth preset period (such as 20 minutes to 24 hours), the absolute value of the weight decreases, and the decrease is recorded as DwDecay1.
• The downstream-firing-dependent memory forgetting process is: for a certain connection, if its downstream neuron does not fire throughout the fifth preset period (such as 20 minutes to 24 hours), the absolute value of the weight decreases, and the decrease is recorded as DwDecay2.
• The upstream-and-downstream-firing-dependent memory forgetting process is: for a certain connection, if its upstream and downstream neurons do not fire synchronously within the sixth preset period (such as 20 minutes to 24 hours), the absolute value of the weight decreases, and the decrease is recorded as DwDecay3.
• The synchronous firing is: when the downstream neuron involved in the connection fires and the time interval from the current or most recent past upstream neuron firing does not exceed the fourth preset time interval Te1, or when the upstream neuron involved in the connection fires and the time interval from the current or most recent past downstream neuron firing does not exceed the fifth preset time interval Te2.
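The three forgetting rules above can be sketched for a single connection as follows. The preset periods, the decay amounts, and the simplified one-sided synchrony check (only Te1 around downstream spikes) are illustrative assumptions; spike times are plain timestamps in seconds.

```python
import math

def forget(weight, up_spikes, down_spikes, now,
           t4=3600.0, t5=3600.0, t6=3600.0, te1=0.05,
           decay1=0.01, decay2=0.01, decay3=0.01):
    """Apply the three firing-dependent forgetting rules to one connection.

    up_spikes / down_spikes: spike timestamps of the upstream and downstream
    neurons; now: current time. Returns the decayed signed weight.
    """
    loss = 0.0
    # Upstream-firing-dependent: no upstream spike in the last t4 seconds.
    if not any(now - t <= t4 for t in up_spikes):
        loss += decay1                      # DwDecay1
    # Downstream-firing-dependent: no downstream spike in the last t5 seconds.
    if not any(now - t <= t5 for t in down_spikes):
        loss += decay2                      # DwDecay2
    # Upstream-and-downstream: no synchronous firing in the last t6 seconds
    # (downstream spike within te1 of an upstream spike, one-sided sketch).
    sync = any(now - td <= t6 and any(abs(td - tu) <= te1 for tu in up_spikes)
               for td in down_spikes)
    if not sync:
        loss += decay3                      # DwDecay3
    # Forgetting shrinks the absolute value of the weight toward zero.
    new_mag = max(abs(weight) - loss, 0.0)
    return math.copysign(new_mag, weight) if weight else 0.0

# Upstream fired long ago, downstream never: all three decays apply.
w = forget(0.5, up_spikes=[10.0], down_spikes=[], now=5000.0)
```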
• The memory self-consolidation process is: when a certain neuron is self-excited, the weights of part or all of the input connections of the neuron are adjusted through the unipolar downstream firing-dependent synaptic enhancement process or the unipolar downstream pulse-dependent synaptic enhancement process, and the weights of part or all of the output connections of the neuron are adjusted through the unipolar upstream firing-dependent synaptic enhancement process or the unipolar upstream pulse-dependent synaptic enhancement process.
• The memory self-consolidation process helps to maintain the codes of these neurons with approximate fidelity and avoid forgetting.
  • the information component adjustment process is:
• Step i1 Select a number of said neurons as initiating neurons.
• Step i2 Select several direct downstream neurons or indirect downstream neurons of the initiating neurons as target neurons.
• Step i3 Make each of the initiating neurons fire, and keep them activated during the first preset period Tb.
• Step i4 In the first preset period Tb, Mb1 of the target neurons are activated, where the first Kb1 target neurons with the highest activation intensity, or the highest firing rate, or that fire first are denoted as Gb1, and the remaining Mb1-Kb1 activated target neurons are denoted as Gb2.
• Step i5 If a certain initiating neuron is a direct upstream neuron of a certain target neuron in Gb1, let the one-way or two-way connection between the two perform the synaptic weight enhancement process one or more times; if a certain initiating neuron is an indirect upstream neuron of a certain target neuron in Gb1, let the one-way or two-way connection between the target neuron and its direct upstream neuron in the connection pathway between the two perform the synaptic weight enhancement process one or more times.
• Step i6 If a certain initiating neuron is a direct upstream neuron of a certain target neuron in Gb2, let the one-way or two-way connection between the two perform the synaptic weight weakening process one or more times; if a certain initiating neuron is an indirect upstream neuron of a certain target neuron in Gb2, let the one-way or two-way connection between the target neuron and its direct upstream neuron in the connection pathway between the two perform the synaptic weight weakening process one or more times.
• Step i7 Each execution of the process from step i1 to step i6 is recorded as one iteration; one or more iterations are executed.
• In step i5 and step i6, after performing one or more synaptic weight enhancement processes or synaptic weight weakening processes, the weights of part or all of the input connections of each target neuron may or may not be normalized.
  • a number of the target neurons can be mapped to corresponding labels as a result of the information component adjustment process.
  • the synaptic weight enhancement process may adopt the unipolar upstream and downstream firing-dependent synaptic enhancement process, or the unipolar pulse time-dependent synaptic enhancement process.
  • the synaptic weight weakening process may adopt the unipolar upstream and downstream firing dependent synaptic weakening process, or the unipolar pulse time dependent synaptic weakening process.
  • the synaptic weight enhancement process and the synaptic weight weakening process may also adopt the asymmetric bipolar pulse time-dependent synaptic plasticity process or the symmetric bipolar pulse time-dependent synaptic plasticity process, respectively.
• When Kb1 takes a small value (for example, 1), only the target neuron with the highest activation intensity, or the highest firing rate, or that fires first undergoes the synaptic weight enhancement process; that is, the information component represented by the current firing of the initiating neurons is superimposed to a certain extent, enabling that target neuron to consolidate its existing characterization. All other activated target neurons undergo the synaptic weight weakening process; that is, the information component represented by the current firing of the initiating neurons is subtracted (decoupled) to a certain extent. Therefore, when multiple iterations are performed and each iteration causes the initiating neurons to produce a different firing distribution, the representations of the target neurons can be decoupled from each other; if further iterations are performed to strengthen the decoupling, the representations of the target neurons become a set of relatively independent bases in the representation space.
• When Kb1 takes a larger value (for example, 8), and each iteration causes the initiating neurons to produce a different firing distribution, the information components represented by the multiple target neurons are superimposed on each other to a certain extent; if multiple further iterations are performed, the representations of multiple target neurons can approach each other.
• Therefore, adjusting Kb1 adjusts the information components represented by the target neurons.
• For example, the first preset period Tb is configured to be 100 ms to 500 ms; 10,000 input-side attention-regulating neurons 230 of any one of the composite feature encoding modules 2 are selected as initiating neurons, and 10,000 of the concrete feature encoding neurons 210 of that composite feature encoding module 2 are selected as target neurons.
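The role of Kb1 in steps i4 to i6 can be sketched as a winner-take-more update: the top Kb1 driven targets are strengthened toward the current input pattern and the rest are weakened away from it. The drive approximation, the weight step `dw`, the non-negativity clamp, and the normalization are illustrative assumptions.

```python
import numpy as np

def component_adjust_iter(w, init_rates, kb1, dw=0.02):
    """One iteration of steps i1-i6, with the initiating neurons taken as
    the direct upstream of the target neurons.

    w: (n_init, n_tgt) connection weights; a target neuron's activation is
    approximated by its input drive init_rates @ w.
    """
    tgt_drive = init_rates @ w
    active = np.flatnonzero(tgt_drive > 0)                   # the Mb1 targets
    gb1 = active[np.argsort(tgt_drive[active])[::-1][:kb1]]  # top Kb1 -> Gb1
    gb2 = np.setdiff1d(active, gb1)                          # the rest -> Gb2
    fired = init_rates > 0
    w[np.ix_(fired, gb1)] += dw    # step i5: strengthen toward winners
    w[np.ix_(fired, gb2)] -= dw    # step i6: weaken (decouple) the rest
    w = np.clip(w, 0.0, None)      # assumed: excitatory weights stay >= 0
    norms = np.linalg.norm(w, axis=0, keepdims=True)
    norms[norms == 0] = 1.0
    return w / norms               # normalize each target's input weights

rng = np.random.default_rng(0)
w = rng.random((5, 4))
w /= np.linalg.norm(w, axis=0, keepdims=True)
for _ in range(20):                # a new firing distribution each iteration
    w = component_adjust_iter(w, rng.random(5), kb1=1)
```

With kb1=1 the target representations drift apart over iterations (decoupling); raising kb1 toward the number of targets makes several targets share each update, so their representations approach each other instead.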
• The reinforcement learning process is: when a number of the connections receive a reinforcement signal, within the second preset time interval the weights of these connections change, or the weight decrease of these connections in the memory forgetting process changes, or the weight increase/decrease of these connections in the synaptic plasticity process changes; or, when a number of the neurons receive a reinforcement signal, within the third preset time interval (for example, within 30 seconds from receiving the reinforcement signal) these neurons receive a positive or negative input, or the weights of part or all of the input or output connections of these neurons change, or the weight decrease of these connections in the memory forgetting process changes, or the weight increase/decrease of these connections in the synaptic plasticity process changes.
• The reinforcement signal is a constant value when the neural network has no input information. In the supervised learning process, if the result of the memory triggering process is correct, the reinforcement signal rises; if the result of the memory triggering process is wrong, the reinforcement signal drops.
• For example, the constant value of the reinforcement signal is 0. If the supervised learning process is performed and the result of the memory triggering process is correct, the reinforcement signal rises to +10; if the two-way excitatory connections between a number of the concrete feature encoding neurons 210 receive the reinforcement signal (+10), then within the second preset time interval (within 30 seconds from the time the reinforcement signal is received), when these connections perform the symmetric bipolar pulse-time-dependent synaptic plasticity process, DwLTP6 is increased by 10 on the basis of its original value.
• The novelty signal modulation process is: when several of the neurons receive the novelty signal, within a sixth preset time interval (for example, within 30 seconds from receiving the novelty signal), these neurons receive a positive or negative input, or the weights of part or all of the input or output connections of these neurons change, or the weight decrease of these connections in the memory forgetting process changes, or the weight increase/decrease of these connections in the synaptic plasticity process changes.
• The novelty signal is constant when the neural network has no input information, or gradually decreases with time; when the neural network has input information, the novelty signal is negatively correlated with the activation intensity or firing rate of each neuron in the target area during the memory triggering process.
• For example, when there is no input information, the novelty signal is constant at +50. If input information (a picture or video stream) is applied and does not trigger memory information with sufficient correlation (for example, the highest activation intensity of neurons in the target area for the current picture is only 10% of the highest activation intensity when inputting a picture that has already formed a memory code, so the correlation is only 10%), the novelty signal rises from the constant value of +50 to +90.
• When several of the neurons constituting the forward neural pathway receive a novelty signal of +90, these neurons receive a positive input (for example, +40) in the sixth preset time interval.
• Conversely, if the input information triggers memory information with sufficient correlation, the novelty signal decreases from +90 to +10.
• When several of the neurons constituting the forward neural pathway receive a novelty signal of +10, these neurons receive a negative input (for example, -40) in the sixth preset time interval.
• In this way, when the external input information is novel, the novelty signal causes the neurons of the forward neural pathway to receive positive input (enhanced excitability) and become easier to activate, and causes the neurons of the reverse neural pathway to receive negative input (decreased excitability) and become harder to activate, so that the neural network preferentially attends to, recognizes, and learns the current novel external input information through the bottom-up forward neural pathway. Conversely, when the external input information is not novel enough, the novelty signal causes the neurons of the forward neural pathway to receive negative input (decreased excitability) and become harder to activate, while the neurons of the reverse neural pathway receive positive input (enhanced excitability) and become easier to activate, so that the neural network preferentially triggers existing memory information through the top-down reverse neural pathway, or performs a pattern completion process, an association process, or an imagination process.
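The novelty gating described above can be sketched with the example values given (+50 baseline, +90/+10, ±40). The correlation measure (ratio of current to remembered peak activation) and the 50% threshold separating "novel" from "familiar" are illustrative assumptions.

```python
def novelty_signal(max_act, memory_act, baseline=50.0, hi=90.0, lo=10.0,
                   threshold=0.5):
    """Sketch of the novelty signal using the example values above.

    max_act:    highest activation in the target area for the current input.
    memory_act: highest activation seen for an input that already has a
                memory code (so max_act / memory_act is the correlation).
    """
    if memory_act <= 0:
        return baseline                 # no input / no memory: constant value
    correlation = max_act / memory_act
    return hi if correlation < threshold else lo

def pathway_input(novelty, forward=True, gain=40.0):
    """Novel input excites the forward pathway and inhibits the reverse
    pathway; familiar input does the opposite."""
    sign = 1.0 if (novelty > 50.0) == forward else -1.0
    return sign * gain

n = novelty_signal(max_act=1.0, memory_act=10.0)   # 10% correlation: novel
```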
  • the supervised learning process is:
• Step r1 Specify the positive firing distribution range of each neuron in the target area, and optionally also the negative firing distribution range of each neuron in the target area, and go to step r2.
• Step r2 Perform the memory triggering process. If the actual firing distribution of each neuron in the target area meets neither the positive firing distribution range nor the negative firing distribution range, it is deemed that the neurons in the target area have not encoded the relevant memory information, and step r3 is executed; if the actual firing distribution of each neuron in the target area meets the positive firing distribution range, the result of the memory triggering process is deemed correct, and the supervised learning process ends; if the actual firing distribution of each neuron in the target area meets the negative firing distribution range, the result of the memory triggering process is deemed wrong, and step r3 is executed.
• Step r3 Perform any one or more of the novelty signal modulation process, the reinforcement learning process, the active attention process, the automatic attention process, the directional initiation process, the forward learning process, the information aggregation process, the directional information aggregation process, the information component adjustment process, the information transfer process, and the differential information decoupling process, so that the neurons in the target area encode the relevant memory information, and go to step r1.
  • the supervised learning process can also be:
• Step q1 Specify a positive label range, and optionally a negative label range, and go to step q2.
• Step q2 Perform the memory triggering process, and map the actual firing distribution of each neuron in the target area to the corresponding label. If the corresponding label meets neither the positive label range nor the negative label range, it is deemed that the neurons in the target area have not encoded the relevant memory information, and step q3 is executed; if the corresponding label meets the positive label range, the result of the memory triggering process is deemed correct, and the supervised learning process ends; if the corresponding label meets the negative label range, the result of the memory triggering process is deemed wrong, and step q3 is executed.
• Step q3 Perform any one or more of the novelty signal modulation process, the reinforcement learning process, the active attention process, the automatic attention process, the directional initiation process, the forward learning process, the information aggregation process, the directional information aggregation process, the information component adjustment process, the information transfer process, and the differential information decoupling process, so that the neurons in the target area encode the relevant memory information, and go to step q1.
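Steps q1 to q3 amount to a trigger-check-correct loop. In the sketch below, the whole corrective step q3 is reduced to a single hypothetical `learn` callback standing in for whichever of the listed processes is applied; the toy `trigger`/`learn` pair at the bottom is invented to exercise the loop.

```python
def supervised_learning(trigger, learn, positive, negative=frozenset(),
                        max_rounds=10):
    """Sketch of steps q1-q3.

    trigger : () -> label, the memory triggering process mapped to a label.
    learn   : () -> None, one corrective pass (any of the listed processes).
    positive/negative: the label ranges. Returns (label, correct?).
    """
    for _ in range(max_rounds):
        label = trigger()              # step q2: trigger and map to a label
        if label in positive:
            return label, True         # result correct: end the process
        # label is in the negative range, or no relevant memory is encoded:
        learn()                        # step q3: apply a corrective process
    return label, False

# Toy stand-in: each corrective pass nudges the network toward "apple".
state = {"score": 0}
def trigger():
    return "apple" if state["score"] >= 3 else "unknown"
def learn():
    state["score"] += 1

label, ok = supervised_learning(trigger, learn, positive={"apple"})
```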
  • the unipolar upstream firing-dependent synaptic plasticity process includes a unipolar upstream firing-dependent synaptic enhancement process and a unipolar upstream firing-dependent synaptic weakening process.
  • the unipolar upstream firing-dependent synaptic enhancement process is: when the activation strength or firing rate of the upstream neuron involved in the connection is not zero, the absolute value of the connection weight increases, and the increase is recorded as DwLTP1u.
  • the unipolar upstream firing-dependent synaptic weakening process is: when the activation intensity or firing rate of the upstream neuron involved in the connection is not zero, the absolute value of the connection weight decreases, and the decrease is recorded as DwLTD1u.
  • the DwLTP1u and DwLTD1u are non-negative values.
• The values of DwLTP1u and DwLTD1u in the unipolar upstream firing-dependent synaptic plasticity process include any one or more of the following:
  • the DwLTP1u and DwLTD1u are non-negative values and are respectively proportional to the activation intensity or firing rate of the upstream neuron involved in the connection; or,
  • the DwLTP1u and DwLTD1u are non-negative values and are respectively proportional to the activation strength or firing rate of the upstream neuron involved in the connection and the weight of the connection involved.
• For example, DwLTP1u = 0.01 * Fru1 and DwLTD1u = 0.01 * Fru1, where Fru1 is the firing rate of the upstream neuron involved in the connection.
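The rule above, together with the rate-and-weight-proportional variant from the preceding bullets, can be sketched as an update on the absolute value of the weight with the sign preserved. The function name and parameters are illustrative.

```python
import math

def upstream_dependent_update(w, fru1, k=0.01, potentiate=True,
                              weight_proportional=False):
    """Unipolar upstream firing-dependent synaptic plasticity (sketch).

    w: signed connection weight; fru1: firing rate of the upstream neuron.
    potentiate=True applies the enhancement DwLTP1u, False the weakening
    DwLTD1u; both act on the absolute value of the weight.
    """
    if fru1 == 0:                  # the rule applies only while firing
        return w
    dw = k * fru1                  # e.g. DwLTP1u = DwLTD1u = 0.01 * Fru1
    if weight_proportional:        # variant: also proportional to |w|
        dw *= abs(w)
    mag = abs(w) + dw if potentiate else max(abs(w) - dw, 0.0)
    return math.copysign(mag, w) if w else mag

w = upstream_dependent_update(-0.5, fru1=20.0)   # inhibitory connection
```

Because the update acts on the absolute value, an inhibitory (negative) weight becomes more negative under enhancement, matching the "absolute value of the connection weight increases" wording.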
  • the unipolar downstream firing-dependent synaptic plasticity process includes a unipolar downstream firing-dependent synaptic enhancement process and a unipolar downstream firing-dependent synaptic weakening process.
  • the unipolar downstream firing-dependent synaptic enhancement process is: when the activation intensity or firing rate of the downstream neuron involved in the connection is not zero, the absolute value of the connection weight increases, and the increase is recorded as DwLTP1d.
  • the unipolar downstream firing-dependent synaptic weakening process is: when the activation intensity or firing rate of the downstream neuron involved in the connection is not zero, the absolute value of the connection weight decreases, and the decrease is recorded as DwLTD1d.
  • the DwLTP1d and DwLTD1d are non-negative values.
• The values of DwLTP1d and DwLTD1d in the unipolar downstream firing-dependent synaptic plasticity process include any one or more of the following:
  • the DwLTP1d and DwLTD1d are non-negative values and are respectively proportional to the activation intensity or firing rate of the downstream neurons involved in the connection; or,
  • the DwLTP1d and DwLTD1d are non-negative values and are respectively proportional to the activation intensity or firing rate of the downstream neurons involved in the connection and the weight of the connection involved.
  • DwLTP1d = 0.01*Frd1
  • DwLTD1d = 0.01*Frd1
  • where Frd1 is the firing rate of the downstream neuron.
  • the unipolar upstream and downstream firing-dependent synaptic plasticity process includes a unipolar upstream and downstream firing-dependent synaptic enhancement process and a unipolar upstream and downstream firing-dependent synaptic weakening process.
  • the unipolar upstream and downstream firing dependent synaptic enhancement process is: when the activation intensity or firing rate of the upstream neuron and downstream neuron involved in the connection is not zero, the absolute value of the connection weight increases, and the increase is recorded It is DwLTP2.
  • the unipolar upstream and downstream firing dependent synaptic weakening process is: when the activation intensity or firing rate of the upstream neuron and downstream neuron involved in the connection is not zero, the absolute value of the connection weight decreases, and the decrease is recorded It is DwLTD2.
  • the DwLTP2 and DwLTD2 are non-negative values.
  • the values of DwLTP2 and DwLTD2 in the process of unipolar upstream and downstream firing dependent on synaptic plasticity include any one or more of the following:
  • the DwLTP2 and DwLTD2 are non-negative values, which are respectively proportional to the activation strength or firing rate of the upstream neuron and the activation strength or firing rate of the downstream neuron involved; or,
  • the DwLTP2 and DwLTD2 are non-negative values and are respectively proportional to the activation strength or firing rate of the upstream neuron involved in the connection, the activation strength or firing rate of the downstream neuron, and the weight of the involved connection.
  • DwLTP2 = 0.01*Fru2*Frd2
  • DwLTD2 = 0.01*Fru2*Frd2
  • where Fru2 and Frd2 are the firing rates of the upstream and downstream neurons, respectively.
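As a minimal illustrative sketch (not the patent's reference implementation), the three rate-dependent rules above can be written as weight increments proportional to the upstream rate, the downstream rate, or their product, with the 0.01 factor taken from the examples; applying an enhancement step so that the absolute value of the weight grows while the sign of the connection (excitatory or inhibitory) is preserved is spelled out below:

```python
def dw_upstream(fru, k=0.01):
    """Unipolar upstream firing-dependent increment: DwLTP1u = k * Fru1."""
    return k * fru

def dw_downstream(frd, k=0.01):
    """Unipolar downstream firing-dependent increment: DwLTP1d = k * Frd1."""
    return k * frd

def dw_up_down(fru, frd, k=0.01):
    """Unipolar upstream-and-downstream firing-dependent increment: k * Fru2 * Frd2."""
    return k * fru * frd

def apply_ltp(weight, dw):
    """Enhancement: |weight| grows by dw, preserving the connection's sign."""
    sign = 1.0 if weight >= 0 else -1.0
    return weight + sign * dw

# Example: upstream at 20 Hz and downstream at 10 Hz strengthen a 0.5 weight.
w = apply_ltp(0.5, dw_up_down(fru=20.0, frd=10.0))  # 0.5 + 0.01*200 = 2.5
```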
  • the unipolar upstream pulse-dependent synaptic plasticity process includes a unipolar upstream pulse-dependent synaptic enhancement process and a unipolar upstream pulse-dependent synaptic weakening process.
  • the unipolar upstream impulse-dependent synaptic enhancement process is: when the upstream neuron involved in the connection fires, the absolute value of the connection weight increases, and the increase is recorded as DwLTP3u.
  • the unipolar upstream impulse-dependent synapse weakening process is: when the upstream neuron involved in the connection is excited, the absolute value of the connection weight decreases, and the decrease is recorded as DwLTD3u.
  • the DwLTP3u and DwLTD3u are non-negative values.
  • the values of DwLTP3u and DwLTD3u in the unipolar upstream pulse-dependent synaptic plasticity process include any one or more of the following:
  • the DwLTP3u and DwLTD3u adopt non-negative constants; or,
  • the DwLTP3u and DwLTD3u are non-negative values and are respectively proportional to the weight of the connection involved.
  • DwLTP3u = 0.01*weight
  • DwLTD3u = 0.01*weight
  • where weight is the connection weight.
  • the unipolar downstream pulse-dependent synaptic plasticity process includes a unipolar downstream pulse-dependent synaptic enhancement process and a unipolar downstream pulse-dependent synaptic weakening process.
  • the unipolar downstream impulse-dependent synaptic enhancement process is: when the downstream neuron involved in the connection fires, the absolute value of the connection weight increases, and the increase is recorded as DwLTP3d.
  • the unipolar downstream impulse-dependent synapse weakening process is: when the downstream neuron involved in the connection is excited, the absolute value of the connection weight decreases, and the decrease is recorded as DwLTD3d.
  • the DwLTP3d and DwLTD3d are non-negative values.
  • the values of DwLTP3d and DwLTD3d of the unipolar downstream pulse-dependent synaptic plasticity process include any one or more of the following:
  • the DwLTP3d and DwLTD3d adopt non-negative constants; or,
  • the DwLTP3d and DwLTD3d are non-negative values and are respectively proportional to the weight of the involved connection.
  • DwLTP3d = 0.01*weight
  • DwLTD3d = 0.01*weight
  • where weight is the connection weight.
  • the unipolar pulse time-dependent synaptic plasticity process includes a unipolar pulse time-dependent synaptic enhancement process and a unipolar pulse time-dependent synaptic weakening process.
  • the unipolar pulse time-dependent synaptic enhancement process is: when the downstream neuron involved in the connection fires and the time interval from the current or most recent past upstream neuron firing does not exceed Tg1, or when the upstream neuron involved in the connection fires and the time interval from the current or most recent past downstream neuron firing does not exceed Tg2, the absolute value of the connection weight increases, and the increase is recorded as DwLTP4.
  • the unipolar pulse time-dependent synaptic weakening process is: when the downstream neuron involved in the connection fires and the time interval from the current or most recent past upstream neuron firing does not exceed Tg3, or when the upstream neuron involved in the connection fires and the time interval from the current or most recent past downstream neuron firing does not exceed Tg4, the absolute value of the connection weight decreases, and the decrease is recorded as DwLTD4.
  • the DwLTP4, DwLTD4, Tg1, Tg2, Tg3, and Tg4 are all non-negative values. For example, set Tg1, Tg2, Tg3, and Tg4 to 200 ms.
  • the values of DwLTP4 and DwLTD4 in the unipolar pulse time-dependent synaptic plasticity process include any one or more of the following:
  • the DwLTP4 and DwLTD4 adopt non-negative constants; or,
  • the DwLTP4 and DwLTD4 are non-negative values and are respectively proportional to the weight of the involved connection.
  • DwLTP4 = KLTP4*weight + C1
  • the time-dependent synaptic plasticity process of the asymmetric bipolar pulse is:
  • when the upstream neuron involved in the connection fires: if the time interval from the current or most recent past downstream neuron firing does not exceed Th3, the absolute value of the connection weight increases, and the increase is recorded as DwLTP5; if the time interval from the current or most recent past downstream neuron firing exceeds Th3 but does not exceed Th4, the absolute value of the connection weight decreases, and the decrease is recorded as DwLTD5.
  • Th1, Th3, DwLTP5, and DwLTD5 are non-negative values
  • Th2 is a value greater than Th1
  • the values of DwLTP5 and DwLTD5 in the process of the asymmetric bipolar pulse time-dependent synaptic plasticity include any one or more of the following:
  • the DwLTP5 and DwLTD5 adopt non-negative constants; or,
  • the DwLTP5 and DwLTD5 are non-negative values. DwLTP5 is negatively correlated with the time interval between the downstream neuron firing and the upstream neuron firing: when the time interval is 0, DwLTP5 reaches the specified maximum value DwLTPmax5, and when the time interval is Th1, DwLTP5 is 0. DwLTD5 is likewise negatively correlated with the time interval between the downstream neuron firing and the upstream neuron firing.
  • the time-dependent synaptic plasticity process of the symmetric bipolar pulse is:
  • the values of DwLTP6 and DwLTD6 in the symmetrical bipolar pulse time-dependent synaptic plasticity process include any one or more of the following:
  • the DwLTP6 and DwLTD6 adopt non-negative constants; or,
  • the DwLTP6 and DwLTD6 are non-negative values. DwLTP6 is negatively correlated with the time interval between the downstream neuron firing and the upstream neuron firing: specifically, when the time interval is 0, DwLTP6 reaches the specified maximum value DwLTPmax6, and when the time interval is Ti1, DwLTP6 is 0. DwLTD6 is negatively correlated with the time interval between the upstream neuron firing and the downstream neuron firing: when the time interval tends to 0, DwLTD6 reaches the specified maximum value DwLTDmax6.
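As an illustrative sketch of the asymmetric bipolar pulse time-dependent rule above (the linear falloff shape and all parameter values here are assumptions for illustration; the claims only require negative correlation with the time interval):

```python
def asymmetric_stdp_dw(dt, th1=20.0, th2=60.0, ltp_max=0.1, ltd_max=0.05):
    """Signed change in |weight| for the asymmetric bipolar window.

    dt: time (ms) from the most recent upstream firing to the downstream
    firing (dt >= 0: the downstream neuron fired after the upstream one).
    Within [0, Th1] the weight is enhanced, with the increment falling
    linearly from ltp_max at dt=0 to 0 at dt=Th1; within (Th1, Th2] the
    weight is weakened; beyond Th2 nothing happens.
    """
    if dt < 0 or dt > th2:
        return 0.0
    if dt <= th1:
        return ltp_max * (1.0 - dt / th1)  # maximal at coincidence, zero at Th1
    return -ltd_max                        # weakening window (Th1, Th2]
```

A usage note: calling this on each downstream spike with the latest pre-post interval, and adding the result to |weight| while preserving the connection's sign, reproduces the enhancement/weakening behaviour described in the claim.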


Abstract

A brain-like visual neural network with forward learning and meta-learning functions, comprising primary feature encoding modules (1) and composite feature encoding modules (2). It includes active and automatic attention mechanisms, neural circuits that explicitly encode the position information of visual features, and forward and backward neural pathways, supporting both bottom-up and top-down information processing. It employs multiple biologically plausible plasticity processes and can perform forward learning, rapidly encoding the visual representation information in input images or video streams as memory information, and performs information abstraction and information-component modulation to obtain the common feature information and differential feature information among objects, forming information channels of multiple information dimensions and degrees of information abstraction, improving generalization while retaining detail information. It also supports reinforcement learning, supervised learning, and novelty-signal modulation, does not rely on the end-to-end training paradigm of error backpropagation and gradient descent, and provides a foundation for neuromorphic chips.

Description

Brain-like visual neural network with forward learning and meta-learning functions
This application claims priority to Chinese patent application No. 202010424999.8, filed with the China National Intellectual Property Administration on May 19, 2020 and entitled "Brain-like visual neural network with forward learning and meta-learning functions", the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of this application relate to the technical field of brain-like vision algorithms and spiking neural networks, and in particular to a brain-like visual neural network with forward learning and meta-learning functions.
Background
Existing deep-learning vision algorithms have the following problems:
1. They lack explicit encoding of the positions of visual features, making it difficult to flexibly describe the positional composition relationships between features, which in turn hinders encoding and recognizing rich and precise shapes and contours and describing the shape-position relationships between objects;
2. They rely on the end-to-end training paradigm of error backpropagation and gradient descent, which involves a large number of partial-differential operations, has high training cost, and makes it difficult to break through the von Neumann architecture;
3. They lack mechanisms for combining and abstracting multiple kinds of information along multiple dimensions, making it difficult to form information channels of multiple information dimensions and degrees of information abstraction;
4. They have only forward neural pathways and lack backward neural pathways, and therefore cannot support top-down information processing;
5. They lack a forward-learning capability and cannot easily and quickly memorize the input pictures or video streams they have seen, which also leads to large training-data requirements and long training cycles.
The visual nervous system of the biological brain provides an excellent reference blueprint for designing brain-like visual neural networks.
According to the neural circuits and working principles of the biological visual nervous system, a brain-like visual neural network should include at least two position-encoding schemes: implicit position encoding and explicit position encoding. In implicit position encoding, the feature-encoding neurons in each layer acquire corresponding receptive fields through level-by-level corresponding connections from the picture to the neurons of each layer, rather than through dedicated neural circuits that encode position information; this approach is inflexible, cannot combine visual features flexibly at arbitrary positions, cannot combine, abstract, or process position information, and generalizes poorly in recognition. In explicit position encoding, dedicated neural circuits are used to encode position information; visual features can be combined flexibly at arbitrary positions, position information can be combined, abstracted, and processed, richer shape-position relationships can be encoded, recognition generalizes well, and cases with strong shape-position constraints can be recognized accurately.
The biological visual nervous system also has bottom-up and top-down bidirectional neural pathways and exhibits priming effects that aid visual search. Brain-like visual neural networks should also draw on these characteristics.
The biological visual nervous system is centered on plasticity mechanisms and supports multiple learning paradigms such as reinforcement learning, forward learning, and meta-learning. If a brain-like visual neural network adopts biologically plausible plasticity mechanisms, it can break away from the training paradigm of error backpropagation and gradient descent, avoid massive partial-differential operations, is expected to break through the von Neumann architecture, and is better suited to deployment in firmware or neuromorphic chips. In addition, a brain-like visual neural network should have forward-learning and meta-learning functions, quickly learning and encoding the visual features of the pictures or video streams it has seen, performing information abstraction, and finding the common representations among objects, so as to generalize better, reduce the data required for training, and shorten the training cycle.
Technical Problem
One of the objectives of the embodiments of this application is to provide a brain-like visual neural network with forward learning and meta-learning functions, aiming to solve the problems that existing machine-vision neural networks cannot accommodate multiple learning paradigms at the same time, generalize poorly, require massive partial-differential operations during training, and have long training cycles.
Technical Solution
To solve the above technical problem, the technical solution adopted by the embodiments of this application is as follows:
An embodiment of this application provides a brain-like visual neural network with forward learning and meta-learning functions, comprising a number of primary feature encoding modules and a number of composite feature encoding modules.
Each module comprises a plurality of neurons.
The neurons include primary feature encoding neurons, concrete feature encoding neurons, and abstract feature encoding neurons.
The primary feature encoding module comprises a plurality of the primary feature encoding neurons and encodes primary visual feature information.
The composite feature encoding module comprises a concrete feature encoding unit and an abstract feature encoding unit.
The concrete feature encoding unit comprises a plurality of the concrete feature encoding neurons and encodes concrete visual feature information.
The abstract feature encoding unit comprises a plurality of the abstract feature encoding neurons and encodes abstract visual feature information.
In this description, a unidirectional connection formed between neuron A and neuron B denotes the unidirectional connection A->B; a bidirectional connection formed between neuron A and neuron B denotes the bidirectional connection A<->B (i.e., A->B together with A<-B).
If there is a unidirectional connection A->B between neuron A and neuron B, neuron A is called a direct upstream neuron of neuron B, and neuron B a direct downstream neuron of neuron A; if there is a bidirectional connection A<->B between neuron A and neuron B, neurons A and B are each other's direct upstream and direct downstream neurons.
If there is no connection between neuron A and neuron B, but a connection path is formed between them through a number of other neurons, such as A->C->…->D->B, then neuron A is called an indirect upstream neuron of neuron B, neuron B an indirect downstream neuron of neuron A, and neuron D a direct upstream neuron of neuron B.
An excitatory connection is one through which, when its upstream neuron fires, a non-negative input is provided to the downstream neuron.
An inhibitory connection is one through which, when its upstream neuron fires, a non-positive input is provided to the downstream neuron.
A number of the primary feature encoding neurons each form unidirectional or bidirectional excitatory/inhibitory connections with a number of other primary feature encoding neurons.
A number of the primary feature encoding neurons each form unidirectional or bidirectional excitatory/inhibitory connections with a number of the concrete feature encoding neurons or a number of the abstract feature encoding neurons located in at least one of the composite feature encoding modules.
A number of the concrete feature encoding neurons located in one composite feature encoding module each form unidirectional or bidirectional excitatory/inhibitory connections with a number of the abstract feature encoding neurons located in the same composite feature encoding module.
A number of the concrete feature encoding neurons and abstract feature encoding neurons in a number of the composite feature encoding modules each form unidirectional or bidirectional excitatory/inhibitory connections with a number of the concrete feature encoding neurons and abstract feature encoding neurons of a number of other composite feature encoding modules.
The neural network buffers and encodes information through the firing of the neurons, and encodes, stores, and transfers information through the connections between the neurons.
A picture or video stream is input, and a number of pixel values of a number of pixels of each frame are each multiplied by weights and input to a number of the primary feature encoding neurons, so as to activate a number of the primary feature encoding neurons.
For a number of the neurons, their membrane potentials are computed to determine whether they fire; when a neuron fires, each of its downstream neurons accumulates membrane potential, which in turn determines whether those neurons fire, so that firing propagates through the neural network. The weight of a connection between an upstream neuron and a downstream neuron is a constant, or is adjusted dynamically through synaptic plasticity processes.
The working processes of the neural network include: a forward memory process, a memory triggering process, an information aggregation process, a directed information aggregation process, an information transcription process, a memory forgetting process, a memory self-consolidation process, an information component adjustment process, a reinforcement learning process, a novelty-signal modulation process, and a supervised learning process.
The synaptic plasticity processes include a unipolar upstream firing-dependent synaptic plasticity process, a unipolar downstream firing-dependent synaptic plasticity process, a unipolar upstream-and-downstream firing-dependent synaptic plasticity process, a unipolar upstream pulse-dependent synaptic plasticity process, a unipolar downstream pulse-dependent synaptic plasticity process, a unipolar pulse time-dependent synaptic plasticity process, an asymmetric bipolar pulse time-dependent synaptic plasticity process, and a symmetric bipolar pulse time-dependent synaptic plasticity process.
A number of the neurons are mapped to corresponding labels as outputs.
In one embodiment, a number of neurons of the neural network are spiking neurons or non-spiking neurons.
Beneficial Effects
Compared with the prior art, the embodiments of this application have the following advantages:
The embodiments of this application provide a brain-like visual neural network with forward learning and meta-learning functions, comprising primary feature encoding modules and composite feature encoding modules. It includes active and automatic attention mechanisms, has neural circuits that explicitly encode the position information of visual features, has forward and backward neural pathways, and supports both bottom-up and top-down information processing. It employs multiple biologically plausible plasticity processes and can perform forward learning, rapidly encoding the visual representation information in input images or video streams as memory information, and performs information abstraction and information-component modulation to obtain the common feature information and differential feature information among objects, forming information channels of multiple information dimensions and degrees of information abstraction, improving generalization while retaining detail information. It also supports reinforcement learning, supervised learning, and novelty-signal modulation, does not rely on the end-to-end training paradigm of error backpropagation and gradient descent, breaks through the bottleneck of the existing deep-learning theoretical system, and provides a foundation for the design and application of neuromorphic chips.
Brief Description of the Drawings
In order to illustrate the technical solutions in the embodiments of this application more clearly, the drawings needed in the description of the embodiments or of exemplary techniques are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is an overall block diagram of a brain-like visual neural network with forward learning and meta-learning functions provided by an embodiment of this application;
Fig. 2 is a schematic diagram of the input-side attention regulation unit and the output-side attention regulation unit in the composite feature encoding module of a brain-like visual neural network with forward learning and meta-learning functions in an embodiment of this application;
Fig. 3 is a schematic diagram of the position encoding unit in the composite feature encoding module of a brain-like visual neural network with forward learning and meta-learning functions in an embodiment of this application;
Fig. 4 is a topology schematic of the input-side attention regulation unit, the concrete feature encoding unit, and the abstract feature encoding unit of a brain-like visual neural network with forward learning and meta-learning functions in an embodiment of this application;
Fig. 5 is a topology schematic of the input-side attention regulation unit, the concrete feature encoding unit, the abstract feature encoding unit, and the output-side attention regulation unit of a brain-like visual neural network with forward learning and meta-learning functions in an embodiment of this application;
Fig. 6 is a topology schematic of the position encoding neurons corresponding to subspaces in a brain-like visual neural network with forward learning and meta-learning functions in an embodiment of this application;
Fig. 7 is a topology schematic of the position encoding neurons corresponding to regions in a brain-like visual neural network with forward learning and meta-learning functions in an embodiment of this application;
Fig. 8 is a schematic diagram of the receptive-field projection relationships of a brain-like visual neural network with forward learning and meta-learning functions in an embodiment of this application;
Fig. 9 is a schematic diagram of the forward neural pathway and backward neural pathway of a brain-like visual neural network with forward learning and meta-learning functions in an embodiment of this application;
Fig. 10 is a schematic diagram of the center-surround topology of a brain-like visual neural network with forward learning and meta-learning functions in an embodiment of this application.
Embodiments of the Invention
In order to make the objectives, technical solutions, and advantages of this application clearer, this application is described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain this application and are not intended to limit it.
To illustrate the technical solution of this application, a detailed description is given below with reference to specific drawings and embodiments.
Referring to Fig. 1, an embodiment of this application discloses a brain-like visual neural network with forward learning and meta-learning functions, comprising a number of (e.g., 1 to 2) primary feature encoding modules 1 and a number of (e.g., 3 to 3000) composite feature encoding modules 2.
Each module comprises a plurality of neurons.
The neurons include primary feature encoding neurons 10, concrete feature encoding neurons 210, and abstract feature encoding neurons 220.
The primary feature encoding module 1 comprises a plurality of (e.g., 2 million) primary feature encoding neurons 10, which encode primary visual feature information.
The composite feature encoding module 2 comprises a concrete feature encoding unit 21 and an abstract feature encoding unit 22.
The concrete feature encoding unit 21 comprises a plurality of (e.g., 100,000) concrete feature encoding neurons 210, which encode concrete visual feature information.
The abstract feature encoding unit 22 comprises a plurality of (e.g., 100,000) abstract feature encoding neurons 220, which encode abstract visual feature information.
In this description, a unidirectional connection formed between neuron A and neuron B denotes the unidirectional connection A->B; a bidirectional connection formed between neuron A and neuron B denotes the bidirectional connection A<->B (i.e., A->B together with A<-B).
If there is a unidirectional connection A->B between neuron A and neuron B, neuron A is called a direct upstream neuron of neuron B, and neuron B a direct downstream neuron of neuron A; if there is a bidirectional connection A<->B between them, neurons A and B are each other's direct upstream and direct downstream neurons.
If there is no connection between neuron A and neuron B, but a connection path is formed between them through a number of other neurons, such as A->C->…->D->B, then neuron A is called an indirect upstream neuron of neuron B, neuron B an indirect downstream neuron of neuron A, and neuron D a direct upstream neuron of neuron B.
An excitatory connection is one through which, when its upstream neuron fires, a non-negative input is provided to the downstream neuron.
An inhibitory connection is one through which, when its upstream neuron fires, a non-positive input is provided to the downstream neuron.
A number of (e.g., 10,000) primary feature encoding neurons 10 each form unidirectional or bidirectional excitatory/inhibitory connections with a number of (e.g., 1-20) other primary feature encoding neurons 10.
A number of (e.g., 500,000 to 1,000,000) primary feature encoding neurons 10 each form unidirectional or bidirectional excitatory/inhibitory connections with a number of (e.g., 10 to 1000) concrete feature encoding neurons 210 or a number of (e.g., 10-1000) abstract feature encoding neurons 220 located in at least one (e.g., 2) composite feature encoding module 2.
A number of (e.g., 50,000) concrete feature encoding neurons 210 located in one composite feature encoding module 2 each form unidirectional or bidirectional excitatory/inhibitory connections with a number of (e.g., 5000) abstract feature encoding neurons 220 located in the same composite feature encoding module 2.
A number of (e.g., 50,000) concrete feature encoding neurons 210 and abstract feature encoding neurons 220 in a number of (e.g., 3 to 3000) composite feature encoding modules 2 each form unidirectional or bidirectional excitatory/inhibitory connections with a number of (e.g., 2000) concrete feature encoding neurons 210 and abstract feature encoding neurons 220 of a number of (e.g., 1 to 300) other composite feature encoding modules 2.
The neural network buffers and encodes information through the firing of the neurons, and encodes, stores, and transfers information through the connections between the neurons.
A picture or video stream is input, and the R, G, and B pixel values of each pixel of each frame are each multiplied by weights and input to a number of (e.g., 2-30) primary feature encoding neurons 10, so as to activate a number of the primary feature encoding neurons 10.
For a number of the neurons, their membrane potentials are computed to determine whether they fire; when a neuron fires, each of its downstream neurons accumulates membrane potential, which in turn determines whether those neurons fire, so that firing propagates through the neural network. The weight of a connection between an upstream neuron and a downstream neuron is a constant, or is adjusted dynamically through synaptic plasticity processes.
The working processes of the neural network include: a forward memory process, a memory triggering process, an information aggregation process, a directed information aggregation process, an information transcription process, a memory forgetting process, a memory self-consolidation process, an information component adjustment process, a reinforcement learning process, a novelty-signal modulation process, and a supervised learning process.
The synaptic plasticity processes include a unipolar upstream firing-dependent synaptic plasticity process, a unipolar downstream firing-dependent synaptic plasticity process, a unipolar upstream-and-downstream firing-dependent synaptic plasticity process, a unipolar upstream pulse-dependent synaptic plasticity process, a unipolar downstream pulse-dependent synaptic plasticity process, a unipolar pulse time-dependent synaptic plasticity process, an asymmetric bipolar pulse time-dependent synaptic plasticity process, and a symmetric bipolar pulse time-dependent synaptic plasticity process.
A number of the neurons are mapped to corresponding labels as outputs. For example, the 100,000 abstract feature encoding neurons 220 of the high-level information channel are mapped to corresponding labels as outputs.
In this embodiment, a number of neurons of the neural network are spiking neurons or non-spiking neurons.
In this embodiment, all primary feature encoding neurons 10, concrete feature encoding neurons 210, abstract feature encoding neurons 220, and interneurons are spiking neurons.
For example, one implementation of a spiking neuron is the leaky integrate-and-fire neuron (LIF neuron model); one implementation of a non-spiking neuron is the artificial neuron used in deep neural networks (e.g., with a ReLU activation function).
In this embodiment, a number of neurons of the neural network are self-firing neurons; the self-firing neurons include conditional self-firing neurons and unconditional self-firing neurons.
If a conditional self-firing neuron has not been excited by external input within a first preset time interval, it self-fires with probability P.
An unconditional self-firing neuron automatically and gradually accumulates membrane potential in the absence of external input; when the membrane potential reaches the threshold, the unconditional self-firing neuron fires, and its membrane potential returns to the resting potential to restart the accumulation process.
In this embodiment, one implementation of the unconditional self-firing neuron is:
Step m1: membrane potential Vm = Vm + Vc;
Step m2: compute the weighted sum of all inputs and add it to Vm;
Step m3: if Vm >= threshold, the unconditional self-firing neuron fires, and Vm = Vrest;
Repeat steps m1 to m3.
Here Vm is the membrane potential, Vc an accumulation constant, Vrest the resting potential, and threshold the firing threshold.
For example, let Vc = 5 mV, Vrest = -70 mV, and threshold = -25 mV.
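Steps m1 to m3 with the example parameters can be sketched as follows (a minimal simulation, not the patent's reference implementation; the per-step input values are assumed to already be weighted sums):

```python
def simulate_unconditional_neuron(inputs, vc=5.0, vrest=-70.0, threshold=-25.0):
    """Simulate steps m1-m3 of the unconditional self-firing neuron.

    inputs: per-step external drive (already the weighted sum over connections, mV).
    Returns the list of step indices at which the neuron fired.
    """
    vm = vrest
    spikes = []
    for t, i_in in enumerate(inputs):
        vm += vc                 # step m1: constant accumulation
        vm += i_in               # step m2: add the weighted input sum
        if vm >= threshold:      # step m3: fire and reset to resting potential
            spikes.append(t)
            vm = vrest
    return spikes

# With no external input, the neuron needs (threshold - vrest)/vc = 9 steps
# per spike, so over 20 silent steps it self-fires at steps 8 and 17.
spikes = simulate_unconditional_neuron([0.0] * 20)
```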
In this embodiment, if a conditional self-firing neuron has not been excited by external input within the first preset time interval (e.g., configured as 10 minutes), it self-fires with probability P.
The conditional self-firing neuron records any one or more of the following kinds of information:
1) the time interval since its last firing;
2) its recent average firing rate;
3) the duration of its most recent firing;
4) its total number of firings;
5) the recent total number of executions of synaptic plasticity processes on its input connections;
6) the recent total number of executions of synaptic plasticity processes on its output connections;
7) the recent total weight change of its input connections;
8) the recent total weight change of its output connections.
In this embodiment, the rules for computing the probability P include any one or more of the following:
1) P is positively correlated with the time interval since the last firing;
2) P is positively correlated with the recent average firing rate;
3) P is positively correlated with the duration of the most recent firing;
4) P is positively correlated with the total number of firings;
5) P is positively correlated with the recent total number of executions of synaptic plasticity processes on the input connections;
6) P is positively correlated with the recent total number of executions of synaptic plasticity processes on the output connections;
7) P is positively correlated with the recent total weight change of the input connections;
8) P is positively correlated with the recent total weight change of the output connections;
9) P is positively correlated with the mean weight of all input connections;
10) P is positively correlated with the total norm of the weights of all input connections;
11) P is positively correlated with the total number of input connections;
12) P is positively correlated with the total number of output connections.
In this embodiment, let P = min(1, a*Tinterval^2 + b*Fr + c*Nin_plasticity + Bias), where a, b, and c are coefficients, Tinterval is the time interval since the last firing, Fr is the recent average firing rate, Nin_plasticity is the recent total number of executions of synaptic plasticity processes on the input connections, and Bias is a bias term.
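The probability formula of this embodiment can be sketched directly; the coefficient values a, b, c and Bias below are illustrative assumptions (the patent only requires them to be coefficients):

```python
import random

def self_fire_probability(t_interval, fr, n_in_plasticity,
                          a=1e-6, b=0.01, c=0.001, bias=0.0):
    """P = min(1, a*Tinterval^2 + b*Fr + c*Nin_plasticity + Bias)."""
    return min(1.0, a * t_interval ** 2 + b * fr + c * n_in_plasticity + bias)

def maybe_self_fire(p, rng=random.random):
    """Bernoulli draw: the conditional self-firing neuron fires with probability P."""
    return rng() < p
```

Usage note: a long silent interval, a high recent firing rate, or many recent plasticity events each push P toward its cap of 1, matching the positive correlations listed above.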
In this embodiment, the rules for computing the activation strength or firing rate Fs of a conditional self-firing neuron when it self-fires include any one or more of the following:
1) Fs = Fsd, where Fsd is a default firing frequency;
2) Fs is negatively correlated with the time interval since the last firing;
3) Fs is positively correlated with the recent average firing rate;
4) Fs is positively correlated with the duration of the most recent firing;
5) Fs is positively correlated with the total number of firings;
6) Fs is positively correlated with the recent total number of executions of synaptic plasticity processes on the input connections;
7) Fs is positively correlated with the recent total number of executions of synaptic plasticity processes on the output connections;
8) Fs is positively correlated with the recent total weight change of the input connections;
9) Fs is positively correlated with the recent total weight change of the output connections;
10) Fs is positively correlated with the mean weight of all input connections;
11) Fs is positively correlated with the total norm of the weights of all input connections;
12) Fs is positively correlated with the total number of input connections;
13) Fs is positively correlated with the total number of output connections.
If the conditional self-firing neuron is a spiking neuron, P is the probability of currently emitting a series of spikes; if it fires, the firing rate is Fs, and if not, the firing rate is 0.
If the conditional self-firing neuron is a non-spiking neuron, P is the probability of current activation; if it activates, the activation strength is Fs, and if not, the activation strength is 0.
In this embodiment, 500,000 to 1,000,000 of the primary feature encoding neurons 10, 10 million of the concrete feature encoding neurons 210, 10 million of the abstract feature encoding neurons 220, and 1 million of the input-side attention regulation neurons 230 are conditional self-firing neurons; 500,000 to 1,000,000 of the primary feature encoding neurons 10 are unconditional self-firing neurons.
In this embodiment, each neuron and each connection (including neuron-neuron connections and synapse-synapse connections) can be represented as vectors or matrices, and the operations of the neural network are then expressed as vector or matrix operations. For example, parameters of the same kind across the neurons and connections (such as neuron firing rates and connection weights) can be flattened into vectors or matrices, and signal propagation in the neural network can be expressed as the dot product of the neurons' firing-rate vector and the connections' weight vector (i.e., the weighted sum of the inputs).
In another embodiment, each neuron and each connection (including neuron-neuron connections and synapse-synapse connections) can also be implemented as objects; for example, each of them is implemented as an object (in the sense of object-oriented programming), and the operations of the neural network are then expressed as object invocations and message passing between objects.
In another embodiment, the neural network can also be implemented in firmware (e.g., FPGA) or as an ASIC (e.g., a neuromorphic chip).
In another embodiment, a number of connections of the neural network can be replaced by convolution operations; for example, all connections between the primary feature encoding neurons 10 and the concrete feature encoding neurons 210 can be replaced by convolution operations, which likewise produces signal-projection relationships with one or more kinds of receptive fields. For the projection relationships of receptive fields, see Fig. 8.
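The flattened dot-product form of signal propagation can be sketched as follows (a minimal illustration; the rate and weight values are arbitrary examples):

```python
# Signal propagation as a weighted sum: flatten the upstream firing rates and
# the connection weights into vectors; the downstream drive is their dot product.
def downstream_drive(firing_rates, weights):
    assert len(firing_rates) == len(weights)
    return sum(r * w for r, w in zip(firing_rates, weights))

# Three upstream neurons at 0.4, 50, and 20 Hz through weights 0.1, 0.02, -0.05:
drive = downstream_drive([0.4, 50.0, 20.0], [0.1, 0.02, -0.05])
# 0.04 + 1.0 - 1.0 = 0.04
```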
Referring to Figs. 2, 3, 4, and 5, in a further improved embodiment, the composite feature encoding module 2 may further include an input-side attention regulation unit and an output-side attention regulation unit.
The neurons further include input-side attention regulation neurons 230 and output-side attention regulation neurons 240.
The input-side attention regulation unit includes a number of (e.g., 100,000) input-side attention regulation neurons 230.
The output-side attention regulation unit includes a number of (e.g., 100,000) output-side attention regulation neurons 240.
A number of (e.g., 50,000) input-side attention regulation neurons 230 may each receive unidirectional or bidirectional excitatory/inhibitory connections from a number of (e.g., 10 to 10,000) primary feature encoding neurons 10.
Each input-side attention regulation neuron 230 forms unidirectional or bidirectional excitatory connections with a number of (e.g., 1 to 1000) concrete feature encoding neurons 210/abstract feature encoding neurons 220 of the composite feature encoding module 2 in which it is located.
Each input-side attention regulation neuron 230 receives unidirectional or bidirectional excitatory connections from a number of (e.g., 10 to 10,000) concrete feature encoding neurons 210/abstract feature encoding neurons 220/output-side attention regulation neurons 240 of other composite feature encoding modules 2.
Each input-side attention regulation neuron 230 may also form unidirectional or bidirectional excitatory connections with a number of (e.g., 1000) other input-side attention regulation neurons 230.
Each output-side attention regulation neuron 240 forms unidirectional or bidirectional excitatory connections with a number of (e.g., 1000 to 10,000) concrete feature encoding neurons 210/abstract feature encoding neurons 220/input-side attention regulation neurons 230 located in other composite feature encoding modules 2.
Each output-side attention regulation neuron 240 receives unidirectional or bidirectional excitatory connections from a number of (e.g., 1 to 1000) concrete feature encoding neurons 210/abstract feature encoding neurons 220 of the composite feature encoding module 2 in which it is located.
Each output-side attention regulation neuron 240 may also form unidirectional or bidirectional excitatory connections with a number of (e.g., 1000) other output-side attention regulation neurons 240.
Each input-side attention regulation neuron 230 may have an input-side attention control terminal 31; each output-side attention regulation neuron 240 may have an output-side attention control terminal 32.
The working processes of the neural network further include an active attention process and an automatic attention process.
The active attention process is: the strength of the attention control signal applied at the input-side attention control terminal 31 (whose amplitude may be positive, negative, or 0) regulates the activation strength, firing rate, or spike phase of each input-side attention regulation neuron 230, thereby controlling the information entering the corresponding concrete feature encoding unit 21 and abstract feature encoding unit 22 and adjusting the magnitude and proportion of each information component; alternatively, the strength of the attention control signal applied at the output-side attention control terminal 32 (whose amplitude may be positive, negative, or 0) regulates the activation strength, firing rate, or spike phase of each output-side attention regulation neuron 240, thereby controlling the information output from the corresponding concrete feature encoding unit 21 and abstract feature encoding unit 22 and adjusting the magnitude and proportion of each information component.
The automatic attention process is: when a number of neurons connected to the input-side attention regulation neurons 230 activate, these input-side attention regulation neurons 230 become easier to activate, so that the related information components are more easily input into the corresponding concrete feature encoding unit 21 and abstract feature encoding unit 22; or, when a number of neurons connected to the output-side attention regulation neurons 240 activate, these output-side attention regulation neurons 240 become easier to activate, so that the related information components are more easily output from the corresponding concrete feature encoding unit 21 and abstract feature encoding unit 22.
In a further improved embodiment, the neural network includes one or more kinds of information channels.
The working processes of the neural network further include an automatic information-channel formation process.
The automatic information-channel formation process is: by performing any one or more of the forward memory process, memory triggering process, information aggregation process, directed information aggregation process, information transcription process, memory forgetting process, memory self-consolidation process, information component adjustment process, active attention process, and automatic attention process, the connection relationships and weights between the neurons are adjusted so that the neural network forms one or more kinds of information channels, each encoding one or more kinds of information components; the information channels may intersect.
One or more kinds of information channels, each encoding one or more kinds of preset information components, can also be formed by presetting initial connection relationships and initial parameters (such as connection weights, neuron thresholds, initial membrane potentials of the neurons, and initial time constants of the neurons).
In this embodiment, the information channels include primary information channels.
The primary information channel is: all the primary feature encoding neurons 10 and their connections constitute the primary information channel.
The primary information channels include a primary contrast information channel, a primary orientation information channel, a primary edge information channel, and a primary color-patch information channel.
Referring to Fig. 10, the primary contrast information channel is: a number of adjacent pixels in the input image are selected as center-region pixels, and a number of pixels surrounding them as surround-region pixels; a number of pixel values of each center-region pixel and each surround-region pixel are each multiplied by weights and input to a number of the primary feature encoding neurons 10, forming a center-surround topology; these primary feature encoding neurons 10 and their connections constitute one or more kinds of primary contrast information channels.
Within the primary information channels, adjacent pixels of one or more pixel counts, positions, and areas covering the picture space can be selected, and one or more pixel values of these pixels multiplied by one or more kinds of weights, to construct a number of primary orientation information channels, primary edge information channels, primary color-patch information channels, or combinations thereof, each with one or more kinds of receptive fields.
For example, the R, G, and B pixel values of one center-region pixel are multiplied by a negative weight (e.g., -2), a negative weight (e.g., -2), and a positive weight (e.g., +4), respectively, and input to a number of (e.g., 1-2) primary feature encoding neurons 10; the R, G, and B pixel values of the upper-left (e.g., 1) and lower-right (e.g., 1) surround-region pixels are multiplied by a negative weight (e.g., -2), a negative weight (e.g., -2), and a positive weight (e.g., +4), respectively, while the R, G, and B pixel values of the surround-region pixels above (e.g., 1), below (e.g., 1), to the left (e.g., 1), to the right (e.g., 1), to the upper right (e.g., 1), and to the lower left (e.g., 1) are multiplied by a positive weight (e.g., +2), a positive weight (e.g., +2), and a negative weight (e.g., -4), respectively, and input to these primary feature encoding neurons 10; these primary feature encoding neurons 10 and their connections constitute an information channel with a 3x3 receptive field that is sensitive to blue-yellow contrast and to the 45° orientation from upper left to lower right.
For example, in a 15x15 pixel region, the R, G, and B values of each pixel are multiplied by a negative weight, a negative weight, and a positive weight, respectively, and input to a number of (e.g., 1-2) primary feature encoding neurons 10; these primary feature encoding neurons 10 and their connections constitute a color-patch information channel sensitive to blue.
The primary contrast information channels include a light-dark contrast information channel, a dark-light contrast information channel, a red-green contrast information channel, a green-red contrast information channel, a yellow-blue contrast information channel, and a blue-yellow contrast information channel.
The light-dark contrast information channel is: the R, G, and B pixel values of each (e.g., 9) center-region pixel are multiplied by positive weights (e.g., +1) and input to a number of (e.g., 1 to 10) primary feature encoding neurons 10, and the R, G, and B pixel values of each (e.g., 72) surround-region pixel are multiplied by negative weights (e.g., -1) and input to these primary feature encoding neurons 10; these primary feature encoding neurons 10 and their connections constitute the light-dark contrast information channel.
The dark-light contrast information channel is: the R, G, and B pixel values of each (e.g., 9) center-region pixel are multiplied by negative weights (e.g., -1) and input to a number of (e.g., 1 to 10) primary feature encoding neurons 10, and the R, G, and B pixel values of each (e.g., 72) surround-region pixel are multiplied by positive weights (e.g., +1) and input to these primary feature encoding neurons 10; these primary feature encoding neurons 10 and their connections constitute the dark-light contrast information channel.
Referring to Fig. 10, the red-green contrast information channel is: the R, G, and B pixel values of each (e.g., 4) center-region pixel are multiplied by a positive weight (e.g., +2), a negative weight (e.g., -2), and a positive weight (e.g., +1), respectively, and input to a number of (e.g., 1 to 10) primary feature encoding neurons 10, and the R, G, and B pixel values of each (e.g., 32) surround-region pixel are multiplied by a negative weight (e.g., -2), a positive weight (e.g., +2), and a positive weight (e.g., +1), respectively, and input to these primary feature encoding neurons 10; these primary feature encoding neurons 10 and their connections constitute the red-green contrast information channel.
The green-red contrast information channel is: the R, G, and B pixel values of each (e.g., 4) center-region pixel are multiplied by a negative weight (e.g., -2), a positive weight (e.g., +2), and a positive weight (e.g., +1), respectively, and input to a number of (e.g., 1 to 10) primary feature encoding neurons 10, and the R, G, and B pixel values of each (e.g., 32) surround-region pixel are multiplied by a positive weight (e.g., +2), a negative weight (e.g., -2), and a positive weight (e.g., +1), respectively, and input to these primary feature encoding neurons 10; these primary feature encoding neurons 10 and their connections constitute the green-red contrast information channel.
The yellow-blue contrast information channel is: the R, G, and B pixel values of each (e.g., 4) center-region pixel are multiplied by a positive weight (e.g., +2), a positive weight (e.g., +2), and a negative weight (e.g., -4), respectively, and input to a number of (e.g., 1 to 10) primary feature encoding neurons 10, and the R, G, and B pixel values of each (e.g., 32) surround-region pixel are multiplied by a negative weight (e.g., -2), a negative weight (e.g., -2), and a positive weight (e.g., +4), respectively, and input to these primary feature encoding neurons 10; these primary feature encoding neurons 10 and their connections constitute the yellow-blue contrast information channel.
The blue-yellow contrast information channel is: the R, G, and B pixel values of each (e.g., 4) center-region pixel are multiplied by a negative weight (e.g., -2), a negative weight (e.g., -2), and a positive weight (e.g., +4), respectively, and input to a number of (e.g., 1 to 10) primary feature encoding neurons 10, and the R, G, and B pixel values of each (e.g., 32) surround-region pixel are multiplied by a positive weight (e.g., +2), a positive weight (e.g., +2), and a negative weight (e.g., -4), respectively, and input to these primary feature encoding neurons 10; these primary feature encoding neurons 10 and their connections constitute the blue-yellow contrast information channel.
Specifically, the primary visual feature information includes light-dark contrast information, dark-light contrast information, red-green contrast information, green-red contrast information, yellow-blue contrast information, blue-yellow contrast information, primary edge information, primary orientation information, receptive-field information, and color-patch information.
In a further improved embodiment, the primary information channels further include a primary optical-flow information channel.
The primary optical-flow information channel is: optical flow is computed for each pixel of the input image to obtain the direction and speed of the optical-flow motion; different direction and speed values are combined and each multiplied by weights and input to a number of (e.g., 1 to 10) primary feature encoding neurons 10; these primary feature encoding neurons 10 and their connections constitute the primary optical-flow information channel.
The primary visual feature information further includes optical-flow information.
Referring to Figs. 3, 6, and 7, in a further improved embodiment, the composite feature encoding module 2 may further include a number of (e.g., 1 to 10) position encoding units 25.
The neurons further include position encoding neurons 250.
The position encoding unit 25 includes a number of (e.g., 1000 to 10,000) position encoding neurons 250, which encode position information (of visual features relative to the picture space or relative to other visual features).
Each position encoding unit 25 corresponds to a number of subspaces of the picture space; the subspaces may intersect.
Each position encoding neuron 250 corresponds to the positionally corresponding regions within the subspaces of the position encoding unit 25 in which it is located, and receives unidirectional or bidirectional excitatory connections from a number of (e.g., 1 to 10,000) neurons (such as primary feature encoding neurons 10) whose receptive fields are those regions; for the projection relationships of receptive fields, see Fig. 8.
A number of (e.g., all) position encoding neurons 250 each form unidirectional or bidirectional excitatory connections with other position encoding neurons 250 corresponding to the same regions.
A number of (e.g., 1000 to 50,000) position encoding neurons 250 may also each form unidirectional or bidirectional excitatory connections with a number of (e.g., 1 to 1000) input-side attention regulation neurons 230/output-side attention regulation neurons 240/concrete feature encoding neurons 210/abstract feature encoding neurons 220 located in the composite feature encoding module 2 in which they are located.
A number of (e.g., 1000 to 50,000) position encoding neurons 250 may also each form unidirectional or bidirectional excitatory connections with a number of (e.g., 1 to 1000) input-side attention regulation neurons 230/concrete feature encoding neurons 210/abstract feature encoding neurons 220 located in other composite feature encoding modules 2.
For example, in Fig. 6, the position encoding neurons 250A, 250B, 250C, and 250D correspond to certain subspaces/regions of the picture space, and the subspaces/regions they correspond to intersect one another; the position encoding neurons 250A, 250B, 250C, and 250D correspond to the same regions as the position encoding neurons 250E, 250F, 250G, and 250H, respectively, and form bidirectional excitatory connections with them.
As another example, in Fig. 7, the position encoding neuron 250Y receives excitatory connections from the primary feature encoding neurons 10E, 10F, etc. of the positionally corresponding regions in four subspaces.
In a further improved embodiment, the information channels further include mid-level information channels.
The mid-level information channels include a mid-level position information channel.
The mid-level position information channel is: through the automatic information-channel formation process, or by presetting initial connection relationships and initial parameters, for all of the input-side attention regulation neurons 230/output-side attention regulation neurons 240/concrete feature encoding neurons 210/abstract feature encoding neurons 220 in a number of (e.g., 1 to 10) composite feature encoding modules 2, the proportion contributed by connections from the position encoding neurons 250 and from neurons encoding position information, within the total weight of some or all of their input connections, reaches or exceeds a first preset proportion (e.g., configured as 30%), and the weights of the connections from the position encoding neurons 250 and from neurons encoding position information are combined in multiple proportions, so that these input-side attention regulation neurons 230/output-side attention regulation neurons 240/concrete feature encoding neurons 210/abstract feature encoding neurons 220 each have one or more kinds of receptive fields and each encode one or more kinds of position information, and together with the position encoding neurons 250 constitute the mid-level position information channel.
Since the mid-level position information channel includes neurons that encode the position information of visual features, it uses explicit position encoding.
In this embodiment, the mid-level information channels further include a mid-level visual feature information channel.
The mid-level visual feature information channel is: through the automatic information-channel formation process, or by presetting initial connection relationships and initial parameters, for a number of (e.g., 80% of all) the input-side attention regulation neurons 230/output-side attention regulation neurons 240/concrete feature encoding neurons 210/abstract feature encoding neurons 220 in a number of (e.g., 10 to 2000) composite feature encoding modules 2, the proportion contributed by connections from neurons of the primary information channels, within the total weight of some or all of their input connections, reaches or exceeds a second preset proportion (e.g., configured as 60%), and the weights of the connections from the neurons of the primary and mid-level information channels corresponding to the various regions and positions of the picture space are combined in multiple proportions, so that these input-side attention regulation neurons 230/output-side attention regulation neurons 240/concrete feature encoding neurons 210/abstract feature encoding neurons 220 each have one or more kinds of receptive fields and each encode one or more kinds of mid-level visual feature information, together constituting the mid-level visual feature information channel.
Specifically, the mid-level visual feature information includes composite color-contrast information, composite light-dark contrast information, composite orientation information, composite edge information, area information, and motion information.
For example, 1 to 10 composite feature encoding modules 2 are selected, and 80% of their input-side attention regulation neurons 230/output-side attention regulation neurons 240/concrete feature encoding neurons 210/abstract feature encoding neurons 220 each receive unidirectional excitatory connections from the neurons of two primary orientation information channels; the neurons of these composite feature encoding modules 2 then encode the combination of the primary orientation information encoded by these two channels (one the horizontal-rightward direction, the other the direction 10° from upper left to lower right, both with 9x9 receptive fields), i.e., composite orientation information (the direction interval from horizontal-rightward to 10° upper-left-to-lower-right, with a 9x9 receptive field).
Since the neurons of the mid-level visual feature information channel directly receive connections from the neurons of the primary information channels, and these connection relationships correspond to the various regions and positions of the picture space and form receptive fields, it uses implicit position encoding.
In this embodiment, the information channels further include high-level information channels.
The high-level information channel is: through the automatic information-channel formation process, or by presetting initial connection relationships and initial parameters, for a number of (e.g., 80% of all) the input-side attention regulation neurons 230/output-side attention regulation neurons 240/concrete feature encoding neurons 210/abstract feature encoding neurons 220 in a number of (e.g., 10 to 2000) composite feature encoding modules 2, the proportion contributed by connections from neurons of the mid-level information channels, within the total weight of some or all of their input connections, reaches or exceeds a third preset proportion (e.g., configured as 40%), and the weights of the connections from the neurons of the primary, mid-level, and high-level information channels corresponding to the various regions and positions of the picture space are combined in multiple proportions, so that these input-side attention regulation neurons 230/output-side attention regulation neurons 240/concrete feature encoding neurons 210/abstract feature encoding neurons 220 each have one or more kinds of receptive fields and each encode one or more kinds of high-level visual feature information, together constituting the high-level information channel.
Specifically, the high-level visual feature information includes contour information, texture information, brightness information, transparency information, shape-position information, composite motion information, and objectified information.
The objectified information is the recognized object (which may be an instance or a category); each object may have a name, such as "apple", "banana", or "car".
For example, 1 to 10 composite feature encoding modules 2 are selected, and 80% of their input-side attention regulation neurons 230/output-side attention regulation neurons 240/concrete feature encoding neurons 210/abstract feature encoding neurons 220 each receive unidirectional excitatory connections from the neurons of multiple mid-level information channels that encode composite edge information and position information; the neurons of these composite feature encoding modules 2 then encode shape-position information.
In this embodiment, one basic working process of the neural network, its modules, or its units is: selecting a number of oscillation-starting neurons, source neurons, and target neurons from a number of candidate neurons (in one or more modules or submodules), causing a number of the oscillation-starting neurons to produce a firing distribution and remain active for a preset time or number of operation cycles, and letting the connections between the neurons participating in this working process adjust their weights through the synaptic plasticity processes.
The firing distribution is: a number of the neurons each produce the same or different activation strengths, firing rates, or spike phases.
For example, neurons A, B, and C produce activation strengths of amplitude 2, 5, and 9, or firing rates of 0.4 Hz, 50 Hz, and 20 Hz, or spike phases of 100 ms, 300 ms, and 150 ms, respectively.
The process of selecting oscillation-starting neurons, source neurons, or target neurons from a number of candidate neurons includes any one or more of the following: selecting the top Kf1 neurons with the smallest total weight norm over some or all input connections; the top Kf2 neurons with the smallest total weight norm over some or all output connections; the top Kf3 neurons with the largest total weight norm over some or all input connections; the top Kf4 neurons with the largest total weight norm over some or all output connections; the top Kf5 neurons with the largest activation strength or firing rate or that fire earliest; the top Kf6 neurons with the smallest activation strength or firing rate or that fire latest (including not firing); the top Kf7 neurons that have gone the longest since their last firing; the top Kf8 neurons that fired most recently; the top Kf9 neurons that have gone the longest since a synaptic plasticity process was last performed on an input or output connection; and the top Kf10 neurons for which a synaptic plasticity process was most recently performed on an input or output connection.
The way of causing a number of the neurons to produce a firing distribution and remain active for a preset period (e.g., 200 ms to 2 s) may be: inputting a sample (picture or video stream), directly activating a number of neurons in the neural network, self-firing of a number of neurons in the neural network, or propagation through the neural network of the existing activation states of a number of neurons, so as to activate a number of the neurons (e.g., the oscillation-starting neurons).
Referring to Fig. 9, in this embodiment, the neural network includes a forward neural pathway and a backward neural pathway.
The forward and backward neural pathways are: a number of the primary feature encoding modules 1/composite feature encoding modules 2 are cascaded in a first preset order; the neural pathway formed by cascading a number of the neurons therein along the first preset order is the forward neural pathway, and the neural pathway formed by cascading a number of the neurons therein against the first preset order is the backward neural pathway.
Specifically, the first preset order is: primary information channels, mid-level information channels, high-level information channels. The forward neural pathway is formed by bottom-up cascading (i.e., along the first preset order) of the neurons of a number of primary, mid-level, and high-level information channels, and mainly participates in recognizing external input information (pictures or video streams) and in the forward learning process. The backward neural pathway is formed by top-down cascading (i.e., against the first preset order) of the neurons of a number of high-level, mid-level, and primary information channels, and mainly participates in pattern completion, directed priming, association, or imagination processes.
In each primary feature encoding module 1/composite feature encoding module 2, the neurons constituting its forward neural pathway may form unidirectional or bidirectional excitatory/inhibitory connections with the neurons constituting its backward neural pathway.
In this embodiment, the working processes of the neural network further include a directed priming process.
The directed priming process includes a forward priming process and a backward priming process.
The forward priming process is:
Step o1: select a number of neurons in the forward neural pathway as oscillation-starting neurons.
Step o2: cause each oscillation-starting neuron to produce a firing distribution and remain active for a third preset period Tfprime.
Step o3: the neurons in the backward neural pathway that receive excitatory connections from the oscillation-starting neurons receive non-negative inputs and become easier to activate.
Step o4: the neurons in the backward neural pathway that receive inhibitory connections from the oscillation-starting neurons receive non-positive inputs and become harder to activate.
The backward priming process is:
Step n1: select a number of neurons in the backward neural pathway as oscillation-starting neurons.
Step n2: cause each oscillation-starting neuron to produce a firing distribution and remain active for a tenth preset period Tbprime.
Step n3: the neurons in the forward neural pathway that receive excitatory connections from the oscillation-starting neurons receive non-negative inputs and become easier to activate.
Step n4: the neurons in the forward neural pathway that receive inhibitory connections from the oscillation-starting neurons receive non-positive inputs and become harder to activate.
For example, Tfprime and Tbprime are configured as 5 seconds.
The directed priming process can be used for visual search. During visual search, the neurons in the backward neural pathway that encode the searched-for information components serve as oscillation-starting neurons, produce a firing distribution representing the searched-for information components, and remain active for the tenth preset period Tbprime (e.g., configured as 5 seconds); the neurons in the forward neural pathway that encode the searched-for information components become easier to activate, while those that do not are inhibited; when the searched-for information components appear in the external input information (picture or video stream), they are recognized more easily, and irrelevant information components are filtered out.
Referring to Fig. 9, for example, the neural network may be configured with a primary feature encoding module 1A and composite feature encoding modules 2A, 2B, and 2C; these four modules respectively contain primary feature encoding neurons 10A, 10B, and 10C; concrete feature encoding neurons 210A, 210B, and 210C; concrete feature encoding neurons 210D, 210E, and 210F; and concrete feature encoding neurons 210G, 210H, and 210I.
The first preset order may be configured as: primary feature encoding module 1A, composite feature encoding module 2A, composite feature encoding module 2B, composite feature encoding module 2C. Primary feature encoding neuron 10A and concrete feature encoding neurons 210A, 210D, and 210G are cascaded through unidirectional excitatory connections along the first preset order to form forward neural pathway A; primary feature encoding neuron 10B and concrete feature encoding neurons 210B, 210E, and 210H are cascaded through unidirectional excitatory connections along the first preset order to form forward neural pathway B; primary feature encoding neuron 10C and concrete feature encoding neurons 210C, 210F, and 210I are cascaded through unidirectional excitatory connections against the first preset order to form backward neural pathway C.
Primary feature encoding neuron 10C forms a bidirectional excitatory connection and a bidirectional inhibitory connection with primary feature encoding neurons 10A and 10B, respectively.
Concrete feature encoding neuron 210C forms a bidirectional excitatory connection and a bidirectional inhibitory connection with concrete feature encoding neurons 210A and 210B, respectively.
Concrete feature encoding neuron 210F forms a bidirectional excitatory connection and a bidirectional inhibitory connection with concrete feature encoding neurons 210D and 210E, respectively.
Concrete feature encoding neuron 210I forms a bidirectional excitatory connection and a bidirectional inhibitory connection with concrete feature encoding neurons 210G and 210H, respectively.
Thus, backward neural pathway C promotes forward neural pathway A and inhibits forward neural pathway B.
In this embodiment, the neurons further include interneurons.
The primary feature encoding module 1 and the composite feature encoding module 2 each include a number of (e.g., 1000 to 10,000) interneurons; a number of the interneurons each form unidirectional inhibitory connections with a corresponding number of (e.g., 1 to 10,000) neurons in the corresponding module, and the corresponding neurons in each module each form unidirectional excitatory connections with a number of (e.g., 1 to 10,000) corresponding interneurons.
In this embodiment, two or more corresponding groups of neurons in each module compete with one another (lateral inhibition) through the interneurons. When input is applied, the competing groups of neurons produce different overall activation strengths (or firing rates); the lateral inhibition exerted by the interneurons makes the stronger overall activation (or firing rate) stronger and the weaker weaker, or lets the neurons (or groups) that start firing earlier inhibit those that fire later, creating a time difference. This ensures that the information encoding of each group of neurons is independent, mutually decoupled, and automatically grouped; it allows the input information in the memory triggering process to trigger the memory information most relevant to it, and allows the neurons participating in the directed information aggregation process to be automatically grouped into Ga1, Ga2, Ga3, and Ga4 according to their responses (activation strength or firing rate, or firing order).
In this embodiment, the neurons further include differential information decoupling neurons, and the working processes of the neural network further include a differential information decoupling process.
The differential information decoupling process is:
Select a number of input-side attention regulation neurons 230/output-side attention regulation neurons 240/concrete feature encoding neurons 210/abstract feature encoding neurons 220 as target neurons.
Select a number of neurons having unidirectional/bidirectional excitatory connections with the target neurons as concrete information source neurons.
Select a number of other neurons having unidirectional/bidirectional excitatory connections with the target neurons as abstract information source neurons.
Each concrete information source neuron may have a number of matching differential information decoupling neurons; each concrete information source neuron forms unidirectional excitatory connections with each of its matching differential information decoupling neurons; the differential information decoupling neurons form unidirectional inhibitory connections with the target neurons, or form unidirectional inhibitory synapse-synapse connections with the connections from the information source neurons to the target neurons, so that the signal from the concrete information source neuron to the target neurons is inhibitorily regulated by its matching differential information decoupling neurons; the abstract information source neurons form unidirectional excitatory connections with the differential information decoupling neurons.
Each differential information decoupling neuron may have a decoupling control signal input terminal; the degree of information decoupling is adjusted by adjusting the magnitude of the signal applied at the decoupling control signal input terminal (which may be positive, negative, or 0).
The weight of the unidirectional excitatory connection between a concrete/abstract information source neuron and its matching differential information decoupling neuron is a constant, or is adjusted dynamically through the synaptic plasticity processes.
In this embodiment, one scheme for the synapse-synapse connection is: connection Sconn1 receives inputs from a number of other connections (denoted Sconn2); when the upstream neuron of connection Sconn1 fires, the value passed by connection Sconn1 to the downstream neuron is the weight of connection Sconn1 plus the input values of the connections Sconn2.
In this embodiment, a composite feature encoding module 2 is selected to perform the forward learning process; one group of its input-side attention regulation neurons 230 is selected as oscillation-starting neurons, and one group of its concrete feature encoding neurons 210 as target neurons. When a novel sample (picture or video stream) is input, a plurality of the target neurons are activated, and through the forward learning process the visual feature information in the sample is encoded as concrete feature information (a kind of memory information, i.e., the original feature information components of each object) and stored.
Then, the composite feature encoding module 2 performs the directed information aggregation process: the same group of input-side attention regulation neurons 230 as before is selected as oscillation-starting neurons, the same group of concrete feature encoding neurons 210 as before as source neurons, and one group of its abstract feature encoding neurons 220 as target neurons. After one or more directed information aggregation processes are completed, the concrete feature information is aggregated into abstract feature information (a kind of memory information, i.e., the common feature information components among objects) and encoded and stored through a plurality of the target neurons.
Next, the composite feature encoding module 2 performs the differential information decoupling process: the same group of concrete feature encoding neurons 210 as before is selected as the concrete information source neurons, the same group of abstract feature encoding neurons 220 as before as the abstract information source neurons, and a plurality of output-side attention regulation neurons 240 of the composite feature encoding module 2 as target neurons. When the same sample is input again, a plurality of the concrete information source neurons are activated, triggering the concrete feature information they encode, and a plurality of the abstract information source neurons are also activated, triggering the abstract feature information they encode. These abstract information source neurons activate the differential information decoupling neurons, which in turn inhibit the signals from the concrete information source neurons to the target neurons, so that the abstract feature information replaces the original concrete feature information as the input to the target neurons; that is, the information finally output by this composite feature encoding module 2 to other composite feature encoding modules 2 is the abstract feature information.
Next, the composite feature encoding module 2 may perform the information component adjustment process: the same group of input-side attention regulation neurons 230 as before is selected as oscillation-starting neurons, the same group of concrete feature encoding neurons 210 as before as target neurons, and Kb1 is set to a small value (e.g., 1). After one or more information component adjustment processes are completed, the feature information of each target neuron becomes differential feature information (a kind of memory information, i.e., the information components representing the differences among objects). At this point, the signals output by these concrete feature encoding neurons 210 to the output-side attention regulation neurons 240 are no longer inhibited by the differential information decoupling neurons and can be transmitted to the downstream neural network.
The whole process can be executed one or more times, gradually abstracting concrete feature information into abstract feature information while retaining differential feature information. This forms sparser codes, saves encoding and signal-transmission bandwidth, makes the representations of the neural network generalize better (because abstract feature information is formed), and avoids losing detail when forming higher-level representations (because differential feature information is retained).
In this embodiment, the forward learning process is:
Step a1: select a number of the neurons as oscillation-starting neurons.
Step a2: select a number of the neurons as target neurons.
Step a3: the unidirectional excitatory connections between each activated oscillation-starting neuron and a number of the target neurons adjust their weights through the synaptic plasticity processes.
Step a4: each activated target neuron may establish unidirectional or bidirectional excitatory connections with a number of other target neurons, and may also establish a self-loop excitatory connection with itself; these connections adjust their weights through the synaptic plasticity processes.
When the input/output connections of each target neuron adjust their weights through the synaptic plasticity processes, the weights of some or all input or output connections may or may not be normalized.
In this embodiment, in the forward learning process, 10,000 input-side attention regulation neurons 230 are selected as oscillation-starting neurons, and 10,000 concrete feature encoding neurons 210/abstract feature encoding neurons 220 may be selected as target neurons.
The forward learning process can quickly encode the visual feature information of each object in the current input picture/video stream and store it in the primary feature encoding module 1/concrete feature encoding unit 21/abstract feature encoding unit 22, facilitating rapid recognition when the same or similar objects are seen again, and providing material for the information aggregation/directed information aggregation/information component adjustment processes to find the cluster centers (i.e., common features) and differentiated features of multiple similar objects; it is the foundation of meta-learning.
In this embodiment, the memory triggering process is: information (a picture or video stream) is input, or a number of the neurons in the neural network are directly activated, or a number of the neurons self-fire, or the existing activation states of a number of the neurons propagate through the neural network; if within a second preset period (e.g., 1 s) this causes a number of the neurons in a target region to fire, the representations of the firing neurons of the target region, optionally together with their activation strengths or firing rates, are taken as the result of the memory triggering process.
The target region may be any subnetwork of the neural network (e.g., all abstract feature encoding neurons 220 of a given composite feature encoding module 2).
In this embodiment, the memory triggering process may be embodied as the recognition of input information (a picture or video stream), and the firing neurons of the target region may be mapped through a number of readout-layer neurons to a number of labels as the recognition result. Each neuron of the target region forms unidirectional excitatory or inhibitory connections with a number of the readout-layer neurons; each readout-layer neuron corresponds to a label, and the higher its activation strength or firing rate, or the earlier it starts firing, the higher the relevance of the input information to its corresponding label, and vice versa. For example, the labels may be "apple", "car", "grassland", and so on.
In this embodiment, the information aggregation process is:
Step g1: select a number of the neurons as oscillation-starting neurons.
Step g2: select a number of the neurons as source neurons.
Step g3: select a number of the neurons as target neurons.
Step g4: cause each oscillation-starting neuron to produce a firing distribution and remain active for an eighth preset period Tk.
Step g5: during the eighth preset period Tk, the unidirectional or bidirectional excitatory/inhibitory connections between each activated oscillation-starting neuron and a number of the target neurons adjust their weights through the synaptic plasticity processes.
Step g6: during the eighth preset period Tk, the unidirectional or bidirectional excitatory/inhibitory connections between each activated source neuron and a number of the target neurons adjust their weights through the synaptic plasticity processes.
Step g7: each execution of steps g1 to g6 counts as one iteration; execute one or more iterations.
A number of the target neurons are mapped to corresponding labels as the result of the information aggregation process.
For example, in the information aggregation process, the eighth preset period Tk is configured as 100 ms to 2 seconds; 10,000 input-side attention regulation neurons 230 of any composite feature encoding module 2 are selected as oscillation-starting neurons, 10,000 concrete feature encoding neurons 210 of that module as source neurons, and 10,000 abstract feature encoding neurons 220 of that module as target neurons.
In this embodiment, the directed information aggregation process is:
Step h1: select a number of the neurons as oscillation-starting neurons.
Step h2: select a number of the neurons as source neurons.
Step h3: select a number of the neurons as target neurons.
Step h4: cause each oscillation-starting neuron to produce a firing distribution and remain active for a ninth preset period Ta.
Step h5: during the ninth preset period Ta, Ma1 source neurons and Ma2 target neurons are activated.
Step h6: during the ninth preset period Ta, the top Ka1 source neurons with the highest activation strength or firing rate or that fire earliest are denoted Ga1, and the remaining Ma1-Ka1 activated source neurons are denoted Ga2.
Step h7: during the ninth preset period Ta, the top Ka2 target neurons with the highest activation strength or firing rate or that fire earliest are denoted Ga3, and the remaining Ma2-Ka2 activated target neurons are denoted Ga4.
Step h8: during the ninth preset period Ta, the unidirectional or bidirectional excitatory/inhibitory connections between each source neuron in Ga1 and a number of target neurons in Ga3 undergo one or more synaptic weight strengthening processes.
Step h9: during the ninth preset period Ta, the unidirectional or bidirectional excitatory/inhibitory connections between each source neuron in Ga1 and a number of target neurons in Ga4 undergo one or more synaptic weight weakening processes.
Step h10: during the ninth preset period Ta, the unidirectional or bidirectional excitatory/inhibitory connections between each source neuron in Ga2 and a number of target neurons in Ga3 may or may not undergo one or more synaptic weight weakening processes.
Step h11: during the ninth preset period Ta, the unidirectional or bidirectional excitatory/inhibitory connections between each source neuron in Ga2 and a number of target neurons in Ga4 may or may not undergo one or more synaptic weight strengthening processes.
Step h12: during the ninth preset period Ta, the unidirectional or bidirectional excitatory/inhibitory connections between each activated oscillation-starting neuron and a number of target neurons in Ga3 undergo one or more synaptic weight strengthening processes.
Step h13: during the ninth preset period Ta, the unidirectional or bidirectional excitatory/inhibitory connections between each activated oscillation-starting neuron and a number of target neurons in Ga4 undergo one or more synaptic weight weakening processes.
Step h14: each execution of steps h1 to h13 counts as one iteration; execute one or more iterations.
In steps h8 to h13, after one or more synaptic weight strengthening or weakening processes, the weights of some or all input or output connections of each source or target neuron may or may not be normalized.
The synaptic weight strengthening process may use the unipolar upstream-and-downstream firing-dependent synaptic strengthening process or the unipolar pulse time-dependent synaptic strengthening process.
The synaptic weight weakening process may use the unipolar upstream-and-downstream firing-dependent synaptic weakening process or the unipolar pulse time-dependent synaptic weakening process.
The synaptic weight strengthening and weakening processes may also each use the asymmetric bipolar pulse time-dependent synaptic plasticity process or the symmetric bipolar pulse time-dependent synaptic plasticity process.
The representations of the target neurons may be taken as the result of the directed information aggregation process applied to the representations of the source neurons, and mapped to corresponding labels as output.
Ma1 and Ma2 are positive integers; Ka1 is a positive integer not exceeding Ma1, and Ka2 is a positive integer not exceeding Ma2.
For example, in the directed information aggregation process, let Ma1=100, Ma2=10, Ka1=3, Ka2=2, and the ninth preset period Ta = 200 ms to 2 s; 10,000 input-side attention regulation neurons 230 of any composite feature encoding module 2 are selected as oscillation-starting neurons, 10,000 concrete feature encoding neurons 210 of that module as source neurons, and 10,000 abstract feature encoding neurons 220 of that module as target neurons.
Each target neuron then represents an abstract, same-level, or concrete representation of the representations of the source neurons connected to it; the weight of the connection from a source neuron to each target neuron represents the relevance between the representation of that source neuron and the representation of that target neuron: the larger the weight, the higher the relevance, and vice versa.
For example, when the directed information aggregation process is embodied as a directed information abstraction process, the source neurons represent concrete information (e.g., subclasses or instances) and the target neurons represent abstract information (e.g., superclasses). Each target neuron then represents the cluster center of the source neurons connected to it (the former represents the common information components in the latter); the weight of the connection from a source neuron to each target neuron represents the relevance (or representational distance) between that source neuron and the information represented by that target neuron (i.e., the cluster center): the larger the weight, the higher the relevance (i.e., the closer the representational distance). This directed information abstraction process is a clustering process, i.e., a meta-learning process.
If the current target neurons are taken as new source neurons, another group of neurons is selected as new target neurons, and the directed information aggregation process is executed again, and so on iteratively, ever-higher-level abstract information representations can be formed.
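One iteration of the grouping and weight-update core of the process above (steps h6 to h9) can be sketched as a winner-take-k update; the constant update sizes and the omission of steps h10 to h13 are simplifying assumptions for illustration:

```python
def directed_aggregation_step(weights, src_rates, tgt_rates, ka1, ka2,
                              dw_plus=0.05, dw_minus=0.05):
    """One illustrative iteration of steps h6-h9.

    The Ka1 most active source neurons (Ga1) strengthen their connections to
    the Ka2 most active target neurons (Ga3) and weaken those to the other
    activated targets (Ga4). weights[i][j] is the connection from source i
    to target j; silent targets (rate <= 0) do not take part.
    """
    ga1 = sorted(range(len(src_rates)), key=lambda i: -src_rates[i])[:ka1]
    ga3 = set(sorted(range(len(tgt_rates)), key=lambda j: -tgt_rates[j])[:ka2])
    for i in ga1:
        for j in range(len(tgt_rates)):
            if tgt_rates[j] <= 0:
                continue  # only activated target neurons participate
            weights[i][j] += dw_plus if j in ga3 else -dw_minus
    return weights
```

Repeated over many samples, this pulls the winning targets' incoming weights toward the co-active source patterns, which is the clustering (meta-learning) behaviour described in the text.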
In this embodiment, the information transcription process is:
Step f1: select a number of the neurons as oscillation-starting neurons.
Step f2: select a number of direct or indirect downstream neurons of the oscillation-starting neurons as source neurons.
Step f3: select a number of direct or indirect downstream neurons of the oscillation-starting neurons as target neurons.
Step f4: cause each oscillation-starting neuron to produce a firing distribution and remain active for a seventh preset period Tj.
Step f5: during the seventh preset period Tj, a number of the source neurons are activated.
Step f6: during the seventh preset period Tj, if an oscillation-starting neuron is a direct upstream neuron of a target neuron, the unidirectional or bidirectional excitatory/inhibitory connection between them adjusts its weight through the synaptic plasticity processes; if an oscillation-starting neuron is an indirect upstream neuron of a target neuron, the unidirectional or bidirectional excitatory/inhibitory connection between that target neuron and its direct upstream neuron in the connection path between the two adjusts its weight through the synaptic plasticity processes.
Step f7: during the seventh preset period Tj, each target neuron may establish connections with a number of other target neurons, and these connections may adjust their weights through the synaptic plasticity processes.
Step f8: during the seventh preset period Tj, if there is a unidirectional or bidirectional excitatory connection between a source neuron and a target neuron, it may adjust its weight through the synaptic plasticity processes.
For example, in the information transcription process, the seventh preset period Tj is configured as 20 ms to 500 ms; 10,000 input-side attention regulation neurons 230 of any composite feature encoding module 2 are selected as oscillation-starting neurons, 10,000 concrete feature encoding neurons 210 of that module as source neurons, and 10,000 abstract feature encoding neurons 220 of that module as target neurons.
In the information transcription process, the information represented by some or all of the input connection weights of each activated source neuron is approximately coupled into some or all of the input connection weights of each target neuron; that is, the information is transcribed from the former into the latter. The coupling is "approximate" because the transcribed information components are also coupled with the firing distribution of each oscillation-starting neuron and with the influence of the connections and firing of the neurons along the connection paths between the oscillation-starting neurons and the activated source neurons and between the oscillation-starting neurons and the target neurons.
Specifically, in the information transcription process, if some activated oscillation-starting neurons are direct upstream neurons of some activated source neurons and of some target neurons, the connection weights between these oscillation-starting neurons and these source neurons are superimposed approximately proportionally onto the connection weights between these oscillation-starting neurons and these target neurons, making the latter approach the former. Conversely, if some activated oscillation-starting neurons are indirect upstream neurons of some activated source neurons or of some target neurons, the connection weights between these oscillation-starting neurons and these target neurons will ultimately also contain the influence of the connections and firing of the neurons along the connection paths between the oscillation-starting neurons and the activated source neurons and between the oscillation-starting neurons and the target neurons.
在本实施例中,所述记忆遗忘过程包括上游发放依赖记忆遗忘过程、下游发放依赖记忆遗忘过程及上下游发放依赖记忆遗忘过程。
所述上游发放依赖记忆遗忘过程为:对于某个联接,若其上游神经元持续在第四预设周期(如20分钟至24小时)内未发放,则权重绝对值减少,减少量记为DwDecay1。
所述下游发放依赖记忆遗忘过程为:对于某个联接,若其下游神经元持续在第五预设周期(如20分钟至24小时)内未发放,则权重绝对值减少,减少量记为DwDecay2。
所述上下游发放依赖记忆遗忘过程为:对于某个联接,若持续在第六预设周期(如20分钟至24小时)内其上、下游神经元未发生同步发放,则权重绝对值减少,减少量记为DwDecay3。
所述同步发放为:当所涉及联接的下游神经元激发时,并且距当前或过去的最近一次上游神经元激发的时间间隔不超过第四预设时间区间Te1,或者当所涉及联接的上游神经元激发时,并且距当前或过去的最近一次下游神经元激发的时间间隔不超过第五预设时间区间Te2。
例如,令所述第四预设时间区间Te1=30ms,所述第五预设时间区间Te2=20ms。
在所述记忆遗忘过程中,若某个联接具有指定的权重的绝对值下限,则权重的绝对值到达该下限就不再减少,或将该联接裁剪掉。
在本实施例中,所述DwDecay1、DwDecay2、DwDecay3分别与所涉及联接的权重成正比,例如DwDecay1=Kdecay1*weight,DwDecay2=Kdecay2*weight,DwDecay3=Kdecay3*weight;令Kdecay1=Kdecay2=Kdecay3=0.01,weight为联接权重。
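所述三种记忆遗忘过程共享同一衰减形式,可用如下示意函数表达(假设以"距最近一次相关发放的时长idle_time"判定是否超过预设周期,函数与参数命名均为本文为说明而假设):

```python
def decay_weight(weight, idle_time, period, kdecay=0.01, w_min=None):
    """若相关神经元(上游、下游或上下游同步)已持续idle_time未发放且达到
    预设周期period,则权重绝对值按DwDecay=Kdecay*weight等比减少;若给定
    权重绝对值下限w_min,则到达下限后不再减少(也可改为将该联接裁剪掉)。"""
    if idle_time < period:
        return weight                      # 未超过预设周期,权重不变
    new_abs = abs(weight) * (1.0 - kdecay)
    if w_min is not None and new_abs < w_min:
        new_abs = w_min                    # 到达指定下限后不再减少
    return new_abs if weight >= 0 else -new_abs
```

兴奋型与抑制型联接分别对应正、负权重,函数对两种符号均按绝对值衰减。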
在本实施例中,所述记忆自巩固过程为:某个所述神经元自激发时,该神经元的部分或全部输入联接的权重通过所述单极性下游发放依赖突触增强过程或单极性下游脉冲依赖突触增强过程进行调整,该神经元的部分或全部输出联接的权重通过所述单极性上游发放依赖突触增强过程或单极性上游脉冲依赖突触增强过程进行调整。
所述记忆自巩固过程有助于使一些所述神经元的编码得以近似保真地保持,避免遗忘。
在本实施例中,所述信息成分调整过程为:
步骤i1:选择若干个所述神经元作为起振神经元。
步骤i2:选择所述起振神经元的若干个直接下游神经元或间接下游神经元作为靶神经元。
步骤i3:令各个所述起振神经元产生发放分布,并使其在第一预设周期Tb内保持激活。
步骤i4:在第一预设周期Tb中,激活了Mb1个所述靶神经元,其中激活强度最高或发放率最大或最先开始发放的前Kb1个靶神经元记为Gb1,其余Mb1-Kb1个激活的靶神经元记为Gb2。
步骤i5:若某起振神经元为所述Gb1中的某靶神经元的直接上游神经元,则令二者之间的单向或双向联接进行一至多次突触权重增强过程,若某起振神经元为所述Gb1中的某靶神经元的间接上游神经元,则令二者间的联接通路中该靶神经元的直接上游神经元与该靶神经元之间的单向或双向联接进行一至多次突触权重增强过程。
步骤i6:若某起振神经元为所述Gb2中的某靶神经元的直接上游神经元,则令二者之间的单向或双向联接进行一至多次突触权重减弱过程,若某起振神经元为所述Gb2中的某靶神经元的间接上游神经元,则令二者间的联接通路中该靶神经元的直接上游神经元与该靶神经元之间的单向或双向联接进行一至多次突触权重减弱过程。
步骤i7:每执行步骤i1至步骤i6过程一遍记为一次迭代,执行一至多次迭代。
在步骤i5、步骤i6过程中,在执行一至多次突触权重增强过程或突触权重减弱过程后,对各所述靶神经元的部分或全部输入联接的权重规范化,也可以不作规范化。
可将若干个所述靶神经元映射至对应的标签作为所述信息成分调整过程的结果。
所述突触权重增强过程可采用所述单极性上下游发放依赖突触增强过程、或所述单极性脉冲时间依赖突触增强过程。
所述突触权重减弱过程可采用所述单极性上下游发放依赖突触减弱过程、或所述单极性脉冲时间依赖突触减弱过程。
所述突触权重增强过程和所述突触权重减弱过程还分别可以采用所述非对称双极性脉冲时间依赖突触可塑性过程、或所述对称双极性脉冲时间依赖突触可塑性过程。
当所述Kb1取较小的数值(例如1)时,只有激活强度最高或发放率最大或最先开始发放的所述靶神经元发生所述突触权重增强过程,也即一定程度上叠加了当前各所述起振神经元的发放所表征的信息成分,使该靶神经元巩固了其既有表征;而其它所述靶神经元都发生所述突触权重减弱过程,也即一定程度上减去(解耦)了当前各所述起振神经元的发放所表征的信息成分;因此,执行多次迭代,每次迭代使各所述起振神经元产生不同的发放分布,可令各个所述靶神经元的表征相互间解耦;如再进一步执行多次迭代,加强解耦,各个所述靶神经元的表征即变为表征空间中一组相对独立的基。
同理,当所述Kb1取较大的数值(例如8)时,执行多次迭代,每次迭代使各所述起振神经元产生不同的发放分布,可令多个所述靶神经元表征的信息成分在一定程度上互相叠加,如再进一步执行多次迭代,可令多个所述靶神经元的表征互相接近。
因此,调整所述Kb1即可调整各所述靶神经元所表征的信息组分。
例如,在所述信息成分调整过程中,所述第一预设周期Tb配置为100ms至500ms,选择任一个所述复合特征编码模块2的1万个所述输入侧注意力调控神经元230作为起振神经元,选择该复合特征编码模块2的1万个所述具象特征编码神经元210作为靶神经元。
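上述Kb1对信息组分的调节作用可用如下示意代码观察(简化假设:W[i, j]为起振神经元i与靶神经元j之间联接的权重,以发放率作为排序依据,规范化采用L1归一化;命名均为本文假设):

```python
import numpy as np

def component_adjust_step(W, osc_rates, tgt_rates, Kb1, dw=0.01, normalize=False):
    """步骤i4至i6的一次迭代: 发放率最高的前Kb1个激活靶神经元(Gb1)增强
    其来自激活起振神经元的输入联接,其余激活靶神经元(Gb2)减弱之;
    随后可选地对各靶神经元的输入联接权重做规范化。"""
    active_osc = np.flatnonzero(osc_rates > 0)
    active_tgt = np.flatnonzero(tgt_rates > 0)
    gb1 = active_tgt[np.argsort(tgt_rates[active_tgt])[::-1][:Kb1]]
    gb2 = np.setdiff1d(active_tgt, gb1)
    W = W.copy()
    W[np.ix_(active_osc, gb1)] += dw   # 步骤i5: 增强至Gb1的联接
    W[np.ix_(active_osc, gb2)] -= dw   # 步骤i6: 减弱至Gb2的联接
    if normalize:                      # 可选的输入联接权重规范化
        norms = np.abs(W).sum(axis=0, keepdims=True)
        W = np.where(norms > 0, W / norms, W)
    return W
```

当Kb1=1时,以不同的起振发放分布多次迭代,各靶神经元的表征会逐渐相互解耦;Kb1取较大值时则使多个靶神经元的表征互相接近。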
在本实施例中,所述强化学习过程为:当若干个所述联接收到强化信号时,在第二预设时间区间中,这些联接的权重发生改变,或这些联接在所述记忆遗忘过程的权重减少量发生改变,或这些联接在所述突触可塑性过程中的权重增加量/权重减少量发生改变;或者,
当若干个所述神经元收到强化信号时,在第三预设时间区间(例如,从接收到强化信号开始的30秒内)中,这些神经元收到正值或负值输入,或者这些神经元的部分或全部输入联接或输出联接的权重发生改变,或这些联接在所述记忆遗忘过程的权重减少量发生改变,或这些联接在所述突触可塑性过程中的权重增加量/权重减少量发生改变。
在本实施例中,所述强化信号在所述神经网络没有输入信息时为常值;在所述监督学习过程中,若所述记忆触发过程的结果为正确,则所述强化信号上升,若所述记忆触发过程的结果为错误,则所述强化信号下降。
例如,所述强化信号的常值为0,若进行所述监督学习过程,所述记忆触发过程的结果为正确,所述强化信号升至+10,若干个具象特征编码神经元210之间的双向兴奋型联接收到了所述强化信号(+10),在所述第二预设时间区间中(从接收到所述强化信号开始的30秒内),这些联接如果进行所述对称双极性脉冲时间依赖突触可塑性过程,所述DwLTP6在其原值基础上加10。
在本实施例中,所述新颖度信号调制过程为:当若干个所述神经元收到所述新颖度信号时,在第六预设时间区间(例如,从接收到所述新颖度信号开始的30秒内)中,这些神经元收到正值或负值输入,或者这些神经元的部分或全部输入联接或输出联接的权重发生改变,或这些联接在所述记忆遗忘过程的权重减少量发生改变,或这些联接在所述突触可塑性过程中的权重增加量/权重减少量发生改变。
在本实施例中,所述新颖度信号在所述神经网络没有输入信息时为常值或随时间逐渐减弱;所述新颖度信号在所述神经网络有输入信息时与所述记忆触发过程中的目标区域的各神经元的激活强度或发放率成负相关。
例如,在没有输入信息时,新颖度信号为常值+50;施加输入信息(图片或视频流),若这些输入信息没有触发具有足够相关度的记忆信息(例如,目标区域的神经元在输入当前图片时的最高激活强度仅为输入已形成记忆编码的图片时的最高激活强度的10%,则相关度仅为10%),则新颖度信号从常值+50升至+90。
当构成所述正向神经通路的若干个所述神经元收到+90的新颖度信号时,这些神经元在第六预设时间区间收到正值输入(如+40)。
当构成所述反向神经通路的若干个所述神经元收到+90的新颖度信号时,这些神经元在第六预设时间区间收到负值输入(如-40)。
当输入信息保持不变,并进行所述前向学习过程后,若目标区域的神经元的最高激活强度为90%,则新颖度信号从+90降至+10。
当构成所述正向神经通路的若干个所述神经元收到+10的新颖度信号时,这些神经元在第六预设时间区间收到负值输入(如-40)。
当构成所述反向神经通路的若干个所述神经元收到+10的新颖度信号时,这些神经元在第六预设时间区间收到正值输入(如+40)。
因此,当足够新颖的外部输入信息出现时,所述新颖度信号会使正向神经通路的神经元收到正值输入(兴奋性增强)从而更易于激活,而使反向神经通路的神经元收到负值输入(兴奋性减弱)从而更难于激活,进而使所述神经网络优先通过自下而上的正向神经通路注意、识别、学习当前的新颖的外部输入信息;反之,当外部输入信息不够新颖时,所述新颖度信号会使正向神经通路的神经元收到负值输入(兴奋性减弱)并更难于激活,而使反向神经通路的神经元收到正值输入(兴奋性增强)并更易于激活,从而使所述神经网络优先通过自上而下的反向神经通路触发既有记忆信息,或进行模式补全过程、联想过程或想象过程。
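所述新颖度信号的产生及其对正向/反向神经通路的门控作用,可按文中数值示例写成如下示意代码(相关度到新颖度的线性插值映射与阈值50均为本文为说明而做的假设):

```python
def novelty_signal(relevance, lo=10.0, hi=90.0):
    """相关度relevance(目标区域最高激活强度相对已编码记忆的比例)越高,
    新颖度越低: relevance=0.1 -> 90, relevance=0.9 -> 10, 线性插值。"""
    return hi + (lo - hi) * (relevance - 0.1) / (0.9 - 0.1)

def pathway_inputs(novelty, threshold=50.0, amp=40.0):
    """高新颖度: 正向神经通路收到正值输入、反向神经通路收到负值输入;
    低新颖度则相反。返回(正向通路输入, 反向通路输入)。"""
    if novelty > threshold:
        return +amp, -amp
    return -amp, +amp
```

由此,足够新颖的输入优先经自下而上的正向通路被注意与学习,不够新颖的输入则优先经自上而下的反向通路触发既有记忆。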
在本实施例中,所述监督学习过程为:
步骤r1:给定目标区域中各神经元的正面发放分布范围,还可以给定目标区域中各神经元的负面发放分布范围,执行步骤r2。
步骤r2:进行所述记忆触发过程,若所述目标区域中各神经元的实际发放分布不符合所述正面发放分布范围也不符合所述负面发放分布范围,则视为所述目标区域各神经元没有编码相关记忆信息,执行步骤r3;若所述目标区域中各神经元的实际发放分布符合正面发放分布范围,则视为所述记忆触发过程的结果为正确,结束本次监督学习过程;若所述目标区域中各神经元的实际发放分布符合负面发放分布范围,则视为所述记忆触发过程的结果为错误,执行步骤r3。
步骤r3:进行所述新颖度信号调制过程、强化学习过程、主动注意力过程、自动注意力过程、定向启动过程、前向学习过程、信息聚合过程、定向信息聚合过程、信息成分调整过程、信息转写过程、差动信息解耦过程中的任一种或任几种,以使所述目标区域各神经元编码相关记忆信息,执行步骤r1。
所述监督学习过程还可以为:
步骤q1:给定正面标签范围,还可以给定负面标签范围,执行步骤q2。
步骤q2:进行所述记忆触发过程,将所述目标区域中各神经元的实际发放分布映射至对应标签,若对应标签不符合所述正面标签范围也不符合所述负面标签范围,则视为所述目标区域各神经元没有编码相关记忆信息,执行步骤q3;若对应标签符合正面标签范围,则视为所述记忆触发过程的结果为正确,结束本次监督学习过程;若对应标签符合负面标签范围,则视为所述记忆触发过程的结果为错误,执行步骤q3。
步骤q3:进行所述新颖度信号调制过程、强化学习过程、主动注意力过程、自动注意力过程、定向启动过程、前向学习过程、信息聚合过程、定向信息聚合过程、信息成分调整过程、信息转写过程、差动信息解耦过程中的任一种或任几种,以使所述目标区域各神经元编码相关记忆信息,执行步骤q1。
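步骤q1至q3(以标签范围为例)的控制流程可示意如下(trigger_memory与adjust_network为本文假设的回调,分别对应所述记忆触发过程及步骤q3中任一种或任几种调整过程):

```python
def supervised_learning(trigger_memory, adjust_network,
                        positive_labels, negative_labels=frozenset(),
                        max_iters=100):
    """返回True表示记忆触发结果正确(结束本次监督学习过程),
    False表示在max_iters次迭代内未能使目标区域编码相关记忆信息。"""
    for _ in range(max_iters):
        label = trigger_memory()           # 步骤q2: 记忆触发并映射至标签
        if label in positive_labels:
            return True                    # 结果正确,结束监督学习过程
        wrong = label in negative_labels   # 结果错误(负面标签)或未编码
        adjust_network(wrong=wrong)        # 步骤q3: 调整网络后回到步骤q1
    return False
```

adjust_network可据wrong标志选择调升或调降强化信号,再进行前向学习、信息聚合等过程。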
在本实施例中,所述单极性上游发放依赖突触可塑性过程包括单极性上游发放依赖突触增强过程和单极性上游发放依赖突触减弱过程。
所述单极性上游发放依赖突触增强过程为:当所涉及联接的上游神经元的激活强度或发放率不为零时,则该联接权重的绝对值增加,该增加量记为DwLTP1u。
所述单极性上游发放依赖突触减弱过程为:当所涉及联接的上游神经元的激活强度或发放率不为零时,则该联接权重的绝对值减少,该减少量记为DwLTD1u。
所述DwLTP1u、DwLTD1u为非负值。
在本实施例中,所述单极性上游发放依赖突触可塑性过程中的DwLTP1u、DwLTD1u的取值包括下面的任一种或任几种:
所述DwLTP1u、DwLTD1u为非负值,分别与所涉及联接的上游神经元的激活强度或发放率成正比;或者,
所述DwLTP1u、DwLTD1u为非负值,分别与所涉及联接的上游神经元的激活强度或发放率、以及所涉及联接的权重成正比。
例如,令DwLTP1u=0.01*Fru1,DwLTD1u=0.01*Fru1,Fru1为上游神经元的发放率。
在本实施例中,所述单极性下游发放依赖突触可塑性过程包括单极性下游发放依赖突触增强过程和单极性下游发放依赖突触减弱过程。
所述单极性下游发放依赖突触增强过程为:当所涉及联接的下游神经元的激活强度或发放率不为零时,则该联接权重的绝对值增加,该增加量记为DwLTP1d。
所述单极性下游发放依赖突触减弱过程为:当所涉及联接的下游神经元的激活强度或发放率不为零时,则该联接权重的绝对值减少,该减少量记为DwLTD1d。
所述DwLTP1d、DwLTD1d为非负值。
在本实施例中,所述单极性下游发放依赖突触可塑性过程中的DwLTP1d、DwLTD1d的取值包括下面的任一种或任几种:
所述DwLTP1d、DwLTD1d为非负值,分别与所涉及联接的下游神经元的激活强度或发放率成正比;或者,
所述DwLTP1d、DwLTD1d为非负值,分别与所涉及联接的下游神经元的激活强度或发放率、以及所涉及联接的权重成正比。
例如,令DwLTP1d=0.01*Frd1,DwLTD1d=0.01*Frd1,Frd1为下游神经元的发放率。
在本实施例中,所述单极性上下游发放依赖突触可塑性过程包括单极性上下游发放依赖突触增强过程和单极性上下游发放依赖突触减弱过程。
所述单极性上下游发放依赖突触增强过程为:当所涉及联接的上游神经元和下游神经元的激活强度或发放率不为零时,则该联接权重的绝对值增加,该增加量记为DwLTP2。
所述单极性上下游发放依赖突触减弱过程为:当所涉及联接的上游神经元和下游神经元的激活强度或发放率不为零时,则该联接权重的绝对值减少,该减少量记为DwLTD2。
所述DwLTP2、DwLTD2为非负值。
在本实施例中,所述单极性上下游发放依赖突触可塑性过程中的DwLTP2、DwLTD2的取值包括下面的任一种或任几种:
所述DwLTP2、DwLTD2为非负值,分别与所涉及联接的上游神经元的激活强度或发放率、以及下游神经元的激活强度或发放率成正比;或者,
所述DwLTP2、DwLTD2为非负值,分别与所涉及联接的上游神经元的激活强度或发放率、下游神经元的激活强度或发放率、以及所涉及联接的权重成正比。
例如,令DwLTP2=0.01*Fru2*Frd2,DwLTD2=0.01*Fru2*Frd2,Fru2、Frd2分别为上、下游神经元的发放率。
在本实施例中,所述单极性上游脉冲依赖突触可塑性过程包括单极性上游脉冲依赖突触增强过程和单极性上游脉冲依赖突触减弱过程。
所述单极性上游脉冲依赖突触增强过程为:当所涉及联接的上游神经元激发时,则该联接权重的绝对值增加,该增加量记为DwLTP3u。
所述单极性上游脉冲依赖突触减弱过程为:当所涉及联接的上游神经元激发时,则该联接权重的绝对值减少,该减少量记为DwLTD3u。
所述DwLTP3u、DwLTD3u为非负值。
在本实施例中,所述单极性上游脉冲依赖突触可塑性过程中的DwLTP3u、DwLTD3u的取值包括下面的任一种或任几种:
所述DwLTP3u、DwLTD3u采用非负值常量;或者,
所述DwLTP3u、DwLTD3u为非负值,分别与所涉及联接的权重成正比。
例如,令DwLTP3u=0.01*weight,DwLTD3u=0.01*weight,weight为联接权重。
在本实施例中,所述单极性下游脉冲依赖突触可塑性过程包括单极性下游脉冲依赖突触增强过程和单极性下游脉冲依赖突触减弱过程。
所述单极性下游脉冲依赖突触增强过程为:当所涉及联接的下游神经元激发时,则该联接权重的绝对值增加,该增加量记为DwLTP3d。
所述单极性下游脉冲依赖突触减弱过程为:当所涉及联接的下游神经元激发时,则该联接权重的绝对值减少,该减少量记为DwLTD3d。
所述DwLTP3d、DwLTD3d为非负值。
在本实施例中,所述单极性下游脉冲依赖突触可塑性过程的DwLTP3d、DwLTD3d的取值包括下面的任一种或任几种:
所述DwLTP3d、DwLTD3d采用非负值常量;或者,
所述DwLTP3d、DwLTD3d为非负值,分别与所涉及联接的权重成正比。
例如,令DwLTP3d=0.01*weight,DwLTD3d=0.01*weight,weight为联接权重。
在本实施例中,所述单极性脉冲时间依赖突触可塑性过程包括单极性脉冲时间依赖突触增强过程和单极性脉冲时间依赖突触减弱过程。
所述单极性脉冲时间依赖突触增强过程为:当所涉及联接的下游神经元激发时,并且距当前或过去的最近一次上游神经元激发的时间间隔不超过Tg1,或者当所涉及联接的上游神经元激发时,并且距当前或过去的最近一次下游神经元激发的时间间隔不超过Tg2,则该联接权重的绝对值增加,该增加量记为DwLTP4。
所述单极性脉冲时间依赖突触减弱过程为:当所涉及联接的下游神经元激发时,并且距当前或过去的最近一次上游神经元激发的时间间隔不超过Tg3,或者当所涉及联接的上游神经元激发时,并且距当前或过去的最近一次下游神经元激发的时间间隔不超过Tg4,则该联接权重的绝对值减少,该减少量记为DwLTD4。
所述DwLTP4、DwLTD4、Tg1、Tg2、Tg3、Tg4均为非负值。例如,将Tg1、Tg2、Tg3、Tg4设为200ms。
在本实施例中,所述单极性脉冲时间依赖突触可塑性过程中的DwLTP4、DwLTD4的取值包括下面的任一种或任几种:
所述DwLTP4、DwLTD4采用非负值常量;或者,
所述DwLTP4、DwLTD4为非负值,分别与所涉及联接的权重成正比。
例如,令DwLTP4=KLTP4*weight+C1,DwLTD4=KLTD4*weight+C2;式中,KLTP4=0.01,为突触增强过程比例系数,KLTD4=0.01,为突触减弱过程比例系数,C1、C2为常量,设为0.001。
在本实施例中,所述非对称双极性脉冲时间依赖突触可塑性过程为:
当所涉及联接的下游神经元激发时,若距当前或过去的最近一次上游神经元激发的时间间隔不超过Th1,则该联接权重的绝对值增加,该增加量记为DwLTP5;若距当前或过去的最近一次上游神经元激发的时间间隔超过Th1但不超过Th2,则该联接权重的绝对值减少,该减少量记为DwLTD5;或者,
当所涉及联接的上游神经元激发时,若距当前或过去的最近一次下游神经元激发的时间间隔不超过Th3,则该联接权重的绝对值增加,该增加量记为DwLTP5;若距当前或过去的最近一次下游神经元激发的时间间隔超过Th3但不超过Th4,则该联接权重的绝对值减少,该减少量记为DwLTD5。
所述Th1、Th3、DwLTP5、DwLTD5为非负值,Th2为大于Th1的值,Th4为大于Th3的值;例如,令Th1=Th3=150ms,Th2=Th4=200ms。
在本实施例中,所述非对称双极性脉冲时间依赖突触可塑性过程中的DwLTP5、DwLTD5的取值包括下面的任一种或任几种:
所述DwLTP5、DwLTD5采用非负值常量;或者,
所述DwLTP5、DwLTD5为非负值,分别与所涉及联接的权重成正比,例如,令DwLTP5=KLTP5*weight,DwLTD5=KLTD5*weight,如令KLTP5=0.01,KLTD5=0.01;或者,
所述DwLTP5、DwLTD5为非负值,DwLTP5与下游神经元和上游神经元发放的时间间隔成负相关,当时间间隔为0时DwLTP5达到指定最大值DwLTPmax5,当时间间隔为Th1时DwLTP5为0;DwLTD5与下游神经元和上游神经元发放的时间间隔成负相关,当时间间隔为Th1时DwLTD5达到指定最大值DwLTDmax5,当时间间隔为Th2时DwLTD5为0;例如,令DwLTPmax5=0.1,DwLTDmax5=0.1,令DwLTP5=-DwLTPmax5/Th1*DeltaT1+DwLTPmax5,令DwLTD5=-DwLTDmax5/(Th2-Th1)*DeltaT1+DwLTDmax5*Th2/(Th2-Th1),DeltaT1为下游神经元和上游神经元发放的时间间隔(即下游神经元发放的时刻减去上游神经元发放的时刻)。
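按上述取值,非对称双极性STDP的单次配对权重变化量可表达为如下分段线性函数(以DeltaT1=下游神经元发放时刻减去上游神经元发放时刻为自变量;返回带符号的权重变化量,增强为正、减弱为负,该符号约定为本文假设):

```python
def asymmetric_stdp(delta_t, Th1=0.150, Th2=0.200,
                    dw_ltp_max=0.1, dw_ltd_max=0.1):
    """DeltaT1在[0, Th1]内: DwLTP5 = -DwLTPmax5/Th1*DeltaT1 + DwLTPmax5;
    在(Th1, Th2]内: DwLTD5 = -DwLTDmax5/(Th2-Th1)*DeltaT1
                            + DwLTDmax5*Th2/(Th2-Th1), 权重绝对值减少。"""
    if 0 <= delta_t <= Th1:
        return -dw_ltp_max / Th1 * delta_t + dw_ltp_max
    if Th1 < delta_t <= Th2:
        return -(-dw_ltd_max / (Th2 - Th1) * delta_t
                 + dw_ltd_max * Th2 / (Th2 - Th1))
    return 0.0
```

注意在端点Th1处,本示意取增强分支的值(为0);文中未明确规定该端点的归属。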
在本实施例中,所述对称双极性脉冲时间依赖突触可塑性过程为:
当所涉及联接的下游神经元激发时,若距当前或过去的最近一次上游神经元激发的时间间隔不超过Ti1,则该联接权重的绝对值增加,该增加量记为DwLTP6。
当所涉及联接的上游神经元激发时,若距过去的最近一次下游神经元激发的时间间隔不超过Ti2,则该联接权重的绝对值减少,该减少量记为DwLTD6。
所述Ti1、Ti2、DwLTP6、DwLTD6为非负值;例如,令Ti1=200ms,Ti2=200ms。
在本实施例中,所述对称双极性脉冲时间依赖突触可塑性过程中的DwLTP6、DwLTD6的取值包括下面的任一种或任几种:
所述DwLTP6、DwLTD6采用非负值常量;或者,
所述DwLTP6、DwLTD6为非负值,分别与所涉及联接的权重成正比;例如,令DwLTP6=KLTP6*weight,DwLTD6=KLTD6*weight;或者,
所述DwLTP6、DwLTD6为非负值,DwLTP6与下游神经元和上游神经元发放的时间间隔成负相关,具体地,当时间间隔为0时DwLTP6达到指定最大值DwLTPmax6,当时间间隔为Ti1时DwLTP6为0;DwLTD6与上游神经元和下游神经元发放的时间间隔成负相关,当时间间隔趋于0时DwLTD6达到指定最大值DwLTDmax6,当时间间隔为Ti2时DwLTD6为0;例如,令DwLTPmax6=0.1,DwLTDmax6=0.1,令DwLTP6=-DwLTPmax6/Ti1*DeltaT2+DwLTPmax6,DwLTD6=-DwLTDmax6/Ti2*DeltaT3+DwLTDmax6,DeltaT2为下游神经元和上游神经元发放的时间间隔,DeltaT3为上游神经元和下游神经元发放的时间间隔。
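对称双极性STDP可按文中的边界条件(时间间隔为0时取最大值、为Ti1/Ti2时为0)写成如下线性示意函数;其中线性形式Dw=-Dwmax/Ti*|DeltaT|+Dwmax系由这些边界条件推得的本文假设,并约定delta_t=下游神经元发放时刻减去上游神经元发放时刻,delta_t>0对应增强、delta_t<0对应减弱:

```python
def symmetric_stdp(delta_t, Ti1=0.200, Ti2=0.200,
                   dw_ltp_max=0.1, dw_ltd_max=0.1):
    """delta_t在[0, Ti1]内: 增量DwLTP6从DwLTPmax6线性降至0;
    delta_t在[-Ti2, 0)内: 减量DwLTD6从DwLTDmax6线性降至0(返回负值)。"""
    if 0 <= delta_t <= Ti1:
        return -dw_ltp_max / Ti1 * delta_t + dw_ltp_max
    if -Ti2 <= delta_t < 0:
        return -(-dw_ltd_max / Ti2 * (-delta_t) + dw_ltd_max)
    return 0.0
```

与非对称双极性STDP不同,此处增强与减弱分别由脉冲先后次序(delta_t的符号)决定,而非由时间间隔的长短决定。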
本说明书中各个实施例采用递进的方式描述,每个实施例重点说明的都是与其他实施例的不同之处,各个实施例之间相同相似部分互相参见即可。对于实施例公开的装置而言,由于其与实施例公开的方法相对应,所以描述的比较简单,相关之处参见方法部分说明即可。
对所公开的实施例的上述说明,使本领域专业技术人员能够实现或使用本申请。对这些实施例的多种修改对本领域的专业技术人员来说将是显而易见的,本文中所定义的一般原理可以在不脱离本申请的精神或范围的情况下,在其它实施例中实现。因此,本申请将不会被限制于本文所示的这些实施例,而是要符合与本文所公开的原理和新颖特点相一致的最宽的范围。

Claims (34)

  1. 一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,包括:若干个初级特征编码模块、若干个复合特征编码模块;
    各模块分别包括多个神经元;
    所述神经元包括初级特征编码神经元、具象特征编码神经元、抽象特征编码神经元;
    所述初级特征编码模块包括多个所述初级特征编码神经元,编码初级视觉特征信息;
    所述复合特征编码模块包括具象特征编码单元、抽象特征编码单元;
    所述具象特征编码单元包括多个所述具象特征编码神经元,编码具象视觉特征信息;
    所述抽象特征编码单元包括多个所述抽象特征编码神经元,编码抽象视觉特征信息;
    若干个所述初级特征编码神经元分别与其它若干个所述初级特征编码神经元形成单向或双向兴奋型/抑制型联接;
    若干个所述初级特征编码神经元分别与位于至少一个所述复合特征编码模块的若干个所述具象特征编码神经元或若干个所述抽象特征编码神经元形成单向或双向兴奋型/抑制型联接;
    位于同一个所述复合特征编码模块中的若干个所述具象特征编码神经元分别与位于同一个所述复合特征编码模块中的若干个所述抽象特征编码神经元形成单向或双向兴奋型/抑制型联接;
    若干个所述复合特征编码模块中的若干个所述具象特征编码神经元、所述抽象特征编码神经元分别与其它若干个所述复合特征编码模块的若干个所述具象特征编码神经元、所述抽象特征编码神经元形成单向或双向兴奋型/抑制型联接;
    所述神经网络通过所述神经元的发放来缓存与编码信息,通过所述神经元之间的联接来编码、存储、传递信息;
    输入图片或视频流,将每帧图片的若干个像素的若干个像素值分别乘以权重输入至若干个所述初级特征编码神经元,以使若干个所述初级特征编码神经元激活;
    对于若干个所述神经元,计算其膜电位以确定是否发放,如发放则使其各个下游神经元累计膜电位,进而确定是否发放,从而使发放在所述神经网络中传播;上游神经元与下游神经元之间的联接的权重为常值,或通过突触可塑性过程动态调整;
    所述神经网络的工作过程包括:前向记忆过程、记忆触发过程、信息聚合过程、定向信息聚合过程、信息转写过程、记忆遗忘过程、记忆自巩固过程、信息成分调整过程、强化学习过程、新颖度信号调制过程、监督学习过程;
    所述突触可塑性过程包括单极性上游发放依赖突触可塑性过程、单极性下游发放依赖突触可塑性过程、单极性上下游发放依赖突触可塑性过程、单极性上游脉冲依赖突触可塑性过程、单极性下游脉冲依赖突触可塑性过程、单极性脉冲时间依赖突触可塑性过程、非对称双极性脉冲时间依赖突触可塑性过程、对称双极性脉冲时间依赖突触可塑性过程;
    将若干个所述神经元映射至对应的标签作为输出。
  2. 根据权利要求1所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述神经网络的若干个神经元采用脉冲神经元或非脉冲神经元。
  3. 根据权利要求1所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述神经网络的若干个神经元为自激发神经元;所述自激发神经元包括有条件自激发神经元和无条件自激发神经元;
    所述有条件自激发神经元若在第一预设时间区间没有被外部输入激发,则根据概率P自激发;
    所述无条件自激发神经元在没有外部输入的情况下膜电位自动逐渐累加,当膜电位达到阈值时该无条件自激发神经元激发,并使膜电位恢复至静息电位以重新进行累加过程。
  4. 根据权利要求1所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述神经网络的若干个联接可以用卷积运算替代。
  5. 根据权利要求1所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述复合特征编码模块还可以包括输入侧注意力调控单元、输出侧注意力调控单元;
    所述神经元还包括输入侧注意力调控神经元、输出侧注意力调控神经元;
    所述输入侧注意力调控单元包括若干个所述输入侧注意力调控神经元;
    所述输出侧注意力调控单元包括若干个所述输出侧注意力调控神经元;
    若干个所述输入侧注意力调控神经元可分别接受若干个所述初级特征编码神经元的单向或双向兴奋型/抑制型联接;
    各所述输入侧注意力调控神经元分别与所在复合特征编码模块的若干个所述具象特征编码神经元或若干个所述抽象特征编码神经元形成单向或双向兴奋型联接;
    各所述输入侧注意力调控神经元分别接受来自其它复合特征编码模块的若干个所述具象特征编码神经元或若干个所述抽象特征编码神经元或若干个所述输出侧注意力调控神经元的单向或双向兴奋型联接;
    各所述输入侧注意力调控神经元还可以分别与其它若干个所述输入侧注意力调控神经元形成单向或双向兴奋型联接;
    各所述输出侧注意力调控神经元分别与位于其它复合特征编码模块的若干个所述具象特征编码神经元或若干个所述抽象特征编码神经元或若干个所述输入侧注意力调控神经元形成单向或双向兴奋型联接;
    各所述输出侧注意力调控神经元分别接受来自所在复合特征编码模块的若干个所述具象特征编码神经元或若干个所述抽象特征编码神经元的单向或双向兴奋型联接;
    各所述输出侧注意力调控神经元还可以分别与其它若干个所述输出侧注意力调控神经元形成单向或双向兴奋型联接;
    各所述输入侧注意力调控神经元可以具有一个输入侧注意力控制端;各所述输出侧注意力调控神经元可以具有一个输出侧注意力控制端;
    所述神经网络的工作过程还包括主动注意力过程、自动注意力过程;
    所述主动注意力过程为:通过所述输入侧注意力控制端处施加的注意力控制信号的强弱来调节各所述输入侧注意力调控神经元的激活强度或发放率或脉冲发放相位,进而控制进入相应具象特征编码单元、抽象特征编码单元的信息,以及调节各项信息成分的大小和比例;或者,通过所述输出侧注意力控制端处施加的注意力控制信号的强弱来调节各所述输出侧注意力调控神经元的激活强度或发放率或脉冲发放相位,进而控制从相应具象特征编码单元、抽象特征编码单元输出的信息,以及调节各项信息成分的大小和比例;
    所述自动注意力过程为:若干连接至所述输入侧注意力调控神经元的神经元激活时,使这些输入侧注意力调控神经元更易于激活,从而使相关的信息成分更易于输入至相应具象特征编码单元、抽象特征编码单元;或者,若干连接至所述输出侧注意力调控神经元的神经元激活时,使这些输出侧注意力调控神经元更易于激活,从而使相关的信息成分更易于从相应具象特征编码单元、抽象特征编码单元输出。
  6. 根据权利要求1所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述神经网络包括一至多种信息通道;
    所述神经网络的工作过程还包括信息通道自动形成过程;
    所述信息通道自动形成过程为:通过进行所述前向记忆过程、记忆触发过程、信息聚合过程、定向信息聚合过程、信息转写过程、记忆遗忘过程、记忆自巩固过程、信息成分调整过程、主动注意力过程、自动注意力过程中的任一种或任几种,调整各所述神经元间的联接关系与权重,使所述神经网络形成一至多种信息通道,每种信息通道编码一至多种信息成分;各所述信息通道间可以存在交叉;
    还可以通过预设初始联接关系、初始参数使所述神经网络形成一至多种信息通道,每种信息通道编码一至多种预设信息成分。
  7. 根据权利要求1、6任一项所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述信息通道包括初级信息通道;
    所述初级信息通道为:全部所述初级特征编码神经元及其联接构成所述初级信息通道;
    所述初级信息通道包括初级对比度信息通道、初级朝向信息通道、初级边缘信息通道、初级色块信息通道;
    所述初级对比度信息通道为:选择输入图像中若干个相邻近的像素作为中央区域像素,选择中央区域像素的周围的若干个像素作为周围区域像素,将各所述中央区域像素、所述周围区域像素的若干个像素值分别乘以权重输入至若干个所述初级特征编码神经元,即形成中央-周围拓扑结构,这些初级特征编码神经元及其联接构成一至多种初级对比度信息通道;
    在所述初级信息通道中,选择一至多种像素个数、覆盖画面空间的位置、面积的相邻的像素,将这些像素的一至多种像素值分别乘以一至多种权重,还可以构成具有一至多种感受野的若干个初级朝向信息通道、初级边缘信息通道、初级色块信息通道或它们的综合;
    所述初级对比度信息通道包括明暗对比度信息通道、暗明对比度信息通道、红绿对比度信息通道、绿红对比度信息通道、黄蓝对比度信息通道、蓝黄对比度信息通道;
    所述明暗对比度信息通道为:将各所述中央区域像素的R、G、B像素值分别乘以正权重输入至若干个所述初级特征编码神经元,将各所述周围区域像素的R、G、B像素值分别乘以负权重输入至这些初级特征编码神经元,这些初级特征编码神经元及其联接构成明暗对比度信息通道;
    所述暗明对比度信息通道为:将各所述中央区域像素的R、G、B像素值分别乘以负权重输入至若干个所述初级特征编码神经元,将各所述周围区域像素的R、G、B像素值分别乘以正权重输入至这些初级特征编码神经元,这些初级特征编码神经元及其联接构成暗明对比度信息通道;
    所述红绿对比度信息通道为:将各所述中央区域像素的R、G、B像素值分别乘以正权重、负权重、正权重输入至若干个所述初级特征编码神经元,将各所述周围区域像素的R、G、B像素值分别乘以负权重、正权重、正权重输入至这些初级特征编码神经元,这些初级特征编码神经元及其联接构成红绿对比度信息通道;
    所述绿红对比度信息通道为:将各所述中央区域像素的R、G、B像素值分别乘以负权重、正权重、正权重输入至若干个所述初级特征编码神经元,将各所述周围区域像素的R、G、B像素值分别乘以正权重、负权重、正权重输入至这些初级特征编码神经元,这些初级特征编码神经元及其联接构成绿红对比度信息通道;
    所述黄蓝对比度信息通道为:将各所述中央区域像素的R、G、B像素值分别乘以正权重、正权重、负权重输入至若干个所述初级特征编码神经元,将各所述周围区域像素的R、G、B像素值分别乘以负权重、负权重、正权重输入至这些初级特征编码神经元,这些初级特征编码神经元及其联接构成黄蓝对比度信息通道;
    所述蓝黄对比度信息通道为:将各所述中央区域像素的R、G、B像素值分别乘以负权重、负权重、正权重输入至若干个所述初级特征编码神经元,将各所述周围区域像素的R、G、B像素值分别乘以正权重、正权重、负权重输入至这些初级特征编码神经元,这些初级特征编码神经元及其联接构成蓝黄对比度信息通道。
  8. 根据权利要求7所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述初级信息通道还包括初级光流信息通道;
    所述初级光流信息通道为:对输入图像中若干个像素分别计算光流,得到光流运动的方向值与速率值,将不同方向值、速率值进行组合,并分别乘以权重输入至若干个所述初级特征编码神经元,这些初级特征编码神经元及其联接构成初级光流信息通道;
    所述初级视觉特征信息还包括光流信息。
  9. 根据权利要求1、5任一项所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述复合特征编码模块还可以包括若干个位置编码单元;
    所述神经元还包括位置编码神经元;
    所述位置编码单元包括若干个所述位置编码神经元,编码位置信息;
    每个所述位置编码单元分别对应画面空间中的若干个子空间,各子空间可以存在交集;
    每个所述位置编码神经元分别对应其所在位置编码单元对应的各子空间中位置相对应的各区域,并接受感受野为这些区域的若干所述神经元的单向或双向兴奋型联接;
    若干个所述位置编码神经元分别与其它若干个对应相同区域的所述位置编码神经元形成单向或双向兴奋型联接;
    若干个所述位置编码神经元还可以分别与位于其所在复合特征编码模块的若干个所述输入侧注意力调控神经元/输出侧注意力调控神经元/具象特征编码神经元/抽象特征编码神经元形成单向或双向兴奋型联接;
    若干个所述位置编码神经元还可以分别与位于其它复合特征编码模块的若干个所述输入侧注意力调控神经元/具象特征编码神经元/抽象特征编码神经元形成单向或双向兴奋型联接。
  10. 根据权利要求1、6任一项所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述信息通道还包括中级信息通道;
    所述中级信息通道包括中级位置信息通道;
    所述中级位置信息通道为:通过所述信息通道自动形成过程,或通过预设初始联接关系、初始参数,使若干个所述复合特征编码模块中的若干个所述输入侧注意力调控神经元/输出侧注意力调控神经元/具象特征编码神经元/抽象特征编码神经元的部分或全部输入联接的总权重中来自所述位置编码神经元以及编码了位置信息的神经元的联接总权重的比例达到或超过第一预设比例,并使来自所述位置编码神经元以及编码了位置信息的神经元的各联接权重以多种比例组合,以使这些输入侧注意力调控神经元/输出侧注意力调控神经元/具象特征编码神经元/抽象特征编码神经元分别具有一至多种感受野、分别编码一至多种位置信息,与所述位置编码神经元共同构成所述中级位置信息通道。
  11. 根据权利要求10所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述中级信息通道还包括中级视觉特征信息通道;
    所述中级视觉特征信息通道为:通过所述信息通道自动形成过程,或通过预设初始联接关系、初始参数,使若干个所述复合特征编码模块中的若干个所述输入侧注意力调控神经元/输出侧注意力调控神经元/具象特征编码神经元/抽象特征编码神经元的部分或全部输入联接的总权重中来自所述初级信息通道的神经元的联接总权重的比例达到或超过第二预设比例,并使来自所述初级信息通道、中级信息通道的对应画面空间中各个区域、位置的各神经元的各联接权重以多种比例组合,以使这些输入侧注意力调控神经元/输出侧注意力调控神经元/具象特征编码神经元/抽象特征编码神经元分别具有一至多种感受野、分别编码一至多种中级视觉特征信息,共同构成所述中级视觉特征信息通道。
  12. 根据权利要求10所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述信息通道还包括高级信息通道;
    所述高级信息通道为:通过所述信息通道自动形成过程,或通过预设初始联接关系、初始参数,使若干个所述复合特征编码模块中的若干个所述输入侧注意力调控神经元/输出侧注意力调控神经元/具象特征编码神经元/抽象特征编码神经元的部分或全部输入联接的总权重中来自所述中级信息通道的神经元的联接总权重的比例达到或超过第三预设比例,并使来自所述初级信息通道、中级信息通道、高级信息通道的对应画面空间中各个区域、位置的各神经元的各联接权重以多种比例组合,以使这些输入侧注意力调控神经元/输出侧注意力调控神经元/具象特征编码神经元/抽象特征编码神经元分别具有一至多种感受野、分别编码一至多种高级视觉特征信息,共同构成所述高级信息通道。
  13. 根据权利要求1所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述神经网络包括正向神经通路和反向神经通路;
    所述正向神经通路和反向神经通路分别为:将若干个所述初级特征编码模块/复合特征编码模块以第一预设次序级联,将由其中的若干个所述神经元以顺着所述第一预设次序级联构成的神经通路作为所述正向神经通路,将由其中的若干个所述神经元以逆着所述第一预设次序级联构成的神经通路作为所述反向神经通路;
    在每个初级特征编码模块/复合特征编码模块中,其构成正向神经通路的若干神经元可以分别与其构成反向神经通路的若干神经元形成单向或双向兴奋型/抑制型联接;
    所述定向启动过程包括正向启动过程、反向启动过程;
    所述正向启动过程为:
    步骤o1:选择正向神经通路中的若干神经元作为起振神经元;
    步骤o2:使各个所述起振神经元产生发放分布并保持激活第三预设周期Tfprime;
    步骤o3:所述反向神经通路中接受所述起振神经元的兴奋型联接的若干所述神经元收到非负值输入以更易于激活;
    步骤o4:所述反向神经通路中接受所述起振神经元的抑制型联接的若干所述神经元收到非正值输入以更难于激活;
    所述反向启动过程为:
    步骤n1:选择反向神经通路中的若干神经元作为起振神经元;
    步骤n2:使各个所述起振神经元产生发放分布并保持激活第十预设周期Tbprime;
    步骤n3:所述正向神经通路中接受所述起振神经元的兴奋型联接的若干所述神经元收到非负值输入以更易于激活;
    步骤n4:所述正向神经通路中接受所述起振神经元的抑制型联接的若干所述神经元收到非正值输入以更难于激活。
  14. 根据权利要求1所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述神经元还包括中间神经元;
    所述初级特征编码模块、复合特征编码模块分别包括若干所述中间神经元,若干所述中间神经元分别与相应模块中对应的若干神经元形成单向抑制型连接,各所述模块中的相应若干神经元分别与若干对应的所述中间神经元形成单向兴奋型连接。
  15. 根据权利要求1、5任一项所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述神经元还包括差动信息解耦神经元,所述神经网络的工作过程还包括差动信息解耦过程;
    所述差动信息解耦过程为:
    选择若干个所述输入侧注意力调控神经元/输出侧注意力调控神经元/具象特征编码神经元/抽象特征编码神经元作为靶神经元;
    选择若干个与所述靶神经元具有单向/双向兴奋型联接的神经元作为具象信息源神经元;
    选择其它若干个与所述靶神经元具有单向/双向兴奋型联接的神经元作为抽象信息源神经元;
    每个所述具象信息源神经元可以有若干个与之匹配的所述差动信息解耦神经元;每个所述具象信息源神经元与匹配的各个所述差动信息解耦神经元分别形成单向兴奋型联接;所述差动信息解耦神经元分别与所述靶神经元形成单向抑制型联接或与所述信息源神经元输入至所述靶神经元的联接形成单向抑制型突触-突触联接,使该具象信息源神经元输入至所述靶神经元的信号受到匹配的差动信息解耦神经元的抑制型调节;所述抽象信息源神经元与所述差动信息解耦神经元形成单向兴奋型联接;
    每个所述差动信息解耦神经元可以有一个解耦控制信号输入端;通过调节解耦控制信号输入端上施加的信号大小来调节信息解耦程度。
  16. 根据权利要求1所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述前向学习过程为:
    步骤a1:选择若干个所述神经元作为起振神经元;
    步骤a2:选择若干个所述神经元作为靶神经元;
    步骤a3:各个激活的所述起振神经元分别与若干个所述靶神经元的单向兴奋型联接通过所述突触可塑性过程调整权重;
    步骤a4:每个激活的所述靶神经元可分别与若干个其它所述靶神经元建立单向或双向兴奋型联接,也可以与自己建立自循环兴奋型联接,这些联接通过所述突触可塑性过程调整权重;
    在每个所述靶神经元的各输入/输出联接通过所述突触可塑性过程调整权重时,可以将部分或全部输入或输出联接的权重进行规范化,也可以不作规范化。
  17. 根据权利要求1、3任一项所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述记忆触发过程为:输入信息、或直接令所述神经网络中的若干个所述神经元激活、或由所述神经网络中的若干个所述神经元自激发、或由所述神经网络中的若干个所述神经元的既有激活状态在所述神经网络中传播,在第二预设周期内若导致目标区域中的若干个所述神经元发放,则将所述目标区域的各个发放的神经元的表征并可以连同其激活强度或发放率作为所述记忆触发过程的结果;
    所述目标区域可以为所述神经网络中的任一个子网络。
  18. 根据权利要求1所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述信息聚合过程为:
    步骤g1:选择若干个所述神经元作为起振神经元;
    步骤g2:选择若干个所述神经元作为源神经元;
    步骤g3:选择若干个所述神经元作为靶神经元;
    步骤g4:使各个所述起振神经元产生发放分布并保持激活第八预设周期Tk;
    步骤g5:在所述第八预设周期Tk中,令各个激活的起振神经元与若干个所述靶神经元之间的单向或双向兴奋型/抑制型联接通过所述突触可塑性过程调整权重;
    步骤g6:在所述第八预设周期Tk中,令各个激活的源神经元与若干个所述靶神经元之间的单向或双向兴奋型/抑制型联接通过所述突触可塑性过程调整权重;
    步骤g7:每执行步骤g1至步骤g6过程一遍记为一次迭代,执行一至多次迭代;
    将若干个所述靶神经元映射至对应的标签作为所述信息聚合过程的结果。
  19. 根据权利要求1所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述定向信息聚合过程为:
    步骤h1:选择若干个所述神经元作为起振神经元;
    步骤h2:选择若干个所述神经元作为源神经元;
    步骤h3:选择若干个所述神经元作为靶神经元;
    步骤h4:使各个所述起振神经元产生发放分布并保持激活第九预设周期Ta;
    步骤h5:在所述第九预设周期Ta中,激活了Ma1个所述源神经元,以及激活了Ma2个所述靶神经元;
    步骤h6:在所述第九预设周期Ta中,激活强度最高或发放率最大或最先开始发放的前Ka1个源神经元记为Ga1,其余Ma1-Ka1个激活的源神经元记为Ga2;
    步骤h7:在所述第九预设周期Ta中,激活强度最高或发放率最大或最先开始发放的前Ka2个靶神经元记为Ga3,其余Ma2-Ka2个激活的靶神经元记为Ga4;
    步骤h8:在所述第九预设周期Ta中,所述Ga1中各源神经元分别与所述Ga3中若干个靶神经元的单向或双向兴奋型/抑制型联接进行一至多次突触权重增强过程;
    步骤h9:在所述第九预设周期Ta中,所述Ga1中各源神经元分别与所述Ga4中若干个靶神经元的单向或双向兴奋型/抑制型联接进行一至多次突触权重减弱过程;
    步骤h10:在所述第九预设周期Ta中,所述Ga2中各源神经元分别与所述Ga3中若干个靶神经元的单向或双向兴奋型/抑制型联接可以进行或不进行一至多次突触权重减弱过程;
    步骤h11:在所述第九预设周期Ta中,所述Ga2中各源神经元分别与所述Ga4中若干个靶神经元的单向或双向兴奋型/抑制型联接可以进行或不进行一至多次突触权重增强过程;
    步骤h12:在所述第九预设周期Ta中,激活的各个起振神经元分别与所述Ga3中若干个靶神经元的单向或双向兴奋型/抑制型联接进行一至多次突触权重增强过程;
    步骤h13:在所述第九预设周期Ta中,激活的各个起振神经元分别与所述Ga4中若干个靶神经元的单向或双向兴奋型/抑制型联接进行一至多次突触权重减弱过程;
    步骤h14:每执行步骤h1至步骤h13过程一遍记为一次迭代,执行一至多次迭代;
    在所述步骤h8至步骤h13过程中,在执行一至多次突触权重增强过程或突触权重减弱过程后,可以对各所述源神经元或靶神经元的部分或全部输入或输出联接的权重规范化,也可以不作规范化;
    所述突触权重增强过程可采用所述单极性上下游发放依赖突触增强过程、或所述单极性脉冲时间依赖突触增强过程;
    所述突触权重减弱过程可采用所述单极性上下游发放依赖突触减弱过程、或所述单极性脉冲时间依赖突触减弱过程;
    所述突触权重增强过程和所述突触权重减弱过程还分别可以采用所述非对称双极性脉冲时间依赖突触可塑性过程、或所述对称双极性脉冲时间依赖突触可塑性过程;
    所述Ma1、Ma2为正整数,Ka1为不超过Ma1的正整数,Ka2为不超过Ma2的正整数。
  20. 根据权利要求1所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述信息转写过程为:
    步骤f1:选择若干个所述神经元作为起振神经元;
    步骤f2:选择所述起振神经元的若干个直接下游神经元或间接下游神经元作为源神经元;
    步骤f3:选择所述起振神经元的若干个直接下游神经元或间接下游神经元作为靶神经元;
    步骤f4:令各个所述起振神经元产生发放分布并保持激活第七预设周期Tj;
    步骤f5:在所述第七预设周期Tj中,激活了若干个所述源神经元;
    步骤f6:在所述第七预设周期Tj中,若某起振神经元为某靶神经元的直接上游神经元,则令二者之间的单向或双向兴奋型/抑制型联接通过所述突触可塑性过程调整权重,若某起振神经元为某靶神经元的间接上游神经元,则令二者间的联接通路中该靶神经元的直接上游神经元与该靶神经元之间的单向或双向兴奋型/抑制型联接通过所述突触可塑性过程调整权重;
    步骤f7:在所述第七预设周期Tj中,各所述靶神经元可分别与若干个其它所述靶神经元建立联接,并可通过所述突触可塑性过程调整权重;
    步骤f8:在所述第七预设周期Tj中,若某源神经元与某靶神经元之间具有单向或双向兴奋型联接,则可通过所述突触可塑性过程调整权重。
  21. 根据权利要求1所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述记忆遗忘过程包括上游发放依赖记忆遗忘过程、下游发放依赖记忆遗忘过程及上下游发放依赖记忆遗忘过程;
    所述上游发放依赖记忆遗忘过程为:对于某个联接,若其上游神经元持续在第四预设周期内未发放,则权重绝对值减少,减少量记为DwDecay1;
    所述下游发放依赖记忆遗忘过程为:对于某个联接,若其下游神经元持续在第五预设周期内未发放,则权重绝对值减少,减少量记为DwDecay2;
    所述上下游发放依赖记忆遗忘过程为:对于某个联接,若持续在第六预设周期内其上、下游神经元未发生同步发放,则权重绝对值减少,减少量记为DwDecay3;
    所述同步发放为:当所涉及联接的下游神经元激发时,并且距当前或过去的最近一次上游神经元激发的时间间隔不超过第四预设时间区间Te1,或者当所涉及联接的上游神经元激发时,并且距当前或过去的最近一次下游神经元激发的时间间隔不超过第五预设时间区间Te2;
    在所述记忆遗忘过程中,若某个联接具有指定的权重的绝对值下限,则权重的绝对值到达该下限就不再减少,或将该联接裁剪掉。
  22. 根据权利要求1、3任一项所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述记忆自巩固过程为:某个所述神经元自激发时,该神经元的部分或全部输入联接的权重通过所述单极性下游发放依赖突触增强过程、单极性下游脉冲依赖突触增强过程进行调整,该神经元的部分或全部输出联接的权重通过所述单极性上游发放依赖突触增强过程、单极性上游脉冲依赖突触增强过程进行调整。
  23. 根据权利要求1所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述信息成分调整过程为:
    步骤i1:选择若干个所述神经元作为起振神经元;
    步骤i2:选择所述起振神经元的若干个直接下游神经元或间接下游神经元作为靶神经元;
    步骤i3:令各个所述起振神经元产生发放分布,并使其在第一预设周期Tb内保持激活;
    步骤i4:在第一预设周期Tb中,激活了Mb1个所述靶神经元,其中激活强度最高或发放率最大或最先开始发放的前Kb1个靶神经元记为Gb1,其余Mb1-Kb1个激活的靶神经元记为Gb2;
    步骤i5:若某起振神经元为所述Gb1中的某靶神经元的直接上游神经元,则令二者之间的单向或双向联接进行一至多次突触权重增强过程,若某起振神经元为所述Gb1中的某靶神经元的间接上游神经元,则令二者间的联接通路中该靶神经元的直接上游神经元与该靶神经元之间的单向或双向联接进行一至多次突触权重增强过程;
    步骤i6:若某起振神经元为所述Gb2中的某靶神经元的直接上游神经元,则令二者之间的单向或双向联接进行一至多次突触权重减弱过程,若某起振神经元为所述Gb2中的某靶神经元的间接上游神经元,则令二者间的联接通路中该靶神经元的直接上游神经元与该靶神经元之间的单向或双向联接进行一至多次突触权重减弱过程;
    步骤i7:每执行步骤i1至步骤i6过程一遍记为一次迭代,执行一至多次迭代;
    在步骤i5、步骤i6过程中,在执行一至多次突触权重增强过程或突触权重减弱过程后,对各所述靶神经元的部分或全部输入联接的权重规范化,也可以不作规范化;
    所述突触权重增强过程可采用所述单极性上下游发放依赖突触增强过程、或所述单极性脉冲时间依赖突触增强过程;
    所述突触权重减弱过程可采用所述单极性上下游发放依赖突触减弱过程、或所述单极性脉冲时间依赖突触减弱过程;
    所述突触权重增强过程和所述突触权重减弱过程还分别可以采用所述非对称双极性脉冲时间依赖突触可塑性过程、或所述对称双极性脉冲时间依赖突触可塑性过程。
  24. 根据权利要求1所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述强化学习过程为:当若干个所述联接收到强化信号时,在第二预设时间区间中,这些联接的权重发生改变,或这些联接在所述记忆遗忘过程的权重减少量发生改变,或这些联接在所述突触可塑性过程中的权重增加量/权重减少量发生改变;或者,
    当若干个所述神经元收到强化信号时,在第三预设时间区间中,这些神经元收到正值或负值输入,或者这些神经元的部分或全部输入联接或输出联接的权重发生改变,或这些联接在所述记忆遗忘过程的权重减少量发生改变,或这些联接在所述突触可塑性过程中的权重增加量/权重减少量发生改变;
    所述强化信号在所述神经网络没有输入信息时为常值;在所述监督学习过程中,若所述记忆触发过程的结果为正确,则所述强化信号上升,若所述记忆触发过程的结果为错误,则所述强化信号下降。
  25. 根据权利要求1所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述新颖度信号调制过程为:当若干个所述神经元收到所述新颖度信号时,在第六预设时间区间中,这些神经元收到正值或负值输入,或者这些神经元的部分或全部输入联接或输出联接的权重发生改变,或这些联接在所述记忆遗忘过程的权重减少量发生改变,或这些联接在所述突触可塑性过程中的权重增加量/权重减少量发生改变;
    所述新颖度信号在所述神经网络没有输入信息时为常值或随时间逐渐减弱;所述新颖度信号在所述神经网络有输入信息时与所述记忆触发过程中的目标区域的各神经元的激活强度或发放率成负相关。
  26. 根据权利要求1、24、25任一项所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述监督学习过程为:
    步骤r1:给定目标区域中各神经元的正面发放分布范围,还可以给定目标区域中各神经元的负面发放分布范围,执行步骤r2;
    步骤r2:进行所述记忆触发过程,若所述目标区域中各神经元的实际发放分布不符合所述正面发放分布范围也不符合所述负面发放分布范围,则视为所述目标区域各神经元没有编码相关记忆信息,执行步骤r3;若所述目标区域中各神经元的实际发放分布符合正面发放分布范围,则视为所述记忆触发过程的结果为正确,结束本次监督学习过程;若所述目标区域中各神经元的实际发放分布符合负面发放分布范围,则视为所述记忆触发过程的结果为错误,执行步骤r3;
    步骤r3:进行所述新颖度信号调制过程、强化学习过程、主动注意力过程、自动注意力过程、定向启动过程、前向学习过程、信息聚合过程、定向信息聚合过程、信息成分调整过程、信息转写过程、差动信息解耦过程中的任一种或任几种,以使所述目标区域各神经元编码相关记忆信息,执行步骤r1;
    所述监督学习过程还可以为:
    步骤q1:给定正面标签范围,还可以给定负面标签范围,执行步骤q2;
    步骤q2:进行所述记忆触发过程,将所述目标区域中各神经元的实际发放分布映射至对应标签,若对应标签不符合所述正面标签范围也不符合所述负面标签范围,则视为所述目标区域各神经元没有编码相关记忆信息,执行步骤q3;若对应标签符合正面标签范围,则视为所述记忆触发过程的结果为正确,结束本次监督学习过程;若对应标签符合负面标签范围,则视为所述记忆触发过程的结果为错误,执行步骤q3;
    步骤q3:进行所述新颖度信号调制过程、强化学习过程、主动注意力过程、自动注意力过程、定向启动过程、前向学习过程、信息聚合过程、定向信息聚合过程、信息成分调整过程、信息转写过程、差动信息解耦过程中的任一种或任几种,以使所述目标区域各神经元编码相关记忆信息,执行步骤q1。
  27. 根据权利要求1所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述单极性上游发放依赖突触可塑性过程包括单极性上游发放依赖突触增强过程和单极性上游发放依赖突触减弱过程;
    所述单极性上游发放依赖突触增强过程为:当所涉及联接的上游神经元的激活强度或发放率不为零时,则该联接权重的绝对值增加,该增加量记为DwLTP1u;
    所述单极性上游发放依赖突触减弱过程为:当所涉及联接的上游神经元的激活强度或发放率不为零时,则该联接权重的绝对值减少,该减少量记为DwLTD1u;
    所述DwLTP1u、DwLTD1u为非负值。
  28. 根据权利要求1所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述单极性下游发放依赖突触可塑性过程包括单极性下游发放依赖突触增强过程和单极性下游发放依赖突触减弱过程;
    所述单极性下游发放依赖突触增强过程为:当所涉及联接的下游神经元的激活强度或发放率不为零时,则该联接权重的绝对值增加,该增加量记为DwLTP1d;
    所述单极性下游发放依赖突触减弱过程为:当所涉及联接的下游神经元的激活强度或发放率不为零时,则该联接权重的绝对值减少,该减少量记为DwLTD1d;
    所述DwLTP1d、DwLTD1d为非负值。
  29. 根据权利要求1所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述单极性上下游发放依赖突触可塑性过程包括单极性上下游发放依赖突触增强过程和单极性上下游发放依赖突触减弱过程;
    所述单极性上下游发放依赖突触增强过程为:当所涉及联接的上游神经元和下游神经元的激活强度或发放率不为零时,则该联接权重的绝对值增加,该增加量记为DwLTP2;
    所述单极性上下游发放依赖突触减弱过程为:当所涉及联接的上游神经元和下游神经元的激活强度或发放率不为零时,则该联接权重的绝对值减少,该减少量记为DwLTD2;
    所述DwLTP2、DwLTD2为非负值。
  30. 根据权利要求1所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述单极性上游脉冲依赖突触可塑性过程包括单极性上游脉冲依赖突触增强过程和单极性上游脉冲依赖突触减弱过程;
    所述单极性上游脉冲依赖突触增强过程为:当所涉及联接的上游神经元激发时,则该联接权重的绝对值增加,该增加量记为DwLTP3u;
    所述单极性上游脉冲依赖突触减弱过程为:当所涉及联接的上游神经元激发时,则该联接权重的绝对值减少,该减少量记为DwLTD3u;
    所述DwLTP3u、DwLTD3u为非负值。
  31. 根据权利要求1所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述单极性下游脉冲依赖突触可塑性过程包括单极性下游脉冲依赖突触增强过程和单极性下游脉冲依赖突触减弱过程;
    所述单极性下游脉冲依赖突触增强过程为:当所涉及联接的下游神经元激发时,则该联接权重的绝对值增加,该增加量记为DwLTP3d;
    所述单极性下游脉冲依赖突触减弱过程为:当所涉及联接的下游神经元激发时,则该联接权重的绝对值减少,该减少量记为DwLTD3d;
    所述DwLTP3d、DwLTD3d为非负值。
  32. 根据权利要求1所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述单极性脉冲时间依赖突触可塑性过程包括单极性脉冲时间依赖突触增强过程和单极性脉冲时间依赖突触减弱过程;
    所述单极性脉冲时间依赖突触增强过程为:当所涉及联接的下游神经元激发时,并且距当前或过去的最近一次上游神经元激发的时间间隔不超过Tg1,或者当所涉及联接的上游神经元激发时,并且距当前或过去的最近一次下游神经元激发的时间间隔不超过Tg2,则该联接权重的绝对值增加,该增加量记为DwLTP4;
    所述单极性脉冲时间依赖突触减弱过程为:当所涉及联接的下游神经元激发时,并且距当前或过去的最近一次上游神经元激发的时间间隔不超过Tg3,或者当所涉及联接的上游神经元激发时,并且距当前或过去的最近一次下游神经元激发的时间间隔不超过Tg4,则该联接权重的绝对值减少,该减少量记为DwLTD4;
    所述DwLTP4、DwLTD4、Tg1、Tg2、Tg3、Tg4均为非负值。
  33. 根据权利要求1所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述非对称双极性脉冲时间依赖突触可塑性过程为:
    当所涉及联接的下游神经元激发时,若距当前或过去的最近一次上游神经元激发的时间间隔不超过Th1,则该联接权重的绝对值增加,该增加量记为DwLTP5;若距当前或过去的最近一次上游神经元激发的时间间隔超过Th1但不超过Th2,则该联接权重的绝对值减少,该减少量记为DwLTD5;或者,
    当所涉及联接的上游神经元激发时,若距当前或过去的最近一次下游神经元激发的时间间隔不超过Th3,则该联接权重的绝对值增加,该增加量记为DwLTP5;若距当前或过去的最近一次下游神经元激发的时间间隔超过Th3但不超过Th4,则该联接权重的绝对值减少,该减少量记为DwLTD5;
    所述Th1、Th3、DwLTP5、DwLTD5为非负值,Th2为大于Th1的值,Th4为大于Th3的值。
  34. 根据权利要求1所述的一种具有前向学习和元学习功能的类脑视觉神经网络,其特征在于,所述对称双极性脉冲时间依赖突触可塑性过程为:
    当所涉及联接的下游神经元激发时,若距当前或过去的最近一次上游神经元激发的时间间隔不超过Ti1,则该联接权重的绝对值增加,该增加量记为DwLTP6;
    当所涉及联接的上游神经元激发时,若距过去的最近一次下游神经元激发的时间间隔不超过Ti2,则该联接权重的绝对值减少,该减少量记为DwLTD6;
    所述Ti1、Ti2、DwLTP6、DwLTD6为非负值。
PCT/CN2021/093354 2020-05-19 2021-05-12 具有前向学习和元学习功能的类脑视觉神经网络 WO2021233179A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/991,143 US20230079847A1 (en) 2020-05-19 2022-11-21 Brain-like visual neural network with forward-learning and meta-learning functions

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010424999.8A CN113688980A (zh) 2020-05-19 2020-05-19 具有前向学习和元学习功能的类脑视觉神经网络
CN202010424999.8 2020-05-19

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/991,143 Continuation-In-Part US20230079847A1 (en) 2020-05-19 2022-11-21 Brain-like visual neural network with forward-learning and meta-learning functions

Publications (1)

Publication Number Publication Date
WO2021233179A1 true WO2021233179A1 (zh) 2021-11-25

Family

ID=78576045

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/093354 WO2021233179A1 (zh) 2020-05-19 2021-05-12 具有前向学习和元学习功能的类脑视觉神经网络

Country Status (3)

Country Link
US (1) US20230079847A1 (zh)
CN (1) CN113688980A (zh)
WO (1) WO2021233179A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117332320A (zh) * 2023-11-21 2024-01-02 浙江大学 一种基于残差卷积网络的多传感器融合pmsm故障诊断方法

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116989800B (zh) * 2023-09-27 2023-12-15 安徽大学 一种基于脉冲强化学习的移动机器人视觉导航决策方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107092959A (zh) * 2017-04-07 2017-08-25 武汉大学 基于stdp非监督学习算法的硬件友好型脉冲神经网络模型
US9864933B1 (en) * 2016-08-23 2018-01-09 Jasmin Cosic Artificially intelligent systems, devices, and methods for learning and/or using visual surrounding for autonomous object operation
CN108333941A (zh) * 2018-02-13 2018-07-27 华南理工大学 一种基于混合增强智能的云机器人协作学习方法
CN110210563A (zh) * 2019-06-04 2019-09-06 北京大学 基于Spike cube SNN的图像脉冲数据时空信息学习及识别方法
CN110569886A (zh) * 2019-08-20 2019-12-13 天津大学 一种双向通道注意力元学习的图像分类方法

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10460237B2 (en) * 2015-11-30 2019-10-29 International Business Machines Corporation Neuron-centric local learning rate for artificial neural networks to increase performance, learning rate margin, and reduce power consumption
KR102565273B1 (ko) * 2016-01-26 2023-08-09 삼성전자주식회사 뉴럴 네트워크에 기초한 인식 장치 및 뉴럴 네트워크의 학습 방법
CN106650922B (zh) * 2016-09-29 2019-05-03 清华大学 硬件神经网络转换方法、计算装置、软硬件协作系统
CN108985447B (zh) * 2018-06-15 2020-10-16 华中科技大学 一种硬件脉冲神经网络系统

Also Published As

Publication number Publication date
US20230079847A1 (en) 2023-03-16
CN113688980A (zh) 2021-11-23


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21808150; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21808150; Country of ref document: EP; Kind code of ref document: A1)