EP3908982A1 - A spiking neural network for probabilistic computation - Google Patents

A spiking neural network for probabilistic computation

Info

Publication number
EP3908982A1
EP3908982A1 (Application No. EP19782857.7A)
Authority
EP
European Patent Office
Prior art keywords
neuron
neural network
synaptic
random variables
synaptic weight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP19782857.7A
Other languages
German (de)
French (fr)
Inventor
Hao-Yuan Chang
Aruna JAMMALAMADAKA
Nigel D. STEPP
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HRL Laboratories LLC
Original Assignee
HRL Laboratories LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HRL Laboratories LLC filed Critical HRL Laboratories LLC
Publication of EP3908982A1 publication Critical patent/EP3908982A1/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/047 Probabilistic or stochastic networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/088 Non-supervised learning, e.g. competitive learning

Definitions

  • the present invention relates to a system for computing conditional probabilities of random variables for structure learning and Bayesian inference.
  • Bayesian inference is a popular framework for making decisions by estimating the conditional dependencies between different variables in the data.
  • the inference task is often computationally expensive and performed using conventional digital computers.
  • One prior method for probabilistic computation uses synaptic updating to perform Bayesian inference (see Literature Reference No. 1 of the List of Incorporated Literature References), but the method is only a mathematical theory and not biologically plausible.
  • the present invention relates to a system for computing conditional probabilities of random variables for structure learning and Bayesian inference using a unique spiking neural network.
  • the system comprises a neuromorphic hardware for implementing a spiking neural network comprising a plurality of neurons to compute the conditional probability of two random variables X and Y according to the following: w * P(X) = P(X, Y), where P denotes probability and w denotes a synaptic weight between a first neuron and a connected second neuron.
  • the spiking neural network comprises an increment path for w that is proportional to a product of w * P(X ), a decrement path for w that is proportional to P(X, Y ), and delay and spike timing dependent plasticity (STDP) parameters such that w increases and decreases with the same magnitude for a single firing event.
  • STDP: spike timing dependent plasticity
  • the spiking neural network comprises a plurality of synapses, wherein all neurons, except for the B neuron, have the same threshold voltage, and wherein the synaptic weight w between the A neuron and the B neuron is the only synapse that has STDP, wherein all other synapses have a fixed weight that is designed to trigger post-synaptic neurons when pre-synaptic neurons fire.
  • a sign of the STDP is inverted such that if the A neuron spikes before the B neuron, the synaptic weight w is decreased.
  • the spiking neural network further comprises an XY neuron connected with both the A neuron and the B neuron, and wherein a delay is imposed between the XY neuron and the A neuron, which causes an increase in the synaptic weight w.
  • the B neuron spikes after the A neuron in proportion to the synaptic weight w, such that a spiking rate for the B neuron depends on a product between a spiking rate of the X neuron and the synaptic weight w.
  • the present invention also includes a computer implemented method.
  • the computer implemented method includes an act of causing a computer to execute instructions and perform the resulting operations.
  • FIG. 1 is an illustration of spike timing dependent plasticity (STDP) according to some embodiments of the present disclosure
  • FIG. 2 is an illustration of a relationship between weight change and a spike interval for STDP according to some embodiments of the present disclosure
  • FIG. 3 is an illustration of network topology for a probabilistic computation unit (PCU+) according to some embodiments of the present disclosure
  • FIG. 4 is an illustration of a neural path for increasing the synaptic weight according to some embodiments of the present disclosure
  • FIG. 5 is an illustration of a neural path for decreasing the synaptic weight according to some embodiments of the present disclosure
  • FIG. 6 is an illustration of a PCU+ with the subtractor circuit according to some embodiments of the present disclosure
  • FIG. 7 is a plot illustrating a conditional probability computed by the PCU+ according to some embodiments of the present disclosure
  • FIG. 8 is a plot illustrating conditional probabilities computed by the PCU+ with different probability settings according to some embodiments of the present disclosure
  • FIG. 9 is a table illustrating probabilities computed by the neural network according to some embodiments of the present disclosure.
  • FIG. 10A is an illustration of dependencies found by the PCU+ for ten random variables according to some embodiments of the present disclosure
  • FIG. 10B is an illustration of ground truth dependencies according to some embodiments of the present disclosure
  • FIG. 11 is an illustration of the system flow of the system for computing conditional probabilities of random variables for structure learning and Bayesian inference according to some embodiments of the present disclosure
  • FIG. 12A is an illustration of a directional excitatory synapse according to some embodiments of the present disclosure
  • FIG. 12B is an illustration of a directional inhibitory synapse according to some embodiments of the present disclosure.
  • FIG. 13 is an illustration of a full conditional probability unit according to some embodiments of the present disclosure.
  • any element in a claim that does not explicitly state “means for” performing a specified function, or “step for” performing a specific function, is not to be interpreted as a “means” or “step” clause as specified in 35 U.S.C. Section 112, Paragraph 6.
  • Various embodiments of the invention include three “principal” aspects.
  • the first is a system for computing conditional probabilities of random variables for structure learning and Bayesian inference.
  • the system is typically in the form of a computer system operating software (e.g., neuromorphic hardware) or in the form of a “hard-coded” instruction set.
  • Neuromorphic hardware is any electronic device which mimics the natural biological structures of the nervous system.
  • the implementation of neuromorphic computing on the hardware level can be realized by oxide-based memristors, spintronic memories, threshold switches, and transistors.
  • the second principal aspect is a method, typically in the form of software, implemented using the neuromorphic hardware (digital computer).
  • the digital computer system (neuromorphic hardware) is configured to perform calculations, processes, operations, and/or functions associated with a program or algorithm.
  • Bayesian inference is a popular framework for making decisions by estimating the conditional dependencies between different variables in the data.
  • the inference task is often computationally expensive and performed using conventional digital computers.
  • Described herein is a unique spiking neural network to compute the conditional probabilities of random variables for structure learning and Bayesian inference.
  • “Random variables” is a statistical term meaning that their values over time are taken from a kind of random distribution. The X and Y neurons are made to spike along with these random variables.
  • the spiking neural network has a new network topology that enables efficient computation beyond the state-of-the-art methods.
  • FIG. 1 is a depiction of Spike Timing Dependent Plasticity (STDP).
  • STDP results in strengthening connections due to correlated spikes, and weakening connections due to uncorrelated spikes. For example, in FIG. 1, if neuron A (element 100) spikes just before neuron B (element 102), then w increases a lot; if neuron A (element 100) spikes a long time before neuron B (element 102) then w increases a little bit.
  • similarly, if neuron B (element 102) spikes just before neuron A (element 100) then w decreases a lot, and if neuron B (element 102) spikes a long time before neuron A (element 100), w decreases a little bit.
  • FIG. 2 illustrates the relationship between weight change and a spike interval for STDP. More specifically, FIG. 2 depicts the relationship between the magnitude of the weight update and the time difference (Δt) between the pre-synaptic and the post-synaptic spikes.
  • A₊ and A₋ determine the speed of convergence to the final weight value (i.e., the learning rate). A higher learning rate will converge faster, but the solution is less accurate. As a non-limiting example, 0.008 was used as the learning rate to achieve a good trade-off between speed and accuracy.
  • the neural network topology described herein is capable of computing the conditional probability of two random variables (X and Y). From Bayes’ theorem, the conditional probability can be computed as
  • P(Y|X) = P(X, Y) / P(X) = w (Equation (1)), which is stored in the synaptic weight w after the computation. Rearranging Equation (1) yields the desired equilibrium of the neural network, w * P(X) = P(X, Y) (Equation (2)).
  • When the left-hand side of Equation (2) equals the right-hand side, w equals the conditional probability, P(Y|X). It is, therefore, the goal to design a neural network that has the characteristics described below.
  • Circular nodes (e.g., the X neuron (element 300)) in the network represent neurons; inhibitory synapses are drawn with a circle (element 302) at the end of the connection, and triangles (element 301) represent excitatory synapses, which are considered the default synapses.
  • All neurons have a threshold voltage of 1 volt except for neuron B (element 102).
  • the only synapse that has STDP is the one between neuron A (element 100) and B (element 102); other synapses have a fixed weight (e.g., 1.0000001) that is designed to trigger the post-synaptic neurons when the pre-synaptic neurons fire.
  • τ1 was set to be the unit delay between two neurons (0.1 milliseconds (ms)), and one volt was used as the firing threshold.
  • All of the neurons are of the integrate-and-fire type, which sums up all of the input voltages from pre-synaptic spikes.
  • the decay constant τdecay for the membrane potential is set to infinity; therefore, each neuron's membrane potential will never decay.
  • the integrate-and-fire dynamics can be implemented using Equations (3) through (5) of the detailed description (see Literature Reference No. 3), where V is the membrane potential in volts for the neuron.
  • Ij is an indicator function for identifying which of the pre-synaptic neurons have fired.
  • the membrane potentials are compared with the threshold voltages Vthreshold to determine if the neuron fires or not.
  • Vthreshold is set to a constant value (e.g., one volt) except for neuron B (element 102), whose threshold is adjusted dynamically as described later.
  • the membrane potential is reset to Vreset; Vreset is zero for all neurons.
  • the method described herein uses reverse STDP update rules and the topology shown in FIG. 3 for storing the computed conditional probability in the weight value between two neurons.
  • FIGs. 12A and 12B depict examples of a directional excitatory synapse and a directional inhibitory synapse, respectively.
  • FIG. 12A shows a directional excitatory synapse, where neuron A (element 100) causes neuron B (element 102) to spike.
  • FIG. 12B shows a directional inhibitory synapse, where neuron A (element 100) inhibits neuron B (element 102) from spiking.
  • the sign of STDP was purposely inverted such that if neuron A (element 100) spikes before neuron B (element 102) in FIG. 5, the synaptic weight w is decreased.
  • the reversal of STDP is necessary to decrease w when P(X) increases in order to keep w*P(X) constant. This is important for the network to converge to the desired equilibrium point instead of diverging away from it.
  • the sign of the STDP being inverted results in the function illustrated in FIG. 2 being flipped about the time axis.
  • FIG. 4 illustrates the neural structure for increasing the synaptic weight w.
  • a delay between the XY neuron (element 400) and neuron A (element 100) is imposed, which causes w to increase when the XY neuron (element 400) fires because the A neuron (element 100) spikes after the B neuron (element 102) and the sign of the STDP is inverted.
  • the B neuron (element 102) spikes after the A neuron (element 100) in proportion to w.
  • Vdt: the dynamic threshold
  • Equation (7) describes the update rule for Vdt; Vthreshold is set to one volt in one embodiment.
  • the dynamic threshold allows the network to calculate the product between w and P(X) more accurately; otherwise, w will suffer from imprecision because it takes the same number of spikes from the A neuron (element 100) to trigger the B neuron (element 102) when w is larger than 0.5.
  • the dynamic threshold is updated each time after neuron B (element 102) fires according to this equation: Vdt = Vthreshold - (VB - Vdt), (7) where Vdt is the voltage threshold for the B neuron (element 102), Vthreshold is the normal voltage threshold, which is set to 1 V, and VB is the membrane potential of the B neuron (element 102) after accumulating all of the input voltages at the current time step. Note that in order for Equation (7) to work as designed, it is important to avoid conflicts between the increment path and the decrement path. For this reason, a delay of four units (0.4 ms) is imposed between the X neuron (element 300) and the A neuron (element 100) to allow w to increase first then decrease in the case where both paths are triggered.
  • the neuron network can be operated in two modes: one is the training phase and the other is the measurement phase.
  • the objective of the training phase is to allow weight w to converge to the targeted P(Y ⁇ X).
  • spikes from the two random variables are fed to neuron X (element 300) and neuron Y (element 304 in FIGs. 3 and 6).
  • STDP is enabled for the synapse between neuron A (element 100) and neuron B (element 102).
  • after the training phase is complete (after a certain amount of time defined by the users to suit their speed and accuracy requirements), the network enters the measurement phase.
  • the goal of the measurement phase is to determine the dependency between the two random variables by calculating |P(Y) - P(Y|X)|.
  • P(X) is set to one in the measurement phase (i.e., it spikes continuously).
  • STDP is disabled to prevent w from being altered.
  • the resulting calculation (the deviation of the conditional probability from the intrinsic value) is encoded in the firing rate of the neuron F (element 600) in FIG. 6.
  • the conditional probability P(Y|X) is recorded by reading out the synaptic weight w.
  • FIG. 6 shows a neural network with a subtractor circuit (S1, S2, F) for calculating the absolute difference between P(Y) and P(Y|X).
  • the firing rate of neuron F (element 600) measures the likelihood that neuron X (element 300) causes neuron Y (element 304) to happen.
  • the S1 neuron (element 602) takes three inputs (from neuron B (element 102), XY (element 306), and Y (element 304)).
  • P(X) is set to one.
  • the synapse between the XY neuron (element 306) and the S1 neuron (element 602) is to compensate for the undesired artifact that the B neuron (element 102) will fire when the XY neuron (element 306) is triggered during the measurement phase, improving the accuracy of the subtraction calculation.
  • the S2 neuron (element 604) works similarly; however, it computes P(Y|X) - P(Y) instead.
  • Neuron F (element 600) outputs the sum of the two firing rates.
  • P(Y) - P(Y|X) and P(Y|X) - P(Y) are exactly opposite in sign, and because firing probabilities are non-negative, any negative values are reset to zero. Equations (9) through (11) below demonstrate how neuron F (element 600) effectively implements the following absolute function:
  • the S1 (element 602) and S2 (element 604) neurons have lower and upper membrane potential limits to prevent the voltages from running away uncontrollably; the voltage limits are set to [-5,5] volts in one embodiment.
  • the probability of firing for neuron S1 (element 602), S2 (element 604), and F (element 600) can be described using the following equations:
  • P(S1), P(S2), and P(F) are unit-less numbers between zero and one that represent the probabilities of firing at any given time step.
  • the max(·, 0) operators in Equations (9) through (11) are used to account for the fact that probabilities cannot be negative.
  • FIG. 6 depicts a neuronal network topology for computing the conditional dependency structure between two or more random variables encoded as input data streams.
  • the topology uses a subtractor circuit to compare P(X) to P(Y|X) and determine pairwise conditional structure for a network of random variables.
  • described herein is a method for learning conditional structure by determining independence and conditional dependence of temporally lagged variables using the network topology shown in FIG. 6.
  • FIG. 11 depicts the system flow for the system for computing conditional probabilities of random variables for structure learning and Bayesian inference according to embodiments of the present disclosure.
  • the spike encoder takes input sensor data and rate codes the values into frequencies of spikes in input spike trains, while the spiking decoder takes the output spike trains and decodes the frequencies of spikes back into values for the random variables and/or conditional probabilities, depending on what the user has queried.
  • the inputs X (element 300) and Y (element 304) shown in FIG. 3 come from the spike encoding (element 1100) of streaming sensor data (element 1102) on a mobile platform, such as a ground vehicle (element 1104) or aircraft (element 1106), as depicted in FIG. 11.
  • the values of the streaming sensor data will be temporally coded using rate coding.
  • Probabilistic input spikes for X are generated as a fixed-rate Poisson process.
  • each time there is a spike in the realization of X, Y spikes are generated with a fixed Bernoulli probability P.
  • to generate the results shown in FIGs. 9, 10A, and 10B, ten Bernoulli random variables were generated using this method, some of which are conditionally dependent on each other.
  • the neuromorphic hardware required for implementing the neuronal network topology (element 1108) in FIG. 11 must have specific neuron voltage (spiking) equations, synaptic weight update rules (known as STDP), and specific neuronal voltage dynamics.
  • STDP: synaptic weight update rules
  • the STDP required for the PCU+ topology in FIG. 13 is described in Literature Reference No. 5.
  • as a function of Δt = tpost - tpre, where A₊ and A₋ are the maximum and minimum gains, and τ₊ and τ₋ are timescales for weight increase and decrease (FIG. 2).
  • a unique “STDP Reversal” update rule is employed, such that the conditions for the two cases (i.e., Δt > 0, Δt < 0) are swapped.
  • This reversal may not be implementable on existing general-purpose neuromorphic hardware.
  • the voltage update rules required for the system described herein are described in Equations (3), (4), (5), and (7). Additional details regarding the neuronal topology (element 1108) implemented on neuromorphic hardware can be found in U.S. Application No. 16/294,815, entitled“A Neuronal Network Topology for Computing Conditional Probabilities,” which is hereby incorporated by reference as though fully set forth herein.
  • the neuromorphic computing hardware according to embodiments of the present disclosure replaces the usage of a digital computer CPU (central processing unit) or GPU (graphics processing unit).
  • the neuromorphic compiler (element 1110) is detailed in U.S. Application No. 16/294,886, entitled,“Programming Model for a Bayesian Neuromorphic Compiler,” which is hereby incorporated by reference as though fully set forth herein. It is a programming model which lets users query Bayesian network probabilities from the learned conditional model (element 1112) for further processing or decision making.
  • for example, P(Y|X) = 0.6 is part of the learned conditional model (element 1112).
  • FIG. 13 is an illustration of a full conditional probability unit according to some embodiments of the present disclosure.
  • IP, i.e., phasic input (element 1300)
  • IT, i.e., tonic input (element 1302)
  • Tonic input corresponds to P(X)
  • Phasic input corresponds to P(X, Y)
  • the “Tonic” input IT causes A (element 100) to spike
  • A (element 100) causes B (element 102) to spike, resulting in an increase to w (element 1304).
  • delay T1 (element 1308) will cause B (element 102) to spike before A (element 100), causing w (element 1304) to decrease.
  • A will not cause B (element 102) to spike after T1 (element 1308) due to B's (element 102) refractory period, which is the period after stimulation where a neuron is unresponsive.
  • the refractory period of a neuron is a period just after the spikes during which it cannot spike again, even if it receives input spikes from neighboring neurons. This is a known biological mechanism leveraged by the network according to embodiments of the present disclosure. This results in maintenance of a balance, dependent on C (element 1306) (i.e., on whether C (element 1306) spikes every time B (element 102) does).
  • PCU+ is able to compute the conditional probability of two random variables, as shown in FIG. 7, where the unbolded line (element 700) represents the calculated results, and the bold line (element 702) represents the ground truth.
  • the synaptic weight (w) between A and B converges to the final value in about 30 seconds of neural computation.
  • the time interval between each input data point is 40 ms, and the spiking probability for the inputs is set to 0.7. This means that the PCU+ converged after observing around 500 input spikes; this is a fairly efficient method to estimate the conditional probability.
  • the results show that w is able to track the fluctuation in P(Y|X) as the system progresses in time.
  • the weight w is recorded in a variable in a software program or a register in neuromorphic hardware.
  • Most spiking neural network simulators allow recordings of internal variables such as the synaptic weights.
  • a register read operation is performed to record the final weight value to a recording media.
  • FIG. 8 shows the results from a suite of simulations with the PCU+.
  • the probabilities P(X) and P( Y ⁇ X) are varied to validate the accuracy of the neural network across different input conditions.
  • the final synaptic weight w, which encodes P(Y|X), was plotted after each simulation. It was observed that the final weights (represented by filled circles (element 800)) align with the ground truth values (i.e., the dotted line (element 802)). Note that the PCU+ circuit is able to accurately compute the conditional probability over the entire range ([0,1]) of possible values.
  • the PCU+ with Subtractor Circuit was applied to solve structure learning problems.
  • in structure learning, the goal is to identify the causal relationships in a Bayesian network. More precisely, in experimental studies, the goal was to find the dependencies between ten different random variables, in which the current value of a variable affects the future values of other variables (i.e., Granger causality; see Literature Reference No. 4).
  • to test for Granger causality, the following pre-processing techniques were performed on the data before being fed to the PCU+.
  • the data for each random variable is a stream of 0s and 1s, recording the occurrence of errors as a time series.
  • for a pair of data streams (e.g., X and Y), the data for Y is first shifted earlier by one time step. Effectively, P(Yt+1|Xt) is being calculated instead of P(Yt|Xt). This is necessary because the cause and the effect happen sequentially in time; by shifting Y forward in the training dataset, whether the current X (i.e., Xt) has an effect on future Y (i.e., Yt+1) was tested.
  • the dependencies were identified by computing P(Yt+1|Xt) between all pairs of random variables in the system and calculating the deviation of the conditional probabilities, P(Yt+1|Xt), from their intrinsic values, P(Yt+1).
  • this deviation is encoded in the firing rate of neuron F in each PCU+.
  • a threshold is defined above which the link is flagged as significant, and a conclusion is made that X causes Y to happen.
  • the table in FIG. 9 summarizes the firing rates of neuron F (element 600) from each PCU+, which encode these deviations.
  • the results from the table in FIG. 9 are displayed visually by drawing an arrow between two random variables which have a potential causal relationship, as illustrated in FIG. 10A.
  • the neural network described herein can identify these dependencies with 100% accuracy, as determined by comparison with the ground truth dependencies shown in FIG. 10B.
  • Bayesian inference is ubiquitous in data science and decision theory.
  • the invention described herein can be used to reduce the operational cost for aircraft and vehicles through preventive maintenance and diagnostics.
  • Bayesian decision theory is a statistical approach to the problem of pattern classification.
  • Pattern classification, an example of a Bayesian inference task, has several applications, including object detection and object classification.
  • an application of the invention described herein is estimating conditional probabilities between fault messages of a ground vehicle or aircraft for fault prognostic models.
  • one or more processors of the system described herein can control one or more motor vehicle components (electrical, non-electrical, mechanical), such as a brake, a steering mechanism, suspension, or safety device (e.g., airbags, seatbelt tensioners, etc.).
  • the vehicle could be an unmanned aerial vehicle (UAV), an autonomous self-driving ground vehicle, or a human operated vehicle controlled either by a driver or by a remote operator.
  • UAV: unmanned aerial vehicle
  • the system can cause the autonomous vehicle to perform a driving operation/maneuver (such as steering or another command) in line with driving parameters in accordance with the recognized object.
  • the system described herein can cause a vehicle maneuver/operation to be performed to avoid a collision with the bicyclist or vehicle (or any other object that should be avoided while driving).
  • the system can cause the autonomous vehicle to apply a functional movement response, such as a braking operation followed by a steering operation, to redirect the vehicle away from the object, thereby avoiding a collision.
  • Other appropriate responses may include one or more of a steering operation, a throttle operation to increase speed or to decrease speed, or a decision to maintain course and speed without change.
  • the responses may be appropriate for avoiding a collision, improving travel speed, or improving efficiency.
  • control of other device types is also possible.
  • the system according to embodiments of the present disclosure also provides an additional functionality of measuring the deviation from the normal probability, which can be used to indicate potential causality in a Bayesian network. With dynamic threshold, the new network fixes a key problem in the prior art, which is that the computation becomes inaccurate when the conditional probability exceeds a threshold.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Neurology (AREA)
  • Feedback Control In General (AREA)

Abstract

Described is a system for computing conditional probabilities of random variables for Bayesian inference. The system implements a spiking neural network of neurons to compute the conditional probability of two random variables X and Y. The spiking neural network includes an increment path for a synaptic weight that is proportional to a product of the synaptic weight and a probability of X, a decrement path for the synaptic weight that is proportional to a probability of X, Y, and delay and spike timing dependent plasticity (STDP) parameters such that the synaptic weight increases and decreases with the same magnitude for a single firing event.

Description

[0001] A SPIKING NEURAL NETWORK FOR PROBABILISTIC COMPUTATION
[0002] CROSS-REFERENCE TO RELATED APPLICATIONS
[0003] The present application is a Continuation-in-Part application of U.S.
Application No. 16/294,815, filed in the United States on March 6, 2019, entitled, “A Neuronal Network Topology for Computing Conditional Probabilities,” which is a Non-Provisional Application of U.S. Provisional Application No. 62/659,085, filed in the United States on April 17, 2018, entitled, “A Neuronal Network Topology for Computing Conditional Probabilities,” the entireties of which are incorporated herein by reference.
[0004] The present application is ALSO a Non-Provisional Application of U.S.
Provisional Application No. 62/790,296, filed in the United States on January 9, 2019, entitled, “A Spiking Neural Network for Probabilistic Computation,” the entirety of which is incorporated herein by reference.
BACKGROUND OF INVENTION
[0005] (1) Field of Invention
[0006] The present invention relates to a system for computing conditional
probabilities of random variables for structure learning and Bayesian inference and, more particularly, to a system for computing conditional probabilities of random variables for structure learning and Bayesian inference using a unique spiking neural network.
[0007] (2) Description of Related Art
[0008] In machine learning, Bayesian inference is a popular framework for making decisions by estimating the conditional dependencies between different variables in the data. The inference task is often computationally expensive and performed using conventional digital computers.
[0009] One prior method for probabilistic computation uses synaptic updating to perform Bayesian inference (see Literature Reference No. 1 of the List of Incorporated Literature References), but the method is only a mathematical theory and not biologically plausible.
[00010] Thus, a continuing need exists for an approach that is biologically plausible and, therefore, easy to implement in neuromorphic hardware.
[00011 ] SUMMARY OF INVENTION
[00012] The present invention relates to a system for computing conditional
probabilities of random variables for structure learning and Bayesian inference, and more particularly, to a system for computing conditional probabilities of random variables for structure learning and Bayesian inference using a unique spiking neural network. The system comprises a neuromorphic hardware for implementing a spiking neural network comprising a plurality of neurons to compute the conditional probability of two random variables X and Y according to the following:
w * P(X ) = P(X, Y),
where P denotes probability, and w denotes a synaptic weight between a first neuron and a connected second neuron. An X neuron and a Y neuron are configured to spike along with the random variables X and Y. The spiking neural network comprises an increment path for w that is proportional to a product of w * P(X ), a decrement path for w that is proportional to P(X, Y ), and delay and spike timing dependent plasticity (STDP) parameters such that w increases and decreases with the same magnitude for a single firing event.
[00013] In another aspect, the spiking neural network comprises a plurality of
synapses, wherein all neurons, except for the B neuron, have the same threshold voltage, and wherein the synaptic weight w between the A neuron and the B neuron is the only synapse that has STDP, wherein all other synapses have a fixed weight that is designed to trigger post-synaptic neurons when pre-synaptic neurons fire.
[00014] In another aspect, a sign of the STDP is inverted such that if the A neuron spikes before the B neuron, the synaptic weight w is decreased.
[00015] In another aspect, the spiking neural network further comprises an XY neuron connected with both the A neuron and the B neuron, and wherein a delay is imposed between the XY neuron and the A neuron, which causes an increase in the synaptic weight w.
[00016] In another aspect, wherein when the X neuron fires, the B neuron spikes after the A neuron in proportion to the synaptic weight w, such that a spiking rate for the B neuron depends on a product between a spiking rate of the X neuron and the synaptic weight w.
[00017] In another aspect, the spiking neural network implemented by the
neuromorphic hardware further comprises a subtractor circuit, and the subtractor circuit is used to compare the random variables X and Y.
[00018] Finally, the present invention also includes a computer implemented method.
The computer implemented method includes an act of causing a computer to execute instructions and perform the resulting operations.
[00019] BRIEF DESCRIPTION OF THE DRAWINGS
[00020] The objects, features and advantages of the present invention will be apparent from the following detailed descriptions of the various aspects of the invention in conjunction with reference to the following drawings, where:
[00021] FIG. 1 is an illustration of spike timing dependent plasticity (STDP) according to some embodiments of the present disclosure;
[00022] FIG. 2 is an illustration of a relationship between weight change and a spike interval for STDP according to some embodiments of the present disclosure;
[00023] FIG. 3 is an illustration of network topology for a probabilistic computation unit (PCU+) according to some embodiments of the present disclosure;
[00024] FIG. 4 is an illustration of a neural path for increasing the synaptic weight according to some embodiments of the present disclosure;
[00025] FIG. 5 is an illustration of a neural path for decreasing the synaptic weight according to some embodiments of the present disclosure;
[00026] FIG. 6 is an illustration of a PCU+ with the subtractor circuit according to some embodiments of the present disclosure;
[00027] FIG. 7 is a plot illustrating a conditional probability computed by the PCU+ according to some embodiments of the present disclosure;
[00028] FIG. 8 is a plot illustrating conditional probabilities computed by the PCU+ with different probability settings according to some embodiments of the present disclosure;
[00029] FIG. 9 is a table illustrating probabilities computed by the neural network according to some embodiments of the present disclosure;
[00030] FIG. 10A is an illustration of dependencies found by the PCU+ for ten
random variables according to some embodiments of the present disclosure;
[00031] FIG. 10B is an illustration of ground truth dependencies according to some embodiments of the present disclosure;
[00032] FIG. 11 is an illustration of the system flow of the system for computing conditional probabilities of random variables for structure learning and Bayesian inference according to some embodiments of the present disclosure;
[00033] FIG. 12A is an illustration of a directional excitatory synapse according to some embodiments of the present disclosure;
[00034] FIG. 12B is an illustration of a directional inhibitory synapse according to some embodiments of the present disclosure; and
[00035] FIG. 13 is an illustration of a full conditional probability unit according to some embodiments of the present disclosure.
[00036] DETAILED DESCRIPTION
[00037] The present invention relates to a system for computing conditional
probabilities of random variables for structure learning and Bayesian inference and, more particularly, to a system for computing conditional probabilities of random variables for structure learning and Bayesian inference using a unique spiking neural network. The following description is presented to enable one of ordinary skill in the art to make and use the invention and to incorporate it in the context of particular applications. Various modifications, as well as a variety of uses in different applications will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to a wide range of aspects. Thus, the present invention is not intended to be limited to the aspects presented, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
[00038] In the following detailed description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention.
However, it will be apparent to one skilled in the art that the present invention may be practiced without necessarily being limited to these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention.
[00039] The reader’s attention is directed to all papers and documents which are filed concurrently with this specification and which are open to public inspection with this specification, and the contents of all such papers and documents are incorporated herein by reference. All the features disclosed in this specification, (including any accompanying claims, abstract, and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.
[00040] Furthermore, any element in a claim that does not explicitly state“means for” performing a specified function, or“step for” performing a specific function, is not to be interpreted as a“means” or“step” clause as specified in 35 U.S.C.
Section 112, Paragraph 6. In particular, the use of “step of” or “act of” in the claims herein is not intended to invoke the provisions of 35 U.S.C. 112, Paragraph 6.
[00041] Before describing the invention in detail, first a list of cited references is
provided. Next, a description of the various principal aspects of the present invention is provided. Finally, specific details of various embodiment of the present invention are provided to give an understanding of the specific aspects.
[00042] (1) List of Incorporated Literature References
[00043] The following references are cited and incorporated throughout this
application. For clarity and convenience, the references are listed herein as a central resource for the reader. The following references are hereby incorporated by reference as though fully set forth herein. The references are cited in the application by referring to the corresponding literature reference number, as follows:
1. J. Bill, L. Buesing, S. Habenschuss, B. Nessler, W. Maass, and R.
Legenstein. Distributed Bayesian Computation and Self-Organized Learning in Sheets of Spiking Neurons with Local Lateral Inhibition. PloS one, 10(8):e0134356, 2015.
2. N. Stepp, A. Jammalamadaka. A Dynamical Systems Approach to Neuromorphic Computation of Conditional Probabilities. 1-4. Proceedings of the International Conference on Neuromorphic Systems. ICONS’ 18, 2018.
3. Ş. Mihalaş, E. Niebur. A Generalized Linear Integrate-and-Fire Neural Model Produces Diverse Spiking Behaviors. Neural Computation, 21(3):704-718, 2009.
4. C.W. J. Granger. Investigating Causal Relations by Econometric Models and Cross-Spectral Methods. Econometrica 37, 424-438, 1969.
5. Song, S., Miller K.D., and L.F. Abbott. Competitive Hebbian Learning through Spike-Timing Dependent Synaptic Plasticity. Nat Neurosci 3(9): 919-926, 2000.
[00044] (2) Principal Aspects
[00045] Various embodiments of the invention include three “principal” aspects. The first is a system for computing conditional probabilities of random variables for structure learning and Bayesian inference. The system is typically in the form of a computer system operating software (e.g., neuromorphic hardware) or in the form of a “hard-coded” instruction set. This system may be incorporated into a wide variety of devices that provide different functionalities. Neuromorphic hardware is any electronic device which mimics the natural biological structures of the nervous system. The implementation of neuromorphic computing on the hardware level can be realized by oxide-based memristors, spintronic memories, threshold switches, and transistors. The second principal aspect is a method, typically in the form of software, implemented using the neuromorphic hardware (digital computer).
[00046] The digital computer system (neuromorphic hardware) is configured to
perform calculations, processes, operations, and/or functions associated with a program or algorithm. In one aspect, certain processes and steps discussed herein are realized as a series of instructions (e.g., software program) that reside within computer readable memory units and are executed by a digital computer. When executed, the instructions cause the digital computer to perform specific actions and exhibit specific behavior, such as described herein.
[00047] (3) Specific Details of Various Embodiments
[00048] In machine learning, Bayesian inference is a popular framework for making decisions by estimating the conditional dependencies between different variables in the data. The inference task is often computationally expensive and performed using conventional digital computers. Described herein is a unique spiking neural network to compute the conditional probabilities of random variables for structure learning and Bayesian inference. “Random variables” is a statistical term meaning that their values over time are taken from a kind of random distribution. The X and Y neurons are made to spike along with these random variables. In addition to drastically reducing the number of required neurons in the network by inverting the spike-timing-dependent plasticity (STDP) parameters and improving the accuracy by employing a dynamic threshold, the spiking neural network has a new network topology that enables efficient computation beyond the state-of-the-art methods.
[00049] The advantages of using a spiking neural network to calculate conditional probabilities are two-fold: energy-efficiency and parallelism. Spiking neural networks are often more energy-efficient than conventional computers due to the elimination of high frequency clocking operations in digital circuits. Neural computations are event-driven, which means that they consume energy only when necessary or when new information is available. Moreover, neural networks are highly parallel. Data is processed simultaneously through multiple neural pathways. These two characteristics are utilized in the system described herein to devise a power-efficient, massively-parallel machine for tackling the Bayesian inference task, as will be described in further detail below.
[00050] FIG. 1 is a depiction of Spike Timing Dependent Plasticity (STDP). STDP results in strengthening connections due to correlated spikes, and weakening connections due to uncorrelated spikes. For example, in FIG. 1, if neuron A (element 100) spikes just before neuron B (element 102), then w increases a lot; if neuron A (element 100) spikes a long time before neuron B (element 102) then w increases a little bit. Similarly, if neuron B (element 102) spikes just before neuron A (element 100) then w decreases a lot, and if neuron B (element 102) spikes a long time before neuron A (element 100), w decreases a little bit.
[00051] Neuron B (element 102) has a voltage threshold which, if exceeded, causes it to spike. Spikes from neuron A (element 100) weighted by w accumulate voltage in neuron B (element 102) until this threshold is reached, at which point neuron B (element 102) spikes, w is incremented, and the voltage level is reset to zero. This means that if w = 0, ẇ = 0, and if w increases, ẇ also increases (i.e., there is a monotonic mapping between w and ẇ). Since the likelihood of neuron B (element 102) spiking (and w being incremented) is directly proportional to the value of w, write ẇ = w for the instantaneous change in w at time t. The goal is to use these dynamics to create a conditional probabilistic computation unit, which can then be used to model chains of conditional variables corresponding to larger Bayesian networks.
[00052] FIG. 2 illustrates the relationship between weight change and spike interval for STDP. More specifically, FIG. 2 depicts the relationship between the magnitude of the weight update and the time difference (Δt) between the pre-synaptic and the post-synaptic spikes. In the implementation according to embodiments of the present disclosure, A₊ and A₋ determine the speed of convergence to the final weight value (i.e., the learning rate). A higher learning rate will converge faster, but the solution is less accurate. As a non-limiting example, 0.008 was used as the learning rate to achieve a good trade-off between speed and accuracy.
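For illustration purposes only, the weight-update relationship of FIG. 2 can be sketched in a few lines of Python. The exponential form and the names a_plus, a_minus, tau_plus, and tau_minus follow the standard STDP model of Literature Reference No. 5; the specific time constants below are assumed example values rather than requirements of the present disclosure (only the 0.008 learning rate is taken from the text above).

```python
import math

def stdp_delta_w(delta_t, a_plus=0.008, a_minus=0.008,
                 tau_plus=0.02, tau_minus=0.02):
    """Weight change for one pre/post spike pair separated by delta_t seconds.

    delta_t = t_post - t_pre.  A positive delta_t (pre-synaptic spike before
    post-synaptic spike) strengthens the synapse; a negative delta_t weakens
    it, with the magnitude decaying exponentially as the two spikes move
    further apart, as sketched in FIG. 2.
    """
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau_plus)
    if delta_t < 0:
        return -a_minus * math.exp(delta_t / tau_minus)
    return 0.0

# A pair 1 ms apart changes w far more than a pair 50 ms apart, mirroring
# the "a lot" versus "a little bit" description of FIG. 1.
print(stdp_delta_w(0.001), stdp_delta_w(0.050))
```

The STDP Reversal of Section 3.1.2 would simply swap the two branches (or, equivalently, negate delta_t) before applying such a function.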
[00053] (3.1) Neural Network Topology of the Probabilistic Computation Unit
(PCU+)
[00054] (3.1.1) Conditional Probability Computation
[00055] The neural network topology described herein is capable of computing the conditional probability of two random variables (X and Y). From Bayes’ theorem, the conditional probability can be computed as
P(Y|X) = P(X, Y) / P(X) = w (1)
which is stored in the synaptic weight w between neuron A (element 100) and neuron B (element 102) after the computation. Rearranging Equation (1), an equation is obtained that describes the desired equilibrium of the neural network as follows:
w * P(X) = P(X, Y) (2)
When the left-hand side of Equation (2) equals the right-hand side, w equals the conditional probability, P(Y|X). It is, therefore, the goal to design a neural network that has the following characteristics:
1) An increment path for w that is proportional to the product w*P(X)
2) A decrement path for w that is proportional to P(X,Y)
3) Correct delay and STDP parameters such that w increases and decreases with the same magnitude for a single firing event.
[00056] To realize the functions listed above, the PCU+ architecture shown in FIG. 3 was created. Circular nodes (e.g., X neuron (element 300)) in the network represent neurons; inhibitory synapses are drawn with a circle (element 302) at the end of the connection, and triangles (element 301) represent excitatory synapses, which are considered the default synapses. All neurons have a threshold voltage of 1 volt except for neuron B (element 102). The only synapse that has STDP is the one between neuron A (element 100) and B (element 102); other synapses have a fixed weight (e.g., 1.0000001) that is designed to trigger the post-synaptic neurons when the pre-synaptic neurons fire. Although the network will work with a wide range of parameters, for illustration purposes only, a symmetric STDP function was used, τ1 was set to be the unit delay between two neurons (0.1 milliseconds (ms)), and one volt was used as the firing threshold.
All of the neurons are of the integrate-and-fire type, which sums up all of the input voltages from pre-synaptic spikes. The decay constant τdecay for the membrane potential is set to infinity; therefore, each neuron's membrane potential will never decay. The integrate-and-fire dynamics can be implemented using the following equations (see Literature Reference No. 3):
V(t + 1) = V(t) + Σj wj Ij(t) (3)
and
V(t) → Vreset when V(t) ≥ Vthreshold, (4)
where V is the membrane potential in volts for the neuron. Ij is an indicator function for identifying which of the pre-synaptic neurons have fired:
Ij(t) = 1 if pre-synaptic neuron j fired at time t, and Ij(t) = 0 otherwise. (5)
After updating the membrane potentials for each of the neurons, the membrane potentials are compared with the threshold voltages Vthreshold to determine if the neuron fires or not. Vthreshold is set to a constant value (e.g., one volt) except for neuron B (element 102), whose threshold is adjusted dynamically as described later. After the firing event, the membrane potential is reset to Vreset; Vreset is zero for all neurons. The method described herein uses reverse STDP update rules and the topology shown in FIG. 3 for storing the computed conditional probability in the weight value between two neurons.
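Purely as an illustrative sketch, the no-decay integrate-and-fire behavior of Equations (3) through (5) can be expressed as a single update function; the list-based data layout and the function name below are assumptions made for this example and are not part of the present disclosure.

```python
def integrate_and_fire_step(v, weights, fired, v_threshold=1.0, v_reset=0.0):
    """One discrete time step of a no-decay integrate-and-fire neuron.

    v           : current membrane potential in volts
    weights     : synaptic weights w_j from each pre-synaptic neuron
    fired       : indicator I_j (True/False) for each pre-synaptic neuron
    v_threshold : firing threshold (one volt for every neuron except B)
    Returns (updated potential, whether the neuron fired this step).
    """
    # Equation (3): accumulate the weighted input voltages; there is no decay
    # term because the decay constant is set to infinity.
    v = v + sum(w for w, f in zip(weights, fired) if f)
    # Equations (4)-(5): compare with the threshold, then reset after a spike.
    if v >= v_threshold:
        return v_reset, True
    return v, False

# With a single input synapse of weight 0.4, three pre-synaptic spikes are
# needed before the 1 V threshold is crossed.
v, n_spikes = 0.0, 0
for _ in range(3):
    v, spiked = integrate_and_fire_step(v, [0.4], [True])
    n_spikes += int(spiked)
print(v, n_spikes)  # 0.0 1
```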
[00057] FIGs. 12A and 12B depict examples of a directional excitatory synapse and a directional inhibitory synapse, respectively. FIG. 12A shows a directional excitatory synapse, where neuron A (element 100) causes neuron B (element 102) to spike. FIG. 12B shows a directional inhibitory synapse, where neuron A (element 100) inhibits neuron B (element 102) from spiking.
[00058] (3.1.2) STDP Reversal
[00059] In one embodiment, the sign of STDP was purposely inverted such that if neuron A (element 100) spikes before neuron B (element 102) in FIG. 5, the synaptic weight w is decreased. The reversal of STDP is necessary to decrease w when P(X) increases in order to keep w*P(X) constant. This is important for the network to converge to the desired equilibrium point instead of diverging away from it. The sign of the STDP being inverted results in the function illustrated in FIG. 2 being flipped about the time axis.
[00060] (3.1.3) Increment Path for w
[00061] FIG. 4 illustrates the neural structure for increasing the synaptic weight w. A delay between the XY neuron (element 400) and neuron A (element 100) is imposed, which causes w to increase when the XY neuron (element 400) fires because the A neuron (element 100) spikes after the B neuron (element 102) and the sign of the STDP is inverted.
[00062] (3.1.4) Decrement Path for w
[00063] To decrease the weight, w, connect the X neuron (element 300) to neuron A
(element 100), as shown in FIG. 5. In this case, when the X neuron (element 300) fires, the B neuron (element 102) spikes after the A neuron (element 100) in proportion to w. The spiking rate for the B neuron (element 102) depends on the product between the spiking rate of the X neuron (element 300) and w. That is,
speed of w decreasing = P(X) * w. (6)
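Abstracting away individual spikes, the opposing increment path (Section 3.1.3) and decrement path above can be read as a fixed-point iteration on w. The following sketch is a rate-level caricature made only to illustrate the equilibrium of Equation (2), not an implementation of the spiking circuit; it assumes an increment rate proportional to P(X, Y) and a decrement rate proportional to w * P(X) per Equation (6), with the learning rate standing in for the STDP gain.

```python
def converge_w(p_x, p_y_given_x, lr=0.008, steps=20000, w=0.5):
    """Rate-level caricature of the PCU+ weight dynamics.

    Each step nudges w up in proportion to P(X, Y) (increment path) and down
    in proportion to w * P(X) (decrement path, Equation (6)).  The fixed
    point satisfies w * P(X) = P(X, Y), i.e. w = P(Y|X) (Equation (2)).
    """
    p_xy = p_x * p_y_given_x
    for _ in range(steps):
        w += lr * (p_xy - w * p_x)
    return w

print(round(converge_w(p_x=0.7, p_y_given_x=0.3), 3))  # 0.3
```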
[00064] (3.1.5) Dynamic Threshold
[00065] In order for Equation (6) to hold true, the firing threshold for neuron B
(element 102) needs to be adjusted dynamically according to the sum of the input voltages. This is named the dynamic threshold Vdt, which only affects neuron B (element 102). More specifically, the threshold is reduced by the same amount as the overshoot of membrane potential from the previous firing event. Equation (7) describes the update rule for Vdt; Vthreshold is set to one volt in one embodiment. The dynamic threshold allows the network to calculate the product between w and P(X) more accurately; otherwise, w will suffer from imprecision because it takes the same number of spikes from the A neuron (element 100) to trigger the B neuron (element 102) when w is larger than 0.5. The dynamic threshold is updated each time after neuron B (element 102) fires according to this equation:
Vdt = Vthreshold - (VB - Vdt), (7)
where Vdt is the voltage threshold for the B neuron (element 102), Vthreshold is the normal voltage threshold, which is set to 1 V, and VB is the membrane potential of the B neuron (element 102) after accumulating all of the input voltages at the current time step. Note that in order for Equation (7) to work as designed, it is important to avoid conflicts between the increment path and the decrement path. For this reason, a delay of four units (0.4 ms) is imposed between the X neuron (element 300) and the A neuron (element 100) to allow w to increase first then decrease in the case where both paths are triggered.
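The bookkeeping of Equation (7) can be summarized by the small helper below; this is only an illustrative sketch (the variable names are chosen for the example), and it omits the 0.4 ms ordering delay discussed above.

```python
def update_dynamic_threshold(v_b, v_dt, v_threshold=1.0):
    """Equation (7): after neuron B fires, lower its threshold by the overshoot.

    v_b  : membrane potential of B after accumulating inputs this time step
    v_dt : current dynamic threshold of B
    The overshoot (v_b - v_dt) is credited back so that the product w * P(X)
    is tracked accurately even when w exceeds 0.5.
    """
    return v_threshold - (v_b - v_dt)

# B fires at 1.3 V against a 1.0 V threshold; the 0.3 V overshoot is credited
# back, so B only needs to reach 0.7 V at the next firing event.
print(update_dynamic_threshold(v_b=1.3, v_dt=1.0))  # 0.7
```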
[00066] (3.1.6) Modes of Operation
[00067] The neuron network can be operated in two modes: one is the training phase and the other is the measurement phase. The objective of the training phase is to allow weight w to converge to the targeted P(Y|X). During this phase, spikes from the two random variables are fed to neuron X (element 300) and neuron Y (element 304 in FIGs. 3 and 6). STDP is enabled for the synapse between neuron A (element 100) and neuron B (element 102). After the training phase is complete (after a certain amount of time defined by the users that suit their speed and accuracy requirements), the network enters the measurement phase. The goal of the measurement phase is to determine the dependency between the two random variables by calculating |P(Y) - P(Y|X)|. P(X) is set to one in the measurement phase (i.e., it spikes continuously). STDP is disabled to prevent w from being altered. The resulting calculation (the deviation of the conditional probability from the intrinsic value) is encoded in the firing rate of the neuron F (element 600) in FIG. 6. The conditional probability P(Y|X) is recorded by reading out the synaptic weight w.
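The two phases can be outlined as follows. This is an event-level caricature written for illustration, not the spiking circuit itself: it assumes the increment path adds a fixed quantum on every joint X-and-Y spike and the decrement path removes lr * w on every X spike (Sections 3.1.3 and 3.1.4, Equation (6)), and the returned deviation merely stands in for the firing rate of neuron F.

```python
import random

def train_and_measure(x_spikes, y_spikes, lr=0.008):
    """Caricature of the PCU+ training and measurement phases.

    Training phase (STDP on): each joint X-and-Y spike raises w; each X spike
    lowers w in proportion to w, so w drifts toward P(Y|X).
    Measurement phase (STDP off): w is frozen and compared against the
    intrinsic probability P(Y); the deviation plays the role of neuron F.
    """
    w = 0.5
    for x, y in zip(x_spikes, y_spikes):        # training phase
        if x and y:
            w += lr                             # increment path
        if x:
            w -= lr * w                         # decrement path, Equation (6)
    p_y = sum(y_spikes) / len(y_spikes)         # measurement phase
    return w, abs(p_y - w)

random.seed(0)
x = [random.random() < 0.7 for _ in range(20000)]
y = [xi and random.random() < 0.3 for xi in x]  # Y spikes only alongside X here
w, deviation = train_and_measure(x, y)
print(round(w, 2), round(deviation, 2))         # w settles near 0.3; deviation well above zero
```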
[00068] (3.1.7) Subtractor Circuit
[00069] FIG. 6 shows a neural network with a subtractor circuit (S1, S2, F) for calculating the absolute difference between P(Y) and P(Y|X). The firing rate of neuron F (element 600) measures the likelihood that neuron X (element 300) causes neuron Y (element 304) to happen. The S1 neuron (element 602) takes three inputs (from neuron B (element 102), XY (element 306), and Y (element 304)). During the measurement phase of the operation, P(X) is set to one. The synapse between the B neuron (element 102) and the S1 neuron (element 602) is inhibitory, meaning that the membrane potential is reduced if the B neuron (element 102) fires. On the other hand, the synapse between the Y neuron (element 304) and the S1 neuron (element 602) is excitatory. As a result, the firing rate of the S1 neuron (element 602) computes P(Y) - P(Y|X). The synapse between the XY neuron (element 306) and the S1 neuron (element 602) is to compensate for the undesired artifact that the B neuron (element 102) will fire when the XY neuron (element 306) is triggered during the measurement phase, improving the accuracy of the subtraction calculation. The S2 neuron (element 604) works similarly; however, it computes P(Y|X) - P(Y) instead. Neuron F (element 600) outputs the sum of the two firing rates. P(Y) - P(Y|X) and P(Y|X) - P(Y) are exactly opposite in sign, and because firing probabilities are non-negative, any negative values are reset to zero. Equations (9) through (11) below demonstrate how neuron F (element 600) effectively implements the following absolute function:
Likelihood{X ® Y} = | P(Y - P(Y \X) \
= firing rate of F * resolution (8) where the firing rate of F is measured in Hertz (Hz), and resolution is measured in seconds. Resolution is the time interval between two data points in the input data stream. Likelihood{X ® Y} is a unit-less number between 0 and 1, which is compared with a threshold to determine the dependency between X and Y. [00070] The SI (element 602), the S2 (element 604), and the F (element 600) neurons are also integrate-and-fire type neurons with the same equations as described in Equations (3) through (5). All of their thresholds are set to one volt, and all the connection weights are set to 1.0000001. In addition, the SI (element 602) and S2 (element 604) neurons have a lower and upper membrane potential limits to prevent the voltages from running away uncontrollably; the voltage limits are set to [-5,5] volts in one embodiment. The probability of firing for neuron SI (element 602), S2 (element 604), and F (element 600) can be described using the following equations:
P(S1) = max(P(Y) - P(Y|X), 0)     (9)
P(S2) = max(P(Y|X) - P(Y), 0)     (10)
P(F) = P(S1) + P(S2)              (11)
where P(S1), P(S2), and P(F) are unit-less numbers between zero and one that represent the probabilities of firing at any given time step. The max(·, 0) operators in Equations (9) through (11) account for the fact that probabilities cannot be negative.
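By way of non-limiting illustration, the following sketch evaluates the rectified subtraction numerically, assuming the reconstructed forms of Equations (9) through (11) above; it is not a spiking implementation.

```python
# Numerical sketch of the subtractor circuit, assuming
# P(S1) = max(P(Y) - P(Y|X), 0), P(S2) = max(P(Y|X) - P(Y), 0),
# and P(F) = P(S1) + P(S2).
def subtractor(p_y, p_y_given_x):
    p_s1 = max(p_y - p_y_given_x, 0.0)   # S1: excitatory from Y, inhibitory from B
    p_s2 = max(p_y_given_x - p_y, 0.0)   # S2: the mirrored branch
    return p_s1 + p_s2                   # F sums the two rectified halves = |P(Y) - P(Y|X)|

assert abs(subtractor(0.6, 0.9) - 0.3) < 1e-12
assert abs(subtractor(0.9, 0.6) - 0.3) < 1e-12
```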
[00071] In summary, FIG. 6 depicts a neuronal network topology for computing the conditional dependency structure between two or more random variables encoded as input data streams. The topology uses a subtractor circuit to compare P(Y) to P(Y|X) and determine pairwise conditional structure for a network of random variables. In addition, described herein is a method for learning conditional structure by determining independence and conditional dependence of temporally lagged variables using the network topology shown in FIG. 6.

[00072] (3.1.8) Spiking Encoder/Decoder
[00073] FIG. 11 depicts the system flow for the system for computing conditional probabilities of random variables for structure learning and Bayesian inference according to embodiments of the present disclosure. The spike encoder takes input sensor data and rate-codes the values into frequencies of spikes in input spike trains, while the spiking decoder takes the output spike trains and decodes the frequencies of spikes back into values for the random variables and/or conditional probabilities, depending on what the user has queried. The inputs X (element 300) and Y (element 304) shown in FIG. 3 come from the spike encoding (element 1100) of streaming sensor data (element 1102) on a mobile platform, such as a ground vehicle (element 1104) or aircraft (element 1106), as depicted in FIG. 11. The values of the streaming sensor data (element 1102) are temporally coded using rate coding. Probabilistic input spikes for X (element 300) are generated as a fixed-rate Poisson process. Each time there is a spike in the realization of X (element 300), a Y (element 304) spike is generated with a fixed Bernoulli probability P. To generate the results shown in FIGs. 9, 10A, and 10B, ten Bernoulli random variables were generated using this method, some of which are conditionally dependent on each other.
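By way of non-limiting illustration, the following sketch generates a conditionally dependent pair of spike trains and decodes a firing rate back into a unit-less value consistent with Equation (8). The rates, seed, and function names are assumptions made for this example.

```python
import numpy as np

# Sketch of the synthetic input generation described above: X spikes as a
# fixed-rate Bernoulli approximation of a Poisson process, and Y spikes with
# a fixed Bernoulli probability each time X spikes.
def generate_pair(n_steps, p_x=0.7, p_y_given_x=0.8, seed=0):
    rng = np.random.default_rng(seed)
    x = (rng.random(n_steps) < p_x).astype(int)               # realization of X
    y = x * (rng.random(n_steps) < p_y_given_x).astype(int)   # Y depends on X
    return x, y

def rate_decode(spikes, resolution=0.040):
    duration = len(spikes) * resolution      # seconds of simulated data
    rate_hz = spikes.sum() / duration        # firing rate in Hertz
    return rate_hz * resolution              # unit-less value in [0, 1], as in Eq. (8)

x, y = generate_pair(10_000)
print(rate_decode(x), rate_decode(y))        # roughly P(X) and P(X)*P(Y|X)
```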
[00074] (3.1.9) Neuronal Topology (FIG. 11, element 1108)
[00075] The neuromorphic hardware required for implementing the neuronal network topology (element 1108) in FIG. 11 must provide specific neuron voltage (spiking) equations, synaptic weight update rules (known as STDP), and specific neuronal voltage dynamics. The STDP required for the PCU+ topology in FIG. 13 is described in Literature Reference No. 5 as a function of Δt = t_post - t_pre, where A+ and A- are the maximum and minimum gains, and τ+ and τ- are the timescales for weight increase and decrease (FIG. 2). For the topologies in FIGs. 3 and 6, a unique "STDP Reversal" update rule is employed, such that the conditions for the two cases (i.e., Δt > 0, Δt < 0) are swapped. This reversal may not be implementable on existing general-purpose neuromorphic hardware. The voltage update rules required for the system described herein are given in Equations (3), (4), (5), and (7). Additional details regarding the neuronal topology (element 1108) implemented on neuromorphic hardware can be found in U.S. Application No. 16/294,815, entitled "A Neuronal Network Topology for Computing Conditional Probabilities," which is hereby incorporated by reference as though fully set forth herein. The neuromorphic computing hardware according to embodiments of the present disclosure replaces the usage of a digital computer CPU (central processing unit) or GPU (graphics processing unit).
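By way of non-limiting illustration, the following sketch expresses the "STDP Reversal" rule under a common pair-based exponential STDP model with the two branches of Δt swapped. The exponential window and the parameter values are assumptions for illustration and do not specify the hardware rule.

```python
import math

# Sketch of the reversed pair-based STDP update: a pre-before-post pairing
# (dt > 0) depresses the synapse, and a post-before-pre pairing (dt < 0)
# potentiates it, i.e., the opposite of standard STDP.
def reversed_stdp(dt, a_plus=0.01, a_minus=0.01, tau_plus=0.02, tau_minus=0.02):
    if dt > 0:                                       # pre fired before post
        return -a_minus * math.exp(-dt / tau_minus)  # reversed: depress
    if dt < 0:                                       # post fired before pre
        return a_plus * math.exp(dt / tau_plus)      # reversed: potentiate
    return 0.0

print(reversed_stdp(0.005), reversed_stdp(-0.005))   # negative, then positive
```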
[00076] (3.1.10) Neuromorphic Compiler (FIG. 11, element 1110)
[00077] The neuromorphic compiler (element 1110) is detailed in U.S. Application No. 16/294,886, entitled "Programming Model for a Bayesian Neuromorphic Compiler," which is hereby incorporated by reference as though fully set forth herein. It is a programming model that lets users query Bayesian network probabilities from the learned conditional model (element 1112) for further processing or decision making. The learned conditional model (element 1112) refers to the conditional properties learned between input random variables. For example, if the probability of Y=5 given that X=3 is 60%, then P(Y=5|X=3) = 0.6 is part of the learned conditional model (element 1112). For instance, in a fault-message application, such decision making can take the form of preventative repair of expected future faults based on current faults. Preventative repair is when parts on a machine (e.g., a vehicle) are replaced before they actually wear out, so that they do not wear out while in operation. For example, if a user sees system fault message #1 while a vehicle is in operation, and the user knows that P(system fault message #2 | system fault message #1) = 95%, the user can preventatively replace the vehicle part corresponding to system fault message #2 in anticipation that it will fail soon as well.
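By way of non-limiting, hypothetical illustration, a learned conditional model can be queried as a simple lookup of conditional probabilities for preventative-repair decisions; the fault names, probability value, and threshold below are assumptions, not part of the programming model's actual interface.

```python
# Hypothetical use of the learned conditional model: conditional probabilities
# read out from the PCU+ circuits are stored in a table that downstream
# decision logic queries.
learned_model = {("fault_2", "fault_1"): 0.95}   # P(fault_2 | fault_1)

def should_preemptively_replace(effect, observed_cause, threshold=0.9):
    return learned_model.get((effect, observed_cause), 0.0) >= threshold

if should_preemptively_replace("fault_2", "fault_1"):
    print("schedule replacement of the part associated with fault_2")
```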
[00078] FIG. 13 is an illustration of a full conditional probability unit according to some embodiments of the present disclosure. In order to compute the conditional probability of two input processes X (element 300) and Y (element 304), define inputs I_P (i.e., the phasic input (element 1300)) and I_T (i.e., the tonic input (element 1302)), as shown in FIG. 13. I_P (element 1300) comes from the logical AND of the two input processes X (element 300) and Y (element 304), and I_T (element 1302) comes straight from X (element 300). Since the tonic input (element 1302) corresponds to P(X) and the phasic input (element 1300) corresponds to P(X, Y), this equation forces w (element 1304) to converge to P(X, Y) / P(X) = P(Y|X). The tonic input I_T (element 1302) causes A (element 100) to spike, A (element 100) causes B (element 102) to spike, resulting in an increase to w (element 1304). However, if C (element 1306) spikes, delay T1 (element 1308) will cause B (element 102) to spike before A (element 100), causing w (element 1304) to decrease. Additionally, A (element 100) will not cause B (element 102) to spike after T1 (element 1308) due to B's (element 102) refractory period, which is the period after stimulation during which a neuron is unresponsive. The refractory period of a neuron is a period just after a spike during which it cannot spike again, even if it receives input spikes from neighboring neurons. This is a known biological mechanism leveraged by the network according to embodiments of the present disclosure. This results in maintenance of a balance, dependent on C (element 1306) (i.e., if C (element 1306) spikes every time B (element 102) spikes, then the increment from A (element 100) will cancel with the decrement from C (element 1306), producing dw/dt = 0).

[00079] Furthermore, two more neurons D (element 1310) and E (element 1312) were added, with input from B (element 102) and delays T1 (element 1308) and T2 (element 1314), respectively. Since T1 (element 1308) < T2 (element 1314), B (element 102) will cause C (element 1306) to spike twice: once through D (element 1310) (fast path) and once through E (element 1312) (slow path). Because the delay T1 is less than T2, spikes that travel along B → E → C take longer than spikes that travel along B → D → C, which explains the designation of "fast path" and "slow path". Thus, for each spike from the tonic input I_T (element 1302) that causes w (element 1304) to increase, there are two spikes coming from C (element 1306) that cause it to decrease. As a result, w (element 1304) decreases in proportion to itself (i.e., dw/dt = wI_T - 2wI_T = -wI_T). An additional neuron I (element 1316) inhibits neurons D (element 1310) and E (element 1312) so that they will not spike with I_P (element 1300), and there will be no associated decrease in w (element 1304). Note that if B (element 102) spikes as a result of I_P (element 1300), it will not spike again due to I_T (element 1302) because of B's (element 102) refractory period. This network now models the fixed point: dw/dt = -wI_T + I_P.
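By way of non-limiting illustration, the following sketch integrates the fixed-point dynamics dw/dt = -w·I_T + I_P in discrete time and shows w settling at P(X, Y)/P(X) = P(Y|X); the learning rate and input probabilities are assumptions for this example.

```python
import numpy as np

# Euler-step sketch of dw/dt = -w*I_T + I_P, with I_T driven by X and I_P by
# the logical AND of X and Y. The average update vanishes when
# w = P(X, Y)/P(X) = P(Y|X).
rng = np.random.default_rng(1)
p_x, p_y_given_x = 0.7, 0.6
w, eta = 0.0, 0.005
for _ in range(20_000):
    x = rng.random() < p_x
    y = x and (rng.random() < p_y_given_x)
    i_t = 1.0 if x else 0.0            # tonic input, average ~ P(X)
    i_p = 1.0 if (x and y) else 0.0    # phasic input, average ~ P(X, Y)
    w += eta * (-w * i_t + i_p)        # discrete-time version of the fixed point
print(round(w, 3))                     # approaches 0.6 = P(Y|X)
```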
[00080] (3.2) Experimental Studies
[00081] The PCU+ is able to compute the conditional probability of two random variables, as shown in FIG. 7, where the unbolded line (element 700) represents the calculated results, and the bold line (element 702) represents the ground truth. As depicted in FIG. 7, the synaptic weight (w) between A and B converges to its final value in about 30 seconds of neural computation. The time interval between each input data point is 40 ms, and the spiking probability for the inputs is set to 0.7. This means that the PCU+ converged after observing around 500 input spikes, which is a fairly efficient way to estimate the conditional probability.
The results also show that w is able to track the fluctuation in P(Y|X) as the system progresses in time. In practice, the weight w is recorded in a variable in a software program or in a register in neuromorphic hardware. Most spiking neural network simulators allow recording of internal variables such as synaptic weights. In neuromorphic hardware, on the other hand, a register read operation is performed to record the final weight value to a recording medium.
[00082] FIG. 8 shows the results from a suite of simulations with the PCU+. The probabilities P(X) and P(Y|X) are varied to validate the accuracy of the neural network across different input conditions. The final synaptic weight, w, which encodes P(Y|X), was plotted after each simulation. It was observed that the final weights (represented by the filled circles (element 800)) align with the ground-truth values (i.e., the dotted line (element 802)). Note that the PCU+ circuit is able to accurately compute the conditional probability over the entire range ([0, 1]) of possible values.
[00083] In addition, the PCU+ with the subtractor circuit according to embodiments of the present disclosure was applied to solve structure learning problems. In structure learning, the goal is to identify the causal relationships in a Bayesian network. More precisely, in experimental studies, the goal was to find the dependencies between ten different random variables, in which the current value of a variable affects the future values of other variables (i.e., Granger causality (see Literature Reference No. 4)). In order to test for Granger causality, the following pre-processing was performed on the data before it was fed to the PCU+. The data for each random variable is a stream of 0s and 1s, recording the occurrence of errors as a time series. Before a pair of data streams (e.g., X and Y) is fed to the PCU+, the data for Y is first shifted earlier by one time step. Effectively, P(Yt+1|Xt) is being calculated instead of P(Yt|Xt). This is necessary because the cause and the effect happen sequentially in time; by shifting Y forward in the training dataset, whether the current X (i.e., Xt) has an effect on the future Y (i.e., Yt+1) was tested. The dependencies were identified by computing P(Yt+1|Xt) between all pairs of random variables in the system and calculating the deviation of the conditional probabilities, P(Yt+1|Xt), from their intrinsic values, P(Yt+1). The deviation, |P(Yt+1) - P(Yt+1|Xt)|, is encoded in the firing rate of neuron F in each PCU+. A threshold is defined above which the link is flagged as significant, and a conclusion is made that X causes Y to happen.
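By way of non-limiting illustration, the following sketch shows the one-step shift applied to the Y stream before it is fed to the PCU+; the toy streams are assumptions for this example.

```python
# Shift Y earlier by one time step so the PCU+ receives (Xt, Yt+1) pairs and
# therefore estimates P(Yt+1 | Xt), the quantity needed for the
# Granger-style causality test.
def shift_for_granger(x_stream, y_stream):
    # drop the last X sample and the first Y sample so the streams stay aligned
    return x_stream[:-1], y_stream[1:]

x = [1, 0, 1, 1, 0, 1]
y = [0, 1, 0, 1, 1, 0]                  # here Y follows X with a one-step lag
x_t, y_next = shift_for_granger(x, y)
print(list(zip(x_t, y_next)))           # every X spike pairs with a next-step Y spike
```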
[00084] One hundred PCU+ circuits were used to compute the conditional probabilities between all combinations of the ten random variables. The table in FIG. 9 summarizes the firing rates of neuron F (element 600) from each PCU+, which encode |P(Yt+1) - P(Yt+1|Xt)|. In one embodiment, values larger than 0.055, the threshold indicating a causal relationship, are flagged. The results from the table in FIG. 9 are displayed visually by drawing an arrow between two random variables that have a potential causal relationship, as illustrated in FIG. 10A. The neural network described herein can identify these dependencies with 100% accuracy, as determined by comparison with the ground-truth dependencies shown in FIG. 10B.
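By way of non-limiting illustration, the following sketch shows how a matrix of deviations such as the one in FIG. 9 can be thresholded at 0.055 to produce directed edges; the deviation matrix below is fabricated solely for illustration.

```python
import numpy as np

# Post-processing sketch: each PCU+ reports |P(Yt+1) - P(Yt+1|Xt)| as a
# firing rate, and entries above the threshold are flagged as directed
# edges X -> Y.
def flag_dependencies(deviation, threshold=0.055):
    edges = []
    n = deviation.shape[0]
    for i in range(n):
        for j in range(n):
            if i != j and deviation[i, j] > threshold:
                edges.append((i, j))      # interpreted as "variable i causes variable j"
    return edges

deviations = np.zeros((10, 10))
deviations[0, 3] = 0.080                  # hypothetical strong dependency
deviations[2, 7] = 0.020                  # below threshold, ignored
print(flag_dependencies(deviations))      # [(0, 3)]
```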
[00085] Bayesian inference is ubiquitous in data science and decision theory. The invention described herein can be used to reduce the operational cost of aircraft and vehicles through preventive maintenance and diagnostics, enable maneuvering (i.e., driving) of an autonomous vehicle by performing a Bayesian inference task, enhance real-time mission planning for unmanned aircraft, and facilitate unsupervised structure learning in new environments. Bayesian decision theory is a statistical approach to the problem of pattern classification. Pattern classification, an example of a Bayesian inference task, has several applications, including object detection and object classification.
Additionally, an application of the invention described herein is estimating conditional probabilities between fault messages of a ground vehicle or aircraft for fault prognostic models.
[00086] In the application of a self-driving vehicle, one or more processors of the system described herein can control one or more motor vehicle components (electrical, non-electrical, mechanical), such as a brake, a steering mechanism, suspension, or a safety device (e.g., airbags, seatbelt tensioners, etc.). Further, the vehicle could be an unmanned aerial vehicle (UAV), an autonomous self-driving ground vehicle, or a human-operated vehicle controlled either by a driver or by a remote operator. For instance, upon object detection (i.e., a Bayesian inference task) and recognition, the system can cause the autonomous vehicle to perform a driving operation/maneuver (such as steering or another command) in line with driving parameters in accordance with the recognized object. For example, if the system recognizes a bicyclist, another vehicle, or a pedestrian, the system described herein can cause a vehicle maneuver/operation to be performed to avoid a collision with the bicyclist or vehicle (or any other object that should be avoided while driving). The system can cause the autonomous vehicle to apply a functional movement response, such as a braking operation followed by a steering operation, to redirect the vehicle away from the object, thereby avoiding a collision.
[00087] Other appropriate responses may include one or more of a steering operation, a throttle operation to increase or decrease speed, or a decision to maintain course and speed without change. The responses may be appropriate for avoiding a collision, improving travel speed, or improving efficiency. As can be appreciated by one skilled in the art, control of other device types is also possible. Thus, there are a number of automated actions that can be initiated by the autonomous vehicle given the particular object detected and the circumstances in which the system is implemented.

[00088] The system according to embodiments of the present disclosure also provides the additional functionality of measuring the deviation from the normal probability, which can be used to indicate potential causality in a Bayesian network. With a dynamic threshold, the new network fixes a key problem in the prior art, namely that the computation becomes inaccurate when the conditional probability exceeds a threshold.
[00089] Finally, while this invention has been described in terms of several
embodiments, one of ordinary skill in the art will readily recognize that the invention may have other applications in other environments. It should be noted that many embodiments and implementations are possible. Further, the following claims are in no way intended to limit the scope of the present invention to the specific embodiments described above. In addition, any recitation of "means for" is intended to evoke a means-plus-function reading of an element and a claim, whereas any elements that do not specifically use the recitation "means for" are not intended to be read as means-plus-function elements, even if the claim otherwise includes the word "means". Further, while particular method steps have been recited in a particular order, the method steps may occur in any desired order and fall within the scope of the present invention.

Claims

CLAIMS What is claimed is:
1. A system for computing conditional probabilities of random variables for Bayesian inference, the system comprising:
neuromorphic hardware configured to implement a spiking neural network, the spiking neural network comprising a plurality of neurons to compute the conditional probability of two random variables X and Y according to the following:
w * P(X) = P(X, Y)
where P denotes probability, and w denotes a synaptic weight between an A neuron and a connected B neuron;
wherein an X neuron and a Y neuron are configured to spike along with the random variables X and Y;
wherein the spiking neural network comprises an increment path for w that is proportional to a product of w * P(X), a decrement path for w that is proportional to P(X, Y), and delay and spike timing dependent plasticity (STDP) parameters such that w increases and decreases with the same magnitude for a single firing event.
2. The system as set forth in Claim 1, wherein the spiking neural network implemented by the neuromorphic hardware comprises a plurality of synapses, wherein all neurons, except for the B neuron, have the same threshold voltage, and wherein the synaptic weight w between the A neuron and the B neuron is the only synapse that has STDP, wherein all other synapses have a fixed weight that is designed to trigger post-synaptic neurons when pre-synaptic neurons fire.
3. The system as set forth in Claim 2, wherein a sign of the STDP is inverted such that if the A neuron spikes before the B neuron, the synaptic weight w is decreased.
4. The system as set forth in Claim 3, wherein the spiking neural network implemented by the neuromorphic hardware further comprises an XY neuron connected with both the A neuron and the B neuron, and wherein a delay is imposed between the XY neuron and the A neuron, which causes an increase in the synaptic weight w.
5. The system as set forth in Claim 4, wherein the X neuron is connected with the A neuron, wherein when the X neuron fires, the B neuron spikes after the A neuron in proportion to the synaptic weight w, such that a spiking rate for the B neuron depends on a product between a spiking rate of the X neuron and the synaptic weight w.
6. A neuromorphic hardware implemented method for computing conditional
probabilities of random variables for Bayesian inference, the method comprising an act of:
operating a spiking neural network comprising a plurality of neurons to compute the conditional probability of two random variables X and Y according to the following:
w * P(X) = P(X, Y)
where P denotes probability, and w denotes a synaptic weight between an A neuron and a connected B neuron;
wherein an X neuron and a Y neuron are configured to spike along with the random variables X and Y;
wherein the spiking neural network comprises an increment path for w that is proportional to a product of w * P(X), a decrement path for w that is proportional to P(X, Y), and delay and spike timing dependent plasticity (STDP) parameters such that w increases and decreases with the same magnitude for a single firing event.
7. The method as set forth in Claim 6, wherein the spiking neural network comprises a plurality of synapses, wherein all neurons, except for the B neuron, have the same threshold voltage, and wherein the synaptic weight w between the A neuron and the B neuron is the only synapse that has STDP, wherein all other synapses have a fixed weight that is designed to trigger post-synaptic neurons when pre-synaptic neurons fire.
8. The method as set forth in Claim 7, wherein a sign of the STDP is inverted such that if the A neuron spikes before the B neuron, the synaptic weight w is decreased.
9. The method as set forth in Claim 8, wherein the spiking neural network further
comprises an XY neuron connected with both the A neuron and the B neuron, and wherein the method further comprises an act of imposing a delay between the XY neuron and the A neuron, which causes an increase in the synaptic weight w.
10. The method as set forth in Claim 9, wherein the X neuron is connected with the A neuron, wherein when the X neuron fires, the B neuron spikes after the A neuron in proportion to the synaptic weight w, such that a spiking rate for the B neuron depends on a product between a spiking rate of the X neuron and the synaptic weight w.
11. The system as set forth in Claim 1, wherein the spiking neural network implemented by the neuromorphic hardware further comprises a subtractor circuit, and wherein the subtractor circuit is used to compare the random variables X and Y.
12. The method as set forth in Claim 6, wherein the spiking neural network further
comprises a subtractor circuit, and wherein the method further comprises an act of using the subtractor circuit to compare the random variables X and Y.
EP19782857.7A 2019-01-09 2019-09-20 A spiking neural network for probabilistic computation Pending EP3908982A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962790296P 2019-01-09 2019-01-09
PCT/US2019/052275 WO2020146016A1 (en) 2019-01-09 2019-09-20 A spiking neural network for probabilistic computation

Publications (1)

Publication Number Publication Date
EP3908982A1 true EP3908982A1 (en) 2021-11-17

Family

ID=68136569

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19782857.7A Pending EP3908982A1 (en) 2019-01-09 2019-09-20 A spiking neural network for probabilistic computation

Country Status (3)

Country Link
EP (1) EP3908982A1 (en)
CN (1) CN113196301A (en)
WO (1) WO2020146016A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115545190B (en) * 2022-12-01 2023-02-03 四川轻化工大学 Impulse neural network based on probability calculation and implementation method thereof

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2467401A1 (en) * 2001-11-16 2003-05-30 Yuan Yan Chen Pausible neural network with supervised and unsupervised cluster analysis
US8370241B1 (en) * 2004-11-22 2013-02-05 Morgan Stanley Systems and methods for analyzing financial models with probabilistic networks
JP2008293199A (en) * 2007-05-23 2008-12-04 Toshiba Corp Bayesian network information processing device and bayesian network information processing program
CN107092959B (en) * 2017-04-07 2020-04-10 武汉大学 Pulse neural network model construction method based on STDP unsupervised learning algorithm
US11449735B2 (en) * 2018-04-17 2022-09-20 Hrl Laboratories, Llc Spiking neural network for probabilistic computation

Also Published As

Publication number Publication date
CN113196301A (en) 2021-07-30
WO2020146016A1 (en) 2020-07-16

Similar Documents

Publication Publication Date Title
Almeida Multilayer perceptrons
Werfel et al. Learning curves for stochastic gradient descent in linear feedforward networks
Kosmatopoulos et al. Dynamical neural networks that ensure exponential identification error convergence
US11449735B2 (en) Spiking neural network for probabilistic computation
EP3136304A1 (en) Methods and systems for performing reinforcement learning in hierarchical and temporally extended environments
US11347221B2 (en) Artificial neural networks having competitive reward modulated spike time dependent plasticity and methods of training the same
Jung et al. Pattern classification of back-propagation algorithm using exclusive connecting network
EP3908982A1 (en) A spiking neural network for probabilistic computation
US10748063B2 (en) Neuronal network topology for computing conditional probabilities
CN112418421A (en) Roadside end pedestrian trajectory prediction algorithm based on graph attention self-coding model
Maravall et al. Fusion of learning automata theory and granular inference systems: ANLAGIS. Applications to pattern recognition and machine learning
US11443171B2 (en) Pulse generation for updating crossbar arrays
Zabiri et al. NN-based algorithm for control valve stiction quantification
Hirasawa et al. Improvement of generalization ability for identifying dynamical systems by using universal learning networks
Várkonyi et al. Improved neural network control of inverted pendulums
CN111582461A (en) Neural network training method and device, terminal equipment and readable storage medium
PANDYA et al. A stochastic parallel algorithm for supervised learning in neural networks
Abdulkarim et al. Evaluating Feedforward and Elman Recurrent Neural Network Performances in Time Series Forecasting
Stepp et al. A dynamical systems approach to neuromorphic computation of conditional probabilities
Simani Mathematical Modeling and Fault Description
Munavalli et al. Pattern recognition for data retrieval using artificial neural network
de Moraes Assessment of EFuNN accuracy for pattern recognition using data with different statistical distributions
Serpen Search for a Lyapunov function through empirical approximation by artificial neural nets: Theoretical framework
Reis et al. Modified fuzzy-CMAC networks with clustering-based structure
Gouko et al. An action generation model by using time series prediction and its application to robot navigation

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20210420

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230525

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20230724