EP3908982A1 - Spiking neural network for probabilistic computation - Google Patents

Spiking neural network for probabilistic computation

Info

Publication number
EP3908982A1
Authority
EP
European Patent Office
Prior art keywords
neuron
neural network
synaptic
random variables
synaptic weight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP19782857.7A
Other languages
English (en)
French (fr)
Inventor
Hao-Yuan Chang
Aruna JAMMALAMADAKA
Nigel D. STEPP
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HRL Laboratories LLC
Original Assignee
HRL Laboratories LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HRL Laboratories LLC filed Critical HRL Laboratories LLC
Publication of EP3908982A1
Legal status: Pending

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/047 Probabilistic or stochastic networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/088 Non-supervised learning, e.g. competitive learning

Definitions

  • the present invention relates to a system for computing conditional probabilities of random variables for structure learning and Bayesian inference.
  • Bayesian inference is a popular framework for making decisions by estimating the conditional dependencies between different variables in the data.
  • the inference task is often computationally expensive and performed using conventional digital computers.
  • One prior method for probabilistic computation uses synaptic updating to perform Bayesian inference (see Literature Reference No. 1 of the List of Incorporated Literature References), but the method is only a mathematical theory and not biologically plausible.
  • the present invention relates to a system for computing conditional probabilities of random variables for structure learning and Bayesian inference.
  • the system comprises neuromorphic hardware for implementing a spiking neural network comprising a plurality of neurons to compute the conditional probability of two random variables X and Y according to the relation P(Y|X) = P(X, Y) / P(X).
  • the spiking neural network comprises an increment path for w that is proportional to the product w · P(X), a decrement path for w that is proportional to P(X, Y), and delay and spike timing dependent plasticity (STDP) parameters such that w increases and decreases with the same magnitude for a single firing event.
  • the spiking neural network comprises a plurality of synapses, wherein all neurons, except for the B neuron, have the same threshold voltage, and wherein the synaptic weight w between the A neuron and the B neuron is the only synapse that has STDP; all other synapses have a fixed weight that is designed to trigger post-synaptic neurons when pre-synaptic neurons fire.
  • a sign of the STDP is inverted such that if the A neuron spikes before the B neuron, the synaptic weight w is decreased.
  • the spiking neural network further comprises an XY neuron connected with both the A neuron and the B neuron, and wherein a delay is imposed between the XY neuron and the A neuron, which causes an increase in the synaptic weight w.
  • the B neuron spikes after the A neuron in proportion to the synaptic weight w, such that a spiking rate for the B neuron depends on a product between a spiking rate of the X neuron and the synaptic weight w.
  • the present invention also includes a computer implemented method.
  • the computer implemented method includes an act of causing a computer to execute instructions and perform the resulting operations.
  • FIG. 1 is an illustration of spike timing dependent plasticity (STDP) according to some embodiments of the present disclosure
  • FIG. 2 is an illustration of a relationship between weight change and a spike interval for STDP according to some embodiments of the present disclosure
  • FIG. 3 is an illustration of network topology for a probabilistic computation unit (PCU+) according to some embodiments of the present disclosure
  • FIG. 4 is an illustration of a neural path for increasing the synaptic weight according to some embodiments of the present disclosure
  • FIG. 5 is an illustration of a neural path for decreasing the synaptic weight according to some embodiments of the present disclosure
  • FIG. 6 is an illustration of a PCU+ with the subtractor circuit according to some embodiments of the present disclosure
  • FIG. 7 is a plot illustrating a conditional probability computed by the PCU+ according to some embodiments of the present disclosure
  • FIG. 8 is a plot illustrating conditional probabilities computed by the PCU+ with different probability settings according to some embodiments of the present disclosure
  • FIG. 9 is a table illustrating probabilities computed by the neural network according to some embodiments of the present disclosure.
  • FIG. 10A is an illustration of dependencies found by the PCU+ for ten random variables according to some embodiments of the present disclosure
  • FIG. 10B is an illustration of ground truth dependencies according to some embodiments of the present disclosure
  • FIG. 11 is an illustration of the system flow of the system for computing conditional probabilities of random variables for structure learning and Bayesian inference according to some embodiments of the present disclosure
  • FIG. 12A is an illustration of a directional excitatory synapse according to some embodiments of the present disclosure
  • FIG. 12B is an illustration of a directional inhibitory synapse according to some embodiments of the present disclosure.
  • FIG. 13 is an illustration of a full conditional probability unit according to some embodiments of the present disclosure.
  • any element in a claim that does not explicitly state "means for" performing a specified function, or "step for" performing a specific function, is not to be interpreted as a "means" or "step" clause as specified in 35 U.S.C.
  • Various embodiments of the invention include three "principal" aspects.
  • the first is a system for computing conditional probabilities of random variables for structure learning and Bayesian inference.
  • the system is typically in the form of a computer system operating software (e.g., neuromorphic hardware) or in the form of a "hard-coded" instruction set.
  • Neuromorphic hardware is any electronic device which mimics the natural biological structures of the nervous system.
  • the implementation of neuromorphic computing on the hardware level can be realized by oxide-based memristors, spintronic memories, threshold switches, and transistors.
  • the second principal aspect is a method, typically in the form of software, implemented using the neuromorphic hardware (digital computer).
  • the digital computer system (neuromorphic hardware) is configured to execute instructions and perform the resulting operations described herein.
  • Bayesian inference is a popular framework for making decisions by estimating the conditional dependencies between different variables in the data.
  • the inference task is often computationally expensive and performed using conventional digital computers.
  • Described herein is a unique spiking neural network to compute the conditional probabilities of random variables for structure learning and Bayesian inference.
  • "Random variables" is a statistical term meaning that their values over time are drawn from some random distribution. The X and Y neurons are made to spike along with these random variables.
  • the spiking neural network has a new network topology that enables efficient computation beyond state-of-the-art methods.
  • FIG. 1 is a depiction of Spike Timing Dependent Plasticity (STDP).
  • STDP results in strengthening connections due to correlated spikes, and weakening connections due to uncorrelated spikes. For example, in FIG. 1, if neuron A (element 100) spikes just before neuron B (element 102), then w increases a lot; if neuron A (element 100) spikes a long time before neuron B (element 102) then w increases a little bit.
  • neuron B (element 102) spikes just before neuron A (element 100) then w decreases a lot, and if neuron B (element 102) spikes a long time before neuron A (element 100), w decreases a little bit.
  • FIG. 2 illustrates the relationship between weight change and spike interval for STDP. More specifically, FIG. 2 depicts the relationship between the magnitude of the weight update and the time difference (Δt) between the pre-synaptic and the post-synaptic spikes.
  • A+ and A− determine the speed of convergence to the final weight value (i.e., the learning rate). A higher learning rate will converge faster, but the solution is less accurate. As a non-limiting example, 0.008 was used as the learning rate to achieve a good trade-off between speed and accuracy.
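  • To make the FIG. 2 relationship concrete, the following is a minimal Python sketch of the STDP window; the exponential form follows the gains A+ and A− and the timescales τ+ and τ− described later in this disclosure, while the numeric constants (other than the 0.008 learning-rate example above) are illustrative assumptions:

      import math

      # Illustrative STDP window (cf. FIG. 2). A_PLUS/A_MINUS are the maximum and
      # minimum gains; TAU_PLUS/TAU_MINUS are the timescales for weight increase
      # and decrease. The values are assumptions, not taken from this disclosure.
      A_PLUS = A_MINUS = 0.008
      TAU_PLUS = TAU_MINUS = 20.0  # milliseconds (assumed)

      def stdp_dw(dt_ms, reverse=False):
          """Weight change for dt = t_post - t_pre (in ms).

          reverse=True swaps the two cases, as in the 'STDP Reversal' rule used
          by the PCU+; with symmetric gains and timescales, swapping the cases
          is equivalent to flipping the sign of the update.
          """
          if dt_ms > 0:
              dw = A_PLUS * math.exp(-dt_ms / TAU_PLUS)    # pre fired before post
          else:
              dw = -A_MINUS * math.exp(dt_ms / TAU_MINUS)  # post fired before pre
          return -dw if reverse else dw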
  • the neural network topology described herein is capable of computing the conditional probability of two random variables (X and Y). From Bayes’ theorem, the conditional probability can be computed as
  • P(Y|X) = P(X, Y) / P(X). (1)
  • At equilibrium, the network drives the stored weight w to satisfy w · P(X) = P(X, Y). (2)
  • When the left-hand side of Equation (2) equals the right-hand side, w equals the conditional probability, P(Y|X). It is, therefore, the goal to design a neural network that has the following characteristics: an increment path for w proportional to w · P(X), a decrement path for w proportional to P(X, Y), and delay and STDP parameters such that the two paths change w with the same magnitude for a single firing event (a numeric check of this equilibrium follows below).
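  • As a numeric check of Equations (1) and (2), with illustrative values P(X) = 0.5 and P(X, Y) = 0.2: at equilibrium, w = P(X, Y) / P(X) = 0.2 / 0.5 = 0.4 = P(Y|X), so reading out the converged synaptic weight w directly yields the conditional probability.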
  • Circular nodes (e.g., the X neuron (element 300)) represent neurons; inhibitory synapses are drawn with a circle (element 302) at the end of the connection, and triangles (element 301) represent excitatory synapses, which are considered the default synapses.
  • All neurons have a threshold voltage of 1 volt except for neuron B (element 102).
  • the only synapse that has STDP is the one between neuron A (element 100) and B (element 102); other synapses have a fixed weight (e.g., a weight large enough to trigger the post-synaptic neuron whenever the pre-synaptic neuron fires).
  • T1 was set to be the unit delay between two neurons (0.1 milliseconds (ms)), and one volt was used as the firing threshold.
  • All of the neurons are of the integrate-and-fire type, which sums up all of the input voltages from pre-synaptic spikes.
  • the decay constant τ_decay for the membrane potential is set to infinity; therefore, each neuron's membrane potential will never decay.
  • the integrate-and-fire dynamics can be implemented using the following equations (see Literature Reference No. 3): V_i(t + Δt) = V_i(t) + Σ_j w_ij · I_j(t), (3) where V is the membrane potential in volts for the neuron and w_ij is the synaptic weight from pre-synaptic neuron j.
  • I_j is an indicator function for identifying which of the pre-synaptic neurons have fired: I_j(t) = 1 if pre-synaptic neuron j fired at time t, and I_j(t) = 0 otherwise. (4)
  • the membrane potentials are compared with the threshold voltage V_threshold to determine if the neuron fires or not, i.e., the neuron fires when V ≥ V_threshold. (5)
  • V_threshold is set to a constant value (e.g., one volt) except for neuron B (element 102), whose threshold is adjusted dynamically as described later.
  • upon firing, the membrane potential is reset to V_reset; V_reset is zero for all neurons.
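  • A minimal Python sketch of these integrate-and-fire dynamics (class and variable names are illustrative; the absence of a decay term reflects τ_decay = ∞ above):

      class IntegrateAndFireNeuron:
          """Integrate-and-fire neuron with no membrane decay (tau_decay = infinity)."""

          def __init__(self, v_threshold=1.0, v_reset=0.0):
              self.v = v_reset                # membrane potential (volts)
              self.v_threshold = v_threshold  # firing threshold (Eq. (5))
              self.v_reset = v_reset          # reset potential

          def step(self, weights, fired):
              """weights[j]: synaptic weight w_ij from pre-synaptic neuron j;
              fired[j]: indicator I_j (1 if neuron j spiked this time step)."""
              # Eqs. (3)-(4): sum the weighted input voltages from pre-synaptic spikes.
              self.v += sum(w * f for w, f in zip(weights, fired))
              # Eq. (5): compare with the threshold; fire and reset if reached.
              if self.v >= self.v_threshold:
                  self.v = self.v_reset
                  return 1
              return 0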
  • the method described herein uses reverse STDP update rules and the topology shown in FIG. 3 for storing the computed conditional probability in the weight value between two neurons.
  • FIGs. 12A and 12B depict examples of a directional excitatory synapse and a directional inhibitory synapse, respectively.
  • FIG. 12A shows a directional excitatory synapse, where neuron A (element 100) causes neuron B (element 102) to spike.
  • FIG. 12B shows a directional inhibitory synapse, where neuron A (element 100) inhibits neuron B (element 102) from spiking.
  • the sign of STDP was purposely inverted such that if neuron A (element 100) spikes before neuron B (element 102) in FIG. 5, the synaptic weight w is decreased.
  • the reversal of STDP is necessary to decrease w when P(X) increases in order to keep w · P(X) constant. This is important for the network to converge to the desired equilibrium point instead of diverging away from it.
  • the sign of the STDP being inverted results in the function illustrated in FIG. 2 being flipped about the time axis.
  • (3.1.3) Increment Path for w
  • FIG. 4 illustrates the neural structure for increasing the synaptic weight w.
  • a delay between the XY neuron (element 400) and neuron A (element 100) is imposed, which causes w to increase when the XY neuron (element 400) fires because the A neuron (element 100) spikes after the B neuron (element 102) and the sign of the STDP is inverted.
  • the B neuron (element 102) spikes after the A neuron (element 100) in proportion to w.
  • V_dt is the dynamic threshold of the B neuron.
  • Equation (7) describes the update rule for V_dt; V_threshold is set to one volt in one embodiment.
  • the dynamic threshold allows the network to calculate the product between w and P(X) more accurately; otherwise, w will suffer from imprecision because it takes the same number of spikes from the A neuron (element 100) to trigger the B neuron (element 102) when w is larger than 0.5.
  • the dynamic threshold is updated each time after neuron B (element 102) fires according to this equation: V_dt = V_threshold − (V_B − V_dt), (7) where V_dt is the voltage threshold for the B neuron (element 102), V_threshold is the normal voltage threshold, which is set to 1 V, and V_B is the membrane potential of the B neuron (element 102) after accumulating all of the input voltages at the current time step. Note that in order for Equation (7) to work as designed, it is important to avoid conflicts between the increment path and the decrement path. For this reason, a delay of four units (0.4 ms) is imposed between the X neuron (element 300) and the A neuron (element 100) to allow w to increase first and then decrease in the case where both paths are triggered.
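  • A sketch of the Equation (7) update in Python, applied immediately after each B spike (the function name is illustrative, and the equation form is the reconstruction given above):

      V_THRESHOLD = 1.0  # normal voltage threshold (1 V)

      def update_dynamic_threshold(v_b, v_dt):
          """New voltage threshold for neuron B after it fires (Eq. (7)).

          v_b  -- membrane potential of B after accumulating this step's inputs
          v_dt -- B's current dynamic threshold
          The overshoot (v_b - v_dt) lowers the next threshold, carrying the
          surplus voltage forward so that B's firing rate can track the product
          w * P(X) even when w exceeds 0.5.
          """
          return V_THRESHOLD - (v_b - v_dt)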
  • the neural network can be operated in two modes: one is the training phase and the other is the measurement phase.
  • the objective of the training phase is to allow weight w to converge to the targeted P(Y ⁇ X).
  • spikes from the two random variables are fed to neuron X (element 300) and neuron Y (element 304 in FIGs. 3 and 6).
  • STDP is enabled for the synapse between neuron A (element 100) and neuron B (element 102).
  • when the training phase is complete (after a certain amount of time, defined by the users to suit their speed and accuracy requirements), the network enters the measurement phase.
  • the goal of the measurement phase is to determine the dependency between the two random variables by calculating the deviation of the conditional probability P(Y|X) from the intrinsic value P(Y).
  • P(X) is set to one in the measurement phase (i.e., it spikes continuously).
  • STDP is disabled to prevent w from being altered.
  • the resulting calculation (the deviation of the conditional probability from the intrinsic value) is encoded in the firing rate of the neuron F (element 600) in FIG. 6.
  • P(Y|X) is recorded by reading out the synaptic weight w.
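  • The two-phase protocol can be sketched as follows; the network object and its attributes are hypothetical placeholders for a PCU+ implementation, and only the phase structure (training with STDP enabled, then measurement with STDP disabled and P(X) clamped to one) is taken from this disclosure:

      def run_pcu(network, x_spikes, y_spikes, measure_steps):
          # Training phase: feed the spike streams of the two random variables;
          # STDP is enabled, so the weight w between A and B converges to P(Y|X).
          network.stdp_enabled = True
          for x, y in zip(x_spikes, y_spikes):
              network.step(x=x, y=y)

          # Measurement phase: disable STDP so w is frozen, and clamp P(X) = 1
          # (the X neuron spikes continuously).
          network.stdp_enabled = False
          f_count = 0
          for t in range(measure_steps):
              fired = network.step(x=1, y=y_spikes[t % len(y_spikes)])
              f_count += fired["F"]  # neuron F encodes the deviation of P(Y|X) from P(Y)

          # Read out P(Y|X) from the weight register and the deviation from F's rate.
          return network.w_ab, f_count / measure_steps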
  • FIG. 6 shows a neural network with a subtractor circuit (S1, S2, F) for calculating the absolute difference between P(Y) and P(Y|X).
  • the firing rate of neuron F (element 600) measures the likelihood that neuron X (element 300) causes neuron Y (element 304) to happen.
  • the S1 neuron (element 602) takes three inputs (from neuron B (element 102), XY (element 306), and Y (element 304)).
  • P(X) is set to one.
  • the synapse between the XY neuron (element 306) and the S1 neuron (element 602) is to compensate for the undesired artifact that the B neuron (element 102) will fire when the XY neuron (element 306) is triggered during the measurement phase, improving the accuracy of the subtraction calculation.
  • the S2 neuron (element 604) works similarly; however, it computes P(Y|X) − P(Y) instead.
  • Neuron F (element 600) outputs the sum of the two firing rates.
  • P(Y) − P(Y|X) and P(Y|X) − P(Y) are exactly opposite in sign, and because firing probabilities are non-negative, any negative values are reset to zero. Equations (9) through (11) below demonstrate how neuron F (element 600) effectively implements the absolute function |P(Y) − P(Y|X)|.
  • the S1 (element 602) and S2 (element 604) neurons have lower and upper membrane potential limits to prevent the voltages from running away uncontrollably; the voltage limits are set to [−5, 5] volts in one embodiment.
  • the probability of firing for neurons S1 (element 602), S2 (element 604), and F (element 600) can be described using the following equations: P(S1) = max(P(Y) − P(Y|X), 0), (9) P(S2) = max(P(Y|X) − P(Y), 0), (10) P(F) = P(S1) + P(S2). (11)
  • P(S1), P(S2), and P(F) are unit-less numbers between zero and one that represent the probabilities of firing at any given time step.
  • the max(·, 0) operators in equations (9) through (11) are used to account for the fact that probabilities cannot be negative.
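  • Equations (9) through (11) can be checked numerically with a minimal sketch (the function name and test values are illustrative):

      def subtractor_rates(p_y, p_y_given_x):
          """Firing probabilities of S1, S2, and F per Eqs. (9)-(11).

          The two differences are opposite in sign and clipped at zero, so
          P(F) equals the absolute difference |P(Y) - P(Y|X)|.
          """
          p_s1 = max(p_y - p_y_given_x, 0.0)  # Eq. (9)
          p_s2 = max(p_y_given_x - p_y, 0.0)  # Eq. (10)
          p_f = p_s1 + p_s2                   # Eq. (11)
          return p_s1, p_s2, p_f

      # Example: P(Y) = 0.5 and P(Y|X) = 0.75 give P(F) = 0.25.
      assert subtractor_rates(0.5, 0.75) == (0.0, 0.25, 0.25)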
  • FIG. 6 depicts a neuronal network topology for computing the conditional dependency structure between two or more random variables encoded as input data streams.
  • the topology uses a subtractor circuit to compare P(Y) to P(Y|X).
  • described herein is a method for learning conditional structure by determining independence and conditional dependence of temporally lagged variables using the network topology shown in FIG. 6.
  • FIG. 11 depicts the system flow for the system for computing conditional probabilities of random variables for structure learning and Bayesian inference according to embodiments of the present disclosure.
  • the spike encoder takes input sensor data and rate codes the values into frequencies of spikes in input spike trains, while the spiking decoder takes the output spike trains and decodes the frequencies of spikes back into values for the random variables and/or conditional probabilities, depending on what the user has queried.
  • the inputs X (element 300) and Y (element 304) shown in FIG. 3 come from the spike encoding (element 1100) of streaming sensor data (element 1102) on a mobile platform, such as a ground vehicle (element 1104) or aircraft (element 1106), as depicted in FIG. 11.
  • the values of the streaming sensor data will be temporally coded using rate coding.
  • Probabilistic input spikes for X are generated as a fixed-rate Poisson process.
  • Y spikes are generated with a fixed Bernoulli probability P.
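  • The input generation can be sketched as follows; drawing Y conditioned on X (to create a known ground-truth P(Y|X) for testing) is an illustrative assumption, with a per-time-step Bernoulli draw standing in for the fixed-rate Poisson process:

      import random

      def generate_inputs(n_steps, p_x=0.7, p_y_given_x=0.6):
          """Binary spike streams for X and Y with a known conditional probability."""
          x_spikes, y_spikes = [], []
          for _ in range(n_steps):
              x = 1 if random.random() < p_x else 0                  # Poisson-like X
              y = 1 if (x and random.random() < p_y_given_x) else 0  # Y given X
              x_spikes.append(x)
              y_spikes.append(y)
          return x_spikes, y_spikes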
  • in FIGs. 9, 10A, and 10B, ten Bernoulli random variables were generated using this method, some of which are conditionally dependent on each other.
  • the neuromorphic hardware required for implementing the neuronal network topology (element 1108) in FIG. 11 must have specific neuron voltage (spiking) equations, synaptic weight update rules (known as STDP), and specific neuronal voltage dynamics.
  • the STDP required for the PCU+ topology in FIG. 13 is described in Literature Reference No. 5.
  • Δw = A+ · exp(−Δt/τ+) for Δt > 0, and Δw = −A− · exp(Δt/τ−) for Δt < 0, where Δt = t_post − t_pre, A+ and A− are the maximum and minimum gains, and τ+ and τ− are timescales for weight increase and decrease (FIG. 2).
  • a unique "STDP Reversal" update rule is employed, such that the conditions for the two cases (i.e., Δt > 0, Δt < 0) are swapped.
  • This reversal may not be implementable on existing general-purpose neuromorphic hardware.
  • the voltage update rules required for the system described herein are described in Equations (3), (4), (5), and (7). Additional details regarding the neuronal topology (element 1108) implemented on neuromorphic hardware can be found in U.S. Application No. 16/294,815, entitled "A Neuronal Network Topology for Computing Conditional Probabilities," which is hereby incorporated by reference as though fully set forth herein.
  • the neuromorphic computing hardware according to embodiments of the present disclosure replaces the usage of a digital computer CPU (central processing unit) or GPU (graphics processing unit).
  • the neuromorphic compiler (element 1110) is detailed in U.S. Application No. 16/294,886, entitled "Programming Model for a Bayesian Neuromorphic Compiler," which is hereby incorporated by reference as though fully set forth herein. It is a programming model which lets users query Bayesian network probabilities from the learned conditional model (element 1112) for further processing or decision making.
  • for example, a learned value such as P(Y|X) = 0.6 is part of the learned conditional model (element 1112).
  • FIG. 13 is an illustration of a full conditional probability unit according to some embodiments of the present disclosure.
  • I_P, the phasic input (element 1300), and I_T, the tonic input (element 1302), are the two inputs to the unit. The tonic input corresponds to P(X), and the phasic input corresponds to P(X, Y).
  • the "Tonic" input I_T causes A (element 100) to spike, and A (element 100) causes B (element 102) to spike, resulting in an increase to w (element 1304).
  • delay T1 (element 1308) will cause B (element 102) to spike before A (element 100), causing w (element 1304) to decrease.
  • A will not cause B (element 102) to spike after T1 (element 1308) due to B's (element 102) refractory period, which is the period after stimulation during which a neuron is unresponsive.
  • the refractory period of a neuron is a period just after it spikes during which it cannot spike again, even if it receives input spikes from neighboring neurons. This is a known biological mechanism leveraged by the network according to embodiments of the present disclosure. This results in maintenance of a balance, dependent on C (element 1306) (i.e., if C (element 1306) spikes every time B (element 102) does).
  • PCU+ is able to compute the conditional probability of two random variables, as shown in FIG. 7, where the unbolded line (element 700) represents the calculated results, and the bold line (element 702) represents the ground truth.
  • the synaptic weight ( w ) between A and B converges to the final value in about 30 seconds of neural computation.
  • the time interval between each input data point is 40 ms, and the spiking probability for the inputs is set to 0.7. This means that the PCU+ converged after observing around 500 input spikes; this is a fairly efficient method to estimate the conditional probability.
  • the results show that w is able to track the fluctuation in P(Y|X) as the system progresses in time.
  • the weight w is recorded in a variable in a software program or a register in neuromorphic hardware.
  • Most spiking neural network simulators allow recordings of internal variables such as the synaptic weights.
  • a register read operation is performed to record the final weight value to a recording media.
  • FIG. 8 shows the results from a suite of simulations with the PCU+.
  • the probabilities P(X) and P(Y|X) are varied to validate the accuracy of the neural network across different input conditions.
  • the final synaptic weight w, which encodes P(Y|X), was plotted after each simulation. It was observed that the final weights (represented by filled circles (element 800)) align with the ground truth values (i.e., the dotted line (element 802)). Note that the PCU+ circuit is able to accurately compute the conditional probability over the entire range ([0,1]) of possible values.
  • the PCU+ with Subtractor Circuit was applied to solve structure learning problems.
  • in structure learning, the goal is to identify the causal relationships in a Bayesian network. More precisely, in experimental studies, the goal was to find the dependencies between ten different random variables, in which the current value of a variable affects the future values of other variables (i.e., Granger causality; see Literature Reference No. 4).
  • to test Granger causality, the following pre-processing techniques were performed on the data before it was fed to the PCU+.
  • the data for each random variable is a stream of 0s and 1s, recording the occurrence of errors as a time series.
  • for a pair of data streams (e.g., X and Y), the data for Y is first shifted earlier by one time step. Effectively, P(Y_t+1|X_t) is being calculated instead of P(Y_t|X_t). This is necessary because the cause and the effect happen sequentially in time; by shifting Y forward in the training dataset, whether the current X (i.e., X_t) has an effect on the future Y (i.e., Y_t+1) was tested.
  • the dependencies were identified by computing P(Y_t+1|X_t) between all pairs of random variables in the system and calculating the deviation of the conditional probabilities, P(Y_t+1|X_t), from their intrinsic values, P(Y_t+1).
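  • The shift-and-compare pre-processing can be sketched as follows (the direct frequency-count estimator is an illustrative stand-in for the spiking computation performed by the PCU+):

      def granger_deviation(x, y):
          """|P(Y_t+1) - P(Y_t+1 | X_t)| for equal-length binary streams x, y.

          Shifts Y earlier by one time step, then estimates the intrinsic and
          conditional probabilities by counting; the PCU+ encodes this same
          deviation in the firing rate of neuron F.
          """
          x_t, y_next = x[:-1], y[1:]  # align X_t with Y_t+1
          p_y = sum(y_next) / len(y_next)
          paired = [yi for xi, yi in zip(x_t, y_next) if xi == 1]
          if not paired:               # X never fired: no evidence of causation
              return 0.0
          p_y_given_x = sum(paired) / len(paired)
          return abs(p_y - p_y_given_x)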
  • this deviation is encoded in the firing rate of the neuron F in each PCU+.
  • a threshold is defined above which the link is flagged as significant, and a conclusion is made that X causes Y to happen.
  • the table in FIG. 9 summarizes the firing rates of neuron F (element 600) from each PCU+, which encode this deviation.
  • the results from the table in FIG. 9 are displayed visually by drawing an arrow between two random variables which have a potential causal relationship, as illustrated in FIG. 10A.
  • the neural network described herein can identify these dependencies with 100% accuracy, as determined by comparison with the ground truth dependencies shown in FIG. 10B.
  • Bayesian inference is ubiquitous in data science and decision theory.
  • the invention described herein can be used to reduce the operational cost for aircraft and vehicles through preventive maintenance and diagnostics.
  • Bayesian decision theory is a statistical approach to the problem of pattern classification.
  • Pattern classification, an example of a Bayesian inference task, has several applications, including object detection and object classification.
  • an application of the invention described herein is estimating conditional probabilities between fault messages of a ground vehicle or aircraft for fault prognostic models.
  • one or more processors of the system described herein can control one or more motor vehicle components (electrical, non-electrical, mechanical), such as a brake, a steering mechanism, suspension, or safety device (e.g., airbags, seatbelt tensioners, etc.).
  • the vehicle could be an unmanned aerial vehicle (UAV), an autonomous self-driving ground vehicle, or a human operated vehicle controlled either by a driver or by a remote operator.
  • the system can cause the autonomous vehicle to perform a driving operation/maneuver (such as steering or another command) in line with driving parameters in accordance with the recognized object.
  • the system described herein can cause a vehicle maneuver/operation to be performed to avoid a collision with the bicyclist or vehicle (or any other object that should be avoided while driving).
  • the system can cause the autonomous vehicle to apply a functional movement response, such as a braking operation followed by a steering operation, to redirect the vehicle away from the object, thereby avoiding a collision.
  • Other appropriate responses may include one or more of a steering operation, a throttle operation to increase speed or to decrease speed, or a decision to maintain course and speed without change.
  • the responses may be appropriate for avoiding a collision, improving travel speed, or improving efficiency.
  • control of other device types is also possible.
  • the system according to embodiments of the present disclosure also provides an additional functionality of measuring the deviation from the normal probability, which can be used to indicate potential causality in a Bayesian network. With the dynamic threshold, the new network fixes a key problem in the prior art, namely that the computation becomes inaccurate when the conditional probability exceeds a threshold.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Neurology (AREA)
  • Feedback Control In General (AREA)
EP19782857.7A 2019-01-09 2019-09-20 Spiking neural network for probabilistic computation Pending EP3908982A1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962790296P 2019-01-09 2019-01-09
PCT/US2019/052275 WO2020146016A1 (en) 2019-01-09 2019-09-20 A spiking neural network for probabilistic computation

Publications (1)

Publication Number Publication Date
EP3908982A1 (de) 2021-11-17

Family ID: 68136569

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19782857.7A Pending EP3908982A1 (de) Spiking neural network for probabilistic computation

Country Status (3)

Country Link
EP (1) EP3908982A1 (de)
CN (1) CN113196301A (de)
WO (1) WO2020146016A1 (de)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115545190B (zh) * 2022-12-01 2023-02-03 Sichuan University of Science and Engineering A spiking neural network based on probabilistic computation and an implementation method thereof

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2467401A1 (en) * 2001-11-16 2003-05-30 Yuan Yan Chen Pausible neural network with supervised and unsupervised cluster analysis
US8370241B1 (en) * 2004-11-22 2013-02-05 Morgan Stanley Systems and methods for analyzing financial models with probabilistic networks
JP2008293199A (ja) * 2007-05-23 2008-12-04 Toshiba Corp Bayesian network information processing apparatus and Bayesian network information processing program
CN107092959B (zh) * 2017-04-07 2020-04-10 Wuhan University Method for constructing a spiking neural network model based on the STDP unsupervised learning algorithm
US11449735B2 (en) * 2018-04-17 2022-09-20 Hrl Laboratories, Llc Spiking neural network for probabilistic computation

Also Published As

Publication number Publication date
CN113196301A (zh) 2021-07-30
WO2020146016A1 (en) 2020-07-16

Similar Documents

Publication Publication Date Title
Almeida Multilayer perceptrons
Werfel et al. Learning curves for stochastic gradient descent in linear feedforward networks
Kosmatopoulos et al. Dynamical neural networks that ensure exponential identification error convergence
US11449735B2 (en) Spiking neural network for probabilistic computation
EP3136304A1 (de) Verfahren und systeme zur durchführung von verstärkungslernen in hierarchischen und zeitlich erweiterten umgebungen
US11347221B2 (en) Artificial neural networks having competitive reward modulated spike time dependent plasticity and methods of training the same
Jung et al. Pattern classification of back-propagation algorithm using exclusive connecting network
EP3908982A1 (de) Gepulstes neuronales netzwerk zur probabilistischen berechnung
US10748063B2 (en) Neuronal network topology for computing conditional probabilities
CN112418421A (zh) 一种基于图注意力自编码模型的路侧端行人轨迹预测算法
Maravall et al. Fusion of learning automata theory and granular inference systems: ANLAGIS. Applications to pattern recognition and machine learning
US11443171B2 (en) Pulse generation for updating crossbar arrays
Zabiri et al. NN-based algorithm for control valve stiction quantification
Hirasawa et al. Improvement of generalization ability for identifying dynamical systems by using universal learning networks
Várkonyi et al. Improved neural network control of inverted pendulums
CN111582461A (zh) 神经网络训练方法、装置、终端设备和可读存储介质
PANDYA et al. A stochastic parallel algorithm for supervised learning in neural networks
Abdulkarim et al. Evaluating Feedforward and Elman Recurrent Neural Network Performances in Time Series Forecasting
Stepp et al. A dynamical systems approach to neuromorphic computation of conditional probabilities
Simani Mathematical Modeling and Fault Description
Munavalli et al. Pattern recognition for data retrieval using artificial neural network
de Moraes Assessment of EFuNN accuracy for pattern recognition using data with different statistical distributions
Serpen Search for a Lyapunov function through empirical approximation by artificial neural nets: Theoretical framework
Reis et al. Modified fuzzy-CMAC networks with clustering-based structure
Gouko et al. An action generation model by using time series prediction and its application to robot navigation

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20210420

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230525

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20230724