WO2016010922A1 - Decomposing convolution operation in neural networks - Google Patents

Decomposing convolution operation in neural networks

Info

Publication number
WO2016010922A1
Authority
WO
WIPO (PCT)
Prior art keywords
filter
neuron
complexity
neural network
separable filters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2015/040206
Other languages
English (en)
French (fr)
Inventor
Venkata Sreekanta Reddy Annapureddy
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qualcomm Inc
Original Assignee
Qualcomm Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qualcomm Inc filed Critical Qualcomm Inc
Priority to CN201580038152.3A (published as CN106537421A)
Priority to EP15742477.1A (published as EP3170126A1)
Publication of WO2016010922A1
Legal status: Ceased


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N3/0495 Quantised networks; Sparse networks; Compressed networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06N3/09 Supervised learning

Definitions

  • a method of operating a neural network includes determining a complexity of separable filters approximating a filter in the neural network, and selectively applying a decomposed convolution to the filter based on the determined complexity.
  • a computer program product for operating a neural network.
  • the computer program product includes a non-transitory computer readable medium having encoded thereon program code.
  • the program code includes program code to determine a complexity of separable filters approximating a filter in the neural network.
  • the program code further includes program code to selectively apply a decomposed convolution to the filter based on the determined complexity.
  • computational network (neural system or neural network)
  • FIGURE 4 illustrates an example of a positive regime and a negative regime for defining behavior of a neuron model in accordance with certain aspects of the present disclosure.
  • FIGURE 5 illustrates an example implementation of designing a neural network using a general-purpose processor in accordance with certain aspects of the present disclosure.
  • FIGURE 7 illustrates an example implementation of designing a neural network based on distributed memories and distributed processing units in accordance with certain aspects of the present disclosure.
  • FIGURE 8 illustrates an example implementation of a neural network in accordance with certain aspects of the present disclosure.
  • In biological neurons, the output spike generated when a neuron fires is referred to as an action potential.
  • This electrical signal is a relatively rapid, transient, nerve impulse, having an amplitude of roughly 100 mV and a duration of about 1 ms.
  • every action potential has basically the same amplitude and duration, and thus, the information in the signal may be represented only by the frequency and number of spikes, or the time of spikes, rather than by the amplitude.
  • the information carried by an action potential may be determined by the spike, the neuron that spiked, and the time of the spike relative to one or more other spikes. The importance of the spike may be determined by a weight applied to a connection between neurons, as explained below.
  • the transfer of spikes from one level of neurons to another may be achieved through the network of synaptic connections (or simply "synapses") 104, as illustrated in FIGURE 1.
  • neurons of level 102 may be considered presynaptic neurons and neurons of level 106 may be considered postsynaptic neurons.
  • the synapses 104 may receive output signals (i.e., spikes) from the level 102 neurons and scale those signals according to adjustable synaptic weights w_1^(i,i+1), ..., w_P^(i,i+1), where P is a total number of synaptic connections between the neurons of levels 102 and 106 and i is an indicator of the neuron level.
  • i represents neuron level 102 and i+1 represents neuron level 106.
  • Biological synapses can mediate either excitatory or inhibitory (hyperpolarizing) actions in postsynaptic neurons and can also serve to amplify neuronal signals.
  • Excitatory signals depolarize the membrane potential (i.e., increase the membrane potential with respect to the resting potential). If enough excitatory signals are received within a certain time period to depolarize the membrane potential above a threshold, an action potential occurs in the postsynaptic neuron. In contrast, inhibitory signals generally hyperpolarize (i.e., lower) the membrane potential. Inhibitory signals, if strong enough, can counteract the sum of excitatory signals and prevent the membrane potential from reaching a threshold.
  • the neural system 100 may be emulated by a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components, a software module executed by a processor, or any combination thereof.
  • the neural system 100 may be utilized in a large range of applications, such as image and pattern recognition, machine learning, motor control, and the like.
  • Each neuron in the neural system 100 may be implemented as a neuron circuit.
  • the neuron membrane charged to the threshold value initiating the output spike may be implemented, for example, as a capacitor that integrates an electrical current flowing through it.
  • the neuron 202 may combine the scaled input signals and use the combined scaled inputs to generate an output signal 208 (i.e., a signal Y).
  • the output signal 208 may be a current, a conductance, a voltage, a real-valued signal, and/or a complex-valued signal.
  • the output signal may be a numerical value with a fixed-point or a floating-point representation.
  • the output signal 208 may then be transferred as an input signal to other neurons of the same neural system, as an input signal to the same neuron 202, or as an output of the neural system.
  • the weights may settle or converge to one of two values (i.e., a bimodal distribution of weights). This effect can be utilized to reduce the number of bits for each synaptic weight, increase the speed of reading from and writing to a memory storing the synaptic weights, and reduce power and/or processor consumption of the synaptic memory.
  • STDP: spike-timing-dependent plasticity
  • BCM: the Bienenstock-Cooper-Munro rule
  • STDP is a learning process that adjusts the strength of synaptic connections between neurons. The connection strengths are adjusted based on the relative timing of a particular neuron's output and received input spikes (i.e., action potentials).
  • LTP: long-term potentiation
  • LTD: long-term depression
  • FIGURE 3 illustrates an exemplary diagram 300 of a synaptic weight change as a function of relative timing of presynaptic and postsynaptic spikes in accordance with the STDP.
  • a negative offset μ may be applied to the LTP (causal) portion 302 of the STDP graph (see the STDP sketch following this list).
  • the offset value μ can be computed to reflect the frame boundary.
  • a first input spike (pulse) in the frame may be considered to decay over time either as modeled by a postsynaptic potential directly or in terms of the effect on neural state.
  • the computational complexity of convolving with an N×N filter may be on the order of N² operations per output, whereas each separable (rank-1) filter requires only on the order of N operations (see the decomposition sketch following this list).
  • FIGURE 5 illustrates an example implementation 500 of the aforementioned decomposition using a general-purpose processor 502 in accordance with certain aspects of the present disclosure.
  • variables (neural signals) and synaptic weights may be stored in a memory block 504, while instructions executed at the general-purpose processor 502 may be loaded from a program memory 506.
  • the instructions loaded into the general-purpose processor 502 may comprise code for determining a number of separable filters to express a filter in the neural network and/or selectively applying a decomposed convolution to the filter.
  • FIGURE 7 illustrates an example implementation 700 of the aforementioned decomposition.
  • one memory bank 702 may be directly interfaced with one processing unit 704 of a computational network (neural network).
  • Each memory bank 702 may store variables (neural signals), synaptic weights, and/or system parameters associated with a corresponding processing unit (neural processor) 704, as well as delays, frequency bin information, regularization information, and/or system metrics.
  • the processing unit 704 may be configured to determine a number of separable filters to express a filter in the neural network and/or selectively apply a decomposed convolution to the filter.
  • the process may also selectively apply a decomposed convolution to the filter, for example based on the complexity.
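
The decomposition the excerpts above describe can be made concrete. Below is a minimal NumPy/SciPy sketch of one standard SVD-based way to obtain separable filters, consistent with the low-rank expansions cited in the search report (Denton et al., Jaderberg et al.): compute the SVD of the 2D filter, keep the dominant rank-1 terms, and apply each term as a 1D column pass followed by a 1D row pass. The function names (decompose_filter, separable_convolve) and the energy threshold are illustrative assumptions, not taken from the patent.

```python
import numpy as np
from scipy.signal import convolve2d

def decompose_filter(W, energy=0.99):
    """Approximate a 2D filter W as a sum of rank-1 (separable) terms via SVD,
    W ~= sum_r outer(col_r, row_r), keeping just enough terms to capture
    `energy` of the squared singular-value mass."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    rank = int(np.searchsorted(cum, energy)) + 1
    return [(U[:, r] * s[r], Vt[r]) for r in range(rank)]

def separable_convolve(image, pairs):
    """Apply each rank-1 term as a column pass then a row pass.
    For an NxN filter of rank r this costs O(2rN) per output instead of O(N^2)."""
    out = 0.0
    for col, row in pairs:
        out = out + convolve2d(convolve2d(image, col[:, None]), row[None, :])
    return out

# Demo: a Gaussian blur filter is exactly rank-1, so a single separable pair
# suffices and the decomposed result matches the full 2D convolution.
g = np.exp(-0.5 * (np.arange(-3, 4) / 1.5) ** 2)
W = np.outer(g, g)                       # 7x7 separable filter
image = np.random.rand(32, 32)
pairs = decompose_filter(W)
assert len(pairs) == 1
np.testing.assert_allclose(separable_convolve(image, pairs),
                           convolve2d(image, W), atol=1e-10)
```

Under this sketch, the decomposed convolution pays off only when the retained rank r satisfies 2rN < N², which is the kind of complexity test that "selectively applying" the decomposition suggests.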
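
A second sketch, for the STDP behavior described around FIGURE 3: the causal (LTP) and anti-causal (LTD) branches are commonly modeled as decaying exponentials of the spike-time difference, with the negative offset μ applied to the causal branch. All parameter names and default values below are illustrative assumptions under that standard model, not values from the patent.

```python
import numpy as np

def stdp_weight_change(dt, a_plus=0.1, a_minus=0.12,
                       tau_plus=20.0, tau_minus=20.0, mu=0.0):
    """Synaptic weight change as a function of dt = t_post - t_pre (ms).

    Causal pairs (dt >= 0) give LTP decaying with tau_plus; anti-causal
    pairs (dt < 0) give LTD decaying with tau_minus. `mu` is the negative
    offset applied to the LTP (causal) portion, as in the excerpts above.
    """
    dt = np.asarray(dt, dtype=float)
    ltp = a_plus * np.exp(-dt / tau_plus) + mu    # causal branch
    ltd = -a_minus * np.exp(dt / tau_minus)       # anti-causal branch
    return np.where(dt >= 0, ltp, ltd)

# Example: causal pairings strengthen the synapse, anti-causal ones weaken it.
print(stdp_weight_change([-40.0, -5.0, 5.0, 40.0]))
```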

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Neurology (AREA)
  • Image Analysis (AREA)
  • Electrically Operated Instructional Devices (AREA)
PCT/US2015/040206 2014-07-16 2015-07-13 Decomposing convolution operation in neural networks Ceased WO2016010922A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201580038152.3A CN106537421A (zh) 2014-07-16 2015-07-13 Decomposing convolution operation in neural networks
EP15742477.1A EP3170126A1 (en) 2014-07-16 2015-07-13 Decomposing convolution operation in neural networks

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201462025406P 2014-07-16 2014-07-16
US62/025,406 2014-07-16
US14/526,018 2014-10-28
US14/526,018 US10360497B2 (en) 2014-07-16 2014-10-28 Decomposing convolution operation in neural networks

Publications (1)

Publication Number Publication Date
WO2016010922A1 true WO2016010922A1 (en) 2016-01-21

Family

ID=55074837

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/US2015/040206 Ceased WO2016010922A1 (en) 2014-07-16 2015-07-13 Decomposing convolution operation in neural networks
PCT/US2015/040221 Ceased WO2016010930A1 (en) 2014-07-16 2015-07-13 Decomposing convolution operation in neural networks

Family Applications After (1)

Application Number Title Priority Date Filing Date
PCT/US2015/040221 Ceased WO2016010930A1 (en) 2014-07-16 2015-07-13 Decomposing convolution operation in neural networks

Country Status (8)

Country Link
US (2) US10402720B2 (en)
EP (2) EP3170126A1 (en)
JP (1) JP2017525038A (ja)
KR (1) KR20170031695A (ko)
CN (2) CN106663222A (zh)
AU (1) AU2015289877A1 (en)
BR (1) BR112017000229A2 (pt)
WO (2) WO2016010922A1 (en)


Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10402720B2 (en) 2014-07-16 2019-09-03 Qualcomm Incorporated Decomposing convolution operation in neural networks
US10262259B2 (en) * 2015-05-08 2019-04-16 Qualcomm Incorporated Bit width selection for fixed point neural networks
KR102565273B1 (ko) * 2016-01-26 2023-08-09 Samsung Electronics Co., Ltd. Recognition apparatus based on a neural network and learning method of a neural network
US10713562B2 (en) * 2016-06-18 2020-07-14 International Business Machines Corporation Neuromorphic memory circuit
CN106326985A (zh) * 2016-08-18 2017-01-11 Beijing Megvii Technology Co., Ltd. Neural network training method and apparatus, and data processing method and apparatus
US11238337B2 (en) * 2016-08-22 2022-02-01 Applied Brain Research Inc. Methods and systems for implementing dynamic neural networks
EP3306535B1 (en) * 2016-10-10 2019-12-04 Alcatel Lucent Runtime optimization of convolutional neural networks
KR102879261B1 (ko) * 2016-12-22 2025-10-31 Samsung Electronics Co., Ltd. Method and apparatus for processing a convolutional neural network
US11301750B2 (en) 2017-03-31 2022-04-12 Ecole Polytechnique Federale De Lausanne (Epfl) Simplification of neural models that include arborized projections
DE102017205713A1 (de) 2017-04-04 2018-10-04 Siemens Aktiengesellschaft Method and control device for controlling a technical system
US11037330B2 (en) 2017-04-08 2021-06-15 Intel Corporation Low rank matrix compression
US11093832B2 (en) 2017-10-19 2021-08-17 International Business Machines Corporation Pruning redundant neurons and kernels of deep convolutional neural networks
WO2019146189A1 (ja) * 2018-01-29 2019-08-01 NEC Corporation Rank optimization device and optimization method for neural networks
US11238346B2 (en) 2018-04-25 2022-02-01 Qualcomm Incorproated Learning a truncation rank of singular value decomposed matrices representing weight tensors in neural networks
JP7021010B2 (ja) * 2018-06-06 2022-02-16 NTT Docomo, Inc. Machine learning system
US11922314B1 (en) * 2018-11-30 2024-03-05 Ansys, Inc. Systems and methods for building dynamic reduced order physical models
CN109948787B (zh) * 2019-02-26 2021-01-08 Shandong Normal University Operation apparatus, chip, and method for a neural network convolutional layer
US11580399B2 (en) 2019-04-30 2023-02-14 Samsung Electronics Co., Ltd. System and method for convolutional layer structure for neural networks
KR102774162B1 (ko) * 2019-05-16 2025-03-04 Samsung Electronics Co., Ltd. Electronic apparatus and control method thereof
WO2020235011A1 (ja) * 2019-05-21 2020-11-26 Nippon Telegraph and Telephone Corporation Learning device, learning method, and learning program
CN112215329B (zh) * 2019-07-09 2023-09-29 Hangzhou Hikvision Digital Technology Co., Ltd. Convolution calculation method and apparatus based on a neural network
CN112784207B (zh) * 2019-11-01 2024-02-02 Cambricon Technologies Corporation Limited Operation method and related products
US11010691B1 (en) * 2020-03-16 2021-05-18 Sas Institute Inc. Distributable event prediction and machine learning recognition system
US12450472B2 (en) * 2020-06-22 2025-10-21 Qualcomm Incorporated Charge-pump-based current-mode neuron for machine learning
KR102427737B1 (ko) 2020-09-18 2022-08-01 NAVER Corporation Electronic device based on an artificial neural network with minimized representational bottleneck, and operating method thereof
US20230057387A1 (en) * 2021-07-23 2023-02-23 Cohere Inc. System and Method for Low Rank Training of Neural Networks
JP7600972B2 (ja) * 2021-12-06 2024-12-17 Denso Corporation Model generation method, model generation program, model generation device, and data processing device


Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5781700A (en) 1996-02-05 1998-07-14 Ford Global Technologies, Inc. Trained Neural network air/fuel control system
US6351740B1 (en) 1997-12-01 2002-02-26 The Board Of Trustees Of The Leland Stanford Junior University Method and system for training dynamic nonlinear adaptive filters which have embedded memory
US6269351B1 (en) * 1999-03-31 2001-07-31 Dryken Technologies, Inc. Method and system for training an artificial neural network
US6754380B1 (en) 2003-02-14 2004-06-22 The University Of Chicago Method of training massive training artificial neural networks (MTANN) for the detection of abnormalities in medical images
WO2006091636A2 (en) 2005-02-23 2006-08-31 Digital Intelligence, L.L.C. Signal decomposition and reconstruction
ITRM20050192A1 (it) 2005-04-20 2006-10-21 Consiglio Nazionale Ricerche System for the detection and classification of events during motion actions.
JP5315411B2 (ja) 2008-07-03 2013-10-16 NEC Laboratories America, Inc. Mitotic figure detection apparatus and counting system, and method for detecting and counting mitotic figures
CN101667425A (zh) 2009-09-22 2010-03-10 Shandong University A method for blind source separation of convolutively mixed speech signals
BRPI0904540B1 (pt) 2009-11-27 2021-01-26 Samsung Eletrônica Da Amazônia Ltda Method for animating virtual faces/heads/characters via voice processing
US8874432B2 (en) 2010-04-28 2014-10-28 Nec Laboratories America, Inc. Systems and methods for semi-supervised relationship extraction
US8583586B2 (en) 2011-01-21 2013-11-12 International Business Machines Corporation Mining temporal patterns in longitudinal event data using discrete event matrices and sparse coding
US9262724B2 (en) 2012-07-13 2016-02-16 International Business Machines Corporation Low-rank matrix factorization for deep belief network training with high-dimensional output targets
CN102820653B (zh) 2012-09-12 2014-07-30 Hunan University A fuzzy neural network double closed-loop control method for a comprehensive power quality controller
US20140156575A1 (en) 2012-11-30 2014-06-05 Nuance Communications, Inc. Method and Apparatus of Processing Data Using Deep Belief Networks Employing Low-Rank Matrix Factorization
CN103325382A (zh) 2013-06-07 2013-09-25 Dalian Nationalities University A method for automatically identifying audio data of traditional musical instruments of Chinese ethnic minorities
US9400955B2 (en) * 2013-12-13 2016-07-26 Amazon Technologies, Inc. Reducing dynamic range of low-rank decomposition matrices
US10402720B2 (en) 2014-07-16 2019-09-03 Qualcomm Incorporated Decomposing convolution operation in neural networks

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7945061B1 (en) * 2006-06-07 2011-05-17 Bae Systems Information And Electronic Systems Integration Inc. Scalable architecture for subspace signal tracking
US20140372112A1 (en) * 2013-06-18 2014-12-18 Microsoft Corporation Restructuring deep neural network acoustic models

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
EMILY DENTON ET AL: "Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation", 2 April 2014 (2014-04-02), pages 1 - 16, XP055229687, Retrieved from the Internet <URL:http://www.researchgate.net/profile/Joan_Bruna/publication/261368736_Exploiting_Linear_Structure_Within_Convolutional_Networks_for_Efficient_Evaluation/links/0c960535ec07f32e3a000000.pdf?inViewer=true&pdfJsDownload=true&disableCoverPage=true&origin=publication_detail> [retrieved on 20151119] *
FRANCK MAMALET ET AL: "Simplifying ConvNets for Fast Learning", 11 September 2012, ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING ICANN 2012, SPRINGER BERLIN HEIDELBERG, BERLIN, HEIDELBERG, PAGE(S) 58 - 65, ISBN: 978-3-642-33265-4, XP047017789 *
MAX JADERBERG ET AL: "Speeding up Convolutional Neural Networks with Low Rank Expansions", PROCEEDINGS OF THE BRITISH MACHINE VISION CONFERENCE 2014, 15 May 2014 (2014-05-15), pages 1 - 12, XP055229678, ISBN: 978-1-901725-52-0, DOI: 10.5244/C.28.88 *
XUE JIAN ET AL: "Singular value decomposition based low-footprint speaker adaptation and personalization for deep neural network", 2014 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), IEEE, 4 May 2014 (2014-05-04), pages 6359 - 6363, XP032617895, DOI: 10.1109/ICASSP.2014.6854828 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107248144A (zh) * 2017-04-27 2017-10-13 Southeast University An image denoising method based on a compressed convolutional neural network
CN107248144B (zh) * 2017-04-27 2019-12-10 Southeast University An image denoising method based on a compressed convolutional neural network
US11822616B2 (en) 2017-11-28 2023-11-21 Nanjing Horizon Robotics Technology Co., Ltd. Method and apparatus for performing operation of convolutional layers in convolutional neural network

Also Published As

Publication number Publication date
BR112017000229A2 (pt) 2017-10-31
EP3170127A1 (en) 2017-05-24
US20160019455A1 (en) 2016-01-21
US10360497B2 (en) 2019-07-23
KR20170031695A (ko) 2017-03-21
EP3170126A1 (en) 2017-05-24
WO2016010930A1 (en) 2016-01-21
US10402720B2 (en) 2019-09-03
AU2015289877A1 (en) 2017-01-05
JP2017525038A (ja) 2017-08-31
CN106663222A (zh) 2017-05-10
US20160019456A1 (en) 2016-01-21
CN106537421A (zh) 2017-03-22

Similar Documents

Publication Publication Date Title
US10402720B2 (en) Decomposing convolution operation in neural networks
US10339447B2 (en) Configuring sparse neuronal networks
US20150242745A1 (en) Event-based inference and learning for stochastic spiking bayesian networks
US20150269482A1 (en) Artificial neural network and perceptron learning using spiking neurons
US9305256B2 (en) Automated method for modifying neural dynamics
US9600762B2 (en) Defining dynamics of multiple neurons
EP3123404A2 (en) Differential encoding in neural networks
WO2015088774A2 (en) Neuronal diversity in spiking neural networks and pattern classification
US20150317557A1 (en) Temporal spike encoding for temporal learning
WO2015156989A2 (en) Modulating plasticity by global scalar values in a spiking neural network
US20140310216A1 (en) Method for generating compact representations of spike timing-dependent plasticity curves
WO2015153150A2 (en) Probabilistic representation of large sequences using spiking neural network
WO2015057302A2 (en) Congestion avoidance in networks of spiking neurons
US9342782B2 (en) Stochastic delay plasticity
WO2015057305A1 (en) Dynamically assigning and examining synaptic delay
WO2014197175A2 (en) Efficient implementation of neural population diversity in neural system
WO2015127124A2 (en) Imbalanced cross-inhibitory mechanism for spatial target selection

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15742477

Country of ref document: EP

Kind code of ref document: A1

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
REEP Request for entry into the european phase

Ref document number: 2015742477

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2015742477

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE