GB201718358D0 - Exploiting sparsity in a neural network - Google Patents

Exploiting sparsity in a neural network
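No abstract is reproduced on this index page, but the title, the extracted keywords, and the G06N 3/063 classification (hardware implementation of neural networks using electronic means) point to hardware that avoids doing arithmetic on zero-valued weights or activations. As a rough software analogue of that general idea, and not the patent's claimed hardware or method, a zero-skipping multiply-accumulate might look like the following hypothetical Python sketch; every name in it is invented for illustration.

```python
# Hypothetical illustration only: the patent claims dedicated hardware,
# while this sketch shows the general software analogue of exploiting
# sparsity, i.e. skipping multiply-accumulate work whenever either
# operand in a weight/activation pair is zero.

def sparse_dot(weights, activations):
    """Dot product that performs a multiply only for non-zero operand pairs."""
    acc = 0.0
    skipped = 0
    for w, a in zip(weights, activations):
        if w == 0.0 or a == 0.0:
            skipped += 1   # no multiplier work needed for this pair
            continue
        acc += w * a
    return acc, skipped

# With zeros in 3 of the 5 weight/activation pairs, 3 multiplies are avoided.
result, skipped = sparse_dot([0.0, 0.5, 0.0, 0.0, 1.5],
                             [1.0, 2.0, 3.0, 0.0, 4.0])
print(result, skipped)  # 7.0 3
```

In hardware of the kind the family titles suggest (weight buffers, convolution engines, control logic determining convolution operation sequence), the equivalent saving would presumably come from gating multipliers or compressing zero entries out of the datapath; the sketch above only mirrors the arithmetic effect.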

Info

Publication number
GB201718358D0
Authority
GB
United Kingdom
Prior art keywords
neural network
exploiting sparsity
sparsity
exploiting
neural
Prior art date
Legal status
Granted
Application number
GBGB1718358.3A
Other versions
GB2568102A (en)
GB2568102B (en)
Current Assignee
Imagination Technologies Ltd
Original Assignee
Imagination Technologies Ltd
Priority date
Filing date
Publication date
Application filed by Imagination Technologies Ltd
Priority to GB1718358.3A (GB2568102B)
Publication of GB201718358D0
Priority to CN201811315075.3A (CN110020716A)
Priority to CN201811314022.XA (CN110059811A)
Priority to EP18204733.2A (EP3480746A1)
Priority to US16/182,471 (US11182668B2)
Priority to GB1818103.2A (GB2570186B)
Priority to US16/182,369 (US11551065B2)
Priority to EP18204741.5A (EP3480749B1)
Priority to GB1818109.9A (GB2570187B)
Priority to US16/182,426 (US11610099B2)
Priority to CN201811314394.2A (CN110033080A)
Priority to US16/181,559 (US11574171B2)
Priority to CN201811311595.7A (CN110059798B)
Priority to EP18204740.7A (EP3480748A1)
Priority to EP18204739.9A (EP3480747A1)
Publication of GB2568102A
Application granted
Publication of GB2568102B
Priority to US17/511,363 (US11803738B2)
Priority to US18/093,768 (US11907830B2)
Priority to US18/104,749 (US20230186062A1)
Priority to US18/119,590 (US20230214631A1)
Legal status: Active

Classifications

    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N 3/00: Computing arrangements based on biological models
            • G06N 3/02: Neural networks
              • G06N 3/04: Architecture, e.g. interconnection topology
                • G06N 3/045: Combinations of networks
                • G06N 3/048: Activation functions
              • G06N 3/06: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
                • G06N 3/063: Physical realisation using electronic means
              • G06N 3/08: Learning methods
                • G06N 3/082: Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Neurology (AREA)
  • Complex Calculations (AREA)

Priority Applications (19)

Application Number Publication Number Priority Date Filing Date Title
GB1718358.3A GB2568102B (en) 2017-11-06 2017-11-06 Exploiting sparsity in a neural network
CN201811314394.2A CN110033080A (en) 2017-11-06 2018-11-06 Single plane filtering
CN201811311595.7A CN110059798B (en) 2017-11-06 2018-11-06 Exploiting sparsity in neural networks
EP18204733.2A EP3480746A1 (en) 2017-11-06 2018-11-06 Weight buffers
US16/182,471 US11182668B2 (en) 2017-11-06 2018-11-06 Neural network architecture using convolution engine filter weight buffers
GB1818103.2A GB2570186B (en) 2017-11-06 2018-11-06 Weight buffers
US16/182,369 US11551065B2 (en) 2017-11-06 2018-11-06 Neural network architecture using control logic determining convolution operation sequence
EP18204741.5A EP3480749B1 (en) 2017-11-06 2018-11-06 Exploiting sparsity in a neural network
GB1818109.9A GB2570187B (en) 2017-11-06 2018-11-06 Single plane filters
US16/182,426 US11610099B2 (en) 2017-11-06 2018-11-06 Neural network architecture using single plane filters
CN201811315075.3A CN110020716A (en) 2017-11-06 2018-11-06 Neural network hardware
US16/181,559 US11574171B2 (en) 2017-11-06 2018-11-06 Neural network architecture using convolution engines
CN201811314022.XA CN110059811A (en) 2017-11-06 2018-11-06 Weight buffer
EP18204740.7A EP3480748A1 (en) 2017-11-06 2018-11-06 Neural network hardware
EP18204739.9A EP3480747A1 (en) 2017-11-06 2018-11-06 Single plane filters
US17/511,363 US11803738B2 (en) 2017-11-06 2021-10-26 Neural network architecture using convolution engine filter weight buffers
US18/093,768 US11907830B2 (en) 2017-11-06 2023-01-05 Neural network architecture using control logic determining convolution operation sequence
US18/104,749 US20230186062A1 (en) 2017-11-06 2023-02-01 Neural Network Architecture Using Convolution Engines
US18/119,590 US20230214631A1 (en) 2017-11-06 2023-03-09 Neural Network Architecture Using Single Plane Filters

Applications Claiming Priority (1)

Application Number Publication Number Priority Date Filing Date Title
GB1718358.3A GB2568102B (en) 2017-11-06 2017-11-06 Exploiting sparsity in a neural network

Publications (3)

Publication Number Publication Date
GB201718358D0 (en) 2017-12-20
GB2568102A GB2568102A (en) 2019-05-08
GB2568102B GB2568102B (en) 2021-04-14

Family

ID=60664897

Family Applications (1)

Application Number Status Publication Number Priority Date Filing Date Title
GB1718358.3A Active GB2568102B (en) 2017-11-06 2017-11-06 Exploiting sparsity in a neural network

Country Status (1)

Country Link
GB (1) GB2568102B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11182668B2 (en) 2017-11-06 2021-11-23 Imagination Technologies Limited Neural network architecture using convolution engine filter weight buffers
CN111144558B (en) * 2020-04-03 2020-08-18 Shenzhen Jiutian Ruixin Technology Co., Ltd. Multi-bit convolution operation module based on time-variable current integration and charge sharing
FR3117645B1 (en) * 2020-12-16 2023-08-25 Commissariat Energie Atomique Taking advantage of low data density or non-zero weights in a weighted sum calculator

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11055063B2 (en) * 2016-05-02 2021-07-06 Marvell Asia Pte, Ltd. Systems and methods for deep learning processor

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111047031A (en) * 2018-10-12 2020-04-21 Western Digital Technologies, Inc. Shift architecture for data reuse in neural networks
CN111047031B (en) * 2018-10-12 2024-02-27 Western Digital Technologies, Inc. Shift architecture for data reuse in neural networks
CN112970036A (en) * 2018-11-06 2021-06-15 Genesys Logic, Inc. Convolution block array for implementing neural network applications, method of using the same, and convolution block circuit
CN112970036B (en) * 2018-11-06 2024-02-23 Genesys Logic, Inc. Convolutional block array for implementing neural network applications and methods of use thereof
CN113892092A (en) * 2019-02-06 2022-01-04 Vastai Holding Company Method and system for convolution model hardware accelerator
CN111626405A (en) * 2020-05-15 2020-09-04 TCL China Star Optoelectronics Technology Co., Ltd. CNN acceleration method, CNN acceleration device and computer readable storage medium
CN111626405B (en) * 2020-05-15 2024-05-07 TCL China Star Optoelectronics Technology Co., Ltd. CNN acceleration method, acceleration device and computer readable storage medium
CN116261736A (en) * 2020-06-12 2023-06-13 Moffett International Co., Ltd. Method and system for dual sparse convolution processing and parallelization

Also Published As

Publication number Publication date
GB2568102A (en) 2019-05-08
GB2568102B (en) 2021-04-14

Similar Documents

Publication Publication Date Title
HK1254700A1 (en) Exploiting input data sparsity in neural network compute units
GB2568102B (en) Exploiting sparsity in a neural network
GB201917993D0 (en) Neural network classification
GB202006969D0 (en) Facilitating neural network efficiency
GB202008794D0 (en) Cost function deformation in quantum approximate optimization
IL261245A (en) Structure learning in convolutional neural networks
GB2582519B (en) Convolutional neural network hardware
GB201803806D0 (en) Transposing neural network matrices in hardware
GB201611857D0 (en) An artificial neural network
GB2564596B (en) Quantum processor and its use for implementing a neural network
GB201703330D0 (en) Training a computational neural network
GB201718756D0 (en) Neural interface
ZA201902904B (en) Enabling multiple numerologies in a network
IL262329B (en) Artificial neuron
IL231862A0 (en) Neural network image representation
IL270192B2 (en) Octree-based convolutional neural network
EP3704608A4 (en) Using neural networks in creating apparel designs
IL259470A (en) Neural cell extracellular vesicles
GB201717309D0 (en) Generating randomness in neural networks
GB2537377B (en) Security improvements in a cellular network
PL3616467T3 (en) Network manager in a NR network
IL261888A (en) Rule enforcement in a network
GB201809777D0 (en) Access mode configuration in a network
ZA202001245B (en) Improved zoning configuration in a mesh network
GB201721574D0 (en) Improvements in hair-cutting devices