JP7196167B2 - Multi-layer neural network processing by a neural network accelerator using host communicated merged weights and a package of per-layer instructions - Google Patents
- Publication number
- JP7196167B2 (application JP2020521412A)
- Authority
- JP
- Japan
- Prior art keywords
- layer
- neural network
- instructions
- processing
- instruction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS > G06—COMPUTING OR CALCULATING; COUNTING > G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS > G06N3/00—Computing arrangements based on biological models > G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/0464—Convolutional networks [CNN, ConvNet]
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Neurology (AREA)
- Advance Control (AREA)
- Image Analysis (AREA)
- Complex Calculations (AREA)
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US15/785,800 US11620490B2 (en) | 2017-10-17 | 2017-10-17 | Multi-layer neural network processing by a neural network accelerator using host communicated merged weights and a package of per-layer instructions |
| US15/785,800 | 2017-10-17 | | |
| PCT/US2018/056112 WO2019079319A1 (en) | 2017-10-17 | 2018-10-16 | Multi-layer neural network processing by a neural network accelerator using host communicated merged weights and a package of per-layer instructions |
Publications (3)
| Publication Number | Publication Date |
|---|---|
| JP2020537785A (ja) | 2020-12-24 |
| JP2020537785A5 (ja) | 2021-11-25 |
| JP7196167B2 (ja) | 2022-12-26 |
Family
ID=64110172
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| JP2020521412A Active JP7196167B2 (ja) | Multi-layer neural network processing by a neural network accelerator using host communicated merged weights and a package of per-layer instructions | 2017-10-17 | 2018-10-16 |
Country Status (6)
| Country | Link |
|---|---|
| US (1) | US11620490B2 (en) |
| EP (1) | EP3698296B1 (en) |
| JP (1) | JP7196167B2 (ja) |
| KR (1) | KR102578508B1 (ko) |
| CN (1) | CN111226231A (zh) |
| WO (1) | WO2019079319A1 (en) |
Families Citing this family (21)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11037330B2 (en) * | 2017-04-08 | 2021-06-15 | Intel Corporation | Low rank matrix compression |
| US11386644B2 (en) * | 2017-10-17 | 2022-07-12 | Xilinx, Inc. | Image preprocessing for generalized image processing |
| US10565285B2 (en) * | 2017-12-18 | 2020-02-18 | International Business Machines Corporation | Processor and memory transparent convolutional lowering and auto zero padding for deep neural network implementations |
| US11250107B2 (en) * | 2019-07-15 | 2022-02-15 | International Business Machines Corporation | Method for interfacing with hardware accelerators |
| US11573828B2 (en) * | 2019-09-16 | 2023-02-07 | Nec Corporation | Efficient and scalable enclave protection for machine learning programs |
| US11501145B1 (en) * | 2019-09-17 | 2022-11-15 | Amazon Technologies, Inc. | Memory operation for systolic array |
| KR102463123B1 (ko) * | 2019-11-29 | 2022-11-04 | 한국전자기술연구원 | Method for efficient control, monitoring, and software debugging of a neural network accelerator |
| US20200134417A1 (en) * | 2019-12-24 | 2020-04-30 | Intel Corporation | Configurable processor element arrays for implementing convolutional neural networks |
| US11132594B2 (en) * | 2020-01-03 | 2021-09-28 | Capital One Services, Llc | Systems and methods for producing non-standard shaped cards |
| US11182159B2 (en) * | 2020-02-26 | 2021-11-23 | Google Llc | Vector reductions using shared scratchpad memory |
| CN111461315A (zh) * | 2020-03-31 | 2020-07-28 | 中科寒武纪科技股份有限公司 | Method, apparatus, board card, and computer-readable storage medium for computing a neural network |
| CN111461316A (zh) * | 2020-03-31 | 2020-07-28 | 中科寒武纪科技股份有限公司 | Method, apparatus, board card, and computer-readable storage medium for computing a neural network |
| US11783163B2 (en) * | 2020-06-15 | 2023-10-10 | Arm Limited | Hardware accelerator for IM2COL operation |
| KR102860333B1 (ko) * | 2020-06-22 | 2025-09-16 | 삼성전자주식회사 | Accelerator, method of operating the accelerator, and accelerator system including the same |
| KR102859455B1 (ko) * | 2020-08-31 | 2025-09-12 | 삼성전자주식회사 | Accelerator, method of operating the accelerator, and electronic device including the same |
| CN113485762B (zh) * | 2020-09-19 | 2024-07-26 | 广东高云半导体科技股份有限公司 | Method and apparatus for offloading computing tasks to a configurable device to improve system performance |
| CN112613605A (zh) * | 2020-12-07 | 2021-04-06 | 深兰人工智能(深圳)有限公司 | Neural network acceleration control method, apparatus, electronic device, and storage medium |
| US20220179703A1 (en) * | 2020-12-07 | 2022-06-09 | Nvidia Corporation | Application programming interface for neural network computation |
| CN112580787B (zh) * | 2020-12-25 | 2023-11-17 | 北京百度网讯科技有限公司 | Data processing method, apparatus, device, and storage medium for a neural network accelerator |
| CN113326479A (zh) * | 2021-05-28 | 2021-08-31 | 哈尔滨理工大学 | An FPGA-based implementation method for the k-means algorithm |
| CN120981816A (zh) | 2023-04-06 | 2025-11-18 | 墨子国际有限公司 | Hierarchical network-on-chip (NoC) for a neural network accelerator |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20160335120A1 (en) | 2015-05-11 | 2016-11-17 | Auviz Systems, Inc. | ACCELERATING ALGORITHMS & APPLICATIONS ON FPGAs |
| US20160342890A1 (en) | 2015-05-21 | 2016-11-24 | Google Inc. | Batch processing in a neural network processor |
| JP2016536679A (ja) | 2013-10-11 | 2016-11-24 | Qualcomm Incorporated | Shared memory architecture for a neural simulator |
Family Cites Families (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6346825B1 (en) | 2000-10-06 | 2002-02-12 | Xilinx, Inc. | Block RAM with configurable data width and parity for use in a field programmable gate array |
| WO2014204615A2 (en) * | 2013-05-22 | 2014-12-24 | Neurala, Inc. | Methods and apparatus for iterative nonspecific distributed runtime architecture and its application to cloud intelligence |
| US10417555B2 (en) * | 2015-05-29 | 2019-09-17 | Samsung Electronics Co., Ltd. | Data-optimized neural network traversal |
| KR102076257B1 (ko) * | 2015-10-28 | 2020-02-11 | 구글 엘엘씨 | Processing computational graphs |
| US9875104B2 (en) * | 2016-02-03 | 2018-01-23 | Google Llc | Accessing data in multi-dimensional tensors |
| US10891538B2 (en) * | 2016-08-11 | 2021-01-12 | Nvidia Corporation | Sparse convolutional neural network accelerator |
| CN107239823A (zh) * | 2016-08-12 | 2017-10-10 | 北京深鉴科技有限公司 | Apparatus and method for implementing a sparse neural network |
| US10802992B2 (en) * | 2016-08-12 | 2020-10-13 | Xilinx Technology Beijing Limited | Combining CPU and special accelerator for implementing an artificial neural network |
| US10489702B2 (en) * | 2016-10-14 | 2019-11-26 | Intel Corporation | Hybrid compression scheme for efficient storage of synaptic weights in hardware neuromorphic cores |
| US10175980B2 (en) * | 2016-10-27 | 2019-01-08 | Google Llc | Neural network compute tile |
| US10949736B2 (en) * | 2016-11-03 | 2021-03-16 | Intel Corporation | Flexible neural network accelerator and methods therefor |
| KR102224510B1 (ko) * | 2016-12-09 | 2021-03-05 | 베이징 호라이즌 인포메이션 테크놀로지 컴퍼니 리미티드 | Systems and methods for data management |
| GB2568776B (en) * | 2017-08-11 | 2020-10-28 | Google Llc | Neural network accelerator with parameters resident on chip |
- 2017
  - 2017-10-17 US US15/785,800 patent/US11620490B2/en active Active
- 2018
  - 2018-10-16 KR KR1020207013441A patent/KR102578508B1/ko active Active
  - 2018-10-16 WO PCT/US2018/056112 patent/WO2019079319A1/en not_active Ceased
  - 2018-10-16 CN CN201880067687.7A patent/CN111226231A/zh active Pending
  - 2018-10-16 JP JP2020521412A patent/JP7196167B2/ja active Active
  - 2018-10-16 EP EP18797373.0A patent/EP3698296B1/en active Active
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2016536679A (ja) | 2013-10-11 | 2016-11-24 | Qualcomm Incorporated | Shared memory architecture for a neural simulator |
| US20160335120A1 (en) | 2015-05-11 | 2016-11-17 | Auviz Systems, Inc. | ACCELERATING ALGORITHMS & APPLICATIONS ON FPGAs |
| US20160342890A1 (en) | 2015-05-21 | 2016-11-24 | Google Inc. | Batch processing in a neural network processor |
Non-Patent Citations (1)
| Title |
|---|
| QIU, Jiantao, et al., "Going Deeper with Embedded FPGA Platform for Convolutional Neural Network," Proceedings of the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, February 2016, pp. 26-35. [Retrieved 2022-11-07] URL: https://dl.acm.org/doi/10.1145/2847263.2847265, DOI: 10.1145/2847263.2847265 |
Also Published As
| Publication number | Publication date |
|---|---|
| KR20200069338A (ko) | 2020-06-16 |
| WO2019079319A1 (en) | 2019-04-25 |
| KR102578508B1 (ko) | 2023-09-13 |
| EP3698296B1 (en) | 2024-07-17 |
| JP2020537785A (ja) | 2020-12-24 |
| US20190114529A1 (en) | 2019-04-18 |
| EP3698296A1 (en) | 2020-08-26 |
| US11620490B2 (en) | 2023-04-04 |
| CN111226231A (zh) | 2020-06-02 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP7196167B2 (ja) | 2022-12-26 | Multi-layer neural network processing by a neural network accelerator using host communicated merged weights and a package of per-layer instructions |
| EP3698293B1 (en) | | Neural network processing system having multiple processors and a neural network accelerator |
| US11429848B2 (en) | | Host-directed multi-layer neural network processing via per-layer work requests |
| KR102697368B1 (ko) | | Image preprocessing for generalized image processing |
| US11568218B2 (en) | | Neural network processing system having host controlled kernel accelerators |
| US10515135B1 (en) | | Data format suitable for fast massively parallel general matrix multiplication in a programmable IC |
| EP3698294B1 (en) | | Machine learning runtime library for neural network acceleration |
| US10354733B1 (en) | | Software-defined memory bandwidth reduction by hierarchical stream buffering for general matrix multiplication in a programmable IC |
| US10984500B1 (en) | | Inline image preprocessing for convolution operations using a matrix multiplier on an integrated circuit |
| US11204747B1 (en) | | Re-targetable interface for data exchange between heterogeneous systems and accelerator abstraction into software instructions |
| KR20200069346A (ko) | | Static block scheduling in massively parallel software-defined hardware systems |
| KR20200037303A (ko) | | Architecture-optimized training of neural networks |
| US11036827B1 (en) | | Software-defined buffer/transposer for general matrix multiplication in a programmable IC |
| US12073317B2 (en) | | Method and system for processing a neural network |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 2020-06-05 | A529 | Written submission of copy of amendment under Article 34 PCT | Japanese intermediate code: A529 |
| 2021-10-14 | A521 | Request for written amendment filed | Japanese intermediate code: A523 |
| 2021-10-14 | A621 | Written request for application examination | Japanese intermediate code: A621 |
| 2022-10-25 | A977 | Report on retrieval | Japanese intermediate code: A971007 |
| | TRDD | Decision of grant or rejection written | |
| 2022-11-15 | A01 | Written decision to grant a patent or to grant a registration (utility model) | Japanese intermediate code: A01 |
| 2022-12-14 | A61 | First payment of annual fees (during grant procedure) | Japanese intermediate code: A61 |
| | R150 | Certificate of patent or registration of utility model | Ref document number: 7196167; Country of ref document: JP; Japanese intermediate code: R150 |