CN113168554A - Neural network compression method and apparatus - Google Patents
Neural network compression method and apparatus
- Publication number
- CN113168554A (application CN201880099983.5A)
- Authority
- CN
- China
- Prior art keywords
- zero
- weights
- group
- weight
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Images
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Image Analysis (AREA)
Abstract
A neural network compression method and apparatus, intended to solve the problem that existing techniques cannot adapt well to the capability of the processing device and therefore fail to achieve a good processing effect. The method includes: determining a sparsification unit length according to processing-capability information of the processing device; when performing the current round of training on the neural network model, adjusting the j-th group of weights obtained from the previous round of training according to the j-th group of weights referenced in the previous round, to obtain the j-th group of weights referenced in the current round; and performing the current round of training on the neural network model according to the groups of weights referenced in the current round. Here, the sparsification unit length is the data length that the processing device consumes in a single operation when performing matrix operations; the number of weights in the j-th group equals the sparsification unit length; and j ranges over the positive integers from 1 to m, where m is the total number of weight groups obtained by partitioning all weights of the neural network model by the sparsification unit length.
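The core idea in the abstract — partitioning the weights into groups whose length matches the device's per-operation data length, then sparsifying whole groups — can be illustrated with a short sketch. This is not code from the patent: the function name, the norm-based choice of which groups to zero, and the fixed sparsity ratio are all illustrative assumptions.

```python
import numpy as np

def group_sparsify(weights, unit_len, sparsity=0.5):
    """Zero out whole groups of weights.

    The flat weight vector is split into groups of `unit_len` (the data
    length the processing device consumes per matrix operation), and the
    groups with the smallest L2 norms are set to zero, so that the
    surviving non-zero weights stay aligned to the hardware's vector width.
    """
    flat = weights.flatten()
    pad = (-len(flat)) % unit_len          # pad so the length divides evenly
    padded = np.concatenate([flat, np.zeros(pad)])
    groups = padded.reshape(-1, unit_len)  # m groups of unit_len weights each

    norms = np.linalg.norm(groups, axis=1)
    k = int(len(groups) * sparsity)        # number of groups to zero out
    if k > 0:
        zero_idx = np.argsort(norms)[:k]   # smallest-norm groups
        groups[zero_idx] = 0.0

    # drop the padding and restore the original tensor shape
    return groups.reshape(-1)[:len(flat)].reshape(weights.shape)
```

Because the zeros arrive in runs aligned to the unit length, the device can skip entire vector operations rather than testing individual weights, which is the adaptation to processing capability that the abstract describes.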
Description
PCT national-phase application; the description has been published.
Claims (27)
- PCT national-phase application; the claims have been published.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2018/125812 WO2020133492A1 (zh) | 2018-12-29 | 2018-12-29 | Neural network compression method and apparatus |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113168554A true CN113168554A (zh) | 2021-07-23 |
CN113168554B CN113168554B (zh) | 2023-11-28 |
Family
ID=71127997
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201880099983.5A Active CN113168554B (zh) | 2018-12-29 | 2018-12-29 | Neural network compression method and apparatus |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113168554B (zh) |
WO (1) | WO2020133492A1 (zh) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113112014A (zh) * | 2021-03-04 | 2021-07-13 | 联想(北京)有限公司 | Data processing method, device, and storage medium |
CN114580630B (zh) * | 2022-03-01 | 2024-05-31 | 厦门大学 | Neural network model training method and graph classification method for AI chip design |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130138589A1 (en) * | 2011-11-28 | 2013-05-30 | Microsoft Corporation | Exploiting sparseness in training deep neural networks |
CN107239825A (zh) * | 2016-08-22 | 2017-10-10 | 北京深鉴智能科技有限公司 | Deep neural network compression method considering load balancing |
CN107909147A (zh) * | 2017-11-16 | 2018-04-13 | 深圳市华尊科技股份有限公司 | Data processing method and apparatus |
WO2018107414A1 (zh) * | 2016-12-15 | 2018-06-21 | 上海寒武纪信息科技有限公司 | Apparatus, device and method for compressing/decompressing a neural network model |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107229967B (zh) * | 2016-08-22 | 2021-06-15 | 赛灵思公司 | Hardware accelerator and method for implementing a sparsified GRU neural network on an FPGA |
CN107239824A (zh) * | 2016-12-05 | 2017-10-10 | 北京深鉴智能科技有限公司 | Apparatus and method for implementing a sparse convolutional neural network accelerator |
CN107688850B (zh) * | 2017-08-08 | 2021-04-13 | 赛灵思公司 | Deep neural network compression method |
- 2018
- 2018-12-29 WO PCT/CN2018/125812 patent/WO2020133492A1/zh active Application Filing
- 2018-12-29 CN CN201880099983.5A patent/CN113168554B/zh active Active
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114418086A (zh) * | 2021-12-02 | 2022-04-29 | 北京百度网讯科技有限公司 | Method and apparatus for compressing a neural network model |
CN114418086B (zh) * | 2021-12-02 | 2023-02-28 | 北京百度网讯科技有限公司 | Method and apparatus for compressing a neural network model |
US11861498B2 (en) | 2021-12-02 | 2024-01-02 | Beijing Baidu Netcom Science Technology Co., Ltd. | Method and apparatus for compressing neural network model |
CN116383666A (zh) * | 2023-05-23 | 2023-07-04 | 重庆大学 | Electric power data prediction method, apparatus and electronic device |
CN116383666B (zh) * | 2023-05-23 | 2024-04-19 | 重庆大学 | Electric power data prediction method, apparatus and electronic device |
Also Published As
Publication number | Publication date |
---|---|
WO2020133492A1 (zh) | 2020-07-02 |
CN113168554B (zh) | 2023-11-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113168554B (zh) | Neural network compression method and apparatus | |
US11544573B2 (en) | Projection neural networks | |
US20180300653A1 (en) | Distributed Machine Learning System | |
CN112740236A (zh) | Exploiting activation sparsity in deep neural networks | |
WO2019018375A1 (en) | Neural architecture search for convolutional neural networks | |
CN107122490B (zh) | Data processing method and system for aggregate functions in grouped queries | |
US20210312295A1 (en) | Information processing method, information processing device, and information processing program | |
US20220083843A1 (en) | System and method for balancing sparsity in weights for accelerating deep neural networks | |
CN111788583A (zh) | Continuous sparsity pattern neural networks | |
US20220261623A1 (en) | System and method for channel-separable operations in deep neural networks | |
CN110399487A (zh) | Text classification method, apparatus, electronic device and storage medium | |
Astrid et al. | Deep compression of convolutional neural networks with low‐rank approximation | |
CN114511042A (zh) | Model training method, apparatus, storage medium and electronic apparatus | |
CN101833691A (zh) | FPGA-based serial-structure implementation method for least-squares support vector machines | |
CN110874626A (zh) | Quantization method and apparatus | |
CN111610977B (zh) | Compilation method and related apparatus | |
EP4354349A1 (en) | Halo transfer for convolution workload partition | |
US20230140173A1 (en) | Deep neural network (DNN) accelerators with heterogeneous tiling | |
CN116629342A (zh) | Model bypass tuning method and apparatus | |
EP4052188B1 (en) | Neural network instruction streaming | |
CN113168565A (zh) | Neural network compression method and apparatus | |
US20230014656A1 (en) | Power efficient register files for deep neural network (dnn) accelerator | |
US20230020929A1 (en) | Write combine buffer (wcb) for deep neural network (dnn) accelerator | |
US20230017662A1 (en) | Deep neural network (dnn) accelerators with weight layout rearrangement | |
US20230018857A1 (en) | Sparsity processing on unpacked data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||