CN108764458A - Model compression method and system based on non-uniform quantization - Google Patents
Model compression method and system based on non-uniform quantization
- Publication number: CN108764458A (application CN201810460805.2A; granted as CN108764458B)
- Authority
- CN
- China
- Prior art keywords
- weights
- quantization
- cluster
- convolutional neural networks
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
Abstract
The invention discloses a model compression method and system based on non-uniform quantization. The weights of a convolutional neural network are first clustered, and the weights in each cluster are quantized to the value of the cluster's center point. After all weights are quantized, the total quantization error of each cluster is computed; when a cluster's total quantization error exceeds a preset threshold, the cluster is re-quantized by dividing its range into equal intervals, reducing the weight quantization error. Because the weights are quantized non-uniformly, both the number of distinct weights and their storage space are reduced while the overall quantization error of the network stays small, so the accuracy of the network as a whole suffers essentially no loss. Storing the quantized weights as quantization-level indices further reduces their storage space, and converting the quantized weights and the network's input data to fixed point significantly improves the computation speed and efficiency of the convolutional neural network model.
Description
Technical field
The present invention relates to the field of neural network model compression, and in particular to a model compression method and system based on non-uniform quantization.
Background
As a widely used technique in the field of artificial intelligence, convolutional neural networks have achieved increasingly remarkable breakthroughs. To reach better performance, however, convolutional neural network models have typically grown deeper and wider: the AlexNet model is 240 MB in size, and the VGG-16 model reaches 552 MB. Such models inevitably consume a large amount of storage space and computation, which is a serious limitation for mobile devices with constrained computing resources and storage. Moreover, in traditional network models both neuron inputs and network weights use floating-point data, which occupies more resources than fixed-point data and computes more slowly and less efficiently. Traditional network models therefore occupy considerable space and resources and need to be compressed while their accuracy is preserved.
Summary of the invention
The object of the present invention is to overcome the above technical deficiencies by proposing a model compression method and system based on non-uniform quantization, thereby solving the above problems in the prior art.
To achieve the above technical purpose, the technical solution of the present invention provides a model compression method based on non-uniform quantization, comprising:
S1. clustering the weights of a convolutional neural network;
S2. after clustering, the weights form multiple clusters; quantizing the weights in each cluster to the value of the cluster's center point;
S3. computing the total quantization error of each cluster after all weights are quantized; when a cluster's total quantization error exceeds a preset threshold, dividing the cluster into 2 equal intervals and quantizing the weights in each interval to the value of the interval's center point; computing the total quantization error of each interval, and if an interval's total quantization error exceeds the preset threshold, dividing that interval again into 2 equal intervals and re-quantizing, until the total quantization error of every interval is no more than the preset threshold;
S4. storing the quantized weights as quantization-level indices;
S5. converting the quantized weights and the input data of the convolutional neural network to fixed point.
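Steps S2 and S3 above admit a compact sketch (a minimal illustration under stated assumptions, not the patented implementation: the quantization error is taken as the absolute difference, the cluster center as the mean, and the interval center as the range midpoint):

```python
import numpy as np

def split_quantize(w, threshold):
    """Step S3: quantize an interval to its midpoint; if the summed error
    exceeds the threshold, split into two equal sub-intervals and recurse."""
    mid = (w.min() + w.max()) / 2.0
    err = np.abs(w - mid).sum()            # total quantization error of this interval
    if err <= threshold or w.size <= 1 or w.min() == w.max():
        return np.full_like(w, mid)
    left = w <= mid                        # equal-interval split of the range
    out = np.empty_like(w)
    out[left] = split_quantize(w[left], threshold)
    out[~left] = split_quantize(w[~left], threshold)
    return out

def quantize_cluster(w, threshold):
    """Step S2: quantize the cluster to its center; fall back to equal-interval
    splitting (step S3) when the cluster's error sum is above the threshold."""
    centre = w.mean()                      # cluster center point (assumed: centroid)
    if np.abs(w - centre).sum() <= threshold:
        return np.full_like(w, centre)
    return split_quantize(w, threshold)
```

With a loose threshold a whole cluster collapses to one value; with a tight threshold the recursion keeps halving the range until each interval's error sum falls below the threshold.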
The present invention also provides a model compression system based on non-uniform quantization, comprising:
a clustering module, for clustering the weights of a convolutional neural network;
a quantization module: after clustering, the weights form multiple clusters, and the weights in each cluster are quantized to the value of the cluster's center point;
an error reduction module, for computing the total quantization error of each cluster after all weights are quantized; when a cluster's total quantization error exceeds a preset threshold, the cluster is divided into 2 equal intervals and the weights in each interval are quantized to the value of the interval's center point; the total quantization error of each interval is computed, and if it exceeds the preset threshold the interval is again divided into 2 equal intervals and re-quantized, until the total quantization error of every interval is no more than the preset threshold;
a storage module, for storing the quantized weights as quantization-level indices;
a fixed-point module, for converting the quantized weights and the input data of the convolutional neural network to fixed point.
Compared with the prior art, the beneficial effects of the present invention include the following. The weights of the convolutional neural network are clustered into multiple clusters, the weights in each cluster are quantized to the value of the cluster's center point, and the total quantization error of each cluster is computed after all weights are quantized; when a cluster's total quantization error exceeds a preset threshold, the cluster is re-quantized by dividing its range into equal intervals, reducing the weight quantization error. Because the weights are quantized non-uniformly, the number of distinct weights and hence their storage space are reduced, large weights in the network are not excessively distorted by quantization, the overall quantization error stays small, and the accuracy of the network as a whole suffers essentially no loss. The quantized weights are stored as quantization-level indices: the quantized weights themselves are floating-point values (for example 32 bits), while a quantization-level index is the integer serial number of a weight and needs fewer bits, so index storage reduces the storage space of the weights. Converting the quantized weights and the network's input data to fixed point, replacing values that were originally floating-point, significantly improves the computation speed and efficiency of the convolutional neural network model. In summary, the technical solution of the present invention achieves a high compression ratio for convolutional neural network models, low consumption of computing resources and storage space, essentially no loss of computational accuracy, and high computation speed and efficiency.
Description of the drawings
Fig. 1 is a flow diagram of the model compression method based on non-uniform quantization provided by the present invention;
Fig. 2 is a structural diagram of the model compression system based on non-uniform quantization provided by the present invention.
In the drawings: 1, model compression system based on non-uniform quantization; 11, clustering module; 12, quantization module; 13, error reduction module; 14, storage module; 15, fixed-point module.
Specific embodiments
To make the purpose, technical solution, and advantages of the present invention clearer, the present invention is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are merely illustrative of the present invention and do not limit it.
The present invention provides a model compression method based on non-uniform quantization, comprising:
S1. clustering the weights of a convolutional neural network;
S2. after clustering, the weights form multiple clusters; quantizing the weights in each cluster to the value of the cluster's center point;
S3. computing the total quantization error of each cluster after all weights are quantized; when a cluster's total quantization error exceeds a preset threshold, dividing the cluster into 2 equal intervals and quantizing the weights in each interval to the value of the interval's center point; computing the total quantization error of each interval, and if an interval's total quantization error exceeds the preset threshold, dividing that interval again into 2 equal intervals and re-quantizing, until the total quantization error of every interval is no more than the preset threshold;
S4. storing the quantized weights as quantization-level indices, a quantization-level index being defined as the integer serial number of a weight;
S5. converting the quantized weights and the input data of the convolutional neural network to fixed point.
In the model compression method based on non-uniform quantization of the present invention, the clustering in step S1 uses the k-means clustering method.
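A minimal k-means routine over the flattened weights might look as follows (a sketch only: the patent names k-means but fixes no implementation, so `kmeans_1d`, the iteration count, and the random initialization are all illustrative choices):

```python
import numpy as np

def kmeans_1d(weights, k, iters=20, seed=0):
    """Plain Lloyd's k-means on a 1-D array of flattened weights.
    Returns the k cluster centers and each weight's cluster label."""
    rng = np.random.default_rng(seed)
    centres = rng.choice(weights, size=k, replace=False)   # init from the data
    for _ in range(iters):
        # assign each weight to its nearest center
        labels = np.argmin(np.abs(weights[:, None] - centres[None, :]), axis=1)
        # move each center to the mean of its assigned weights
        for j in range(k):
            if (labels == j).any():
                centres[j] = weights[labels == j].mean()
    return centres, labels
```

The resulting cluster centers are the candidate shared values of step S2; production code would typically use a library implementation instead.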
In the model compression method based on non-uniform quantization of the present invention, the total quantization error of a cluster in step S3 is the sum of the quantization errors of all weights in the cluster, the quantization error of a weight being the difference between the weight's unquantized value and its quantized value.
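The per-cluster error sum can be illustrated numerically (a sketch with made-up values; the patent defines the error as the difference between the unquantized and quantized values, and since signed differences around a centroid cancel to zero, the absolute difference is assumed here):

```python
import numpy as np

# Three weights in one cluster, quantized to the cluster center (the mean).
w = np.array([0.10, 0.14, 0.18])
centre = w.mean()                     # 0.14
errors = np.abs(w - centre)           # per-weight quantization error
error_sum = errors.sum()              # 0.04 + 0.00 + 0.04 = 0.08
```

In step S3 this `error_sum` would be compared against the preset threshold to decide whether the cluster needs equal-interval splitting.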
In the model compression method based on non-uniform quantization of the present invention, the method for converting the input data of the convolutional neural network to fixed point in step S5 is as follows: the fixed-point configuration of each layer of the convolutional neural network is estimated from the value range of the network's input data; an error function is then built from the estimated fixed-point configuration and optimized to determine the optimal fixed-point configuration point of the input data for each layer; and the input data of the convolutional neural network is converted to fixed point according to the optimal fixed-point configuration point.
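The per-layer fixed-point search of step S5 can be sketched as an exhaustive scan over fractional bit widths (an assumption-laden illustration: the patent does not specify the error function or the search procedure, and `best_fixed_point`, the 8-bit word width, and the sum-of-absolute-errors objective are all hypothetical choices):

```python
import numpy as np

def best_fixed_point(x, total_bits=8):
    """Pick the fractional bit count that minimizes the quantization error
    of x under a signed fixed-point format with the given word width."""
    best_frac, best_err = 0, np.inf
    for frac in range(total_bits):
        scale = 2.0 ** frac
        lo, hi = -2 ** (total_bits - 1), 2 ** (total_bits - 1) - 1
        q = np.clip(np.round(x * scale), lo, hi) / scale   # round-trip through fixed point
        err = np.abs(x - q).sum()                          # assumed error function
        if err < best_err:
            best_frac, best_err = frac, err
    return best_frac

def to_fixed_point(x, total_bits, frac):
    """Convert floats to fixed-point integers with `frac` fractional bits."""
    scale = 2.0 ** frac
    lo, hi = -2 ** (total_bits - 1), 2 ** (total_bits - 1) - 1
    return np.clip(np.round(x * scale), lo, hi).astype(np.int32)
```

In practice the value range of each layer's input data bounds the useful fractional widths, which is presumably what the "estimate from the value range" step narrows down before the error function is optimized.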
In the model compression method based on non-uniform quantization of the present invention, after the non-uniform quantization and weight sharing performed during model compression, all shared weights within each layer are represented by their quantization index numbers, and a code table records the correspondence between index numbers and weight values. During the subsequent backward computation, the gradients of weights that share the same value are grouped together, and the shared weight is updated by this aggregated gradient.
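The index-plus-code-table storage and the shared-gradient update can be sketched as follows (illustrative names `encode`, `decode`, `update_codebook`; the patent describes the scheme only at the level of the preceding paragraph, so the nearest-entry assignment and the learning-rate update are assumptions):

```python
import numpy as np

def encode(weights, codebook):
    """Step S4 sketch: replace each weight by the index of its nearest
    code-table entry; indices need far fewer bits than 32-bit floats."""
    return np.argmin(np.abs(weights[:, None] - codebook[None, :]), axis=1).astype(np.uint8)

def decode(indices, codebook):
    """Recover the shared weight values from the stored indices."""
    return codebook[indices]

def update_codebook(codebook, indices, grads, lr=0.1):
    """Backward pass with weight sharing: gradients of weights that share
    an index are summed, and the single code-table entry is updated."""
    new = codebook.copy()
    for j in range(len(codebook)):
        mask = indices == j
        if mask.any():
            new[j] -= lr * grads[mask].sum()
    return new
```

Only the small integer index array and the code table are stored per layer; the float weights are materialized on demand via `decode`.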
The model compression method based on non-uniform quantization of the present invention was tested on the classical VGG-16 and AlexNet networks: after weight quantization, the network models can be compressed by roughly 13.7 times with almost no loss of accuracy.
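As a rough plausibility check on a compression ratio of this magnitude (illustrative arithmetic only; the patent does not give this accounting, and the reported 13.7x also depends on per-layer cluster counts and how the indices are coded):

```python
import math

def compression_ratio(n_weights, n_shared, float_bits=32):
    """Each 32-bit float weight is replaced by a ceil(log2(n_shared))-bit
    index, plus a code table of n_shared floats (hypothetical accounting)."""
    index_bits = math.ceil(math.log2(n_shared))
    original = n_weights * float_bits
    compressed = n_weights * index_bits + n_shared * float_bits
    return original / compressed
```

For example, a layer of one million weights shared among 4 values compresses nearly 16x, while 256 shared values give about 4x, so a whole-model figure like 13.7x is consistent with small per-cluster codebooks.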
The present invention also provides a model compression system 1 based on non-uniform quantization, comprising:
a clustering module 11, for clustering the weights of a convolutional neural network;
a quantization module 12: after clustering, the weights form multiple clusters, and the weights in each cluster are quantized to the value of the cluster's center point;
an error reduction module 13, for computing the total quantization error of each cluster after all weights are quantized; when a cluster's total quantization error exceeds a preset threshold, the cluster is divided into 2 equal intervals and the weights in each interval are quantized to the value of the interval's center point; the total quantization error of each interval is computed, and if it exceeds the preset threshold the interval is again divided into 2 equal intervals and re-quantized, until the total quantization error of every interval is no more than the preset threshold;
a storage module 14, for storing the quantized weights as quantization-level indices;
a fixed-point module 15, for converting the quantized weights and the input data of the convolutional neural network to fixed point.
In the model compression system 1 based on non-uniform quantization of the present invention, the clustering module 11 uses the k-means clustering method to cluster the weights of the convolutional neural network.
In the model compression system 1 based on non-uniform quantization of the present invention, the total quantization error of a cluster computed in the error reduction module 13 is the sum of the quantization errors of all weights in the cluster, the quantization error of a weight being the difference between the weight's unquantized value and its quantized value.
In the model compression system 1 based on non-uniform quantization of the present invention, the fixed-point module 15 estimates the fixed-point configuration of each layer of the convolutional neural network from the value range of the network's input data, builds and optimizes an error function from the estimated fixed-point configuration to determine the optimal fixed-point configuration point of the input data for each layer, and converts the input data of the convolutional neural network to fixed point according to the optimal fixed-point configuration point.
The specific embodiments of the present invention described above are not intended to limit its scope. Any other corresponding changes and variations made according to the technical concept of the present invention shall fall within the protection scope of the claims of the present invention.
Claims (8)
1. A model compression method based on non-uniform quantization, characterized in that it comprises:
S1. clustering the weights of a convolutional neural network;
S2. after clustering, the weights form multiple clusters; quantizing the weights in each cluster to the value of the cluster's center point;
S3. computing the total quantization error of each cluster after all weights are quantized; when a cluster's total quantization error exceeds a preset threshold, dividing the cluster into 2 equal intervals and quantizing the weights in each interval to the value of the interval's center point; computing the total quantization error of each interval, and if an interval's total quantization error exceeds the preset threshold, dividing that interval again into 2 equal intervals and re-quantizing, until the total quantization error of every interval is no more than the preset threshold;
S4. storing the quantized weights as quantization-level indices;
S5. converting the quantized weights and the input data of the convolutional neural network to fixed point.
2. The model compression method based on non-uniform quantization of claim 1, characterized in that the clustering in step S1 uses the k-means clustering method.
3. The model compression method based on non-uniform quantization of claim 1, characterized in that the total quantization error of a cluster in step S3 is the sum of the quantization errors of all weights in the cluster, the quantization error of a weight being the difference between the weight's unquantized value and its quantized value.
4. The model compression method based on non-uniform quantization of claim 1, characterized in that the method for converting the input data of the convolutional neural network to fixed point in step S5 is: estimating the fixed-point configuration of each layer of the convolutional neural network from the value range of the network's input data; building and optimizing an error function from the estimated fixed-point configuration to determine the optimal fixed-point configuration point of the input data for each layer; and converting the input data of the convolutional neural network to fixed point according to the optimal fixed-point configuration point.
5. A model compression system based on non-uniform quantization, characterized in that it comprises:
a clustering module, for clustering the weights of a convolutional neural network;
a quantization module, for quantizing the weights in each cluster to the value of the cluster's center point;
an error reduction module, for computing the total quantization error of each cluster after all weights are quantized; when a cluster's total quantization error exceeds a preset threshold, the cluster is divided into 2 equal intervals and the weights in each interval are quantized to the value of the interval's center point; the total quantization error of each interval is computed, and if it exceeds the preset threshold the interval is again divided into 2 equal intervals and re-quantized, until the total quantization error of every interval is no more than the preset threshold;
a storage module, for storing the quantized weights as quantization-level indices;
a fixed-point module, for converting the quantized weights and the input data of the convolutional neural network to fixed point.
6. The model compression system based on non-uniform quantization of claim 5, characterized in that the clustering module uses the k-means clustering method to cluster the weights of the convolutional neural network.
7. The model compression system based on non-uniform quantization of claim 5, characterized in that the total quantization error of a cluster computed in the error reduction module is the sum of the quantization errors of all weights in the cluster, the quantization error of a weight being the difference between the weight's unquantized value and its quantized value.
8. The model compression system based on non-uniform quantization of claim 5, characterized in that the fixed-point module estimates the fixed-point configuration of each layer of the convolutional neural network from the value range of the network's input data, builds and optimizes an error function from the estimated fixed-point configuration to determine the optimal fixed-point configuration point of the input data for each layer, and converts the input data of the convolutional neural network to fixed point according to the optimal fixed-point configuration point.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810460805.2A CN108764458B (en) | 2018-05-15 | 2018-05-15 | Method and system for reducing storage space consumption and calculation amount of mobile equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108764458A true CN108764458A (en) | 2018-11-06 |
CN108764458B CN108764458B (en) | 2021-03-02 |
Family
ID=64006798
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810460805.2A Active CN108764458B (en) | 2018-05-15 | 2018-05-15 | Method and system for reducing storage space consumption and calculation amount of mobile equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108764458B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105184362A (en) * | 2015-08-21 | 2015-12-23 | 中国科学院自动化研究所 | Parameter-quantization-based acceleration and compression method for deep convolutional neural networks |
CN106570853A (en) * | 2015-10-08 | 2017-04-19 | 上海深邃智能科技有限公司 | Insulator identification and defect detection method integrating shape and color |
CN106597262A (en) * | 2017-01-17 | 2017-04-26 | 太仓市同维电子有限公司 | Wireless test calibration method based on the k-means algorithm |
CN106845640A (en) * | 2017-01-12 | 2017-06-13 | 南京大学 | Intra-layer non-uniform equal-interval fixed-point quantization method for deep convolutional neural networks |
CN106897734A (en) * | 2017-01-12 | 2017-06-27 | 南京大学 | Intra-layer non-uniform k-means-clustering fixed-point quantization method for deep convolutional neural networks |
CN107067077A (en) * | 2017-04-18 | 2017-08-18 | 武汉大学 | A weighting algorithm for convolutional neural networks |
WO2017196963A1 (en) * | 2016-05-10 | 2017-11-16 | Accutar Biotechnology Inc. | Computational method for classifying and predicting protein side chain conformations |
CN107944555A (en) * | 2017-12-07 | 2018-04-20 | 广州华多网络科技有限公司 | Method, storage device and terminal for compressing and accelerating a neural network |
Non-Patent Citations (4)
Title |
---|
Fangxuan Sun et al., "Intra-Layer Nonuniform Quantization of Convolutional Neural Network", arXiv * |
Song Han et al., "Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding", arXiv * |
Qiu Hua, "Research on Complex Network Visualization Technology Based on Compression and Cluster Analysis", China Master's Theses Full-text Database, Basic Sciences * |
Zhu Xiang, "An Internet Image Face Retrieval System Based on Deep Learning", China Master's Theses Full-text Database, Information Science and Technology * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109523016B (en) * | 2018-11-21 | 2020-09-01 | 济南大学 | Multi-valued quantization depth neural network compression method and system for embedded system |
CN111563600A (en) * | 2019-02-14 | 2020-08-21 | 北京嘀嘀无限科技发展有限公司 | System and method for fixed-point conversion |
CN111563600B (en) * | 2019-02-14 | 2024-05-10 | 北京嘀嘀无限科技发展有限公司 | System and method for fixed-point conversion |
CN110992432A (en) * | 2019-10-28 | 2020-04-10 | 北京大学 | Depth neural network-based minimum variance gradient quantization compression and image processing method |
CN113505774A (en) * | 2021-07-14 | 2021-10-15 | 青岛全掌柜科技有限公司 | Novel policy identification model size compression method |
CN113505774B (en) * | 2021-07-14 | 2023-11-10 | 众淼创新科技(青岛)股份有限公司 | Policy identification model size compression method |
Also Published As
Publication number | Publication date |
---|---|
CN108764458B (en) | 2021-03-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108764458A (en) | Model compression method and system based on non-uniform quantization | |
Li et al. | A novel cost-effective dynamic data replication strategy for reliability in cloud data centres | |
CN109714395A (en) | Cloud platform resource uses prediction technique and terminal device | |
CN111563589B (en) | Quantification method and device for neural network model | |
CN109635935B (en) | Model adaptive quantization method of deep convolutional neural network based on modular length clustering | |
US20190392300A1 (en) | Systems and methods for data compression in neural networks | |
CN111079899A (en) | Neural network model compression method, system, device and medium | |
CN108564168A (en) | A kind of design method to supporting more precision convolutional neural networks processors | |
CN110222029A (en) | A kind of big data multidimensional analysis computational efficiency method for improving and system | |
CN102110079B (en) | Tuning calculation method of distributed conjugate gradient method based on MPI | |
CN107450855B (en) | Model-variable data distribution method and system for distributed storage | |
CN115080248B (en) | Scheduling optimization method for scheduling device, and storage medium | |
CN109086866A (en) | A kind of part two-value convolution method suitable for embedded device | |
CN109543821A (en) | A kind of limitation weight distribution improves the convolutional neural networks training method of quantification effect | |
CN108512918A (en) | The data processing method of heterogeneous distributed storage system | |
CN110689113A (en) | Deep neural network compression method based on brain consensus initiative | |
KR20220092776A (en) | Apparatus and method for quantizing neural network models | |
CN110874626A (en) | Quantization method and device | |
CN111160614B (en) | Training method and device of resource transfer prediction model and computing equipment | |
CN110058869A (en) | Mobile application method for pushing, computer readable storage medium and terminal device | |
US11763158B2 (en) | Method for automatic hybrid quantization of deep artificial neural networks | |
CN107329881B (en) | Application system performance test method and device, computer equipment and storage medium | |
CN116302481B (en) | Resource allocation method and system based on sparse knowledge graph link prediction | |
CN105554069A (en) | Big data processing distributed cache system and method thereof | |
CN111490889B (en) | Method and device for estimating wireless service growth |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||