CN113065640A - Image classification network compression method based on convolution kernel shape automatic learning - Google Patents


Info

Publication number
CN113065640A
Authority
CN
China
Prior art keywords
convolution kernel
convolution
network
image classification
automatic learning
Prior art date
Legal status
Granted
Application number
CN202110283921.3A
Other languages
Chinese (zh)
Other versions
CN113065640B (en)
Inventor
Zhang Ke (张科)
Liu Guangzhe (刘广哲)
Current Assignee
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date
Filing date
Publication date
Application filed by Northwestern Polytechnical University
Priority to CN202110283921.3A
Publication of CN113065640A
Application granted
Publication of CN113065640B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/082 Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections


Abstract

The invention relates to an image classification network compression method based on automatic learning of convolution kernel shapes, belonging to the technical field of image processing and recognition. By applying multiple sparsity regularization constraints to the parameters at each position of a conventional convolution kernel, the parameters inside the kernel are sparsified during network training; setting a pruning threshold according to the desired compression ratio then yields an automatically learned kernel shape, so that the redundant parameters inside the kernel are effectively removed. Applied to the image classification task, the method further improves the compression rate of the network model while preserving classification accuracy, reduces the parameter count and computation of the model, and eases deployment on resource-limited mobile devices.

Description

Image classification network compression method based on convolution kernel shape automatic learning
Technical Field
The invention belongs to the technical field of image processing and recognition, and particularly relates to an image classification network compression method based on convolution kernel shape automatic learning.
Background
Image classification and recognition are important subjects in the field of machine vision. Early image recognition methods relied mainly on hand-crafted feature extraction, with low accuracy and limited applicability across scenes. With the advent of deep learning, convolutional neural networks have achieved remarkable success in machine vision tasks such as image recognition and target detection; deep neural networks effectively extract high-level semantic features from images and can even surpass human recognition capability.
However, as network performance has improved, network structures have grown more and more complex, placing ever higher demands on the storage and computing capacity of devices and limiting application and deployment on resource-limited mobile devices. Large neural network models often contain considerable internal redundancy: not all parameters contribute effectively to network performance, and excessive parameters cause problems such as slow convergence and overfitting. To ease the deployment and application of neural networks, neural network compression methods have received increasing attention.
Parameter pruning is an effective neural network compression method: it reduces model complexity by cutting redundant or unimportant parameters out of the network. Wei Wei, Cheng Shichao, Zhu Fenghua et al. (model pruning method based on sparse convolutional neural networks, Computer Engineering, DOI: https://doi.org/10.19678/j.issn.1000-3428.0059375) proposed a model pruning algorithm based on sparse convolutional neural networks: sparse regularization constraints are applied to the convolutional layers and Batch Normalization (BN) layers during training to sparsify the network weights, a pruning threshold is set, the filter channels of lower importance are pruned, and the model's accuracy is restored by fine-tuning, thereby compressing the convolutional neural network. This is a structured pruning method that prunes with the convolution channel as the smallest unit, so it cannot remove redundant parameters inside a convolution kernel. Achieving higher compression ratios requires smaller pruning units.
Disclosure of Invention
Technical problem to be solved
Existing sparsification-based pruning methods for convolutional neural networks train sparsity at the level of whole convolution channels and cannot eliminate redundant parameters inside a convolution kernel, which limits the achievable compression rate of the network model and ultimately affects image classification accuracy. To solve this, the invention provides an image classification network compression method based on automatic learning of convolution kernel shapes.
Technical scheme
An image classification network compression method based on convolution kernel shape automatic learning is characterized by comprising the following steps:
step 1: building a convolutional neural network for image classification;
step 2: introduce a coefficient matrix F into the conventional convolution and add to the loss function a weight-sparsification regularization term L_s, a distribution-equalization regularization term L_d, and an inter-group equalization regularization term L_g:

L = L_cls + L_wd + λ_1 L_s + λ_2 L_d + λ_3 L_g

where λ_1, λ_2, λ_3 are coefficients that balance the terms;

during network training, the partial derivatives of the three regularization terms with respect to each element f_ij of the coefficient matrix are computed and used to update F by back-propagation; a sparse coefficient matrix F is obtained when training finishes;

step 3: set a threshold according to the desired model compression ratio, remove the convolution kernel parameters at the positions whose f_ij falls below the threshold, and obtain the convolution kernel shape of each convolutional layer;

step 4: replace the original conventional convolution kernels with the automatically learned sparse-shaped kernels and train the network again to obtain the final image classification neural network model.
Preferably: the convolutional neural network in step 1 is VGG.
Preferably: the convolutional neural network in step 1 is ResNet.
Advantageous effects
According to the image classification network compression method based on automatic learning of convolution kernel shapes disclosed by the invention, multiple sparsity regularization constraints are applied to the parameters at each position of a conventional convolution kernel, the parameters inside the kernel are sparsified during network training, and an automatically learned kernel shape is obtained by setting a pruning threshold according to the desired compression ratio, so that the redundant parameters inside the kernel are effectively removed. Applied to the image classification task, the method further improves the compression rate of the network model while preserving classification accuracy, reduces the parameter count and computation of the model, and eases deployment on resource-limited mobile devices.
The image classification network compression method based on automatic learning of convolution kernel shapes can learn the kernel shape of every convolutional layer during network training, so that the receptive field of the convolution kernels adapts to the network depth while redundant parameters inside the kernels are removed, achieving a good network compression effect.
The automatic kernel-shape learning method also offers a new idea for efficient network architecture design: adding the kernel shape to the search space of neural architecture search (NAS) yields a larger search space, allowing richer target features to be extracted and helping improve network performance.
The image classification network compression method based on automatic learning of convolution kernel shapes can effectively compress the parameters of the network's convolutional layers; for example, 59.07% of the parameters and 51.91% of the computation of a VGG-16 network can be removed without reducing accuracy, making it convenient to deploy the image classification network on mobile terminal devices.
The image classification network compression method based on automatic learning of convolution kernel shapes reduces redundant parameters in the convolutional layers, which lowers the over-fitting risk of the image classification network and helps improve classification accuracy; for example, classification accuracy improves by 0.72% while a VGG-16 network is compressed.
Drawings
FIG. 1 is a diagram of a convolution calculation process incorporating a matrix of convolution kernel coefficients.
Fig. 2 is a diagram of the numbering of the parameters of the 3×3 convolution kernel and their division into groups.
Fig. 3 is a flow chart of convolution kernel shape auto-learning.
FIG. 4 is a convolution kernel shape for each convolution layer automatically learned on a CIFAR-10 dataset using a VGG-16 network.
Detailed Description
The invention will now be further described with reference to the following examples and drawings:
the invention provides an image classification network compression method based on convolution kernel shape automatic learning, and the convolution kernel shape automatic learning process is shown in figure 3. The following describes an embodiment of the present invention with reference to an image classification example, but the technical content of the present invention is not limited to the scope, and the embodiment includes the following steps:
(1) Build a convolutional neural network for image classification and construct an image dataset with a large number of labelled training samples.
(2) For the convolutional layers of the neural network, the conventional convolution is computed as

Y = X * w

where X ∈ R^{c×h×w} is the input feature-map tensor, Y ∈ R^{n×h'×w'} is the output feature-map tensor, w ∈ R^{n×c×k×k} holds the convolution weight parameters, c and n are the numbers of input and output channels respectively, h and w are the height and width of the input feature map, h' and w' are the height and width of the output feature map, k×k is the size of the convolution kernel, and * is the image convolution operation.
The convolution kernels of the n output channels are divided evenly into d groups of n/d convolution channels each. To sparsify the parameters inside the kernels, a coefficient matrix F ∈ R^{d×k×k} is introduced and multiplied point by point with the convolution weights w of each group before convolving with the input X:

Y = X * (F ⊙ w)

where ⊙ is point-by-point multiplication. The whole computation is illustrated in Fig. 1.
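The grouped masked convolution above can be sketched in a few lines of numpy. This is an illustrative re-implementation, not the patent's code: the function name, valid padding, stride 1, and storing F as one k×k mask per group are assumptions made for the example.

```python
import numpy as np

def masked_conv2d(X, w, F, d):
    """Sketch of Y = X * (F ⊙ w): the kernels of the n output channels
    are split into d groups, and each group shares one k×k coefficient
    mask taken from F (valid padding, stride 1)."""
    n, c, k, _ = w.shape            # w: (n, c, k, k) convolution weights
    g = n // d                      # n/d output channels per group
    mask = np.repeat(F, g, axis=0)  # F: (d, k, k) -> one mask per output channel
    wm = w * mask[:, None, :, :]    # point-by-point product F ⊙ w
    h_in, w_in = X.shape[1], X.shape[2]   # X: (c, h, w) input feature map
    h_out, w_out = h_in - k + 1, w_in - k + 1
    Y = np.zeros((n, h_out, w_out))
    for o in range(n):
        for i in range(h_out):
            for j in range(w_out):
                Y[o, i, j] = np.sum(X[:, i:i+k, j:j+k] * wm[o])
    return Y
```

Zeroing an entry of a group's mask suppresses that kernel position for every channel in the group, which is exactly the mechanism the pruning threshold exploits later.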
The loss function of conventional convolutional neural network training is

L = L_cls + L_wd

where L_cls is the classification loss term, which depends on the input images and predicted labels during training, and L_wd is the weight-decay regularization term, which reduces network overfitting.
A coefficient matrix F is introduced into the conventional convolution, and a weight-sparsification regularization term L_s, a distribution-equalization regularization term L_d, and an inter-group equalization regularization term L_g are added to the loss function. During network training, the partial derivatives of these three terms with respect to each element f_ij of the coefficient matrix are computed and used to update F by back-propagation. A sparse coefficient matrix F is obtained when training finishes.
To learn the convolution kernel shape automatically, sparse regularization constraints are applied to the coefficient matrix F, giving the loss function

L = L_cls + L_wd + λ_1 L_s + λ_2 L_d + λ_3 L_g

where L_s is the regularization term that makes the convolution kernel weights sparse, L_d is the regularization term that equalizes the distribution of the kernel parameters, L_g is the regularization term that equalizes the parameters between groups, and λ_1, λ_2, λ_3 are coefficients used to balance the terms.
The regularization terms are constructed separately below. Taking a 3×3 convolution kernel as an example, its 9 parameters are numbered 1 to 9 row by row and divided into three groups, corner (G_corner), edge (G_edge), and center (G_center), as shown in Fig. 2:

G_corner = {1, 3, 7, 9},  G_edge = {2, 4, 6, 8},  G_center = {5}
1) L_s is the regularization term that drives the weights toward sparsity, computed as

L_s = Σ_{i=1..d} Σ_{j=1..9} k_j g(f_ij)

where k_j are position-dependent coefficients forming a vector k ∈ R^{1×9}, used to apply different regularization strengths to the corner, edge, and center positions. For example, taking k = [4, 2, 4, 2, 1, 2, 4, 2, 4] applies to the G_corner positions 4 times the constraint of G_center and to the G_edge positions 2 times that of G_center, thereby favouring the retention of parameters near the kernel center. g(·) is a regularization norm; with the L1 norm,

L_s = Σ_{i=1..d} Σ_{j=1..9} k_j |f_ij|
During training, since ∂L_s/∂f_ij does not depend on the training samples, it can be computed in advance for each coefficient f_ij and used to update the coefficient matrix F by back-propagation. The partial derivative is

∂L_s/∂f_ij = k_j sgn(f_ij)

where sgn(·) is the sign function.
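With the L1 norm, L_s and its sample-independent gradient can be checked numerically. A minimal numpy sketch follows; the function name and the flattened (d, 9) layout of F are assumptions for the example.

```python
import numpy as np

# Example position coefficients from the text: 4x the center constraint
# on corner positions, 2x on edge positions (row-by-row order, j = 1..9).
K_VEC = np.array([4, 2, 4, 2, 1, 2, 4, 2, 4], dtype=float)

def ls_and_grad(F, k=K_VEC):
    """L_s = sum_i sum_j k_j * |f_ij| and its gradient
    dL_s/df_ij = k_j * sgn(f_ij), for F of shape (d, 9)."""
    ls = float(np.sum(k * np.abs(F)))
    grad = k * np.sign(F)   # sample-independent, usable directly in backprop
    return ls, grad
```

Because the gradient only involves k_j and the sign of f_ij, it can be precomputed once per update, as the text notes.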
2) L_d is the regularization term that equalizes the distribution of the convolution kernel parameters, so that the convolution parameters in all directions are taken into account and shifted feature maps are avoided. For all d groups, the coefficients f_ij at the same position j are summed in absolute value:

F_j = Σ_{i=1..d} |f_ij|

For G_corner and G_edge separately, the pairwise differences of the F_j are squared and summed:

L_d = Σ_{j1<j2, j1,j2 ∈ G_corner} (F_j1 - F_j2)² + Σ_{j1<j2, j1,j2 ∈ G_edge} (F_j1 - F_j2)²

By the chain rule, the partial derivative of L_d with respect to each coefficient f_ij is

∂L_d/∂f_ij = (∂L_d/∂F_j) · sgn(f_ij)

where, for j in G_corner, ∂L_d/∂F_j = 2 Σ_{j' ∈ G_corner, j'≠j} (F_j - F_j'); for j in G_edge the sum runs over G_edge; and ∂L_d/∂F_j = 0 for the center position.
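The distribution-equalization term and its position sums F_j can be sketched as follows. This is an illustration rather than the patent's code, and it uses 0-based position indices instead of the 1-based numbering of Fig. 2.

```python
import numpy as np

# 0-based equivalents of G_corner = {1,3,7,9}, G_edge = {2,4,6,8}
G_CORNER = [0, 2, 6, 8]
G_EDGE = [1, 3, 5, 7]

def ld(F):
    """L_d: pairwise squared differences of the position sums
    F_j = sum_i |f_ij|, taken within the corner group and within
    the edge group, for F of shape (d, 9)."""
    Fj = np.abs(F).sum(axis=0)     # position totals over the d groups
    loss = 0.0
    for grp in (G_CORNER, G_EDGE):
        for a in range(len(grp)):
            for b in range(a + 1, len(grp)):
                loss += (Fj[grp[a]] - Fj[grp[b]]) ** 2
    return loss
```

A direction-balanced coefficient matrix (equal mass at every corner and at every edge) gives L_d = 0, which is the behaviour the regularizer rewards.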
3) L_g is the regularization term that equalizes the parameters between groups, preventing the number of parameters from differing too much across the d groups. For each of the d groups, the absolute values of the f_ij at the corner, edge, and center positions are summed:

F_i^corner = Σ_{j ∈ G_corner} |f_ij|,  F_i^edge = Σ_{j ∈ G_edge} |f_ij|,  F_i^center = Σ_{j ∈ G_center} |f_ij|

where F_i^corner is the sum of the absolute values of the coefficients f_ij at G_corner positions in the i-th group of kernels, and F_i^edge and F_i^center are defined analogously for the G_edge and G_center positions.

For the d values of each of F_i^corner, F_i^edge, and F_i^center, the pairwise differences are squared and summed:

L_g^corner = Σ_{i1<i2} (F_i1^corner - F_i2^corner)²

with L_g^edge and L_g^center defined analogously; L_g^corner, L_g^edge, and L_g^center are the inter-group balance losses produced by the corner, edge, and center positions respectively. The total inter-group balance loss is

L_g = L_g^corner + L_g^edge + L_g^center

By the chain rule, the partial derivative of L_g with respect to each coefficient f_ij, for a position j belonging to group p (corner, edge, or center), is

∂L_g/∂f_ij = 2 Σ_{i'≠i} (F_i^p - F_i'^p) · sgn(f_ij)
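The inter-group term can likewise be checked numerically. The sketch below (numpy, 0-based indices, illustrative names) computes F_i^corner, F_i^edge, and F_i^center for each of the d groups and sums the pairwise squared differences.

```python
import numpy as np

# 0-based position groups of the 3x3 kernel: corner, edge, center
POSITION_GROUPS = ([0, 2, 6, 8], [1, 3, 5, 7], [4])

def lg(F, groups=POSITION_GROUPS):
    """L_g: for each position group p, F_i^p = sum_{j in p} |f_ij|;
    the loss sums the pairwise squared differences of F_i^p across
    the d kernel groups, over the three position groups."""
    d = F.shape[0]
    loss = 0.0
    for grp in groups:
        Fi = np.abs(F[:, list(grp)]).sum(axis=1)  # (d,) one value per kernel group
        for a in range(d):
            for b in range(a + 1, d):
                loss += (Fi[a] - Fi[b]) ** 2
    return loss
```

If every kernel group carries the same mass in each position group, L_g = 0, so the term only penalizes imbalance between the d groups.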
(3) The coefficient matrix F of step (1) is sparsely trained with the loss function of step (2), yielding a sparse F after training. A pruning threshold is then set, the convolution kernel parameters at positions whose f_ij falls below the threshold are removed, and the automatically learned convolution kernel shape is obtained.
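The pruning step of (3) reduces to a simple threshold on |f_ij|. The patent only states that the threshold is set according to the desired compression ratio; choosing it as a quantile of the trained coefficients, as in the numpy sketch below, is one plausible realization, and the names are illustrative.

```python
import numpy as np

def learned_shape(F, compression=0.6):
    """Turn the trained coefficient matrix F (shape (d, 9)) into a
    boolean kernel-shape mask: positions whose |f_ij| falls at or
    below the threshold are removed; the surviving positions define
    the learned shape of each kernel group."""
    thr = np.quantile(np.abs(F), compression)  # prune the lowest fraction
    return np.abs(F) > thr
```

With compression=0.6, roughly 60% of the kernel positions are dropped, matching the setting reported for Fig. 4.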
(4) The original conventional convolution kernels are replaced by the automatically learned sparse-shaped kernels, and the network is trained again to obtain the final neural network model. Its parameter count and computation are lower than those of the original model, achieving network compression while preserving correct classification results.
Based on this automatic kernel-shape learning method, a convolution kernel shape adapted to each convolutional layer is obtained and redundant parameters inside the kernels are effectively removed, achieving the purpose of model compression.
Fig. 4 shows the convolution kernel shapes of each convolutional layer learned automatically by a VGG-16 network on the CIFAR-10 dataset, with d = 2 in the learning process (the kernels of the n output channels of each layer are divided evenly into 2 groups) and 60% of the network's parameters removed at pruning. The leftmost column (scheme one) is the result of adding only the sparsification regularization term L_s with the same constraint coefficient k_j at the corner, edge, and center positions. The second column from the left (scheme two) builds on the first by using different constraint coefficients k_j at the corner, edge, and center positions; compared with the first column, it retains more parameters near the kernel center. The third column from the left (scheme three) builds on the second by adding the distribution-equalization regularization term L_d; compared with the second column, the resulting kernels balance the convolution parameters in all directions, especially in layers 1 and 10, avoiding feature maps that are shifted in a particular direction. The fourth column from the left (scheme four) builds on the third by adding the inter-group equalization regularization term L_g; compared with the third column, it better balances the parameter counts of the two groups of kernels, especially in layer 5.
Table 1 shows the compression results of the invention. The original VGG-16 model contains 15.0M parameters and 314M operations, with 93.45% accuracy on CIFAR-10. The traditional structured pruning method yields a model with 5.4M parameters and 206M operations, with accuracy reduced to 93.40%. With the proposed automatic kernel-shape learning, adding each regularization constraint improves model accuracy while compressing the network, showing that every constraint contributes beneficially to the compression result; the final model (scheme four) has 6.14M parameters and 151M operations, with accuracy improved to 94.17%.
Table 1 Network compression results of the invention

Model                  | Parameters | Computation | Accuracy
Original VGG-16        | 15.0M      | 314M        | 93.45%
Structured pruning     | 5.4M       | 206M        | 93.40%
Proposed (scheme four) | 6.14M      | 151M        | 94.17%

Claims (3)

1. An image classification network compression method based on convolution kernel shape automatic learning is characterized by comprising the following steps:
step 1: building a convolutional neural network for image classification;
step 2: introduce a coefficient matrix F into the conventional convolution and add to the loss function a weight-sparsification regularization term L_s, a distribution-equalization regularization term L_d, and an inter-group equalization regularization term L_g:

L = L_cls + L_wd + λ_1 L_s + λ_2 L_d + λ_3 L_g

where λ_1, λ_2, λ_3 are coefficients that balance the terms;

during network training, the partial derivatives of the three regularization terms with respect to each element f_ij of the coefficient matrix are computed and used to update F by back-propagation; a sparse coefficient matrix F is obtained when training finishes;

step 3: set a threshold according to the desired model compression ratio, remove the convolution kernel parameters at the positions whose f_ij falls below the threshold, and obtain the convolution kernel shape of each convolutional layer;

step 4: replace the original conventional convolution kernels with the automatically learned sparse-shaped kernels and train the network again to obtain the final image classification neural network model.
2. The image classification network compression method based on automatic learning of convolution kernel shapes according to claim 1, characterized in that the convolutional neural network in step 1 is VGG.
3. The image classification network compression method based on automatic learning of convolution kernel shapes according to claim 1, characterized in that the convolutional neural network in step 1 is ResNet.
CN202110283921.3A 2021-03-17 2021-03-17 Image classification network compression method based on convolution kernel shape automatic learning Active CN113065640B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110283921.3A CN113065640B (en) 2021-03-17 2021-03-17 Image classification network compression method based on convolution kernel shape automatic learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110283921.3A CN113065640B (en) 2021-03-17 2021-03-17 Image classification network compression method based on convolution kernel shape automatic learning

Publications (2)

Publication Number Publication Date
CN113065640A true CN113065640A (en) 2021-07-02
CN113065640B CN113065640B (en) 2024-01-09

Family

ID=76560834

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110283921.3A Active CN113065640B (en) 2021-03-17 2021-03-17 Image classification network compression method based on convolution kernel shape automatic learning

Country Status (1)

Country Link
CN (1) CN113065640B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104200224A (en) * 2014-08-28 2014-12-10 西北工业大学 Valueless image removing method based on deep convolutional neural networks
CN107609525A (en) * 2017-09-19 2018-01-19 吉林大学 Remote Sensing Target detection method based on Pruning strategy structure convolutional neural networks
WO2018028255A1 (en) * 2016-08-11 2018-02-15 深圳市未来媒体技术研究院 Image saliency detection method based on adversarial network
KR20180052063A (en) * 2016-11-07 2018-05-17 한국전자통신연구원 Convolution neural network system and operation method thereof


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
S. Lin et al.: "Accelerating convolutional networks via global & dynamic filter pruning", Joint Conf. Artif. Intell. *
Zhang Ke; Su Yu; Wang Jingyu; Wang Xianyu; Zhang Yanhua: "Research on environmental sound classification system based on fused features and convolutional neural networks", Journal of Northwestern Polytechnical University, no. 01 *

Also Published As

Publication number Publication date
CN113065640B (en) 2024-01-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant