CN111178525A - Pruning-based convolutional neural network compression method, system and medium - Google Patents


Info

Publication number
CN111178525A
Authority
CN
China
Prior art keywords
pruning
convolutional neural
neural network
significance
formula
Prior art date
Legal status
Pending
Application number
CN201911349816.4A
Other languages
Chinese (zh)
Inventor
李琳
徐亦农
杨苗苗
赖彬彬
刘凡
Current Assignee
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications
Priority to CN201911349816.4A
Publication of CN111178525A
Current legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06N 3/082: Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections


Abstract

The invention provides a pruning-based convolutional neural network compression method, system and medium, comprising the following steps: preprocessing training data; initializing the weights of the convolutional neural network model; calculating the significance of the network connections, taking the significance score as the criterion for evaluating the importance of a connection, and ranking the significances; pruning the convolutional neural network and keeping the top-k connections so that the network becomes sparse; and training the sparse convolutional neural network model until it converges. The invention prunes the network before training, which removes the pre-training and fine-tuning processes, greatly simplifies the pruning procedure, and preserves the accuracy of the network. At the same time, important connections are selected in a structured way through the significance score, so the method is robust to different network structures and can be applied to various architectures without excessive adjustment.

Description

Pruning-based convolutional neural network compression method, system and medium
Technical Field
The invention belongs to the technical field of convolutional neural network compression, and particularly relates to a pruning-based convolutional neural network compression method.
Background
In recent years, deep neural networks have performed excellently on computer vision tasks such as image classification and face recognition, and among the different types of neural networks, convolutional neural networks are among the best performing. As convolutional neural networks have developed, the network hierarchy has gradually deepened and the parameter scale has grown, which greatly limits their practical deployment. One aspect is model size: the power of convolutional neural networks comes from millions of trainable parameters. These parameters, together with the network structure information, need to be stored on disk and loaded into memory during inference. For example, a typical network model trained on ImageNet may take up more than 300 MB of space, which is a significant resource burden for embedded devices. Another aspect is runtime memory: even with a batch size of 1, the intermediate activations/responses of a convolutional neural network may occupy more memory during inference than the model parameters themselves. This is not a problem for high-end GPUs, but it is unaffordable for mobile and embedded devices with low computational power. It is therefore necessary to prune the model to reduce its size.
The parameters of a deep convolutional neural network contain considerable redundancy. An effective criterion can be found to judge the importance of the parameters, and unimportant connections or filters can be cut to reduce the redundancy of the model, make the model sparser, and reduce the amount of computation. The typical model pruning pipeline comprises three steps: training, pruning and fine-tuning; the Network Slimming paper at ICCV 2017 introduces this pipeline in detail, but it requires a long fine-tuning time to reach good accuracy. A recent pruning study, Rethinking the Value of Network Pruning, found that a model structure obtained by pruning can be trained from scratch with results comparable to, or even better than, fine-tuning.
Disclosure of Invention
The present invention is directed to solving the above problems of the prior art. A pruning-based convolutional neural network compression method, system and medium are provided. The technical scheme of the invention is as follows:
a pruning-based convolutional neural network compression method comprises the following steps:
(1) acquiring training data and preprocessing the training data, wherein the preprocessing comprises random cropping, random vertical flipping and normalization;
(2) initializing the weight of the convolutional neural network model;
(3) calculating the significance of the network connections, taking the significance score as the criterion for evaluating the importance of a connection, and ranking the significances, wherein, in order to reduce the amount of computation and accelerate the pruning process, a Taylor expansion is adopted to approximately analyze the influence of pruning on the loss function;
(4) pruning the convolutional neural network, with pruning defined as the optimization problem shown in formula (1):

\min_{c^{*} \subseteq C,\ |c^{*}| \le \kappa} \left|\, l(X, Y; \theta_{c^{*}}) - l(X, Y; \theta) \,\right| \qquad (1)

in formula (1), l(X, Y; θ) denotes the loss function of the training model, X denotes the input data, Y the corresponding labels, and θ ∈ R^m the model parameters; pruning means selecting a subset c* ⊆ C of all connections C and removing the parameters θ_{C\c*} from the network, the remaining parameters being denoted θ_{c*}; minimizing the growth of the loss function therefore requires choosing an appropriate c*; the top-k connections are kept, so that the network becomes sparse;
(5) training the sparse convolutional neural network model until the model converges.
Further, the step (1) is to preprocess the training data, and the specific steps are as follows:
step 2.1, dividing the data set to be used into a training set and a test set at a ratio of 9:1, and meanwhile taking one batch of the data set as the training sample, with a batch size of 128; the selected batch is given by formula (2):

D_{b} = \{(x_{i}, y_{i})\}_{i=1}^{b} \subset D, \quad b = 128 \qquad (2)

in formula (2), D denotes the data set, x the input data, y the corresponding label, and i indexes the samples of the current batch D_b.
Further, the step (2) initializes the weights of the convolutional neural network model, and specifically includes the following step:
step 3.1, initializing the weights of the convolutional neural network model by adopting a variance initialization method.
Further, the step (3) calculates the significance of the network connections, takes the significance score as the criterion for evaluating the importance of a connection, and ranks the significances; the specific steps are as follows:

step 4.1, representing the output of connection k by a feature map z, multiplying z by a trainable scaling factor g ∈ {0,1}^m, and using g ⊙ z for the subsequent computation; when g_i = 0, this is equivalent to connection i being cut off, so formula (1) can be rewritten as the optimization formula shown in formula (3):

\Delta l_{(X,Y)}(g) = \left|\, l_{(X,Y)}(g) - l_{(X,Y)}(\mathbf{0}) \,\right| \qquad (3)

step 4.2, expanding l_{(X,Y)}(0) in formula (3) by a Taylor series expansion, as shown in formulas (4) and (5):

l_{(X,Y)}(\mathbf{0}) = \sum_{k=0}^{i} \frac{1}{k!}\, \nabla^{k} l_{(X,Y)}(g)\, (-g)^{k} + R_{i}(g) \qquad (4)

l_{(X,Y)}(\mathbf{0}) = l_{(X,Y)}(g) - \nabla l_{(X,Y)}(g)^{\top} g + R_{1}(g) \qquad (5)

R_i(g) in formula (4) denotes the Lagrangian remainder term, and R_1(g) in formula (5) denotes the Lagrangian remainder after the first-order expansion;

step 4.3, combining formula (3) and formula (5) gives the optimization formula shown in formula (6):

\Delta l_{(X,Y)}(g) = \left|\, \nabla l_{(X,Y)}(g)^{\top} g - R_{1}(g) \,\right| \qquad (6)

step 4.4, the Lagrangian remainder R_1 entails a large amount of computation and is omitted in order to save computing power, so the significance score can be calculated by back-propagation; for each connection c_i ∈ C, S(g_i) is used as the significance score, and S(g_i) is calculated by formula (7):

S(g_{i}) = \left| \frac{\partial l_{(X,Y)}(g)}{\partial g_{i}} \right| \qquad (7)

step 4.5, sorting the significance scores from largest to smallest.
Further, the step (4) prunes the convolutional neural network and keeps the top-k connections so that the network becomes sparse; the specific steps are as follows:

step 5.1, presetting the sparsity level of the pruned model, as shown in formula (8):

\hat{\kappa} = \frac{m - \kappa}{m} \qquad (8)

in formula (8), m denotes the number of all parameters and κ denotes the desired number of non-zero parameters;

step 5.2, given the sparsity level κ̂, the connections in the network model can be cut arbitrarily while the κ connections with the largest significance scores (in absolute value) are retained.
A pruning-based convolutional neural network compression system, comprising:
a preprocessing module: used for acquiring training data and preprocessing the training data, wherein the preprocessing comprises random cropping, random vertical flipping and normalization;
an initialization module: used for initializing the weights of the convolutional neural network model;
a significance calculation and ranking module: used for calculating the significance of the network connections, taking the significance score as the criterion for evaluating the importance of a connection, and ranking the significances;
a pruning module: used for pruning the convolutional neural network, where pruning is defined as the optimization problem shown in formula (1):

\min_{c^{*} \subseteq C,\ |c^{*}| \le \kappa} \left|\, l(X, Y; \theta_{c^{*}}) - l(X, Y; \theta) \,\right| \qquad (1)

in formula (1), l(X, Y; θ) denotes the loss function of the training model, X denotes the input data, Y the corresponding labels, and θ ∈ R^m the model parameters; pruning means selecting a subset c* ⊆ C of all connections C and removing the parameters θ_{C\c*} from the network, the remaining parameters being denoted θ_{c*}; minimizing the growth of the loss function therefore requires choosing an appropriate c*; the top-k connections are kept, so that the network becomes sparse;
a training module: used for training the sparse convolutional neural network model until the model converges.
A medium having stored therein a computer program which, when read by a processor, performs the method of any of the preceding claims.
The invention has the following advantages and beneficial effects:
aiming at the problems of redundant parameters and excessive floating point operands of the convolutional neural network, the invention provides a pruning-based convolutional neural network compression method, which can train a more sparse network while simplifying the pruning process.
The method has the advantage that, after the weights are initialized, important connections are selected in a structured way through the connection-sensitivity scores and the network is pruned, which removes the time-consuming pre-training and fine-tuning processes; a sparser network structure can then be obtained through normal training while the accuracy of the original network model is kept.
Drawings
FIG. 1 is an overall flowchart of the pruning-based convolutional neural network compression according to a preferred embodiment of the present invention;
FIG. 2 is a graph of the change of the test accuracy of the VGG-D model on the CIFAR-10 data set by the method of the invention;
FIG. 3 is a graph of the variation of the loss of the VGG-D model on the CIFAR-10 data set by the method of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described in detail and clearly with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present invention.
The technical scheme for solving the technical problems is as follows:
the experimental platform of the model established by the invention is as follows: the Ubuntu16.04 system adopts a GeForce GTX1070 display card and a TensorFlow framework to train and test the network.
The invention will be described in further detail with reference to FIG. 1 and a specific embodiment, taking the public data set CIFAR-10 as an example. CIFAR-10 is a color image data set with 10 categories: airplane, automobile, bird, cat, deer, dog, frog, horse, ship and truck. Each picture has a size of 32 × 32 × 3, and the entire data set has 50000 training images and 10000 test images, with each category accounting for one tenth. FIG. 1 is a flowchart of the overall convolutional neural network compression procedure:
step 1, preprocessing training data;
step 2, initializing the weight of the convolutional neural network model;
step 3, calculating the significance of the network connections, taking the significance score as the criterion for evaluating the importance of a connection, and ranking the significances;
step 4, pruning the convolutional neural network and keeping the top-k connections so that the network is sparse;
step 5, training the sparse convolutional neural network model until the model converges;
the following details the detailed problems of the implementation of the various steps of the present invention:
step 1, preprocessing training data. Specifically, the data set to be used is divided into a training set and a test set, the ratio of the training set to the test set is 9:1, meanwhile, one batch of the data set is used as a training sample, the used batch size is 128, the selected batch formula is shown as formula (2),
Figure BDA0002334366740000061
in equation (2), D represents the data set, and i represents the current batch.
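By way of illustration, the data split, the preprocessing of step 1 (random cropping, random vertical flipping, normalization) and the sampling of one 128-image batch could look like the following sketch. It is an assumed TensorFlow implementation, not code from the original description; in particular the 40 x 40 padding margin and the scaling to [0, 1] are assumptions.

```python
import numpy as np
import tensorflow as tf

(x, y), _ = tf.keras.datasets.cifar10.load_data()
split = int(0.9 * len(x))                       # 9:1 split into training and test sets
x_train, y_train = x[:split], y[:split]
x_test, y_test = x[split:], y[split:]

def preprocess(img):
    img = tf.image.resize_with_crop_or_pad(img, 40, 40)    # pad before cropping (assumed margin)
    img = tf.image.random_crop(img, [32, 32, 3])           # random cropping
    img = tf.image.random_flip_up_down(img)                # random vertical flipping
    return tf.cast(img, tf.float32) / 255.0                # normalization to [0, 1]

batch_size = 128
idx = np.random.choice(len(x_train), batch_size, replace=False)   # one batch D_b
x_b = tf.stack([preprocess(img) for img in x_train[idx]])
y_b = tf.convert_to_tensor(y_train[idx].reshape(-1), dtype=tf.int32)
```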
Step 2, initialize the weights of the convolutional neural network model by adopting a variance initialization method.
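A minimal sketch of the variance initialization of step 2, assuming a Keras convolutional layer; the description only names a variance initialization method, so the scale, mode and distribution below are common defaults rather than values taken from the patent.

```python
import tensorflow as tf

# Variance-scaling ("He"-style) initializer; the exact variant is an assumption.
init = tf.keras.initializers.VarianceScaling(scale=2.0, mode="fan_in",
                                             distribution="truncated_normal")
conv = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu",
                              kernel_initializer=init)
```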
Step 3, calculate the significance of the network connections, take the significance score as the criterion for evaluating the importance of a connection, and rank the significances. The convolutional neural network model has K connections; to avoid evaluating the connections K times in one pruning pass, the method of the invention uses a Taylor expansion to approximately analyze the influence of pruning on the loss function. The specific steps of the approximate calculation are as follows:

Step 3-1, represent the output of connection k by a feature map z, multiply z by a trainable scaling factor g ∈ {0,1}^m, and use g ⊙ z for the subsequent computation; when g_i = 0, this is equivalent to connection i being cut off, so formula (1) can be rewritten as the optimization formula shown in formula (3):

\Delta l_{(X,Y)}(g) = \left|\, l_{(X,Y)}(g) - l_{(X,Y)}(\mathbf{0}) \,\right| \qquad (3)

Step 3-2, expand l_{(X,Y)}(0) in formula (3) by a Taylor series expansion, as shown in formulas (4) and (5):

l_{(X,Y)}(\mathbf{0}) = \sum_{k=0}^{i} \frac{1}{k!}\, \nabla^{k} l_{(X,Y)}(g)\, (-g)^{k} + R_{i}(g) \qquad (4)

l_{(X,Y)}(\mathbf{0}) = l_{(X,Y)}(g) - \nabla l_{(X,Y)}(g)^{\top} g + R_{1}(g) \qquad (5)

R_i(g) in formula (4) denotes the Lagrangian remainder term, and R_1(g) in formula (5) denotes the Lagrangian remainder after the first-order expansion.

Step 3-3, combining formula (3) and formula (5) gives the optimization formula shown in formula (6):

\Delta l_{(X,Y)}(g) = \left|\, \nabla l_{(X,Y)}(g)^{\top} g - R_{1}(g) \,\right| \qquad (6)

Step 3-4, the Lagrangian remainder R_1 involves a large amount of computation and is omitted in the invention in order to save computation. The significance score can therefore be easily calculated by back-propagation; for each connection c_i ∈ C, S(g_i) is used as the significance score, and S(g_i) is calculated by formula (7):

S(g_{i}) = \left| \frac{\partial l_{(X,Y)}(g)}{\partial g_{i}} \right| \qquad (7)

Step 3-5, sort the significance scores from largest to smallest.
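A sketch of how the significance scores of steps 3-1 to 3-5 could be obtained with a single back-propagation pass. As an assumed simplification, the gate g is attached to the weights rather than to the feature maps, so that at g = 1 the chain rule gives dl/dg_i = (dl/dθ_i)·θ_i and no explicit gate variables are needed; the function name, the cross-entropy loss and the assumption that the model outputs logits are illustrative rather than taken from the description.

```python
import tensorflow as tf

def saliency_scores(model, x_b, y_b):
    """Per-connection significance |dl/dg| from one batch, plus a global descending order."""
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    with tf.GradientTape() as tape:
        loss = loss_fn(y_b, model(x_b, training=False))       # one forward pass on one batch
    grads = tape.gradient(loss, model.trainable_variables)
    # |dl/dg_i| evaluated at g = 1 is |gradient * weight| for every connection
    scores = [tf.abs(g * w) for g, w in zip(grads, model.trainable_variables)]
    flat = tf.concat([tf.reshape(s, [-1]) for s in scores], axis=0)
    order = tf.argsort(flat, direction="DESCENDING")           # step 3-5: sort high to low
    return scores, flat, order
```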
Step 4, prune the convolutional neural network and keep the top-k connections so that the network becomes sparse.

Step 4-1, preset the sparsity level of the pruned model, as shown in formula (8):

\hat{\kappa} = \frac{m - \kappa}{m} \qquad (8)

In formula (8), m denotes the number of all parameters and κ denotes the desired number of non-zero parameters.

Step 4-2, given the sparsity level κ̂, the connections in the network model can be cut arbitrarily while the κ connections with the largest significance scores (in absolute value) are retained.
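Under the same assumptions, step 4 could keep only the top-κ connections as in the sketch below; `scores` is the per-tensor saliency list from the previous sketch, the 95% sparsity mirrors the experiment described later, and the function name is illustrative.

```python
import tensorflow as tf

def prune_to_sparsity(model, scores, sparsity=0.95):
    """Zero out all but the kappa highest-saliency connections before training."""
    flat = tf.concat([tf.reshape(s, [-1]) for s in scores], axis=0)
    m = int(flat.shape[0])                                 # number of all parameters
    kappa = max(1, int(round(m * (1.0 - sparsity))))       # desired number of non-zero parameters
    threshold = tf.sort(flat, direction="DESCENDING")[kappa - 1]
    masks = []
    for w, s in zip(model.trainable_variables, scores):
        mask = tf.cast(s >= threshold, w.dtype)            # top-kappa connections survive
        w.assign(w * mask)
        masks.append(mask)
    return masks                                           # re-apply after each update to stay sparse
```

In a complete implementation the returned masks would be re-applied after every optimizer step so that the pruned connections remain at zero throughout training.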
Step 5, train the sparse convolutional neural network model until the model converges. The training hyper-parameters are as follows:
Parameter         Value
Learning rate     0.1
Weight decay      0.0005
Training batch    128
Max iterations    150000
Optimizer         Momentum
During training, the learning rate decays as follows:
Iterations    Learning rate
30000         0.02
60000         0.004
90000         0.0008
120000        0.00016
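The tabulated hyper-parameters could be wired together as in the following sketch; the momentum coefficient of 0.9, the `batch_iterator`, the `model` object and the `masks` list (from the pruning sketch above) are assumptions, since the tables only give the values listed.

```python
import tensorflow as tf

boundaries = [30000, 60000, 90000, 120000]
values = [0.1, 0.02, 0.004, 0.0008, 0.00016]              # learning-rate schedule from the table
lr = tf.keras.optimizers.schedules.PiecewiseConstantDecay(boundaries, values)
optimizer = tf.keras.optimizers.SGD(learning_rate=lr, momentum=0.9)   # momentum value assumed
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
weight_decay = 5e-4                                       # weight decay 0.0005

for step in range(150000):                                # max iterations
    x_b, y_b = next(batch_iterator)                       # assumed generator of 128-image batches
    with tf.GradientTape() as tape:
        loss = loss_fn(y_b, model(x_b, training=True))
        loss += weight_decay * tf.add_n(
            [tf.nn.l2_loss(w) for w in model.trainable_variables])
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    for w, mask in zip(model.trainable_variables, masks):
        w.assign(w * mask)                                # keep the pruned connections at zero
```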
FIG. 2 shows the change in test accuracy of the VGG-D model at a sparsity level of 95%; it can be seen that the method of the invention reduces the parameters while maintaining the accuracy of the original model. FIG. 3 shows the change in loss of the VGG-D model at a sparsity level of 95%; it can be seen that the method of the invention effectively reduces the loss value and makes the model converge faster.
The invention prunes the network before training, which removes the pre-training and fine-tuning processes, greatly simplifies the pruning procedure, and preserves the accuracy of the network. At the same time, important connections are selected in a structured way through the significance score, so the method is robust to different network structures and can be applied to various architectures without excessive adjustment. In addition, the method selects one batch of the data set to determine the important connections, and whether the remaining connections are effective for a specified task can be verified with different batches.
The above examples are to be construed as merely illustrative and not limitative of the remainder of the disclosure. After reading the description of the invention, the skilled person can make various changes or modifications to the invention, and these equivalent changes and modifications also fall into the scope of the invention defined by the claims.

Claims (7)

1. A pruning-based convolutional neural network compression method is characterized by comprising the following steps:
(1) acquiring training data and preprocessing the training data, wherein the preprocessing comprises random cropping, random vertical flipping and normalization;
(2) initializing the weight of the convolutional neural network model;
(3) calculating the significance of the network connections, taking the significance score as the criterion for evaluating the importance of a connection, and ranking the significances, wherein, in order to reduce the amount of computation and accelerate the pruning process, a Taylor expansion is adopted to approximately analyze the influence of pruning on the loss function;
(4) pruning the convolutional neural network, with pruning defined as the optimization problem shown in formula (1):

\min_{c^{*} \subseteq C,\ |c^{*}| \le \kappa} \left|\, l(X, Y; \theta_{c^{*}}) - l(X, Y; \theta) \,\right| \qquad (1)

in formula (1), l(X, Y; θ) denotes the loss function of the training model, X denotes the input data, Y the corresponding labels, and θ ∈ R^m the model parameters; pruning means selecting a subset c* ⊆ C of all connections C and removing the parameters θ_{C\c*} from the network, the remaining parameters being denoted θ_{c*}; minimizing the growth of the loss function therefore requires choosing an appropriate c*; the top-k connections are kept, so that the network becomes sparse;
(5) training the sparse convolutional neural network model until the model converges.
2. The pruning-based convolutional neural network compression method according to claim 1, wherein the step (1) preprocesses the training data, and specifically comprises the following steps:
step 2.1, dividing the data set to be used into a training set and a test set at a ratio of 9:1, and meanwhile taking one batch of the data set as the training sample, with a batch size of 128; the selected batch is given by formula (2):

D_{b} = \{(x_{i}, y_{i})\}_{i=1}^{b} \subset D, \quad b = 128 \qquad (2)

in formula (2), D denotes the data set, x the input data, y the corresponding label, and i indexes the samples of the current batch D_b.
3. The pruning-based convolutional neural network compression method according to claim 1, wherein the step (2) initializes the weights of the convolutional neural network model, and comprises the specific steps of:
step 3.1, initializing the weights of the convolutional neural network model by adopting a variance initialization method.
4. The pruning-based convolutional neural network compression method according to claim 1, wherein the step (3) calculates the significance of the network connections, takes the significance score as the criterion for evaluating the importance of a connection, and ranks the significances by the following specific steps:

step 4.1, representing the output of connection k by a feature map z, multiplying z by a trainable scaling factor g ∈ {0,1}^m, and using g ⊙ z for the subsequent computation; when g_i = 0, this is equivalent to connection i being cut off, so formula (1) can be rewritten as the optimization formula shown in formula (3):

\Delta l_{(X,Y)}(g) = \left|\, l_{(X,Y)}(g) - l_{(X,Y)}(\mathbf{0}) \,\right| \qquad (3)

step 4.2, expanding l_{(X,Y)}(0) in formula (3) by a Taylor series expansion, as shown in formulas (4) and (5):

l_{(X,Y)}(\mathbf{0}) = \sum_{k=0}^{i} \frac{1}{k!}\, \nabla^{k} l_{(X,Y)}(g)\, (-g)^{k} + R_{i}(g) \qquad (4)

l_{(X,Y)}(\mathbf{0}) = l_{(X,Y)}(g) - \nabla l_{(X,Y)}(g)^{\top} g + R_{1}(g) \qquad (5)

R_i(g) in formula (4) denotes the Lagrangian remainder term, and R_1(g) in formula (5) denotes the Lagrangian remainder after the first-order expansion;

step 4.3, combining formula (3) and formula (5) gives the optimization formula shown in formula (6):

\Delta l_{(X,Y)}(g) = \left|\, \nabla l_{(X,Y)}(g)^{\top} g - R_{1}(g) \,\right| \qquad (6)

step 4.4, the Lagrangian remainder R_1 entails a large amount of computation and is omitted in order to save computing power, so the significance score can be calculated by back-propagation; for each connection c_i ∈ C, S(g_i) is used as the significance score, and S(g_i) is calculated by formula (7):

S(g_{i}) = \left| \frac{\partial l_{(X,Y)}(g)}{\partial g_{i}} \right| \qquad (7)

step 4.5, sorting the significance scores from largest to smallest.
5. The pruning-based convolutional neural network compression method according to claim 4, wherein the step (4) prunes the convolutional neural network and keeps the top-k connections to make the network sparse, and comprises the following specific steps:

step 5.1, presetting the sparsity level of the pruned model, as shown in formula (8):

\hat{\kappa} = \frac{m - \kappa}{m} \qquad (8)

in formula (8), m denotes the number of all parameters and κ denotes the desired number of non-zero parameters;

step 5.2, given the sparsity level κ̂, the connections in the network model can be cut arbitrarily while the κ connections with the largest significance scores (in absolute value) are retained.
6. A pruning-based convolutional neural network compression system, comprising:
a preprocessing module: used for acquiring training data and preprocessing the training data, wherein the preprocessing comprises random cropping, random vertical flipping and normalization;
an initialization module: used for initializing the weights of the convolutional neural network model;
a significance calculation and ranking module: used for calculating the significance of the network connections, taking the significance score as the criterion for evaluating the importance of a connection, and ranking the significances, wherein, in order to reduce the amount of computation and accelerate the pruning process, a Taylor expansion is adopted to approximately analyze the influence of pruning on the loss function;
a pruning module: used for pruning the convolutional neural network, where pruning is defined as the optimization problem shown in formula (1):

\min_{c^{*} \subseteq C,\ |c^{*}| \le \kappa} \left|\, l(X, Y; \theta_{c^{*}}) - l(X, Y; \theta) \,\right| \qquad (1)

in formula (1), l(X, Y; θ) denotes the loss function of the training model, X denotes the input data, Y the corresponding labels, and θ ∈ R^m the model parameters; pruning means selecting a subset c* ⊆ C of all connections C and removing the parameters θ_{C\c*} from the network, the remaining parameters being denoted θ_{c*}; minimizing the growth of the loss function therefore requires choosing an appropriate c*; the top-k connections are kept, so that the network becomes sparse;
a training module: used for training the sparse convolutional neural network model until the model converges.
7. A medium having a computer program stored therein, wherein the computer program, when read by a processor, performs the method of any of the preceding claims 1 to 5.
CN201911349816.4A 2019-12-24 2019-12-24 Pruning-based convolutional neural network compression method, system and medium Pending CN111178525A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911349816.4A CN111178525A (en) 2019-12-24 2019-12-24 Pruning-based convolutional neural network compression method, system and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911349816.4A CN111178525A (en) 2019-12-24 2019-12-24 Pruning-based convolutional neural network compression method, system and medium

Publications (1)

Publication Number Publication Date
CN111178525A (en) 2020-05-19

Family

ID=70657924

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911349816.4A Pending CN111178525A (en) 2019-12-24 2019-12-24 Pruning-based convolutional neural network compression method, system and medium

Country Status (1)

Country Link
CN (1) CN111178525A (en)


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112016674A (en) * 2020-07-29 2020-12-01 魔门塔(苏州)科技有限公司 Knowledge distillation-based convolutional neural network quantification method
CN112217663A (en) * 2020-09-17 2021-01-12 暨南大学 Lightweight convolutional neural network security prediction method
CN112217663B (en) * 2020-09-17 2023-04-07 暨南大学 Lightweight convolutional neural network security prediction method
CN112766491A (en) * 2021-01-18 2021-05-07 电子科技大学 Neural network compression method based on Taylor expansion and data driving
CN113065636A (en) * 2021-02-27 2021-07-02 华为技术有限公司 Pruning processing method, data processing method and equipment for convolutional neural network
CN113065636B (en) * 2021-02-27 2024-06-07 华为技术有限公司 Pruning processing method, data processing method and equipment for convolutional neural network
CN113052300A (en) * 2021-03-29 2021-06-29 商汤集团有限公司 Neural network training method and device, electronic equipment and storage medium
CN113052300B (en) * 2021-03-29 2024-05-28 商汤集团有限公司 Neural network training method and device, electronic equipment and storage medium
CN113762463A (en) * 2021-07-26 2021-12-07 华南师范大学 Model pruning method and system for raspberry pi processor
CN114330713A (en) * 2022-01-11 2022-04-12 平安科技(深圳)有限公司 Convolutional neural network model pruning method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111178525A (en) Pruning-based convolutional neural network compression method, system and medium
CN112163465B (en) Fine-grained image classification method, fine-grained image classification system, computer equipment and storage medium
CN113378632A (en) Unsupervised domain pedestrian re-identification algorithm based on pseudo label optimization
CN110598029A (en) Fine-grained image classification method based on attention transfer mechanism
US9070047B2 (en) Decision tree fields to map dataset content to a set of parameters
CN111898703B (en) Multi-label video classification method, model training method, device and medium
CN109740734B (en) Image classification method of convolutional neural network by optimizing spatial arrangement of neurons
CN113505797B (en) Model training method and device, computer equipment and storage medium
CN115359074B (en) Image segmentation and training method and device based on hyper-voxel clustering and prototype optimization
CN110263644B (en) Remote sensing image classification method, system, equipment and medium based on triplet network
CN111191739B (en) Wall surface defect detection method based on attention mechanism
CN110363218B (en) Noninvasive embryo assessment method and device
CN104318271B (en) Image classification method based on adaptability coding and geometrical smooth convergence
CN112861718A (en) Lightweight feature fusion crowd counting method and system
CN114676777A (en) Self-supervision learning fine-grained image classification method based on twin network
CN112418327A (en) Training method and device of image classification model, electronic equipment and storage medium
CN114612728A (en) Model training method and device, computer equipment and storage medium
CN114358197A (en) Method and device for training classification model, electronic equipment and storage medium
CN116451093A (en) Training method of circuit fault analysis model and circuit fault analysis method
CN116434002A (en) Smoke detection method, system, medium and equipment based on lightweight neural network
CN112949590B (en) Cross-domain pedestrian re-identification model construction method and system
JP2020155010A (en) Neural network model compaction device
CN113392867A (en) Image identification method and device, computer equipment and storage medium
CN117152528A (en) Insulator state recognition method, insulator state recognition device, insulator state recognition apparatus, insulator state recognition program, and insulator state recognition program
CN116128044A (en) Model pruning method, image processing method and related devices

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200519
