CN115906937A - Model pruning method of interpretable CNN classification model - Google Patents

Model pruning method of interpretable CNN classification model

Info

Publication number
CN115906937A
Authority
CN
China
Prior art keywords
model
picture
filter
pruning
input picture
Prior art date
Legal status
Pending
Application number
CN202211390301.0A
Other languages
Chinese (zh)
Inventor
王世东
吴国伟
Current Assignee
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN202211390301.0A priority Critical patent/CN115906937A/en
Publication of CN115906937A publication Critical patent/CN115906937A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to an interpretable model pruning method for CNN classification models and belongs to the field of model compression. It addresses the high computational complexity, the large time and memory consumption, and the difficulty of deploying existing deep CNN models on terminal devices, as well as the lack of interpretability of existing model pruning algorithms. A training picture is input into the neural network model to be pruned and the feature map matrix of each convolutional layer is extracted; each feature map matrix is up-sampled to the size of the input picture and a saliency map is constructed through a normalization operation; the saliency map is multiplied element by element with the input picture to construct a weighted input picture; the weighted picture is subtracted element by element from the input picture to construct an attention area occlusion picture; the attention occlusion picture is input into the model to be pruned, the change in the model's accuracy is observed and taken as the importance score of the channel, and the channels are pruned to obtain a lightweight pruned model. The invention achieves a high pruning rate and improves the interpretability of the pruning process.

Description

Model pruning method of interpretable CNN classification model
Technical Field
The invention relates to the technical field of deep model compression, in particular to an interpretable model pruning method for CNN classification models.
Background
As one of the most important deep models, the convolutional neural network has good feature extraction and generalization capability and has achieved great success in fields such as image processing, target tracking and detection, and natural language processing. However, this success has led to ever deeper and wider network structures, so deploying convolutional neural networks on edge devices consumes large amounts of storage and computing resources. For example, the classic VGG-16 model has as many as 138M parameters and requires 15.5 billion floating-point operations in the forward inference process to classify a single picture.
Model pruning is a common model compression method; it can effectively relieve the over-parameterization of deep models and needs no support from special hardware or software. Currently common pruning methods mainly use the weights of the convolution kernels in the convolutional network to evaluate the kernels' importance. Affected by fluctuations in the input data, the compression rate, interpretability and accuracy achievable by existing methods still leave room for improvement.
Disclosure of Invention
To remedy the defects of existing methods, the invention provides an interpretable model pruning method for CNN classification models. It aims to compress the computation and parameter counts of existing deep learning models, improve the identification of redundant parameters within them, solve the problems of low compression ratio and large impact on model accuracy that existing methods exhibit, and realize a visual interpretation of the pruning process through an organic combination with a model interpretation method.
The model interpretation method is combined with the model pruning process, realizing a visual interpretation of model pruning; by applying perturbation processing to the input picture, the importance of each filter of the model is evaluated relying only on the model's forward inference process. Because the method combines the traditional pruning pipeline with a model interpretation method and evaluates each filter through the model's forward inference, it achieves a large degree of compression with little impact on the model's original accuracy.
The technical scheme of the invention is as follows:
a model pruning method of an interpretable CNN classification model comprises the following steps:
1) Constructing a reference model: a predetermined deep convolutional neural network (CNN) classification model is trained on a predefined data set, obtaining a reference model that has a certain generalization capability and contains redundant parameters. The specific steps are as follows: first, the predetermined CNN classification model is built using the Python programming language and the PyTorch deep learning framework; then the model is trained on the data set with the predetermined training parameters, and its accuracy is recorded as the benchmark score, giving the reference model.
2) Acquiring the filter attention area occlusion pictures: to evaluate the role each filter of the CNN classification model plays during inference, the attention area of each filter in the input picture must be obtained, and that area is then occluded to simulate the effect of deleting the filter on the model. The specific steps are as follows: first, a picture from the training set is input into the reference model obtained in step 1), and the model's feature map matrices for a certain convolutional layer together with the picture's classification score are extracted; then an up-sampling operation is performed on each feature map matrix, enlarging it to the size of the input picture; the up-sampled map is normalized, limiting its elements to the range 0-1 and giving the saliency map of each filter; further, each saliency map is multiplied element by element with the input picture to obtain a weighted input picture; the weighted picture is passed through a ReLU activation function, keeping the part greater than zero (the part the filter is interested in) to obtain the attention area map; finally, each attention area map is subtracted element by element from the original input picture, giving an attention area occlusion picture that retains all the information of the original input picture except the attention area of one filter.
3) Filter importance score generation: the importance of a convolutional layer's filters is evaluated by observing the accuracy change of the model to be pruned, using only the model's forward inference process. The specific steps are as follows: first, the attention area occlusion pictures obtained in step 2) are input into the reference model obtained in step 1), and the classification score of each occlusion picture is recorded; the score of each occlusion picture is subtracted from the benchmark score recorded in step 1), giving a score for the role the corresponding filter plays in inferring this input picture. Different filters are activated to different degrees by different classes of pictures, and scores computed from a single picture cannot reflect each filter's effect on recognizing all classes; therefore n pictures belonging to different classes are randomly sampled from the training data set and the above process is repeated, giving n scores per filter. These scores are summed and then normalized, finally yielding each filter's comprehensive contribution score for judging all classes of pictures.
4) Pruning: the comprehensive contribution scores obtained in step 3) are sorted from small to large, all filters whose scores are smaller than the predefined pruning threshold are marked as redundant, and all parameters related to these filters are deleted directly, including the parameters of the convolution kernel, the parameters of the adjacent batch normalization layer, and the corresponding input channels of the next convolutional layer (see the sketch below). These pruning steps are applied to every convolutional layer of the reference model obtained in step 1), from input to output, and after each layer is pruned the model's generalization capability is recovered through fine-tuning, giving the lightweight model.
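As an illustration of the deletion described in step 4), the following is a minimal sketch of the channel surgery for one Conv-BN-Conv group in PyTorch (the framework named in step 1); the function prune_conv_bn and its arguments are illustrative names introduced here, not part of the invention:

import torch
import torch.nn as nn

def prune_conv_bn(conv, bn, next_conv, scores, threshold=0.5):
    # Keep only the filters whose comprehensive contribution score reaches the threshold.
    keep = torch.nonzero(scores >= threshold).flatten()
    new_conv = nn.Conv2d(conv.in_channels, len(keep), conv.kernel_size,
                         conv.stride, conv.padding, bias=conv.bias is not None)
    new_conv.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        new_conv.bias.data = conv.bias.data[keep].clone()
    # The adjacent batch normalization layer keeps only the surviving channels.
    new_bn = nn.BatchNorm2d(len(keep))
    for name in ("weight", "bias", "running_mean", "running_var"):
        getattr(new_bn, name).data = getattr(bn, name).data[keep].clone()
    # The input channels of the next convolutional layer shrink accordingly.
    new_next = nn.Conv2d(len(keep), next_conv.out_channels, next_conv.kernel_size,
                         next_conv.stride, next_conv.padding,
                         bias=next_conv.bias is not None)
    new_next.weight.data = next_conv.weight.data[:, keep].clone()
    if next_conv.bias is not None:
        new_next.bias.data = next_conv.bias.data.clone()
    return new_conv, new_bn, new_next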
The predefined data set in step 1) includes, but is not limited to, common classification data sets such as CIFAR-10, CIFAR-100 and ImageNet, and may also be a classification data set collected by the user.
The network in step 1) is a CNN classification network built with machine learning methods; its structure is complex enough to support feature mining on the predefined data set, and no specific limitation is placed on the concrete structure of the convolutional network, the type of its convolutional layers or the depth of the convolutions.
the training parameters in the step 1) comprise learning rate, learning rate attenuation mode, learning round, batch size and the like.
The up-sampling operation in step 2) uses an interpolation algorithm such as bilinear interpolation; the normalization operation uses max-min normalization.
Each element of the saliency map in step 2) represents the attention score of a certain filter for the corresponding pixel of the input picture.
The filter importance score of a single picture obtained in step 3) may be positive or negative: a positive score indicates that the content the filter focuses on plays a positive role in the model's inference process, while a negative score indicates that the filter interferes with the model's inference process.
The normalization operation in step 3) uses max-min normalization.
The pruning threshold set in step 4) is a decimal in the range 0-1, generally set to 0.5 or 0.6.
The beneficial effects of the invention are as follows: compared with the prior art, the method combines the model pruning process with a model interpretation method, compresses the deep CNN model, effectively improves the model's compression ratio, and has little influence on its accuracy. The attention area of each filter in the input picture is acquired through the model interpretation method and displays the information the filter captures in a visual form; perturbing the input picture with the filter attention area occlusion pictures allows the model as a whole to evaluate each filter's importance, eliminating the shortcoming of insufficient single-layer data; and the invention takes the filters of the convolutional layers as the unit of pruning, does not damage the model's overall structure, needs no special software or hardware support, and therefore has a good application prospect for deploying deep CNN networks on edge devices.
Drawings
Fig. 1 is an overall flowchart of a model pruning method of an interpretable CNN classification model according to the present invention.
Fig. 2 is an exemplary diagram of an input picture according to the present embodiment.
FIG. 3 is an exemplary diagram of a saliency map of a filter as described in this example.
FIG. 4 is an exemplary illustration of the attention area of the filter according to this example.
FIG. 5 is an exemplary illustration of the obscuring filter attention area according to the present example.
Detailed Description
The following further describes a specific embodiment of the present invention with reference to the drawings and technical solutions.
As shown in Fig. 1, the interpretable model pruning method for a CNN classification model includes constructing a reference model, obtaining the filter attention area occlusion pictures, generating the filter importance scores, and pruning, with the following specific steps:
step 1, training a reference model. In the embodiment, a ResNet-50 network is constructed to realize the classification task of the ImageNet data set, and the ResNet-50 initial model structure is shown in Table 1: data enhancement operations such as random horizontal turning and random cutting are carried out on the pictures in the data set ImageNet to enhance the diversity of the data, and meanwhile, normalization operation is carried out on the pictures; building a neural network according to a network structure of ResNet-50, and setting the input specification of the network to be 224 × 3, wherein 3 channels represent RGB channels in a classified manner; the batch size, momentum, and weight decay in training were set to 256, 0.9, and 0.0001, respectively; training 120 times by adopting a random gradient algorithm, setting the initial learning rate to be 0.5, and updating the learning rate by adopting a cosine annealing strategy; thus, a reference model which can complete the classification task on ImageNet and contains redundant parameters is obtained (the accuracy of the model Top-1 is 76.15%, and the accuracy of the model Top-5 is 92.87%).
TABLE 1 ResNet-50 model structure before pruning
Convolutional layer Number of output channels
Conv-1 64
Layer1_bottleneck0 64,64,256
Layer1_bottleneck1 64,64,256
Layer1_bottleneck2 64,64,256
Layer2_bottleneck0 128,128,512
Layer2_bottleneck1 128,128,512
Layer2_bottleneck2 128,128,512
Layer2_bottleneck3 128,128,512
Layer3_bottleneck0 256,256,1024
Layer3_bottleneck1 256,256,1024
Layer3_bottleneck2 256,256,1024
Layer3_bottleneck3 256,256,1024
Layer3_bottleneck4 256,256,1024
Layer3_bottleneck5 256,256,1024
Layer4_bottleneck0 512,512,2048
Layer4_bottleneck1 512,512,2048
Layer4_bottleneck2 512,512,2048
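The following is a minimal sketch of the training setup of this embodiment, assuming torchvision's ResNet-50 and an ImageNet DataLoader; train_loader is a stand-in for the augmented data pipeline described above, not something defined by the patent:

import torch
import torchvision

def train_reference_model(train_loader, epochs=120, device="cuda"):
    model = torchvision.models.resnet50(num_classes=1000).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.5,
                                momentum=0.9, weight_decay=1e-4)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
    criterion = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        model.train()
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            criterion(model(images), labels).backward()
            optimizer.step()
        scheduler.step()   # cosine annealing of the learning rate
    return model           # its validation accuracy is recorded as the benchmark score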
Step 2, obtaining the attention area occlusion pictures: take the layer4_bottleneck2_conv3 convolutional layer of ResNet-50 (the third convolutional layer of the last bottleneck structure of the model's fourth layer) as an example, and assume the model's input picture is a picture X of a snake. The input to this convolutional layer is the 7 × 7 × 512 intermediate feature maps generated by the previous layer; each filter generates a feature map matrix of size 7 × 7, and all the feature maps constitute a set A_3.
Each feature map in A_3 is up-sampled to 224 × 224 size using the bilinear interpolation formula below, giving the up-sampled feature map set A_3up. The maximum and minimum values of each matrix in A_3up are then computed and the matrix is normalized with the formula below, giving the saliency map set A_3sal of the filters. Further, each matrix in A_3sal is multiplied element by element with the input picture to obtain the weighted input pictures A_3weighted, and a ReLU is applied to the weighted pictures to remove their negative parts, giving the final attention area maps. Because all elements of A_3weighted have been normalized to the range [0, 1], the final occlusion pictures A_3mask can be obtained by directly subtracting A_3weighted from 1:
f(x, y) = [f(x_1, y_1)(x_2 − x)(y_2 − y) + f(x_2, y_1)(x − x_1)(y_2 − y) + f(x_1, y_2)(x_2 − x)(y − y_1) + f(x_2, y_2)(x − x_1)(y − y_1)] / [(x_2 − x_1)(y_2 − y_1)]

A_3sal = (A_3up − min A_3up) / (max A_3up − min A_3up)

A_3weighted = X · A_3sal

A_3mask = 1 − A_3weighted
where (x, y) denotes the point of the matrix to be estimated and (x_1, y_1), (x_2, y_2) are its known neighboring grid points; through this formula the feature map matrix is enlarged to the size of the input picture. A_3up denotes a matrix after the up-sampling operation, min A_3up the minimum value of the matrix and max A_3up its maximum value; A_3sal denotes a saliency map, each element of which represents the attention score of a certain filter for the corresponding pixel of the input picture; X · A_3sal denotes the element-by-element multiplication of the input picture with the saliency map; A_3weighted denotes a weighted input picture; A_3mask denotes an occlusion picture; and 1 denotes an all-ones matrix of the same size as the input picture.
wherein, the input picture in this example is shown in fig. 2; the filter saliency map is shown in FIG. 3; the filter attention area is shown in FIG. 4; the filter occlusion picture is shown in fig. 5.
Step 3, generating the filter importance scores. Take the 7th, 27th and 31st filters of the layer4_bottleneck2_conv3 convolutional layer as examples: ResNet-50 correctly classifies the picture of the snake with a score of 0.71, and after the attention occlusion pictures of the three filters are input into the model, the scores with which the model successfully classifies the picture are 0.30, 0.56 and 0.46, respectively; the changes in the score are thus 0.41, 0.15 and 0.25. The 7th filter mainly focuses on the body of the snake, and after its attention area is occluded the model's score for correctly distinguishing the picture drops by 0.41, indicating that the part this filter focuses on is important to the model's inference; the 27th filter focuses mainly on the texture of the sand, and removing its attention area has essentially no effect on the model, which is consistent with intuition. After sampling 1000 pictures from the training data set and repeating the above process, the scores of the three filters are 0.9, 0.4 and 0.6, respectively, so the importance ranking of the three filters is: 7th > 31st > 27th.
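The following is a sketch of the score computation, assuming make_occlusions implements step 2 for the chosen layer and taking the classification score as the softmax probability of the true class (an interpretation; the description only speaks of the classification score):

import torch

@torch.no_grad()
def filter_importance(model, pictures, labels, make_occlusions, device="cuda"):
    model.eval()
    totals = None
    for x, y in zip(pictures, labels):                     # the n sampled pictures
        x = x.to(device)
        base = model(x.unsqueeze(0)).softmax(-1)[0, y]     # benchmark score
        occluded = make_occlusions(x)                      # (C, 3, H, W) from step 2
        drop = base - model(occluded).softmax(-1)[:, y]    # score change per filter
        totals = drop if totals is None else totals + drop
    # Sum over pictures, then max-min normalize into [0, 1] contribution scores.
    return (totals - totals.min()) / (totals.max() - totals.min() + 1e-8)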
Step 4, pruning. The pruning threshold is set to 0.46, so the 27th filter from the example in step 3 (score 0.4 < 0.46) is deleted. Steps 2-3 are repeated in turn for the first two convolutional layers of each bottleneck structure, from those near the input to those near the output, and the filters with importance scores smaller than 0.46 are deleted, realizing the pruning of the model. The convolutional layer closest to the input is not a pruning target, as pruning it would affect the extraction of information from the input picture; the third convolutional layer of each bottleneck is likewise excluded, because the residual structure fixes its number of output channels and pruning it would affect the connections between the model's layers. The pruning process for ResNet-50 can be described as:
Input: the trained ResNet-50 model
Pruning: repeat the following process for the first two convolutional layers of each bottleneck, from input to output:
sample 1000 pictures from the input data set;
calculate the score of each filter of this convolutional layer according to steps 2 and 3;
rank the filters by score and delete the filters whose scores are smaller than 0.46;
obtain the new model structure and inherit the parameters of the remaining filters from the old model;
fine-tune the new model on the training data set;
Output: the pruned ResNet-50 model
After pruning, the floating-point operations (FLOPs) of ResNet-50 are reduced by 56.4% and the parameter count by 55.5%. The Top-1 accuracy of the model becomes 72.45% (a decrease of 3.70 percentage points) and the Top-5 accuracy 90.63% (a decrease of 2.24 percentage points). The model structure after pruning is shown in Table 2:
TABLE 2 post-pruning ResNet-50 model structure
(The body of Table 2 appears only as images in the original publication and is not reproduced here.)
By combining the model pruning process with a model interpretation method, the method accurately finds each filter's attention area in the input picture and displays the information the filter captures in a visual form; by perturbing the input picture with the filter attention area occlusion pictures, the model as a whole evaluates each filter's importance, eliminating the shortcoming of insufficient single-layer data. The method identifies the redundant structures in the model more efficiently and accurately, improves the compression of the model's computation and storage consumption, and at the same time minimizes the impact of pruning on the model's accuracy.

Claims (9)

1. A model pruning method of an interpretable CNN classification model is characterized by comprising the following steps:
1) Constructing a reference model: training a predetermined deep convolutional neural network (CNN) classification model on a predefined data set to obtain a reference model; specifically: first, building the predetermined CNN classification model using the Python programming language and the PyTorch deep learning framework; then training the model on the data set with predetermined training parameters, and recording the model's accuracy as the benchmark score, giving the reference model;
2) Obtaining the filter attention area occlusion pictures: to evaluate the function of each filter of the CNN classification model during inference, acquiring the attention area of each filter in the input picture and then occluding that area to simulate the influence of deleting the filter on the model; specifically: first, inputting a picture from the training set into the reference model obtained in step 1) and extracting the model's feature map matrices for a certain convolutional layer together with the picture's classification score; then performing an up-sampling operation on each feature map matrix, enlarging it to the size of the input picture; normalizing the up-sampled map, limiting its elements to the range 0-1 and obtaining the saliency map of each filter; further, multiplying each saliency map element by element with the input picture to obtain a weighted input picture; passing the weighted picture through a ReLU activation function and keeping the part greater than zero to obtain the attention area map; finally, subtracting each attention area map element by element from the original input picture, obtaining an attention area occlusion picture that retains all the information of the original input picture except the attention area of one filter;
3) Filter importance score generation: evaluating the importance of the convolutional layer's filters by observing the accuracy change of the model to be pruned, using the model's forward inference process; specifically: first, inputting the attention area occlusion pictures obtained in step 2) into the reference model obtained in step 1) and recording the classification score of each occlusion picture; subtracting each occlusion picture's score from the benchmark score recorded in step 1) to obtain the score for the role the corresponding filter plays in inferring the input picture; since different filters are activated to different degrees by different classes of pictures, and scores computed from a single picture cannot reflect each filter's effect on recognizing all classes, randomly sampling n pictures belonging to different classes from the training data set and repeating the process to obtain n scores per filter; further, summing the scores and then normalizing; finally obtaining each filter's comprehensive contribution score for judging all classes of pictures;
4) Pruning: sorting the comprehensive contribution scores obtained in step 3) from small to large, marking all filters whose scores are smaller than the predefined pruning threshold as redundant, and directly deleting all parameters related to these filters; applying the pruning steps to every convolutional layer of the reference model obtained in step 1), from input to output, and after each layer is pruned, recovering the model's generalization capability through fine-tuning, giving the lightweight model.
2. The model pruning method for the interpretable CNN classification model according to claim 1, wherein the predefined data set of step 1) includes the common classification data sets CIFAR-10, CIFAR-100 and ImageNet, as well as classification data sets collected by the user.
3. The model pruning method for the interpretable CNN classification model according to claim 1, wherein the training parameters of step 1) include learning rate, learning rate decay pattern, learning round and batch size.
4. The model pruning method for the interpretable CNN classification model according to claim 1, wherein the up-sampling operation in step 2) uses an interpolation algorithm such as bilinear interpolation, and the normalization operation is max-min normalization.
5. The model pruning method for the interpretable CNN classification model according to claim 1, wherein the filter importance score of a single picture obtained in step 3) may be positive or negative; a positive score indicates that the content the filter focuses on plays a positive role in the model's inference process, while a negative score indicates that the filter interferes with the model's inference process.
6. The model pruning method for the interpretable CNN classification model according to claim 1, wherein the normalization in step 3) is performed by using max-min normalization.
7. The model pruning method for the interpretable CNN classification model according to claim 1, wherein the pruning threshold set in step 4) is a decimal number in the range of 0 to 1.
8. The model pruning method for the interpretable CNN classification model according to claim 1, wherein step 2) generates the attention occlusion pictures from the feature maps based on the following formulas:
f(x, y) = [f(x_1, y_1)(x_2 − x)(y_2 − y) + f(x_2, y_1)(x − x_1)(y_2 − y) + f(x_1, y_2)(x_2 − x)(y − y_1) + f(x_2, y_2)(x − x_1)(y − y_1)] / [(x_2 − x_1)(y_2 − y_1)]

A_3sal = (A_3up − min A_3up) / (max A_3up − min A_3up)

A_3weighted = X · A_3sal

A_3mask = 1 − A_3weighted
wherein (x, y) denotes the point of the matrix to be estimated and (x_1, y_1), (x_2, y_2) are its known neighboring grid points, the formula enlarging the feature map matrix to the size of the input picture; A_3up denotes a matrix after the up-sampling operation, min A_3up the minimum value of the matrix and max A_3up its maximum value; A_3sal denotes a saliency map, each element of which represents the attention score of a certain filter for the corresponding pixel of the input picture; X · A_3sal denotes the element-by-element multiplication of the input picture with the saliency map; A_3weighted denotes a weighted input picture; A_3mask denotes an occlusion picture; and 1 denotes an all-ones matrix of the same size as the input picture.
9. The model pruning method for the interpretable CNN classification model according to claim 1, wherein the parameters related to the redundant filters deleted in step 4) include the parameters of the convolution kernel, the parameters of the adjacent batch normalization layer and the corresponding input channels of the next convolutional layer.
CN202211390301.0A 2022-11-08 2022-11-08 Model pruning method of interpretable CNN classification model Pending CN115906937A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211390301.0A CN115906937A (en) 2022-11-08 2022-11-08 Model pruning method of interpretable CNN classification model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211390301.0A CN115906937A (en) 2022-11-08 2022-11-08 Model pruning method of interpretable CNN classification model

Publications (1)

Publication Number Publication Date
CN115906937A true CN115906937A (en) 2023-04-04

Family

ID=86475760

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211390301.0A Pending CN115906937A (en) 2022-11-08 2022-11-08 Model pruning method of interpretable CNN classification model

Country Status (1)

Country Link
CN (1) CN115906937A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117035044A (en) * 2023-10-08 2023-11-10 安徽农业大学 Filter pruning method based on output activation mapping, image classification system and edge equipment
CN117035044B (en) * 2023-10-08 2024-01-12 安徽农业大学 Filter pruning method based on output activation mapping, image classification system and edge equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination