CN110147834A - Fine-grained image classification method based on sparsified bilinear convolutional neural networks - Google Patents

Fine-grained image classification method based on sparsified bilinear convolutional neural networks

Info

Publication number
CN110147834A
CN110147834A
Authority
CN
China
Prior art keywords
loss
bilinear
sparse
training
sparsification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910387272.4A
Other languages
Chinese (zh)
Inventor
王永雄
马力
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology
Priority to CN201910387272.4A
Publication of CN110147834A
Legal status: Pending

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a fine-grained image classification method based on sparsified bilinear convolutional neural networks, in which the feature channels of a bilinear convolutional neural network are pruned. During training, the feature channels are automatically sparsified and their importance for classification is distinguished; the channels are sorted by importance and pruned proportionally. The output of the bilinear convolutional neural network is fed into batch normalization, the scaling factor of the BN layer is used as the channel scale factor, and a regularization method is applied to it. Several regularization methods are possible, such as L1 and L2, of which L1 yields stronger sparsity. By jointly training the network weights and the scale factors, sparsity of the feature channels is achieved; pruning is then performed according to the sorted magnitudes of the sparsified scale factors, and fine-tuning yields the final model for the fine-grained image classification task. The method achieves weak supervision, reduces redundant parameters, prevents overfitting, and effectively improves the accuracy of fine-grained image classification.

Description

Fine-grained image classification method based on sparsified bilinear convolutional neural networks
Technical field
The present invention relates to image processing techniques, and in particular to a fine-grained image classification method based on sparsified bilinear convolutional neural networks.
Background art
Fine-grained image classification based on ordinary convolutional neural networks usually requires strongly supervised learning to obtain good classification results, and the training images need a large amount of manual annotation. Some better weakly supervised learning methods require only image-level labels, but because they have too many parameters they easily overfit, and the gap between training accuracy and test accuracy is large.
Summary of the invention
The present invention is directed at the problems existing in fine-grained image classification based on ordinary convolutional neural networks, and proposes a fine-grained image classification method based on sparsified bilinear convolutional neural networks, which achieves weak supervision, reduces redundant parameters, prevents overfitting, and effectively improves the accuracy of fine-grained image classification.
The technical solution of the present invention is as follows: a fine-grained image classification method based on sparsified bilinear convolutional neural networks, specifically comprising the following steps:
1) Establish the bilinear convolutional neural network: first construct the bilinear model for fine-grained image classification. The two feature extraction channels A and B both use the VGG-16 network; the features output by the bilinear model are fused by the matrix outer product operation to obtain a bilinear feature. The bilinear features of all positions in the image are summed to obtain the global image representation, and the resulting feature vector is fed into the final classification function C for classification;
2) In the feature extraction network VGG-16, the input and output of each convolutional layer are called feature channels, and each convolutional layer includes its activation function. A BN layer is inserted after each convolutional layer, and a regularization operation is applied to the scale factor γ in the BN layer to sparsify it, thereby forming a sparse layer and obtaining the sparsified bilinear model;
3) Train the constructed sparsified bilinear model to obtain the final model:
First, coarse training: set a larger learning rate and train only the final softmax classification layer of the model; the training runs for 50 to 100 epochs;
Second, fine-tuning: set a smaller learning rate, determined by the specific data set, and train all parameters of the model; the training runs for 50 epochs;
Finally, pruning and fine-tuning: prune the feature channels according to the set threshold and fine-tune, with the same learning rate as the second step, for 20 to 50 epochs; the final model is obtained after training.
The construction of the sparse layer in step 2) uses the regularized activation mode of the BN layer. The BN layer uses mini-batch statistics to standardize the internal activations; the specific method is as follows:
Let x_in and x_out be the input and output of the BN layer, and let B denote the current mini-batch; the BN layer performs the transformation

x̂ = (x_in − μ_B) / √(σ_B² + ε),    x_out = γ·x̂ + β

The input of the BN layer is also the output of the previous convolutional layer and has m outputs, so the transformation is applied to the m input-output pairs in the current mini-batch;
ε is a constant that prevents the denominator from being zero;
μ_B and σ_B are the mean and standard deviation of the input activations over mini-batch B, and x̂ is the input x_in after standardization. The scale factor γ and offset parameter β are trainable affine transformation parameters, which can linearly transform the standardized activation to any scale.
To achieve sparsity of the scale factors γ during training, a sparse penalty term is added to the training objective function of the sparsified bilinear model. The training objective function is shown in formula (10):

L_loss = Σ_(x,y) l_loss(f(x, W), y) + λ Σ_(γ∈Γ) g(γ)    (10)

l_loss = H(p, q) = −Σ_x p(x) log q(x)    (11)

L_loss is the loss function of the whole sparsified bilinear model, and l_loss is the loss function of the non-sparse bilinear model, namely the cross entropy; L_loss is obtained by improving l_loss. The first term l_loss in formula (10) is the loss function of the original B-CNN algorithm; the cross-entropy loss function is used here. In formula (11), p(x) is the true value and q(x) is the predicted value of the cross-entropy function, and the computed cross entropy l_loss is the distance between the two probability distributions. (x, y) is an input picture and its true label, W is the trainable weights, and f(x, W) is the prediction function of the model, whose output is the predicted value. The second term in formula (10) is the sparse penalty term: g(γ) is the regularization operation on the scale factor γ, Γ is the set of all scale factors, and g(·) can be L1 or L2 regularization. When L1 regularization is used, the non-smooth L1 penalty term needs to be optimized with a sub-gradient method, or a smooth approximation of L1 can be used instead. λ is the parameter that controls the degree of sparsity and prevents the scale factors from being sparsified so strongly that important channel features are lost; experiments give λ = 10⁻⁵ as a good value.
The beneficial effects of the present invention are: the fine-grained image classification method based on sparsified bilinear convolutional neural networks of the present invention is a weakly supervised fine-grained image classification technique with a smaller number of parameters and higher accuracy. Training data no longer require a large amount of manual annotation, the parameters of the bilinear convolutional neural network are reduced, overfitting is prevented, and accuracy is improved.
Detailed description of the invention
Fig. 1 is the bilinear model diagram for fine-grained image classification according to the present invention;
Fig. 2 is the sparsification and pruning process diagram of the present invention;
Fig. 3 is the sparsified bilinear model diagram of the present invention;
Fig. 4 is the diagram of the process by which the model of the present invention is obtained.
Specific embodiment
A novel, simple pruning technique is used to prune the feature channels of a bilinear convolutional neural network. During training, the feature channels are automatically sparsified and their importance for classification is distinguished; the channels are sorted by importance and pruned proportionally. Pruning techniques in common network model compression methods include layer-level, channel-level, and weight-level pruning. Layer-level pruning is too coarse, is not suitable for fine-grained image classification, and easily loses important features; weight-level pruning is computationally too complex and increases the complexity of the algorithm. Channel-level pruning strikes a balance between flexibility and ease of implementation. However, common channel pruning techniques are not suitable for the pre-trained models commonly used in deep-learning-based computer vision. We feed the output of the bilinear convolutional neural network into batch normalization (Batch Normalization), use the BN scaling factor as the channel scale factor, and apply a regularization method to it. There are several regularization methods, such as L1 and L2, of which L1 yields stronger sparsity. By jointly training the network weights and the scale factors, sparsity of the feature channels is achieved; pruning is then performed according to the sorted magnitudes of the sparsified scale factors, and finally fine-tuning yields the model used for the fine-grained image classification task.
The fine-grained image classification accuracy of the resulting model is higher than that of most other weakly supervised and strongly supervised learning methods, and the number of parameters is small.
The fine-grained image classification method based on sparsified bilinear convolutional neural networks comprises the following steps:
Step 1: As shown in Fig. 1, first construct the bilinear model for fine-grained image classification, in which networks A and B both use VGG-16 and can be truncated at relu5_3 of VGG-16 or at another activation layer; the fully shared network form is used here. A and B of the bilinear model are characterized by the feature extraction functions f_A and f_B, each a mapping f: L × I → R^(C×D), where L is the set of locations in the input image and I is the input image; both are mapped to a C × D-dimensional feature. Finally, the output features of f_A and f_B are fused by the matrix outer product operation to obtain a bilinear feature, as shown in formula (1):

b(l, i, f_A, f_B) = f_A(l, i)^T f_B(l, i)    (1)

where i ∈ I and l ∈ L (i is an image patch and l is the location of the patch in the image). f_A and f_B must have the same feature dimension C, and the dimension C is determined by the model. The pooling function P, shown in formula (2), sums the bilinear features of all locations in the image to obtain the global image representation. If the features extracted by f_A and f_B are C × M and C × N, then the bilinear feature φ(I) output by formula (2) has dimension M × N:

φ(I) = Σ_(l∈L) bilinear(l, i, f_A, f_B)    (2)

The feature φ(I) is reshaped into an MN × 1 column vector, denoted x, which is the finally extracted feature. The MN × 1 feature vector is fed into the final classification function C, which classifies with the softmax function, as shown in formula (3):

C(x)_i = e(x)_i / Σ_j e(x)_j    (3)

The softmax function is commonly used for multi-class classification and maps the feature values into the interval (0, 1), where e(x)_i is the weight value of class i, Σ_j e(x)_j is the sum of the weight values over all classes, and C(x)_i is the probability, output by the network, of belonging to class i.
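As an illustrative sketch of this step (assuming PyTorch and torchvision; the class name SketchBCNN, the 200-class output, and the randomly initialized weights are assumptions made for brevity, whereas the patent starts from an ImageNet pre-trained VGG-16), the plain bilinear model of formulas (1)-(3), before the sparse layers of step 2 are inserted, might look like:

    import torch
    import torch.nn as nn
    import torchvision.models as models

    class SketchBCNN(nn.Module):
        """Minimal bilinear CNN sketch with a fully shared VGG-16 stream (f_A = f_B)."""

        def __init__(self, num_classes=200):
            super().__init__()
            vgg = models.vgg16(weights=None)  # the patent starts from ImageNet pre-trained weights
            # Truncate at relu5_3 by dropping the final max-pooling layer of vgg.features.
            self.features = nn.Sequential(*list(vgg.features.children())[:-1])
            # Classification function C on the MN x 1 bilinear vector (M = N = C = 512 here).
            self.fc = nn.Linear(512 * 512, num_classes)

        def forward(self, x):
            f = self.features(x)                   # (batch, 512, 28, 28) for a 448 x 448 input
            b, c, h, w = f.shape
            f = f.view(b, c, h * w)
            # Formulas (1) and (2): outer product at each location l, summed over all l.
            phi = torch.bmm(f, f.transpose(1, 2))  # (batch, 512, 512) bilinear feature
            x = phi.view(b, -1)                    # reshaped to the MN x 1 feature vector
            return self.fc(x)                      # softmax of formula (3) is applied in the loss

    logits = SketchBCNN()(torch.randn(2, 3, 448, 448))  # sanity check: shape (2, 200)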
Step 2: In the feature extraction network VGG-16, a feature channel refers to the input and output of each convolutional layer (here, a convolutional layer includes its activation function). A BN layer is inserted after each convolutional layer, and a regularization operation is applied to the scale factor γ in the BN layer to sparsify it, thereby forming a sparse layer. Because the bilinear vector after bilinear pooling is redundant and the model overfits, channel pruning is applied to the feature extraction network to solve the overfitting of the model. VGG-16 uses a model pre-trained on ImageNet, so the input and output weights are not all zero or close to zero, and common channel-level pruning cannot be used.
Therefore, a corresponding scale factor γ (γ ≥ 0) is introduced for each feature channel, as shown in Fig. 2, and the sparse layer composed of the γ values implements the feature channel screening function. The construction of the sparse layer uses the regularized activation mode of the BN layer, which provides a simple and effective way of fusing the channel scale factors; γ is the scale factor in formula (9). The BN layer uses mini-batch statistics to standardize the internal activations; the specific method is as follows:
Let x_in and x_out be the input and output of the BN layer, and let B denote the current mini-batch; the BN layer performs the following transformation:

x̂ = (x_in − μ_B) / √(σ_B² + ε),    x_out = γ·x̂ + β

The input of the BN layer is also the output of the previous convolutional layer and has m outputs, so the transformation is applied to the m input-output pairs in the current mini-batch;
ε is a constant that prevents the denominator from being zero;
μ_B and σ_B are the mean and standard deviation of the input activations over mini-batch B, and x̂ is the input x_in after standardization. The scale factor γ and offset parameter β are trainable affine transformation parameters, which can linearly transform the standardized activation to any scale.
After BN layers with channel-level scale factors and offset parameters β are inserted after the convolutional layers, the γ parameters in the BN layers can be used directly for network sparsification. This method introduces no extra overhead, which is a major advantage, and experiments show that it is the most effective way to prune by channel scale factor. The reasons are: 1) if sparsification is not realized through the BN layer, the scale factors are meaningless for assessing the importance of feature channels, because the convolutional layer and the sparse layer are both linear transformations, and the same result can be obtained by shrinking the scale factors while amplifying the weights in the convolutional layer; 2) if a sparse layer containing the scale factors is inserted before the BN layer, the scaling effect of the scaling layer is cancelled by the normalization in BN; 3) if a sparse layer containing the scale factors is inserted after the BN layer, each feature channel has two consecutive scale factors.
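The following minimal sketch illustrates this design choice (assuming PyTorch, where the γ and β of a BN layer are exposed as bn.weight and bn.bias; the helper names are illustrative): the BN layer inserted after each convolution supplies the per-channel scale factors, and the set Γ is simply the collection of all BN weight vectors:

    import torch.nn as nn

    def conv_bn_relu(in_ch, out_ch):
        """Conv + BN + ReLU block: the per-channel gamma of the BN layer (bn.weight)
        doubles as the channel scale factor of the sparse layer, so no extra layer
        or computational overhead is introduced."""
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),   # x_out = gamma * x_hat + beta
            nn.ReLU(inplace=True),
        )

    def scale_factors(model):
        """Collect every gamma tensor, i.e. the set Gamma of all channel scale factors."""
        return [m.weight for m in model.modules() if isinstance(m, nn.BatchNorm2d)]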
To control the sparsity of the scale factors during training, a sparse penalty term is added to the training objective function of the B-CNN (the sparsified bilinear model). The training objective function is shown in formula (10):

L_loss = Σ_(x,y) l_loss(f(x, W), y) + λ Σ_(γ∈Γ) g(γ)    (10)

l_loss = H(p, q) = −Σ_x p(x) log q(x)    (11)

L_loss is the loss function of the whole sparsified bilinear model, and l_loss is the loss function of the non-sparse bilinear model, namely the cross entropy; L_loss is obtained by improving l_loss. The first term l_loss in formula (10) is the loss function of the original B-CNN algorithm; the cross-entropy loss function of formula (11) is used here, where p(x) is the true value and q(x) is the predicted value of the cross-entropy function, and the computed cross entropy l_loss is the distance between the two probability distributions. (x, y) is an input picture and its true label, W is the trainable weights, and f(x, W) is the prediction function of the model, whose output is the predicted value. The second term in formula (10) is the sparse penalty term: g(γ) is the regularization operation on the scale factor γ (Γ is the set of all scale factors), and g(·) can be L1 or L2 regularization. The two regularization methods were compared experimentally: compared with L2, L1 has a stronger sparsifying effect but loses some channel features, whereas L2 retains more channel features. When L1 regularization is used, the non-smooth L1 penalty term needs to be optimized with a sub-gradient method, or a smooth approximation of L1 can be used instead. λ is the parameter that controls the degree of sparsity and prevents the scale factors from being sparsified so strongly that important channel features are lost; experiments give λ = 10⁻⁵ as a good value.
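A sketch of the objective of formula (10), assuming PyTorch and choosing g(·) as L1 regularization (the function name sparse_bilinear_loss is illustrative); note that autograd differentiates the non-smooth |γ| by its sub-gradient, matching the sub-gradient optimization described above:

    import torch.nn as nn
    import torch.nn.functional as F

    def sparse_bilinear_loss(model, logits, targets, lam=1e-5):
        """Formula (10): cross entropy l_loss plus lambda times the sum of g(gamma) over Gamma."""
        l_loss = F.cross_entropy(logits, targets)   # formula (11), H(p, q)
        penalty = sum(m.weight.abs().sum()          # g(gamma) = |gamma|, i.e. L1 regularization
                      for m in model.modules() if isinstance(m, nn.BatchNorm2d))
        return l_loss + lam * penalty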
Step 3: Finally, train the constructed sparsified bilinear model (shown in Fig. 3):
The data set is preprocessed: the training images are resized to a fixed size of 448*448, randomly flipped, and zero-mean normalized, and are then fed into the model for training.
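This preprocessing might be written as follows (a sketch assuming torchvision; the normalization statistics shown are the usual ImageNet values and are an assumption, since the text specifies only zero-mean processing):

    import torchvision.transforms as T

    train_transform = T.Compose([
        T.Resize((448, 448)),        # fixed size 448 * 448
        T.RandomHorizontalFlip(),    # random flipping of the training images
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406],   # zero-mean step; ImageNet statistics assumed
                    std=[0.229, 0.224, 0.225]),
    ])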
As shown in Fig. 4, the training process is as follows: 1) First, coarse training: set a larger learning rate, e.g. 0.5 to 0.9, and train only the final softmax classification layer of the network for 50 to 100 epochs. 2) Second, fine-tuning: set a smaller learning rate, determined by the specific data set, e.g. 0.001 to 0.0001, and train all parameters (convolutional layers, sparse layers, classification layer) for 50 epochs. 3) Finally, pruning and fine-tuning: prune the feature channels according to the set threshold and fine-tune, with the same learning rate as the second step, for 20 to 50 epochs; the final model is obtained after training.
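The threshold pruning of stage 3 can be sketched as follows (assuming PyTorch; the helper name prune_by_ratio and the 0.5 pruning ratio are illustrative, and for simplicity this version only zeroes the scale factors below the threshold, whereas a full implementation would physically remove the pruned channels and rebuild the affected layers before fine-tuning):

    import torch
    import torch.nn as nn

    @torch.no_grad()
    def prune_by_ratio(model, prune_ratio=0.5):
        """Sort all BN scale factors by magnitude and mask out the smallest fraction."""
        gammas = torch.cat([m.weight.abs().flatten()
                            for m in model.modules() if isinstance(m, nn.BatchNorm2d)])
        threshold = gammas.sort().values[int(gammas.numel() * prune_ratio)]
        for m in model.modules():
            if isinstance(m, nn.BatchNorm2d):
                mask = (m.weight.abs() > threshold).float()
                m.weight.mul_(mask)   # masked channels now contribute nothing downstream
                m.bias.mul_(mask)
        return threshold

After pruning, the surviving parameters are fine-tuned for 20 to 50 epochs at the second-stage learning rate to obtain the final model.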

Claims (3)

1. A fine-grained image classification method based on sparsified bilinear convolutional neural networks, characterized by comprising the following steps:
1) establishing the bilinear convolutional neural network: first constructing the bilinear model for fine-grained image classification, in which the two feature extraction channels A and B both use the VGG-16 network; fusing the features output by the bilinear model through the matrix outer product operation to obtain a bilinear feature; summing the bilinear features of all positions in the image to obtain the global image representation; and feeding the resulting feature vector into the final classification function C for classification;
2) in the feature extraction network VGG-16, the input and output of each convolutional layer being called feature channels, each convolutional layer including its activation function; inserting a BN layer after each convolutional layer and applying a regularization operation to the scale factor γ in the BN layer to sparsify it, thereby forming a sparse layer and obtaining the sparsified bilinear model;
3) training the constructed sparsified bilinear model to obtain the final model:
first, coarse training: setting a larger learning rate and training only the final softmax classification layer of the model for 50 to 100 epochs;
second, fine-tuning: setting a smaller learning rate, determined by the specific data set, and training all parameters of the model for 50 epochs;
finally, pruning and fine-tuning: pruning the feature channels according to the set threshold and fine-tuning, with the same learning rate as the second step, for 20 to 50 epochs; the final model being obtained after training.
2. The fine-grained image classification method based on sparsified bilinear convolutional neural networks according to claim 1, characterized in that the construction of the sparse layer in step 2) uses the regularized activation mode of the BN layer, and the BN layer uses mini-batch statistics to standardize the internal activations; the specific method is as follows:
let x_in and x_out be the input and output of the BN layer, and let B denote the current mini-batch; the BN layer performs the transformation

x̂ = (x_in − μ_B) / √(σ_B² + ε),    x_out = γ·x̂ + β

the input of the BN layer is also the output of the previous convolutional layer and has m outputs, so the transformation is applied to the m input-output pairs in the current mini-batch;
ε is a constant that prevents the denominator from being zero;
μ_B and σ_B are the mean and standard deviation of the input activations over mini-batch B, x̂ is the input x_in after standardization, and the scale factor γ and offset parameter β are trainable affine transformation parameters, which can linearly transform the standardized activation to any scale.
3. The fine-grained image classification method based on sparsified bilinear convolutional neural networks according to claim 2, characterized in that, to achieve sparsity of the scale factors γ during training, a sparse penalty term is added to the training objective function of the sparsified bilinear model, the training objective function being as shown in formula (10):

L_loss = Σ_(x,y) l_loss(f(x, W), y) + λ Σ_(γ∈Γ) g(γ)    (10)

l_loss = H(p, q) = −Σ_x p(x) log q(x)    (11)

L_loss is the loss function of the whole sparsified bilinear model, and l_loss is the loss function of the non-sparse bilinear model, namely the cross entropy; L_loss is obtained by improving l_loss; the first term l_loss in formula (10) is the loss function of the original B-CNN algorithm, for which the cross-entropy loss function is used; in formula (11), p(x) is the true value and q(x) is the predicted value of the cross-entropy function, and the computed cross entropy l_loss is the distance between the two probability distributions; (x, y) is an input picture and its true label, W is the trainable weights, and f(x, W) is the prediction function of the model, whose output is the predicted value; the second term in formula (10) is the sparse penalty term, g(γ) is the regularization operation on the scale factor γ, Γ is the set of all scale factors, and g(·) can be L1 or L2 regularization; when L1 regularization is used, the non-smooth L1 penalty term needs to be optimized with a sub-gradient method, or a smooth approximation of L1 can be used instead; λ is the parameter that controls the degree of sparsity and prevents the scale factors from being sparsified so strongly that important channel features are lost; experiments give λ = 10⁻⁵ as a good value.
CN201910387272.4A 2019-05-10 2019-05-10 Fine-grained image classification method based on sparsified bilinear convolutional neural networks Pending CN110147834A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910387272.4A CN110147834A (en) 2019-05-10 2019-05-10 Fine-grained image classification method based on sparsified bilinear convolutional neural networks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910387272.4A CN110147834A (en) 2019-05-10 2019-05-10 Fine-grained image classification method based on sparsified bilinear convolutional neural networks

Publications (1)

Publication Number Publication Date
CN110147834A (en) 2019-08-20

Family

ID=67594208

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910387272.4A Pending CN110147834A (en) 2019-05-10 2019-05-10 Fine-grained image classification method based on sparsified bilinear convolutional neural networks

Country Status (1)

Country Link
CN (1) CN110147834A (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110796183A (en) * 2019-10-17 2020-02-14 大连理工大学 Weak supervision fine-grained image classification algorithm based on relevance-guided discriminant learning
CN111062382A (en) * 2019-10-30 2020-04-24 北京交通大学 Channel pruning method for target detection network
CN111191587A (en) * 2019-12-30 2020-05-22 兰州交通大学 Pedestrian re-identification method and system
CN111210010A (en) * 2020-01-15 2020-05-29 上海眼控科技股份有限公司 Data processing method and device, computer equipment and readable storage medium
CN111291806A (en) * 2020-02-02 2020-06-16 西南交通大学 Identification method of label number of industrial product based on convolutional neural network
CN111444772A (en) * 2020-02-28 2020-07-24 天津大学 Pedestrian detection method based on NVIDIA TX2
CN111539460A (en) * 2020-04-09 2020-08-14 咪咕文化科技有限公司 Image classification method and device, electronic equipment and storage medium
CN111709996A (en) * 2020-06-16 2020-09-25 北京主线科技有限公司 Method and device for detecting position of container
CN112084950A (en) * 2020-09-10 2020-12-15 上海庞勃特科技有限公司 Target detection method and detection device based on sparse convolutional neural network
CN112288046A (en) * 2020-12-24 2021-01-29 浙江大学 Mixed granularity-based joint sparse method for neural network
CN112529165A (en) * 2020-12-22 2021-03-19 上海有个机器人有限公司 Deep neural network pruning method, device, terminal and storage medium
CN112668630A (en) * 2020-12-24 2021-04-16 华中师范大学 Lightweight image classification method, system and equipment based on model pruning
CN112734025A (en) * 2019-10-28 2021-04-30 复旦大学 Neural network parameter sparsification method based on fixed base regularization
WO2021129570A1 (en) * 2019-12-25 2021-07-01 神思电子技术股份有限公司 Network pruning optimization method based on network activation and sparsification
CN113222142A (en) * 2021-05-28 2021-08-06 上海天壤智能科技有限公司 Channel pruning and quick connection layer pruning method and system
CN113554127A (en) * 2021-09-18 2021-10-26 南京猫头鹰智能科技有限公司 Image recognition method, device and medium based on hybrid model
CN114065823A (en) * 2021-12-02 2022-02-18 中国人民解放军国防科技大学 Modulation signal identification method and system based on sparse deep neural network
WO2023198224A1 (en) * 2022-04-13 2023-10-19 四川大学华西医院 Method for constructing magnetic resonance image preliminary screening model for mental disorders

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017215284A1 (en) * 2016-06-14 2017-12-21 山东大学 Gastrointestinal tumor microscopic hyper-spectral image processing method based on convolutional neural network
CN107871136A (en) * 2017-03-22 2018-04-03 中山大学 The image-recognizing method of convolutional neural networks based on openness random pool
CN108805167A (en) * 2018-05-04 2018-11-13 江南大学 Laplace function constraint-based sparse deep belief network image classification method
CN109086792A (en) * 2018-06-26 2018-12-25 上海理工大学 Based on the fine granularity image classification method for detecting and identifying the network architecture

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017215284A1 (en) * 2016-06-14 2017-12-21 山东大学 Gastrointestinal tumor microscopic hyper-spectral image processing method based on convolutional neural network
CN107871136A (en) * 2017-03-22 2018-04-03 中山大学 The image-recognizing method of convolutional neural networks based on openness random pool
CN108805167A (en) * 2018-05-04 2018-11-13 江南大学 Laplace function constraint-based sparse deep belief network image classification method
CN109086792A (en) * 2018-06-26 2018-12-25 上海理工大学 Based on the fine granularity image classification method for detecting and identifying the network architecture

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
马力 (Ma Li): "Fine-grained image classification based on sparsified bilinear convolutional neural networks" (基于稀疏化双线性卷积神经网络的细粒度图像分类), 《模式识别与人工智能》 (Pattern Recognition and Artificial Intelligence) *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110796183A (en) * 2019-10-17 2020-02-14 大连理工大学 Weak supervision fine-grained image classification algorithm based on relevance-guided discriminant learning
CN112734025B (en) * 2019-10-28 2023-07-21 复旦大学 Neural network parameter sparsification method based on fixed base regularization
CN112734025A (en) * 2019-10-28 2021-04-30 复旦大学 Neural network parameter sparsification method based on fixed base regularization
CN111062382A (en) * 2019-10-30 2020-04-24 北京交通大学 Channel pruning method for target detection network
WO2021129570A1 (en) * 2019-12-25 2021-07-01 神思电子技术股份有限公司 Network pruning optimization method based on network activation and sparsification
CN111191587A (en) * 2019-12-30 2020-05-22 兰州交通大学 Pedestrian re-identification method and system
CN111191587B (en) * 2019-12-30 2021-04-09 兰州交通大学 Pedestrian re-identification method and system
CN111210010A (en) * 2020-01-15 2020-05-29 上海眼控科技股份有限公司 Data processing method and device, computer equipment and readable storage medium
CN111291806A (en) * 2020-02-02 2020-06-16 西南交通大学 Identification method of label number of industrial product based on convolutional neural network
CN111444772A (en) * 2020-02-28 2020-07-24 天津大学 Pedestrian detection method based on NVIDIA TX2
CN111539460A (en) * 2020-04-09 2020-08-14 咪咕文化科技有限公司 Image classification method and device, electronic equipment and storage medium
CN111709996A (en) * 2020-06-16 2020-09-25 北京主线科技有限公司 Method and device for detecting position of container
CN112084950A (en) * 2020-09-10 2020-12-15 上海庞勃特科技有限公司 Target detection method and detection device based on sparse convolutional neural network
CN112529165A (en) * 2020-12-22 2021-03-19 上海有个机器人有限公司 Deep neural network pruning method, device, terminal and storage medium
CN112529165B (en) * 2020-12-22 2024-02-02 上海有个机器人有限公司 Deep neural network pruning method, device, terminal and storage medium
CN112668630A (en) * 2020-12-24 2021-04-16 华中师范大学 Lightweight image classification method, system and equipment based on model pruning
CN112668630B (en) * 2020-12-24 2022-04-29 华中师范大学 Lightweight image classification method, system and equipment based on model pruning
CN112288046A (en) * 2020-12-24 2021-01-29 浙江大学 Mixed granularity-based joint sparse method for neural network
CN113222142A (en) * 2021-05-28 2021-08-06 上海天壤智能科技有限公司 Channel pruning and quick connection layer pruning method and system
CN113554127A (en) * 2021-09-18 2021-10-26 南京猫头鹰智能科技有限公司 Image recognition method, device and medium based on hybrid model
CN113554127B (en) * 2021-09-18 2021-12-28 南京猫头鹰智能科技有限公司 Image recognition method, device and medium based on hybrid model
CN114065823A (en) * 2021-12-02 2022-02-18 中国人民解放军国防科技大学 Modulation signal identification method and system based on sparse deep neural network
WO2023198224A1 (en) * 2022-04-13 2023-10-19 四川大学华西医院 Method for constructing magnetic resonance image preliminary screening model for mental disorders

Similar Documents

Publication Publication Date Title
CN110147834A (en) Fine-grained image classification method based on sparsified bilinear convolutional neural networks
WO2021042828A1 (en) Neural network model compression method and apparatus, and storage medium and chip
CN110689086B (en) Semi-supervised high-resolution remote sensing image scene classification method based on generating countermeasure network
CN110084281B (en) Image generation method, neural network compression method, related device and equipment
CN105184303B (en) A kind of image labeling method based on multi-modal deep learning
CN107145889B (en) Target identification method based on double CNN network with RoI pooling
Cheong et al. Support vector machines with binary tree architecture for multi-class classification
DE112020005609T5 (en) Domain adaptation for semantic segmentation by exploiting weak labels
CN108549895A (en) A kind of semi-supervised semantic segmentation method based on confrontation network
CN110914836A (en) System and method for implementing continuous memory bounded learning in artificial intelligence and deep learning for continuously running applications across networked computing edges
DE102021116436A1 (en) Method and device for data-free post-training network quantization and generation of synthetic data based on a pre-trained machine learning model
CN109145964B (en) Method and system for realizing image color clustering
US20210241112A1 (en) Neural network update method, classification method and electronic device
US20200143209A1 (en) Task dependent adaptive metric for classifying pieces of data
CN113159067A (en) Fine-grained image identification method and device based on multi-grained local feature soft association aggregation
Herdiyeni et al. Fusion of local binary patterns features for tropical medicinal plants identification
Li et al. Dual guided loss for ground-based cloud classification in weather station networks
Swope et al. Representation learning for remote sensing: An unsupervised sensor fusion approach
Amosov et al. Human localization in the video stream using the algorithm based on growing neural gas and fuzzy inference
CN115439809A (en) Subway people stream density real-time monitoring system and method based on digital twins
Kouzani Road-sign identification using ensemble learning
Dorobanţiu et al. A novel contextual memory algorithm for edge detection
Saleem et al. Assessing the efficacy of logistic regression, multilayer perceptron, and convolutional neural network for handwritten digit recognition
Nguyen et al. An approach to pattern recognition based on hierarchical granular computing
Kim et al. Deep Coupling of Random Ferns.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190820