CN111860672B - Fine-grained image classification method based on block convolutional neural network - Google Patents
- Publication number
- CN111860672B (application number CN202010738474.1A)
- Authority
- CN
- China
- Prior art keywords
- block
- convolution
- fine
- feature map
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Abstract
A fine-grained image classification method based on a block convolutional neural network, relating to the technical field of fine-grained image recognition. It addresses a weakness of existing methods that evenly divide the original image into blocks before feeding it into a convolutional neural network for fine-grained classification: the constraint they place on the receptive field is weak. The invention limits the convolutional receptive field as required, so that the network attends more to features of local regions and is better suited to fine-grained image classification. The method restricts the receptive field of the convolutional layers without introducing additional parameters, enabling the convolutional neural network to search for smaller discriminative local regions.
Description
Technical Field
The invention relates to the technical field of fine-grained image recognition, in particular to a fine-grained image classification method based on a block convolutional neural network.
Background
In the field of fine-grained image recognition, most existing methods based on artificial intelligence and deep learning feed the image directly into a Convolutional Neural Network (CNN). Through stacked convolution and pooling layers, each layer extracts a feature map (Feature Map) from the output of the previous layer with a progressively larger Receptive Field (RF), i.e., the region of the input image onto which each point of the feature map is mapped. The final feature map has a receptive field covering the whole image (the theoretical receptive field may even exceed the image size) and is used for classification. Fine-grained classification, however, depends on finding discriminative local regions on the image, such as differently colored wings and differently shaped beaks in birds, or differently shaped lights and tires in cars. In this setting a smaller receptive field lets the model extract local features better and thereby search for smaller discriminative regions. Existing convolutional network frameworks mostly introduce operations of higher complexity and larger parameter count, yet still struggle to limit the receptive field size of the convolutional layers.
Fine-Grained Visual Classification (FGVC) is a sub-task of conventional image classification that performs finer categorization within a given class, for example distinguishing different species of birds or dogs, or different models of cars or aircraft. It is more challenging than conventional classification because intra-class differences can exceed inter-class differences: two birds of the same species may look very different in different poses, while two birds of different species with similar builds may differ in structure and texture only in local regions such as the beak or tail.
With the development of deep learning, CNNs have become the mainstream solution for image classification. A CNN mainly consists of: (1) convolutional layers for feature extraction; (2) pooling layers for feature selection and information filtering; (3) fully connected layers that combine the extracted features nonlinearly to produce the final output. In a CNN, the RF of a feature point on the output feature map of a given layer is the region of the input image onto which that point is mapped. Both convolutional and pooling layers enlarge the receptive field, and the receptive fields of adjacent layers are related as follows:
r^{(l)} = r^{(l-1)} + (k^{(l)} − 1) · ∏_{l′=1}^{l−1} s^{(l′)}

where r^{(l)} is the receptive field of the l-th convolution or pooling layer, k^{(l)} is the kernel size of the l-th convolution or pooling layer, and s^{(l′)} is the stride of the l′-th convolution or pooling layer.
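As a brief illustration (not part of the patent), the receptive-field recursion above can be evaluated with a short routine; the layer-list format and function name below are my own:

```python
def receptive_field(layers):
    """Theoretical receptive field after each layer.

    layers: list of (kernel_size, stride) pairs, one per convolution
    or pooling layer, ordered from the input.
    Implements r_l = r_{l-1} + (k_l - 1) * prod(s_1 ... s_{l-1}).
    """
    r = 1       # a point on the input image sees only itself
    jump = 1    # product of strides of all preceding layers
    fields = []
    for k, s in layers:
        r = r + (k - 1) * jump
        jump *= s
        fields.append(r)
    return fields

# Two 3x3 convolutions (stride 1) followed by a 2x2 pooling (stride 2):
print(receptive_field([(3, 1), (3, 1), (2, 2)]))  # -> [3, 5, 6]
```

This shows how the receptive field grows layer by layer, which is exactly the quantity the block partition in the invention is designed to cap.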
Existing fine-grained classification methods fall mainly into two categories. (1) Part-localization methods use a convolutional neural network to extract features and find several discriminative regions, crop these regions from the original image, and run feature extraction and classification on each of them, which makes prediction slow; moreover, the number of regions used for classification is usually fixed in advance, greatly limiting the model's flexibility. (2) End-to-end feature-encoding methods mostly generate a high-dimensional vector before the fully connected layer to increase the model's expressive power for the fine-grained task; the extra computation caused by the excessive dimensionality severely limits efficiency.
A traditional convolutional neural network generally has a very large receptive field. For generic image classification this is helpful, since the model can base its decision on information from a wide area; for fine-grained tasks, however, an overly large receptive field amplifies the influence of intra-class variation on the network and makes it hard to focus on local details.
The existing work "fine-grained visual classification based on jigsaw patches and progressive multi-granularity learning" evenly divides the original image into blocks, shuffles them, and feeds the result directly into a convolutional neural network for fine-grained classification. The present invention differs in that: (1) that method partitions only the original image, not intermediate feature maps; (2) it constrains the receptive field merely by shuffling the blocks, so the constraint is weak.
Disclosure of Invention
The invention provides a fine-grained image classification method based on a block convolutional neural network, aiming to solve the problem that existing methods, which evenly divide the original image into blocks before feeding it into a convolutional neural network for fine-grained classification, impose only a weak constraint on the receptive field.
A fine-grained image classification method based on a block convolutional neural network. The network has L block convolution layers; l denotes the index of the current block convolution layer, with 1 ≤ l ≤ L, initialized to l = 1. The method is realized by the following steps:
Step one: for the l-th block convolution layer f(·; Ω^{(l)}), obtain its input feature map x^{(l)}. Here Ω^{(l)} ∈ R^{c^{(l+1)} × c^{(l)} × k_w^{(l)} × k_h^{(l)}} are the convolution kernel parameters, R denotes the real numbers and R^{a×b×…} the set of real tensors of the given dimensions; c^{(l)} is the number of channels of the input feature map, c^{(l+1)} the number of channels of the output feature map, "·" denotes the input of the function, and k_w^{(l)} and k_h^{(l)} are the kernel width and height;

The input feature map x^{(l)} ∈ R^{c^{(l)} × W^{(l)} × H^{(l)}} is the output feature map of the (l−1)-th block convolution layer, x^{(1)} is the model input, and W^{(l)} and H^{(l)} are the width and height of the input feature map;
Step two: when l = 1, set m_1 = n_1 = 1; when l > 1, compute the number of blocks per row m_l and per column n_l of the input feature map by

m_l = ⌈ W^{(l)} · ∏_{l′=1}^{l−1} s_w^{(l′)} / (α_w^{(l)} · r_w^{(l)}) ⌉,  n_l = ⌈ H^{(l)} · ∏_{l′=1}^{l−1} s_h^{(l′)} / (α_h^{(l)} · r_h^{(l)}) ⌉

where r_w^{(l)} and r_h^{(l)} are the width and height of the theoretical receptive field of the input feature map x^{(l)}, α_w^{(l)} and α_h^{(l)} are the shrink factors of the theoretical receptive field in the width and height dimensions, s_w^{(l′)} and s_h^{(l′)} are the strides of the convolution kernel of the l′-th block convolution layer in the width and height dimensions of the feature map, and ⌈·⌉ is the round-up operation;
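One plausible reading of the block-count computation in step two — assuming each block of the feature map should correspond to a fraction α of the theoretical receptive field on the input image, mapped back through the product of preceding strides — can be sketched as follows (all names are mine, not the patent's):

```python
import math

def block_counts(W, H, rf_w, rf_h, alpha_w, alpha_h, strides_w, strides_h):
    """Blocks per row (m_l) and per column (n_l) for layer l.

    Assumption: a block should cover about alpha * rf pixels of the
    theoretical receptive field (rf_w, rf_h) on the input image;
    dividing by the product of the strides of layers 1..l-1 converts
    that target size into feature-map coordinates.
    """
    prod_sw = math.prod(strides_w)   # strides of layers 1..l-1, width
    prod_sh = math.prod(strides_h)   # strides of layers 1..l-1, height
    m = math.ceil(W / (alpha_w * rf_w / prod_sw))
    n = math.ceil(H / (alpha_h * rf_h / prod_sh))
    return m, n

# 56x56 feature map, 32x32 receptive field, shrink factor 0.5, total stride 4:
print(block_counts(56, 56, 32, 32, 0.5, 0.5, [2, 2], [2, 2]))  # -> (14, 14)
```

With α = 1 the block size matches the full theoretical receptive field; smaller α forces more, smaller blocks and hence a tighter receptive-field cap.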
Step three: given the numbers of blocks per row and per column m_l and n_l obtained in step two, randomly sample the block widths w_i^{(l)}, i = 1, …, m_l, and block heights h_j^{(l)}, j = 1, …, n_l, all positive integers, such that Σ_{i=1}^{m_l} w_i^{(l)} = W^{(l)} and Σ_{j=1}^{n_l} h_j^{(l)} = H^{(l)};
Step four: using the block widths w_i^{(l)} and heights h_j^{(l)} obtained in step three, divide the input feature map x^{(l)} into m_l × n_l blocks, obtaining the set of block feature maps {x_{i,j}^{(l)}};
Step five, adopting the convolution kernel parameter omega in the step one(l)Respectively comparing all obtained in step fourPerforming convolution to obtain corresponding convolution output characteristic diagram
Step six, the convolution output characteristic diagram obtained in the step five is usedSplicing according to the original position to obtain the output characteristic diagram of the first convolution layer in the block convolution neural network
Step seven, for the L block convolution layers, the operation is carried out according to the steps from one step to six until the output characteristic diagram x of the last L block convolution layer is obtained(L+1)X is to be(L+1)Inputting the data into a full connection layer to obtain the output probability p ∈ R of fine-grained image classificationnAnd n is the number of categories, so that the classification of fine-grained images is realized.
The beneficial effects of the invention are: the method can limit the convolutional receptive field as required, making the network attend more to features of local regions, which suits fine-grained image classification tasks. Meanwhile, no additional parameters or operations are introduced, so the efficiency of a standard convolutional neural network is retained at prediction time.
Since fine-grained classification does not need an overly large receptive field, the method divides the input feature map into blocks, convolves each block separately, and stitches the results back together, which constrains the receptive field strongly.
The method limits the receptive field of the convolutional layers without introducing additional parameters, enabling the convolutional neural network to search for smaller discriminative local regions.
Drawings
Fig. 1 is a flowchart of a fine-grained image classification method based on a block convolutional neural network according to the present invention.
FIG. 2 is a schematic diagram of the fine-grained image classification method based on a block convolutional neural network according to the present invention, illustrated with m_l = n_l = 4.
FIG. 3 is a schematic diagram of the second embodiment of the fine-grained image classification method based on a block convolutional neural network according to the present invention, illustrated with m_l = n_l = 4.
Detailed Description
In a first specific embodiment, described with reference to FIG. 1 and FIG. 2, a fine-grained image classification method based on a block convolutional neural network is provided. The network has L block convolution layers; l denotes the index of the current block convolution layer, 1 ≤ l ≤ L, initialized to l = 1. The method is realized by the following steps:
Step one: for the l-th block convolution layer f(·; Ω^{(l)}), obtain its input feature map x^{(l)}. "·" denotes the input of a function and is written as "·" when the input is unspecified. Ω^{(l)} ∈ R^{c^{(l+1)} × c^{(l)} × k_w^{(l)} × k_h^{(l)}} are the convolution kernel parameters, where R denotes the real numbers and R^{a×b×…} the set of real tensors of the given dimensions, indicating the size of Ω^{(l)}; c^{(l)} is the number of channels of the input feature map, c^{(l+1)} the number of channels of the output feature map, and k_w^{(l)} and k_h^{(l)} the kernel width and height;

The input feature map x^{(l)} ∈ R^{c^{(l)} × W^{(l)} × H^{(l)}} is the output feature map of the (l−1)-th block convolution layer, x^{(1)} is the model input, and W^{(l)} and H^{(l)} are the width and height of the input feature map;
Step two: when l = 1, set m_1 = n_1 = 1; when l > 1, compute the number of blocks per row m_l and per column n_l of the input feature map by

m_l = ⌈ W^{(l)} · ∏_{l′=1}^{l−1} s_w^{(l′)} / (α_w^{(l)} · r_w^{(l)}) ⌉,  n_l = ⌈ H^{(l)} · ∏_{l′=1}^{l−1} s_h^{(l′)} / (α_h^{(l)} · r_h^{(l)}) ⌉

where r_w^{(l)} and r_h^{(l)} are the width and height of the theoretical receptive field of the input feature map x^{(l)}, α_w^{(l)} and α_h^{(l)} are the shrink factors of the theoretical receptive field in the width and height dimensions, s_w^{(l′)} and s_h^{(l′)} are the strides of the convolution kernel of the l′-th block convolution layer in the width and height dimensions of the feature map, and ⌈·⌉ is the round-up operation. The shrink factors in the width and height dimensions each take values in the range 0 < α ≤ 1;
Step three: given the numbers of blocks per row and per column m_l and n_l obtained in step two, randomly sample the block widths w_i^{(l)}, i = 1, …, m_l, and block heights h_j^{(l)}, j = 1, …, n_l, all positive integers, such that Σ_{i=1}^{m_l} w_i^{(l)} = W^{(l)} and Σ_{j=1}^{n_l} h_j^{(l)} = H^{(l)};
Step four: using the block widths w_i^{(l)} and heights h_j^{(l)} obtained in step three, divide the input feature map x^{(l)} into m_l × n_l blocks, obtaining the set of block feature maps {x_{i,j}^{(l)}};
Step five, adopting the convolution kernel parameter omega in the step one(l)Respectively comparing all obtained in step fourPerforming convolution to obtain corresponding convolution output characteristic diagram
Step six, the convolution output characteristic diagram obtained in the step five is usedi=1,…,ml,j=1,…,nlSplicing according to the original position to obtain the output characteristic diagram of the first convolution layer in the block convolution neural network
Step seven, for the L partitioned convolution layers, operating according to the steps one to six until the L partitioned convolution layers are all operatedObtaining the output characteristic diagram x of the last L-th block convolution layer(L+1)X is to be(L+1)Inputting the data into a full connection layer to obtain the output probability p ∈ R of fine-grained image classificationnAnd n is the number of categories, so that the classification of fine-grained images is realized.
Step eight, in the model training process, cross entropy L is usedCE(t, p) and the real category t optimize the output probability p of the fine-grained image classification:
LCE(t,p)=-ln pt。
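For concreteness, L_CE(t, p) = −ln p_t is just the negative log of the probability the model assigns to the true class; a minimal sketch (function name mine):

```python
import numpy as np

def cross_entropy(t, p):
    """L_CE(t, p) = -ln p_t for a single sample.

    t: index of the true category; p: vector of output probabilities.
    """
    return -np.log(p[t])

p = np.array([0.1, 0.7, 0.2])   # softmax output over n = 3 categories
print(cross_entropy(1, p))       # small loss: the true class received 0.7
print(cross_entropy(2, p))       # larger loss: the true class received only 0.2
```

Minimizing this loss pushes the probability of the true category toward 1.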
In a second embodiment, described with reference to FIG. 3, an instance of the fine-grained image classification method of the first embodiment is given that completes the block convolution with simplified operations, improving the efficiency of block convolution. As before, the block convolutional neural network has L block convolution layers, l is the index of the current block convolution layer, 1 ≤ l ≤ L, initialized to l = 1;
Step 2: according to the preset numbers of blocks per row and per column m_l and n_l of the feature map, randomly sample the block widths w_i^{(l)}, i = 1, …, m_l, and heights h_j^{(l)}, j = 1, …, n_l, all positive integers, with Σ_{i=1}^{m_l} w_i^{(l)} = W^{(l)} and Σ_{j=1}^{n_l} h_j^{(l)} = H^{(l)};
Step 3: into the input feature map x^{(l)}, insert all-zero column vectors after every block width w_i^{(l)} and all-zero row vectors after every block height h_j^{(l)}, the numbers of inserted columns and rows being determined by the kernel size and the strides s_w^{(l)} and s_h^{(l)} of the convolution kernel in the width and height dimensions of the feature map (with ⌊·⌋ denoting the round-down operation), obtaining the processed feature map;
Step 4, adopting a convolution kernel parameter omega(l)To pairPerforming convolution to obtain a convolution output characteristic diagram
Step 5, according to the positions of the all-zero column vectors and the all-zero row vectors inserted in the step 3, outputting the feature graph by convolutionThe inserted vector is removed, and the removed column is marked with The removed row numbers areObtaining an output feature map of the first convolutional layer in a partitioned convolutional neural network
Step 6, for all the block convolution layers, operating according to the steps 1 to 5 until obtaining the output characteristic diagram x of the last block convolution layer (L layer)(L+1)X is to be(L+1)Inputting the fine-grained image classification data into a full connection layer to obtain an output probability p of fine-grained image classification;
In this embodiment, the Cross Entropy (CE) L_CE(t, p) between the true category t and the output probabilities p of the fine-grained classification is likewise used for optimization.
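The equivalence this embodiment relies on can be checked numerically in the stride-1 case: inserting k−1 all-zero columns at a block boundary guarantees that no convolution window spans two blocks, so after removing the output columns whose windows touched the inserted zeros, one full-map convolution reproduces the per-block result. The sketch below (a single vertical split, names mine) illustrates the idea and is not the patent's exact index formulas:

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive single-channel 2-D cross-correlation, 'valid' mode, stride 1."""
    kh, kw = k.shape
    out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return out

def block_conv_via_zeros(x, k, split):
    """Convolve two column-blocks of x in one pass (steps 3-5, stride 1).

    Inserts kw-1 all-zero columns at the boundary, convolves once, then
    drops the output columns whose windows overlapped the zero gap.
    """
    kw = k.shape[1]
    gap = np.zeros((x.shape[0], kw - 1))
    padded = np.concatenate([x[:, :split], gap, x[:, split:]], axis=1)
    y = conv2d_valid(padded, k)
    # windows starting in [split-kw+1, split+kw-2] touched the zero gap
    keep = [j for j in range(y.shape[1])
            if j <= split - kw or j >= split + kw - 1]
    return y[:, keep]

rng = np.random.default_rng(1)
x = rng.standard_normal((6, 10))
k = rng.standard_normal((3, 3))
one_pass = block_conv_via_zeros(x, k, split=5)
two_pass = np.concatenate([conv2d_valid(x[:, :5], k),
                           conv2d_valid(x[:, 5:], k)], axis=1)
print(np.allclose(one_pass, two_pass))  # -> True
```

Running the convolution once over the padded map replaces m_l × n_l separate convolution calls with one, which is the efficiency gain this embodiment claims.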
Claims (4)
1. A fine-grained image classification method based on a block convolutional neural network, wherein the block convolutional neural network has L block convolution layers, l is the index of the current block convolution layer, 1 ≤ l ≤ L, initialized to l = 1, characterized in that the method is realized by the following steps:
Step one: for the l-th block convolution layer f(·; Ω^{(l)}), obtain its input feature map x^{(l)}. Here Ω^{(l)} ∈ R^{c^{(l+1)} × c^{(l)} × k_w^{(l)} × k_h^{(l)}} are the convolution kernel parameters, R denotes the real numbers and R^{a×b×…} the set of real tensors of the given dimensions; c^{(l)} is the number of channels of the input feature map, c^{(l+1)} the number of channels of the output feature map, "·" denotes the input of the function, and k_w^{(l)} and k_h^{(l)} are the kernel width and height;

The input feature map x^{(l)} ∈ R^{c^{(l)} × W^{(l)} × H^{(l)}} is the output feature map of the (l−1)-th block convolution layer, x^{(1)} is the model input, and W^{(l)} and H^{(l)} are the width and height of the input feature map;
Step two: when l = 1, set m_1 = n_1 = 1; when l > 1, compute the number of blocks per row m_l and per column n_l of the input feature map by

m_l = ⌈ W^{(l)} · ∏_{l′=1}^{l−1} s_w^{(l′)} / (α_w^{(l)} · r_w^{(l)}) ⌉,  n_l = ⌈ H^{(l)} · ∏_{l′=1}^{l−1} s_h^{(l′)} / (α_h^{(l)} · r_h^{(l)}) ⌉

where r_w^{(l)} and r_h^{(l)} are the width and height of the theoretical receptive field of the input feature map x^{(l)}, α_w^{(l)} and α_h^{(l)} are the shrink factors of the theoretical receptive field in the width and height dimensions, s_w^{(l′)} and s_h^{(l′)} are the strides of the convolution kernel of the l′-th block convolution layer in the width and height dimensions of the feature map, and ⌈·⌉ is the round-up operation;
Step three: given the numbers of blocks per row and per column m_l and n_l obtained in step two, randomly sample the block widths w_i^{(l)}, i = 1, …, m_l, and block heights h_j^{(l)}, j = 1, …, n_l, all positive integers, such that Σ_{i=1}^{m_l} w_i^{(l)} = W^{(l)} and Σ_{j=1}^{n_l} h_j^{(l)} = H^{(l)};

Step four: using the block widths w_i^{(l)} and heights h_j^{(l)} obtained in step three, divide the input feature map x^{(l)} into m_l × n_l blocks, obtaining the set of block feature maps {x_{i,j}^{(l)}};

Step five: using the convolution kernel parameters Ω^{(l)} of step one, convolve each block feature map obtained in step four, obtaining the corresponding convolution output feature maps;

Step six: stitch the convolution output feature maps obtained in step five, i = 1, …, m_l, j = 1, …, n_l, back together according to their original positions, obtaining the output feature map x^{(l+1)} of the l-th block convolution layer in the block convolutional neural network;

Step seven: apply steps one to six to each of the L block convolution layers until the output feature map x^{(L+1)} of the last (L-th) block convolution layer is obtained; feed x^{(L+1)} into a fully connected layer to obtain the output probabilities p ∈ R^n of the fine-grained classification, where n is the number of categories, thereby classifying the fine-grained image.
2. The fine-grained image classification method based on the block convolutional neural network as claimed in claim 1, characterized by a step eight: the cross entropy L_CE(t, p) between the true category t and the output probabilities p of the fine-grained classification is used for optimization:

L_CE(t, p) = −ln p_t.
4. The fine-grained image classification method based on the block convolutional neural network as claimed in claim 1, characterized in that steps two to six are replaced by the following steps:
Step A: set the numbers of blocks per row and per column m_l and n_l of the output feature map, and randomly sample the block widths w_i^{(l)}, i = 1, …, m_l, and heights h_j^{(l)}, j = 1, …, n_l, all positive integers, with Σ_{i=1}^{m_l} w_i^{(l)} = W^{(l)} and Σ_{j=1}^{n_l} h_j^{(l)} = H^{(l)};

Step B: into the input feature map x^{(l)}, insert all-zero column vectors after every block width w_i^{(l)} and all-zero row vectors after every block height h_j^{(l)}, the numbers of inserted columns and rows being determined by the kernel size and the strides s_w^{(l)} and s_h^{(l)} of the convolution kernel in the width and height dimensions of the feature map (with ⌊·⌋ denoting the round-down operation), obtaining the processed feature map;

Step C: directly convolve the processed feature map with the convolution kernel parameters Ω^{(l)} obtained in step one, obtaining a convolution output feature map;

Step D: according to the positions of the all-zero column and row vectors inserted in step B, remove from the convolution output feature map the columns and rows whose computation involved the inserted vectors, obtaining the output feature map of the l-th block convolution layer in the block convolutional neural network.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010738474.1A CN111860672B (en) | 2020-07-28 | 2020-07-28 | Fine-grained image classification method based on block convolutional neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111860672A CN111860672A (en) | 2020-10-30 |
CN111860672B (en) | 2021-03-16
Family
ID=72948450
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010738474.1A Active CN111860672B (en) | 2020-07-28 | 2020-07-28 | Fine-grained image classification method based on block convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111860672B (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106462549A (en) * | 2014-04-09 | 2017-02-22 | 尹度普有限公司 | Authenticating physical objects using machine learning from microscopic variations |
CN106650786A (en) * | 2016-11-14 | 2017-05-10 | 沈阳工业大学 | Image recognition method based on multi-column convolutional neural network fuzzy evaluation |
CN107680106A (en) * | 2017-10-13 | 2018-02-09 | 南京航空航天大学 | A kind of conspicuousness object detection method based on Faster R CNN |
CN109190622A (en) * | 2018-09-11 | 2019-01-11 | 深圳辉煌耀强科技有限公司 | Epithelial cell categorizing system and method based on strong feature and neural network |
CN109191457A (en) * | 2018-09-21 | 2019-01-11 | 中国人民解放军总医院 | A kind of pathological image quality validation recognition methods |
CN109344856A (en) * | 2018-08-10 | 2019-02-15 | 华南理工大学 | A kind of off-line signature verification method based on multilayer discriminate feature learning |
CN109711448A (en) * | 2018-12-19 | 2019-05-03 | 华东理工大学 | Based on the plant image fine grit classification method for differentiating key field and deep learning |
CN110084285A (en) * | 2019-04-08 | 2019-08-02 | 安徽艾睿思智能科技有限公司 | Fish fine grit classification method based on deep learning |
CN110110692A (en) * | 2019-05-17 | 2019-08-09 | 南京大学 | A kind of realtime graphic semantic segmentation method based on the full convolutional neural networks of lightweight |
US10503978B2 (en) * | 2017-07-14 | 2019-12-10 | Nec Corporation | Spatio-temporal interaction network for learning object interactions |
CN110958187A (en) * | 2019-12-17 | 2020-04-03 | 电子科技大学 | Distributed machine learning parameter-oriented synchronous differential data transmission method |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10242036B2 (en) * | 2013-08-14 | 2019-03-26 | Ricoh Co., Ltd. | Hybrid detection recognition system |
US20170061257A1 (en) * | 2013-12-16 | 2017-03-02 | Adobe Systems Incorporated | Generation of visual pattern classes for visual pattern recognition |
CN104331447A (en) * | 2014-10-29 | 2015-02-04 | 邱桃荣 | Cloth color card image retrieval method |
CN106951872B (en) * | 2017-03-24 | 2020-11-06 | 江苏大学 | Pedestrian re-identification method based on unsupervised depth model and hierarchical attributes |
CN110914829A (en) * | 2017-04-07 | 2020-03-24 | 英特尔公司 | Method and system for image processing using improved convolutional neural network |
CN108537283A (en) * | 2018-04-13 | 2018-09-14 | 厦门美图之家科技有限公司 | A kind of image classification method and convolutional neural networks generation method |
CN108776807A (en) * | 2018-05-18 | 2018-11-09 | 复旦大学 | It is a kind of based on can the double branch neural networks of skip floor image thickness grain-size classification method |
CN109255375A (en) * | 2018-08-29 | 2019-01-22 | 长春博立电子科技有限公司 | Panoramic picture method for checking object based on deep learning |
CN109857889B (en) * | 2018-12-19 | 2021-04-09 | 苏州科达科技股份有限公司 | Image retrieval method, device and equipment and readable storage medium |
CN109978077B (en) * | 2019-04-08 | 2021-03-12 | 南京旷云科技有限公司 | Visual recognition method, device and system and storage medium |
CN111047038A (en) * | 2019-11-08 | 2020-04-21 | 南昌大学 | Neural network compression method using block circulant matrix |
CN111178432B (en) * | 2019-12-30 | 2023-06-06 | 武汉科技大学 | Weak supervision fine granularity image classification method of multi-branch neural network model |
CN111414954B (en) * | 2020-03-17 | 2022-09-09 | 重庆邮电大学 | Rock image retrieval method and system |
Non-Patent Citations (2)
Title |
---|
A codebook-free and annotation-free approach for fine-grained image categorization; Yao B. P. et al.; Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition; 2012; pp. 3466–3473. *
A survey of fine-grained image classification based on deep convolutional features; Luo Jianhao et al.; Acta Automatica Sinica; 2017-02-17; vol. 43, no. 8; pp. 1306–1318. *
Also Published As
Publication number | Publication date |
---|---|
CN111860672A (en) | 2020-10-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110443842B (en) | Depth map prediction method based on visual angle fusion | |
Wang et al. | Detect globally, refine locally: A novel approach to saliency detection | |
CN109492666B (en) | Image recognition model training method and device and storage medium | |
CN110414394B (en) | Facial occlusion face image reconstruction method and model for face occlusion detection | |
CN111191583B (en) | Space target recognition system and method based on convolutional neural network | |
CN108288035A (en) | The human motion recognition method of multichannel image Fusion Features based on deep learning | |
CN109741341B (en) | Image segmentation method based on super-pixel and long-and-short-term memory network | |
CN107680106A (en) | A kind of conspicuousness object detection method based on Faster R CNN | |
CN112862792B (en) | Wheat powdery mildew spore segmentation method for small sample image dataset | |
CN108710893B (en) | Digital image camera source model classification method based on feature fusion | |
CN108171249B (en) | RGBD data-based local descriptor learning method | |
CN111723915A (en) | Pruning method of deep convolutional neural network, computer equipment and application method | |
CN113705641B (en) | Hyperspectral image classification method based on rich context network | |
CN111401380A (en) | RGB-D image semantic segmentation method based on depth feature enhancement and edge optimization | |
CN114565628B (en) | Image segmentation method and system based on boundary perception attention | |
CN114821058A (en) | Image semantic segmentation method and device, electronic equipment and storage medium | |
CN112101364A (en) | Semantic segmentation method based on parameter importance incremental learning | |
CN111310820A (en) | Foundation meteorological cloud chart classification method based on cross validation depth CNN feature integration | |
Sun et al. | Iterative, deep synthetic aperture sonar image segmentation | |
CN111784699A (en) | Method and device for carrying out target segmentation on three-dimensional point cloud data and terminal equipment | |
CN111860672B (en) | Fine-grained image classification method based on block convolutional neural network | |
CN113361589A (en) | Rare or endangered plant leaf identification method based on transfer learning and knowledge distillation | |
CN111275616B (en) | Low-altitude aerial image splicing method and device | |
CN109949298B (en) | Image segmentation quality evaluation method based on cluster learning | |
CN108805811B (en) | Natural image intelligent picture splicing method and system based on non-convex quadratic programming |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||