CN111340096A - Weakly supervised butterfly target detection method based on adversarial complementary learning - Google Patents
Weakly supervised butterfly target detection method based on adversarial complementary learning
- Publication number
- CN111340096A (application number CN202010111404.3A)
- Authority
- CN
- China
- Prior art keywords
- branch
- butterfly
- network
- image
- classifier
- Prior art date
- 2020-02-24
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F18/24—Pattern recognition; Analysing; Classification techniques
- G06F16/53—Information retrieval of still image data; Querying
- G06F16/953—Retrieval from the web; Querying, e.g. by the use of web search engines
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06N3/045—Neural networks; Combinations of networks
- G06V10/40—Extraction of image or video features
Abstract
The invention discloses a weakly supervised butterfly target detection method based on adversarial complementary learning, which comprises the following steps in order: first, butterfly ecological images gathered by a web crawler are mixed by category with butterfly specimen images to form a butterfly data set; the images are then cropped and normalized; the data set is next divided proportionally into a training image set and a test image set; a backbone network and an adversarial complementary learning network are then constructed, the network is trained with the training set, and the model is saved once the network converges; finally, a test image is input into the trained network model to obtain a target detection result map.
Description
Technical Field
The invention belongs to the technical field of image recognition, and particularly relates to a weakly supervised butterfly target detection method based on adversarial complementary learning.
Background
Butterflies are insects of the phylum Arthropoda, class Insecta, order Lepidoptera, suborder Rhopalocera. On one hand, butterflies in the larval stage feed on agricultural and forestry crops and are among the main pests of agriculture and forestry; on the other hand, butterflies are valuable environmental indicators whose monitoring data are used for ecological environment monitoring, biodiversity protection and the like, and they also have high ornamental and economic value as a natural resource. Therefore, the classification and identification of butterflies is of great significance to agriculture, forestry, disease and pest control, environmental protection, the development of the butterfly industry and other practical work.
Traditional butterfly identification relies mainly on two approaches: manual identification and biochemical identification. Manual identification compares ecological characteristics with specimen characteristics; it depends on long-term accumulated experience and is time-consuming. Biochemical identification uses the reaction of butterfly genitalia to biochemical reagents; it depends on specialized biochemical knowledge and is expensive. Neither method is therefore universally applicable to butterfly identification.
With the development of image processing technology and machine learning theory, researchers have realized butterfly identification through machine learning methods, which mainly involve manually extracting image features of butterflies (color, texture and shape information of the wing surfaces), building a mathematical model from this feature information, and selecting a classifier for classification.
Most machine learning methods require manually selected image features, so feature extraction and feature selection largely determine the final classification performance. Moreover, these methods focus on identifying butterfly specimen images; an effective means of identifying butterfly ecological images (butterfly images shot in the natural environment) is lacking. In an ecological image, on one hand, the butterfly usually occupies only part of the image; on the other hand, butterflies are capable of mimicry, making the butterfly target difficult to distinguish from the background, which poses a huge challenge to identification. Therefore, to better identify a butterfly in an ecological image, the position of the butterfly in the image must first be determined and the butterfly then identified; this is butterfly target detection.
Deep learning methods, represented by convolutional neural networks (CNNs), have been highly successful in the field of image recognition. Deep learning automatically extracts image features and has made great breakthroughs in tasks such as image classification, target detection and image segmentation. For target detection, two-stage algorithms represented by the R-CNN series and one-stage algorithms represented by SSD and YOLO perform excellently, but they are fully supervised detection algorithms that depend on manually annotated object bounding boxes, which are expensive to obtain. To avoid this cost, researchers have turned to target detection under weak supervision (image-level labels only), with some success. For example, Zhou et al. replaced the fully connected layers of VGG with a global average pooling (GAP) layer to obtain object position information, but this method finds only the most discriminative region. Building on Zhou's method, Singh et al. randomly hide patches of each input image to force the network to discover additional discriminative regions; however, because the hiding is random, this method cannot reliably localize the entire extent of the object.
Using only image-level labels, the present invention applies adversarial complementary learning to effectively localize, in a weakly supervised manner, the full extent of the butterfly in the image and to identify the butterfly's category.
Disclosure of Invention
The invention aims to provide a weakly supervised butterfly target detection method based on adversarial complementary learning for target detection in butterfly images. To this end, the invention adopts the following technical scheme:
A weakly supervised butterfly target detection method based on adversarial complementary learning comprises the following steps:
step 1: and constructing a butterfly data set. The butterfly data set is composed of two parts, the first part is composed of Google pictures and butterfly ecological images crawled on Baidu pictures, and the first part is called as a data set D1The second part is composed of the butterfly specimen image on "Chinese butterfly Zhi", which is calledData set D2. Data set D1And a data set D2Mixed composition of butterfly data setsWherein the butterfly image is IiThe category label is yi. The data set D contains N images of M butterflies in total, and the data set D is divided into a training set Dt(containing N)tImage) and test set Ds(containing N)sA web image);
step 2: and constructing a backbone network. The present invention selects the first 13 layers of the VGG-16 as the backbone network, which consists of 5 convolutional blocks. Butterfly image I with color input for backbone networki∈Rh×w×3(1<i<Nt) Where h and w represent the height and width of the image, respectively, and 3 represents the number of channels of the image. Location-aware feature maps with multi-channel output for networksWherein K1Number of channels, H, representing a location-aware feature map1And W1Respectively, the height and width of the feature map. The backbone network is represented as:
Si=f0(θ0,Ii)
wherein f is0(. -) represents the role of the backbone network, θ0Is a parameter of the backbone network;
Step 3: construct the adversarial complementary learning network. The adversarial complementary learning network comprises two parallel branches A and B, each containing a feature extractor and a classifier. The feature extractor and classifier of branch A are denoted $E_A$ and $cls_A$; those of branch B are denoted $E_B$ and $cls_B$;
Step 3.1: branch A first uses the feature extractor $E_A$ to extract features and obtain a class activation map, then uses the classifier $cls_A$ to classify. The feature extractor is a three-layer convolutional neural network whose input is the backbone output $S_i$ ($1 \le i \le N_t$) and whose output is the class activation map $F_i^A$; this map highlights the most discriminative region of the target class. $F_i^A$ is normalized to $[0,1]$ and defined as $\hat{F}_i^A$, the localization map of the branch. The classifier consists of a global average pooling (GAP) layer and a softmax layer; its input is $F_i^A$ and its output is the classification result $p_i^A \in \mathbb{R}^M$, where $M$ is the number of butterfly categories. The whole branch is expressed as:

$$F_i^A = f_A(\theta_A, S_i), \qquad p_i^A = g_A(\phi_A, F_i^A)$$

where $f_A(\cdot)$ and $g_A(\cdot)$ denote the actions of the feature extractor $E_A$ and the classifier $cls_A$, $\theta_A$ are the parameters of $E_A$, and $\phi_A$ are the parameters of the branch-A classifier;
step 3.2: erasing feature maps using a feature eraser EraThe most discriminative region in the set. Assuming that the threshold is δ, the most discriminating region isThe area where Aera is located is in the characteristic diagram SiSetting the middle value to be 0, and generating a feature diagram after erasingNamely, it is
Step 3.3: branch B has essentially the same structure as branch A. It first uses the feature extractor $E_B$ to extract features and obtain a class activation map, then uses the classifier $cls_B$ to classify. The feature extractor is again a three-layer convolutional neural network; its input is the erased feature map $\bar{S}_i$ and its output is the class activation map $F_i^B$, which learns a new most discriminative region. $F_i^B$ is normalized to $[0,1]$ and defined as $\hat{F}_i^B$, the localization map of the branch. The classifier consists of a global average pooling layer and a softmax layer; its input is $F_i^B$ and its output is the classification result $p_i^B \in \mathbb{R}^M$. The whole branch is expressed as:

$$F_i^B = f_B(\theta_B, \bar{S}_i), \qquad p_i^B = g_B(\phi_B, F_i^B)$$

where $f_B(\cdot)$ and $g_B(\cdot)$ denote the actions of the feature extractor $E_B$ and the classifier $cls_B$, $\theta_B$ are the parameters of $E_B$, and $\phi_B$ are the parameters of the branch-B classifier;
Step 4: establish the loss functions $L_A$ and $L_B$ of the two branch networks as the cross-entropy between the actual output vectors $p_i^A$ and $p_i^B$ and the target output vector $y_i$ (one-hot), respectively:

$$L_A = -\sum_{i=1}^{N_t} y_i^{\top} \log p_i^A, \qquad L_B = -\sum_{i=1}^{N_t} y_i^{\top} \log p_i^B$$

The total loss of the network is then $L = L_A + L_B$;
Step 5: network training. Set hyper-parameters such as the number of iterations and the learning rate, input the training set $D_t$ into the network, iteratively update the network parameters with stochastic gradient descent until the loss converges, and save the final model;
Step 6: network testing. Load the saved model and input the test set $D_s$ into the network to obtain the classification accuracy. For a single test image $I_i \in \mathbb{R}^{h \times w \times 3}$ ($1 \le i \le N_s$), obtain the localization map $\hat{F}_i^A$ of branch A and the localization map $\hat{F}_i^B$ of branch B, and take the elementwise maximum of the two maps to obtain the final localization map $\hat{F}_i = \max(\hat{F}_i^A, \hat{F}_i^B)$. A rectangular box is then drawn on the image according to the localization map, giving the position of the butterfly target in the image.
Drawings
Fig. 1 shows original images from the data set.
Fig. 2 shows the backbone network structure.
Fig. 3 shows the overall network structure.
Fig. 4 shows the test results.
Detailed Description
The embodiment of the invention provides a weakly supervised butterfly target detection method based on adversarial complementary learning. The invention is explained below with reference to the accompanying drawings:
the flow of the embodiment of the invention is as follows:
step 1: and constructing a butterfly data set. The butterfly data set consists of two parts, namely a butterfly ecological image data set and a butterfly specimen image data set. The butterfly ecological image is obtained by crawling reptiles on Google pictures and Baidu pictures and is called as a data set D1The number ofThe image of the data set is shown in fig. 1 (a). The butterfly specimen image is obtained from the book "Chinese butterfly log", and is called as data set D2The image of the data set is shown in fig. 1 (b). Data set D1And a data set D2Mixed composition of butterfly data setsWherein the butterfly image is IiThe category label is yiThe butterfly data set D is classified into M334, which contains N74111 images. Dividing the data set D into training sets D according to the proportion of 8:2 of each classt(containing N)t58288 images) and test set Ds(containing N)s14823 images) to prevent the computational burden, each butterfly image is resampled to 256 × 256 and then randomly cropped to 224 × 224 as input to the network, where the data needs to be normalized (the dimensions of the image minus the mean of the dataset and divided by the standard deviation of the dataset);
step 2: and constructing a backbone network. The present invention selects the first 13 layers of the VGG-16 as the backbone network, which consists of 5 convolutional blocks. Wherein, for the first 2 convolutional blocks, each convolutional block consists of 2 convolutional layers; for the last 3 convolutional blocks, each convolutional block is composed of 3 convolutional layers, and the structure is shown in fig. 2, wherein the total number of the convolutional layers is 13. Butterfly image I with color input for backbone networki∈R224×224×3(1<i<Nt) Where 3 denotes the number of image channels, and h-224 and w-224 denote the height and width of the image, respectively. Location-aware feature map S with multi-channel output for networki∈R28×28×512Where 512 represents the number of channels of the feature map, 28 × 28 represents the resolution of the feature map, the backbone network is represented as:
Si=f0(θ0,Ii),1<i<Nt
wherein f is0(. -) represents the role of the backbone network, θ0Is a parameter of the backbone network;
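For illustration, the backbone can be sketched as follows. Removing the 4th and 5th max-pool layers of VGG-16 is an assumption made here so that a 224 × 224 input yields the stated 28 × 28 × 512 feature map; the patent itself says only "the first 13 layers":

```python
import torch
import torchvision

# The 13 convolutional layers of VGG-16, keeping only the first three
# max-pool stages (indices 23 and 30 in vgg.features are pool4 and pool5).
vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1")
layers = [m for i, m in enumerate(vgg.features) if i not in (23, 30)]
backbone = torch.nn.Sequential(*layers)

x = torch.randn(1, 3, 224, 224)   # a color butterfly image I_i
S = backbone(x)                   # S_i: torch.Size([1, 512, 28, 28])
```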
Step 3: construct the adversarial complementary learning network. The network comprises two parallel branches A and B, each containing a feature extractor and a classifier. The feature extractor and classifier of branch A are denoted $E_A$ and $cls_A$; those of branch B are denoted $E_B$ and $cls_B$;
Step 3.1: the first branch A first uses the feature extractor $E_A$ to extract features and obtain a class activation map, then uses the classifier $cls_A$ to classify. The feature extractor is a three-layer convolutional neural network whose input is the backbone output $S_i$ and whose output is the class activation map $F_i^A$, which highlights the most discriminative region of the target class. $F_i^A$ is normalized to $[0,1]$ and defined as $\hat{F}_i^A$, the localization map of the branch. The classifier consists of a global average pooling (GAP) layer and a softmax layer: the GAP layer replaces the fully connected layers of VGG-16 and outputs a one-dimensional vector of size 334, and the softmax layer maps this vector to per-class probabilities. The classifier's input is $F_i^A$ and its output is the classification result $p_i^A$. The whole branch is expressed as:

$$F_i^A = f_A(\theta_A, S_i), \qquad p_i^A = g_A(\phi_A, F_i^A)$$

where $f_A(\cdot)$ and $g_A(\cdot)$ denote the actions of the feature extractor $E_A$ and the classifier $cls_A$, $\theta_A$ are the parameters of $E_A$, and $\phi_A$ are the parameters of the branch-A classifier;
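For illustration, one branch can be sketched as below. The kernel sizes and hidden widths of the three-layer extractor are assumptions; the patent specifies only the layer count, the GAP layer and the softmax layer:

```python
import torch
import torch.nn as nn

class Branch(nn.Module):
    """One branch: a three-layer convolutional feature extractor producing
    an M-channel class activation map, followed by global average pooling
    and softmax classification."""

    def __init__(self, in_channels=512, num_classes=334):
        super().__init__()
        self.extractor = nn.Sequential(
            nn.Conv2d(in_channels, 512, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(512, 512, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(512, num_classes, kernel_size=1),   # class activation map F_i
        )

    def forward(self, S):
        F = self.extractor(S)                  # (B, M, H1, W1)
        logits = F.mean(dim=(2, 3))            # GAP -> one vector of size M
        p = torch.softmax(logits, dim=1)       # classification result p_i
        # min-max normalize each map to [0, 1]: the branch's localization map
        Fmin = F.amin(dim=(2, 3), keepdim=True)
        Fmax = F.amax(dim=(2, 3), keepdim=True)
        loc = (F - Fmin) / (Fmax - Fmin + 1e-8)
        return logits, p, loc
```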
Step 3.2: use a feature eraser to erase the most discriminative region of the feature map $S_i$. Given the threshold $\delta$ ($\delta \in \{0.5, 0.6, 0.7, 0.8, 0.9\}$), the most discriminative region is $\mathrm{Area} = \{(x, y) \mid \hat{F}_i^A(x, y) \ge \delta\}$. The values of $S_i$ at the locations in Area are set to 0, producing the erased feature map $\bar{S}_i$, i.e.

$$\bar{S}_i(x, y) = \begin{cases} 0, & (x, y) \in \mathrm{Area} \\ S_i(x, y), & \text{otherwise;} \end{cases}$$
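For illustration, the feature eraser reduces to a thresholded mask. The sketch below assumes the localization map has already been selected for the class of interest:

```python
import torch

def erase_most_discriminative(S, loc_map, delta=0.6):
    """Zero the positions of the backbone feature map S whose branch-A
    localization response reaches the threshold delta.

    S:       (B, 512, H1, W1) backbone feature map S_i
    loc_map: (B, H1, W1) normalized localization map of the class of interest
    """
    keep = (loc_map < delta).unsqueeze(1).to(S.dtype)  # 0 inside Area, 1 elsewhere
    return S * keep                                    # erased feature map
```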
Step 3.3: the second branch B has essentially the same structure as branch A. It first uses the feature extractor $E_B$ to extract features and obtain a class activation map, then uses the classifier $cls_B$ to classify. The feature extractor is again a three-layer convolutional neural network; its input is the erased feature map $\bar{S}_i$ and its output is the class activation map $F_i^B$, which learns a new most discriminative region. $F_i^B$ is normalized to $[0,1]$ and defined as $\hat{F}_i^B$, the localization map of the branch. The classifier consists of a global average pooling layer, whose output is a one-dimensional vector of size 334, and a softmax layer, which maps this vector to per-class probabilities. The classifier's input is $F_i^B$ and its output is the classification result $p_i^B$. The whole branch is expressed as:

$$F_i^B = f_B(\theta_B, \bar{S}_i), \qquad p_i^B = g_B(\phi_B, F_i^B)$$

where $f_B(\cdot)$ and $g_B(\cdot)$ denote the actions of the feature extractor $E_B$ and the classifier $cls_B$, $\theta_B$ are the parameters of $E_B$, and $\phi_B$ are the parameters of the branch-B classifier;
Step 4: establish the loss functions $L_A$ and $L_B$ of the two branch networks as the cross-entropy between the actual output vectors $p_i^A$ and $p_i^B$ and the target output vector $y_i$ (one-hot), respectively:

$$L_A = -\sum_{i=1}^{N_t} y_i^{\top} \log p_i^A, \qquad L_B = -\sum_{i=1}^{N_t} y_i^{\top} \log p_i^B$$

The total loss of the network is $L = L_A + L_B$;
Step 5: network training. Set the number of iterations to 50, the learning rate to 0.001 and the threshold $\delta$ to 0.6; input the training set $D_t$ into the network; initialize the backbone with VGG-16 weights pre-trained on ImageNet; iteratively update the network parameters with stochastic gradient descent until the loss converges; and save the final model;
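For illustration, a training-loop sketch under these settings follows. The SGD momentum value and the use of the ground-truth class map for erasing are assumptions not fixed by the patent; `backbone`, `branch_a`, `branch_b`, `erase_most_discriminative` and `total_loss` come from the sketches above, and `train_loader` is a standard DataLoader over $D_t$:

```python
import torch

params = (list(backbone.parameters())
          + list(branch_a.parameters())
          + list(branch_b.parameters()))
optimizer = torch.optim.SGD(params, lr=0.001, momentum=0.9)

for epoch in range(50):
    for images, labels in train_loader:                   # training set D_t
        S = backbone(images)
        logits_a, _, loc_a = branch_a(S)
        # erase using the localization map of the ground-truth class
        loc_gt = loc_a[torch.arange(labels.size(0)), labels]
        S_erased = erase_most_discriminative(S, loc_gt, delta=0.6)
        logits_b, _, _ = branch_b(S_erased)

        loss = total_loss(logits_a, logits_b, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```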
Step 6: network testing. Load the saved model and input the test set $D_s$ into the network to obtain the classification accuracy. For a single test image $I_i \in \mathbb{R}^{h \times w \times 3}$ ($1 \le i \le N_s$), obtain the localization map $\hat{F}_i^A$ of branch A and the localization map $\hat{F}_i^B$ of branch B, and take the elementwise maximum of the two maps to obtain the final localization map $\hat{F}_i = \max(\hat{F}_i^A, \hat{F}_i^B)$. Overlaying the localization map on the test image gives the result shown in fig. 4(a). The localization map is binarized, the contour of the butterfly target is extracted, the bounding rectangle of the contour is computed, and finally the rectangle is drawn on the test image to obtain the position of the butterfly target, as shown in fig. 4(b).
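For illustration, the fusion, binarization and box-drawing of this step can be sketched with OpenCV as follows; the binarization threshold of 0.5 is an assumption, since the patent only says the map is binarized:

```python
import cv2
import numpy as np

def draw_butterfly_box(loc_a, loc_b, image_bgr, bin_thresh=0.5):
    """Fuse the two localization maps by elementwise maximum, binarize,
    extract the largest contour and draw its bounding rectangle.

    loc_a, loc_b: (H1, W1) float32 arrays in [0, 1] for the predicted class
    image_bgr:    the original test image as an OpenCV BGR array
    """
    fused = np.maximum(loc_a, loc_b)                      # final localization map
    fused = cv2.resize(fused, (image_bgr.shape[1], image_bgr.shape[0]))
    binary = (fused >= bin_thresh).astype(np.uint8) * 255
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        cv2.rectangle(image_bgr, (x, y), (x + w, y + h), (0, 255, 0), 2)
    return image_bgr
```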
The above embodiment serves only to describe the invention and does not limit the technical solutions described herein. All technical solutions and modifications that do not depart from the spirit and scope of the present invention should be construed as falling within the scope of the appended claims.
Claims (2)
1. A weakly supervised butterfly target detection method based on adversarial complementary learning, characterized by comprising the following steps:
Step 1: constructing a butterfly data set: the data set consists of two parts, the first, called $D_1$, consisting of butterfly ecological images crawled from Google Images and Baidu Images, and the second, called $D_2$, consisting of butterfly specimen images; mixing $D_1$ and $D_2$ yields the butterfly data set $D = \{(I_i, y_i)\}_{i=1}^{N}$, where $I_i$ is a butterfly image and $y_i$ its category label; the data set $D$ contains $N$ images of $M$ butterfly categories in total and is divided into a training set $D_t$ and a test set $D_s$; the training set $D_t$ contains $N_t$ images and the test set $D_s$ contains $N_s$ images;
Step 2: constructing a backbone network: the first 13 layers of VGG-16 are selected as the backbone network, which consists of 5 convolutional blocks; the input to the backbone network is a color butterfly image $I_i \in \mathbb{R}^{h \times w \times 3}$, $1 \le i \le N_t$, where $h$ and $w$ are the height and width of the image and 3 is the number of channels; the network outputs a multi-channel location-aware feature map $S_i \in \mathbb{R}^{H_1 \times W_1 \times K_1}$, where $K_1$ is the number of channels and $H_1$ and $W_1$ are the height and width of the feature map; the backbone network is expressed as:

$$S_i = f_0(\theta_0, I_i),$$

where $f_0(\cdot)$ denotes the action of the backbone network and $\theta_0$ its parameters;
Step 3: constructing an adversarial complementary learning network: the network comprises two parallel branches A and B, each containing a feature extractor and a classifier; the feature extractor and classifier of branch A are denoted $E_A$ and $cls_A$, and those of branch B are denoted $E_B$ and $cls_B$;
Step 4: establishing the loss functions $L_A$ and $L_B$ of the two branch networks as the cross-entropy between the actual output vectors $p_i^A$ and $p_i^B$ and the target output vector $y_i$ (one-hot), respectively:

$$L_A = -\sum_{i=1}^{N_t} y_i^{\top} \log p_i^A, \qquad L_B = -\sum_{i=1}^{N_t} y_i^{\top} \log p_i^B;$$

the total loss of the network is $L = L_A + L_B$;
Step 5: network training: setting hyper-parameters such as the number of iterations and the learning rate, inputting the training set $D_t$ into the network, iteratively updating the network parameters with stochastic gradient descent until the loss converges, and saving the final model;
Step 6: network testing: loading the saved model and inputting the test set $D_s$ into the network to obtain the classification accuracy; for a single test image $I_i \in \mathbb{R}^{h \times w \times 3}$, obtaining the localization map $\hat{F}_i^A$ of branch A and the localization map $\hat{F}_i^B$ of branch B, and taking the elementwise maximum of the two maps to obtain the final localization map $\hat{F}_i = \max(\hat{F}_i^A, \hat{F}_i^B)$; a rectangular box is drawn on the image according to the localization map to obtain the position of the butterfly target in the image.
2. The weakly supervised butterfly target detection method based on adversarial complementary learning according to claim 1, characterized in that step 3 comprises the following steps: Step 3.1: branch A first uses the feature extractor $E_A$ to extract features and obtain a class activation map, then uses the classifier $cls_A$ to classify; the feature extractor is a three-layer convolutional neural network whose input is the backbone output $S_i$ and whose output is the class activation map $F_i^A$, which highlights the most discriminative region of the target class; $F_i^A$ is normalized to $[0,1]$ and defined as $\hat{F}_i^A$, the localization map of the branch; the classifier $cls_A$ comprises a global average pooling layer and a softmax output layer, its input is $F_i^A$ and its output is the classification result $p_i^A$; the whole branch is expressed as:

$$F_i^A = f_A(\theta_A, S_i), \qquad p_i^A = g_A(\phi_A, F_i^A),$$

where $f_A(\cdot)$ and $g_A(\cdot)$ denote the actions of the feature extractor $E_A$ and the classifier $cls_A$, $\theta_A$ are the parameters of $E_A$, and $\phi_A$ are the parameters of the branch-A classifier;
Step 3.2: using a feature eraser to erase the most discriminative region of the feature map $S_i$: given a threshold $\delta$, the most discriminative region is expressed as $\mathrm{Area} = \{(x, y) \mid \hat{F}_i^A(x, y) \ge \delta\}$; the values of $S_i$ at the locations in Area are set to 0, producing the erased feature map $\bar{S}_i$, i.e.

$$\bar{S}_i(x, y) = \begin{cases} 0, & (x, y) \in \mathrm{Area} \\ S_i(x, y), & \text{otherwise;} \end{cases}$$
Step 3.3: branch B has essentially the same structure as branch A; it also first uses the feature extractor $E_B$ to extract features and obtain a class activation map, then uses the classifier $cls_B$ to classify; the feature extractor is again a three-layer convolutional neural network whose input is the erased feature map $\bar{S}_i$ and whose output is the class activation map $F_i^B$, which learns a new most discriminative region; $F_i^B$ is normalized to $[0,1]$ and defined as $\hat{F}_i^B$, the localization map of the branch; the classifier consists of a global average pooling layer and a softmax layer, its input is $F_i^B$ and its output is the classification result $p_i^B$; the whole branch is expressed as:

$$F_i^B = f_B(\theta_B, \bar{S}_i), \qquad p_i^B = g_B(\phi_B, F_i^B),$$

where $f_B(\cdot)$ and $g_B(\cdot)$ denote the actions of the feature extractor $E_B$ and the classifier $cls_B$, $\theta_B$ are the parameters of $E_B$, and $\phi_B$ are the parameters of the branch-B classifier.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202010111404.3A | 2020-02-24 | 2020-02-24 | Weakly supervised butterfly target detection method based on adversarial complementary learning
Publications (1)
Publication Number | Publication Date
---|---
CN111340096A (en) | 2020-06-26
Family
ID=71185486
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202010111404.3A (CN111340096A, pending) | Weakly supervised butterfly target detection method based on adversarial complementary learning | 2020-02-24 | 2020-02-24
Country Status (1)
Country | Link
---|---
CN | CN111340096A (en)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105488536A (en) * | 2015-12-10 | 2016-04-13 | 中国科学院合肥物质科学研究院 | Agricultural pest image recognition method based on multi-feature deep learning technology |
CN109063742A (en) * | 2018-07-06 | 2018-12-21 | 平安科技(深圳)有限公司 | Butterfly identifies network establishing method, device, computer equipment and storage medium |
CN109376765A (en) * | 2018-09-14 | 2019-02-22 | 汕头大学 | A kind of butterfly automatic classification method based on deep learning |
CN109886295A (en) * | 2019-01-11 | 2019-06-14 | 平安科技(深圳)有限公司 | A kind of butterfly recognition methods neural network based and relevant device |
CN110569901A (en) * | 2019-09-05 | 2019-12-13 | 北京工业大学 | Channel selection-based countermeasure elimination weak supervision target detection method |
Non-Patent Citations (1)
Title |
---|
Xiaolin Zhang et al.: "Adversarial Complementary Learning for Weakly Supervised Object Localization", 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition *
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112364980A (en) * | 2020-11-09 | 2021-02-12 | 北京计算机技术及应用研究所 | Deep neural network training method based on reinforcement learning under weak supervision scene |
CN112364980B (en) * | 2020-11-09 | 2024-04-30 | 北京计算机技术及应用研究所 | Deep neural network training method based on reinforcement learning under weak supervision scene |
CN112800927A (en) * | 2021-01-25 | 2021-05-14 | 北京工业大学 | AM-Softmax loss-based butterfly image fine granularity identification method |
CN112800927B (en) * | 2021-01-25 | 2024-03-29 | 北京工业大学 | Butterfly image fine-granularity identification method based on AM-Softmax loss |
CN112801029A (en) * | 2021-02-09 | 2021-05-14 | 北京工业大学 | Multi-task learning method based on attention mechanism |
CN112801029B (en) * | 2021-02-09 | 2024-05-28 | 北京工业大学 | Attention mechanism-based multitask learning method |
CN114882298A (en) * | 2022-07-11 | 2022-08-09 | 东声(苏州)智能科技有限公司 | Optimization method and device for confrontation complementary learning model |
CN114882298B (en) * | 2022-07-11 | 2022-11-01 | 东声(苏州)智能科技有限公司 | Optimization method and device for confrontation complementary learning model |
Similar Documents
Publication | Title
---|---
CN109214399B (en) | Improved YOLOV3 target identification method embedded in SENET structure
Xie et al. | Multilevel cloud detection in remote sensing images based on deep learning
CN109977918B (en) | Target detection positioning optimization method based on unsupervised domain adaptation
CN111340096A (en) | Weakly supervised butterfly target detection method based on adversarial complementary learning
Grilli et al. | A review of point clouds segmentation and classification algorithms
Li et al. | SAR image change detection using PCANet guided by saliency detection
CN110796186A (en) | Dry and wet garbage identification and classification method based on improved YOLOv3 network
CN110619059B (en) | Building marking method based on transfer learning
CN109766873B (en) | Pedestrian re-identification method based on hybrid deformable convolution
CN112464911A (en) | Improved YOLOv3-tiny-based traffic sign detection and identification method
CN111833322B (en) | Garbage multi-target detection method based on improved YOLOv3
CN113269224B (en) | Scene image classification method, system and storage medium
CN112101364B (en) | Semantic segmentation method based on parameter importance increment learning
CN113761259A (en) | Image processing method and device and computer equipment
CN112528845B (en) | Physical circuit diagram identification method based on deep learning and application thereof
CN111339935A (en) | Optical remote sensing picture classification method based on interpretable CNN image classification model
CN111753682A (en) | Hoisting area dynamic monitoring method based on target detection algorithm
CN111353396A (en) | Concrete crack segmentation method based on SCSEOCUnet
CN111079837A (en) | Method for detecting, identifying and classifying two-dimensional gray level images
CN112132145A (en) | Image classification method and system based on model extended convolutional neural network
CN111414951B (en) | Fine classification method and device for images
CN113920472A (en) | Unsupervised target re-identification method and system based on attention mechanism
Tan et al. | Rapid fine-grained classification of butterflies based on FCM-KM and mask R-CNN fusion
CN115049952A (en) | Juvenile fish limb identification method based on multi-scale cascade perception deep learning network
CN112308825A (en) | SqueezeNet-based crop leaf disease identification method
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 2020-06-26