CN110852398B - Aphis gossypii glover recognition method based on convolutional neural network - Google Patents


Info

Publication number
CN110852398B
Authority
CN
China
Prior art keywords
convolution
neural network
cotton
convolutional neural
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911127841.8A
Other languages
Chinese (zh)
Other versions
CN110852398A (en)
Inventor
乔红波
张慧
郭伟
许鑫
马新明
Current Assignee
Henan Agricultural University
Original Assignee
Henan Agricultural University
Priority date
Filing date
Publication date
Application filed by Henan Agricultural University filed Critical Henan Agricultural University
Priority to CN201911127841.8A
Publication of CN110852398A
Application granted
Publication of CN110852398B
Active legal status
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a cotton aphid identification method based on a convolutional neural network, which comprises the following steps: acquiring a cotton aphid hazard image; establishing a cotton aphid identification model based on a convolutional neural network by means of transfer learning and fine tuning; and inputting the cotton aphid hazard image into the model to determine the cotton aphid hazard grade. In the method, a mobile phone collects images of aphid damage on cotton, and a pest identification method and model are established from a large body of earlier survey data using data mining and a deep convolutional neural network. The model identifies and distinguishes cotton aphid hazard grades, reduces the variation introduced by human factors in current plant-protection surveys, improves survey efficiency, and lowers testing cost. In short, images can be collected conveniently and classified quickly, providing a convenient, fast and accurate survey system for pest investigation and timely data to support the decisions of the relevant departments.

Description

Aphis gossypii Glover recognition method based on a convolutional neural network
Technical Field
The invention relates to the technical field of cotton aphid identification, in particular to a cotton aphid identification method based on a convolutional neural network.
Background
Traditional pest and disease surveys are carried out by agricultural specialists and technicians; the process is time-consuming, labor-intensive and inefficient. Moreover, different investigators understand and apply the grading standards differently, so survey results carry subjective error.
Disclosure of Invention
The embodiment of the invention provides a cotton aphid identification method based on a convolutional neural network, which is used for solving the problems in the prior art.
The embodiment of the invention provides a cotton aphid identification method based on a convolutional neural network, which comprises the following steps:
acquiring a cotton aphid hazard image;
adopting a transfer learning and fine tuning mode to establish a cotton aphid identification model based on a convolutional neural network;
and inputting the cotton aphid hazard image into a cotton aphid identification model based on a convolutional neural network, and determining the cotton aphid hazard level.
Further, the acquiring the cotton aphid hazard image specifically includes:
images are collected over the cotton canopy with a camera-equipped mobile phone; the lens is kept parallel to the canopy during acquisition, and the entire cotton canopy must appear in the captured image.
Further, a cotton aphid identification model based on a convolutional neural network is established by adopting a transfer learning and fine tuning mode; the method specifically comprises the following steps:
training a no_top weight parameter on an ImageNet data set by adopting a convolutional neural network model;
fine tuning all convolution layers and classification layers of the convolutional neural network model with a cotton aphid hazard data set;
training to form a cotton aphid identification model based on the convolutional neural network according to the no_top weight parameter and the finely tuned convolutional neural network model.
Further, the fine tuning of all convolution layers and classification layers of the convolutional neural network model with the cotton aphid hazard data set specifically comprises the following steps:
the first and second convolution groups are similar: each consists of two convolution layers and a pooling layer, and each convolution layer is followed by a ReLU activation function. In the two convolution layers of the first group the number of output feature maps is set to 64; the cotton aphid hazard image is convolved with 3×3 filters and padded with 1 pixel, so the feature map size after each convolution is 64×224×224. The feature information extracted by these two convolution layers is passed to a downsampling layer and scaled with 2×2 sampling sub-blocks at stride 2, finally giving 64 feature maps of 112×112 pixels as the input of the second convolution group. The second group outputs 128 feature maps; its filter size and stride and its sub-sampling block size and stride are the same as the first group's, finally yielding feature maps of size 128×56×56;
the third convolution group contains three convolution layers and a pooling layer, each followed by a ReLU activation function. Its input is the 128 feature maps of 56×56; each of the three convolution layers outputs 256 feature maps, with filter size 3×3 and 1 pixel of padding, so the feature maps after each convolution are 256×56×56. The extracted features are passed to a pooling layer and scaled with 2×2 sampling sub-blocks at stride 2, finally giving feature maps of size 256×28×28;
the fourth and fifth convolution groups are similar; each comprises three convolution layers and a pooling layer, with a ReLU activation function after each convolution layer. Both groups output 512 feature maps, and the outputs after the fourth and fifth pooling layers are 512×14×14 and 512×7×7 respectively;
in the fully connected stage every neuron in one layer is connected to all neurons in the next. fc6 and fc7 are the first and second fully connected layers; each produces a 4096-dimensional vector from the flattened output of the last convolution group, and the Dropout technique randomly switches off a fraction of the neurons during training, alleviating model overfitting. The fc8 layer performs softmax regression; its 1000 output dimensions correspond to the probabilities that the image belongs to each class.
Further, the cotton aphid hazard grades are specifically as follows:
Aphis gossypii hazard grade 0: no aphids; leaves flat;
Aphis gossypii hazard grade 1: aphids present; leaves undamaged;
Aphis gossypii hazard grade 2: aphids present; the most heavily damaged leaf is wrinkled or slightly curled, approaching a semicircle;
Aphis gossypii hazard grade 3: aphids present; the most heavily damaged leaf is curled to a semicircle or more, arc-shaped;
Aphis gossypii hazard grade 4: aphids present; the most heavily damaged leaf is fully curled, spherical.
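The five-grade scale above maps naturally to a small lookup table. The sketch below is illustrative only; the dictionary name and strings paraphrase the grade definitions above and are not the normative national-standard wording:

```python
# Illustrative lookup table for the five Aphis gossypii hazard grades.
# The strings paraphrase the grade definitions in the text above.
APHID_HAZARD_GRADES = {
    0: "no aphids; leaves flat",
    1: "aphids present; leaves undamaged",
    2: "aphids present; worst leaf wrinkled or slightly curled, approaching a semicircle",
    3: "aphids present; worst leaf curled to a semicircle or more, arc-shaped",
    4: "aphids present; worst leaf fully curled, spherical",
}

def describe_grade(grade: int) -> str:
    """Return the description of an integer hazard grade, 0 through 4."""
    if grade not in APHID_HAZARD_GRADES:
        raise ValueError(f"unknown hazard grade: {grade}")
    return APHID_HAZARD_GRADES[grade]

print(describe_grade(4))  # prints the grade-4 description
```

A survey tool could use such a table to render the model's predicted class index as a human-readable grade.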
The embodiment of the invention provides a cotton aphid identification method based on a convolutional neural network, which has the following beneficial effects compared with the prior art:
the invention uses deep learning to identify the plant disease and pest grade as a technology for well realizing automatic extraction of image features, uses a mobile phone to collect a cotton aphid hazard image, establishes a plant disease and pest identification method and model by combining a large amount of early investigation data with a deep convolutional neural network method of data mining and deep learning, identifies and distinguishes the cotton aphid hazard grade, reduces the difference generated by artificial factors in the current plant protection investigation, improves investigation efficiency, reduces test cost, can conveniently and rapidly collect images, rapidly classifies the images, provides a convenient, rapid and accurate investigation system for the plant disease and pest investigation, and provides decision support for relevant decision departments according to theoretical data with high timeliness.
Drawings
FIG. 1 is a schematic diagram of a cotton aphid hazard class structure provided by an embodiment of the invention;
fig. 2 is a schematic diagram of a training process of an aphid pest grade image recognition model according to an embodiment of the present invention;
FIG. 3a is a precision graph of a training set and a test set provided by an embodiment of the present invention;
fig. 3b is a graph showing the loss of training and testing sets according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The embodiment of the invention provides a cotton aphid identification method based on a convolutional neural network, which comprises the following steps:
and step 1, acquiring a cotton aphid hazard image.
And 2, establishing a cotton aphid identification model based on a convolutional neural network by adopting a transfer learning and fine tuning mode.
And step 3, inputting the cotton aphid hazard image into a cotton aphid identification model based on a convolutional neural network, and determining the cotton aphid hazard level.
For step 1, the image acquisition process is specifically as follows:
the study sites were the kuerle base of the national academy of agricultural sciences, the institute of plant protection, kuerle city, and the shilike county, xinjiang. Shooting time is from 6 months to 7 months in 2018, and average month temperatures are 24.4 ℃ and 21.9 ℃ respectively. The research images are typical cotton aphid hazard images randomly collected in the field, and in the image collection process, the images are collected on the cotton canopy by using a mobile phone with a camera, so that the shadow area on the canopy is minimum and the interference on post-treatment is minimum. Shooting angle: taking a single cotton plant as shooting content, wherein the shooting angle is vertical orthoshooting; the lens must be kept parallel to the canopy during image acquisition, and it is ensured that the canopy of cotton is fully recorded in the image, even if the shooting of one image is completed. Preprocessing the acquired images, and uniformly scaling the acquired aphid hazard images to 224 x 3. 2500 cotton aphid images with hazard grades from grade 1 to grade 4 are collected, and the total number of the cotton aphid images is 10000. The aphid pest grade grading standard adopts the national standard (as shown in table 1).
TABLE 1 grade Standard for Aphis gossypii
Grade 0: no aphids; leaves flat
Grade 1: aphids present; leaves undamaged
Grade 2: aphids present; the most heavily damaged leaf wrinkled or slightly curled, approaching a semicircle
Grade 3: aphids present; the most heavily damaged leaf curled to a semicircle or more, arc-shaped
Grade 4: aphids present; the most heavily damaged leaf fully curled, spherical
For the steps 2-3, the specific content of the cotton aphid identification process is as follows:
the acquired cotton aphid hazard data set adopts a mode of transfer learning and fine tuning, firstly, a no_top weight parameter trained on an ImageNet data set by VGG-16 is used, a classifier is continuously trained on the cotton aphid hazard data set, and secondly, fine tuning is carried out on all convolution layers and classification layers by using the cotton aphid hazard data set. In the experiments, a VGG-16 deep convolutional neural network model is adopted, and the network structure and the specific experiments are as follows.
The VGG-16 network is a deep convolutional neural network developed by Karen Simonyan and Andrew Zisserman of the Visual Geometry Group (VGG) at the University of Oxford; it took second place in the 2014 ILSVRC competition.
The network architecture of VGGNet is straightforward: 13 convolution layers, 3 fully connected layers and a final classifier. Instead of single layers with large kernels, it stacks 3×3 convolution layers followed by 2×2 max-pooling. Stacked small convolution kernels work better than a single large kernel: they simulate the same large receptive field, and the extra depth provides multiple nonlinear layers that can learn more complex patterns at small cost and with fewer parameters, so the network structure can be deepened and performance improved; at the same time 3×3 filters are better at preserving image properties.
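The claim that stacked 3×3 kernels emulate a larger receptive field with fewer parameters can be checked with a few lines of arithmetic; the helper names below are illustrative, not part of the patent:

```python
def stacked_receptive_field(n_layers: int, kernel: int = 3) -> int:
    """Receptive field of n stacked stride-1 kernel x kernel convolutions."""
    rf = 1
    for _ in range(n_layers):
        rf += kernel - 1   # each stride-1 layer widens the field by kernel - 1
    return rf

def conv_weights(kernel: int, channels: int) -> int:
    """Weight count of one kernel x kernel conv, `channels` in and out, no bias."""
    return kernel * kernel * channels * channels

# Two stacked 3x3 convolutions see a 5x5 region, three see 7x7 ...
print(stacked_receptive_field(2), stacked_receptive_field(3))  # 5 7

# ... and for 256 channels, two 3x3 layers use fewer weights than one 5x5 layer.
print(2 * conv_weights(3, 256), conv_weights(5, 256))  # 1179648 1638400
```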
The first and second convolution groups are similar: each consists of two convolution layers and a pooling layer, and each convolution layer is followed by a ReLU activation function. In the two convolution layers of the first group the number of output feature maps is set to 64; the cotton aphid hazard image is convolved with 3×3 filters and padded with 1 pixel, so the feature map size after each convolution is 64×224×224. The feature information extracted by these two convolution layers is passed to a downsampling layer and scaled with 2×2 sampling sub-blocks at stride 2, finally giving 64 feature maps of 112×112 pixels as the input of the second convolution group. The second group outputs 128 feature maps; its filter size and stride and its sub-sampling block size and stride are the same as the first group's, finally yielding feature maps of size 128×56×56.
The third convolution group, unlike the two groups above, contains three convolution layers and a pooling layer, each followed by a ReLU activation function. Its input is the 128 feature maps of 56×56; each of the three convolution layers outputs 256 feature maps, with filter size 3×3 and 1 pixel of padding, so the feature maps after each convolution are 256×56×56. The extracted features are passed to a pooling layer and scaled with 2×2 sampling sub-blocks at stride 2, finally giving feature maps of size 256×28×28.
The fourth and fifth convolution groups are similar; each comprises three convolution layers and a pooling layer, with a ReLU activation function after each convolution layer. Both groups output 512 feature maps, and the outputs after the fourth and fifth pooling layers are 512×14×14 and 512×7×7 respectively.
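The feature-map sizes quoted for the five convolution groups follow mechanically from 3×3 same-padded convolutions and 2×2 stride-2 pooling, which a short trace confirms (a sketch; the function name is ours, not the patent's):

```python
def vgg16_feature_shapes(size: int = 224):
    """Trace feature-map shapes through VGG-16's five convolution groups.

    3x3 convolutions with 1-pixel padding leave the spatial size unchanged;
    each group ends with a 2x2, stride-2 pooling layer that halves it.
    """
    shapes = []
    for channels in (64, 128, 256, 512, 512):   # output maps per group
        size //= 2                              # effect of the pooling layer
        shapes.append((channels, size, size))
    return shapes

for i, (c, h, w) in enumerate(vgg16_feature_shapes(), start=1):
    print(f"after group {i}: {c} x {h} x {w}")
# after group 1: 64 x 112 x 112 ... after group 5: 512 x 7 x 7
```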
The final fully connected stage means that every neuron in one layer is connected to all neurons in the next. fc6 and fc7 are the first and second fully connected layers; each produces a 4096-dimensional vector from the flattened output of the last convolution group, and the Dropout technique randomly switches off a fraction of the neurons during training, which can alleviate model overfitting. The fc8 layer performs softmax regression; its 1000 output dimensions correspond to the probabilities that the image belongs to each class.
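The two techniques named above, Dropout and the softmax regression of fc8, can be sketched in NumPy. This is a generic illustration of the two operations under the stated dimensions (4096-unit fc7, 1000-way fc8), not the patent's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x: np.ndarray, rate: float = 0.5) -> np.ndarray:
    """Training-time inverted dropout: zero a fraction `rate` of the units and
    rescale the rest so the expected activation is unchanged."""
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

def softmax(z: np.ndarray) -> np.ndarray:
    """Numerically stable softmax turning logits into class probabilities."""
    e = np.exp(z - z.max())
    return e / e.sum()

fc7_out = dropout(rng.standard_normal(4096))  # fc7 activations with dropout applied
logits = rng.standard_normal(1000)            # stand-in fc8 logits, one per class
probs = softmax(logits)
print(round(float(probs.sum()), 6))           # 1.0
```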
Four cotton aphid hazard grades were collected (grades 1, 2, 3 and 4), 10000 original images in total, and the sample count was increased by rotation transforms. The DCNN (deep convolutional neural network) serves as a feature extractor that produces deep features of a target sample, which are finally fed to a classifier. The VGG-16 network structure is combined with transfer learning: the model input is a 224×224×3 image; the pre-transfer network comprises 13 convolution layers and 3 fully connected layers; the parameters of the 13 convolution layers are kept fixed during training and do not participate in it; the three fully connected layers are replaced by global average pooling, and a classification layer of 4 neurons follows the global average pooling layer. The training procedure is as follows: first fix the convolution layers and train the classification layer, then fine-tune the parameters of the whole model. Experiments on the four collected grades of cotton aphid hazard images achieved the highest recognition accuracy, showing that DCNN-based transfer learning with fine tuning on a small cotton aphid hazard sample set is feasible and efficient. The specific process is shown in fig. 2.
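The modified head described above, global average pooling over the frozen 512×7×7 VGG-16 features followed by a 4-neuron softmax classification layer, can be sketched in NumPy. Random weights stand in for the trained classifier, so the output probabilities are meaningless; only the shapes and the mechanics are the point:

```python
import numpy as np

rng = np.random.default_rng(42)

def global_average_pooling(features: np.ndarray) -> np.ndarray:
    """Collapse a C x H x W feature volume to a C-vector by spatial averaging."""
    return features.mean(axis=(1, 2))

def classify(features: np.ndarray, w: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Global average pooling followed by a 4-way softmax classification layer."""
    v = global_average_pooling(features)   # 512 x 7 x 7 -> 512-vector
    z = w @ v + b                          # 4 logits, one per hazard grade
    e = np.exp(z - z.max())
    return e / e.sum()

features = rng.standard_normal((512, 7, 7))   # stand-in for frozen VGG-16 output
w = 0.01 * rng.standard_normal((4, 512))      # untrained classifier weights
b = np.zeros(4)
probs = classify(features, w, b)
print(probs.shape)  # (4,)
```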
Analysis of experimental results
The experimental parameters were set as follows: SGD with momentum 0.9 as the optimizer, batch_size 64, weight decay 0.0005, and learning rate 0.0001. The network was trained for 200 epochs on the training set. The convergence of the model, i.e. the overall recognition rate, is shown in figs. 3a and 3b.
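A single parameter update under these settings (SGD, momentum 0.9, weight decay 0.0005, learning rate 0.0001) can be written out explicitly. Frameworks differ in exactly how weight decay enters the update; the sketch below folds it into the gradient, which is one common convention, and may not match the exact optimizer used in the experiment:

```python
import numpy as np

def sgd_momentum_step(w, v, grad, lr=0.0001, momentum=0.9, weight_decay=0.0005):
    """One SGD update with momentum and L2 weight decay folded into the gradient."""
    g = grad + weight_decay * w   # L2 penalty contributes weight_decay * w
    v = momentum * v - lr * g     # velocity accumulates a decaying gradient sum
    w = w + v
    return w, v

w = np.array([1.0])
v = np.zeros(1)
# One step with weight decay switched off, so the result is easy to verify:
w, v = sgd_momentum_step(w, v, grad=np.array([0.5]), weight_decay=0.0)
print(round(float(w[0]), 6))  # 0.99995, i.e. 1.0 - 0.0001 * 0.5
```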
The epoch count was set to 200: 200 epochs of transfer learning trained the layers after the convolution layers, followed by 200 epochs of overall fine tuning of the previous model. As fig. 3a shows, train_acc rises markedly once overall fine tuning is applied; recognition accuracy reaches 0.65 after the first epoch of transfer learning and levels off after epoch 150 at an average of 0.97. test_acc also rises markedly, flattens after epoch 150, and finally averages 0.96, only 0.01 below train_acc. From the loss values in fig. 3b, train_loss stabilizes from epoch 150 to 200 at an average of 0.03, while test_loss keeps falling and is nearly unchanged after epoch 150, averaging 0.07; the gap between the two stays within 0.04. Overall, the accuracy and loss gaps between the training and test sets of the VGG-16 model are very small and within the convergence range. Within a certain range the model's accuracy increases with the number of training epochs; beyond that range the weights reach their optimal values, the model converges, and network performance is at its best.
According to the embodiment of the invention, the mobile phone is used for collecting the aphid hazard image of the cotton, the deep convolutional neural network method is used for identifying and distinguishing the aphid hazard level of the cotton, and the research result shows that the identification accuracy can reach 97% by using the method. The method can conveniently and rapidly acquire images and rapidly classify the images, provides a convenient and rapid aphid hazard investigation method for cotton production, improves aphid hazard investigation accuracy and investigation efficiency, and has good application prospects.
The foregoing disclosure is only a few specific embodiments of the present invention and various changes and modifications may be made by those skilled in the art without departing from the spirit and scope of the invention, and it is intended that the invention also includes such changes and modifications as fall within the scope of the claims and their equivalents.

Claims (3)

1. A cotton aphid identification method based on a convolutional neural network is characterized by comprising the following steps:
acquiring a cotton aphid hazard image;
adopting a transfer learning and fine tuning mode to establish a cotton aphid identification model based on a convolutional neural network;
inputting the cotton aphid hazard image into a cotton aphid identification model based on a convolutional neural network, and determining the cotton aphid hazard level;
the method comprises the steps of establishing a cotton aphid identification model based on a convolutional neural network by adopting a transfer learning and fine tuning mode; the method specifically comprises the following steps:
training a no_top weight parameter on an ImageNet data set by adopting a convolutional neural network model;
fine tuning all convolution layers and classification layers of the convolutional neural network model with a cotton aphid hazard data set;
training to form a cotton aphid identification model based on a convolutional neural network according to the no_top weight parameter and the finely tuned convolutional neural network model;
the fine tuning of all convolution layers and classification layers of the convolutional neural network model with the cotton aphid hazard data set specifically comprises the following steps:
the first and second convolution groups are similar: each consists of two convolution layers and a pooling layer, and each convolution layer is followed by a ReLU activation function. In the two convolution layers of the first group the number of output feature maps is set to 64; the cotton aphid hazard image is convolved with 3×3 filters and padded with 1 pixel, so the feature map size after each convolution is 64×224×224. The feature information extracted by these two convolution layers is passed to a downsampling layer and scaled with 2×2 sampling sub-blocks at stride 2, finally giving 64 feature maps of 112×112 pixels as the input of the second convolution group. The second group outputs 128 feature maps; its filter size and stride and its sub-sampling block size and stride are the same as the first group's, finally yielding feature maps of size 128×56×56;
the third convolution group contains three convolution layers and a pooling layer, each followed by a ReLU activation function. Its input is the 128 feature maps of 56×56; each of the three convolution layers outputs 256 feature maps, with filter size 3×3 and 1 pixel of padding, so the feature maps after each convolution are 256×56×56. The extracted features are passed to a pooling layer and scaled with 2×2 sampling sub-blocks at stride 2, finally giving feature maps of size 256×28×28;
the fourth and fifth convolution groups are similar; each comprises three convolution layers and a pooling layer, with a ReLU activation function after each convolution layer. Both groups output 512 feature maps, and the outputs after the fourth and fifth pooling layers are 512×14×14 and 512×7×7 respectively;
in the fully connected stage every neuron in one layer is connected to all neurons in the next. fc6 and fc7 are the first and second fully connected layers; each produces a 4096-dimensional vector from the flattened output of the last convolution group, and the Dropout technique randomly switches off a fraction of the neurons during training, alleviating model overfitting. The fc8 layer performs softmax regression; its 1000 output dimensions correspond to the probabilities that the image belongs to each class.
2. The method for identifying cotton aphids based on convolutional neural network as claimed in claim 1, wherein said obtaining a cotton aphid hazard image comprises:
images are collected over the cotton canopy with a camera-equipped mobile phone; the lens is kept parallel to the canopy during acquisition, and the entire cotton canopy must appear in the captured image.
3. The method for identifying cotton aphids based on a convolutional neural network according to claim 1, characterized in that the cotton aphid hazard grades are specifically as follows:
Aphis gossypii hazard grade 0: no aphids; leaves flat;
Aphis gossypii hazard grade 1: aphids present; leaves undamaged;
Aphis gossypii hazard grade 2: aphids present; the most heavily damaged leaf is wrinkled or slightly curled, approaching a semicircle;
Aphis gossypii hazard grade 3: aphids present; the most heavily damaged leaf is curled to a semicircle or more, arc-shaped;
Aphis gossypii hazard grade 4: aphids present; the most heavily damaged leaf is fully curled, spherical.
CN201911127841.8A 2019-11-18 2019-11-18 Aphis gossypii glover recognition method based on convolutional neural network Active CN110852398B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911127841.8A CN110852398B (en) 2019-11-18 2019-11-18 Aphis gossypii glover recognition method based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911127841.8A CN110852398B (en) 2019-11-18 2019-11-18 Aphis gossypii glover recognition method based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN110852398A CN110852398A (en) 2020-02-28
CN110852398B true CN110852398B (en) 2023-05-23

Family

ID=69602028

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911127841.8A Active CN110852398B (en) 2019-11-18 2019-11-18 Aphis gossypii glover recognition method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN110852398B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112042449A (en) * 2020-09-17 2020-12-08 山西农业大学 Method for controlling aphids in apple orchard based on Chinese rice lacewing
CN112528726B (en) * 2020-10-14 2022-05-13 石河子大学 Cotton aphid pest monitoring method and system based on spectral imaging and deep learning

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107657602A (en) * 2017-08-09 2018-02-02 武汉科技大学 Based on the breast structure disorder recognition methods for migrating convolutional neural networks twice
CN110188824A (en) * 2019-05-31 2019-08-30 重庆大学 A kind of small sample plant disease recognition methods and system
CN110309841A (en) * 2018-09-28 2019-10-08 浙江农林大学 A kind of hickory nut common insect pests recognition methods based on deep learning

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107657602A (en) * 2017-08-09 2018-02-02 武汉科技大学 Based on the breast structure disorder recognition methods for migrating convolutional neural networks twice
CN110309841A (en) * 2018-09-28 2019-10-08 浙江农林大学 A kind of hickory nut common insect pests recognition methods based on deep learning
CN110188824A (en) * 2019-05-31 2019-08-30 重庆大学 A kind of small sample plant disease recognition methods and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Chai Shuai; Li Zhuangju. Tomato pest and disease detection based on transfer learning. Computer Engineering and Design. 2019, (06), full text. *

Also Published As

Publication number Publication date
CN110852398A (en) 2020-02-28

Similar Documents

Publication Publication Date Title
Hassan et al. Plant disease identification using a novel convolutional neural network
CN107016405B (en) A kind of pest image classification method based on classification prediction convolutional neural networks
WO2022160771A1 (en) Method for classifying hyperspectral images on basis of adaptive multi-scale feature extraction model
CN110263705A (en) Towards two phase of remote sensing technology field high-resolution remote sensing image change detecting method
CN109344891A (en) A kind of high-spectrum remote sensing data classification method based on deep neural network
CN111179216B (en) Crop disease identification method based on image processing and convolutional neural network
CN111652326A (en) Improved fruit maturity identification method and identification system based on MobileNet v2 network
CN110852398B (en) Aphis gossypii glover recognition method based on convolutional neural network
CN110321956B (en) Grass pest control method and device based on artificial intelligence
CN111709477A (en) Method and tool for garbage classification based on improved MobileNet network
CN111340019A (en) Grain bin pest detection method based on Faster R-CNN
CN116129260A (en) Forage grass image recognition method based on deep learning
CN113435254A (en) Sentinel second image-based farmland deep learning extraction method
CN112016596A (en) Evaluation method for farmland soil fertility based on convolutional neural network
CN114898359B (en) Litchi plant diseases and insect pests detection method based on improvement EFFICIENTDET
CN113920376A (en) Method for identifying wheat seed varieties based on light-weight convolutional neural network
Sehree et al. Olive trees cases classification based on deep convolutional neural network from unmanned aerial vehicle imagery
CN114463651A (en) Crop pest and disease identification method based on ultra-lightweight efficient convolutional neural network
CN114708492A (en) Fruit tree pest and disease damage image identification method
Bonkra et al. Scientific landscape and the road ahead for deep learning: apple leaves disease detection
CN113076873A (en) Crop disease long-tail image identification method based on multi-stage training
CN108596118A (en) A kind of Remote Image Classification and system based on artificial bee colony algorithm
CN115147835B (en) Pineapple maturity detection method based on improved RETINANET natural orchard scene
Poorni et al. Detection of rice leaf diseases using convolutional neural network
Rajeswarappa et al. Crop Pests Identification based on Fusion CNN Model: A Deep Learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant