CN112560644B - Crop disease and insect pest automatic identification method suitable for field - Google Patents
- Publication number: CN112560644B
- Application number: CN202011444596.6A
- Authority: CN (China)
- Prior art keywords: crop, disease, image, species, constraint
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06V20/188 — Vegetation (terrestrial scenes)
- G06N3/045 — Combinations of networks
- G06N3/08 — Learning methods
- G06T5/30 — Erosion or dilatation, e.g. thinning
- G06T7/11 — Region-based segmentation
- G06T7/136 — Segmentation involving thresholding
- G06T7/194 — Foreground-background segmentation
- G06V10/44 — Local feature extraction
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/20112 — Image segmentation details
- G06T2207/20132 — Image cropping
- G06T2207/30188 — Vegetation; Agriculture
Abstract
The invention relates to the field of computer vision, and in particular to an automatic method for identifying crop diseases and insect pests adapted to field conditions. The method comprises the following steps: S1, acquiring raw crop image data and preprocessing it; S2, inputting the preprocessed crop image data into an improved automatic crop disease and insect pest identification model and predicting the disease or pest category corresponding to the image data. The network architecture of the improved model adds two branches to the backbone network of a convolutional neural network: a channel orthogonal constraint, applied to the last-layer features output by the backbone, and a species classification constraint, applied to any one of the features output by the backbone. With this method, pest and disease categories can be identified accurately, managers do not need expert-level field knowledge, and the identification performance of the model in field environments is improved.
Description
Technical Field
The invention relates to the field of computer vision, and in particular to an automatic identification method for crop diseases and insect pests adapted to field conditions.
Background
From 2006 to 2015, damage to crops in China from diseases, pests, weeds and rodents remained severe, with annual grain losses amounting to 20.88% of the country's total grain yield. Crop diseases and pests mainly originate from bacteria, fungi, oomycetes, viruses, nematodes and insects, and infected crop leaves typically show symptoms such as spots, discoloration, deformity, wilting and necrosis. Crop health is essential to farmers' livelihoods, and diagnosing these symptoms requires a high level of professional knowledge, so a method for automatically identifying crop diseases is of great significance.
Compared with traditional expert diagnosis, which is subjective, experience-dependent and time-consuming, automatic identification methods based on images and computer vision can achieve more accurate and faster diagnosis. As the closest prior art, the papers "Using Deep Learning for Image-Based Plant Disease Detection" and "Solving Current Limitations of Deep Learning Based Approaches for Plant Disease Detection" describe training a classification neural network to automatically identify plant diseases and insect pests. These methods achieve high identification accuracy under controlled laboratory conditions (with strict requirements on illumination, pose and background), but their performance degrades rapidly when applied in real fields.
Existing automatic plant disease and insect pest identification methods are still very limited when applied to field crops, with two main problems: 1. Crop images are usually collected under controlled laboratory conditions and are idealized compared with actual growing conditions, so the trained network performs poorly in practice and can only recognize single, front-facing crop leaves with uniform illumination and a plain background. 2. Identification methods based on extracted image features do not account for all cases, for example: leaves with the same disease may show symptoms of different sizes and at different locations, may carry more than one disease, or may show symptoms very similar to those of other diseases. These situations reduce the accuracy of crop pest and disease identification.
Given the above limitations of existing methods, a more robust automatic identification algorithm for crop diseases and insect pests in real fields must address three main problems: 1. Crop images in datasets have a plain background, whereas the background of leaf images in field environments is variable, as shown in fig. 1(a). 2. The scale and location of symptoms of the same disease vary, as shown in fig. 1(b). 3. Different diseases of the same crop produce similar symptoms, and different diseases of different crops caused by the same pathogen are also similar; fig. 1(c) shows, from left to right, target spot, scab, early blight and late blight of tomato.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art by, on the one hand, enhancing the original images and, on the other hand, improving the network of the identification algorithm to take into account crop species and the interrelationships among feature channels, thereby providing a more robust automatic crop disease and insect pest identification method adapted to the field.
In order to achieve the above purpose, the invention provides the following technical scheme:
a crop pest and disease damage automatic identification method adapting to fields comprises the following steps:
s1, acquiring the original data of the crop image, and preprocessing the original data of the crop image;
s2, inputting the preprocessed crop image original data into the improved crop disease and insect pest automatic identification model, and predicting the disease and insect pest type corresponding to the crop image original data;
the network architecture of the improved automatic crop disease and insect pest identification model is as follows: two branches of channel orthogonal constraint and species classification constraint are added on a backbone network of the convolutional neural network, the channel orthogonal constraint is added on the last layer of features output by the backbone network, and the species classification constraint is added on any one feature output by the backbone network.
As a preferred embodiment of the present invention, the processing procedure of the backbone network of the convolutional neural network specifically includes:
s21, inputting the preprocessed crop image original data into a backbone network of a convolutional neural network, and outputting multilayer features, wherein each feature is composed of a plurality of feature maps;
s22, carrying out global average pooling operation on the last layer of characteristics output by the backbone network to obtain a characteristic vector Fdisease;
S23, using the full connection layer to connect the feature vector FdiseaseMapping out a region with the dimension of category numberdVectors, regions with number of classes as dimensionsdIn the vector, the index corresponding to the maximum value is the corresponding pest and disease damage prediction type of the crop image data.
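Steps S21-S23 can be sketched in a few lines of numpy; this is a minimal illustration only, with a random tensor standing in for the real backbone output and a hypothetical category count K_d = 38:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the last backbone feature M4: (channels, height, width).
# 2048 channels matches a ResNet50-style final block; values are random here.
M4 = rng.standard_normal((2048, 7, 7))

# S22: global average pooling collapses each channel's feature map to one number.
F_disease = M4.mean(axis=(1, 2))          # shape (2048,)

# S23: a fully connected layer maps F_disease to a logits_d vector whose
# dimension equals the number of disease categories (hypothetical K_d = 38).
K_d = 38
W = rng.standard_normal((K_d, 2048)) * 0.01   # assumed random initialisation
b = np.zeros(K_d)
logits_d = W @ F_disease + b              # shape (K_d,)

# The index of the largest logit is the predicted pest/disease category.
predicted_class = int(np.argmax(logits_d))
print(F_disease.shape, logits_d.shape, predicted_class)
```

In a trained model, W and b would of course be learned weights rather than random values.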
As a preferred scheme of the present invention, the processing procedure of the channel orthogonal constraint specifically includes:
A1, obtaining the channels of the last-layer feature M_4 output by the backbone network of the convolutional neural network;
A2, flattening the feature map of each channel of M_4 into a feature vector, yielding a matrix f;
A3, transposing the matrix f to obtain f^T, and multiplying f by f^T to obtain a square matrix M_channel; M_channel reflects the pairwise relationship between feature channels.
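Steps A1-A3 amount to building a channel-similarity matrix. The sketch below uses a small random tensor and additionally L2-normalises the rows of f, on the assumption (consistent with the loss description later in the document) that M_channel holds cosine values; the λ value is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# A1: channels of the last backbone feature M4 (small sizes for illustration).
c, h, w = 8, 7, 7
M4 = rng.standard_normal((c, h, w))

# A2: flatten each channel's h*w feature map into a row vector -> matrix f.
f = M4.reshape(c, h * w)

# Normalise rows so that f @ f.T holds pairwise cosine similarities,
# matching the "symmetric cosine value square matrix" of the loss.
f = f / np.linalg.norm(f, axis=1, keepdims=True)

# A3: M_channel = f @ f.T reflects the relationship between channel pairs.
M_channel = f @ f.T                        # shape (c, c), symmetric, diagonal = 1

# One reading of the channel orthogonal loss: sum of the upper-triangular
# elements plus lambda times the largest off-diagonal element.
iu = np.triu_indices(c, k=1)
lam = 1.0                                  # balance coefficient (assumed value)
loss_channel = M_channel[iu].sum() + lam * M_channel[iu].max()
print(M_channel.shape, float(loss_channel))
```

Minimising such a loss pushes distinct channels toward mutually orthogonal (near-zero-cosine) feature maps.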
As a preferred embodiment of the present invention, the processing procedure of the species classification constraint specifically includes:
B1, selecting one layer of features from the multilayer features as the selected feature;
B2, performing a global average pooling operation on the selected feature to obtain a feature vector F_species;
B3, using a fully connected layer to map the feature vector F_species to a logits_s vector whose dimension equals the number of species categories; in the logits_s vector, the index of the maximum value is the predicted crop species for the crop image data.
As a preferred scheme of the invention, the improved automatic crop pest identification model adopts a comprehensive loss function in a network training stage, wherein the comprehensive loss function is the weighted sum of a disease classification loss function, a species classification loss function and a channel orthogonal constraint loss function.
As a preferred embodiment of the present invention, the channel orthogonal constraint loss function is:

L_channel = Σ_{i<j} (M_channel)_{ij} + λ · max_{i≠j} (M_channel)_{ij}

wherein M_channel is a symmetric square matrix of cosine values, Σ_{i<j} (M_channel)_{ij} is the sum of its upper-triangular elements, max_{i≠j} (M_channel)_{ij} is its largest off-diagonal element, and λ is the coefficient balancing the two terms.
As a preferred embodiment of the present invention, the disease classification loss function is:

L_disease = -(1/N) Σ_{i=1}^{N} Σ_{k=1}^{K_d} [ (1−ε)·δ(k = y_i) + ε/K_d ] · log p_k

wherein K_d is the total number of crop disease categories, y_i is the true disease category of the current sample, N is the number of samples participating in training, ε is the label smoothing factor, p_{y_i} is the predicted probability of the true disease category of the current sample, and p_k is the predicted probability of category k.
As a preferred embodiment of the present invention, the species classification loss function is:

L_species = -(1/N) Σ_{i=1}^{N} Σ_{k=1}^{K_s} [ (1−ε)·δ(k = y_i) + ε/K_s ] · log p_k

wherein K_s is the total number of crop species categories, y_i is the true species category of the current sample, N is the number of samples participating in training, ε is the label smoothing factor, p_{y_i} is the predicted probability of the true species category of the current sample, and p_k is the predicted probability of category k.
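Both classification losses share the same label-smoothed cross-entropy form; the numpy sketch below is a minimal illustration of that form (the ε/K smoothing convention is one common choice, assumed here), usable for either the disease head (K_d classes) or the species head (K_s classes):

```python
import numpy as np

def smoothed_cross_entropy(logits, targets, eps=0.1):
    """Cross entropy with label smoothing: the one-hot target is replaced
    by (1-eps) on the true class plus a uniform eps/K over all K classes."""
    z = logits - logits.max(axis=1, keepdims=True)   # numerically stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    n, k = logits.shape
    q = np.full((n, k), eps / k)                     # smoothed target distribution
    q[np.arange(n), targets] += 1.0 - eps
    return float(-(q * np.log(p)).sum(axis=1).mean())

# Toy batch: two samples, three classes (hypothetical values).
logits = np.array([[4.0, 1.0, 0.0], [0.5, 2.5, 0.0]])
targets = np.array([0, 1])
loss = smoothed_cross_entropy(logits, targets, eps=0.1)
print(round(loss, 4))
```

With eps=0 the function reduces to the ordinary cross entropy; smoothing keeps confident logits from growing without bound during training.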
As a preferred embodiment of the present invention, the preprocessing of the raw data of the crop image in step S1 specifically includes the following steps:
k1, carrying out image random background mixing on the original crop image data to obtain a mixed image;
and K2, carrying out random scaling and clipping on the mixed image to obtain image data for training the model.
As a preferred embodiment of the present invention, step K1 specifically includes the following steps:
k11, carrying out image segmentation on the original crop image data to obtain a segmentation map containing the leaves;
k12, removing the background to obtain a binary mask only containing the leaf;
k13, mixing the binary mask only containing the leaf in different background pictures to obtain a plurality of mixed images.
Compared with the prior art, the invention has the beneficial effects that:
1. The method improves three aspects: image data processing, network structure design and loss function design, fully exploiting the potential and generalization ability of convolutional neural networks for field crop disease and pest identification and ultimately improving the model's identification performance in field environments. Given an image of crop leaves in the field, whether the crop is diseased and the type of disease can be identified from the visual characteristics of the leaves, assisting diagnosis and thus accurate treatment and prevention. Compared with traditional expert diagnosis and existing CNN-based automatic identification methods, the advantages are: a. pest and disease categories can be identified accurately without managers needing expert-level field knowledge, improving diagnostic efficiency; b. the identification is more accurate and less time-consuming; c. no additional time or space complexity is added at inference.
2. In the method, the CNN network for automatic identification adds a channel orthogonal constraint that considers the pairwise relationships between feature channels, so that each channel (i.e., each feature map of M_4) expresses features as distinct as possible, and it also adds a species classification constraint. These two constraints address the low identification accuracy caused by symptoms being too similar between different disease categories: similar symptoms between different diseases of the same species, and between different diseases of different species caused by the same pathogen. The method thus better distinguishes easily confused disease categories and greatly improves identification accuracy.
3. To address the mismatch between the plain backgrounds of collected crop images and the variable backgrounds of field crops, the images used for identification are enhanced by random background mixing and random scaling, so the method adapts to different field background changes and further strengthens the robustness of the automatic crop disease and insect pest identification algorithm.
Description of the drawings:
FIG. 1 is a sample image of a leaf of a crop suffering from a disease and pest;
FIG. 2 is a diagram of image preprocessing for pest crops in example 1;
FIG. 3 is a diagram of the overall network structure of the method for automatically identifying crop pests and diseases adapted to the field in example 1;
FIG. 4 is a diagram showing the channel orthogonal constraint loss function in example 1;
fig. 5 is a flow chart of network training and testing in the field-adapted automatic crop pest identification method in embodiment 1.
Detailed Description
The present invention will be described in further detail with reference to test examples and specific embodiments. It should be understood that the scope of the above-described subject matter is not limited to the following examples, and any techniques implemented based on the disclosure of the present invention are within the scope of the present invention.
Example 1
An automatic crop disease and insect pest identification method adapted to the field comprises the following steps:
s1, acquiring crop image original data and preprocessing the crop image original data;
S2, inputting the preprocessed crop image data into the improved automatic crop disease and insect pest identification model, and predicting the disease or pest category corresponding to the image data.
The network architecture of the improved model is as follows: two branches, a channel orthogonal constraint and a species classification constraint, are added to the backbone network of a convolutional neural network (CNN). The channel orthogonal constraint is added on the last-layer feature M_4 output by the backbone network, and the species classification constraint is added on one of the features M_1, M_2, M_3 or M_4 output by the backbone network.
In step S1, the raw crop image data is preprocessed. Since common data augmentations can significantly improve the generalization ability of CNN networks, basic augmentations such as random horizontal flipping, random rotation, random graying, and random changes in brightness, contrast, saturation and hue are also used in the training phase. Meanwhile, to handle background variation and the variable scale and position of symptoms, two strategies are adopted: random background mixing and random scaling and cropping of images. Preprocessing the raw crop image data specifically comprises the following steps:
k1, carrying out image random background mixing on the original crop image data to obtain a mixed image;
and K2, carrying out random scaling and clipping on the mixed image to obtain image data for training the model.
Preferably, the image random background mixing of step K1 is illustrated in fig. 2, which also shows random scaling and cropping. To address the mismatch between the plain backgrounds of collected crop images and the variable backgrounds of field crops, the method mainly comprises the following steps:
k11, carrying out image segmentation on the original crop image data to obtain a segmentation map containing the leaves;
k12, removing the background in the segmentation map containing the leaf to obtain a binary mask only containing the leaf;
k13, mixing the binary mask only containing the leaf in different background pictures to obtain a plurality of mixed images.
For the image segmentation step, a common algorithm such as maximum between-class variance (Otsu) thresholding or the watershed algorithm may be used. The background is then removed to obtain a binary mask containing only the leaf, as follows: the segmented, background-free image is transformed into the HSV color space, and a binary mask of the leaf (value 0 for the leaf, 255 for the background) is obtained in the new space by thresholding, where the threshold may be chosen from [0,0,0] to [180,255,23]. A morphological opening (erosion followed by dilation) removes white dots inside the leaf region of the mask, so that the original disease features of the image are not damaged during mixing. Finally, the segmented leaf is pasted onto different background images by means of the mask to obtain mixed images.
This process can be integrated into the network training phase as an online data enhancement method, randomly mixing a background into each picture entering the network. Experiments suggest that, with all other conditions identical, the more complex the selected background picture, the higher the identification accuracy.
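The mask-based pasting at the heart of steps K11-K13 can be sketched with plain numpy; the leaf, mask and backgrounds below are hypothetical stand-ins (a real pipeline would obtain them from segmentation and HSV thresholding as described above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: a segmented 64x64 RGB "leaf" image, a boolean mask (True where
# the leaf is), and a pool of field background images.
leaf = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
yy, xx = np.mgrid[:64, :64]
mask = (yy - 32) ** 2 + (xx - 32) ** 2 < 24 ** 2   # a disc-shaped "leaf"
backgrounds = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(3)]

def mix_background(leaf, mask, background):
    """K13: paste the masked leaf region onto a background picture."""
    mixed = background.copy()
    mixed[mask] = leaf[mask]          # leaf pixels replace background pixels
    return mixed

# One mixed image per background, as in step K13.
mixed_images = [mix_background(leaf, mask, bg) for bg in backgrounds]
print(len(mixed_images), mixed_images[0].shape)
```

As an online augmentation, the background would be drawn at random for each training sample rather than enumerated as here.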
To address the unfixed positions and varying scales of symptoms of the same disease, the method shrinks or enlarges the original image to different scales and then randomly crops a diseased region.
As shown in fig. 2, in the training phase the input image is randomly scaled to 0.5-2.0 times its original size and pasted onto a black background of fixed size (e.g., 2 times the original size: 512 × 512), yielding leaf images of different sizes. A rectangular box whose side length is 0.5-1.5 times the original image size is then used to randomly crop a region from the rescaled leaf image; the crop is scaled to the network input size (e.g., 224 × 224) and finally fed to the CNN to extract disease features at different scales and positions. Like random background mixing, this method is an online data enhancement, and both can be combined with the basic augmentation methods.
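The scale-then-crop augmentation above can be sketched as follows; sizes, the nearest-neighbour resize, and the top-left paste position are simplifying assumptions (a faithful implementation would paste at a random offset and resize the crop to the network input size):

```python
import numpy as np

rng = np.random.default_rng(0)

IN_SIZE = 256                 # assumed original image size
CANVAS = 2 * IN_SIZE          # fixed-size black canvas, 2x the original

def random_scale_crop(img, rng):
    """Randomly scale by 0.5-2.0x (nearest-neighbour), paste onto a black
    canvas, then crop a region 0.5-1.5x the original size at a random spot."""
    s = rng.uniform(0.5, 2.0)
    new = max(1, int(img.shape[0] * s))
    idx = (np.arange(new) / s).astype(int).clip(0, img.shape[0] - 1)
    scaled = img[idx][:, idx]                     # nearest-neighbour resize
    canvas = np.zeros((CANVAS, CANVAS, 3), dtype=img.dtype)
    n = min(new, CANVAS)
    canvas[:n, :n] = scaled[:n, :n]               # paste (top-left for brevity)
    crop = min(int(img.shape[0] * rng.uniform(0.5, 1.5)), CANVAS)
    y = rng.integers(0, CANVAS - crop + 1)
    x = rng.integers(0, CANVAS - crop + 1)
    return canvas[y:y + crop, x:x + crop]

img = rng.integers(0, 256, (IN_SIZE, IN_SIZE, 3), dtype=np.uint8)
patch = random_scale_crop(img, rng)
print(patch.shape)
```

The returned patch would then be resized to 224 × 224 before entering the network.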
In step S2, the method of the invention improves the network structure of the algorithm: two branches are added when training an ordinary CNN. The network structure is shown in fig. 3: preprocessed image data are input, and a series of features are output by the Backbone network. Using these features and introducing supervision signals for both disease categories and crop species makes the output recognition result more accurate. Three loss functions are designed for the network in the training stage; the specific technical scheme is as follows:
as a specific embodiment, a ResNet series network structure (e.g., ResNet50) is selected as the backhaul network in the CNN network, and the ResNet network structure mainly includes four residual blocks. The backhaul network sets the step size of the last convolutional layer to 1, which prompts the network to learn more detailed features.
After a preprocessed crop leaf image is input into the Backbone network, four features [M_1, M_2, M_3, M_4] are output in succession by the four residual learning blocks; each feature consists of c feature maps of size h × w, i.e. M_i ∈ R^{c×h×w}, i = 1, 2, 3, 4. Among the four features extracted by the Backbone, M_1 has the largest feature maps; as the feature index increases, the feature maps become smaller and the semantic information learned by the network becomes richer.
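For concreteness, the four feature shapes can be sketched with ResNet50-style channel counts for a 224 × 224 input; these exact numbers are an assumption for illustration (random tensors stand in for real activations), with the last stage keeping stride 1 as described, so M_4 shares M_3's spatial size:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ResNet50-style shapes: channels double while spatial size
# halves from M1 to M3; M4 keeps 14x14 because its stride is set to 1.
shapes = {
    "M1": (256, 56, 56),
    "M2": (512, 28, 28),
    "M3": (1024, 14, 14),
    "M4": (2048, 14, 14),
}
features = {name: rng.standard_normal(shape) for name, shape in shapes.items()}

# Each feature M_i consists of c feature maps of size h x w.
for name, f in features.items():
    c, h, w = f.shape
    print(name, c, h, w)
```

The species branch would tap one of these features (preferably M_3) while the disease head always uses M_4.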
Since the method mainly realizes automatic identification of diseases and pests, the fourth feature M_4 is passed through a Global Average Pooling (GAP) operation to obtain a feature vector F_disease for disease classification. A fully connected layer (FC) then maps F_disease to a logits_d vector whose dimension equals the number of disease categories, logits_d ∈ R^{K_d}, where K_d is the number of crop disease categories. The index of the maximum value in the logits_d vector is the CNN's predicted category for the input image.
In crop disease and pest identification, symptoms of different disease categories can be very similar: similar symptoms appear between different diseases of the same species (e.g., tomato leaves with target spot and with scab) and between different diseases of different species caused by the same pathogen (e.g., tomato and potato leaves both with late blight). To address this difficulty, two branches are added when training an ordinary CNN: a channel orthogonal constraint branch and a species classification constraint branch. The channel orthogonal constraint branch is added on the last-layer feature M_4 and compensates for the channel information lost by the GAP operation: when GAP is applied to M_4, only the spatial information inside each feature map is used, and the interrelationship between the 2048 feature maps is omitted. This branch accounts for the pairwise interrelationships between feature channels, so that each channel (i.e., each feature map of M_4) can express features as distinct as possible.
The species classification constraint branch may be added to any of the features M1~M4 before the network classifies the disease class. For diseases with similar symptoms across species, introducing a supervisory signal for species classification prior to disease classification lets the network first distinguish the species and then distinguish the diseases, which greatly alleviates the problem. The branch may be attached to any of the features M1~M4, as indicated by dashed lines in fig. 3. Preferably, the species classification constraint branch is added on feature M3: on this branch, M3 undergoes GAP and FC operations, where GAP yields a feature vector F_species for crop species classification and the fully connected layer FC maps F_species to a logits_s vector whose dimension K_s is the number of crop species categories. The index corresponding to the maximum value in the logits_s vector is the CNN's predicted species of the crop in the input image.
Design of loss function
In the training stage of the network, the method uses the softmax function to normalize the network's output logits vector Z = [z_1, …, z_k, …, z_K] into probability values p_k, and then uses the cross entropy of these probabilities as the final classification loss function. To prevent large logits values from growing ever larger during normalization, cross-entropy loss functions with label smoothing are used as the loss functions for both disease classification and species classification.
The softmax function is:

p_k = exp(z_k) / Σ_{j=1}^{K} exp(z_j)

where K is the total number of classes, z_k is the k-th value in the logits vector, and p_k is the predicted probability of class k.
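The softmax formula above can be implemented in a few lines; subtracting the maximum logit before exponentiating is a standard numerical-stability trick (an implementation detail not stated in the patent) that leaves the result unchanged:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax: subtract max(z) before exponentiating
    so large logits do not overflow; the probabilities are unchanged."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

p = softmax(np.array([1.0, 2.0, 3.0]))
```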
The common cross-entropy loss function is:

L = -(1/N) Σ_{i=1}^{N} Σ_{k=1}^{K} δ(k = y_i) · log p_k
where N is the number of samples participating in training, y_i is the true class label of sample i, and δ(k = y_i) indicates whether the class matches the label: δ(k = y_i) is 1 when k = y_i and 0 otherwise.
The cross-entropy loss function with label smoothing is expressed as:

L = -(1/N) Σ_{i=1}^{N} [ (1 - ε) · log p_{y_i} + Σ_{k ≠ y_i} (ε / (K - 1)) · log p_k ]
Here ε is the label smoothing factor: when the predicted class equals the true class, i.e. k = y_i, the weight 1 - ε is used; when k ≠ y_i, the weight ε / (K - 1) is used. The method adopts this cross-entropy loss with label smoothing in the design of the loss functions for both disease classification and species classification.
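The label-smoothed cross entropy can be sketched for a single sample as follows (a minimal NumPy illustration; in the patent the loss is additionally averaged over the N samples of a batch):

```python
import numpy as np

def label_smoothed_ce(probs, y, eps, K):
    """Cross entropy with label smoothing for one sample:
    weight 1 - eps on the true class y, eps/(K - 1) on every other class."""
    loss = -(1.0 - eps) * np.log(probs[y])
    for k in range(K):
        if k != y:
            loss -= eps / (K - 1) * np.log(probs[k])
    return loss

# With eps = 0 this reduces to the ordinary cross entropy -log p_y.
probs = np.array([0.7, 0.2, 0.1])
plain = label_smoothed_ce(probs, 0, 0.0, 3)
smoothed = label_smoothed_ce(probs, 0, 0.1, 3)
```

Smoothing spreads a small amount of target mass onto the wrong classes, so a confident correct prediction incurs a slightly larger loss, which discourages the logits from growing without bound.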
The disease classification loss function is:

L_disease = -(1/N) Σ_{i=1}^{N} [ (1 - ε) · log p_{y_i} + Σ_{k ≠ y_i} (ε / (K_d - 1)) · log p_k ]
where K_d is the total number of crop disease categories and y_i is the true disease category of the current sample.
The species classification loss function is:

L_species = -(1/N) Σ_{i=1}^{N} [ (1 - ε) · log p_{y_i} + Σ_{k ≠ y_i} (ε / (K_s - 1)) · log p_k ]
where K_s is the total number of crop species categories and y_i is the true species category of the current sample.
The channel orthogonality constraint forces the features in different channels to be as orthogonal as possible to the direction vectors of the other channels, prompting the network to learn more discriminative features. As shown in FIG. 4, for the feature M4 ∈ R^(2048×7×7) extracted by the CNN, the 7 × 7 feature map of each of the 2048 channels is flattened into a feature vector, giving a matrix f; multiplying f by its transpose f^T yields a 2048 × 2048 square matrix M_channel. The element in row i, column j of this square matrix is the vector inner product of the feature maps of the i-th and j-th channels of M4. By the cosine formula cos(θ_{i,j}) = ⟨f_i, f_j⟩ / (‖f_i‖ · ‖f_j‖), after unitizing the channel vectors the square matrix M_channel consists of the cosine values cos(θ_{i,j}) of the angles between different channels' feature vectors. To make the angles between different features as close to orthogonal as possible, these cosine values are minimized.
The channel orthogonality constraint loss function is therefore:

L_channel = Σ_{i<j} M_channel(i, j) + λ · max_{i<j} M_channel(i, j)
where M_channel is the symmetric square matrix of cosine values; the first term drives the sum of the upper-triangular elements of the matrix to be as small as possible, the second term drives the largest element to be as small as possible, and λ is the balance coefficient between the two.
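The channel orthogonality loss can be sketched as follows. This is a NumPy illustration under assumptions: the toy channel count replaces 2048, and the penalty is applied to the raw upper-triangular cosine values as in the formula above (whether the patent additionally takes absolute values is not stated).

```python
import numpy as np

def channel_orthogonality_loss(M4, lam):
    """Flatten each channel's feature map into a vector, unitize the rows,
    form the cosine matrix M_channel = f @ f.T, then penalize the sum of its
    strictly upper-triangular elements plus lam times the largest one."""
    c = M4.shape[0]
    f = M4.reshape(c, -1)                             # (c, h*w): one row per channel
    f = f / np.linalg.norm(f, axis=1, keepdims=True)  # unitize -> rows are directions
    M_channel = f @ f.T                               # entries are cos(theta_ij)
    upper = M_channel[np.triu_indices(c, k=1)]        # strictly upper triangle
    return upper.sum() + lam * upper.max()

# Two orthogonal channels -> cosine 0 -> zero loss.
M4 = np.zeros((2, 1, 2))
M4[0, 0, 0] = 1.0   # channel 0 points along (1, 0)
M4[1, 0, 1] = 1.0   # channel 1 points along (0, 1)
loss = channel_orthogonality_loss(M4, lam=0.5)
```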
Finally, the three loss functions are weighted and summed to obtain the loss function of the whole network, where α and β are balance coefficients:

L_total = L_disease + α · L_species + β · L_channel
After the network structure and the loss function are designed, training and testing can be carried out. The implementation flow chart of network training and testing in the field-adapted crop disease and insect pest automatic identification method is shown in fig. 5. The method specifically comprises the following steps:
the training process of the network comprises the following steps:
① Basic preprocessing and data enhancement operations are performed on the images in each batch: each picture is first scaled to the same size, such as 224 × 224, then randomly flipped horizontally, randomly rotated, randomly grayscaled and randomly color-transformed, and finally normalized per channel.
② The two strategies of random image background mixing and random scaling-and-cropping are applied simultaneously to obtain the processed image data.
③ The preprocessed input image is fed into the backbone network (such as ResNet50), and four features are obtained in succession. L_species may then use any of these four features, for example M3.
④ The last feature M4 output by the network is sent into the two branches respectively; the loss functions corresponding to the two paths are L_disease and L_channel.
⑤ The classification network is trained with the combination of the three loss functions L_disease, L_species and L_channel.
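The per-batch preprocessing described above (resizing, random flipping, per-channel normalization) can be sketched as follows. This is a minimal NumPy illustration; the ImageNet mean/std values are an assumption commonly paired with ResNet50 backbones, not values stated in the patent.

```python
import numpy as np

def normalize_per_channel(img, mean, std):
    """Per-channel normalization of an (h, w, 3) image: subtract each
    channel's mean and divide by its std."""
    return (img - np.asarray(mean)) / np.asarray(std)

def random_horizontal_flip(img, rng, p=0.5):
    """Randomly mirror the image left-right with probability p."""
    return img[:, ::-1, :] if rng.random() < p else img

rng = np.random.default_rng(0)
img = rng.random((224, 224, 3))   # picture already scaled to 224 x 224
img = random_horizontal_flip(img, rng)
img = normalize_per_channel(img, mean=[0.485, 0.456, 0.406],
                            std=[0.229, 0.224, 0.225])
```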
The prediction process of the network:
① Basic preprocessing operations of scaling and normalization are performed on the leaf image taken in the field.
② The preprocessed input image is fed into the trained classification network. The output feature M4 of the network is then passed through the GAP layer and the FC layer to obtain the logits vector Z = [z_1, …, z_k, …, z_K] for disease classification; the channel orthogonality and species classification constraint branches and the loss functions are no longer required.
③ argmax(Z) is computed to predict the disease and insect pest type of the input crop leaf image.
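The final prediction step is a plain argmax over the logits. A small sketch (toy logits values are illustrative only); note that since softmax is strictly monotonic, argmax of the probabilities equals argmax of the logits, so the softmax step can be skipped entirely at inference time:

```python
import numpy as np

def predict(Z):
    """Inference: the index of the largest logit is the predicted class.
    Softmax preserves ordering, so it is unnecessary here."""
    return int(np.argmax(Z))

Z = np.array([0.3, 2.1, -0.5, 1.7])
probs = np.exp(Z - Z.max())
probs /= probs.sum()
same = predict(Z) == int(np.argmax(probs))  # softmax does not change the winner
```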
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included therein.
Claims (3)
1. An automatic identification method for crop diseases and insect pests adapting to fields is characterized by comprising the following steps:
s1, acquiring crop image original data and preprocessing the crop image original data;
s2, inputting the preprocessed crop image original data into an improved crop disease and insect pest automatic identification model, and predicting the disease and insect pest type corresponding to the crop image original data;
the network architecture of the improved automatic crop disease and insect pest identification model is as follows: adding two branches of a channel orthogonal constraint and a species classification constraint on a backbone network of a convolutional neural network, wherein the channel orthogonal constraint is added on the last feature of the backbone network output, and the species classification constraint is added on any one feature of the backbone network output;
the processing procedure of the backbone network of the convolutional neural network specifically comprises the following steps:
s21, inputting the preprocessed crop image original data into a backbone network of a convolutional neural network, and outputting multilayer features, wherein each feature is composed of a plurality of feature maps;
s22, performing a global average pooling operation on the last layer of features output by the backbone network to obtain a feature vector F_disease;
s23, using a fully connected layer to map the feature vector F_disease to a logits_d vector whose dimension is the number of categories; in the logits_d vector, the index corresponding to the maximum value is the predicted disease and insect pest category corresponding to the crop image data;
the processing procedure of the channel orthogonal constraint specifically includes:
a1, obtaining the channels of the last-layer feature M4 output by the backbone network of the convolutional neural network;
a2, converting the feature map of each channel of the feature M4 into a feature vector to obtain a matrix f;
a3, transposing the matrix f to obtain the matrix f^T, and multiplying the matrix f by the matrix f^T to obtain a square matrix M_channel, the M_channel embodying the pairwise relationship between the feature channels;
the processing procedure of the species classification constraint specifically includes:
b1, selecting one layer of characteristics from the multiple layers of characteristics as selected characteristics;
b2, performing a global average pooling operation on the selected features to obtain a feature vector F_species;
b3, using a fully connected layer to map the feature vector F_species to a logits_s vector whose dimension is the number of categories; in the logits_s vector, the index corresponding to the maximum value is the species type of the crop corresponding to the crop image data;
the improved automatic crop pest and disease identification model adopts a comprehensive loss function in a network training stage, wherein the comprehensive loss function is the weighted sum of a disease classification loss function, a species classification loss function and a channel orthogonal constraint loss function;
the channel orthogonal constraint loss function is:

L_channel = Σ_{i<j} M_channel(i, j) + λ · max_{i<j} M_channel(i, j)

wherein M_channel is a symmetric square matrix of cosine values, the first term represents the sum of the upper-triangular elements of the matrix, the second term represents the largest element in the matrix, and λ is the balance coefficient between the two;
the disease classification loss function is:

L_disease = -(1/N) Σ_{i=1}^{N} [ (1 - ε) · log p_{y_i} + Σ_{k ≠ y_i} (ε / (K_d - 1)) · log p_k ]

wherein K_d is the total number of categories of crop diseases, y_i is the true disease category corresponding to the current sample, N is the number of samples participating in training, ε represents the label smoothing factor, p_{y_i} represents the predicted probability of the true disease category corresponding to the current sample, and p_k represents the predicted probability of class k;
the species classification loss function is:

L_species = -(1/N) Σ_{i=1}^{N} [ (1 - ε) · log p_{y_i} + Σ_{k ≠ y_i} (ε / (K_s - 1)) · log p_k ]

wherein K_s is the total number of categories of crop species, y_i is the true species category corresponding to the current sample, N is the number of samples participating in training, ε represents the label smoothing factor, p_{y_i} represents the predicted probability of the true species category corresponding to the current sample, and p_k represents the predicted probability of class k.
2. The method for automatically identifying crop pests and diseases adapting to fields as claimed in claim 1, wherein in step S1, the preprocessing of the raw data of the crop image specifically comprises the following steps:
k1, carrying out image random background mixing on the crop image original data to obtain a mixed image;
and K2, carrying out random scaling and clipping on the mixed image to obtain image data for training the model.
3. The method for automatically identifying crop pests and diseases adapting to fields as claimed in claim 2, wherein the step K1 specifically comprises the following steps:
k11, carrying out image segmentation on the original crop image data to obtain a segmentation map containing blades;
k12, removing the background to obtain a binary mask only containing the leaf;
and K13, mixing the leaf extracted by the binary mask into different background pictures to obtain a plurality of mixed images.
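The background-mixing steps of claim 3 can be sketched as a simple mask composite. This is a NumPy illustration under assumptions: the toy image sizes and pixel values are illustrative, and the segmentation producing the mask (steps K11–K12) is taken as given.

```python
import numpy as np

def mix_backgrounds(leaf_img, leaf_mask, backgrounds):
    """Composite the segmented leaf onto each background using the binary
    mask: leaf pixels come from leaf_img, the rest from the background.
    leaf_img: (h, w, 3); leaf_mask: (h, w) with 1 on leaf pixels;
    backgrounds: list of (h, w, 3) images of the same size."""
    m = leaf_mask[..., None]  # broadcast the mask over the 3 color channels
    return [m * leaf_img + (1 - m) * bg for bg in backgrounds]

# Toy example: a 2x2 "leaf" occupying the left column, two backgrounds.
leaf = np.ones((2, 2, 3))
mask = np.array([[1, 0], [1, 0]])
bgs = [np.zeros((2, 2, 3)), np.full((2, 2, 3), 0.5)]
mixed = mix_backgrounds(leaf, mask, bgs)
```

One segmented leaf thus yields as many training images as there are background pictures, which is how the mixing strategy enlarges the field-condition training set.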
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011444596.6A CN112560644B (en) | 2020-12-11 | 2020-12-11 | Crop disease and insect pest automatic identification method suitable for field |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112560644A CN112560644A (en) | 2021-03-26 |
CN112560644B true CN112560644B (en) | 2021-09-28 |
Family
ID=75061866
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011444596.6A Active CN112560644B (en) | 2020-12-11 | 2020-12-11 | Crop disease and insect pest automatic identification method suitable for field |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112560644B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109800629A (en) * | 2018-12-05 | 2019-05-24 | 天津大学 | A kind of Remote Sensing Target detection method based on convolutional neural networks |
CN110009043A (en) * | 2019-04-09 | 2019-07-12 | 广东省智能制造研究所 | A kind of pest and disease damage detection method based on depth convolutional neural networks |
CN111369540A (en) * | 2020-03-06 | 2020-07-03 | 西安电子科技大学 | Plant leaf disease identification method based on mask convolutional neural network |
EP3676789A1 (en) * | 2017-08-28 | 2020-07-08 | The Climate Corporation | Crop disease recognition and yield estimation |
CN111563431A (en) * | 2020-04-24 | 2020-08-21 | 空间信息产业发展股份有限公司 | Plant leaf disease and insect pest identification method based on improved convolutional neural network |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108304844A (en) * | 2018-01-30 | 2018-07-20 | 四川大学 | Agricultural pest recognition methods based on deep learning binaryzation convolutional neural networks |
CN111507319A (en) * | 2020-07-01 | 2020-08-07 | 南京信息工程大学 | Crop disease identification method based on deep fusion convolution network model |
Non-Patent Citations (3)
Title |
---|
CNN based on Overlapping Pooling Method and Multi-layered Learning with SVM & KNN for American Cotton Leaf Disease Recognition; Kapil Prashar et al; 2019 International Conference on Automation, Computational and Technology Management (ICACTM); 20190729; 330-333 *
基于CNN和迁移学习的农作物病害识别方法研究 (Research on crop disease identification methods based on CNN and transfer learning); Li Miao et al; Smart Agriculture (《智慧农业》); 20190731; Vol. 1, No. 3; 46-55 *
多卷积神经网络模型融合的农作物病害图像识别 (Crop disease image recognition by fusion of multiple convolutional neural network models); Gong An et al; Computer Technology and Development (《计算机技术与发展》); 20200831; Vol. 30, No. 8; 134-139 *
Also Published As
Publication number | Publication date |
---|---|
CN112560644A (en) | 2021-03-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Chen et al. | Identifying plant diseases using deep transfer learning and enhanced lightweight network | |
Arivazhagan et al. | Detection of unhealthy region of plant leaves and classification of plant leaf diseases using texture features | |
Chen et al. | Combining discriminant analysis and neural networks for corn variety identification | |
Pan et al. | Intelligent diagnosis of northern corn leaf blight with deep learning model | |
Veerendra et al. | Detecting plant diseases, quantifying and classifying digital image processing techniques | |
Ferdouse Ahmed Foysal et al. | A novel approach for tomato diseases classification based on deep convolutional neural networks | |
Lin et al. | The pest and disease identification in the growth of sweet peppers using faster R-CNN and mask R-CNN | |
Yadav et al. | AFD-Net: Apple Foliar Disease multi classification using deep learning on plant pathology dataset | |
Rathore et al. | Automatic rice plant disease recognition and identification using convolutional neural network | |
Adem et al. | A sugar beet leaf disease classification method based on image processing and deep learning | |
Zeng et al. | Identification of maize leaf diseases by using the SKPSNet-50 convolutional neural network model | |
CN115601602A (en) | Cancer tissue pathology image classification method, system, medium, equipment and terminal | |
CN113221913A (en) | Agriculture and forestry disease and pest fine-grained identification method and device based on Gaussian probability decision-level fusion | |
Rai et al. | Classification of diseased cotton leaves and plants using improved deep convolutional neural network | |
Vignesh et al. | EnC-SVMWEL: ensemble approach using cnn and svm weighted average ensemble learning for sugarcane leaf disease detection | |
Patil et al. | Sensitive crop leaf disease prediction based on computer vision techniques with handcrafted features | |
Ramamoorthy et al. | Reliable and accurate plant leaf disease detection with treatment suggestions using enhanced deep learning techniques | |
CN112560644B (en) | Crop disease and insect pest automatic identification method suitable for field | |
Trivedi et al. | Identify and classify corn leaf diseases using a deep neural network architecture | |
Ahmed et al. | An automated system for early identification of diseases in plant through machine learning | |
Saxena et al. | Disease detection in plant leaves using deep learning models: AlexNet and GoogLeNet | |
Kumar et al. | Application of PSPNET and fuzzy Logic for wheat leaf rust disease and its severity | |
Neware | Paddy plant leaf diseases identification using machine learning approach | |
Hu | A rice pest identification method based on a convolutional neural network and migration learning | |
Sahu et al. | CNN based disease detection in Apple Leaf via transfer learning |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||