CN111860330B - Apple leaf disease identification method based on multi-feature fusion and convolutional neural network - Google Patents


Info

Publication number: CN111860330B
Authority: CN (China)
Prior art keywords: image, layer, apple leaf, neural network, convolutional neural
Legal status: Active
Application number: CN202010705693.XA
Other languages: Chinese (zh)
Other versions: CN111860330A
Inventor: Li Dan (李丹)
Current Assignee: Shaanxi Polytechnic Institute
Original Assignee: Shaanxi Polytechnic Institute
Application filed by Shaanxi Polytechnic Institute
Priority to CN202010705693.XA
Publication of CN111860330A
Application granted
Publication of CN111860330B

Classifications

    • G06V 20/188 — Terrestrial scenes; Vegetation
    • G06F 18/214 — Pattern recognition; Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/24 — Pattern recognition; Classification techniques
    • G06F 18/253 — Pattern recognition; Fusion techniques of extracted features
    • G06N 3/045 — Neural networks; Combinations of networks
    • G06V 10/267 — Image preprocessing; Segmentation of patterns by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V 10/56 — Extraction of image or video features relating to colour
    • G06V 20/68 — Scene-specific elements; Food, e.g. fruit or vegetables

Abstract

The invention discloses an apple leaf disease identification method based on multi-feature fusion and a convolutional neural network. First, the original image is denoised and segmented using the multi-feature fusion method; the segmented images then serve as the original data set for the convolutional neural network, which is expanded by data augmentation. Finally, the network model is trained on the expanded data set and the model weight parameters are optimized by gradient descent. The method requires no manual labeling when identifying apple leaf diseases, accurately recognizes apple leaf disease and pest images against complex backgrounds with an accuracy of 97.05% and a recognition time of 2.7 s, and effectively solves the problem of automatic identification of apple leaf diseases.

Description

Apple leaf disease identification method based on multi-feature fusion and convolutional neural network
Technical Field
The invention relates to the field of apple leaf disease identification methods, and in particular to an apple leaf disease identification method based on multi-feature fusion and a convolutional neural network.
Background
Disease is one of the important factors affecting apple growth; an outbreak on a few fruit trees may infect the whole orchard and seriously affect apple yield and quality. Apple leaves are a high-incidence site of disease, and identifying leaf diseases is a key technology in fruit tree cultivation. Accurate identification of apple leaf diseases provides guidance for disease control during the growth of fruit trees.
Traditional apple leaf disease identification relies mainly on manual inspection, which is time-consuming, labor-intensive, highly subjective, and unsuited to modern agricultural management. With the development of computer vision and pattern recognition, researchers have studied disease recognition with machine vision methods extensively. Ma Juncheng achieved identification of cucumber diseases and insect pests by combining a Markov conditional random field with an SVM classifier based on a radial basis function. Zhang Jianhua et al. took cotton diseases as the research object, first extracting the color and texture features of the images and then combining rough sets with a neural network, obtaining good recognition results on three different cotton diseases. Qin Lifeng et al. reduced the high-dimensional disease features to several low-dimensional subspaces with principal component analysis, trained BP neural networks on those subspaces, and completed the identification of five different cucumber diseases. Tian Kai et al., in identifying eggplant brown spot, extracted color, texture and shape parameters of the lesions and classified them with a Fisher discriminant function, reaching a recognition accuracy above 95%. Wang Xianfeng et al. combined statistics with image processing: five different growth-environment features of the leaf were extracted by attribute reduction, 35 statistical feature vectors of the lesions were extracted by image processing, and the lesion type was identified by the maximum membership criterion, with a recognition rate above 90% on three different cucumber diseases.
Wang Jian et al. extracted the color and texture features of the images from color moments and the gray-level co-occurrence matrix, and built a classifier from a BP neural network optimized by a genetic algorithm, reaching 94.17% accuracy in recognizing tea plant lesions. Niu Chong et al. extracted 8 histogram features, normalized them, and trained an SVM classifier to recognize strawberry diseases and insect pests with high accuracy. These methods rest on traditional machine vision: although their recognition accuracy is high, the recognition pipelines are complex, the models generalize poorly, and they lack universality.
At present, convolutional neural networks are widely applied in fields such as semantic segmentation and target recognition, and many researchers have applied deep convolutional neural networks to plant disease and pest recognition with some success. Yang Jindan et al. proposed a convolutional neural network based on mixed pooling, replacing the maximum pooling layer of the original CNN with a mixed pooling layer, which worked well in identifying powdery mildew on strawberry leaves. Liang Mojie et al. used a CNN to identify rice insect pests; the model accurately separates the target from the background even in complex scenes, and the average recognition rate exceeded 96.78% in experiments on 5 different rice pests. Ma Juncheng et al. proposed a greenhouse cucumber disease recognition system based on a CNN: composite color features are first extracted from the collected data, and the extracted images are then fed into a convolutional neural network for training; the system reached 97.29% accuracy in recognizing cucumber diseases. Fu Longsheng et al. proposed a multi-cluster kiwi fruit image recognition method based on the VGG-16 convolutional neural network for field images, reaching 94.78% accuracy in recognition tests of kiwi fruit against complex backgrounds. Sun Jun et al., identifying wheat seedlings and weeds, used a convolutional neural network combining dilated convolution with global pooling; the model exceeded 90% recognition accuracy after only 4 iterations, greatly shortening training time.
Although these studies achieve high recognition accuracy for diseases and insect pests, constructing their model data sets requires a great deal of time for manual labeling.
Disclosure of Invention
In view of these problems, the invention aims to provide an apple leaf disease identification method based on multi-feature fusion and a convolutional neural network that requires no manual labeling, achieves high identification accuracy, and has a short identification time.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
The apple leaf disease identification method based on multi-feature fusion and a convolutional neural network is characterized by comprising the following steps:
s1: dividing and preprocessing the apple leaf image by utilizing a multi-feature fusion method;
s2: constructing an apple leaf disease recognition convolutional neural network model by using the apple leaf image data after the segmentation pretreatment;
s3: training a convolutional neural network model;
s4: and identifying the disease category of the apple leaf according to the final output result of the convolutional neural network model.
Further, the specific step of image segmentation preprocessing in step S1 includes:
S11: acquiring the RGB features of the original image and extracting the ultra-green feature using the ultra-green feature formula; the formula is I_ExB = 2·I_G − I_B − I_R, where I_G, I_B and I_R are the 3 color components of the RGB color space;
s12: converting an original image from an RGB color space to an HSV color space and a CIELAB color space respectively, and extracting H component color features and L component color features of the converted image respectively;
S13: performing a two-dimensional convolution operation on the three color components (the ultra-green component, the H component and the L component) with a Gaussian differential filter and a circular mean filter respectively, and combining the filtered color features into a fused feature; the fusion formula is MMF = (Gf ∗ I_H) + (Af ∗ I_L) + I_ExB, where Gf is the Gaussian filter, Af is the mean filter, ∗ denotes convolution, and MMF is the fused feature;
s14: and carrying out image segmentation on the fused characteristic images by using a maximum inter-class variance method, optimizing a segmentation result by morphological processing, and carrying out mask operation on the original images and the optimized images to obtain a final segmentation result.
Further, the specific operation steps of step S14 include,
S141: uniformly scaling the multi-feature-fused apple leaf images from step S13 into three-channel RGB color images of size 125×125×3;
S142: dividing the three-channel RGB color image equally into upper-left, upper-right, lower-left and lower-right regions, and cutting from the input image a central region that is concentric with it and spans 1/2 of its extent in both the horizontal and vertical directions, thereby obtaining 5 sub-images each 1/4 the size of the original image;
S143: interpolating the 5 sub-images with bilinear interpolation, enlarging each in equal proportion by a factor of 4;
S144: converting the segmented gray image into a color image, i.e. performing the mask operation.
Further, the apple leaf lesion recognition convolutional neural network model in step S2 includes 13 convolutional layers, 4 pooling layers, and 1 global pooling layer.
Further, the specific operation of performing the network model training on the convolutional neural network model in step S3 includes:
S31: operating on the segmented image from step S1 with the convolution kernels of the different convolutional layers to obtain different features of the input image, and passing the obtained features through an activation function to obtain the output feature map; the feature map formula is x_l = f(W_l·x_{l−1} + b_l), where x_{l−1} is the output of the (l−1)-th hidden layer, x_l is the output of the convolutional layer in the l-th hidden layer, x_0 is the input image at the input layer, W_l is the weight matrix of the l-th hidden layer, b_l is the bias of the l-th hidden layer, and the activation function is f(x) = max(0, x);
S32: reducing the dimension of the feature map output by the convolutional layer through a pooling-layer operation; the pooling operation uses maximum pooling, with the formula x_l = f(w·down(x_{l−1}) + b_s), where l is the layer index, down(·) is the downsampling operation, w is the pooling weight, and b_s is an additive bias;
s33: the global pooling layer performs weighted summation on the characteristics of the characteristic map, and integrates local information with category differentiation in the convolution layer and the pooling layer;
S34: combining the feature maps obtained by each layer, passing them to the loss layer, and fusing the detection results of all layers with non-maximum suppression; the parameters of the loss layer are computed by the loss function L(z, c, l, g) = (1/N)·(L_conf(z, c) + α·L_loc(z, l, g)), where N is the number of matched default boxes, L_loc is the position loss, L_conf is the confidence loss, z is the matching result between the default boxes and the different categories, c is the confidence of the predicted target, l is the position information of the predicted object frame, g is the position information of the ground-truth frame, and α is the weighting parameter between the confidence loss and the position loss;
s35: and optimizing the weight parameters of the network model by using a gradient descent method, and repeating the steps S31-S35 until the optimal value of the network weight is obtained.
Further, the specific operation steps of optimizing the weight parameters of the network model using the gradient descent method in step S35 include:
s361: randomly initializing weight parameters of a network model;
s362: calculating an error between the output value calculated by the model and the true value;
s363: carrying out weight adjustment on each neuron generating errors, and reducing error values;
s364: and repeating the iteration until the optimal value of the network weight is obtained.
Further, in step S4, the specific operation of identifying the disease category of the apple leaf according to the final output of the convolutional neural network model comprises: classifying the feature map processed by the global pooling layer with a Softmax classifier; the recognition formula is y_i = exp(w_i·x) / Σ_j exp(w_j·x), where x is the feature vector of the fully connected layer and w_i is the weight vector connecting its neurons to the i-th output neuron of the Softmax classifier.
The beneficial effects of the invention are as follows:
1. Disease image segmentation on the training set is performed by multi-feature fusion, so no manual labeling is needed. After the input image is equally divided, the features of each sub-region are extracted, fully capturing the detail features of the lesions. Replacing the fully connected layer with a convolutional layer increases the network depth and improves the recognition rate of the network model.
2. The method accurately identifies apple leaf disease and pest images against complex backgrounds; the network model has a simple structure and strong portability, providing a theoretical basis for the development of intelligent agricultural equipment.
3. Compared with traditional recognition algorithms, the method performs well in both recognition rate and recognition time; if the recognition accuracy needs to be improved further, the training set can be expanded or the network depth increased.
Drawings
FIG. 1 is an image of apple leaf rust in an embodiment of the present invention;
FIG. 2 is an image of apple leaf scab in an embodiment of the invention;
FIG. 3 is an image of apple leaf defoliation in an embodiment of the present invention;
FIG. 4 is an image of apple leaf virus disease in an embodiment of the present invention;
FIG. 5 is an image of apple leaf silver leaf disease in an embodiment of the invention;
FIG. 6 is an image of powdery mildew of apple leaf in an embodiment of the invention;
FIG. 7 is a diagram of a network model architecture for apple leaf disease image recognition according to the present invention;
FIG. 8 is a flow chart of the apple leaf disease area detection module of the present invention;
Detailed Description
In order to enable those skilled in the art to better understand the technical solution of the present invention, the technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Examples:
Apple leaf images were acquired at the fruit-tree test base of the Baoji academy of sciences in Shaanxi Province, using a Canon digital camera with 22 million effective pixels. Acquisition ran from 09:00 to 16:00 and covered different illumination conditions. The collected images include 6 common apple diseases: leaf rust, scab, leaf-drop disease, virus disease, silver leaf disease, and powdery mildew; 1200 images were collected in total, and a selection is shown in FIGS. 1-6.
The leaf images were cropped with Python 3.5, uniformly resized to 125×125, and stored in a single format. To enhance the network's learning ability and prevent overfitting, the image data set was expanded: each disease image was augmented 10-fold by rotation, translation, scaling, color jittering and similar transformations, and the expanded data set was divided into training, validation and test sets at a ratio of 3:1:1.
Specifically, the collected original data contain 865 apple disease leaf images; after the 10-fold expansion the data set comprises 8650 images, divided 3:1:1 into a training set of 5190 images, a validation set of 1730 images, and a test set of 1730 images.
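The 10-fold expansion and 3:1:1 split described above can be sketched as follows; the function name and file-name scheme are illustrative, not part of the invention.

```python
import random

def split_3_1_1(samples, seed=0):
    """Shuffle an expanded data set and split it 3:1:1 into train/val/test."""
    rng = random.Random(seed)
    items = list(samples)
    rng.shuffle(items)
    n = len(items)
    n_val = n // 5                # one part in five
    n_test = n // 5               # one part in five
    n_train = n - n_val - n_test  # the remaining three parts
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# 865 original disease-leaf images, expanded 10-fold -> 8650 images
expanded = [f"img_{i:04d}.jpg" for i in range(8650)]
train_set, val_set, test_set = split_3_1_1(expanded)
# len(train_set) == 5190, len(val_set) == 1730, len(test_set) == 1730
```

The sizes reproduce the 5190/1730/1730 partition quoted in the text.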
Further, the apple leaf disease identification method based on the multi-feature fusion and convolutional neural network specifically comprises the following steps:
s1: dividing and preprocessing the apple leaf image by utilizing a multi-feature fusion method;
Apple leaf diseases are of many kinds, and the lesion characteristics differ greatly between kinds, so extracting the features of the different disease images is the key step in disease identification. Because the images are acquired in natural scenes against complex backgrounds, they contain a large amount of noise; without preprocessing, the identification accuracy for diseases and pests decreases. Moreover, the lesion area generally occupies only a small proportion of the whole leaf, so a manually annotated data set is prone to inaccurate or misplaced annotations. The invention therefore proposes a multi-feature fusion method to preprocess the images and segment the lesions.
Specifically, the operation steps of preprocessing the image by the multi-feature fusion method are as follows:
S11: acquiring the RGB features of the original image and extracting the ultra-green feature using the ultra-green feature formula; the formula is I_ExB = 2·I_G − I_B − I_R, where I_G, I_B and I_R are the 3 color components of the RGB color space;
s12: converting an original image from an RGB color space to an HSV color space and a CIELAB color space respectively, and extracting H component color features and L component color features of the converted image respectively;
S13: performing a two-dimensional convolution operation on the three color components (the ultra-green component, the H component and the L component) with a Gaussian differential filter and a circular mean filter respectively, and combining the filtered color features into a fused feature; the fusion formula is MMF = (Gf ∗ I_H) + (Af ∗ I_L) + I_ExB, where Gf is the Gaussian filter, Af is the mean filter, ∗ denotes convolution, and MMF is the fused feature;
s14: and (3) image segmentation is carried out on the fused features by using a maximum inter-class variance method, the segmentation result is optimized by morphological processing, and a final segmentation result is obtained by using the original image and the optimized image to carry out mask operation.
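Steps S11-S14 can be illustrated with a minimal NumPy sketch: the ultra-green feature of S11, a plain square mean filter standing in for the circular mean filter of S13 (a simplification), and the maximum inter-class variance (Otsu) threshold of S14. All function names are illustrative, not from the patent.

```python
import numpy as np

def ultra_green(rgb):
    """S11: I_ExB = 2*I_G - I_B - I_R; rgb is an H x W x 3 float array."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 2.0 * g - b - r

def box_filter(img, k=3):
    """Plain k x k mean filter, a simplified stand-in for the circular
    mean filter of S13 (the true kernel support is circular)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def otsu_threshold(gray):
    """S14: maximum inter-class variance threshold on an 8-bit gray image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = gray.size
    cum = np.cumsum(hist)                       # class-0 pixel counts
    cum_mean = np.cumsum(hist * np.arange(256)) # class-0 intensity sums
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 = cum[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mean[t] / w0
        mu1 = (cum_mean[-1] - cum_mean[t]) / w1
        var = (w0 / total) * (w1 / total) * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a bimodal image the returned threshold separates the two intensity clusters, which is the behavior the segmentation step relies on.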
The image features of the leaf diseases are complex, and the proportion of the disease spot area in the whole leaf area is small, so that a plurality of small-area disease spots are not easy to identify. In order to improve the recognition accuracy of the network model, firstly, dividing the plant disease and insect pest blade image subjected to multi-feature fusion into 4 subgraphs with the same size, and simultaneously cutting a central region subgraph to enhance the detection capability of small lesions.
Specifically, in order to enhance the detail detection capability of the lesion image, the multi-feature fused image is uniformly scaled into a three-channel RGB color image of 125×125×3, and then the image is equally divided into four regions of upper left, upper right, lower left and lower right, and a central region is cut out in the input image. The central area is an area which is concentric with the input size image and is 1/2 of the area taken in the horizontal and vertical directions respectively. 5 sub-images with the size of 1/4 of the original image are obtained, and then interpolation is carried out on the 5 sub-images by using a bilinear interpolation method, wherein the specific method is shown in figure 8. During detection, the detection object is amplified by 4 times in equal proportion by adopting an interpolation algorithm, so that the detection object is not deformed when the detection object is fed into a network model.
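The five-crop and bilinear enlargement of steps S141-S143 can be sketched in NumPy for a single-channel image; function names are illustrative.

```python
import numpy as np

def five_crop(img):
    """Split an H x W image into TL, TR, BL, BR quarters plus a concentric
    center crop spanning half the width and height (steps S141-S142)."""
    h, w = img.shape[:2]
    h2, w2 = h // 2, w // 2
    crops = [img[:h2, :w2], img[:h2, w2:], img[h2:, :w2], img[h2:, w2:]]
    y0, x0 = h // 4, w // 4
    crops.append(img[y0:y0 + h2, x0:x0 + w2])  # concentric center region
    return crops

def bilinear_resize(img, out_h, out_w):
    """Bilinear interpolation (step S143) back to the original resolution."""
    h, w = img.shape[:2]
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    a = img[np.ix_(y0, x0)]; b = img[np.ix_(y0, x1)]
    c = img[np.ix_(y1, x0)]; d = img[np.ix_(y1, x1)]
    top = a * (1 - wx) + b * wx
    bot = c * (1 - wx) + d * wx
    return top * (1 - wy) + bot * wy
```

Each crop has 1/4 the area of the original, and resizing back doubles each side, i.e. a 4-fold enlargement in area as described above.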
The segmentation result of the fused feature is a gray image; the final segmentation result is obtained by a mask operation between the original image and the optimized image. The mask operation converts the gray image into a color image: in the final result the apple lesion areas retain their color and the remaining areas are black.
Further, step S2 is: constructing an apple leaf disease recognition convolutional neural network model by using the apple leaf image data after the segmentation pretreatment;
Conventional convolutional neural networks mainly comprise convolutional layers, activation layers, pooling layers and fully connected layers. Because the VGG-16 network model has a deep structure, good data-processing capability, and a high recognition rate in image recognition, the invention builds a model framework suitable for apple leaf disease image recognition on the basis of the traditional VGG-16 structure; the specific structure is shown in FIG. 7.
The apple leaf disease identification network model mainly comprises 13 convolutional layers (Conv1-Conv13), 4 pooling layers, and 1 global pooling layer. All convolution kernels are 3×3 and the sliding step of the convolutional layers is 1; to keep the output dimensions consistent with the input image, the pad parameter is set to 1 so that the convolutional-layer edges are completed by boundary expansion. The pooling layers (Pooling1-Pooling4) adopt maximum pooling, with the pooling size set to 3×3, the pooling window set to 2×2, and the sliding step set to 2. Since only 6 different apple leaf diseases are identified, the Softmax classification number is set to 6.
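The 13-convolution / 4-pooling / 1-global-pooling layout can be written down as a configuration list. The grouping of the convolution layers into 2-2-3-3-3 blocks below is an assumption following the VGG-16 pattern the model is based on, not a detail stated in the text.

```python
# All convolution kernels are 3x3 with stride 1 and pad 1; the four pooling
# layers use 2x2 max-pooling windows with stride 2; Softmax has 6 outputs.
ARCH = (
    ["conv"] * 2 + ["pool"] +   # Conv1-Conv2,   Pooling1
    ["conv"] * 2 + ["pool"] +   # Conv3-Conv4,   Pooling2
    ["conv"] * 3 + ["pool"] +   # Conv5-Conv7,   Pooling3
    ["conv"] * 3 + ["pool"] +   # Conv8-Conv10,  Pooling4
    ["conv"] * 3 +              # Conv11-Conv13
    ["global_pool", "softmax_6"]
)

# Trace the spatial side length of a 125x125 input through the pooling layers
# (3x3 convolutions with pad 1 preserve spatial size).
side = 125
for layer in ARCH:
    if layer == "pool":
        side //= 2   # 2x2 window, stride 2 (floor division)
# side == 7 before the global pooling layer
```

The trace shows the feature map shrinking 125 → 62 → 31 → 15 → 7 before global pooling collapses it entirely.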
S3: training a convolutional neural network model;
and sending the preprocessed image into a convolutional neural network, carrying out multi-level feature extraction through a convolutional layer, a pooling layer and a global pooling layer, and then selecting candidate areas with different sizes and different length-width ratios at various positions in a feature map.
Specifically, the operation steps of constructing the apple leaf disease recognition convolutional neural network model by using the apple leaf image data after the segmentation pretreatment include:
S31: operating on the segmented image from step S1 with the convolution kernels of the different convolutional layers to obtain different features of the input image, and passing the obtained features through an activation function to obtain the output feature map; the feature map formula is x_l = f(W_l·x_{l−1} + b_l), where x_{l−1} is the output of the (l−1)-th hidden layer, x_l is the output of the convolutional layer in the l-th hidden layer, x_0 is the input image at the input layer, W_l is the weight matrix of the l-th hidden layer, b_l is the bias of the l-th hidden layer, and the activation function is f(x) = max(0, x);
S32: reducing the dimension of the feature map output by the convolutional layer through a pooling-layer operation; the pooling operation uses maximum pooling, with the formula x_l = f(w·down(x_{l−1}) + b_s), where l is the layer index, down(·) is the downsampling operation, w is the pooling weight, and b_s is an additive bias;
s33: the global pooling layer performs weighted summation on the characteristics of the characteristic map, and integrates local information with category differentiation in the convolution layer and the pooling layer;
S34: combining the feature maps obtained by each layer, passing them to the loss layer, and fusing the detection results of all layers with non-maximum suppression; the parameters of the loss layer are computed by a loss function consisting of a classification part and a regression part: L(z, c, l, g) = (1/N)·(L_conf(z, c) + α·L_loc(z, l, g)), where N is the number of matched default boxes, L_loc is the position loss, L_conf is the confidence loss, z is the matching result between the default boxes and the different categories, c is the confidence of the predicted target, l is the position information of the predicted object frame, g is the position information of the ground-truth frame, and α is the weighting parameter between the confidence loss and the position loss;
s35: and optimizing the weight parameters of the network model by using a gradient descent method, and repeating the steps S31-S35 until the optimal value of the network weight is obtained.
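The convolution step S31 and the maximum pooling step S32 can be sketched for a single channel in NumPy; the 3×3 kernel, stride 1 and pad 1 match the settings above, and the explicit loops are written for clarity rather than speed.

```python
import numpy as np

def relu(x):
    """The activation f(x) = max(0, x) from step S31."""
    return np.maximum(0.0, x)

def conv2d(x, w, b, pad=1):
    """Single-channel sketch of x_l = f(W_l * x_{l-1} + b_l).
    x: H x W input, w: k x k kernel, b: scalar bias."""
    k = w.shape[0]
    xp = np.pad(x, pad)
    out = np.zeros((x.shape[0] + 2 * pad - k + 1,
                    x.shape[1] + 2 * pad - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(xp[i:i + k, j:j + k] * w) + b
    return relu(out)

def max_pool(x, size=2, stride=2):
    """The down(.) operation of step S32: maximum pooling."""
    oh = (x.shape[0] - size) // stride + 1
    ow = (x.shape[1] - size) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = x[i * stride:i * stride + size,
                          j * stride:j * stride + size].max()
    return out
```

With pad 1 and a 3×3 kernel the convolution preserves the spatial size, as required to keep the feature maps aligned with the input.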
The test software environment is Ubuntu 16.04 LTS with Matlab as the programming language; the hardware is an Intel(R) Core(TM) i7-7550K CPU @ 3.60 GHz with 32 GB of RAM and a GTX 1080 Ti GPU. The deep learning development framework is MatConvNet.
A mini-batch stochastic gradient descent (SGD) algorithm with a momentum factor is adopted during model training, and all convolutional layers use the ReLU activation function. Considering the demands on computer hardware, the training-set images are divided into different batch sizes and fed into the network model. The gradient descent algorithm proceeds as follows:
s361: randomly initializing weight parameters of a network model;
s362: calculating an error between the output value calculated by the model and the true value;
s363: carrying out weight adjustment on each neuron generating errors, and reducing error values;
s364: and repeating the iteration until the optimal value of the network weight is obtained.
In this study the batch size was set to 64, 128 and 256 respectively, the momentum factor to 0.9, and the number of iterations (epochs) to 100. At the start of training the network weights are randomly initialized from a Gaussian distribution with mean 0 and variance 0.01; the initial learning rate is set to 0.01 and the regularization coefficient to 0.005.
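Steps S361-S364 with the hyperparameters above (learning rate 0.01, momentum 0.9, regularization coefficient 0.005) can be sketched as a scalar toy example; the quadratic loss is purely illustrative and stands in for the network loss.

```python
def sgd_momentum_step(w, grad, v, lr=0.01, momentum=0.9, weight_decay=0.005):
    """One SGD update with momentum and L2 regularization (steps S362-S363)."""
    g = grad + weight_decay * w   # add the regularization gradient
    v = momentum * v - lr * g     # update the velocity
    return w + v, v

# S361: initialize the weight (a fixed scalar here for reproducibility);
# then iterate S362-S364 on a toy quadratic loss (w - 3)^2, gradient 2*(w - 3).
w, v = 0.0, 0.0
for _ in range(300):
    grad = 2.0 * (w - 3.0)
    w, v = sgd_momentum_step(w, grad, v)
# w converges close to 3 (slightly below, because of the weight decay)
```

The loop mirrors S364: the update is repeated until the weight settles near the loss minimum.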
Further, in step S4, the specific operation of identifying the disease category of the apple leaf according to the final output of the convolutional neural network model comprises: classifying the feature map processed by the global pooling layer with a Softmax classifier; the recognition formula is y_i = exp(w_i·x) / Σ_j exp(w_j·x), where x is the feature vector of the fully connected layer and w_i is the weight vector connecting its neurons to the i-th output neuron of the Softmax classifier.
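The Softmax classification of step S4 can be sketched in NumPy; the weight matrix W (6 rows, one per disease class) is an illustrative stand-in for the trained classifier weights.

```python
import numpy as np

def softmax(logits):
    """Numerically stable y_i = exp(z_i) / sum_j exp(z_j)."""
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def classify(features, W):
    """features: pooled feature vector; W: 6 x d weight matrix whose rows w_i
    connect the features to the i-th class neuron of the Softmax classifier."""
    probs = softmax(W @ features)
    return int(np.argmax(probs)), probs

# Toy example with d = 6 and an identity weight matrix (illustrative only)
cls, probs = classify(np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.0]), np.eye(6))
```

The predicted class is the index of the largest of the 6 output probabilities, which always sum to 1.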
In order to verify the recognition time and recognition rate of the apple leaf disease identification method of the present invention, 5 different methods were compared with the method of the present invention on 6 different apple leaf diseases in terms of recognition rate, model training time and recognition time. The comparison methods include a leaf-feature-based disease recognition method (LNNF), an SVM-based disease recognition method, a bag-of-words-feature-based disease recognition method (PCAA), and a rough-set and BP-neural-network-based disease recognition method (CLBP). The results of the different methods are shown in Table 1.
Table 1: Recognition rates, training times and recognition times of 5 different apple leaf disease recognition methods
As can be seen from Table 1, the apple leaf disease identification method performs well in terms of recognition rate and recognition time; although its training time is long, it shows good robustness and stability across different pest and disease types. Once the network model is trained, different diseases can be identified with a short recognition time.
The foregoing has shown and described the basic principles, principal features and advantages of the invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above; the embodiments and descriptions are merely illustrative of its principles, and various changes and modifications may be made without departing from the spirit and scope of the invention. The scope of the invention is defined by the appended claims and their equivalents.

Claims (3)

1. An apple leaf disease identification method based on multi-feature fusion and a convolutional neural network, characterized by comprising the following steps:
S1: segmenting and preprocessing the apple leaf image using a multi-feature fusion method;
S2: constructing an apple leaf disease recognition convolutional neural network model using the segmented and preprocessed apple leaf image data;
S3: training the convolutional neural network model;
S4: identifying the disease category of the apple leaf according to the final output result of the convolutional neural network model;
the specific steps of the image segmentation preprocessing in the step S1 include:
s11: acquiring RGB features of an original image, and extracting the ultragreen features by using an ultragreen feature calculation formula; the ultra-green characteristic calculation formula is I ExB =2I G -I B -I R In the formula I G ,I B ,I R For 3 color components in the RGB color space;
s12: converting an original image from an RGB color space to an HSV color space and a CIELAB color space respectively, and extracting H component color features and L component color features of the converted image respectively;
s13: two-dimensional convolution operation is respectively carried out on three color components of the super green component, the H component and the L component by using Gaussian differential filtering and circular mean filtering, the color characteristics after filtering are combined into fusion characteristics, and the fusion calculation formula is MMF= (G) f *I H )+(A f *I L )+I ExB Wherein, gf is Gaussian filtering, af is mean filtering, and MMF is fused characteristic;
s14: image segmentation is carried out on the fused characteristic images by using a maximum inter-class variance method, the segmentation result is optimized through morphological processing, and a final segmentation result is obtained by using the original images and the optimized images through mask operation;
the specific operation steps of step S14 include,
S141: uniformly scaling the apple leaf image after the multi-feature fusion of step S13 into a 125 × 125 × 3 three-channel RGB color image;
S142: equally dividing the three-channel RGB color image into four regions (upper-left, upper-right, lower-left and lower-right) and cropping a central region from the input image, the central region being concentric with the three-channel RGB color image and spanning 1/2 of it in both the horizontal and vertical directions, thereby obtaining 5 sub-images each 1/4 the size of the original image;
S143: interpolating the 5 sub-images by bilinear interpolation and enlarging each by a factor of 4 in equal proportion;
S144: converting the segmented gray-scale image into a color image, i.e. performing the mask operation;
the apple leaf disease identification convolutional neural network model in the step S2 comprises 13 convolutional layers, 4 pooling layers and 1 global pooling layer;
the specific operation of performing the network model training on the convolutional neural network model in the step S3 includes:
S31: performing operations between the image segmented in step S1 and the convolution kernels of the different convolution layers to obtain different features of the input image, and activating the obtained features through an activation function to obtain an output feature map; the feature map is calculated as x^l = f(W^l x^{l-1} + b^l), where x^{l-1} is the output of the (l-1)-th hidden layer, x^l is the output of the convolution layer in the l-th hidden layer, x^0 is the input image of the input layer, W^l is the weight matrix of the l-th hidden layer, b^l is the bias of the l-th hidden layer, and the activation function is f(x) = max(0, x);
S32: reducing the dimension of the feature map output by the convolution layer through a pooling layer operation, wherein the pooling operation adopts the maximum pooling method and is calculated as x^l = f(w^l · down(x^{l-1}) + b_s), where l is the layer number, down(·) is the downsampling operation, w^l is the pooling weight, and b_s is an additive bias;
S33: the global pooling layer performs a weighted summation over the feature map, integrating the class-discriminative local information from the convolution and pooling layers;
S34: combining the feature maps obtained at each layer, transmitting them to the loss layer, and fusing the detection results of all layers using non-maximum suppression; the parameters of the loss layer are calculated through a loss function L(z, c, l, g) = (1/N)(L_conf(z, c) + α·L_loc(z, l, g)), where L_loc is the position loss, L_conf is the confidence loss, N is the number of matched default boxes, z is the matching result between the default boxes and the different categories, c is the confidence of the prediction target, l is the position information of the predicted object frame, g is the position information of the real frame, and α is the weighting parameter between the confidence loss and the position loss;
S35: optimizing the weight parameters of the network model using the gradient descent method, and repeating steps S31 to S35 until the optimal network weights are obtained.
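The segmentation pipeline of steps S11–S14 above (super-green feature, filtered color components, feature fusion, maximum inter-class variance thresholding) can be sketched as follows. This is a simplified NumPy illustration, not the patented pipeline: the HSV H and CIELAB L components are replaced by crude gray-scale proxies so the example stays self-contained, and the 3×3 kernels merely stand in for the Gaussian differential and circular mean filters.

```python
import numpy as np

def conv2d_same(img, kernel):
    """Naive zero-padded 'same' 2-D convolution for small symmetric kernels."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def otsu_threshold(img, bins=256):
    """Maximum inter-class variance (Otsu) threshold for a float image."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0, mu = np.cumsum(p), np.cumsum(p * centers)
    with np.errstate(invalid="ignore", divide="ignore"):
        sigma_b = (mu[-1] * w0 - mu) ** 2 / (w0 * (1.0 - w0))  # inter-class variance
    return centers[np.nanargmax(sigma_b)]

def fuse_and_segment(rgb):
    """Steps S11-S14: super-green feature, filter H/L proxies, fuse, threshold."""
    r, g, b = (rgb[..., k].astype(float) for k in range(3))
    i_exg = 2 * g - b - r                        # super-green feature I_ExB = 2I_G - I_B - I_R
    h_proxy = (r + g + b) / 3.0                  # stand-in for the HSV H component
    l_proxy = 0.299 * r + 0.587 * g + 0.114 * b  # stand-in for the CIELAB L component
    gauss = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0
    mean = np.ones((3, 3)) / 9.0
    # MMF = (G_f * I_H) + (A_f * I_L) + I_ExB
    mmf = conv2d_same(h_proxy, gauss) + conv2d_same(l_proxy, mean) + i_exg
    return mmf > otsu_threshold(mmf)

# Toy image: a bright green "leaf" square on a dark background.
img = np.zeros((32, 32, 3), dtype=np.uint8)
img[8:24, 8:24, 1] = 200
mask = fuse_and_segment(img)
```

On a real image, the binary `mask` would then be cleaned up morphologically and combined with the original image via the mask operation of step S14.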
2. The apple leaf disease identification method based on multi-feature fusion and convolutional neural network of claim 1, wherein the specific operation of optimizing the weight parameters of the convolution layers using the gradient descent method in step S35 comprises:
S361: randomly initializing the weight parameters of the network model;
S362: calculating the error between the output value calculated by the model and the true value;
S363: adjusting the weights of each neuron contributing to the error so as to reduce the error value;
S364: repeating the iteration until the optimal network weights are obtained.
3. The apple leaf disease identification method based on multi-feature fusion and convolutional neural network of claim 2, wherein in step S4, the specific operation of identifying the disease category of the apple leaf according to the final output result of the convolutional neural network model comprises: classifying the image by applying a Softmax classifier to the feature map processed by the global pooling layer, the recognition formula being p_i = exp(w_i^T x) / Σ_j exp(w_j^T x), where w_i is the weight vector connecting the neurons of the fully connected layer to the i-th output neuron of the Softmax classifier.
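The combined loss of step S34 above, L(z, c, l, g) = (1/N)(L_conf(z, c) + α·L_loc(z, l, g)), can be sketched as follows; the per-box loss values here are illustrative placeholders, not outputs of a real detector, and the function name is ours.

```python
import numpy as np

def detection_loss(conf_losses, loc_losses, alpha=1.0):
    """Combine per-box losses as L = (L_conf + alpha * L_loc) / N,
    where N is the number of matched default boxes (0 matches gives zero loss)."""
    n = len(conf_losses)
    if n == 0:
        return 0.0
    return float(np.sum(conf_losses) + alpha * np.sum(loc_losses)) / n

# Two matched boxes with illustrative per-box confidence and position losses.
loss = detection_loss(np.array([0.5, 1.5]), np.array([0.2, 0.8]), alpha=1.0)
```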
CN202010705693.XA 2020-07-21 2020-07-21 Apple leaf disease identification method based on multi-feature fusion and convolutional neural network Active CN111860330B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010705693.XA CN111860330B (en) 2020-07-21 2020-07-21 Apple leaf disease identification method based on multi-feature fusion and convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010705693.XA CN111860330B (en) 2020-07-21 2020-07-21 Apple leaf disease identification method based on multi-feature fusion and convolutional neural network

Publications (2)

Publication Number Publication Date
CN111860330A CN111860330A (en) 2020-10-30
CN111860330B true CN111860330B (en) 2023-08-11

Family

ID=73001339

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010705693.XA Active CN111860330B (en) 2020-07-21 2020-07-21 Apple leaf disease identification method based on multi-feature fusion and convolutional neural network

Country Status (1)

Country Link
CN (1) CN111860330B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112580720A (en) * 2020-12-18 2021-03-30 华为技术有限公司 Model training method and device
CN112766364A (en) * 2021-01-18 2021-05-07 南京信息工程大学 Tomato leaf disease classification method for improving VGG19
CN112884025B (en) * 2021-02-01 2022-11-04 安徽大学 Tea disease classification system based on multi-feature sectional type training
CN113627258B (en) * 2021-07-12 2023-09-26 河南理工大学 Apple leaf pathology detection method
CN113989639B (en) * 2021-10-20 2024-04-16 华南农业大学 Automatic litchi disease identification method and device based on hyperspectral image analysis processing method
CN113989509B (en) * 2021-12-27 2022-03-04 衡水学院 Crop insect pest detection method, crop insect pest detection system and crop insect pest detection equipment based on image recognition
CN114332087B (en) * 2022-03-15 2022-07-12 杭州电子科技大学 Three-dimensional cortical surface segmentation method and system for OCTA image
CN114842240A (en) * 2022-04-06 2022-08-02 盐城工学院 Method for classifying images of leaves of MobileNet V2 crops by fusing ghost module and attention mechanism
CN115631417A (en) * 2022-11-11 2023-01-20 生态环境部南京环境科学研究所 Butterfly image identification method based on convolutional neural network

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004007165A (en) * 2002-05-31 2004-01-08 Nikon Corp Image processing method, image processing program, and image processor
US6915432B1 (en) * 1999-01-29 2005-07-05 International Business Machines Corporation Composing a realigned image
CN104061907A (en) * 2014-07-16 2014-09-24 中南大学 Viewing-angle greatly-variable gait recognition method based on gait three-dimensional contour matching synthesis
CN105701513A (en) * 2016-01-14 2016-06-22 深圳市未来媒体技术研究院 Method of rapidly extracting area of interest of palm print
CN108510504A (en) * 2018-03-22 2018-09-07 北京航空航天大学 Image partition method and device
CN109712165A (en) * 2018-12-29 2019-05-03 安徽大学 A kind of similar foreground picture image set dividing method based on convolutional neural networks
CN109961042A (en) * 2019-03-22 2019-07-02 中国人民解放军国防科技大学 Smoke detection method combining deep convolutional neural network and visual change diagram
CN110008912A (en) * 2019-04-10 2019-07-12 东北大学 A kind of social platform matching process and system based on plants identification
CN110555383A (en) * 2019-07-31 2019-12-10 中国地质大学(武汉) Gesture recognition method based on convolutional neural network and 3D estimation
CN110633720A (en) * 2018-06-22 2019-12-31 西北农林科技大学 Corn disease identification method
CN111178177A (en) * 2019-12-16 2020-05-19 西京学院 Cucumber disease identification method based on convolutional neural network
CN111415302A (en) * 2020-03-25 2020-07-14 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10282589B2 (en) * 2017-08-29 2019-05-07 Konica Minolta Laboratory U.S.A., Inc. Method and system for detection and classification of cells using convolutional neural networks
US10643341B2 (en) * 2018-03-22 2020-05-05 Microsoft Technology Licensing, Llc Replicated dot maps for simplified depth computation using machine learning

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6915432B1 (en) * 1999-01-29 2005-07-05 International Business Machines Corporation Composing a realigned image
JP2004007165A (en) * 2002-05-31 2004-01-08 Nikon Corp Image processing method, image processing program, and image processor
CN104061907A (en) * 2014-07-16 2014-09-24 中南大学 Viewing-angle greatly-variable gait recognition method based on gait three-dimensional contour matching synthesis
CN105701513A (en) * 2016-01-14 2016-06-22 深圳市未来媒体技术研究院 Method of rapidly extracting area of interest of palm print
CN108510504A (en) * 2018-03-22 2018-09-07 北京航空航天大学 Image partition method and device
CN110633720A (en) * 2018-06-22 2019-12-31 西北农林科技大学 Corn disease identification method
CN109712165A (en) * 2018-12-29 2019-05-03 安徽大学 A kind of similar foreground picture image set dividing method based on convolutional neural networks
CN109961042A (en) * 2019-03-22 2019-07-02 中国人民解放军国防科技大学 Smoke detection method combining deep convolutional neural network and visual change diagram
CN110008912A (en) * 2019-04-10 2019-07-12 东北大学 A kind of social platform matching process and system based on plants identification
CN110555383A (en) * 2019-07-31 2019-12-10 中国地质大学(武汉) Gesture recognition method based on convolutional neural network and 3D estimation
CN111178177A (en) * 2019-12-16 2020-05-19 西京学院 Cucumber disease identification method based on convolutional neural network
CN111415302A (en) * 2020-03-25 2020-07-14 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Tattoo image detection algorithm based on a three-channel convolutional neural network; Xu Qingyong; Jiang Shunliang; Xu Shaoping; Ge Yun; Tang Ling; Journal of Computer Applications (Issue 09), pp. 279-285 *

Also Published As

Publication number Publication date
CN111860330A (en) 2020-10-30

Similar Documents

Publication Publication Date Title
CN111860330B (en) Apple leaf disease identification method based on multi-feature fusion and convolutional neural network
Koirala et al. Deep learning–Method overview and review of use for fruit detection and yield estimation
Jiang et al. CNN feature based graph convolutional network for weed and crop recognition in smart farming
JP6935377B2 (en) Systems and methods for automatic inference of changes in spatiotemporal images
Zhu et al. In-field automatic observation of wheat heading stage using computer vision
CN109344883A (en) Fruit tree diseases and pests recognition methods under a kind of complex background based on empty convolution
CN110222215B (en) Crop pest detection method based on F-SSD-IV3
Islam et al. Rice leaf disease recognition using local threshold based segmentation and deep CNN
Lin et al. The pest and disease identification in the growth of sweet peppers using faster R-CNN and mask R-CNN
Punithavathi et al. Computer Vision and Deep Learning-enabled Weed Detection Model for Precision Agriculture.
Zhao et al. A detection method for tomato fruit common physiological diseases based on YOLOv2
Zhang et al. Combing K-means clustering and local weighted maximum discriminant projections for weed species recognition
Hu et al. Self-adversarial training and attention for multi-task wheat phenotyping
Bansal et al. Detecting Severity Levels of Cucumber Leaf Spot Disease using ResNext Deep Learning Model: A Digital Image Analysis Approach
Juwono et al. Machine learning for weed–plant discrimination in agriculture 5.0: An in-depth review
Thangaraj et al. Automatic recognition of avocado fruit diseases using modified deep convolutional neural network
Reddy et al. Mulberry leaf disease detection using yolo
CN113077452A (en) Apple tree pest and disease detection method based on DNN network and spot detection algorithm
Wang et al. Fusing vegetation index and ridge segmentation for robust vision based autonomous navigation of agricultural robots in vegetable farms
Bharadwaj Classification and grading of arecanut using texture based block-wise local binary patterns
CN115439842A (en) Mulberry sclerotinia severity detection method based on deep learning
Sharma et al. Self-attention vision transformer with transfer learning for efficient crops and weeds classification
Fang et al. Classification system study of soybean leaf disease based on deep learning
Xiao et al. Corn disease identification based on improved GBDT method
CN115601634A (en) Image blade identification method and device based on hierarchical attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant