AU2021101682A4 - Automatic plant leaf disease diagnosis with machine learning and deep convolutional neural networks - Google Patents


Info

Publication number
AU2021101682A4
Authority
AU
Australia
Prior art keywords
image
plant
module
visual representation
leaf
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2021101682A
Inventor
Sachin Balkrushna Jadhav
Abhiram Sanjay Patil
Sanjay Bapuso Patil
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to AU2021101682A
Application granted
Publication of AU2021101682A4
Legal status: Ceased
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10024 - Color image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20112 - Image segmentation details
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30181 - Earth observation
    • G06T2207/30188 - Vegetation; Agriculture
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

A system for diagnosis of plant leaf species diseases is disclosed. An interface module (104) is configured to receive an image (102) of a plant. A data augmentation module (108) is operatively coupled to said interface module (104). A training and validation module (110) is configured to classify and arrange said visual representation of the at least one plant image. A detection module (112) includes a deep convolutional neural network (CNN) configured to receive said arranged visual representation of said image (102), wherein said network consists of max-pooling with at least three convolutional layers (114). A rectified linear unit activation function is applied to the output of each convolutional layer and a fully connected layer, wherein at each convolutional and fully connected layer the resolution of the image changes before it is fed to the next layer in said cascade manner, and with each layer a disease in said leaf image is highlighted.

Description

AUTOMATIC PLANT LEAF DISEASE DIAGNOSIS WITH MACHINE LEARNING AND DEEP CONVOLUTIONAL NEURAL NETWORKS
FIELD OF INVENTION
The present invention generally relates to a field of agricultural and biotechnological engineering and particularly relates to an application of artificial intelligence involving diagnosis of plant leaf disease through machine learning and deep convolutional neural networks.
BACKGROUND OF THE INVENTION
Crop yields across the globe seem to be reduced each year by diseases. High yields per hectare are critical to a farmer's profit margin, especially during periods of low prices, for example for soybean. The financial loss caused by soybean diseases is important to rural economies and to the economies of allied industries in urban areas. The effects of these losses are eventually felt throughout the soybean market worldwide. Estimates of loss due to disease in the United States and Ontario vary from year to year and by disease. From 1999 to 2002, soybean yield loss estimates ranged from 8 million to 10 million metric tons in the United States and from 90,000 to 166,000 metric tons in Ontario.
Asian Soybean Rust (herein referred to as ASR) has been reported in both the Eastern and Western Hemispheres. In the Eastern Hemisphere, ASR has been reported in Australia, China, India, Japan, Taiwan and Thailand. In the Western Hemisphere, ASR has been observed in Brazil, Colombia, Costa Rica and Puerto Rico. ASR can be a devastating disease, causing yield losses of up to 70 to 80% as reported in some fields in Taiwan. Plants that are heavily infected have fewer pods and smaller seeds of poor quality (Frederick et al., Mycology 92: 217-227 (2002)). ASR was first observed in the United States in Hawaii in 1994. ASR was later introduced into the continental United States in the fall of 2004, presumably as a consequence of tropical storm activity. Model predictions indicated that ASR had been widely dispersed throughout the southeastern United States, and subsequent field and laboratory observations confirmed this distribution.
Some plant diseases show disease-specific indicators on the surface of the plant leaves. For example, fungal diseases such as early fungal diseases (e.g., Septoria, S. tritici and S. nodorum), late fungal diseases (e.g., wheat rusts), and Helminthosporium typically cause a change in the plant leaves in that they show disease-specific spots or blobs which can be visually analyzed.
Computerized visual diagnostic methods have been proposed in the past. For example, the paper "Leaf Disease Grading by Machine Vision and Fuzzy Logic" (by Arun Kumar R et al., Int. J. Comp. Tech. Appl., Vol 2 (5), 1709-1716) discloses an approach to automatically grade diseases on plant leaves. Such systems use image processing techniques to analyze color-specific information in an image of the diseased plant. A K-means clustering method is performed for every pixel in the image to extract clusters with diseased spots. The segmented image is saved and the total leaf area is calculated. Finally, the spread of disease on the plant leaves is graded by employing fuzzy logic to determine a particular disease. A high computational effort is required for such an image-processing-based method.
A number of techniques and methods have been developed for the detection and treatment of plant leaf diseases caused by fungi or bacteria. US20200320682A1 discloses a system, method, and computer program product for determining plant diseases. The system includes an interface module configured to receive an image of a plant, the image including a visual representation of at least one plant element. A color normalization module is configured to apply a color constancy method to the received image to generate a color-normalized image. An extractor module is configured to extract one or more image portions from the color-normalized image, wherein the extracted image portions correspond to the at least one plant element. A filtering module is configured to identify one or more clusters by one or more visual features within the extracted image portions, wherein each cluster is associated with a plant element portion showing characteristics of a plant disease, and to filter one or more candidate regions from the identified clusters according to a predefined threshold, using a Bayes classifier that models visual feature statistics which are always present in a diseased plant image. A plant disease diagnosis module is configured to extract, using a statistical inference method, one or more visual features from each candidate region to determine for each candidate region one or more probabilities indicating a particular disease, and to compute a confidence score for the particular disease by evaluating all determined probabilities of the candidate regions.
Furthermore, US7951998B2 describes a method for assaying a soybean plant leaf for disease resistance, immunity, or susceptibility, comprising the steps of: obtaining a part of said soybean plant leaf; cultivating said part in a media consisting essentially of water, wherein said media is capable of maintaining said part for an assay for up to 2 months; exposing said part to a fungal or bacterial plant leaf pathogen; and assessing said part for resistance, immunity, or susceptibility to disease caused by said fungal or bacterial plant leaf pathogen.
US20190050948A1 describes a crop prediction system which performs various machine learning operations to predict crop production and to identify a set of farming operations that, if performed, optimize crop production. The crop prediction system uses crop prediction models trained with various machine learning operations based on geographic and agronomic information. Responsive to receiving a request from a grower, the crop prediction system can access information representative of a portion of land corresponding to the request, such as the location of the land and the corresponding weather conditions and soil composition. The crop prediction system applies one or more crop prediction models to the accessed information to predict crop production and to identify an optimized set of farming operations for the grower to perform.
The techniques mentioned herein apply computer vision and image processing, which in recent years have been applied mostly to plant protection. Disease detection and segmentation are essential, but the diseases of these crops occur in a real environment, and traditional segmentation methods cannot quickly and accurately obtain segmentation results. The traditional approach for image classification tasks has been to use hand-engineered features and then apply some form of learning algorithm in these feature spaces.
One drawback is that, although disease detection and segmentation are crucial, plant leaf diseases occur in a real environment, and traditional segmentation methods such as k-means and colour-based segmentation cannot quickly and accurately obtain segmentation results. Machine learning methods such as artificial neural networks (ANNs), decision trees, k-means, k-nearest neighbors, and support vector machines (SVMs) have been applied to image classification based on hand-engineered features. As a result, the performance of all these approaches depends heavily on the underlying predefined features. The overall classification accuracy is, therefore, dependent on the type of image processing and feature extraction techniques used.
Furthermore, the deficiencies associated with present image processing and computer vision techniques are as follows. Uncontrolled capture conditions may present characteristics that make image analysis more difficult. Complex backgrounds cannot easily be separated from the region of interest (usually the leaf and stem). The boundaries of the symptoms are often not well defined. Traditional colour-based segmentation methods, such as k-means and fuzzy k-means, cannot quickly and accurately obtain segmentation results. The traditional approach for image classification tasks is based on hand-crafted features, so the performance of all these approaches depends heavily on the underlying predefined features. Overtraining of classifiers on the defined image database can cause overfitting, which may lead to inaccuracy in the result. These limitations and drawbacks are overcome by the technical advancements of the present invention described in greater detail below.
SUMMARY OF THE INVENTION
The present invention generally relates to an application of artificial intelligence involving diagnosis of plant leaf disease through machine learning and deep convolutional neural networks.
In an embodiment of the present invention, a system for diagnosis of plant leaf species diseases is disclosed. The system comprising: an interface module configured to receive an image of a plant, wherein said interface module includes a database configured to receive and store plant leaf images, wherein the image including a visual representation of at least one plant element; a data augmentation module operatively coupled to said interface module, wherein said data augmentation module is configured to receive the visual representation of the at least one plant element and is configured to enlarge said visual representation in at least three different degrees in order to clearly view said image of the plant; a training and validation module operatively coupled to the data augmentation module, wherein said training and validation module is configured to classify and arrange said visual representation of the at least one plant image; a detection module communicatively coupled to said data augmentation module and said training and validation module, wherein said detection module includes a deep convolutional neural network (CNN) configured to receive said arranged visual representation of said image, wherein said image is passed through the cascaded network of the deep convolutional neural network, wherein said network consists of max-pooling with at least three convolutional layers, wherein a rectified linear unit activation function is configured to be applied to an output of each convolutional layer and a fully connected layer, wherein at each convolutional and fully connected layer, a resolution of each image changes before it is fed to a next layer in said cascade manner, wherein with each layer, a disease in said leaf image is highlighted.
In another embodiment of the present invention, a method for diagnosis of plant leaf species diseases is disclosed. The method comprising the steps of: receiving, at an interface module, an image of a plant, wherein said interface module includes a database configured to receive and store plant leaf images, wherein the image including a visual representation of at least one plant element; enlarging said visual representation of said image through a data augmentation module operatively coupled to said interface module, wherein said data augmentation module is configured to receive the visual representation of the at least one plant element and is configured to enlarge said visual representation in at least three different degrees in order to clearly view said image of the plant; classifying and arranging said visual representation of the at least one plant image through a training and validation module operatively coupled to the data augmentation module; and processing said classified and trained visual representation of said plant leaf image through a detection module which is communicatively coupled to said data augmentation module and said training and validation module, wherein said detection module includes a deep convolutional neural network (CNN) configured to receive said arranged visual representation of said image, wherein said image is passed through the cascaded network of the deep convolutional neural network, wherein said network consists of max-pooling with at least three convolutional layers, wherein a rectified linear unit activation function is configured to be applied to an output of each convolutional layer and a fully connected layer, wherein at each convolutional and fully connected layer, a resolution of each image changes before it is fed to a next layer in said cascade manner, wherein with each layer, a disease in said leaf image is highlighted.
To further clarify advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which is illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail with the accompanying drawings
BRIEF DESCRIPTION OF FIGURES
These and other features, aspects, and advantages of the present invention will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
Figure 1 illustrates a block diagram of components employed in a system for diagnosis of plant leaf species diseases.
Figure 2 illustrates a flow diagram of operations involved in diagnosis of plant leaf species diseases.
Figure 3 illustrates a proposed deep CNN model for diagnosing plant diseases.
Figure 4 illustrates a framework for Plant Species Disease Diagnosis (Detection and Classification) Using Deep CNN's Transfer Learning Technique.
Figure 5 illustrates the retraining process of the Alex Net and InceptionV3 CNN models.
Figure 6 illustrates plant leaf disease samples from the Soybean Leaf Image Database.
Figure 7a-e illustrates the bacterial blight class training progress using the Alex Net model, Google Net model, VGG16 model, ResNet101 model, and DensNet201 model.
Figure 8a-b illustrates the confusion matrix of the Alex Net CNN (predicted vs. actual class) and the sophisticated confusion matrix of the Alex Net CNN.
Figure 9a-b illustrates the confusion matrix of the Google Net CNN (predicted vs. actual class) and the sophisticated confusion matrix of the Google Net CNN.
Figure 10a-b illustrates the confusion matrix of the VGG16 CNN (predicted vs. actual class) and the sophisticated confusion matrix of the VGG16 CNN.
Figure 11a-b illustrates the confusion matrix of the ResNet101 CNN (predicted vs. actual class) and the sophisticated confusion matrix of the ResNet101 CNN.
Figure 12 illustrates the confusion matrix of the DensNet201 CNN (predicted vs. actual class) and the sophisticated confusion matrix of the DensNet201 CNN.
Figure 13a-e illustrates the classification results of the Alex Net CNN, Google Net CNN, VGG16 CNN, ResNet101 CNN, and DensNet201 CNN.
Figure 14 illustrates a class-wise leaf sample analysis using the deep CNN.
Table 1 shows the architecture of the retrained Alex Net model.
Table 2 shows the architecture of the retrained Google Net model.
Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not necessarily have been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved to help improve understanding of aspects of the present invention. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present invention, so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
DETAILED DESCRIPTION
For the purpose of promoting an understanding of the principles of the invention, reference will now be made to the embodiment illustrated in the drawings and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended, such alterations and further modifications in the illustrated system, and such further applications of the principles of the invention as illustrated therein being contemplated as would normally occur to one skilled in the art to which the invention relates.
It will be understood by those skilled in the art that the foregoing general description and the following detailed description are exemplary and explanatory of the invention and are not intended to be restrictive thereof.
Reference throughout this specification to "an aspect", "another aspect" or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrase "in an embodiment", "in another embodiment" and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such process or method. Similarly, one or more devices or sub-systems or elements or structures or components preceded by "comprises... a" does not, without more constraints, preclude the existence of other devices or other sub-systems or other elements or other structures or other components or additional devices or additional sub-systems or additional elements or additional structures or additional components.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The system, methods, and examples provided herein are illustrative only and not intended to be limiting.
Embodiments of the present invention will be described below in detail with reference to the accompanying drawings.
Figure 1 illustrates a block diagram of the components employed in a system for diagnosis of plant leaf species diseases. The system mainly includes an interface module (104) configured to receive an image (102) of a plant, wherein said interface module (104) includes a database storage (106) configured to receive and store plant leaf images, wherein the image including a visual representation of at least one plant element.
A data augmentation module (108) is operatively coupled to said interface module (104), wherein said data augmentation module (108) is configured to receive the visual representation of the at least one plant element and is configured to enlarge said visual representation in at least three different degrees in order to clearly view said image of the plant. A training and validation module (110) is also provided and is operatively coupled to the data augmentation module (108), wherein said training and validation module is configured to classify and arrange said visual representation of the at least one plant image.
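The "at least three different degrees" of enlargement performed by the data augmentation module can be illustrated with a small sketch. The implementation below is an assumption (a nearest-neighbour zoom in NumPy, with illustrative scale factors), not the patented module itself:

```python
import numpy as np

def enlarge(img, factor):
    """Nearest-neighbour enlargement of an H x W image by a scale factor."""
    h, w = img.shape[:2]
    out_h, out_w = int(h * factor), int(w * factor)
    rows = np.minimum((np.arange(out_h) / factor).astype(int), h - 1)
    cols = np.minimum((np.arange(out_w) / factor).astype(int), w - 1)
    return img[rows][:, cols]

leaf = np.arange(16.0).reshape(4, 4)          # stand-in for a leaf image
# three different degrees of enlargement (factors are illustrative)
augmented = [enlarge(leaf, f) for f in (1.5, 2.0, 3.0)]
print([a.shape for a in augmented])
```

Each enlarged copy preserves the original pixels, so lesion detail becomes easier to inspect without altering the underlying content.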
A detection module (112) is provided and is communicatively coupled to said data augmentation module (108) and said training and validation module (110), wherein said detection module includes a deep convolutional neural network (CNN) configured to receive said arranged visual representation of said image (102), wherein said image is passed through cascaded network of the deep convolutional neural network, wherein said network consists of a max-pooling with at least three convolutional layers (114), wherein a rectified linear unit activation function is configured to be applied to an output of each convolutional layer and a fully connected layer, wherein at each convolutional fully connected layer, a resolution of each image changes which is fed to a next layer in said cascade manner, wherein with each layer, a disease in said leaf image is highlighted.
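The cascade in the detection module can be sketched numerically. The following NumPy toy (an assumed stand-in, not the claimed network) chains three convolutional layers, each followed by a ReLU activation and 2 x 2 max-pooling, so the resolution changes at every stage as the description states:

```python
import numpy as np

def conv2d(image, kernel, bias=0.0):
    """Valid 2-D convolution of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel) + bias
    return out

def relu(x):
    return np.maximum(0.0, x)          # rectified linear unit activation

def maxpool(x, size=2):
    h = x.shape[0] // size * size
    w = x.shape[1] // size * size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

rng = np.random.default_rng(0)
x = rng.random((64, 64))               # stand-in for a grayscale leaf image
shapes = [x.shape]
for layer in range(3):                 # at least three convolutional layers
    kernel = rng.standard_normal((3, 3)) * 0.1
    x = maxpool(relu(conv2d(x, kernel)))   # conv -> ReLU -> max-pool cascade
    shapes.append(x.shape)             # resolution changes layer to layer
print(shapes)
```

Passing the output of each stage directly into the next reproduces the cascade manner described above; a real detection module would use many learned kernels per layer rather than one random kernel.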
Figure 2 illustrates a flow diagram of the operations involved in diagnosis of plant leaf species diseases. The method for diagnosis of plant leaf species diseases involves a number of steps, described as follows.
Step (210) states receiving, at an interface module, an image of a plant, wherein said interface module includes a database configured to receive and store plant leaf images, wherein the image including a visual representation of at least one plant element. Step (220) states enlarging said visual representation of said image through a data augmentation module operatively coupled to said interface module, wherein said data augmentation module is configured to receive the visual representation of the at least one plant element and is configured to enlarge said visual representation in at least three different degrees in order to clearly view said image of the plant.
Step (230) involves classifying and arranging said visual representation of the at least one plant image through a training and validation module operatively coupled to the data augmentation module.
Step (240) describes processing said classified and trained visual representation of said plant leaf image through a detection module which is communicatively coupled to said data augmentation module and said training and validation module, wherein said detection module includes a deep convolutional neural network (CNN) configured to receive said arranged visual representation of said image, wherein said image is passed through the cascaded network of the deep convolutional neural network, wherein said network consists of max-pooling with at least three convolutional layers, wherein a rectified linear unit activation function is configured to be applied to an output of each convolutional layer and a fully connected layer, wherein at each convolutional and fully connected layer, a resolution of each image changes before it is fed to a next layer in said cascade manner, wherein with each layer, a disease in said leaf image is highlighted.
Figure 3 illustrates the proposed deep CNN model for diagnosing plant diseases. In this invention, pre-trained convolutional neural networks (CNNs) such as Alex Net, Google Net, VGG16, ResNet101, and DensNet201 are used for leaf species disease detection and classification based on leaf images. These networks are trained using a pre-trained (transfer learning) approach on the Village leaf image dataset of soybean, tomato, and potato. The CNNs are trained using advanced optimizers such as Adam instead of vanilla mini-batch gradient descent. The performance of the proposed CNNs is then tested on the defined plant leaf image database of soybean, potato and tomato, and the networks' disease classification results are obtained using a confusion matrix and a sophisticated confusion matrix.
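The transfer-learning idea with an Adam update can be sketched as follows. Everything here is an illustrative assumption: a fixed random projection stands in for the frozen pre-trained convolutional base, only the final softmax layer is retrained, and the data, sizes, and learning rate are toys:

```python
import numpy as np

rng = np.random.default_rng(1)

# Frozen "pre-trained" feature extractor (a fixed random projection here,
# standing in for the convolutional base of a network such as Alex Net).
W_frozen = rng.standard_normal((64, 16))
def extract_features(images):            # images: (n, 64) flattened patches
    return np.tanh(images @ W_frozen)    # frozen weights are never updated

n_classes = 4                            # e.g. four soybean disease classes
W = np.zeros((16, n_classes))            # only this layer is retrained

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

X = rng.random((40, 64))                 # toy labelled training data
y = rng.integers(0, n_classes, 40)
F = extract_features(X)
Y = np.eye(n_classes)[y]

# Adam optimizer state and hyperparameters (illustrative values)
m = np.zeros_like(W); v = np.zeros_like(W)
b1, b2, lr, eps = 0.9, 0.999, 0.05, 1e-8
for t in range(1, 201):
    P = softmax(F @ W)
    g = F.T @ (P - Y) / len(F)           # cross-entropy gradient
    m = b1 * m + (1 - b1) * g            # first-moment estimate
    v = b2 * v + (1 - b2) * g * g        # second-moment estimate
    W -= lr * (m / (1 - b1 ** t)) / (np.sqrt(v / (1 - b2 ** t)) + eps)

acc = float((softmax(F @ W).argmax(1) == y).mean())
print(round(acc, 2))
```

The bias-corrected first and second moments are what distinguish Adam from the vanilla mini-batch gradient step mentioned above.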
Figure 4 illustrates a framework for plant species disease diagnosis (detection and classification) using the deep CNN transfer learning technique. The Alex Net, Google Net, VGG16, VGG19, ResNet101, and DensNet201 networks are used for testing in the proposed work. The image is passed through the cascaded network of the CNN and the final class output is provided. The network structure consists of a max-pooling layer over 3 convolutional layers. A rectified linear unit (ReLU) activation function is applied to the output of every convolutional layer and fully connected layer.
The proposed networks contain fully connected layers, and different features are extracted at each layer output. The output of each layer, with its changed resolution, is given as input to the next layer in cascaded fashion. The progression from layer to layer highlights the features of disease in the leaf image, and the complex features are finally extracted with better brightness characteristics. Figure 1 illustrates the layered architecture of the Alex Net and Google Net CNN models. The three commonly used neural layer types are discussed as follows.
a. Convolutional layers:
These layers obtain certain features from the input images; the output of convolutional layer p can be represented by the equation

Z_j^p = f( Σ_{i ∈ M_j} Z_i^{p-1} * k_ij^p + N_j^p )    (1)

where p is the layer under consideration, k_ij is the convolutional kernel function, N_j is the bias value, and M_j is the set of input maps. An unsupervised learning approach is used during training to obtain the weights and bias values. The raw input image and the feature maps are convolved in each layer of the CNN using the kernel functions.
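Equation (1) can be exercised numerically. The sketch below is illustrative only: it sums the convolutions of two input maps with their kernels, adds a bias, and takes f to be ReLU:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution, used as the * operator in equation (1)."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def feature_map(input_maps, kernels, bias):
    # Z_j = f( sum over i in M_j of Z_i * k_ij + N_j ), with f = ReLU
    s = sum(conv2d(z, k) for z, k in zip(input_maps, kernels)) + bias
    return np.maximum(0.0, s)

rng = np.random.default_rng(5)
maps = [rng.random((8, 8)) for _ in range(2)]        # two input maps Z_i
kernels = [rng.standard_normal((3, 3)) for _ in range(2)]
Z = feature_map(maps, kernels, bias=0.1)
print(Z.shape)
```

With 3 x 3 kernels on 8 x 8 input maps, the valid convolution yields a 6 x 6 output feature map, matching the resolution change described for each layer.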
b. Pooling layers:

Nonlinear down-sampling is performed with max-pooling layers; in this way the learned network parameters are optimized and simplified. In stochastic pooling, the probability of each activation within a pooling region is computed as

$p_k = a_k \big/ \sum_{l \in S_j} a_l$    (2)

where S_j is the j-th pooling region and a_k is the activation at element index k of feature map F. The stochastic pooling operation St is then expressed as

$a^{p,k}_{x,y} = St_{(m,n)}\left(P\big(a^{p-1,F}\big)\, w(x, y)\right)$    (3)

where a^{p,k}_{x,y} is the neuron activation at coordinate (x, y) in feature map F of the p-th layer, and w(x, y) is the weighting function.
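The pooling operations above can be sketched as follows: a standard non-overlapping max-pool and the region probabilities of equation (2). This is an illustrative sketch assuming 2x2 regions and non-negative activations; it is not the patented implementation.

```python
import numpy as np

def max_pool(x, size=2):
    """Non-overlapping max-pooling over size x size regions."""
    H, W = x.shape
    return x.reshape(H // size, size, W // size, size).max(axis=(1, 3))

def stochastic_pool_probs(region):
    """Probabilities p_k = a_k / sum(a_l) over one pooling region (eq. 2)."""
    a = np.asarray(region, dtype=float)
    return a / a.sum()

x = np.array([[1., 2., 5., 6.],
              [3., 4., 7., 8.],
              [0., 1., 2., 3.],
              [1., 2., 3., 4.]])
p = max_pool(x)                                   # one max per 2x2 region
probs = stochastic_pool_probs([1., 2., 3., 4.])   # sums to 1
```

In stochastic pooling, one activation per region would then be sampled according to `probs` rather than always taking the maximum.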
c. Fully connected layers:
A 1D vector is composed by flattening the 2D feature-map vectors in the spatial domain before the fully connected network.
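The flattening step can be sketched as follows; the dense-layer weights here are hypothetical placeholders chosen only to keep the example checkable.

```python
import numpy as np

def fully_connected(feature_map, W, b):
    """Flatten a 2D feature map to a 1D vector, then apply a dense layer."""
    v = feature_map.reshape(-1)          # 2D -> 1D flattening
    return W @ v + b

fmap = np.arange(6, dtype=float).reshape(2, 3)   # toy 2x3 feature map
W = np.ones((4, 6))                              # hypothetical 4-unit layer
b = np.zeros(4)
scores = fully_connected(fmap, W, b)             # each score = sum of fmap
```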
d. Image pre-processing and labelling
Consistency in the output of the deep neural network is maintained by pre-processing the training and testing image datasets. The sample consists of a total of 1200 Soybean leaf images. The colour leaf images are dimensioned 227 x 227 x 3 for the Alex Net architecture and 224 x 224 x 3 for the other models (Google Net, VGG16, VGG19, ResNet101, and DensNet201). These models are trained using the training database. The pre-processed image samples are spread over the four class labels assigned to them. The max epoch, mini-batch size, learning rate and bias values were modified to improve the performance of these models.
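The resizing part of this pre-processing can be sketched with a simple nearest-neighbour resize; this is an illustrative stand-in (real pipelines typically use a library resampler), and the input image here is a synthetic placeholder.

```python
import numpy as np

def resize_nn(img, out_h, out_w):
    """Nearest-neighbour resize of an H x W x 3 image array."""
    h, w, _ = img.shape
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output col
    return img[rows][:, cols]

raw = np.zeros((300, 400, 3), dtype=np.uint8)   # stand-in leaf photograph
alexnet_in = resize_nn(raw, 227, 227)           # Alex Net input size
others_in = resize_nn(raw, 224, 224)            # VGG/ResNet/DensNet size
```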
e. Deep CNN training
CNN training consists of forward and backward propagation. In each forward stage the image is processed with the current weights and bias values, and loss values are estimated against the ground-truth labels. In the backward stage the loss value is used to compute gradient parameters, and these gradient parameters update all the parameters of the network for the next forward stage. The forward and backward stages are executed until convergence; once convergence is achieved after a sufficient iteration count, the learning process stops.
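The forward/backward loop with a convergence stop can be sketched on a one-parameter model. This toy sketch (gradient descent on a scalar weight, with hypothetical data) only illustrates the control flow described above, not the network itself.

```python
import numpy as np

def train_until_convergence(x, d, w0=0.0, lr=0.1, tol=1e-6, max_iter=1000):
    """Forward pass -> loss vs. ground truth -> gradient -> update,
    repeated until the loss change falls below tol (convergence)."""
    w, prev_loss = w0, float("inf")
    for it in range(max_iter):
        y = w * x                            # forward stage
        loss = np.mean((d - y) ** 2)         # loss vs. ground-truth labels
        if abs(prev_loss - loss) < tol:      # convergence criterion
            return w, it
        grad = np.mean(-2.0 * (d - y) * x)   # backward stage (gradient)
        w -= lr * grad                       # parameter update
        prev_loss = loss
    return w, max_iter

x = np.array([1., 2., 3.])
d = 2.0 * x                                  # ground truth: ideal w = 2
w, iters = train_until_convergence(x, d)
```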
The squared error estimated in the forward stage is given by

$E_T = \sum_{t=1}^{T} \sum_{k=1}^{N} \left(d_k^t - y_k^t\right)^2$    (4)

where d_k^t represents the label of the t-th pattern in the k-th dimension, and y_k^t is the output obtained at the k-th output unit for the t-th input pattern. Four disease class types of Soybean leaf are used to train the CNN in a supervised manner; the CNN uses the learned features to recognize the disease through the stochastic response of the maximally activated neurons. Let T1 and T2 be two datasets to which regression is applied such that {(T1(1), T2(1)), ..., (T1(m), T2(m))}, j ∈ {1, 2, ..., k}. The probability of classifying sample m as class j is

$P(y = j \mid m; \theta) = e^{\theta_j^{\mathsf{T}} m} \Big/ \sum_{l=1}^{k} e^{\theta_l^{\mathsf{T}} m}$    (5)
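Equations (4) and (5) can be computed directly; a minimal sketch with illustrative inputs (the score vector stands in for the θ-weighted class scores of equation (5)):

```python
import numpy as np

def total_squared_error(d, y):
    """E_T: sum over patterns t and dimensions k of (d_k^t - y_k^t)^2 (eq. 4)."""
    return float(np.sum((np.asarray(d) - np.asarray(y)) ** 2))

def softmax(scores):
    """Class probabilities P(class j) = exp(s_j) / sum_l exp(s_l) (eq. 5)."""
    s = np.asarray(scores, dtype=float)
    e = np.exp(s - s.max())          # shift for numerical stability
    return e / e.sum()

# Two patterns, two output dimensions (one-hot targets vs. network outputs)
err = total_squared_error([[1, 0], [0, 1]], [[0.9, 0.2], [0.1, 0.8]])
probs = softmax([2.0, 1.0, 0.5, 0.1])   # scores for the four disease classes
```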
Figure 5 illustrates the retraining process of the Alex Net and InceptionV3 CNN models. Retraining of the pre-trained models is performed on powerful machines using ImageNet. The main objective of this phase is to modify the weights before input is given to the next phase, improving accuracy; in this way, improved classification can be obtained with the help of the pre-trained models. The retraining of Alex Net and Google Net is shown in figure 5, and the retraining modifications are listed in tables I and II.
These networks were retrained to classify four categories as follows: the pre-trained network is loaded first; the last three layers are reconfigured for the new classification task; the model is trained with the new data; and its performance is evaluated.
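The four retraining steps above can be sketched in a framework-agnostic way. The layer names and the list-of-layers representation here are hypothetical stand-ins (a real implementation would manipulate e.g. a torchvision model's modules); the sketch only shows which layers are replaced and made trainable.

```python
def load_pretrained():
    """Stand-in for loading a pre-trained network such as Alex Net.
    Each layer is (name, trainable); pre-trained layers start frozen."""
    layers = [(f"conv{i}", False) for i in range(1, 6)]
    layers += [("fc6", False), ("fc7", False),
               ("fc8", False), ("softmax", False), ("output", False)]
    return layers

def reconfigure_for_classes(layers, n_classes):
    """Step 2: replace the last three layers with new trainable layers
    sized for the new n_classes classification task."""
    kept = [(name, False) for name, _ in layers[:-3]]   # frozen backbone
    new = [("fc_new", True),
           (f"softmax_{n_classes}", True),
           ("class_out", True)]
    return kept + new

net = reconfigure_for_classes(load_pretrained(), 4)     # four leaf classes
trainable = [name for name, is_trainable in net if is_trainable]
```

Steps 3 and 4 (training on the new data and evaluating) then update only the `trainable` layers.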
Figure 6 illustrates plant leaf disease samples from the Soybean leaf image database. At least three leaves are displayed with different kinds of diseases: brown spot over the top of a leaf, bacterial blight, and frogeye leaf spot, where the leaf is torn with holes. A healthy leaf is also displayed for comparison with the diseased leaves.
Figure 7a-e illustrates the bacterial blight class training progress using the Alex Net, Google Net, VGG16, ResNet101, and DensNet201 models. Different convolutional neural network models and machine learning techniques have been employed to analyse the bacterial blight spot of a leaf, as discussed in figure 6; each training module produces a different training-progress graph for the bacterial blight spot of the plant leaf (here, a soybean plant leaf).
Figure 8a-b illustrates the confusion matrix of the Alex Net CNN (predicted vs. actual class) and its sophisticated confusion matrix. The actual case herein shows the target class with respect to the output class. This model produces at least a 95% result in the confusion matrix, with 5% tolerance.
Figure 9a-b illustrates the confusion matrix of the Google Net CNN (predicted vs. actual class) and its sophisticated confusion matrix. This model shows 96.4% accuracy with a tolerance of 3.6%.
Figure 10a-b illustrates the confusion matrix of the VGG16 CNN (predicted vs. actual class) and its sophisticated confusion matrix. This model shows 96.4% accuracy with a tolerance of 3.6%.
Figure 11a-b illustrates the confusion matrix of the ResNet101 CNN (predicted vs. actual class) and its sophisticated confusion matrix. This model produces accuracy up to 85.4% with a tolerance of 14.6%.
Figure 12a-b illustrates the confusion matrix of the DensNet201 CNN (predicted vs. actual class) and its sophisticated confusion matrix. Here the accuracy is 92.1% with a tolerance of 7.9%.
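The accuracy and tolerance figures quoted above come directly from a confusion matrix; a minimal sketch (the matrix entries here are hypothetical, chosen only to demonstrate the computation):

```python
import numpy as np

def accuracy_from_confusion(cm):
    """Overall accuracy = trace / total samples; tolerance = 1 - accuracy."""
    cm = np.asarray(cm, dtype=float)
    acc = np.trace(cm) / cm.sum()    # correct predictions lie on the diagonal
    return acc, 1.0 - acc

# Hypothetical 4-class matrix (rows = actual class, columns = predicted class)
cm = [[28, 1, 1, 0],
      [0, 29, 1, 0],
      [1, 0, 29, 0],
      [0, 1, 0, 29]]
acc, tol = accuracy_from_confusion(cm)
```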
Figure 13a-e illustrates the classification results of the Alex Net, Google Net, VGG16, ResNet101, and DensNet201 CNNs.
Figure 14 illustrates a class wise leaf sample analysis using Deep CNN.
All the proposed techniques use convolutional neural networks as both feature extractors and classifiers. Compared to handcrafted features with shallow classifiers, the deep CNN transfer-learning models extract deep features automatically from a large database. It is noticed that there is scope to enhance network performance by integrating a CNN with a shallow classifier; hence the proposed pre-trained CNNs are used to analyse network performance as feature extractors and classifiers. This invention develops a supervised machine-learning CNN transfer-learning approach in which three different diseases are classified along with the healthy class. Its main contribution towards plant disease detection with CNNs is an improved method of training networks using the advanced Adam optimizer instead of vanilla mini-batch gradient descent on an extensive dataset, for identification of specific disease symptoms.
Table 1 shows the architecture of the retrained Alex Net model, and Table 2 shows the architecture of the retrained Google Net model. Four classes were taken as the objective while reconfiguring and modifying the network models. The architecture states the functionality of each convolutional and max-pooling layer, with the different filter sizes.
The present invention further states that the system comprises a plurality of neural network layers configured to enhance complex features of said leaf image disease, wherein said neural layers comprise a plurality of convolutional layers configured to obtain features from said plant leaf image on the basis of weights and bias values, wherein said input image of the plant leaf and the feature-extracted image are configured to be convolved in each layer of the deep CNN through kernel functions, and a plurality of pooling layers configured to perform non-linear sampling in order to optimize and simplify the parameters of said visual representation of the plant leaf image.
The fully connected layers include a single one-dimensional vector by flattening a two-dimensional vector of feature map in spatial domain through said fully connected network.
The system further comprises an image pre-processing and labelling module configured to maintain the output of the deep CNN through pre-processing, training, and testing of said image in the database, wherein the image pre-processing and labelling is performed by taking a sample of said plant leaf images with specified dimensions of colour leaf images through the neural network architecture models, wherein said models are trained through said database, and wherein said image samples are configured to be spread over at least four class labels.
The class labels of the pre-processed image samples are associated with a maximum epoch, mini-batch size, learning rate and bias values, wherein these values are configured to be modified for enhancement of image processing through said models.
The deep convolutional neural network training consists of forward and backward propagation, wherein after each forward stage, said image is assigned with weights and bias values, wherein loss values are estimated through ground truth values, wherein the loss value is used for computing gradient parameters in the backward stage, and these gradient parameters are used to update all the parameters of the network for the next forward stage, wherein the forward and backward stages are executed up to convergence, wherein once convergence is achieved after a sufficient iteration count the learning process stops.
The system further comprises a retraining of pre-trained machine learning model layers configured to modify weights of said image samples before transmitting it to a next phase for accuracy, wherein said learning model layers are classified into four categories, wherein pre-trained network is loaded first, then a new classification task is performed by configuring last three layers, then new data is employed to train the model, and then a performance is evaluated.
The system further comprises an extractor module configured to extract one or more image portions from the visual representation of said image wherein the extracted image portions relate to the at least one plant element.
The proposed modifications of weights and biases, with retraining of the Alex Net, Google Net, VGG16, ResNet101, and DensNet201 CNNs, accurately identify the category of leaf disease. The evaluation results clearly indicate that the deep CNN architectures have higher classification accuracy than the traditional PNN, KNN, and SVM (shallow) classifiers. The CNNs improve accuracy and robustness in leaf species disease detection and classification. As the system accurately identifies the plant leaf disease, it is an efficient and cost-effective tool for the plant pathologist.
1. Manual input from the user is not required for segmentation of the diseased region (auto-segmentation takes place within the deep learning model).
2. Feature extraction of the query leaf image is done automatically by the deep learning CNN model, so there is no need to extract hand-crafted features for classification.
3. Analysing leaf disease detection and classification from colour images is possible with the proposed automated deep learning CNN method on a large image dataset (the restriction of a limited image database is overcome).
The performance of existing plant leaf disease segmentation depends solely on the type of image segmentation technique, in which manual input is required while segmenting the diseased region from the infected portion of the leaf area. Likewise, the performance of the existing shallow classifiers (SVM, KNN, and PNN) depends on hand-crafted feature extractors with a limited leaf image database. It is noticed that there is scope to enhance model performance by using deep Convolutional Neural Networks (CNNs), in which both feature extraction and classification are performed by the deep CNN itself.
In this research work, these issues are addressed using the proposed method of disease detection and classification with convolutional neural networks. More effective and efficient methods can be developed using deep CNNs; their main advantage is minimal error in analysing the image with an automated deep learning method on a large dataset of images for feature extraction and classification.
The main objective of this invention is to design and implement efficient image processing and deep convolutional neural network (CNN) algorithms to enhance the accuracy of plant leaf species disease detection, classification, and severity measurement, in order to control the amount of chemicals used for disease prevention.
Following points summarize the objectives of this Invention:
i. Leaf species identification using computer vision techniques.
ii. Leaf disease segmentation.
iii. Leaf disease feature extraction for disease detection.
iv. Leaf disease classification using different machine learning classifiers.
v. Leaf disease severity measurement.
The various goals the present invention intends to achieve are given below: develop a feature optimization technique for leaf species recognition; develop accurate segmentation techniques for leaf species disease detection; develop an accurate and efficient image analysis technique for leaf species disease severity measurement; replace handcrafted feature extraction with auto-machine-crafted feature extraction using a deep Convolutional Neural Network (CNN) model; and develop a novel deep CNN and machine learning classifier expert system that performs accurate and efficient plant leaf disease diagnosis on an extensive image database.
Nomenclature
AD - Disease Area; AL - Total Leaf Area; ALEXNET - Alex Net Convolutional Neural Network; AI - Artificial Intelligence; ANN - Artificial Neural Network; BPNN - Back Propagation Neural Network; BOW - Bag-of-Words; CCM - Colour Co-Occurrence Matrix; CSD - Colour Structure Descriptor; CNN - Convolutional Neural Network; DCT - Discrete Cosine Transform; DENSNET201 - DensNet201 Convolutional Neural Network; FAFCM - Fast Adaptive Fuzzy C-Means Clustering; GLCM - Gray Level Co-occurrence Matrix; GBCM - Grid-Based Colour Moment; GOOGLENET - Google Net Convolutional Neural Network; HSV - Hue Saturation Value; HOG - Histogram of Gradient; ICL - Intelligent Computing Laboratory, Chinese Academy of Sciences Data Set; IDSC - Inner-Distance Shape Context; K-MEANS++ - Incremental K-Means Clustering; K-MEANS - K-Means Clustering; KNN - K-Nearest Neighbors; LBP - Local Binary Pattern; LBHPG - Local Binary Histogram Pattern of Gradient; ML - Machine Learning; MMC - Maximum Margin Criterion; PCA - Principal Component Analysis; PHOW - Pyramid Histograms of Visual Words; PNN - Probabilistic Neural Network; RGB - Red Green Blue; ROI - Region of Interest; RESNET101 - ResNet101 Convolutional Neural Network; SVM - Support Vector Machine; SGDM - Spatial Gray Level Dependency Matrix; SIFT - Scale Invariant Feature Transform; SDL - Swedish Leaf Dataset; DSIFT - Dense Scale Invariant Feature Transform; SURF - Speeded Up Robust Feature; VGG16 - VGG16 Convolutional Neural Network.
The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, orders of processes described herein may be changed and are not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts necessarily need to be performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples. Numerous variations, whether explicitly given in the specification or not, such as differences in structure, dimension, and use of material, are possible. The scope of embodiments is at least as broad as given by the following claims.
Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any component(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature or component of any or all the claims.

Claims (10)

WE CLAIM
1. A system for diagnosis of plant leaf species diseases, the system comprising:
an interface module (104) configured to receive an image (102) of a plant, wherein said interface module (104) includes a database storage (106) configured to receive and store plant leaf image, wherein the image including a visual representation of at least one plant element;
a data augmentation module (108) operatively coupled to said interface module (104), wherein said data augmentation module (108) is configured to receive the visual representation of the at least one plant element and is configured to enlarge said visual representation in at least three different degrees in order to clearly view said image of the plant;
a training and validation module (110) operatively coupled to the data augmentation module (108), wherein said training and validation module is configured to classify and arrange said visual representation of the at least one plant image;
a detection module (112) communicatively coupled to said data augmentation module (108) and said training and validation module (110), wherein said detection module includes a deep convolutional neural network (CNN) configured to receive said arranged visual representation of said image (102), wherein said image is passed through cascaded network of the deep convolutional neural network, wherein said network consists of a max-pooling with at least three convolutional layers (114), wherein a rectified linear unit activation function is configured to be applied to an output of each convolutional layer and a fully connected layer, wherein at each convolutional fully connected layer, a resolution of each image changes which is fed to a next layer in said cascade manner, wherein with each layer, a disease in said leaf image is highlighted.
2. The system as claimed in claim 1, wherein the system further comprises:
a plurality of neural network layers configured to enhance complex features of said leaf image disease, wherein said neural layers comprises a plurality of convolutional layers which are configured to obtain features from said plant leaf image on basis of weights and bias values, wherein said input image of plant leaf and feature extracted image are configured to be convolved in each layer of deep CNN through kernel functions.
3. The system as claimed in claim 1, wherein the system further comprises:
a plurality of pooling layers configured to perform non-linear sampling in order to optimize and simplify parameters of said visual representation of the plant leaf image.
4. The system as claimed in claim 1, wherein the fully connected layers include a single one-dimensional vector by flattening a two-dimensional vector of feature map in spatial domain through said fully connected network.
5. The system as claimed in claim 1, wherein the system further comprises:
an image pre-processing and labelling module configured to maintain an output of deep CNN through pre-processing and training and testing of said image in the database, wherein the image pre-processing and labelling is performed by taking a sample of said plant leaf images with a specified dimensions of color leaf images through neural network architecture models, wherein said models are trained through said database, wherein said image samples are configured to be spread with at least four class labels.
6. The system as claimed in claim 5, wherein said class of labels of the pre-processed image samples consists of a maximum epoch, mini-batch size, learning rate and bias values, wherein said labels are configured to be modified for enhancements of image processing through said models.
7. The system as claimed in claim 1, wherein the deep convolutional neural network training consists of forward and backward propagation, wherein after each forward stage, said image is assigned with weights and bias values, wherein loss values are estimated through ground truth values, wherein the loss value is used for computing gradient parameters in the backward stage, and these gradient parameters are used to update all the parameters of the network for the next forward stage, wherein the forward and backward stages are executed up to convergence, wherein once convergence is achieved after a sufficient iteration count the learning process stops.
8. The system as claimed in claim 1, wherein the system further comprises:
a retraining of pre-trained machine learning model layers configured to modify weights of said image samples before transmitting it to a next phase for accuracy, wherein said learning model layers are classified into four categories, wherein pre-trained network is loaded first, then a new classification task is performed by configuring last three layers, then new data is employed to train the model, and then a performance is evaluated.
9. The system as claimed in claim 1, wherein the system further comprises:
an extractor module configured to extract one or more image portions from the visual representation of said image wherein the extracted image portions relate to the at least one plant element.
10. A method for diagnosis of plant leaf species diseases, the method comprising steps:
receiving, at an interface module, an image of a plant, wherein said interface module includes a database configured to receive and store the plant leaf image, wherein the image includes a visual representation of at least one plant element;
enlarging said visual representation of said image through a data augmentation module operatively coupled to said interface module, wherein said data augmentation module is configured to receive the visual representation of the at least one plant element and is configured to enlarge said visual representation in at least three different degrees in order to clearly view said image of the plant;
classifying and arranging said visual representation of the at least one plant image through a training and validation module operatively coupled to the data augmentation module; and
processing said classified and trained visual representation of said plant leaf image through a detection module which is communicatively coupled to said data augmentation module and said training and validation module, wherein said detection module includes a deep convolutional neural network (CNN) configured to receive said arranged visual representation of said image, wherein said image is passed through a cascaded network of the deep convolutional neural network, wherein said network consists of a max-pooling with at least three convolutional layers, wherein a rectified linear unit activation function is configured to be applied to an output of each convolutional layer and a fully connected layer, wherein at each convolutional fully connected layer, a resolution of each image changes which is fed to a next layer in said cascade manner, wherein with each layer, a disease in said leaf image is highlighted.
AU2021101682A 2021-04-01 2021-04-01 Automatic plant leaf disease diagnosis with machine learning and deep convolutional neural networks Ceased AU2021101682A4 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2021101682A AU2021101682A4 (en) 2021-04-01 2021-04-01 Automatic plant leaf disease diagnosis with machine learning and deep convolutional neural networks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2021101682A AU2021101682A4 (en) 2021-04-01 2021-04-01 Automatic plant leaf disease diagnosis with machine learning and deep convolutional neural networks

Publications (1)

Publication Number Publication Date
AU2021101682A4 true AU2021101682A4 (en) 2021-05-20

Family

ID=75911205

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2021101682A Ceased AU2021101682A4 (en) 2021-04-01 2021-04-01 Automatic plant leaf disease diagnosis with machine learning and deep convolutional neural networks

Country Status (1)

Country Link
AU (1) AU2021101682A4 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023107023A1 (en) * 2021-12-06 2023-06-15 Onur Yolay Artificial intelligence based predictive decision support system in disease, pest and weed fighting



Legal Events

Date Code Title Description
FGI Letters patent sealed or granted (innovation patent)
MK22 Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry