CN113077452B - Apple tree pest and disease detection method based on DNN network and spot detection algorithm - Google Patents


Info

Publication number
CN113077452B
CN113077452B (application CN202110398406.XA)
Authority
CN
China
Prior art keywords
image
algorithm
gaussian
layer
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110398406.XA
Other languages
Chinese (zh)
Other versions
CN113077452A (en)
Inventor
李海
李谊骏
陈诗果
杨谋
兰元帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu College of University of Electronic Science and Technology of China
Original Assignee
Chengdu College of University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu College of University of Electronic Science and Technology of China
Priority to CN202110398406.XA
Publication of CN113077452A
Application granted
Publication of CN113077452B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G06T7/0002 Image analysis; Inspection of images, e.g. flaw detection
    • G06N3/045 Neural networks; Combinations of networks
    • G06N3/08 Neural networks; Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06T5/30 Image enhancement or restoration; Erosion or dilatation, e.g. thinning
    • G06T5/40 Image enhancement or restoration using histogram techniques
    • G06T5/70 Denoising; Smoothing
    • G06T7/11 Region-based segmentation
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G06T2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30188 Vegetation; Agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an apple tree pest and disease detection method based on a DNN network and a spot detection algorithm, relating to the technical field of fruit tree pest detection and comprising the following steps: step 1, build a basic DNN neural network, initialize the weight matrix W and bias parameter b, input a data set, and update W and b through the forward and backward propagation algorithms of the neural network; step 2, scale the acquired image with a Gaussian pyramid algorithm, then perform image segmentation to separate the foreground and background of the image; step 3, apply histogram equalization to the segmented image to enhance the feature points in the image; step 4, extract the feature points in the image with the LOG algorithm, then remove noise with a morphological opening operation; step 5, input the feature points processed in step 4 into the trained DNN neural network for judgment, identifying whether the leaves show pests or diseases.

Description

Apple tree pest and disease detection method based on DNN network and spot detection algorithm
Technical Field
The invention relates to the technical field of fruit tree pest detection, in particular to an apple tree pest detection method based on a DNN network and a spot detection algorithm.
Background
China is the world's largest apple producer; its apple planting area and yield both account for more than 50 percent of the world total. However, the quality of Chinese apples still lags behind that of developed countries, and the backward level of pest and disease control is a main factor restricting the development of China's apple industry. At the present stage there are two main approaches to apple tree pest and disease control in China: the "control calendar" and single-method detection. Control by a "control calendar" is carried out according to the occurrence of pests and diseases in past years; it usually misses the critical period and gives poor results. Methods such as R-CNN and Tamura features are typically studied on images of already diseased apples and cannot detect pests and diseases at the initial stage of invasion. Single detection methods such as support vector machines and YOLO compare the features of spots on apple tree leaves with the typical features of pests and diseases to achieve recognition, but they are prone to problems such as falling into local optima and vanishing gradients; their accuracy is low and it is difficult for them to meet the needs of orchard deployment.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an apple tree pest and disease detection method based on a DNN network and a spot detection algorithm.
The purpose of the invention is realized by the following technical scheme:
the apple tree pest and disease damage detection method based on the DNN network and the spot detection algorithm is characterized by comprising the following steps of:
step 1, building a basic DNN neural network, initializing a weight matrix W and a bias parameter b, inputting a data set, updating the weight matrix W and the bias parameter b through a forward propagation algorithm and a backward propagation algorithm of the neural network, and executing step 2;
step 2, scaling the acquired image with a Gaussian pyramid algorithm and then performing image segmentation, separating the foreground and background of the image, and executing step 3;
step 3, performing histogram equalization on the image subjected to image segmentation, enhancing feature points in the image, and executing step 4;
step 4, extracting feature points in the image with the LOG algorithm, then removing noise with an opening operation, and executing step 5;
step 5, inputting the feature points processed in step 4 into the trained DNN neural network for judgment, and identifying whether the leaves show pests or diseases.
Preferably, the image segmentation extraction process of step 2 includes two steps: firstly, building a color model, and then segmenting by an iterative energy minimization segmentation algorithm.
Preferably, the process of extracting feature points in the image with the LOG algorithm in step 4 comprises blob detection, wherein the blob detection comprises differentiating the normalized two-dimensional Laplacian of Gaussian operator to obtain the maximum or minimum of the Laplacian of Gaussian response, and then obtaining the blobs in the image from the maximum or minimum of the response.
Preferably, the opening operation in step 4 comprises applying an erosion operation to the image obtained by blob detection and then a dilation operation to remove the black interference blocks in the image.
The beneficial effects of the invention are:
the invention provides an apple tree pest detection method based on a DNN network and a spot detection algorithm, and realizes an apple tree pest detection system based on machine vision. The result shows that the identification accuracy rate of the detection system which takes the LOG algorithm as the feature extraction and the DNN neural network as the identification model to the apple tree leaf diseases and insect pests can reach 91.17%, the detection effect of the system to the apple tree leaf diseases and insect pests is clear, the disease and insect pest identification accuracy rate is high, the feasibility of the system is proved, and the detection system can be applied to the identification of common diseases and insect pests of apple trees. In conclusion, the disease spot characteristics of the apple tree leaves are extracted through the machine vision image processing method, and then the DNN neural network is constructed to detect and identify the apple tree diseases and insect pests, so that the prevention and control capacity on the apple tree diseases and insect pests can be improved.
Drawings
FIG. 1 is a schematic view of the detection method of the present invention;
FIG. 2 is a comparison graph before and after histogram equalization;
FIG. 3 is a comparison of a spot test before and after;
FIG. 4 is a comparison of before and after opening operation;
FIG. 5 is a diagram of a neural network architecture;
FIG. 6 is a diagram of a neural network framework;
FIG. 7 is a schematic view of pest detection accuracy;
fig. 8 is a schematic diagram of loss values in pest detection.
Detailed Description
The technical solutions of the present invention are further described in detail below with reference to the accompanying drawings, but the scope of the present invention is not limited to the following descriptions.
As shown in figure 1, the application provides an apple tree pest and disease detection method based on a DNN network and a spot detection algorithm. The image acquired by the camera is segmented with the GrabCut method, feature points in the leaf are then extracted with the Laplacian of Gaussian (LOG) operator, and the feature points are fed into a neural network for training to finally obtain a detection result. To address the tendency of neural networks to fall into local optima and suffer vanishing gradients, the system adopts a DNN network; to further improve accuracy, the DNN network is the main algorithm and the spot detection algorithm serves as an auxiliary, so that apple tree pests and diseases are detected efficiently. Meanwhile, detection results are fed back to a supervision platform in real time through a cloud platform for data updating, realizing cloud sharing and data summarization and helping fruit growers follow the growth condition of their apple trees in real time.
A neural network is an extension of the perceptron, and a DNN can be understood as a neural network with multiple hidden layers: in essence, a stack of linear transformations each followed by a nonlinear activation. The layers of a DNN fall into three types, the input layer, the hidden layers and the output layer; generally the first layer is the input layer, the last layer is the output layer, all layers in between are hidden layers, and adjacent layers are fully connected.
The system adopts an input layer, three hidden layers and an output layer; the structure is shown in figure 5, and the activation function used is the Sigmoid function.
As shown in fig. 6, a basic DNN neural network is first built and the weight matrix W and bias parameter b are initialized. The images in the training set then undergo image segmentation, histogram equalization, spot detection and the opening operation to obtain the feature points of each image and the coordinates of their positions; the feature points are selected in the original image, and the selected feature images are sent to the neural network for learning. Whether the test-set images show pests or diseases is then judged through the forward propagation algorithm, and the accuracy is calculated. Finally, the weight matrix W and bias parameter b are updated through the back propagation algorithm and testing is repeated until the recognition result is optimal, yielding the best model.
The essence of the forward propagation algorithm of the DNN neural network is that a series of linear operations and activation operations are performed by using a plurality of weight coefficient matrixes W, bias vectors b and input vectors x, each layer calculates the output of the next layer according to the output result of the previous layer from the input layer, the operation is performed backwards layer by layer until the output layer is reached, and the final result is output.
The activation function chosen by the system is the Sigmoid function σ(z), and the output values of the hidden layers and the output layer are denoted a. When there are m neurons in layer l−1, the output of the jth neuron of layer l is:

a_j^l = σ(z_j^l) = σ( Σ_{k=1}^{m} W_{jk}^l a_k^{l−1} + b_j^l ),   (27)

which can also be represented in matrix form:

a^l = σ(z^l) = σ(W^l a^{l−1} + b^l),   (28)
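Equation (28) translates directly into a few lines of NumPy. The layer sizes below follow the structure of fig. 5 (one input layer, three hidden layers, one output layer), but the dimensions and the small random initialization are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, params):
    """Layer-by-layer forward pass: a^l = sigmoid(W^l a^{l-1} + b^l), eq. (28)."""
    activations = [x]
    a = x
    for W, b in params:
        a = sigmoid(W @ a + b)
        activations.append(a)
    return activations

rng = np.random.default_rng(0)
sizes = [4, 8, 8, 8, 1]        # input, three hidden layers, output (cf. fig. 5)
params = [(0.1 * rng.standard_normal((m, n)), np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]
acts = forward(rng.standard_normal(4), params)
```

Keeping every intermediate activation a^l (rather than only the output) matters because the backward pass reuses them when forming the gradients.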
it should be noted that the back propagation algorithm of the DNN neural network is actually to perform iterative optimization on the loss function by using a gradient descent method to solve a minimum value, so as to find an optimal linear weight matrix W and an optimal offset vector b, and make the output result of the sample equal to or close to the sample label as much as possible.
The loss is first measured with the mean-square-error method, i.e. for each sample we want to minimize:

J(W, b, x, y) = (1/2) ‖a^L − y‖₂²,   (29)

Then, according to the loss function, the weight matrix W and bias vector b of each layer are solved iteratively by gradient descent. For the output layer L, W and b satisfy

a^L = σ(z^L) = σ(W^L a^{L−1} + b^L),   (30)

so for the output-layer parameters the loss function becomes:

J(W, b, x, y) = (1/2) ‖σ(W^L a^{L−1} + b^L) − y‖₂²,   (31)

and the gradient of the output layer can be solved as:

δ^L = ∂J/∂z^L = (a^L − y) ⊙ σ′(z^L).   (32)

Proceeding from equation (32), the gradients of layer L−1, layer L−2 and so on can be solved by stepwise recursion. From the computed δ^l of a layer and the forward-propagation relation z^l = W^l a^{l−1} + b^l, the gradients of W^L and b^L follow simply:

∂J/∂W^L = δ^L (a^{L−1})^T,   (33)

∂J/∂b^L = δ^L.   (34)

The output-layer δ^L can be solved from the formulas above; relating δ^l to δ^{l+1} amounts to solving

δ^l = ∂J/∂z^l = (∂z^{l+1}/∂z^l)^T δ^{l+1}.

From z^l and z^{l+1} it can be found that:

z^{l+1} = W^{l+1} a^l + b^{l+1} = W^{l+1} σ(z^l) + b^{l+1},   (35)

∂z^{l+1}/∂z^l = W^{l+1} diag(σ′(z^l)),   (36)

and substituting equation (36) into the relation between δ^l and δ^{l+1} above gives:

δ^l = (W^{l+1})^T δ^{l+1} ⊙ σ′(z^l).   (37)

The weight W^l and bias b^l can then be updated:

W^l = W^l − α δ^l (a^{l−1})^T,   (38)

b^l = b^l − α δ^l.   (39)
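The backpropagation procedure described above, from the output-layer error of eq. (32) down to the gradient-descent updates of the weights and biases, can be sketched as one training step. The two-layer toy network, the learning rate and the training pair are illustrative assumptions, not the system's actual configuration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, y, Ws, bs, lr=0.5):
    """One gradient-descent step per eqs (32)-(39); returns the pre-update loss."""
    a, zs, acts = x, [], [x]
    for W, b in zip(Ws, bs):                      # forward pass, eq (28)
        z = W @ a + b
        zs.append(z)
        a = sigmoid(z)
        acts.append(a)
    loss = 0.5 * np.sum((acts[-1] - y) ** 2)      # eq (29)
    sp = lambda z: sigmoid(z) * (1 - sigmoid(z))  # sigma'(z)
    delta = (acts[-1] - y) * sp(zs[-1])           # eq (32): output-layer error
    for l in range(len(Ws) - 1, -1, -1):
        gW = np.outer(delta, acts[l])             # eq (33): delta^l (a^{l-1})^T
        gb = delta                                # eq (34)
        if l > 0:
            delta = (Ws[l].T @ delta) * sp(zs[l - 1])  # eq (37): back-propagate error
        Ws[l] -= lr * gW                          # eq (38)
        bs[l] -= lr * gb                          # eq (39)
    return loss

rng = np.random.default_rng(1)
Ws = [rng.standard_normal((3, 2)), rng.standard_normal((1, 3))]
bs = [np.zeros(3), np.zeros(1)]
x, y = np.array([0.5, -0.3]), np.array([1.0])
losses = [train_step(x, y, Ws, bs) for _ in range(50)]
```

Note that each layer's error δ^l is computed from the not-yet-updated weights of the layer above, matching the recursion in eq. (37); the updates of eqs (38) and (39) are applied only afterwards.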
It should be noted that image segmentation is the process of dividing an image into a number of specific regions with unique properties and then extracting a target region of interest from them. The system adopts the GrabCut algorithm for foreground extraction, extracting the apple tree leaves from the whole image. The whole extraction process is divided into two steps: building a color model, and iterating an energy-minimization segmentation algorithm.
Wherein, the establishment of the color model comprises the following contents:
in the RGB color space, for each pixel, it is either a gaussian component from some background GMM or a gaussian component from some target GMM [13], so we can model the target and background with a full covariance GMM of k gaussian components, whose Gibbs energy after modeling is:
E(α,k,θ,z)=U(α,k,θ,z)+V(α,z), (1)
U(α,k,θ,z)=∑nD(αn,kn,θ,zn), (2)
in formula (2), U is an area term and represents a negative logarithm of the probability that a pixel belongs to the target or the background, i.e., a penalty for classifying the pixel as the target or the background. Because the model of the mixture gaussian density is:
Figure BDA0003013974320000049
Figure BDA00030139743200000410
so we get the negative logarithm of the Gaussian mixture model, the original equation becomes:
Figure BDA0003013974320000051
the formula (5) has three parameters theta of GMM, the first parameter theta is the weight pi of the Gaussian component, the second parameter theta is the mean vector mu of the Gaussian component, and the third parameter theta is the covariance matrix sigma.
θ={π(α,k),μ(α,k),∑(α,k),α=0,1,k=1...K}, (6)
In other words, the three GMM parameters describing the target and the background need to be determined by learning, and with the determination of these three parameters, the zone energy term of Gibbs energy can also be determined.
Generally, the similarity of two pixels in the RGB space is calculated by euclidean distance, and the most critical parameter β in this case is determined by the contrast of the image. If the image has low contrast, i.e. the pixels m and n have differences themselves, the difference i zm-zn i is low, and then a larger parameter β is multiplied to amplify the difference. If the contrast of the image is relatively high, that is, the difference i zm-zn i of m and n belonging to the same target pixel is relatively high, a relatively small parameter β needs to be multiplied to reduce the difference. In order to make the V term work normally under the condition of high or low contrast, the difference is reduced by a smaller parameter β, so when the constant γ is 50, the weight of n-link is:
V(α,z)=γ∑(m,n)∈Cn≠αm]exp-β||zm-zn||2, (7)
at this point, a color model is successfully built even if the first step of image segmentation is completed.
The iterative energy minimization segmentation algorithm comprises the following contents:
the target pixel and the background pixel are first acquired by framing the target, and then the GMMs of the target and the background can be estimated from these two pixels. There is also a need to cluster pixels belonging to the object and background into k classes, i.e. k gaussian models in GMM, by the k-mean algorithm [14 ]. Wherein, the gaussian component allocated to each pixel is:
Figure BDA0003013974320000052
the parameter mean and covariance in equation (8) are both estimated from their own RGB values, and the weight of the gaussian component can be obtained from the ratio of the number of pixels belonging to the gaussian component to the total number of pixels:
Figure BDA0003013974320000053
then, a graph can be established according to the Gibbs energy term, then the weight t-link and the weight n-link are calculated according to the graph,
image segmentation was finally performed using maxflow/mincut algorithm:
Figure BDA0003013974320000054
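The first stage of this procedure, clustering the framed pixels into K classes with k-means and estimating each component's weight, mean and covariance per equations (6) and (9), can be sketched as follows. The toy two-cluster pixel data and the tiny k-means are illustrative stand-ins; a deployed system would more likely call an existing GrabCut implementation:

```python
import numpy as np

def kmeans(pixels, k, iters=10):
    """Tiny k-means: cluster pixels into k groups (one per Gaussian component)."""
    # deterministic init: k pixels spread evenly through the array (an assumption)
    centers = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)].copy()
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        d = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)                       # nearest-center assignment
        for j in range(k):
            if (labels == j).any():
                centers[j] = pixels[labels == j].mean(0)
    return labels

def fit_color_model(pixels, k=2):
    """Estimate (pi, mu, Sigma) of each Gaussian component, cf. eqs (6) and (9)."""
    labels = kmeans(pixels, k)
    comps = []
    for j in range(k):
        pts = pixels[labels == j]
        pi = len(pts) / len(pixels)                # eq (9): pixel-count ratio
        mu = pts.mean(0)
        sigma = np.cov(pts.T) + 1e-6 * np.eye(pixels.shape[1])  # regularized
        comps.append((pi, mu, sigma))
    return comps

# toy "foreground" pixels: two well-separated RGB clusters
rng = np.random.default_rng(2)
red = np.array([200.0, 40.0, 40.0]) + rng.normal(0, 3, (30, 3))
green = np.array([40.0, 180.0, 40.0]) + rng.normal(0, 3, (30, 3))
model = fit_color_model(np.vstack([red, green]), k=2)
```

The small diagonal term added to each covariance keeps Σ invertible, which equation (5) requires when computing the region penalty D.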
wherein, the histogram equalization comprises the following contents:
the system adopts histogram equalization to widen the gray value of apple leaf scabs in the image and merge the gray value without the scabs, so that the contrast is increased, the image is clear, and the purpose of enhancing the characteristic points is achieved.
The gray-level histogram of an image is in essence a one-dimensional discrete function:

h(k) = n_k,  k = 0, 1, …, L−1,   (11)

In formula (11), n_k is the number of pixels in the image f(x, y) with gray level k, and the height of each column of the histogram corresponds to n_k. The histogram may be normalized, and the relative frequency of occurrence of gray levels in the normalized histogram is defined as P_r(k), namely

P_r(k) = n_k / N,   (12)

In equation (12), N is the total number of pixels in the image f(x, y) and n_k is the number of pixels with gray level k. Let r and s represent the gray levels of the original image and the histogram-equalized image respectively; with r, s ∈ [0, 1], a pixel's gray value varies between black and white: when r = 0 the pixel is black, and when r = 1 it is white. That is, any r in [0, 1] generates a corresponding s through the transformation function T(r):

s = T(r),   (13)

On the range 0 ≤ r ≤ 1 in equation (13), T(r) is a monotonically increasing function, so that 0 ≤ T(r) ≤ 1 when 0 ≤ r ≤ 1. The probability density of the random variable r is p_r(r); since the random variable s is a function of r, let the distribution function of s be F_s(s). From the definition of a distribution function:

F_s(s) = ∫₀^s p_s(t) dt = ∫₀^r p_r(w) dw,   (14)

Since the derivative of a distribution function is the probability density function, differentiating both sides with respect to s gives:

p_s(s) = p_r(r) · dr/ds,   (15)

From probability theory, a uniform distribution on an interval [a, b] has the probability density function

p(x) = 1/(b − a),  a ≤ x ≤ b,   (16)

Because r is normalized so that r ∈ [0, 1], the equalized result should satisfy p_s(s) = 1. And since p_s(s) ds = p_r(r) dr, so that ds = p_r(r) dr, integrating both sides yields:

s = T(r) = ∫₀^r p_r(w) dw,

For digital images with discrete gray levels, probability can be replaced by frequency, so the transformation function T(r_k) may be represented as:

s_k = T(r_k) = Σ_{j=0}^{k} n_j / N,   (17)

As can be seen from equation (17), the equalized gray level s_k of each pixel can be calculated directly from the histogram of the original image. The effect before and after histogram equalization on apple tree leaves is shown in fig. 2.
It should be noted that the blob detection includes the following:

Laplacian of Gaussian operator: for a two-dimensional Gaussian function

G(x, y; σ) = 1/(2πσ²) · exp(−(x² + y²)/(2σ²)),   (18)

its Laplacian is:

∇²G = ∂²G/∂x² + ∂²G/∂y² = (x² + y² − 2σ²)/(2πσ⁶) · exp(−(x² + y²)/(2σ²)),   (19)

and the normalized Laplacian of Gaussian is:

∇²_norm G = σ² ∇²G.   (20)

LOG algorithm: first, Gaussian low-pass filtering is applied to the image f(x, y) with a Gaussian kernel of variance σ to remove noise points in the image:

L(x, y; σ) = f(x, y) * G(x, y; σ),   (21)

Then the Laplacian of the filtered image is taken:

∇²L(x, y; σ) = ∇²[G(x, y; σ) * f(x, y)],   (22)

namely:

∇²L(x, y; σ) = [∇²G(x, y; σ)] * f(x, y).   (23)

That is, the Laplacian is applied to the Gaussian kernel, which is then convolved with the image.

Multi-scale detection: with the scale σ fixed, only blobs of the corresponding radius can be detected, so multi-scale detection is performed by differentiating the normalized two-dimensional Laplacian of Gaussian operator. The normalized Laplacian of Gaussian function is

∇²_norm G = σ² (∂²G/∂x² + ∂²G/∂y²),   (24)

and finding the extremum of equation (24) amounts to solving

∂(∇²_norm G)/∂σ = 0,   (25)

which gives:

r² − 2σ² = 0,   (26)

where r² = x² + y². At the scale σ = r/√2, the Laplacian of Gaussian response reaches its maximum or minimum, and the blobs in the image can then be obtained from the response values, as shown in fig. 3.
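The scale-selection relation r² = 2σ² can be checked numerically: sample the normalized LoG of equations (19) and (20) on a grid, take its response at the centre of a synthetic bright disk of radius r, and scan over σ. The grid size, disk radius and σ range here are illustrative choices:

```python
import numpy as np

def norm_log_kernel(sigma, size):
    """Scale-normalized LoG sigma^2 * nabla^2 G (eqs 19-20), sampled on a grid."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    r2 = x ** 2 + y ** 2
    g = np.exp(-r2 / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)
    return sigma ** 2 * (r2 - 2.0 * sigma ** 2) / sigma ** 4 * g

def center_response(img, sigma):
    """Normalized-LoG response at the image centre (a single-point convolution)."""
    return (norm_log_kernel(sigma, img.shape[0]) * img).sum()

# synthetic bright disk of radius 6 on a dark background
size, radius = 41, 6.0
ax = np.arange(size) - size // 2
x, y = np.meshgrid(ax, ax)
disk = (x ** 2 + y ** 2 <= radius ** 2).astype(float)

sigmas = np.linspace(1.0, 10.0, 91)
responses = np.array([abs(center_response(disk, s)) for s in sigmas])
best_sigma = float(sigmas[responses.argmax()])
```

Up to discretization error, the response magnitude peaks near σ = r/√2, which is exactly why scanning σ lets the detector recover the radius of each blob, not just its position.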
The morphological opening operation includes the following:

When an image is processed with the opening operation, noise points and small interference blocks can be removed without affecting the original image. Therefore, the opening operation is applied after the blob detection algorithm: the image obtained by blob detection first undergoes an erosion operation and then a dilation operation, which removes the black interference blocks in the image and makes the extracted lesions more distinct. The images before and after the opening operation are shown in fig. 4.
A specific embodiment of the present application is as follows:
the main control chip adopts an NLE-AI800 Internet of things platform, a CPU adopts dual-core A73+ dual-A53 + single-core A53, an AI computing unit adopts dual-core NNIE @840MHz and is provided with a 4GB (national standard) memory and a 32G memory, and the main control chip has super-strong operation processing and analysis capacity and can meet the requirements of the system on operation speed and storage space. The system also adopts a new continental Internet of things platform, which is an Internet of things system integrating the functions of equipment online acquisition, remote control, wireless transmission, data processing, early warning information release, decision support, integrated control and the like. The distributed data storage and calculation method supports access of various gateways and hardware equipment of the Internet of things, and stores and calculates data in a distributed mode. The data can be transmitted to the cloud end through a TCP protocol, and the data can be shared and visually analyzed conveniently.
To verify the accuracy of the apple tree pest and disease detection system more rigorously, the test site was deliberately chosen in Shaanxi, the region with the largest apple planting area and yield in recent years. In a pre-selected apple orchard test field, experimenters held the device by hand, acquired image information of apple tree leaves in real time through the camera, analyzed it frame by frame, fed each processed frame into the DNN neural network, and finally displayed the recognition result on screen.
The experiment selected 100 apple trees as sampling samples, divided into six groups, and each test result was transmitted to the cloud through the TCP protocol to facilitate analysis and study of the experimental data.
The results of the experiment are shown in table 1 below:
TABLE 1. Statistics of apple tree pest and disease identification

Actual class | Samples | Mosaic | Rust | Grey leaf spot | Alternaria leaf spot | Brown spot | Accuracy
Mosaic disease | 100 | 90 | 4 | 0 | 0 | 6 | 90.0%
Rust disease | 100 | 4 | 92 | 0 | 2 | 2 | 92.0%
Grey leaf spot disease | 100 | 0 | 6 | 89 | 4 | 1 | 89.0%
Alternaria leaf spot | 100 | 3 | 4 | 1 | 90 | 2 | 90.0%
Brown spot disease | 100 | 0 | 0 | 2 | 4 | 94 | 94.0%
No disease | 100 | 2 | 1 | 1 | 3 | 1 | 92.0%
From the above data, the accuracy of pest and disease detection was 89% or higher in every test group, as shown in fig. 7 and fig. 8. With this recognition accuracy, the system can meet fruit growers' needs for apple tree pest and disease prevention and can be widely deployed in apple orchards. However, because the system acquires image data through a camera, the acquired images may suffer visual interference or distortion from factors such as the light source, environment and hardware, so the detection carries a certain error, in the range of 0.2%-0.6%.
The foregoing is merely a preferred embodiment of the invention, and it is to be understood that the described embodiments are part, rather than all, of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention. The invention is not intended to be limited to the forms disclosed herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. And that modifications and variations may be effected by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (1)

1. An apple tree pest and disease detection method based on a DNN network and a spot detection algorithm, characterized by comprising the following steps:
step 1: building a basic DNN neural network, initializing a weight matrix W and a bias parameter b, inputting a data set, updating the weight matrix W and the bias parameter b through the forward propagation and back propagation algorithms of the neural network, and executing step 2;
step 2: scaling the acquired image with a Gaussian pyramid algorithm and then performing image segmentation to separate the foreground from the background of the image, and executing step 3;
step 3: performing histogram equalization on the segmented image to enhance the feature points in the image, and executing step 4;
step 4: extracting the feature points in the image with the LOG algorithm, then removing noise with an opening operation, and executing step 5;
step 5: inputting the feature points processed in step 4 into the trained DNN neural network for discrimination, to identify whether the leaves suffer from diseases or insect pests;
the image segmentation in step 2 comprises two stages: first a color model is built, and segmentation is then performed through an iterative energy-minimization segmentation algorithm;
in step 4, extracting the feature points with the LOG algorithm comprises spot detection: the maximum or minimum of the Laplacian-of-Gaussian response is obtained by differentiating the normalized two-dimensional Laplacian-of-Gaussian operator, and the spots in the image are then located from that extremal response;
in step 4, the noise removal uses an opening operation: the image obtained by spot detection is first subjected to an erosion operation and then to a dilation operation, removing the black interference blocks in the image;
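The opening operation referred to above (erosion followed by dilation with the same structuring element) can be sketched in pure NumPy. This is an illustrative reimplementation with an assumed 3×3 square structuring element, not the patent's code; a practical system would more likely call `cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)` from OpenCV:

```python
import numpy as np

def erode(img, k=3):
    """Binary erosion: a pixel survives only if its whole k x k neighborhood is set."""
    pad = k // 2
    padded = np.pad(img, pad, constant_values=False)
    out = np.ones_like(img)
    for dy in range(k):
        for dx in range(k):
            out &= padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def dilate(img, k=3):
    """Binary dilation: a pixel is set if any pixel of its k x k neighborhood is set."""
    pad = k // 2
    padded = np.pad(img, pad, constant_values=False)
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def opening(img, k=3):
    """Opening = erosion then dilation: removes specks smaller than the
    structuring element while preserving larger regions."""
    return dilate(erode(img, k), k)
```

On a binary lesion mask, specks smaller than the structuring element are removed while a 3×3 (or larger) lesion region survives intact.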
building the basic DNN neural network, initializing the weight matrix W and the bias parameter b, inputting the data set, and updating the weight matrix W and the bias parameter b through the forward propagation and back propagation algorithms of the neural network comprises the following steps:
constructing a basic DNN neural network with an input layer, three hidden layers and an output layer, the activation function being the Sigmoid function, and initializing the weight matrix W and the bias parameter b; performing image segmentation, histogram equalization, spot detection and the opening operation on the images in the data set to obtain the feature points of each image and the coordinates of their positions; framing the corresponding regions in the original images and feeding the selected feature images into the neural network for learning; discriminating through the forward propagation algorithm whether the test-set images show diseases or insect pests, and computing the accuracy; finally, updating the weight matrix W and the bias parameter b through the back propagation algorithm and testing again, until the discrimination result is optimal, thereby obtaining the optimal model;
the forward propagation algorithm of the DNN neural network performs a linear operation and an activation operation with the weight coefficient matrix W, the bias vector b and the input vector x: starting from the input layer, each layer computes its output from the output of the previous layer, proceeding backwards layer by layer until the output layer produces the result;
wherein the activation function is the Sigmoid function:
σ(z) = 1 / (1 + e^(−z));
assuming the output values of the hidden layers and the output layer are a, and layer l−1 has m neurons, the output a_j^l of the j-th neuron of layer l is:
a_j^l = σ(z_j^l) = σ( Σ_{k=1..m} w_jk^l · a_k^(l−1) + b_j^l )
expressed in matrix form:
a^l = σ(z^l) = σ( W^l a^(l−1) + b^l )
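The layer-by-layer forward pass just described, a^l = σ(W^l a^(l−1) + b^l), can be sketched in NumPy with an input layer, three hidden layers and an output layer as in the claim; the layer sizes and the random initialization below are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    """Forward propagation: each layer computes a^l = sigmoid(W^l a^{l-1} + b^l)
    from the previous layer's output, until the output layer."""
    a = x
    zs, activations = [], [a]
    for W, b in zip(weights, biases):
        z = W @ a + b          # linear operation
        a = sigmoid(z)         # activation operation
        zs.append(z)
        activations.append(a)
    return zs, activations

# illustrative network: 4 inputs, three hidden layers of 8 neurons, 2 outputs
rng = np.random.default_rng(0)
sizes = [4, 8, 8, 8, 2]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal((m, 1)) for m in sizes[1:]]
zs, acts = forward(rng.standard_normal((4, 1)), weights, biases)
```

Because the activation is the Sigmoid, every layer's output lies strictly between 0 and 1.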
for each sample (x, y) it is desired to minimize the loss:
J(W, b, x, y) = ½ ‖a^L − y‖²
the weight matrix W and the bias vector b of each layer are solved iteratively by gradient descent; for the output layer L, W and b satisfy:
a^L = σ(z^L) = σ( W^L a^(L−1) + b^L )
substituting into the loss, for the output-layer parameters the loss function becomes:
J(W, b, x, y) = ½ ‖σ( W^L a^(L−1) + b^L ) − y‖²
the gradient of the output layer is then:
∂J/∂W^L = (a^L − y) ⊙ σ′(z^L) · (a^(L−1))^T
∂J/∂b^L = (a^L − y) ⊙ σ′(z^L)
where ⊙ denotes the element-wise (Hadamard) product;
the gradients of layer L−1, layer L−2, and so on, are solved in the same order. Defining for layer l the error term
δ^l = ∂J/∂z^l
and using the forward propagation relation
z^l = W^l a^(l−1) + b^l,
the gradients of W^l and b^l of layer l are obtained from δ^l as:
∂J/∂W^l = δ^l (a^(l−1))^T
∂J/∂b^l = δ^l
according to the gradient of the output layer L,
δ^L = (a^L − y) ⊙ σ′(z^L),
and from the relation between layer l and layer l+1:
δ^l = (W^(l+1))^T δ^(l+1) ⊙ σ′(z^l);
the weight W^l and the bias b^l are then updated by gradient descent with learning rate α:
W^l ← W^l − α δ^l (a^(l−1))^T
b^l ← b^l − α δ^l
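A minimal NumPy sketch of the back-propagation update above (single sample, squared-error loss, Sigmoid activations throughout); the tiny 2-3-1 network, the target value and the learning rate are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)

def backprop_update(x, y, weights, biases, alpha=0.5):
    """One gradient-descent step on J = 1/2 * ||a^L - y||^2 for one sample."""
    a, zs, acts = x, [], [x]
    for W, b in zip(weights, biases):            # forward pass, caching z^l, a^l
        z = W @ a + b
        zs.append(z)
        a = sigmoid(z)
        acts.append(a)
    delta = (acts[-1] - y) * sigmoid_prime(zs[-1])   # delta^L
    for l in range(len(weights) - 1, -1, -1):
        grad_W = delta @ acts[l].T                   # dJ/dW^l = delta^l (a^{l-1})^T
        grad_b = delta                               # dJ/db^l = delta^l
        if l > 0:                                    # delta^l from delta^{l+1}
            delta = (weights[l].T @ delta) * sigmoid_prime(zs[l - 1])
        weights[l] -= alpha * grad_W
        biases[l] -= alpha * grad_b

# tiny 2-3-1 network trained toward y = 1 on a single sample
rng = np.random.default_rng(1)
weights = [rng.standard_normal((3, 2)), rng.standard_normal((1, 3))]
biases = [rng.standard_normal((3, 1)), rng.standard_normal((1, 1))]
x, y = np.array([[0.5], [0.2]]), np.array([[1.0]])

def loss():
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return 0.5 * float(np.sum((a - y) ** 2))

loss_before = loss()
for _ in range(200):
    backprop_update(x, y, weights, biases)
loss_after = loss()
```

Note that the new error δ^l is computed with the old weights[l] before that layer is updated, matching the derivation.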
Scaling the acquired image with the Gaussian pyramid algorithm and then performing image segmentation to separate the foreground from the background of the image comprises the following steps: the GrabCut algorithm is used to extract the foreground, i.e. the apple tree leaves, from the whole image; the extraction proceeds in two stages, color-model construction and an iterative energy-minimization segmentation algorithm:
wherein, the establishment of the color model comprises the following contents:
the target and the background are each modeled with a full-covariance GMM of k Gaussian components; after modeling, the Gibbs energy is:
E(α, k, θ, z) = U(α, k, θ, z) + V(α, z)
wherein U is the region term, representing the negative logarithm of the probability that a pixel belongs to the target or the background;
the Gaussian mixture density model is:
D(z) = Σ_{i=1..k} π_i · g_i(z; μ_i, Σ_i)
taking the negative logarithm of the Gaussian mixture model gives, for pixel n:
D(α_n, k_n, θ, z_n) = −log π(α_n, k_n) + ½ log det Σ(α_n, k_n) + ½ (z_n − μ(α_n, k_n))^T Σ(α_n, k_n)^(−1) (z_n − μ(α_n, k_n))
the GMM thus has three groups of parameters θ: the first is the weight π of each Gaussian component, the second is the mean vector μ of each Gaussian component, and the third is the covariance matrix Σ, namely:
θ = { π(α, i), μ(α, i), Σ(α, i) },  α ∈ {0, 1} (background/target),  i = 1, …, k;
the weight of an n-link is:
V(α, z) = γ Σ_{(m,n)∈C} [α_n ≠ α_m] · exp(−β ‖z_m − z_n‖²)
where C is the set of pairs of neighboring pixels;
completing the construction of a color model;
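The region term U built from this color model is, per pixel, the negative logarithm of its Gaussian-mixture likelihood. The sketch below evaluates that quantity directly for one pixel color; the 3-channel RGB setting and the function name are illustrative assumptions (in GrabCut proper, each pixel is scored against its single assigned component rather than the full mixture):

```python
import numpy as np

def gmm_neg_log(z, pis, means, covs):
    """Region term D(z) = -log( sum_i pi_i * N(z; mu_i, Sigma_i) )
    for one pixel color z under a full-covariance GMM."""
    z = np.asarray(z, dtype=float)
    d = z.size
    mixture = 0.0
    for pi_i, mu, cov in zip(pis, means, covs):
        diff = z - mu
        det = np.linalg.det(cov)
        norm = 1.0 / np.sqrt((2.0 * np.pi) ** d * det)       # Gaussian normalizer
        mixture += pi_i * norm * np.exp(-0.5 * diff @ np.linalg.solve(cov, diff))
    return -np.log(mixture)
```

A pixel whose color is likely under the target GMM gets a low target-side energy, biasing the min-cut toward labeling it foreground.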
the iterative energy minimization segmentation algorithm comprises the following contents:
selecting the target with a bounding box yields the target pixels and the background pixels, from which the GMM of the target and the GMM of the background are estimated respectively;
the pixels belonging to the target and the background are clustered into k classes by the k-means algorithm, i.e. the k Gaussian models of the GMM; the Gaussian component assigned to each pixel is the one that minimizes its region term:
k_n = arg min_i D_n(α_n, i, θ, z_n)
the weight of each Gaussian component is the ratio of the number of pixels assigned to that component to the total number of pixels:
π(α, i) = N(α, i) / N(α)
where N(α, i) is the number of pixels of class α assigned to component i and N(α) the total number of pixels of class α;
a graph is built from the Gibbs energy terms, the weights of the t-links and n-links are then solved from the graph, and the image is segmented with the max-flow/min-cut algorithm;
Performing histogram equalization on the segmented image to enhance the feature points in the image comprises the following process:
histogram equalization is used to widen the range of gray levels occupied by apple leaf lesions in the image and to merge the gray levels without lesions, comprising the following steps:
the discrete function of the image gray values is:
h(k) = n_k
where n_k is the number of pixels with gray level k in image I, and the height of each column of the histogram corresponds to h(k);
the histogram is normalized, and the relative frequency of occurrence of a gray level in the normalized histogram is defined as p(k), namely:
p(k) = n_k / N
where N is the total number of pixels of image I and n_k is the number of pixels with gray level k; r and s denote the gray level of the original image and the gray level of the image after histogram equalization respectively; when r, s ∈ (0, 1) the gray value of a pixel varies between black and white: when r = s = 0 the pixel is black, and when r = s = 1 the pixel is white; any r in [0, 1] produces a corresponding s through the transformation function
s = T(r)
on the range 0 ≤ r ≤ 1, T(r) is a monotonically increasing function, and 0 ≤ T(r) ≤ 1; the probability density of the random variable r is p_r(r); assuming the distribution function of the random variable s is F_s(s), from the definition of the distribution function:
F_s(s) = P(s′ ≤ s) = P(T(r′) ≤ s) = ∫_0^{T^(−1)(s)} p_r(w) dw
since the derivative of the distribution function is the probability density function, differentiating with respect to s gives:
p_s(s) = p_r(r) · dr/ds
for the equalized image the density should be uniform, p_s(s) = 1, and therefore
ds = p_r(r) dr;
integrating both sides yields:
s = ∫_0^r p_r(w) dw
so the transformation function T(r) can be expressed as:
s = T(r) = ∫_0^r p_r(w) dw
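In the discrete case T(r) becomes the cumulative relative frequency of the gray levels, which yields the usual histogram-equalization lookup table; a minimal sketch, assuming an 8-bit image:

```python
import numpy as np

def equalize(img, levels=256):
    """Discrete histogram equalization: s_k = (L-1) * sum_{j<=k} n_j / N,
    i.e. the cumulative distribution of gray levels scaled back to [0, L-1]."""
    hist = np.bincount(img.ravel(), minlength=levels)  # n_k
    cdf = np.cumsum(hist) / img.size                   # T(r): cumulative p(k)
    lut = np.round((levels - 1) * cdf).astype(np.uint8)
    return lut[img]                                    # map every pixel through T
```

Frequently occupied gray levels are spread apart (widening the lesion contrast) while empty levels collapse together, exactly the behavior described above.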
the spot detection comprises the following steps:
Laplacian-of-Gaussian operator: for the two-dimensional Gaussian function
G(x, y; σ) = 1/(2πσ²) · exp( −(x² + y²)/(2σ²) ),
its Laplacian is:
∇²G = ∂²G/∂x² + ∂²G/∂y² = (x² + y² − 2σ²)/(2πσ⁶) · exp( −(x² + y²)/(2σ²) )
and the normalized Laplacian of Gaussian is:
∇²_norm G = σ² ∇²G = (x² + y² − 2σ²)/(2πσ⁴) · exp( −(x² + y²)/(2σ²) )
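The normalized operator can be sampled on a grid to form a convolution kernel; a sketch (the truncation at 3σ per side is an assumed convention):

```python
import numpy as np

def norm_log_kernel(sigma):
    """Sampled scale-normalized LoG kernel: sigma^2 * Laplacian of a 2-D
    Gaussian, truncated at 3*sigma on each side of the centre."""
    half = int(np.ceil(3.0 * sigma))
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = x * x + y * y
    return ((r2 - 2.0 * sigma ** 2) / (2.0 * np.pi * sigma ** 4)
            * np.exp(-r2 / (2.0 * sigma ** 2)))
```

The kernel's central value is −1/(πσ²) and it changes sign on the circle x² + y² = 2σ², which is why its response peaks on blob-shaped lesions of matching size.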
The LOG algorithm: first, the image f(x, y) is Gaussian low-pass filtered with a Gaussian kernel of variance σ² to remove noise points in the image:
L(x, y; σ) = f(x, y) * G(x, y; σ)
then the Laplacian of the filtered image is taken:
∇²[ f(x, y) * G(x, y; σ) ]
namely, since differentiation commutes with convolution:
∇²[ f * G ] = f * ∇²G
Multi-scale detection: multi-scale detection is performed by differentiating the normalized two-dimensional Laplacian-of-Gaussian operator with respect to the scale σ; the normalized Laplacian function is:
∇²_norm G = σ² ∇²G
for a circular spot of radius r, solving for the maximum or minimum of the normalized response is equivalent to solving the equation:
∂( σ² ∇²G * f )/∂σ = 0
which gives:
r² = 2σ²
σ = r/√2
when the scale σ matches the spot radius in this way, the maximum or minimum of the Laplacian-of-Gaussian response is obtained, and the spots in the image are located according to this extremal response.
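The relation σ = r/√2 can be checked numerically: the scale-normalized LoG response at the centre of a bright disk of radius r integrates in closed form to −2U·e^(−U) with U = r²/(2σ²), a standard blob-detection result restated here as an assumption; its magnitude is largest at U = 1, i.e. σ = r/√2. The sketch below scans σ to locate that extremum:

```python
import numpy as np

def centre_response(sigma, r):
    """Closed-form scale-normalized LoG response at the centre of a bright
    disk of radius r: -2 * U * exp(-U), with U = r^2 / (2 * sigma^2)."""
    U = r ** 2 / (2.0 * sigma ** 2)
    return -2.0 * U * np.exp(-U)

r = 5.0
sigmas = np.linspace(0.5, 10.0, 2000)
responses = centre_response(sigmas, r)
best_sigma = sigmas[np.argmax(np.abs(responses))]
# the extremal response occurs where sigma is close to r / sqrt(2)
```

In a detector, this is why each blob's radius can be read off as √2 times the scale at which its response is extremal.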
CN202110398406.XA 2021-04-09 2021-04-09 Apple tree pest and disease detection method based on DNN network and spot detection algorithm Active CN113077452B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110398406.XA CN113077452B (en) 2021-04-09 2021-04-09 Apple tree pest and disease detection method based on DNN network and spot detection algorithm


Publications (2)

Publication Number Publication Date
CN113077452A CN113077452A (en) 2021-07-06
CN113077452B true CN113077452B (en) 2022-07-15

Family

ID=76617860


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113610870A (en) * 2021-08-11 2021-11-05 华东理工大学 Method and device for monitoring liquid level height change and bubble or solid motion
CN113627531B (en) * 2021-08-11 2023-12-08 南京农业大学 Method for determining pear ring rot resistance based on support vector machine classification algorithm

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109191074A (en) * 2018-08-27 2019-01-11 宁夏大学 Wisdom orchard planting management system
CN110163177A (en) * 2019-05-28 2019-08-23 李峥嵘 A kind of wind power generation unit blade unmanned plane automatic sensing recognition methods
WO2020146596A1 (en) * 2019-01-10 2020-07-16 The Regents Of The University Of Michigan Detecting presence and estimating thermal comfort of one or more human occupants in a built space in real-time using one or more thermographic cameras and one or more rgb-d sensors
CN111598001A (en) * 2020-05-18 2020-08-28 哈尔滨理工大学 Apple tree pest and disease identification method based on image processing

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190034734A1 (en) * 2017-07-28 2019-01-31 Qualcomm Incorporated Object classification using machine learning and object tracking
US20190370551A1 (en) * 2018-06-01 2019-12-05 Qualcomm Incorporated Object detection and tracking delay reduction in video analytics


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A Study on Development of the Camera-Based Blind Spot Detection System Using the Deep Learning Methodology;Donghwoon Kwon等;《applied sciences》;20190723;1-20 *
Crop leaf disease-spot detection based on local binary patterns; Li Chao et al.; Computer Engineering and Applications; 2017-03-23; vol. 53, no. 24; 233-237 *
Rapid identification of potato late blight based on machine vision; Dang Manyi et al.; Transactions of the Chinese Society of Agricultural Engineering; 2020-01-23; no. 02; 201-208 *
Research on plant disease detection algorithms based on deep learning and system implementation; Niu Bohao; China Masters' Theses Full-text Database (Agricultural Science and Technology); 2020-02-15; D046-20 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant