CN114399480A - Method and device for detecting severity of vegetable leaf disease - Google Patents

Method and device for detecting severity of vegetable leaf disease

Info

Publication number
CN114399480A
CN114399480A (application number CN202111659409.0A)
Authority
CN
China
Prior art keywords
disease
image
severity
vegetable
vegetable leaf
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111659409.0A
Other languages
Chinese (zh)
Inventor
张领先
李凯雨
徐畅
丁俊琦
朱昕怡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Agricultural University
Original Assignee
China Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Agricultural University filed Critical China Agricultural University
Priority to CN202111659409.0A priority Critical patent/CN114399480A/en
Publication of CN114399480A publication Critical patent/CN114399480A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G06T9/00 Image coding
    • G06T9/002 Image coding using neural networks
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method and a device for detecting the severity of vegetable leaf diseases. The detection method comprises: acquiring a vegetable leaf image; determining a disease segmentation image based on the vegetable leaf image; determining lesion area information and healthy area information based on the disease segmentation image; and determining disease severity information based on the lesion area information and the healthy area information. The method and the device determine the disease segmentation image from a vegetable leaf image captured in a real scene and derive the lesion area information and the healthy area information from it, thereby obtaining the disease severity information. The severity of vegetable leaf diseases against a complex background can thus be detected accurately and automatically, saving labor cost and improving detection accuracy and efficiency.

Description

Method and device for detecting severity of vegetable leaf disease
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a method and a device for detecting the severity of a disease of a vegetable leaf.
Background
Vegetables may develop diseases for various reasons during cultivation, which reduces both yield and quality. Downy mildew and powdery mildew, for example, are among the most common and most damaging greenhouse vegetable diseases. Accurately determining disease severity is a precondition for scientific disease control by growers and is important for reducing pesticide usage and improving economic benefits.
At present, detection of vegetable leaf disease severity relies mainly on the grower's experience; it is therefore time-consuming and labor-intensive, incurs high labor costs, and suffers from poor accuracy and low efficiency.
Disclosure of Invention
The invention provides a method and a device for detecting the severity of vegetable leaf diseases, which overcome the defects of the prior art, namely that detection relies mainly on the grower's experience and is therefore time-consuming, labor-intensive, costly, subjective, inaccurate and inefficient. The method completes the detection of vegetable leaf disease severity automatically, saves labor cost, and improves detection accuracy and efficiency.
The invention provides a method for detecting the severity of vegetable leaf diseases, which comprises the following steps: acquiring an image of a vegetable leaf; determining a disease segmentation image based on the vegetable leaf image; determining lesion area information and health area information based on the lesion segmentation image; and determining disease severity information based on the lesion area information and the healthy area information.
According to the method for detecting the severity of the disease of the vegetable leaf, provided by the invention, the determination of the disease segmentation image based on the vegetable leaf image comprises the following steps: inputting the vegetable leaf image into a disease segmentation model to obtain the disease segmentation image output by the disease segmentation model; the disease segmentation model is obtained by training a vegetable leaf sample data set, wherein the vegetable leaf sample data set comprises a vegetable leaf sample image and a disease segmentation sample image corresponding to the vegetable leaf sample image.
According to the method for detecting the severity of vegetable leaf diseases provided by the invention, the disease segmentation model comprises: an encoder for extracting low-level features and mixed semantic features from the vegetable leaf image and obtaining high-level features based on the mixed semantic features; and a decoder for obtaining a fusion feature based on the low-level features and the high-level features and obtaining the disease segmentation image based on the fusion feature.
According to the method for detecting the severity of vegetable leaf diseases provided by the invention, the encoder comprises: a depth convolution block comprising at least one mixed attention module, the mixed attention module being configured to compute channel-space interaction feature weights to obtain the low-level features and the mixed semantic features; and a multi-scale feature extraction block used to perform dilated convolution and pooling operations on the mixed semantic features to obtain the high-level features.
According to the method for detecting the severity of vegetable leaf diseases provided by the invention, the training process of the disease segmentation model comprises: counting the numbers of lesion areas, healthy areas and background areas in the disease segmentation sample image; determining a lesion area weight, a healthy area weight and a background area weight based on those numbers and a median frequency balancing method; and applying the lesion area weight, the healthy area weight and the background area weight to the disease segmentation model.
According to the method for detecting the severity of vegetable leaf diseases provided by the invention, the vegetable leaf sample data set comprises original leaf samples and enhanced leaf samples, wherein the enhanced leaf samples are obtained by performing simulated augmentation (data expansion) on the original leaf samples.
According to the method for detecting the severity of vegetable leaf diseases provided by the invention, the lesion area information comprises the number of lesion-area pixels, and the healthy area information comprises the number of healthy-area pixels; determining the disease severity information based on the lesion area information and the healthy area information comprises: determining the total pixel value as the sum of the number of lesion-area pixels and the number of healthy-area pixels; and dividing the number of lesion-area pixels by the total pixel value to obtain the disease severity information.
The invention also provides a device for detecting the severity of the disease of the vegetable leaves, which comprises the following components: the acquisition module is used for acquiring vegetable leaf images; the first determining module is used for determining a disease segmentation image based on the vegetable leaf image; the second determination module is used for determining the information of the lesion area and the information of the health area based on the disease segmentation image; and the third determining module is used for determining disease severity information based on the lesion area information and the health area information.
The invention also provides an electronic device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the program to realize the steps of the method for detecting the severity of the vegetable leaf disease.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method for detecting severity of disease in vegetable leaves as described in any one of the above.
According to the method and the device for detecting the severity of vegetable leaf diseases, the disease segmentation image is determined based on the vegetable leaf image, and the lesion area information and the healthy area information are determined from the disease segmentation image, so that the disease severity information is obtained; the detection of vegetable leaf disease severity is thus completed automatically, labor cost can be saved, and detection accuracy and efficiency can be improved.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of a method for detecting severity of disease in vegetable leaves according to the present invention;
FIG. 2 is a schematic structural diagram of a disease segmentation model of the method for detecting severity of disease in vegetable leaves according to the present invention;
FIG. 3 is a second schematic structural diagram of a disease segmentation model of the method for detecting severity of disease in vegetable leaves according to the present invention;
FIG. 4 is a schematic structural diagram of a device for detecting severity of disease in vegetable leaves according to the present invention;
fig. 5 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The method and device for detecting the severity of disease in vegetable leaves according to the present invention will be described with reference to fig. 1 to 5.
As shown in FIG. 1, the present invention provides a method for detecting the severity of a disease in a vegetable leaf, which comprises the following steps 110 to 140.
And step 110, acquiring an image of the vegetable leaf.
It is understood that the image of the vegetable leaf may be obtained, for example, the image of the vegetable leaf in a real scene may be captured by a camera, where the vegetable leaf may be the leaf of various economic crops such as cucumber, eggplant, soybean, tomato, etc., and may also be the leaf of other vegetables, where the type of the vegetable is not limited.
The vegetable leaf image here can be an RGB image of the vegetable leaf captured against a complex background or in a laboratory environment; a complex background refers to interfering background regions present in the vegetable leaf image during acquisition. The disease spots may be caused by downy mildew, powdery mildew or other diseases.
And step 120, determining a disease segmentation image based on the vegetable leaf image.
It can be understood that a lesion area and a healthy area can be segmented from the vegetable leaf image to obtain the disease segmentation image. The lesion area refers to an area of the leaf that bears disease spots, and the healthy area refers to a normal area of the leaf without disease spots; the disease segmentation image marks the lesion area and the healthy area, and may of course also include a background area. The segmentation can be realized through image recognition techniques or through a neural network model; the specific method of obtaining the disease segmentation image from the vegetable leaf image is not limited here, and a person skilled in the art can realize this process with any feasible scheme.
And step 130, determining the information of the lesion area and the information of the health area based on the disease segmentation image.
It can be understood that, after the disease segmentation image is obtained, lesion area information and healthy area information may be extracted from it. The lesion area information may be the area of the lesion region in the disease segmentation image or the number of lesion-region pixels, and the healthy area information may be the area of the healthy region or the number of healthy-region pixels; of course, the lesion area information and the healthy area information may include other parameters, which are not limited here.
And step 140, determining disease severity information based on the lesion area information and the healthy area information.
It is understood that the disease severity information may be obtained from the lesion area information and the health area information, for example, the disease severity information may be obtained by calculating a ratio of the lesion area to the entire area. Of course, the disease severity information may also be obtained by using the ratio of the number of pixels in the lesion area to the total number of pixels, and the process of specifically determining the disease severity information is not limited herein, and may be determined by those skilled in the art according to actual conditions.
It is worth mentioning that greenhouse crops such as cucumber, eggplant and tomato develop diseases for various reasons during cultivation, which in turn reduces yield and quality. Downy mildew and powdery mildew are among the most common and most damaging greenhouse crop diseases. Accurately determining disease severity is a precondition for scientific disease control by growers and is important for reducing pesticide usage and improving economic benefits. The traditional method of estimating disease severity relies mainly on the grower's experience, which is not only time-consuming and labor-intensive but also highly subjective. An accurate and efficient method for detecting the severity of vegetable leaf diseases is therefore important for precise disease diagnosis.
According to the method for detecting the disease severity of the vegetable leaves, the disease segmentation image is determined based on the vegetable leaf image, and the lesion area information and the healthy area information are determined according to the disease segmentation image, so that the disease severity information is obtained, the detection of the disease severity of the vegetable leaves can be automatically completed, the labor cost can be saved, and the detection accuracy and the detection efficiency can be improved.
As shown in fig. 2, in some embodiments, the determining 120 of the disease segmentation image based on the vegetable leaf image includes: and inputting the vegetable leaf image into the disease segmentation model to obtain a disease segmentation image output by the disease segmentation model.
The disease segmentation model is obtained by training a vegetable leaf sample data set, wherein the vegetable leaf sample data set comprises a vegetable leaf sample image and a disease segmentation sample image corresponding to the vegetable leaf sample image.
It can be understood that this embodiment uses a neural network model to obtain the disease segmentation image from the vegetable leaf image. The disease segmentation model may be based on a residual neural network or a convolutional neural network; it is a new model obtained through innovation and improvement on such networks.
It is worth mentioning that computer vision is currently the main technical means for quantitative diagnosis of vegetable leaf disease severity. A few studies on disease severity divide severity into grades according to expert experience and judge the grade with classification and recognition methods, but such grading cannot reflect disease severity intuitively. Other work estimates disease severity with image processing techniques, building a lesion segmentation model by machine learning and obtaining the severity from the ratio of the lesion area to the leaf area. Although these methods have achieved certain results, the background must be removed manually and image features must be designed by hand in advance; moreover, spots of different diseases may show similar characteristics, so it is difficult to segment the spots with hand-crafted features alone. These methods are not robust enough to noise such as uneven illumination and complex backgrounds in the field environment, and are difficult to extend in application.
In this embodiment, lesion segmentation of vegetable leaves is realized with the improved neural network model; compared with the prior art, this improves robustness and lesion segmentation accuracy.
The input vegetable leaf image can be an image of a leaf infected with downy mildew or powdery mildew, acquired in a natural environment against a complex background; the disease segmentation model processes this vegetable leaf image to obtain the disease segmentation image.
As shown in fig. 2 and 3, in some embodiments, the disease segmentation model includes: an encoder and a decoder.
A semantic segmentation network with encoder and decoder blocks is taken as the prototype network to build the disease segmentation model, which comprises an encoder module for extracting diverse features from disease images and a decoder module for pixel-level classification of the disease.
The encoder is used for extracting low-level features and mixed semantic features from the vegetable leaf images and obtaining high-level features based on the mixed semantic features.
It can be understood that the low-level features may be obtained by taking the output feature map of the first convolution layer of the depth convolution block in the encoder and adjusting its number of channels with a 1 × 1 convolution; the mixed semantic features are then obtained through the convolution blocks and the mixed attention modules; and the high-level features are obtained by passing the mixed semantic features through the multi-scale feature extraction block and adjusting the number of channels with a 1 × 1 convolution.
The decoder is used for obtaining fusion characteristics based on the low-level characteristics and the high-level characteristics and obtaining a disease segmentation image based on the fusion characteristics.
The low-level features and the high-level features can be passed into the decoder to form the fusion features; a 3 × 3 convolution and a 4× up-sampling operation are then carried out to recover the spatial information of the lesions and the leaf, and a pixel-level segmentation map of the disease image, i.e. the disease segmentation image, is obtained through a final classification layer.
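By way of illustration only, a minimal PyTorch sketch of such a decoder-side fusion is given below. The module name DecoderHead and the channel numbers are hypothetical (DeepLabV3+-style choices), not the exact configuration of the patented model.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DecoderHead(nn.Module):
        # Hypothetical decoder: fuse low-level and high-level features, refine with a
        # 3x3 convolution, upsample 4x, and classify each pixel into background,
        # healthy leaf or lesion.
        def __init__(self, low_channels=256, high_channels=256, num_classes=3):
            super().__init__()
            self.reduce_low = nn.Conv2d(low_channels, 48, kernel_size=1)  # 1x1 conv adjusts channel count
            self.refine = nn.Sequential(
                nn.Conv2d(high_channels + 48, 256, kernel_size=3, padding=1),
                nn.BatchNorm2d(256),
                nn.ReLU(inplace=True))
            self.classifier = nn.Conv2d(256, num_classes, kernel_size=1)

        def forward(self, low_feat, high_feat):
            low = self.reduce_low(low_feat)
            # bring the high-level features to the spatial size of the low-level ones
            high = F.interpolate(high_feat, size=low.shape[2:], mode="bilinear", align_corners=False)
            x = self.refine(torch.cat([low, high], dim=1))        # fusion feature
            # 4x up-sampling recovers the spatial information of lesions and leaf
            x = F.interpolate(x, scale_factor=4, mode="bilinear", align_corners=False)
            return self.classifier(x)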
As shown in fig. 2 and 3, in some embodiments, the encoder includes: a depth convolution block and a multi-scale feature extraction block.
The depth convolution block comprises at least one mixed attention module, which is used for computing channel-space interaction feature weights to obtain the low-level features and the mixed semantic features; the multi-scale feature extraction block is used for performing dilated (atrous) convolution and pooling operations on the mixed semantic features to obtain the high-level features.
It is understood that deep learning is a learning method with self-learning capability, and is considered to be one of the most effective approaches for image recognition at present. The semantic segmentation network can automatically extract target features and obtain a pixel-level classification graph of a sample image, and the attention mechanism can select more critical information for a current task from a large amount of information and can adaptively highlight contextual features.
The encoder can comprise a depth convolution block for disease feature extraction and a multi-scale feature extraction block for multi-scale feature extraction of vegetable diseases.
the depth convolution block may contain 4 mixed attention mechanism optimized residual blocks: 3conv2_ x, 4conv3_ x, 6conv4_ x and 3conv5_ x, each consisting of a repetition of the convolution group operations of 1 × 1, 3 × 3 and 1 × 1 in each residual block; 3conv2_ x can comprise 3 convolution group operations, the number of feature maps convoluted in each convolution group operation is 64, 64 and 256, the output feature map of the 3conv _ x residual block is input into a first mixed attention mechanism, the first mixed attention mechanism extracts the channel and space interaction features of the vegetable leaf image, and the weights of the features are calculated and weighted with the product of the output feature maps; 4conv3_ x comprises 4 convolution group operations, the number of feature maps convolved in each convolution group operation is 128, 128 and 512, the output feature map of the 4conv3_ x residual block is input into a second mixed attention mechanism, and channel and space interaction feature weights are calculated; 6conv4_ x comprises 6 convolution group operations, the number of feature maps convoluted in each convolution group operation is 256, 256 and 1024 respectively, the output feature map of the 6conv4_ x residual block is input into a third mixed attention mechanism, and channel and space interaction feature weights are calculated; 3conv5_ x contains 3 convolution group operations, the number of feature maps convolved in each convolution group operation is 512,512 and 2048 respectively, the output feature map of the 3conv5_ x residual block is input into the fourth mixed attention mechanism, and channel and space interaction feature weights are calculated.
The attention-weighted feature map of the conv5_x stage is passed into the multi-scale feature extraction block, a parallel convolution block consisting of five parts: a 1 × 1 convolution, three 3 × 3 dilated convolutions and a pooling branch, the dilation rates of the three dilated convolutions being 6, 12 and 18 respectively. This parallel convolution block reduces computational complexity while maintaining the same performance and extracts multi-scale leaf disease features, as sketched below.
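A minimal PyTorch sketch of a five-branch block of this kind (in the spirit of ASPP) follows; the class name MultiScaleBlock and the 256-channel output width are assumptions for illustration, while the dilation rates 6, 12 and 18 come from the description above.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiScaleBlock(nn.Module):
        # Sketch of the five-branch multi-scale feature extraction block: a 1x1 convolution,
        # three 3x3 dilated convolutions (rates 6, 12, 18) and an image-level pooling branch,
        # concatenated and projected back to a single feature map.
        def __init__(self, in_channels=2048, out_channels=256):
            super().__init__()
            def branch(k, d):
                return nn.Sequential(
                    nn.Conv2d(in_channels, out_channels, k, padding=d if k == 3 else 0, dilation=d),
                    nn.BatchNorm2d(out_channels),
                    nn.ReLU(inplace=True))
            self.b1 = branch(1, 1)
            self.b2 = branch(3, 6)
            self.b3 = branch(3, 12)
            self.b4 = branch(3, 18)
            self.pool = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                      nn.Conv2d(in_channels, out_channels, 1),
                                      nn.ReLU(inplace=True))
            self.project = nn.Conv2d(5 * out_channels, out_channels, 1)

        def forward(self, x):
            size = x.shape[2:]
            p = F.interpolate(self.pool(x), size=size, mode="bilinear", align_corners=False)
            feats = [self.b1(x), self.b2(x), self.b3(x), self.b4(x), p]
            return self.project(torch.cat(feats, dim=1))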
The mixed attention mechanism addresses spatial interdependence by capturing the interaction between the spatial dimensions and the channel dimension of the input tensor; it extracts fine disease features and reduces the influence of background noise on segmentation of the data set. It comprises three parallel branches: branch 1 captures cross-dimensional interaction information between the channel C and the spatial dimension H, branch 2 captures cross-dimensional interaction information between the channel C and the spatial dimension W, and branch 3 captures interaction information between the spatial dimensions H and W.
Branch 1 rotates the input feature 90 degrees counterclockwise along the X axis; the rotated tensor has shape (W × H × C), and after the Z-pool its shape is (2 × H × C). A standard convolution layer with a 7 × 7 kernel followed by a batch normalization layer provides an intermediate output of shape (1 × H × C), from which the attention weights are generated by a sigmoid; finally the output is rotated 90 degrees clockwise along the H axis to restore the input shape and is applied to the input tensor.
Z-pool is calculated using equation 1 to reduce the C-dimension tensor to 2 dimensions, connecting the average features and the maximum features in that dimension, so that the layer can retain a rich representation of the actual tensor, while reducing its depth to make further calculations lighter;
Z-pool(χ) = [MaxPool_0d(χ), AvgPool_0d(χ)]    (Formula 1)
the branch 2 rotates the input tensor 90 degrees counterclockwise along the W axis, the tensor shape after rotation is (H multiplied by C multiplied by W), the tensor shape after passing through the Z-Pool is (2 multiplied by C multiplied by W), the attention weight is generated through sigmoid by the standard convolution layer with the convolution kernel size of 7 multiplied by 7, the normalization layer is processed in batch, the intermediate output of the dimension (1 multiplied by C multiplied by W) is provided, the final output is clockwise rotated 90 degrees along the W axis, the shape of the input is kept consistent, and the intermediate output is applied to the input tensor.
The branch 3 passes the channel of the input tensor through the Z-pool, the 7 × 7 standard convolution layer, and the batch normalization layer, then generates an attention weight value of a shape of (1 × H × W) by sigmoid, and applies it to the input tensor.
Finally, the fine tensors (C × H × W) produced by branch 1, branch 2, and branch 3 are aggregated together by averaging.
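A minimal PyTorch sketch of such a three-branch mixed attention module is shown below. The class names are hypothetical, and the rotations are realized here as tensor permutations, which is one plausible reading of the rotations described above rather than the patent's exact implementation.

    import torch
    import torch.nn as nn

    class ZPool(nn.Module):
        # Formula 1: concatenate max-pooled and average-pooled features along the first
        # (channel-like) axis, reducing it to 2 while keeping a rich representation.
        def forward(self, x):
            return torch.cat([x.max(dim=1, keepdim=True)[0],
                              x.mean(dim=1, keepdim=True)], dim=1)

    class AttentionBranch(nn.Module):
        # One branch: Z-pool -> 7x7 convolution -> batch normalization -> sigmoid weights.
        def __init__(self):
            super().__init__()
            self.compress = ZPool()
            self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)
            self.bn = nn.BatchNorm2d(1)

        def forward(self, x):
            w = torch.sigmoid(self.bn(self.conv(self.compress(x))))
            return x * w

    class MixedAttention(nn.Module):
        # Three parallel branches: H-C interaction, C-W interaction, and ordinary H-W
        # spatial attention; the three weighted tensors are averaged.
        def __init__(self):
            super().__init__()
            self.branch_hc = AttentionBranch()
            self.branch_cw = AttentionBranch()
            self.branch_hw = AttentionBranch()

        def forward(self, x):                        # x: (N, C, H, W)
            x1 = x.permute(0, 3, 2, 1)               # pool over W, attend over the H-C plane
            y1 = self.branch_hc(x1).permute(0, 3, 2, 1)
            x2 = x.permute(0, 2, 1, 3)               # pool over H, attend over the C-W plane
            y2 = self.branch_cw(x2).permute(0, 2, 1, 3)
            y3 = self.branch_hw(x)                   # pool over C, attend over the H-W plane
            return (y1 + y2 + y3) / 3.0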
In some embodiments, the training process of the lesion segmentation model includes: counting the number of lesion areas, healthy areas and background areas in the disease segmentation sample image; determining the weight of the lesion area, the weight of the healthy area and the weight of the background area based on the number of the lesion area, the healthy area and the background area and a median frequency balancing method, and applying the weight of the lesion area, the weight of the healthy area and the weight of the background area to a lesion segmentation model.
It can be understood that the process of training the disease segmentation model by using the sample data set of the vegetable leaf specifically includes: counting the number of lesion areas, healthy areas and background areas in the lesion segmentation sample image, calculating the weights of the three categories by using a median frequency equalization method, wherein the weights of the three categories are 3.4532, 0.2286 and 1, for example, the median frequency equalization method is calculated by using a formula 2 and a formula 3;
freq_c = sum_c / sum_pc    (Formula 2)

α_c = median_freq / freq_c    (Formula 3)

where freq_c represents the frequency of class-c pixels in the training set, sum_c represents the number of class-c pixels, sum_pc represents the total number of pixels of the images containing class c, α_c represents the weight of the class-c pixels, and median_freq represents the median of the frequencies over all classes.
And applying the three class weights to a final classification layer of the decoder so as to reduce the influence of class imbalance on the disease segmentation model.
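As a sketch of how Formulas 2 and 3 could be applied in practice, the function below computes the class weights from the annotated label maps; the function name is an assumption, and the label values follow the annotation convention described later (0 = background, 1 = lesion, 2 = healthy leaf).

    import numpy as np

    def median_frequency_weights(masks, num_classes=3):
        # masks: list of integer label maps; assumes every class appears in at least one mask.
        # freq_c  = (pixels of class c) / (total pixels of images containing class c)   (Formula 2)
        # alpha_c = median(freq) / freq_c                                               (Formula 3)
        class_pixels = np.zeros(num_classes)
        image_pixels = np.zeros(num_classes)
        for m in masks:
            for c in range(num_classes):
                n = np.sum(m == c)
                if n > 0:
                    class_pixels[c] += n
                    image_pixels[c] += m.size
        freq = class_pixels / image_pixels
        return np.median(freq) / freq

    # The resulting weights (e.g. roughly 3.45, 0.23 and 1 as in the description) can be
    # passed to the classification loss, for instance:
    # criterion = torch.nn.CrossEntropyLoss(weight=torch.tensor(weights, dtype=torch.float32))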
In training the disease segmentation model, a transfer learning strategy is combined: pre-trained weights from the ImageNet data set are used to initialize the disease segmentation model, which is then trained with the vegetable leaf sample data set; the resulting disease segmentation model can accurately segment the lesions and healthy leaf in a vegetable leaf image.
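An illustrative sketch of this transfer-learning initialization follows, assuming a ResNet-50-style backbone; the torchvision call shown requires torchvision 0.13 or later (older versions use pretrained=True instead), and the encoder attribute name is hypothetical.

    import torchvision
    from torchvision.models import ResNet50_Weights

    def init_encoder_from_imagenet(model):
        # model is assumed to expose a ResNet-50-style encoder; weights of matching layers
        # are copied from the ImageNet-pretrained backbone, the rest keep their initialization.
        backbone = torchvision.models.resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)
        model.encoder.load_state_dict(backbone.state_dict(), strict=False)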
In some embodiments, the vegetable leaf sample data set comprises original leaf samples and enhanced leaf samples, the enhanced leaf samples being obtained by performing simulated augmentation (data expansion) on the original leaf samples.
The vegetable leaf sample data set thus contains both original leaf samples and enhanced leaf samples. The original leaf samples are leaf images captured in real scenes; the enhanced leaf samples are obtained by augmenting the original leaf samples, i.e. applying image processing to the original leaf samples to produce similar samples. This increases the amount of sample data in the vegetable leaf sample data set and improves the training effect.
It can be understood that, for a convolutional neural network, the larger the vegetable leaf sample data set available for training the disease segmentation model, the better the model's recognition effect. To improve recognition, the vegetable leaf sample data set can be expanded with transformations such as horizontal flipping, vertical flipping, random scaling, and rotation by 90 or 270 degrees. When the vegetable leaf sample data set is divided, the training set, validation set and test set are split in a suitable proportion, and the relative balance of the amount of data in each subset is ensured.
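For illustration, an augmentation pipeline of this kind could look roughly as follows with torchvision; this is a sketch only, and for segmentation the identical geometric transform must also be applied to the label mask, which the basic transforms below do not handle by themselves.

    import torchvision.transforms as T

    augment = T.Compose([
        T.RandomHorizontalFlip(p=0.5),
        T.RandomVerticalFlip(p=0.5),
        # rotate by exactly 90 or 270 degrees
        T.RandomChoice([T.RandomRotation((90, 90)), T.RandomRotation((270, 270))]),
        # random scaling followed by cropping back to the working resolution
        T.RandomResizedCrop(224, scale=(0.8, 1.0)),
    ])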
In some embodiments, the vegetable leaf sample images in the vegetable leaf sample data set are preprocessed: downy mildew and powdery mildew lesions, healthy leaf and background parts in each vegetable leaf sample image are manually annotated and given the category labels 1, 2 and 0 respectively, and the vegetable leaf sample data set is constructed from these annotations.
Specifically, sample images whose resolution is lower than a preset threshold, i.e. low-quality images, are removed from the original leaf images collected in the natural environment; downy mildew lesions, powdery mildew lesions and healthy leaf are then annotated in the remaining images, yielding the three categories of labels (lesion, healthy leaf and background), and the original data set is constructed.
The leaf images in the original data set are then normalized in batches and resized to 224 × 224 pixels to obtain the vegetable leaf sample images, whose size matches the input of the disease segmentation model.
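A corresponding preprocessing sketch is given below; the normalization statistics shown are the usual ImageNet values and are an assumption, not taken from the patent.

    import torchvision.transforms as T

    preprocess = T.Compose([
        T.Resize((224, 224)),                    # match the input size of the segmentation model
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics (assumed)
                    std=[0.229, 0.224, 0.225]),
    ])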
In some embodiments, the lesion area information includes the number of lesion-area pixels, and the healthy area information includes the number of healthy-area pixels. Determining the disease severity information based on the lesion area information and the healthy area information includes: determining the total pixel value as the sum of the number of lesion-area pixels and the number of healthy-area pixels; and dividing the number of lesion-area pixels by the total pixel value to obtain the disease severity information.
It can be understood that, with the number of lesion-area pixels and the number of healthy-area pixels extracted from the disease segmentation image, the disease severity can be calculated by Formula 4:

Severity = N_lesion / (N_lesion + N_healthy)    (Formula 4)

where N_lesion is the number of lesion-area pixels and N_healthy is the number of healthy-area pixels.
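A minimal sketch of this severity computation, assuming the label convention 0 = background, 1 = lesion, 2 = healthy leaf used for annotation:

    import numpy as np

    def disease_severity(pred_mask, lesion_label=1, healthy_label=2):
        # pred_mask: per-pixel class map output by the disease segmentation model
        n_lesion = int(np.sum(pred_mask == lesion_label))
        n_healthy = int(np.sum(pred_mask == healthy_label))
        total = n_lesion + n_healthy             # background pixels are excluded
        return n_lesion / total if total > 0 else 0.0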
the device for detecting the severity of a disease on a vegetable leaf provided by the invention is described below, and the device for detecting the severity of a disease on a vegetable leaf described below and the method for detecting the severity of a disease on a vegetable leaf described above can be referred to correspondingly.
As shown in fig. 4, the present invention also provides a device for detecting severity of disease in vegetable leaves, comprising: an acquisition module 410, a first determination module 420, a second determination module 430, and a third determination module 440.
An obtaining module 410, configured to obtain a vegetable leaf image;
the first determining module 420 is configured to determine a disease segmentation image based on the vegetable leaf image;
a second determining module 430, configured to determine lesion area information and healthy area information based on the lesion segmentation image;
and a third determining module 440, configured to determine disease severity information based on the lesion area information and the healthy area information.
Fig. 5 illustrates a physical structure diagram of an electronic device, which may include, as shown in fig. 5: a processor (processor)510, a communication Interface (Communications Interface)520, a memory (memory)530 and a communication bus 540, wherein the processor 510, the communication Interface 520 and the memory 530 communicate with each other via the communication bus 540. Processor 510 may invoke logic instructions in memory 530 to perform a vegetable leaf disease severity detection method comprising: acquiring an image of a vegetable leaf; determining a disease segmentation image based on the vegetable leaf image; determining lesion area information and health area information based on the lesion segmentation image; and determining disease severity information based on the lesion area information and the healthy area information.
Furthermore, the logic instructions in the memory 530 may be implemented in the form of software functional units and stored in a computer readable storage medium when the software functional units are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In another aspect, the present invention also provides a computer program product, the computer program product includes a computer program, the computer program can be stored on a non-transitory computer readable storage medium, when the computer program is executed by a processor, a computer can execute the method for detecting severity of disease in vegetable leaves provided by the above methods, the method includes: acquiring an image of a vegetable leaf; determining a disease segmentation image based on the vegetable leaf image; determining lesion area information and health area information based on the lesion segmentation image; and determining disease severity information based on the lesion area information and the healthy area information.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium, on which a computer program is stored, the computer program, when executed by a processor, implementing the method for detecting severity of disease in vegetable leaves provided by the above methods, the method including: acquiring an image of a vegetable leaf; determining a disease segmentation image based on the vegetable leaf image; determining lesion area information and health area information based on the lesion segmentation image; and determining disease severity information based on the lesion area information and the healthy area information.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for detecting the severity of a disease on a vegetable leaf is characterized by comprising the following steps:
acquiring an image of a vegetable leaf;
determining a disease segmentation image based on the vegetable leaf image;
determining lesion area information and health area information based on the lesion segmentation image;
and determining disease severity information based on the lesion area information and the healthy area information.
2. The method for detecting severity of disease in vegetable leaves according to claim 1, wherein said determining a disease segmentation image based on said vegetable leaf image comprises:
inputting the vegetable leaf image into a disease segmentation model to obtain the disease segmentation image output by the disease segmentation model;
the disease segmentation model is obtained by training a vegetable leaf sample data set, wherein the vegetable leaf sample data set comprises a vegetable leaf sample image and a disease segmentation sample image corresponding to the vegetable leaf sample image.
3. The method for detecting severity of disease in vegetable leaves according to claim 2, wherein the disease segmentation model comprises:
an encoder for extracting low-level features and mixed semantic features from the vegetable leaf image, and deriving high-level features based on the mixed semantic features;
and the decoder is used for obtaining a fusion characteristic based on the low-level characteristic and the high-level characteristic and obtaining the disease segmentation image based on the fusion characteristic.
4. The vegetable leaf disease severity detection method of claim 3, wherein said encoder comprises:
a depth convolution block comprising at least one mixed attention module, the mixed attention module being configured to perform channel-space interaction feature weight calculation to obtain the low-level features and the mixed semantic features;
and a multi-scale feature extraction block used for performing dilated convolution and pooling operations on the mixed semantic features to obtain the high-level features.
5. The method for detecting severity of disease in vegetable leaves according to claim 2, wherein the training process of the disease segmentation model comprises:
counting the number of lesion areas, healthy areas and background areas in the disease segmentation sample image;
determining a lesion area weight, a healthy area weight and a background area weight based on the number of lesion areas, healthy areas and background areas and a median frequency balancing method;
and applying the lesion area weight, the healthy area weight and the background area weight to the disease segmentation model.
6. The method for detecting severity of disease in vegetable leaves according to claim 2, wherein the vegetable leaf sample data set comprises: original leaf samples and enhanced leaf samples, wherein the enhanced leaf samples are obtained by performing simulated augmentation (data expansion) on the original leaf samples.
7. The method for detecting severity of disease in vegetable leaves according to any one of claims 1 to 6, wherein the lesion area information includes: the number of pixels in the lesion area, and the health area information includes: the number of healthy area pixels;
determining disease severity information based on the lesion area information and the healthy area information, including:
determining a total pixel value of the pixel number of the lesion area and the pixel number of the healthy area;
and dividing the number of pixels in the lesion area by the total pixel value to obtain the disease severity information.
8. A vegetable leaf disease severity detection device, comprising:
the acquisition module is used for acquiring vegetable leaf images;
the first determining module is used for determining a disease segmentation image based on the vegetable leaf image;
the second determination module is used for determining the information of the lesion area and the information of the health area based on the disease segmentation image;
and the third determining module is used for determining disease severity information based on the lesion area information and the health area information.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the program to implement the steps of the method for detecting severity of disease in vegetable leaves according to any one of claims 1 to 7.
10. A non-transitory computer readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method for detecting severity of disease in vegetable leaves according to any one of claims 1 to 7.
CN202111659409.0A 2021-12-30 2021-12-30 Method and device for detecting severity of vegetable leaf disease Pending CN114399480A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111659409.0A CN114399480A (en) 2021-12-30 2021-12-30 Method and device for detecting severity of vegetable leaf disease

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111659409.0A CN114399480A (en) 2021-12-30 2021-12-30 Method and device for detecting severity of vegetable leaf disease

Publications (1)

Publication Number Publication Date
CN114399480A true CN114399480A (en) 2022-04-26

Family

ID=81229157

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111659409.0A Pending CN114399480A (en) 2021-12-30 2021-12-30 Method and device for detecting severity of vegetable leaf disease

Country Status (1)

Country Link
CN (1) CN114399480A (en)


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116363175A (en) * 2022-12-21 2023-06-30 北京化工大学 Polarized SAR image registration method based on attention mechanism
CN116468671A (en) * 2023-03-21 2023-07-21 中化现代农业有限公司 Plant disease degree detection method, device, electronic apparatus, and storage medium
CN116468671B (en) * 2023-03-21 2024-04-16 中化现代农业有限公司 Plant disease degree detection method, device, electronic apparatus, and storage medium
CN116168223A (en) * 2023-04-20 2023-05-26 华南农业大学 Multi-mode-based peanut leaf spot disease grade detection method
CN117197655A (en) * 2023-08-01 2023-12-08 北京市农林科学院智能装备技术研究中心 Rice leaf roller hazard degree prediction method, device, electronic equipment and medium
CN117197655B (en) * 2023-08-01 2024-06-07 北京市农林科学院智能装备技术研究中心 Rice leaf roller hazard degree prediction method, device, electronic equipment and medium
CN117292281A (en) * 2023-10-11 2023-12-26 南京农业大学 Open-field vegetable detection method, device, equipment and medium based on unmanned aerial vehicle image
CN117292281B (en) * 2023-10-11 2024-06-11 南京农业大学 Open-field vegetable detection method, device, equipment and medium based on unmanned aerial vehicle image
CN117576571A (en) * 2024-01-16 2024-02-20 汉中中园农业科技发展(集团)有限公司 Multi-mode fruit and vegetable leaf disease identification method and system based on images and texts
CN117576571B (en) * 2024-01-16 2024-04-26 汉中中园农业科技发展(集团)有限公司 Multi-mode fruit and vegetable leaf disease identification method and system based on images and texts

Similar Documents

Publication Publication Date Title
CN114399480A (en) Method and device for detecting severity of vegetable leaf disease
Shen et al. Detection of stored-grain insects using deep learning
CN111986099B (en) Tillage monitoring method and system based on convolutional neural network with residual error correction fused
CN110163813B (en) Image rain removing method and device, readable storage medium and terminal equipment
CN107665492B (en) Colorectal panoramic digital pathological image tissue segmentation method based on depth network
CN107808138B (en) Communication signal identification method based on FasterR-CNN
CN108009554A (en) A kind of image processing method and device
CN109740721B (en) Wheat ear counting method and device
CN112836602B (en) Behavior recognition method, device, equipment and medium based on space-time feature fusion
CN113706472B (en) Highway pavement disease detection method, device, equipment and storage medium
Jenifa et al. Classification of cotton leaf disease using multi-support vector machine
CN110874835B (en) Crop leaf disease resistance identification method and system, electronic equipment and storage medium
CN115424093A (en) Method and device for identifying cells in fundus image
CN103279944A (en) Image division method based on biogeography optimization
CN111860465A (en) Remote sensing image extraction method, device, equipment and storage medium based on super pixels
Kazi et al. Fruit Grading, Disease Detection, and an Image Processing Strategy
CN114529730A (en) Convolutional neural network ground material image classification method based on LBP (local binary pattern) features
JPWO2021181627A5 (en) Image processing device, image recognition system, image processing method and image processing program
CN117058079A (en) Thyroid imaging image automatic diagnosis method based on improved ResNet model
CN110008881A (en) The recognition methods of the milk cow behavior of multiple mobile object and device
CN113160146B (en) Change detection method based on graph neural network
CN112598646B (en) Capacitance defect detection method and device, electronic equipment and storage medium
CN115761451A (en) Pollen classification method and device, electronic equipment and storage medium
CN112801238B (en) Image classification method and device, electronic equipment and storage medium
Liu et al. A novel image segmentation algorithm based on visual saliency detection and integrated feature extraction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination