CN117475163A - Crop disease severity detection method - Google Patents

Crop disease severity detection method

Info

Publication number
CN117475163A
CN117475163A
Authority
CN
China
Prior art keywords
sub
feature
convolution
features
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311445927.1A
Other languages
Chinese (zh)
Inventor
臧红岩
张守荣
雷腾飞
付海燕
黄丽丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qilu Institute of Technology
Original Assignee
Qilu Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qilu Institute of Technology filed Critical Qilu Institute of Technology
Priority to CN202311445927.1A
Publication of CN117475163A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G06N 3/08 Learning methods
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using classification, e.g. of video objects
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a crop disease severity detection method, which comprises the following steps: constructing a shared feature extraction network that convolves and pools an input crop leaf image through a DenseNet network to identify shallow lesion features, and identifies deep lesion features through an ASPP module; constructing a classification network connected to the output of the shared feature extraction network, which performs feature analysis on the deep lesion features through an atrous-convolution-based convolutional attention (A-CBAM) module and a Mobile residual module to determine a disease classification result; and constructing a segmentation network connected to the output of the shared feature extraction network, which splices the deep lesion features with the shallow lesion features to generate spliced features, identifies the background, leaf and lesions in the image, and determines the severity of the disease. The method classifies and identifies multiple diseases while segmenting the lesions at the pixel level, thereby enabling accurate detection of the severity of multiple diseases.

Description

Crop disease severity detection method
Technical Field
The invention relates to the technical fields of computer vision and intelligent agriculture, and in particular to a crop disease severity detection method based on a classification-segmentation fusion network.
Background
The key to crop disease control is detecting the damaged area and identifying the disease type in a timely and accurate manner. In modern smart agriculture, disease severity is an important factor that directly affects crop yield and quality. Failure to accurately judge disease severity often leads to excessive pesticide application, which greatly harms the agricultural ecological environment and seriously affects national grain security and food safety. It is therefore important to accurately identify the degree of crop disease.
In recent years, deep learning has been widely used in plant disease species identification, and convolutional neural networks (CNNs) in particular have been applied to plant disease images. However, there are relatively few studies on disease severity assessment. Currently, there are two main ways to detect and evaluate the severity of plant diseases. The first is to label the severity level directly from expert experience; its advantage is that severity recognition is treated as a classification problem and is relatively simple to implement, while its disadvantages are coarse grading and a strong influence of human factors. The second is a segmentation-based method that distinguishes background, leaf and lesion, calculates the proportion of lesion area on the leaf, and then grades and labels the severity according to that proportion; its advantage is that lesions are segmented at the pixel level, giving higher evaluation accuracy than grading from expert experience.
Taking the study of wheat disease severity as an example, wheat diseases mainly include scab, yellow rust, stem rust, stripe rust, powdery mildew, spike blast and other diseases. Existing segmentation models can perform pixel-level segmentation, but segmentation alone can only evaluate the severity of a single disease type and cannot determine which disease it is. If disease type and severity detection are integrated into one algorithm, existing multi-label classification can achieve disease classification and severity grading, but cannot obtain accurate crop lesion areas.
In view of the above, the present invention has been made to solve the above-mentioned problems.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a crop disease severity detection method that fuses a segmentation network and a classification network into one network model through a shared feature extraction network, so as to classify and identify multiple diseases while segmenting lesions at the pixel level, thereby enabling accurate detection of the severity of multiple diseases.
To achieve the above object, one aspect of the present invention provides a crop disease severity detection method, comprising:
constructing a shared feature extraction network, which convolves and pools an input crop leaf image through a DenseNet network to identify shallow lesion features, and identifies deep lesion features through an ASPP module;
constructing a classification network connected to the output of the shared feature extraction network, which performs feature analysis on the deep lesion features through an atrous-convolution-based convolutional attention (A-CBAM) module and a Mobile residual module to determine a disease classification result;
and constructing a segmentation network connected to the output of the shared feature extraction network, which up-samples the deep lesion features and splices them with the shallow lesion features output by the shared feature extraction network to generate spliced features, performs segmentation convolution processing on the spliced features, identifies the background, leaf and lesions in the image, and determines the severity of the disease.
In some embodiments, the method further comprises:
training the segmentation network and the classification network respectively to obtain a first loss function of the segmentation network and a second loss function of the classification network;
obtaining a total loss function from the first loss function and the second loss function;
and training for a preset number of iterations according to the total loss function to obtain the optimal disease classification result and disease severity.
In some embodiments, the deep lesion features are passed through the atrous-convolution-based convolutional attention A-CBAM module to obtain a first classification feature;
and the first classification feature is passed through a plurality of Mobile residual modules with parallel dual-branch fusion to obtain a second classification feature, which serves as the crop disease classification result.
In some embodiments, the A-CBAM module comprises:
a channel attention sub-module, which takes the deep lesion feature output by the shared feature extraction network as a first input feature and passes it through the channel attention sub-module to obtain a first sub-output feature,
and multiplies the first sub-output feature and the first input feature element-wise to obtain a second sub-output feature;
and a spatial attention sub-module, which passes the second sub-output feature through the spatial attention sub-module to obtain a third sub-output feature,
and multiplies the third sub-output feature and the second sub-output feature element-wise to obtain a fourth sub-output feature, wherein the fourth sub-output feature comprises the first classification feature.
In some embodiments, the second sub-output feature is obtained using the channel attention sub-module expression, which is specifically:
F1 = δ(MLP(Avgpool(F)) + MLP(Maxpool(F))) ⊗ F
where F1 is the second sub-output feature obtained by the channel attention sub-module, δ is the Sigmoid activation function, MLP is a multi-layer perceptron, Avgpool is average pooling, Maxpool is maximum pooling, F is the first input feature, and ⊗ denotes element-wise (bit-wise) multiplication.
In some embodiments, the fourth sub-output feature is obtained using the atrous-convolution-based spatial attention sub-module expression, which is specifically:
F2 = δ(AC([Avgpool(F1); Maxpool(F1)])) ⊗ F1
where F2 is the fourth sub-output feature obtained by the atrous-convolution-based spatial attention sub-module, δ is the Sigmoid activation function, AC is an atrous convolution block, Avgpool is average pooling, Maxpool is maximum pooling, F1 is the second sub-output feature obtained by the channel attention sub-module, and ⊗ denotes element-wise (bit-wise) multiplication.
In some embodiments, the Mobile residual module comprises two convolution layers, two depth-separable convolution branch layers and an attention mechanism layer;
the attention mechanism layer comprises a global average pooling layer, a first fully connected layer, a ReLU activation function, a second fully connected layer and an H-swish activation function;
the first classification feature sequentially passes through the convolution layer and the depth-separable convolution branch layers to obtain a first channel feature;
the first channel feature sequentially passes through the global average pooling layer, the first fully connected layer, the ReLU activation function, the second fully connected layer and the H-swish activation function to obtain a second channel feature; through a scale operation the features are multiplied channel by channel by the weight coefficients, and a third channel feature is finally output;
and the first classification feature and the third channel feature are added and then passed through a ReLU activation function to obtain the second classification feature.
In some embodiments, one of the two convolution layers comprises a 1×1 convolution, batch normalization (BN) and a sparse activation function H-swish; the other convolution layer comprises a 1×1 convolution and batch normalization (BN);
one of the two depth-separable convolution branch layers comprises one group of 3×3 depth-separable convolutions (DW), batch normalization (BN) and a sparse activation function H-swish; the other depth-separable convolution branch layer comprises two groups of 3×3 depth-separable convolutions (DW), batch normalization (BN) and a sparse activation function H-swish;
and the two depth-separable convolution branch layers form a parallel dual-branch feature fusion module.
In some embodiments, up-sampling the deep lesion features and then splicing them with each shallow lesion feature of the shared feature extraction network to generate the spliced features comprises:
the deep lesion features, after convolution and up-sampling, are spliced with the shallow lesion features output by the Nth layer of the DenseNet network to output a first spliced sub-feature;
and the ith spliced sub-feature is successively spliced, after passing through a convolution layer and up-sampling, with the shallow lesion features output by the (N-i)th layer of the DenseNet network, finally generating the spliced features, where 1 ≤ i ≤ N.
In some embodiments, the number of pixels occupied by the lesions and the number of pixels occupied by the leaf are counted, and the severity of the disease is identified from them,
where I_disease is the number of pixels occupied by the lesions, I_leaf is the number of pixels occupied by the leaf, and DS denotes the disease severity.
In another aspect, the present invention provides a readable storage medium on which a program or instructions are stored; when executed by a processor, the program or instructions implement the steps of the above crop disease severity detection method and achieve the same technical effects.
The advantages of the invention are as follows:
according to the crop disease severity detection method provided by the invention, a shared feature extraction network is constructed, the input crop leaf images are rolled and pooled through a DenseNet network to identify shallow disease spot features, and the ASPP module is used for identifying deep disease spot features; constructing a classification network, connecting with the output end of the shared feature extraction network, and carrying out feature analysis on the deep lesion features by a convolution attention A-CBAM module and a Mobile residual error module based on cavity convolution to determine a disease classification result; and constructing a segmentation network, connecting the segmentation network with the output end of the shared feature extraction network, upsampling deep lesion features, then performing splicing operation on the deep lesion features and shallow lesion features output by the shared feature extraction network to generate splicing features, performing segmentation convolution processing on the splicing features, identifying the background, the leaf and the lesion in the image, and determining the severity of the disease. According to the method, the disease classification problem and the disease severity segmentation problem of crop diseases are fused in a network model by utilizing shared feature extraction, the types of the diseases can be identified by extracting the local features of different layers, namely the local features of shallow layers and the global features of deep layers, the severity of the diseases can be further identified by classifying the network, the disease severity detection aiming at different diseases can be realized, and the automation level of detection is improved; and meanwhile, the feature information is shared in the classification network and the segmentation network, so that the classification and segmentation accuracy is improved.
Drawings
FIG. 1 is a schematic overall flow chart of the crop disease severity detection method;
FIG. 2 is a schematic diagram of the wheat disease severity detection model based on the classification-segmentation fusion network;
FIG. 3 is a schematic diagram of the atrous convolution attention (A-CBAM) module;
FIG. 4 is a schematic diagram of the Mobile residual module;
wherein:
1 - shared feature extraction network;
2 - classification network;
3 - segmentation network;
S1-S3: method steps.
Detailed Description
In order to make the above features and effects of the present invention more clearly understood, the following specific examples are given with reference to the accompanying drawings.
The method for detecting severity of crop disease provided by the present invention will be described in detail.
As shown in fig. 1 and fig. 2, fig. 1 shows the overall flowchart of a crop disease severity detection method according to an embodiment of the present invention. Fig. 2 is a schematic diagram of a wheat disease severity detection model based on a classification-segmentation fusion network. As shown in fig. 2, the embodiment of the invention provides a wheat disease severity detection model based on a classification-segmentation fusion network, in which DeepLabv3+ is used as the basic structural framework and the encoding part of DeepLabv3+ serves as the shared feature extraction network of the classification network and the segmentation network.
The crop disease severity detection method comprises:
s1, constructing a shared feature extraction network, rolling and pooling an input crop leaf image through a DenseNet network to identify shallow lesion features, and identifying deep lesion features through an ASPP module;
s2, constructing a classification network, connecting the classification network with the output end of the shared feature extraction network, and performing feature analysis of a convolution attention A-CBAM module and a Mobile residual error module based on cavity convolution on deep lesion features to determine a disease classification result;
s3, constructing a segmentation network, connecting the segmentation network with the output end of the shared feature extraction network, upsampling deep lesion features, then performing splicing operation with shallow lesion features output by the shared feature extraction network to generate splicing features, performing segmentation convolution processing on the splicing features, identifying the background, the leaf and the lesion in the image, and determining the severity of the disease.
In this embodiment, the classification network 2 and the segmentation network 3 share the shared feature extraction network 1. The input of the shared feature extraction network 1 is a leaf image of the crop to be detected; shallow lesion features are identified by convolution and pooling through the DenseNet network, and deep lesion features are identified by the ASPP module. The classification network 2, based on MobileNet, is connected to the output of the shared feature extraction network 1 and determines the disease classification result after passing the deep lesion features output by the shared feature extraction network 1 through the skip-connected atrous convolution attention A-CBAM module and the feature analysis of the Mobile residual module. The segmentation network 3 is also connected to the output of the shared feature extraction network 1; it up-samples the deep lesion features output by the shared feature extraction network 1, splices them with the shallow lesion features output by the feature extraction network to generate spliced features, performs segmentation convolution processing on the spliced features, identifies the background, leaf and lesions in the image, and determines the severity of the disease.
It should be noted that the backbone network of the shared feature extraction network in DeepLabv3+ is replaced by a DenseNet network in which two dense blocks are connected. Convolution and pooling through the DenseNet network identify the shallow lesion features in the leaf image of the crop to be detected; these shallow lesion features serve as local features and include fine-grained features of the image such as lesion colour, texture, edge and corner information. Meanwhile, the ASPP module identifies the deep lesion features in the leaf image; these deep lesion features serve as global image features and include coarse-grained features. In addition, a CBAM is added before the ASPP module to improve the recognition accuracy of feature extraction for small target features and target positions.
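For readers who want a concrete starting point, the following is a minimal PyTorch sketch of how such a shared encoder could be assembled: a DenseNet-121 backbone truncated after its second dense block, followed by a small ASPP head (the CBAM inserted before the ASPP, mentioned above, is omitted here and sketched later). The use of DenseNet-121 specifically, the channel counts and the ASPP dilation rates are illustrative assumptions, not values taken from the patent.

```python
import torch
import torch.nn as nn
from torchvision import models


class MiniASPP(nn.Module):
    """Parallel atrous convolutions at several dilation rates, fused by a 1x1 conv."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True))
            for r in rates])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))


class SharedEncoder(nn.Module):
    """Shared feature extraction network: truncated DenseNet + ASPP (CBAM omitted)."""
    def __init__(self):
        super().__init__()
        feats = models.densenet121(weights=None).features
        # Stem + first dense block -> shallow (local) lesion features, 256 channels
        self.shallow = nn.Sequential(feats.conv0, feats.norm0, feats.relu0,
                                     feats.pool0, feats.denseblock1)
        # Transition + second dense block -> deeper features, 512 channels
        self.deep = nn.Sequential(feats.transition1, feats.denseblock2)
        self.aspp = MiniASPP(512, 256)

    def forward(self, x):
        shallow = self.shallow(x)              # fed to the segmentation decoder
        deep = self.aspp(self.deep(shallow))   # shared by both branches
        return shallow, deep


if __name__ == "__main__":
    enc = SharedEncoder()
    s, d = enc(torch.randn(1, 3, 224, 224))
    print(s.shape, d.shape)  # torch.Size([1, 256, 56, 56]) torch.Size([1, 256, 28, 28])
```

In this sketch the shallow output would feed the segmentation decoder, while the ASPP output is consumed by both the classification and the segmentation branches.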
In addition, in this embodiment, after the segmentation network and the classification network are built, they are trained separately to obtain a first loss function of the segmentation network and a second loss function of the classification network; a total loss function is then obtained from the first loss function and the second loss function; finally, training is carried out for a preset number of iterations according to the total loss function to obtain the optimal disease classification result and disease severity.
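As an illustration only, one simple way to realise such a total loss is a weighted sum of the two branch losses; the weighting coefficients lambda_seg and lambda_cls below are hypothetical, since the patent does not state how the two losses are combined.

```python
import torch.nn.functional as F

def total_loss(seg_logits, seg_target, cls_logits, cls_target,
               lambda_seg=1.0, lambda_cls=1.0):
    """Hedged sketch: weighted sum of segmentation and classification losses."""
    loss_seg = F.cross_entropy(seg_logits, seg_target)   # first loss (background/leaf/lesion)
    loss_cls = F.cross_entropy(cls_logits, cls_target)   # second loss (disease type)
    return lambda_seg * loss_seg + lambda_cls * loss_cls

# One illustrative training iteration (encoder, heads and optimizer defined elsewhere):
#   shallow, deep = encoder(images)
#   loss = total_loss(seg_head(deep, shallow), masks, cls_head(deep), labels)
#   loss.backward(); optimizer.step(); optimizer.zero_grad()
```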
In this embodiment, shared feature extraction fuses the disease classification problem and the disease severity segmentation problem of crop diseases into one network model: the disease type can be identified through the classification network, and the disease severity can be further identified through the segmentation network, so that severity detection for different diseases can be realized and the automation level of detection is improved; at the same time, feature information is shared between the classification network and the segmentation network, which improves classification and segmentation accuracy.
In this embodiment, for the classification network 2, the deep lesion features output by the shared feature extraction network are passed through the atrous-convolution-based convolutional attention A-CBAM module to obtain a first classification feature C1. The first classification feature can represent different lesion target features, for example the size, position, shape and colour of the lesion, and the A-CBAM module improves the extraction of various types of lesion target features. The first classification feature C1 is then passed through several Mobile residual modules with parallel dual-branch fusion to obtain a second classification feature C2, which serves as the crop disease classification result; that is, the type of crop disease can be resolved from the second classification feature. For example, the stripe rust, leaf rust and stem rust types of wheat rust can be distinguished: stripe rust mostly occurs on the leaves as small pustules arranged in orderly stripes; leaf rust mainly damages the leaves and leaf sheaths, rarely occurs on stems, and appears as reddish-brown pustules scattered over the leaves; stem rust mainly occurs on the wheat stem, leaf sheath and leaf, with brown, oval spore pustules.
In a specific implementation, referring to FIG. 3, FIG. 3 shows a schematic diagram of the A-CBAM module. The A-CBAM module comprises a channel attention sub-module and a spatial attention sub-module based on atrous convolution. In the channel attention sub-module, the max-pooling and average-pooling results are each passed through a multi-layer perceptron (MLP) and summed, further enhancing the correlation of the two feature tensors; the summed output is passed through a Sigmoid activation function and multiplied element-wise with the input feature tensor to obtain the feature tensor output by the channel attention sub-module. In the spatial attention sub-module based on atrous convolution, the channel-wise max-pooling and average-pooling features are concatenated, passed through the atrous convolution and then a Sigmoid activation function, and the resulting output is multiplied element-wise with the input feature to obtain the feature output by the spatial attention sub-module. Specifically, in this embodiment, the channel attention sub-module takes the deep lesion feature output by the shared feature extraction network as the first input feature F and produces a first sub-output feature feature1; feature1 is then multiplied element-wise with the first input feature F to obtain the second sub-output feature, the channel-refined feature F1. The spatial attention sub-module based on atrous convolution then passes the channel-refined feature F1 through the spatial attention sub-module to obtain a third sub-output feature feature2; feature2 is multiplied element-wise with the channel-refined feature F1 to obtain a fourth sub-output feature F2, which contains the first classification feature C1.
In some embodiments, the second sub-output feature is obtained using the channel attention sub-module expression, which is specifically:
F1 = δ(MLP(Avgpool(F)) + MLP(Maxpool(F))) ⊗ F
where F1 is the second sub-output feature obtained by the channel attention sub-module, δ is the Sigmoid activation function, MLP is a multi-layer perceptron, Avgpool is average pooling, Maxpool is maximum pooling, F is the first input feature, and ⊗ denotes element-wise (bit-wise) multiplication.
In some embodiments, the fourth sub-output feature is obtained using the atrous-convolution-based spatial attention sub-module expression, which is specifically:
F2 = δ(AC([Avgpool(F1); Maxpool(F1)])) ⊗ F1
where F2 is the fourth sub-output feature obtained by the atrous-convolution-based spatial attention sub-module, δ is the Sigmoid activation function, AC is an atrous convolution block, Avgpool is average pooling, Maxpool is maximum pooling, F1 is the second sub-output feature obtained by the channel attention sub-module, and ⊗ denotes element-wise (bit-wise) multiplication.
It should be noted that the A-CBAM module of this embodiment replaces the 7×7 convolution block in the CBAM with a 3×3 atrous convolution with a dilation rate of 2, and the receptive field of the atrous convolution is calculated according to the following formula:
f = 2(rate-1)*(k-1) + k
where f is the receptive field, rate is the hyper-parameter dilation rate, and k is the convolution kernel size. For example, with k = 3 and rate = 2, f = 2×1×2 + 3 = 7, i.e. the same receptive field as the replaced 7×7 block. The receptive field can thus be enlarged by adjusting the dilation rate without increasing the amount of computation, which improves the model's ability to handle multi-scale targets.
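A minimal PyTorch sketch of such an A-CBAM block is given below, assuming the standard CBAM layout with the spatial attention's 7×7 convolution swapped for the 3×3 atrous convolution (dilation rate 2) described above; the channel reduction ratio r = 16 in the MLP is an assumption.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels, r=16):
        super().__init__()
        # Shared MLP applied to both pooled descriptors
        self.mlp = nn.Sequential(nn.Linear(channels, channels // r),
                                 nn.ReLU(inplace=True),
                                 nn.Linear(channels // r, channels))

    def forward(self, f):                                   # f: (B, C, H, W)
        avg = self.mlp(f.mean(dim=(2, 3)))                  # MLP(Avgpool(F))
        mx = self.mlp(f.amax(dim=(2, 3)))                   # MLP(Maxpool(F))
        w = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return w * f                                        # F1 = sigma(...) (x) F


class AtrousSpatialAttention(nn.Module):
    def __init__(self):
        super().__init__()
        # 3x3 atrous conv, dilation rate 2, replacing CBAM's 7x7 conv (per the text above)
        self.conv = nn.Conv2d(2, 1, kernel_size=3, padding=2, dilation=2)

    def forward(self, f1):
        avg = f1.mean(dim=1, keepdim=True)                  # channel-wise average map
        mx = f1.amax(dim=1, keepdim=True)                   # channel-wise max map
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return w * f1                                       # F2 = sigma(AC([...])) (x) F1


class ACBAM(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = AtrousSpatialAttention()

    def forward(self, f):
        return self.sa(self.ca(f))
```

Used as `ACBAM(channels=512)(deep_features)`, the block preserves the spatial and channel dimensions of its input, so it can sit directly in front of the Mobile residual stage described next.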
Further, in a specific implementation, referring to FIG. 4, FIG. 4 shows a schematic diagram of the Mobile residual module. The Mobile residual module is connected to the output of the A-CBAM module and comprises two convolution layers, two depth-separable convolution branch layers and an attention mechanism layer. One of the two convolution layers comprises a 1×1 convolution, batch normalization (BN) and a sparse activation function H-swish; the other comprises a 1×1 convolution and batch normalization (BN). One of the two depth-separable convolution branch layers comprises one group of 3×3 depth-separable convolutions (DW), batch normalization (BN) and a sparse activation function H-swish; the other comprises two groups of 3×3 depth-separable convolutions (DW), batch normalization (BN) and a sparse activation function H-swish. The two depth-separable convolution branch layers form a parallel dual-branch feature fusion module. The attention mechanism layer comprises a global average pooling layer, a first fully connected layer with a ReLU activation function, and a second fully connected layer with an H-swish activation function.
In this embodiment, the deep lesion feature output by the shared feature extraction network is analysed by the atrous-convolution-based convolutional attention A-CBAM module to obtain the first classification feature C1; the first classification feature C1 then sequentially passes through the convolution layer and the depth-separable convolution branch layers to obtain a first channel feature M1. The first channel feature M1 sequentially passes through the global average pooling layer GAV, the first fully connected layer FC1 with a ReLU activation function, and the second fully connected layer FC2 with an H-swish activation function to obtain a second channel feature M2; through a scale operation the features are multiplied channel by channel by these weight coefficients, and a third channel feature M3 is finally output. Finally, the first classification feature C1 and the third channel feature M3 are added and passed through a ReLU activation function to obtain the second classification feature C2, which gives the disease classification result of the classification network.
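The sketch below is one plausible PyTorch reading of this Mobile residual module: a 1×1 expansion convolution, the parallel dual-branch depthwise stage (one group versus two stacked groups of 3×3 depthwise convolutions), an SE-style attention layer with H-swish gating, a 1×1 projection convolution and the residual addition. The expansion ratio, the reduction ratio r, and the choice of element-wise addition to fuse the two branches are assumptions where the text is silent.

```python
import torch
import torch.nn as nn


def dw_group(ch, repeats):
    """One or more groups of 3x3 depthwise conv + BN + H-swish."""
    layers = []
    for _ in range(repeats):
        layers += [nn.Conv2d(ch, ch, 3, padding=1, groups=ch, bias=False),
                   nn.BatchNorm2d(ch), nn.Hardswish(inplace=True)]
    return nn.Sequential(*layers)


class MobileResidual(nn.Module):
    def __init__(self, channels, expand=2, r=4):
        super().__init__()
        mid = channels * expand
        # Convolution layer 1: 1x1 conv + BN + H-swish (expansion, ratio assumed)
        self.expand = nn.Sequential(nn.Conv2d(channels, mid, 1, bias=False),
                                    nn.BatchNorm2d(mid), nn.Hardswish(inplace=True))
        # Parallel dual-branch depthwise stage
        self.branch1 = dw_group(mid, 1)   # one 3x3 DW group
        self.branch2 = dw_group(mid, 2)   # two stacked 3x3 DW groups
        # Attention layer: GAP -> FC1 + ReLU -> FC2 + H-swish (per-channel weights)
        self.attn = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(mid, mid // r), nn.ReLU(inplace=True),
                                  nn.Linear(mid // r, mid), nn.Hardswish(inplace=True))
        # Convolution layer 2: 1x1 conv + BN (projection back to the input width)
        self.project = nn.Sequential(nn.Conv2d(mid, channels, 1, bias=False),
                                     nn.BatchNorm2d(channels))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, c1):
        x = self.expand(c1)
        m1 = self.branch1(x) + self.branch2(x)          # dual-branch fusion (assumed additive) -> M1
        w = self.attn(m1).unsqueeze(-1).unsqueeze(-1)   # per-channel weights (M2)
        m3 = self.project(m1 * w)                       # scale operation, then projection -> M3
        return self.relu(c1 + m3)                       # residual add and ReLU -> C2
```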
As for the segmentation network 3, referring further to fig. 2, after the deep lesion features (global features) output by the shared feature extraction network are up-sampled, they are successively spliced with each shallow lesion feature (local feature) of the shared feature extraction network to generate the spliced features. Specifically: the deep lesion features, after convolution and up-sampling, are spliced with the shallow lesion features output by the Nth layer of the DenseNet network to output a first spliced sub-feature; the ith spliced sub-feature is then successively passed through a convolution layer and up-sampling and spliced with the shallow lesion features output by the (N-i)th layer of the DenseNet network, finally generating the spliced features, where 1 ≤ i ≤ N. For example: the deep lesion feature S1 of the shared feature extraction network is convolved by a 1×1 convolution and up-sampled by a factor of 2 to obtain an output feature S2; the output feature S2 is concat-spliced with the shallow lesion feature Q3 output by the third DenseNet block layer to output a feature S3; the output feature S3, after a convolution layer and up-sampling, is concat-spliced with the shallow lesion feature Q2 output by the second DenseNet block layer to obtain an output feature S4; and the output feature S4, after convolution and up-sampling, is concat-spliced with the shallow lesion feature Q1 output by the first block layer to generate the final spliced feature. Through this successive segmentation and splicing processing, the background, leaf, lesions and the like in the image can be identified, and the severity of the disease can be further determined. That is, by counting the number of pixels occupied by the lesions, I_disease, and the number of pixels occupied by the leaf, I_leaf (lesion pixels not included), the disease severity DS can be identified.
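As an illustration of this decoder-side splicing, the sketch below assumes three shallow block outputs Q1-Q3 and an ASPP output S1; the channel widths, the bilinear up-sampling and the three-class output head (background / leaf / lesion) are assumptions consistent with the description rather than values fixed by the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpliceDecoder(nn.Module):
    def __init__(self, deep_ch=256, shallow_chs=(512, 256, 128), out_ch=128, n_classes=3):
        # shallow_chs are the channel widths of Q3, Q2, Q1 in splicing order (assumed)
        super().__init__()
        self.reduce = nn.Conv2d(deep_ch, out_ch, 1)   # 1x1 convolution on S1
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Conv2d(out_ch + c, out_ch, 3, padding=1),
                           nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
             for c in shallow_chs])
        self.head = nn.Conv2d(out_ch, n_classes, 1)   # background / leaf / lesion logits

    def forward(self, deep, shallows):                # shallows = [Q3, Q2, Q1]
        x = self.reduce(deep)
        for block, q in zip(self.blocks, shallows):
            x = F.interpolate(x, size=q.shape[-2:], mode="bilinear",
                              align_corners=False)    # up-sample to the shallow feature's size
            x = block(torch.cat([x, q], dim=1))       # concat splicing + convolution
        return self.head(x)
```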
it should be noted that, before the embodiment splices the shallow lesion features output by each layer of the shared feature extraction network, the shallow lesion features can be maintained by the jump connection operation of the hole convolution attention module a-CBAM to segment the edge feature profile, so as to reduce the problem of the edge blurring of the lesion.
In summary, according to the crop disease severity detection method provided by the invention, a shared feature extraction network is constructed, which convolves and pools the input crop leaf image through a DenseNet network to identify shallow lesion features and identifies deep lesion features through an ASPP module; a classification network is constructed and connected to the output of the shared feature extraction network, which performs feature analysis on the deep lesion features through an atrous-convolution-based convolutional attention A-CBAM module and a Mobile residual module to determine a disease classification result; and a segmentation network is constructed and connected to the output of the shared feature extraction network, which up-samples the deep lesion features, splices them with the shallow lesion features output by the shared feature extraction network to generate spliced features, performs segmentation convolution processing on the spliced features, identifies the background, leaf and lesions in the image, and determines the severity of the disease. By using shared feature extraction, the method fuses the disease classification problem and the disease severity segmentation problem into one network model; by extracting features at different levels, namely shallow local features and deep global features, the disease type can be identified by the classification network and the disease severity further identified by the segmentation network, so that severity detection for different diseases can be realized and the automation level of detection is improved; meanwhile, feature information is shared between the classification network and the segmentation network, which improves classification and segmentation accuracy.
In addition, the above embodiment of the present invention may be applied to a terminal device having a function of a crop disease severity detection method, where the terminal device may include a personal terminal, an upper computer terminal, and the like, and the embodiment of the present invention is not limited thereto.
In addition, the embodiment of the application also provides electronic equipment, which comprises a processor, a memory, and a program or an instruction stored in the memory and capable of running on the processor, wherein the program or the instruction realizes the steps of the crop disease severity detection method when being executed by the processor, and the same technical effects can be achieved.
In addition, the embodiment of the application also provides a readable storage medium, and the readable storage medium stores a program or an instruction, which when executed by a processor, implements the steps of the crop disease severity detection method, and can achieve the same technical effects.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, and may also include performing the functions in a substantially simultaneous manner or in reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those of ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are also within the protection of the present application.

Claims (10)

1. A crop disease severity detection method, comprising:
constructing a shared feature extraction network, which convolves and pools an input crop leaf image through a DenseNet network to identify shallow lesion features, and identifies deep lesion features through an ASPP module;
constructing a classification network connected to the output of the shared feature extraction network, which performs feature analysis on the deep lesion features through an atrous-convolution-based convolutional attention A-CBAM module and a Mobile residual module to determine a disease classification result;
and constructing a segmentation network connected to the output of the shared feature extraction network, which up-samples the deep lesion features, splices them with the shallow lesion features output by the shared feature extraction network to generate spliced features, performs segmentation convolution processing on the spliced features, identifies the background, leaf and lesions in the image, and determines the severity of the disease.
2. The method as recited in claim 1, further comprising:
training the segmentation network and the classification network respectively to obtain a first loss function of the segmentation network and a second loss function of the classification network;
obtaining a total loss function from the first loss function and the second loss function;
and training for a preset number of iterations according to the total loss function to obtain the optimal disease classification result and disease severity.
3. The method according to claim 1, wherein:
the deep lesion features are passed through the atrous-convolution-based convolutional attention A-CBAM module to obtain a first classification feature;
and the first classification feature is passed through a plurality of Mobile residual modules with parallel dual-branch fusion to obtain a second classification feature, which serves as the crop disease classification result.
4. The method according to claim 3, wherein the A-CBAM module comprises:
a channel attention sub-module, which takes the deep lesion feature output by the shared feature extraction network as a first input feature and passes it through the channel attention sub-module to obtain a first sub-output feature,
and multiplies the first sub-output feature and the first input feature element-wise to obtain a second sub-output feature;
and a spatial attention sub-module, which passes the second sub-output feature through the spatial attention sub-module to obtain a third sub-output feature,
and multiplies the third sub-output feature and the second sub-output feature element-wise to obtain a fourth sub-output feature, wherein the fourth sub-output feature comprises the first classification feature.
5. The method according to claim 4, wherein
the second sub-output feature is obtained using a channel attention sub-module expression, which is specifically:
F1 = δ(MLP(Avgpool(F)) + MLP(Maxpool(F))) ⊗ F
where F1 is the second sub-output feature obtained by the channel attention sub-module, δ is the Sigmoid activation function, MLP is a multi-layer perceptron, Avgpool is average pooling, Maxpool is maximum pooling, F is the first input feature, and ⊗ denotes element-wise (bit-wise) multiplication.
6. The method according to claim 5, wherein
the fourth sub-output feature is obtained using an atrous-convolution-based spatial attention sub-module expression, which is specifically:
F2 = δ(AC([Avgpool(F1); Maxpool(F1)])) ⊗ F1
where F2 is the fourth sub-output feature obtained by the atrous-convolution-based spatial attention sub-module, δ is the Sigmoid activation function, AC is an atrous convolution block, Avgpool is average pooling, Maxpool is maximum pooling, F1 is the second sub-output feature obtained by the channel attention sub-module, and ⊗ denotes element-wise (bit-wise) multiplication.
7. The method according to any one of claims 3 to 6, wherein
the Mobile residual module comprises two convolution layers, two depth-separable convolution branch layers and an attention mechanism layer;
the attention mechanism layer comprises a global average pooling layer, a first fully connected layer, a ReLU activation function, a second fully connected layer and an H-swish activation function;
the first classification feature sequentially passes through the convolution layer and the depth-separable convolution branch layers to obtain a first channel feature;
the first channel feature sequentially passes through the global average pooling layer, the first fully connected layer, the ReLU activation function, the second fully connected layer and the H-swish activation function to obtain a second channel feature; through a scale operation the features are multiplied channel by channel by the weight coefficients, and a third channel feature is finally output;
and the first classification feature and the third channel feature are added and then passed through a ReLU activation function to obtain the second classification feature.
8. The method according to claim 7, wherein
one of the two convolution layers comprises a 1×1 convolution, batch normalization (BN) and a sparse activation function H-swish;
the other convolution layer comprises a 1×1 convolution and batch normalization (BN);
one of the two depth-separable convolution branch layers comprises one group of 3×3 depth-separable convolutions (DW), batch normalization (BN) and a sparse activation function H-swish;
the other depth-separable convolution branch layer comprises two groups of 3×3 depth-separable convolutions (DW), batch normalization (BN) and a sparse activation function H-swish;
and the two depth-separable convolution branch layers form a parallel dual-branch feature fusion module.
9. The method according to claim 1, wherein
up-sampling the deep lesion features and then splicing them with each shallow lesion feature of the shared feature extraction network to generate the spliced features comprises:
the deep lesion features, after convolution and up-sampling, are spliced with the shallow lesion features output by the Nth layer of the DenseNet network to output a first spliced sub-feature;
and the ith spliced sub-feature is successively spliced, after passing through a convolution layer and up-sampling, with the shallow lesion features output by the (N-i)th layer of the DenseNet network, finally generating the spliced features, where 1 ≤ i ≤ N.
10. The method according to claim 1, wherein
the number of pixels occupied by the lesions and the number of pixels occupied by the leaf are counted, and the severity of the disease is identified from them,
where I_disease is the number of pixels occupied by the lesions, I_leaf is the number of pixels occupied by the leaf, and DS denotes the disease severity.
CN202311445927.1A 2023-11-02 2023-11-02 Crop disease severity detection method Pending CN117475163A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311445927.1A CN117475163A (en) 2023-11-02 2023-11-02 Crop disease severity detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311445927.1A CN117475163A (en) 2023-11-02 2023-11-02 Crop disease severity detection method

Publications (1)

Publication Number Publication Date
CN117475163A true CN117475163A (en) 2024-01-30

Family

ID=89634421

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311445927.1A Pending CN117475163A (en) 2023-11-02 2023-11-02 Crop disease severity detection method

Country Status (1)

Country Link
CN (1) CN117475163A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117664878A (en) * 2024-01-31 2024-03-08 北京市农林科学院信息技术研究中心 Crop acre spike number measuring system and method
CN117664878B (en) * 2024-01-31 2024-04-19 北京市农林科学院信息技术研究中心 Crop acre spike number measuring system and method

Similar Documents

Publication Publication Date Title
Das et al. Leaf disease detection using support vector machine
Kestur et al. MangoNet: A deep semantic segmentation architecture for a method to detect and count mangoes in an open orchard
CN110120040B (en) Slice image processing method, slice image processing device, computer equipment and storage medium
Al-Hiary et al. Fast and accurate detection and classification of plant diseases
Ruiz-Ruiz et al. Testing different color spaces based on hue for the environmentally adaptive segmentation algorithm (EASA)
Han et al. A novel computer vision-based approach to automatic detection and severity assessment of crop diseases
Kumari et al. Hybridized approach of image segmentation in classification of fruit mango using BPNN and discriminant analyzer
CN113095409B (en) Hyperspectral image classification method based on attention mechanism and weight sharing
CN114766041A (en) System and method for determining crop damage
CN113657294B (en) Crop disease and insect pest detection method and system based on computer vision
CN111553240A (en) Corn disease condition grading method and system and computer equipment
CN117475163A (en) Crop disease severity detection method
Suresh et al. Performance analysis of different CNN architecture with different optimisers for plant disease classification
Dubey et al. An efficient adaptive feature selection with deep learning model-based paddy plant leaf disease classification
CN114445817A (en) Citrus nutrient deficiency symptom identification method based on enhanced Raman spectroscopy and image assistance
Kolhar et al. Phenomics for Komatsuna plant growth tracking using deep learning approach
CN112528726A (en) Aphis gossypii insect pest monitoring method and system based on spectral imaging and deep learning
Bajpai et al. Deep learning model for plant-leaf disease detection in precision agriculture
CN113298835B (en) Fruit harvesting maturity weighted evaluation method
Parasa et al. Identification of Diseases in Paddy Crops Using CNN
Ramachandran et al. Tiny Criss-Cross Network for segmenting paddy panicles using aerial images
Rony et al. BottleNet18: Deep Learning-Based Bottle Gourd Leaf Disease Classification
Kundur et al. Ensemble Efficient Net and ResNet model for Crop Disease Identification
Indukuri et al. Paddy Disease Classifier using Deep learning Techniques
Wang et al. A new stochastic simulation algorithm for image-based classification: Feature-space indicator simulation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination