CN114820568B - Corn leaf disease identification model building method, equipment and storage medium - Google Patents

Corn leaf disease identification model building method, equipment and storage medium

Info

Publication number
CN114820568B
CN114820568B
Authority
CN
China
Prior art keywords
model
image
corn leaf
blade
corn
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210549842.7A
Other languages
Chinese (zh)
Other versions
CN114820568A (en)
Inventor
邓立苗
刘洪鑫
李洪霞
李娟
姚莉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Agricultural University
Original Assignee
Qingdao Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Agricultural University filed Critical Qingdao Agricultural University
Priority to CN202210549842.7A priority Critical patent/CN114820568B/en
Publication of CN114820568A publication Critical patent/CN114820568A/en
Application granted granted Critical
Publication of CN114820568B publication Critical patent/CN114820568B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Quality & Reliability (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method, equipment and a storage medium for building a corn leaf disease identification model. The building method comprises the following steps: first, leaf image sets of several corn diseases are obtained, covering both natural-environment and laboratory-environment images; the image datasets are then augmented separately; an LS-RCNN model is then constructed on the basis of the Sparse R-CNN model to serve as a corn leaf detection model; the constructed LS-RCNN model is used to extract and segment the leaves in the natural-environment images, yielding a natural-environment leaf image dataset with the complex background removed; finally, two-stage transfer-learning training and testing are performed on a ResNet model using the laboratory-environment leaf image dataset and the processed natural-environment leaf image dataset, yielding the corn leaf disease image recognition model CENet. With this recognition model, the overall recognition rate for corn leaf diseases reaches 99.03%, higher than that of most human experts and conventional neural network models.

Description

Corn leaf disease identification model building method, equipment and storage medium
Technical Field
The invention belongs to the technical field of corn leaf disease identification, and particularly relates to a method, equipment and a storage medium for constructing a corn leaf disease identification model.
Background
Corn is the grain crop with the largest planting area and yield in China and plays an important role in light industry, animal husbandry and the national economy. Corn diseases not only reduce corn yield but also affect the development of related industries. At present, corn disease identification in China relies mainly on manual inspection, which is inefficient, easily affected by subjective factors such as fatigue and mood, and accurate only once symptoms are already obvious. Therefore, quickly and accurately identifying corn leaf diseases and taking appropriate control measures is of great significance for ensuring corn yield and quality.
Research abroad on image-based crop disease identification began in the 1980s. Researchers have applied a variety of traditional machine learning methods to agricultural disease image recognition, including support vector machine classifiers, probabilistic neural networks (PNN), k-nearest-neighbor classification and back-propagation (BP) neural networks, which have played a positive role in promoting the application of information technology to agricultural disease image recognition. However, traditional machine learning methods suffer from limited representational capacity, the need for hand-crafted features, and poor suitability for large amounts of data.
Deep learning methods can effectively address learning and modelling on large datasets, and in recent years researchers have carried out a great deal of work on deep-learning-based image recognition of agricultural diseases. Hammad Saleem et al. (2020) proposed an image-based deep learning meta-architecture model to identify plant diseases. Long et al. (2018) proposed an oil tea disease image recognition method based on convolutional neural networks and transfer learning, reaching an average recognition accuracy of 96.53%. Zhao et al. (2009) diagnosed 5 major corn leaf diseases using threshold segmentation, region labelling and Freeman chain codes according to the characteristics of the diseases, with an accuracy above 80%. Y. Liu et al. (2018) studied the characteristics of corn images with a dual convolutional neural network trained with a triplet loss and extracted texture features with the SIFT algorithm, achieving a classification accuracy above 90%. In contrast to traditional machine learning methods, deep learning frameworks can automatically learn the features contained in image data; once the dataset reaches a certain scale, better accuracy and robustness can be achieved on agricultural disease image recognition tasks.
At present, deep-learning-based corn leaf disease identification methods have been successful under constrained conditions. However, achieving accurate and reliable identification of corn diseases in complex environments remains a major challenge, because disease images captured in natural environments typically have complex backgrounds that may contain elements resembling disease features or symptoms.
Disclosure of Invention
To address these problems, a first aspect of the invention provides a method for building a corn leaf disease identification model, comprising the following steps:
Step 1, obtaining leaf image datasets of several corn diseases; the leaf image datasets comprise a leaf image dataset from a natural environment and a leaf image dataset from a laboratory environment;
Step 2, performing augmentation on the natural-environment leaf image dataset and the laboratory-environment leaf image dataset respectively;
Step 3, constructing an LS-RCNN model based on the Sparse R-CNN model to serve as a corn leaf detection model, the LS-RCNN model being used to extract and segment the key regions of natural-environment corn leaf images from their complex backgrounds;
Step 4, performing leaf extraction and segmentation on the natural-environment leaf images obtained in step 2 using the LS-RCNN model built in step 3, to obtain a natural-environment leaf image dataset with the complex background removed;
Step 5, performing two-stage transfer-learning training and testing on a ResNet model using the laboratory-environment leaf image dataset obtained in step 2 and the natural-environment leaf image dataset obtained in step 4, to obtain the CNN deep learning model CENet for corn leaf disease image recognition.
In one possible design, the LS-RCNN model in step 3 includes a feature extraction module, a region proposal module, a regression and classification module, and a leaf segmentation and output module.
The feature extraction module uses a feature pyramid network built on a ResNet structure as the backbone: the outputs of the 4 convolution stages Conv2, Conv3, Conv4 and Conv5 of the ResNet network are fed into the FPN, the up-sampled results are fused with the same-sized feature maps generated bottom-up, a 4-level pyramid from P2 to P5 is built, a multi-scale feature map is generated for the input image, and the output features have 256 channels (a minimal code sketch of such a backbone is given after these module descriptions).
The region proposal module uses a set of fixed, learnable proposal boxes to generate region proposals instead of the region proposal network in Faster R-CNN, and uses ROI Pooling to extract region-of-interest features; the region proposal module further comprises a dynamic interaction module that further extracts per-instance features from the extracted ROI features and the proposal features.
The regression and classification module computes the region category from the proposal feature map, obtains the final position of the detection region through bounding-box (Bbox pred) regression, and generates the corn leaf detection regions.
The leaf segmentation and output module locates each detection region on the input image according to the coordinates of its detection box, copies the image inside the region into a new file to generate a new image, and outputs and saves the new image.
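As a concrete illustration only, the following is a minimal sketch of a comparable ResNet-FPN backbone built with torchvision's resnet_fpn_backbone helper; the ResNet depth (ResNet-50) and the argument names (weights vs. pretrained differ across torchvision versions) are assumptions, not details taken from the patent.

```python
import torch
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

# ResNet backbone whose Conv2-Conv5 stage outputs feed a feature pyramid
# network; every pyramid level is projected to 256 channels.
# (Older torchvision releases use the argument name "pretrained" instead of "weights".)
backbone = resnet_fpn_backbone(backbone_name="resnet50", weights=None)

x = torch.randn(1, 3, 800, 800)        # a dummy preprocessed input image
features = backbone(x)                 # ordered dict of multi-scale feature maps
for name, feat in features.items():
    print(name, tuple(feat.shape))     # each map has 256 channels
```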
In one possible design, the specific steps for training the CENet model in step 5 are as follows:
S51, pre-training a ResNet model on the ImageNet dataset;
S52, transferring the ResNet model pre-trained on ImageNet to the leaf image dataset obtained in the laboratory environment for the first stage of model training, to obtain a disease identification model suited to laboratory leaf images; during the first training stage, the weights of all layers except the final pooling layer and the fully connected layer are frozen, the max-pooling layer in the ResNet network is replaced by an average pooling layer, and the fully connected layer and classification layer are replaced by new ones;
S53, further transferring the model trained in S52 to the leaf image dataset obtained in the natural environment for the second stage of model training; the image dataset for the second training stage is obtained by extracting and segmenting the natural-environment leaf images with the LS-RCNN model; during the second training stage, the weights of all layers except the final fully connected layer are frozen, the fully connected layer and classification layer are replaced by new ones, the number of classification nodes is again set to the number of disease categories, and the weights of the fully connected layer are retrained.
In one possible design, the augmentation applied to the image datasets in step 2 comprises 15 methods, namely blur, adaptive histogram equalization, Gaussian noise, flipping, RGB channel shift, rotation, optical distortion, Gaussian blur, padding, random grid shuffling, grid distortion, random brightness, hue-saturation-value shift, elastic transformation and channel shuffling.
A second aspect of the invention further provides a method for identifying corn leaf diseases, comprising: acquiring a corn leaf image in a natural environment; inputting the corn leaf image into the corn leaf disease identification model obtained by the model building method of the first aspect; and obtaining the corn leaf disease category output by the model.
A third aspect of the invention further provides a device for identifying corn leaf diseases, the device comprising at least one processor and at least one memory; the memory stores program instructions of the corn leaf disease identification model obtained by the building method of the first aspect; and when the processor executes the program instructions, identification of corn leaf diseases can be realized.
A fourth aspect of the invention further provides a computer-readable storage medium storing a computer program of the corn leaf disease identification model obtained by the building method of the first aspect which, when executed by a processor, realizes the identification of corn leaf diseases.
Compared with the prior art, the invention provides a method for building a corn leaf disease identification model, and the built model yields the following beneficial effects. With this recognition model, the overall recognition rate for 4 corn leaf disease categories reaches 99.03%, higher than that of most human experts and conventional neural network models. The model not only eliminates unnecessary hand-crafted feature extraction, but also improves the accuracy of corn leaf disease identification against complex backgrounds: compared with the original ResNet (98.35%), the recognition accuracy under complex backgrounds is improved by 0.99 percentage points, outperforming currently popular deep learning methods. The Sparse R-CNN-based corn leaf detection and segmentation model built here effectively separates corn leaves from complex backgrounds and reduces the influence of the background on the recognition result, while the proposed two-stage transfer learning strategy is used to train the disease classifier CENet; by transferring the parameters of the pre-trained model twice, the model converges faster, identifies image features more accurately, and adapts better to the environment. The invention thus provides an effective new method for identifying corn leaf diseases in complex environments.
Drawings
FIG. 1 is a flow chart of a method for constructing a corn leaf disease identification model in the invention.
FIG. 2 shows the effect of the 15 image augmentation methods used in the invention.
FIG. 3 is a diagram showing the structure of LS-RCNN model in the present invention.
FIG. 4 is a diagram of the structure and training process of CENet model in the present invention.
FIG. 5 is a schematic diagram of the apparatus in example 2 of the present invention.
Detailed Description
The invention will be further described with reference to specific examples.
Example 1:
As shown in FIG. 1, the invention provides a method for building a corn leaf disease identification model, comprising the following steps:
Step 1, obtaining leaf image datasets of several corn diseases; the leaf image datasets comprise a leaf image dataset from a natural environment and a leaf image dataset from a laboratory environment;
Step 2, performing augmentation on the natural-environment leaf image dataset and the laboratory-environment leaf image dataset respectively;
Step 3, constructing an LS-RCNN model based on the Sparse R-CNN model to serve as a corn leaf detection model, the LS-RCNN model being used to extract and segment the key regions of natural-environment corn leaf images from their complex backgrounds;
Step 4, performing leaf extraction and segmentation on the natural-environment leaf images obtained in step 2 using the LS-RCNN model built in step 3, to obtain a natural-environment leaf image dataset with the complex background removed;
Step 5, performing two-stage transfer-learning training and testing on a ResNet model using the laboratory-environment leaf image dataset obtained in step 2 and the natural-environment leaf image dataset obtained in step 4, to obtain the CNN deep learning model CENet for corn leaf disease image recognition.
1. Image acquisition:
Datasets of 4 types of corn diseases were established for both the natural environment and the laboratory environment. The laboratory corn disease dataset contains 3581 images with a single background, downloaded from Plant Village. The natural-environment image dataset was collected through field photography, web downloads, data augmentation and similar means, and contains 3563 natural-environment images with complex backgrounds.
2. Image augmentation:
Augmentation is used to enhance the existing image data, particularly the natural-environment images, in order to increase the amount of data, enrich its diversity, improve the generalization ability of the model, expand the sample space and reduce the impact of class imbalance. A total of 15 augmentation methods are used, namely blur (Blur), adaptive histogram equalization (CLAHE), Gaussian noise (GaussNoise), flipping (Flip), RGB channel shift (RGBShift), rotation (Rotate), optical distortion (OpticalDistortion), Gaussian blur (GaussianBlur), padding (PadIfNeeded), random grid shuffling (RandomGridShuffle), grid distortion (GridDistortion), random brightness (RandomBrightness), hue-saturation-value shift (HueSaturationValue), elastic transformation (ElasticTransform) and channel shuffling (ChannelShuffle), as shown in FIG. 2. These augmentation methods come from the Albumentations library, a fast and flexible open-source image augmentation library built on OpenCV that provides many different image transformation operations; for most transformations, Albumentations is faster than other commonly used augmentation tools.
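The patent names the transforms but gives no code; the snippet below is a minimal sketch of such a pipeline using the Albumentations API. The probabilities, the padding size and the use of a single combined Compose are illustrative assumptions; in practice each transform may instead be applied separately to generate augmented copies.

```python
import cv2
import albumentations as A

# One possible pipeline combining the 15 transforms; p-values are illustrative.
augment = A.Compose([
    A.Blur(p=0.3),
    A.CLAHE(p=0.3),                        # adaptive histogram equalization
    A.GaussNoise(p=0.3),
    A.Flip(p=0.5),
    A.RGBShift(p=0.3),
    A.Rotate(limit=30, p=0.5),
    A.OpticalDistortion(p=0.3),
    A.GaussianBlur(p=0.3),
    A.PadIfNeeded(min_height=512, min_width=512, p=1.0),
    A.RandomGridShuffle(grid=(3, 3), p=0.3),
    A.GridDistortion(p=0.3),
    A.RandomBrightnessContrast(p=0.3),     # current name for RandomBrightness
    A.HueSaturationValue(p=0.3),
    A.ElasticTransform(p=0.3),
    A.ChannelShuffle(p=0.3),
])

image = cv2.cvtColor(cv2.imread("leaf.jpg"), cv2.COLOR_BGR2RGB)
augmented = augment(image=image)["image"]  # one randomly augmented sample
```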
3. Building the Sparse R-CNN-based corn leaf detection model LS-RCNN:
To reduce the influence of complex backgrounds on recognition performance, a leaf detection model named LS-RCNN is built on the basis of Sparse R-CNN; it extracts the key regions of corn leaf images from the background, and the segmented corn leaf images are then fed into the subsequent disease image recognition model for training and recognition. Structurally, the LS-RCNN model can be divided into 4 main modules: a feature extraction module, a region proposal module, a regression and classification module, and a leaf segmentation and output module; its structure is shown in FIG. 3.
First, the input image is preprocessed and fed into the feature extraction module to extract feature maps. The feature extraction module uses a feature pyramid network (FPN) built on a ResNet structure as the backbone: the outputs of the 4 convolution stages Conv2, Conv3, Conv4 and Conv5 of the ResNet network are fed into the FPN, the up-sampled results are fused with the same-sized feature maps generated bottom-up, a 4-level pyramid from P2 to P5 is built, a multi-scale feature map is generated for the input image, and the output features have 256 channels. Next, the region proposal module uses a set of fixed, learnable proposal boxes (LPB) to generate region proposals instead of the region proposal network (RPN) in Faster R-CNN, and uses ROI Pooling to extract region-of-interest (ROI) features. A dynamic interaction module (cross-attention module) is then applied to the extracted ROI features and proposal features to better extract the features of each region instance. The regression and classification module then computes the region category from the proposal feature map, obtains the final position of the detection region through bounding-box (Bbox pred) regression, and generates the corn leaf detection regions. Finally, the leaf segmentation and output module separates the detected corn leaves from the complex background: for each detection region, the position on the input image is located according to the coordinates of its detection box, the image inside the region is copied into a new file to generate a new image, and the new image is output and saved.
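As an illustration of the leaf segmentation and output step only, the sketch below crops each predicted detection box out of the original image and saves it as a new file; the function name, file names and box coordinates are hypothetical placeholders, not the authors' code.

```python
import os
from PIL import Image

def crop_and_save(image_path, boxes, out_dir):
    """Crop every detected leaf region and save each crop as a new image.

    boxes: list of (x1, y1, x2, y2) pixel coordinates taken from the
    detector's bounding-box predictions.
    """
    os.makedirs(out_dir, exist_ok=True)
    image = Image.open(image_path).convert("RGB")
    stem = os.path.splitext(os.path.basename(image_path))[0]
    for i, (x1, y1, x2, y2) in enumerate(boxes):
        crop = image.crop((int(x1), int(y1), int(x2), int(y2)))
        crop.save(os.path.join(out_dir, f"{stem}_leaf{i}.jpg"))

# Example call with a hypothetical detection box:
# crop_and_save("field_image.jpg", [(34, 50, 410, 620)], "segmented_leaves")
```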
4. Building the CENet model with two-stage transfer learning:
To further address disease image recognition against complex backgrounds, a two-stage transfer learning strategy is proposed and used to train an effective CNN deep learning model suited to disease image recognition under complex backgrounds, named the CENet model; the structure and training process of the CENet model are shown in FIG. 4.
First, a ResNet model pre-trained on the ImageNet dataset (about 1.2 million images in 1000 categories) is transferred to the corn disease dataset collected in the laboratory environment for a new round of model training; this is the first stage of transfer learning. During this stage, the weights of all layers except the final pooling layer and the fully connected layer are frozen, the max-pooling layer in the ResNet network is replaced by an average pooling layer, and the fully connected (FC) layer and the classification layer are replaced by new ones. The number of nodes of the new classification layer is no longer 1000 but the number N of disease categories to be identified; since our dataset contains 4 disease categories, the number of classification nodes is set to 4. Using the images collected in the laboratory environment as training images, the weights of the pooling layer and the fully connected layer are retrained to obtain a disease identification model suited to the laboratory dataset.
The trained model is then further transferred to the dataset collected in the natural environment; this is the second stage of transfer learning. During this stage, the weights of all layers except the final fully connected layer are frozen, the FC layer and the classification layer are replaced by new ones, and the number of classification nodes is again set to the number of disease categories, which is 4 for the established dataset. Using the images acquired in the natural environment as training images, the weights of the fully connected layer are retrained. Specifically, the natural-environment dataset is fed into the LS-RCNN model for corn leaf segmentation, producing a natural-environment disease dataset with the complex background removed; these images are then fed as training samples into the ResNet model obtained in the previous stage for a second round of training. After the two rounds of training, the trained CENet model is obtained.
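As an illustration of this two-stage strategy only, the sketch below sets up the two training stages on a torchvision ResNet-50. The ResNet depth, the mapping of the max-pooling/average-pooling swap onto torchvision's final AdaptiveAvgPool2d layer, and the weights argument (older torchvision uses pretrained=True) are assumptions rather than details from the patent.

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # brown spot, rust, large spot, healthy

def build_stage1_model():
    # Stage 1: ImageNet-pretrained ResNet transferred to the laboratory images.
    model = models.resnet50(weights="IMAGENET1K_V1")
    for p in model.parameters():                   # freeze the backbone
        p.requires_grad = False
    model.avgpool = nn.AdaptiveAvgPool2d(1)        # final pooling: average pooling
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new FC / classifier
    # avgpool carries no weights, so in practice only the new fc layer is trained
    return model

def prepare_stage2(model):
    # Stage 2: keep the stage-1 weights, freeze everything except a fresh FC
    # head, then retrain it on the LS-RCNN-segmented natural-environment images.
    for p in model.parameters():
        p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
    return model
```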
TABLE 1 image dataset partitioning
The corn disease categories and the partitioning of the image datasets are shown in Table 1. The established disease dataset covers 4 categories: brown spot, rust, large spot and healthy leaves. A total of 3581 images collected in the laboratory environment are used for the first-stage transfer training, of which 2601 are used for training, 707 for validation and 273 for testing. A total of 3563 images collected in the natural environment are used for the second-stage transfer training, of which 2399 are used for training, 722 for validation and 442 for testing. During training, the input images are uniformly resized to 224 x 224, the learning rate is set to 0.001, the batch size is set to 16 (i.e., 16 images per iteration), and the number of training epochs is set to 50.
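A minimal training-loop sketch matching the hyperparameters just listed (224 x 224 inputs, learning rate 0.001, batch size 16, 50 epochs) follows; the optimizer (SGD with momentum), the normalization statistics and the dataset folder layout are assumptions, since the patent does not specify them.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),                 # inputs resized to 224 x 224
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("data/lab/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=16, shuffle=True)   # batch size 16

# Stage-1 setup: frozen ImageNet-pretrained ResNet-50 with a new 4-class head
# (older torchvision versions use pretrained=True instead of weights=...).
model = models.resnet50(weights="IMAGENET1K_V1")
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 4)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=0.001, momentum=0.9
)

for epoch in range(50):                            # 50 training epochs
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```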
After training, the model can be saved in the .pth format and later loaded for testing or application development. When developing an Android mobile App, because the model format supported by pytorch_android is the .pt model, the trained model needs to be converted into a .pt file. The corresponding packages such as pytorch_android are then imported, and image classification is performed by loading the camera, analysing the image, and binding and loading the disease identification model to obtain the most probable recognition result, which is then output.
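As an illustration of this export step, the sketch below loads the saved .pth weights into the classifier architecture and traces it into a TorchScript .pt file; the file names and the ResNet-50 architecture are assumptions.

```python
import torch
from torchvision import models

# Rebuild the classifier architecture and load the trained weights saved as .pth
model = models.resnet50()
model.fc = torch.nn.Linear(model.fc.in_features, 4)
model.load_state_dict(torch.load("cenet.pth", map_location="cpu"))
model.eval()

# Trace with a dummy 224 x 224 input and save as a TorchScript .pt file;
# torch.utils.mobile_optimizer.optimize_for_mobile can optionally be applied
# to the traced module before saving for mobile deployment.
example = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(model, example)
traced.save("cenet.pt")          # this .pt file can be loaded by pytorch_android
```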
The general flow of identifying corn leaf diseases with the model built in the invention is as follows: acquire a corn leaf image in a natural environment; input the corn leaf image into the corn leaf disease identification model obtained by the model building method described above; and obtain the corn leaf disease category output by the model.
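A minimal sketch of this identification flow is given below; the class names, file paths and ResNet-50 architecture are assumptions used for illustration.

```python
import torch
from PIL import Image
from torchvision import models, transforms

CLASSES = ["brown spot", "rust", "large spot", "healthy"]

model = models.resnet50()
model.fc = torch.nn.Linear(model.fc.in_features, len(CLASSES))
model.load_state_dict(torch.load("cenet.pth", map_location="cpu"))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# "segmented_leaf.jpg" stands for a leaf crop produced by the LS-RCNN detector
image = preprocess(Image.open("segmented_leaf.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(image), dim=1)
print(CLASSES[int(probs.argmax())], float(probs.max()))
```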
Example 2:
As shown in FIG. 5, the invention also provides a corn leaf disease identification device, which comprises at least one processor and at least one memory, as well as a communication interface and an internal bus; the memory stores program instructions of the corn leaf disease identification model obtained by the building method described in embodiment 1; when the processor executes the program stored in the memory, the identification of corn leaf diseases described in embodiment 1 can be realized.
The internal bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, the buses in the drawings of the present application are not limited to only one bus or one type of bus. The memory may include high-speed RAM and may further include non-volatile memory (NVM), such as at least one magnetic disk memory, and may also be a USB flash drive, a removable hard disk, a read-only memory, a magnetic disk or an optical disk.
The processor includes one or more general-purpose processors that execute the various functional modules by invoking program code in the memory. The general-purpose processor may be any type of device capable of processing electronic instructions, including a central processing unit (CPU), a microprocessor, a microcontroller, a main processor, a controller, an application-specific integrated circuit (ASIC), and the like. The processor reads the program code stored in the memory and cooperates with the communication interface to perform all the steps of the method of the above-described embodiments of the application.
The communication interface may be a wired interface (e.g., an Ethernet interface) for communicating with other computing nodes or users. When the communication interface is a wired interface, it may employ a protocol family over TCP/IP, such as the RAAS protocol, the Remote Function Call (RFC) protocol, the Simple Object Access Protocol (SOAP), the Simple Network Management Protocol (SNMP), the Common Object Request Broker Architecture (CORBA) protocol, distributed protocols, and the like.
The device is in the form of a general purpose computing device, which may be provided as a terminal, server or other form of device.
Example 3:
The present invention also provides a non-volatile computer-readable storage medium storing program instructions of the corn leaf disease identification model obtained by the building method described in embodiment 1; when executed by a processor, the computer-executable program realizes the identification of corn leaf diseases described in embodiment 1.
In particular, a system, apparatus or device may be provided with a readable storage medium on which software program code implementing the functions of any of the above embodiments is stored, and whose computer or processor reads and executes the instructions stored in that readable storage medium.
In this case, the program code read from the readable medium can itself implement the functions of any of the above embodiments, and therefore the machine-readable code and the readable storage medium storing it form part of the present invention.
The storage medium may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks (e.g., CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW), magnetic tape, and the like. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
It should be understood that the above processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or the like. A general-purpose processor may be a microprocessor or any conventional processor. The steps of a method disclosed in connection with the present invention may be executed directly by a hardware processor, or by a combination of hardware and software modules in a processor.
It should be understood that the storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Alternatively, the storage medium may be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC), or may reside as discrete components in a terminal or server.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
The computer program instructions for performing the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++ and conventional procedural programming languages such as the "C" programming language or similar languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field-programmable gate arrays (FPGAs) or programmable logic arrays (PLAs), with state information of the computer-readable program instructions, and the electronic circuitry can execute the computer-readable program instructions.
The above description is only of the preferred embodiments of the present application and is not intended to limit the present application, but various modifications and variations can be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.
While the foregoing describes the embodiments of the present invention, it should be understood that the present invention is not limited to the embodiments, and that various modifications and changes can be made by those skilled in the art without any inventive effort.

Claims (6)

1. A method for constructing a corn leaf disease identification model, characterized by comprising the following steps:
Step 1, obtaining leaf image datasets of several corn diseases; the leaf image datasets comprise a leaf image dataset from a natural environment and a leaf image dataset from a laboratory environment;
Step 2, performing augmentation on the natural-environment leaf image dataset and the laboratory-environment leaf image dataset respectively;
Step 3, constructing an LS-RCNN model based on the Sparse R-CNN model to serve as a corn leaf detection model, the LS-RCNN model being used to extract and segment the key regions of natural-environment corn leaf images from their complex backgrounds; the LS-RCNN model comprises a feature extraction module, a region proposal module, a regression and classification module, and a leaf segmentation and output module;
the feature extraction module uses a feature pyramid network built on a ResNet structure as the backbone, feeds the outputs of the 4 convolution stages Conv2, Conv3, Conv4 and Conv5 of the ResNet network into the FPN, fuses the up-sampled results with the same-sized feature maps generated bottom-up, builds a 4-level pyramid from P2 to P5, generates a multi-scale feature map for the input image, and outputs features with 256 channels;
the region proposal module uses a set of fixed, learnable proposal boxes to generate region proposals instead of the region proposal network in Faster R-CNN, and uses ROI Pooling to extract region-of-interest features; the region proposal module further comprises a dynamic interaction module that further extracts per-instance features from the extracted ROI features and the proposal features;
the regression and classification module computes the region category from the proposal feature map, determines the final position through bounding-box regression, and generates the corn leaf detection regions;
the leaf segmentation and output module locates each detection region on the input image according to the coordinates of its detection box, copies the image inside the region into a new file to generate a new image, and outputs and saves the new image;
Step 4, performing leaf extraction and segmentation on the natural-environment leaf images obtained in step 2 using the LS-RCNN model built in step 3, to obtain a natural-environment leaf image dataset with the complex background removed;
Step 5, performing two-stage transfer-learning training and testing on a ResNet model using the laboratory-environment leaf image dataset obtained in step 2 and the natural-environment leaf image dataset obtained in step 4, to obtain the CNN deep learning model CENet for corn leaf disease image recognition.
2. The method for constructing a corn leaf disease identification model according to claim 1, wherein the specific steps for training the CENet model in step 5 are as follows:
S51, pre-training a ResNet model on the ImageNet dataset;
S52, transferring the ResNet model pre-trained on ImageNet to the leaf image dataset obtained in the laboratory environment for the first stage of model training, to obtain a disease identification model suited to laboratory leaf images; during the first training stage, the weights of all layers except the final pooling layer and the fully connected layer are frozen, the max-pooling layer in the ResNet network is replaced by an average pooling layer, and the fully connected layer and classification layer are replaced by new ones;
S53, further transferring the model trained in S52 to the leaf image dataset obtained in the natural environment for the second stage of model training; the image dataset for the second training stage is obtained by extracting and segmenting the natural-environment leaf images with the LS-RCNN model; during the second training stage, the weights of all layers except the final fully connected layer are frozen, the fully connected layer and classification layer are replaced by new ones, the number of classification nodes is again set to the number of disease categories, and the weights of the fully connected layer are retrained.
3. The method for constructing a corn leaf disease identification model according to claim 1, wherein the augmentation applied to the image datasets in step 2 comprises 15 methods, namely blur, adaptive histogram equalization, Gaussian noise, flipping, RGB channel shift, rotation, optical distortion, Gaussian blur, padding, random grid shuffling, grid distortion, random brightness, hue-saturation-value shift, elastic transformation and channel shuffling.
4. A method for identifying corn leaf diseases, characterized by comprising: acquiring a corn leaf image in a natural environment; inputting the corn leaf image into the corn leaf disease identification model obtained by the model construction method according to any one of claims 1 to 3; and obtaining the corn leaf disease category output by the model.
5. A corn leaf disease identification device, characterized in that the device comprises at least one processor and at least one memory; the memory stores program instructions of the corn leaf disease identification model obtained by the construction method according to any one of claims 1 to 3; and when the processor executes the program instructions, identification of corn leaf diseases can be realized.
6. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer-executable program of the corn leaf disease identification model obtained by the construction method according to any one of claims 1 to 3, which, when executed by a processor, realizes the identification of corn leaf diseases.
CN202210549842.7A 2022-05-20 2022-05-20 Corn leaf disease identification model building method, equipment and storage medium Active CN114820568B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210549842.7A CN114820568B (en) 2022-05-20 2022-05-20 Corn leaf disease identification model building method, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210549842.7A CN114820568B (en) 2022-05-20 2022-05-20 Corn leaf disease identification model building method, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114820568A CN114820568A (en) 2022-07-29
CN114820568B true CN114820568B (en) 2024-04-30

Family

ID=82517482

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210549842.7A Active CN114820568B (en) 2022-05-20 2022-05-20 Corn leaf disease identification model building method, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114820568B (en)

Citations (4)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110148120A (en) * 2019-05-09 2019-08-20 四川省农业科学院农业信息与农村经济研究所 A kind of disease intelligent identification Method and system based on CNN and transfer learning
CN110391022A (en) * 2019-07-25 2019-10-29 东北大学 A kind of deep learning breast cancer pathological image subdivision diagnostic method based on multistage migration
CN111223553A (en) * 2020-01-03 2020-06-02 大连理工大学 Two-stage deep migration learning traditional Chinese medicine tongue diagnosis model
CN111553240A (en) * 2020-04-24 2020-08-18 四川省农业科学院农业信息与农村经济研究所 Corn disease condition grading method and system and computer equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Sparse R-CNN: End-to-End Object Detection with Learnable Proposals; Peize Sun et al.; 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2021-11-02; pp. 14454-14463 *

Also Published As

Publication number Publication date
CN114820568A (en) 2022-07-29

Similar Documents

Publication Publication Date Title
CN108491765B (en) Vegetable image classification and identification method and system
Burhan et al. Comparative study of deep learning algorithms for disease and pest detection in rice crops
US10600171B2 (en) Image-blending via alignment or photometric adjustments computed by a neural network
CN110263819A (en) A kind of object detection method and device for shellfish image
CN109740721B (en) Wheat ear counting method and device
CN112949704B (en) Tobacco leaf maturity state identification method and device based on image analysis
Zhao et al. A detection method for tomato fruit common physiological diseases based on YOLOv2
Li et al. Multi-scale sparse network with cross-attention mechanism for image-based butterflies fine-grained classification
CN113011532B (en) Classification model training method, device, computing equipment and storage medium
CN111340019A (en) Grain bin pest detection method based on Faster R-CNN
CN114399480A (en) Method and device for detecting severity of vegetable leaf disease
CN107786867A (en) Image identification method and system based on deep learning architecture
CN113221913A (en) Agriculture and forestry disease and pest fine-grained identification method and device based on Gaussian probability decision-level fusion
CN110874835B (en) Crop leaf disease resistance identification method and system, electronic equipment and storage medium
CN113989536A (en) Tomato disease identification method based on cuckoo search algorithm
CN113076873B (en) Crop disease long-tail image identification method based on multi-stage training
CN111563542A (en) Automatic plant classification method based on convolutional neural network
CN113284122B (en) Roll paper packaging defect detection method and device based on deep learning and storage medium
CN114820568B (en) Corn leaf disease identification model building method, equipment and storage medium
CN109815860A (en) TCM tongue diagnosis image color correction method, electronic equipment, storage medium
CN115019215B (en) Hyperspectral image-based soybean disease and pest identification method and device
CN113627538B (en) Method for training asymmetric generation of image generated by countermeasure network and electronic device
CN115409810A (en) Sample selection method, device and system for remote sensing image
US20230037782A1 (en) Method for training asymmetric generative adversarial network to generate image and electric apparatus using the same
CN112784840A (en) License plate recognition method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant