CN116681929A - Wheat crop disease image recognition method - Google Patents


Info

Publication number
CN116681929A
CN116681929A (application CN202310575374.5A)
Authority
CN
China
Prior art keywords
wheat
features
feature
disease
convolution
Prior art date
Legal status (assumed; not a legal conclusion)
Pending
Application number
CN202310575374.5A
Other languages
Chinese (zh)
Inventor
廖子涵 (Liao Zihan)
陈宝远 (Chen Baoyuan)
Current Assignee (listed assignees may be inaccurate)
Harbin University of Science and Technology
Original Assignee
Harbin University of Science and Technology
Priority date (assumed; not a legal conclusion)
Application filed by Harbin University of Science and Technology filed Critical Harbin University of Science and Technology
Priority to CN202310575374.5A
Publication of CN116681929A
Legal status: Pending

Classifications

    • G06V 10/764 — Image or video recognition using pattern recognition or machine learning; classification, e.g. of video objects
    • G06N 3/0464 — Neural networks; convolutional networks [CNN, ConvNet]
    • G06N 3/08 — Neural networks; learning methods
    • G06V 10/267 — Image preprocessing; segmentation of patterns by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V 10/42 — Extraction of image or video features; global feature extraction by analysis of the whole pattern
    • G06V 10/806 — Fusion of extracted features at the sensor, preprocessing, feature-extraction or classification level
    • G06V 10/82 — Image or video recognition using neural networks
    • Y02A 40/10 — Adaptation technologies in agriculture

Abstract

The invention discloses a wheat crop disease image recognition method in the technical field of crop disease recognition, which specifically comprises the following steps. S1: wheat disease image segmentation and recognition based on a multipath convolutional neural network: wheat crop images are segmented with a U-Net semantic segmentation network to obtain single wheat-ear images, and the R, G, and B channels of each segmented wheat-ear image are fed into separate paths of the multipath convolutional neural network to extract wheat-ear features. S2: wheat crop disease recognition and detection based on a multi-scale feature-extraction convolutional neural network: first, a multi-scale feature-extraction module is constructed from dilated (hole) convolutions with different receptive fields, and global features of the wheat-ear image are extracted at several scales; an information-rich local key disease region is then located and learned from the global features; finally, the key-region features are fused with the global features to recognize diseases of wheat crops in different growth periods.

Description

Wheat crop disease image recognition method
Technical Field
The invention relates to the technical field of crop disease identification, in particular to a wheat crop disease image identification method.
Background
The quality of agricultural products is affected by many factors, among which disease is far from negligible. Farmers must quickly suppress or stop the spread of disease: if plant diseases are not identified in time, both the yield and the quality of the crop suffer, and downstream concerns such as consumption, distribution, and export of agricultural products are seriously affected. However, since advanced plant disease detection equipment and intelligent recognition systems are far from widespread in many countries, farmers can only rely on past experience and visual observation to diagnose whether plants are diseased, which is neither sufficiently accurate nor timely, and that experience may itself be wrong. Laboratory diagnosis of plant leaf diseases by experts costs considerable money and effort, and because analysis and detection follow sampling they are not real-time, so a disease can spread widely in the meantime. Because the symptoms of different diseases are ambiguous, complex, and similar to one another, and because some farmers lack scientific and technical training, the onset and progression of plant diseases often cannot be accurately diagnosed and mastered; pesticide is then sprayed in large doses only once the disease is already severe, so the optimal control window is missed, crop yield falls sharply, and the environment is seriously polluted. How to detect diseased areas quickly, simply, and accurately, identify the disease type, and supply the information needed for disease control has therefore become an important problem facing plant cultivation.
Therefore, it is necessary to study an identification method capable of monitoring the growth condition of plants in real time.
Agricultural applications of deep learning include plant disease identification, land-cover classification, product type classification, plant identification, plant climate identification, root-soil separation, yield estimation, fruit counting, obstacle identification, weed identification, product recognition and grading, soil moisture prediction, animal research, and weather forecasting, among others. Deep learning has developed rapidly in plant disease detection and identification, greatly reducing cost while effectively improving recognition accuracy. Deep convolutional networks are translation-invariant: they are insensitive to shifts in the position of feature information, the form of the image is preserved throughout processing, and local features remain stable. Nevertheless, plant disease image recognition is more complex than many other recognition tasks: plants resemble one another closely and their shapes and other characteristics are hard to distinguish, lesion locations on diseased leaves are random, image capture is easily affected by illumination, background, shooting angle, and other objective factors, and even a single disease can present inconsistently, varying greatly with region, climate, and other external factors. Applying deep learning in agriculture will therefore make it far more convenient to monitor plant growth and prevent plant diseases, driving faster development and progress in the agricultural field.
In recent years, with the rapid development of deep learning and recognition technology, image recognition has been widely used in plant phenotyping and disease monitoring. Accurately monitoring the condition of wheat crop diseases with image recognition is therefore of great significance for recognizing wheat diseases and detecting growth state, improving pesticide efficiency, and ensuring grain security. The present invention accordingly studies a wheat crop disease identification method that is more accurate, rapid, and intelligent.
Disclosure of Invention
This section is intended to outline some aspects of embodiments of the invention and to briefly introduce some preferred embodiments. Simplifications or omissions may be made in this section, in the description, and in the title of the invention; these may not be used to limit the scope of the invention.
The invention therefore aims to provide a wheat crop disease image recognition method that can accurately, rapidly, and intelligently monitor the condition of wheat crop diseases, recognize the diseases and detect growth state, improve pesticide efficiency, and ensure grain security.
To solve the above technical problems, the invention provides a wheat crop disease image recognition method adopting the following technical scheme, which specifically comprises the following steps:
s1: wheat disease image segmentation and recognition based on a multipath convolutional neural network: wheat crop images are segmented with a U-Net semantic segmentation network to obtain single wheat-ear images, and the R, G, and B channels of each segmented wheat-ear image are fed into separate paths of the multipath convolutional neural network to extract wheat-ear features;
s2: wheat crop disease recognition and detection based on a multi-scale feature-extraction convolutional neural network: first, a multi-scale feature-extraction module is constructed from dilated (hole) convolutions with different receptive fields, and global features of the wheat-ear image are extracted at several scales; an information-rich local key disease region is then located and learned from the global features; finally, the key-region features are fused with the global features to recognize diseases of wheat crops in different growth periods.
Optionally, the U-Net semantic segmentation network consists of two parts, a compression (contracting) channel and an expansion (expanding) channel. The compression channel captures context information in the image by repeatedly applying a structure of 2 convolution layers followed by 1 pooling layer, so that the number of feature-map channels doubles after each pooling operation. The expansion channel serves mainly to precisely locate the pixel boundaries of the segmented regions in the image: feature maps in this channel are up-sampled by deconvolution, which halves the channel count, spliced with the correspondingly cropped feature maps by vector concatenation, and passed through further convolutions to extract depth features, and the process repeats. In the final layer, the U-Net converts the high-dimensional feature vector into a low-dimensional feature-vector output through a convolution layer.
Optionally, the depth semantic features extracted by the three channels are fused by a feature-fusion strategy into a single high-strength feature, which is fed into a fully connected layer; a joint loss function at the end of the network is then used for further optimization, training, and learning of the model, completing the feature output of the different wheat diseases in each channel.
Optionally, the overall structure of the multipath convolutional neural network comprises an input layer, single channels for each of the R, G, and B colors, three single-channel CNNs, a feature-vector fusion layer, a fully connected layer, and a final result output layer.
Optionally, the dilated (hole) convolution increases the information available to each convolution layer by expanding its receptive field, and captures more context information by setting different dilation rates in the convolution layers. The receptive field (effective kernel size) is calculated as:
K = k + (k − 1)(dilation_rate − 1)
where k is the size of the original convolution kernel in the network structure and dilation_rate is the dilation-rate parameter of the dilated convolution.
Optionally, the multi-scale feature-extraction convolutional neural network comprises three parts: a feature-extractor module, a navigator network module, and a scrutinizer network module. The feature-extractor module extracts the depth features of the wheat ears; the navigator network locates and learns an information-rich local key region from the global features; and the scrutinizer network module fuses the local key-region features with the global features to classify wheat-ear diseases.
Optionally, the feature-extractor module is an attention-based multi-scale feature extractor constructed with dilated convolutions.
In summary, the present invention includes at least one of the following beneficial effects:
(1) A simple multipath convolutional neural network model is proposed based on the color-distribution characteristics of the different diseases of different wheat types. A U-Net segmentation network first segments the wheat data to obtain single wheat-ear images; the R, G, and B channels of each segmented wheat-ear image are then processed in three separate paths to extract wheat-ear features, and the three channels' features are combined with a feature-fusion strategy so that each channel's features contribute. A joint loss function is used during training and learning to optimize the distances between data samples, realizing image segmentation and recognition of wheat diseases and improving recognition precision and accuracy.
(2) To address the subtle feature differences between images of different wheat types at different growth periods, a convolutional neural network based on multi-scale feature extraction is studied for recognizing wheat types across growth periods. A multi-scale feature-extraction module is constructed from dilated convolutions with different receptive fields, and global features of the wheat-ear image are extracted at several scales; an information-rich local key disease region is then learned from the global features; finally, fusing the key-region features with the global features localizes wheat diseases in different growth periods.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a flow chart of the wheat population image segmentation method;
FIG. 3 is a diagram of a U-Net semantic segmentation network according to the present invention;
FIG. 4 is a schematic diagram of a multi-path convolutional neural network of the present invention;
fig. 5 is a schematic diagram of the overall structure of the multi-scale feature extraction convolutional neural network of the present invention.
Detailed Description
The invention is described in further detail below with reference to fig. 1-5.
The invention mainly targets identification of stinking smut, stripe rust, and powdery mildew in wheat images at different growth stages in natural scenes. A wheat image database is constructed from images collected on the Internet, and the three wheat diseases are identified with a recognition algorithm combining a multipath convolutional neural network with multi-scale localization learning, according to the characteristics of the wheat diseases at different growth stages.
Data set collection. The invention constructs an image database of wheat crops from manually collected data sets; the database contains wheat crop group images in natural scenes and images of three common wheat diseases at different growth periods. Because existing technology cannot accurately monitor the condition of wheat crop diseases or identify the disease types, accurately monitoring wheat disease conditions with recognition technology is of great significance for precise prevention and control of wheat crops, improving pesticide efficiency, and ensuring grain security.
Example 1
Referring to fig. 1, the invention discloses a wheat disease image recognition method. The U-Net network is improved so that the image segmentation algorithm recognizes diseased plants more accurately; dilated convolution and an attention mechanism are then added and several different feature-learning algorithms are fused to recognize diseased plants in wheat crops at different periods, improving recognition accuracy so that the target recognition algorithm can better serve disease recognition and monitoring over the whole wheat growth period. The method specifically comprises the following steps:
s1: wheat disease image segmentation and identification based on multipath convolutional neural network: the method comprises the steps of performing segmentation processing operation on wheat crop data by utilizing a U-Net semantic segmentation network to obtain a single wheat ear image, wherein the single wheat ear image is shown in fig. 2, and is a flow chart for segmenting an acquired wheat group image in a complex field environment by utilizing the U-Net segmentation network; training the amplified wheat data sample to obtain a model and carrying out final data segmentation; and finally, carrying out the following wheat disease strain identification classification on the data samples after the segmentation.
A convolutional neural network (CNN) is a deep feed-forward neural network that effectively extracts and expresses the features of an input image by combining three techniques: local receptive fields, weight sharing, and pooling. A CNN extracts image features through convolution operations, and extracts deeper feature representations of the image information as the depth of the network increases. Typically, a complete CNN model consists of an input layer, convolution layers, pooling layers, fully connected layers, and a classifier.
The segmentation networks commonly used in deep learning currently include the fully convolutional network (FCN), SegNet, U-Net, and others. The U-Net semantic segmentation network is shown in fig. 3; the whole structure contains 23 convolution layers and comprises two parts, a compression channel and an expansion channel. The compression channel, a typical convolutional structure, captures context information in the image by repeatedly applying 2 convolution layers followed by 1 pooling layer; after each pooling operation the number of feature-map channels doubles. The expansion channel serves mainly to precisely locate the pixel boundaries of the segmented regions in the image: feature maps in this channel are up-sampled by deconvolution, halving the channel count, spliced with the correspondingly cropped feature maps by vector concatenation, and convolved to extract depth features, and the process repeats. In the final layer, the U-Net converts the high-dimensional feature vector into a low-dimensional feature-vector output through a convolution layer.
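The symmetry of the compression and expansion channels can be illustrated with a small sketch (an assumption-laden illustration, not the patented implementation): it only traces how the spatial size and channel count of the feature maps evolve through the pooling and deconvolution stages, using an assumed 256×256 input, 64 starting channels, and 4 blocks.

```python
# Illustrative shape trace for a U-Net-style network: the compression
# channel halves spatial size and doubles channels at each block
# (2 convolutions + 1 pooling); the expansion channel mirrors this with
# deconvolutions.  Sizes and depth are assumed for illustration only.

def unet_shape_trace(spatial=256, channels=64, depth=4):
    """Return the (spatial, channels) shape after each stage."""
    shapes = [(spatial, channels)]
    # Compression channel: pooling halves spatial size, channels double.
    for _ in range(depth):
        spatial //= 2
        channels *= 2
        shapes.append((spatial, channels))
    # Expansion channel: deconvolution doubles spatial size, halves channels.
    for _ in range(depth):
        spatial *= 2
        channels //= 2
        shapes.append((spatial, channels))
    return shapes

trace = unet_shape_trace()
# The architecture is symmetric: the output shape matches the input shape.
assert trace[0] == trace[-1] == (256, 64)
```

At the bottleneck (after 4 poolings) the sketch gives a 16×16 map with 1024 channels, matching the doubling rule described above.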
In the multipath convolutional neural network, each segmented single wheat-ear image is decomposed by channel into its R, G, and B components, which are fed into three separate channels to learn and extract detail features of the wheat-ear object. The channels share the same network structure but have independent weights. To reduce the correlation of intrinsic features across the R, G, and B channels and raise each channel's contribution, this research fuses the depth semantic features extracted by the three channels through a feature-fusion strategy into one high-strength feature and sends it to a fully connected layer. Finally, a joint loss function is used in the network for further optimization, training, and learning of the model, completing the feature output of the different wheat diseases in each channel.
The overall structure of the multipath convolutional neural network (M-CNN) for wheat head blight image recognition is shown in fig. 4: an input layer, single channels for each of the R, G, and B colors, three single-channel CNNs (S-CNNs), a feature-vector fusion layer (Fusion), a fully connected layer (FC), and a final result output layer (Output). To give the network better performance, a joint loss function is introduced to optimize its learning. In the M-CNN, each separated single wheat-ear image is decomposed by channel into its R, G, and B components, which are fed into the three S-CNNs to learn and extract detail features of the wheat-ear object.
The R, G, and B channel components are interrelated yet each differs from the others. To fully exploit these differences, vector-splicing (concatenation) fusion is chosen to fuse the features extracted from the three channel components, improving feature strength and reducing the influence of useless features on the network model's recognition, so that the features extracted by the multipath convolutional neural network are richer and more comprehensive.
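The channel-split-then-splice idea can be sketched minimally in numpy (an illustration under stated assumptions: the per-channel extractor here is simple block averaging standing in for an S-CNN, and `extract_channel_features` and `multipath_fuse` are hypothetical names, not from the patent):

```python
# Sketch of the multipath idea: split a segmented wheat-ear image into
# its R, G, B components, run each through its own stand-in extractor,
# and fuse the three feature vectors by concatenation (vector splicing).
import numpy as np

def extract_channel_features(channel, n_features=8):
    # Stand-in for an S-CNN: average row-blocks of the channel into a
    # small feature vector.
    h, w = channel.shape
    blocks = channel.reshape(n_features, h // n_features, w)
    return blocks.mean(axis=(1, 2))

def multipath_fuse(image):
    """image: H x W x 3 array; returns the spliced fused feature vector."""
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    feats = [extract_channel_features(c) for c in (r, g, b)]
    return np.concatenate(feats)          # vector-splicing fusion

rng = np.random.default_rng(0)
ear = rng.random((64, 64, 3))             # stand-in wheat-ear image
fused = multipath_fuse(ear)
assert fused.shape == (24,)               # 3 channels x 8 features each
```

Concatenation keeps each channel's contribution separate in the fused vector, which is the property the feature-fusion strategy above relies on.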
S2: wheat crop disease recognition and detection based on a multi-scale feature-extraction convolutional neural network. To address the subtle feature differences between images of wheat crops at different growth periods, a convolutional neural network based on multi-scale feature extraction is studied for recognizing wheat crops at different growth periods. First, a multi-scale feature-extraction module is constructed from dilated convolutions with different receptive fields, and global features of the wheat-ear image are extracted at several scales; an information-rich local key disease region is then located and learned from the global features; finally, fusing the key-region features with the global features localizes diseases of wheat crops in different growth periods.
An ordinary convolutional neural network cannot satisfy the current research situation well, nor the requirement of high-precision recognition. The invention therefore introduces dilated convolution and an attention mechanism into the convolutional network structure, combining more context information to improve the model's recognition accuracy and so meet the needs of research objects in different periods. The central idea of the attention mechanism is to let the system learn to place its center of gravity on the part of the input carrying the most information for the current task, weakening or filtering the interference of irrelevant information and improving the accuracy and efficiency of the network model on the target task. Attention is mainly produced by masking: the mask's weight coefficients, obtained through network-model learning, express the importance of key features in a data sample, so that the network's training and learning concentrate on the key region of the image.
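Mask-based attention as described above can be sketched in a few lines of numpy (an illustration only: the score map here is fabricated, whereas in the network it would be learned, and `apply_attention_mask` is a hypothetical name):

```python
# Sketch of mask-based attention: a weight map whose coefficients express
# the importance of each spatial position is multiplied into the feature
# map, concentrating learning on the key region and suppressing the rest.
import numpy as np

def apply_attention_mask(features, scores):
    """features, scores: H x W arrays; scores are unnormalized logits."""
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # softmax over all positions
    return features * weights             # elementwise masking

features = np.ones((4, 4))
scores = np.zeros((4, 4))
scores[1, 1] = 5.0                        # pretend the lesion is here
attended = apply_attention_mask(features, scores)
# The masked feature map is largest at the high-score (key) position.
assert attended[1, 1] == attended.max()
```

The softmax normalization means raising one position's score necessarily suppresses the others, which is exactly the "weaken or filter irrelevant information" behavior described above.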
Because its inherent nature lets it enlarge the receptive field without affecting the properties of the data sample, dilated (hole) convolution is often an effective strategy for increasing a network's receptive field. On the one hand, dilated convolution increases the information available to each convolution layer by expanding its receptive field; on the other, setting different dilation rates in the convolution layers captures more context information, effectively improving the model's recognition accuracy. The receptive field (effective kernel size) is computed as:
K = k + (k − 1)(dilation_rate − 1)
where k is the size of the original convolution kernel in the network structure and dilation_rate is the dilation-rate parameter of the dilated convolution. The receptive field can therefore be enlarged simply by controlling the dilation rate, letting the convolution layer operate over a larger area of the feature map to extract more comprehensive depth features and mine more context information, while avoiding both the information loss caused by pooling operations and the loss of potential depth visual features that directly using a larger convolution kernel would incur. This helps the convolutional network model extract context feature information better while keeping the original resolution.
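The formula can be put directly into code, together with the standard composition rule for stride-1 layers (the stacking rule is an added assumption for illustration, not stated in the patent):

```python
# Effective kernel size of a dilated convolution, K = k + (k-1)(d-1),
# plus the receptive field of a stack of stride-1 dilated layers, showing
# how raising the dilation rate enlarges the receptive field without
# adding any parameters.

def effective_kernel(k, dilation_rate):
    """K = k + (k - 1)(dilation_rate - 1)."""
    return k + (k - 1) * (dilation_rate - 1)

def stacked_receptive_field(layers):
    """layers: list of (k, dilation_rate) pairs; for stride-1 layers the
    receptive field composes as RF = 1 + sum(K_i - 1)."""
    rf = 1
    for k, d in layers:
        rf += effective_kernel(k, d) - 1
    return rf

assert effective_kernel(3, 1) == 3        # ordinary 3x3 convolution
assert effective_kernel(3, 2) == 5        # same 9 weights, wider view
assert stacked_receptive_field([(3, 1), (3, 2), (3, 4)]) == 15
```

Three 3×3 layers with dilation rates 1, 2, 4 thus see a 15-pixel window while a plain 3×3 stack of the same depth sees only 7, which is the "larger area, same resolution" benefit described above.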
To effectively improve recognition of wheat-ear images with different diseases, a multi-scale feature-extraction convolutional neural network is proposed using the part-plus-whole idea of the deep convolutional network NTS-Net. It consists of three parts: a feature-extractor module (Feature Extractor, FE), a navigator network module, and a scrutinizer network module; the overall structure of the whole deep recognition network is shown in fig. 5. The feature extractor extracts the depth features of the wheat ears; so that it can better extract the optimal features of the whole wheat-ear object from a multi-scale angle, an attention-based multi-scale feature extractor is constructed with dilated convolutions to capture multi-scale context information from the whole wheat-ear object, extract global features of the wheat-ear image, and avoid ignoring potential visual features. The navigator network module learns an information-rich local key region from the global features, and the scrutinizer network module fuses the local key-region features with the global features to classify wheat-ear diseases.
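The navigate-then-fuse step can be sketched in numpy (an assumption-heavy illustration: the navigator score map is fabricated here rather than learned, window selection is a brute-force search for clarity, and `locate_key_region` / `fuse_local_global` are hypothetical names, not NTS-Net's API):

```python
# Sketch of part/whole fusion: a navigator-style score map proposes the
# most informative local window; features pooled from that window (local)
# and from the whole map (global) are concatenated for classification.
import numpy as np

def locate_key_region(scores, win=4):
    """Return the top-left corner of the win x win window with the
    highest summed navigator score (exhaustive search for clarity)."""
    h, w = scores.shape
    best, corner = -np.inf, (0, 0)
    for i in range(h - win + 1):
        for j in range(w - win + 1):
            s = scores[i:i + win, j:j + win].sum()
            if s > best:
                best, corner = s, (i, j)
    return corner

def fuse_local_global(feature_map, scores, win=4):
    i, j = locate_key_region(scores, win)
    local = feature_map[i:i + win, j:j + win].mean(axis=(0, 1))
    global_ = feature_map.mean(axis=(0, 1))
    return np.concatenate([local, global_])

rng = np.random.default_rng(1)
fmap = rng.random((16, 16, 8))            # H x W x C global features
scores = rng.random((16, 16))             # stand-in navigator scores
fused = fuse_local_global(fmap, scores)
assert fused.shape == (16,)               # 8 local + 8 global channels
```

The classifier then sees the key-region evidence and the whole-ear context side by side, mirroring the scrutinizer fusion described above.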
The above embodiments are not intended to limit the scope of the present invention; all equivalent changes in structure, shape, and principle of the invention are covered by its scope of protection.

Claims (7)

1. A wheat crop disease image recognition method, characterized in that the method specifically comprises the following steps:
s1: wheat disease image segmentation and recognition based on a multipath convolutional neural network: wheat crop images are segmented with a U-Net semantic segmentation network to obtain single wheat-ear images, and the R, G, and B channels of each segmented wheat-ear image are fed into separate paths of the multipath convolutional neural network to extract wheat-ear features;
s2: wheat crop disease recognition and detection based on a multi-scale feature-extraction convolutional neural network: first, a multi-scale feature-extraction module is constructed from dilated (hole) convolutions with different receptive fields, and global features of the wheat-ear image are extracted at several scales; an information-rich local key disease region is then located and learned from the global features; finally, the key-region features are fused with the global features to recognize diseases of wheat crops in different growth periods.
2. The wheat crop disease image recognition method according to claim 1, characterized in that: the U-Net semantic segmentation network consists of two parts, a compression channel and an expansion channel; the compression channel captures the context information in the image by repeatedly applying a structure of 2 convolution layers followed by 1 pooling layer, so that after each pooling operation the number of feature channels doubles; the expansion channel serves mainly to accurately locate the pixel boundary of the segmented region in the image: a deconvolution operation halves the number of channels of the feature map, the result is concatenated by vector splicing with the correspondingly cropped feature map from the compression channel, depth features are then extracted by convolution operations, and this process is repeated; in the last layer, the U-Net network converts the high-dimensional feature vector into a low-dimensional feature vector output through a convolution layer.
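The channel/resolution bookkeeping that claim 2 describes can be sketched directly: each "2 conv + 1 pool" stage of the compression channel doubles the feature channels and halves spatial resolution, and each deconvolution in the expansion channel does the reverse. The stage count and starting sizes below are illustrative, not values from the patent:

```python
# Dimension bookkeeping sketch for the U-Net structure in claim 2.
# Starting channel count, spatial size and number of stages are assumptions.

def compress(channels, size, stages):
    """Compression channel: channels x2, spatial size /2 per pooling stage."""
    dims = [(channels, size)]
    for _ in range(stages):
        channels, size = channels * 2, size // 2
        dims.append((channels, size))
    return dims

def expand(channels, size, stages):
    """Expansion channel: deconvolution halves channels, doubles spatial size."""
    dims = [(channels, size)]
    for _ in range(stages):
        channels, size = channels // 2, size * 2
        dims.append((channels, size))
    return dims

down = compress(64, 256, 3)  # (channels, spatial size) after each pooling
up = expand(512, 32, 3)      # mirror of the compression path
```

The expansion path mirrors the compression path exactly, which is what makes the skip-connection concatenation at each level shape-compatible (after cropping).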
3. The wheat crop disease image recognition method according to claim 1, characterized in that: the depth semantic features extracted by the three channels are fused with a feature fusion strategy into one high-strength feature, which is fed into a fully connected layer; at the end of the network, a joint loss function is used to further optimize and train the model, thereby completing the feature output for the different wheat diseases in each channel.
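Claim 3 does not specify the fusion operator or the terms of the joint loss. A minimal sketch, assuming concatenation as the fusion strategy and a weighted sum of per-path losses as the joint loss (both common choices, not the patent's confirmed design):

```python
# Hedged sketch of claim 3: concatenation-based feature fusion and a joint
# loss assumed to be a weighted sum of per-path losses.

def fuse_features(f_r, f_g, f_b):
    """Concatenate the three per-channel feature vectors into one."""
    return f_r + f_g + f_b

def joint_loss(losses, weights=None):
    """Weighted sum of the per-path losses (uniform weights by default)."""
    weights = weights or [1.0] * len(losses)
    return sum(w * l for w, l in zip(weights, losses))

# Toy per-channel features and per-path loss values.
fused = fuse_features([0.2, 0.4], [0.1, 0.3], [0.5, 0.6])
total = joint_loss([0.9, 1.1, 0.7])
```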
4. The wheat crop disease image recognition method according to claim 1, characterized in that: the overall structure of the multipath convolutional neural network comprises an input layer, three single-color channels (one each for R, G and B), a feature vector fusion layer, a fully connected layer and a final result output layer.
5. The wheat crop disease image recognition method according to claim 1, characterized in that: the dilated (cavity) convolution enlarges the receptive field of each convolution layer, and captures more context information by setting different dilation rates in the convolution layers, wherein the receptive field is computed as:
K=k+(k-1)(dilation_rate-1)
where k represents the size of the original convolution kernel in the convolutional network structure and dilation_rate represents the value of the dilation rate parameter in the dilated convolution.
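The formula in claim 5 transcribes directly into code. For example, a 3x3 kernel with dilation rate 2 covers the same span as a 5x5 kernel, and with dilation rate 4 the same span as a 9x9 kernel, without adding parameters:

```python
# Direct transcription of the receptive-field formula from claim 5:
# K = k + (k - 1) * (dilation_rate - 1)

def effective_kernel_size(k, dilation_rate):
    """Equivalent kernel size (receptive field) of a dilated convolution."""
    return k + (k - 1) * (dilation_rate - 1)

# dilation_rate = 1 reduces to an ordinary convolution (K = k).
```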
6. The wheat crop disease image recognition method according to claim 1, characterized in that: the multi-scale feature extraction convolutional neural network consists of three parts, a feature extractor module, a navigation network module and a screening (Scrutinizer) network module; the feature extractor module extracts the depth features of the wheat ears, the navigation network module locates and learns an information-rich local key region from the global features, and the screening network module fuses the local key-region features with the global features for recognizing wheat ear disease classification.
7. The wheat crop disease image recognition method according to claim 6, characterized in that: the feature extractor module is a multi-scale feature extractor based on an attention mechanism, constructed using dilated (cavity) convolution.
CN202310575374.5A 2023-05-22 2023-05-22 Wheat crop disease image recognition method Pending CN116681929A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310575374.5A CN116681929A (en) 2023-05-22 2023-05-22 Wheat crop disease image recognition method

Publications (1)

Publication Number Publication Date
CN116681929A true CN116681929A (en) 2023-09-01

Family

ID=87782835

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310575374.5A Pending CN116681929A (en) 2023-05-22 2023-05-22 Wheat crop disease image recognition method

Country Status (1)

Country Link
CN (1) CN116681929A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117152168A (en) * 2023-10-31 2023-12-01 山东科技大学 Medical image segmentation method based on frequency band decomposition and deep learning
CN117152168B (en) * 2023-10-31 2024-02-09 山东科技大学 Medical image segmentation method based on frequency band decomposition and deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination