CN115050022A - Crop pest and disease identification method based on multi-level self-adaptive attention - Google Patents
Crop pest and disease identification method based on multi-level self-adaptive attention
- Publication number
- CN115050022A (application CN202210640236.6A)
- Authority
- CN
- China
- Prior art keywords
- crop
- network
- level
- convolution
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/68—Food, e.g. fruit or vegetables
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Databases & Information Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Molecular Biology (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
A crop pest and disease identification method based on multi-level self-adaptive attention, applied in the technical field of image identification, solves the problem that traditional convolutional neural network models recognize highly similar crop pest and disease images poorly. A neural network model based on an attention mechanism is constructed, a multi-level identification network is built from it, and the outputs of the multi-level networks are feature-fused by fuzzy integration to produce the detection result. Introducing the attention mechanism into the network strengthens its ability to classify similar samples; fusing the recognition results of the multi-level network model set improves model precision and overcomes the low applicability of a single network model, while parameters can be adjusted according to actual needs, meeting the practical requirement of identifying crop pests and diseases accurately and quickly; interleaved group convolution reduces the amount of convolution computation and shortens model inference time. By training network models of different levels and fusing the multi-model recognition results through fuzzy integration, the identification precision for crop pests and diseases is improved.
Description
Technical Field
The invention belongs to the technical field of image recognition, and particularly relates to a crop pest and disease identification method based on multi-level self-adaptive attention.
Background
Agriculture is a country's primary industry; its state of development not only bears on the people's livelihood but also determines whether the country's economy can develop stably. With the great improvement in living standards and purchasing power, China has put forward higher requirements on the yield and quality of crops. The spread of crop pests and diseases reduces crop yield and damages quality, and if pests and diseases are not prevented and controlled in time, they can seriously affect society, the economy and the ecology.
Traditional practice still relies on manual field observation of diseased crops: agricultural technicians must go into the field and distinguish leaves with subtly different pests and diseases by eye, which depends on their experience. Because the number of agricultural technicians nationwide is limited and their work is constrained by place and time, on-site technical support cannot be guaranteed anytime and anywhere.
Technology combining deep learning with image recognition has become more widely used in recent years, making it possible to recognize target classes in images. At present, many algorithms extract features through a convolutional neural network model and then classify those features. However, these methods also have certain disadvantages, such as reliance on a single model, narrow specialization, and low generality, leaving defects in practical solutions.
Disclosure of Invention
The invention aims to design a crop pest and disease identification method based on multi-level self-adaptive attention, to solve the problem that traditional convolutional neural network models recognize highly similar crop pest and disease images poorly.
The invention solves the technical problems through the following technical scheme:
a crop pest and disease damage identification method based on multi-level adaptive attention comprises the following steps:
s1, preprocessing the crop pest image set to obtain a preprocessed crop pest image set;
s2, initializing q = 1, dividing the preprocessed crop pest and disease image set into a level-q training image set T_q and a crop pest and disease test image set V;
s3, constructing a neural network model based on the interleaving group convolution attention module;
s4, constructing a multi-level identification network;
and S5, performing feature fusion on the multi-level network output by using fuzzy integration, and outputting a detection result.
With the above technical scheme, an attention mechanism is introduced into the network, strengthening its ability to classify similar samples; the recognition results of the multi-level network model set are fused, improving model precision and overcoming the low applicability of a single network model, so parameters can be adjusted according to actual needs and the practical requirement of identifying crop pests and diseases accurately and quickly is met; interleaved group convolution reduces the amount of convolution computation and shortens model inference time; and by training network models of different levels and fusing the multi-model recognition results through fuzzy integration, the identification precision for crop pests and diseases is improved and practicability is enhanced.
Further, the method for preprocessing the crop pest and disease image set in step S1 is: augment and enhance the images in the crop pest and disease image set, then uniformly resize all images to 224 × 224.
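The preprocessing step above can be sketched as follows. The patent does not name a library or the exact augmentations, so the nearest-neighbour resize and the flip/rotation expansions below are illustrative assumptions rather than the patented procedure:

```python
import numpy as np

def resize_nearest(img, size=(224, 224)):
    # Uniformly resize to 224x224 (nearest-neighbour stand-in for the
    # unspecified resizing method in step S1).
    h, w = img.shape[:2]
    rows = np.minimum(np.arange(size[0]) * h // size[0], h - 1)
    cols = np.minimum(np.arange(size[1]) * w // size[1], w - 1)
    return img[rows][:, cols]

def expand(img):
    # Illustrative dataset expansion: original, horizontal flip, and a
    # 90-degree rotation (the patent leaves the augmentations open).
    return [img, img[:, ::-1], np.rot90(img)]

def preprocess(images):
    # Expand/enhance each image, then resize everything to 224x224.
    return [resize_nearest(a) for img in images for a in expand(img)]
```

Each source image yields several augmented copies, all normalized to the 224 × 224 input size the network expects.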
Further, the method for constructing the neural network model based on the staggered group convolution attention module described in step S3 is as follows:
s3.1, designing a feature extraction network f based on the structure of an SE-Resnet50 convolutional neural network, wherein the feature extraction network f is formed by cascading a common convolution module A, 4 staggered group convolution attention modules B, C, D, E and a full connection layer fc and is used for extracting network features;
s3.2, initialize i = 1; select the i-th input image x_i from the level-q training set T_q, feed it to the input layer of the plain convolution module A, pass it in turn through the interleaved group convolution attention modules B, C, D, E, and feed the result to the fully connected layer fc to obtain the feature map F_i^q; a softmax classifier then yields the classification result z of the input image x_i, where O is the dimension of the feature vector.
Further, the input layer of the feature extraction network f is the input layer of the plain convolution module A; the plain convolution module A is followed by 3 interleaved group convolution attention modules B, module B by 3 interleaved group convolution attention modules C, module C by (3+q) interleaved group convolution attention modules D, and module D by 3 interleaved group convolution attention modules E; the interleaved group convolution attention module E is connected to the fully connected layer fc.
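As a sanity check on the wiring above, the stage layout of the level-q feature extraction network f can be written down directly. The list-of-pairs form is a convenience of this sketch, not notation from the patent:

```python
def level_q_layout(q):
    # Stage structure of the level-q network f: one plain convolution
    # module A, then 3 modules B, 3 modules C, (3+q) modules D and
    # 3 modules E (interleaved group convolution attention modules),
    # followed by the fully connected layer fc.
    return [("A", 1), ("B", 3), ("C", 3), ("D", 3 + q), ("E", 3), ("fc", 1)]
```

For q = 1 this gives 3+1 = 4 modules at stage D; each deeper level of the hierarchy adds one more D module.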
Further, the method for constructing the multi-level identification network described in step S4 is as follows:
s4.1, let i = i+1 and repeat step S3.2, taking all remaining images of the level-q training set T_q in turn as input images x_i, thereby training the level-q model M_q and obtaining the final feature maps and classification results of all images in T_q;
and S4.2, establishing the final evaluation index of the characteristic diagram.
Further, the method for establishing the evaluation index of the final feature map in step S4.2 is as follows:
s4.2.1, let i = 0; based on latent semantic analysis, map the feature map set F_i^q to a semantic space vector and compute the semantic error distance of x_i in the level-q network:
In the formula, the two quantities denote, respectively, the number of training samples in the same category as the recognition result of x_i, and the semantic state features of the training sample set of that category;
s4.2.2, based on information entropy theory, the semantic error information of the recognition result of crop pest and disease test sample x_i can be defined as:
s4.2.3, compare the semantic error information with the set threshold; if it does not exceed the threshold, the input image x_i suits the level-q model M_q, so let i = i+1 and return to step S4.2.1; otherwise, put the input image x_i into the level-(q+1) training set T_{q+1}; once all images of the level-q training set T_q have been tested, the final level-(q+1) training set T_{q+1} is obtained;
S4.2.4, judge whether q equals q_max; if yes, the network models of all levels have been trained and step S5 is executed; otherwise let q = q+1 and return to step S3.1, obtaining the level-q crop pest and disease identification model M_q and the level-(q+1) training set T_{q+1}.
Further, the method for performing feature fusion on the multi-level network output by using fuzzy integration and outputting the detection result in step S5 specifically includes:
Given the set of learning models trained for different crop pest and disease samples, a multi-model fusion joint discrimination mechanism based on fuzzy integration is constructed;
s5.1, calculate the parameter w using formula (3):
in formula (3), v_j denotes the importance of the trained model M_j for crop pest and disease identification;
s5.2, calculate the fuzzy density v_w(T_j) of the level-j crop pest identification model for the crop pest image set T_j to be identified using formula (4):
v_w(T_j) = v_j + v_w(T_{j-1}) + w·v_j·v_w(T_{j-1})    (4)
in formula (4), v_w(T_{j-1}) denotes the fuzzy density of the level-(j-1) crop pest identification model; when j = 1, let v_w(T_{j-1}) = v_1;
S5.3, use formula (5) to obtain the probability that the crop pest image x to be identified belongs to category l_x, and select the category with the maximum probability among the category probabilities as the final category of the crop pest image to be identified;
in formula (5), the first term denotes the component of the final result Y_j of the level-j recognition network on category l_x; ∨ denotes taking the maximum of two values, and ∧ taking the minimum.
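Formulas (4) and (5) can be sketched as follows. Formula (3) defining w is not reproduced in the text record, so w is taken as a given parameter here, and the descending-sort pairing of per-level scores with cumulative fuzzy densities is the standard Sugeno fuzzy-integral reading, an assumption rather than a quotation from the patent:

```python
def fuzzy_densities(v, w):
    # Formula (4): v_w(T_1) = v_1;
    # v_w(T_j) = v_j + v_w(T_{j-1}) + w * v_j * v_w(T_{j-1}).
    out = [v[0]]
    for vj in v[1:]:
        out.append(vj + out[-1] + w * vj * out[-1])
    return out

def fuse_class_score(h, v, w):
    # Formula (5), read as a Sugeno fuzzy integral: sort the per-level
    # scores h_j for category l_x in descending order, pair them with the
    # cumulative fuzzy densities, and take max over min(h_j, v_w(T_j)).
    order = sorted(range(len(h)), key=lambda j: -h[j])
    dens = fuzzy_densities([v[j] for j in order], w)
    return max(min(h[j], d) for j, d in zip(order, dens))

def classify(scores_per_class, v, w):
    # Final category: the l_x with the largest fused probability.
    fused = {lx: fuse_class_score(h, v, w) for lx, h in scores_per_class.items()}
    return max(fused, key=fused.get)
```

With w = 0 the densities reduce to cumulative sums of the v_j; a nonzero w adds the interaction term of formula (4).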
The invention has the following advantages:
An attention mechanism is introduced into the network, strengthening its ability to classify similar samples; the recognition results of the multi-level network model set are fused, improving model precision and overcoming the low applicability of a single network model, so parameters can be adjusted according to actual needs and the practical requirement of identifying crop pests and diseases accurately and quickly is met; interleaved group convolution reduces the amount of convolution computation and shortens model inference time; and training network models of different levels and fusing the multi-model recognition results through fuzzy integration improves the identification precision for crop pests and diseases and enhances practicability.
Drawings
Fig. 1 is a flowchart of a crop pest identification method based on multi-level adaptive attention according to a first embodiment of the present invention;
fig. 2 is a diagram of a feature extraction network structure of a crop pest identification method based on multi-level adaptive attention according to a first embodiment of the present invention;
fig. 3 is an overall operation structure diagram of a crop pest identification method based on multi-level adaptive attention according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The technical scheme of the invention is further described by combining the drawings and the specific embodiments in the specification:
example one
As shown in fig. 1, a crop pest identification method based on multi-level adaptive attention comprises the following steps:
Step 2, initialize q = 1 and divide the preprocessed crop pest and disease image set into a level-q training image set T_q and a crop pest and disease test image set V; in this example T_1 contains 8294 images and V contains 2148 images.
Step 3.1, designing the feature extraction network f based on the structure of the SE-Resnet50 convolutional neural network, wherein the feature extraction network f is formed by cascading a common convolution module A, 4 staggered group convolution attention modules B, C, D, E and a full connection layer fc and is used for extracting network features;
the structure of the feature extraction network f as shown in fig. 2 is as follows:
the first layer is a 7 × 7 convolutional layer, the number of convolutional kernels is 64, and the second layer is a 3 × 3 max pooling layer. Then connect the 3 interleaved sets of convolutional attention modules set a.
The interleaved group convolution attention module group A consists of 5 parts. The first part is a convolution with 1 × 1 kernels, 64 kernels in total. The second part is the primary group convolution of the interleaved group convolution, with 3 × 3 kernels, 4 partitions, and 16 channels per partition. The third part is the secondary group convolution of the interleaved group convolution, with 3 × 3 kernels, 16 channels per partition, and 4 partitions. The fourth part is a convolutional layer with 1 × 1 kernels, 256 kernels in total. The fifth part is two fully connected layers in series: the first has 16 neurons with a ReLU excitation function, and the second has 256 neurons with a sigmoid excitation function.
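A minimal sketch of the two distinctive operations in module group A: the channel interleaving between the primary and secondary group convolutions, and the squeeze-and-excitation branch of the fifth part. NumPy is used purely for illustration, the weights are taken as given, biases are dropped, and the convolutions themselves are omitted:

```python
import numpy as np

def interleave_channels(x, groups=4):
    # Rearrange channels between the primary group convolution (4 partitions
    # of 16 channels in group A) and the secondary group convolution, so each
    # secondary partition sees one channel from every primary partition.
    c, h, w = x.shape
    return x.reshape(groups, c // groups, h, w).transpose(1, 0, 2, 3).reshape(c, h, w)

def se_rescale(x, w1, w2):
    # Fifth part of the module: squeeze (global average pool), fc with 16
    # neurons + ReLU, fc with 256 neurons + sigmoid, then channel-wise
    # rescaling of the feature map. w1: (16, C), w2: (C, 16).
    s = x.mean(axis=(1, 2))              # squeeze: one value per channel
    e = np.maximum(w1 @ s, 0.0)          # excitation fc1 + ReLU
    g = 1.0 / (1.0 + np.exp(-(w2 @ e)))  # excitation fc2 + sigmoid
    return x * g[:, None, None]
```

The interleaving step is what lets the two cheap group convolutions approximate a full convolution across all 64 channels.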
Then 3 interleaved group convolution attention modules of group B are connected.
The interleaved group convolution attention module group B consists of 5 parts. The first part is a convolution with 1 × 1 kernels, 128 kernels in total. The second part is the primary group convolution of the interleaved group convolution, with 3 × 3 kernels, 4 partitions, and 32 channels per partition. The third part is the secondary group convolution, with 3 × 3 kernels, 32 channels per partition, and 4 partitions. The fourth part is a convolutional layer with 1 × 1 kernels, 512 kernels in total. The fifth part is two fully connected layers in series: the first has 32 neurons with a ReLU excitation function, and the second has 512 neurons with a sigmoid excitation function.
Then (3+q) interleaved group convolution attention modules of group C are connected.
The interleaved group convolution attention module group C consists of 5 parts. The first part is a convolution with 1 × 1 kernels, 256 kernels in total. The second part is the primary group convolution of the interleaved group convolution, with 3 × 3 kernels, 4 partitions, and 64 channels per partition. The third part is the secondary group convolution, with 3 × 3 kernels, 64 channels per partition, and 4 partitions. The fourth part is a convolutional layer with 1 × 1 kernels, 1024 kernels in total. The fifth part is two fully connected layers in series: the first has 64 neurons with a ReLU excitation function, and the second has 1024 neurons with a sigmoid excitation function.
Then 3 interleaved group convolution attention modules of group D are connected.
The interleaved group convolution attention module group D consists of 5 parts. The first part is a convolution with 1 × 1 kernels, 512 kernels in total. The second part is the primary group convolution of the interleaved group convolution, with 3 × 3 kernels, 4 partitions, and 128 channels per partition. The third part is the secondary group convolution, with 3 × 3 kernels, 128 channels per partition, and 4 partitions. The fourth part is a convolutional layer with 1 × 1 kernels, 2048 kernels in total. The fifth part is two fully connected layers in series: the first has 128 neurons with a ReLU excitation function, and the second has 2048 neurons with a sigmoid excitation function.
Finally, a fully connected layer and the softmax classifier are connected.
Step 3.2, initialize i = 1; select the i-th input image x_i from the level-q training set T_q, feed it to the input layer of the plain convolution module A, pass it in turn through the interleaved group convolution attention modules B, C, D, E, and feed the result to the fully connected layer fc to obtain the feature map F_i^q; a softmax classifier then yields the classification result z of the input image x_i, where O is the dimension of the feature vector.
Step 4, constructing a multi-level identification network:
Step 4.1, let i = i+1 and repeat step 3.2, taking all remaining images of the level-q training set T_q in turn as input images x_i, thereby training the level-q model M_q and obtaining the final feature maps and classification results of all images in T_q;
Step 4.2, establish the final feature-map evaluation index. Because the similarity between crop pest and disease images is high, a single model does not easily extract effective features, so multi-model feature-fusion discrimination must be established to improve model performance; this requires a quantitative basis for the feature space and its classification criterion, establishing an equivalent entropy-form measure relation between the information theory of feature-space modeling and the cognition theory used to evaluate crop pest recognition results;
Step 4.2.1, let i = 0; based on latent semantic analysis, map the feature map set F_i^q to a semantic space vector and compute the semantic error distance of x_i in the level-q network:
In the formula, the two quantities denote, respectively, the number of training samples in the same category as the recognition result of x_i, and the semantic state features of the training sample set of that category.
Step 4.2.2, based on information entropy theory, the semantic error information of the recognition result of crop pest and disease test sample x_i can be defined as:
Step 4.2.3, compare the semantic error information with the set threshold; if it does not exceed the threshold, the input image x_i suits the level-q model M_q, so let i = i+1 and return to step 4.2.1; otherwise, put the input image x_i into the level-(q+1) training set T_{q+1}; once all images of the level-q training set T_q have been tested, the final level-(q+1) training set T_{q+1} is obtained;
In this embodiment, the threshold K of the semantic error information is set to 0.79. After the semantic error information is computed, an image at or below the threshold is considered suitable for the network of the current level; if it is larger, the image is considered unsuitable and enters the next-level data set for continued training, as shown in fig. 3;
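The threshold test of step 4.2.3 with K = 0.79 amounts to a routing rule. The function below is a sketch with hypothetical names, taking precomputed semantic-error values per image index rather than computing them:

```python
THRESHOLD_K = 0.79  # embodiment value for the semantic-error threshold

def route_level_q_samples(errors, threshold=THRESHOLD_K):
    # errors: iterable of (image_index, semantic_error_information) pairs.
    # Images at or below the threshold suit the current level-q model;
    # the rest are promoted to the level-(q+1) training set T_{q+1}.
    suitable, next_level = [], []
    for idx, e in errors:
        (suitable if e <= threshold else next_level).append(idx)
    return suitable, next_level
```

Applying this level by level up to q_max is what builds the progressively harder training sets of the hierarchy.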
Step 4.2.4, judge whether q equals q_max; if yes, the network models of all levels have been trained and step 5 is executed; otherwise let q = q+1 and return to step 3.1, obtaining the level-q crop pest and disease identification model M_q and the level-(q+1) training set T_{q+1};
In this example q_max is set to 5, so at most five models are trained to participate in recognition;
Step 5, perform feature fusion on the multi-level network outputs using fuzzy integration and output the detection result:
Given the set of learning models trained for different crop pest and disease samples, a multi-model fusion joint discrimination mechanism based on fuzzy integration is constructed;
Step 5.1, calculate the parameter w using formula (3):
in formula (3), v_j denotes the importance of the trained model M_j for crop pest and disease identification;
Step 5.2, calculate the fuzzy density v_w(T_j) of the level-j crop pest identification model for the crop pest image set T_j to be identified using formula (4):
v_w(T_j) = v_j + v_w(T_{j-1}) + w·v_j·v_w(T_{j-1})    (4)
in formula (4), v_w(T_{j-1}) denotes the fuzzy density of the level-(j-1) crop pest identification model; when j = 1, let v_w(T_{j-1}) = v_1;
In this example, the fuzzy integration parameters used are given in Table 1:
Step 5.3, use formula (5) to obtain the probability that the crop pest image x to be identified belongs to category l_x, and select the category with the maximum probability among the category probabilities as the final category of the crop pest image to be identified;
in formula (5), the first term denotes the component of the final result Y_j of the level-j detection network on category l_x; ∨ denotes taking the maximum of two values, and ∧ taking the minimum.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (7)
1. A crop pest and disease damage identification method based on multi-level adaptive attention is characterized by comprising the following steps:
s1, preprocessing the crop pest image set to obtain a preprocessed crop pest image set;
s2, initializing q = 1, dividing the preprocessed crop pest and disease image set into a level-q training image set T_q and a crop pest and disease test image set V;
s3, constructing a neural network model based on the interleaving group convolution attention module;
s4, constructing a multi-level identification network;
and S5, performing feature fusion on the multi-level network output by using fuzzy integration, and outputting a detection result.
2. The method for identifying crop pests and diseases based on multi-level adaptive attention according to claim 1, wherein the method for preprocessing the crop pest image set in step S1 comprises: augmenting and enhancing the images in the crop pest and disease image set, then uniformly resizing all images to 224 × 224.
3. The method for identifying crop pests and diseases based on multi-level adaptive attention according to claim 2, wherein the method for constructing the neural network model based on the staggered group convolution attention module in the step S3 is as follows:
s3.1, designing a feature extraction network f based on the structure of an SE-Resnet50 convolutional neural network, wherein the feature extraction network f is formed by cascading a common convolution module A, 4 staggered group convolution attention modules B, C, D, E and a full connection layer fc and is used for extracting network features;
s3.2, initialize i = 1; select the i-th input image x_i from the level-q training set T_q, feed it to the input layer of the plain convolution module A, pass it in turn through the interleaved group convolution attention modules B, C, D, E, and feed the result to the fully connected layer fc to obtain the feature map F_i^q; a softmax classifier then yields the classification result z of the input image x_i, where O is the dimension of the feature vector.
4. The crop pest and disease identification method based on multi-level adaptive attention, characterized in that the input layer of the feature extraction network f is the input layer of the plain convolution module A; the plain convolution module A is followed by 3 interleaved group convolution attention modules B, module B by 3 interleaved group convolution attention modules C, module C by (3+q) interleaved group convolution attention modules D, and module D by 3 interleaved group convolution attention modules E; the interleaved group convolution attention module E is connected to the fully connected layer fc.
5. The crop pest identification method based on multi-level adaptive attention is characterized in that the method for constructing the multi-level identification network in the step S4 is as follows:
s4.1, let i = i+1 and repeat step S3.2, taking all remaining images of the level-q training set T_q in turn as input images x_i, thereby training the level-q model M_q and obtaining the final feature maps and classification results of all images in T_q;
and S4.2, establishing the final evaluation index of the characteristic diagram.
6. The method for identifying crop pests and diseases based on multi-level adaptive attention according to claim 5, wherein the method for establishing the evaluation index of the final feature map in step S4.2 is as follows:
S4.2.1, letting i = 0 and, based on latent semantic analysis, mapping the feature map set F_i^q to a semantic space vector, then computing the semantic error distance of x_i in the q-th network by formula (1);
In formula (1), the two terms denote, respectively, the number of training samples in the same category as the recognition result of x_i, and the semantic state features of the training sample set of that category;
S4.2.2, based on entropy theory, the semantic error information of the recognition result of the crop pest test sample x_i is defined by formula (2);
S4.2.3, judging whether the semantic error information is greater than the set threshold; if so, the input image x_i is adapted to the q-th level model M_q, i = i + 1, and the process returns to step S4.2.1; otherwise, the input image x_i is put into the (q+1)-th training set T_{q+1}; this continues until all images in the q-th level training set T_q have been tested, giving the final (q+1)-th level training set T_{q+1};
S4.2.4, judging whether q is equal to q_max; if so, the network models of all levels have been trained and step S5 is executed; otherwise, q = q + 1 and the process returns to step S3.1, so as to obtain the q-th level crop pest identification model M_q and the (q+1)-th training set T_{q+1}.
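The screening loop of steps S4.2.1–S4.2.4 can be sketched as follows; `semantic_error` and the threshold value stand in for the patent's entropy-based quantity of formula (2), which is not reproduced here, so both are assumptions:

```python
def partition_training_set(samples, semantic_error, threshold):
    """Split the q-th training set T_q following the claim text:
    a sample whose semantic error information exceeds the threshold is
    judged adapted to the q-th level model M_q and stays at this level;
    any other sample is forwarded to the (q+1)-th training set T_{q+1},
    on which the next-level model will be trained."""
    next_level = []
    for x in samples:
        if semantic_error(x) > threshold:
            continue                  # x is adapted to M_q: keep it at level q
        next_level.append(x)          # x goes into T_{q+1}
    return next_level

# Illustrative run: use string length as a stand-in error and threshold 2
t_next = partition_training_set(["a", "abc", "ab"], len, 2)
```

The loop terminates exactly when T_q has been fully tested, after which the returned set plays the role of T_{q+1} in step S4.2.3.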
7. The method for identifying crop diseases and insect pests based on multi-level adaptive attention according to claim 6, wherein the method for performing feature fusion on the multi-level network outputs by using fuzzy integration and outputting the detection result in step S5 specifically comprises the following steps:
A multi-model fusion joint discrimination mechanism based on fuzzy integration is constructed for the set of learned models obtained for different crop disease and insect pest samples;
S5.1, calculating the parameter w by using formula (3);
In formula (3), v_j denotes the importance of the training model M_j for crop pest identification;
S5.2, calculating, by using formula (4), the fuzzy density v_w(T_j) of the j-th level crop pest identification model on the set T_j of crop pest images to be identified:
v_w(T_j) = v_j + v_w(T_{j-1}) + w · v_j · v_w(T_{j-1})    (4)
In formula (4), v_w(T_{j-1}) denotes the fuzzy density of the (j-1)-th level crop pest identification model; when j = 1, let v_w(T_{j-1}) = v_1;
S5.3, obtaining, by using formula (5), the probability that the crop pest image x to be identified belongs to category l_x; the category corresponding to the maximum value among the category probabilities is selected as the final category of the crop pest image to be identified.
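Formula (3) for the parameter w is not reproduced in this text; the recursion in formula (4) matches the standard Sugeno λ-fuzzy-measure construction, in which w would be the nonzero root of w + 1 = ∏_j (1 + w·v_j). A plain-Python sketch under that assumption, with illustrative importances v_j (here with Σv_j < 1, so w > 0):

```python
def solve_w(v, lo=1e-6, hi=50.0, iters=100):
    """Bisection for the Sugeno parameter w, assuming the normalization
    w + 1 = prod_j(1 + w * v_j); when sum(v) < 1 the root lies in (0, inf)."""
    def f(w):
        p = 1.0
        for vj in v:
            p *= 1.0 + w * vj
        return p - (w + 1.0)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def fuzzy_densities(v, w):
    """Formula (4): v_w(T_j) = v_j + v_w(T_{j-1}) + w * v_j * v_w(T_{j-1}),
    with the base case v_w = v_1 at the first level."""
    out = [v[0]]
    for vj in v[1:]:
        prev = out[-1]
        out.append(vj + prev + w * vj * prev)
    return out

# Illustrative model importances v_j for three levels
v = [0.2, 0.3, 0.25]
w = solve_w(v)
densities = fuzzy_densities(v, w)
```

With these v_j the solved w makes the fuzzy density of the full model set equal to 1, which is the defining normalization of a fuzzy measure and is why formula (4) can serve as the per-level weight in the fused decision of S5.3.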
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210640236.6A CN115050022A (en) | 2022-06-08 | 2022-06-08 | Crop pest and disease identification method based on multi-level self-adaptive attention |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115050022A true CN115050022A (en) | 2022-09-13 |
Family
ID=83161581
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115797789A (en) * | 2023-02-20 | 2023-03-14 | 成都东方天呈智能科技有限公司 | Cascade detector-based rice pest monitoring system and method and storage medium |
CN115797789B (en) * | 2023-02-20 | 2023-05-30 | 成都东方天呈智能科技有限公司 | Cascade detector-based rice pest monitoring system, method and storage medium |
CN116403048A (en) * | 2023-04-17 | 2023-07-07 | 哈尔滨工业大学 | Crop growth estimation model construction method based on multi-mode data fusion |
CN116403048B (en) * | 2023-04-17 | 2024-03-26 | 哈尔滨工业大学 | Crop growth estimation model construction method based on multi-mode data fusion |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |