CN113610108A - Rice pest identification method based on improved residual error network - Google Patents

Rice pest identification method based on improved residual error network

Info

Publication number
CN113610108A
Authority
CN
China
Prior art keywords
rice
capsule
vector
residual error
pest
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110760490.5A
Other languages
Chinese (zh)
Other versions
CN113610108B (en)
Inventor
郑禄
陈楚
雷建云
帖军
田莎莎
张慧丽
单一鸣
牛悦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South Central Minzu University
Original Assignee
South Central University for Nationalities
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South Central University for Nationalities
Priority to CN202110760490.5A
Publication of CN113610108A
Application granted
Publication of CN113610108B
Legal status: Active (current)
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/29Graphical models, e.g. Bayesian networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a rice pest identification method based on an improved residual error network, comprising the following steps. Training stage: Step 1: acquire a large-scale pest identification data set. Step 2: preprocess the pest identification data set by rotation, flipping, illumination processing, contrast processing, color balance processing, and sharpness processing. Step 3: construct a picture classification network model, namely an improved residual error network model. Step 4: divide the pest identification data set into a training set and a testing set in a certain proportion, train the constructed picture classification network model on the training set, and save the trained model. Testing stage: Step 5: input the test set images into the trained improved residual error network model for rice pest identification and output the identification accuracy. The method compensates for the large amount of information that a residual error network loses at its output and improves the identification accuracy of the model.

Description

Rice pest identification method based on improved residual error network
Technical Field
The invention relates to the technical field of deep learning, in particular to a rice pest identification method based on an improved residual error network.
Background
In recent years, with the rise of artificial intelligence, deep learning has gained wide attention and application in fields such as computer vision, natural language processing and affective computing, and many researchers have applied it to agriculture, making preliminary explorations of crop pest identification.
At present, convolutional neural networks are widely applied in image recognition; representative networks include AlexNet, VGG, GoogLeNet, ResNet and DenseNet, and many improved models based on them have been proposed, achieving better results than traditional manual identification of crop pests. However, when a deep convolutional neural network is constructed, the gradient signal is gradually attenuated as it is propagated backwards from the bottom layer to the top layer, which causes the loss of a large amount of feature information.
A key property of the capsule network is that it retains detailed information about the position and pose of objects in an image, which gives it a prominent place in image recognition. Embedding the capsule network into the residual error network to construct an improved residual error network model can alleviate the problem of information loss that arises when the network is constructed.
Disclosure of Invention
The technical problem to be solved by the invention is to provide, in view of the defects in the prior art, a rice pest identification method based on an improved residual error network.
The technical scheme adopted by the invention for solving the technical problems is as follows:
the invention provides a rice pest identification method based on an improved residual error network, which comprises the following steps:
Training stage:
Step 1: acquire a large-scale pest identification data set;
Step 2: preprocess the pest identification data set by rotation, flipping, illumination processing, contrast processing, color balance processing, and sharpness processing;
Step 3: construct a picture classification network model, namely an improved residual error network model: perform convolution and downsampling on the input image to extract image features, and use four BasicBlocks to reduce the size of the feature map while increasing its number of channels; encapsulate and encode the feature map, converting it into a number of capsules, then perform inter-layer routing, mapping the capsules to a given space in the approximately fully connected manner of the Dynamic Routing algorithm; for the classification and identification of multiple pest classes, the Dynamic Routing algorithm maps the capsule features to an M × N space, i.e. each class corresponds to one N-dimensional feature, then compresses the capsule features into one M-dimensional vector with a nonlinear mapping, takes the maximum of the L2 norms as the final predicted label, and finally outputs the capsule vectors;
Step 4: divide the pest identification data set into a training set and a testing set in a certain proportion, train the constructed picture classification network model on the training set, and save the trained picture classification network model;
Testing stage:
Step 5: input the test set images into the trained improved residual error network model for rice pest identification, and output the identification accuracy.
Further, in the step 1 of the present invention:
the pest identification data set has a hierarchical structure and is divided into 8 crop categories and 102 pest sub-categories, containing more than 75,000 pest samples; the rice pest categories include: rice leaf roller, rice snout moth's fly, rice leaf miner, chilo suppressalis, yellow rice borer, rice gall midge, rice stem fly, rice brown planthopper, white-backed planthopper, gray rice louse, rice water weevil, rice leafhopper, thrips graminis and rice hull pests.
Further, the method for preprocessing the pest identification data set in the step 2 of the present invention specifically comprises:
rotation, at angles of 90°, 180° and 270°;
flipping, performed both vertically and horizontally;
data enhancement techniques including illumination processing, contrast processing, color balance processing and sharpness processing are applied so that the data become balanced, the total number of pictures reaches 20,670, and the generalization ability and robustness of the model are improved.
Further, the improved residual error network model constructed in step 3 of the present invention is specifically as follows:
First, convolution and downsampling are performed on the input image to extract its features; the output feature size is computed as:

output image size = (input image size - 1) × stride + output_padding - 2 × padding + kernel_size

where stride is the step size, output_padding is the number of zero-padding layers added to the output edge, padding is the amount of padding, and kernel_size is the size of the convolution kernel;
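For reference, the sizing formula can be evaluated with a short Python helper; this is only an illustrative sketch for checking shapes, and the example values below are hypothetical rather than taken from the patent.

```python
def output_size(input_size: int, stride: int, output_padding: int,
                padding: int, kernel_size: int) -> int:
    """Evaluate the sizing formula from the text:
    (input - 1) * stride + output_padding - 2 * padding + kernel_size."""
    return (input_size - 1) * stride + output_padding - 2 * padding + kernel_size


# Example values (hypothetical): input size 7, stride 2, kernel 3,
# padding 1, output_padding 1 -> output size 14.
print(output_size(7, stride=2, output_padding=1, padding=1, kernel_size=3))  # 14
```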
Through four BasicBlocks, the size of the feature map is reduced to 7 × 7 and the number of channels is increased to 512, so that more sample features can be captured;
then the 512 × 7 × 7 feature map is encapsulated and encoded and converted into 32 capsules of size 8 × 8; after two Conv1d operations, 32 capsules of size 8 × 2 × 2 are obtained; inter-layer routing is then performed, and the capsules are mapped to a 14 × 16 space in the approximately fully connected manner of the Dynamic Routing algorithm;
In the 14-class pest identification problem, the Dynamic Routing algorithm maps the capsule features to a 14 × 16 space, i.e. each class corresponds to one 16-dimensional feature; the capsule features are then compressed into one 14-dimensional vector by the nonlinear mapping Squash, the maximum of the L2 norms is taken as the final predicted label, and the vector output of the final capsule is:

$$v_j = \frac{\|s_j\|^2}{1 + \|s_j\|^2} \cdot \frac{s_j}{\|s_j\|}$$

The derivation is as follows.
The probability that each lower-layer capsule i is connected to a higher-layer capsule j is:

$$c_{ij} = \frac{\exp(b_{ij})}{\sum_k \exp(b_{ik})}$$

where c_ij is a weight coefficient and b_ij is the prior probability that capsule i is connected to capsule j, initialized to 0. A transformation matrix W_ij is then applied to convert u_i into the prediction vector û_{j|i}:

$$\hat{u}_{j|i} = W_{ij}\, u_i$$

All resulting prediction vectors are then weighted and summed:

$$s_j = \sum_i c_{ij}\, \hat{u}_{j|i}$$

where s_j is called the total input vector of the higher-layer capsule j. The activation function ReLU of a traditional neural network is replaced with the nonlinear flattening function Squash, which keeps the direction of the vector unchanged but forces its length not to exceed 1; the vector output of the final capsule is:

$$v_j = \frac{\|s_j\|^2}{1 + \|s_j\|^2} \cdot \frac{s_j}{\|s_j\|}$$

The maximum of the L2 norms of the vector outputs is taken as the final predicted label, and each vector corresponds to the likelihood of one classification category.
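A minimal PyTorch sketch of the Squash nonlinearity defined above; the tensor shape in the example (14 class capsules of dimension 16) follows the text, while the random input is purely illustrative.

```python
import torch

def squash(s: torch.Tensor, dim: int = -1, eps: float = 1e-8) -> torch.Tensor:
    """Squash nonlinearity: keeps the direction of s_j but limits its length to < 1.
    v_j = (||s_j||^2 / (1 + ||s_j||^2)) * (s_j / ||s_j||)"""
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

# Hypothetical example: 14 class capsules of dimension 16 for one image.
s = torch.randn(1, 14, 16)
v = squash(s)
print(v.norm(dim=-1))  # every capsule length is now strictly below 1
```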
Further, the specific procedure of the Dynamic Routing algorithm of the present invention is as follows:
1) First obtain all the prediction vectors û_{j|i}, and define the number of routing iterations r and the layer l of the network to which the current input capsules belong;
2) For all input capsules i and output capsules j, define a parameter b_ij initialized to 0; the role of this parameter is described in the following steps;
3) Iterate steps 4) to 7) for r iterations;
4) Compute the routing weights c_ij of capsule i; to guarantee $\sum_j c_{ij} = 1$, a softmax function is used so that each c_ij is non-negative and the weights sum to 1. Since b_ij is initialized to 0, all c_ij are equal in the first iteration, namely 1/p, where p is the number of higher-layer capsules;
5) Perform a weighted summation of the prediction vectors:

$$s_j = \sum_i c_{ij}\, \hat{u}_{j|i}$$

6) Pass the vector from the previous step through the nonlinear function Squash, which keeps its direction unchanged but forces its length not to exceed 1; this step outputs the final vector v_j;
7) Update the weights: the new weight is the dot product of the output v_j of capsule j and the prediction vector û_{j|i} added to the original weight b_ij, i.e. $b_{ij} \leftarrow b_{ij} + \hat{u}_{j|i} \cdot v_j$; the dot product measures the similarity between the capsule's input and output; after the weights are updated, the next iteration is performed;
8) After r iterations, return the final output vector v_j.
Further, the method for training in step 4 of the present invention specifically includes:
The rice pest data set is randomly divided into a training set and a testing set in the ratio 8:2. Mini-batch gradient descent is adopted as the network training optimizer during training, with the momentum parameter set to 0.8, the initial learning rate set to 0.005, the batch size set to 32, and the number of training epochs set to 100;
the mini-batch gradient descent method is a compromise between batch gradient descent and stochastic gradient descent: for m samples, x samples are used per iteration, with 1 < x < m; here x = 10, and the value of x can be adjusted according to the sample data. The corresponding update formula is:

$$\theta_j := \theta_j - \alpha \sum_{i=t}^{t+x-1} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}$$

where the hypothesis function of the linear regression is:

$$h_\theta(x^{(i)}) = \theta_1 x^{(i)} + \theta_0$$

Here θ_0 and θ_1 are parameters; i = 1, 2, …, m indexes the samples, j = 0, 1 indexes the features, t is the index of the first sample in the current mini-batch, α is the learning rate, θ_j is the parameter being updated, and y^{(i)} is the corresponding regression value.
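A NumPy sketch of the mini-batch update described above for the linear-regression hypothesis h_θ(x) = θ₁x + θ₀; the synthetic data, the batch size x = 10 and the learning rate are illustrative values under the assumptions stated in the text.

```python
import numpy as np

def minibatch_gd(X, y, alpha=0.005, batch_size=10, epochs=100):
    """Mini-batch gradient descent for h_theta(x) = theta1 * x + theta0."""
    rng = np.random.default_rng(0)
    theta0, theta1 = 0.0, 0.0
    m = len(X)
    for _ in range(epochs):
        idx = rng.permutation(m)
        for start in range(0, m, batch_size):
            b = idx[start:start + batch_size]          # indices of one mini-batch
            err = theta1 * X[b] + theta0 - y[b]        # h_theta(x) - y on the batch
            # Update each parameter with the averaged gradient over the mini-batch.
            theta0 -= alpha * err.mean()
            theta1 -= alpha * (err * X[b]).mean()
    return theta0, theta1

# Hypothetical data: y = 2x + 1 plus noise; the fit should be close to (1, 2).
X = np.linspace(0, 1, 200)
y = 2 * X + 1 + 0.05 * np.random.default_rng(1).normal(size=200)
print(minibatch_gd(X, y))
```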
The invention has the following beneficial effects: in the rice pest identification method based on the improved residual error network, the residual error network is selected as the basic network model and a capsule network is added on top of it as the fully connected layer of the ResNet model, which compensates for the large amount of information that the residual error network loses at its output and improves the identification accuracy of the model. The advantages are: (1) the extracted image feature information is richer; (2) the model identification accuracy is higher.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
fig. 1 is a diagram of an improved residual network architecture in accordance with an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The rice pest identification method based on the improved residual error network comprises the following steps:
(1) inputting a training data set
In 2019, Wu et al. published the large-scale pest identification data set IP102 together with professional image annotation. The data set is organized hierarchically into 8 crop categories and 102 pest sub-categories. IP102 is the largest pest identification data set to date, containing more than 75,000 pest samples whose categories cover almost all of the most common pest species. IP102 therefore provides an excellent experimental reference for the pest identification field and, to a certain extent, alleviates the shortage of pest image data set samples. From the IP102 data set, the rice pests are selected for specific study; this subset covers 14 categories such as rice leaf roller, rice bollworm, rice leaf miner, chilo suppressalis, tryporyza incertulas and rice gall midge, with 8,417 images in total, and is used as the rice pest image data set for the experimental research of the invention. Table 1 gives the details of this data set.
Table 1 data set details
(2) Preprocessing of data sets
The rice pest images in the IP102 data set suffer from having too many or too few samples per class, so the sample distribution is unbalanced. To compensate for the effect of class imbalance on model identification accuracy, the method applies enhancement processing to the under-represented sample data before training. In deep learning, data enhancement refers to expanding a small data set into more data through certain technical means. Through data enhancement techniques such as rotation (at angles of 90°, 180° and 270°), flipping (vertically and horizontally), illumination processing, contrast processing, color balance processing and sharpness processing, the data become balanced, the total number of pictures reaches 20,670, and the generalization ability and robustness of the model are improved.
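One way to realize the augmentations listed above (rotation by 90°, 180° and 270°, vertical and horizontal flips, and illumination, contrast, color-balance and sharpness adjustment) is sketched below with Pillow; the enhancement factors are assumptions for illustration and are not specified in the patent.

```python
from PIL import Image, ImageEnhance

def augment(img: Image.Image) -> list:
    """Expand one sample into several augmented variants, following the
    augmentation list in the text (angles/flips fixed, enhancement factors assumed)."""
    out = [img.rotate(a, expand=True) for a in (90, 180, 270)]     # rotation
    out += [img.transpose(Image.FLIP_TOP_BOTTOM),                  # vertical flip
            img.transpose(Image.FLIP_LEFT_RIGHT)]                  # horizontal flip
    out += [ImageEnhance.Brightness(img).enhance(1.3),             # illumination
            ImageEnhance.Contrast(img).enhance(1.3),               # contrast
            ImageEnhance.Color(img).enhance(1.3),                  # color balance
            ImageEnhance.Sharpness(img).enhance(2.0)]              # sharpness
    return out

# variants = augment(Image.open("sample.jpg"))  # hypothetical path
```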
(3) Picture classification model design
Firstly, performing convolution and downsampling on an input image to extract the characteristics of the image; the method for extracting the features comprises the following steps:
output image size ═ (input image-1) stride + output mapping-2 mapping + kernel _ size
Wherein stride represents a step size, outputpadding represents the number of layers of an output edge complement 0, padding represents a filling amount, and kernel _ size represents the size of a convolution kernel;
Through four BasicBlocks, the size of the feature map is reduced to 7 × 7 and the number of channels is increased to 512, so that more sample features can be captured.
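A minimal sketch of this backbone stage, assuming torchvision's resnet18 (which stacks four BasicBlock stages) with its average-pooling and fully connected head removed so that a 224 × 224 input yields a 512 × 7 × 7 feature map; this is one plausible reading, not the patent's exact network definition.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class Backbone(nn.Module):
    """ResNet-18 up to and including its four BasicBlock stages, no avgpool/fc."""
    def __init__(self):
        super().__init__()
        base = resnet18(weights=None)
        # Drop the final average pooling and fully connected layers.
        self.features = nn.Sequential(*list(base.children())[:-2])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.features(x)

x = torch.randn(1, 3, 224, 224)
print(Backbone()(x).shape)  # torch.Size([1, 512, 7, 7])
```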
Then, the 512 × 7 × 7 feature map is encapsulated and converted into 32 8 × 8 capsules, and then the 32 8 × 2 × 2 capsules are obtained after 2 times of Conv1d, and then interlayer Routing is performed to map the capsules into a 14 × 16 space in an approximately fully connected manner (i.e., Dynamic Routing).
In the problem of 14 pest classification identification, the Dynamic Routing algorithm maps capsule features to a 14 × 16 space, that is, each class corresponds to 1 16-dimensional feature, and then compresses the capsule features into 1 14-dimensional vector by using a nonlinear mapping (that is, square), taking the maximum value of the L2 paradigm as a final predicted value label, and the vector output of the final capsule is as follows:
Figure BDA0003149518860000071
the derivation process is as follows:
the probability that each upper layer capsule i is connected to a lower layer j is:
Figure BDA0003149518860000072
in the formula cijIs a weight coefficient, bijIs the prior probability of capsule i connecting to capsule j, initially 0; then applying a transformation matrix wijWill uiConversion to prediction vectors
Figure BDA0003149518860000073
Figure BDA0003149518860000074
All resulting prediction vectors are then summed weighted:
Figure BDA0003149518860000081
wherein SjReferred to as the total input vector of the high-level capsule j. Replacing the activation function Relu of the traditional neural network with a non-linear flattening function squaring ensures that the direction of the vector remains unchanged, but its length is strongly required to not exceed 1, and the vector output of the final capsule is as follows:
Figure BDA0003149518860000082
the maximum value of the L2 paradigm of vector outputs is taken as the final predictor label, and each vector corresponds to the likelihood of one classification category.
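Reading off the prediction then amounts to taking the capsule with the largest L2 norm; a short sketch under the assumed shape of 14 class capsules of dimension 16:

```python
import torch

def predict_label(class_capsules: torch.Tensor) -> torch.Tensor:
    """class_capsules: (batch, 14, 16) output vectors of the class capsules.
    Returns the index of the capsule with the largest L2 norm per sample."""
    lengths = class_capsules.norm(dim=-1)   # (batch, 14) capsule lengths
    return lengths.argmax(dim=-1)           # predicted class indices

v = torch.randn(4, 14, 16)                  # hypothetical batch of capsule outputs
print(predict_label(v))
```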
The specific procedure of the Dynamic Routing algorithm is as follows:
1) First obtain all the prediction vectors û_{j|i}, and define the number of routing iterations r and the layer l of the network to which the current input capsules belong;
2) For all input capsules i and output capsules j, define a parameter b_ij initialized to 0; the role of this parameter is described in the following steps;
3) Iterate steps 4) to 7) for r iterations;
4) Compute the routing weights c_ij of capsule i; to guarantee $\sum_j c_{ij} = 1$, a softmax function is used so that each c_ij is non-negative and the weights sum to 1. Since b_ij is initialized to 0, all c_ij are equal in the first iteration, namely 1/p, where p is the number of higher-layer capsules;
5) Perform a weighted summation of the prediction vectors:

$$s_j = \sum_i c_{ij}\, \hat{u}_{j|i}$$

6) Pass the vector from the previous step through the nonlinear function Squash, which keeps its direction unchanged but forces its length not to exceed 1; this step outputs the final vector v_j;
7) Update the weights: the new weight is the dot product of the output v_j of capsule j and the prediction vector û_{j|i} added to the original weight b_ij, i.e. $b_{ij} \leftarrow b_{ij} + \hat{u}_{j|i} \cdot v_j$; the dot product measures the similarity between the capsule's input and output; after the weights are updated, the next iteration is performed;
8) After r iterations, return the final output vector v_j.
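A compact PyTorch sketch of the routing loop in steps 1) to 8); the prediction vectors û_{j|i} are assumed to be given as a tensor of shape (batch, num_in, num_out, out_dim), and the number of iterations r and the capsule counts in the example are illustrative.

```python
import torch
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    sq = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq / (1.0 + sq)) * s / torch.sqrt(sq + eps)

def dynamic_routing(u_hat: torch.Tensor, r: int = 3) -> torch.Tensor:
    """u_hat: prediction vectors, shape (batch, num_in, num_out, out_dim).
    Returns the output capsules v_j, shape (batch, num_out, out_dim)."""
    b = torch.zeros(*u_hat.shape[:3], device=u_hat.device)   # step 2): b_ij starts at 0
    for _ in range(r):                                        # steps 3)-7), r iterations
        c = F.softmax(b, dim=2)                               # step 4): weights c_ij sum to 1 over j
        s = (c.unsqueeze(-1) * u_hat).sum(dim=1)              # step 5): weighted sum over i
        v = squash(s)                                         # step 6): squash to length < 1
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)          # step 7): agreement update
    return v                                                  # step 8): final output v_j

# Hypothetical shapes: 128 input capsules routed to 14 output capsules of dim 16.
u_hat = torch.randn(2, 128, 14, 16)
print(dynamic_routing(u_hat).shape)   # torch.Size([2, 14, 16])
```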
(4) Network model learning training
The rice pest data set is randomly divided into a training set and a testing set in the ratio 8:2. Mini-batch gradient descent is adopted as the network training optimizer during training, with the momentum parameter set to 0.8, the initial learning rate set to 0.005, the batch size set to 32, and the number of training epochs set to 100.
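These settings translate into a setup along the following lines in PyTorch, where the dataset object and the model are placeholders and SGD with momentum plays the role of the mini-batch gradient descent optimizer described in the text.

```python
import torch
from torch.utils.data import DataLoader, random_split

def make_training_setup(model: torch.nn.Module, dataset):
    """Sketch of the training configuration described in the text."""
    n_train = int(0.8 * len(dataset))                      # 8:2 train/test split
    train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])
    train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
    test_loader = DataLoader(test_set, batch_size=32)
    # SGD on mini-batches with momentum 0.8 and learning rate 0.005.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.8)
    return train_loader, test_loader, optimizer
```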
The mini-batch gradient descent method is a compromise between batch gradient descent and stochastic gradient descent: for m samples, x samples are used per iteration, with 1 < x < m. Generally x may be taken as 10, and of course the value of x can be adjusted according to the sample data. The corresponding update formula is:

$$\theta_j := \theta_j - \alpha \sum_{i=t}^{t+x-1} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}$$

where the hypothesis function of the linear regression is:

$$h_\theta(x^{(i)}) = \theta_1 x^{(i)} + \theta_0$$

Here θ_0 and θ_1 are parameters; i = 1, 2, …, m indexes the samples, j = 0, 1 indexes the features, t is the index of the first sample in the current mini-batch, α is the learning rate, θ_j is the parameter being updated, and y^{(i)} is the corresponding regression value.
(5) The test set images are input into the trained improved residual error network model for rice pest identification, and the identification accuracy is output.
It will be understood that modifications and variations can be made by persons skilled in the art in light of the above teachings and all such modifications and variations are intended to be included within the scope of the invention as defined in the appended claims.

Claims (6)

1. A rice pest identification method based on an improved residual error network is characterized by comprising the following steps:
Training stage:
Step 1: acquire a large-scale pest identification data set;
Step 2: preprocess the pest identification data set by rotation, flipping, illumination processing, contrast processing, color balance processing, and sharpness processing;
Step 3: construct a picture classification network model, namely an improved residual error network model: perform convolution and downsampling on the input image to extract image features, and use four BasicBlocks to reduce the size of the feature map while increasing its number of channels; encapsulate and encode the feature map, converting it into a number of capsules, then perform inter-layer routing, mapping the capsules to a given space in the approximately fully connected manner of the Dynamic Routing algorithm; for the classification and identification of multiple pest classes, the Dynamic Routing algorithm maps the capsule features to an M × N space, i.e. each class corresponds to one N-dimensional feature, then compresses the capsule features into one M-dimensional vector with a nonlinear mapping, takes the maximum of the L2 norms as the final predicted label, and finally outputs the capsule vectors;
Step 4: divide the pest identification data set into a training set and a testing set in a certain proportion, train the constructed picture classification network model on the training set, and save the trained picture classification network model;
Testing stage:
Step 5: input the test set images into the trained improved residual error network model for rice pest identification, and output the identification accuracy.
2. The method for identifying rice pests based on the improved residual error network as claimed in claim 1, wherein in the step 1:
the pest identification data set has a hierarchical structure and is divided into 8 crop categories and 102 pest sub-categories, containing more than 75,000 pest samples; the rice pest categories include: rice leaf roller, rice snout moth's fly, rice leaf miner, chilo suppressalis, yellow rice borer, rice gall midge, rice stem fly, rice brown planthopper, white-backed planthopper, gray rice louse, rice water weevil, rice leafhopper, thrips graminis and rice hull pests.
3. The rice pest identification method based on the improved residual error network as claimed in claim 1, wherein the method for preprocessing the pest identification data set in the step 2 specifically comprises:
rotation, at angles of 90°, 180° and 270°;
flipping, performed both vertically and horizontally;
data enhancement techniques including illumination processing, contrast processing, color balance processing and sharpness processing are applied so that the data become balanced, the total number of pictures reaches 20,670, and the generalization ability and robustness of the model are improved.
4. The method for identifying rice pests based on the improved residual error network as claimed in claim 1, wherein the improved residual error network model constructed in the step 3 is specifically:
first, convolution and downsampling are performed on the input image to extract its features; the output feature size is computed as:

output image size = (input image size - 1) × stride + output_padding - 2 × padding + kernel_size

where stride is the step size, output_padding is the number of zero-padding layers added to the output edge, padding is the amount of padding, and kernel_size is the size of the convolution kernel;
through four BasicBlocks, the size of the feature map is reduced to 7 × 7 and the number of channels is increased to 512, so that more sample features can be captured;
then the 512 × 7 × 7 feature map is encapsulated and encoded and converted into 32 capsules of size 8 × 8; after two convolutions, 32 capsules of size 8 × 2 × 2 are finally obtained; inter-layer routing is then performed, and the capsules are mapped to a 14 × 16 space in the approximately fully connected manner of the Dynamic Routing algorithm;
in the 14-class pest identification problem, the Dynamic Routing algorithm maps the capsule features to a 14 × 16 space, i.e. each class corresponds to one 16-dimensional feature; the capsule features are then compressed into one 14-dimensional vector by the nonlinear mapping Squash, the maximum of the L2 norms is taken as the final predicted label, and the vector output of the final capsule is:

$$v_j = \frac{\|s_j\|^2}{1 + \|s_j\|^2} \cdot \frac{s_j}{\|s_j\|}$$

the derivation is as follows:
the probability that each lower-layer capsule i is connected to a higher-layer capsule j is:

$$c_{ij} = \frac{\exp(b_{ij})}{\sum_k \exp(b_{ik})}$$

where c_ij is a weight coefficient and b_ij is the prior probability that capsule i is connected to capsule j, initialized to 0; a transformation matrix W_ij is then applied to convert u_i into the prediction vector û_{j|i}:

$$\hat{u}_{j|i} = W_{ij}\, u_i$$

all resulting prediction vectors are then weighted and summed:

$$s_j = \sum_i c_{ij}\, \hat{u}_{j|i}$$

where s_j is called the total input vector of the higher-layer capsule j; the activation function ReLU of a traditional neural network is replaced with the nonlinear flattening function Squash, which keeps the direction of the vector unchanged but forces its length not to exceed 1, and the vector output of the final capsule is:

$$v_j = \frac{\|s_j\|^2}{1 + \|s_j\|^2} \cdot \frac{s_j}{\|s_j\|}$$

the maximum of the L2 norms of the vector outputs is taken as the final predicted label, and each vector corresponds to the likelihood of one classification category.
5. The method for identifying rice pests based on the improved residual error network as claimed in claim 4, wherein the specific method of the Dynamic Routing algorithm is as follows:
1) First obtain all the prediction vectors û_{j|i}, and define the number of routing iterations r and the layer l of the network to which the current input capsules belong;
2) For all input capsules i and output capsules j, define a parameter b_ij initialized to 0; the role of this parameter is described in the following steps;
3) Iterate steps 4) to 7) for r iterations;
4) Compute the routing weights c_ij of capsule i; to guarantee $\sum_j c_{ij} = 1$, a softmax function is used so that each c_ij is non-negative and the weights sum to 1. Since b_ij is initialized to 0, all c_ij are equal in the first iteration, namely 1/p, where p is the number of higher-layer capsules;
5) Perform a weighted summation of the prediction vectors:

$$s_j = \sum_i c_{ij}\, \hat{u}_{j|i}$$

6) Pass the vector from the previous step through the nonlinear function Squash, which keeps its direction unchanged but forces its length not to exceed 1; this step outputs the final vector v_j;
7) Update the weights: the new weight is the dot product of the output v_j of capsule j and the prediction vector û_{j|i} added to the original weight b_ij, i.e. $b_{ij} \leftarrow b_{ij} + \hat{u}_{j|i} \cdot v_j$; the dot product measures the similarity between the capsule's input and output; after the weights are updated, the next iteration is performed;
8) After r iterations, return the final output vector v_j.
6. The method for identifying rice pests based on the improved residual error network as claimed in claim 1, wherein the training in the step 4 specifically comprises:
the rice pest data set is randomly divided into a training set and a testing set in the ratio 8:2; mini-batch gradient descent is adopted as the network training optimizer during training, with the momentum parameter set to 0.8, the initial learning rate set to 0.005, the batch size set to 32, and the number of training epochs set to 100;
the mini-batch gradient descent method is a compromise between batch gradient descent and stochastic gradient descent: for m samples, x samples are used per iteration, with 1 < x < m; here x = 10, and the value of x can be adjusted according to the sample data; the corresponding update formula is:

$$\theta_j := \theta_j - \alpha \sum_{i=t}^{t+x-1} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}$$

where the hypothesis function of the linear regression is:

$$h_\theta(x^{(i)}) = \theta_1 x^{(i)} + \theta_0$$

in which θ_0 and θ_1 are parameters; i = 1, 2, …, m indexes the samples, j = 0, 1 indexes the features, t is the index of the first sample in the current mini-batch, α is the learning rate, θ_j is the parameter being updated, and y^{(i)} is the corresponding regression value.
CN202110760490.5A 2021-07-06 2021-07-06 Rice pest identification method based on improved residual error network Active CN113610108B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110760490.5A CN113610108B (en) 2021-07-06 2021-07-06 Rice pest identification method based on improved residual error network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110760490.5A CN113610108B (en) 2021-07-06 2021-07-06 Rice pest identification method based on improved residual error network

Publications (2)

Publication Number Publication Date
CN113610108A true CN113610108A (en) 2021-11-05
CN113610108B CN113610108B (en) 2022-05-20

Family

ID=78304069

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110760490.5A Active CN113610108B (en) 2021-07-06 2021-07-06 Rice pest identification method based on improved residual error network

Country Status (1)

Country Link
CN (1) CN113610108B (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060288448A1 (en) * 2005-06-08 2006-12-21 Pioneer Hi-Bred International, Inc. Insect-specific protease recognition sequences
CN108648191A (en) * 2018-05-17 2018-10-12 吉林大学 Pest image-recognizing method based on Bayes's width residual error neural network
CN110647923A (en) * 2019-09-04 2020-01-03 西安交通大学 Variable working condition mechanical fault intelligent diagnosis method based on self-learning under small sample
CN111241958A (en) * 2020-01-06 2020-06-05 电子科技大学 Video image identification method based on residual error-capsule network
CN111257320A (en) * 2020-03-12 2020-06-09 中南林业科技大学 Wisdom forestry monitoring system
CN112233106A (en) * 2020-10-29 2021-01-15 电子科技大学中山学院 Thyroid cancer ultrasonic image analysis method based on residual capsule network
CN112348119A (en) * 2020-11-30 2021-02-09 华平信息技术股份有限公司 Image classification method based on capsule network, storage medium and electronic equipment
CN112733701A (en) * 2021-01-07 2021-04-30 中国电子科技集团公司信息科学研究院 Robust scene recognition method and system based on capsule network
CN112906813A (en) * 2021-03-09 2021-06-04 中南大学 Flotation condition identification method based on density clustering and capsule neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Chen Zhiyong et al.: "A cotton grade classification algorithm based on convolutional neural network parameter optimization", China Fiber Inspection *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114821182A (en) * 2022-05-05 2022-07-29 安徽农业大学 Rice growth stage image recognition method
CN115457414B (en) * 2022-09-15 2023-05-05 西华大学 Unmanned aerial vehicle abnormal behavior identification method based on improved residual error network

Also Published As

Publication number Publication date
CN113610108B (en) 2022-05-20

Similar Documents

Publication Publication Date Title
CN110020682B (en) Attention mechanism relation comparison network model method based on small sample learning
WO2020200030A1 (en) Neural network training method, image processing method, image processing device, and storage medium
WO2022252272A1 (en) Transfer learning-based method for improved vgg16 network pig identity recognition
CN107833183B (en) Method for simultaneously super-resolving and coloring satellite image based on multitask deep neural network
CN105631479B (en) Depth convolutional network image labeling method and device based on non-equilibrium study
CN109558832A (en) A kind of human body attitude detection method, device, equipment and storage medium
CN109299716A (en) Training method, image partition method, device, equipment and the medium of neural network
CN113610108B (en) Rice pest identification method based on improved residual error network
CN111696101A (en) Light-weight solanaceae disease identification method based on SE-Inception
CN108804397A (en) A method of the Chinese character style conversion based on a small amount of target font generates
CN110599502B (en) Skin lesion segmentation method based on deep learning
CN112329760A (en) Method for recognizing and translating Mongolian in printed form from end to end based on space transformation network
CN112115967B (en) Image increment learning method based on data protection
CN109948696A (en) A kind of multilingual scene character recognition method and system
CN110598552A (en) Expression recognition method based on improved particle swarm optimization convolutional neural network optimization
CN114283345A (en) Small sample city remote sensing image information extraction method based on meta-learning and attention
CN113807340B (en) Attention mechanism-based irregular natural scene text recognition method
CN116311483B (en) Micro-expression recognition method based on local facial area reconstruction and memory contrast learning
CN113378812A (en) Digital dial plate identification method based on Mask R-CNN and CRNN
CN116258990A (en) Cross-modal affinity-based small sample reference video target segmentation method
CN116188509A (en) High-efficiency three-dimensional image segmentation method
CN111310820A (en) Foundation meteorological cloud chart classification method based on cross validation depth CNN feature integration
CN114821736A (en) Multi-modal face recognition method, device, equipment and medium based on contrast learning
CN114648667A (en) Bird image fine-granularity identification method based on lightweight bilinear CNN model
CN108764233A (en) A kind of scene character recognition method based on continuous convolution activation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20211105

Assignee: Yunnan Ziying economic and Trade Co.,Ltd.

Assignor: SOUTH CENTRAL University FOR NATIONALITIES

Contract record no.: X2023420000234

Denomination of invention: A Method for Identifying Rice Pests Based on Improved Residual Network

Granted publication date: 20220520

License type: Common License

Record date: 20230710

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20211105

Assignee: YUNNAN HANGYUE AGRICULTURE TECHNOLOGY CO.,LTD.

Assignor: SOUTH CENTRAL University FOR NATIONALITIES

Contract record no.: X2023420000267

Denomination of invention: A Method for Identifying Rice Pests Based on Improved Residual Network

Granted publication date: 20220520

License type: Common License

Record date: 20230802

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20211105

Assignee: Yunnan Shuaixiao Sauce Agricultural Technology Co.,Ltd.

Assignor: SOUTH CENTRAL University FOR NATIONALITIES

Contract record no.: X2023420000272

Denomination of invention: A Method for Identifying Rice Pests Based on Improved Residual Network

Granted publication date: 20220520

License type: Common License

Record date: 20230804

Application publication date: 20211105

Assignee: Yunnan Shengmai Agricultural Technology Co.,Ltd.

Assignor: SOUTH CENTRAL University FOR NATIONALITIES

Contract record no.: X2023420000275

Denomination of invention: A Method for Identifying Rice Pests Based on Improved Residual Network

Granted publication date: 20220520

License type: Common License

Record date: 20230804

Application publication date: 20211105

Assignee: Yunnan Shalang Rural Tourism Resources Development Co.,Ltd.

Assignor: SOUTH CENTRAL University FOR NATIONALITIES

Contract record no.: X2023420000273

Denomination of invention: A Method for Identifying Rice Pests Based on Improved Residual Network

Granted publication date: 20220520

License type: Common License

Record date: 20230804

Application publication date: 20211105

Assignee: Yunnan Shuai Toudou Agricultural Technology Co.,Ltd.

Assignor: SOUTH CENTRAL University FOR NATIONALITIES

Contract record no.: X2023420000274

Denomination of invention: A Method for Identifying Rice Pests Based on Improved Residual Network

Granted publication date: 20220520

License type: Common License

Record date: 20230804

OL01 Intention to license declared