CN111931581A - Agricultural pest identification method based on convolutional neural network, terminal and readable storage medium - Google Patents
- Publication number: CN111931581A
- Application number: CN202010665034.8A
- Authority: CN (China)
- Legal status: Pending (an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V20/10 — Image or video recognition or understanding; scenes; terrestrial scenes
- G06F18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06N3/045 — Neural networks; combinations of networks
Abstract
The invention provides an agricultural pest identification method based on a convolutional neural network, a terminal, and a readable storage medium. The method collects insect image information; sets labels for the insect image information and performs feature extraction to obtain feature mapping information; judges the state information of the feature region information; maps the candidate regions to fixed-size ROIs on the feature map and judges the target category; inputs the test set into the training model and detects the insect information in the candidate-region pictures; and, when the precision of the detected candidate-region pictures reaches a preset value, finishes the establishment of the training model, through which insect information in images is then detected. The invention improves the recognizability of insect image information and the detection precision of insect identification. The method can be used effectively to identify pest images in agricultural pest image recognition, thereby improving the capability of automated agricultural monitoring.
Description
Technical Field
The invention relates to the technical field of pest image identification, and in particular to an agricultural pest identification method based on a convolutional neural network, a terminal, and a readable storage medium.
Background
With the progress of technological innovation, technology-driven agriculture has gradually developed, and many agricultural technologies have been derived from it, including pest image recognition. Pest damage can accompany the entire growth period of crops, and if pests and diseases are not treated in time, crop yield can be greatly reduced. Some existing methods for identifying pest images use convolutional neural networks, on the assumption that the deeper the network, the better the model performance and the more accurate the detection result. In practice, however, once the network depth increases beyond a certain point, the accuracy of the network decreases, so pest image information cannot be identified accurately, which in turn affects crop growth.
Disclosure of Invention
In order to overcome the above defects in the prior art, the invention provides an agricultural pest identification method based on a convolutional neural network, which comprises the following steps:
step one, collecting insect image information;
step two, setting labels for the insect image information, and dividing the labeled insect image data into a training set and a test set;
step three, inputting the insect image data of the training set into a Resnet50 network, and performing feature extraction to obtain feature mapping information;
step four, inputting the obtained feature mapping information into an RPN network to generate a region proposals layer;
judging the state information of the feature region information in the region proposals layer through softmax;
if the state information of the feature region information does not meet the preset requirement, correcting the feature region information by using bounding box regression to obtain candidate regions of preset precision;
step five, collecting the feature mapping information obtained in step three and the candidate regions obtained in step four through ROI pooling, mapping the candidate regions to the fixed-size ROIs obtained on the feature map, and sending the fixed-size ROIs to a subsequent fully connected layer to judge the target category;
step six, classifying the categories by utilizing the proposal feature maps, calculating the loss function, re-acquiring the unidentified parts of the candidate regions, and updating the network parameters to obtain a training model;
step seven, inputting the test set into the training model, and detecting the insect information in the candidate-region pictures;
and when the precision of the detected candidate-region pictures reaches a preset value, finishing the establishment of the training model, through which insect information in images is then detected.
Preferably, in step one, a crawler tool is used to crawl a single-picture set of various insects, and each insect image in the single-picture set is assigned a known insect category;
background removal is performed on the obtained insect images using OpenCV, followed by transparentization;
a preset number of insect images is selected to generate a simulated picture combining a plurality of insects, and labels are set for the insect image information in the simulated picture.
Preferably, in step two,
labels are set for the insect image information using LabelImg;
80% of the insect image information in the simulated pictures is used as the training set, and 20% as the test set.
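As an illustrative sketch only (the function name and random seed are assumptions, not part of the patent), the 80%/20% division described above can be expressed as:

```python
import random

def split_dataset(samples, train_ratio=0.8, seed=42):
    """Shuffle labeled samples and split them into training and test sets.

    A minimal sketch of the 80%/20% split described in step two;
    shuffling before splitting avoids ordering bias in the crawled data.
    """
    shuffled = samples[:]
    random.Random(seed).shuffle(shuffled)  # deterministic shuffle for reproducibility
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

train_set, test_set = split_dataset(list(range(100)))
print(len(train_set), len(test_set))  # 80 20
```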
Preferably, in step three,
the method for extracting features with the Resnet50 network comprises:
extracting the feature mapping information of the image using a group of basic conv + relu + pooling layers;
the insect image information is processed through the Resnet50 network with successive convolution and pooling to extract the feature mapping information, which is shared by the subsequent RPN layer and the fully connected layer.
Preferably, in step four,
in the RPN network, a 3 × 3 convolution is performed on the feature mapping information extracted in step three to obtain a plurality of anchor boxes that concentrate the feature information;
the feature information then enters two different 1 × 1 convolutions;
one convolution is a classification layer, which judges through a softmax function whether the anchors are foreground or background;
that is, a binary classification is performed for each anchor to judge whether the image is an insect to be identified, and finally the foreground and background probabilities of the anchors are output;
the other convolution layer is a positioning layer, which corrects the feature region information using bounding box regression to obtain the candidate regions, and outputs 4 coordinate offsets for the feature region information.
Preferably, in step five,
mapping the candidate regions generated by the RPN onto the feature map to obtain the ROIs;
and adjusting the ROIs to a fixed size, and sending the adjusted ROIs to the fully connected layer to judge the target category.
Preferably, in step seven, the model training comprises: RPN network training and Faster RCNN training;
when training the RPN network, the convolutional layers are used to extract the feature mapping information, and the loss function used by the whole network is as follows:
wherein:
p_i is the predicted probability that anchor i is the target;
t_i = {t_x, t_y, t_w, t_h} is a vector representing the 4 parameterized coordinates of the predicted bounding box;
the candidate regions are acquired through the trained RPN network;
the positive rois and the positive softmax probabilities are acquired simultaneously using the preceding RPN network;
the method for training the Faster RCNN network comprises:
passing the candidate regions and positive probabilities obtained in the previous step into the network as the rois;
calculating bbox_inside_weights + bbox_outside_weights, passing them into the smooth_L1_loss layer, and performing the softmax classification and the final bounding box regression;
through comparison over a preset number of runs, for the current application scenario of the algorithm, the loss function is minimal when the number of epochs is 21, at which point the insect image recognition accuracy is highest.
The invention also provides a terminal for realizing the agricultural pest identification method based on the convolutional neural network, which comprises: a memory for storing a computer program and the convolutional neural network-based agricultural pest identification method; and a processor for executing the computer program and the method, so as to realize the steps of the agricultural pest identification method based on the convolutional neural network.
The present invention also provides a readable storage medium having a convolutional neural network-based agricultural pest identification method, the readable storage medium having stored thereon a computer program, the computer program being executed by a processor to implement the steps of the convolutional neural network-based agricultural pest identification method.
According to the technical scheme, the invention has the following advantages:
the agricultural pest identification method based on the convolutional neural network improves the identification degree of insect image information and improves the detection precision of identifying insects. The agricultural pest identification method based on the convolutional neural network can be effectively utilized to identify pest images in agricultural pest image identification, so that the agricultural automatic monitoring capability is improved, and when agricultural pests appear, the agricultural pests can be accurately known and identified based on the identification method provided by the invention.
Drawings
To illustrate the technical solution of the present invention more clearly, the drawings used in the description are briefly introduced below. The drawings described below are only some embodiments of the present invention, and those skilled in the art can obtain other drawings based on them without creative effort.
FIG. 1 is a flow chart of a convolutional neural network-based agricultural pest identification method;
FIG. 2 is a schematic diagram of a moth;
FIG. 3 is a four-dimensional vector diagram.
Detailed Description
Those of ordinary skill in the art will appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of both, and the components and steps of the examples have been described above in general functional terms in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
The invention provides an agricultural pest identification method based on a convolutional neural network, which comprises the following steps of:
s11, collecting insect image information;
the method is limited by the recognition capability of common users to insects, the manual insect labeling mode is too time-consuming and labor-consuming, and in order to increase the training sample number of the algorithm, the method for automatically generating the training set by utilizing the known classification specifically comprises the following steps:
a single picture set of various insects is first crawled using a crawler tool, and the insect categories represented by each picture in the single picture set are known.
Then, background removal and transparentization are performed on the obtained images using OpenCV. OpenCV is a BSD-licensed (open-source) cross-platform computer vision and machine learning software library that can run on Linux, Windows, Android, and Mac OS operating systems. It is lightweight and efficient, consists of a series of C functions and a small number of C++ classes, provides interfaces for languages such as Python, Ruby, and MATLAB, and implements many general-purpose algorithms for image processing and computer vision.
Then, a random number of insect pictures is selected, and image augmentation methods, including but not limited to flipping and cropping, color changes, and image superposition, are used to generate a simulated picture combining a plurality of insects, with the corresponding labels generated automatically.
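The compositing and automatic labeling step can be sketched as follows. This is an illustrative pure-Python stand-in for the OpenCV-based compositing the patent describes, not its actual code; the function name, the nested-list image representation, and the (top, left, bottom, right) label format are all assumptions.

```python
def paste_sprite(background, sprite, alpha, top, left):
    """Paste a transparent insect sprite onto a background image and
    return the composited image plus an auto-generated bounding-box
    label (top, left, bottom, right) for that insect.

    `background` and `sprite` are nested lists of pixel values;
    `alpha` marks which sprite pixels are opaque (1) vs transparent (0).
    """
    out = [row[:] for row in background]      # copy so the background is untouched
    for i, row in enumerate(sprite):
        for j, px in enumerate(row):
            if alpha[i][j]:                   # copy only non-transparent pixels
                out[top + i][left + j] = px
    label = (top, left, top + len(sprite), left + len(sprite[0]))
    return out, label

bg = [[0] * 6 for _ in range(6)]
sprite = [[9, 9], [9, 9]]
alpha = [[1, 0], [1, 1]]
img, box = paste_sprite(bg, sprite, alpha, 2, 3)
print(box)        # (2, 3, 4, 5)
print(img[2][4])  # 0 — the transparent sprite pixel left the background intact
```

Repeating this for several sprites at random positions yields a simulated multi-insect picture whose labels are known by construction, which is the point of the automatic training-set generation.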
S12, setting labels for the insect image information, and dividing the labeled insect image data into a training set and a test set;
LabelImg is used to label the pictures: 80% of the data is used as the training set and 20% as the test set. LabelImg is a dataset annotation tool for deep network training.
The number of pictures trained per epoch is 1600, and the batch size is 80, so the training set contains 1600/80 = 20 batches.
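The batch arithmetic above (the "1600/80" figure) amounts to integer division; a one-line sketch, with an assumed function name:

```python
def batches_per_epoch(num_images, batch_size):
    """Number of full batches per epoch, as in 1600 / 80 = 20."""
    return num_images // batch_size  # floor division: partial batches are dropped

print(batches_per_epoch(1600, 80))  # 20
```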
S13, inputting the insect image data of the training set into a Resnet50 network, and performing feature extraction to obtain feature mapping information;
in the present invention, the feature mapping information may be expressed as feature maps. The method for extracting features with the Resnet50 network comprises:
extracting the feature maps of the images using a group of basic conv + relu + pooling layers. Resnet50 has two basic blocks, named Conv Block and Identity Block: the Conv Block is used to change the dimensions of the network, and the Identity Block is used to deepen the network;
the structures of Conv Block and Identity Block are shown in the following tables, respectively:
the overall Resnet50 network structure is
The pictures are continuously convolved and pooled by the Resnet50 network to finally extract the feature maps, which can be shared by the subsequent RPN layer and the fully connected layer.
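The successive convolution and pooling shrink the spatial size of the feature maps by the network's overall stride. The sketch below computes that size assuming the feature map is taken after ResNet50's conv4 stage (overall stride 16), a common choice in Faster R-CNN implementations; the patent does not specify where the shared feature map is taken, so the layer choice here is an assumption.

```python
def conv_out(size, kernel, stride, pad):
    """Spatial output size of a single conv/pool layer."""
    return (size + 2 * pad - kernel) // stride + 1

def resnet50_feature_size(size):
    """Side length of the shared feature map for a square input,
    assuming the map is taken after conv4 (overall stride 16):
    7x7/2 conv, 3x3/2 max pool, then two stride-2 stage transitions."""
    size = conv_out(size, 7, 2, 3)  # conv1: 7x7, stride 2, pad 3
    size = conv_out(size, 3, 2, 1)  # 3x3 max pool, stride 2
    size = conv_out(size, 1, 2, 0)  # conv3_x downsampling projection
    size = conv_out(size, 1, 2, 0)  # conv4_x downsampling projection
    return size

print(resnet50_feature_size(800))  # 50  (i.e. 800 / 16)
```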
S14, inputting the obtained feature mapping information into the RPN network to generate a region proposals layer;
judging the state information of the feature region information in the region proposals layer through softmax;
if the state information of the feature region information does not meet the preset requirement, correcting the feature region information using bounding box regression to obtain candidate regions (proposals) of preset precision;
specifically, the feature region information is the anchors, and the candidate regions are the proposals.
In the RPN network, the feature maps extracted in step (3) are first passed through a 3 × 3 convolution to obtain multiple anchor boxes, which further concentrate the feature information; the feature information then enters two different branches, i.e. two different 1 × 1 convolutions. One convolution is a classification layer: a softmax function judges whether the anchors are foreground or background, i.e. a binary classification is made for each anchor to judge whether the image is an insect to be identified, and finally the foreground and background probabilities of the anchors are output. The other convolution layer is a positioning layer: the anchors are corrected using bounding box regression to obtain accurate proposals, and 4 coordinate offsets of the anchors are output.
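Anchor generation over the shared feature map can be sketched as below. The stride, scales, and aspect ratios are common Faster R-CNN defaults and are assumptions here — the patent does not state which values it uses.

```python
import math

def generate_anchors(feat_h, feat_w, stride=16,
                     scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    """Generate (cx, cy, w, h) anchor boxes for every cell of the
    feature map: k = len(scales) * len(ratios) anchors per cell.

    Each anchor keeps the area scale**2 while varying its aspect ratio,
    so the RPN's two 1x1-conv branches can score and regress 2k and 4k
    outputs per cell respectively.
    """
    anchors = []
    for y in range(feat_h):
        for x in range(feat_w):
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride  # cell center in image coords
            for s in scales:
                for r in ratios:
                    w = s * math.sqrt(1.0 / r)
                    h = s * math.sqrt(r)
                    anchors.append((cx, cy, w, h))
    return anchors

anchors = generate_anchors(2, 2)
print(len(anchors))  # 36 = 2 * 2 cells * 9 anchors per cell
```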
For the training of the bounding box regression, as shown in fig. 2, the large box around the moth is the ground truth (GT) and the small box is an extracted positive anchor; even if the small box is recognized as a moth by the classifier, the inaccurate positioning of the small box means the moth is not correctly detected. A method is therefore used to fine-tune the small boxes so that the positive anchors come closer to the GT.
A four-dimensional vector (x, y, w, h) is typically used for the window, representing the center point coordinates and width and height of the window, respectively.
For fig. 3, box A represents the original positive anchors and box G represents the GT of the target. The goal is to find a relation that maps the input original anchors A to a regression window G' that is closer to the real window G, i.e.:
given anchor A = (A_x, A_y, A_w, A_h) and GT = (G_x, G_y, G_w, G_h),
find a transformation F such that F(A_x, A_y, A_w, A_h) = (G'_x, G'_y, G'_w, G'_h), where (G'_x, G'_y, G'_w, G'_h) ≈ (G_x, G_y, G_w, G_h).
In FIG. 3, the way of changing A to G' is:
first apply a translation
G'_x = A_w · d_x(A) + A_x
G'_y = A_h · d_y(A) + A_y
then apply a scaling
G'_w = A_w · exp(d_w(A))
G'_h = A_h · exp(d_h(A))
Observing the above 4 formulas, what needs to be learned are the four transformations d_x(A), d_y(A), d_w(A), d_h(A). When the input anchor A and the GT differ by only a small amount, the transformation can be considered linear, and the window can then be fine-tuned by modeling with linear regression (note that a linear regression model can only be used when A and GT are relatively close; otherwise a complex non-linear problem is encountered).
The next question is how to obtain d_x(A), d_y(A), d_w(A), d_h(A) by linear regression. Linear regression means: given an input feature vector X, learn a set of parameters W such that the regressed value is very close to the true value Y, i.e. Y ≈ WX. For this problem, the input X is the CNN feature map, denoted φ, together with the transformation amounts (t_x, t_y, t_w, t_h) between A and GT; the outputs are the four transformations d_x(A), d_y(A), d_w(A), d_h(A). The objective function can then be expressed as:
d_*(A) = W_*^T · φ(A)
where φ(A) is the feature vector composed of the feature maps corresponding to the anchor, W_* is the parameter to be learned, and d_*(A) is the resulting prediction (* stands for x, y, w, h, i.e. one for each transformation). To make the predicted value d_*(A) differ as little as possible from the true value t_*, the L1 loss function is designed:
Loss = Σ_i | t_*^i − W_*^T · φ(A^i) |
and the optimization objective of the function is:
Ŵ_* = argmin over W_* of Σ_i | t_*^i − W_*^T · φ(A^i) | + λ‖W_*‖.
for convenience of description, the L1 loss is taken as an example, and the soomth-L1 loss is generally used in the real case.
It should be noted that the linear transformation is approximately considered to be established only when GT is relatively close to the position of the required regression frame.
This principle corresponds to the Faster RCNN paper: the translation (t_x, t_y) and scale factors (t_w, t_h) between a positive anchor and the ground truth are as follows:
t_x = (x − x_a)/w_a,  t_y = (y − y_a)/h_a
t_w = log(w/w_a),  t_h = log(h/h_a)
For training the regression branch of the bounding-box regression network, the input is the CNN feature φ, and the supervision signal is the difference (t_x, t_y, t_w, t_h) between the anchor and the GT; i.e. the training target is to make the network output for input φ as close as possible to the supervision signal. When the bounding box regression is then applied, a new input φ yields as the output of the regression branch the translation and scale (t_x, t_y, t_w, t_h) of each anchor, with which the anchor position can obviously be corrected.
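The translation/scale formulas above and the A-to-G' correction are exact inverses of one another, which a short sketch makes concrete. The function names `encode`/`decode` and the (cx, cy, w, h) box convention are illustrative choices, not the patent's notation.

```python
import math

def encode(anchor, gt):
    """Regression targets (t_x, t_y, t_w, t_h) from an anchor and its
    ground-truth box, both given as (cx, cy, w, h) — the supervision
    signal of the regression branch."""
    ax, ay, aw, ah = anchor
    gx, gy, gw, gh = gt
    return ((gx - ax) / aw, (gy - ay) / ah,
            math.log(gw / aw), math.log(gh / ah))

def decode(anchor, t):
    """Apply predicted deltas to an anchor: the translation-then-scaling
    step G' = F(A, d(A)) from the derivation above."""
    ax, ay, aw, ah = anchor
    tx, ty, tw, th = t
    return (aw * tx + ax, ah * ty + ay,
            aw * math.exp(tw), ah * math.exp(th))

a = (10.0, 10.0, 4.0, 4.0)
g = (12.0, 11.0, 8.0, 2.0)
t = encode(a, g)
print(decode(a, t))  # ≈ (12.0, 11.0, 8.0, 2.0): the round trip recovers GT
```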
S15, collecting the feature maps obtained in step three and the proposals obtained in step four through ROI pooling, mapping the proposals to fixed-size ROIs on the feature map, and sending the fixed-size ROIs to the subsequent fully connected layer to judge the target category; the ROI pooling layer increases the data-processing speed and also improves the detection accuracy.
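The essence of ROI pooling is max-pooling a variable-size region into a fixed grid so it can feed a fully connected layer. A single-channel pure-Python sketch (real implementations work on batched multi-channel tensors; the roi is assumed to span at least out_size cells per side):

```python
def roi_pool(feature, roi, out_size=2):
    """Max-pool the feature-map region roi = (y0, x0, y1, x1) into a
    fixed out_size x out_size grid, regardless of the region's size."""
    y0, x0, y1, x1 = roi
    h, w = y1 - y0, x1 - x0
    pooled = []
    for by in range(out_size):
        row = []
        for bx in range(out_size):
            # integer bin boundaries; bins tile the roi
            ys = range(y0 + by * h // out_size, y0 + (by + 1) * h // out_size)
            xs = range(x0 + bx * w // out_size, x0 + (bx + 1) * w // out_size)
            row.append(max(feature[y][x] for y in ys for x in xs))
        pooled.append(row)
    return pooled

feat = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
print(roi_pool(feat, (0, 0, 4, 4)))  # [[6, 8], [14, 16]]
```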
S16, classifying the categories using the proposal feature maps, calculating the loss function, re-acquiring the unidentified parts of the candidate regions, and updating the network parameters to obtain the training model;
Regarding the classification, it comprises: the classification part uses the obtained proposal feature maps to compute, through the fully connected layers and softmax, the specific category to which each proposal belongs, and outputs the cls_prob probability vector; at the same time, bounding box regression is used again to obtain the position offset bbox_pred of each proposal, for regressing a more accurate target detection box.
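The cls_prob vector is the softmax of the fully connected layer's per-class scores. A minimal sketch; the example class names are made up for illustration:

```python
import math

def softmax(logits):
    """Convert per-class scores into a probability vector such as
    cls_prob (stabilised by subtracting the max before exponentiating)."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Illustrative scores for (background, aphid, moth) — hypothetical classes.
probs = softmax([0.5, 2.0, 1.0])
print(probs.index(max(probs)))  # 1 — the highest-scoring class wins
```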
S17, inputting the test set into a training model, and detecting insect information in the candidate area picture;
and when the detected candidate region picture precision reaches a preset value, finishing the establishment of the training model, and detecting the insect information in the image through the training model.
The model training comprises: RPN network training and Faster RCNN training;
when training the RPN network, the convolutional layers are used to extract the feature mapping information, and the loss function used by the whole network is as follows:
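The loss-function equation itself is not reproduced in this text (it appears to have been an image in the original document). The symbols defined below (p_i, t_i) match the standard Faster R-CNN RPN multi-task loss, which — as a reconstruction, not the patent's verbatim formula — reads:

```latex
L(\{p_i\},\{t_i\}) \;=\; \frac{1}{N_{cls}} \sum_i L_{cls}(p_i, p_i^{*})
  \;+\; \lambda \, \frac{1}{N_{reg}} \sum_i p_i^{*} \, L_{reg}(t_i, t_i^{*})
```

Here p_i^* is 1 for a positive anchor and 0 otherwise, L_cls is the classification (log) loss, and L_reg is the smooth-L1 regression loss, so the regression term is active only for positive anchors.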
wherein:
p_i is the predicted probability that anchor i is the target;
t_i = {t_x, t_y, t_w, t_h} is a vector representing the 4 parameterized coordinates of the predicted bounding box;
the candidate regions are acquired through the trained RPN network;
the positive rois and the positive softmax probabilities are acquired simultaneously using the preceding RPN network;
the method for training the Faster RCNN network comprises:
passing the candidate regions and positive probabilities obtained in the previous step into the network as the rois;
calculating bbox_inside_weights + bbox_outside_weights, passing them into the smooth_L1_loss layer, and performing the softmax classification and the final bounding box regression;
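The smooth-L1 loss mentioned here (misspelled "soomth" in the original) is quadratic near zero and linear beyond a threshold, so large box errors do not explode the gradient. A sketch for a single residual; beta = 1 matches the common Faster R-CNN setting but is an assumed default:

```python
def smooth_l1(x, beta=1.0):
    """smooth_L1 loss of one residual x: 0.5 * x^2 / beta for |x| < beta,
    |x| - 0.5 * beta otherwise."""
    ax = abs(x)
    if ax < beta:
        return 0.5 * ax * ax / beta
    return ax - 0.5 * beta

print(smooth_l1(0.5))  # 0.125
print(smooth_l1(3.0))  # 2.5
```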
according to the invention, through multiple comparison tests, aiming at the current algorithm application scene, when the Epoch generation is 21, the minimum function loss is provided, and at the moment, the image identification has the highest accuracy.
Based on the method, the invention also provides a terminal for realizing the agricultural pest identification method based on the convolutional neural network, which comprises: a memory for storing a computer program and the convolutional neural network-based agricultural pest identification method; and a processor for executing the computer program and the method, so as to realize the steps of the agricultural pest identification method based on the convolutional neural network.
The invention also provides a readable storage medium with the agricultural pest identification method based on the convolutional neural network, wherein the readable storage medium is stored with a computer program, and the computer program is executed by a processor to realize the steps of the agricultural pest identification method based on the convolutional neural network.
The terminal implementing the convolutional neural network-based agricultural pest identification method comprises the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein, which can be implemented in electronic hardware, computer software, or a combination of both; the components and steps of the examples have been described above in general functional terms for clarity of explanation of the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
Through the above description of the embodiments, those skilled in the art will readily understand that the terminal for implementing the convolutional neural network-based agricultural pest identification method described herein can be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the disclosure can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a USB disk, a mobile hard disk, etc.) or on a network, and includes several instructions to make a computing device (which can be a personal computer, a server, a mobile terminal, or a network device, etc.) execute the identification method according to the embodiments of the disclosure.
The operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the latter case, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the Internet using an Internet service provider).
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (9)
1. A convolutional neural network-based agricultural pest identification method is characterized by comprising the following steps:
step one, collecting insect image information;
step two, setting labels for the insect image information, and dividing the labelled insect image data into a training set and a test set;
step three, inputting the insect image data of the training set into a Resnet50 network, and performing feature extraction to obtain feature mapping information;
step four, inputting the obtained feature mapping information into an RPN network to generate a region proposals layer;
judging the state information of the feature area information in the region proposals layer through a softmax function;
if the state information of the feature area information does not meet the preset requirement, correcting the feature area information by bounding box regression to obtain candidate areas with preset precision;
step five, collecting the feature mapping information obtained in step three and the candidate regions obtained in step four through ROI pooling, mapping the candidate regions onto the feature map to obtain fixed-size ROI regions, and sending the fixed-size ROI regions to a subsequent fully connected layer to judge the target category;
classifying the categories by utilizing the proposal feature maps, calculating the loss function, simultaneously re-acquiring unidentified candidate areas, and updating the network parameters to obtain a training model;
inputting the test set into a training model, and detecting insect information in the candidate area picture;
and when the detected candidate region picture precision reaches a preset value, finishing the establishment of the training model, and detecting the insect information in the image through the training model.
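The iterative procedure of claim 1 — train, evaluate on the test set, and stop once the detected candidate-region precision reaches the preset value — can be sketched as a generic loop. This is illustrative only: `train_one_epoch` and `evaluate_precision` are hypothetical callbacks standing in for the Faster R-CNN training and test-set detection steps, and the threshold value is an assumption, not taken from the patent.

```python
def build_model_until_precise(train_one_epoch, evaluate_precision,
                              target_precision=0.9, max_epochs=50):
    """Repeat training epochs until the precision of the detected
    candidate-region pictures reaches the preset value (claim 1)."""
    best = 0.0
    for epoch in range(1, max_epochs + 1):
        train_one_epoch(epoch)               # update network parameters
        precision = evaluate_precision()     # run the test set through the model
        best = max(best, precision)
        if precision >= target_precision:    # preset precision reached:
            return epoch, precision          # training-model establishment is done
    return max_epochs, best
```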
2. The convolutional neural network-based agricultural pest identification method of claim 1,
in the first step, a crawler tool is used for crawling a single picture set of various insects, and information of each insect image in the single picture set is configured into a known insect category;
background removal is carried out on the obtained insect image information by using OpenCV, and the background is made transparent;
selecting a preset number of insect image information, generating a simulation picture formed by combining a plurality of insects, and setting labels for the insect image information in the simulation picture.
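The background-removal and compositing steps of claim 2 can be illustrated as follows. The patent uses OpenCV; this is a minimal NumPy sketch of the same two operations (masking near-white pixels to an alpha channel, then pasting transparent cut-outs onto a simulation picture), and the brightness threshold is an illustrative assumption.

```python
import numpy as np

def make_transparent(rgb, bg_threshold=240):
    """Add an alpha channel that masks out near-white background pixels.
    Stand-in for the OpenCV-based background removal; the threshold is
    an assumption, not taken from the patent."""
    is_bg = np.all(rgb >= bg_threshold, axis=-1)      # near-white => background
    alpha = np.where(is_bg, 0, 255).astype(np.uint8)  # 0 = fully transparent
    return np.dstack([rgb, alpha])                    # H x W x 4 RGBA

def paste_insect(canvas_rgba, insect_rgba, top, left):
    """Composite a transparent insect cut-out onto a simulation picture."""
    h, w = insect_rgba.shape[:2]
    region = canvas_rgba[top:top + h, left:left + w]
    mask = insect_rgba[..., 3:4] > 0                  # opaque insect pixels only
    region[...] = np.where(mask, insect_rgba, region)
    return canvas_rgba
```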
3. The agricultural pest identification method based on the convolutional neural network as claimed in claim 2, wherein in the second step,
setting labels for the insect image information by using LabelImg;
and taking 80% of insect image information in the simulation picture as a training set, and taking 20% of insect image information in the simulation picture as a test set.
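The 80/20 division of claim 3 amounts to a shuffled split of the labelled samples; a minimal sketch (the fixed seed for reproducibility is an assumption):

```python
import random

def split_dataset(labelled_samples, train_fraction=0.8, seed=0):
    """Shuffle the labelled simulation pictures and split them 80/20
    into a training set and a test set, as in claim 3."""
    rng = random.Random(seed)            # fixed seed keeps the split reproducible
    shuffled = list(labelled_samples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]
```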
4. The agricultural pest identification method based on the convolutional neural network as claimed in claim 2, wherein in the third step,
the method for extracting the features by adopting the Resnet50 network comprises the following steps:
extracting feature mapping information of the image by using a group of basic conv + relu + pooling layers;
the insect image information is processed through the Resnet50 network and subjected to convolution and pooling; the feature mapping information obtained by this extraction is shared by the subsequent RPN layer and the fully connected layer.
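The conv + relu + pooling stage of claim 4 can be illustrated on a single channel with NumPy. This is a didactic sketch of the three operations only, not the actual Resnet50 implementation:

```python
import numpy as np

def conv2d(x, kernel):
    """Valid-mode 2-D convolution on a single channel (the 'conv' stage)."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear activation (the 'relu' stage)."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling (the 'pooling' stage)."""
    h, w = x.shape
    h, w = h - h % size, w - w % size          # drop any ragged border
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))
```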
5. The agricultural pest identification method based on convolutional neural network as claimed in claim 2, wherein in step four,
in the RPN, performing a 3×3 convolution on the feature mapping information extracted in the third step to obtain feature information centered on a plurality of anchor boxes;
the feature information then enters two different 1×1 convolutions;
one convolution is a classification layer, which judges whether the anchors are foreground or background through a softmax function;
performing binary classification on each anchor to judge whether it is an insect to be identified, and finally outputting the foreground and background probabilities of the anchors;
and the other convolution is a positioning layer, which corrects the feature region information by bounding box regression to obtain candidate regions, and outputs the four coordinate offsets of the feature region information.
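The four coordinate offsets output by the positioning layer are applied to the anchor boxes by the standard Faster R-CNN bounding-box regression transform; a minimal NumPy sketch, assuming the usual (tx, ty, tw, th) parameterization:

```python
import numpy as np

def apply_bbox_deltas(anchors, deltas):
    """Apply predicted offsets (tx, ty, tw, th) from the RPN positioning
    layer to anchor boxes given as (x1, y1, x2, y2) rows."""
    widths = anchors[:, 2] - anchors[:, 0]
    heights = anchors[:, 3] - anchors[:, 1]
    ctr_x = anchors[:, 0] + 0.5 * widths
    ctr_y = anchors[:, 1] + 0.5 * heights

    tx, ty, tw, th = deltas.T
    pred_ctr_x = ctr_x + tx * widths       # shift the center
    pred_ctr_y = ctr_y + ty * heights
    pred_w = widths * np.exp(tw)           # rescale width/height
    pred_h = heights * np.exp(th)

    return np.stack([pred_ctr_x - 0.5 * pred_w,
                     pred_ctr_y - 0.5 * pred_h,
                     pred_ctr_x + 0.5 * pred_w,
                     pred_ctr_y + 0.5 * pred_h], axis=1)
```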
6. The agricultural pest identification method based on the convolutional neural network as claimed in claim 2, wherein in step five,
mapping the candidate region generated by the RPN to a characteristic map to obtain ROIs;
and adjusting the ROIs to a fixed size, and sending the adjusted ROIs to the full connection layer to judge the target type.
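The adjustment of ROIs to a fixed size in claim 6 is ROI pooling; a single-channel NumPy sketch, assuming integer feature-map coordinates and the conventional 7×7 output grid:

```python
import numpy as np

def roi_pool(feature_map, roi, output_size=7):
    """Max-pool one ROI (x1, y1, x2, y2 in feature-map coordinates)
    into a fixed output_size x output_size grid, as in claim 6."""
    x1, y1, x2, y2 = roi
    region = feature_map[y1:y2, x1:x2]
    h, w = region.shape
    out = np.empty((output_size, output_size))
    ys = np.linspace(0, h, output_size + 1).astype(int)   # row bin edges
    xs = np.linspace(0, w, output_size + 1).astype(int)   # column bin edges
    for i in range(output_size):
        for j in range(output_size):
            cell = region[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            out[i, j] = cell.max() if cell.size else 0.0  # empty bins -> 0
    return out
```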
7. The agricultural pest identification method based on the convolutional neural network as claimed in claim 2, wherein in the seventh step, the model training mode comprises: RPN network training and Fast RCNN network training;
in training the RPN network, the convolutional layers are used for extracting feature mapping information, and the loss function used by the whole network is as follows:
L({p_i}, {t_i}) = (1/N_cls) · Σ_i L_cls(p_i, p_i*) + λ · (1/N_reg) · Σ_i p_i* · L_reg(t_i, t_i*)
wherein:
p_i is the probability that the anchor is predicted to be the target;
t_i = {t_x, t_y, t_w, t_h} is a vector representing the 4 parameterized coordinates of the predicted bounding box;
acquiring a candidate region through the trained RPN;
acquiring positive rois and positive softmax probabilities simultaneously by utilizing the previously trained RPN network;
the method for training the fast RCNN network comprises the following steps:
transmitting the candidate regions and the positive probabilities obtained in the previous step into the network as rois;
calculating bbox_inside_weights + bbox_outside_weights and transmitting them into the smooth_L1_loss layer, for the softmax classification and the final bounding box regression;
through comparison over a preset number of runs, for the current algorithm application scenario, the loss function is minimal when the Epoch is 21, at which point the insect image information is recognized with the highest accuracy.
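The RPN loss of claim 7 combines a classification term and a smooth L1 regression term gated by the positive-anchor indicator. A minimal NumPy sketch of the standard Faster R-CNN form (an assumption, since the patent text gives only the symbol definitions for p_i and t_i):

```python
import numpy as np

def smooth_l1(x):
    """Smooth L1 (the smooth_L1_loss layer): 0.5*x^2 if |x| < 1, else |x| - 0.5."""
    ax = np.abs(x)
    return np.where(ax < 1, 0.5 * x ** 2, ax - 0.5)

def rpn_loss(p, p_star, t, t_star, lam=1.0):
    """L = (1/N_cls) Σ CE(p_i, p_i*) + λ (1/N_reg) Σ p_i* smoothL1(t_i - t_i*).
    p: predicted foreground probability per anchor; p_star: 1 for positive
    anchors, 0 for negative; t, t_star: (N, 4) predicted / target offsets."""
    eps = 1e-12                                   # guard against log(0)
    cls = -np.mean(p_star * np.log(p + eps)
                   + (1 - p_star) * np.log(1 - p + eps))
    reg = np.sum(p_star[:, None] * smooth_l1(t - t_star)) / max(p_star.sum(), 1)
    return cls + lam * reg
```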
8. A terminal for realizing the agricultural pest identification method based on the convolutional neural network is characterized by comprising the following steps:
a memory for storing a computer program and a convolutional neural network-based agricultural pest identification method;
a processor for executing the computer program and the convolutional neural network-based agricultural pest identification method to implement the steps of the convolutional neural network-based agricultural pest identification method according to any one of claims 1 to 7.
9. A readable storage medium having a convolutional neural network-based agricultural pest identification method, wherein the readable storage medium has a computer program stored thereon, and the computer program is executed by a processor to implement the steps of the convolutional neural network-based agricultural pest identification method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010665034.8A CN111931581A (en) | 2020-07-10 | 2020-07-10 | Agricultural pest identification method based on convolutional neural network, terminal and readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111931581A true CN111931581A (en) | 2020-11-13 |
Family
ID=73312427
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010665034.8A Pending CN111931581A (en) | 2020-07-10 | 2020-07-10 | Agricultural pest identification method based on convolutional neural network, terminal and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111931581A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108615046A (en) * | 2018-03-16 | 2018-10-02 | 北京邮电大学 | A kind of stored-grain pests detection recognition methods and device |
CN109376736A (en) * | 2018-09-03 | 2019-02-22 | 浙江工商大学 | A kind of small video target detection method based on depth convolutional neural networks |
CN110287998A (en) * | 2019-05-28 | 2019-09-27 | 浙江工业大学 | A kind of scientific and technical literature picture extracting method based on Faster-RCNN |
KR102105954B1 (en) * | 2018-11-21 | 2020-04-29 | 충남대학교산학협력단 | System and method for accident risk detection |
Non-Patent Citations (1)
Title |
---|
Liu Zhicai: "Application of deep-learning-based object detection algorithms in stored-grain pest detection and identification", China Master's Theses Full-text Database, Agricultural Science and Technology series * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112861712A (en) * | 2021-02-06 | 2021-05-28 | 郑州师范学院 | Agricultural pest and disease monitoring method based on artificial intelligence and multi-temporal remote sensing |
CN112990083A (en) * | 2021-04-07 | 2021-06-18 | 重庆大学 | Broadband installed quality monitoring system based on deep learning |
CN113159182A (en) * | 2021-04-23 | 2021-07-23 | 中国科学院合肥物质科学研究院 | Agricultural tiny pest image detection method based on dense region re-refining technology |
CN113159182B (en) * | 2021-04-23 | 2022-09-09 | 中国科学院合肥物质科学研究院 | Agricultural tiny pest image detection method based on dense region re-refining technology |
CN113177486A (en) * | 2021-04-30 | 2021-07-27 | 重庆师范大学 | Dragonfly order insect identification method based on regional suggestion network |
CN113177486B (en) * | 2021-04-30 | 2022-06-03 | 重庆师范大学 | Dragonfly order insect identification method based on regional suggestion network |
CN113159075A (en) * | 2021-05-10 | 2021-07-23 | 北京虫警科技有限公司 | Insect identification method and device |
CN114510618A (en) * | 2021-12-31 | 2022-05-17 | 安徽郎溪南方水泥有限公司 | Processing method and device based on smart mine |
CN117290762A (en) * | 2023-10-11 | 2023-12-26 | 北京邮电大学 | Insect pest falling-in identification method, type identification method, device, insect trap and system |
CN117290762B (en) * | 2023-10-11 | 2024-04-02 | 北京邮电大学 | Insect pest falling-in identification method, type identification method, device, insect trap and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111931581A (en) | Agricultural pest identification method based on convolutional neural network, terminal and readable storage medium | |
Ngugi et al. | Tomato leaf segmentation algorithms for mobile phone applications using deep learning | |
Lee et al. | Simultaneous traffic sign detection and boundary estimation using convolutional neural network | |
KR101640998B1 (en) | Image processing apparatus and image processing method | |
CN110569696A (en) | Neural network system, method and apparatus for vehicle component identification | |
CN113033520B (en) | Tree nematode disease wood identification method and system based on deep learning | |
CN111027493A (en) | Pedestrian detection method based on deep learning multi-network soft fusion | |
CN110619059B (en) | Building marking method based on transfer learning | |
US20170006261A1 (en) | Controller for a Working Vehicle | |
TWI701608B (en) | Neural network system, method and device for image matching and positioning | |
CN112132014A (en) | Target re-identification method and system based on non-supervised pyramid similarity learning | |
CN111709317B (en) | Pedestrian re-identification method based on multi-scale features under saliency model | |
CN110852327A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN116385958A (en) | Edge intelligent detection method for power grid inspection and monitoring | |
CN115512238A (en) | Method and device for determining damaged area, storage medium and electronic device | |
CN112699858B (en) | Unmanned platform smoke fog sensing method and system, computer equipment and storage medium | |
CN116977859A (en) | Weak supervision target detection method based on multi-scale image cutting and instance difficulty | |
Aksoy et al. | Image mining using directional spatial constraints | |
CN116091784A (en) | Target tracking method, device and storage medium | |
CN114663751A (en) | Power transmission line defect identification method and system based on incremental learning technology | |
CN114241202A (en) | Method and device for training dressing classification model and method and device for dressing classification | |
CN113989619A (en) | Storage tank prediction method and device based on deep learning recognition model | |
CN112949731A (en) | Target detection method, device, storage medium and equipment based on multi-expert model | |
CN111476129A (en) | Soil impurity detection method based on deep learning | |
CN116503406B (en) | Hydraulic engineering information management system based on big data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20201113 |