CN112861767A - Small-volume pest detection method and system on pest sticking plate image - Google Patents

Small-volume pest detection method and system on pest sticking plate image

Info

Publication number
CN112861767A
CN112861767A (application CN202110221105.XA)
Authority
CN
China
Prior art keywords
module
pest
target
image
detection
Prior art date
Legal status
Pending
Application number
CN202110221105.XA
Other languages
Chinese (zh)
Inventor
李文勇
王杜锦
李明
杨信廷
孙传恒
Current Assignee
Beijing Research Center for Information Technology in Agriculture
Original Assignee
Beijing Research Center for Information Technology in Agriculture
Priority date
Filing date
Publication date
Application filed by Beijing Research Center for Information Technology in Agriculture filed Critical Beijing Research Center for Information Technology in Agriculture
Priority to CN202110221105.XA
Publication of CN112861767A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection

Abstract

The invention provides a method and a system for detecting small-volume pests on a sticky trap image, comprising the following steps: acquiring a target image, the target image being an image of a sticky trap that has captured target pests; inputting the target image into a trained pest detection and identification model to determine the species information and position information of the target pests, the trained model being constructed based on a single-stage target detection algorithm; and obtaining the number of target pests by counting the detected species and position information. The method and system count the target pests on the sticky trap image with a deep learning model designed for small-volume pest detection: feature maps are enlarged by feature concatenation and residual units are added, so that small-volume pests on field sticky trap images are detected automatically and accurately. This provides an accurate data acquisition method for estimating pest population density and a data basis for later pest control strategies.

Description

Small-volume pest detection method and system on pest sticking plate image
Technical Field
The invention relates to the technical field of computer image processing, in particular to a method and a system for detecting small-volume pests on a pest sticking plate image.
Background
With the development of computer vision technology, identifying and counting pests with machine vision instead of the human eye can effectively overcome the shortcomings of manual pest surveys. In recent years, many deep-learning-based target detection algorithms, and methods for improving their detection precision, have appeared for pests that occur frequently in facility agriculture, such as whiteflies and thrips.
At present, using the number of target pests on a sticky trap to survey the degree of crop pest damage is a common technique in pest management. One known method identifies and classifies pests in pre-collected sticky trap images by constructing a pest identification and classification model based on multi-color-space fusion. Another segments insects from the background image with a watershed algorithm, extracts insect color features with the Mahalanobis distance method to identify pest species, and evaluates accuracy and computational cost at different image resolutions.
However, the above methods all rely on hand-crafted features. Their robustness is weak in field applications, and their detection precision for small-volume pests such as whiteflies and thrips is generally mediocre, so they have great limitations in practical use.
Disclosure of Invention
Aiming at the problems of weak robustness and insufficient precision in the prior art, the embodiment of the invention provides a small-volume pest detection method and system on a pest sticking plate image.
The invention provides a small-volume pest detection method on a pest sticking plate image, which comprises the following steps: acquiring a target image, wherein the target image is an image of a pest sticking plate for capturing target pests; inputting the target image into a trained pest detection and identification model to determine the type information and the position information of the target pest, wherein the trained pest detection and identification model is constructed based on a single-stage target detection algorithm; and acquiring the quantity information of the target pests through the species information and the position information.
According to the small-volume pest detection method on the pest sticking plate image, before the target image is input to the trained pest detection and identification model to determine the type information and the position information of the target pest, the method further comprises the steps of constructing the pest detection and identification model to be trained; pest detection and identification model, at least comprising the following units: a CSPDarknet53 element, a spatial pyramid pooling element, a path aggregation network element, and an output element.
According to the invention, the CSPDarknet53 unit comprises an input layer, a first convolution module, and first to fifth residual error modules, connected in sequence. The spatial pyramid pooling unit comprises a maximum pooling layer. The path aggregation network unit comprises: a first splicing module, a first up-sampling module, a second splicing module, a second up-sampling module, a third splicing module and a third up-sampling module, connected in sequence; and a first down-sampling module, a fourth splicing module, a second down-sampling module, a fifth splicing module, a third down-sampling module and a sixth splicing module, connected in sequence. The output end of the first splicing module is connected with the input end of the first down-sampling module; the output end of the second splicing module is connected with the input end of the fourth splicing module; and the output end of the third splicing module is connected with the input end of the fifth splicing module. The output unit comprises first, second and third YOLO Head modules. The output ends of the second, third and fourth residual error modules are connected with the input ends of the first, second and third splicing modules through second, third and fourth convolution modules, respectively. The output end of the fifth residual error module is connected with the input end of the spatial pyramid pooling unit through a fifth convolution module. The output end of the spatial pyramid pooling unit is connected, through a feature fusion module, with the input ends of the third up-sampling module and the sixth splicing module, respectively; the feature fusion module comprises 1 merging layer and 3 convolution layers connected in sequence. The output ends of the first, fifth and sixth splicing modules are connected with the input ends of the first, second and third YOLO Head modules, respectively. The first and fifth residual error modules each comprise 4 sequentially connected residual units, while the second, third and fourth residual error modules each comprise 8 sequentially connected residual units. The fifth convolution module comprises 3 sequentially connected convolution layers. Each of the six splicing modules comprises 1 merging layer and 5 sequentially connected convolution layers.
According to the small-volume pest detection method on the pest sticking plate image, after the pest detection and identification model to be trained is constructed, the method further comprises: acquiring a plurality of sticky trap sample images, performing equidistant segmentation on each sample image, and padding with a black background any boundary region that cannot be divided evenly, so as to obtain a plurality of sticky trap sample sub-images; constructing a sticky trap sample set from the sub-images; determining, for each sticky trap sample sub-image in the sample set, a species-position label comprising a species information label and a position information label of the target pest; combining each sticky trap sample sub-image with its corresponding species-position label into a data sample using a graphic image annotation tool; constructing a data set from all data samples; dividing the data samples in the data set according to a preset proportion into a training set, a validation set and a test set; and training the pest detection and identification model to be trained using the training set and the validation set.
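The equidistant segmentation with black-background padding described above can be sketched as follows. The 416-pixel tile size and the array shapes are illustrative assumptions on my part, not values taken from the patent:

```python
import numpy as np

def tile_with_padding(image: np.ndarray, tile: int) -> list:
    """Split an H x W x C image into equal-size tiles; edges that cannot
    be divided evenly are padded with a black (all-zero) background."""
    h, w = image.shape[:2]
    ph = (tile - h % tile) % tile            # rows of black padding needed
    pw = (tile - w % tile) % tile            # columns of black padding needed
    padded = np.zeros((h + ph, w + pw) + image.shape[2:], dtype=image.dtype)
    padded[:h, :w] = image
    return [padded[y:y + tile, x:x + tile]
            for y in range(0, h + ph, tile)
            for x in range(0, w + pw, tile)]

# a 1000 x 1500 sticky-trap image splits into 3 x 4 = 12 tiles of 416 px
tiles = tile_with_padding(np.ones((1000, 1500, 3), dtype=np.uint8), 416)
print(len(tiles), tiles[0].shape)  # 12 (416, 416, 3)
```

Every tile has the full input size expected by the network, so no resizing (and thus no shrinking of already-tiny pests) is needed.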
According to the small-volume pest detection method on the pest sticking plate image, training the pest detection and identification model to be trained using the training set and the validation set comprises the following steps. Step 1: set an initial learning rate and, based on the ADAM (adaptive moment estimation) optimization method, pre-train the model on the training set to obtain a preliminarily trained pest detection and identification model. Step 2: verify the preliminarily trained model on the validation set, obtaining its preliminary training loss value and its preliminary validation accuracy. Step 3: iterate steps 1 to 2, adding a callback function in each iteration; while the preliminary training loss keeps decreasing for at least 3 consecutive rounds and the preliminary validation accuracy does not keep decreasing, continue until a preset number of iterations is reached, obtain the pre-trained pest detection and identification model, and execute step 4; if the preliminary training loss fails to decrease 3 times, or the preliminary validation accuracy keeps decreasing, stop iterating, obtain the pre-trained model, and execute step 4. Step 4: reduce the initial learning rate and, again based on the ADAM optimization method, train the pre-trained model on the training set to obtain a re-trained pest detection and identification model. Step 5: re-verify the re-trained model on the validation set, obtaining its re-training loss value and its re-validation accuracy. Step 6: iterate steps 4 to 5, adding a callback function in each iteration; the trained pest detection and identification model is obtained when the re-training loss keeps decreasing for at least 3 consecutive rounds and the re-validation accuracy does not keep decreasing, until a preset number of iterations is reached; if the re-training loss fails to decrease 3 times, or the re-validation accuracy keeps decreasing, stop iterating and obtain the trained pest detection and identification model.
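The stopping rule in steps 3 and 6 (stop once the loss has failed to improve some number of times in a row) behaves like a standard early-stopping callback. A minimal sketch of such a monitor; the class name and the `patience=3` default are my assumptions, mirroring Keras-style `EarlyStopping` semantics rather than the patent's exact code:

```python
class EarlyStoppingMonitor:
    """Tracks the loss per training round and signals a stop once the
    loss has failed to improve `patience` times in a row."""

    def __init__(self, patience: int = 3):
        self.patience = patience
        self.best_loss = float("inf")
        self.bad_rounds = 0

    def should_stop(self, loss: float) -> bool:
        if loss < self.best_loss:      # improvement: reset the counter
            self.best_loss = loss
            self.bad_rounds = 0
        else:                          # no improvement this round
            self.bad_rounds += 1
        return self.bad_rounds >= self.patience

monitor = EarlyStoppingMonitor(patience=3)
history = [1.0, 0.8, 0.8, 0.9, 0.85]   # loss plateaus after round 2
stops = [monitor.should_stop(l) for l in history]
print(stops)  # [False, False, False, False, True]
```

The same monitor can be reused for the second training stage after the learning rate is reduced in step 4.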
According to the small-volume pest detection method on the pest sticking plate image, after the target image is obtained, the method further comprises: performing equidistant segmentation on the target image and padding with a black background any boundary region that cannot be divided evenly, so as to obtain a plurality of target sub-images, wherein an overlapping area is provided between every two adjacent target sub-images, and the size of the overlapping area is positively correlated with the body size of the target pests. Accordingly, inputting the target image into the trained pest detection and identification model to determine the species information and position information of the target pests comprises: inputting each target sub-image separately into the trained pest detection and identification model.
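One way to realize that overlap is to step the tile origin by `tile - overlap` pixels, with the overlap chosen at least as large as the target pest's body size so that every pest appears whole in at least one sub-image. A sketch under those assumptions (the sizes in the example are illustrative):

```python
def tile_origins(length: int, tile: int, overlap: int) -> list:
    """Top-left coordinates of tiles along one axis, with at least
    `overlap` pixels shared between neighbouring tiles."""
    stride = tile - overlap
    origins = list(range(0, max(length - tile, 0) + 1, stride))
    if origins[-1] + tile < length:   # make sure the far edge is covered
        origins.append(length - tile)
    return origins

# 416-px tiles over a 1000-px axis with a 32-px overlap
print(tile_origins(1000, 416, 32))  # [0, 384, 584]
```

Applying the function to both axes and taking the cross product of the two origin lists yields the full set of overlapping sub-image positions.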
According to the small-volume pest detection method on the pest sticking plate image provided by the invention, inputting each target sub-image separately into the trained pest detection and identification model to determine the species information and position information of the target pests comprises: inputting each target sub-image into the trained model and determining the corresponding detection and identification result, namely the species information and position information of the target pests on that sub-image; mapping the detection and identification results of all target sub-images back onto the original target image; and, based on a non-maximum suppression method, eliminating redundant repeated counts in the overlapping areas, thereby obtaining the species information and position information of the target pests on the target image.
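After each sub-image's boxes are shifted by the tile's offset into target-image coordinates, duplicates in the overlap areas can be removed with non-maximum suppression, as sketched below. The 0.5 IoU threshold is a conventional choice, not a value specified in the patent:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(dets, thresh=0.5):
    """dets: list of (score, box). Keep the highest-scoring box of each
    group of heavily overlapping boxes; drop the rest."""
    dets = sorted(dets, key=lambda d: d[0], reverse=True)
    kept = []
    for score, box in dets:
        if all(iou(box, k[1]) < thresh for k in kept):
            kept.append((score, box))
    return kept

# the same whitefly detected by two overlapping tiles, plus one other pest
dets = [(0.9, (100, 100, 120, 120)),
        (0.7, (102, 101, 121, 119)),   # duplicate of the first box
        (0.8, (300, 50, 318, 70))]
print(len(nms(dets)))  # 2
```

Without this step, every pest lying inside an overlap area would be counted once per sub-image that contains it.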
The invention also provides a small-volume pest detection system on the pest sticking plate image, which comprises: the method comprises the following steps: the image acquisition module is used for acquiring a target image, and the target image is a pest sticking plate image for capturing target pests; the pest detection and identification module is used for inputting the target image into the trained pest detection and identification model to determine the type information and the position information of the target pest, and the trained pest detection and identification model is constructed based on a single-stage target detection algorithm; and the quantity information acquisition module is used for acquiring the quantity information of the target pests by counting the detected species information and the position information.
The invention also provides electronic equipment which comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the processor executes the program to realize the steps of the method for detecting the pests with small volume on the sticky trap image.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of any of the above-described methods for small-volume pest detection on a sticky trap image.
According to the method and system for detecting small-volume pests on the sticky trap image, the target pests on the sticky trap image are counted with a deep learning model designed for small-volume pest detection: feature maps are enlarged by feature concatenation and residual units are added, so that small-volume pests on field sticky trap images are detected automatically and accurately. This provides an accurate data acquisition method for estimating pest population density and a data basis for later pest control strategies.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is one of the flow diagrams of a small-volume pest detection method on a pest adhesion plate image provided by the present invention;
FIG. 2 is a schematic diagram of a pest detection and identification model provided by the present invention;
FIG. 3 is a schematic view of a segmented sticky trap image provided by the present invention;
FIG. 4 is a schematic diagram of the stitching of target sub-images provided by the present invention;
FIG. 5 is a schematic structural diagram of an image capturing device for a pest sticking plate according to the present invention;
FIG. 6 is a second schematic flow chart of the method for detecting small-volume pests on an image of a pest sticking plate according to the present invention;
FIG. 7 is a schematic view of a small-volume pest detection system on a sticky trap image provided by the present invention;
fig. 8 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that in the description of the embodiments of the present invention, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element. The terms "upper", "lower", and the like, indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience in describing the present invention and simplifying the description, but do not indicate or imply that the referred devices or elements must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention. Unless expressly stated or limited otherwise, the terms "mounted," "connected," and "connected" are intended to be inclusive and mean, for example, that they may be fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
The invention provides a small-volume target pest detection and identification method on a pest sticking plate image, which aims to realize automatic monitoring of small-volume pests in facility agriculture and is based on a deep learning technology.
The method and system for detecting small-volume pests on the sticky trap image provided by the embodiment of the invention are described below with reference to fig. 1-8.
Fig. 1 is a schematic flow chart of a small-volume pest detection method on a sticky trap image provided by the present invention, as an alternative embodiment, as shown in fig. 1, including but not limited to the following steps:
step 101, acquiring a target image, wherein the target image is an image of a pest sticking plate for capturing target pests;
102, inputting a target image into a trained pest detection and identification model to determine the type information and the position information of target pests, wherein the trained pest detection and identification model is constructed based on a single-stage target detection algorithm;
and 103, acquiring the quantity information of the target pests by counting the detected species information and the position information.
As an optional embodiment, an image acquisition device may be disposed beside each sticky trap to acquire sticky trap images in real time. Alternatively, after the sticky trap has finished capturing pests, it may be retrieved and brought into a well-lit environment that favours photography; this improves picture quality and saves equipment cost.
Further, in step 101, the image capturing device captures an image of each pest sticking plate to obtain a target image.
Optionally, the target image acquisition time may be selected in a time period with good illumination, or light is supplemented to the image acquisition device.
Further, in step 102, the target image acquired by the image acquisition device is input to the trained pest detection and identification model, and the type information of the identified target pest and the position information on the target image are detected through feature extraction of the target image.
Alternatively, the trained pest detection and identification model may be constructed based on a single-stage target detection algorithm (YOLO series).
Alternatively, the target pest may be any one of thrips, whiteflies or aphids.
Further, in step 103, counting statistics is performed according to the species information and the location information of the target pest, and the number information of the target pest is obtained.
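The counting in step 103 amounts to tallying the detection results per species; a minimal sketch (the species labels and boxes are illustrative):

```python
from collections import Counter

# each detection: (species label, bounding box) from the model output
detections = [
    ("whitefly", (12, 40, 25, 52)),
    ("thrips",   (88, 10, 99, 20)),
    ("whitefly", (140, 63, 152, 74)),
]
counts = Counter(species for species, _ in detections)
print(counts)  # Counter({'whitefly': 2, 'thrips': 1})
```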
According to the small-volume pest detection method on the sticky trap image, the number of target pests on the sticky trap image is counted by adopting the deep learning model aiming at small-volume pest detection, and the small-volume pests on the sticky trap image are automatically and accurately detected by means of characteristic splicing and residual error unit expansion, so that an accurate data acquisition method is provided for pest population density estimation, and a data basis is provided for a later pest control strategy.
Based on the above embodiment, as an optional embodiment, before inputting the target image to the trained pest detection and recognition model to determine the type information and the position information of the target pest, the method further includes constructing a pest detection and recognition model to be trained; pest detection and identification model, at least comprising the following units: a CSPDarknet53 element, a spatial pyramid pooling element, a path aggregation network element, and an output element.
It should be noted that the pest detection and identification model is constructed based on the single-stage target detection algorithm YOLOv4, and can effectively address the problems that target pests occupy only a small area of the input image and that their features are easily drowned out by noise.
The CSPDarknet53 unit is used as a backbone network of a pest detection and identification model, is mainly used for extracting features of an input image and outputting an image feature extraction result to a path aggregation network and a pyramid pooling unit.
The spatial pyramid pooling (SPP) unit selects the maximum value within each pooling window as the pooled value. This extracts image features better, improves the feature extraction capability of the neural network, and reduces the possibility of over-fitting.
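Max pooling, the core operation of the SPP unit, keeps only the strongest response in each window; in YOLOv4's SPP block, stride-1 max pools of several kernel sizes are concatenated. A small stride-1, same-size sketch (the kernel size and toy feature map are illustrative):

```python
import numpy as np

def max_pool_same(fmap: np.ndarray, k: int) -> np.ndarray:
    """k x k max pooling, stride 1, 'same' output size: every output
    cell holds the maximum of its k x k neighbourhood."""
    h, w = fmap.shape
    p = k // 2
    padded = np.full((h + 2 * p, w + 2 * p), -np.inf)
    padded[p:p + h, p:p + w] = fmap
    out = np.empty_like(fmap)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].max()
    return out

f = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool_same(f, 3)[0, 0])  # max of f[0:2, 0:2] -> 5.0
```

Because the stride is 1 and the output keeps the input's spatial size, pools of different kernel sizes can be concatenated channel-wise without any resizing.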
The path aggregation network (PANet) performs scaling and splicing (concatenation) on the input feature maps. Located at the neck of the pest detection and identification model, it contains components that aggregate information through pipelines: by pooling the features of all levels it shortens the path between the lowest and the topmost layers, and it uses augmented paths to enrich the features of every level, accurately preserving the spatial information that helps localize pixels correctly. It is fast, simple and very effective, and it strengthens the feature extraction process.
Wherein, the output unit is a detection layer and is used for outputting the type information and the position information of the target pests.
In the embodiment, the pest detection and identification model to be trained is constructed through the CSPDarknet53 unit, the spatial pyramid pooling unit, the path aggregation network unit and the output unit, so that the pest detection and identification model can automatically learn the features in the image, and the detection and identification of the target pest features for automatically extracting the type information and the position information are realized.
Fig. 2 is a schematic structural diagram of a pest detection and identification model provided by the present invention, as an alternative embodiment, as shown in fig. 2, the pest detection and identification model includes a CSPDarknet53 unit, a spatial pyramid pooling unit, a path aggregation network unit, and an output unit;
a CSPDarknet53 element comprising: the device comprises an input layer, a first convolution module, a first residual error module, a second residual error module, a third residual error module, a fourth residual error module and a fifth residual error module which are connected in sequence;
a spatial pyramid pooling unit including a maximum pooling layer;
a path aggregation network element comprising: the device comprises a first splicing module, a first up-sampling module, a second splicing module, a second up-sampling module, a third splicing module and a third up-sampling module which are connected in sequence; the first downsampling module, the fourth splicing module, the second downsampling module, the fifth splicing module, the third downsampling module and the sixth splicing module are sequentially connected;
the output end of the first splicing module is connected with the input end of the first down-sampling module, the output end of the second splicing module is connected with the input end of the fourth splicing module, and the output end of the third splicing module is connected with the input end of the fifth splicing module;
an output unit including: a first, second, and third YOLO Head modules;
the output end of the second residual error module is connected with the input end of the first splicing module through a second convolution module;
the output end of the third residual error module is connected with the input end of the second splicing module through a third convolution module;
the output end of the fourth residual error module is connected with the input end of the third splicing module through a fourth convolution module;
the output end of the fifth residual error module is connected with the input end of the spatial pyramid pooling unit through a fifth convolution module; the fifth convolution module comprises 3 convolution layers which are connected in sequence;
the output end of the spatial pyramid pooling unit is connected with the input ends of the third upsampling module and the sixth splicing module through the feature fusion module respectively; the characteristic fusion module comprises 1 merging layer and 3 convolution layers which are connected in sequence;
the output end of the first splicing module is connected with the input end of the first YOLO Head module;
the output end of the fifth splicing module is connected with the input end of the second YOLO Head module;
the output end of the sixth splicing module is connected with the input end of the third YOLO Head module;
the first residual error module and the fifth residual error module respectively comprise 4 residual error units which are sequentially connected, and the second residual error module, the third residual error module and the fourth residual error module respectively comprise 8 residual error units which are sequentially connected;
the first splicing module, the second splicing module, the third splicing module, the fourth splicing module, the fifth splicing module and the sixth splicing module respectively comprise 1 merging layer and 5 convolution layers which are sequentially connected.
It should be noted that in this embodiment, in the YOLOv4 network model, the feature map at the 8× downsampling position is upsampled by a factor of 2 and spliced with the feature map at the 4× downsampling position, so as to establish a new YOLO detection layer at the 4× downsampling position. Because a larger feature map is more sensitive to small-volume target pests, this embodiment superimposes adjacent features of the shallow feature map onto different channels so that the network learns deep and shallow features simultaneously, giving the model fine-grained features and improving its ability to detect and identify small-volume target pests.
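As a hedged illustration (not the patented implementation itself), the feature-map sizes involved in adding the new shallow detection layer can be verified with simple arithmetic: for a 416 × 416 input, the feature map at stride 8 is 52 × 52, and upsampling it by 2 yields 104 × 104, which matches the stride-4 feature map it is spliced with.

```python
def feature_map_size(input_size: int, stride: int) -> int:
    """Spatial side length of a feature map at a given downsampling stride."""
    return input_size // stride

INPUT = 416  # sub-image side length used in this embodiment

# The original YOLOv4 detection layers sit at strides 32, 16 and 8.
heads = {s: feature_map_size(INPUT, s) for s in (32, 16, 8)}

# New shallow detection layer: upsample the stride-8 map by 2 and splice it
# with the stride-4 map, giving a 104 x 104 detection layer.
stride8 = feature_map_size(INPUT, 8)   # 52
upsampled = stride8 * 2                # 104
stride4 = feature_map_size(INPUT, 4)   # 104
assert upsampled == stride4            # shapes match, so splicing is valid

print(heads, upsampled)
```

The check confirms that the 2× upsampled stride-8 map and the stride-4 map share the same spatial size, which is the precondition for the channel-wise splice described above.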
Optionally, to address the problem that the features of the pests to be detected are not obvious owing to degraded image quality of field sticky trap images, the number of residual units in the first residual module and the second residual module is enlarged by a factor of 4, so that fine-grained pest features in the shallow feature map are learned and mined more deeply and more pest feature information from the shallow network is obtained, and the shallow features and position information of the target pests are therefore extracted more accurately.
By constructing the pest detection and identification model, the method and device can accurately and quickly extract features from the sticky trap image and detect and identify the type information and position information of target pests; this overcomes defects of prior-art pest detection, such as inaccurate detection and identification of target pests, meets the requirement for accurate detection of small-volume target pests, and improves the detection rate of small-volume target pests.
Based on the above embodiment, as an optional embodiment, after the pest detection and identification model to be trained is constructed, a training set, a verification set and a test set are constructed as follows:
acquiring a plurality of sticky trap sample images, performing equidistant segmentation processing on each sticky trap sample image, and filling a black background at boundaries that cannot be evenly divided, to acquire a plurality of sticky trap sample sub-images;
constructing a sticky trap sample set from the sticky trap sample sub-images;
determining a type position label corresponding to each sticky trap sample sub-image in the sticky trap sample set; the type position label comprises a type information label and a position information label of the target pest;
combining each sticky trap sample sub-image and the corresponding type position label into a data sample based on a graphic image labeling tool;
constructing a data set from all data samples;
dividing data samples in the data set according to a preset proportion, constructing a training set, a verification set and a test set;
and training the pest detection and identification model to be trained by utilizing the training set and the verification set.
As an alternative embodiment, a plurality of sticky trap sample images, originally 2560 × 1920 pixels in size, are subjected to 7 × 5 equidistant segmentation, with black background filling applied at boundaries that cannot be evenly divided, so that each sample image is segmented into 35 sticky trap sample sub-images of 416 × 416 pixels. On one hand, this accommodates deep learning frameworks, which generally require a square image resolution, improves processing speed and reduces memory requirements; on the other hand, it increases the relative size of small-volume target pests in the sticky trap image.
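The equidistant segmentation with black-background padding described above can be sketched as follows; the tile counts and padded canvas size are derived here from the stated 2560 × 1920 source and 416 × 416 tile sizes, and the helper is an illustration rather than the embodiment's exact code.

```python
import math

def tile_grid(width: int, height: int, tile: int):
    """Number of tiles per axis and the padded canvas size needed so that
    every tile is fully covered (the missing area is filled with black)."""
    cols = math.ceil(width / tile)
    rows = math.ceil(height / tile)
    return cols, rows, cols * tile, rows * tile

cols, rows, padded_w, padded_h = tile_grid(2560, 1920, 416)
# 7 x 5 grid -> 35 sub-images; canvas padded from 2560x1920 to 2912x2080
print(cols, rows, cols * rows, padded_w, padded_h)
```

This recovers the 7 × 5 = 35 sub-images stated in the embodiment, with the black padding accounting for the extra 352 × 160 pixels.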
Further, 1200 sticky trap sample sub-images are randomly selected to construct the sticky trap sample set.
Further, the type position label corresponding to each sticky trap sample sub-image in the sticky trap sample set is determined; the type position label includes a type information label and a position information label of the target pest.
Further, for the target pests in each sticky trap sample sub-image in the sticky trap sample set, type position labels are annotated with the graphic image labeling tool labelImg to form an XML file.
Further, each sticky trap sample sub-image and its corresponding labeled XML file are combined into one data sample, and the data set constructed from all data samples is obtained.
Further, the data set is randomly divided into a training set, a verification set and a test set according to a certain proportion (generally about 70%, 10% and 20%); in this example, the training set includes 700 image data samples, the validation set includes 100 image data samples, and the test set includes 200 image data samples.
Further, the training set and the verification set are utilized to train the pest detection and identification model to be trained.
In this embodiment, the large image is divided into small images, which are then labeled to construct the training set, verification set and test set. This reduces the memory requirement of the pest detection and identification model; it increases the relative proportion of small-volume target pests on the sticky trap image, which can improve the accuracy of feature learning and the final detection and identification precision; in addition, a complete pest image information base can be established on this basis, providing training samples and verification samples for pest detection and identification models to be trained subsequently.
Based on the above embodiment, as an optional embodiment, training the pest detection and identification model to be trained by using the training set and the verification set includes:
step 1, setting an initial learning rate, and pre-training the pest detection and identification model to be trained with the training set based on the Adam (adaptive moment estimation) optimization method, to obtain a preliminarily trained pest detection and identification model;
step 2, carrying out primary verification on the preliminarily trained pest detection and identification model by using the verification set, and obtaining a primary training loss value of the preliminarily trained pest detection and identification model and a primary verification accuracy rate of the verification set;
step 3, iteratively executing step 1 to step 2, adding a callback function at each iteration; while the preliminary training loss value keeps decreasing over at least 3 consecutive iterations and the preliminary verification accuracy rate does not keep decreasing, iterating until a preset number of iterations is reached, obtaining a pre-trained pest detection and identification model, and executing step 4;
in the case that the preliminary training loss value fails to decrease for 3 consecutive iterations, or the preliminary verification accuracy rate keeps decreasing, stopping the iteration, obtaining the pre-trained pest detection and identification model, and executing step 4;
step 4, reducing the initial learning rate, and training the pre-trained pest detection and identification model with the training set again based on the Adam optimization method, to obtain a re-trained pest detection and identification model;
step 5, the re-trained pest detection and identification model is re-verified by using the verification set, and a re-training loss value of the re-trained pest detection and identification model and a re-verification accuracy rate of the verification set are obtained;
step 6, iteratively executing step 4 to step 5, adding a callback function at each iteration; while the retraining loss value keeps decreasing over at least 3 consecutive iterations and the re-verification accuracy rate does not keep decreasing, iterating until the preset number of iterations is reached, obtaining the trained pest detection and identification model;
and in the case that the retraining loss value fails to decrease for 3 consecutive iterations, or the re-verification accuracy rate keeps decreasing, stopping the iteration and obtaining the trained pest detection and identification model. The callback function added during training gradually reduces the learning rate and thereby improves model performance.
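The stopping rule of steps 3 and 6 can be sketched as a patience-style check; `should_stop` is a hypothetical helper written to match the text ("stop when the loss has not decreased for 3 consecutive times, or the verification accuracy keeps decreasing"), not the embodiment's actual callback.

```python
def should_stop(losses, accs, patience=3):
    """Early-stopping check: stop when the training loss has failed to
    decrease for `patience` consecutive epochs, or when the validation
    accuracy has decreased in each of the last `patience` epochs."""
    if len(losses) <= patience:
        return False
    recent_l = losses[-(patience + 1):]
    loss_stalled = all(recent_l[k + 1] >= recent_l[k] for k in range(patience))
    recent_a = accs[-(patience + 1):]
    acc_falling = all(recent_a[k + 1] < recent_a[k] for k in range(patience))
    return loss_stalled or acc_falling

# Loss keeps improving and accuracy holds -> keep training.
print(should_stop([1.0, 0.8, 0.6, 0.5, 0.4], [0.7, 0.8, 0.8, 0.85, 0.9]))  # False
# Loss stalls for 3 consecutive epochs -> stop.
print(should_stop([1.0, 0.8, 0.8, 0.9, 0.9], [0.7, 0.8, 0.8, 0.85, 0.9]))  # True
```

In practice, frameworks expose the same logic as ready-made callbacks (e.g. early stopping with a patience parameter combined with learning-rate reduction on plateau).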
In one embodiment, in step 1, the number of iterations is preset to 100, the maximum learning rate is set to 0.001, the minimum learning rate to 0.000001, and the common Adam optimizer can be used; every 4 data samples in the training set are input as one batch into the network of the pest detection and identification model to be trained, which is pre-trained to obtain the preliminarily trained pest detection and identification model.
Further, in step 2, the preliminarily trained pest detection and identification model is preliminarily verified by using the verification set, and a preliminary training loss value of the preliminarily trained pest detection and identification model and a preliminary verification accuracy rate of the verification set are obtained.
Further, in step 3, step 1 to step 2 are executed iteratively, with a callback function added at each iteration; while the preliminary training loss value keeps decreasing over at least 3 consecutive iterations and the preliminary verification accuracy rate does not keep decreasing, the iteration continues until 100 iterations are reached, the pre-trained pest detection and identification model is obtained, and step 4 is executed; in the case that the preliminary training loss value fails to decrease for 3 consecutive iterations, or the preliminary verification accuracy rate keeps decreasing, the iteration is stopped, the pre-trained pest detection and identification model is obtained, and step 4 is executed.
Further, in step 4, on the basis of the completed pre-training, the number of iterations is set to 100, the maximum learning rate is reduced from 0.001 to 0.0001, and the minimum learning rate is set to 0.000001. During training, the Adam optimization algorithm is used to update parameters such as the network weights; every 4 data samples in the training set are input as one batch into the network of the pre-trained pest detection and identification model, which is trained with the training set to obtain the re-trained pest detection and identification model.
Further, in step 5, the re-trained pest detection and identification model is re-verified by using the verification set, and a re-training loss value of the re-trained pest detection and identification model and a re-verification accuracy rate of the verification set are obtained.
Further, in step 6, step 4 to step 5 are executed iteratively, with a callback function added at each iteration; while the retraining loss value keeps decreasing over at least 3 consecutive iterations and the re-verification accuracy rate does not keep decreasing, the iteration continues until 100 iterations are reached, the output network weights are the final model weights, and the trained pest detection and identification model is obtained;
in the case that the retraining loss value fails to decrease for 3 consecutive iterations, or the re-verification accuracy rate keeps decreasing, the iteration is stopped, the output network weights are the final model weights, and the trained pest detection and identification model is obtained.
In this embodiment, setting an appropriate learning rate prevents the pest detection and identification model from converging too slowly, or failing to converge, during training and verification.
It should be noted that, for the pest detection and identification model obtained by training in this embodiment, the target pests are set to thrips and whiteflies, and a detection test is performed on 200 target pest sticky trap images with the model; the test results show a detection precision of 97.94% for thrips and 97.42% for whiteflies, both higher than the precision of the prior art.
The pest detection and identification model has the advantages of high fault tolerance and high detection accuracy; it can detect and identify the type information and position information of target pests on the sticky trap image, and provides a basis for obtaining the quantity information of the target pests.
Fig. 3 is a schematic diagram of segmenting an image of a sticky trap according to an alternative embodiment of the present invention, as shown in fig. 3, after acquiring a target image, the method further includes the following steps:
carrying out equidistant segmentation processing on the target image, and filling a black background at boundaries that cannot be evenly divided, to obtain a plurality of target sub-images;
wherein, an overlapping area is arranged between every two adjacent target sub-images, and the size of the overlapping area is positively correlated with the size of the body type of the target pests;
accordingly, inputting the target image to the trained pest detection and recognition model to determine the type information and the position information of the target pest, comprising:
and respectively inputting each target sub-image into the trained pest detection and identification model to determine the type information and the position information of the target pest.
Alternatively, the pixel range of the target image may be set as needed. In the following embodiments of the present invention, an image of 2560 × 1920 pixels is used as the target image; this should not be considered a limitation of the scope of the present invention.
The overlapping region is set to prevent a pest located on a division boundary from being split across adjacent target sub-images, which would cause it to be counted twice or missed and thus affect counting accuracy.
In one embodiment, a target image of 2560 × 1920 pixels is subjected to 7 × 5 equidistant segmentation in an overlapping manner, with black background filling at boundaries that cannot be evenly divided, yielding 35 target sub-images of 416 × 416 pixels.
Overlapping division means that two adjacent target sub-images share a certain number of pixels during segmentation: as shown in fig. 3, the diagonally shaded area belongs both to the 416 × 416 target sub-image formed by tile 1 and to the one formed by tile 2. The width of the shaded area is determined by the size of the target pest; for example, when the target pests are thrips and whiteflies, whose total pixel footprint lies between 10 × 10 and 20 × 20, the overlap width is set to 20 pixels.
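Under the stated assumptions (tile side N = 416, overlap D = 20, hence stride N − D = 396), the origins of the overlapping tiles along one axis can be computed as follows; this is an illustrative sketch, not necessarily the exact division routine of the embodiment.

```python
def tile_origins(length: int, tile: int = 416, overlap: int = 20):
    """Top-left offsets of overlapping tiles along one axis; the last tile
    may extend past the image edge, with the excess filled by black."""
    stride = tile - overlap
    origins = []
    pos = 0
    while pos < length - overlap:  # continue until the image is covered
        origins.append(pos)
        pos += stride
    return origins

xs = tile_origins(2560)  # column offsets
ys = tile_origins(1920)  # row offsets
print(len(xs), len(ys), len(xs) * len(ys))  # 7 5 35
```

With a 396-pixel stride this reproduces the 7 × 5 = 35 overlapping sub-images of the embodiment, each sharing a 20-pixel band with its neighbour.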
And further, inputting each target sub-image obtained after segmentation into a trained pest detection and identification model.
Equidistant overlapping segmentation of the target image adapts it to the square image resolution required by the deep learning framework, increases processing speed, reduces the memory requirement of the pest detection and identification model, and increases the relative size of small-volume target pests in the sticky trap image, thereby improving the model's detection and identification precision for target pests.
Based on the above embodiment, as an optional embodiment, inputting each target sub-image into the trained pest detection and identification model to determine the type information and position information of the target pest comprises processing of the input images and, correspondingly, processing of the output results:
inputting the target subimage into a trained pest detection and recognition model, and determining a detection and recognition result corresponding to the target subimage; the detection and identification result is the type information and the position information of the target pests;
respectively mapping the detection recognition result related to each of the plurality of target sub-images to the target image;
and based on a non-maximum suppression method, eliminating redundant repeated counts in the overlapping areas, to obtain the type information and position information of the target pests on the target image.
As an optional embodiment, taking 416 × 416 pixel size target sub-images as an example, all the target sub-images are input to the trained pest detection and identification model one by one, pest feature information of a shallow network is determined, and after downsampling, upsampling and splicing processing, a detection and identification result on the target sub-images is obtained.
Further, mapping the detection and identification result of the target sub-image to the target image, wherein the mapping position formula is as follows:
(X,Y)=[(N-D)(j-1),(N-D)(i-1)]+(x,y);
wherein (X, Y) is the coordinate on the target image and (x, y) is the coordinate on the target sub-image; N is the single-side pixel size of a target sub-image (N = 416); i is the row index of the sub-image (1 ≤ i ≤ 5); j is its column index (1 ≤ j ≤ 7); and D is the overlap width (D = 20).
Further, pests repeatedly detected and counted in the overlapping areas are de-duplicated to obtain the final target pest detection, identification and counting result on the whole original image. The de-duplication method is as follows: first, the position information of target pests such as thrips or whiteflies detected on the target sub-images is mapped onto the original target image; then a Non-Maximum Suppression (NMS) algorithm removes the redundant target frames on each target pest, so that only one target frame is retained per detected pest and repeated detections are eliminated. Splicing of the target sub-images is shown in fig. 4: the detection and identification results of target sub-image a and target sub-image b are merged into image c, and NMS then removes the lower-scoring of the two detection frames, retaining the higher-scoring one.
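The mapping formula and the NMS de-duplication step can be sketched in a few lines of pure Python; the (x1, y1, x2, y2) box format, the example scores and the 0.5 IoU threshold are assumptions for illustration, not values specified by the embodiment.

```python
def map_to_target(x, y, i, j, n=416, d=20):
    """Map sub-image coordinates (x, y) in grid cell (row i, column j) back
    to target-image coordinates: (X, Y) = [(N-D)(j-1), (N-D)(i-1)] + (x, y)."""
    return (n - d) * (j - 1) + x, (n - d) * (i - 1) + y

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Keep only the highest-scoring box among mutually overlapping detections."""
    order = sorted(range(len(boxes)), key=lambda k: scores[k], reverse=True)
    keep = []
    for k in order:
        if all(iou(boxes[k], boxes[m]) < thresh for m in keep):
            keep.append(k)
    return keep

# Two detections of the same pest from overlapping tiles collapse to one.
boxes = [(100, 100, 120, 118), (102, 101, 121, 119), (300, 300, 318, 316)]
scores = [0.9, 0.8, 0.95]
print(nms(boxes, scores))  # [2, 0]
```

The first two boxes overlap heavily (IoU ≈ 0.77), so NMS discards the lower-scoring duplicate and each pest contributes exactly one count.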
According to the embodiment, the detection and identification results of all target sub-images are mapped to the target image, and redundancy of the overlapped part is removed, so that the type information and the position information of target pests on the target image can be accurately determined, the quantity information of the target pests can be accurately acquired, an accurate data acquisition method is provided for pest population density estimation, and a data basis is provided for later-stage pest control strategies.
Fig. 5 is a schematic structural diagram of an image capturing device for a pest sticking plate according to an alternative embodiment of the present invention, as shown in fig. 5, including:
a solar power supply panel 501 for converting light energy into electric energy;
the image acquisition controller and camera module 502 is used for acquiring a target image;
the pest sticking plate 503 is used for capturing target pests and providing a basis for acquiring a target image;
the storage battery 504 is used for storing the electric energy provided by the solar power supply panel 501 as chemical energy, and provides electric energy for the operation of the whole apparatus;
a cross bar 505 for supporting the insect sticking plate 503;
a vertical support bar 506 for supporting the entire apparatus;
a tripod 507 for fixing the whole apparatus.
Fig. 6 is a second schematic flow chart of the method for detecting small-volume pests on a pest adhesion plate image according to the present invention, as an alternative embodiment, as shown in fig. 6, the whole implementation steps include, but are not limited to:
step 601, installing and deploying image acquisition equipment of a pest sticking plate;
step 602, collecting an image of a pest sticking plate;
step 603, constructing a pest detection and identification model to be trained according to the characteristics of the target pests;
step 604, making a target image pest data set, and constructing a training set, a verification set and a test set;
step 605, training the pest detection and identification model to be trained by using the training set and the verification set to obtain a trained target pest detection model;
step 606, inputting the target image into the trained pest detection and identification model to obtain the quantity information of the target pests;
in step 601, the field to be detected is evenly divided into a plurality of detection sub-fields, and the pest sticking plate image acquisition equipment shown in fig. 5 is installed and deployed at a fixed position of each sub-field.
Further, in step 602, the image acquisition controller and the camera module 502 are used to acquire an image of the bug sticky board 503 at a preset time point.
Further, in step 603, a pest detection and identification model to be trained is constructed;
a pest detection identification model comprising: a CSPDarknet53 element, a spatial pyramid pooling element, a path aggregation network element, and an output element.
The CSPDarknet53 unit is a main network of a pest detection and identification model and is used for extracting features of an input image;
the spatial pyramid pooling unit is used for selecting the maximum value as the pooled value;
the path aggregation network unit is used for carrying out scaling splicing processing on the input characteristic images;
the output unit is a detection layer for outputting the species information and the location information of the target pest.
Specifically, a CSPDarknet53 element, comprising: the device comprises 1 input layer, a first convolution module, a first residual error module, a second residual error module, a third residual error module, a fourth residual error module and a fifth residual error module which are sequentially connected;
in particular, the spatial pyramid pooling unit includes a max pooling layer.
Specifically, the path aggregation network unit includes: the device comprises a first splicing module, a first up-sampling module, a second splicing module, a second up-sampling module, a third splicing module and a third up-sampling module which are connected in sequence; the first downsampling module, the fourth splicing module, the second downsampling module, the fifth splicing module, the third downsampling module and the sixth splicing module are sequentially connected;
the output end of the first splicing module is connected with the input end of the first down-sampling module, the output end of the second splicing module is connected with the input end of the fourth splicing module, and the output end of the third splicing module is connected with the input end of the fifth splicing module.
Specifically, the output unit includes: a first YOLO Head module, a second YOLO Head module, and a third YOLO Head module.
Specifically, the output end of the second residual error module is connected with the input end of the first splicing module through the second convolution module; the output end of the third residual error module is connected with the input end of the second splicing module through a third convolution module; and the output end of the fourth residual error module is connected with the input end of the third splicing module through a fourth convolution module.
Specifically, the output end of the fifth residual error module is connected to the input end of the spatial pyramid pooling unit through the fifth convolution module.
Specifically, the output end of the spatial pyramid pooling unit is connected with the input ends of the third upsampling module and the sixth splicing module through the feature fusion module respectively; the feature fusion module comprises 1 merging layer and 3 convolution layers which are connected in sequence.
Specifically, the output end of the first splicing module is connected with the input end of the first YOLO Head module; the output end of the fifth splicing module is connected with the input end of the second YOLO Head module; and the output end of the sixth splicing module is connected with the input end of the third YOLO Head module.
Specifically, the first residual module and the fifth residual module respectively include 4 residual units connected in sequence, and the second residual module, the third residual module and the fourth residual module respectively include 8 residual units connected in sequence.
Specifically, the fifth convolution module includes 3 convolution layers connected in sequence.
Specifically, the first splicing module, the second splicing module, the third splicing module, the fourth splicing module, the fifth splicing module and the sixth splicing module respectively comprise 1 merging layer and 5 convolution layers which are sequentially connected.
It should be noted that, in the YOLOv4 network model, the feature map at the 8× downsampling position is upsampled by a factor of 2 and spliced with the feature map at the 4× downsampling position, so as to establish a new YOLO detection layer at the 4× downsampling position. A larger feature map is more sensitive to small-volume target pests; by superimposing adjacent features of the shallow feature map onto different channels, the network learns deep and shallow features simultaneously, giving the model fine-grained features and improving its ability to detect and identify small-volume target pests.
Further, in step 604, a plurality of sticky trap sample images, originally 2560 × 1920 pixels in size, are subjected to 7 × 5 equidistant segmentation, with black background filling applied at boundaries that cannot be evenly divided, so that each sample image is segmented into 35 sticky trap sample sub-images of 416 × 416 pixels. On one hand, this accommodates deep learning frameworks, which generally require a square image resolution, improves processing speed and reduces memory requirements; on the other hand, it increases the relative size of small-volume target pests in the sticky trap image. 1200 sticky trap sample sub-images are randomly selected to construct the sticky trap sample set; the type position label corresponding to each sticky trap sample sub-image in the set is determined, comprising a type information label and a position information label of the target pest; type position labels are annotated for the target pests of each sub-image with the graphic image labeling tool labelImg to form an XML file; each sticky trap sample sub-image is combined with its corresponding labeled XML file into one data sample, and the data set constructed from all data samples is obtained; the data set is randomly divided into a training set, a verification set and a test set according to a certain proportion (generally 70%, 10% and 20%); in this example, the training set includes 700 image data samples, the verification set 100, and the test set 200.
Further, the training set and the verification set are utilized to train the pest detection and identification model to be trained.
Further, in step 605, in step 1, the number of iterations is preset to 100, the maximum learning rate is set to 0.001, the minimum learning rate to 0.000001, and the common Adam optimizer can be used; every 4 data samples in the training set are input as one batch into the network of the pest detection and identification model to be trained, which is pre-trained to obtain the preliminarily trained pest detection and identification model.
Further, in step 2, the preliminarily trained pest detection and identification model is preliminarily verified by using the verification set, and a preliminary training loss value of the preliminarily trained pest detection and identification model and a preliminary verification accuracy rate of the verification set are obtained.
Further, in step 3, step 1 to step 2 are executed iteratively, with a callback function added at each iteration; while the preliminary training loss value keeps decreasing over at least 3 consecutive iterations and the preliminary verification accuracy rate does not keep decreasing, the iteration continues until 100 iterations are reached, the pre-trained pest detection and identification model is obtained, and step 4 is executed;
in the case that the preliminary training loss value fails to decrease for 3 consecutive iterations, or the preliminary verification accuracy rate keeps decreasing, the iteration is stopped, the pre-trained pest detection and identification model is obtained, and step 4 is executed.
Further, in step 4, on the basis of the completed pre-training, the number of iterations is set to 100, the maximum learning rate is reduced from 0.001 to 0.0001, and the minimum learning rate is set to 0.000001. During training, the Adam optimization algorithm is used to update parameters such as the network weights; every 4 data samples in the training set are input as one batch into the network of the pre-trained pest detection and identification model, which is trained with the training set to obtain the re-trained pest detection and identification model.
Further, in step 5, the re-trained pest detection and identification model is re-verified by using the verification set, and a re-training loss value of the re-trained pest detection and identification model and a re-verification accuracy rate of the verification set are obtained.
Further, in step 6, steps 4 to 5 are executed iteratively, with a callback function added at each iteration. While the retraining loss value decreases for at least 3 consecutive iterations and the re-verification accuracy rate does not decrease continuously, steps 4 to 5 are repeated until 100 iterations are reached; the network weights output at that point are the final model weights, and the trained pest detection and identification model is obtained;
and under the condition that the retraining loss value fails to decrease for 3 consecutive iterations, or the re-verification accuracy rate decreases continuously, the iteration is stopped early; the network weights output at that point are the final model weights, and the trained pest detection and identification model is obtained.
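The stopping rule used in both training stages (steps 3 and 6) can be sketched as a small monitor object; the class name, interface, and the reuse of the same patience value of 3 for the accuracy condition are illustrative assumptions, not from the patent:

```python
class EarlyStopper:
    """Stop when the loss fails to improve for `patience` consecutive
    checks, the validation accuracy keeps dropping, or the iteration
    cap is reached — a minimal sketch of the rule in steps 3 and 6."""

    def __init__(self, patience=3, max_iters=100):
        self.patience = patience
        self.max_iters = max_iters
        self.best_loss = float("inf")
        self.bad_loss_streak = 0   # consecutive checks without loss improvement
        self.prev_acc = None
        self.acc_drop_streak = 0   # consecutive drops in validation accuracy
        self.iters = 0

    def should_stop(self, loss, val_acc):
        self.iters += 1
        if loss < self.best_loss:
            self.best_loss = loss
            self.bad_loss_streak = 0
        else:
            self.bad_loss_streak += 1
        if self.prev_acc is not None and val_acc < self.prev_acc:
            self.acc_drop_streak += 1
        else:
            self.acc_drop_streak = 0
        self.prev_acc = val_acc
        return (self.iters >= self.max_iters
                or self.bad_loss_streak >= self.patience
                or self.acc_drop_streak >= self.patience)
```

In use, `should_stop` would be called by the per-iteration callback with the latest training loss and verification accuracy; a `True` return ends the stage and the current network weights are kept as the stage's output.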
Further, in step 606, the target image of 2560 × 1920 pixels is divided at equal intervals, with overlap, into 7 × 5 tiles, and the boundary regions of the target image that cannot be fully tiled are padded with a black background, yielding 35 target sub-images of 416 × 416 pixels. Overlapping division means that two adjacent target sub-images share a certain number of pixels during division. As shown in fig. 3, the diagonally shaded area belongs both to tile 1 and to tile 2, each of which forms a 416 × 416 pixel target sub-image. The width of the overlapping (shaded) area is chosen according to the size of the target pest; for example, when the target pests are thrips and whiteflies, which typically occupy between 10 × 10 and 20 × 20 pixels, the width of the overlapping area is set to 20 pixels.
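The overlapping division with black-background padding can be sketched as follows (function name is illustrative; the tile stride of tile − overlap pixels matches the mapping formula given later, and the padding placement on the right/bottom borders is an assumption):

```python
import numpy as np

def tile_with_overlap(image, tile=416, overlap=20):
    """Split an H x W x 3 image into overlapping `tile` x `tile` tiles,
    padding the right/bottom borders with black (zeros) so the last
    row/column of tiles is full-sized. Returns the tiles row-major."""
    stride = tile - overlap
    h, w = image.shape[:2]
    rows = max(1, -(-(h - overlap) // stride))  # ceil division
    cols = max(1, -(-(w - overlap) // stride))
    pad_h = (rows - 1) * stride + tile - h
    pad_w = (cols - 1) * stride + tile - w
    padded = np.pad(image, ((0, max(0, pad_h)), (0, max(0, pad_w)), (0, 0)))
    tiles = []
    for i in range(rows):
        for j in range(cols):
            y, x = i * stride, j * stride
            tiles.append(padded[y:y + tile, x:x + tile])
    return tiles, rows, cols
```

For the 2560 × 1920 example with a 416-pixel tile and 20-pixel overlap, this produces the 7 × 5 = 35 sub-images described above.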
Further, each target sub-image obtained after segmentation is input into the trained pest detection and identification model.
All target sub-images are input into the trained pest detection and identification model one by one; pest feature information is extracted in the shallow layers of the network, and the detection and identification results on each target sub-image are obtained after down-sampling, up-sampling and feature splicing.
Further, mapping the detection and identification result of the target sub-image to the target image, wherein the mapping position formula is as follows:
(X,Y)=[(N-D)(j-1),(N-D)(i-1)]+(x,y);
wherein, (X, Y) is a row and column coordinate value on the target image, (X, Y) is a row and column coordinate value on the target sub-image, N is a single-side pixel of the target sub-image (N ═ 416), i represents a row number (i ≦ 1 ≦ N), j is a column number (j ≦ 1 ≦ N), and D is an overlapping area width (D ≦ 20).
Further, redundant counts of pests repeatedly detected in the overlapping areas are eliminated to obtain the final detection, identification and counting result for target pests on the whole original image. The duplicate removal proceeds as follows: first, the position information of target pests such as thrips or whiteflies detected on each target sub-image is mapped onto the original target image; then, a Non-Maximum Suppression (NMS) algorithm removes the multiple target frames covering the same target pest on the target image, so that only one target frame is retained for each detected pest target. Splicing of the target sub-images is shown in fig. 4: the detection and identification results of target sub-image a and target sub-image b are merged into image c, and NMS then removes the lower-scoring of the two overlapping detection frames and retains the higher-scoring one.
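A standard greedy NMS, applied after all tile detections have been mapped into full-image coordinates, can be sketched as follows (the IoU threshold of 0.5 is illustrative; the patent does not state its value):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy Non-Maximum Suppression: repeatedly keep the highest-scoring
    box and discard remaining boxes whose IoU with it exceeds the
    threshold. Boxes are (x1, y1, x2, y2) in full-image coordinates."""
    boxes = np.asarray(boxes, dtype=float)
    order = np.argsort(scores)[::-1]  # indices, best score first
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(int(best))
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection of the best box with every remaining box
        x1 = np.maximum(boxes[best, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[best, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[best, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[best, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_best = (boxes[best, 2] - boxes[best, 0]) * (boxes[best, 3] - boxes[best, 1])
        area_rest = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_best + area_rest - inter)
        order = rest[iou <= iou_thresh]  # drop duplicates of the kept box
    return keep
```

Two near-coincident frames over the same pest in an overlap region thus collapse to the single higher-scoring frame, while detections elsewhere on the sticky trap are unaffected.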
According to the small-volume pest detection method on the sticky trap image provided by the present invention, the number of target pests on the sticky trap image is counted by a deep learning model designed for small-volume pest detection. By expanding the feature map through feature splicing and adding residual units, small-volume pests on sticky trap images are detected automatically and accurately, providing an accurate data acquisition method for pest population density estimation and a data basis for subsequent pest control strategies.
Fig. 7 is a schematic structural diagram of a small-volume pest detection system on a pest adhesion plate image provided by the present invention, as an alternative embodiment, as shown in fig. 7, mainly including but not limited to the following modules:
an image acquisition module 701, configured to acquire a target image, where the target image is an image of a pest sticking plate capturing a target pest;
a pest detection and recognition module 702, configured to input the target image into a trained pest detection and recognition model to determine type information and location information of the target pest, where the trained pest detection and recognition model is constructed based on a single-stage target detection algorithm;
a number information obtaining module 703, configured to obtain the number information of the target pest through counting the detected species information and the location information.
In one embodiment, a target image is acquired by the image acquisition module 701 and output to the pest detection and identification module 702; the pest detection and identification module 702 performs detection and identification processing on the target image output by the image acquisition module 701 to determine the type information and the position information of the target pest, where the trained pest detection and identification model is constructed based on a single-stage target detection algorithm; the number information acquiring module 703 acquires the number information of the target pest from the type information and the position information output by the pest detection and identification module 702.
As an optional embodiment, the image capturing device in the image acquisition module 701 may be disposed beside each sticky trap and configured to capture images of the sticky traps in real time. Alternatively, after a sticky trap has finished capturing pests, it may be retrieved and brought to a well-lit environment favorable for photographing, which improves image quality and saves equipment cost.
Further, an image acquisition device in the image acquisition module 701 acquires an image of each pest sticking plate to acquire a target image;
optionally, the target image acquisition time may be selected in a time period with good illumination, or light is supplemented to the image acquisition device.
Further, the pest detection and identification module 702 inputs the target image acquired by the image acquisition device into the trained pest detection and identification model, and detects and identifies the type information of the target pest and the position information on the target image by extracting the features of the target image.
Alternatively, the trained pest detection and identification model may be constructed based on a single-stage target detection algorithm (YOLO series).
Alternatively, the target pest may be any one of thrips, whiteflies or aphids.
Further, the number information acquiring module 703 counts and counts according to the type information and the position information of the target pest, and acquires the number information of the target pest.
It should be noted that the small-volume pest detection system on the sticky trap image provided in the embodiment of the present invention can be implemented based on the small-volume pest detection method on the sticky trap image in any of the embodiments described above in specific execution, and details of this embodiment are not described herein.
According to the small-volume pest detection system on the sticky trap image provided by the present invention, the number of target pests on the sticky trap image is counted by a deep learning model designed for small-volume pest detection. By expanding the feature map through feature splicing and adding residual units, small-volume pests on field sticky trap images are detected automatically and accurately, providing an accurate data acquisition method for pest population density estimation and a data basis for subsequent pest control strategies.
Fig. 8 is a schematic structural diagram of an electronic device provided in the present invention, and as shown in fig. 8, the electronic device may include: a processor (processor)810, a communication Interface 820, a memory 830 and a communication bus 840, wherein the processor 810, the communication Interface 820 and the memory 830 communicate with each other via the communication bus 840. Processor 810 may invoke logic instructions in memory 830 to perform a method of small-volume pest detection on a sticky trap image, the method comprising: acquiring a target image, wherein the target image is an image of a pest sticking plate for capturing target pests; inputting the target image into a trained pest detection and identification model, determining the type information and the position information of the target pest, and acquiring the quantity information of the target pest by counting the detected type information and the detected position information, wherein the trained pest detection and identification model is constructed based on a single-stage target detection algorithm.
In addition, the logic instructions in the memory 830 may be implemented in software functional units and stored in a computer readable storage medium when the logic instructions are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In another aspect, the present invention also provides a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the method for small-volume pest detection on a sticky trap image provided by the above methods, the method comprising: acquiring a target image, wherein the target image is an image of a pest sticking plate for capturing target pests; inputting the target image into a trained pest detection and identification model, determining the type information and the position information of the target pest, and acquiring the quantity information of the target pest by counting the detected type information and the detected position information, wherein the trained pest detection and identification model is constructed based on a single-stage target detection algorithm.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method for small-volume pest detection on a sticky trap image provided in the above embodiments, the method comprising: acquiring a target image, wherein the target image is an image of a pest sticking plate for capturing target pests; inputting the target image into a trained pest detection and identification model, determining the type information and the position information of the target pest, and acquiring the quantity information of the target pest by counting the detected type information and the detected position information, wherein the trained pest detection and identification model is constructed based on a single-stage target detection algorithm.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods of the various embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A small-volume pest detection method on a pest sticking plate image is characterized by comprising the following steps:
acquiring a target image, wherein the target image is an image of a pest sticking plate for capturing target pests;
inputting the target image into a trained pest detection and identification model to determine the type information and the position information of the target pest, wherein the trained pest detection and identification model is constructed based on a single-stage target detection algorithm;
and acquiring the quantity information of the target pests by counting the detected species information and the position information.
2. The small-volume pest detection method on the sticky trap image according to claim 1, further comprising, before inputting the target image to the trained pest detection and identification model to determine the type information and the position information of the target pest, constructing a pest detection and identification model to be trained; the pest detection and identification model comprises at least the following units: a CSPDarknet53 unit, a spatial pyramid pooling unit, a path aggregation network unit and an output unit.
3. The small-volume pest detection method on the sticky trap image according to claim 2,
the CSPDarknet53 unit, comprising: the device comprises an input layer, a first convolution module, a first residual error module, a second residual error module, a third residual error module, a fourth residual error module and a fifth residual error module which are connected in sequence;
the spatial pyramid pooling unit comprises a maximum pooling layer;
the path aggregation network unit includes: the device comprises a first splicing module, a first up-sampling module, a second splicing module, a second up-sampling module, a third splicing module and a third up-sampling module which are connected in sequence; the first downsampling module, the fourth splicing module, the second downsampling module, the fifth splicing module, the third downsampling module and the sixth splicing module are sequentially connected;
the output end of the first splicing module is connected with the input end of the first down-sampling module, the output end of the second splicing module is connected with the input end of the fourth splicing module, and the output end of the third splicing module is connected with the input end of the fifth splicing module;
the output unit comprises: a first YOLO Head module, a second YOLO Head module and a third YOLO Head module;
the output end of the second residual error module is connected with the input end of the first splicing module through a second convolution module;
the output end of the third residual error module is connected with the input end of the second splicing module through a third convolution module;
the output end of the fourth residual error module is connected with the input end of the third splicing module through a fourth convolution module;
the output end of the fifth residual error module is connected with the input end of the spatial pyramid pooling unit through a fifth convolution module;
the output end of the spatial pyramid pooling unit is connected with the input ends of the third upsampling module and the sixth splicing module through a feature fusion module respectively; the characteristic fusion module comprises 1 merging layer and 3 convolution layers which are sequentially connected;
the output end of the first splicing module is connected with the input end of the first YOLO Head module;
the output end of the fifth splicing module is connected with the input end of the second YOLO Head module;
the output end of the sixth splicing module is connected with the input end of the third YOLO Head module;
the first residual module and the fifth residual module respectively comprise 4 residual units which are sequentially connected, and the second residual module, the third residual module and the fourth residual module respectively comprise 8 residual units which are sequentially connected;
the fifth convolution module comprises 3 convolution layers which are connected in sequence;
the first splicing module, the second splicing module, the third splicing module, the fourth splicing module, the fifth splicing module and the sixth splicing module each comprise 1 merging layer and 5 convolution layers which are sequentially connected.
4. The small-volume pest detection method on the sticky trap image according to claim 2, further comprising, after the pest detection and identification model to be trained is constructed:
acquiring a plurality of sticky trap sample images, performing equidistant segmentation processing on each sticky trap sample image, and filling black background on boundaries which cannot be completely divided in each sticky trap sample image to acquire a plurality of sticky trap sample sub-images;
constructing a sticky trap sample set by using the sticky trap sample sub-image;
determining a kind position label corresponding to each pest sticking plate sample sub-image in the pest sticking plate sample set; the species position label comprises a species information label and a position information label of the target pest;
combining each sticky trap sample sub-image and the corresponding species position label into a data sample based on a graphic image labeling tool;
constructing a data set from all data samples;
dividing the data samples in the data set according to a preset proportion to construct a training set, a verification set and a test set;
and training the pest detection and identification model to be trained by utilizing the training set and the verification set.
5. The method for detecting small-volume pests on sticky trap images according to claim 4, wherein the training of the pest detection and identification model to be trained by using the training set and the verification set comprises:
step 1, setting an initial learning rate, and pre-training the pest detection and identification model to be trained by using the training set based on the ADAM (Adaptive Moment Estimation) optimization method to obtain a preliminarily trained pest detection and identification model;
step 2, carrying out primary verification on the preliminarily trained pest detection and identification model by using the verification set, and obtaining a primary training loss value of the preliminarily trained pest detection and identification model and a primary verification accuracy rate of the verification set;
step 3, iteratively executing step 1 to step 2, adding a callback function at each iteration; under the condition that the preliminary training loss value decreases continuously for at least 3 times and the preliminary verification accuracy rate does not decrease continuously, repeating step 1 to step 2 until a preset number of times is reached, obtaining a pre-trained pest detection and identification model, and executing step 4;
under the condition that the initial training loss value is not reduced for 3 times or the initial verification accuracy rate is continuously reduced, stopping iteration, obtaining a pre-trained pest detection and identification model, and executing the step 4;
step 4, reducing the initial learning rate, and training the pre-trained pest detection and identification model by using the training set based on the ADAM optimization method again to obtain a re-trained pest detection and identification model;
step 5, the re-trained pest detection and identification model is re-verified by using the verification set, and a re-training loss value of the re-trained pest detection and identification model and a re-verification accuracy rate of the verification set are obtained;
step 6, iteratively executing the step 4 to the step 5, gradually increasing a callback function in the process of each iteration, and obtaining a trained pest detection and identification model under the conditions that the retraining loss value is continuously reduced for at least 3 times and the retraining accuracy rate is not continuously reduced until reaching a preset number of times;
and under the condition that the retraining loss value is not reduced for 3 times or the retraining accuracy rate is continuously reduced, stopping iteration and obtaining the trained pest detection and identification model.
6. The small-volume pest detection method on the sticky trap image according to claim 1, further comprising, after acquiring the target image:
carrying out equidistant segmentation processing on the target image, and using black background filling processing on boundaries which cannot be divided in the target image to obtain a plurality of target sub-images;
wherein an overlapping area is arranged between every two adjacent target sub-images, and the size of the overlapping area is positively correlated with the body size of the target pests;
accordingly, inputting the target image to the trained pest detection and recognition model to determine the type information and the position information of the target pest, comprising:
and respectively inputting each target sub-image into the trained pest detection and identification model to determine the type information and the position information of the target pest.
7. The small-volume pest detection method on sticky trap images according to claim 6, wherein each target sub-image is input to a trained pest detection and recognition model to determine the type information and position information of target pests, comprising:
inputting the target subimage into a trained pest detection and recognition model, and determining a detection and recognition result corresponding to the target subimage; the detection and identification result is the type information and the position information of the target pests;
mapping the detection and identification result of each of the plurality of target sub-images to the target image;
and based on a non-maximum value inhibition method, carrying out repeated counting redundant elimination on the overlapped area to obtain the type information and the position information of the target pests on the target image.
8. A small-volume pest detection system on a sticky trap image, comprising:
the image acquisition module is used for acquiring a target image, wherein the target image is a pest sticking plate image for capturing target pests;
the pest detection and identification module is used for inputting the target image into a trained pest detection and identification model to determine the type information and the position information of the target pest, and the trained pest detection and identification model is constructed based on a single-stage target detection algorithm;
and the quantity information acquisition module is used for acquiring the quantity information of the target pests through the species information and the position information.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, performs the steps of the method for small-volume pest detection on a sticky trap image according to any of claims 1 to 7.
10. A non-transitory computer readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method for small-volume pest detection on a sticky trap image according to any of claims 1 to 7.
CN202110221105.XA 2021-02-26 2021-02-26 Small-volume pest detection method and system on pest sticking plate image Pending CN112861767A (en)

Publications (1)

Publication Number Publication Date
CN112861767A true CN112861767A (en) 2021-05-28


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113269208A (en) * 2021-06-12 2021-08-17 四川虹美智能科技有限公司 Food material identification system based on Internet of things refrigerator
CN113313708A (en) * 2021-06-30 2021-08-27 安徽工程大学 Fruit detection method and system based on deep neural network
CN113744226A (en) * 2021-08-27 2021-12-03 浙大宁波理工学院 Intelligent agricultural pest identification and positioning method and system
CN113983737A (en) * 2021-10-18 2022-01-28 海信(山东)冰箱有限公司 Refrigerator and food material positioning method thereof
CN114982731A (en) * 2022-06-21 2022-09-02 黄淮学院 A wisdom agricultural control early warning system for preventing and cure plant diseases and insect pests
CN115937169A (en) * 2022-12-23 2023-04-07 广东创新科技职业学院 Shrimp fry counting method and system based on high resolution and target detection
CN116012718A (en) * 2023-02-15 2023-04-25 黑龙江科技大学 Method, system, electronic equipment and computer storage medium for detecting field pests

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107346424A (en) * 2017-06-30 2017-11-14 成都东谷利农农业科技有限公司 Lamp lures insect identification method of counting and system
US20180330166A1 (en) * 2017-05-09 2018-11-15 Blue River Technology Inc. Automated plant detection using image data
CN110428374A (en) * 2019-07-22 2019-11-08 北京农业信息技术研究中心 A kind of small size pest automatic testing method and system
WO2020047738A1 (en) * 2018-09-04 2020-03-12 安徽中科智能感知大数据产业技术研究院有限责任公司 Automatic pest counting method based on combination of multi-scale feature fusion network and positioning model
CN112084866A (en) * 2020-08-07 2020-12-15 浙江工业大学 Target detection method based on improved YOLO v4 algorithm
CN112200081A (en) * 2020-10-10 2021-01-08 平安国际智慧城市科技股份有限公司 Abnormal behavior identification method and device, electronic equipment and storage medium
CN112270681A (en) * 2020-11-26 2021-01-26 华南农业大学 Method and system for detecting and counting yellow plate pests deeply

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WENKANG CHEN等: "Detecting Citrus in Orchard Environment by Using Improved YOLOv4", 《SCIENTIFIC PROGRAMMING》, vol. 2020, 25 November 2020 (2020-11-25), pages 4 - 6 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination