CN116485796B - Pest detection method, pest detection device, electronic equipment and storage medium - Google Patents

Pest detection method, pest detection device, electronic equipment and storage medium

Info

Publication number
CN116485796B
CN116485796B (application CN202310727197.8A)
Authority
CN
China
Prior art keywords
pest
image
network
model
detection model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310727197.8A
Other languages
Chinese (zh)
Other versions
CN116485796A (en)
Inventor
李俊
高银
晏超
郭世荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mindu Innovation Laboratory
Original Assignee
Mindu Innovation Laboratory
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mindu Innovation Laboratory
Priority to CN202310727197.8A
Publication of CN116485796A
Application granted
Publication of CN116485796B
Status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0475 Generative networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/094 Adversarial learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 50/00 TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE in human health protection, e.g. against extreme weather
    • Y02A 50/30 Against vector-borne diseases, e.g. mosquito-borne, fly-borne, tick-borne or waterborne diseases whose impact is exacerbated by climate change

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Quality & Reliability (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of computer-based pest detection, and provides a pest detection method, a pest detection device, electronic equipment and a storage medium. An image to be detected is first acquired and then input into a pest detection model to obtain the detection result output by the model. The pest detection model comprises a cascaded blind deblurring model and a target detection model. The blind deblurring model performs blind deblurring on the image to be detected and generates a corresponding deblurred image. In the target detection model, the backbone network extracts image features from the deblurred image; the designated neck network, comprising a plurality of sequentially connected weighted bidirectional feature pyramid networks, extracts and fuses semantic features of the deblurred image at different levels based on those image features to obtain fusion results at different levels; and the head network outputs the detection result based on the fusion results. The method yields more accurate detection results while occupying fewer computing resources.

Description

Pest detection method, pest detection device, electronic equipment and storage medium
Technical Field
The present invention relates to the technical field of computer-based pest detection, and in particular to a pest detection method, a pest detection device, an electronic device and a storage medium.
Background
Field pests occur frequently, over wide areas, and cause severe damage, making them a major natural factor affecting and constraining grain security. Moreover, traditional chemical control methods cause substantial harm to crops and the ecological environment. If field pests can be detected in a timely and accurate manner, pest outbreaks can be effectively avoided, agricultural production safeguarded and farmers' losses prevented; a method for detecting pests in a timely and accurate manner is therefore needed.
However, agricultural pests are diverse in species and small in size, and existing image acquisition equipment often cannot capture high-definition images, so pest detection results are inaccurate. As a result, pest detection cannot be performed accurately and efficiently, pest control cannot be carried out in time, and crop yields fall, which is detrimental to agricultural production. Moreover, the pest detection models adopted by existing pest detection methods generally occupy substantial computing resources and place high demands on hardware, increasing the cost of pest control and hindering its automation.
For this reason, a new pest detection method is needed.
Disclosure of Invention
The invention provides a pest detection method, a pest detection device, electronic equipment and a storage medium, which are used for overcoming the above-mentioned defects in the prior art.
The invention provides a pest detection method, which comprises the following steps:
acquiring an image to be detected;
inputting the image to be detected into a pest detection model to obtain a detection result output by the pest detection model;
the pest detection model comprises a cascaded blind motion blur removal model and a target detection model, wherein the blind motion blur removal model is trained based on pest image samples, and the target detection model is trained based on the pest image samples and the pest labels carried by them;
the blind deblurring model is used for performing blind deblurring on the image to be detected and generating a deblurred image corresponding to the image to be detected;
the target detection model comprises the backbone network of PicoDet, a designated neck network and the head network of PicoDet, connected in sequence; the backbone network is used for extracting image features of the deblurred image; the designated neck network comprises a plurality of sequentially connected weighted bidirectional feature pyramid networks, which are used for extracting and fusing semantic features of the deblurred image at different levels based on the image features to obtain fusion results at different levels; the head network is used for outputting the detection result based on the fusion results.
According to the pest detection method provided by the invention, the blind motion blur removal model is trained based on the following step:
training an initial generative adversarial network model based on the pest image samples to obtain a target generative adversarial network model, and taking the target generator in the target generative adversarial network model as the blind motion blur removal model.
According to the pest detection method provided by the invention, the backbone network is an ESNet, and the image features comprise the C3, C4 and C5 feature maps output by the ESNet; there are four head networks;
the designated neck network is further used for downsampling the C3 feature map and inputting the resulting sampling result into the first weighted bidirectional feature pyramid network;
the last weighted bidirectional feature pyramid network of the designated neck network is used for inputting the fusion result corresponding to the sampling result, as a P6 feature map, to the first head network; the fusion result corresponding to the C3 feature map, as a P5 feature map, to the second head network; the fusion result corresponding to the C4 feature map, as a P4 feature map, to the third head network; and the fusion result corresponding to the C5 feature map, as a P3 feature map, to the fourth head network.
According to the pest detection method provided by the invention, the pest image sample comprises a training sample and a test sample;
the target detection model is obtained based on training of the following steps:
scaling the training sample to 640 x 640 in size, and inputting the training sample into an initial detection model to obtain an initial detection result output by the initial detection model;
calculating a loss function value based on the initial detection result and pest labels carried by the training sample, and performing iterative updating on structural parameters of the initial detection model by adopting a random gradient descent algorithm based on the loss function value to obtain an alternative detection model;
and testing the alternative detection model based on the test sample, if the alternative detection model passes the test, determining that the alternative detection model is the target detection model, otherwise, continuing to update the structural parameters of the alternative detection model.
According to the pest detection method provided by the invention, calculating the loss function value based on the initial detection result and the pest label carried by the training sample comprises the following step:
substituting the initial detection result and the pest label carried by the training sample into a CIOU loss function and calculating the loss function value.
According to the pest detection method provided by the invention, testing the alternative detection model based on the test sample comprises the following steps:
inputting the test sample into the alternative detection model to obtain an alternative detection result output by the alternative detection model;
and calculating the value of a test evaluation index based on the alternative detection result and the pest label carried by the test sample, and judging whether the alternative detection model passes the test based on the value of the test evaluation index.
According to the pest detection method provided by the invention, the test evaluation index comprises at least one of precision, recall and mean average precision.
The present invention also provides a pest detection device including:
the image acquisition module is used for acquiring an image to be detected;
the pest detection module is used for inputting the image to be detected into a pest detection model to obtain a detection result output by the pest detection model;
the pest detection model comprises a cascaded blind motion blur removal model and a target detection model, wherein the blind motion blur removal model is trained based on pest image samples, and the target detection model is trained based on the pest image samples and the pest labels carried by them;
the blind deblurring model is used for performing blind deblurring on the image to be detected and generating a deblurred image corresponding to the image to be detected;
the target detection model comprises the backbone network of PicoDet, a designated neck network and the head network of PicoDet, connected in sequence; the backbone network is used for extracting image features of the deblurred image; the designated neck network comprises a plurality of sequentially connected weighted bidirectional feature pyramid networks, which are used for extracting and fusing semantic features of the deblurred image at different levels based on the image features to obtain fusion results at different levels; the head network is used for outputting the detection result based on the fusion results.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the pest detection method as described in any one of the above when executing the program.
The present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a pest detection method as described in any one of the above.
The invention also provides a computer program product comprising a computer program which when executed by a processor implements a pest detection method as described in any one of the above.
Compared with the prior art, the invention has the following beneficial effects:
according to the pest detection method, device, electronic equipment and storage medium, the pest detection model cascades a blind motion blur removal model and a target detection model, so the method both removes motion blur from the image to be detected and detects small targets, which improves the robustness and accuracy of the pest detection model and makes the detection result more accurate. Moreover, owing to the blind motion blur removal model, the method maintains accuracy even when pests fly at high speed in actual detection, and thus has strong applicability; it facilitates timely subsequent pest control, avoids crop yield losses caused by inaccurate detection, and benefits agricultural production. In addition, because the target detection model comprises the backbone network of PicoDet, a designated neck network and the head network of PicoDet, it achieves a more efficient feature extraction and fusion process while remaining lightweight; consequently the method occupies fewer computing resources, places lower demands on hardware, is suitable for deployment in embedded systems, can reduce the cost of pest control, and favors the automation of pest control. Compared with other existing pest detection algorithms, the method balances accuracy and light weight and achieves a remarkable pest detection effect in real scenes.
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious to those skilled in the art that other drawings can be obtained according to these drawings without inventive effort.
FIG. 1 is a schematic flow chart of a pest detection method provided by the invention;
FIG. 2 is a schematic diagram of a target detection model employed in the pest detection method provided by the present invention;
FIG. 3 is a schematic diagram of the basic structure of a weighted bi-directional feature pyramid network in a target detection model employed in the pest detection method provided by the present invention;
FIG. 4 is a second flow chart of the pest detection method according to the present invention;
FIG. 5 is a schematic view of the structure of the pest detection device provided by the present invention;
fig. 6 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Currently, rice-field pest detection and identification mainly face two problems: motion blur caused by the high-speed flight of pests, and small-target detection. In practical applications, existing algorithms achieve unsatisfactory recognition precision on motion-blurred images captured while pests fly at high speed, so pest detection results are inaccurate, pest detection cannot be performed accurately and efficiently, pest control cannot be carried out in time, and crop yields fall, which is detrimental to agricultural production. Moreover, the pest detection models adopted by existing pest detection methods generally occupy substantial computing resources and place high demands on hardware, increasing the cost of pest control and hindering its automation. On this basis, the embodiment of the invention provides a pest detection method to solve these technical problems.
Fig. 1 is a schematic flow chart of a pest detection method provided in an embodiment of the present invention, as shown in fig. 1, the method includes:
s1, acquiring an image to be detected;
s2, inputting the image to be detected into a pest detection model to obtain a detection result output by the pest detection model;
the pest detection model comprises a cascaded blind motion blur removal model and a target detection model, wherein the blind motion blur removal model is trained based on pest image samples, and the target detection model is trained based on the pest image samples and the pest labels carried by them;
the blind deblurring model is used for performing blind deblurring on the image to be detected and generating a deblurred image corresponding to the image to be detected;
the target detection model comprises the backbone network of PicoDet, a designated neck network and the head network of PicoDet, connected in sequence; the backbone network is used for extracting image features of the deblurred image; the designated neck network comprises a plurality of sequentially connected weighted bidirectional feature pyramid networks, which are used for extracting and fusing semantic features of the deblurred image at different levels based on the image features to obtain fusion results at different levels; the head network is used for outputting the detection result based on the fusion results.
Specifically, in the pest detection method provided in the embodiment of the present invention, the execution subject is a processor. The processor may be configured in a computer, which may be a local computer or a cloud computer; the local computer may be a desktop computer, a tablet, etc., and, depending on whether it is networked, local computers may include online and offline computers, which is not specifically limited here.
Firstly, step S1 is executed to acquire the image to be detected. The image to be detected may contain pests, such as farmland pests or woodland pests, and is an image in which the species and positions of the pests it contains need to be detected. It may be a grayscale image or a color image, and the color mode of a color image may be RGB or CMYK.
And then executing step S2, and inputting the image to be detected into the pest detection model to obtain a detection result output by the pest detection model.
The pest detection model comprises a blind motion blur removal model and a target detection model which are connected in cascade, wherein the blind motion blur removal model can be obtained by training a pest image sample, and the target detection model can be obtained by training the pest image sample and a pest label carried by the pest image sample.
Here, the pest image samples may be obtained by taking images captured by a camera in an actual scene as original image samples and expanding them by at least one of rotation, scaling and mosaic stitching. The species and positions of the pests contained in each pest image sample can then be annotated manually or by machine to obtain the pest labels.
It will be appreciated that when capturing the original sample images, the numbers of samples of each pest species should be kept close, to prevent data imbalance during subsequent model training.
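The data expansion described above could look like the following minimal sketch. It is an illustration only: the rotation and scaling ranges, tile layout and function names are assumptions rather than the patent's exact pipeline, and a real pipeline would also transform the pest box labels alongside the pixels.

```python
# Hedged sketch of the data expansion step: rotation, scaling and
# four-image mosaic stitching. Ranges and names are assumptions.
import random
import numpy as np
from PIL import Image

def rotate_and_scale(img: Image.Image) -> Image.Image:
    """Randomly rotate and rescale one original pest image sample."""
    angle = random.uniform(-30, 30)        # assumed rotation range
    scale = random.uniform(0.8, 1.2)       # assumed scaling range
    w, h = img.size
    img = img.rotate(angle)
    return img.resize((max(1, int(w * scale)), max(1, int(h * scale))))

def mosaic(imgs, out_size=640):
    """Stitch four images into one out_size x out_size mosaic sample."""
    assert len(imgs) == 4
    half = out_size // 2
    canvas = np.zeros((out_size, out_size, 3), dtype=np.uint8)
    for k, im in enumerate(imgs):
        tile = np.asarray(im.convert("RGB").resize((half, half)))
        r, c = divmod(k, 2)
        canvas[r * half:(r + 1) * half, c * half:(c + 1) * half] = tile
    return Image.fromarray(canvas)
```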
In the embodiment of the invention, the blind motion blur removal model can be trained based on the following steps:
firstly, an initial generative adversarial network model is trained with the pest image samples to obtain a target generative adversarial network model; then, the target generator in the target generative adversarial network model is taken as the blind motion blur removal model.
The initial generative adversarial network (Generative Adversarial Network, GAN) model includes a generator (G) and a discriminator (D). The generator produces data, denoted G(z), with the aim of fooling the discriminator as far as possible. The discriminator judges whether data is real or generated by the generator, aiming to expose generated data as far as possible. Its input is data x, and its output D(x) is the probability that x is real data: a value of 1 means the data is certainly real, and a value of 0 means it cannot be real.
The generator and the discriminator thus form a dynamic adversarial process (also called a game process): as training progresses, the data generated by the generator becomes ever more similar to real data, while the discriminator's ability to tell them apart also improves. In the ideal state, the generator can produce data indistinguishable from real data, and the discriminator can no longer decide whether generated data is real, so that D(G(z)) = 0.5. After training, the resulting target generator can generate data with no significant difference from real data.
The optimization objective of the initial generative adversarial network model is as follows:

$\min_G \max_D V(D,G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$

where $V(D,G)$ is the cost function, corresponding to the objective function and essentially a cross-entropy loss; $\mathbb{E}$ denotes the expected value under a distribution; $p_{\mathrm{data}}(x)$ is the distribution of the real data; $z$ is Gaussian noise with distribution $p_z(z)$; $D(x)$ denotes the probability that $x$ is real data; $z$ is defined over a low-dimensional Gaussian noise distribution and is mapped by $G$ into the high-dimensional data space to obtain $G(z;\theta_g)$, where $\theta_g$ are the parameters of $G$.
In the initial generative adversarial network model constructed above, the generator is used for generating an initial deblurred image corresponding to a pest image sample, and the discriminator is used for judging whether that initial deblurred image is sufficiently sharp. By setting appropriate hyperparameters, the initial generative adversarial network model can be trained on a cloud GPU server, then verified and tested to obtain an end-to-end target generator serving as the blind motion blur removal model. The blind deblurring model can then receive the image to be detected, perform blind deblurring on it, and generate the corresponding deblurred image, which may be of size 640 × 640 × 3.
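As a concrete illustration of the adversarial training described above, the following PyTorch sketch shows one generator/discriminator update step for an image-to-image deblurring GAN. The G/D architectures, the logit-based BCE loss and the absence of any auxiliary content loss are assumptions made for brevity; the patent only fixes that the trained target generator is kept as the blind motion blur removal model.

```python
# One adversarial training step for the deblurring GAN (hedged sketch).
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()  # D is assumed to output raw logits

def gan_step(G, D, opt_g, opt_d, blurred, sharp):
    # --- discriminator: real sharp images vs. generated deblurred images ---
    opt_d.zero_grad()
    fake = G(blurred).detach()
    pred_real, pred_fake = D(sharp), D(fake)
    loss_d = bce(pred_real, torch.ones_like(pred_real)) + \
             bce(pred_fake, torch.zeros_like(pred_fake))
    loss_d.backward()
    opt_d.step()

    # --- generator: try to make D label the deblurred image as real ---
    opt_g.zero_grad()
    pred_fake = D(G(blurred))
    loss_g = bce(pred_fake, torch.ones_like(pred_fake))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```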
As shown in fig. 2, the target detection model may include the backbone network (Backbone) of PicoDet, a designated neck network (Neck) and the head network (Head) of PicoDet, connected in sequence.
In small-object detection, PicoDet combines a lightweight and efficient backbone network with the designated neck network, aiming to address the challenge of detecting small objects in high-resolution images. The combination of these two networks enables PicoDet to capture both high-level semantic information and fine-grained spatial detail. In addition, PicoDet adopts anchor-free detection, so no anchor boxes need to be tuned, which reduces the complexity of the training process. These design choices improve small-target detection performance compared with YOLOv4 and YOLOv5.
The backbone network is used for receiving the deblurred image and extracting its image features. Here, the backbone network may be an ESNet, comprising, connected in sequence, a Stem/Conv layer, a max-pooling layer (Max-Pool), a first stage composed of 3 ES blocks, a second stage composed of 7 ES blocks and a third stage composed of 3 ES blocks; the first stage outputs the C3 feature map, the second stage the C4 feature map and the third stage the C5 feature map, and the C3, C4 and C5 feature maps together are the image features output by the backbone network.
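The following structural sketch mirrors the backbone layout just described. Only the layer order (Stem/Conv, max-pool) and the 3/7/3 ES-block stages emitting C3/C4/C5 come from the text; the ESBlock body and all channel widths are placeholder assumptions.

```python
# Structural sketch of the ESNet backbone; internals are placeholders.
import torch.nn as nn

class ESBlock(nn.Module):
    """Placeholder standing in for the real ES block."""
    def __init__(self, c_in, c_out, stride=1):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, stride, 1, bias=False),
            nn.BatchNorm2d(c_out),
            nn.Hardswish())
    def forward(self, x):
        return self.conv(x)

class ESNetBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.stem = nn.Conv2d(3, 32, 3, 2, 1)             # Stem/Conv layer
        self.pool = nn.MaxPool2d(3, 2, 1)                 # Max-Pool layer
        self.stage3 = nn.Sequential(ESBlock(32, 96, 2),   # 3 ES blocks
                                    ESBlock(96, 96), ESBlock(96, 96))
        self.stage4 = nn.Sequential(ESBlock(96, 192, 2),  # 7 ES blocks
                                    *[ESBlock(192, 192) for _ in range(6)])
        self.stage5 = nn.Sequential(ESBlock(192, 384, 2), # 3 ES blocks
                                    ESBlock(384, 384), ESBlock(384, 384))
    def forward(self, x):
        x = self.pool(self.stem(x))
        c3 = self.stage3(x)    # C3 feature map
        c4 = self.stage4(c3)   # C4 feature map
        c5 = self.stage5(c4)   # C5 feature map
        return c3, c4, c5
```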
The designated neck network may include a plurality of sequentially connected weighted bidirectional feature pyramid networks (BiFPN), for example three, shown as BiFPN1, BiFPN2 and BiFPN3 in fig. 2. Because multiple weighted bidirectional feature pyramid networks are stacked, the performance of the designated neck network can improve as the network depth increases.
The weighted bidirectional feature pyramid network is an efficient neck network. It is a convolutional neural network that incorporates learnable weights to learn the importance of the different-level features extracted by an efficient backbone network, and it repeatedly applies top-down and bottom-up sampling to fuse multi-scale features. The fused features are then input into a class prediction network and a box prediction network for detection.
The weighted bidirectional feature pyramid network not only contains horizontal left-to-right connections and vertical top-down and bottom-up connections, but also has cross-scale connections that aggregate feature information across multiple scales and levels, yielding a more robust feature representation. Replacing the original neck network of PicoDet with the designated neck network built from weighted bidirectional feature pyramid networks enables a more efficient feature extraction and fusion process and improves the overall performance of the algorithm. Experiments show that integrating the weighted bidirectional feature pyramid network into the PicoDet algorithm significantly improves performance on small-target detection tasks.
The basic structure of the weighted bidirectional feature pyramid network is shown in fig. 3. It can therefore extract more semantic features at different levels for fusion, while also attending to global information so that no information is easily missed. The weighted bidirectional feature pyramid network can be expressed as:

$(P_1, P_2, \ldots, P_n) = f(C_1, C_2, \ldots, C_n)$

where $n$ is the number of features taking part in feature fusion; $C_1$ and $P_1$ are the input and output features of the first layer, $C_2$ and $P_2$ those of the second layer, and $C_n$ and $P_n$ those of the $n$-th layer; $f$ is the function describing the feature fusion process. In fig. 3, $n$ is 5, and each row represents one layer.
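A minimal sketch of the learnable weighted fusion at the heart of one BiFPN node is shown below (the "fast normalized fusion" commonly used in BiFPN). It assumes the incoming features have already been resampled to a common resolution and channel count; the cross-scale resampling and convolutions of a full BiFPN layer are omitted.

```python
# Fast normalized fusion with learnable per-input weights (sketch).
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    def __init__(self, n_inputs, eps=1e-4):
        super().__init__()
        self.w = nn.Parameter(torch.ones(n_inputs))  # learnable importance weights
        self.eps = eps
    def forward(self, feats):
        w = torch.relu(self.w)            # keep the weights non-negative
        w = w / (w.sum() + self.eps)      # normalize so the weights sum to ~1
        return sum(wi * f for wi, f in zip(w, feats))
```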
In the embodiment of the invention, the designated neck network can be used for receiving the image features output by the backbone network and using them to extract and fuse semantic features of the deblurred image at different levels, obtaining fusion results at different levels; the fusion results of the levels can be represented by the P3, P4, P5 and P6 feature maps respectively.
In order to connect the designated neck network smoothly to the backbone network without affecting the neck network's performance, in the embodiment of the present invention the designated neck network is further configured to downsample the C3 feature map output by the backbone network and input the resulting sampling result into the first weighted bidirectional feature pyramid network (i.e., BiFPN1), and the last weighted bidirectional feature pyramid network of the designated neck network (i.e., BiFPN3) is configured to input the fusion result corresponding to the sampling result, as a P6 feature map, to the first head network (i.e., Head1), as shown in the first row of the designated neck network and head networks in fig. 2. In fig. 2, an up arrow denotes downsampling and a down arrow denotes upsampling.
In addition, the designated neck network is further configured to input the fusion result corresponding to the C3 feature map output by the backbone network, as a P5 feature map, to the second head network (i.e., Head2); the fusion result corresponding to the C4 feature map, as a P4 feature map, to the third head network (i.e., Head3); and the fusion result corresponding to the C5 feature map, as a P3 feature map, to the fourth head network (i.e., Head4).
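The wiring just described can be summarized in the following data-flow sketch, in which the BiFPN blocks and head networks are injected placeholder modules; only the routing (C3 downsampled into BiFPN1, three stacked BiFPNs, P6/P5/P4/P3 to four heads) follows the text.

```python
# Neck/head wiring sketch; BiFPN blocks and heads are placeholders.
import torch.nn as nn

class PicoDetBiFPNNeckHead(nn.Module):
    def __init__(self, bifpn_blocks, heads, downsample):
        super().__init__()
        self.bifpns = nn.ModuleList(bifpn_blocks)  # BiFPN1, BiFPN2, BiFPN3
        self.heads = nn.ModuleList(heads)          # Head1 .. Head4
        self.down = downsample                     # downsampling applied to C3
    def forward(self, c3, c4, c5):
        feats = (self.down(c3), c3, c4, c5)        # extra level from C3
        for bifpn in self.bifpns:                  # stacked weighted BiFPNs
            feats = bifpn(feats)
        p6, p5, p4, p3 = feats                     # per-level fusion results
        return [head(f) for head, f in zip(self.heads, (p6, p5, p4, p3))]
```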
Finally, each head network receives its fusion result and uses it to output the detection result. The detection result may include the positions of all pests in the image to be detected and their species.
The target detection model can be obtained by setting appropriate hyperparameters and training the initial detection model on a cloud server via deep learning, using the pest image samples and the pest labels they carry, followed by verification and testing.
The pest detection method provided by the embodiment of the invention first acquires an image to be detected and then inputs it into the pest detection model to obtain the detection result output by the model. The pest detection model comprises a cascaded blind deblurring model and a target detection model: the blind deblurring model performs blind deblurring on the image to be detected and generates a corresponding deblurred image; the target detection model comprises the backbone network of PicoDet, a designated neck network and the head network of PicoDet, connected in sequence, where the backbone network extracts image features of the deblurred image, the designated neck network, comprising a plurality of sequentially connected weighted bidirectional feature pyramid networks, extracts and fuses semantic features of the deblurred image at different levels based on the image features to obtain fusion results at different levels, and the head network outputs the detection result based on the fusion results. Because the pest detection model cascades the blind motion blur removal model and the target detection model, the method both removes motion blur from the image to be detected and detects small targets, improving the robustness and accuracy of the model and making the detection result more accurate. Moreover, owing to the blind motion blur removal model, the method maintains accuracy even when pests fly at high speed in actual detection, giving it strong applicability; it facilitates timely subsequent pest control, avoids crop yield losses caused by inaccurate detection, and benefits agricultural production. In addition, because the target detection model combines the PicoDet backbone and head with the designated neck network, it achieves a more efficient feature extraction and fusion process while remaining lightweight; consequently the method occupies fewer computing resources, places lower demands on hardware, is suitable for deployment in embedded systems, can reduce the cost of pest control, and favors its automation. Compared with other existing pest detection algorithms, the method balances accuracy and light weight and achieves a remarkable detection effect in real scenes.
On the basis of the above embodiments, according to the pest detection method provided in the embodiments of the present invention, the pest image sample includes a training sample and a test sample;
the target detection model is obtained based on training of the following steps:
scaling the training sample to 640 x 640 in size, and inputting the training sample into an initial detection model to obtain an initial detection result output by the initial detection model;
calculating a loss function value based on the initial detection result and pest labels carried by the training sample, and performing iterative updating on structural parameters of the initial detection model by adopting a random gradient descent algorithm based on the loss function value to obtain an alternative detection model;
and testing the alternative detection model based on the test sample, if the alternative detection model passes the test, determining that the alternative detection model is the target detection model, otherwise, continuing to update the structural parameters of the alternative detection model.
Specifically, in the embodiment of the present invention, the pest image sample may include a training sample and a test sample, where the training sample is used for training the initial detection model to obtain an alternative detection model, and the test sample is used for testing the alternative detection model, and the alternative detection model passing the test is used as the target detection model.
Here, the target detection model is trained based on the following steps:
firstly, scaling the training sample to 640×640, and inputting the training sample into an initial detection model to obtain an initial detection result output by the initial detection model. The structure of the initial detection model is the same as that of the target detection model, the difference is the structure parameter, the structure parameter of the initial detection model is a given initial value, and the structure parameter of the target detection model is a final value obtained after training and testing.
And then, calculating a loss function value by using the initial detection result and the pest label carried by the training sample. Here, the loss function used may be a CIOU loss function, and the loss function value may be calculated by substituting the initial detection result and the pest tag carried by the training sample into the CIOU loss function.
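For reference, a minimal PyTorch implementation of the CIOU loss for axis-aligned boxes in (x1, y1, x2, y2) form is sketched below. The patent names the CIOU loss without reproducing its formula, so this follows the usual published Complete-IoU definition.

```python
# Standard Complete-IoU (CIOU) loss sketch for (x1, y1, x2, y2) boxes.
import math
import torch

def ciou_loss(pred, target, eps=1e-7):
    # intersection and union
    x1 = torch.max(pred[..., 0], target[..., 0])
    y1 = torch.max(pred[..., 1], target[..., 1])
    x2 = torch.min(pred[..., 2], target[..., 2])
    y2 = torch.min(pred[..., 3], target[..., 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
    wp, hp = pred[..., 2] - pred[..., 0], pred[..., 3] - pred[..., 1]
    wt, ht = target[..., 2] - target[..., 0], target[..., 3] - target[..., 1]
    union = wp * hp + wt * ht - inter + eps
    iou = inter / union
    # squared center distance over squared enclosing-box diagonal
    cw = torch.max(pred[..., 2], target[..., 2]) - torch.min(pred[..., 0], target[..., 0])
    ch = torch.max(pred[..., 3], target[..., 3]) - torch.min(pred[..., 1], target[..., 1])
    c2 = cw ** 2 + ch ** 2 + eps
    rho2 = ((pred[..., 0] + pred[..., 2] - target[..., 0] - target[..., 2]) ** 2
            + (pred[..., 1] + pred[..., 3] - target[..., 1] - target[..., 3]) ** 2) / 4
    # aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (torch.atan(wt / (ht + eps)) - torch.atan(wp / (hp + eps))) ** 2
    with torch.no_grad():
        alpha = v / (1 - iou + v + eps)
    return 1 - iou + rho2 / c2 + alpha * v
```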
Thereafter, using the loss function value, a stochastic gradient descent (Stochastic Gradient Descent, SGD) algorithm may be employed to iteratively update the structural parameters of the initial detection model and obtain the alternative detection model. The maximum number of update cycles (epochs) may be 100, and the batch size of each update may be 16. In the embodiment of the invention, the iterative updating process can be implemented in the Python language on the PyTorch 1.11.0 framework, training and testing the algorithm on an NVIDIA Titan V 12G. For example, the specific software environment configuration may be:
PyTorch 1.11.0 + OpenCV 4 + CUDA 11.6 + Python 3.9.12.
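Putting these pieces together, the training schedule above (640 × 640 inputs, SGD, at most 100 epochs, batch size 16) might be sketched as follows. The learning rate and momentum are assumptions, and the sketch assumes the model returns boxes already matched one-to-one with the targets; real PicoDet training also performs label assignment and adds a classification loss.

```python
# Hedged training-loop sketch; ciou_loss is the CIOU sketch above.
import torch
from torch.utils.data import DataLoader

def train_detector(model, train_dataset, device="cuda"):
    loader = DataLoader(train_dataset, batch_size=16, shuffle=True)
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)  # assumed lr/momentum
    model.to(device).train()
    for epoch in range(100):                  # maximum update cycles
        for images, targets in loader:        # images pre-scaled to 640 x 640
            preds = model(images.to(device))  # predicted boxes
            loss = ciou_loss(preds, targets.to(device)).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
```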
Finally, the alternative detection model is tested with the test samples: the test samples are input into the alternative detection model to obtain the alternative detection results it outputs, the values of the test evaluation indexes are calculated from the alternative detection results and the pest labels carried by the test samples, and whether the alternative detection model passes the test is judged from those values. Here, the test evaluation indexes may include at least one of precision (Precision), recall (Recall) and mean average precision (mean Average Precision, mAP).
The values of the test evaluation indexes are calculated as follows:

$P = \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN}, \qquad mAP = \frac{1}{C} \sum_{c=1}^{C} \frac{1}{M} \sum_{t \in N_1} \sum_{k} P(k)\, \Delta R(k)$

where $TP$ is the number of correctly identified test samples; $FP$ is the number of wrongly or unknown-identified test samples; $FN$ is the number of missed test samples; $C$ is the number of pest species in the test samples; $M$ and $N_1$ denote the number of Intersection-over-Union (IOU) thresholds and the set of IOU thresholds, respectively; and $P(k)$ and $R(k)$ are the precision and recall at the $k$-th detection.
It can be appreciated that if the difference between the alternative detection result and the pest label carried by the test sample is within the preset range, the correct identification is considered, otherwise, the incorrect identification is considered. The preset range may be determined according to the detection precision and the actual requirement, which is not limited herein.
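The evaluation arithmetic above can be made concrete with the following sketch; the per-class, per-IOU-threshold TP/FP/FN counting (via IOU matching of detections to labels) is assumed to happen upstream and is not shown.

```python
# Sketch of precision / recall / AP / mAP arithmetic.
def precision(tp, fp):
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp, fn):
    return tp / (tp + fn) if (tp + fn) else 0.0

def average_precision(p_at_k, r_at_k):
    """AP as the area under the P(k)-R(k) points, k indexing detections."""
    ap, prev_r = 0.0, 0.0
    for p, r in zip(p_at_k, r_at_k):
        ap += p * (r - prev_r)
        prev_r = r
    return ap

def mean_average_precision(ap_table):
    """mAP: AP averaged over the C classes and the M IOU thresholds.

    ap_table: list of per-class lists of AP values, one per IOU threshold.
    """
    aps = [ap for per_class in ap_table for ap in per_class]
    return sum(aps) / len(aps) if aps else 0.0
```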
And when the value of the test evaluation index meets the preset condition, the alternative detection model is considered to pass the test, the alternative detection model is determined to be the target detection model at the moment, otherwise, the alternative detection model is considered to not pass the test, and at the moment, the structural parameters of the alternative detection model are continuously updated.
In order to compare the complexity of the target detection model provided in the embodiment of the present invention (i.e., PicoDet+BiFPN) with that of the existing target detection models (i.e., YOLOv4 and YOLOv5), the number of floating-point operations (FLOPs) is used to measure model complexity. The FLOPs are calculated as follows:

$FLOPs = \sum_{i=1}^{N_2} L_i^2 \cdot K_i^2 \cdot C_{i-1} \cdot C_i$

where $N_2$ is the number of convolution layers; $L_i$ is the size of the feature layer output by the $i$-th convolution layer; $K_i$ is the convolution kernel size of the $i$-th convolution layer; and $C_i$ and $C_{i-1}$ are the numbers of output and input channels of the $i$-th convolution layer, respectively.
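The FLOPs formula above translates directly into code, as in this small sketch:

```python
# Each convolution layer i contributes L_i^2 * K_i^2 * C_{i-1} * C_i operations.
def conv_flops(layers):
    """layers: iterable of (L_i, K_i, C_in, C_out), one tuple per layer."""
    return sum(L * L * K * K * c_in * c_out for (L, K, c_in, c_out) in layers)

# toy usage with two hypothetical layers:
# conv_flops([(160, 3, 32, 64), (80, 3, 64, 128)])
```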
Under the PyTorch framework, the performance of the target detection model provided in the embodiment of the invention is compared with that of the existing target detection models on the pest image samples, as shown in Table 1.
TABLE 1 statistical table of target detection model performance under PyTorch framework
In Table 1, Params is the number of model parameters and FPS (frames per second) is the average frame rate. Comparing the PicoDet+BiFPN, YOLOv5 and YOLOv4 target detection models shows that YOLOv5 obtains the best precision P(k) and mean average precision mAP. However, PicoDet+BiFPN reaches the minimum in FLOPs and Params, reflecting the lightweight character of the model, which is required for deployment in embedded systems. PicoDet+BiFPN also achieves the best performance in recall R(k) and average frame rate FPS. A high recall R(k) helps reduce missed detections in small-target tasks such as pest detection, and a high average frame rate FPS allows real-time video streams to be processed. Moreover, PicoDet+BiFPN differs little from YOLOv5 in mAP and P(k); the experiments show that PicoDet+BiFPN has the better overall performance.
The performance of the pest detection model provided in the embodiment of the invention is also compared before and after cascading the blind motion blur removal model, as shown in Table 2.
TABLE 2 statistical tables of final model performance
It can be seen that after the blind motion blur removal model is cascaded, the mean average precision mAP improves by 10.6%; in addition, the FPS still meets real-time requirements. The experiments prove that the pest detection model provided in the embodiment of the invention can significantly improve detection precision when pests fly at high speed.
In summary, as shown in fig. 4, the pest detection method provided in the embodiment of the invention may include the following steps:
firstly, collecting pest image samples in an actual scene through a camera, and marking the pest image samples so that the pest image samples carry pest labels;
secondly, constructing an initial generative adversarial network model;
thirdly, training the initial generative adversarial network model based on the pest image samples to obtain a target generative adversarial network model, and taking the target generator in the target generative adversarial network model as the blind motion blur removal model;
fourthly, constructing an initial detection model based on PicoDet and a weighted bidirectional feature pyramid network;
fifthly, training an initial detection model, and verifying and testing to obtain a target detection model;
Sixthly, cascading and integrating the blind motion blur removal model and the target detection model to obtain a pest detection model;
seventh, the image to be detected is input into the pest detection model for detection and identification, and the detection result output by the pest detection model is obtained.
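Inference through the cascaded model in the seventh step reduces to the following sketch, where deblur_g and detector are placeholder names for the trained blind motion blur removal model and the trained target detection model:

```python
# End-to-end inference sketch: blind-deblur first, then detect.
import torch

@torch.no_grad()
def detect_pests(image, deblur_g, detector):
    """image: float tensor of shape (1, 3, 640, 640)."""
    deblurred = deblur_g(image)   # blind motion-deblurred image
    return detector(deblurred)    # pest species and box positions
```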
As shown in fig. 5, on the basis of the above embodiment, there is provided a pest detection device according to an embodiment of the present invention, including:
an image acquisition module 51 for acquiring an image to be detected;
pest detection module 52, configured to input the image to be detected into a pest detection model, and obtain a detection result output by the pest detection model;
the pest detection model comprises a cascaded blind motion blur removal model and a target detection model, wherein the blind motion blur removal model is trained based on pest image samples, and the target detection model is trained based on the pest image samples and the pest labels carried by them;
the blind deblurring model is used for performing blind deblurring on the image to be detected and generating a deblurred image corresponding to the image to be detected;
the target detection model comprises the backbone network of PicoDet, a designated neck network and the head network of PicoDet, connected in sequence; the backbone network is used for extracting image features of the deblurred image; the designated neck network comprises a plurality of sequentially connected weighted bidirectional feature pyramid networks, which are used for extracting and fusing semantic features of the deblurred image at different levels based on the image features to obtain fusion results at different levels; the head network is used for outputting the detection result based on the fusion results.
On the basis of the above embodiment, the pest detection device provided in the embodiment of the present invention further includes a blind motion blur removal model training module for:
training an initial generative adversarial network model based on the pest image samples to obtain a target generative adversarial network model, and taking the target generator in the target generative adversarial network model as the blind motion blur removal model.
On the basis of the above embodiment, in the pest detection device provided by the embodiment of the present invention, the backbone network is an ESNet, and the image features comprise the C3, C4 and C5 feature maps output by the ESNet; there are four head networks;
the designated neck network is further used for downsampling the C3 feature map and inputting the resulting sampling result into the first weighted bidirectional feature pyramid network;
the last weighted bidirectional feature pyramid network of the designated neck network is used for inputting the fusion result corresponding to the sampling result, as a P6 feature map, to the first head network; the fusion result corresponding to the C3 feature map, as a P5 feature map, to the second head network; the fusion result corresponding to the C4 feature map, as a P4 feature map, to the third head network; and the fusion result corresponding to the C5 feature map, as a P3 feature map, to the fourth head network.
On the basis of the above embodiments, the pest detection device provided in the embodiments of the present invention, the pest image sample includes a training sample and a test sample;
the device also comprises a target detection model training module for:
scaling the training sample to 640 x 640 in size, and inputting the training sample into an initial detection model to obtain an initial detection result output by the initial detection model;
calculating a loss function value based on the initial detection result and pest labels carried by the training sample, and performing iterative updating on structural parameters of the initial detection model by adopting a random gradient descent algorithm based on the loss function value to obtain the alternative detection model;
and testing the alternative detection model based on the test sample, if the alternative detection model passes the test, determining that the alternative detection model is the target detection model, otherwise, continuing to update the structural parameters of the alternative detection model.
On the basis of the above embodiment, the pest detection device provided in the embodiment of the present invention, the target detection model training module is specifically configured to:
substituting the initial detection result and the pest label carried by the training sample into a CIOU loss function, and calculating the loss function value.
On the basis of the above embodiment, the pest detection device provided in the embodiment of the present invention, the target detection model training module is specifically configured to:
inputting the test sample into the alternative detection model to obtain an alternative detection result output by the alternative detection model;
and calculating the value of a test evaluation index based on the alternative detection result and the pest label carried by the test sample, and judging whether the alternative detection model passes the test based on the value of the test evaluation index.
On the basis of the above embodiments, in the pest detection device provided in the embodiments of the present invention, the test evaluation index comprises at least one of precision, recall and mean average precision.
Specifically, the functions of each module in the pest detection device provided in the embodiment of the present invention are in one-to-one correspondence with the operation flow of each step in the method embodiment, and the achieved effects are consistent.
Fig. 6 illustrates a physical schematic diagram of an electronic device. As shown in fig. 6, the electronic device may include: a processor (Processor) 610, a communication interface (Communications Interface) 620, a memory (Memory) 630 and a communication bus 640, wherein the processor 610, the communication interface 620 and the memory 630 communicate with each other via the communication bus 640. The processor 610 may invoke logic instructions in the memory 630 to perform the pest detection method provided in the embodiments described above, the method comprising: acquiring an image to be detected; inputting the image to be detected into a pest detection model to obtain a detection result output by the pest detection model; the pest detection model comprises a cascaded blind motion blur removal model and a target detection model, wherein the blind motion blur removal model is trained based on pest image samples, and the target detection model is trained based on the pest image samples and the pest labels carried by them; the blind deblurring model is used for performing blind deblurring on the image to be detected and generating a deblurred image corresponding to the image to be detected; the target detection model comprises the backbone network of PicoDet, a designated neck network and the head network of PicoDet, connected in sequence; the backbone network is used for extracting image features of the deblurred image; the designated neck network comprises a plurality of sequentially connected weighted bidirectional feature pyramid networks, which are used for extracting and fusing semantic features of the deblurred image at different levels based on the image features to obtain fusion results at different levels; and the head network is used for outputting the detection result based on the fusion results.
Further, the logic instructions in the memory 630 may be implemented in the form of software functional units and stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
In another aspect, the present invention also provides a computer program product comprising a computer program. The computer program may be stored on a non-transitory computer-readable storage medium and, when executed by a processor, can perform the pest detection method provided in the above embodiments, the method comprising: acquiring an image to be detected; inputting the image to be detected into a pest detection model to obtain a detection result output by the pest detection model; the pest detection model comprises a cascaded blind motion blur removal model and a target detection model, wherein the blind motion blur removal model is trained based on pest image samples, and the target detection model is trained based on the pest image samples and the pest labels carried by them; the blind deblurring model is used for performing blind deblurring on the image to be detected and generating a deblurred image corresponding to the image to be detected; the target detection model comprises the backbone network of PicoDet, a designated neck network and the head network of PicoDet, connected in sequence; the backbone network is used for extracting image features of the deblurred image; the designated neck network comprises a plurality of sequentially connected weighted bidirectional feature pyramid networks, which are used for extracting and fusing semantic features of the deblurred image at different levels based on the image features to obtain fusion results at different levels; and the head network is used for outputting the detection result based on the fusion results.
In still another aspect, the present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the pest detection method provided in the above embodiments, the method comprising: acquiring an image to be detected; inputting the image to be detected into a pest detection model to obtain a detection result output by the pest detection model; the pest detection model comprises a cascaded blind motion blur removal model and a target detection model, wherein the blind motion blur removal model is trained based on pest image samples, and the target detection model is trained based on the pest image samples and the pest labels carried by them; the blind deblurring model is used for performing blind deblurring on the image to be detected and generating a deblurred image corresponding to the image to be detected; the target detection model comprises the backbone network of PicoDet, a designated neck network and the head network of PicoDet, connected in sequence; the backbone network is used for extracting image features of the deblurred image; the designated neck network comprises a plurality of sequentially connected weighted bidirectional feature pyramid networks, which are used for extracting and fusing semantic features of the deblurred image at different levels based on the image features to obtain fusion results at different levels; and the head network is used for outputting the detection result based on the fusion results.
The apparatus embodiments described above are merely illustrative. Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the invention without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general-purpose hardware platform or, of course, by means of hardware. Based on this understanding, the foregoing technical solution, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The software product may be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk, or an optical disk, and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in each embodiment or in certain parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (9)

1. A pest detection method, characterized by comprising:
acquiring an image to be detected;
inputting the image to be detected into a pest detection model to obtain a detection result output by the pest detection model;
the pest detection model comprises a cascaded blind motion-deblurring model and a target detection model, wherein the blind motion-deblurring model is trained on pest image samples, and the target detection model is trained on the pest image samples and the pest labels carried by the pest image samples;
the blind motion-deblurring model is configured to perform blind motion deblurring on the image to be detected and generate a deblurred image corresponding to the image to be detected;
the target detection model comprises, connected in sequence, the backbone network of PicoDet, a specified neck network, and the head network of PicoDet; the backbone network is configured to extract image features of the deblurred image; the specified neck network comprises a plurality of sequentially connected weighted bidirectional feature pyramid networks, which are configured to extract and fuse semantic features of the deblurred image at different levels based on the image features of the deblurred image, obtaining fusion results at different levels; the head network is configured to output the detection result based on the fusion results;
the image features comprise a C3 feature map, a C4 feature map, and a C5 feature map output by the backbone network, and there are four head networks;
the specified neck network is further configured to downsample the C3 feature map and input the resulting sampling result into the first weighted bidirectional feature pyramid network; and
the last weighted bidirectional feature pyramid network of the specified neck network is configured to input the fusion result corresponding to the sampling result, as a P6 feature map, to the first head network; the fusion result corresponding to the C3 feature map, as a P5 feature map, to the second head network; the fusion result corresponding to the C4 feature map, as a P4 feature map, to the third head network; and the fusion result corresponding to the C5 feature map, as a P3 feature map, to the fourth head network.
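As a non-limiting sketch of this neck wiring, assuming PyTorch, fast-normalized weighted fusion (as in the BiFPN literature), and a single shared channel width across levels (a real neck would add 1x1 projections), the specified neck might look like:

import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedFusion(nn.Module):
    """Fast normalized fusion: learnable non-negative weights per input."""
    def __init__(self, n_inputs):
        super().__init__()
        self.w = nn.Parameter(torch.ones(n_inputs))

    def forward(self, *xs):
        w = F.relu(self.w)
        w = w / (w.sum() + 1e-4)
        ref = xs[0].shape[-2:]                 # match all inputs to the first
        xs = [x if x.shape[-2:] == ref else
              F.interpolate(x, size=ref, mode="nearest") for x in xs]
        return sum(wi * x for wi, x in zip(w, xs))

class BiFPNBlock(nn.Module):
    """One weighted bidirectional (top-down, then bottom-up) fusion pass."""
    def __init__(self, ch, n_levels=4):
        super().__init__()
        self.td = nn.ModuleList(WeightedFusion(2) for _ in range(n_levels - 1))
        self.bu = nn.ModuleList(WeightedFusion(2) for _ in range(n_levels - 1))
        self.out = nn.ModuleList(nn.Conv2d(ch, ch, 3, padding=1)
                                 for _ in range(n_levels))

    def forward(self, feats):
        td = list(feats)
        for i in range(len(feats) - 2, -1, -1):     # top-down pass
            td[i] = self.td[i](feats[i], td[i + 1])
        out = list(td)
        for i in range(1, len(feats)):              # bottom-up pass
            out[i] = self.bu[i - 1](td[i], out[i - 1])
        return [conv(f) for conv, f in zip(self.out, out)]

class SpecifiedNeck(nn.Module):
    """Downsample C3 for an extra level, stack BiFPN blocks, route outputs."""
    def __init__(self, ch, n_blocks=3):
        super().__init__()
        self.down = nn.Conv2d(ch, ch, 3, stride=2, padding=1)
        self.blocks = nn.ModuleList(BiFPNBlock(ch) for _ in range(n_blocks))

    def forward(self, c3, c4, c5):
        feats = [self.down(c3), c3, c4, c5]   # sampling result enters first
        for block in self.blocks:             # sequentially connected BiFPNs
            feats = block(feats)
        p6, p5, p4, p3 = feats                # routing per the claimed labeling
        return p3, p4, p5, p6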
2. The pest detection method according to claim 1, wherein the blind motion-deblurring model is trained by the following step:
training an initial generative adversarial network model on the pest image samples to obtain a target generative adversarial network model, and using the target generator of the target generative adversarial network model as the blind motion-deblurring model.
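A minimal sketch of this step, assuming PyTorch, paired blurred/sharp pest samples, and a DeblurGAN-style objective (the L1 content weight of 100 is an assumption; the claim itself only requires adversarial training on pest image samples):

import torch
import torch.nn.functional as F

def train_deblur_gan(generator, discriminator, loader, epochs=100, device="cpu"):
    """Adversarial training; afterwards only the generator is kept."""
    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
    bce = F.binary_cross_entropy_with_logits
    for _ in range(epochs):
        for blurred, sharp in loader:            # paired pest image samples
            blurred, sharp = blurred.to(device), sharp.to(device)
            fake = generator(blurred)

            # discriminator: real sharp images vs. generated restorations
            d_real, d_fake = discriminator(sharp), discriminator(fake.detach())
            d_loss = (bce(d_real, torch.ones_like(d_real))
                      + bce(d_fake, torch.zeros_like(d_fake)))
            d_opt.zero_grad(); d_loss.backward(); d_opt.step()

            # generator: fool the discriminator while staying close to sharp
            d_fake = discriminator(fake)
            g_loss = (bce(d_fake, torch.ones_like(d_fake))
                      + 100.0 * F.l1_loss(fake, sharp))
            g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return generator   # the "target generator" becomes the deblurring model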
3. The pest detection method according to claim 1 or 2, wherein the pest image samples comprise training samples and test samples;
the target detection model is trained by the following steps:
scaling the training samples to a size of 640 x 640 and inputting them into an initial detection model to obtain an initial detection result output by the initial detection model;
calculating a loss function value based on the initial detection result and the pest labels carried by the training samples, and iteratively updating the structural parameters of the initial detection model with a stochastic gradient descent algorithm based on the loss function value, to obtain a candidate detection model; and
testing the candidate detection model on the test samples; if the candidate detection model passes the test, determining the candidate detection model to be the target detection model; otherwise, continuing to iteratively update the structural parameters of the candidate detection model.
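An illustrative training loop for these steps, assuming PyTorch; the detector interface, learning-rate settings, and the ciou_loss wrapper (sketched after claim 4) are placeholders, and the targets are assumed to already be expressed in the resized 640 x 640 coordinate frame:

import torch
import torch.nn.functional as F

def train_detector(detector, train_loader, epochs=300, device="cpu"):
    # stochastic gradient descent per the claim; lr/momentum are assumptions
    opt = torch.optim.SGD(detector.parameters(), lr=0.01, momentum=0.9)
    detector.train()
    for _ in range(epochs):
        for images, target_boxes in train_loader:
            images = F.interpolate(images.to(device), size=(640, 640),
                                   mode="bilinear", align_corners=False)
            pred_boxes = detector(images)          # initial detection result
            loss = ciou_loss(pred_boxes, target_boxes.to(device))
            opt.zero_grad()
            loss.backward()                        # iterative SGD update
            opt.step()
    return detector                                # candidate detection model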
4. The pest detection method according to claim 3, wherein calculating the loss function value based on the initial detection result and the pest labels carried by the training samples comprises:
substituting the initial detection result and the pest labels carried by the training samples into a CIoU loss function to calculate the loss function value.
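The CIoU loss itself is standard; a self-contained sketch for matched box pairs in [x1, y1, x2, y2] form follows (the matching of predictions to labels is assumed to happen upstream):

import math
import torch

def ciou_loss(pred, target, eps=1e-7):
    """Complete IoU loss over two [N, 4] box tensors; returns the mean."""
    # intersection and union
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # squared center distance over squared enclosing-box diagonal
    cx_p, cy_p = (pred[:, 0] + pred[:, 2]) / 2, (pred[:, 1] + pred[:, 3]) / 2
    cx_t, cy_t = (target[:, 0] + target[:, 2]) / 2, (target[:, 1] + target[:, 3]) / 2
    rho2 = (cx_p - cx_t) ** 2 + (cy_p - cy_t) ** 2
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    c2 = cw ** 2 + ch ** 2 + eps

    # aspect-ratio consistency term
    w_p, h_p = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    w_t, h_t = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]
    v = (4 / math.pi ** 2) * (torch.atan(w_t / (h_t + eps))
                              - torch.atan(w_p / (h_p + eps))) ** 2
    with torch.no_grad():
        alpha = v / (1 - iou + v + eps)

    return (1 - iou + rho2 / c2 + alpha * v).mean()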
5. The pest detection method according to claim 3, wherein testing the candidate detection model on the test samples comprises:
inputting the test samples into the candidate detection model to obtain a candidate detection result output by the candidate detection model; and
calculating the value of a test evaluation index based on the candidate detection result and the pest labels carried by the test samples, and judging whether the candidate detection model passes the test based on the value of the test evaluation index.
6. The pest detection method according to claim 5, wherein the test evaluation index comprises at least one of a precision rate, a recall rate, and an average precision.
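For illustration, per-image precision and recall under greedy IoU matching could be computed as below; the 0.5 threshold and score-sorted predictions are assumptions, and average precision would aggregate such counts over score thresholds, typically per class:

import torch
from torchvision.ops import box_iou

def precision_recall(pred_boxes, gt_boxes, iou_thresh=0.5):
    """Boxes are [N, 4] tensors; predictions assumed sorted by score."""
    if pred_boxes.numel() == 0 or gt_boxes.numel() == 0:
        return 0.0, 0.0
    ious = box_iou(pred_boxes, gt_boxes)       # pairwise IoU, [n_pred, n_gt]
    matched, tp = set(), 0
    for i in range(pred_boxes.shape[0]):
        j = int(ious[i].argmax())
        if ious[i, j] >= iou_thresh and j not in matched:
            tp += 1                            # true positive
            matched.add(j)                     # each label matched at most once
    return tp / pred_boxes.shape[0], tp / gt_boxes.shape[0]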
7. A pest detection device, characterized by comprising:
the image acquisition module is configured to acquire an image to be detected; and
the pest detection module is configured to input the image to be detected into a pest detection model to obtain a detection result output by the pest detection model;
the pest detection model comprises a cascaded blind motion-deblurring model and a target detection model, wherein the blind motion-deblurring model is trained on pest image samples, and the target detection model is trained on the pest image samples and the pest labels carried by the pest image samples;
the blind motion-deblurring model is configured to perform blind motion deblurring on the image to be detected and generate a deblurred image corresponding to the image to be detected;
the target detection model comprises, connected in sequence, the backbone network of PicoDet, a specified neck network, and the head network of PicoDet; the backbone network is configured to extract image features of the deblurred image; the specified neck network comprises a plurality of sequentially connected weighted bidirectional feature pyramid networks, which are configured to extract and fuse semantic features of the deblurred image at different levels based on the image features of the deblurred image, obtaining fusion results at different levels; the head network is configured to output the detection result based on the fusion results;
the image features comprise a C3 feature map, a C4 feature map, and a C5 feature map output by the backbone network, and there are four head networks;
the specified neck network is further configured to downsample the C3 feature map and input the resulting sampling result into the first weighted bidirectional feature pyramid network; and
the last weighted bidirectional feature pyramid network of the specified neck network is configured to input the fusion result corresponding to the sampling result, as a P6 feature map, to the first head network; the fusion result corresponding to the C3 feature map, as a P5 feature map, to the second head network; the fusion result corresponding to the C4 feature map, as a P4 feature map, to the third head network; and the fusion result corresponding to the C5 feature map, as a P3 feature map, to the fourth head network.
8. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the pest detection method according to any one of claims 1 to 6.
9. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the pest detection method according to any one of claims 1 to 6.
CN202310727197.8A 2023-06-19 2023-06-19 Pest detection method, pest detection device, electronic equipment and storage medium Active CN116485796B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310727197.8A CN116485796B (en) 2023-06-19 2023-06-19 Pest detection method, pest detection device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116485796A (en) 2023-07-25
CN116485796B (en) 2023-09-08

Family

ID=87218125

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310727197.8A Active CN116485796B (en) 2023-06-19 2023-06-19 Pest detection method, pest detection device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116485796B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116935230B * 2023-09-13 2023-12-15 Shandong Jianzhu University Crop pest identification method, device, equipment and medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109345449B * 2018-07-17 2020-11-10 Xi'an Jiaotong University Image super-resolution and non-uniform blur removal method based on a fusion network
CN114202696B * 2021-12-15 2023-01-24 Anhui University SAR target detection method and device based on context vision, and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110472699A * 2019-08-24 2019-11-19 Fuzhou University GAN-based method for detecting motion-blurred pest images at electric power facilities
CN112509001A * 2020-11-24 2021-03-16 Henan University of Technology Blind restoration method using a multi-scale, multi-feature-fusion feature pyramid network
CN113744226A * 2021-08-27 2021-12-03 NingboTech University Intelligent agricultural pest identification and positioning method and system
CN114708231A * 2022-04-11 2022-07-05 Changzhou University Sugarcane aphid target detection method based on lightweight YOLO v5
CN115719337A * 2022-11-11 2023-02-28 Wuxi University Wind turbine surface defect detection method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design and Implementation of a Granary Pest Monitoring System Based on YOLOv5; Du Cong; China Master's Theses Full-text Database, Issue 03; abstract and pp. 7-61 *

Similar Documents

Publication Publication Date Title
CN110059558B (en) Orchard obstacle real-time detection method based on improved SSD network
CN107704857B (en) End-to-end lightweight license plate recognition method and device
KR102263397B1 (en) Method for acquiring sample images for inspecting label among auto-labeled images to be used for learning of neural network and sample image acquiring device using the same
US8379994B2 (en) Digital image analysis utilizing multiple human labels
CN111161311A (en) Visual multi-target tracking method and device based on deep learning
CN111507469B (en) Method and device for optimizing super parameters of automatic labeling device
CN109325395A (en) The recognition methods of image, convolutional neural networks model training method and device
CN108230291B (en) Object recognition system training method, object recognition method, device and electronic equipment
JP2020126613A (en) Method for automatically evaluating labeling reliability of training image for use in deep learning network to analyze image, and reliability-evaluating device using the same
CN110348475A (en) It is a kind of based on spatial alternation to resisting sample Enhancement Method and model
CN114821282B (en) Image detection device and method based on domain antagonistic neural network
CN111611889B (en) Miniature insect pest recognition device in farmland based on improved convolutional neural network
CN110222604A (en) Target identification method and device based on shared convolutional neural networks
CN116485796B (en) Pest detection method, pest detection device, electronic equipment and storage medium
CN112949693A (en) Training method of image classification model, image classification method, device and equipment
CN110059646A (en) The method and Target Searching Method of training action plan model
KR102349854B1 (en) System and method for tracking target
CN115546260A (en) Target identification tracking method and device, electronic equipment and storage medium
CN109934352B (en) Automatic evolution method of intelligent model
CN112084815A (en) Target detection method based on camera focal length conversion, storage medium and processor
CN114782822A (en) Method and device for detecting state of power equipment, electronic equipment and storage medium
CN113627538A (en) Method and electronic device for training asymmetric generation countermeasure network to generate image
Yilmaz et al. Analysis of the effect of training sample size on the performance of 2D CNN models
CN115410141A (en) People flow statistical method, system, electronic equipment and storage medium
Zaji et al. Wheat spike counting using regression and localization approaches

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant