CN116543386A - Agricultural pest image identification method based on convolutional neural network

Info

Publication number: CN116543386A
Authority: CN (China)
Prior art keywords: pest; crop; convolutional neural network; images
Legal status: Pending (the legal status is an assumption, not a legal conclusion)
Application number: CN202310052088.0A
Other languages: Chinese (zh)
Inventors: 陈鹏, 王俊峰, 章军, 夏懿, 焦林, 庞春晖, 王刘向, 孟维庆, 杜健铭, 王儒敬
Current assignee: Hefei Intelligent Agriculture Collaborative Innovation Research Institute Of China Science And Technology
Original assignee: Hefei Intelligent Agriculture Collaborative Innovation Research Institute Of China Science And Technology
Priority date (assumed): 2023-02-02
Filing date: 2023-02-02
Publication date: 2023-08-04
Application filed by Hefei Intelligent Agriculture Collaborative Innovation Research Institute Of China Science And Technology
Priority to CN202310052088.0A
Publication of CN116543386A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/70 Labelling scene content, e.g. deriving syntactic or semantic representations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Abstract

The invention relates to an agricultural pest image identification method based on a convolutional neural network, which comprises the following steps: collecting crop pest images with a camera; screening the collected crop pest images against screening conditions; labeling the screened crop pest images, the labeled images forming a data set; establishing a pest identification model; training the pest identification model with a training set to obtain a trained pest identification model; and inputting a pest image to be identified into the trained model to obtain an identification result. A convolutional neural network from deep learning is applied to crop region extraction, and the network is trained on the collected crop image set so that it can finally identify the number and types of crop pests and diseases automatically. The invention improves the detection and recognition rate of crop targets, recognizes quickly, can learn pest and disease characteristics from complex environments, and strengthens the robustness of the pest and disease recognition model.

Description

Agricultural pest image identification method based on convolutional neural network
Technical Field
The invention relates to the technical field of deep learning and computer vision, in particular to an agricultural pest image recognition method based on a convolutional neural network.
Background
Crop pests cause significant losses to crops in developing and developed countries alike. According to recent studies, nearly half of the world's crop yield is lost to insect pests and crop diseases. Fine-grained pest control is therefore an important task for reducing losses and increasing crop yield. Once pests spread in a field, they must be found in time so that farmers can apply treatment promptly and prevent further spread. Conventional pest identification methods, however, have a number of drawbacks. First, the most common method is manual investigation, in which experts or farmers inspect farms daily, weekly, or monthly looking for signs of disease and insect pests. Second, the variety of insect species is very large, and the number of individuals belonging to a single species can be enormous. Conventional pest identification is consequently time-consuming, error-prone, and cumbersome.
Plants infected with disease typically show marks or damage, and professionals usually diagnose them by visual inspection or laboratory testing of plant samples, but these methods have limitations: diagnosing a disease requires professional knowledge that an ordinary farmer may not have; training professional diagnosticians is time-consuming and expensive; farmers and professionals may fail to correctly identify non-local pests; and some diseases and pests with visually similar characteristics demand a high level of expertise, so even professionals may misdiagnose them owing to fatigue, insufficient illumination, or poor eyesight. In addition, disease experts form only a small group. The use of image processing technology (IPT) in crop pest detection is an active area of research aimed at overcoming these limitations. The growing capability and availability of digital cameras and computing hardware, combined with their falling cost, make IPT a promising alternative to human expertise in this area.
In addition, the success of current pest and disease recognition algorithms depends on many variables left to the discretion of the system designer, including the choice of preprocessing and segmentation techniques, which color space to use, which features to extract, and which learning algorithm to use for classification. When attempting automatic plant disease identification with hand-crafted feature extraction and a shallow classifier, there is no way to determine a priori which combination of preprocessing, feature extraction, and classification algorithms will produce the best results, which leads to a cumbersome trial-and-error approach. Furthermore, hand-crafted feature extraction succeeds only in limited, constrained settings and fails when operating conditions change even slightly. Segmentation techniques have also been observed to give unreliable results, especially against complex backgrounds, and lesions do not have well-defined edges but merge gradually into the healthy parts of the leaves. Finally, some of the features most useful for classification cannot be extracted manually with any known mathematical tool currently available.
Disclosure of Invention
The invention aims to overcome the time-consuming, error-prone, and cumbersome nature of conventional pest identification methods by providing an agricultural pest image identification method based on a convolutional neural network that automatically identifies the number and types of crop pests, improves the detection and recognition rate of crop targets, and recognizes them quickly.
In order to achieve the above purpose, the present invention adopts the following technical scheme: an agricultural pest image identification method based on a convolutional neural network, comprising the following sequential steps:
(1) Collecting crop pest images using a camera;
(2) Screening the collected crop pest images against screening conditions and deleting images that do not meet the requirements;
(3) Labeling the screened crop pest images, the labeled images forming a data set, and dividing the data set into a training set, a test set, and a validation set in a 7:1:2 ratio;
(4) Establishing a pest identification model based on a convolutional neural network and a YOLOv5 model;
(5) Training the pest identification model with the training set, testing the model's identification function and effect with the test set, and inputting the validation set into the model to verify its integrity and stability, thereby obtaining a trained pest identification model; then inputting a pest image to be identified into the trained model to obtain an identification result.
In step (1), the crop pest images have a resolution of 1280×1024 pixels, and the height and position of the fixed camera are determined experimentally.
In step (2), the screening conditions include image sharpness, the number of pests, the size of pests, pest overlap, and the area occupied by crops.
Step (3) specifically means: labeling the screened crop pest images with the labelme tool, marking pest regions as foreground (denoted by 1) and all other regions as background (denoted by 0), and building label images to serve as training or evaluation labels.
Step (4) specifically comprises the following: the convolutional neural network includes four components: the input layer takes the image in as a feature matrix; the convolutional layers perform convolution operations; the pooling layers reduce dimensionality by pooling; and the fully connected layer vectorizes the resulting feature maps;
the YOLOv5 model includes:
the input end, which applies Mosaic data augmentation, i.e., stitches images together by random scaling, random cropping, and random arrangement, and uses adaptive anchor box calculation;
the backbone network, which performs feature extraction on the image and consists of a Focus structure and CSP structures, the CSP structures comprising a CSP1_X structure applied in the backbone network and a CSP2_X structure applied in the connecting network;
the connecting network, which fuses features from different layers to detect large, medium, and small targets and consists of FPN and PAN;
and the output end, which uses loss functions for prediction and correction so as to achieve a good output result.
According to the above technical scheme, the beneficial effects of the invention are as follows: first, a high-definition camera at a fixed position acquires the images, a convolutional neural network from deep learning is applied to crop region extraction, the network structure is adjusted to the actual usage scene, and the network is trained on the collected crop image set so that it can automatically identify the number and types of crop pests and diseases; second, by adopting a convolutional neural network, the detection and recognition rate of crop targets is improved, recognition is fast, pest and disease characteristics can be learned from complex environments, and the robustness of the pest and disease recognition model is strengthened; third, the acquired images are organized in a standard data-set format, so the sample set can be reused, the cost of repeatedly acquiring images is avoided, and training is convenient and repeatable.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
As shown in FIG. 1, an agricultural pest image recognition method based on a convolutional neural network includes the following sequential steps:
(1) Collecting crop pest images using a camera;
(2) Screening the collected crop pest images against screening conditions and deleting images that do not meet the requirements;
(3) Labeling the screened crop pest images, the labeled images forming a data set, and dividing the data set into a training set, a test set, and a validation set in a 7:1:2 ratio (a splitting sketch follows this list);
(4) Establishing a pest identification model based on a convolutional neural network and a YOLOv5 model;
(5) Training the pest identification model with the training set, testing the model's identification function and effect with the test set, and inputting the validation set into the model to verify its integrity and stability, thereby obtaining a trained pest identification model; then inputting a pest image to be identified into the trained model to obtain an identification result.
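As referenced in step (3), the following is a minimal sketch of the 7:1:2 split. It assumes the labeled images sit in a flat `dataset/images` directory; the paths are hypothetical placeholders, not part of the patent.

```python
import random
import shutil
from pathlib import Path

random.seed(0)  # fixed seed so the split is reproducible
images = sorted(Path("dataset/images").glob("*.jpg"))
random.shuffle(images)

n = len(images)
n_train = int(0.7 * n)  # 7 : 1 : 2 split, in the order training / test / validation
n_test = int(0.1 * n)
splits = {
    "train": images[:n_train],
    "test": images[n_train:n_train + n_test],
    "val": images[n_train + n_test:],
}
for name, files in splits.items():
    out = Path("dataset") / name
    out.mkdir(parents=True, exist_ok=True)
    for f in files:
        shutil.copy(f, out / f.name)  # copy rather than move, so the sample set stays reusable
```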
In step (1), the crop pest images have a resolution of 1280×1024 pixels, and the height and position of the fixed camera are determined experimentally.
In step (2), the screening conditions include image sharpness, the number of pests, the size of pests, pest overlap, and the area occupied by crops.
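One of these conditions, sharpness, lends itself to an automatic screen. A minimal sketch using OpenCV follows; the variance-of-Laplacian focus measure and the threshold value are illustrative assumptions, not the patent's specified criterion.

```python
import cv2

def is_sharp(path: str, threshold: float = 100.0) -> bool:
    """Variance of the Laplacian as a simple focus measure; higher means sharper.
    The threshold is illustrative and would be tuned on the actual camera setup."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        return False  # unreadable files fail the screen and get deleted
    return cv2.Laplacian(img, cv2.CV_64F).var() > threshold
```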
Step (3) specifically means: labeling the screened crop pest images with the labelme tool, marking pest regions as foreground (denoted by 1) and all other regions as background (denoted by 0), and building label images to serve as training or evaluation labels.
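Step (3)'s 0/1 label images can be rasterized from labelme's JSON output. A minimal sketch, assuming standard labelme polygon annotations and using PIL for rasterization; treating every annotated shape as the single foreground class is an assumption made here for illustration.

```python
import json
import numpy as np
from PIL import Image, ImageDraw

def labelme_to_mask(json_path: str) -> np.ndarray:
    """Rasterize labelme polygons into a label image: pest foreground = 1, background = 0."""
    with open(json_path) as f:
        ann = json.load(f)
    mask = Image.new("L", (ann["imageWidth"], ann["imageHeight"]), 0)  # background = 0
    draw = ImageDraw.Draw(mask)
    for shape in ann["shapes"]:  # every annotated pest region becomes foreground
        polygon = [tuple(p) for p in shape["points"]]
        draw.polygon(polygon, outline=1, fill=1)  # foreground = 1
    return np.asarray(mask, dtype=np.uint8)
```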
Step (4) specifically comprises the following: the convolutional neural network includes four components: the input layer takes the image in as a feature matrix; the convolutional layers perform convolution operations; the pooling layers reduce dimensionality by pooling; and the fully connected layer vectorizes the resulting feature maps;
the YOLOv5 model includes:
the input end, which applies Mosaic data augmentation, i.e., stitches images together by random scaling, random cropping, and random arrangement, and uses adaptive anchor box calculation;
the backbone network, which performs feature extraction on the image and consists of a Focus structure and CSP structures, the CSP structures comprising a CSP1_X structure applied in the backbone network and a CSP2_X structure applied in the connecting network;
the connecting network, which fuses features from different layers to detect large, medium, and small targets and consists of FPN and PAN;
and the output end, which uses loss functions for prediction and correction so as to achieve a good output result.
The invention is further described below with reference to FIG. 1.
A convolutional neural network is a feedforward neural network that contains convolution computations and has a deep structure; it comprises four parts in total: the input layer, the convolutional layers, the pooling layers, and the fully connected layer. The input layer takes the image in as a feature matrix; the convolutional layers perform the convolution operations; the pooling layers pool the features to reduce their dimensionality; and the fully connected layer vectorizes the resulting feature maps. Because a convolutional neural network avoids complex image pre-processing and can accept the raw image directly as input, it is widely used.
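To make the four components concrete, the following is a minimal PyTorch sketch. It is illustrative only, not the patent's actual network; the layer widths, input size, and class count are assumptions.

```python
import torch
import torch.nn as nn

class MinimalCNN(nn.Module):
    """Toy network showing the four components named above."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolutional layer
            nn.ReLU(),
            nn.MaxPool2d(2),                             # pooling layer: halves H and W
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # fully connected layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)      # the input layer passes the image in as a feature matrix
        x = torch.flatten(x, 1)   # "vectorize the feature maps"
        return self.classifier(x)

# Example: a 64x64 RGB input yields a (1, 10) class-score tensor.
# MinimalCNN()(torch.randn(1, 3, 64, 64)).shape -> torch.Size([1, 10])
```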
The YOLOv5 model includes:
An input end: Mosaic data augmentation stitches images together by random scaling, random cropping, and random arrangement. Adaptive anchor box calculation starts, for each data set, from anchor boxes with an initially set width and height; during network training, the network outputs prediction boxes on the basis of these initial anchors, compares them with the ground-truth boxes, calculates the difference between the two, and then updates the network parameters iteratively by back-propagation. Adaptive image scaling handles pictures of differing width and height by uniformly scaling the originals to a standard size before sending them into the detection network (a letterbox-style sketch follows). These measures continuously improve the quality of the data set on top of its preprocessing; they are the first step of the whole detection model and lay the foundation for subsequent detection.
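As an illustration of the adaptive scaling idea, here is a small letterbox-style sketch assuming OpenCV and NumPy. The 640-pixel target size and the gray pad value of 114 are conventional YOLOv5 defaults used here as assumptions, and the square output is a simplification of the repo's stride-aligned padding.

```python
import cv2
import numpy as np

def letterbox(img: np.ndarray, new_size: int = 640, pad_value: int = 114) -> np.ndarray:
    """Scale the long side to `new_size`, then pad the short side, so every image
    reaches the network at a uniform resolution with minimal padding.
    Assumes a 3-channel BGR image as read by cv2.imread."""
    h, w = img.shape[:2]
    scale = new_size / max(h, w)
    resized = cv2.resize(img, (int(round(w * scale)), int(round(h * scale))))
    canvas = np.full((new_size, new_size, 3), pad_value, dtype=img.dtype)
    top = (new_size - resized.shape[0]) // 2    # center the resized image
    left = (new_size - resized.shape[1]) // 2
    canvas[top:top + resized.shape[0], left:left + resized.shape[1]] = resized
    return canvas
```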
Backbone network (Backbone): first, the Focus structure is a slicing operation added relative to YOLOv3 and YOLOv4; for example, a 4×4×3 image can be sliced into a 2×2×12 feature map. Second, the CSP structures follow the design idea of CSPNet; YOLOv5 designs two of them, with the CSP1_X structure applied in the Backbone network and the CSP2_X structure applied in the Neck.
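The slicing operation itself is compact enough to show directly. The following PyTorch sketch reproduces the Focus rearrangement described above; the convolution that normally follows it in YOLOv5 is omitted.

```python
import torch

def focus_slice(x: torch.Tensor) -> torch.Tensor:
    """Focus slicing: take every other pixel in each spatial direction and stack the
    four sub-images on the channel axis, so (B, C, H, W) -> (B, 4C, H/2, W/2).
    A 4x4x3 image therefore becomes a 2x2x12 feature map, as described above."""
    return torch.cat([x[..., ::2, ::2],     # top-left pixels
                      x[..., 1::2, ::2],    # bottom-left pixels
                      x[..., ::2, 1::2],    # top-right pixels
                      x[..., 1::2, 1::2]],  # bottom-right pixels
                     dim=1)

# focus_slice(torch.randn(1, 3, 4, 4)).shape -> torch.Size([1, 12, 2, 2])
```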
Connecting network (Neck): FPN works top-down, passing strong high-level semantic features downward to enhance the whole pyramid; it strengthens only semantic information and transfers no localization information. PAN complements FPN by adding a bottom-up pyramid behind it, passing strong low-level localization features upward; the combination is sometimes called the "double-tower tactic". In the YOLOv5 Neck structure, the CSP2 structure designed after CSPNet is adopted to strengthen the network's feature-fusion capability.
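A minimal sketch of the two fusion directions, assuming PyTorch; the channel widths, three pyramid levels, and fusion by addition are illustrative choices, not the patent's exact neck (which uses CSP2 blocks and concatenation).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFpnPan(nn.Module):
    """FPN passes strong semantics top-down; PAN then adds a bottom-up path that
    carries strong localization back up. Channel sizes here are illustrative."""
    def __init__(self, channels=(64, 128, 256)):
        super().__init__()
        self.lateral = nn.ModuleList([nn.Conv2d(c, 64, 1) for c in channels])
        self.down = nn.ModuleList([nn.Conv2d(64, 64, 3, stride=2, padding=1)
                                   for _ in channels[:-1]])

    def forward(self, c3, c4, c5):  # c3 shallowest/largest map, c5 deepest/smallest
        # FPN, top-down: upsample the semantically stronger deep maps and fuse
        p5 = self.lateral[2](c5)
        p4 = self.lateral[1](c4) + F.interpolate(p5, scale_factor=2, mode="nearest")
        p3 = self.lateral[0](c3) + F.interpolate(p4, scale_factor=2, mode="nearest")
        # PAN, bottom-up: push localization-rich shallow maps back up the pyramid
        n3 = p3
        n4 = p4 + self.down[0](n3)
        n5 = p5 + self.down[1](n4)
        return n3, n4, n5  # heads for small, medium, and large targets
```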
An output end: this part mainly concerns the different ways of computing the IOU (intersection over union), which has a large influence on the output of target detection. The loss function of the target-detection task consists of two parts: Classification Loss and Bounding Box Regression Loss. When a prediction box and the target box are disjoint, IOU = 0, which cannot reflect the distance between the two boxes, and the loss function is not differentiable there, so IOU_Loss cannot optimize the disjoint case. Likewise, when two prediction boxes of the same size yield the same IOU, IOU_Loss cannot distinguish how differently they intersect the target box.
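The plain IOU computation that these weaknesses refer to fits in a few lines. A PyTorch sketch follows; the (x1, y1, x2, y2) box format and the epsilon guard are conventional choices, not values from the patent.

```python
import torch

def box_iou(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """IOU of boxes in (x1, y1, x2, y2) form. When boxes are disjoint the clamped
    intersection is zero, so IOU = 0 regardless of how far apart they are; this is
    the weakness of plain IOU_Loss noted above, which GIoU/DIoU/CIoU variants address."""
    x1 = torch.max(a[..., 0], b[..., 0])
    y1 = torch.max(a[..., 1], b[..., 1])
    x2 = torch.min(a[..., 2], b[..., 2])
    y2 = torch.min(a[..., 3], b[..., 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)  # 0 if disjoint
    area_a = (a[..., 2] - a[..., 0]) * (a[..., 3] - a[..., 1])
    area_b = (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area_a + area_b - inter + 1e-7)
```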
In summary, the invention acquires images with a high-definition camera at a fixed position, applies a convolutional neural network from deep learning to crop region extraction, adjusts the network structure to the actual usage scene, and trains the network on the collected crop image set so that it can finally identify the number and types of crop pests and diseases automatically. By adopting a convolutional neural network, the invention improves the detection and recognition rate of crop targets, recognizes quickly, can learn pest and disease characteristics from complex environments, and strengthens the robustness of the pest and disease recognition model. The acquired images are organized in a standard data-set format, so the sample set can be reused, the cost of repeatedly acquiring images is avoided, and training is convenient and repeatable.
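For step (5), once training has produced a checkpoint, one way to run inference is through PyTorch Hub, which the ultralytics/yolov5 repository supports. In this sketch the checkpoint path and the image filename are hypothetical placeholders.

```python
import torch

# Load a custom-trained YOLOv5 checkpoint via PyTorch Hub (ultralytics/yolov5).
# 'runs/train/exp/weights/best.pt' is a hypothetical path to weights produced by
# the repo's train.py on the pest data set; substitute your own checkpoint.
model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="runs/train/exp/weights/best.pt")

results = model("pest_image_to_identify.jpg")  # the pest image to be identified
results.print()                                # per-class detection counts and confidences
boxes = results.xyxy[0]                        # tensor rows: (x1, y1, x2, y2, conf, class)
```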

Claims (5)

1. An agricultural pest image recognition method based on a convolutional neural network, characterized by comprising the following sequential steps:
(1) Collecting crop pest images using a camera;
(2) Screening the collected crop pest images against screening conditions and deleting images that do not meet the requirements;
(3) Labeling the screened crop pest images, the labeled images forming a data set, and dividing the data set into a training set, a test set, and a validation set in a 7:1:2 ratio;
(4) Establishing a pest identification model based on a convolutional neural network and a YOLOv5 model;
(5) Training the pest identification model with the training set, testing the model's identification function and effect with the test set, and inputting the validation set into the model to verify its integrity and stability, thereby obtaining a trained pest identification model; then inputting a pest image to be identified into the trained model to obtain an identification result.
2. The agricultural pest image recognition method based on a convolutional neural network according to claim 1, characterized in that: in step (1), the crop pest images have a resolution of 1280×1024 pixels, and the height and position of the fixed camera are determined experimentally.
3. The agricultural pest image recognition method based on a convolutional neural network according to claim 1, characterized in that: in step (2), the screening conditions include image sharpness, the number of pests, the size of pests, pest overlap, and the area occupied by crops.
4. The agricultural pest image recognition method based on a convolutional neural network according to claim 1, characterized in that: step (3) specifically means: labeling the screened crop pest images with the labelme tool, marking pest regions as foreground (denoted by 1) and all other regions as background (denoted by 0), and building label images to serve as training or evaluation labels.
5. The agricultural pest image recognition method based on a convolutional neural network according to claim 1, characterized in that: step (4) specifically comprises the following: the convolutional neural network includes four components: the input layer takes the image in as a feature matrix; the convolutional layers perform convolution operations; the pooling layers reduce dimensionality by pooling; and the fully connected layer vectorizes the resulting feature maps;
the YOLOv5 model includes:
the input end, which applies Mosaic data augmentation, i.e., stitches images together by random scaling, random cropping, and random arrangement, and uses adaptive anchor box calculation;
the backbone network, which performs feature extraction on the image and consists of a Focus structure and CSP structures, the CSP structures comprising a CSP1_X structure applied in the backbone network and a CSP2_X structure applied in the connecting network;
the connecting network, which fuses features from different layers to detect large, medium, and small targets and consists of FPN and PAN;
and the output end, which uses loss functions for prediction and correction so as to achieve a good output result.
CN202310052088.0A 2023-02-02 2023-02-02 Agricultural pest image identification method based on convolutional neural network Pending CN116543386A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310052088.0A CN116543386A (en) 2023-02-02 2023-02-02 Agricultural pest image identification method based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310052088.0A CN116543386A (en) 2023-02-02 2023-02-02 Agricultural pest image identification method based on convolutional neural network

Publications (1)

Publication Number Publication Date
CN116543386A 2023-08-04

Family

ID=87447770

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310052088.0A Pending CN116543386A (en) 2023-02-02 2023-02-02 Agricultural pest image identification method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN116543386A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117237814A (en) * 2023-11-14 2023-12-15 四川农业大学 Large-scale orchard insect condition monitoring method based on attention mechanism optimization
CN117237814B (en) * 2023-11-14 2024-02-20 四川农业大学 Large-scale orchard insect condition monitoring method based on attention mechanism optimization
CN117496105A (en) * 2024-01-03 2024-02-02 武汉新普惠科技有限公司 Agricultural pest visual recognition system and method
CN117496105B (en) * 2024-01-03 2024-03-12 武汉新普惠科技有限公司 Agricultural pest visual recognition system and method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination