CN114092808A - Crop disease and insect pest detection and prevention device and method based on image and deep learning - Google Patents

Crop disease and insect pest detection and prevention device and method based on image and deep learning

Info

Publication number
CN114092808A
Authority
CN
China
Prior art keywords
image
pest
layer
crop
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202111363501.2A
Other languages
Chinese (zh)
Inventor
黄家才
唐安
李毅博
朱晓春
陈�田
汪涛
汤文俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Institute of Technology
Original Assignee
Nanjing Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Institute of Technology filed Critical Nanjing Institute of Technology
Priority to CN202111363501.2A
Publication of CN114092808A
Status: Withdrawn

Classifications

    • A01M 7/005: Special arrangements or adaptations of the spraying or distributing parts, e.g. adaptations or mounting of the spray booms, mounting of the nozzles, protection shields
    • A01M 7/0089: Regulating or controlling systems
    • G06F 18/253: Pattern recognition; fusion techniques of extracted features
    • G06N 3/045: Neural networks; combinations of networks
    • G06N 3/047: Probabilistic or stochastic networks
    • G06N 3/08: Neural network learning methods


Abstract

The invention discloses a crop disease and insect pest detection and control method based on images and deep learning. A main control device trains a deep learning model to obtain image characteristics of crop diseases and insect pests; a binary image from which noise interference information has been eliminated is compared with the image characteristics of the crop diseases and insect pests, and the disease and insect pest classification result and the two-dimensional position coordinates (x, y) of the diseased tea leaves are output. The network structure of the trained deep learning model provided by this patent is designed and built specifically for crop diseases and insect pests; the network structure is simplified, the training cost is low, and the detection accuracy is high.

Description

Crop disease and insect pest detection and prevention device and method based on image and deep learning
Technical Field
The invention relates to the technical field of artificial intelligence and integrated control, in particular to a crop disease and insect pest detection and control device and method based on images and deep learning.
Background
At present, owing to the close fit between machine vision and deep learning technologies, practical applications have been developed in many engineering fields, and substantial progress has been made in particular in fields such as industrial sorting and security identification.
In the field of agricultural equipment, however, although the level of mechanical automation is relatively high, the level of intelligence is far from sufficient: processes such as picking and pest and disease identification still rely mainly on manual labor, so working efficiency is low, while indiscriminate blanket spraying for prevention and control causes excessive use and waste of pesticides. Even where machine vision technology has been applied preliminarily in the field of agricultural equipment, problems such as low recognition accuracy and poor resistance to environmental interference remain, so improving machine vision recognition accuracy in the actual application environment has become an unavoidable engineering problem.
With the wide application of deep learning in recent years, models such as the YOLO series and Faster R-CNN have already been applied in the field of agricultural equipment, but the existing training models have relatively redundant structures and poor real-time performance. Taking the YOLO-v4 model as an example, its 53-layer convolutional backbone is better suited to the classification and detection of dozens of target categories, and its high model complexity makes the training time cost high.
Disclosure of Invention
The invention provides a tea-row self-propelled robot and a tea disease and insect pest detection and prevention method based on image and deep learning, which integrate the two major links of tea disease and insect pest detection and identification and pesticide-spraying prevention and control at moderate hardware cost, reducing manpower consumption and improving disease and insect pest prevention and control efficiency. Compared with the common region detection networks of the prior art, the deep learning network TeaNet provided by this patent has a more simplified model structure aimed at tea pest identification, reduces the interference of irrelevant information, saves training time, and improves the identification accuracy of tea diseases and insect pests.
The technical scheme adopted by the invention is as follows:
the utility model provides a crops plant diseases and insect pests detection prevention and cure device based on image and degree of depth study, includes: the detection and control device is arranged on the movement device;
the detection prevention device comprises a first detection prevention unit and a second detection prevention unit;
the first detection and prevention unit comprises a support, a camera, a main control device and a middle sprayer, the camera comprises a first camera, a second camera and a third camera, and the support is fixedly arranged on the movement device; the middle sprayer is arranged in the middle of the bracket, the first camera is arranged in the middle of the bracket and is higher than the middle sprayer, and the second camera and the third camera are respectively arranged on the left side and the right side of the movement device;
the second detection and prevention unit comprises a first stepping motor, a second stepping motor, a first side sprayer, a second side sprayer, a first vertical sprayer screw rod and a second vertical sprayer screw rod
A first vertical spray nozzle screw rod and a second vertical spray nozzle screw rod are respectively arranged on the left rear side and the right rear side of the moving device, a first stepping motor is fixedly arranged on the first vertical spray nozzle screw rod, a second stepping motor is fixedly arranged on the second vertical spray nozzle screw rod, and the first stepping motor controls the first side spray nozzle to vertically move up and down along the first vertical spray nozzle screw rod; the second stepping motor controls the second side sprayer to vertically move up and down along a second vertical sprayer screw rod;
the moving device comprises a crawler belt, the crawler belt is a moving part of the crop disease and insect pest detection and control device based on image and deep learning, and the device is driven to move forward through crawler belt transmission;
the first camera, the second camera and the third camera are all arranged on the aluminum profile structure, are respectively positioned right above and at the left and right sides of the crops and are responsible for collecting the pest and disease image information of the crops;
the middle spray head is arranged on the horizontal aluminum profile structure and is responsible for spraying pesticide above crops;
the first side sprayer and the second side sprayer respectively vertically move up and down along a first vertical sprayer screw rod and a second vertical sprayer screw rod through a first stepping motor and a second stepping motor, and are responsible for pesticide spraying work on the left side and the right side of crops;
the main control device is installed on a top end beam of the support and is connected with the first camera, the second camera, the third camera, the first stepping motor and the second stepping motor through the USB wire rods.
The first camera, the second camera and the third camera are responsible for collecting depth image information of crops;
the middle sprayer is responsible for spraying pesticide above crops;
the first side sprayer and the second side sprayer are responsible for spraying pesticides on the left side and the right side of crops;
the main control device is used for identifying and detecting image information transmitted by the first camera, the second camera and the third camera, and controlling the moving device, the middle sprayer, the first side sprayer and the second side sprayer.
Crop disease and insect pest detection and control based on image and deep learning is carried out on the basis of the above crop disease and insect pest detection and control device, and specifically includes the following steps:
step one, a main control device controls a movement device to operate and controls a camera to collect images above and at two sides of crops;
step two, the main control device preprocesses the acquired image, the preprocessing including grayscale conversion, smoothing, contour extraction and the like, so as to obtain a binary image from which noise interference information has been eliminated, removing noise information unfavorable to identification and preparing for the next step of pest and disease identification;
step three, the main control device trains a deep learning model to obtain the image characteristics of crop diseases and insect pests; the binary image from which noise interference information has been eliminated (carrying the leaf texture and lesion characteristic information of the input tea image) is compared with the image characteristics of the crop diseases and insect pests, and the disease and insect pest classification result and the two-dimensional position coordinates (x, y) of the diseased tea leaves are output;
step four, based on camera calibration, the main control device combines the depth information z of the target object in the acquired image with the central two-dimensional coordinates (x, y) of the minimum circumscribed rectangle of the region to obtain the three-dimensional coordinate information (x, y, z) of the crop diseases and insect pests; according to the three-dimensional position coordinates (x, y, z) of the crop diseases and insect pests, it controls the first side spray head to move vertically up and down along the first vertical spray head screw rod, controls the second side spray head to move vertically up and down along the second vertical spray head screw rod, and controls the middle spray head, the first side spray head and the second side spray head to spray pesticide at the three-dimensional position coordinates (x, y, z) of the crop diseases and insect pests.
The third step specifically comprises the following steps:
s31, establishing a crop disease and pest data set based on the historical data image of the crop disease and pest;
s32, a deep learning network TeaNet for detecting crop diseases and insect pests is built (an illustrative sketch of this structure is given after step s36). The deep learning network TeaNet comprises a first classification sub-network a1 and a second classification sub-network a2, whose outputs are connected to a Softmax layer; the first classification sub-network a1 comprises a first convolution layer, a first pooling layer b1, a first spatial pyramid pooling layer c1 and a first fully-connected layer e1; the second classification sub-network a2 comprises a second convolution layer, a second pooling layer b2, a second spatial pyramid pooling layer c2 and a second fully-connected layer e2; the first classification sub-network a1 and the second classification sub-network a2 are connected with each other through a spatial variation layer d;
s33, after the data in the crop disease and insect pest data set are input into the deep learning network TeaNet, convolution is performed through the first convolution layer to extract a feature image; the feature image is input into the first pooling layer b1 to prevent overfitting, and the feature image after overfitting prevention is output; the feature image after overfitting prevention is input into the first spatial pyramid pooling layer c1 for feature compression, simplifying the network complexity; the feature-compressed data are input into the first fully-connected layer to obtain first lesion features of the crop diseases and insect pests;
s34, while the feature image after overfitting prevention is input into the first spatial pyramid pooling layer c1, it is also input into the spatial variation layer d for pest and disease region detection, and a pest and disease attention region is output; the image of the pest and disease attention region is input into the second classification sub-network a2, and convolution is performed through the second convolution layer to extract image features of the pest and disease attention region; these features are input into the second pooling layer b2 to prevent overfitting, and the attention-region image features after overfitting prevention are output; the attention-region image features after overfitting prevention are input into the second spatial pyramid pooling layer c2 for feature compression, simplifying the network complexity; the feature-compressed data are input into the second fully-connected layer to obtain second lesion features of the crop diseases and insect pests;
s35, feature fusion is performed on the first lesion features and the second lesion features, and the crop disease and insect pest image features are output; the outputs of the first fully-connected layer e1 and the second fully-connected layer e2 are fused. Feature fusion is a necessary link in a convolutional neural network: the convolution proceeds layer by layer and features are extracted at each layer, so the features of the layers need to be fused;
and S36, inputting the fused image features into a Softmax layer to obtain a final predicted value of the most probable disease and insect damage (the final predicted value is the probability of the most probable disease and insect damage category in the image to be detected), and outputting the most probable disease and insect damage category and the central two-dimensional coordinates (x, y) of the minimum circumscribed rectangle of the disease and insect damage identification area.
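For illustration, the two-branch structure of steps s32 to s36 can be sketched in PyTorch as below. This is a hedged reconstruction, not the patent's exact network: the channel counts, input size, the four output classes (anthracnose, tea geometrid, green leafhopper, healthy) and the use of an affine spatial transformer for the spatial variation layer d are assumptions made only to make the sketch runnable.

```python
# Hypothetical sketch of the TeaNet two-branch structure (s32-s36).
# Channel counts, the assumed 224x224 input and the 4 classes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPP(nn.Module):
    """Spatial pyramid pooling: pools the feature map at several fixed grid
    sizes and concatenates the results into a fixed-length vector."""
    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        self.levels = levels

    def forward(self, x):
        parts = [F.adaptive_max_pool2d(x, l).flatten(1) for l in self.levels]
        return torch.cat(parts, dim=1)            # fixed length regardless of H, W

class SpatialVariationLayer(nn.Module):
    """Spatial variation layer d: a location (localization) network predicts an
    affine matrix theta; a sampling grid then crops the attention region."""
    def __init__(self, in_ch):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, 6))
        # initialise the last layer to the identity transform
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, feat):
        theta = self.loc(feat).view(-1, 2, 3)                 # formula (1) parameters
        grid = F.affine_grid(theta, feat.size(), align_corners=False)
        return F.grid_sample(feat, grid, align_corners=False) # formula (2) resampling

def branch(in_ch, out_ch):
    """One classification sub-network: convolution + pooling (b) + SPP (c)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),                      # pooling layer b, limits overfitting
        SPP())                                # spatial pyramid pooling layer c

class TeaNet(nn.Module):
    def __init__(self, n_classes=4, ch=32):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
                                   nn.MaxPool2d(2))
        self.spp1 = SPP()
        self.stn = SpatialVariationLayer(ch)   # spatial variation layer d
        self.branch2 = branch(ch, ch)
        spp_dim = ch * (1 + 4 + 16)            # SPP output length per branch
        self.fc1 = nn.Linear(spp_dim, 128)     # first fully-connected layer e1
        self.fc2 = nn.Linear(spp_dim, 128)     # second fully-connected layer e2
        self.head = nn.Linear(256, n_classes)  # fused features -> Softmax

    def forward(self, x):
        f1 = self.conv1(x)                     # first convolution + pooling b1
        lesion1 = self.fc1(self.spp1(f1))      # first lesion features
        attn = self.stn(f1)                    # pest/disease attention region
        lesion2 = self.fc2(self.branch2(attn)) # second lesion features
        fused = torch.cat([lesion1, lesion2], dim=1)    # feature fusion (s35)
        return F.softmax(self.head(fused), dim=1)       # class probabilities (s36)
```

A quick shape check such as `TeaNet()(torch.randn(1, 3, 224, 224))` returns a (1, 4) vector of class probabilities, matching the Softmax output described in step s36.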
The spatial variation layer d comprises a location conversion network, a coordinate mapper and an image generator; the location transformation network is used for calculating parameters of space transformation, the coordinate mapper is used for acquiring mapping corresponding relation between the input characteristic diagram and the output characteristic diagram, and the image generator is used for generating output mapping according to the corresponding mapping relation;
The location conversion network takes as input the feature map U output by the first classification sub-network a1 and outputs the transformation matrix θ; the output expression of the location network is θ = f(U), where f(·) is a convolutional neural network (localization network).
The coordinate mapper adopts an inverse transformation method to obtain the value corresponding to each pixel point on the sampling grid of the output feature map. According to the spatial variation parameters calculated by the location network, the coordinate mapper constructs the coordinate mapping relation between the inverse-transformed image and the sampling grid G_i of the input image, and obtains the mapping correspondence τ_θ from the input feature map U ∈ R^{H×W×C} to the output feature map V ∈ R^{H×W×C}, where R denotes the set of real numbers, W and H are the width and height of the input feature map U and the output feature map V, and C is the number of input and output channels. Combining the pixel coordinates (x_i^s, y_i^s) of the input feature map U with the pixel coordinates (x_i^t, y_i^t) of the output feature map V, the correspondence between the input feature map U and the output feature map V is given by formula (1):

$$\begin{pmatrix} x_i^s \\ y_i^s \end{pmatrix} = \tau_\theta(G_i) = \begin{bmatrix} \theta_{11} & \theta_{12} & \theta_{13} \\ \theta_{21} & \theta_{22} & \theta_{23} \end{bmatrix} \begin{pmatrix} x_i^t \\ y_i^t \\ 1 \end{pmatrix} \qquad (1)$$

where θ is the spatial variation parameter calculated by the location conversion network, θ_{ij} is a specific spatial variation parameter determined by the convolution index and the network layer, τ_θ(G_i) is the coordinate mapping transformation function, G_i denotes the variable of the i-th convolution, and θ_{ij} denotes the spatial parameter of the i-th convolution at the j-th network layer.
After the coordinate mapping relation τ_θ has been obtained from the location conversion network and the coordinate mapper, the input feature map U and the coordinate mapping relation τ_θ are taken as the input of the coordinate generator of the output feature map V, and the original-image pixel coordinates are affine-transformed into the pixel point coordinates of the target image, as in expression (2):

$$V_i^c = \sum_{n=1}^{H} \sum_{m=1}^{W} U_{nm}^c \, k\left(x_i^s - m;\, \Phi_x\right) k\left(y_i^s - n;\, \Phi_y\right) \qquad (2)$$

where V_i^c is the value of the i-th pixel point of the target image in color channel c, U_{nm}^c is the pixel value at coordinates (n, m) in color channel c, k is a kernel function representing the linear interpolation that realizes the resampling function, (x_i^s, y_i^s) are the pixel coordinates of the original image, and Φ_x, Φ_y are the input interpolation parameters of the sampling kernel k.
and (4) detecting the area of the space variable layer d to obtain the attention area of the crop leaf diseases and insect pests. Since the spatial variation layer d is used for region detection, the sub-network a can be classified at the first level1On the basis of the extracted features, attention areas of leaf diseases and insect pests are further found out, so that a second classification sub-network a2Convolution learning is carried out in the attention area instead of convolution learning from an original data set picture, so that training efficiency is improved, and more fine pest and disease damage characteristics can be extracted.
After the first pooling layer b1 provided by this application, the feature map generated for the attention region is unified in size and sent into the second classification sub-network a2 through the spatial variation layer d; through joint optimization of the intra-network classification loss and the inter-scale ranking cross-entropy loss, detection of the image attention region is learned recursively in a mutually reinforcing manner, and the scale and the number of iterations are determined as needed.
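Formulas (1) and (2), i.e. the affine coordinate mapping of the coordinate mapper and the resampling of the image generator, can also be written out directly. The following NumPy sketch is an illustrative reconstruction under the standard spatial-transformer convention (each output pixel is mapped back to source coordinates and the source is sampled with a linear kernel); the function and variable names are assumptions, not terms from the patent.

```python
# Illustrative reconstruction of formulas (1) and (2): affine coordinate
# mapping followed by bilinear resampling. Names are assumptions.
import numpy as np

def spatial_transform(U, theta, out_h, out_w):
    """U: input feature map, shape (H, W, C); theta: 2x3 affine matrix.
    Returns V, shape (out_h, out_w, C)."""
    H, W, C = U.shape
    V = np.zeros((out_h, out_w, C), dtype=U.dtype)
    for i_t in range(out_h):
        for j_t in range(out_w):
            # formula (1): map target pixel (x_t, y_t, 1) to source (x_s, y_s)
            x_s, y_s = theta @ np.array([j_t, i_t, 1.0])
            # formula (2): linear kernel k -> weighted sum of the 4 neighbours
            x0, y0 = int(np.floor(x_s)), int(np.floor(y_s))
            for (m, n) in [(x0, y0), (x0 + 1, y0), (x0, y0 + 1), (x0 + 1, y0 + 1)]:
                if 0 <= m < W and 0 <= n < H:
                    w = max(0.0, 1 - abs(x_s - m)) * max(0.0, 1 - abs(y_s - n))
                    V[i_t, j_t, :] += w * U[n, m, :]
    return V

# an identity transform leaves the feature map unchanged
theta_id = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
U = np.random.rand(8, 8, 3).astype(np.float32)
V = spatial_transform(U, theta_id, 8, 8)
assert np.allclose(U, V, atol=1e-5)
```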
A crop pest detection and control method based on image and deep learning specifically comprises the following steps:
step one, the mobile robot runs, and the cameras are controlled to collect images above and on both sides of the crops;
step two, the acquired image is preprocessed, including grayscale conversion, smoothing, contour extraction and the like, to obtain a binary image from which noise interference information has been eliminated, removing noise information unfavorable to identification and preparing for the next step of pest and disease identification;
step three, a deep learning model is trained to obtain the image characteristics of crop diseases and insect pests; the binary image from which noise interference information has been eliminated is compared with the image characteristics of the crop diseases and insect pests, and the disease and insect pest classification result and the two-dimensional position coordinates (x, y) of the diseased crops are output;
step four, based on camera calibration, the depth information z of the target object in the acquired image and the central two-dimensional coordinates (x, y) of the minimum circumscribed rectangle of the region are combined to obtain the three-dimensional coordinate information (x, y, z) of the crop diseases and insect pests, and the spray heads at different positions are controlled to spray pesticide according to the three-dimensional position coordinates (x, y, z) of the crop diseases and insect pests.
The second step specifically comprises the following steps:
s21, reading pixels of the image collected by the camera, and carrying out binarization processing on the obtained pixel values;
s22, removing interference noise information of the binary image based on Gaussian filtering;
and s23, a morphological closing operation is performed on the filtered binary image to remove black-spot interference information in the background, obtaining the binary image with noise interference information eliminated, so that it can conveniently be used as input to the deep learning model for comparison and detection (an illustrative sketch of steps s21 to s23 follows).
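A minimal OpenCV sketch of steps s21 to s23 might look as follows; the Otsu threshold and the 5×5 kernel are illustrative assumptions, since the patent does not fix these values here.

```python
# Hedged sketch of the preprocessing in s21-s23 (threshold and kernel size
# are illustrative assumptions).
import cv2
import numpy as np

def preprocess(bgr_image: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)               # grayscale
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # s21: binarization
    blurred = cv2.GaussianBlur(binary, (5, 5), 0)                    # s22: Gaussian filtering
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    closed = cv2.morphologyEx(blurred, cv2.MORPH_CLOSE, kernel)      # s23: closing removes
    return closed                                                    # black-spot noise

# usage: cleaned = preprocess(cv2.imread("tea_row.jpg"))
```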
The third step specifically comprises the following steps:
s31, establishing a crop disease and pest data set based on the historical crop disease and pest data image;
s32, a deep learning network TeaNet for detecting diseases and insect pests of crops (tea in this embodiment) is built. The deep learning network TeaNet comprises a first classification sub-network a1 and a second classification sub-network a2, whose outputs are connected to a Softmax layer; the first classification sub-network a1 comprises a first convolution layer, a first pooling layer b1, a first spatial pyramid pooling layer c1 and a first fully-connected layer e1; the second classification sub-network a2 comprises a second convolution layer, a second pooling layer b2, a second spatial pyramid pooling layer c2 and a second fully-connected layer e2; the first classification sub-network a1 and the second classification sub-network a2 are connected with each other through a spatial variation layer d;
s33, after the data in the crop disease and insect pest data set are input into the deep learning network TeaNet, convolution is performed through the first convolution layer to extract a feature image; the feature image is input into the first pooling layer b1 to prevent overfitting, and the feature image after overfitting prevention is output; the feature image after overfitting prevention is input into the first spatial pyramid pooling layer c1 for feature compression, simplifying the network complexity; the feature-compressed data are input into the first fully-connected layer to obtain first lesion features of the crop diseases and insect pests;
s34, while the feature image after overfitting prevention is input into the first spatial pyramid pooling layer c1, it is also input into the spatial variation layer d for pest and disease region detection, and a pest and disease attention region is output; the image of the pest and disease attention region is input into the second classification sub-network a2, and convolution is performed through the second convolution layer to extract image features of the pest and disease attention region; these features are input into the second pooling layer b2 to prevent overfitting, and the attention-region image features after overfitting prevention are output; the attention-region image features after overfitting prevention are input into the second spatial pyramid pooling layer c2 for feature compression, simplifying the network complexity; the feature-compressed data are input into the second fully-connected layer to obtain second lesion features of the crop diseases and insect pests;
s35, feature fusion is performed on the first lesion features and the second lesion features, and the crop disease and insect pest image features are output; the outputs of the first fully-connected layer e1 and the second fully-connected layer e2 are fused. Feature fusion is a necessary link in a convolutional neural network: the convolution proceeds layer by layer and features are extracted at each layer, so the features of the layers need to be fused.
and s36, the fused features are input into a Softmax layer to obtain the final predicted value of the most probable disease or insect pest (the final predicted value is the probability of the most probable disease or insect pest category in the image to be detected), and the most probable disease or insect pest category and the central two-dimensional coordinates (x, y) of the minimum circumscribed rectangle of the identified pest region are output.
Detection of the pest and disease region is completed in the spatial variation layer d, which serves as the input stage of the second classification sub-network a2. The spatial variation layer d comprises a location conversion network, a coordinate mapper and an image generator; the location conversion network is used to calculate the parameters of the spatial transformation, the coordinate mapper is used to obtain the mapping correspondence between the input feature map and the output feature map, and the image generator is used to generate the output map according to the corresponding mapping relation.
The location conversion network takes as input the feature map U output by the first classification sub-network a1 and outputs the transformation matrix θ; the output expression of the location network is θ = f(U), where f(·) is a convolutional neural network (localization network).
The coordinate mapper adopts an inverse transformation method to obtain the value corresponding to each pixel point on the sampling grid of the output feature map. According to the spatial variation parameters calculated by the location network, the coordinate mapper constructs the coordinate mapping relation between the inverse-transformed image and the sampling grid G_i of the input image, and obtains the mapping correspondence τ_θ from the input feature map U ∈ R^{H×W×C} to the output feature map V ∈ R^{H×W×C}, where R denotes the set of real numbers, W and H are the width and height of the input feature map U and the output feature map V, and C is the number of input and output channels. Combining the pixel coordinates (x_i^s, y_i^s) of the input feature map U with the pixel coordinates (x_i^t, y_i^t) of the output feature map V, the correspondence between the input feature map U and the output feature map V is given by formula (1):

$$\begin{pmatrix} x_i^s \\ y_i^s \end{pmatrix} = \tau_\theta(G_i) = \begin{bmatrix} \theta_{11} & \theta_{12} & \theta_{13} \\ \theta_{21} & \theta_{22} & \theta_{23} \end{bmatrix} \begin{pmatrix} x_i^t \\ y_i^t \\ 1 \end{pmatrix} \qquad (1)$$

where θ is the spatial variation parameter calculated by the location conversion network, θ_{ij} is a specific spatial variation parameter determined by the convolution index and the network layer, τ_θ(G_i) is the coordinate mapping transformation function, G_i denotes the variable of the i-th convolution, and θ_{ij} denotes the spatial parameter of the i-th convolution at the j-th network layer.
After the coordinate mapping relation τ_θ has been obtained from the location conversion network and the coordinate mapper, the input feature map U and the coordinate mapping relation τ_θ are taken as the input of the coordinate generator of the output feature map V, and the original-image pixel coordinates are affine-transformed into the pixel point coordinates of the target image, as in expression (2):

$$V_i^c = \sum_{n=1}^{H} \sum_{m=1}^{W} U_{nm}^c \, k\left(x_i^s - m;\, \Phi_x\right) k\left(y_i^s - n;\, \Phi_y\right) \qquad (2)$$

where V_i^c is the value of the i-th pixel point of the target image in color channel c, U_{nm}^c is the pixel value at coordinates (n, m) in color channel c, k is a kernel function representing the linear interpolation that realizes the resampling function, (x_i^s, y_i^s) are the pixel coordinates of the original image, and Φ_x, Φ_y are the input interpolation parameters of the sampling kernel k.
Region detection by the spatial variation layer d yields the attention region of the crop leaf diseases and insect pests. Since the spatial variation layer d performs region detection on the basis of the features extracted by the first classification sub-network a1, the attention region of the leaf diseases and insect pests can be located, so that the second classification sub-network a2 carries out convolution learning within the attention region rather than from the original data set picture, which improves training efficiency and allows finer pest and disease features to be extracted.
After the first pooling layer b1 provided by this application, the feature map generated for the attention region is unified in size and sent into the second classification sub-network a2 through the spatial variation layer d; through joint optimization of the intra-network classification loss and the inter-scale ranking cross-entropy loss, detection of the image attention region is learned recursively in a mutually reinforcing manner, and the scale and the number of iterations can be determined as needed.
The fourth step specifically comprises the following steps:
The camera adopts pulse modulation and calculates the distance d between the target object and the camera from the time difference between pulse emission and pulse reception:

$$d = \frac{1}{2}\, c\, t_p \,\frac{S_1}{S_0 + S_1} \qquad (3)$$

where c is the speed of light, t_p is the duration of the light pulse, S_0 denotes the charge collected by the earlier shutter, and S_1 denotes the charge collected by the delayed shutter (the depth information acquired by the depth camera is measured by the change in the amount of collected charge). According to formula (3), the camera obtains the distance d between the target object and the camera; this distance d is the depth information z. Based on camera calibration, the depth information z of the target object in the acquired image and the central two-dimensional coordinates (x, y) of the minimum circumscribed rectangle of the region are combined to obtain the three-dimensional coordinate information (x, y, z) of the crop diseases and insect pests, and pesticide spraying is carried out; since the spraying has a certain coverage, the device has a large tolerance to position deviation in actual operation.
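Formula (3) and the composition of the three-dimensional coordinates can be illustrated numerically as below; the pulse duration and charge values are arbitrary example numbers, not measurements from the patent.

```python
# Numerical illustration of formula (3) and of composing (x, y, z).
C_LIGHT = 3.0e8          # speed of light, m/s
T_PULSE = 50e-9          # light-pulse duration t_p, s (example value)

def tof_distance(s0: float, s1: float, t_pulse: float = T_PULSE) -> float:
    """d = (1/2) * c * t_p * S1 / (S0 + S1)  -- pulse-modulated TOF ranging."""
    return 0.5 * C_LIGHT * t_pulse * s1 / (s0 + s1)

# example: the delayed shutter collects 10% of the total charge
z = tof_distance(s0=900.0, s1=100.0)          # about 0.75 m
pest_xyz = (412, 238, z)                      # (x, y) from the bounding-box centre
print(pest_xyz)
```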
The embodiment of the invention provides a tea-row self-propelled robot and a tea disease and insect pest detection and prevention method based on image and deep learning. If the detected problem is one of the targeted diseases and insect pests (mainly tea anthracnose, lesser green leafhopper and tea geometrid), a control instruction is sent to the corresponding spray head valve according to the transmitted type and position coordinates of the disease or insect pest, so that the corresponding pesticide is sprayed and the purpose of disease and insect pest control is achieved.
The network model provided by the patent is designed and built aiming at mainstream tea plant diseases and insect pests, so that the network structure is simplified, the training cost is low, and the detection accuracy is high.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flow chart of a crop pest detection and control method based on image and deep learning.
Fig. 2 is a diagram of a deep learning detection algorithm according to the present application.
FIG. 3 is a diagram of a neural network model framework according to the present application.
Fig. 4 is a schematic structural diagram of the crop pest detection and control device based on image and deep learning according to the embodiment.
Detailed Description
The present invention will be further described with reference to the accompanying drawings.
As shown in fig. 4, a crop pest detection and control device based on image and deep learning includes a detection and control device and a movement device, the detection and control device being mounted on the movement device;
the detection prevention device comprises a first detection prevention unit and a second detection prevention unit;
the first detection and prevention unit comprises a bracket, a camera, a main control device 11 and middle spray heads 4, 5 and 6, the camera comprises a first camera 1, a second camera 2 and a third camera 3, and the bracket is fixedly arranged on the moving device; the middle sprayers 4, 5 and 6 are arranged in the middle of the bracket, the first camera 1 is arranged in the middle of the bracket and is higher than the middle sprayers 4, 5 and 6, and the second camera 2 and the third camera 3 are respectively arranged on the left side and the right side of the movement device;
the second detection and prevention unit comprises a first stepping motor 7, a second stepping motor 10, a first side spray head 8, a second side spray head 9, a first vertical spray head screw rod and a second vertical spray head screw rod
A first vertical spray nozzle screw rod and a second vertical spray nozzle screw rod are respectively arranged on the left rear side and the right rear side of the moving device, a first stepping motor 7 is fixedly arranged on the first vertical spray nozzle screw rod, a second stepping motor 10 is fixedly arranged on the second vertical spray nozzle screw rod, and the first stepping motor 7 controls the first side spray nozzle 8 to vertically move up and down along the first vertical spray nozzle screw rod; the second stepping motor 10 controls the second side nozzle 9 to vertically move up and down along a second vertical nozzle screw rod;
the moving device comprises a crawler belt 12, the crawler belt 12 is a moving part of the tea robot, and the device is driven to advance through the transmission of the crawler belt 12;
the first camera 1, the second camera 2 and the third camera 3 are all arranged on an aluminum profile structure, are respectively positioned right above and at the left and right sides of the tea row and are responsible for collecting pest and disease image information of the tea row;
the middle spray heads 4, 5 and 6 are arranged on the horizontal aluminum profile structure and are responsible for spraying pesticides above the tea lines;
the first side sprayer 8 and the second side sprayer 9 vertically move up and down along a first vertical sprayer screw rod and a second vertical sprayer screw rod respectively through a first stepping motor 7 and a second stepping motor 10 and are responsible for pesticide spraying work on the left side and the right side of a tea row; the device is erected above the tea row, the spray heads on the two sides move up and down to cover the areas on the two sides of the tea row, and the spray heads on the two sides cannot move left and right.
The main control device 11 is installed on a top end beam of the support and is connected with the first camera 1, the second camera 2, the third camera 3, the first stepping motor 7 and the second stepping motor 10 through USB wires.
The main control device 11 adopts an NVIDIA Jetson Nano embedded development board with a quad-core ARM A57 CPU at 1.43 GHz, a 128-core Maxwell GPU and 4 GB of RAM;
the first camera 1, the second camera 2 and the third camera 3 are Intel RealSense D455 cameras; their field of view covers the top and both sides of the tea row, the depth range of the depth cameras is 0.4 m–10 m, and the FOV is 86° × 57°;
the first stepping motor 7 and the second stepping motor 10 are drive-control integrated 57-type (57 mm frame) stepping motors with a step angle of 1.8° and a torque of 0.4 N·m–2.2 N·m.
The first camera 1, the second camera 2 and the third camera 3 are responsible for collecting depth image information of crops;
the middle spray heads 4, 5 and 6 are responsible for spraying pesticides above crops;
the first side sprayer 8 and the second side sprayer 9 are responsible for spraying pesticides on the left side and the right side of crops;
the main control device 11 is used for identifying and detecting image information transmitted by the first camera 1, the second camera 2 and the third camera 3, and controlling the moving device, the middle nozzles 4, 5 and 6, the first side nozzle 8 and the second side nozzle 9.
Crop disease and insect pest detection and control based on image and deep learning is carried out on the basis of the above crop disease and insect pest detection and control device, and specifically includes the following steps:
step one, the main control device controls the movement device to operate and controls the cameras to acquire images above and on both sides of the tea row;
step two, the main control device preprocesses the acquired image, the preprocessing including grayscale conversion, smoothing, contour extraction and the like, so as to obtain a binary image from which noise interference information has been eliminated, removing noise information unfavorable to identification and preparing for the next step of pest and disease identification;
step three, the main control device trains a deep learning model to obtain the image characteristics of crop diseases and insect pests; the binary image from which noise interference information has been eliminated (carrying the leaf texture and lesion characteristic information of the input tea image) is compared with the image characteristics of the crop diseases and insect pests, and the disease and insect pest classification result and the two-dimensional position coordinates (x, y) of the diseased tea leaves are output;
step four, based on camera calibration, the main control device combines the depth information z of the target object in the acquired image with the central two-dimensional coordinates (x, y) of the minimum circumscribed rectangle of the region to obtain the three-dimensional coordinate information (x, y, z) of the crop diseases and insect pests; according to the three-dimensional position coordinates (x, y, z) of the crop diseases and insect pests, it controls the first side spray head 8 to move vertically up and down along the first vertical spray head screw rod, controls the second side spray head 9 to move vertically up and down along the second vertical spray head screw rod, and controls the middle spray heads 4, 5 and 6, the first side spray head 8 and the second side spray head 9 to spray pesticide at the three-dimensional position coordinates (x, y, z) of the crop diseases and insect pests.
As shown in figure 1, the crop pest detection and control method based on image and deep learning comprises the following steps:
step one, the mobile robot runs, and the cameras are controlled to collect images above and on both sides of the tea row;
step two, the acquired image is preprocessed, including grayscale conversion, smoothing, contour extraction and the like, to obtain a binary image from which noise interference information has been eliminated, removing noise information unfavorable to identification and preparing for the next step of pest and disease identification;
step three, a deep learning model is trained to obtain the image characteristics of crop diseases and insect pests; the binary image from which noise interference information has been eliminated (carrying the leaf texture and lesion characteristic information of the input tea image) is compared with the image characteristics of the crop diseases and insect pests, and the disease and insect pest classification result and the two-dimensional position coordinates (x, y) of the diseased tea leaves are output;
step four, based on camera calibration, the depth information z of the target object in the acquired image and the central two-dimensional coordinates (x, y) of the minimum circumscribed rectangle of the region are combined to obtain the three-dimensional coordinate information (x, y, z) of the crop diseases and insect pests, and the spray heads at different positions are controlled to spray pesticide according to the three-dimensional position coordinates (x, y, z) of the crop diseases and insect pests.
The second step specifically comprises the following steps:
s21, the pixels of the image captured by the camera are read and the resulting pixel values are binarized. Healthy tea leaves are dark green (typical RGB value of the color: (34, 139, 34)); this value is taken as a threshold to segment the region of interest, eliminating the influence of the healthy tea leaves, preliminarily extracting the image information of anthracnose, tea geometrid and lesser green leafhopper damage, and obtaining a binary image of the region of interest to facilitate the next step of image processing;
s22, removing interference noise information of the binary image based on Gaussian filtering;
and s23, a morphological closing operation is performed on the filtered binary image to remove black-spot interference information in the background, obtaining the binary image with noise interference information eliminated, so that it can conveniently be used as input to the deep learning model for comparison and detection.
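Using the dark-green reference color (34, 139, 34) mentioned in s21, the segmentation and cleaning of s21 to s23 might be sketched as follows; the per-channel tolerance and the kernel size are assumptions for illustration only.

```python
# Hedged sketch of s21-s23 with the dark-green (34, 139, 34) reference color;
# the tolerance and kernel size are illustrative assumptions.
import cv2
import numpy as np

HEALTHY_RGB = np.array([34, 139, 34])      # typical healthy-leaf color (R, G, B)
TOLERANCE = 40                             # assumed per-channel tolerance

def segment_pest_regions(bgr_image: np.ndarray) -> np.ndarray:
    rgb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)
    lower = np.clip(HEALTHY_RGB - TOLERANCE, 0, 255).astype(np.uint8)
    upper = np.clip(HEALTHY_RGB + TOLERANCE, 0, 255).astype(np.uint8)
    healthy_mask = cv2.inRange(rgb, lower, upper)         # pixels close to healthy green
    roi = cv2.bitwise_not(healthy_mask)                   # s21: suppress healthy leaves
    roi = cv2.GaussianBlur(roi, (5, 5), 0)                # s22: remove interference noise
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(roi, cv2.MORPH_CLOSE, kernel) # s23: closing operation
```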
As shown in fig. 2, the third step specifically includes the following steps:
s31, a crop pest data set is established based on historical crop pest data images. The pictures in the crop pest data set are pictures of crop pests taken in advance (including source data set pictures from the network); the data set comprises three types of common pest pictures, namely anthracnose, tea geometrid and green leafhopper pictures, as well as healthy tea sample pictures. The tea pest data set is augmented by Mosaic, rotation and cropping; in this embodiment 11230 pictures and the corresponding annotation files are obtained, and the data are divided into a training set and a test set in the ratio 8:2;
the data set is used for training the neural network and is not a "standard" template; an advantage of deep learning is that salient features do not need to be extracted manually but are instead learned by the machine, so the image samples in the data set used in step s32 are general in nature, and through continuous training of the neural network the important features are stored in the model for subsequent matching and comparison;
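The 8:2 split and the rotation and cropping augmentation could be set up as in the following sketch; the directory layout, image size and the omission of Mosaic augmentation are simplifications and assumptions, not details fixed by the patent.

```python
# Hedged sketch of the 8:2 train/test split with rotation and random-crop
# augmentation (folder names and image size are assumptions; Mosaic
# augmentation would be added separately; in practice the test subset would
# use a plain resize transform instead of augmentation).
import torch
from torch.utils.data import random_split, DataLoader
from torchvision import datasets, transforms

train_tf = transforms.Compose([
    transforms.RandomRotation(30),            # rotation augmentation
    transforms.RandomResizedCrop(224),        # cropping augmentation
    transforms.ToTensor(),
])

# folders assumed: anthracnose/, geometrid/, leafhopper/, healthy/
full_set = datasets.ImageFolder("tea_pest_dataset", transform=train_tf)
n_train = int(0.8 * len(full_set))
train_set, test_set = random_split(full_set, [n_train, len(full_set) - n_train],
                                   generator=torch.Generator().manual_seed(0))
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
test_loader = DataLoader(test_set, batch_size=32)
```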
s32, a deep learning network TeaNet for detecting diseases and insect pests of crops (tea in this embodiment) is built. As shown in figure 3, the deep learning network TeaNet comprises a first classification sub-network a1 and a second classification sub-network a2, whose outputs are connected to a Softmax layer; the first classification sub-network a1 comprises a first convolution layer, a first pooling layer b1, a first spatial pyramid pooling layer c1 and a first fully-connected layer e1; the second classification sub-network a2 comprises a second convolution layer, a second pooling layer b2, a second spatial pyramid pooling layer c2 and a second fully-connected layer e2; the first classification sub-network a1 and the second classification sub-network a2 are connected with each other through a spatial variation layer d;
s33, after the data in the crop disease and insect pest data set are input into the deep learning network TeaNet, convolution is performed through the first convolution layer to extract a feature image; the feature image is input into the first pooling layer b1 to prevent overfitting, and the feature image after overfitting prevention is output; the feature image after overfitting prevention is input into the first spatial pyramid pooling layer c1 for feature compression, simplifying the network complexity; the feature-compressed data are input into the first fully-connected layer to obtain first lesion features of the crop diseases and insect pests;
s34, while the feature image after overfitting prevention is input into the first spatial pyramid pooling layer c1, it is also input into the spatial variation layer d for pest and disease region detection, and a pest and disease attention region is output; the image of the pest and disease attention region is input into the second classification sub-network a2, and convolution is performed through the second convolution layer to extract image features of the pest and disease attention region; these features are input into the second pooling layer b2 to prevent overfitting, and the attention-region image features after overfitting prevention are output; the attention-region image features after overfitting prevention are input into the second spatial pyramid pooling layer c2 for feature compression, simplifying the network complexity; the feature-compressed data are input into the second fully-connected layer to obtain second lesion features of the crop diseases and insect pests;
s35, feature fusion is performed on the first lesion features and the second lesion features, and the crop disease and insect pest image features are output. The outputs of the first fully-connected layer e1 and the second fully-connected layer e2 are fused; feature fusion is a necessary link in a convolutional neural network: the convolution proceeds layer by layer and features are extracted at each layer, so the features of the layers need to be fused.
s36, the fused image features are input into the Softmax layer to obtain the final predicted value of the most probable disease or insect pest (the final predicted value is the probability of the most probable disease or insect pest category in the image to be detected, for example "anthracnose: 89%"), and the most probable disease or insect pest category and the central two-dimensional coordinates (x, y) of the minimum circumscribed rectangle of the identified pest region are output.
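At inference time, the Softmax output of s36 reduces to a category, its probability and the centre of the minimum circumscribed rectangle. The sketch below assumes the hypothetical TeaNet module from the earlier sketch and uses an OpenCV contour pass for the rectangle; it is an illustration, not the patent's exact procedure.

```python
# Hedged inference sketch for s36: class probability plus the centre of the
# minimum circumscribed rectangle of the detected lesion region (illustrative).
import cv2
import numpy as np
import torch

CLASSES = ["anthracnose", "tea geometrid", "green leafhopper", "healthy"]

def predict(model, binary_roi: np.ndarray):
    x = torch.from_numpy(binary_roi).float().div(255)          # HxW mask
    x = x.unsqueeze(0).unsqueeze(0).repeat(1, 3, 1, 1)         # 1x3xHxW input
    with torch.no_grad():
        probs = model(x)[0]                                    # softmax output
    idx = int(probs.argmax())
    # centre (x, y) of the bounding rectangle of the largest lesion blob
    contours, _ = cv2.findContours(binary_roi, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    cx = cy = None
    if contours:
        bx, by, bw, bh = cv2.boundingRect(max(contours, key=cv2.contourArea))
        cx, cy = bx + bw // 2, by + bh // 2
    return CLASSES[idx], float(probs[idx]), (cx, cy)
```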
Detection of the pest and disease region is completed in the spatial variation layer d, which serves as the input stage of the second classification sub-network a2. The spatial variation layer d comprises a location conversion network, a coordinate mapper and an image generator; the location conversion network is used to calculate the parameters of the spatial transformation, the coordinate mapper is used to obtain the mapping correspondence between the input feature map and the output feature map, and the image generator is used to generate the output map according to the corresponding mapping relation.
The location conversion network takes as input the feature map U output by the first classification sub-network a1 and outputs the transformation matrix θ; the output expression of the location network is θ = f(U), where f(·) is a convolutional neural network (localization network).
The coordinate mapper adopts an inverse transformation method to obtain the value corresponding to each pixel point on the sampling grid of the output feature map. According to the spatial variation parameters calculated by the location network, the coordinate mapper constructs the coordinate mapping relation between the inverse-transformed image and the sampling grid G_i of the input image, and obtains the mapping correspondence τ_θ from the input feature map U ∈ R^{H×W×C} to the output feature map V ∈ R^{H×W×C}, where R denotes the set of real numbers, W and H are the width and height of the input feature map U and the output feature map V, and C is the number of input and output channels. Combining the pixel coordinates (x_i^s, y_i^s) of the input feature map U with the pixel coordinates (x_i^t, y_i^t) of the output feature map V, the correspondence between the input feature map U and the output feature map V is given by formula (1):

$$\begin{pmatrix} x_i^s \\ y_i^s \end{pmatrix} = \tau_\theta(G_i) = \begin{bmatrix} \theta_{11} & \theta_{12} & \theta_{13} \\ \theta_{21} & \theta_{22} & \theta_{23} \end{bmatrix} \begin{pmatrix} x_i^t \\ y_i^t \\ 1 \end{pmatrix} \qquad (1)$$

where θ is the spatial variation parameter calculated by the location conversion network, θ_{ij} is a specific spatial variation parameter determined by the convolution index and the network layer, τ_θ(G_i) is the coordinate mapping transformation function, G_i denotes the variable of the i-th convolution, and θ_{ij} denotes the spatial parameter of the i-th convolution at the j-th network layer.
After the coordinate mapping relation τ_θ has been obtained from the location conversion network and the coordinate mapper, the input feature map U and the coordinate mapping relation τ_θ are taken as the input of the coordinate generator of the output feature map V, and the original-image pixel coordinates are affine-transformed into the pixel point coordinates of the target image, as in expression (2):

$$V_i^c = \sum_{n=1}^{H} \sum_{m=1}^{W} U_{nm}^c \, k\left(x_i^s - m;\, \Phi_x\right) k\left(y_i^s - n;\, \Phi_y\right) \qquad (2)$$

where V_i^c is the value of the i-th pixel point of the target image in color channel c, U_{nm}^c is the pixel value at coordinates (n, m) in color channel c, k is a kernel function representing the linear interpolation that realizes the resampling function, (x_i^s, y_i^s) are the pixel coordinates of the original image, and Φ_x, Φ_y are the input interpolation parameters of the sampling kernel k.
Region detection is performed by the spatial variation layer d to obtain the attention region of tea-leaf diseases and insect pests. Because the spatial variation layer d performs region detection on the basis of the features extracted by the first classification sub-network a1, the attention region of the leaf diseases and insect pests can be further located, so that the second classification sub-network a2 carries out convolution learning within the attention region rather than from the original data set picture, which improves training efficiency and allows finer pest and disease features to be extracted.
After the first pooling layer b1 provided by this application, the feature map generated for the attention region is unified in size and sent into the second classification sub-network a2 through the spatial variation layer d; through joint optimization of the intra-network classification loss and the inter-scale ranking cross-entropy loss, detection of the image attention region is learned recursively in a mutually reinforcing manner, and the scale and the number of iterations can be determined as needed.
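The joint optimization of the intra-network classification loss and the inter-scale ranking cross-entropy loss resembles the pairwise ranking used in recurrent attention networks. The sketch below shows one plausible form under that reading; the margin value and the exact combination of terms are assumptions, since the patent does not give the loss formula.

```python
# One plausible form of the joint loss (assumption: cross-entropy on both
# branches plus a pairwise ranking term between scales, margin = 0.05).
import torch
import torch.nn.functional as F

def joint_loss(logits_coarse, logits_fine, target, margin=0.05):
    cls = F.cross_entropy(logits_coarse, target) + F.cross_entropy(logits_fine, target)
    p_coarse = F.softmax(logits_coarse, dim=1).gather(1, target.unsqueeze(1)).squeeze(1)
    p_fine = F.softmax(logits_fine, dim=1).gather(1, target.unsqueeze(1)).squeeze(1)
    # ranking term: the attention branch should be at least `margin` more confident
    rank = torch.clamp(p_coarse - p_fine + margin, min=0).mean()
    return cls + rank
```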
The fourth step specifically comprises the following steps:
the first camera 1, the second camera 2 and the second camera 3 are depth cameras, depth information z of a target object in an image is obtained through calibration, three-dimensional coordinate information (x, y, z) of crop diseases and insect pests is obtained based on the depth information z of the target object in the image obtained through camera calibration and a central two-dimensional coordinate (x, y) of a minimum external rectangle of an area, pesticide spraying is carried out, a certain coverage range is provided for spraying, and therefore the fault-tolerant capability is high for position deviation in actual operation.
The first camera 1, the second camera 2 and the third camera 3 are selected as TOF depth cameras which adopt pulse modulation; the distance d between the target object and the camera is calculated from the time difference between pulse emission and pulse reception:

$$d = \frac{1}{2}\, c\, t_p \,\frac{S_1}{S_0 + S_1} \qquad (3)$$

where c is the speed of light, t_p is the duration of the light pulse, S_0 denotes the charge collected by the earlier shutter, and S_1 denotes the charge collected by the delayed shutter (the depth information acquired by the depth camera is measured by the change in the amount of collected charge). According to formula (3), the camera obtains the distance d between the target object and the camera; this distance d is the depth information z. It is fused with the coordinates (x, y) of the two-dimensional image information, and the three-dimensional coordinates (x, y, z) of the target object are obtained comprehensively.
Through the above steps, after the category of the disease or insect pest and the corresponding three-dimensional coordinates (x, y, z) are obtained, the main control device sends instructions to the corresponding spray head control valves according to the coordinate position to open or close the corresponding spray heads. The pesticide spraying here is of a general-purpose type and does not distinguish between categories of liquid pesticide, in response to the national agrochemical-control advocacy of "one pesticide for multiple uses". For example, if a disease or insect pest is detected in the upper pixels of the image on the left side of the tea row, the main controller sends an instruction so that, driven by the rotation of the motor, the spray head on the left side of the device moves upward and sprays pesticide.
The nozzles on both sides of the rear of the self-propelled robot move up and down through the rotation of the stepping motors. Because the stepping motors integrate drive and control, the space occupied by a traditional motor driver board is saved, and their specific rotating speed is set by the corresponding instruction sent from the main control device.
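The dispatch behaviour described above can be illustrated, purely as a sketch, by the following logic; move_nozzle_to_height, open_valve, the travel constant MAX_NOZZLE_HEIGHT and the mapping from image region to nozzle height are all assumptions introduced here for illustration and are not the claimed control scheme.

```python
MAX_NOZZLE_HEIGHT = 1.2  # metres of travel on the vertical screw (assumed)

def move_nozzle_to_height(side, height):
    # Stub for the stepper-motor command issued by the main control device.
    print(f"move {side} nozzle to {height:.2f} m")

def open_valve(name):
    # Stub for the spray-head control-valve command.
    print(f"open {name} nozzle valve")

def dispatch_spray(pest_xyz, side, image_height=480):
    """Sketch of universal-type spray dispatch: the pest's vertical image
    position selects the nozzle height, the detection side selects which
    nozzle is driven. Thresholds and travel are assumptions."""
    x, y, z = pest_xyz
    if side in ("left", "right"):
        # Side nozzles ride a vertical screw driven by a stepping motor:
        # map the vertical image position to a nozzle height set-point.
        target = (1.0 - y / image_height) * MAX_NOZZLE_HEIGHT
        move_nozzle_to_height(side, target)
        open_valve(side)
    else:
        # Pests detected above the canopy are handled by the middle nozzles.
        open_valve("middle")

dispatch_spray((320, 120, 1.5), side="left")
```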
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, this method of disclosure should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules or units or groups of devices in the examples disclosed herein may be arranged in a device as described in this embodiment, or alternatively may be located in one or more devices different from the devices in this example. The modules in the foregoing examples may be combined into one module or may be further divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules in the devices in an embodiment may be adaptively changed and arranged in one or more devices different from the embodiment. Modules or units or groups in embodiments may be combined into one module or unit or group and may furthermore be divided into sub-modules or sub-units or sub-groups. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Furthermore, some of the described embodiments are described herein as a method or combination of method elements that can be performed by a processor of a computer system or by other means of performing the described functions. A processor with the necessary instructions for carrying out the method or the method elements thus forms a device for carrying out the method or the method elements. Further, the elements of the apparatus embodiments described herein are examples of the following apparatus: the apparatus is used to implement the functions performed by the elements for the purpose of carrying out the invention.
The various techniques described herein may be implemented in connection with hardware or software or, alternatively, with a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
In the case of program code execution on programmable computers, the computing device will generally include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Wherein the memory is configured to store program code; the processor is configured to perform the inventive method according to instructions in said program code stored in the memory.
By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media store information such as computer readable instructions, data structures, program modules or other data. Communication media typically embody computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and include any information delivery media. Combinations of any of the above are also included within the scope of computer readable media.
As used herein, unless otherwise specified the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as disclosed herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The present invention has been disclosed in an illustrative rather than a restrictive sense, and the scope of the present invention is defined by the appended claims.
The above is only a preferred embodiment of the present invention, and it should be noted that: it will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the principles of the invention and these are intended to be within the scope of the invention.

Claims (10)

1. A crop disease and insect pest detection and prevention device based on image and deep learning, characterized in that it comprises a detection and prevention device and a movement device, the detection and prevention device being arranged on the movement device;
the detection and prevention device comprises a first detection and prevention unit and a second detection and prevention unit;
the first detection and prevention unit comprises a support, cameras, a main control device (11) and middle spray heads (4, 5, 6); the cameras comprise a first camera (1), a second camera (2) and a third camera (3); the support is fixedly arranged on the movement device; the middle spray heads (4, 5, 6) are arranged in the middle of the support, the first camera (1) is arranged in the middle of the support and higher than the middle spray heads (4, 5, 6), and the second camera (2) and the third camera (3) are respectively arranged on the left and right sides of the movement device;
the second detection and prevention unit comprises a first stepping motor (7), a second stepping motor (10), a first side spray head (8), a second side spray head (9), a first vertical spray head screw rod and a second vertical spray head screw rod;
the first vertical spray head screw rod and the second vertical spray head screw rod are respectively arranged on the left rear side and the right rear side of the movement device; the first stepping motor (7) is fixedly arranged on the first vertical spray head screw rod, and the second stepping motor (10) is fixedly arranged on the second vertical spray head screw rod; the first stepping motor (7) controls the first side spray head (8) to move vertically up and down along the first vertical spray head screw rod, and the second stepping motor (10) controls the second side spray head (9) to move vertically up and down along the second vertical spray head screw rod;
the moving device comprises a crawler belt (12), wherein the crawler belt (12) is a moving part of the crop pest detection and control device based on image and deep learning, and the device is driven to advance through the transmission of the crawler belt (12).
2. The crop pest detection and control device based on image and deep learning according to claim 1, characterized in that the first camera (1), the second camera (2) and the third camera (3) are used for collecting depth image information of crops;
the middle spray heads (4, 5 and 6) are responsible for spraying pesticides above crops;
the first side sprayer (8) and the second side sprayer (9) are responsible for spraying pesticides on the left side and the right side of crops;
the main control device (11) is used for identifying and detecting image information transmitted by the first camera (1), the second camera (2) and the third camera (3) and controlling the movement device, the middle spray heads (4, 5 and 6), the first side spray head (8) and the second side spray head (9).
3. The crop pest detection and control device based on image and deep learning according to claim 1, characterized in that,
the crop disease and insect pest detection and prevention device based on image and deep learning carries out crop disease and insect pest detection and prevention based on image and deep learning, specifically comprising the following steps:
step one, the main control device controls the movement device to operate and controls the cameras to collect images of the top and both sides of the crops;
step two, the main control device preprocesses the acquired images, the preprocessing including grayscale processing, smoothing, contour extraction and the like, to obtain a binary image with noise interference information eliminated;
step three, the main control device trains a deep learning model to obtain the image features of crop diseases and insect pests; the binary image with noise interference information eliminated is compared with the image features of the crop diseases and insect pests, and the disease and insect pest classification result and the two-dimensional position coordinates (x, y) of the affected crops are output;
step four, the main control device obtains the three-dimensional coordinate information (x, y, z) of the crop diseases and insect pests based on the depth information z of the target object in the acquired image, obtained through camera calibration, and the central two-dimensional coordinates (x, y) of the minimum circumscribed rectangle of the region; according to the three-dimensional position coordinates (x, y, z) of the crop diseases and insect pests, it controls the first side spray head (8) to move vertically up and down along the first vertical spray head screw rod and the second side spray head (9) to move vertically up and down along the second vertical spray head screw rod, and controls the middle spray heads (4, 5, 6), the first side spray head (8) and the second side spray head (9) to spray pesticide at the three-dimensional position coordinates (x, y, z) of the crop diseases and insect pests.
4. The crop pest detection and control method based on image and deep learning according to claim 3, wherein the third step specifically comprises the following steps:
s31, establishing a crop disease and pest data set based on the historical crop disease and pest data image;
S32, building a deep learning network TeaNet for detecting crop diseases and insect pests, wherein the deep learning network TeaNet comprises a first classification sub-network a1 and a second classification sub-network a2; the first classification sub-network a1 and the second classification sub-network a2 are connected to a Softmax layer; the first classification sub-network a1 comprises a first convolution layer, a first pooling layer b1, a first spatial pyramid pooling layer c1 and a first fully connected layer e1; the second classification sub-network a2 comprises a second convolution layer, a second pooling layer b2, a second spatial pyramid pooling layer c2 and a second fully connected layer e2; the first classification sub-network a1 and the second classification sub-network a2 are connected with each other through a spatial variation layer d;
S33, after the data in the crop disease and insect pest data set are input into the deep learning network TeaNet, convolution is performed through the first convolution layer to extract a feature image; the feature image is input into the first pooling layer b1 to prevent overfitting, and the feature image after overfitting prevention is output; the feature image after overfitting prevention is input into the first spatial pyramid pooling layer c1 for feature compression; the feature-compressed data are input into the first fully connected layer to obtain a first lesion feature of the crop diseases and insect pests;
S34, while the feature image after overfitting prevention is input into the first spatial pyramid pooling layer c1, it is also input into the spatial variation layer d for detection of the disease and pest region, and the disease and pest attention region is output; the image of the disease and pest attention region is input into the second classification sub-network a2, and convolution is performed through the second convolution layer to extract the image features of the disease and pest attention region; the image features of the disease and pest attention region are input into the second pooling layer b2 to prevent overfitting, and the image features of the attention region after overfitting prevention are output; the image features of the attention region after overfitting prevention are input into the second spatial pyramid pooling layer c2 for feature compression; the feature-compressed data are input into the second fully connected layer to obtain a second lesion feature of the crop diseases and insect pests;
S35, performing feature fusion on the first lesion feature and the second lesion feature, and outputting the crop disease and insect pest image features;
and S36, inputting the fused image features into a Softmax layer to obtain a final predicted value of the most probable diseases and insect pests, and outputting the most probable disease and insect pest type and the central two-dimensional coordinates (x, y) of the minimum circumscribed rectangle of the disease and insect pest identification area.
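As a non-limiting illustration of the two-branch structure recited in steps S32 to S36, the following PyTorch-style sketch wires together a first sub-network, a spatial-variation (spatial transformer) crop, a second sub-network, spatial pyramid pooling, feature fusion and a Softmax head; all channel widths, pyramid levels and the class count are assumptions, and the sketch is not the claimed TeaNet implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPP(nn.Module):
    """Simple spatial pyramid pooling: fixed-length output regardless of
    feature-map size (pyramid levels are an illustrative assumption)."""
    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        self.levels = levels

    def forward(self, x):
        pooled = [F.adaptive_max_pool2d(x, l).flatten(1) for l in self.levels]
        return torch.cat(pooled, dim=1)

class TeaNetSketch(nn.Module):
    """Minimal sketch of the two-branch network of claim 4: sub-network a1 on
    the full image, a spatial-variation layer d that crops an attention
    region, sub-network a2 on the crop, feature fusion, then Softmax."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                   nn.MaxPool2d(2))          # first conv + pooling b1
        self.conv2 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                   nn.MaxPool2d(2))          # second conv + pooling b2
        self.spp = SPP()                                      # c1 / c2 (shared sketch)
        feat = 32 * (1 + 4 + 16)
        self.fc1 = nn.Linear(feat, 128)                       # fully connected e1
        self.fc2 = nn.Linear(feat, 128)                       # fully connected e2
        self.loc = nn.Sequential(nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                                 nn.Linear(32 * 16, 6))       # localisation net of layer d
        self.head = nn.Linear(256, num_classes)

    def attention_crop(self, img, feat):
        theta = self.loc(feat).view(-1, 2, 3)                 # theta = f(U)
        grid = F.affine_grid(theta, img.size(), align_corners=False)
        return F.grid_sample(img, grid, align_corners=False)  # attention region

    def forward(self, img):
        f1 = self.conv1(img)
        crop = self.attention_crop(img, f1)                   # spatial variation layer d
        f2 = self.conv2(crop)
        feat1 = self.fc1(self.spp(f1))                        # first lesion feature
        feat2 = self.fc2(self.spp(f2))                        # second lesion feature
        fused = torch.cat([feat1, feat2], dim=1)              # feature fusion
        return F.softmax(self.head(fused), dim=1)             # Softmax layer

# Hypothetical usage on a random 128x128 RGB image
probs = TeaNetSketch()(torch.randn(1, 3, 128, 128))
```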
5. The crop pest detection and control device based on image and deep learning according to claim 3, characterized in that,
the spatial variation layer d comprises a location conversion network, a coordinate mapper and an image generator;
the location conversion network is used for calculating space conversion parameters, the coordinate mapper is used for acquiring the mapping corresponding relation between the input characteristic diagram and the output characteristic diagram, and the image generator is used for generating output mapping according to the corresponding mapping relation;
the location conversion network takes the feature map U output by the first classification sub-network a1 as input and outputs a transformation matrix θ; the output expression of the location conversion network is θ = f(U), where f(·) is a convolutional neural network;
the coordinate mapper obtains the value corresponding to each pixel point of the output feature map on the generated grid by an inverse transformation method; according to the spatial variation parameters calculated by the location conversion network, the coordinate mapper constructs the coordinate mapping relation between the inversely transformed image and the up-sampling grid G_i of the input image, obtaining the correspondence τ_θ of each mapping position from the input feature map U ∈ R^{H×W×C} to the output feature map V ∈ R^{H×W×C}, where R denotes the set of real numbers, W and H are the width and height of the input feature map U and the output feature map V, and C is the number of input and output channels; combining the pixel coordinates (x_i^s, y_i^s) of the input feature map U and the pixel coordinates (x_i^t, y_i^t) of the output feature map V, the correspondence between the input feature map U and the output feature map V is the following formula (1):

$$\begin{pmatrix} x_i^{s} \\ y_i^{s} \end{pmatrix} = \tau_\theta(G_i) = \begin{bmatrix} \theta_{11} & \theta_{12} & \theta_{13} \\ \theta_{21} & \theta_{22} & \theta_{23} \end{bmatrix} \begin{pmatrix} x_i^{t} \\ y_i^{t} \\ 1 \end{pmatrix} \qquad (1)$$

where θ is the spatial variation parameter calculated by the location conversion network, θ_ij is a specific spatial variation parameter determined by the convolution order i and the network layer j, τ_θ(G_i) is the coordinate mapping transformation function, G_i denotes the grid variable of the i-th convolution, and θ_ij denotes the spatial parameter of the i-th convolution at the j-th network layer;
after the coordinate mapping relation τ_θ is obtained from the location conversion network and the coordinate mapper, the input feature map U and the coordinate mapping relation τ_θ are used as the input of the coordinate generator of the output feature map V, and the original image pixel coordinates are affine-transformed into the pixel point coordinates of the target image, as in formula (2):

$$V_i^{c} = \sum_{n}^{H} \sum_{m}^{W} U_{nm}^{c}\, k\!\left(x_i^{s} - m;\, \Phi_x\right) k\!\left(y_i^{s} - n;\, \Phi_y\right) \qquad (2)$$

where V_i^c is the value of the i-th pixel point of the target image in color channel c; U_nm^c is the pixel value with coordinates (n, m) in color channel c; k is a kernel function, representing the linear interpolation that realizes the resampling function; (x_i^s, y_i^s) are the pixel coordinates of the original image; and Φ_x, Φ_y are the interpolation parameters of the sampling kernel k.
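Formulas (1) and (2) can be illustrated with the following NumPy sketch, which applies an assumed 2×3 affine matrix to normalized target coordinates and resamples the input feature map with a bilinear kernel; the normalization of coordinates to [-1, 1] and the grid sizes in the example are assumptions of the sketch.

```python
import numpy as np

def spatial_transform(U, theta, out_h, out_w):
    """Sketch of formulas (1) and (2): map each target-grid coordinate through
    the 2x3 affine matrix theta into source coordinates, then resample the
    input feature map U of shape (H, W, C) with a bilinear kernel k."""
    H, W, C = U.shape
    V = np.zeros((out_h, out_w, C))
    for i in range(out_h):
        for j in range(out_w):
            # Target pixel (x_t, y_t) in normalized coordinates
            x_t = 2 * j / (out_w - 1) - 1
            y_t = 2 * i / (out_h - 1) - 1
            # Formula (1): source coordinates = theta @ [x_t, y_t, 1]
            x_s, y_s = theta @ np.array([x_t, y_t, 1.0])
            # Back to pixel indices of U
            xs = (x_s + 1) * (W - 1) / 2
            ys = (y_s + 1) * (H - 1) / 2
            # Formula (2): bilinear kernel sum, written only over the
            # 4 neighbours where the kernel k is non-zero
            for n in (int(np.floor(ys)), int(np.floor(ys)) + 1):
                for m in (int(np.floor(xs)), int(np.floor(xs)) + 1):
                    if 0 <= n < H and 0 <= m < W:
                        k = max(0, 1 - abs(xs - m)) * max(0, 1 - abs(ys - n))
                        V[i, j] += k * U[n, m]
    return V

# An identity transform reproduces the input (up to floating-point error)
theta_id = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
U = np.random.rand(8, 8, 3)
assert np.allclose(U, spatial_transform(U, theta_id, 8, 8))
```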
6. A crop pest detection and control method based on image and deep learning is characterized by comprising the following steps:
step one, the mobile robot runs, and the cameras are controlled to collect images of the top and both sides of the crops;
step two, preprocessing the acquired images, including grayscale processing, smoothing and contour extraction, to obtain a binary image with noise interference information eliminated;
step three, training a deep learning model to obtain the image features of crop diseases and insect pests; the binary image with noise interference information eliminated is compared with the image features of the crop diseases and insect pests, and the disease and insect pest classification result and the two-dimensional position coordinates (x, y) of the affected crops are output;
step four, obtaining the three-dimensional coordinate information (x, y, z) of the crop diseases and insect pests based on the depth information z of the target object in the acquired image, obtained through camera calibration, and the central two-dimensional coordinates (x, y) of the minimum circumscribed rectangle of the region, and controlling the spray heads at different positions to spray pesticide according to the three-dimensional position coordinates (x, y, z) of the crop diseases and insect pests.
7. The crop pest detection and control method based on image and deep learning according to claim 6, wherein the second step specifically comprises the following steps:
s21, reading pixels of the image collected by the camera, and carrying out binarization processing on the obtained pixel values;
s22, removing interference noise information of the binary image based on Gaussian filtering;
and S23, performing a morphological closing operation on the filtered binary image and removing black-spot interference information in the background to obtain the binary image with noise interference information eliminated, so that it can conveniently be used as input to the deep learning model for comparison and detection.
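Steps S21 to S23 could be realized, as one possible sketch using OpenCV, in the following way; the Otsu threshold mode and the 5×5 kernel sizes are illustrative assumptions rather than parameters fixed by the claim.

```python
import cv2

def preprocess(bgr_image):
    """Sketch of steps S21-S23: binarize the camera frame, suppress noise with
    a Gaussian filter, then apply a morphological closing to remove dark
    speckle in the background."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    # S21: binarization of the pixel values (Otsu threshold assumed)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # S22: Gaussian filtering to remove interference noise
    blurred = cv2.GaussianBlur(binary, (5, 5), 0)
    # S23: morphological closing to remove black-spot interference
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    return cv2.morphologyEx(blurred, cv2.MORPH_CLOSE, kernel)
```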
8. The crop pest detection and control method based on images and deep learning according to claim 6, wherein the third step specifically comprises the following steps:
s31, establishing a crop disease and pest data set based on the historical crop disease and pest data image;
S32, building a deep learning network TeaNet for detecting crop diseases and insect pests, wherein the deep learning network TeaNet comprises a first classification sub-network a1 and a second classification sub-network a2; the first classification sub-network a1 and the second classification sub-network a2 are connected to a Softmax layer; the first classification sub-network a1 comprises a first convolution layer, a first pooling layer b1, a first spatial pyramid pooling layer c1 and a first fully connected layer e1; the second classification sub-network a2 comprises a second convolution layer, a second pooling layer b2, a second spatial pyramid pooling layer c2 and a second fully connected layer e2; the first classification sub-network a1 and the second classification sub-network a2 are connected with each other through a spatial variation layer d;
S33, after the data in the crop disease and insect pest data set are input into the deep learning network TeaNet, convolution is performed through the first convolution layer to extract a feature image; the feature image is input into the first pooling layer b1 to prevent overfitting, and the feature image after overfitting prevention is output; the feature image after overfitting prevention is input into the first spatial pyramid pooling layer c1 for feature compression; the feature-compressed data are input into the first fully connected layer to obtain a first lesion feature of the crop diseases and insect pests;
S34, while the feature image after overfitting prevention is input into the first spatial pyramid pooling layer c1, it is also input into the spatial variation layer d for detection of the disease and pest region, and the disease and pest attention region is output; the image of the disease and pest attention region is input into the second classification sub-network a2, and convolution is performed through the second convolution layer to extract the image features of the disease and pest attention region; the image features of the disease and pest attention region are input into the second pooling layer b2 to prevent overfitting, and the image features of the attention region after overfitting prevention are output; the image features of the attention region after overfitting prevention are input into the second spatial pyramid pooling layer c2 for feature compression; the feature-compressed data are input into the second fully connected layer to obtain a second lesion feature of the crop diseases and insect pests;
S35, performing feature fusion on the first lesion feature and the second lesion feature, and outputting the crop disease and insect pest image features; and S36, inputting the fused features into the Softmax layer to obtain a final predicted value of the most probable diseases and insect pests, and outputting the most probable disease and insect pest categories and the central two-dimensional coordinates (x, y) of the minimum circumscribed rectangle of the disease and pest identification area.
9. The method for detecting and controlling crop diseases and insect pests based on image and deep learning according to claim 6,
the spatial variation layer d comprises a location conversion network, a coordinate mapper and an image generator; the location conversion network is used for calculating the parameters of the spatial transformation, the coordinate mapper is used for obtaining the mapping correspondence between the input feature map and the output feature map, and the image generator is used for generating the output map according to the corresponding mapping relation;
the location conversion network takes the feature map U output by the first classification sub-network a1 as input and outputs a transformation matrix θ; the output expression of the location conversion network is θ = f(U), where f(·) is a convolutional neural network (localization network);
the coordinate mapper obtains the value corresponding to each pixel point of the output feature map on the generated grid by an inverse transformation method; according to the spatial variation parameters calculated by the location conversion network, the coordinate mapper constructs the coordinate mapping relation between the inversely transformed image and the up-sampling grid G_i of the input image, obtaining the correspondence τ_θ of each mapping position from the input feature map U ∈ R^{H×W×C} to the output feature map V ∈ R^{H×W×C}, where R denotes the set of real numbers, W and H are the width and height of the input feature map U and the output feature map V, and C is the number of input and output channels; combining the pixel coordinates (x_i^s, y_i^s) of the input feature map U and the pixel coordinates (x_i^t, y_i^t) of the output feature map V, the correspondence between the input feature map U and the output feature map V is the following formula (1):

$$\begin{pmatrix} x_i^{s} \\ y_i^{s} \end{pmatrix} = \tau_\theta(G_i) = \begin{bmatrix} \theta_{11} & \theta_{12} & \theta_{13} \\ \theta_{21} & \theta_{22} & \theta_{23} \end{bmatrix} \begin{pmatrix} x_i^{t} \\ y_i^{t} \\ 1 \end{pmatrix} \qquad (1)$$

where θ is the spatial variation parameter calculated by the location conversion network, θ_ij is a specific spatial variation parameter determined by the convolution order i and the network layer j, τ_θ(G_i) is the coordinate mapping transformation function, G_i denotes the grid variable of the i-th convolution, and θ_ij denotes the spatial parameter of the i-th convolution at the j-th network layer;
after the coordinate mapping relation τ_θ is obtained from the location conversion network and the coordinate mapper, the input feature map U and the coordinate mapping relation τ_θ are used as the input of the coordinate generator of the output feature map V, and the original image pixel coordinates are affine-transformed into the pixel point coordinates of the target image, as in formula (2):

$$V_i^{c} = \sum_{n}^{H} \sum_{m}^{W} U_{nm}^{c}\, k\!\left(x_i^{s} - m;\, \Phi_x\right) k\!\left(y_i^{s} - n;\, \Phi_y\right) \qquad (2)$$

where V_i^c is the value of the i-th pixel point of the target image in color channel c; U_nm^c is the pixel value with coordinates (n, m) in color channel c; k is a kernel function, representing the linear interpolation that realizes the resampling function; (x_i^s, y_i^s) are the pixel coordinates of the original image; and Φ_x, Φ_y are the interpolation parameters of the sampling kernel k.
10. The crop pest detection and control method based on image and deep learning according to claim 6, wherein the fourth step specifically comprises the following steps:
the camera adopts pulse modulation, and measures and calculates the distance d between a target object and the camera according to the time difference between pulse emission and pulse reception; ,
Figure FDA0003359736400000101
c is the speed of light, tpDuration of the light pulse, S0Representing the charge collected by the earlier shutter, S1And representing the delayed charge collected by the shutter, wherein the distance d between the target object and the camera is depth information z, and three-dimensional coordinate information (x, y, z) of crop diseases and insect pests is obtained based on the depth information z of the target object in the image obtained by calibrating the camera and the central two-dimensional coordinate (x, y) of the minimum circumscribed rectangle of the region, so that pesticide spraying is carried out.
CN202111363501.2A 2021-11-17 2021-11-17 Crop disease and insect pest detection and prevention device and method based on image and deep learning Withdrawn CN114092808A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111363501.2A CN114092808A (en) 2021-11-17 2021-11-17 Crop disease and insect pest detection and prevention device and method based on image and deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111363501.2A CN114092808A (en) 2021-11-17 2021-11-17 Crop disease and insect pest detection and prevention device and method based on image and deep learning

Publications (1)

Publication Number Publication Date
CN114092808A true CN114092808A (en) 2022-02-25

Family

ID=80301405

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111363501.2A Withdrawn CN114092808A (en) 2021-11-17 2021-11-17 Crop disease and insect pest detection and prevention device and method based on image and deep learning

Country Status (1)

Country Link
CN (1) CN114092808A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114332849A (en) * 2022-03-16 2022-04-12 科大天工智能装备技术(天津)有限公司 Crop growth state combined monitoring method and device and storage medium
CN114694004A (en) * 2022-04-08 2022-07-01 中国农业大学 Crop data acquisition method, system and device
CN114998683A (en) * 2022-06-01 2022-09-02 北京理工大学 Attention mechanism-based ToF multipath interference removing method
CN114998683B (en) * 2022-06-01 2024-05-31 北京理工大学 Attention mechanism-based ToF multipath interference removal method

Similar Documents

Publication Publication Date Title
Botterill et al. A robot system for pruning grape vines
CN114092808A (en) Crop disease and insect pest detection and prevention device and method based on image and deep learning
US11748976B2 (en) Automated plant detection using image data
CN105184824B (en) Reading intelligent agriculture bird-repeller system based on image sensing net
CN113112504B (en) Plant point cloud data segmentation method and system
US20230298196A1 (en) Geospatial object geometry extraction from imagery
US11776104B2 (en) Roof condition assessment using machine learning
AU2022256171B2 (en) Weeding robot and method, apparatus for planning weeding path for the same and medium
CN107808123A (en) The feasible area detecting method of image, electronic equipment, storage medium, detecting system
CN113160150B (en) AI (Artificial intelligence) detection method and device for invasion of foreign matters in wire mesh
Roggiolani et al. Hierarchical approach for joint semantic, plant instance, and leaf instance segmentation in the agricultural domain
Magistri et al. Towards in-field phenotyping exploiting differentiable rendering with self-consistency loss
Buddha et al. Weed detection and classification in high altitude aerial images for robot-based precision agriculture
Wang et al. The identification of straight-curved rice seedling rows for automatic row avoidance and weeding system
CN117392565A (en) Automatic identification method for unmanned aerial vehicle power inspection defects
Chaudhury et al. Agricultural Field Boundary Delineation From Multi-Temporal IRS P-6 LISS IV Images Using Multi-Task Learning
Kumar et al. Solar Power Based Multipurpose Agriculture Robot with Leaf-Disease Detection
Bell et al. Row following in pergola structured orchards by a monocular camera using a fully convolutional neural network
Thushara et al. A novel machine learning based autonomous farming robot for small-scale chili plantations
Nandhini et al. Deep learning solutions for pest detection
CN117961918B (en) Be applied to flower plantation's robot picking system
Montoya Cavero Sweet pepper recognition and peduncle pose estimation
CN118235586A (en) Machine vision-based pesticide spraying and fertilizer applying method and device for vine fruit trees
Pan et al. The Fieldscapes Dataset for Semantic Field Scene Understanding
Jaramillo et al. Inexpensive, Automated Pruning Weight Estimation in Vineyards

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220225