CN110543801A - Pine pest detection method, system and device based on neural network and unmanned aerial vehicle aerial image - Google Patents
- Publication number
- CN110543801A (application CN201810534077.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- neural network
- default
- unmanned aerial vehicle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
- G06V20/188—Vegetation
Abstract
The invention provides a pine pest detection method, system and device based on a neural network and unmanned aerial vehicle (UAV) aerial images. The pine pest detection system comprises: a preprocessing module for receiving a UAV aerial image and preprocessing it to obtain a preprocessed damaged image; and a detection module for detecting the UAV aerial image. The detection module comprises a neural network consisting of a basic feature extractor and a prediction unit; an extra feature extraction layer is added after the basic feature extractor and, together with the last layer of the basic feature extractor, forms the prediction unit. The target category and position are predicted using the default boxes on the feature maps P1 and P2 generated by the prediction unit as anchor points. This technical solution makes full use of aerial image data collected by a UAV at a specific height; on forest pest images it achieves high detection accuracy and real-time detection, clearly outperforming the prior art.
Description
Technical Field
The invention relates to the field of ecological protection, in particular to the monitoring and control of forest pests, and specifically to a pine pest detection method and system using UAV orthographic images and a convolutional neural network.
Background
Forest resources play an important role in maintaining ecological balance and promoting economic development, and forest health is an important indicator of a region's ecological condition. However, China is the country suffering the most severe losses from forest biological disasters: over the past decade, major forest pest outbreaks in China have affected more than ten million hectares every year, with direct annual economic losses of billions of yuan. Rapid and accurate monitoring and early warning of the locations and severity of forest pest infestations help formulate remediation measures efficiently and reduce ecological destruction and economic loss.
With the continuous development of computer technology, combining modern remote sensing with computer techniques such as image analysis, pattern recognition and machine learning to monitor forest pests has become a research focus in recent years. However, traditional computer image analysis algorithms based on features such as image color and texture, although they enlarge the survey coverage and reduce some labor cost, are easily disturbed by environmental factors such as illumination, soil color and non-target vegetation; moreover, the algorithm model must be redesigned after the image data are analyzed, and such algorithm design usually takes several days or even longer. Furthermore, these methods typically require incorporating ground survey information, which further extends the working time. As a result, algorithm design and damaged-tree detection can only be completed in a second stage after the images are collected, which cannot meet the requirements of real-time monitoring and accurate localization of forest pests.
An unmanned aerial vehicle is a powered, remotely controllable aircraft without an onboard pilot that can perform various tasks, and it is widely applicable in many military and civil fields. Applying UAVs to forest disease and pest monitoring, photographing orthographic images of the disaster-stricken forest area with the UAV, and deeply analyzing the forest pest images with computer image analysis technology can effectively reduce labor and material costs, allow researchers to grasp the overall disaster situation more intuitively and comprehensively, support faster and more effective countermeasures, reduce major losses of forest resources caused by such disasters, improve the economic benefit of forest production, and safeguard the healthy development of the ecological environment.
Therefore, how to make full use of UAVs to provide a more efficient and accurate forest pest monitoring method has become a market demand.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a method and a system for detecting pine pest damage based on a neural network and UAV aerial images. Specifically, the invention provides the following technical solutions:
In one aspect, the invention provides a pine pest detection method based on a neural network and UAV aerial images, comprising the following steps:
Step 1, receiving a UAV aerial image;
Step 2, preprocessing the image to obtain a preprocessed damaged image;
Step 3, constructing a neural network; the neural network comprises a basic feature extractor and a prediction module; an additional feature extraction layer is added after the basic feature extractor and, together with the last layer of the basic feature extractor, forms the prediction module; the target category and position are predicted using the default boxes on the feature maps P1 and P2 generated by the prediction module as anchor points;
Step 4, detecting the damaged forest in the UAV aerial image by using the neural network.
Preferably, when the neural network needs to be trained, the method further includes, before Step 1:
Step 001, annotating existing UAV aerial images to form annotated images; each annotation comprises rectangular box coordinates surrounding the boundary of the damaged forest;
Step 002, dividing the annotated images into a training set and a test set;
and, after Step 4:
Step 5, training the neural network on the training set, and testing the trained neural network on the test set.
Preferably, in Step 2, the preprocessing includes:
reducing the image, and cutting the reduced image by equal division of its side lengths to obtain the preprocessed damaged images.
Preferably, when the neural network needs to be trained, the input images used for training are annotated to form images with annotation boxes. After the preprocessing, a preprocessed damaged image retains an annotation box only if it contains the midpoint of that box; otherwise, that portion is not treated as a detection target.
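The midpoint rule above can be sketched as a small check (a hedged illustration; `keep_annotation` and its tile/box layout are hypothetical names, not from the patent):

```python
def keep_annotation(tile_x, tile_y, tile_w, tile_h, box):
    """Decide whether a cropped tile retains an annotation box.

    Per the rule above, a tile keeps a box only when the box's midpoint
    falls inside the tile. `box` is (xmin, ymin, xmax, ymax) in
    original-image coordinates.
    """
    cx = (box[0] + box[2]) / 2.0
    cy = (box[1] + box[3]) / 2.0
    return tile_x <= cx < tile_x + tile_w and tile_y <= cy < tile_y + tile_h
```

A box straddling two tiles is thus assigned to exactly one tile, which avoids the duplicate detections this rule is designed to prevent.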
Preferably, in Step 3, each cell of the feature maps P1, P2 of the prediction module is associated with a set of default boxes;
the default box aspect ratios include {1, 2, 1/2}.
Preferably, the area ratio of the base box to the input preprocessed damaged image is used as the base proportion of the set of default boxes.
If the IoU (Intersection over Union, i.e. the overlap ratio between the target window generated by the model and the original annotation window) between a default box and an annotation box exceeds 50%, the default box is a positive default box; if the IoU is 50% or below, it is a negative default box.
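As a hedged sketch of this matching rule (the function names are illustrative, not from the patent):

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (xmin, ymin, xmax, ymax)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def is_positive(default_box, label_box, threshold=0.5):
    """A default box is positive when its IoU with the label box exceeds 50%."""
    return iou(default_box, label_box) > threshold
```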
Preferably, in the step 3, the objective function of the neural network is set as:
wherein N is the total number of the positive default frame and the negative default frame, Lconf is the classification loss, Lloc is the position loss, g is the mark frame, p is the prediction frame, and the parameter alpha is set to be 1;
The classification penalty is a cross-entropy penalty defined as:
wherein the content of the first and second substances,
c is a classification confidence coefficient which indicates whether the ith default frame is matched with the jth marking frame of which the category is the victim forest; the position loss is the Smooth L1 loss between the prediction box p and the annotation box, and is defined as:
(cx, cy) is the midpoint coordinate of the default box d, and w and h are the width and height of the default box, respectively.
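The objective-function formulas referenced above appear as images in the original patent and are not reproduced in this text. The surrounding definitions match the standard SSD objective, so the following is a hedged reconstruction rather than the patent's verbatim formula:

```latex
L(x, c, p, g) = \frac{1}{N}\Big(L_{conf}(x, c) + \alpha\, L_{loc}(x, p, g)\Big),
\qquad
L_{conf}(x, c) = -\sum_{i \in Pos} x_{ij} \log \hat{c}_i^{1} - \sum_{i \in Neg} \log \hat{c}_i^{0},
\qquad
L_{loc}(x, p, g) = \sum_{i \in Pos} \; \sum_{m \in \{cx, cy, w, h\}} x_{ij}\, \mathrm{smooth}_{L1}\big(p_i^m - \hat{g}_j^m\big)
```

Here $x_{ij} \in \{0, 1\}$ indicates whether the i-th default box matches the j-th annotation box, $\hat{c}$ is the softmax confidence, and the regression targets $\hat{g}$ encode the annotation box relative to the default box d (midpoint offsets normalized by d's width and height, and log ratios of widths and heights).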
Preferably, Step 5 further comprises: optimizing the training with a stochastic gradient descent algorithm with momentum 0.9.
In addition, the invention also provides a pine pest detection system based on a neural network and UAV aerial images, the system comprising:
a preprocessing module for receiving a UAV aerial image and preprocessing it to obtain a preprocessed damaged image;
a detection module for detecting the received orthographic image; the detection module comprises a neural network consisting of a basic feature extractor and a prediction unit; an additional feature extraction layer is added after the basic feature extractor and, together with the last layer of the basic feature extractor, forms the prediction unit; the target category and position are predicted using the default boxes on the feature maps P1 and P2 generated by the prediction unit as anchor points.
Preferably, the system further comprises:
an annotation module for receiving existing UAV aerial images and annotating them to form annotated images; each annotation comprises rectangular box coordinates surrounding the boundary of the damaged forest;
a neural network training module for dividing the received damaged images into a training set and a test set, and
for training the neural network on the training set and testing the trained neural network on the test set.
Preferably, the preprocessing comprises:
reducing the image, and cutting the reduced image by equal division of its side lengths to obtain the preprocessed damaged images.
Preferably, when the neural network needs to be trained, the input images used for training are annotated to form images with annotation boxes. After the preprocessing, a preprocessed damaged image retains an annotation box only if it contains the midpoint of that box; otherwise, that portion is not treated as a detection target.
Preferably, each cell of the feature maps P1, P2 of the prediction unit is associated with a set of default boxes;
the default box aspect ratios include {1, 2, 1/2}.
Preferably, the area ratio of the base box to the input preprocessed damaged image is used as the base proportion of the set of default boxes.
If the IoU (Intersection over Union, i.e. the overlap ratio between the target window generated by the model and the original annotation window) between a default box and an annotation box exceeds 50%, the default box is a positive default box; if the IoU is 50% or below, it is a negative default box.
Preferably, the objective function of the neural network is set as:
where N is the total number of positive and negative default boxes, Lconf is the classification loss, Lloc is the position loss, g is the annotation box, p is the prediction box, and the parameter α is set to 1;
the classification loss is a cross-entropy loss defined as:
where
c is the classification confidence, indicating whether the i-th default box matches the j-th annotation box of the damaged-forest category; the position loss is the Smooth L1 loss between the prediction box p and the annotation box, defined as:
(cx, cy) are the midpoint coordinates of the default box d, and w and h are the width and height of the default box, respectively.
Preferably, the training is optimized using a stochastic gradient descent algorithm with momentum 0.9.
In another aspect, the invention also provides a pine pest detection device based on a neural network and UAV aerial images, the device comprising a processor unit, and
a memory unit for storing the data used in the neural network computation and computer instructions that the processor unit can call and run;
the computer instructions, when run, execute the above pine pest detection method based on a neural network and orthographic images.
Drawings
Fig. 1 shows the pest-damaged Chinese pine detection model based on a deep convolutional neural network according to an embodiment of the invention;
Fig. 2 shows the deep convolutional neural network framework of an embodiment of the invention;
Fig. 3 compares standard convolution with depthwise separable convolution in an embodiment of the invention;
Fig. 4a is the original UAV orthographic image of Pinus sylvestris according to an embodiment of the invention;
Fig. 4b shows the detection result on the UAV orthographic image of Pinus sylvestris according to an embodiment of the invention.
DETAILED DESCRIPTION OF THE EMBODIMENTS
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art without inventive effort on the basis of these embodiments fall within the scope of the invention.
Example 1
In a specific embodiment, the pine pest detection method based on a neural network and orthographic images can be realized through the following steps.
The invention provides a pine pest detection method based on a neural network and UAV aerial images, comprising the following steps:
Step 1, receiving a UAV aerial image;
Step 2, preprocessing the image to obtain a preprocessed damaged image;
Step 3, constructing a neural network; the neural network comprises a basic feature extractor and a prediction module; an additional feature extraction layer is added after the basic feature extractor and, together with the last layer of the basic feature extractor, forms the prediction module; the target category and position are predicted using the default boxes on the feature maps P1 and P2 generated by the prediction module as anchor points;
Step 4, detecting the damaged forest in the UAV aerial image by using the neural network.
Preferably, when the neural network needs to be trained, the method further includes, before Step 1:
Step 001, annotating existing UAV aerial images to form annotated images; each annotation comprises rectangular box coordinates surrounding the boundary of the damaged forest;
Step 002, dividing the annotated images into a training set and a test set;
and, after Step 4:
Step 5, training the neural network on the training set, and testing the trained neural network on the test set.
Preferably, in Step 2, the preprocessing includes:
reducing the image, and cutting the reduced image by equal division of its side lengths to obtain the preprocessed damaged images.
Preferably, when the neural network needs to be trained, the input images used for training are annotated to form images with annotation boxes. After the preprocessing, a preprocessed damaged image retains an annotation box only if it contains the midpoint of that box; otherwise, that portion is not treated as a detection target.
Preferably, in Step 3, each cell of the feature maps P1, P2 of the prediction module is associated with a set of default boxes;
the default box aspect ratios include {1, 2, 1/2}.
Preferably, the area ratio of the base box to the input preprocessed damaged image is used as the base proportion of the set of default boxes.
If the IoU (Intersection over Union, i.e. the overlap ratio between the target window generated by the model and the original annotation window) between a default box and an annotation box exceeds 50%, the default box is a positive default box; if the IoU is 50% or below, it is a negative default box.
Preferably, in Step 3, the objective function of the neural network is set as:
where N is the total number of positive and negative default boxes, Lconf is the classification loss, Lloc is the position loss, g is the annotation box, p is the prediction box, and the parameter α is set to 1;
the classification loss is a cross-entropy loss defined as:
where
c is the classification confidence, indicating whether the i-th default box matches the j-th annotation box of the damaged-forest category; the position loss is the Smooth L1 loss between the prediction box p and the annotation box, defined as:
(cx, cy) are the midpoint coordinates of the default box d, and w and h are the width and height of the default box, respectively.
Preferably, Step 5 further comprises: optimizing the training with a stochastic gradient descent algorithm with momentum 0.9.
Example 2
In this example, the method of the invention is explained in detail with reference to a specific case. The specific process of the invention can be carried out as follows.
In this method for detecting pine pests based on a deep convolutional neural network and UAV orthographic images, pest-damaged pines in the orthographic images acquired by the UAV are annotated with rectangular box coordinates surrounding the damaged-pine boundaries; the images are then preprocessed and the annotation-box coordinates are corrected. The deep convolutional network framework for detecting damaged Chinese pine is based on SSD300: a depthwise separable network is used as the basic feature extractor, two feature layers serve as the prediction module, and a softmax classifier predicts the result category. The model is trained with the annotated Chinese pine images, and finally the UAV orthographic Chinese pine images to be detected are input into the model to obtain the detection results. The method can run concurrently with UAV image acquisition on a GPU-accelerated mobile workstation, enabling real-time detection of forest pests and achieving the goal of early warning.
The method of the present invention is described in detail below with a specific set of examples. It should be noted that this embodiment is a preferred embodiment intended to illustrate the method of the invention and should not be construed as limiting its scope.
Fig. 1 shows the pest-damaged Chinese pine detection model based on a deep convolutional neural network. The pine pest detection method based on a deep convolutional neural network and UAV orthographic images comprises the following steps.
First, annotate the original UAV orthographic image data of pest-damaged Chinese pine: after the collectors finish annotating the images, a forest expert reviews them, and some unusual images are confirmed as damaged Chinese pine in combination with manual ground surveys. Each annotation is a set of rectangular box coordinates surrounding the damaged-pine boundary.
Second, preprocess the annotated images. The original images captured by the UAV have a resolution of 5280 × 3956 pixels; such images require large storage space, are complex to process, and would demand extremely high-end computer hardware for detection. Therefore, before detection, each original image is first downscaled and then cut by equal division of its side lengths, so that each original image yields 12 preprocessed damaged images. Because keeping and training on abnormal fragments of a single cut pine (slender or undersized shapes) would cause false or duplicate detections, when the annotation-box coordinates are corrected, only the cut image (among the 12) that contains the midpoint of an annotation box retains that box; the pine fragments in the remaining images are not treated as detection targets.
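The tiling step can be sketched as follows. The patent states only that each image is cut into 12 tiles by equal division of the side lengths; the 4 × 3 grid below is an assumption consistent with that count, and the integer division drops any remainder pixels:

```python
def tile_coords(width, height, cols=4, rows=3):
    """Return (x, y, w, h) for each of cols * rows equal tiles of an image.

    The 4 x 3 grid is an assumption; the patent only says 12 tiles are
    produced by equal division of the side lengths.
    """
    tw, th = width // cols, height // rows
    return [(c * tw, r * th, tw, th) for r in range(rows) for c in range(cols)]
```

For a 5280 × 3956 input, `tile_coords(5280, 3956)` yields 12 tiles of 1320 × 1318 pixels (two remainder rows of pixels are discarded by the integer division).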
Third, divide all images into a training set and a test set at a ratio of about 4:1. The training set is used to train the damaged Chinese pine detection model to obtain stable model parameters, and the test set is used to evaluate the model's performance.
Fourth, construct the deep convolutional neural network framework. As shown in Fig. 2, the target-detection deep convolutional network consists of a basic feature extractor and a prediction module. The model's input is a UAV orthographic image of damaged pine, and the basic feature extractor is a depthwise separable convolutional network. As shown in Fig. 3, a depthwise separable convolution decomposes a standard convolution into a depthwise convolution, which convolves each input channel separately, and a pointwise convolution, which linearly combines the depthwise outputs. The basic feature extractor is based on the MobileNet architecture, retaining its first standard convolution and the subsequent twelve depthwise separable convolution blocks. An additional feature extraction layer is then added; this layer and the last layer of the basic feature extractor form the model's prediction module, and the default boxes on the feature maps P1 and P2 generated by the prediction module serve as anchor points to predict the target category and position. Finally, non-maximum suppression yields the final detection result: it filters all detections of a single target, keeping the one with the highest confidence, to improve detection accuracy.
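The parameter saving of the depthwise separable factorization described above can be checked with simple counts (a sketch; bias terms are ignored):

```python
def standard_conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution mapping c_in -> c_out channels."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """A k x k depthwise convolution (one filter per input channel)
    followed by a 1 x 1 pointwise convolution that mixes channels."""
    return k * k * c_in + c_in * c_out
```

For a 3 × 3 convolution from 64 to 128 channels, the standard form uses 73,728 weights while the depthwise separable form uses 8,768, roughly an 8.4× reduction, which is why MobileNet-style extractors suit a GPU-accelerated mobile workstation.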
Fifth, each cell of the prediction module's P1 and P2 feature maps is associated with a set of default boxes. Each set covers two aspect ratios, 1/2 and 2, on a square base box, plus a square default box slightly larger than the base box. The area ratio of the base box to the input preprocessed damaged image is used as the base proportion of the set of default boxes: the base proportion of P1 is 0.24 and that of P2 is 0.38. A default box whose IoU (Intersection over Union, i.e. the overlap ratio between the target window generated by the model and the original annotation window) with a damaged-pine annotation box is above 50% is a positive default box, and one whose IoU is 50% or below is a negative default box.
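One way to enumerate a cell's default-box shapes, following SSD's convention (a hedged sketch: the patent gives base proportions 0.24 and 0.38 as area ratios, while this sketch treats the proportion as a side-length scale as SSD does, so the exact convention is an assumption; the "slightly larger square" is taken as the geometric mean with the next layer's scale, again as in SSD):

```python
import math

def default_box_shapes(scale, next_scale, aspect_ratios=(2.0, 0.5)):
    """Relative (width, height) of one cell's default boxes: a square base
    box at `scale`, a slightly larger square, and aspect ratios 2 and 1/2."""
    shapes = [(scale, scale)]
    bigger = math.sqrt(scale * next_scale)  # slightly larger square box
    shapes.append((bigger, bigger))
    for ar in aspect_ratios:
        shapes.append((scale * math.sqrt(ar), scale / math.sqrt(ar)))
    return shapes
```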
Sixth, the objective function of the target-detection deep convolutional neural network is expressed as:
where N is the total number of positive and negative default boxes, Lconf is the classification loss, Lloc is the position loss, g is the annotation box, p is the prediction box, and the parameter α is set to 1. The classification loss is a cross-entropy loss defined as:
where
the background class is 0 and the damaged Chinese pine class is 1; c is the classification confidence, indicating whether the i-th default box matches the j-th damaged-pine annotation box. The position loss is the Smooth L1 loss between the prediction box p and the annotation box, defined as:
(cx, cy) are the midpoint coordinates of the default box d, and w and h are the width and height of the default box, respectively. Positive default boxes produce most of the classification loss and all of the position loss.
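The Smooth L1 function used for the position loss is standard; a minimal sketch:

```python
def smooth_l1(x):
    """Smooth L1 loss: quadratic near zero, linear further out.

    0.5 * x^2   if |x| < 1
    |x| - 0.5   otherwise
    """
    return 0.5 * x * x if abs(x) < 1 else abs(x) - 0.5
```

It behaves like an L2 loss for small regression errors (stable gradients near the optimum) and like an L1 loss for large ones (robust to outlier boxes).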
Seventh, in the model training stage, optimization uses a stochastic gradient descent algorithm with momentum 0.9. The initial learning rate is set to 0.001, the regularization coefficient to 0.00004, and 16 images form one batch; training runs for 80,000 iterations, and the learning rate is multiplied by 0.1 every 35,000 iterations.
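The step-decay schedule described above (initial rate 0.001, multiplied by 0.1 every 35,000 iterations) can be written as follows (a sketch; the helper name is illustrative):

```python
def learning_rate(step, base_lr=0.001, decay=0.1, interval=35000):
    """Learning rate after `step` iterations under step decay:
    multiply by `decay` once per completed `interval`."""
    return base_lr * decay ** (step // interval)
```

Over the 80,000 training iterations this gives three phases: 0.001, then 0.0001 from iteration 35,000, then 0.00001 from iteration 70,000.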
Eighth, after the model training is complete, the images in the test set are preprocessed and input to the model in batches of 12 to detect the pest-damaged Chinese pine in them.
Example 3
In another embodiment, the technical solution of the present invention can also be implemented in the form of a system or a detection device. Specifically, in one particular embodiment, the system of the present invention comprises:
a preprocessing module, configured to receive an unmanned aerial vehicle aerial image, preprocess it, and obtain a preprocessed victim image;
a detection module, configured to detect the received unmanned aerial vehicle aerial image; the detection module comprises a neural network, and the neural network comprises a basic feature extractor and a prediction unit; an additional feature extraction layer is added after the basic feature extractor and, together with the last layer of the basic feature extractor, forms the prediction unit; target categories and positions are predicted based on the default boxes, used as anchors, on the feature maps P1 and P2 generated by the prediction unit.
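As a shape-only sketch of this architecture (all sizes and strides here are hypothetical, since the description does not give them): the last layer of the basic feature extractor yields the grid P1, and the additional feature extraction layer downsamples it once more to yield P2.

```python
def prediction_feature_maps(input_size=512, base_stride=16, extra_stride=2):
    """Return the side lengths of the P1 and P2 grids on which the
    default boxes are anchored (illustrative numbers only)."""
    p1 = input_size // base_stride   # grid from the last base-extractor layer
    p2 = p1 // extra_stride          # grid from the additional layer
    return p1, p2

print(prediction_feature_maps())  # (32, 16)
```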
Preferably, the system further comprises:
a labeling module, configured to receive an existing unmanned aerial vehicle aerial image and label it to form a labeled image; the label comprises the coordinates of a rectangular label box surrounding the boundary of a victim forest;
a neural network training module, configured to divide the received victim images into a training set and a test set, train the neural network based on the training set, and test the trained neural network based on the test set.
Preferably, the preprocessing comprises:
reducing the unmanned aerial vehicle aerial image and cutting the reduced image by equal division of its side lengths to obtain preprocessed victim images.
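The equal-division cutting step might look like the following sketch (function name and image size hypothetical; the reduction/resampling step itself is omitted):

```python
def equal_division_tiles(width, height, n):
    """Cut a reduced aerial image into an n x n grid of tiles by
    dividing each side into n equal parts; returns pixel boxes
    (left, top, right, bottom) suitable for cropping."""
    tw, th = width // n, height // n
    return [(c * tw, r * th, (c + 1) * tw, (r + 1) * th)
            for r in range(n) for c in range(n)]

tiles = equal_division_tiles(3000, 3000, 3)  # hypothetical image size
print(len(tiles))  # 9 tiles of 1000 x 1000 pixels
```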
Preferably, when the neural network needs to be trained, the input images used for training are labeled to form images with label boxes and then preprocessed to form victim images. A victim image that contains the midpoint of a label box retains that box; otherwise, the box is not used as a detection target for that image.
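The midpoint retention rule can be sketched as follows (names hypothetical):

```python
def keep_label_in_tile(label_box, tile):
    """A label box (x1, y1, x2, y2) is retained for a cropped tile
    (left, top, right, bottom) only if the box midpoint falls inside
    the tile; otherwise the tile does not treat it as a target."""
    mx = (label_box[0] + label_box[2]) / 2.0
    my = (label_box[1] + label_box[3]) / 2.0
    left, top, right, bottom = tile
    return left <= mx < right and top <= my < bottom

print(keep_label_in_tile((800, 800, 950, 950), (0, 0, 1000, 1000)))  # True
```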
Preferably, each cell of the feature maps P1, P2 of the prediction unit is associated with a set of default boxes;
the default box aspect ratios include {1, 2, 1/2}.
Preferably, the area ratio of the base box to the input preprocessed victim image is used as the base scale of the set of default boxes;
a default box whose IoU (Intersection over Union, i.e. the overlap rate between a window generated by the model and the original annotation window) with the label box exceeds 50% is a positive default box; a default box whose IoU is 50% or below is a negative default box.
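The IoU test that labels default boxes positive or negative can be sketched as (function names hypothetical):

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def is_positive(default_box, label_box, threshold=0.5):
    """Positive default box if its IoU with the label box exceeds 50%."""
    return iou(default_box, label_box) > threshold

print(is_positive((0, 0, 2, 2), (0, 0, 2, 2.5)))  # True (IoU = 0.8)
```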
Preferably, the objective function of the neural network is set as:

L(p, g) = (1/N) (Lconf + α Lloc(p, g))

wherein N is the total number of positive and negative default boxes, Lconf is the classification loss, Lloc is the position loss, g is the label box, p is the prediction box, and the parameter α is set to 1;

the classification loss is a cross-entropy loss, defined as:

Lconf = -Σ_{i∈Pos} x_ij log(c_i) - Σ_{i∈Neg} log(c_i^0)

wherein c is the classification confidence and x_ij indicates whether the ith default box is matched to the jth label box of the victim-forest class; the position loss is the Smooth L1 loss between the prediction box p and the label box, defined as:

Lloc = Σ_{i∈Pos} Σ_{m∈{cx,cy,w,h}} smoothL1(p_i^m - ĝ_j^m)

wherein (cx, cy) is the midpoint coordinate of the default box d used to encode the regression targets ĝ, and w and h are the width and height of the default box, respectively.
Preferably, the training is optimized using a stochastic gradient descent algorithm with momentum of 0.9.
In yet another specific embodiment, there is also provided a pine pest detection device based on a neural network and unmanned aerial vehicle aerial images. The device includes a processor unit, and a memory unit for storing the relevant data used in the computation of the neural network and computer instructions that can be called and run by the processor unit; the computer instructions execute the pine pest detection method based on a neural network and unmanned aerial vehicle aerial images according to Embodiments 1 and 2.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. A pine pest detection method based on a neural network and an unmanned aerial vehicle aerial image, characterized by comprising the following steps:
Step 1, receiving an aerial image of an unmanned aerial vehicle;
Step 2, preprocessing the image to obtain a preprocessed damaged image;
Step 3, constructing a neural network; the neural network comprises a basic feature extractor and a prediction module; an additional feature extraction layer is added after the basic feature extractor and, together with the last layer of the basic feature extractor, forms the prediction module; target categories and positions are predicted based on the default boxes, used as anchor points, on the feature maps P1 and P2 generated by the prediction module;
Step 4, detecting the victim forest in the unmanned aerial vehicle aerial image by using the neural network.
2. The method according to claim 1, wherein when the neural network needs to be trained, the step 1 is preceded by:
Step 001, labeling an existing unmanned aerial vehicle aerial image to form a labeled image; the label comprises the coordinates of a rectangular label box surrounding the boundary of a victim forest;
Step 002, dividing the labeled images into a training set and a test set;
The step 4 is followed by:
Step 5, training the neural network based on the training set and testing the trained neural network based on the test set.
3. The method according to claim 1, wherein in step 2 the preprocessing comprises:
reducing the image and cutting the reduced image by equal division of its side lengths to obtain a preprocessed victim image.
4. The method according to claim 2, characterized in that in step 3 each cell of the feature maps P1, P2 of the prediction module is associated with a set of default boxes;
the default box aspect ratios include {1, 2, 1/2}.
5. The method according to claim 4, wherein the area ratio of the base box to the input preprocessed victim image is used as the base scale of the set of default boxes;
a default box whose IoU with the label box exceeds 50% is a positive default box; a default box whose IoU is 50% or below is a negative default box.
6. The method according to claim 1, wherein in step 3 the objective function of the neural network is set as:

L(p, g) = (1/N) (Lconf + α Lloc(p, g))

wherein N is the total number of positive and negative default boxes, Lconf is the classification loss, Lloc is the position loss, g is the label box, p is the prediction box, and the parameter α is set to 1;

the classification loss is a cross-entropy loss, defined as:

Lconf = -Σ_{i∈Pos} x_ij log(c_i) - Σ_{i∈Neg} log(c_i^0)

wherein c is the classification confidence and x_ij indicates whether the ith default box is matched to the jth label box of the victim-forest class; the position loss is the Smooth L1 loss between the prediction box p and the label box, defined as:

Lloc = Σ_{i∈Pos} Σ_{m∈{cx,cy,w,h}} smoothL1(p_i^m - ĝ_j^m)

wherein (cx, cy) is the midpoint coordinate of the default box d used to encode the regression targets ĝ, and w and h are the width and height of the default box, respectively.
7. The method according to claim 2, wherein step 5 further comprises: the training is optimized by a stochastic gradient descent algorithm with momentum of 0.9.
8. A pine pest detection system based on a neural network and unmanned aerial vehicle aerial images, characterized in that the system comprises:
a preprocessing module, configured to receive an unmanned aerial vehicle aerial image, preprocess it, and obtain a preprocessed victim image;
a detection module, configured to detect the received unmanned aerial vehicle aerial image; the detection module comprises a neural network, and the neural network comprises a basic feature extractor and a prediction unit; an additional feature extraction layer is added after the basic feature extractor and, together with the last layer of the basic feature extractor, forms the prediction unit; target categories and positions are predicted based on the default boxes, used as anchors, on the feature maps P1 and P2 generated by the prediction unit.
9. the system of claim 8, further comprising:
a labeling module, configured to receive an existing unmanned aerial vehicle aerial image and label it to form a labeled image; the label comprises the coordinates of a rectangular label box surrounding the boundary of a victim forest;
a neural network training module, configured to divide existing victim images into a training set and a test set, train the neural network based on the training set, and test the trained neural network based on the test set.
10. A pine pest detection device based on a neural network and unmanned aerial vehicle aerial images, characterized in that the device includes a processor unit, and a memory unit for storing the relevant data used in the computation of the neural network and computer instructions that can be called and run by the processor unit; the computer instructions execute the pine pest detection method based on a neural network and unmanned aerial vehicle aerial images according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810534077.5A CN110543801A (en) | 2018-05-29 | 2018-05-29 | Pine pest detection method, system and device based on neural network and unmanned aerial vehicle aerial image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810534077.5A CN110543801A (en) | 2018-05-29 | 2018-05-29 | Pine pest detection method, system and device based on neural network and unmanned aerial vehicle aerial image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110543801A true CN110543801A (en) | 2019-12-06 |
Family
ID=68701627
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810534077.5A Pending CN110543801A (en) | 2018-05-29 | 2018-05-29 | Pine pest detection method, system and device based on neural network and unmanned aerial vehicle aerial image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110543801A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111523516A (en) * | 2020-05-14 | 2020-08-11 | 宁波工程学院 | Forest harmful wood identification method |
CN112001365A (en) * | 2020-09-22 | 2020-11-27 | 四川大学 | High-precision crop disease and insect pest identification method |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106791659A (en) * | 2016-12-26 | 2017-05-31 | 安徽天立泰科技股份有限公司 | A kind of forest pest and disease monitoring and guard system based on splicing of taking photo by plane |
CN107423760A (en) * | 2017-07-21 | 2017-12-01 | 西安电子科技大学 | Based on pre-segmentation and the deep learning object detection method returned |
CN207321442U (en) * | 2017-09-22 | 2018-05-04 | 中国科学院遥感与数字地球研究所 | Image capturing system and remote sensing monitoring system based on unmanned plane |
Non-Patent Citations (1)
Title |
---|
张军国等: "无人机航拍林业虫害图像分割复合梯度分水岭算法", 《农业工程学报》 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111523516A (en) * | 2020-05-14 | 2020-08-11 | 宁波工程学院 | Forest harmful wood identification method |
CN111523516B (en) * | 2020-05-14 | 2024-02-02 | 宁波工程学院 | Forest harmful wood identification method |
CN112001365A (en) * | 2020-09-22 | 2020-11-27 | 四川大学 | High-precision crop disease and insect pest identification method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Al Bashish et al. | Detection and classification of leaf diseases using K-means-based segmentation and | |
Xiong et al. | Visual detection of green mangoes by an unmanned aerial vehicle in orchards based on a deep learning method | |
CN110070571B (en) | Phyllostachys pubescens morphological parameter detection method based on depth camera | |
CN105809194A (en) | Method for translating SAR image into optical image | |
CN110543801A (en) | Pine pest detection method, system and device based on neural network and unmanned aerial vehicle aerial image | |
CN116030343A (en) | Crop pest monitoring system based on machine vision identification | |
Lai et al. | Real-time detection of ripe oil palm fresh fruit bunch based on YOLOv4 | |
CN109657540B (en) | Withered tree positioning method and system | |
CN117456358A (en) | Method for detecting plant diseases and insect pests based on YOLOv5 neural network | |
CN117197676A (en) | Target detection and identification method based on feature fusion | |
CN116258956A (en) | Unmanned aerial vehicle tree recognition method, unmanned aerial vehicle tree recognition equipment, storage medium and unmanned aerial vehicle tree recognition device | |
CN115115954A (en) | Intelligent identification method for pine nematode disease area color-changing standing trees based on unmanned aerial vehicle remote sensing | |
Ozguven et al. | A new approach to detect mildew disease on cucumber (Pseudoperonospora cubensis) leaves with image processing | |
CN114965501A (en) | Peanut disease detection and yield prediction method based on canopy parameter processing | |
Ärje et al. | Automatic flower detection and classification system using a light-weight convolutional neural network | |
Miao et al. | Crop weed identification system based on convolutional neural network | |
CN117392382A (en) | Single tree fruit tree segmentation method and system based on multi-scale dense instance detection | |
CN116310913B (en) | Natural resource investigation monitoring method and device based on unmanned aerial vehicle measurement technology | |
CN112507770A (en) | Rice disease and insect pest identification method and system | |
CN116721385A (en) | Machine learning-based RGB camera data cyanobacteria bloom monitoring method | |
CN115908843A (en) | Superheat degree recognition model training method, recognition method, equipment and storage medium | |
CN114863296A (en) | Method and system for identifying and positioning wood damaged by pine wilt disease | |
de Ocampo et al. | Integrated Weed Estimation and Pest Damage Detection in Solanum melongena Plantation via Aerial Vision-based Proximal Sensing. | |
Fida et al. | Leaf image recognition based identification of plants: Supportive framework for plant systematics | |
CN115311678A (en) | Background suppression and DCNN combined infrared video airport flying bird detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20191206 |