CN111881728A - Grassland rodent damage monitoring method based on low-altitude remote sensing

Grassland rodent damage monitoring method based on low-altitude remote sensing

Info

Publication number
CN111881728A
CN111881728A (application CN202010549504.4A)
Authority
CN
China
Prior art keywords
image
grassland
remote sensing
precision
aerial vehicle
Prior art date
Legal status
Pending
Application number
CN202010549504.4A
Other languages
Chinese (zh)
Inventor
程武学
董光
熊瑞东
王艺积
Current Assignee
Sichuan Normal University
Original Assignee
Sichuan Normal University
Priority date
Filing date
Publication date
Application filed by Sichuan Normal University
Priority to CN202010549504.4A
Publication of CN111881728A
Legal status: Pending


Classifications

    • G06V 20/13: Scenes; Scene-specific elements; Terrestrial scenes; Satellite images
    • G06Q 50/02: Systems or methods specially adapted for specific business sectors; Agriculture; Fishing; Mining
    • G06V 10/267: Image preprocessing; Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V 20/41: Scenes; Scene-specific elements in video content; Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V 20/46: Scenes; Scene-specific elements in video content; Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Marine Sciences & Fisheries (AREA)
  • Health & Medical Sciences (AREA)
  • Remote Sensing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Agronomy & Crop Science (AREA)
  • Animal Husbandry (AREA)
  • Software Systems (AREA)
  • Mining & Mineral Resources (AREA)
  • Computational Linguistics (AREA)
  • Astronomy & Astrophysics (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of rodent damage image processing, and aims to provide a grassland rodent damage monitoring method based on low-altitude remote sensing. On the basis of a full grasp of the surface characteristics left by different rodent species, surface rodent damage information is extracted from each of four images using 4 methods: gray-threshold segmentation, preferred color-texture features, rule-based object-oriented classification and a BP neural network. Extraction precision is evaluated with the double criteria of spatial precision and quantitative precision, the strengths of each method are summarized, and comparative analysis then yields the optimal method for extracting each kind of rodent damage information from images of different seasons.

Description

Grassland rodent damage monitoring method based on low-altitude remote sensing
Technical Field
The invention relates to the field of acquiring rodent damage information by low-altitude remote sensing, and in particular to a grassland rodent damage monitoring method based on low-altitude remote sensing.
Background
Rodent damage has long been one of the main threats to grassland ecological security, driven by people's heavy use and light management of grasslands, and it severely constrains the sustainable development of grassland animal husbandry [4-6]. Since the 1990s, grassland rodent outbreaks have occurred frequently in China, causing serious losses of grass resources and aggravating grassland degradation and desertification. Traditional medium- and high-resolution satellite imagery cannot meet the high-precision requirements of grassland rodent damage monitoring: at satellite resolutions, ground rodent damage information is difficult to capture accurately. With the rapid development of low-altitude remote sensing technology and the wide industrial adoption of unmanned aerial vehicles, rodent damage monitoring research has gained a new direction, and remote sensing is advancing toward ultra-high precision at an unprecedented pace. Unmanned aerial vehicle imagery offers sub-meter spatial resolution, the sensor can be exchanged to suit the research task, and the flying height can be adjusted to obtain imagery with centimeter-level resolution. This greatly improves how ground rodent damage information is expressed in the image, giving that information much richer characteristics.
CN201710636249.5, an unmanned aerial vehicle grassland management system and method, provides a system and method comprising: shooting remote sensing images of a grazing area and detecting soil information with a soil detection sensor; distinguishing pasture areas from non-pasture areas in the remote sensing image of the grazing area, dividing the image into grazed and ungrazed areas according to the position of the herd, and marking the rodent burrows of the non-pasture area; analyzing forage coverage in the areas corresponding to the first and second detection units, grading the degree of rodent damage in the third detection unit, analyzing the soil condition, and sending the analysis results to the control station; and selecting the region of maximum vegetation coverage in the grassland of the second detection unit's remote sensing image as the region to be detected and steering the unmanned aerial vehicle toward it. The method monitors grassland vegetation, rodent burrows and soil conditions in real time and guides grazing automatically according to the vegetation condition of the grazed area, but for different rodent species it cannot be known whether this method achieves optimal precision.
Therefore, a method is needed that acquires grassland rodent damage images by low-altitude remote sensing and matches different times and different rodent species to the optimal extraction method, so that the most accurate surface characteristics of grassland rodent damage can be obtained.
Disclosure of Invention
The invention aims to provide a grassland rodent damage monitoring method based on low-altitude remote sensing, in which an unmanned aerial vehicle photographs rodent damage images of two experimental areas during two field campaigns, one in spring and one in summer. Through route design, flight execution, orthoimage production and related steps, spring and summer unmanned aerial vehicle orthoimages are obtained. Meanwhile, control points are laid out in the sample area to correct the unmanned aerial vehicle imagery. The method is reasonable in structure, ingenious in design and suitable for popularization;
In order to achieve the purpose, the technical scheme adopted by the invention is as follows: the grassland rodent damage monitoring method based on low-altitude remote sensing comprises the following steps:
S1: deploying a remote sensing unmanned aerial vehicle above the grassland to be monitored, the unmanned aerial vehicle capturing a plurality of ground images of the grassland along a set flight path;
S2: feeding the processed ground images as input to a trained rodent damage image recognition model, the output of which is a high-precision rodent damage recognition image;
S3: combining and collating the plurality of high-precision rodent damage recognition images to obtain the optimal surface characteristics of rodent damage on the grassland to be monitored.
Preferably, step S1 further comprises:
S21, arranging a plurality of control points on the grassland to be monitored, setting the flying height of the remote sensing unmanned aerial vehicle according to the topographic characteristics of the grassland, and entering S22;
S22, setting the lens resolution on the remote sensing unmanned aerial vehicle from the flying height of S21, and entering S23;
S23, flying the remote sensing unmanned aerial vehicle along the preset route and capturing the ground images of the grassland to be monitored.
Preferably, the ground resolution GSD of the camera lens on the remote sensing unmanned aerial vehicle is calculated as follows:
GSD = (H × a) / f
where GSD is the lens-to-ground resolution in m, H is the relative flying height in m, a is the pixel unit size in mm, and f is the camera lens focal length in mm.
Preferably, in step S23, the ground images are input to the unmanned aerial vehicle image processing platform for processing, after which the method proceeds to step S2.
Preferably, processing on the unmanned aerial vehicle image processing platform comprises the following steps:
S51, after the ground image photos are imported into the platform, the platform establishes a survey block using the control points of step S21, and enters S52;
S52, inputting the parameters into the platform, which automatically performs aerial triangulation densification and generates the DSM and DOM, and enters S53;
S53: obtaining the processed ground images and proceeding to step S2.
Preferably, the unmanned aerial vehicle image processing platform is Pix4Dmapper, which provides automatic aerial triangulation, block adjustment and ground-control-point precision evaluation. After the aerial triangulation solution is completed, the software calculates the spatial coordinates of the corresponding ground discrete points from the image orientation elements and the matched tie points, generates a DSM from those points, rectifies the original images with the DSM, and produces an orthoimage by resampling. After the orthoimage is generated, the software automatically mosaics and color-balances it, completing the stitching of the imagery.
Preferably, in step S2, the training process of the rodent damage image recognition model comprises the following steps:
S71: using the remote sensing unmanned aerial vehicle to acquire a large number of characteristic image photos of the grassland rodent damage region, in spring and in summer respectively;
S72: processing the image photos of S71 with four image algorithms, namely an image segmentation model, a color-texture feature model, an object-oriented extraction model and a neural network, verifying the precision of the results of each algorithm, and entering step S73;
S73: comparing the eight verified precision results (four methods over two seasons) to obtain the precision achieved for each rodent species in each season, and thereby determining the unique optimal image algorithm for the grassland surface photos of each rodent species at each time.
Preferably, in step S73, the precision comparison comprises morphology comparison, area comparison and time-phase difference, and the precision of each pair of processed image photos is compared with these three methods.
Compared with the prior art, the invention has the beneficial effects that:
1. rodent damage information extracted by visual interpretation is used as the measured truth, the extraction results of all methods are compared in terms of patch number, area, geometric form and so on, and spatial precision and quantitative precision are combined to obtain the optimal extraction method for each image;
2. unmanned aerial vehicles are applied to grassland rodent damage monitoring, exploring a method of extracting surface rodent damage information with low-altitude remote sensing technology. An unmanned aerial vehicle can quickly acquire imagery over a given area, at low cost, with high efficiency and convenient operation. Comparing the rodent damage information extraction methods reveals the strengths and weaknesses of the different methods and provides theoretical and technical reference for further low-altitude remote sensing monitoring of grassland rodent damage.
Drawings
FIG. 1 is a schematic diagram of the grassland rodent damage monitoring method based on low-altitude remote sensing;
FIG. 2 is a schematic illustration of an airport runway crack defect in an embodiment of the present invention;
FIG. 3 is a schematic view of route one over a typical zokor damage area in an embodiment of the invention;
FIG. 4 is a schematic view of route two over a typical marmot damage area in an embodiment of the invention;
FIG. 5 is a flowchart of unmanned aerial vehicle aerial data processing in an embodiment of the invention;
FIG. 6 is an orthoimage of route one in spring and summer according to an embodiment of the invention;
FIG. 7 is an orthoimage of route two in spring and summer according to an embodiment of the invention;
FIG. 8 shows the areas of zokor mound patches extracted from the summer image in an embodiment of the invention;
FIG. 9 shows the compactness of zokor mound patches extracted from the summer image in an embodiment of the invention;
FIG. 10 is a table comparing the strengths of the methods for extracting zokor mounds from the spring image in an embodiment of the invention;
FIG. 11 is a table comparing the strengths of the methods for extracting zokor mounds from the summer image in an embodiment of the invention;
FIG. 12 shows the patch areas of marmot burrow-entrance radiation zones extracted from the spring image in an embodiment of the invention;
FIG. 13 shows the patch compactness of marmot burrow-entrance radiation zones extracted from the spring image in an embodiment of the invention;
FIG. 14 is a table comparing the strengths of the methods for extracting marmot damage information from the spring image in an embodiment of the invention;
FIG. 15 is a table comparing the strengths of the methods for extracting marmot damage information from the summer image in an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to FIGS. 1 to 15. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without inventive effort fall within the scope of protection of the invention.
In the description of the present invention, it is to be understood that the terms "counterclockwise", "clockwise", "longitudinal", "lateral", "up", "down", "front", "back", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", etc., indicate orientations or positional relationships based on those shown in the drawings, and are used for convenience of description only, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be considered as limiting.
The grassland rodent damage monitoring method based on low-altitude remote sensing comprises the following steps:
S1: deploying a remote sensing unmanned aerial vehicle above the grassland to be monitored, the unmanned aerial vehicle capturing a plurality of ground images of the grassland along a set flight path;
S2: feeding the processed ground images as input to a trained rodent damage image recognition model, the output of which is a high-precision rodent damage recognition image;
S3: combining and collating the plurality of high-precision rodent damage recognition images to obtain the optimal surface characteristics of rodent damage on the grassland to be monitored.
It should be noted that step S1 further comprises the following steps:
S21, arranging a plurality of control points on the grassland to be monitored, setting the flying height of the remote sensing unmanned aerial vehicle according to the topographic characteristics of the grassland, and entering S22;
S22, setting the lens resolution on the remote sensing unmanned aerial vehicle from the flying height of S21, and entering S23;
S23, flying the remote sensing unmanned aerial vehicle along the preset route and capturing the ground images of the grassland to be monitored.
It is worth noting that the ground resolution GSD of the camera lens on the remote sensing unmanned aerial vehicle is calculated as follows:
GSD = (H × a) / f
where GSD is the lens-to-ground resolution in m, H is the relative flying height in m, a is the pixel unit size in mm, and f is the camera lens focal length in mm.
In step S23, the ground images are input to the unmanned aerial vehicle image processing platform for processing, and the method proceeds to step S2.
It is worth explaining that processing on the unmanned aerial vehicle image processing platform comprises the following steps:
S51, after the ground image photos are imported into the platform, the platform establishes a survey block using the control points of step S21, and enters S52;
S52, inputting the parameters into the platform, which automatically performs aerial triangulation densification and generates the DSM and DOM, and enters S53;
S53: obtaining the processed ground images and proceeding to step S2.
It is worth noting that the unmanned aerial vehicle image processing platform is Pix4Dmapper, which provides automatic aerial triangulation, block adjustment and ground-control-point precision evaluation. After the aerial triangulation solution is completed, the spatial coordinates of the corresponding ground discrete points are calculated from the image orientation elements and the matched tie points, a DSM is generated from those points, the original images are rectified with the DSM, and an orthoimage is produced by resampling. After the orthoimage is generated, automatic mosaicking and color balancing are carried out, completing the stitching of the imagery.
It should be noted that in step S2 the training process of the rodent damage image recognition model comprises the following steps:
S71: using the remote sensing unmanned aerial vehicle to acquire a large number of characteristic image photos of the grassland rodent damage region, in spring and in summer respectively;
S72: processing the image photos of S71 with four image algorithms, namely an image segmentation model, a color-texture feature model, an object-oriented extraction model and a neural network, verifying the precision of the results of each algorithm, and entering step S73;
S73: comparing the eight verified precision results (four methods over two seasons) to obtain the precision achieved for each rodent species in each season, and thereby determining the unique optimal image algorithm for the grassland surface photos of each rodent species at each time.
In step S73, the precision comparison comprises morphology comparison, area comparison and time-phase difference, and the precision of each pair of processed image photos is compared with these three methods.
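To make the selection in S73 concrete, the following minimal Python sketch (not part of the patent; the method names and precision numbers are hypothetical placeholders) shows how a table of verified precisions maps each rodent species and season to its unique optimal algorithm:

```python
# Hypothetical illustration of the S73 selection logic: for each
# (rodent species, season) pair, keep the method whose verified
# precision is highest. All numbers below are made up for the example.
METHODS = ("gray_threshold", "color_texture", "object_oriented", "bp_network")

def best_method(scores):
    """scores: dict mapping method name -> verified precision (0..1)."""
    return max(scores, key=scores.get)

verified = {
    ("zokor", "spring"): {"gray_threshold": 0.91, "color_texture": 0.84,
                          "object_oriented": 0.99, "bp_network": 0.93},
    ("marmot", "summer"): {"gray_threshold": 0.88, "color_texture": 0.81,
                           "object_oriented": 0.90, "bp_network": 0.95},
}

optimal = {key: best_method(scores) for key, scores in verified.items()}
print(optimal)  # {('zokor', 'spring'): 'object_oriented', ('marmot', 'summer'): 'bp_network'}
```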
Further, the embodiment of the invention takes the Zoige (Ruoergai) grassland as an example and analyzes two rodent species, the zokor and the marmot:
The unmanned aerial vehicle selected for this embodiment is preferably a quadrotor aerial-photography model widely available on the market, carrying a 20-megapixel 1-inch Sony Exmor R CMOS camera sensor together with a GPS/GLONASS dual-mode satellite positioning system, an IMU and dual-redundant compass sensors, so that high-precision positioning is achieved and the sensor requirements of this research are fully met. Under interference-free, unobstructed conditions the maximum remote-control distance is about 3500 m. Aerial photography flight design should adhere to the principle of efficiency and economy and comprehensively consider terrain, elevation difference, relative flying height, forward overlap, side overlap and other factors. Routes are laid out according to the task scale, forward overlap, side overlap, ground resolution and other requirements. Combining the image precision required by this research with the size of the survey area, the terrain, the elevation difference, high-altitude airspace use and other factors, the relative flying height of the unmanned aerial vehicle was set to 200 m; on the premise of guaranteeing the overlap at the highest point and the resolution at the lowest point, the ground resolution of the aerial photos can be calculated from the chosen relative flying height with the following formula:
GSD = (H × a) / f
where GSD is the ground resolution in m; H is the relative flying height in m; a is the pixel size in mm; f is the camera lens focal length in mm. The image sensor carried by the DJI Phantom 4 Pro unmanned aerial vehicle is a 1-inch Sony Exmor R CMOS behind a 24 mm focal-length lens, and the theoretical ground resolution of the aerial photos is calculated to be about 0.047 m.
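As a quick check of the formula, here is a minimal Python sketch (not from the patent; the pixel-pitch value is an assumed illustration, since the real value depends on the exact sensor):

```python
def ground_sample_distance(h_m, pixel_size_mm, focal_length_mm):
    """GSD = H * a / f, with H in metres and a, f in millimetres,
    giving the ground resolution in metres (units as defined above)."""
    return h_m * pixel_size_mm / focal_length_mm

# Assumed illustrative pixel pitch; with its actual sensor parameters the
# text reports a theoretical GSD of about 0.047 m at H = 200 m.
print(ground_sample_distance(h_m=200.0, pixel_size_mm=0.0056, focal_length_mm=24.0))
# -> 0.0467 (m per pixel)
```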
It should be noted that when determining the course direction, the actual shape of the survey area should be considered and the design should aim at the highest possible flight efficiency. When designing the routes, each route extends more than 200 m beyond both ends of the area in the flight direction, and at least one extra route is placed beyond each side of the area laterally, ensuring full aerial coverage of the working area and preventing missed coverage at the edges of the survey area. The exposure mode is isochronous exposure with an exposure interval of 3 s. In this research, two pastures located in Man Township of Luogu County were selected as experimental areas: one is a typical zokor damage area and the other a typical marmot damage area. Both experimental areas suffer severe rodent damage, the surface characteristics of the damage are prominent, and interference with sampling from livestock and the like is small. Routes were laid out separately for the two experimental areas; for the specific designs of the two routes refer to FIGS. 3 and 4.
It is worth noting that after the flights, the original photo data of each day were backed up and the flight quality and image quality were checked. In this research both were good, with no supplementary or repeated shooting required. The unmanned aerial vehicle image processing software Pix4Dmapper was used to produce digital orthoimages from the photos taken by the unmanned aerial vehicle; see FIG. 5. The software automatically performs the aerial triangulation solution to obtain the exterior orientation elements of the original images and calibrates the images automatically with the Pix4UAV technology and block adjustment. A precision report is generated automatically, containing detailed evaluations of the automatic aerial triangulation, the block adjustment and the ground control points, so that the quality of the results can be assessed quickly and accurately. After the aerial triangulation solution is completed, the software calculates the spatial coordinates of the corresponding ground discrete points from the image orientation elements and the matched tie points, generates a DSM from those points, rectifies the original images with the DSM, and produces orthoimages by resampling. The software then automatically mosaics and color-balances the orthoimages, completing the stitching. The orthoimages of the two routes in the experimental areas in spring and summer are shown in FIGS. 6 and 7; all four orthoimages are well stitched, the ground features are clear, and the spatial resolution of about 4 cm is close to the theoretical resolution.
It is worth noting the grayscale-based image segmentation: image segmentation is the process of partitioning and labelling image pixels according to image characteristics and a specific principle. The gray-threshold segmentation method segments an image with one or more thresholds; it is computationally simple, efficient and fast, and is the most widely applied class of image segmentation. It is in fact a transformation of the original image f into a result image g:
g(x, y) = 1 if f(x, y) ≥ T; g(x, y) = 0 otherwise
where T is the threshold; pixels with g(x, y) = 1 belong to the target feature and pixels with g(x, y) = 0 to the background. Determining the threshold is the key to the segmentation algorithm, and the accuracy of the threshold decides whether the image can be segmented correctly. Depending on the difference between the segmentation target and the background there are different thresholding techniques; the global threshold method and the adaptive threshold method are in common use, and the optimal segmentation threshold can be determined from the histogram or by other adaptive algorithms, as sketched below.
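A minimal Python sketch of the transform above, together with one common adaptive way of picking T (Otsu's histogram method); the 8-bit grey range is an assumption:

```python
import numpy as np

def threshold_segment(gray, t):
    """Apply the transform above: 1 where f(x, y) >= T (target), else 0."""
    return (gray >= t).astype(np.uint8)

def otsu_threshold(gray):
    """Pick T by maximising the between-class variance of the grey histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # cumulative class probability
    mu = np.cumsum(p * np.arange(256))      # cumulative class mean
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b2))

# mask = threshold_segment(gray, otsu_threshold(gray))
```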
It is worth mentioning the preferred color-texture features: the unmanned aerial vehicle orthoimages used in this embodiment carry only the gray-scale information of the three colors red, green and blue, without quantitative multispectral information; color features and texture features can be obtained through color-space conversion, texture analysis and similar processing. The color and texture characteristics of the surface rodent damage areas in the image are then analyzed, color-texture indices suited to distinguishing rodent damage features from background features are screened out, and the surface rodent damage information is extracted.
It should be noted that the surface rodent damage features are calculated and counted as follows: HLS color-space conversion and second-order gray-level co-occurrence matrix texture filtering are applied to the JPG images, yielding 3 color features (hue, lightness and saturation) and 24 texture features of the red, green and blue bands (mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment, correlation and so on). Typical areas of surface rodent damage and of background features are selected as regions of interest, the mean and standard deviation of the 30 color and texture features are counted for each region of interest, and the coefficient of variation of each color-texture feature is then calculated to characterize its dispersion:
CV_i = σ_i / μ_i
D_ij = |μ_i - μ_j| / max(μ_i, μ_j)
where CV_i is the coefficient of variation of color-texture feature i for a given land class, μ_i is the mean of that feature over the region of interest, and σ_i is its standard deviation. The relative difference of two land classes on the same feature is then calculated: D_ij is the relative difference of a texture feature between land class i and land class j, with μ_i and μ_j the means of that feature for the two classes.
It should be noted that the coefficient of variation reflects the dispersion of a feature value of a land class: the smaller the coefficient of variation, the less dispersed the feature, the more prominent it is in the region, and the easier the land class is to extract from that feature. The relative difference expresses how much two ground features differ on a certain characteristic: the larger the relative difference, the easier the two are to distinguish by that characteristic. Because different ground features overlap in the value range of any single characteristic, color-texture features with small coefficients of variation and large relative differences are chosen for classifying the experimental images: the 3 color or texture features with the largest relative difference and the smallest coefficient of variation between the surface rodent damage information and the background features are selected, classification is performed with the maximum likelihood method, and the surface rodent damage information is then extracted, as sketched below.
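A minimal Python sketch of this screening step, assuming per-ROI feature samples are available as arrays; the normaliser in the relative difference follows the reconstruction above and is itself an assumption:

```python
import numpy as np

def coefficient_of_variation(values):
    """CV_i = sigma_i / mu_i for one feature sampled inside one ROI."""
    return float(np.std(values) / np.mean(values))

def relative_difference(mu_i, mu_j):
    """D_ij = |mu_i - mu_j| / max(mu_i, mu_j) (normaliser assumed)."""
    return abs(mu_i - mu_j) / max(abs(mu_i), abs(mu_j))

def screen_features(damage_rois, background_rois, top_k=3):
    """Rank features: small CV within the damage class, large D_ij between
    damage and background; keep the top_k (3 in the text).
    Both inputs: dict mapping feature name -> 1-D array of ROI samples."""
    scores = {}
    for name in damage_rois:
        cv = coefficient_of_variation(damage_rois[name])
        d = relative_difference(np.mean(damage_rois[name]),
                                np.mean(background_rois[name]))
        scores[name] = d / (cv + 1e-9)   # large D, small CV -> high score
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```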
It is worth mentioning the rule-based object-oriented method: object-oriented classification effectively combines the spectral, textural, geometric and other object features of an image. It finds objects with an image segmentation technique and classifies the image with the object as the processing unit, in 3 main stages: finding objects, constructing rules and extracting information. The same ground feature can be described in terms of gray level, texture, geometric form and so on; that is, several rules must be constructed to describe one ground feature when building the extraction rules. For example, one description of a water body: area greater than 500 pixels; elongation less than 0.5; NDVI less than 0.25. Ground features are extracted with the constructed rules, and the more detailed and accurate the rules, the closer the extraction result is to the real situation; a rule filter of this kind is sketched below.
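A minimal Python sketch of such a rule set, using the water-body example from the text; the object attribute names are hypothetical stand-ins for whatever the segmentation step produces:

```python
from dataclasses import dataclass

@dataclass
class SegmentObject:          # hypothetical container for one image object
    area_px: int              # object area in pixels
    elongation: float         # shape index ("extension line" in the text)
    ndvi: float               # mean NDVI over the object

def is_water(obj):
    """All three rules from the water-body example must hold at once."""
    return obj.area_px > 500 and obj.elongation < 0.5 and obj.ndvi < 0.25

def extract_class(objects, rule):
    """Keep the segmentation objects satisfying a rule set."""
    return [o for o in objects if rule(o)]

# hits = extract_class(all_objects, is_water)
```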
It is worth noting the BP neural network: before BP neural network classification, a suitable network structure model must be constructed by determining the number of nodes, the number of layers, the excitation function, the learning algorithm and so on, after which the selected sample data and known results are fed into the constructed BP neural network model to determine the parameter values of the network. The number of hidden-layer neurons is critical: with too many or too few the network cannot achieve its best effect. The more hidden nodes, the larger the computation of the neural network model and the lower its speed, while the classification accuracy improves to a certain extent. Determining the node count is difficult; it is mainly influenced by the training speed and precision of the self-learning stage, and the number of hidden-layer neurons that meets the requirements usually has to be determined by experiment.
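Since a BP network is an error-back-propagation multilayer perceptron, a minimal sketch can lean on scikit-learn's MLPClassifier; the feature matrix, labels and hidden-layer width below are assumptions to be fixed experimentally, as the text notes:

```python
from sklearn.neural_network import MLPClassifier

def train_bp_classifier(X, y, hidden_nodes=10):
    """X: (n_samples, n_features) colour/texture features; y: class labels.
    The hidden-layer node count is the knob discussed above: try several
    values and keep the one that meets the precision requirement."""
    model = MLPClassifier(hidden_layer_sizes=(hidden_nodes,),
                          activation="logistic",   # sigmoid excitation function
                          solver="adam",
                          max_iter=1000,
                          random_state=0)
    return model.fit(X, y)

# for k in (5, 10, 20, 40):     # experimental search for the node count
#     print(k, train_bp_classifier(X_train, y_train, k).score(X_val, y_val))
```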
Note that when the accuracy of the extraction results is evaluated, the double criteria of spatial precision and quantitative precision are adopted. First the confusion matrix is used to evaluate the spatial precision of the image classification result, and then the quantitative precision of the target ground features is evaluated in terms of patch number, area and shape.
It is worth mentioning the spatial precision evaluation: the accuracy of a classification result is generally evaluated by overlaying the classification results, by the confusion matrix, or by the ROC curve; this research evaluates the classification results with the confusion matrix. Combining the field survey data, 10 regions of interest per land class are selected in the image as verification samples, the confusion matrix is constructed, and the overall accuracy, Kappa coefficient, drawing (producer's) accuracy, user's accuracy and so on of the classification result are calculated to evaluate the spatial precision of the image classification. The precision evaluations of the 4 extraction methods all use the same regions of interest. The overall accuracy is the probability that a random sample is classified correctly, i.e. the percentage of correctly classified pixels among the total truly classified pixels of the test sample:
OA = (Σ_{i=1..n} p_ii) / N
where OA is the overall accuracy; p_ii is the number of pixels whose mapped class and true class are both i; n is the number of classification categories; N is the total number of truly classified pixels in the test sample. The Kappa coefficient expresses the consistency between the classified image and the true classification, and is calculated as:
Kappa = (N · Σ_{i=1..n} p_ii - Σ_{i=1..n} p_{i+} · p_{+i}) / (N² - Σ_{i=1..n} p_{i+} · p_{+i})
where p_{+i} is the total number of pixels whose true class is i in the test sample, i.e. the sum of column i of the confusion matrix; p_{i+} is the total number of pixels of class i in the classified image, i.e. the sum of row i of the confusion matrix;
the drawing accuracy is the percentage of the pixels correctly classified as land class i among the pixels whose true class is i, i.e. the diagonal value of the confusion matrix as a share of its column sum:
PA_i = p_ii / p_{+i}
where PA_i is the drawing accuracy of land class i; p_ii is the number of pixels whose classification result and true class are both i; p_{+i} is the total number of pixels of true land class i in the test sample, the sum of column i of the confusion matrix;
the user's accuracy is the percentage of the pixels correctly classified as land class i among all pixels assigned to class i in the classification result, i.e. the diagonal value of the confusion matrix as a share of its row sum:
UA_i = p_ii / p_{i+}
where UA_i is the user's accuracy of land class i; p_{i+} is the total number of pixels of class i in the classification result, the sum of row i of the confusion matrix.
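A minimal Python sketch computing these four spatial-precision indices from a confusion matrix laid out as in the text (rows are mapped classes p_{i+}, columns are true classes p_{+i}); the two-class example values are made up:

```python
import numpy as np

def spatial_precision(cm):
    """cm[i, j]: pixels mapped to class i whose true class is j."""
    cm = np.asarray(cm, dtype=float)
    n_total = cm.sum()
    diag = np.diag(cm)
    row = cm.sum(axis=1)                 # p_{i+}: row sums (mapped totals)
    col = cm.sum(axis=0)                 # p_{+i}: column sums (true totals)
    oa = diag.sum() / n_total            # overall accuracy
    pe = (row * col).sum() / n_total**2  # chance agreement
    kappa = (oa - pe) / (1.0 - pe)       # same value as the N-form above
    pa = diag / col                      # drawing (producer's) accuracy PA_i
    ua = diag / row                      # user's accuracy UA_i
    return oa, kappa, pa, ua

# Two-class example: class 0 = rodent damage, class 1 = background.
oa, kappa, pa, ua = spatial_precision([[90, 10], [5, 95]])
print(round(oa, 3), round(kappa, 3), pa.round(3), ua.round(3))
# 0.925 0.85 [0.947 0.905] [0.9 0.95]
```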
It is worth noting the quantitative precision evaluation: evaluating spatial precision with the confusion matrix effectively checks the classification precision of the target ground features, but it is constrained by the choice of regions of interest, carries high randomness, and can hardly reflect the precision comprehensively. Quantitative precision evaluation therefore obtains the extraction precision of the target ground features by several further methods. During the field investigation, rodent damage information in the sample areas, such as burrows and earth mounds, was preliminarily measured and recorded, including burrow and mound diameters and relative positions. With the field records as reference and the grasped surface characteristics of rodent damage, the damage information in the experimental images is visually interpreted and used as the measured truth for evaluating the quantitative precision of each method's extraction results. The number, area, geometric shape and other statistics of the target-feature patches in the classification results are computed and compared with the real values to obtain the extraction precision. Number and position precision: the extracted rodent damage information may deviate from the real situation in number and position, and the precision of the extracted target features in these respects can be characterized by an accuracy index. The accuracy expresses the consistency between the predicted value and the real value, and is calculated as follows:
Accuracy = ΔQ / q × 100%
where Accuracy is the accuracy; Q is the total number of target-feature patches; q is the number of measured target-feature patches; Q′ is the number of target-feature patches in the extraction result; ΔQ is the number of target-feature patches whose extracted position is accurate. Area precision: the degree of deviation is the ratio of the absolute value of the difference between predicted and actual data to the actual data, a measure of how far the predicted value strays from the actual value. For area precision evaluation, a scatter plot of the extracted patch areas of the target features against their real areas is constructed, and the area precision is evaluated through the mean deviation, calculated as follows:
DEV = (1/n) · Σ_{i=1..n} (|Y_i - X_i| / X_i)
where DEV represents the mean deviation of the extracted values relative to the real values, X_i and Y_i respectively represent the real value and the extracted value of the i-th patch, and n is the total number of target-feature patches; the smaller the deviation, the closer the extraction result is to the real situation. Geometric shape precision: a feature patch is in fact a polygon enclosed by a number of line segments, and introducing compactness quantifies the shape of the polygon to a certain extent. The circle is the shape with the highest compactness, 1/π; the compactness of a square is 1/(2√π). The compactness of a visually interpreted patch is calculated as
Compact = √(4S/π) / L
where Compact represents the compactness of a patch, S is the patch area and L the length of its outer contour. Since the pixels of the unmanned aerial vehicle image are square, the contour curve of an extraction result is a right-angled toothed polyline, so its area and perimeter differ from the real values. The compactness of the extracted patches is compared with that of the real patches in a scatter plot, and the deviation of the extracted compactness from the real compactness is calculated with the formula above, giving the precision in terms of geometric shape; a sketch of these quantitative indices follows.
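A minimal Python sketch of the three quantitative indices; the normaliser of the count/position accuracy follows the reconstruction above and is an assumption:

```python
import numpy as np

def count_position_accuracy(q_measured, delta_q):
    """Accuracy = dQ / q: correctly positioned extracted patches over the
    measured (visually interpreted) patch count. Normaliser assumed."""
    return delta_q / q_measured

def mean_deviation(x_true, y_extracted):
    """DEV = (1/n) * sum(|Y_i - X_i| / X_i); smaller means closer to reality."""
    x = np.asarray(x_true, dtype=float)
    y = np.asarray(y_extracted, dtype=float)
    return float(np.mean(np.abs(y - x) / x))

def compactness(area, perimeter):
    """Compact = sqrt(4S/pi) / L: 1/pi for a circle, 1/(2*sqrt(pi)) for a square."""
    return float(np.sqrt(4.0 * area / np.pi) / perimeter)

print(compactness(np.pi, 2 * np.pi))  # unit circle -> 1/pi ~ 0.318
```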
It is worth noting that the overall accuracy of zokor mound extraction by gray-level-based image segmentation is 90.94% with a Kappa coefficient of 0.73; the drawing accuracy for mounds is somewhat lower at only 73.03%, a mediocre result. Pixels of other land classes are conspicuously misclassified as mounds, mainly through interference from similar ground features with high moisture content such as bare soil and cow dung. Comparing the segmentation result with the image shows that the method extracts the central part of a mound poorly, possibly because the soil moisture at the mound center is low and its pixel values are very close to the surrounding dry bare soil, so the two are hard to distinguish from gray level alone and the segmentation effect is poor.
It is worth noting that the confusion matrix constructed from the verification samples on the ENVI platform is shown in Table 5.7. The overall accuracy of zokor mound extraction from the spring image by the rule-based object-oriented method reaches 99.77% with a Kappa coefficient of 0.99. The commission and omission rates are very low, and the user's and drawing accuracies of both the target and background features exceed 99%, showing that the method extracts zokor mounds from the spring image with very high accuracy, effectively avoids interference from background features, and maintains a good extraction effect at both the center and the edge of a mound.
It should be noted that, referring to FIGS. 8 and 9, in number there are 27 zokor mounds in the spring unmanned aerial vehicle image and the count precision of every method's extraction result is high; considering the spatial position of the mounds, the accuracy of the preferred color-texture method is slightly low, while gray-threshold segmentation and the rule-based object-oriented method both exceed 90%. The visually interpreted mound area is 16.11 m², with an average of 0.60 m² per patch. The 4 methods studied all differ little from the real situation, and the rule-based object-oriented method has the lowest area deviation, making it the method with optimal area precision. In shape, gray-threshold segmentation preserves the mound edge contour well but extracts the central part poorly, mainly because the low soil moisture at the mound center makes its pixel values very close to the surrounding dry bare soil, which gray level alone can hardly separate; the other methods extract the central part well, and the object-oriented method handles both the mound contour and its filling well.
It should be noted that, referring to FIG. 10, the spatial precision and quantitative precision of each method's extraction results are evaluated and their differences analyzed by comparison to find the precision advantages of the different methods. With the 4 methods in the table, the accuracy of zokor mound extraction from the spring unmanned aerial vehicle image is generally satisfactory, although every precision index of the preferred color-texture method is lower. The BP neural network is slightly weak only in area precision and otherwise precise. Gray-threshold segmentation extracts mound shape poorly, being susceptible to mound soil moisture and unable to guarantee filling of the mound center. The rule-based object-oriented method has the highest precision, with clear advantages in mound number, position, area and geometric shape, and agrees closely with the real situation.
It should be noted that, referring to FIG. 11, comparative analysis of each method's precision shows that the advantages of the different methods differ greatly. Among the 4 methods, the preferred color-texture and neural network methods extract zokor mounds from the summer unmanned aerial vehicle image more effectively, although the preferred color-texture method extracts a larger mound area and its overall accuracy is ordinary; the BP neural network result is closest to the real situation and is the extraction method with optimal precision. With the gray-threshold segmentation method the segmentation threshold is hard to determine and the result is unsatisfactory. When the rule-based object-oriented method is applied, object finding works poorly: the many fragmented objects are hard to merge effectively with the merging algorithm, so the attribute parameters of each rule are hard to fix when constructing the rules and the target ground features cannot be extracted well.
It should be noted that, referring to FIGS. 12 and 13, the real area of the burrow-entrance radiation zones in the experimental image is 56.16 m², with an average of 6.24 m². The total areas extracted by the preferred color-texture and BP neural network methods are larger while their average patch areas are small, indicating that the extraction results of these two methods are fragmented. Comparing the mean deviation of the patch areas extracted by each method, only the object-oriented method deviates little, showing high precision in area, although omissions leave both its total and average areas below the real values. In shape, the deviations of gray-threshold segmentation, preferred color-texture and the BP neural network all reach about 0.5, meaning the shapes of the extracted radiation zones vary greatly; combining the experimental images and the area indices, the patches extracted by these three methods are highly fragmented, and only the object-oriented extraction comes closer to the real situation.
It should be noted that, referring to FIG. 14 together with the images of the spring experimental area, comparative analysis of the extraction results shows that among the 4 methods of this study, gray-threshold segmentation classifies marmot burrow entrances best, with number and position also highly consistent with the real situation. For the entrance radiation zones, the extraction results of all 4 methods agree poorly with reality, with number and position accuracies below 50%; in area and geometric shape, only the object-oriented extraction is comparatively good. The main reason is that part of the bare soil closely resembles the entrance radiation zone in gray level, shape and texture and is hard to separate in classification. Combining the spatial and quantitative precision comparisons: in the spring image, gray-threshold segmentation is the higher-precision method for extracting marmot burrow entrances, and the rule-based object-oriented method is comparatively better at extracting the entrance radiation zones.
It should be noted that, referring to FIG. 15, in the summer unmanned aerial vehicle image the grassland background differs markedly from the target ground features, and the 4 methods of this study all extract marmot damage information with satisfactory accuracy, except that the preferred color-texture method extracts marmot burrow entrances poorly. Every method classifies the entrance radiation zones well, but gray-threshold segmentation and the object-oriented method are comparatively weak in area and shape precision. Overall, the BP neural network is the method with optimal precision for extracting marmot damage from the summer unmanned aerial vehicle image.
In summary, the implementation principle of the invention is as follows: in this embodiment 4 methods are used to extract rodent damage information from the spring and summer unmanned aerial vehicle images; besides evaluating and comparing classification precision, quantitative precision evaluation is introduced, and the extraction results are verified in terms of patch number, position, area and geometric shape. Classification precision and quantitative precision are compared comprehensively, the strengths and weaknesses of the different methods are explored, the optimal extraction method for each kind of rodent damage information in each season's imagery is obtained, and theoretical and technical reference is provided for further research on low-altitude remote sensing monitoring methods for rodent damage on the Zoige grassland.

Claims (8)

1. A grassland rodent damage monitoring method based on low-altitude remote sensing, characterized by comprising the following steps:
S1: deploying a remote sensing unmanned aerial vehicle above the grassland to be monitored, the unmanned aerial vehicle capturing a plurality of ground images of the grassland along a set flight path;
S2: feeding the processed ground images as input to a trained rodent damage image recognition model, the output of which is a high-precision rodent damage recognition image;
S3: combining and collating the plurality of high-precision rodent damage recognition images to obtain the optimal surface characteristics of rodent damage on the grassland to be monitored.
2. The grassland rodent damage monitoring method based on low-altitude remote sensing of claim 1, wherein step S1 further comprises the following steps:
S21, arranging a plurality of control points on the grassland to be monitored, setting the flying height of the remote sensing unmanned aerial vehicle according to the topographic characteristics of the grassland, and entering S22;
S22, setting the lens resolution on the remote sensing unmanned aerial vehicle from the flying height of S21, and entering S23;
S23, flying the remote sensing unmanned aerial vehicle along the preset route and capturing the ground images of the grassland to be monitored.
3. The grassland rodent damage monitoring method based on low-altitude remote sensing of claim 2, wherein the ground resolution GSD of the camera lens on the remote sensing unmanned aerial vehicle is calculated as follows:
GSD = (H × a) / f
where GSD is the lens-to-ground resolution in m, H is the relative flying height in m, a is the pixel unit size in mm, and f is the camera lens focal length in mm.
4. The grassland rodent damage monitoring method based on low-altitude remote sensing of claim 2, wherein in step S23 the ground images are input to the unmanned aerial vehicle image processing platform for processing, after which the method proceeds to step S2.
5. The grassland rodent damage monitoring method based on low-altitude remote sensing of claim 4, wherein processing on the unmanned aerial vehicle image processing platform comprises the following steps:
S51, after the ground image photos are imported into the platform, the platform establishes a survey block using the control points of step S21, and enters S52;
S52, inputting the parameters into the platform, which automatically performs aerial triangulation densification and generates the DSM and DOM, and enters S53;
S53: obtaining the processed ground images and proceeding to step S2.
6. The grassland rodent damage monitoring method based on low-altitude remote sensing of claim 5, wherein the unmanned aerial vehicle image processing platform is Pix4Dmapper, which comprises automatic aerial triangulation, block adjustment and ground-control-point precision evaluation; after the aerial triangulation solution is completed, the spatial coordinates of the corresponding ground discrete points are calculated from the image orientation elements and the matched tie points, a DSM is generated from those points, the original images are rectified with the DSM, and an orthoimage is produced by resampling; after the orthoimage is generated, automatic mosaicking and color balancing are carried out, completing the stitching of the imagery.
7. The grassland rodent damage monitoring method based on low-altitude remote sensing of claim 1, wherein the training process of the rodent damage image recognition model in step S2 comprises the following steps:
S71: using the remote sensing unmanned aerial vehicle to acquire a large number of characteristic image photos of the grassland rodent damage region, in spring and in summer respectively;
S72: processing the image photos of S71 with four image algorithms, namely an image segmentation model, a color-texture feature model, an object-oriented extraction model and a neural network, verifying the precision of the results of each algorithm, and entering step S73;
S73: comparing the eight verified precision results to obtain the precision achieved for each rodent species in each season, and thereby determining the unique optimal image algorithm for the grassland surface photos of each rodent species at each time.
8. The grassland rodent damage monitoring method based on low-altitude remote sensing of claim 7, wherein in step S73 the precision comparison comprises morphology comparison, area comparison and time-phase difference, and the precision of each pair of processed image photos is compared with these three precision comparison methods.
CN202010549504.4A 2020-06-16 2020-06-16 Grassland mouse damage monitoring method based on low-altitude remote sensing Pending CN111881728A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010549504.4A CN111881728A (en) 2020-06-16 2020-06-16 Grassland mouse damage monitoring method based on low-altitude remote sensing

Publications (1)

Publication Number Publication Date
CN111881728A true CN111881728A (en) 2020-11-03

Family

ID=73156731

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010549504.4A Pending CN111881728A (en) 2020-06-16 2020-06-16 Grassland mouse damage monitoring method based on low-altitude remote sensing

Country Status (1)

Country Link
CN (1) CN111881728A (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108537130A * 2018-03-15 2018-09-14 Gansu Agricultural University Method for monitoring damage by Myospalax baileyi and Ochotona curzoniae based on micro unmanned aerial vehicle technology

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIONG, RUIDONG: "Research on estimating the degree of rodent damage in the Zoige alpine grassland based on low-altitude remote sensing", National Excellent Master's Theses Full-text Database - Engineering Science and Technology II, 15 August 2020 (2020-08-15), pages 4-15 *
DONG, GUANG: "Extraction methods for grassland rodent damage information in Zoige based on low-altitude remote sensing and a comparative study", National Excellent Master's Theses Full-text Database - Engineering Science and Technology II, 15 February 2020 (2020-02-15), pages 4-54 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112699852A * 2021-01-25 2021-04-23 Qinghai Institute for Endemic Disease Prevention and Control Intelligent woodchuck identification and monitoring system
CN112801419A * 2021-03-17 2021-05-14 Guangdong Polytechnic Normal University Rat damage degree grade prediction method and device based on adaptive receptive field SSD
CN112801419B * 2021-03-17 2021-08-27 Guangdong Polytechnic Normal University Rat damage degree grade prediction method and device based on adaptive receptive field SSD
CN113191302A * 2021-05-14 2021-07-30 Chengdu Hongyu Network Technology Co., Ltd. Method and system for monitoring grassland ecology
CN113191302B * 2021-05-14 2022-11-01 Chengdu Hongyu Network Technology Co., Ltd. Method and system for monitoring grassland ecology
CN113378700A * 2021-06-08 2021-09-10 Sichuan Agricultural University Grassland rat damage dynamic monitoring method based on unmanned aerial vehicle aerial image
CN114220002A * 2021-11-26 2022-03-22 Tongliao Meteorological Observatory (Tongliao Climate and Ecological Environment Monitoring Center) Method and system for monitoring invasion of foreign plants based on convolutional neural network
CN114220002B * 2021-11-26 2022-11-15 Tongliao Meteorological Observatory (Tongliao Climate and Ecological Environment Monitoring Center) Method and system for monitoring invasion of foreign plants based on convolutional neural network
CN114092815B * 2021-11-29 2022-04-15 Land Satellite Remote Sensing Application Center, Ministry of Natural Resources Remote sensing intelligent extraction method for large-range photovoltaic power generation facility
CN114092815A * 2021-11-29 2022-02-25 Land Satellite Remote Sensing Application Center, Ministry of Natural Resources Remote sensing intelligent extraction method for large-range photovoltaic power generation facility
CN115527130A * 2022-09-20 2022-12-27 Agricultural Information Institute, Chinese Academy of Agricultural Sciences Grassland pest mouse density investigation method and intelligent evaluation system
CN115965812A * 2022-12-13 2023-04-14 Guilin University of Technology Evaluation method for wetland vegetation species and ground feature classification by unmanned aerial vehicle image
CN115965812B * 2022-12-13 2024-01-19 Guilin University of Technology Evaluation method for classification of unmanned aerial vehicle images on wetland vegetation species and land features
CN116778334A * 2023-06-28 2023-09-19 China Agricultural University Quantitative large-scale space grassland mouse entrance density prediction method and system

Similar Documents

Publication Publication Date Title
CN111881728A (en) Grassland mouse damage monitoring method based on low-altitude remote sensing
Zhang et al. Assessment of defoliation during the Dendrolimus tabulaeformis Tsai et Liu disaster outbreak using UAV-based hyperspectral images
CN108369635B (en) Method for aerial image acquisition and analysis
US20230292647A1 (en) System and Method for Crop Monitoring
Peña et al. Weed mapping in early-season maize fields using object-based analysis of unmanned aerial vehicle (UAV) images
CN104700404B (en) A kind of fruit positioning identifying method
CN112541921A (en) Digitized accurate measuring method for urban green land vegetation information
Xu et al. Classification method of cultivated land based on UAV visible light remote sensing
Gené-Mola et al. Looking behind occlusions: A study on amodal segmentation for robust on-tree apple fruit size estimation
Lyu et al. Development of phenotyping system using low altitude UAV imagery and deep learning
Liu et al. Detection of Firmiana danxiaensis canopies by a customized imaging system mounted on an UAV platform
Rumora et al. Spatial video remote sensing for urban vegetation mapping using vegetation indices
Bilodeau et al. Identifying hair fescue in wild blueberry fields using drone images for precise application of granular herbicide
Liu et al. Development of a proximal machine vision system for off-season weed mapping in broadacre no-tillage fallows
CN111985472A (en) Trough hay temperature image processing method based on artificial intelligence and active ball machine
Burr et al. Estimating waterbird abundance on catfish aquaculture ponds using an unmanned aerial system
CN112233121A (en) Fruit yield estimation method based on binocular space positioning and intelligent segmentation
Afriansyah et al. Image Mapping Detection of Green Areas Using Speed Up Robust Features
van der Voort Exploring the usability of unmanned aerial vehicles for non-destructive phenotyping of small-scale maize breeding trials
López et al. Multi-Spectral Imaging for Weed Identification in Herbicides Testing
Wijesingha et al. Mapping invasive Lupinus polyphyllus Lindl. in semi-natural grasslands using object-based analysis of UAV-borne images
Dhariwal et al. Aerial Images were used to Detect Curved-Crop Rows and Failures in Sugarcane Production
CN116912702B (en) Weed coverage determination method, system and device and electronic equipment
Koma et al. Object-based habitat mapping of reedbeds using country-wide airborne laser scanning point clouds
Coleman et al. Remote sensing of burrowing shrimp density on intertidal substrates with an Unmanned Aerial System

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination