AU2020101054A4 - A Multi-source Remote Sensing Data Classification Method Based On the Classification Sample Points Extracted By the UAV - Google Patents


Info

Publication number
AU2020101054A4
Authority
AU
Australia
Prior art keywords
classification
sample points
remote sensing
data set
uav
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2020101054A
Inventor
Jianjun Chen
Yu Qin
Xirui Ruan
Xuelian Song
Qian Wang
Zhiwei Wang
Shuhua Yi
Guangyang Yue
Wen Zhang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guizhou Institute Of Pratacultural
Original Assignee
Guizhou Inst Of Pratacultural
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guizhou Institute of Pratacultural
Priority to AU2020101054A
Application granted
Publication of AU2020101054A4


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/02Agriculture; Fishing; Forestry; Mining
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/16Real estate
    • G06Q50/165Land development
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/188Vegetation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02Services making use of location information
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30181Earth observation
    • G06T2207/30188Vegetation; Agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Primary Health Care (AREA)
  • Human Resources & Organizations (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Agronomy & Crop Science (AREA)
  • Animal Husbandry (AREA)
  • Marine Sciences & Fisheries (AREA)
  • Mining & Mineral Resources (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a multi-source remote sensing data classification method based on the classification sample points extracted by the UAV, which comprises: extracting the classification sample points uniformly from aerial photographs of the UAV, and preparing the sample points for calibration; obtaining the classification remote sensing data sets, performing image processing on them, and geospatially locating the classification sample points according to the resulting classification remote sensing image data sets, where the classification remote sensing data sets include the microwave Sentinel-1 data set, the multispectral Sentinel-2 data set, a vegetation index data set based on the Sentinel-2 data set and a digital elevation model data set; and, using the geospatially located classification sample points, applying the random forest classification model to obtain the classification results. The random forest classification method of multi-source remote sensing data based on the classification sample points extracted by the UAV can realize the surface type classification mapping process quickly, effectively and cheaply; after eliminating the effects of edge classification sample points, the classification accuracy improves significantly, with the Kappa coefficient in particular improving the most. (Figure 1 shows the random forest classification map; legend: Woodland or shrub; Clearing or bare land; Road; Construction.)

Description

Drawings of Descriptions
Figure 1: random forest classification map of the research area (legend: Woodland or shrub; Clearing or bare land; Road; Construction). (Sheet 1/9)
Descriptions
A Multi-source Remote Sensing Data Classification Method Based On the Classification Sample Points Extracted By the UAV
Technical Field
The invention relates to the field of remote sensing data classification technology, and more specifically to a multi-source remote sensing data classification method based on the classification sample points extracted by the UAV.
Background Technology
The area of global karst landform is relatively large, and a considerable part of the global population depends on the aquifers of karst regions for its water supply. The karst ecosystem is very fragile and especially vulnerable to environmental change, which destroys the surface vegetation of a region and causes its surface landscape to degenerate into bare soil, or even into bare rock; this rocky desertification is a serious, short-term irreversible degradation of the ecosystem. The area of rocky desertification is relatively large in the karst region of Southwest China. Among these areas, Guizhou Province, the karst center, saw its surface soil degenerate into rocky desertification at a rapid rate from 1974 to 2001. This trend began to turn benign in the last 20 years, and vegetation in many areas has become greener than before. Nevertheless, long-term monitoring of karst regions, especially Guizhou Province at the karst center, cannot be neglected.
With the development of multi-source remote sensing data, remote sensing images have improved dramatically in spatial-temporal and spectral resolution, and research on vegetation dynamics and surface feature type monitoring in karst regions has become more and more mature. Existing surface type classification methods are becoming more and more accurate, but obtaining the field-measured classification sample points required as input by any classification model remains difficult. At large scales in particular, collecting classification sample points by traditional field survey incurs extremely high costs in manpower, material resources and time, which seriously impedes large-scale surface classification research.
Invention Summary
The embodiment of the invention provides a multi-source remote sensing data classification method based on the classification sample points extracted by the UAV, so as to solve the problems raised in the above background technology.
The embodiment of the invention provides a multi-source remote sensing data classification method based on the classification sample points extracted by the UAV, it comprises:
Extract the classification sample points uniformly from aerial photographs of the UAV, and prepare each type of sample points for calibration; among them, the types of sample points to be calibrated include: farmland and grassland, woodland and shrub, clearing and bare land, road as well as construction;
Obtain the classification remote sensing data sets, which include: the microwave Sentinel-1 data set, the multispectral Sentinel-2 data set, a vegetation index data set based on the Sentinel-2 data set and a digital elevation model data set;
Process the remote sensing data sets, and obtain the classification remote sensing image data sets; and perform geospatial location on the classification sample points according to the classification remote sensing image data sets;
Using the geospatially located classification sample points, apply the random forest classification model to obtain the classification results.
Furthermore, extracting the classification sample points from aerial photographs of the UAV comprises: extracting the classification sample points uniformly from the aerial photographs through the visual interpretation method. Furthermore, extracting the classification sample points from aerial photographs of the UAV comprises: eliminating the sample points at the edges of different surface types.
Furthermore, based on the 10m resolution, the SNAP software is used to perform orbit trimming, thermal noise removal, radiometric correction, speckle filtering and Range-Doppler topographic correction on the Sentinel-1 data set, so as to obtain the VV polarized image data set and the VH polarized image data set.
Furthermore, the Sentinel-2 data set comprises 13 bands of data, covering the visible light, near-infrared and short-wave infrared spectral ranges; the Sen2Cor software is used to perform topographic correction, atmospheric correction and radiometric correction on the Sentinel-2 data set, so as to obtain the 12-layer image data set excluding the 10th band, and to resample the 12-layer image data set to 10m resolution.
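The Sen2Cor and SNAP toolchains perform the resampling internally; purely as an illustration of what bringing a coarser band onto the 10m grid means (and not part of the claimed method, which uses the ESA software), the following Python sketch upsamples a 20m band by nearest-neighbour pixel replication:

```python
import numpy as np

def resample_nearest(band: np.ndarray, factor: int) -> np.ndarray:
    """Upsample a coarse band to a finer grid by pixel replication
    (nearest-neighbour), e.g. a 20 m Sentinel-2 band to 10 m (factor=2)."""
    return np.kron(band, np.ones((factor, factor), dtype=band.dtype))

# A toy 2x2 "20 m" band becomes a 4x4 "10 m" band.
coarse = np.array([[1, 2],
                   [3, 4]])
fine = resample_nearest(coarse, 2)
print(fine.shape)  # (4, 4)
```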
Furthermore, the vegetation index data set comprises: NDVI, EVI and SAVI, and the calculation formula is as follows:
NDVI = (NIR - Red) / (NIR + Red)
EVI = 2.5 × (NIR - Red) / (NIR + 6.0Red - 7.5Blue + 1)
SAVI = (NIR - Red)(1 + L) / (NIR + Red + L)
In the formulas, NIR, Red and Blue correspond to the data of the near-infrared, red and blue bands respectively; L is the soil regulation coefficient, which is determined by the actual regional conditions; the NIR, Red and Blue bands correspond to the 8th, 4th and 2nd bands of the Sentinel-2 data set respectively.
Furthermore, the soil regulation coefficient L= 0.5.
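The three index formulas above can be checked numerically. The following Python sketch (an illustration only; the claimed method computes the indices on the Sentinel-2 rasters) implements NDVI, EVI and SAVI with the default L = 0.5:

```python
import numpy as np

def vegetation_indices(nir, red, blue, L=0.5):
    """Compute NDVI, EVI and SAVI from Sentinel-2 band 8 (NIR),
    band 4 (Red) and band 2 (Blue) reflectances."""
    nir, red, blue = (np.asarray(b, dtype=float) for b in (nir, red, blue))
    ndvi = (nir - red) / (nir + red)
    evi = 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1)
    savi = (nir - red) * (1 + L) / (nir + red + L)
    return ndvi, evi, savi

# Single-pixel example with reflectances in [0, 1].
ndvi, evi, savi = vegetation_indices(0.6, 0.1, 0.05)
print(round(float(ndvi), 3))  # 0.714
print(round(float(savi), 3))  # 0.625
```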
Furthermore, the DEM data set adopts the SRTM DEM data set. After resampling the SRTM DEM data set to 10m resolution, the elevation DEM image data set, slope image data set, aspect image data set and profile curvature image data set are obtained.
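The slope and aspect layers named above are standard terrain derivatives of the DEM. As a rough sketch of how they can be obtained (GIS packages typically use 3×3 kernel variants such as Horn's method; the simple finite-difference version below and its row/column-to-compass conventions are illustrative assumptions, not the patented processing chain):

```python
import numpy as np

def slope_aspect(dem: np.ndarray, cellsize: float = 10.0):
    """Derive slope (degrees) and aspect (degrees clockwise from north,
    assuming row 0 is the northern edge) from a DEM grid using
    finite-difference gradients."""
    dz_dy, dz_dx = np.gradient(dem, cellsize)  # per-axis elevation change
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    aspect = np.degrees(np.arctan2(-dz_dx, dz_dy)) % 360.0
    return slope, aspect

dem = np.array([[10., 10., 10.],
                [20., 20., 20.],
                [30., 30., 30.]])  # rises 1 m per metre toward the south
slope, aspect = slope_aspect(dem, cellsize=10.0)
print(round(float(slope[1, 1]), 1))  # 45.0
```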
Furthermore, the random forest classification model comprises:
Use the readOGR() and brick() commands to read the classification sample point images and the classification remote sensing data sets in the R language environment;
Use the following code to build a random forest classification model;
rf <- randomForest(lc ~ b1+b2+b3+b4+b5+b6+b7+b8+b9+b8a+b11+b12,
data=rois, ntree=500, importance=TRUE)
Among them, b1-b12 are the parameter layer images in the random forest classification model, and different data sets correspond to different parameter layer images;
Use the tuneRF() and randomForest() commands to complete the parameter tuning of the random forest classification model; use the writeRaster() command to map the classification results and generate the classification result image.
Furthermore, the accuracy indexes of the surface type classification results comprise: Overall Accuracy (OA) and Kappa coefficient, and the calculation formula is as follows:
OA = (TP + TN) / (TP + FN + FP + TN)
In the formula, TP is true positive, that is, the positive samples classified correctly by the random forest classification model; FN is false negative, that is, the positive samples classified incorrectly by the random forest classification model; FP is false positive, that is, the negative samples classified incorrectly by the random forest classification model; TN is true negative, that is, the negative samples classified correctly by the random forest classification model; OA is the Overall Accuracy, that is, the proportion of the number of correctly classified samples to the number of all samples;
Kappa = (Po - Pe) / (1 - Pe)
In the formula, Po is the sum of the observed values in the diagonal units, that is, the Overall Accuracy (OA); Pe is the sum of the expected values in the diagonal units; Kappa is the measurement value for assessment consistency, which represents the proportion of error reduction between the classification and the full random classification.
The embodiment of the invention provides a multi-source remote sensing data classification method based on the classification sample points extracted by the UAV. Compared with the prior art, the beneficial effects are as follows:
The random forest classification method of multi-source remote sensing data based on the classification sample points extracted by the UAV can realize the surface type classification mapping process quickly, effectively and cheaply, and can also provide technical support and a methodological basis for the future extraction of tens of thousands or millions of samples. However, classification sample points involving mixed pixels cause problems. After eliminating the effects of edge classification sample points (mixed pixels), the classification accuracy improves significantly, and the Kappa coefficient in particular improves the most. Therefore, when siting sample points in future related research, extracting classification sample points at edges should be avoided as much as possible; areas with uniform surface feature types should be selected instead. The method can effectively distinguish withered vegetation from bare land: even an image generated from the visible light bands alone, cross-referenced with the UAV imagery, suffices to distinguish the various surface types conveniently. It also extends the collection window for classification sample points beyond the peak season of vegetation growth (such as July to September), reduces the time consumed by operations during the growing season, and does not require long time series of vegetation research data to invert the whole phenological cycle in order to complete the classification.
Description of Drawings
Figure 1 is a random forest classification result provided by the embodiment of the invention;
Figure 2a is a confusion matrix of classification results of S2 data set provided by the embodiment of the invention;
Figure 2b is a confusion matrix of classification results of S2&VI data sets provided by the embodiment of the invention;
Figure 2c is a confusion matrix of classification results of S2&VI&DEM data sets provided by the embodiment of the invention;
Figure 2d is a confusion matrix of classification results of S2&VI&S1 data sets provided by the embodiment of the invention;
Figure 2e is a confusion matrix of classification results of b3&b2&b4&b6 data sets provided by the embodiment of the invention;
Figure 3a is a Gini index diagram of S2 data set provided by the embodiment of the invention;
Figure 3b is a Gini index diagram of S2&VI data sets provided by the embodiment of the invention;
Figure 3c is a Gini index diagram of S2&VI&DEM data sets provided by the embodiment of the invention;
Figure 3d is a Gini index diagram of S2&VI&S1 data sets provided by the embodiment of the invention;
Figure 3e is a Gini index diagram of b3&b2&b4&b6 data sets provided by the embodiment of the invention;
Figure 4a is a random forest classification result and confusion matrix of S2 data set provided by the embodiment of the invention;
Figure 4b is a random forest classification result and confusion matrix of S2&VI data sets provided by the embodiment of the invention;
Figure 4c is a random forest classification result and confusion matrix of S2&VI&S1 data sets provided by the embodiment of the invention;
Figure 5 is a distinction diagram of withered vegetation and bare area provided by the embodiment of the invention;
Figure 6 is a slight and messy patch diagram provided by the embodiment of the invention;
Figure 7 is a flow diagram of a multi-source remote sensing data classification method based on the UAV extraction of classification sample points provided by the embodiment of the invention.
Detailed Description of the Presently Preferred Embodiments
In the following, the technical solutions in the embodiments of the invention are described clearly and completely in conjunction with the drawings. Obviously, the described embodiments are only a part of the embodiments of the invention, not all of them. Based on the embodiments of the invention, all other embodiments obtained by those of ordinary skill in this field without creative effort belong to the scope of protection of the invention.
Referring to Figure 7, the embodiment of the invention provides a multi-source remote sensing data classification method based on the classification sample points extracted by the UAV, which comprises:
Step S1: Extract the classification sample points uniformly from aerial photographs of the UAV, and prepare the sample points for calibration; among them, the types of sample points to be calibrated include: farmland and grassland, woodland and shrub, clearing and bare land, road as well as construction;
Step S2: Obtain the classification remote sensing data sets, which include: the microwave Sentinel-1 data set, the multispectral Sentinel-2 data set, a vegetation index data set based on the Sentinel-2 data set and a digital elevation model data set;
Step S3: Process the remote sensing data sets, and obtain the classification remote sensing image data sets; and perform geospatial location on the classification sample points according to the classification remote sensing image data sets;
Step S4: Using the geospatially located classification sample points, apply the random forest classification model to obtain the classification results.
The above Steps S1-S4 are described in detail as follows: the detailed classification and monitoring of surface types are the key means of preventing and controlling rocky desertification in the karst region of Southwest China. Currently, a variety of remote sensing data is widely used in surface feature classification mapping research, but scarce field-measured sample points have always been one of the technical bottlenecks to perceiving the surface type accurately and effectively. Therefore, the invention extracts a large number of field-measured sample points from aerial photographs of the UAV to complete the surface type classification process, and provides a practical method for the later extraction of cheap, massive classification sample points.
In the invention, 982 classification sample points evenly distributed over the research area are extracted, and remote sensing image data including the Sentinel-1/2 (S1/2) data sets, together with the Vegetation Index data set and Digital Elevation Model data set calculated from the S2 data set, are used; the surface type classification of the research area is then completed with the help of the random forest classification model. These remote sensing data sets cover not only visible light but also near-infrared, short-wave infrared and microwave spectral bands. The classification results show that, except for the results containing the DEM data sets (Overall Accuracy and Kappa coefficient of 74.54% and 61.73%, respectively), the Overall Accuracy and Kappa coefficient of the other 4 data set combinations (S2 alone, S2&VI, S2&VI&S1, and the 4 S2 bands b3&b2&b4&b6) are all above 75% and 65%. In addition, the classification mapping results are more robust after disregarding the sample points located at edges (i.e. mixed pixels): the OA and Kappa coefficients of the 3 combinations with the highest classification accuracy (S2 alone, S2&VI and S2&VI&S1) all improve to above 85% and 79%. The Kappa coefficient in particular improves the most, by nearly %. These results can provide accurate and effective technical means and method support for surface type mapping in karst regions.
In addition, the invention can effectively identify the edge classification sample points (mixed pixels) of classification mapping and distinguish withered vegetation from bare land; it also emphasizes the necessity of automating the extraction of classification sample points from aerial photography images, which is expected to become a hot research direction in the future.
It should be noted that there are many types of remote sensing data sources and various classification methods. In order to fully demonstrate the possibility of using a UAV to extract massive and cheap classification sample points, open source data sets are selected as the remote sensing data sources for classification, namely Sentinel-1/2 and SRTM DEM. The advantage of these data is that they provide free image data covering the visible light, near-infrared, short-wave infrared and microwave ranges, supplying extensive spectral data for classification method research. Meanwhile, random forest, which has better accuracy for remote sensing data classification, is selected as the classification method; compared with other machine learning classification methods (except deep learning, which has extremely high hardware requirements), random forest has better robustness.
Research Area Summary
The research area of the embodiment of the invention is located in the north of Weining County, Guizhou Province, between 104.100°E-104.118°E and 27.179°N-27.191°N, with an area of 1.7 km × 1.4 km; 120 UAV photographs were collected and 982 classification sample points were extracted.
Data Acquisition of UAV Aerial Photographs and Classification Sample Points
The UAV aerial photographs were collected on April 21, 2018 with a DJI Phantom 4; a total of 120 photographs were taken at a flight altitude of nearly 300 meters and a resolution of 12 million pixels, with the FragMAP software used to complete the flight and shooting process. In ArcGIS, the sample points were created as vector points evenly distributed over the research area, 982 vector points in total. If the traditional ecological sample quadrat were used to complete a vegetation survey of nearly a thousand points in this experiment, let alone the tens of thousands or millions of points of follow-up research, the required labor and time cost would be enormous, so the invention uses the UAV images to complete the extraction of the classification sample point data.
When only open source remote sensing images are used for visual interpretation, even at a resolution of 10m it is still difficult to distinguish the various surface types clearly. For example, the color of some bare land in the research area is very similar to that of sparse woods, so it is easily misinterpreted as woodland; with UAV aerial photography data as a reference, however, the surface type is not misinterpreted. All the classification sample points are located on the Sentinel-2 image through visual interpretation of the UAV images, completing the sample point extraction process.
Remote Sensing Data Source
Sentinel-1/2 data are from the European Space Agency (http://scihub.copernicus.eu/). The Sentinel-2 (S2) data contain a total of 13 bands, covering the visible light, near-infrared and short-wave infrared spectral ranges, of which 5 near-infrared bands can be applied to vegetation-related research. The Sen2Cor software completes the basic image processing steps of topographic correction, atmospheric correction and radiometric correction; finally, the 12-layer image data set excluding the 10th band is obtained, with all layers resampled to 10m resolution, consistent with the resolution of the 2nd (blue), 3rd (green), 4th (red) and 8th (near-infrared) bands. The resolution of the Sentinel-1 (S1) GRD data (C band, VV and VH polarization) is also 10m; after the SNAP software is used for orbit trimming, thermal noise removal, radiometric correction, speckle filtering and Range-Doppler topographic correction, the 2-layer VV and VH image data are obtained. The S2 and S1 data used in the invention were acquired on April 17, 2018 and April 20, 2018 respectively. The SRTM DEM is used as the DEM data; after resampling to 10m resolution, the 4-layer image data of DEM, slope, aspect and profile curvature are calculated. After processing, all the above remote sensing images and UAV aerial photography data are projected to WGS_1984_UTM_Zone_48N.
Calculation for Vegetation Index
In order to improve the classification accuracy, the introduced VI (Vegetation Indices) mainly include NDVI (Normalized Difference Vegetation Index), EVI (Enhanced Vegetation Index) and SAVI (Soil-Adjusted Vegetation Index). The calculation formula is shown as follows:
NDVI = (NIR - Red) / (NIR + Red) (1)
EVI = 2.5 × (NIR - Red) / (NIR + 6.0Red - 7.5Blue + 1) (2)
SAVI = (NIR - Red)(1 + L) / (NIR + Red + L) (3)
In the formulas, NIR, Red and Blue correspond to the values of the near-infrared, red and blue bands respectively; L is the soil regulation coefficient, which is determined by the actual regional conditions, with L = 0.5 generally used; the NIR, Red and Blue bands correspond to the 8th, 4th and 2nd bands of the S2 data respectively.
Construction for Random Forest Classification Model
Random forest is an ensemble supervised classification method based on decision trees, realizing the integration of multiple decision trees, and is widely used in remote sensing data classification research. The classification model adopts the bagging method to construct sub-data sets from the original training data set; in this process, elements may be repeated both across different sub-data sets and within the same sub-data set. Meanwhile, because it introduces two random attributes (random samples, random features), the classification results are not prone to overfitting. The importance of a random forest feature is positively related to the contribution of the feature to each tree in the forest; averaging the contribution of the feature over the trees gives its Gini index. In addition, the Out-of-Bag (OOB) error rate can be used as an assessment index to measure the contribution of a feature set; usually, the feature set with the lowest Out-of-Bag error rate is preferred.
The random forest classification model is realized by R language.
The first step is to load the required toolkits: randomForest, raster, rgdal, lattice, ggplot2, caret, and e1071.
The second step is to use the readOGR and brick commands to read the classification sample point images and basic remote sensing data sets for classification in the R language environment.
The third step is to use the following code to build a random forest model.
rf <- randomForest(lc ~ b1+b2+b3+b4+b5+b6+b7+b8+b9+b8a+b11+b12,
data=rois, ntree=500, importance=TRUE)
Among them, b1-b12 are the parameter layer images in this classification model, and different data sets correspond to different parameter layer images. For the S2 data set in this example, they correspond to the 12 parameter layer images obtained after the Sen2Cor topographic, atmospheric and radiometric correction described in the claims. S2&VI includes not only the S2 data set but also the 3 vegetation parameter layer images NDVI, EVI and SAVI as described in Claim 6. S2&VI&DEM includes S2 and the 3 vegetation parameter layer images of Claim 6, plus the 4 terrain parameter layer images of elevation, slope, aspect and profile curvature as described in Claim 8. S2&VI&S1 includes S2 and the 3 vegetation parameter layer images of Claim 6, plus the 2 polarization parameter layer images VV and VH of Claim 4. And b3&b2&b4&b6 includes only the 4 band parameter layer images of the 3rd, 2nd, 4th and 6th bands of the S2 data set.
The fourth step is to use the tuneRF() and randomForest() commands to complete the parameter tuning of the model. Finally, use the writeRaster() command to map the classification results and generate the classification result image.
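For readers more familiar with Python, the R workflow above can be mirrored with scikit-learn. The sketch below uses synthetic stand-ins for the sample-point table and band stack (the arrays X and y and the toy labels are illustrative, not the patent's data); `ntree=500` in the R call maps to `n_estimators=500`, and `oob_score` mirrors the Out-of-Bag error rate used to compare feature sets:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in for the sample-point table "rois": 200 labelled pixels with
# 12 spectral features (b1..b12); real work would read these from the
# classified sample points and the S2 raster stack.
X = rng.normal(size=(200, 12))
y = (X[:, 3] + X[:, 7] > 0).astype(int)  # toy 2-class label

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, y)

print(len(rf.feature_importances_))  # 12 Gini-based importances, one per band
print(rf.oob_score_ > 0.5)           # OOB accuracy beats chance on this toy data
```

The `feature_importances_` attribute corresponds to the Gini index discussed above, and `oob_score_` to the Out-of-Bag assessment.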
In the embodiment of the invention, 5 surface types are mainly distinguished, as shown in Table 1.
Table 1 Overview of Surface Type Classification Sample Points in the Experimental Research Area

ID  Surface Types          Edge or Not (Mixed Pixels)  Number of Sample Points
1   Farmland or Grassland  Yes                         104
                           No                          339
2   Woodland or Shrub      Yes                         88
                           No                          147
3   Clearing or Bare Land  Yes                         88
                           No                          140
4   Road                   Yes                         34
                           No                          38
5   Construction           Yes                         4
                           No                          0
    Total                  Yes                         318
                           No                          664
The accuracy index for verifying the classification results mainly includes the Overall Accuracy (OA) and Kappa coefficient. The calculation formula is as follows:
OA=(TP+TN)/(TP+FN+FP+TN) (4)
In the formula, TP is true positive, that is, the positive samples classified correctly by the model; FN is false negative, that is, the positive samples classified incorrectly by the model; FP is false positive, that is, the negative samples classified incorrectly by the model; TN is true negative, that is, the negative samples classified correctly by the model; OA is the Overall Accuracy, that is, the proportion of the number of correctly classified samples to the number of all samples;
Kappa = (Po - Pe) / (1 - Pe) (5)
In the formula, Po is the sum of the observed values in the diagonal units, that is, the Overall Accuracy (OA); Pe is the sum of the expected values in the diagonal units; Kappa is the measurement value for assessment consistency, which represents the proportion of error reduction between the classification and the full random classification.
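Formulas (4) and (5) generalize directly to a multi-class confusion matrix; the sketch below (in Python, with a hypothetical two-class matrix) computes both indexes.

```python
import numpy as np

def oa_kappa(confusion):
    """Overall Accuracy and Kappa coefficient from a square confusion matrix
    (rows = reference classes, columns = predicted classes)."""
    c = np.asarray(confusion, dtype=float)
    total = c.sum()
    po = np.trace(c) / total  # observed agreement on the diagonal = OA
    # Expected agreement under chance, from row and column marginals.
    pe = (c.sum(axis=0) * c.sum(axis=1)).sum() / total ** 2
    return po, (po - pe) / (1.0 - pe)

# Hypothetical binary case: TP = 40, FN = 10, FP = 5, TN = 45.
oa, kappa = oa_kappa([[40, 10], [5, 45]])  # oa = 0.85, kappa = 0.7
```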
According to the above data and the mapping results of the random forest classification model, as shown in Figure 1, except for the poor classification results of the S2&VI&DEM data set (Table 2), the remaining classification results are relatively stable: OA is above 75%, the Kappa coefficient is above 65%, and the spatial distributions of the various classification results are basically the same. Among them, the classification accuracy of the S2&VI data set is the highest, while the out-of-bag (OOB) error is the lowest for the S2&VI&S1 data set. The results of the invention basically follow the rule that the more data sources are used, the higher the classification accuracy. However, introducing the DEM and its derived layers evidently adds noise and degrades the classification results. Therefore, classification research should also screen variables to avoid the loss of accuracy caused by introducing redundant variables.
Table 2 Accuracy Assessment Indexes for the Classification Results

Classification Feature Set   Overall Accuracy (%)   Kappa (%)   Out-of-Bag Error (%)
S2                           78.81                  68.38       21.18
S2&VI                        78.92                  68.44       21.08
S2&VI&DEM                    74.54                  61.73       25.46
S2&VI&S1                     78.81                  68.35       20.67
b3&b2&b4&b6                  77.49                  66.44       21.99
According to Figures 2a-2e, it can be found that the more sample points a surface type has, the higher its classification accuracy, and the fewer the sample points, the lower the final accuracy. In particular, the construction class, which has the fewest sample points, is not correctly distinguished in any of the 5 classification results on the test set.
Figures 3a-3e show the Gini index values of the different remote sensing layers in each classification data set. In the first 4 data sets, the 2nd, 3rd, 4th and 6th bands of the S2 data all have high Gini index values. Accordingly, when the classification is performed with only these 4 band layers, the resulting spatial distribution is basically similar to the other multi-layer classification results (Figure 1). Notably, the accuracy of the classification completed with only these 4 band layers is slightly lower than that of the best classification data set, and much higher than that obtained after introducing the redundant data set (DEM) (Table 2 and Figures 2a-2e). These results also show that data dimensionality reduction can effectively save time when processing large amounts of data while maintaining a high classification accuracy.
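The variable-screening idea (dropping layers whose Gini importance is no better than noise) can be sketched as follows. Here scikit-learn's feature_importances_ (mean decrease in Gini impurity) stands in for the R package's Gini index, and all data are synthetic; the screening threshold is an illustrative assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
# Two informative "band" layers and one pure-noise layer (playing the role
# of a redundant input such as the DEM in the embodiment).
band_a = rng.normal(size=n)
band_b = rng.normal(size=n)
noise = rng.normal(size=n)
y = (band_a + band_b > 0).astype(int)
X = np.column_stack([band_a, band_b, noise])

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
importances = rf.feature_importances_  # mean decrease in Gini impurity per layer
# Illustrative screening rule: keep only layers that beat the noise layer.
kept = [i for i in range(3) if importances[i] > importances[2]]
```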
The Effects of Edge Classification Sample Points (Mixed Pixels) on the Accuracy of Classification Results
In the invention, the classification sample points are set according to a uniform distribution rule, so nearly 1/3 of the sample points are located at the edges of different surface types (i.e. they are mixed pixels), as shown in Table 1. After eliminating these 318 edge sample points, all remaining sample points lie in areas of extremely high surface-type consistency. When the 664 remaining sample points are input into the classification model with the 3 data sets that gave the most accurate classification results, the outcome shown in Figures 4a-4c is obtained: the classification accuracy improves significantly (Table 3), with OA above 85% and Kappa above 79%. Eliminating the edge classification sample points (mixed pixels) is therefore very important for improving classification accuracy; in the invention, the improvements in OA and Kappa are close to 10% and 15%, respectively.
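The elimination step itself is a simple filter over the sample-point table; a minimal sketch (record and field names are hypothetical):

```python
# Each record mirrors a row of the point list underlying Table 1.
samples = [
    {"id": 1, "surface": "farmland", "edge": False},
    {"id": 2, "surface": "woodland", "edge": True},   # mixed pixel, dropped
    {"id": 3, "surface": "road", "edge": False},
]
# Keep only points inside highly consistent (non-edge) areas.
pure = [s for s in samples if not s["edge"]]
```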
Table 3 Accuracy Assessment Indexes for the Classification Results of Data Sets S2, S2&VI and S2&VI&S1

Classification Feature Set   Overall Accuracy (%)   Kappa (%)   Out-of-Bag Error (%)
S2                           87.20                  79.85       12.80
S2&VI                        86.90                  79.29       13.10
S2&VI&S1                     86.45                  78.59       13.55
The classification method based on classification sample points extracted by the UAV can effectively identify withered vegetation that conventional visual interpretation struggles with. In particular, when no UAV aerial photographs are available as reference material, visual interpretation of visible-light remote sensing images has difficulty distinguishing withered woodland from some bare land; as shown in Figure 5A, the colors of withered woodland and bare land are very close in the S2 composite image. However, after the classification is completed using the classification sample points extracted by the UAV, the invention distinguishes withered woodland from bare land much more accurately, as shown in Figure 5B. In particular, the classified withered woodland in the second circle from the top in Figure 5B basically connects coherently to the adjacent vigorous woodland. The surface types in the circles from top to bottom in both Figure 5A and Figure 5B are forest, forest and bare land, in sequence.
Small and Fragmented Patches
Figure 6 shows how well the classification distinguishes small and fragmented patches. The classification within the circles where the surface type is forest in Figure 6 can basically meet the needs of conventional surface type classification, although the result also shows some salt-and-pepper effects, and some roads are incoherent. In addition, attention should be paid to the poor distinction of construction: for example, the houses in the red circle in Figure 6 are not distinguished at all, and only some of the more numerous constructions in Figure 5 are distinguished. There are two main reasons for this phenomenon: 1) there are too few classification sample points; as shown in Table 1, there are only 4 construction sample points, and all of them are edge points; 2) the area of each construction is small, and most individual buildings do not cover a 2x2 pixel unit. If the actual classification mapping requirements genuinely call for classifying surface types with few samples and small areas, it is recommended to add sample points manually rather than rely solely on the uniformly distributed sample points. In addition, if conditions permit, higher-resolution remote sensing images (generally not open source) should be considered for classification mapping to enhance the ability to identify small-area samples. Figure 6A shows the small and fragmented patches on the S2 composite image; Figure 6B shows the same patches after classification using the sample points extracted by the UAV. The surface types in the circles from top to bottom in both Figure 6A and Figure 6B are forest, forest and bare land, in sequence.
The above disclosures are only a few specific embodiments of the invention. Those skilled in the art can make various changes and modifications to the invention without departing from its spirit and scope. If such changes and modifications fall within the scope of the claims of the invention and its equivalent technology, the invention is intended to include them.

Claims (10)

1. A multi-source remote sensing data classification method based on the classification sample points extracted by the UAV, characterized in that it comprises:
Extract the classification sample points uniformly from aerial photographs of the UAV, and prepare each type of sample points for calibration; among them, the types of sample points to be calibrated include: farmland and grassland, woodland and shrub, clearing and bare land, road as well as construction;
Obtain the classification remote sensing data sets, which include: the microwave Sentinel-1 data set, the multispectral Sentinel-2 data set, the vegetation index data set derived from the Sentinel-2 data set, and the digital elevation model data set;
Process the remote sensing data sets, and obtain the classification remote sensing image data sets; and perform geospatial location on the classification sample points according to the classification remote sensing image data sets;
Use the geospatially located classification sample points and the random forest classification model to obtain the classification results.
2. The multi-source remote sensing data classification method based on the classification sample points extracted by the UAV as described in Claim 1, characterized in that extracting the classification sample points from aerial photographs of the UAV comprises:
Extract the classification sample points uniformly from aerial photographs of the UAV through the visual interpretation method.
3. The multi-source remote sensing data classification method based on the classification sample points extracted by the UAV as described in Claim 1 or 2, characterized in that extracting the classification sample points from aerial photographs of the UAV comprises:
Eliminate the sample points at the edges of different surface types.
4. The multi-source remote sensing data classification method based on the classification sample points extracted by the UAV as described in Claim 1, characterized in that, at 10 m resolution, the SNAP software is used to perform orbit trimming, thermal noise removal, radiometric correction, speckle filtering and Range-Doppler terrain correction on the microwave Sentinel-1 data set, so as to obtain the VV polarized image data set and the VH polarized image data set.
5. The multi-source remote sensing data classification method based on the classification sample points extracted by the UAV as described in Claim 1, characterized in that the multispectral Sentinel-2 data set comprises 13 bands of data covering the visible, near-infrared and short-wave infrared spectral ranges; the Sen2Cor software is used to perform topographic correction, atmospheric correction and radiometric correction on the multispectral Sentinel-2 data set, so as to obtain the 12 layer image data sets (all bands except the 10th) and resample them to 10 m resolution.
6. The multi-source remote sensing data classification method based on the classification sample points extracted by the UAV as described in Claim 1 or 5, characterized in that the vegetation index data set comprises NDVI, EVI and SAVI, calculated as follows:
NDVI = (NIR - Red) / (NIR + Red)
EVI = 2.5 × (NIR - Red) / (NIR + 6.0Red - 7.5Blue + 1)
SAVI = (NIR - Red)(1 + L) / (NIR + Red + L)
In the formulas, NIR, Red and Blue correspond to the data of the near-infrared, red and blue bands respectively; L is the soil adjustment coefficient, determined by the actual regional conditions; the NIR, Red and Blue bands correspond to the 8th, 4th and 2nd bands of the Sentinel-2 data set, respectively.
7. The multi-source remote sensing data classification method based on the classification sample points extracted by the UAV as described in Claim 6, characterized in that the soil adjustment coefficient L = 0.5.
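The three index formulas of Claim 6, with L = 0.5 from Claim 7, can be written out as follows (a sketch in Python; the reflectance values are hypothetical):

```python
def ndvi(nir, red):
    # NDVI = (NIR - Red) / (NIR + Red)
    return (nir - red) / (nir + red)

def evi(nir, red, blue):
    # EVI = 2.5 * (NIR - Red) / (NIR + 6.0*Red - 7.5*Blue + 1)
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

def savi(nir, red, soil_l=0.5):
    # SAVI = (NIR - Red)(1 + L) / (NIR + Red + L), with L = 0.5 per Claim 7
    return (nir - red) * (1.0 + soil_l) / (nir + red + soil_l)

# Hypothetical surface reflectances for Sentinel-2 bands 8 (NIR), 4 (Red), 2 (Blue).
v_ndvi = ndvi(0.5, 0.1)
v_savi = savi(0.5, 0.1)
```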
8. The multi-source remote sensing data classification method based on the classification sample points extracted by the UAV as described in Claim 1, characterized in that the DEM data set adopts the SRTM DEM data set; after resampling the SRTM DEM data set to 10 m resolution, the elevation image data set, slope image data set, aspect image data set and profile curvature image data set are obtained.
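The slope and aspect layers of Claim 8 can be derived from a DEM grid by finite differences. The sketch below uses numpy gradients and assumes a 10 m cell size (the resampling target used elsewhere in the method); the aspect convention shown is one common choice, not necessarily the patent's.

```python
import numpy as np

def slope_aspect(dem, cell=10.0):
    """Slope (degrees) and aspect (degrees, one common convention) from a DEM grid."""
    dz_dy, dz_dx = np.gradient(dem.astype(float), cell)
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    aspect = np.degrees(np.arctan2(-dz_dx, dz_dy)) % 360.0
    return slope, aspect

# A plane rising 10 m per 10 m cell toward the east: slope is 45 degrees everywhere.
dem = np.tile(np.arange(5.0), (5, 1)) * 10.0
slope, aspect = slope_aspect(dem)
```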
9. The multi-source remote sensing data classification method based on the classification sample points extracted by the UAV as described in Claim 1, characterized in that using the random forest classification model comprises:
Use the readOGR() and brick() commands to read the classification sample point images and the classification remote sensing data sets in the R language environment;
Use the following code to build a random forest classification model:

rf <- randomForest(lc ~ b1+b2+b3+b4+b5+b6+b7+b8+b9+b8a+b11+b12,
                   data = rois, ntree = 500, importance = TRUE)
Among them, b1–b12 are the parameter layer images in the random forest classification model, and different data sets correspond to different parameter layer images;
Use the tuneRF() and randomForest() commands to complete the parameter tuning of the random forest classification model; use the writeRaster() command to map the classification results and generate the classification result image.
10. The multi-source remote sensing data classification method based on the classification sample points extracted by the UAV as described in Claim 1 or 9, characterized in that the accuracy indexes of the surface type classification results comprise the Overall Accuracy (OA) and the Kappa coefficient, calculated as follows:
OA = (TP + TN) / (TP + FN + FP + TN)
In the formula, TP is true positive, that is, the positive samples classified correctly by the random forest classification model; FN is false negative, that is, the positive samples classified incorrectly by the random forest classification model; FP is false positive, that is, the negative samples classified incorrectly by the random forest classification model; TN is true negative, that is, the negative samples classified correctly by the random forest classification model; OA is the Overall Accuracy, that is, the proportion of the number of correctly classified samples to the number of all samples;
Kappa = (Po - Pe) / (1 - Pe)
In the formula, Po is the sum of the observed values in the diagonal units, that is, the Overall Accuracy (OA); Pe is the sum of the expected values in the diagonal units; Kappa is the measurement value for assessment consistency, which represents the proportion of error reduction between the classification and the full random classification.
AU2020101054A 2020-06-19 2020-06-19 A Multi-source Remote Sensing Data Classification Method Based On the Classification Sample Points Extracted By the UAV Ceased AU2020101054A4 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2020101054A AU2020101054A4 (en) 2020-06-19 2020-06-19 A Multi-source Remote Sensing Data Classification Method Based On the Classification Sample Points Extracted By the UAV


Publications (1)

Publication Number Publication Date
AU2020101054A4 true AU2020101054A4 (en) 2020-07-30

Family

ID=71738635

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2020101054A Ceased AU2020101054A4 (en) 2020-06-19 2020-06-19 A Multi-source Remote Sensing Data Classification Method Based On the Classification Sample Points Extracted By the UAV

Country Status (1)

Country Link
AU (1) AU2020101054A4 (en)

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112001341A (en) * 2020-08-27 2020-11-27 深圳前海微众银行股份有限公司 Vegetation identification method, device, equipment and readable storage medium
CN112257531A (en) * 2020-10-13 2021-01-22 西安电子科技大学 Forest land change remote sensing monitoring method based on diversity characteristic combination
CN112270236A (en) * 2020-10-21 2021-01-26 长春工程学院 Remote sensing image vegetation classification method based on gradient scale interval change rule operator
CN112304902A (en) * 2020-11-02 2021-02-02 航天宏图信息技术股份有限公司 Real-time monitoring method and device for crop phenology
CN112818749A (en) * 2020-12-31 2021-05-18 中国电子科技集团公司第二十七研究所 Multi-cropping mode remote sensing monitoring method for bulk grain and oil crops in double cropping area of one year
CN112861658A (en) * 2021-01-14 2021-05-28 中国科学院地理科学与资源研究所 Identification method for desertification control key area based on multi-source data
CN113158934A (en) * 2021-04-28 2021-07-23 中国科学院空天信息创新研究院 High-resolution remote sensing image-based urban land use classification method, device and equipment
CN113324656A (en) * 2021-05-28 2021-08-31 中国地质科学院 Unmanned aerial vehicle-mounted infrared remote sensing earth surface heat anomaly detection method and system
CN113870425A (en) * 2021-09-03 2021-12-31 中林信达(北京)科技信息有限责任公司 Forest accumulation space mapping method based on random forest and multi-source remote sensing technology
CN113984673A (en) * 2021-10-25 2022-01-28 中国科学院合肥物质科学研究院 Satellite-borne polarization scanning remote sensor data pre-correction method and device
CN114019082A (en) * 2021-11-19 2022-02-08 安徽省农业科学院土壤肥料研究所 Soil organic matter content monitoring method and system
CN114283335A (en) * 2021-12-27 2022-04-05 河南大学 Historical period remote sensing identification precision verification preparation method
CN114324410A (en) * 2021-12-31 2022-04-12 黄陵县农产品质量安全检验检测站 Multi-terrain microwave remote sensing soil humidity downscaling method
CN114494909A (en) * 2022-02-16 2022-05-13 中国科学院空天信息创新研究院 Method and system for generating spatial distribution diagram of soybean growing season
CN114646305A (en) * 2022-03-03 2022-06-21 湖南省测绘科技研究所 Intelligent identification method for surveying and mapping behaviors of unmanned aerial vehicle
CN114998742A (en) * 2022-06-16 2022-09-02 天津市生态环境科学研究院(天津市环境规划院、天津市低碳发展研究中心) Method for quickly identifying and extracting rice planting area in single-season rice planting area
CN115115948A (en) * 2022-07-26 2022-09-27 云南大学 Forest land information fine extraction method based on random forest and auxiliary factors
CN115205688A (en) * 2022-09-07 2022-10-18 浙江甲骨文超级码科技股份有限公司 Tea tree planting area extraction method and system
CN115291229A (en) * 2022-07-28 2022-11-04 中科三清科技有限公司 Method, device and equipment for identifying emergent aquatic vegetation in lake and storage medium
CN116883785A (en) * 2023-07-17 2023-10-13 中国科学院地理科学与资源研究所 Forest carbon density data set extraction method
CN117409333A (en) * 2023-12-15 2024-01-16 四川省生态环境科学研究院 Ecological fragile area identification and ecological restoration method based on remote sensing image
CN117475325A (en) * 2023-11-16 2024-01-30 中国科学院东北地理与农业生态研究所 Automatic film-covered farmland information extraction method based on remote sensing image
CN114842286B (en) * 2022-03-24 2024-02-23 西北工业大学 Large-scale remote sensing data set generation method based on real topography



Legal Events

Date Code Title Description
FGI Letters patent sealed or granted (innovation patent)
MK22 Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry