CN116579501A - Damaged forest remote sensing monitoring and evaluating method based on deep learning - Google Patents

Damaged forest remote sensing monitoring and evaluating method based on deep learning

Info

Publication number
CN116579501A
CN116579501A
Authority
CN
China
Prior art keywords
damaged
pixel
remote sensing
forest
training sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310765448.1A
Other languages
Chinese (zh)
Other versions
CN116579501B (en)
Inventor
赵曦琳
赵建勋
杜杰
刘永东
贾薇
余欣遥
肖维阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
JIUZHAI VALLEY SCENIC SPOT ADMINISTRATION
Chengdu University of Technology
Original Assignee
Chengdu University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu University of Technology
Publication of CN116579501A
Application granted
Publication of CN116579501B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/10Pre-processing; Data cleansing
    • G06F18/15Statistical pre-processing, e.g. techniques for normalisation or restoring missing data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217Validation; Performance evaluation; Active pattern learning techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Business, Economics & Management (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • Computational Linguistics (AREA)
  • Economics (AREA)
  • Biophysics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Development Economics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Game Theory and Decision Science (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a damaged forest remote sensing monitoring and evaluating method based on deep learning, which comprises the following steps: S1, acquiring a remote sensing data set, and obtaining the maximum damaged year of each pixel in the remote sensing data set by using the LandTrendr algorithm; S2, calculating the spectrum index of each pixel over the time sequence determined by its maximum damaged year, and calculating the related prediction variables; S3, collecting elevation, slope, precipitation and temperature data of each pixel and, combined with the prediction variables calculated from the spectrum indexes, selecting training samples; S4, using the training samples to train a Unet neural network model, identifying and extracting the forest region in the target image with the trained Unet neural network model to obtain a forest vegetation damage classification map, and completing the remote sensing monitoring and evaluation of the damaged forest.

Description

Damaged forest remote sensing monitoring and evaluating method based on deep learning
Technical Field
The invention relates to the field of forestry remote sensing monitoring, in particular to a damaged forest remote sensing monitoring evaluation method based on deep learning.
Background
Forests are the most widely distributed type of land vegetation and play a crucial role in the carbon fixation of regional and global terrestrial ecosystems. Urban change, climate change, natural disasters and similar factors disturb forest ecosystems to different degrees. Among these, natural disasters such as earthquakes, fires and floods have devastating effects on forest ecosystems: they are often rapid and large in scale and cause extensive vegetation damage. The destruction of vegetation and the reduction of vegetation coverage directly degrade the living conditions of other local organisms, reduce biomass, damage the structure of the ecosystem, and lower its function and stability, thereby increasing the uncertainty of its response to climate change. At the same time, forest disturbance and the subsequent vegetation recovery greatly affect forest resources, biodiversity and ecological processes. Effective forest management therefore requires accurate estimates of the spatial and temporal patterns of forest disturbance and of the conditions of forest recovery, providing valuable information for understanding forest dynamics and supporting the formulation of appropriate policies.
In the prior art, remote sensing monitoring, evaluation and detection of forest damage rely on changes in satellite image time series and are often affected by noise caused by undetected cloud, fog and seasonal variation, so the accuracy of the evaluation results is limited.
Disclosure of Invention
Aiming at the above defects in the prior art, the damaged forest remote sensing monitoring and evaluating method based on deep learning provided by the invention solves the problem that the prior art is affected by such noise.
In order to achieve the aim of the invention, the invention adopts the following technical scheme: the damaged forest remote sensing monitoring and evaluating method based on deep learning comprises the following steps:
s1, acquiring a remote sensing data set, and acquiring the maximum damaged year of each pixel in the remote sensing data set by using a LandTrendr algorithm;
s2, determining a time sequence according to the maximum damaged year of each pixel, calculating a spectrum index of each pixel in the time sequence, and calculating a prediction variable according to the spectrum index of each pixel in the time sequence;
s3, collecting elevation, slope, precipitation and temperature grid data of the damaged area, combining them with the prediction variables to form prediction indicators, and sampling the prediction indicators at the locations of disturbed pixels and undisturbed pixels respectively as training samples;
wherein the disturbed pixels are pixels whose maximum damaged year value is non-zero, and the undisturbed pixels are pixels whose maximum damaged year value is zero;
and S4, using the training sample to train the Unet neural network model, and using the trained Unet neural network model to identify and extract a forest region in the target image to obtain a forest vegetation damage classification map so as to complete the remote sensing monitoring and evaluation of the damaged forest.
Further: the step S1 comprises the following sub-steps:
s11, acquiring and arranging satellite images as remote sensing data sets, and preprocessing the remote sensing data sets to obtain preprocessed remote sensing data sets;
s12, stacking continuous multi-temporal remote sensing images in the preprocessed remote sensing data set to form a time sequence data set;
s13, using the LandTrendr algorithm to automatically identify the moments of change from the slope and timing of the time sequence data set, and dividing the time sequence data set into different segments;
s14, fitting is carried out according to the segments of the time sequence data set at different moments, and a fitting curve is obtained;
s15, calculating a spectrum index at a time point of a fitting curve to acquire information of specific surface features;
s16, acquiring information of specific surface features according to the change rate and the change amplitude in each time period of the fitting curve and the spectrum index at the time point of the fitting curve, and obtaining the maximum damaged year of each pixel.
Further: in the step S15, the spectrum indexes include normalized vegetation index NDVI, normalized combustion index NBR, leaf-cap conversion brightness TCB, leaf-cap conversion green degree TCG, leaf-cap conversion humidity TCW and leaf-cap conversion angle TCA.
Further: the step S2 includes the steps of:
s21, calculating a spectrum index of each pixel in the time sequence according to the maximum damaged year of each pixel;
s22, calculating a prediction variable according to the spectrum index of each pixel in the time sequence.
Further: the step S2 includes the steps of:
s21, calculating a spectrum index of each pixel in the time sequence according to the maximum damaged year of each pixel;
s22, calculating a prediction variable according to the spectrum index of each pixel in the time sequence.
Further: the prediction variables in the step S22 include:
the spectral index of the year before and the year after the maximum damaged year, for each index of each pixel;
the spectral index of the maximum damaged year, for each index of each pixel;
the average spectral index over all years before the maximum damaged year, for each index of each pixel;
the average spectral index over all years after the maximum damaged year, for each index of each pixel.
Further: the step S4 includes the following sub-steps:
s41, importing a training sample into a Unet neural network model, and performing four downsampling operations on the training sample to obtain a downsampled training sample;
s42, importing the training sample after the downsampling operation into a Unet neural network model, and performing four upsampling operations on the training sample after the downsampling operation to obtain the training sample after the upsampling operation;
s43, connecting and stacking the training samples subjected to the downsampling operation and the training samples subjected to the upsampling operation, and finally obtaining a high-dimensional feature map with the same size as the original image;
s44, performing a 1×1 convolution operation on the high-dimensional feature map, and performing a Softmax function operation to complete training and obtain a trained Unet neural network model;
s45, inputting a target image, identifying and extracting a forest region in the target image by using a trained Unet neural network model, obtaining a forest vegetation damage classification map, and completing remote sensing monitoring and evaluation of damaged forest.
Further: in the step S41, the four downsampling operations include the following sub-steps:
s4101, passing the training samples through two 3×3 convolution layers and two ReLU activation function layers applied alternately (the cross operation) to obtain a cross-operation result;
s4102, performing a max pooling operation with a 2×2 stride on the cross-operation result to obtain the max pooling result, completing one downsampling operation;
s4103, taking the maximum pooling operation result as a training sample of the next downsampling operation, repeating the steps S4101-S4102 until four downsampling operations are completed, and taking the result of the fourth downsampling operation as the training sample after the downsampling operation.
Further: in the step S42, the four upsampling operations include the following sub-steps:
s4201, performing a deconvolution operation on the training sample after the downsampling operation to obtain the training sample after the deconvolution operation, completing one upsampling operation;
s4202, taking the training sample after the deconvolution operation as the training sample after the downsampling operation, returning to step S4201 until the fourth upsampling operation is completed, and taking the result of the fourth upsampling operation as the training sample after the upsampling operation.
The beneficial effects of the invention are as follows:
1. the fitted spectral trajectory is adopted, so that noise from undetected cloud, fog and seasonal variation is reduced and the evaluation accuracy is improved;
2. the method can capture the long-term gradual change trend of the forest as well as detect abrupt changes, and therefore has strong applicability.
Drawings
Fig. 1 is a flowchart of a damaged forest remote sensing monitoring and evaluating method based on deep learning.
Detailed Description
The following description of the embodiments of the present invention is provided to help those skilled in the art understand the invention, but it should be understood that the invention is not limited to the scope of these embodiments; for those skilled in the art, all inventions that make use of the inventive concept fall within the spirit and scope of the invention as defined in the appended claims.
As shown in fig. 1, in one embodiment of the present invention, there is provided a damaged forest remote sensing monitoring and evaluating method based on deep learning, including the steps of:
s1, acquiring a remote sensing data set, and acquiring the maximum damaged year of each pixel in the remote sensing data set by using a LandTrendr algorithm;
s2, determining a time sequence according to the maximum damaged year of each pixel, calculating a spectrum index of each pixel in the time sequence, and calculating a prediction variable according to the spectrum index of each pixel in the time sequence;
s3, collecting elevation, slope, precipitation and temperature grid data of the damaged area, combining them with the prediction variables to form prediction indicators, and sampling the prediction indicators at the locations of disturbed pixels and undisturbed pixels respectively as training samples;
wherein the disturbed pixels are pixels whose maximum damaged year value is non-zero, and the undisturbed pixels are pixels whose maximum damaged year value is zero;
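To make step S3 concrete, the following minimal sketch shows one way the terrain and climate grids and the spectral prediction variables could be stacked and sampled at disturbed and undisturbed pixel locations. It is not taken from the original disclosure: the file names, the use of rasterio and numpy, the equal per-class sample count and the random sampling strategy are illustrative assumptions; only the zero / non-zero convention for the maximum-damaged-year raster follows the description above.

```python
# Sketch only: assemble predictor rasters and sample disturbed / undisturbed
# pixels using the maximum-damaged-year raster. File names are hypothetical.
import numpy as np
import rasterio

def load_band(path):
    """Read a single-band raster as a float32 array."""
    with rasterio.open(path) as src:
        return src.read(1).astype("float32")

# Terrain and climate grids plus a spectral prediction variable (assumed file names).
stack = np.stack([
    load_band("elevation.tif"),
    load_band("slope.tif"),
    load_band("precipitation.tif"),
    load_band("temperature.tif"),
    load_band("predictor_nbr_change.tif"),
], axis=0)                                      # shape: (bands, rows, cols)

# Maximum damaged year per pixel from LandTrendr; 0 means no detected disturbance.
max_damage_year = load_band("max_damage_year.tif")
dist_idx = np.argwhere(max_damage_year != 0)    # disturbed pixel locations
undist_idx = np.argwhere(max_damage_year == 0)  # undisturbed pixel locations

# Draw an equal number of sample locations from each class.
rng = np.random.default_rng(0)
n = min(5000, len(dist_idx), len(undist_idx))
dist_sample = dist_idx[rng.choice(len(dist_idx), n, replace=False)]
undist_sample = undist_idx[rng.choice(len(undist_idx), n, replace=False)]

samples = np.concatenate([dist_sample, undist_sample])          # (2n, 2) row/col
labels = np.concatenate([np.ones(n), np.zeros(n)]).astype("int64")
features = stack[:, samples[:, 0], samples[:, 1]].T             # (2n, bands)
```

In practice the sampled locations (or image tiles cut around them) would then be paired with their disturbed/undisturbed labels and passed to the network of step S4.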
and S4, using the training sample to train the Unet neural network model, and using the trained Unet neural network model to identify and extract a forest region in the target image to obtain a forest vegetation damage classification map so as to complete the remote sensing monitoring and evaluation of the damaged forest.
In this embodiment, the step S1 includes the following sub-steps:
s11, acquiring and arranging satellite images as remote sensing data sets, and preprocessing the remote sensing data sets to obtain preprocessed remote sensing data sets;
s12, stacking continuous multi-temporal remote sensing images in the preprocessed remote sensing data set to form a time sequence data set;
s13, using the LandTrendr algorithm to automatically identify the moments of change from the slope and timing of the time sequence data set, and dividing the time sequence data set into different segments;
s14, fitting is carried out according to the segments of the time sequence data set at different moments, and a fitting curve is obtained;
s15, calculating a spectrum index at a time point of a fitting curve to acquire information of specific surface features;
the spectrum indexes comprise the normalized difference vegetation index (NDVI), the normalized burn ratio (NBR), tasseled cap brightness (TCB), tasseled cap greenness (TCG), tasseled cap wetness (TCW) and tasseled cap angle (TCA);
s16, acquiring information of specific surface features according to the change rate and the change amplitude in each time period of the fitting curve and the spectrum index at the time point of the fitting curve, and obtaining the maximum damaged year of each pixel.
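The LandTrendr segmentation of steps S13-S16 is normally run with an existing implementation (for example on Google Earth Engine) and is not reproduced in this description. The sketch below is only a much simplified stand-in that illustrates the idea: fit a piecewise-linear trajectory to one pixel's annual index values and take the year with the largest fitted drop as that pixel's maximum damaged year. The two-segment restriction, the exhaustive breakpoint search and the convention of returning 0 when no decline is found are simplifying assumptions, not the actual LandTrendr algorithm.

```python
# Simplified stand-in for LandTrendr trajectory fitting (illustration only).
# Assumes at least six annual observations per pixel.
import numpy as np

def max_damage_year(years, index_values):
    """Fit two linear segments to an annual index series and return the year
    at which the fitted trajectory shows the largest year-to-year decrease."""
    years = np.asarray(years, dtype=float)
    values = np.asarray(index_values, dtype=float)
    best_err, best_fit = np.inf, None
    for b in range(2, len(years) - 2):           # candidate breakpoint positions
        left = np.polyfit(years[:b + 1], values[:b + 1], 1)
        right = np.polyfit(years[b:], values[b:], 1)
        fit = np.concatenate([np.polyval(left, years[:b + 1]),
                              np.polyval(right, years[b + 1:])])
        err = np.sum((fit - values) ** 2)
        if err < best_err:
            best_err, best_fit = err, fit
    drops = np.diff(best_fit)                    # fitted year-to-year change
    worst = int(np.argmin(drops))                # most negative change
    return int(years[worst + 1]) if drops[worst] < 0 else 0   # 0 = undisturbed

# Example: an NBR-like series with an abrupt disturbance in 2017.
# max_damage_year(range(2010, 2021), [0.6]*7 + [0.1, 0.2, 0.3, 0.35])
```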
In this embodiment, the step S2 includes the steps of:
s21, calculating a spectrum index of each pixel in the time sequence according to the maximum damaged year of each pixel;
s22, calculating a prediction variable according to the spectrum index of each pixel in the time sequence;
the prediction variables in the step S22 include:
the spectral index of the year before and the year after the maximum damaged year, for each index of each pixel;
the spectral index of the maximum damaged year, for each index of each pixel;
the average spectral index over all years before the maximum damaged year, for each index of each pixel;
the average spectral index over all years after the maximum damaged year, for each index of each pixel.
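A minimal sketch of how the prediction variables of step S22 could be derived from an annual spectral-index stack follows; the array shapes, the nested per-pixel loop and the handling of edge years are illustrative assumptions, not part of the original disclosure.

```python
# Sketch only: prediction variables from an annual spectral-index time series.
# `index_series` has shape (n_years, rows, cols), `years` is the sorted list of
# calendar years covered, and `damage_year` holds the maximum damaged year per
# pixel (0 = undisturbed). Damage years are assumed to fall within `years`.
import numpy as np

def prediction_variables(index_series, years, damage_year):
    years = np.asarray(years)
    n_years, rows, cols = index_series.shape
    names = ("prev_year", "damage_year", "next_year", "pre_mean", "post_mean")
    out = {k: np.full((rows, cols), np.nan, dtype="float32") for k in names}

    for r in range(rows):
        for c in range(cols):
            y = damage_year[r, c]
            if y == 0:
                continue                            # undisturbed pixel: no damage year
            t = int(np.searchsorted(years, y))      # position of the damage year
            series = index_series[:, r, c]
            out["damage_year"][r, c] = series[t]
            if t > 0:
                out["prev_year"][r, c] = series[t - 1]
                out["pre_mean"][r, c] = series[:t].mean()
            if t < n_years - 1:
                out["next_year"][r, c] = series[t + 1]
                out["post_mean"][r, c] = series[t + 1:].mean()
    return out
```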
In this embodiment, the step S4 includes the following sub-steps:
s41, importing a training sample into a Unet neural network model, and performing four downsampling operations on the training sample to obtain a downsampled training sample;
in the step S41, the four downsampling operations include the following sub-steps:
s4101, passing the training samples through two 3×3 convolution layers and two ReLU activation function layers applied alternately (the cross operation) to obtain a cross-operation result;
the activation function ReLU calculation formula is:
Relu(x)=max(x,0)
wherein ReLU(·) is the activation function, x is the input of the activation function, and max(·) is the maximum function;
s4102, performing a max pooling operation with a 2×2 stride on the cross-operation result to obtain the max pooling result, completing one downsampling operation;
s4103, taking the maximum pooling operation result as a training sample of the next downsampling operation, repeating the steps S4101-S4102 until four downsampling operations are completed, and taking the result of the fourth downsampling operation as the training sample after the downsampling operation;
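A minimal PyTorch sketch of one such downsampling operation (steps S4101-S4102) is shown below. The use of padding=1, which keeps feature-map sizes unchanged so that later skip connections align, and the channel counts in the usage example are assumptions for illustration.

```python
# Sketch only (PyTorch): one downsampling operation as in S4101-S4102.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DoubleConv(nn.Module):
    """Two 3x3 convolution + ReLU pairs (the 'cross operation' of S4101)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

def down_step(x, double_conv):
    """One downsampling operation: double conv, then 2x2 max pooling (S4102)."""
    features = double_conv(x)            # kept for the later skip connection
    pooled = F.max_pool2d(features, kernel_size=2)
    return features, pooled

# Example: the first of the four downsampling operations on a 6-band input tile.
# feats, x = down_step(torch.randn(1, 6, 256, 256), DoubleConv(6, 64))
```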
s42, importing the training sample after the downsampling operation into a Unet neural network model, and performing four upsampling operations on the training sample after the downsampling operation to obtain the training sample after the upsampling operation;
in the step S42, the four upsampling operations include the following sub-steps:
s4201, performing a deconvolution operation on the training sample after the downsampling operation to obtain the training sample after the deconvolution operation, completing one upsampling operation;
s4202, taking the training sample after the deconvolution operation as the training sample after the downsampling operation, returning to the step S4201 until the fourth upsampling operation is completed, and taking the result of the fourth upsampling operation as the training sample after the upsampling operation;
s43, connecting and stacking the training samples subjected to the downsampling operation and the training samples subjected to the upsampling operation, and finally obtaining a high-dimensional feature map with the same size as the original image;
s44, performing a 1×1 convolution operation on the high-dimensional feature map, and performing a Softmax function operation to complete training and obtain a trained Unet neural network model;
the Softmax function is calculated as:
n_i = Softmax(z)_i = e^(z_i) / (e^(z_1) + e^(z_2) + ... + e^(z_m))
wherein n_i is the output value of the i-th node, z is the input vector (or matrix) and z_i is its i-th element, m is the number of output nodes, i.e. the number of classification categories, and e is the natural constant.
S45, inputting a target image, identifying and extracting a forest region in the target image by using a trained Unet neural network model, obtaining a forest vegetation damage classification map, and completing remote sensing monitoring and evaluation of damaged forest.
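Putting steps S41-S44 together, the following PyTorch sketch assembles a four-level Unet with transposed-convolution upsampling, skip connections, a final 1×1 convolution and a Softmax over the class dimension. The channel widths, the six input predictor bands and the two-class output are assumptions for illustration, not values from the original disclosure; the DoubleConv block is repeated from the previous sketch so that this one is self-contained.

```python
# Sketch only (PyTorch): a four-level Unet matching steps S41-S44.
import torch
import torch.nn as nn

class DoubleConv(nn.Module):
    """Two 3x3 convolution + ReLU pairs (S4101)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class UNet(nn.Module):
    def __init__(self, in_channels=6, n_classes=2, base=64):
        super().__init__()
        chs = [base, base * 2, base * 4, base * 8]
        self.downs = nn.ModuleList()
        prev = in_channels
        for ch in chs:                                   # four encoder levels (S41)
            self.downs.append(DoubleConv(prev, ch))
            prev = ch
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = DoubleConv(chs[-1], chs[-1] * 2)

        self.ups = nn.ModuleList()
        self.up_convs = nn.ModuleList()
        prev = chs[-1] * 2
        for ch in reversed(chs):                         # four decoder levels (S42)
            self.ups.append(nn.ConvTranspose2d(prev, ch, kernel_size=2, stride=2))
            self.up_convs.append(DoubleConv(ch * 2, ch))
            prev = ch
        self.head = nn.Conv2d(chs[0], n_classes, kernel_size=1)   # 1x1 convolution (S44)

    def forward(self, x):
        skips = []
        for down in self.downs:
            x = down(x)
            skips.append(x)                              # feature map kept for the skip connection
            x = self.pool(x)
        x = self.bottleneck(x)
        for up, conv, skip in zip(self.ups, self.up_convs, reversed(skips)):
            x = up(x)
            x = torch.cat([skip, x], dim=1)              # connect and stack (S43)
            x = conv(x)
        return torch.softmax(self.head(x), dim=1)        # per-pixel class probabilities (S44)

# Example: class probabilities for one 256x256 tile with 6 predictor bands.
# probs = UNet()(torch.randn(1, 6, 256, 256))            # shape (1, 2, 256, 256)
```

For training with nn.CrossEntropyLoss one would normally pass the pre-Softmax logits to the loss function; the explicit Softmax is kept here only to mirror step S44.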
In the description of the present invention, it should be understood that the terms "center," "thickness," "upper," "lower," "horizontal," "top," "bottom," "inner," "outer," "radial," and the like indicate or are based on the orientation or positional relationship shown in the drawings, merely to facilitate description of the present invention and to simplify the description, and do not indicate or imply that the devices or elements referred to must have a particular orientation, be configured and operated in a particular orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be interpreted as indicating or implying a relative importance or number of technical features indicated. Thus, a feature defined as "first," "second," "third," or the like, may explicitly or implicitly include one or more such feature.

Claims (8)

1. The damaged forest remote sensing monitoring and evaluating method based on deep learning is characterized by comprising the following steps of:
s1, acquiring a remote sensing data set, and acquiring the maximum damaged year of each pixel in the remote sensing data set by using a LandTrendr algorithm;
s2, determining a time sequence according to the maximum damaged year of each pixel, calculating a spectrum index of each pixel in the time sequence, and calculating a prediction variable according to the spectrum index of each pixel in the time sequence;
s3, collecting elevation, slope, precipitation and temperature grid data of the damaged area, combining them with the prediction variables to form prediction indicators, and sampling the prediction indicators at the locations of disturbed pixels and undisturbed pixels respectively as training samples;
wherein the disturbed pixels are pixels whose maximum damaged year value is non-zero, and the undisturbed pixels are pixels whose maximum damaged year value is zero;
and S4, using the training sample to train the Unet neural network model, and using the trained Unet neural network model to identify and extract a forest region in the target image to obtain a forest vegetation damage classification map so as to complete the remote sensing monitoring and evaluation of the damaged forest.
2. The damaged forest remote sensing monitoring and evaluating method based on deep learning according to claim 1, wherein the step S1 comprises the following sub-steps:
s11, acquiring and arranging satellite images as remote sensing data sets, and preprocessing the remote sensing data sets to obtain preprocessed remote sensing data sets;
s12, stacking continuous multi-temporal remote sensing images in the preprocessed remote sensing data set to form a time sequence data set;
s13, using the LandTrendr algorithm to automatically identify the moments of change from the slope and timing of the time sequence data set, and dividing the time sequence data set into different segments;
s14, fitting is carried out according to the segments of the time sequence data set at different moments, and a fitting curve is obtained;
s15, calculating a spectrum index at a time point of a fitting curve to acquire information of specific surface features;
s16, acquiring information of specific surface features according to the change rate and the change amplitude in each time period of the fitting curve and the spectrum index at the time point of the fitting curve, and obtaining the maximum damaged year of each pixel.
3. The method according to claim 2, wherein in the step S15, the spectrum indexes include the normalized difference vegetation index (NDVI), the normalized burn ratio (NBR), tasseled cap brightness (TCB), tasseled cap greenness (TCG), tasseled cap wetness (TCW) and tasseled cap angle (TCA).
4. A damaged forest remote sensing monitoring and evaluating method based on deep learning according to claim 3, wherein the step S2 comprises the steps of:
s21, calculating a spectrum index of each pixel in the time sequence according to the maximum damaged year of each pixel;
s22, calculating a prediction variable according to the spectrum index of each pixel in the time sequence.
5. The method for evaluating the remote sensing monitoring of the damaged forest based on deep learning according to claim 4, wherein the predicted variables in step S22 comprise:
the spectral index of the year before and the year after the maximum damaged year, for each index of each pixel;
the spectral index of the maximum damaged year, for each index of each pixel;
the average spectral index over all years before the maximum damaged year, for each index of each pixel;
the average spectral index over all years after the maximum damaged year, for each index of each pixel.
6. The damaged forest remote sensing monitoring and evaluating method based on deep learning according to claim 5, wherein the method comprises the following steps: the step S4 includes the following sub-steps:
s41, importing a training sample into a Unet neural network model, and performing four downsampling operations on the training sample to obtain a downsampled training sample;
s42, importing the training sample after the downsampling operation into a Unet neural network model, and performing four upsampling operations on the training sample after the downsampling operation to obtain the training sample after the upsampling operation;
s43, connecting and stacking the training samples subjected to the downsampling operation and the training samples subjected to the upsampling operation, and finally obtaining a high-dimensional feature map with the same size as the original image;
s44, performing a 1×1 convolution operation on the high-dimensional feature map, and performing a Softmax function operation to complete training and obtain a trained Unet neural network model;
s45, inputting a target image, identifying and extracting a forest region in the target image by using a trained Unet neural network model, obtaining a forest vegetation damage classification map, and completing remote sensing monitoring and evaluation of damaged forest.
7. The damaged forest remote sensing monitoring and evaluating method based on deep learning according to claim 6, wherein the method comprises the following steps: in the step S41, the four downsampling operations include the following sub-steps:
s4101, passing the training samples through two 3×3 convolution layers and two ReLU activation function layers applied alternately (the cross operation) to obtain a cross-operation result;
s4102, performing a max pooling operation with a 2×2 stride on the cross-operation result to obtain the max pooling result, completing one downsampling operation;
s4103, taking the maximum pooling operation result as a training sample of the next downsampling operation, repeating the steps S4101-S4102 until four downsampling operations are completed, and taking the result of the fourth downsampling operation as the training sample after the downsampling operation.
8. The damaged forest remote sensing monitoring and evaluating method based on deep learning according to claim 7, wherein: in the step S42, the four upsampling operations include the following sub-steps:
s4201, performing a deconvolution operation on the training sample after the downsampling operation to obtain the training sample after the deconvolution operation, completing one upsampling operation;
s4202, taking the training sample after the deconvolution operation as the training sample after the downsampling operation, returning to step S4201 until the fourth upsampling operation is completed, and taking the result of the fourth upsampling operation as the training sample after the upsampling operation.
CN202310765448.1A 2023-05-17 2023-06-25 Damaged forest remote sensing monitoring and evaluating method based on deep learning Active CN116579501B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2023105600881 2023-05-17
CN202310560088 2023-05-17

Publications (2)

Publication Number Publication Date
CN116579501A true CN116579501A (en) 2023-08-11
CN116579501B CN116579501B (en) 2024-07-09

Family

ID=87545482

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310765448.1A Active CN116579501B (en) 2023-05-17 2023-06-25 Damaged forest remote sensing monitoring and evaluating method based on deep learning

Country Status (1)

Country Link
CN (1) CN116579501B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114399685A (en) * 2022-03-25 2022-04-26 航天宏图信息技术股份有限公司 Remote sensing monitoring and evaluating method and device for forest diseases and insect pests

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114399685A (en) * 2022-03-25 2022-04-26 航天宏图信息技术股份有限公司 Remote sensing monitoring and evaluating method and device for forest diseases and insect pests

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
MAIN-KNORN, M. et al.: "Monitoring coniferous forest biomass change using a Landsat trajectory-based approach", Remote Sensing of Environment, vol. 139, 31 December 2013 (2013-12-31), pages 277-290 *
HUA, Jianwen: "Research on the spatio-temporal patterns of forest disturbance and recovery based on the LandTrendr algorithm and machine learning", China Masters' Theses Full-text Database, no. 02, 15 February 2022 (2022-02-15), pages 1-83 *
ZHANG, Lianhua; PANG, Yong; YUE, Cairong; LI, Zengyuan: "Automatic identification of forest disturbance from time-series Landsat imagery of Jinghong City based on the tasseled cap transformation", Forest Inventory and Planning, vol. 38, no. 02, 15 April 2013 (2013-04-15), pages 6-12 *
FAN, Mingming et al.: "Laboratory study of modified sticky rice mortar and its application in repairing travertine geological fissures in Jiuzhaigou", Hydrogeology & Engineering Geology, vol. 47, no. 4, 15 July 2020 (2020-07-15), pages 183-190 *

Also Published As

Publication number Publication date
CN116579501B (en) 2024-07-09

Similar Documents

Publication Publication Date Title
US11094040B2 (en) Noise detection method for time-series vegetation index derived from remote sensing images
US8548248B2 (en) Correlated land change system and method
Mafi-Gholami et al. Mangrove regional feedback to sea level rise and drought intensity at the end of the 21st century
CN111062368B (en) City update region monitoring method based on Landsat time sequence remote sensing image
Tran et al. Characterising spatiotemporal vegetation variations using LANDSAT time‐series and Hurst exponent index in the Mekong River Delta
Szantoi et al. A tool for rapid post-hurricane urban tree debris estimates using high resolution aerial imagery
CN115512223B (en) Mangrove forest dynamic monitoring method integrating multiple change detection algorithms
Ghansah et al. Monitoring spatial-temporal variations of surface areas of small reservoirs in Ghana's Upper East Region using Sentinel-2 satellite imagery and machine learning
Hashim et al. Environmental monitoring and prediction of land use and land cover spatio-temporal changes: a case study from El-Omayed Biosphere Reserve, Egypt
Dong et al. Mapping of small water bodies with integrated spatial information for time series images of optical remote sensing
Lin et al. Earthquake-induced landslide hazard and vegetation recovery assessment using remotely sensed data and a neural network-based classifier: a case study in central Taiwan
Islam et al. Land-Cover Classification and its Impact on Peshawar's Land Surface Temperature Using Remote Sensing.
Nguyen-Trong et al. Coastal forest cover change detection using satellite images and convolutional neural networks in Vietnam
Colditz et al. Detecting change areas in Mexico between 2005 and 2010 using 250 m MODIS images
Zou et al. Mapping individual abandoned houses across cities by integrating VHR remote sensing and street view imagery
Wang et al. Vegetation coverage precisely extracting and driving factors analysis in drylands
Mustafa et al. RETRACTED: Water surface area detection using remote sensing temporal data processed using MATLAB
Mukhopadhyay et al. Forest cover change prediction using hybrid methodology of geoinformatics and Markov chain model: A case study on sub-Himalayan town Gangtok, India
Gueguen et al. Urbanization detection by a region based mixed information change analysis between built-up indicators
Santos et al. Coastal evolution and future projections in Conde County, Brazil: A multi-decadal assessment via remote sensing and sea-level rise scenarios
CN116579501B (en) Damaged forest remote sensing monitoring and evaluating method based on deep learning
Farhadi et al. A novel flood/water extraction index (FWEI) for identifying water and flooded areas using sentinel-2 visible and near-infrared spectral bands
Santra et al. Quantifying shoreline dynamics in the Indian Sundarban delta with Google Earth Engine (GEE)-based automatic extraction approach
Lanka et al. Change detection mapping using landsat synthetic aperture radar images
CN113657275A (en) Automatic detection method for forest and grass fire points

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240521

Address after: Three road 610051 Sichuan city of Chengdu Province, No. 1 East

Applicant after: Chengdu University of Technology

Country or region after: China

Applicant after: JIUZHAI VALLEY SCENIC SPOT ADMINISTRATION

Address before: Three road 610059 Sichuan city of Chengdu Province, No. 1 East

Applicant before: Chengdu University of Technology

Country or region before: China

GR01 Patent grant