CN116579501A - Damaged forest remote sensing monitoring and evaluating method based on deep learning - Google Patents
- Publication number
- CN116579501A CN116579501A CN202310765448.1A CN202310765448A CN116579501A CN 116579501 A CN116579501 A CN 116579501A CN 202310765448 A CN202310765448 A CN 202310765448A CN 116579501 A CN116579501 A CN 116579501A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/10—Pre-processing; Data cleansing
- G06F18/15—Statistical pre-processing, e.g. techniques for normalisation or restoring missing data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/217—Validation; Performance evaluation; Active pattern learning techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention discloses a damaged forest remote sensing monitoring and evaluating method based on deep learning, which comprises the following steps: S1, acquiring a remote sensing data set, and obtaining the maximum damaged year of each pixel in the data set by using the LandTrendr algorithm; S2, calculating a spectral index for each pixel over the time sequence according to its maximum damaged year, and deriving the related prediction variables; S3, collecting elevation, slope, precipitation and temperature data for each pixel, and selecting training samples by combining them with the prediction variables calculated from the spectral indices; S4, using the training samples to train a Unet neural network model, then using the trained Unet neural network model to identify and extract the forest regions in the target image, obtaining a forest vegetation damage classification map and completing the remote sensing monitoring and evaluation of the damaged forest.
Description
Technical Field
The invention relates to the field of forestry remote sensing monitoring, in particular to a damaged forest remote sensing monitoring evaluation method based on deep learning.
Background
Forests are the most widely distributed of all land vegetation types and play a crucial role in carbon sequestration by regional and global terrestrial ecosystems. Urbanization, climate change, and natural disasters disturb forest ecosystems to different degrees. Among these, natural disasters such as earthquakes, fires, and floods have devastating effects on forest ecosystems: they are often rapid and large-scale, causing extensive vegetation damage. The destruction of vegetation and reduction of vegetation coverage directly degrade the living conditions of other local organisms; biomass decreases, the structure of the ecosystem is damaged, and its functions and stability decline, increasing the uncertainty of its response to climate change. At the same time, forest disturbance and subsequent vegetation restoration greatly affect forest resources, biodiversity, and ecological processes. Effective forest management therefore requires accurate estimation of the spatial and temporal patterns of forest disturbance and of the conditions for forest recovery, providing valuable information for understanding forest dynamics and supporting the formulation of appropriate policies.
In the prior art, remote sensing monitoring and evaluation of forest damage is performed by detecting changes in satellite image time series, which are often affected by noise from undetected cloud, fog, and seasonal change, limiting the accuracy of the evaluation results.
Disclosure of Invention
Aiming at the above defects in the prior art, the damaged forest remote sensing monitoring and evaluating method based on deep learning provided by the invention solves the problem that the prior art is affected by such noise.
In order to achieve the aim of the invention, the invention adopts the following technical scheme: the damaged forest remote sensing monitoring and evaluating method based on deep learning comprises the following steps:
s1, acquiring a remote sensing data set, and acquiring the maximum damaged year of each pixel in the remote sensing data set by using a LandTrendr algorithm;
s2, determining a time sequence according to the maximum damaged year of each pixel, calculating a spectrum index of each pixel in the time sequence, and calculating a prediction variable according to the spectrum index of each pixel in the time sequence;
s3, collecting elevation, slope, precipitation, and temperature raster data of the damaged area, combining them with the prediction variables to form prediction indicators, and sampling the prediction indicators at the locations of disturbed and undisturbed pixels respectively as training samples;
wherein the disturbed pixels are pixels whose value in the maximum-damaged-year layer is nonzero, and the undisturbed pixels are pixels whose value in that layer is 0;
and S4, using the training sample to train the Unet neural network model, and using the trained Unet neural network model to identify and extract a forest region in the target image to obtain a forest vegetation damage classification map so as to complete the remote sensing monitoring and evaluation of the damaged forest.
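As a concrete illustration of the sampling rule in step S3, the split into disturbed and undisturbed pixels can be sketched as follows. This is a minimal sketch, not the patented implementation; it assumes the LandTrendr output is a 2-D array of maximum damaged years in which 0 marks pixels that were never disturbed, and the function name is illustrative.

```python
import numpy as np

def label_pixels(max_damaged_year):
    """Split pixel coordinates into disturbed (nonzero year) and
    undisturbed (zero year) sets, as described in step S3."""
    year = np.asarray(max_damaged_year)
    disturbed = np.argwhere(year != 0)    # (row, col) of disturbed pixels
    undisturbed = np.argwhere(year == 0)  # (row, col) of undisturbed pixels
    return disturbed, undisturbed

# Toy 3x3 maximum-damaged-year raster: 0 means "never disturbed"
raster = np.array([[0, 2015, 0],
                   [2018, 0, 0],
                   [0, 0, 2020]])
d, u = label_pixels(raster)
print(len(d), len(u))  # 3 6
```

The prediction indicators would then be sampled at the `d` and `u` coordinates respectively to build the two classes of training samples.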
Further: the step S1 comprises the following sub-steps:
s11, acquiring and arranging satellite images as remote sensing data sets, and preprocessing the remote sensing data sets to obtain preprocessed remote sensing data sets;
s12, stacking continuous multi-temporal remote sensing images in the preprocessed remote sensing data set to form a time sequence data set;
s13, using the LandTrendr algorithm to automatically identify change points from the slope of the time-series data set over time, and dividing the time series into segments;
s14, fitting each segment of the time-series data set to obtain a fitted curve;
s15, calculating spectral indices at the time points of the fitted curve to obtain information on specific land-surface features;
s16, obtaining the maximum damaged year of each pixel from the rate and magnitude of change within each period of the fitted curve together with the spectral indices at its time points.
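The idea behind steps S13-S16 can be illustrated with a deliberately simplified stand-in for LandTrendr: instead of the full piecewise-linear segmentation and fitting, the sketch below takes the maximum damaged year of a pixel to be the year ending the largest one-step drop in its (already fitted) spectral-index trajectory. The function name and the toy NBR series are illustrative assumptions, not part of the patent.

```python
import numpy as np

def max_damaged_year(years, index_series):
    """Simplified stand-in for LandTrendr temporal segmentation: the
    maximum damaged year is the year ending the largest one-step drop
    in the fitted spectral-index trajectory; 0 means no disturbance."""
    diffs = np.diff(index_series)          # year-to-year change
    if diffs.min() >= 0:
        return 0                           # trajectory never drops
    return int(years[np.argmin(diffs) + 1])

years = np.arange(2010, 2017)
nbr = [0.62, 0.60, 0.61, 0.25, 0.30, 0.38, 0.45]  # a fire in 2013
print(max_damaged_year(years, nbr))  # 2013
```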
Further: in the step S15, the spectrum indexes include normalized vegetation index NDVI, normalized combustion index NBR, leaf-cap conversion brightness TCB, leaf-cap conversion green degree TCG, leaf-cap conversion humidity TCW and leaf-cap conversion angle TCA.
Further: the step S2 includes the steps of:
s21, calculating a spectrum index of each pixel in the time sequence according to the maximum damaged year of each pixel;
s22, calculating a prediction variable according to the spectrum index of each pixel in the time sequence.
Further: the step S2 includes the steps of:
s21, calculating a spectrum index of each pixel in the time sequence according to the maximum damaged year of each pixel;
s22, calculating a prediction variable according to the spectrum index of each pixel in the time sequence.
Further: the prediction variables in the step S22 include:
spectral indexes of the previous year and the next year of the maximum damaged year of each index of each pixel;
spectral indices for the maximum damaged year for each index for each pixel;
average spectral index for all years before the maximum damaged year for each index for each pixel;
the average spectral index for all years after the maximum damaged year for each index for each pixel.
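The four prediction variables listed above can be computed per pixel and per index roughly as follows. This is a hedged sketch assuming an annual series aligned with a known maximum damaged year; the function and key names are illustrative, not from the patent.

```python
import numpy as np

def prediction_variables(years, series, damaged_year):
    """The prediction variables for one pixel and one spectral index,
    given its annual series and maximum damaged year."""
    years = np.asarray(years)
    series = np.asarray(series, dtype=float)
    i = int(np.where(years == damaged_year)[0][0])
    return {
        "year_before": series[i - 1],        # index the year before
        "damaged_year": series[i],           # index in the damaged year
        "year_after": series[i + 1],         # index the year after
        "pre_mean": series[:i].mean(),       # mean of all earlier years
        "post_mean": series[i + 1:].mean(),  # mean of all later years
    }

v = prediction_variables([2011, 2012, 2013, 2014, 2015],
                         [0.8, 0.7, 0.2, 0.4, 0.6], 2013)
print(v["pre_mean"], v["post_mean"])  # 0.75 0.5
```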
Further: the step S4 includes the following sub-steps:
s41, importing a training sample into a Unet neural network model, and performing four downsampling operations on the training sample to obtain a downsampled training sample;
s42, importing the training sample after the downsampling operation into a Unet neural network model, and performing four upsampling operations on the training sample after the downsampling operation to obtain the training sample after the upsampling operation;
s43, concatenating (skip-connecting) the feature maps from the downsampling path with the corresponding feature maps from the upsampling path, finally obtaining a high-dimensional feature map of the same size as the original image;
s44, applying a 1×1 convolution to the high-dimensional feature map followed by a Softmax function operation to complete training and obtain a trained Unet neural network model;
s45, inputting a target image, identifying and extracting the forest regions in it with the trained Unet neural network model, obtaining a forest vegetation damage classification map, and completing the remote sensing monitoring and evaluation of the damaged forest.
Further: in the step S41, the four downsampling operations include the following sub-steps:
s4101, alternately applying two 3×3 convolution layers and two ReLU activation layers to the training sample (the cross operation) to obtain a cross-operation result;
s4102, applying 2×2 max pooling with stride 2 to the cross-operation result to obtain a max-pooling result, completing one downsampling operation;
s4103, taking the max-pooling result as the training sample of the next downsampling operation and repeating steps S4101-S4102 until four downsampling operations are completed, the result of the fourth downsampling operation being the downsampled training sample.
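The ReLU activation and the 2×2 stride-2 max pooling of one downsampling operation can be sketched in NumPy as follows (the 3×3 convolutions, whose weights are learned, are omitted; function names are illustrative):

```python
import numpy as np

def relu(x):
    """ReLU activation: Relu(x) = max(x, 0)."""
    return np.maximum(x, 0)

def max_pool_2x2(x):
    """2x2 max pooling with stride 2 on an (H, W) feature map
    (H and W assumed even), as in step S4102."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fm = np.array([[1., -2., 3., 0.],
               [4., 0., -1., 2.],
               [0., 5., 1., 1.],
               [-3., 2., 0., 6.]])
print(max_pool_2x2(relu(fm)))  # [[4. 3.] [5. 6.]]
```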
Further: in the step S42, the four upsampling operations include the following sub-steps:
s4201, applying a deconvolution (transposed convolution) operation to the downsampled training sample to obtain a deconvolved training sample, completing one upsampling operation;
s4202, taking the deconvolved training sample as the input of the next upsampling operation and returning to step S4201 until four upsampling operations are completed, the result of the fourth upsampling operation being the upsampled training sample.
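One upsampling step can be sketched as follows. With a 2×2 kernel and stride 2, a transposed convolution ("deconvolution") of a feature map is the Kronecker product of the map with the kernel; in a real Unet the kernel weights are learned, so the all-ones kernel here is only an illustrative assumption (it reduces to nearest-neighbor upsampling).

```python
import numpy as np

def upsample_2x(x, kernel=None):
    """One upsampling step as in S4201: with a 2x2 kernel and stride 2,
    transposed convolution of an (H, W) map equals the Kronecker
    product of the map with the kernel (the 2x2 blocks do not overlap)."""
    if kernel is None:
        kernel = np.ones((2, 2))  # stand-in for learned weights
    return np.kron(x, kernel)

x = np.array([[1., 2.],
              [3., 4.]])
print(upsample_2x(x).shape)  # (4, 4)
```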
The beneficial effects of the invention are as follows:
1. the fitted spectrum track is adopted, so that noise of undetected cloud, fog and seasonal variation is reduced, and the evaluation accuracy is improved;
2. the method can capture long-term gradual change trends in the forest as well as detect abrupt changes, and therefore has strong applicability.
Drawings
Fig. 1 is a flowchart of a damaged forest remote sensing monitoring and evaluating method based on deep learning.
Detailed Description
The following description of the embodiments of the present invention is provided to help those skilled in the art understand the invention, but it should be understood that the invention is not limited to the scope of these embodiments; for those skilled in the art, all inventions making use of the inventive concept fall within the protection of the appended claims.
As shown in fig. 1, in one embodiment of the present invention, there is provided a damaged forest remote sensing monitoring and evaluating method based on deep learning, including the steps of:
s1, acquiring a remote sensing data set, and acquiring the maximum damaged year of each pixel in the remote sensing data set by using a LandTrendr algorithm;
s2, determining a time sequence according to the maximum damaged year of each pixel, calculating a spectrum index of each pixel in the time sequence, and calculating a prediction variable according to the spectrum index of each pixel in the time sequence;
s3, collecting elevation, slope, precipitation, and temperature raster data of the damaged area, combining them with the prediction variables to form prediction indicators, and sampling the prediction indicators at the locations of disturbed and undisturbed pixels respectively as training samples;
wherein the disturbed pixels are pixels whose value in the maximum-damaged-year layer is nonzero, and the undisturbed pixels are pixels whose value in that layer is 0;
and S4, using the training sample to train the Unet neural network model, and using the trained Unet neural network model to identify and extract a forest region in the target image to obtain a forest vegetation damage classification map so as to complete the remote sensing monitoring and evaluation of the damaged forest.
In this embodiment, the step S1 includes the following sub-steps:
s11, acquiring and arranging satellite images as remote sensing data sets, and preprocessing the remote sensing data sets to obtain preprocessed remote sensing data sets;
s12, stacking continuous multi-temporal remote sensing images in the preprocessed remote sensing data set to form a time sequence data set;
s13, using the LandTrendr algorithm to automatically identify change points from the slope of the time-series data set over time, and dividing the time series into segments;
s14, fitting each segment of the time-series data set to obtain a fitted curve;
s15, calculating spectral indices at the time points of the fitted curve to obtain information on specific land-surface features;
the spectral indices include the normalized difference vegetation index NDVI, the normalized burn ratio NBR, tasseled cap brightness TCB, tasseled cap greenness TCG, tasseled cap wetness TCW, and tasseled cap angle TCA;
s16, obtaining the maximum damaged year of each pixel from the rate and magnitude of change within each period of the fitted curve together with the spectral indices at its time points.
In this embodiment, the step S2 includes the steps of:
s21, calculating a spectral index of each pixel in the time sequence according to the maximum damaged year of each pixel;
s22, calculating the prediction variables according to the spectral index of each pixel in the time sequence;
the prediction variables in the step S22 include, for each index of each pixel:
the spectral index of the year before and of the year after the maximum damaged year;
the spectral index of the maximum damaged year;
the average spectral index over all years before the maximum damaged year;
the average spectral index over all years after the maximum damaged year.
In this embodiment, the step S4 includes the following sub-steps:
s41, importing a training sample into a Unet neural network model, and performing four downsampling operations on the training sample to obtain a downsampled training sample;
in the step S41, the four downsampling operations include the following sub-steps:
s4101, alternately applying two 3×3 convolution layers and two ReLU activation layers to the training sample (the cross operation) to obtain a cross-operation result;
the activation function ReLU is calculated as:
Relu(x) = max(x, 0)
where Relu(·) is the activation function, x is its input, and max(·) is the maximum function;
s4102, applying 2×2 max pooling with stride 2 to the cross-operation result to obtain a max-pooling result, completing one downsampling operation;
s4103, taking the max-pooling result as the training sample of the next downsampling operation and repeating steps S4101-S4102 until four downsampling operations are completed, the result of the fourth downsampling operation being the downsampled training sample;
s42, importing the training sample after the downsampling operation into a Unet neural network model, and performing four upsampling operations on the training sample after the downsampling operation to obtain the training sample after the upsampling operation;
in the step S42, the four upsampling operations include the following sub-steps:
s4201, applying a deconvolution (transposed convolution) operation to the downsampled training sample to obtain a deconvolved training sample, completing one upsampling operation;
s4202, taking the deconvolved training sample as the input of the next upsampling operation and returning to step S4201 until four upsampling operations are completed, the result of the fourth upsampling operation being the upsampled training sample;
s43, concatenating (skip-connecting) the feature maps from the downsampling path with the corresponding feature maps from the upsampling path, finally obtaining a high-dimensional feature map of the same size as the original image;
s44, applying a 1×1 convolution to the high-dimensional feature map followed by a Softmax function operation to complete training and obtain a trained Unet neural network model;
the Softmax function is calculated as:
Softmax(z)_i = e^(z_i) / Σ_(j=1..m) e^(z_j)
where z_i is the output value of the i-th node, z is the input vector, m is the number of output nodes (i.e., the number of classification categories), and e is the natural constant.
S45, inputting a target image, identifying and extracting a forest region in the target image by using a trained Unet neural network model, obtaining a forest vegetation damage classification map, and completing remote sensing monitoring and evaluation of damaged forest.
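The Softmax operation used in step S44 can be sketched as follows; the max-subtraction is a standard numerical-stability trick and is an addition not stated in the patent.

```python
import numpy as np

def softmax(z):
    """Softmax over the m class outputs: e^(z_i) / sum_j e^(z_j)."""
    e = np.exp(z - z.max())  # subtracting the max avoids overflow
    return e / e.sum()

z = np.array([2.0, 1.0, 0.1])
p = softmax(z)
print(p.argmax())  # 0  (class with the largest output)
```

In the Unet, this is applied per pixel over the class channels of the 1×1-convolved feature map to yield the class probabilities of the damage classification map.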
In the description of the present invention, it should be understood that the terms "center," "thickness," "upper," "lower," "horizontal," "top," "bottom," "inner," "outer," "radial," and the like indicate or are based on the orientation or positional relationship shown in the drawings, merely to facilitate description of the present invention and to simplify the description, and do not indicate or imply that the devices or elements referred to must have a particular orientation, be configured and operated in a particular orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be interpreted as indicating or implying a relative importance or number of technical features indicated. Thus, a feature defined as "first," "second," "third," or the like, may explicitly or implicitly include one or more such feature.
Claims (8)
1. The damaged forest remote sensing monitoring and evaluating method based on deep learning is characterized by comprising the following steps of:
s1, acquiring a remote sensing data set, and acquiring the maximum damaged year of each pixel in the remote sensing data set by using a LandTrendr algorithm;
s2, determining a time sequence according to the maximum damaged year of each pixel, calculating a spectrum index of each pixel in the time sequence, and calculating a prediction variable according to the spectrum index of each pixel in the time sequence;
s3, collecting elevation, slope, precipitation, and temperature raster data of the damaged area, combining them with the prediction variables to form prediction indicators, and sampling the prediction indicators at the locations of disturbed and undisturbed pixels respectively as training samples;
wherein the disturbed pixels are pixels whose value in the maximum-damaged-year layer is nonzero, and the undisturbed pixels are pixels whose value in that layer is 0;
and S4, using the training sample to train the Unet neural network model, and using the trained Unet neural network model to identify and extract a forest region in the target image to obtain a forest vegetation damage classification map so as to complete the remote sensing monitoring and evaluation of the damaged forest.
2. The damaged forest remote sensing monitoring and evaluating method based on deep learning according to claim 1, wherein the step S1 comprises the following sub-steps:
s11, acquiring and arranging satellite images as remote sensing data sets, and preprocessing the remote sensing data sets to obtain preprocessed remote sensing data sets;
s12, stacking continuous multi-temporal remote sensing images in the preprocessed remote sensing data set to form a time sequence data set;
s13, using the LandTrendr algorithm to automatically identify change points from the slope of the time-series data set over time, and dividing the time series into segments;
s14, fitting each segment of the time-series data set to obtain a fitted curve;
s15, calculating spectral indices at the time points of the fitted curve to obtain information on specific land-surface features;
s16, obtaining the maximum damaged year of each pixel from the rate and magnitude of change within each period of the fitted curve together with the spectral indices at its time points.
3. The method according to claim 2, wherein in the step S15, the spectral indices include the normalized difference vegetation index NDVI, the normalized burn ratio NBR, tasseled cap brightness TCB, tasseled cap greenness TCG, tasseled cap wetness TCW, and tasseled cap angle TCA.
4. A damaged forest remote sensing monitoring and evaluating method based on deep learning according to claim 3, wherein the step S2 comprises the steps of:
s21, calculating a spectrum index of each pixel in the time sequence according to the maximum damaged year of each pixel;
s22, calculating a prediction variable according to the spectrum index of each pixel in the time sequence.
5. The method for evaluating the remote sensing monitoring of the damaged forest based on deep learning according to claim 4, wherein the predicted variables in step S22 comprise:
spectral indexes of the previous year and the next year of the maximum damaged year of each index of each pixel;
spectral indices for the maximum damaged year for each index for each pixel;
average spectral index for all years before the maximum damaged year for each index for each pixel;
the average spectral index for all years after the maximum damaged year for each index for each pixel.
6. The damaged forest remote sensing monitoring and evaluating method based on deep learning according to claim 5, wherein the method comprises the following steps: the step S4 includes the following sub-steps:
s41, importing a training sample into a Unet neural network model, and performing four downsampling operations on the training sample to obtain a downsampled training sample;
s42, importing the training sample after the downsampling operation into a Unet neural network model, and performing four upsampling operations on the training sample after the downsampling operation to obtain the training sample after the upsampling operation;
s43, connecting and stacking the training samples subjected to the downsampling operation and the training samples subjected to the upsampling operation, and finally obtaining a high-dimensional feature map with the same size as the original image;
s44, performing a 1×1 convolution operation on the high-dimensional feature map, followed by a Softmax function operation, to complete training and obtain a trained Unet neural network model;
s45, inputting a target image, identifying and extracting a forest region in the target image by using a trained Unet neural network model, obtaining a forest vegetation damage classification map, and completing remote sensing monitoring and evaluation of damaged forest.
7. The damaged forest remote sensing monitoring and evaluating method based on deep learning according to claim 6, wherein the method comprises the following steps: in the step S41, the four downsampling operations include the following sub-steps:
s4101, alternately applying two 3×3 convolution layers and two ReLU activation layers to the training sample (the cross operation) to obtain a cross-operation result;
s4102, applying 2×2 max pooling with stride 2 to the cross-operation result to obtain a max-pooling result, completing one downsampling operation;
s4103, taking the maximum pooling operation result as a training sample of the next downsampling operation, repeating the steps S4101-S4102 until four downsampling operations are completed, and taking the result of the fourth downsampling operation as the training sample after the downsampling operation.
8. The damaged forest remote sensing monitoring and evaluating method based on deep learning according to claim 7, wherein: in the step S42, the four upsampling operations include the following sub-steps:
s4201, applying a deconvolution (transposed convolution) operation to the downsampled training sample to obtain a deconvolved training sample, completing one upsampling operation;
s4202, taking the deconvolved training sample as the input of the next upsampling operation and returning to step S4201 until four upsampling operations are completed, the result of the fourth upsampling operation being the upsampled training sample.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2023105600881 | 2023-05-17 | ||
CN202310560088 | 2023-05-17 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116579501A true CN116579501A (en) | 2023-08-11 |
CN116579501B CN116579501B (en) | 2024-07-09 |
Family
ID=87545482
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310765448.1A Active CN116579501B (en) | 2023-05-17 | 2023-06-25 | Damaged forest remote sensing monitoring and evaluating method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116579501B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114399685A (en) * | 2022-03-25 | 2022-04-26 | 航天宏图信息技术股份有限公司 | Remote sensing monitoring and evaluating method and device for forest diseases and insect pests |
2023-06-25: CN application CN202310765448.1A, patent CN116579501B, status Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114399685A (en) * | 2022-03-25 | 2022-04-26 | 航天宏图信息技术股份有限公司 | Remote sensing monitoring and evaluating method and device for forest diseases and insect pests |
Non-Patent Citations (4)
Title |
---|
MAIN-KNORN, M. ET AL.: "Monitoring coniferous forest biomass change using a Landsat trajectory-based approach", REMOTE SENSING OF ENVIRONMENT, vol. 139, 31 December 2013 (2013-12-31), pages 277 - 290 *
HUA, JIANWEN: "Research on the spatiotemporal patterns of forest disturbance and recovery based on the LandTrendr algorithm and machine learning", China Master's Theses Full-text Database, no. 02, 15 February 2022 (2022-02-15), pages 1 - 83 *
ZHANG, LIANHUA; PANG, YONG; YUE, CAIRONG; LI, ZENGYUAN: "Automatic identification of forest disturbance in time-series Landsat imagery of Jinghong City based on the Tasseled Cap transformation", Forest Inventory and Planning, vol. 38, no. 02, 15 April 2013 (2013-04-15), pages 6 - 12 *
FAN, MINGMING ET AL.: "Laboratory study of modified glutinous rice mortar and its application to repairing travertine geological fissures in Jiuzhaigou", Hydrogeology & Engineering Geology, vol. 47, no. 4, 15 July 2020 (2020-07-15), pages 183 - 190 *
Also Published As
Publication number | Publication date |
---|---|
CN116579501B (en) | 2024-07-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11094040B2 (en) | Noise detection method for time-series vegetation index derived from remote sensing images | |
US8548248B2 (en) | Correlated land change system and method | |
Mafi-Gholami et al. | Mangrove regional feedback to sea level rise and drought intensity at the end of the 21st century | |
CN111062368B (en) | City update region monitoring method based on Landsat time sequence remote sensing image | |
Tran et al. | Characterising spatiotemporal vegetation variations using LANDSAT time‐series and Hurst exponent index in the Mekong River Delta | |
Szantoi et al. | A tool for rapid post-hurricane urban tree debris estimates using high resolution aerial imagery | |
CN115512223B (en) | Mangrove forest dynamic monitoring method integrating multiple change detection algorithms | |
Ghansah et al. | Monitoring spatial-temporal variations of surface areas of small reservoirs in Ghana's Upper East Region using Sentinel-2 satellite imagery and machine learning | |
Hashim et al. | Environmental monitoring and prediction of land use and land cover spatio-temporal changes: a case study from El-Omayed Biosphere Reserve, Egypt | |
Dong et al. | Mapping of small water bodies with integrated spatial information for time series images of optical remote sensing | |
Lin et al. | Earthquake-induced landslide hazard and vegetation recovery assessment using remotely sensed data and a neural network-based classifier: a case study in central Taiwan | |
Islam et al. | Land-Cover Classification and its Impact on Peshawar's Land Surface Temperature Using Remote Sensing. | |
Nguyen-Trong et al. | Coastal forest cover change detection using satellite images and convolutional neural networks in Vietnam | |
Colditz et al. | Detecting change areas in Mexico between 2005 and 2010 using 250 m MODIS images | |
Zou et al. | Mapping individual abandoned houses across cities by integrating VHR remote sensing and street view imagery | |
Wang et al. | Vegetation coverage precisely extracting and driving factors analysis in drylands | |
Mustafa et al. | RETRACTED: Water surface area detection using remote sensing temporal data processed using MATLAB | |
Mukhopadhyay et al. | Forest cover change prediction using hybrid methodology of geoinformatics and Markov chain model: A case study on sub-Himalayan town Gangtok, India | |
Gueguen et al. | Urbanization detection by a region based mixed information change analysis between built-up indicators | |
Santos et al. | Coastal evolution and future projections in Conde County, Brazil: A multi-decadal assessment via remote sensing and sea-level rise scenarios | |
CN116579501B (en) | Damaged forest remote sensing monitoring and evaluating method based on deep learning | |
Farhadi et al. | A novel flood/water extraction index (FWEI) for identifying water and flooded areas using sentinel-2 visible and near-infrared spectral bands | |
Santra et al. | Quantifying shoreline dynamics in the Indian Sundarban delta with Google Earth Engine (GEE)-based automatic extraction approach | |
Lanka et al. | Change detection mapping using landsat synthetic aperture radar images | |
CN113657275A (en) | Automatic detection method for forest and grass fire points |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right |
Effective date of registration: 2024-05-21
Address after: No. 1 East Three Road, Chengdu, Sichuan 610051
Applicant after: Chengdu University of Technology
Applicant after: JIUZHAI VALLEY SCENIC SPOT ADMINISTRATION
Country or region after: China
Address before: No. 1 East Three Road, Chengdu, Sichuan 610059
Applicant before: Chengdu University of Technology
Country or region before: China |
GR01 | Patent grant | ||