CN113486814A - Forest fire remote sensing dynamic monitoring method based on space-time fusion algorithm - Google Patents

Forest fire remote sensing dynamic monitoring method based on space-time fusion algorithm

Info

Publication number
CN113486814A
CN113486814A
Authority
CN
China
Prior art keywords
image
time
space
fire
remote sensing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110774310.9A
Other languages
Chinese (zh)
Inventor
栾海军 (Luan Haijun)
黄武彪 (Huang Wubiao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen University of Technology
Original Assignee
Xiamen University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen University of Technology filed Critical Xiamen University of Technology
Priority to CN202110774310.9A priority Critical patent/CN113486814A/en
Publication of CN113486814A publication Critical patent/CN113486814A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/243 - Classification techniques relating to the number of classes
    • G06F 18/2431 - Multiple classes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/25 - Fusion techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 - Administration; Management
    • G06Q 10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"


Abstract

The invention provides a forest fire remote sensing dynamic monitoring method based on a space-time fusion algorithm, comprising the following steps: (1) determining a study area and preprocessing the acquired MOD09GA, Landsat8 OLI, Sentinel-2 and GF-1 WFV image data; (2) fusing the images using both the STARFM algorithm and a surface reflectance space-time fusion algorithm based on temporal change models of land-cover components, thereby generating a medium-spatial-resolution image at the time to be predicted; (3) calculating fire index factors from the predicted images and analysing the fire evolution trend.

Description

Forest fire remote sensing dynamic monitoring method based on space-time fusion algorithm
Technical Field
The invention relates to a remote sensing dynamic monitoring method, in particular to a forest fire remote sensing dynamic monitoring method based on a space-time fusion algorithm.
Background
Forest fires are natural disasters that break out suddenly, are highly destructive, and are difficult to fight. Tens of thousands of forest fires occur worldwide every year, with affected forest areas reaching hundreds of hectares, posing a great threat to the ecological environment and to economic development in the affected regions. Effective monitoring of forest fires is therefore a matter of urgency. For a forest fire, the location of the fire point, its evolution, and the burned area must be determined in time, and the losses and impacts must be estimated accurately; this requires remote sensing imagery with both a short revisit period and high spatial resolution for analysis and interpretation. However, limited by the hardware of existing sensors, satellite remote sensing data cannot simultaneously provide high spatial resolution and high temporal resolution. If an image offers only a short revisit period, the burned area and spread trend of a small forest fire may not be judged accurately; if it offers only high spatial resolution, a repeat image of the same area, or an image at the moment the fire occurs, may be unobtainable for a long time, making fire monitoring impossible. To address the conflict between the temporal and spatial resolution of remote sensing sensors, together with other objective limitations, many scholars have proposed spatio-temporal fusion techniques, which make it possible to obtain imagery of a disaster area with both high spatial and high temporal resolution more accurately and rapidly, providing stronger data support for forest fire monitoring and loss assessment.
To date, scholars at home and abroad have carried out a great deal of research on spatio-temporal fusion algorithms and achieved a series of results. Gao et al. proposed the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), which fuses Landsat and MODIS images to obtain data with high spatial and temporal resolution, with good results. Crane et al. constructed a spatio-temporal fusion model from time series of remote sensing data. Wu et al. proposed a method based on mixed-pixel unmixing (STDFM) to fuse MODIS and Landsat image data. Zhang et al. improved the STDFM method and proposed an enhanced unmixing-based method (ESTDFM). The SParse-representation-based SpatioTemporal reflectance Fusion Model (SPSTFM) proposed by Huang et al. introduced sparse representation theory into spatio-temporal fusion. Schwann et al. fused MODIS and Landsat OLI data using a new spatio-temporal fusion method (the STDFA model). Hilker et al. proposed the Spatial Temporal Adaptive Algorithm for mapping Reflectance CHange (STAARCH), which detects change points in a dense time series of low-resolution images to improve STARFM performance when the land-cover type changes. Zhu et al. proposed the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM) on the basis of STARFM, introducing a conversion coefficient to better predict reflectance change over heterogeneous surfaces. Weng et al. proposed the Spatio-temporal Adaptive Data Fusion Algorithm for Temperature mapping (SADFAT), which builds on STARFM by accounting for the annual temperature cycle and emissivity data. Cheng et al. proposed the SpatioTemporal NonLocal Filter-based Fusion Method (STNLFFM), which achieves higher prediction accuracy over heterogeneous surface regions. Zhao et al. proposed the Robust Adaptive Spatial and Temporal Fusion Model (RASTFM) for complex surface change, which captures surface change with higher accuracy and robustness.
The most representative classical spatio-temporal fusion algorithm for remote sensing data is the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) proposed by Gao et al. Its basic idea is as follows: two MODIS surface reflectance images and one Landsat surface reflectance image of known dates are used to simulate the Landsat image of an unknown date; the surface reflectance of each pixel is predicted using the temporal difference, pixel distance and spectral difference as influencing factors, and the three factors are combined into a weighting formula.
The STARFM model requires that the observations from different platforms be geometrically registered and atmospherically corrected to surface reflectance before use. The general flow of the STARFM algorithm is as follows:
MODIS here denotes the low-resolution image data and Landsat the high-resolution image data. The basic principle of the STARFM algorithm is as follows. Suppose that on date $t_k$ the MODIS surface reflectance value $M(x_i, y_j, t_k)$ and the Landsat surface reflectance value $L(x_i, y_j, t_k)$ are related by

$L(x_i, y_j, t_k) = M(x_i, y_j, t_k) + \varepsilon_k$ (1)

Then the MODIS surface reflectance value $M(x_i, y_j, t_0)$ on date $t_0$ and the predicted Landsat surface reflectance value $L(x_i, y_j, t_0)$ on date $t_0$ can be expressed as

$L(x_i, y_j, t_0) = M(x_i, y_j, t_0) + \varepsilon_0$ (2)

$\varepsilon_k$ and $\varepsilon_0$ are the differences between the observed MODIS and Landsat surface reflectances caused by differing band widths and solar geometry. If the error of pixel $(x_i, y_j)$ is assumed constant between the prediction date $t_0$ and the base date $t_k$, then $\varepsilon_0 = \varepsilon_k$, and therefore

$L(x_i, y_j, t_0) = M(x_i, y_j, t_0) + L(x_i, y_j, t_k) - M(x_i, y_j, t_k)$ (3)

However, this holds only in the ideal case; the actual relationship is affected by three factors: ① a MODIS observation, considered at the same spatial resolution as Landsat, may contain mixed land-cover types; ② during the prediction interval, the land cover may change from one type to another; ③ changes in the state of the land cover and in the bidirectional reflectance distribution function (BRDF) of the solar geometry alter the reflectance between the prediction date $t_0$ and the base date $t_k$.
Therefore, by introducing additional information from neighbouring pixels, a weighting function is used to calculate the surface reflectance of the central pixel on date $t_0$:

$L(x_{\omega/2}, y_{\omega/2}, t_0) = \sum_{i=1}^{\omega} \sum_{j=1}^{\omega} \sum_{k=1}^{n} W_{ijk} \left( M(x_i, y_j, t_0) + L(x_i, y_j, t_k) - M(x_i, y_j, t_k) \right)$ (4)

where $\omega$ is the size of the search window and $(x_{\omega/2}, y_{\omega/2})$ is the central pixel of the moving window. To ensure that correct information from neighbouring pixels is used, only cloud-free pixels of the same spectral class within the moving window, taken from the Landsat surface reflectance, are used in the calculation. The weight $W_{ijk}$ expresses the degree to which each neighbouring pixel contributes to the predicted reflectance of the central pixel; the final weight of each spectrally similar pixel is determined from three aspects: spectral difference, temporal difference and distance. A minimal code sketch of this prediction follows.
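The following is a minimal, single-band sketch of the STARFM-style prediction in equations (3) and (4), written in Python with NumPy. It is an illustration under simplifying assumptions, not the published implementation: one base date is used (n = 1), spectral classes are formed by crude quantile binning rather than the published similarity test, cloud screening is omitted, and the combined weight is simply the normalized reciprocal of the spectral (S), temporal (T) and distance (D) terms.

```python
import numpy as np

def starfm_predict(modis_tk, landsat_tk, modis_t0, win=31, n_classes=4):
    """Minimal single-band, single-base-date STARFM-style prediction.

    modis_tk, landsat_tk : 2-D surface reflectance arrays at base date tk
    modis_t0             : 2-D MODIS reflectance array at prediction date t0
    All inputs are assumed co-registered and resampled to the fine grid.
    """
    rows, cols = landsat_tk.shape
    half = win // 2
    # Crude spectral classes from the fine base image (a stand-in for the
    # published spectral-similarity test).
    edges = np.quantile(landsat_tk, np.linspace(0, 1, n_classes + 1)[1:-1])
    classes = np.digitize(landsat_tk, edges)

    pred = np.empty((rows, cols), dtype=np.float64)
    for r in range(rows):
        for c in range(cols):
            r0, r1 = max(0, r - half), min(rows, r + half + 1)
            c0, c1 = max(0, c - half), min(cols, c + half + 1)
            sel = classes[r0:r1, c0:c1] == classes[r, c]  # spectrally similar pixels
            L  = landsat_tk[r0:r1, c0:c1][sel]
            Mk = modis_tk[r0:r1, c0:c1][sel]
            M0 = modis_t0[r0:r1, c0:c1][sel]
            S = np.abs(L - Mk) + 1e-6   # spectral difference
            T = np.abs(M0 - Mk) + 1e-6  # temporal difference
            rr, cc = np.mgrid[r0:r1, c0:c1]
            D = 1.0 + np.sqrt((rr - r) ** 2 + (cc - c) ** 2)[sel] / half  # relative distance
            w = 1.0 / (S * T * D)       # small differences give large weight
            w /= w.sum()
            # Equation (3) applied per neighbour, combined with weights as in (4).
            pred[r, c] = np.sum(w * (M0 + L - Mk))
    return pred
```

On real data one would additionally screen clouds and apply the published spectral and temporal thresholds; the structure of the weight, however, mirrors the spectral, temporal and distance terms described above.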
Another representative spatio-temporal fusion algorithm for remote sensing data is a surface reflectance space-time fusion algorithm based on temporal change models of land-cover components. Its basic idea is as follows: for the same land-cover type, homogeneous pixels share the same reflectance, and the seasonal and bidirectional reflectance variations of these pixels should also be the same at different pixel sizes; for images of different resolutions, the change of each land-cover type between the acquisition date and the prediction date is approximately the same, i.e., spatial scale invariance holds between the components. Therefore, the temporal change relationship established from the low-resolution image data between the acquisition dates can be mapped onto the medium/high-resolution remote sensing image, so that the medium/high-resolution image at the time to be predicted can be derived.
The algorithm is implemented as follows. First, the high-spatial-resolution remote sensing image is classified without supervision using the Iterative Self-Organizing Data Analysis algorithm (ISODATA), dividing the image into several land-cover classes and yielding the different land-cover components. A corresponding temporal change model is then established for each component to predict its change.
The basis of the pure-pixel search is to aggregate the unsupervised classification result to the pixel size of the low-resolution image. The aggregation can be understood as follows: compute the ratio n between the coarse and fine pixel sizes, partition the classified image from its top-left corner into blocks of n × n pixels, and record the numbers of rows and columns after aggregation. For each n × n block, compute the proportion of each component, take the component with the largest proportion as the class of the aggregated pixel, and test whether that proportion exceeds 20% (an empirical value; different thresholds can be set for different resolutions); if it does, the aggregated pixel is called a pure pixel of that class. Pure pixels of each land-cover class are obtained in turn, the behaviour of the cloud-free pixels inside the corresponding component is extracted from the low-resolution images of the two dates, and temporal change models of the different components are established using differences, ratios or rates of change, as sketched below.
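As a concrete illustration of the aggregation and purity test just described, the sketch below renders the procedure in Python with NumPy. The array and variable names are illustrative, and the 20% threshold is the empirical value quoted above.

```python
import numpy as np

def find_pure_pixels(class_map, n, threshold=0.2):
    """Aggregate an unsupervised classification to the coarse grid and flag
    'pure' coarse pixels.

    class_map : 2-D integer array of land-cover classes at fine resolution
    n         : ratio of coarse to fine pixel size (one coarse pixel = n x n fine)
    threshold : minimum dominant-class fraction for a coarse pixel to count as
                pure (0.2 is the empirical value used in the text)
    Returns (labels, pure): dominant class and purity mask on the coarse grid.
    """
    rows, cols = class_map.shape[0] // n, class_map.shape[1] // n
    labels = np.zeros((rows, cols), dtype=int)
    pure = np.zeros((rows, cols), dtype=bool)
    for i in range(rows):
        for j in range(cols):
            block = class_map[i * n:(i + 1) * n, j * n:(j + 1) * n]
            vals, counts = np.unique(block, return_counts=True)
            k = counts.argmax()
            labels[i, j] = vals[k]                        # dominant component
            pure[i, j] = counts[k] / block.size > threshold
    return labels, pure
```

The coarse pixels flagged as pure for a given component are then the locations at which the temporal change of that component is read from the two low-resolution images.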
The three models of the surface reflectance space-time fusion algorithm based on temporal change models of land-cover components are described in turn below.
Difference model:
pre_Sentinel_T1 = mean(MOD09GA_T1 - MOD09GA_T0) + Sentinel_T0 (5)
Ratio model:
pre_Sentinel_T1 = mean(MOD09GA_T1 / MOD09GA_T0) × Sentinel_T0 (6)
Rate-of-change model:
pre_Sentinel_T1 = (1 + mean((MOD09GA_T1 - MOD09GA_T0) / MOD09GA_T0)) × Sentinel_T0 (7)
In these formulas, MOD09GA_T0 is the surface reflectance value of the low-resolution remote sensing image at time T0, MOD09GA_T1 is the surface reflectance value of the low-resolution remote sensing image at time T1, Sentinel_T0 is the surface reflectance value of the medium/high-resolution remote sensing image at time T0, pre_Sentinel_T1 is the predicted surface reflectance value of the medium/high-resolution remote sensing image at time T1, and mean denotes averaging over all the values.
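Assuming, as above, that per-component change measured on the coarse images transfers to the fine image, the three models reduce to a few lines each. In the sketch below, `mod_t0` and `mod_t1` are arrays of coarse (MOD09GA) reflectance sampled from the cloud-free pure pixels of one component at T0 and T1, and `sentinel_t0` holds the fine-resolution reflectance of that component's pixels; the names are illustrative, and the ratio and rate forms follow equations (6) and (7) above.

```python
import numpy as np

def predict_component(mod_t0, mod_t1, sentinel_t0, model="difference"):
    """Predict fine-resolution reflectance at T1 for one land-cover
    component, following equations (5)-(7).

    mod_t0, mod_t1 : 1-D coarse reflectance inside the component's pure
                     pixels at T0 and T1 (cloud-free values only; mod_t0
                     must be non-zero for the ratio and rate models)
    sentinel_t0    : fine-resolution reflectance of the component at T0
    """
    if model == "difference":   # equation (5)
        return np.mean(mod_t1 - mod_t0) + sentinel_t0
    if model == "ratio":        # equation (6)
        return np.mean(mod_t1 / mod_t0) * sentinel_t0
    if model == "rate":         # equation (7)
        return (1.0 + np.mean((mod_t1 - mod_t0) / mod_t0)) * sentinel_t0
    raise ValueError(f"unknown model: {model}")
```

A full prediction applies this function component by component and writes each result back to the pixels of the corresponding class.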
Disclosure of Invention
To address the problems identified in the background art, the invention, building on the classical STARFM (Spatial and Temporal Adaptive Reflectance Fusion Model) space-time fusion algorithm, jointly uses images from several sensors with good spatial resolution (≤ 30 m), performs segmented independent prediction over the traditional single medium-spatial-resolution prediction period (e.g., the 16 days of the Landsat revisit cycle) under the principle of "nearest in time, spatial resolution first", and optimally combines the segmented prediction results to obtain more accurate day-by-day medium-spatial-resolution predicted images.
The invention adopts the following scheme: a forest fire remote sensing dynamic monitoring method based on a space-time fusion algorithm, characterized by comprising the following steps:
(1) determining a study area and preprocessing the acquired MOD09GA, Landsat8 OLI, Sentinel-2 and GF-1 WFV image data;
(2) fusing MOD09GA with the Landsat8 OLI, Sentinel-2 and GF-1 WFV images respectively, using the STARFM algorithm and a surface reflectance space-time fusion algorithm based on temporal change models of land-cover components, to generate a medium-spatial-resolution image at the time to be predicted;
(3) calculating fire index factors from the predicted images and analysing the fire evolution trend.
Preferably, the surface reflectance space-time fusion algorithm based on temporal change models of land-cover components in step (2) comprises the following steps:
(1) performing unsupervised classification of the high-spatial-resolution remote sensing image with the iterative self-organizing data analysis algorithm, dividing it into several land-cover classes to obtain the different land-cover components;
(2) finding all pure pixels corresponding to the different land-cover components;
(3) establishing a corresponding temporal change model for each land-cover component to predict its change.
Preferably, the step (2) comprises the following steps:
(1) calculating the ratio n between the coarse and fine pixel sizes, partitioning the unsupervised classification image from its top-left corner into blocks of n × n pixels, and recording the numbers of rows and columns after aggregation;
(2) computing the proportion of each component within each n × n block, taking the component with the largest proportion as the class of the aggregated pixel, and testing whether that proportion exceeds 20% (an empirical value; different thresholds can be set for different resolutions); if it does, the aggregated pixel is called a pure pixel of that class.
Preferably, the corresponding temporal change models established in step (3) include the following three types:
the first is a difference model:
pre_Sentinel_T1 = mean(MOD09GA_T1 - MOD09GA_T0) + Sentinel_T0
the second is a ratio model:
pre_Sentinel_T1 = mean(MOD09GA_T1 / MOD09GA_T0) × Sentinel_T0
the third is a rate-of-change model:
pre_Sentinel_T1 = (1 + mean((MOD09GA_T1 - MOD09GA_T0) / MOD09GA_T0)) × Sentinel_T0
In these formulas, MOD09GA_T0 is the surface reflectance value of the low-resolution remote sensing image at time T0, MOD09GA_T1 is the surface reflectance value of the low-resolution remote sensing image at time T1, Sentinel_T0 is the surface reflectance value of the medium/high-resolution remote sensing image at time T0, pre_Sentinel_T1 is the predicted surface reflectance value of the medium/high-resolution remote sensing image at time T1, and mean denotes averaging over all the values.
Preferably, the image data are preprocessed in step (1) as follows: the MOD09GA image data are reprojected and resampled, and the Landsat8 OLI, Sentinel-2 and GF-1 WFV image data undergo geometric correction, atmospheric correction and image cropping.
Preferably, the fire index factors in step (3) include a burn area index, calculated by the formula:

BAI = 1 / ((0.1 - Red)^2 + (0.06 - NIR)^2)

where BAI is the burn area index of the pixel, Red is the red-band surface reflectance of the pixel after image preprocessing, and NIR is the near-infrared-band surface reflectance of the pixel after image preprocessing.
Preferably, the fire index factors in step (3) include a normalized burn ratio, calculated by the formula:

NBR = (NIR - SWIR) / (NIR + SWIR)

where NBR is the normalized burn ratio of the pixel, NIR is the near-infrared-band surface reflectance of the pixel after image preprocessing, and SWIR is the short-wave-infrared-band surface reflectance of the pixel after image preprocessing.
By adopting the above technical scheme, the invention achieves the following technical effects: the method jointly uses the classical STARFM algorithm and the surface reflectance space-time fusion algorithm based on temporal change models of land-cover components, together with images from several sensors of good spatial resolution (≤ 30 m, e.g., Landsat8 OLI, Sentinel-2 and GF-1 WFV) and MODIS images; it performs segmented independent prediction over the traditional single medium-spatial-resolution prediction period (e.g., the 16 days of the Landsat revisit cycle) under the principle of "nearest in time, spatial resolution first", optimally combines the results of the two prediction methods, and then carries out a remote sensing dynamic monitoring experiment on a forest fire. The results show that the space-time fusion strategy designed by the method compensates for the shortcomings of a single fusion method, or of fusing a single medium-spatial-resolution image (e.g., a Landsat image) with MODIS images, and is feasible for remote sensing dynamic monitoring of forest fires.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a flow chart of the implementation of the surface reflectance space-time fusion algorithm based on temporal change models of land-cover components;
FIG. 2 is a map of the location of the study area of the invention;
FIG. 3 is a flow chart of the invention;
FIG. 4 is the GF-1 image of the study area on March 22;
FIG. 5 is the MOD09GA image of the study area on March 22;
FIG. 6 is the MOD09GA image of the study area on March 21;
FIG. 7 is the Sentinel-2 image of the study area on March 21;
FIG. 8 is the predicted GF-1 image for March 21 obtained with the STARFM algorithm;
FIG. 9 is the predicted GF-1 image for March 21 obtained with the difference model of the surface reflectance space-time fusion algorithm based on temporal change models of land-cover components;
FIG. 10 is the predicted GF-1 image for March 21 obtained with the ratio model of the same algorithm;
FIG. 11 is the predicted GF-1 image for March 21 obtained with the rate-of-change model of the same algorithm;
FIG. 12 is the Landsat8 OLI image of the study area on March 18;
FIG. 13 is the MOD09GA image of the study area on March 17;
FIG. 14 is the MOD09GA image of the study area on March 26;
FIG. 15 is the Sentinel-2 image of the study area on March 26;
FIG. 16 is the predicted Landsat8 OLI image for March 26 obtained with the STARFM algorithm;
FIG. 17 is the predicted Landsat8 OLI image for March 26 obtained with the difference model;
FIG. 18 is the predicted Landsat8 OLI image for March 26 obtained with the ratio model;
FIG. 19 is the predicted Landsat8 OLI image for March 26 obtained with the rate-of-change model;
FIG. 20 is the Sentinel-2 image of the study area on March 26;
FIG. 21 is the MOD09GA image of the study area on March 26;
FIG. 22 is the MOD09GA image of the study area on March 30;
FIG. 23 is the GF-1 image of the study area on March 30;
FIG. 24 is the predicted Sentinel-2 image for March 30 obtained with the STARFM algorithm;
FIG. 25 is the predicted Sentinel-2 image for March 30 obtained with the difference model;
FIG. 26 is the predicted Sentinel-2 image for March 30 obtained with the ratio model;
FIG. 27 is the predicted Sentinel-2 image for March 30 obtained with the rate-of-change model;
FIG. 28 is the burn area index in the early stage of the fire in the study area;
FIG. 29 is the burn area index in the middle stage of the fire in the study area;
FIG. 30 is the burn area index in the late stage of the fire in the study area;
FIG. 31 is the normalized burn ratio in the early stage of the fire in the study area;
FIG. 32 is the normalized burn ratio in the middle stage of the fire in the study area;
FIG. 33 is the normalized burn ratio in the late stage of the fire in the study area.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments will be described clearly and completely with reference to the accompanying drawings; it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The following detailed description of the embodiments, presented in the figures, is not intended to limit the scope of the invention as claimed, but merely represents selected embodiments. All other embodiments obtained by a person skilled in the art without inventive effort on the basis of these embodiments fall within the scope of the present invention.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", and the like, indicate orientations and positional relationships based on those shown in the drawings, and are used only for convenience of description and simplicity of description, and do not indicate or imply that the equipment or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be considered as limiting the present invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the present invention, unless otherwise expressly stated or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly and can, for example, be fixedly connected, detachably connected, or integrally formed; can be mechanically or electrically connected; either directly or indirectly through intervening media, either internally or in any other relationship. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
In the present invention, unless otherwise expressly stated or limited, "above" or "below" a first feature means that the first and second features are in direct contact, or that the first and second features are not in direct contact but are in contact with each other via another feature therebetween. Also, the first feature being "on," "above" and "over" the second feature includes the first feature being directly on and obliquely above the second feature, or merely indicating that the first feature is at a higher level than the second feature. A first feature being "under," "below," and "beneath" a second feature includes the first feature being directly under and obliquely below the second feature, or simply meaning that the first feature is at a lesser elevation than the second feature.
Examples
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above-mentioned embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention.
1.1 Experimental area and data
1.1.1 Experimental area
The study area (shown in FIG. 2 of the specification) lies between 28°32′10″ and 28°33′27″ N and between 101°15′02″ and 101°16′49″ E, in forest at an elevation of about 3,800 m near Lier Village, Yalongjiang Town, Muli County, Liangshan Yi Autonomous Prefecture, Sichuan Province. Most of the area is covered by vegetation, with small amounts of water and exposed rock. The terrain is complex, with steep slopes and deep valleys, and traffic and communications are inconvenient. The fire lasted from 31 March to 4 April 2019. It started in Yunnan pine on a ridge and was ignited by a lightning strike.
1.1.2 Experimental data
The experimental data used in this study are MOD09GA, Landsat8 OLI, Sentinel-2 and GF-1 WFV images. The specific images are listed in Table 1, which covers all the data used in the experiment. The characteristics and the bands used for each remote sensing data source are given in Table 2.
Table 1. List of experimental images (the table is supplied as an image in the original document).
Table 2. Characteristics and bands used for each remote sensing data source (the table is supplied as an image in the original document).
Before spatio-temporal fusion, the acquired image data must be preprocessed with remote sensing image processing software. The Landsat8 OLI image was radiometrically calibrated and atmospherically corrected in ENVI to produce a reflectance image. The Sentinel-2 images were atmospherically corrected with Sen2Cor and SNAP and converted to img format readable by ENVI. The GF-1 WFV image was preprocessed in the same way as Landsat8 OLI, except that geometric correction was performed in ENVI and the absolute radiometric calibration coefficients were downloaded from the website of the China Centre for Resources Satellite Data and Application. The MOD09GA images were reprojected to the UTM/WGS84 coordinate system in GeoTIFF format with the MODIS Reprojection Tool (MRT) and resampled to 10 m, 16 m and 30 m resolution. In addition, the image from each sensor was cropped in ENVI so that the study area extent is identical across images. In this study, a single band of each image was used.
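The MRT reprojection and resampling step can also be scripted. The sketch below uses GDAL's Python bindings as a stand-in for MRT; the HDF subdataset string, file names and the UTM zone (47N, consistent with the study area longitude of about 101°E) are illustrative assumptions rather than the exact files used in the experiment.

```python
from osgeo import gdal

# Reproject a MOD09GA band to UTM/WGS84 (GeoTIFF) and resample to the three
# target grids, as a scripted stand-in for the MODIS Reprojection Tool.
src = ('HDF4_EOS:EOS_GRID:"MOD09GA.A2019081.h26v06.hdf":'
       'MODIS_Grid_500m_2D:sur_refl_b01_1')           # illustrative subdataset
for res, out in [(10, "mod09ga_10m.tif"), (16, "mod09ga_16m.tif"),
                 (30, "mod09ga_30m.tif")]:
    gdal.Warp(out, src,
              dstSRS="EPSG:32647",   # WGS84 / UTM zone 47N (assumed)
              xRes=res, yRes=res,
              resampleAlg="near",    # nearest neighbour preserves reflectance values
              format="GTiff")
```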
1.2 technical Process for carrying out the invention
1.2.1 technical Process
The general technical process of the invention is shown in FIG. 3. The main steps are as follows:
(1) preprocessing the acquired MOD09GA, Landsat8 OLI, Sentinel-2 and GF-1 WFV image data;
(2) fusing MOD09GA with the Landsat8 OLI, Sentinel-2 and GF-1 WFV images respectively, using the STARFM algorithm and the surface reflectance space-time fusion algorithm based on temporal change models of land-cover components, to generate a medium-spatial-resolution image at the time to be predicted;
(3) calculating fire index factors from the predicted images and analysing the fire evolution trend.
1.2.2 construction of fire indicator factor
When applying remote sensing imagery to forest fire monitoring, the burn area index and the normalized burn ratio are calculated from the image data obtained with the space-time fusion algorithms.
Burn area index
The Burn Area Index (BAI) uses the red band (Red) and the near-infrared band (NIR) of an image to enhance the post-fire surface signal, i.e., the charcoal signal in the post-fire image; BAI values over burned areas are relatively large. It is calculated as:

BAI = 1 / ((0.1 - Red)^2 + (0.06 - NIR)^2)

The burn area index of the study area can be calculated with the spectral index tool in ENVI 5.3 by selecting "Burn Area Index".
Normalized burn ratio
The Normalized Burn Ratio (NBR) is based on the near-infrared (NIR) and short-wave-infrared (SWIR2) bands and enhances extensive burned areas; NBR values over burned areas are relatively small. It is calculated as:

NBR = (NIR - SWIR) / (NIR + SWIR)

The normalized burn ratio of the study area can be calculated with the spectral index tool in ENVI 5.3 by selecting "Normalized Burn Ratio".
1.3 results and analysis
1.3.1 analysis of fusion results
Spatio-temporal fusion of MOD09GA with Landsat8 OLI, Sentinel-2 and GF-1 WFV was performed using the STARFM algorithm. Based on the fusion results (see Table 3), each predicted image is compared visually with an image obtainable from another sensor at the same time, the accuracy of the space-time fusion algorithms adopted in the experiment is analysed, and the fire index factors before and after the fire are extracted and analysed. FIGS. 4-11 show the GF-1 image for March 21 predicted from the March 22 GF-1 and MOD09GA images, compared with the Sentinel-2 image; FIGS. 12-19 show the Landsat8 OLI image for March 26 predicted from the March 18 Landsat8 OLI and MOD09GA images, compared with the Sentinel-2 image; FIGS. 20-27 show the Sentinel-2 image for March 30 predicted from the March 26 Sentinel-2 and MOD09GA images, compared with the GF-1 image.
Table 3. Spatio-temporal fusion results (the table is supplied as an image in the original document).
The two algorithms show limitations in different fusion applications. As shown in FIGS. 4-27, the fused single-band images are composited for display, and the following can be observed. The STARFM algorithm predicts poorly in regions of obvious surface change: in FIG. 8 a black region on the left of the image differs markedly from the actual surface, indicating that STARFM does not predict the bare rock surrounding the GF-1 WFV scene well. Comparing the mountains, vegetation and rocks in FIGS. 16 and 24 with those in FIGS. 15 and 23 shows that STARFM predicts Landsat8 OLI and Sentinel-2 well, with the better result for Landsat8 OLI. The surface reflectance space-time fusion algorithm based on temporal change models of land-cover components includes three change models. Comparing FIGS. 9, 17 and 25 with the corresponding FIGS. 8, 16 and 24 over the mountain, vegetation and rock areas shows that the difference model predicts Landsat8 OLI, Sentinel-2 and GF-1 well. Comparing FIGS. 10, 18 and 26 with FIGS. 7, 15 and 23 shows that with the ratio model the rock areas on the left of FIGS. 10 and 18 appear too light and clearly differ from the real surface, so the ratio model predicts the surroundings poorly for GF-1 and Landsat8 OLI but predicts Sentinel-2 well. Comparing FIGS. 11, 19 and 27 with FIGS. 7, 15 and 23 shows that with the rate-of-change model the bare rock on the left of FIG. 11 differs significantly from the real surface, so this model predicts GF-1 poorly but Sentinel-2 and Landsat8 OLI well. Overall, STARFM or the difference model can be chosen for predicting Landsat8 OLI; the difference and rate-of-change models outperform STARFM and the ratio model for predicting Sentinel-2; and the difference model should be preferred for predicting GF-1. Image comparison and analysis therefore indicate that the difference model of per-component temporal change is applicable to all three image types, whereas prediction with the ratio model is not good.
1.3.2 time series analysis of fire Change
The segmented prediction results obtained from the two space-time fusion algorithms are optimally combined to obtain better-quality day-by-day medium-spatial-resolution predicted images, from which the two fire index factors are calculated. Because the GF-1 WFV image has no short-wave-infrared band, the normalized burn ratio cannot be calculated from it; a qualifying Landsat8 OLI / Sentinel-2 predicted image is selected as a substitute under the principle of "nearest in time, spatial resolution first". The computed burn area index at the early, middle and late stages of the fire is shown in FIGS. 28-30, and the normalized burn ratio at the three stages is compared in FIGS. 31-33.
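The substitution rule of "nearest in time, spatial resolution first" can be expressed as a simple selection, as in the illustrative sketch below; the candidate structure, field names and dates are assumptions patterned on the experiment, not the exact implementation.

```python
from datetime import date

# Candidate predicted images: acquisition date, resolution (m), identifier.
candidates = [
    {"date": date(2019, 3, 26), "resolution": 30, "name": "Landsat8 OLI prediction"},
    {"date": date(2019, 3, 30), "resolution": 10, "name": "Sentinel-2 prediction"},
]

def pick_substitute(candidates, target):
    """'Nearest in time, spatial resolution first': minimize the date gap,
    and among equally close dates prefer the finer (smaller) resolution."""
    return min(candidates,
               key=lambda c: (abs((c["date"] - target).days), c["resolution"]))

print(pick_substitute(candidates, date(2019, 3, 29))["name"])  # Sentinel-2 prediction
```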
In FIG. 28 the vegetated area appears black or dark grey; in FIG. 30 the vegetated (unburned) area appears black and the burned area white, and this clear contrast helps determine the location and area after the fire. However, the study area in FIG. 29 appears black and the burning area cannot be seen, because dense smoke from burning vegetation is present during the fire, and neither the red (Red) nor the near-infrared (NIR) band used to compute the burn area index can penetrate smoke, which affects the index. In FIG. 31 the vegetated area appears white or light grey; in FIG. 33 the vegetated (unburned) area appears white or light grey and the burned area black, and the clear black-white contrast can determine the location and area after the fire. Scattered burning zones during the fire can be seen in FIG. 32, because the short-wave-infrared (SWIR) band used in the normalized burn ratio can penetrate smoke; the lack of a short-wave-infrared band on GF-1 WFV likewise limits GF-1 in the fire index analysis. In FIGS. 28-30 the unburned areas remain black or dark grey overall while the burned areas turn white; in FIGS. 31-33, by contrast, the unburned areas remain white or light grey overall while the burned areas turn black. In general, NBR changes more markedly over the course of the fire, while the BAI effect is slightly worse. The comparative study of the two fire index factors thus suggests that, when the fire evolution is analysed from spatio-temporally fused imagery, the normalized burn ratio is the more sensitive and more effective calculation.
The method comprehensively uses the classical STARFM algorithm and the surface reflectance space-time fusion algorithm based on temporal change models of land-cover components, combining images from several sensors of good spatial resolution (≤ 30 m, e.g., Landsat8 OLI, Sentinel-2 and GF-1 WFV) with MODIS images; it performs segmented independent prediction over the traditional single medium-spatial-resolution prediction period (e.g., the 16 days of the Landsat revisit cycle) under the principle of "nearest in time, spatial resolution first", optimally combines the results of the two prediction methods, and carries out a remote sensing dynamic monitoring experiment on a forest fire. The results show that the space-time fusion strategy designed here compensates for the shortcomings of a single fusion method, or of fusing a single medium-spatial-resolution image (e.g., a Landsat image) with MODIS images, and is feasible for remote sensing dynamic monitoring of forest fires.

Claims (7)

1. A forest fire remote sensing dynamic monitoring method based on a space-time fusion algorithm, characterized by comprising the following steps:
(1) determining a study area and preprocessing the acquired MOD09GA, Landsat8 OLI, Sentinel-2 and GF-1 WFV image data;
(2) fusing MOD09GA with the Landsat8 OLI, Sentinel-2 and GF-1 WFV images respectively, using the STARFM algorithm and a surface reflectance space-time fusion algorithm based on temporal change models of land-cover components, to generate a medium-spatial-resolution image at the time to be predicted;
(3) calculating fire index factors from the predicted images and analysing the fire evolution trend.
2. The forest fire remote sensing dynamic monitoring method based on a space-time fusion algorithm according to claim 1, characterized in that the surface reflectance space-time fusion algorithm based on temporal change models of land-cover components in step (2) comprises the following steps:
(1) performing unsupervised classification of the high-spatial-resolution remote sensing image with the iterative self-organizing data analysis algorithm, dividing it into several land-cover classes to obtain the different land-cover components;
(2) finding all pure pixels corresponding to the different land-cover components;
(3) establishing a corresponding temporal change model for each land-cover component to predict its change.
3. The forest fire remote sensing dynamic monitoring method based on a space-time fusion algorithm according to claim 2, characterized in that the step (2) comprises the following steps:
(1) calculating the ratio n between the coarse and fine pixel sizes, partitioning the unsupervised classification image from its top-left corner into blocks of n × n pixels, and recording the numbers of rows and columns after aggregation;
(2) computing the proportion of each component within each n × n block, taking the component with the largest proportion as the class of the aggregated pixel, and testing whether that proportion exceeds 20%; if it does, the aggregated pixel is called a pure pixel of that class.
4. The forest fire remote sensing dynamic monitoring method based on a space-time fusion algorithm according to claim 2, characterized in that the corresponding temporal change models established in step (3) include the following three types:
the first is a difference model:
pre_Sentinel_T1 = mean(MOD09GA_T1 - MOD09GA_T0) + Sentinel_T0 (1)
the second is a ratio model:
pre_Sentinel_T1 = mean(MOD09GA_T1 / MOD09GA_T0) × Sentinel_T0 (2)
the third is a rate-of-change model:
pre_Sentinel_T1 = (1 + mean((MOD09GA_T1 - MOD09GA_T0) / MOD09GA_T0)) × Sentinel_T0 (3)
in the formulas, MOD09GA_T0 is the surface reflectance value of the low-resolution remote sensing image at time T0, MOD09GA_T1 is the surface reflectance value of the low-resolution remote sensing image at time T1, Sentinel_T0 is the surface reflectance value of the medium/high-resolution remote sensing image at time T0, pre_Sentinel_T1 is the predicted surface reflectance value of the medium/high-resolution remote sensing image at time T1, and mean denotes averaging over all the values.
5. The forest fire remote sensing dynamic monitoring method based on a space-time fusion algorithm according to claim 1, characterized in that the image data are preprocessed in step (1) as follows: the MOD09GA image data are reprojected and resampled, and the Landsat8 OLI, Sentinel-2 and GF-1 WFV image data undergo geometric correction, atmospheric correction and image cropping.
6. The forest fire remote sensing dynamic monitoring method based on a space-time fusion algorithm according to claim 1, characterized in that the fire index factors in step (3) include a burn area index, calculated by the formula:
BAI = 1 / ((0.1 - Red)^2 + (0.06 - NIR)^2)
where BAI is the burn area index of the pixel, Red is the red-band surface reflectance of the pixel after image preprocessing, and NIR is the near-infrared-band surface reflectance of the pixel after image preprocessing.
7. The forest fire remote sensing dynamic monitoring method based on a space-time fusion algorithm according to claim 1, characterized in that the fire index factors in step (3) include a normalized burn ratio, calculated by the formula:
NBR = (NIR - SWIR) / (NIR + SWIR)
where NBR is the normalized burn ratio of the pixel, NIR is the near-infrared-band surface reflectance of the pixel after image preprocessing, and SWIR is the short-wave-infrared-band surface reflectance of the pixel after image preprocessing.
CN202110774310.9A 2021-07-08 2021-07-08 Forest fire remote sensing dynamic monitoring method based on space-time fusion algorithm Pending CN113486814A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110774310.9A CN113486814A (en) 2021-07-08 2021-07-08 Forest fire remote sensing dynamic monitoring method based on space-time fusion algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110774310.9A CN113486814A (en) 2021-07-08 2021-07-08 Forest fire remote sensing dynamic monitoring method based on space-time fusion algorithm

Publications (1)

Publication Number Publication Date
CN113486814A true CN113486814A (en) 2021-10-08

Family

ID=77938058

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110774310.9A Pending CN113486814A (en) 2021-07-08 2021-07-08 Forest fire remote sensing dynamic monitoring method based on space-time fusion algorithm

Country Status (1)

Country Link
CN (1) CN113486814A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120081496A (en) * 2011-01-11 2012-07-19 주식회사 창성에이스산업 (Changsung Ace Industry Co., Ltd.) The method for fire warning using analysis of thermal image temperature
CN104615848A (en) * 2014-12-26 2015-05-13 中国南方电网有限责任公司 Comprehensive application method for integrating forest fire danger forecasting and ground fire point monitoring
CN110379113A (en) * 2019-06-28 2019-10-25 北京中科锐景科技有限公司 A method of based on satellite remote sensing date Forest Fire Alarm
CN111860205A (en) * 2020-06-29 2020-10-30 成都数之联科技有限公司 Forest fire evaluation method based on multi-source remote sensing image and grid and storage medium
CN112906531A (en) * 2021-02-07 2021-06-04 清华苏州环境创新研究院 Multi-source remote sensing image space-time fusion method and system based on unsupervised classification

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HUANG Bo: "Status and prospects of research on spatio-temporal fusion of multi-source satellite remote sensing imagery" (《多源卫星遥感影像时空融合研究的现状及展望》) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115359369A (en) * 2022-10-19 2022-11-18 中国科学院、水利部成都山地灾害与环境研究所 Mountain satellite image fusion method and system based on time phase self-adaption

Similar Documents

Publication Publication Date Title
Aboelnour et al. Application of remote sensing techniques and geographic information systems to analyze land surface temperature in response to land use/land cover change in Greater Cairo Region, Egypt
Berberoglu et al. Assessing different remote sensing techniques to detect land use/cover changes in the eastern Mediterranean
Kamal et al. Assessment of multi-resolution image data for mangrove leaf area index mapping
Reiche et al. Feature level fusion of multi-temporal ALOS PALSAR and Landsat data for mapping and monitoring of tropical deforestation and forest degradation
Petropoulos et al. Burnt area delineation from a uni-temporal perspective based on Landsat TM imagery classification using Support Vector Machines
US8594375B1 (en) Advanced cloud cover assessment
Frantz et al. Phenology-adaptive pixel-based compositing using optical earth observation imagery
Odindi et al. Remote sensing land-cover change in Port Elizabeth during South Africa's democratic transition
Sparks et al. An accuracy assessment of the MTBS burned area product for shrub–steppe fires in the northern Great Basin, United States
Tian et al. A global analysis of multifaceted urbanization patterns using Earth Observation data from 1975 to 2015
Halperin et al. Canopy cover estimation in miombo woodlands of Zambia: comparison of Landsat 8 OLI versus RapidEye imagery using parametric, nonparametric, and semiparametric methods
CN113850139B (en) Multi-source remote sensing-based forest annual phenological monitoring method
US9383478B2 (en) System and method for atmospheric parameter enhancement
Pu et al. A dynamic algorithm for wildfire mapping with NOAA/AVHRR data
CN112052757B (en) Method, device, equipment and storage medium for extracting fire trace information
Chanu et al. A geospatial approach for assessing the relation between changing land use/land cover and environmental parameters including land surface temperature of Chennai metropolitan city, India
Singh et al. Detection of 2011 Sikkim earthquake-induced landslides using neuro-fuzzy classifier and digital elevation model
Donmez et al. Mapping snow cover using landsat data: toward a fine-resolution water-resistant snow index
CN113486814A (en) Forest fire remote sensing dynamic monitoring method based on space-time fusion algorithm
Sharma et al. Impact of topography on accuracy of land cover spectral change vector analysis using AWiFS in Western Himalaya
Sun et al. Identifying terraces in the hilly and gully regions of the Loess Plateau in China
Morakinyo et al. Mapping of land cover and estimation of their emissivity values for gas flaring sites in the Niger Delta
Álvarez-Martínez et al. Can training data counteract topographic effects in supervised image classification? A sensitivity analysis in the Cantabrian Mountains (Spain)
Shahtahmassebi et al. Monitoring rapid urban expansion using a multi-temporal RGB-impervious surface model
CN113657275B (en) Automatic detection method for forest and grass fire points

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination