CN114708514B - Method and device for detecting forest felling change based on deep learning - Google Patents


Info

Publication number
CN114708514B
CN114708514B (application CN202210319083.5A)
Authority
CN
China
Prior art keywords
image data
data set
image
forest
deep learning
Prior art date
Legal status
Active
Application number
CN202210319083.5A
Other languages
Chinese (zh)
Other versions
CN114708514A (en)
Inventor
祁胜亮 (Qi Shengliang)
黄华兵 (Huang Huabing)
Current Assignee
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN202210319083.5A
Publication of CN114708514A
Application granted
Publication of CN114708514B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method and a device for detecting forest felling change based on deep learning, wherein the method comprises the following steps: respectively acquiring a first SAR image data set and a second optical image data set, and performing preprocessing such as noise removal on the first SAR image data set to obtain a first processed image data set; constructing a feature image set based on the polarization bands of the first processed image data set, and constructing feature labeling information based on the forest felling changes in the first processed image data set and the second optical image data set; and performing model training on a preset deep learning model by using the feature image set and the feature labeling information to obtain a trained model, and detecting the change state of forest felling with the trained model. Because the collected image data are preprocessed to remove noise before the deep learning model is trained and applied, the interference of noise is reduced and the accuracy of detection is improved.

Description

Method and device for detecting forest felling change based on deep learning
Technical Field
The invention relates to the technical field of forest felling change detection, in particular to a method and a device for detecting forest felling change based on deep learning.
Background
Forest resource monitoring refers to the periodic, location-based analysis, observation and evaluation of the quantity, quality, spatial distribution and utilization of forest resources. Studying forest dynamics makes it possible to track the quantity and quality of forest resources in time, grasp the laws and trends of their growth and decline, analyse the natural, economic and social conditions that influence and constrain forest growth, and establish or update forest resource archives, which is of great significance for revealing ecosystem change and for planning vegetation restoration and reconstruction.
With rapid socio-economic development, urban areas are expanding ever faster, demand for timber keeps rising, and forest area keeps shrinking. Forest monitoring supports dynamic analysis of forests and prediction of their trends, providing a basis for monitoring forest area and formulating forest protection policy. At present, manual visual interpretation of high-resolution optical imagery is common, but it consumes a great deal of labour and time, performs poorly, and misses many changes; in cloudy and rainy regions the usability of optical data is severely limited, so results are unavailable or inaccurate. Synthetic Aperture Radar (SAR) data can overcome the sensitivity of optical data to cloud and rain, but existing detection methods for SAR data face the following technical problems: speckle noise in SAR images makes their results inaccurate, and they require extensive manual input and have poor applicability.
Disclosure of Invention
The invention provides a method and a device for detecting forest felling change based on deep learning.
The first aspect of the embodiment of the invention provides a method for detecting forest felling change based on deep learning, which comprises the following steps:
respectively acquiring a first SAR image data set and a second optical image data set, and carrying out noise-removal preprocessing on the first SAR image data set to obtain a first processed image data set;
constructing a characteristic image set based on the polarization wave band of the first processed image data set, and constructing characteristic labeling information based on forest felling change contents in the first processed image data set and the second optical image data set;
and performing model training on a preset deep learning model by using the feature image set and the feature marking information to obtain a training model, and detecting the change state of forest felling by using the training model.
In one possible implementation form of the first aspect, the first processed image dataset comprises a training dataset and a test dataset;
said constructing a feature image set based on polarization bands of said first processed image data set comprises:
randomly selecting two different time nodes of the same place, and extracting a first image wave band set from the first processed image data set according to the two time nodes, wherein the first image wave band set comprises a vertically-transmitted and horizontally-received polarized wave band and a vertically-transmitted and vertically-received polarized wave band;
calculating the variation coefficient of each polarization band in the first image band set, and constructing a corresponding variation coefficient graph by using the variation coefficient;
and combining the polarized wave bands of the first processing image data set to obtain a combined feature map, and combining the variation coefficient map and the combined feature map into a feature image set.
In a possible implementation manner of the first aspect, the calculating a coefficient of variation of each polarization band in the first image band set, and constructing a corresponding coefficient of variation map by using the coefficient of variation includes:
calculating the variation coefficient of each pixel grid in the first image band set to obtain the value of the variation coefficient;
and replacing the pixel value in the pixel corresponding to the polarized wave band of the first image wave band set by using the coefficient of variation value to generate a coefficient of variation graph.
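The per-pixel coefficient-of-variation computation described above can be sketched as follows (an illustrative NumPy sketch, not part of the patent; the array names are hypothetical):

```python
import numpy as np

def coefficient_of_variation_map(stack):
    """Per-pixel coefficient of variation (std / mean) over a temporal
    stack of shape (time, rows, cols), e.g. one polarization band at
    two time nodes."""
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    # Guard against division by zero in empty / no-data pixels.
    return np.where(mean != 0, std / np.where(mean == 0, 1, mean), 0.0)

# Example: a VH band of the same scene at two time nodes.
vh_t1 = np.array([[10.0, 20.0], [30.0, 40.0]])
vh_t2 = np.array([[10.0, 10.0], [30.0, 20.0]])
cv_map = coefficient_of_variation_map(np.stack([vh_t1, vh_t2]))
```

Each value of `cv_map` replaces the corresponding pixel of the band, yielding the coefficient of variation map.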
In a possible implementation manner of the first aspect, the constructing feature labeling information based on forest felling change contents in the first processed image data set and the second optical image data set includes:
presenting the first processed image dataset and the second optical image dataset to a user;
and receiving labeled forest felling contents of the first processed image data set and the second optical image data set viewed by a user to obtain characteristic labeling information.
In a possible implementation manner of the first aspect, the denoising preprocessing includes:
and sequentially carrying out track correction, thermal noise removal, radiometric calibration, coherent speckle filtering, gradient correction and terrain correction.
In a possible implementation manner of the first aspect, the performing model training on a preset deep learning model by using the feature image set and the feature labeling information to obtain a training model includes:
taking the feature marking information as a learning target, and performing model training on a preset deep learning model by adopting the feature image set;
wherein, the loss function of the model training is binary cross entropy.
In a possible implementation manner of the first aspect, the detecting a change state of forest felling by using the training model includes:
acquiring a forest image to be detected, and segmenting the forest image to be detected into a plurality of image blocks by adopting the training model;
identifying and counting a plurality of image blocks of the forest which is felled;
and determining the change area and the change position of the forest cutting according to the patches of the forest cutting in the image blocks of the plurality of forests.
A second aspect of an embodiment of the present invention provides a device for detecting forest felling changes based on deep learning, including:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for respectively acquiring a first SAR image data set and a second optical image data set and carrying out noise removal preprocessing on the first SAR image data set to obtain a first processed image data set;
the construction module is used for constructing a characteristic image set based on the polarization wave band of the first processed image data set and constructing characteristic labeling information based on forest felling change contents in the first processed image data set and the second optical image data set;
and the training and detection module is used for carrying out model training on a preset deep learning model by utilizing the characteristic image set and the characteristic marking information to obtain a training model, and detecting the change state of forest felling by utilizing the training model.
Compared with the prior art, the method and device for detecting forest felling change based on deep learning provided by the embodiments of the invention have the following beneficial effects: two different kinds of data are collected, preprocessing such as noise removal is applied, and the processed data are used for model training and change detection with a deep learning model, thereby reducing the interference of noise and improving detection accuracy.
Drawings
FIG. 1 is a schematic flow chart of a method for detecting forest felling change based on deep learning according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating the operation of a denoising process according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a depth model provided in an embodiment of the invention;
FIG. 4 is a flowchart illustrating a method for detecting forest felling change based on deep learning according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a device for detecting forest felling change based on deep learning according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
The detection methods commonly used at present have the following technical problem: SAR images contain a large amount of speckle noise, which easily interferes with the recognition of forest felling targets, increases detection error, and reduces detection accuracy.
In order to solve the above problem, a method for detecting forest felling change based on deep learning provided by the embodiments of the present application will be described and explained in detail by the following specific embodiments.
Referring to fig. 1, a schematic flow chart of a method for detecting forest felling change based on deep learning according to an embodiment of the present invention is shown.
By way of example, the method for detecting forest felling change based on deep learning may include:
s11, respectively acquiring a first SAR image data set and a second optical image data set, and carrying out noise removal preprocessing on the first SAR image data set to obtain a first processed image data set.
In one embodiment, the first SAR image data set may be Sentinel-1 GRD-level data, specifically Ground Range Detected (GRD) multi-look images acquired in Interferometric Wide-swath (IW) mode by the C-band Synthetic Aperture Radar (SAR) carried on Earth-observation satellites launched under the European Space Agency's Copernicus programme.
In an embodiment, the second optical image data set may be Sentinel-2 L1C-level data, specifically top-of-atmosphere reflectance image products, after orthorectification and sub-pixel geometric refinement, from Earth-observation satellites carrying a multispectral imager launched by the European Space Agency.
Because Sentinel-1 GRD data contain thermal noise, speckle noise and various distortions caused by terrain relief or satellite-sensor tilt, subsequent use of the image data is greatly affected. To reduce these effects, the first SAR image data set may be denoised.
Referring to fig. 2, a flowchart illustrating the operation of the noise removing process according to an embodiment of the present invention is shown.
In an optional embodiment, the denoising preprocessing includes:
and sequentially carrying out orbit correction, thermal noise removal, radiometric calibration, coherent speckle filtering, slope correction and terrain correction on the first SAR image data set.
In particular, radiometric calibration may convert the received backscatter signal into the backscatter coefficient gamma0 (γ⁰); speckle filtering may use a Refined Lee filter.
grade correction may be to eliminate the effect of non-flat terrain contained in the data coverage area on subsequent experiments, using a radiometric terrain correction algorithm, which requires corresponding DEM data and a DEM data resolution higher than Sentinel-1 data. Preferably, 30m resolution Goynesday DEM data may be used and resampled to 9m to be higher than the 10m resolution Sentinel-1 data used.
Since images not directly at the sensor nadir are distorted by real-world terrain variation and the tilt of the satellite sensor, terrain correction is used to bring the geometric representation of the image as close to the real world as possible; a Range-Doppler terrain-correction algorithm is used, again with the Copernicus DEM resampled to 9 m resolution. In actual operation, the distances in the SAR image are distorted by the terrain it covers and by the sensor tilt, and image data away from the sensor nadir are distorted as well, so the geometry of the SAR image differs from the real world. The Range-Doppler algorithm geocodes the SAR image using the orbit state-vector information in the image metadata (or precise geographic positions obtained from external precise orbits), radar timing annotations, slant-to-ground range conversion parameters, and a reference Digital Elevation Model (DEM), so that the geometric representation of the SAR image conforms to the real world.
Noise removal uses a Refined Lee filter. SAR images have an inherent salt-and-pepper texture, known as speckle, which degrades image quality and makes features difficult to interpret. The Refined Lee algorithm overcomes the weakness of the Lee filter, whose filtering of pixels near edges or point targets in otherwise homogeneous regions is insufficient; it is an edge-detection-based adaptive filter that improves estimation accuracy by redefining the neighbourhood of the centre pixel.
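The six preprocessing steps correspond closely to operators exposed by ESA's SNAP Graph Processing Tool (gpt). The sketch below only assembles a gpt command line; the operator names and graph file are assumptions to be verified against the installed SNAP version, since the patent itself does not name a specific tool:

```python
# Assumed SNAP operator names for the six-step preprocessing chain.
PREPROCESSING_STEPS = [
    "Apply-Orbit-File",      # orbit correction
    "ThermalNoiseRemoval",   # thermal noise removal
    "Calibration",           # radiometric calibration (to gamma0)
    "Speckle-Filter",        # Refined Lee speckle filtering
    "Terrain-Flattening",    # slope correction
    "Terrain-Correction",    # Range-Doppler terrain correction
]

def build_gpt_command(graph_xml, source, target):
    """Assemble a gpt invocation for a preprocessing graph file."""
    return ["gpt", graph_xml, f"-Ssource={source}", "-t", target]

cmd = build_gpt_command("s1_preprocess.xml", "S1A_IW_GRDH.zip", "out.tif")
```

In practice the graph XML would chain the operators above in order; the file and product names here are placeholders.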
In this embodiment, the preprocessed image data fit the geometric representation of the real world more closely and the noise affecting their use is removed, so that ground-object types are easier to recognise by eye and the images can be used subsequently without interfering with later results.
S12, constructing a characteristic image set based on the polarization wave band of the first processed image data set, and constructing characteristic labeling information based on forest felling change contents in the first processed image data set and the second optical image data set.
The feature image set may include feature images required for training, and the feature labeling information may include target information required for training.
During training, the characteristic images can be repeatedly used for training based on target information, so that the model can quickly identify whether the content of the target information is included in the image to be detected during detection, and whether the forest is cut can be determined.
In an alternative embodiment, the first processed image dataset comprises a training dataset and a test dataset.
In particular, two first processed image data sets may be acquired, one as a training data set and one as a test data set.
Wherein, as an example, step S12 may comprise the following sub-steps:
and a substep S121 of arbitrarily selecting two different time nodes of the same location, and extracting a first image band set from the first processed image data set according to the two time nodes, where the first image band set includes a vertically-transmitted and horizontally-received polarization band and a vertically-transmitted and vertically-received polarization band.
In an embodiment, a time node corresponding to a training data set may be randomly selected, a time node corresponding to a testing data set may be randomly selected, and a corresponding band may be extracted from the data set according to the time node corresponding to the data set. Specifically, the corresponding bands may be extracted from the training data set and the corresponding bands may be extracted from the test data set, respectively, to obtain the first image band set.
In particular, corresponding VH, VV bands may be extracted from the training data set, and corresponding VH, VV bands may be extracted from the test data set.
It should be noted that the VH and VV bands refer to two polarization modes commonly used in synthetic aperture radar remote sensing. Both propagation and scattering of electromagnetic waves are vector phenomena, and polarization is precisely the vector characteristic used to study them. Radar is an active remote sensing system that transmits and receives electromagnetic signals: the electric field vector of the transmitted energy pulse can be polarized in the vertical or horizontal plane, and the radar can transmit horizontal (H) or vertical (V) electric field vectors and likewise receive horizontal (H) or vertical (V) signals.
The VH wave band refers to image data formed by a vertical emission signal and a horizontal reception signal of a radar system; the VV wave band refers to image data formed by signals vertically transmitted and vertically received by a radar system.
And a substep S122, calculating a coefficient of variation of each polarization band in the first image band set, and constructing a corresponding coefficient of variation graph by using the coefficient of variation.
The variation coefficients corresponding to the VH band and the VV band of the training data set can be calculated, and then the variation coefficients are used to construct a variation coefficient map corresponding to the VH band and the VV band, respectively.
Similarly, the variation coefficients corresponding to the VH and VV bands of the test data set may be calculated, and then used to construct the corresponding variation coefficient maps.
In an embodiment, the substep S122 may comprise the substeps of:
and a substep S1221 of calculating a coefficient of variation of each pixel grid in the first image band set to obtain a coefficient of variation value.
Substep S1222, replacing the pixel values in the corresponding pixels of the polarized bands of the first image band set with the coefficient of variation values, to generate a coefficient of variation map.
The VH and VV bands of the training and test data sets are remote sensing images, i.e. images composed of pixel grids in which each pixel holds a value.
For each of the VH and VV bands, the coefficient of variation of every pixel is calculated across the two images; the resulting value then replaces the original pixel value, producing a feature map composed of coefficients of variation, namely the coefficient of variation map.
In actual operation, the variation coefficient maps corresponding to the VH band and the VV band of the training data set can be calculated respectively, and then the variation coefficient maps corresponding to the VH band and the VV band of the test data set can be calculated respectively, so as to obtain 4 variation coefficient maps in total.
And a substep S123 of merging the polarized wave bands of the first image wave band set to obtain a merged feature map, and merging the variation coefficient map and the merged feature map into a feature image set.
In this embodiment, the VH band and the VV band of the training data set may be merged to obtain a merged feature map of the training data set; then, the VH band and the VV band of the test data set are merged to obtain a merged feature map of the test data set.
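Merging the VH and VV bands into a two-channel feature map, as in substep S123, might look like this in NumPy (an illustrative sketch only; the band arrays are stand-ins):

```python
import numpy as np

def merge_bands(vh, vv):
    """Stack the VH and VV bands into a (rows, cols, 2) merged feature map."""
    return np.dstack([vh, vv])

vh = np.zeros((4, 4))   # stand-in VH band
vv = np.ones((4, 4))    # stand-in VV band
merged = merge_bands(vh, vv)
```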
Wherein, as an example, step S12 may comprise the following sub-steps:
substep S124, presenting the first processed image data set and the second optical image data set to a user.
And a substep S125 of receiving labeled forest felling contents of the first processed image data set and the second optical image data set viewed by a user to obtain characteristic labeling information.
In one embodiment, the second optical image dataset is Sentinel-2 data, which can be used for visual interpretation during sample labeling. Visual interpretation is the reverse process of remote sensing imaging: a professional acquires information about target ground objects on a remote sensing image through direct observation or with auxiliary interpretation instruments. That is, the information needed is extracted from the image by eye (possibly aided by optical instruments), drawing on the interpreter's knowledge, experience and reference data, through analysis, reasoning and judgment. Here, places where forest felling has occurred are identified on the image from experience and marked accordingly.
Optionally, the user may manually label the database, or the database may be established after the professional performs manual labeling, and may be directly used in subsequent operations.
Specifically, according to the time nodes of the first SAR image data set, pixel-by-pixel labeling may be performed on the portions of the second optical image data set where forest felling changes occur, and the labeled data then combined into a binary image.
And S13, performing model training on a preset deep learning model by using the feature image set and the feature marking information to obtain a training model, and detecting the change state of forest felling by using the training model.
The preprocessed first SAR image data set and the correspondingly labeled binary labeled data set can be packaged to form a training set and a testing set, and model training is performed on a preset deep learning model by using the packaged data.
In one embodiment, step S13 may include the following sub-steps:
and the substep S131, taking the feature marking information as a learning target, and performing model training on a preset deep learning model by adopting the feature image set.
Wherein, the loss function of the model training is binary cross entropy.
In one embodiment, the preset deep learning model can be a U-net network structure. U-net is a fully convolutional network for semantic segmentation whose main structure consists of an encoder and a decoder; it can capture hierarchical spatial patterns at multiple scales and produces segmentation results end-to-end without excessive manual input.
Referring to fig. 3, a schematic structural diagram of a depth model provided in an embodiment of the present invention is shown.
In actual training, the input data may be the variation coefficient maps corresponding to the VH and VV bands at the two time points plus the two merged feature maps, six inputs in total; the input image size is 256 × 256 to suit the model. The left side of the model is a series of downsampling operations consisting of convolutions (window size 3 × 3) and max pooling (window size 2 × 2).
Referring to fig. 3, the right side of the model restores the extracted features to the same size as the input data by deconvolution and upsampling. The model output is 256 × 256 × 1, where each value represents the probability of forest felling: in a detection result, pixels with probability above 0.5 are marked as felled and the remaining pixels as not felled. The loss function is binary cross-entropy, the optimizer is Adam with a learning rate of 0.0001, and batch normalization is applied to every convolutional layer to avoid overfitting. Except for the last layer, the activation function is ReLU; the last layer uses a sigmoid to output a value between 0 and 1 representing the probability of forest felling. Compared with the classical U-net, the number of feature channels is greatly reduced, so the time required for model training is also greatly reduced.
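The output head described above (per-pixel sigmoid probability, binary cross-entropy loss, 0.5 threshold) can be illustrated framework-free; this NumPy sketch shows the arithmetic only, not the U-net itself, and the toy 2 × 2 arrays are placeholders:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def binary_cross_entropy(y_true, y_prob, eps=1e-7):
    """Mean per-pixel binary cross-entropy, the training loss."""
    y_prob = np.clip(y_prob, eps, 1.0 - eps)
    return float(-np.mean(y_true * np.log(y_prob)
                          + (1 - y_true) * np.log(1.0 - y_prob)))

def to_felling_mask(y_prob, threshold=0.5):
    """Pixels with probability above 0.5 are marked as felled (1)."""
    return (y_prob > threshold).astype(np.uint8)

logits = np.array([[2.0, -2.0], [0.1, -0.1]])  # toy 2x2 model output
probs = sigmoid(logits)
mask = to_felling_mask(probs)
loss = binary_cross_entropy(np.array([[1, 0], [1, 0]]), probs)
```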
In one embodiment, step S13 may further include the following sub-steps:
and S132, acquiring a forest image to be detected, and segmenting the forest image to be detected into a plurality of image blocks by adopting the training model.
And a substep S133 of identifying and counting a plurality of image blocks of the forest which are felled.
And a substep S134 of determining the changed area and position of forest felling according to the felled patches in the plurality of image blocks.
Specifically, whether the forest in each image block is cut or not can be respectively identified through the training model, if yes, the output is 1, and if not, the output is 0.
And rearranging the image blocks into a detection image according to the original arrangement sequence of the images, and counting the number and the positions of the output 1 in the detection image so as to obtain the cutting area and the cutting position.
Finally, the counted felling area and felling position are compared with the previous statistics to obtain the felling change state.
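The detection stage above (tile the image, classify each block, reassemble, then count area and positions) might be sketched as below; the 16-pixel tile stands in for the 256 × 256 blocks, and `classify` is a toy stand-in for the trained model:

```python
import numpy as np

TILE = 16  # stand-in for the 256x256 blocks used in the patent

def detect_felling(image, classify):
    """Tile the image, classify each tile (1 = felled), reassemble the
    per-tile decisions, and report the count and positions of felled tiles."""
    rows, cols = image.shape[0] // TILE, image.shape[1] // TILE
    decisions = np.zeros((rows, cols), dtype=np.uint8)
    for i in range(rows):
        for j in range(cols):
            tile = image[i * TILE:(i + 1) * TILE, j * TILE:(j + 1) * TILE]
            decisions[i, j] = classify(tile)
    positions = list(zip(*np.nonzero(decisions)))
    return decisions, len(positions), positions

# Toy stand-in classifier: a tile counts as felled if its mean is high.
img = np.zeros((32, 32))
img[:16, 16:] = 1.0  # "felled" region covering the top-right tile
decisions, n_felled, positions = detect_felling(img, lambda t: int(t.mean() > 0.5))
```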
Referring to fig. 4, an operation flowchart of a method for detecting forest felling change based on deep learning according to an embodiment of the present invention is shown.
Specifically, sentinel-1GRD level data and Sentinel-2 L1C level data may be prepared in advance, a variation coefficient map and a merged feature map may be constructed using the Sentinel-1GRD level data, data labeling may be performed using the preprocessed Sentinel-1GRD level data and Sentinel-2 L1C level data, model training may be performed using the variation coefficient map, the merged feature map, and the labeled content, and finally, change detection may be performed using the trained model.
In this embodiment, the method for detecting forest felling change based on deep learning has the following beneficial effects: two different kinds of data are collected, preprocessing such as speckle-noise suppression is applied, and the processed data are used for training the deep learning model and detecting changes, thereby reducing the interference of noise and improving detection accuracy.
The embodiment of the invention also provides a device for detecting forest felling change based on deep learning, and referring to fig. 5, a schematic structural diagram of the device for detecting forest felling change based on deep learning provided by the embodiment of the invention is shown.
Wherein, as an example, the device for detecting forest felling change based on deep learning may comprise:
an obtaining module 501, configured to obtain a first SAR image data set and a second optical image data set, and perform noise removal preprocessing on the first SAR image data set to obtain a first processed image data set;
a construction module 502, configured to construct a feature image set based on the polarization band of the first processed image data set, and construct feature labeling information based on forest felling change contents in the first processed image data set and the second optical image data set;
and the training and detecting module 503 is configured to perform model training on a preset deep learning model by using the feature image set and the feature labeling information to obtain a training model, and detect a change state of forest felling by using the training model.
Optionally, the first processed image dataset comprises a training dataset and a testing dataset;
the building module is further configured to:
randomly selecting two different time nodes of the same place, and extracting a first image wave band set from the first processed image data set according to the two time nodes, wherein the first image wave band set comprises a vertically-transmitted and horizontally-received (VH) polarized wave band and a vertically-transmitted and vertically-received (VV) polarized wave band;
calculating the variation coefficient of each polarization wave band in the first image wave band set, and constructing a corresponding variation coefficient graph by using the variation coefficient;
and combining the polarized wave bands of the first image wave band set to obtain a combined feature map, and combining the variation coefficient map and the combined feature map into a feature image set.
Optionally, the building module is further configured to:
calculating the variation coefficient of each pixel grid in the first image band set to obtain the value of the variation coefficient;
and replacing pixel values in pixels corresponding to the polarized wave bands of the first image wave band set by using the coefficient of variation values to generate a coefficient of variation graph.
Optionally, the building module is further configured to:
presenting the first processed image data set and the second optical image data set to a user;
and receiving the forest felling contents labeled by the user after viewing the first processed image data set and the second optical image data set, so as to obtain the feature labeling information.
Optionally, the noise removal preprocessing comprises:
sequentially performing orbit correction, thermal noise removal, radiometric calibration, coherent speckle filtering, gradient correction and terrain correction.
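The sequential nature of the preprocessing chain can be sketched as below. The stubs are assumptions: each stands in for the corresponding SAR-toolbox operator (such operators exist, for example, in ESA SNAP), and here they only record the execution order, which is the point the text makes.

```python
# Hedged sketch of the preprocessing chain: the six steps are applied
# strictly in sequence. Each stub merely logs its name; real operators
# would transform the scene data.

def make_step(name):
    def step(scene):
        scene["applied"].append(name)  # record that this operator ran
        return scene
    return step

PIPELINE = [make_step(n) for n in (
    "orbit_correction", "thermal_noise_removal", "radiometric_calibration",
    "coherent_speckle_filtering", "gradient_correction", "terrain_correction")]

def preprocess(scene):
    for step in PIPELINE:
        scene = step(scene)
    return scene
```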
Optionally, the training and detection module is further configured to:
taking the feature labeling information as a learning target, and performing model training on a preset deep learning model by adopting the feature image set;
wherein, the loss function of the model training is binary cross entropy.
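The binary cross-entropy loss named above, written out explicitly for a single felled/not-felled label, is shown below as a minimal sketch; it is not the patent's training code, and the clamping epsilon is an assumption added to avoid log(0).

```python
# Binary cross entropy for one prediction: p is the predicted felling
# probability, y is the ground-truth label (1 = felled, 0 = not felled).
import math

def binary_cross_entropy(p, y, eps=1e-12):
    p = min(max(p, eps), 1.0 - eps)  # clamp to keep log() finite
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))
```

A confident correct prediction (p = 0.9, y = 1) gives a small loss of about 0.105, whereas a confident wrong one (p = 0.1, y = 1) gives about 2.303, so minimizing this loss pushes the model's 0/1 outputs toward the labeled felling content.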
Optionally, the training and detection module is further configured to:
acquiring a forest image to be detected, and segmenting the forest image to be detected into a plurality of image blocks by adopting the training model;
identifying and counting a plurality of image blocks of the forest which are cut down;
and determining the change area and the change position of the forest cutting according to the patches of the forest cutting in the image blocks of the plurality of forests.
It can be clearly understood by those skilled in the art that, for convenience and brevity, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
Further, an embodiment of the present application further provides an electronic device, including: memory, a processor and a computer program stored on the memory and executable on the processor, the processor when executing the program implementing the method for detecting forest deforestation change based on deep learning as described in the above embodiments.
Further, the present application also provides a computer-readable storage medium, which stores computer-executable instructions for causing a computer to execute the method for detecting forest felling change based on deep learning according to the foregoing embodiment.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.

Claims (9)

1. A method for detecting forest felling change based on deep learning is characterized by comprising the following steps:
respectively acquiring a first SAR image data set and a second optical image data set, and carrying out noise removal preprocessing on the first SAR image data set to obtain a first processed image data set;
constructing a characteristic image set based on the polarization wave band of the first processed image data set, and constructing characteristic labeling information based on forest felling change contents in the first processed image data set and the second optical image data set;
performing model training on a preset deep learning model by using the feature image set and the feature marking information to obtain a training model, and detecting the change state of forest felling by using the training model;
the first processed image dataset comprises a training dataset and a testing dataset;
said constructing a feature image set based on polarization bands of said first processed image data set comprises:
randomly selecting two different time nodes of the same place, and extracting a first image wave band set from the first processed image data set according to the two time nodes, wherein the first image wave band set comprises a vertically-transmitted and horizontally-received polarized wave band and a vertically-transmitted and vertically-received polarized wave band;
calculating the variation coefficient of each polarization wave band in the first image wave band set, and constructing a corresponding variation coefficient graph by using the variation coefficient;
and merging the polarized wave bands of the first image wave band set to obtain a merged feature map, and merging the variation coefficient map and the merged feature map into a feature image set.
2. The method for detecting forest deforestation change based on deep learning as claimed in claim 1, wherein the calculating the coefficient of variation of each polarization band in the first image band set and using the coefficient of variation to construct a corresponding coefficient of variation map comprises:
calculating the variation coefficient of each pixel grid in the first image band set to obtain the value of the variation coefficient;
and replacing the pixel value in the pixel corresponding to the polarized wave band of the first image wave band set by using the coefficient of variation value to generate a coefficient of variation graph.
3. The method for detecting forest cutting change based on deep learning as claimed in claim 1, wherein the constructing feature labeling information based on the forest cutting change content in the first processed image data set and the second optical image data set comprises:
presenting the first processed image dataset and the second optical image dataset to a user;
and receiving labeled forest felling contents of the first processed image data set and the second optical image data set viewed by a user to obtain characteristic labeling information.
4. The method for detecting forest deforestation change based on deep learning as claimed in any one of claims 1-3, wherein the denoising pre-processing comprises:
and sequentially carrying out orbit correction, thermal noise removal, radiometric calibration, coherent speckle filtering, gradient correction and terrain correction.
5. The method for detecting forest deforestation change based on deep learning as claimed in any one of claims 1 to 3, wherein the performing model training on a preset deep learning model by using the feature image set and the feature labeling information to obtain a training model comprises:
taking the feature marking information as a learning target, and performing model training on a preset deep learning model by adopting the feature image set;
wherein, the loss function of the model training is binary cross entropy.
6. The method for detecting forest cutting change based on deep learning as claimed in any one of claims 1-3, wherein the detecting the change state of forest cutting by using the training model comprises:
acquiring a forest image to be detected, and segmenting the forest image to be detected into a plurality of image blocks by adopting the training model;
identifying and counting a plurality of image blocks of the forest which are cut down;
and determining the change area and the change position of the forest cutting according to the patches of the forest cutting in the image blocks of the plurality of forests.
7. A detection apparatus for forest felling change based on deep learning, the apparatus comprising:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for respectively acquiring a first SAR image data set and a second optical image data set and carrying out noise removal preprocessing on the first SAR image data set to obtain a first processed image data set;
the construction module is used for constructing a characteristic image set based on the polarization wave band of the first processed image data set and constructing characteristic labeling information based on forest felling change contents in the first processed image data set and the second optical image data set;
the training and detecting module is used for carrying out model training on a preset deep learning model by utilizing the characteristic image set and the characteristic marking information to obtain a training model, and detecting the change state of forest felling by utilizing the training model;
the first processed image dataset comprises a training dataset and a testing dataset;
said constructing a feature image set based on polarization bands of said first processed image data set comprises:
randomly selecting two different time nodes of the same place, and extracting a first image wave band set from the first processed image data set according to the two time nodes, wherein the first image wave band set comprises a vertically-transmitted and horizontally-received polarized wave band and a vertically-transmitted and vertically-received polarized wave band;
calculating the variation coefficient of each polarization band in the first image band set, and constructing a corresponding variation coefficient graph by using the variation coefficient;
and merging the polarized wave bands of the first image wave band set to obtain a merged feature map, and merging the variation coefficient map and the merged feature map into a feature image set.
8. An electronic device, comprising: memory, processor and computer program stored on the memory and executable on the processor, characterized in that the processor when executing the program implements a method for detection of forest deforestation based on deep learning according to any of claims 1-6.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium stores computer-executable instructions for causing a computer to perform the method for detecting forest deforestation based on deep learning of any one of claims 1 to 6.
CN202210319083.5A 2022-03-29 2022-03-29 Method and device for detecting forest felling change based on deep learning Active CN114708514B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210319083.5A CN114708514B (en) 2022-03-29 2022-03-29 Method and device for detecting forest felling change based on deep learning

Publications (2)

Publication Number Publication Date
CN114708514A CN114708514A (en) 2022-07-05
CN114708514B true CN114708514B (en) 2023-04-07

Family

ID=82170932


Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113705340A (en) * 2021-07-16 2021-11-26 电子科技大学 Deep learning change detection method based on radar remote sensing data

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB201502744D0 (en) * 2015-02-18 2015-04-01 Univ Edinburgh Satellite image processing
CN107358202A (en) * 2017-07-13 2017-11-17 西安电子科技大学 Polarization SAR remote sensing imagery change detection method based on depth curve ripple heap stack network
CN110852262A (en) * 2019-11-11 2020-02-28 南京大学 Agricultural land extraction method based on time sequence top-grade first remote sensing image
CN112906638A (en) * 2021-03-19 2021-06-04 中山大学 Remote sensing change detection method based on multi-level supervision and depth measurement learning
CN113486705A (en) * 2021-05-21 2021-10-08 中国科学院空天信息创新研究院 Flood monitoring information extraction method, device, equipment and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant