CN115170979A - Mining area fine land classification method based on multi-source data fusion - Google Patents


Info

Publication number
CN115170979A
Authority
CN
China
Prior art keywords
feature
depth
fusion
convolution
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210769160.7A
Other languages
Chinese (zh)
Other versions
CN115170979B (en)
Inventor
李全生
郭俊廷
李军
贺安民
陈建强
常博
郑三龙
李鹏
丁雅欣
杜守航
张成业
杨飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Mining and Technology Beijing CUMTB
China Energy Investment Corp Ltd
National Institute of Clean and Low Carbon Energy
CHN Energy Group Xinjiang Energy Co Ltd
Original Assignee
China University of Mining and Technology Beijing CUMTB
China Energy Investment Corp Ltd
National Institute of Clean and Low Carbon Energy
CHN Energy Group Xinjiang Energy Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Mining and Technology Beijing CUMTB, China Energy Investment Corp Ltd, National Institute of Clean and Low Carbon Energy, CHN Energy Group Xinjiang Energy Co Ltd filed Critical China University of Mining and Technology Beijing CUMTB
Priority to CN202210769160.7A priority Critical patent/CN115170979B/en
Publication of CN115170979A publication Critical patent/CN115170979A/en
Application granted granted Critical
Publication of CN115170979B publication Critical patent/CN115170979B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06V20/13 Satellite images
    • G06V10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06V10/54 Extraction of image or video features relating to texture
    • G06V10/58 Extraction of image or video features relating to hyperspectral data
    • G06V10/764 Recognition or understanding using machine-learning classification, e.g. of video objects
    • G06V10/771 Feature selection, e.g. selecting representative features from a multi-dimensional feature space
    • G06V10/806 Fusion of extracted features
    • G06V10/809 Fusion of classification results, e.g. where the classifiers operate on the same input data
    • G06V10/817 Fusion by voting
    • G06V10/82 Recognition or understanding using neural networks
    • G06V20/188 Vegetation
    • G06V20/194 Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB
    • G06V20/70 Labelling scene content, e.g. deriving syntactic or semantic representations

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Remote Sensing (AREA)
  • Astronomy & Astrophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a mining-area fine land classification method based on multi-source data fusion. First, multi-source data are acquired and preprocessed, multi-scale segmentation is performed to obtain image objects, and a high-resolution image block is cut for each image object for depth-feature extraction. A multi-branch convolutional neural network model then automatically extracts feature column vectors and deep semantic features. A multi-source feature depth-fusion module performs feature fusion, and a feature-importance weight calculation module computes importance weights for the depth features of the multi-source data, realizing adaptive fusion of the effective information. Finally, the fused depth features are fed to a random forest classifier to obtain the fine classification of mining-area ground objects. The method realizes fine land classification of mining areas and provides basic data support for mining-area land monitoring and management and for ecological environment protection, for which it has important significance.

Description

Mining area fine land classification method based on multi-source data fusion
Technical Field
The invention relates to the field of remote sensing image processing, land classification and deep learning, in particular to a mining area fine land classification method based on multi-source data fusion.
Background
Many land cover/land use classification techniques based on remote sensing imagery exist. The traditional approach is visual interpretation, which is accurate but inefficient and costly. Classical machine learning algorithms greatly improve classification efficiency over visual interpretation, but the gain in accuracy is limited, and they struggle with high-dimensional data and generalize poorly. Moreover, mining areas contain many land-use types and highly fragmented terrain, so existing mining-area classification methods can only identify coarse classes such as vegetation, mining land, water, bare land and roads, and cannot subdivide vegetation and mining land more finely.
Existing land cover/use classification methods usually rely on a single source of remote sensing imagery. Some researchers classify land with synthetic aperture radar (SAR) images, which provide good texture information and can to some extent reduce the errors that multispectral imagery suffers from ("same spectrum, different objects" and "same object, different spectra"), but SAR images are heavily polluted by noise and their spatial resolution is low. To overcome the low accuracy of single-image classification and obtain richer, higher-precision land cover/use information, fusion of multi-source remote sensing data has been proposed. Current fusion methods, however, extract features by manual design and classify on that basis; manually designed features have limited expressive power for different ground objects, yield low accuracy, and do not fully exploit the advantages of the different data sources.
A mining area is a complex scene in which human activity and the natural environment interact: resource extraction and mine reclamation drive changes in land use and the ecological environment. Knowing the state of mining-area land provides basic data support for understanding the area's natural resources, managing land use, and achieving sustainable development, so realizing fine land classification of mining areas is a problem that urgently needs to be solved.
Disclosure of Invention
Addressing the technical problem of fine land classification of mining areas in the prior art, the invention provides a mining-area fine land classification method based on multi-source data fusion. It combines Sentinel-1 SAR imagery, Sentinel-2 multispectral imagery and high-resolution remote sensing imagery, integrating the high spatial resolution of the high-resolution image, the spectral information of the multispectral image and the texture information of the SAR image, and thereby overcomes the low accuracy of single-image classification and the inability to subdivide vegetation and mining land. First, multi-source data are acquired and preprocessed; the high-resolution image is segmented at multiple scales into image objects; the spectral and texture feature column vectors of each image object are extracted from the multispectral and SAR images respectively; and a high-resolution image block is cut for each object for depth-feature extraction. A multi-branch convolutional neural network model then automatically extracts deep semantic features from the feature column vectors and the high-resolution image objects. Next, to supply sufficient spectral, texture and detail information for subsequent classification, a multi-source feature depth-fusion module fuses the deep semantic features of the multi-source data: its feature-importance weight calculation module computes importance weights for the depth features of each source, realizing adaptive fusion of the effective information, and its feature redundancy elimination module reduces and eliminates redundancy among the multi-source information. Finally, in view of the random forest's strong resistance to interference and overfitting, balanced errors, fast training and ease of implementation, the fused depth features are classified by a random forest classifier to obtain the fine ground-object classification result for the mining area.
The purpose of the invention is realized by the following technical scheme:
a mining area fine land classification method based on multi-source data fusion comprises the following steps:
A. Acquire multi-source data and fuse it to obtain high-resolution image data; segment the high-resolution image data to obtain image objects; compute the multispectral spectral features and SAR texture features of each image object; and build a high-resolution image dataset from the image objects;
B. building a multi-branch convolution neural network model, and performing depth feature extraction and depth feature fusion processing on the high-resolution image data set by using the multi-branch convolution neural network model;
B1. The multi-branch convolutional neural network model contains two 1D-Net feature extraction networks, which extract the spectral and texture feature column vectors respectively. Each 1D-Net consists, in order, of a first convolutional layer, a first max-pooling layer, a second convolutional layer, a second max-pooling layer and a fully connected layer. The first convolutional layer has four kernels of size 3 and stride 1 with ReLU activation, producing as many feature maps as kernels; the first max-pooling layer downsamples these feature maps. The second convolutional layer has eight kernels of size 3 and stride 1 with ReLU activation, again producing one feature map per kernel; the second max-pooling layer downsamples them. The fully connected layer re-extracts the features and outputs the deep semantic features derived from the feature column vectors;
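The 1D-Net forward pass described above (conv → pool → conv → pool → fully connected) can be sketched in NumPy. This is an illustrative reconstruction with random weights, not the patent's trained network; the 22-dimensional input and 16-dimensional output are assumed sizes:

```python
import numpy as np

def conv1d(x, kernels, stride=1):
    # x: (C_in, L); kernels: (C_out, C_in, K); 'valid' 1-D convolution
    c_out, c_in, k = kernels.shape
    out_len = (x.shape[1] - k) // stride + 1
    out = np.zeros((c_out, out_len))
    for o in range(c_out):
        for i in range(out_len):
            out[o, i] = np.sum(kernels[o] * x[:, i * stride:i * stride + k])
    return out

def relu(x):
    return np.maximum(x, 0)

def maxpool1d(x, size=2):
    L = x.shape[1] // size * size          # drop any trailing remainder
    return x[:, :L].reshape(x.shape[0], -1, size).max(axis=2)

rng = np.random.default_rng(0)
vec = rng.standard_normal((1, 22))          # e.g. a 22-dim spectral feature column vector
w1 = rng.standard_normal((4, 1, 3)) * 0.1   # four kernels of size 3, stride 1
w2 = rng.standard_normal((8, 4, 3)) * 0.1   # eight kernels of size 3, stride 1
h = maxpool1d(relu(conv1d(vec, w1)))        # (4, 20) -> (4, 10)
h = maxpool1d(relu(conv1d(h, w2)))          # (8, 8)  -> (8, 4)
flat = h.reshape(-1)
w_fc = rng.standard_normal((16, flat.size)) * 0.1
deep_feat = relu(w_fc @ flat)               # the branch's deep semantic feature
print(deep_feat.shape)                      # (16,)
```

Each spectral or texture column vector is processed by its own branch of this shape, so the two 1D-Nets can run in parallel.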
The multi-branch convolutional neural network model further comprises a dilated-convolution ResNet50 (Dilated-ResNet50), which consists of Conv Block and Identity Block modules and performs depth-feature extraction on the image objects;
B2. The multi-branch convolutional neural network model further comprises a multi-source feature depth-fusion module, which fuses the deep semantic features and depth features obtained in step B1 to yield the fused depth features;
C. Build a sample dataset of each mining-area land-use type, corresponding to the high-resolution image dataset, and train the multi-branch convolutional neural network model on it;
D. and inputting the high-resolution image data set into the trained multi-branch convolutional neural network model to obtain the fusion depth characteristic of each image object, and then inputting the fusion depth characteristic into a random forest classifier to classify the types of the land used in the mining area.
In order to better realize the method for classifying the fine land of the mining area, the step A of the invention comprises the following steps:
A1. From the multispectral remote sensing images in the multi-source data, select the Sentinel-2 L1C multispectral image or/and high-resolution (Gaofen) image with the least cloud cover; from the SAR image data, select the Sentinel-1 interferometric wide-swath dual-polarized ground-range image of the same or an adjacent date;
A2. Perform atmospheric correction on the multispectral image with SNAP software; discard the 60 m resolution bands, keep the 10 m and 20 m bands, and resample bands 5, 6, 7, 8b, 11 and 12 to obtain a multispectral band set at 10 m spatial resolution covering visible, red-edge, near-infrared and shortwave-infrared wavelengths. Preprocess the SAR image data in SNAP (orbit correction, radiometric calibration, thermal-noise removal, terrain correction, filtering, reprojection and clipping), resample it to 10 m spatial resolution, and then apply geometric correction;
A3. Extract 10 spectral bands and 12 spectral features from the multispectral image; compute the two backscattering coefficients from the SAR image and, with a 5×5 filter window, extract 32 texture features from the grey-level co-occurrence matrix at the four angles 0°, 45°, 90° and 135°;
A4. Determine the optimal segmentation scale automatically with the ESP tool and perform object-oriented, multi-scale segmentation of the high-resolution image data to obtain the segmentation result.
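Step A3's grey-level co-occurrence matrix (GLCM) texture extraction can be sketched as below. The four offsets correspond to the 0°, 45°, 90° and 135° angles, and the statistics follow the standard Haralick definitions (correlation is omitted for brevity); the tiny 3×3 patch and 3 grey levels are illustrative only:

```python
import numpy as np

def glcm(img, dx, dy, levels):
    # Symmetric, normalised grey-level co-occurrence matrix for one offset (dx, dy)
    h, w = img.shape
    m = np.zeros((levels, levels))
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[img[y, x], img[y2, x2]] += 1
                m[img[y2, x2], img[y, x]] += 1
    return m / m.sum()

def texture_stats(p):
    i, j = np.indices(p.shape)
    mean = (i * p).sum()
    return {
        "mean": mean,
        "variance": ((i - mean) ** 2 * p).sum(),
        "homogeneity": (p / (1.0 + (i - j) ** 2)).sum(),
        "contrast": ((i - j) ** 2 * p).sum(),
        "dissimilarity": (np.abs(i - j) * p).sum(),
        "entropy": -(p[p > 0] * np.log(p[p > 0])).sum(),
        "asm": (p ** 2).sum(),  # angular second moment
    }

# Offsets for the four angles (one common convention; row axis points down)
offsets = {0: (1, 0), 45: (1, -1), 90: (0, -1), 135: (-1, -1)}
patch = np.array([[0, 1, 1], [0, 0, 2], [2, 1, 0]])
feats = {a: texture_stats(glcm(patch, dx, dy, levels=3)) for a, (dx, dy) in offsets.items()}
print(feats[0]["contrast"])
```

With 8 statistics per angle and 4 angles, this yields the 32 texture features per backscattering band that step A3 describes.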
The model training method of the multi-branch convolutional neural network model in the step C of the mining area fine land classification method comprises the following steps:
C1. The sample data in the sample dataset are collected mining-area remote sensing images, labelled with the classes: open stope, refuse dump, coal pile, mine building, tree, shrub, high/medium/low-coverage grassland, mixed grass-shrub-tree, water body, road and bare land. The samples of each class are then randomly divided into a training set, a validation set and a test set in the ratio 6:2:2;
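The 6:2:2 per-class split in step C1 amounts to a stratified random partition; a minimal sketch (class names and sample counts here are illustrative placeholders):

```python
import random

def stratified_split(samples_by_class, ratios=(0.6, 0.2, 0.2), seed=42):
    # Shuffle each land-use class independently, then cut it 6:2:2
    rng = random.Random(seed)
    train, val, test = [], [], []
    for label, items in samples_by_class.items():
        items = list(items)
        rng.shuffle(items)
        n = len(items)
        n_tr = int(n * ratios[0])
        n_va = int(n * ratios[1])
        train += [(x, label) for x in items[:n_tr]]
        val += [(x, label) for x in items[n_tr:n_tr + n_va]]
        test += [(x, label) for x in items[n_tr + n_va:]]
    return train, val, test

data = {"open_stope": range(10), "coal_pile": range(10), "water": range(10)}
tr, va, te = stratified_split(data)
print(len(tr), len(va), len(te))  # 18 6 6
```

Splitting per class keeps rare mining-area classes (e.g. coal piles) represented in all three sets.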
C2. Iteratively train the multi-branch convolutional neural network model on the training set, checking model accuracy on the validation set after each training iteration. Training uses the cross-entropy loss

L = -(1/S) · Σ_{i=1}^{S} y_i · log(ŷ_i)

where y denotes the true class of the image, ŷ the class predicted by the model, S the number of image objects, and L the model loss value;
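A plain NumPy version of the cross-entropy loss averaged over the S image objects (assuming the model outputs per-class probabilities, which the text does not spell out):

```python
import numpy as np

def cross_entropy(y_true, y_prob):
    # y_true: (S,) integer class labels; y_prob: (S, C) predicted class probabilities
    S = y_true.shape[0]
    eps = 1e-12  # guard against log(0)
    return -np.log(y_prob[np.arange(S), y_true] + eps).mean()

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
labels = np.array([0, 1])
print(round(cross_entropy(labels, probs), 4))  # 0.2899
```

The loss only rewards probability mass on the true class, so gradient descent (step C3) pushes each image object's predicted distribution toward its label.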
C3. Set the number of training iterations for the multi-branch convolutional neural network model, reduce the model loss value with a gradient descent algorithm, and optimize and update the model parameters;
and C4, inputting the fusion depth features of the image objects into a random forest classifier, training the random forest classifier by using a training set, and classifying the land types of the mining area by using the random forest classifier.
Step A4 of the mining-area fine land classification method further comprises: creating the minimum bounding rectangle of each segmented object in the segmentation result to generate a high-resolution image block, and building the high-resolution image dataset from these image blocks as image objects.
In step B1, the Conv Block and Identity Block modules of the Dilated-ResNet50 each comprise a main path and a residual edge. The main path of the Conv Block performs convolution and batch normalization with ReLU activation: the feature map is first reduced in dimension with a 1×1 convolution kernel, then convolved with a 3×3 dilated convolution kernel with dilation rate = 2. The residual edge of the Conv Block comprises a 1×1 convolution and batch normalization.
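The effect of the 3×3 dilated convolution with dilation rate 2 — an effective 5×5 receptive field from a 3×3 kernel — can be checked with a small NumPy sketch (single channel, no padding, stride 1; purely illustrative):

```python
import numpy as np

def dilated_conv2d(x, kernel, rate):
    # 'valid' 2-D convolution with a dilated kernel: taps are spaced `rate` apart
    k = kernel.shape[0]
    eff = rate * (k - 1) + 1          # effective receptive field: 5 for k=3, rate=2
    h, w = x.shape
    out = np.zeros((h - eff + 1, w - eff + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + eff:rate, j:j + eff:rate]
            out[i, j] = (patch * kernel).sum()
    return out

x = np.arange(49, dtype=float).reshape(7, 7)
y = dilated_conv2d(x, np.ones((3, 3)), rate=2)
print(y.shape)  # (3, 3): a 3x3 kernel covers a 5x5 window, so a 7x7 input yields 3x3
```

The enlarged receptive field comes at no extra parameter cost, which is why the description credits dilation with mitigating misclassification of sparse mining-area vegetation.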
In step B2, the multi-source feature depth-fusion module of the multi-branch convolutional neural network model further comprises a feature-importance weight calculation module, which obtains the feature weight of each dimension as follows: rearrange the deep semantic features and depth features obtained in step B1; taking the three feature column vectors as units, average each of them with global average pooling; then pass the result through two fully connected layers to obtain three weight values expressing the importance of the multi-source data features.
Multiply each weight value by the corresponding feature column vector of the deep semantic features and depth features extracted by the multi-branch convolutional neural network model, and fuse the weighted column vectors with a concat operation.
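The pool → two-FC-layers → weight → concat procedure above resembles a squeeze-and-excitation-style computation; a minimal NumPy sketch (all weights and feature sizes are invented for illustration — the patent does not give layer sizes):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def weighted_fusion(branch_feats, w1, w2):
    # branch_feats: three depth-feature vectors, one per data source
    pooled = np.array([f.mean() for f in branch_feats])  # global average pooling per branch
    hidden = np.maximum(w1 @ pooled, 0)                  # first FC layer + ReLU
    weights = softmax(w2 @ hidden)                       # second FC layer -> 3 importance weights
    # Scale each branch by its weight, then concatenate (concat fusion)
    fused = np.concatenate([w * f for w, f in zip(weights, branch_feats)])
    return fused, weights

rng = np.random.default_rng(1)
spec, tex, deep = rng.random(16), rng.random(16), rng.random(64)
w1 = rng.standard_normal((8, 3))
w2 = rng.standard_normal((3, 8))
fused, weights = weighted_fusion([spec, tex, deep], w1, w2)
print(fused.shape, round(weights.sum(), 6))  # (96,) 1.0
```

Because the weights are computed from the inputs themselves, the fusion adapts per image object: a source whose features carry little signal for that object is down-weighted before concatenation.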
The mining-area fine land classification method further comprises, in step B2:
the multi-source feature depth-fusion module also contains a feature redundancy elimination module, which removes irrelevant or redundant features from the fused depth features.
Preferably, the invention also includes step C5:
C5. Evaluate the accuracy of the multi-branch convolutional neural network model and the random forest classifier on the test set.
Preferably, in step A3, the spectral features comprise the normalized difference vegetation index, enhanced vegetation index, ratio vegetation index, difference vegetation index, green normalized difference vegetation index, atmospherically resistant vegetation index, modified soil-adjusted vegetation index, red-edge normalized difference vegetation index, normalized difference water index, multi-band water index, normalized difference built-up index and normalized burn ratio.
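Several of the listed indices follow standard band formulas and can be sketched as below (reflectance values are illustrative; the multi-band water index and a few others are omitted because their exact formulations vary):

```python
def spectral_indices(blue, green, red, red_edge, nir, swir):
    # Standard band-ratio formulas for a subset of the listed spectral features
    eps = 1e-9  # avoid division by zero
    return {
        "NDVI": (nir - red) / (nir + red + eps),          # normalized difference vegetation index
        "GNDVI": (nir - green) / (nir + green + eps),     # green NDVI
        "RVI": nir / (red + eps),                         # ratio vegetation index
        "DVI": nir - red,                                 # difference vegetation index
        "NDRE": (nir - red_edge) / (nir + red_edge + eps),  # red-edge NDVI
        "NDWI": (green - nir) / (green + nir + eps),      # normalized difference water index
        "NDBI": (swir - nir) / (swir + nir + eps),        # normalized difference built-up index
        "EVI": 2.5 * (nir - red) / (nir + 6 * red - 7.5 * blue + 1 + eps),
    }

px = spectral_indices(blue=0.05, green=0.08, red=0.06, red_edge=0.20, nir=0.40, swir=0.15)
print(round(px["NDVI"], 3))  # 0.739 — typical of healthy vegetation
```

Computed per image object from the 10 m Sentinel-2 band set, such indices separate vegetation, water and built-up mining infrastructure far better than raw bands alone.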
Preferably, in step A3, the texture features comprise, at each angle, the mean, variance, homogeneity, contrast, dissimilarity, entropy, angular second moment and correlation.
Compared with the prior art, the invention has the following advantages and beneficial effects:
(1) The method extracts depth features from the multi-source data with a multi-branch convolutional neural network, deeply fuses them with a multi-source depth-feature fusion module, and finally achieves fine classification of mining-area land use with a random forest classifier operating on the fused multi-source features. It realizes fine land classification of mining areas and provides basic data support for mining-area land monitoring and management and for ecological environment protection.
(2) The method builds, from the multispectral, SAR and high-resolution images, a high-resolution image dataset comprising spectral and texture feature column vectors and high-resolution image blocks. This avoids the geometric, spectral and spatial-resolution limitations of imagery from any single sensor and supplies enough spectral, texture and detail information to judge land type more accurately, improving the precision of the classification results.
(3) The high-resolution image dataset is fed into the multi-branch convolutional neural network for self-learning and extraction of depth features. The 1D-Net feature extraction networks process the feature column vectors into deep semantic features and can run in parallel, improving training efficiency; the Dilated-ResNet50 processes the high-resolution image blocks into deep semantic features, and the introduction of dilated convolution enlarges the receptive field, mitigating the misclassification caused by the sparse vegetation of mining areas. The multi-branch network extracts depth features tailored to the heterogeneous characteristics of each remote sensing data source, making fusion classification of multi-source data possible.
(4) The multi-source feature depth-fusion module fuses the extracted depth features. It comprises a feature-importance weight calculation module and a feature redundancy elimination module: the former computes the feature weight of each dimension and weights the multi-source deep semantic features by multiplication before fusion; the latter removes redundancy among the multi-source features, reducing the feature dimensionality.
(5) The fused depth features are classified with a random forest, which uses a voting mechanism over multiple decision trees. Its strong resistance to interference and overfitting, balanced errors and ease of implementation favour accurate classification of mining-area land-use types from the depth features.
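The voting mechanism in (5) reduces to taking the mode of the individual trees' predictions; a minimal sketch with hypothetical class votes:

```python
from collections import Counter

def majority_vote(tree_predictions):
    # Each decision tree votes for a land-use class; the forest outputs the mode
    return Counter(tree_predictions).most_common(1)[0][0]

# Hypothetical votes from five trees for one image object's fused depth feature
votes = ["grassland", "shrub", "grassland", "grassland", "tree"]
print(majority_vote(votes))  # grassland
```

Because each tree trains on a bootstrap sample and a random feature subset, individual trees err differently and the vote averages those errors out, which is the source of the anti-overfitting behaviour claimed above.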
(6) The method can replace traditional mining-area land-use classification approaches such as visual interpretation and classical machine learning, balancing classification efficiency and accuracy. It is widely applicable to mining-area land-use classification and has important significance for land monitoring and management and for ecological environment protection.
Drawings
FIG. 1 is a schematic flow chart of the method for classifying fine land in mining area according to the present invention;
FIG. 2 is a schematic diagram of the schematic structure of a multi-branch convolutional neural network model and a random forest classifier in the embodiment;
FIG. 3 is a schematic diagram of the schematic structure of a 1D-Net feature extraction network of the multi-branch convolutional neural network model in the embodiment;
FIG. 4 is a schematic structural diagram of a Conv Block and an Identity Block of the multi-branch convolutional neural network model according to the embodiment;
FIG. 5 is a schematic diagram of a hole convolved-ResNet 50 of the multi-branch convolutional neural network model according to an embodiment;
FIG. 6 is a schematic structural diagram of a feature importance weight calculation module in the embodiment;
FIG. 7 is a schematic structural diagram of the feature redundancy elimination module according to an embodiment.
Detailed Description
The present invention will be described in further detail with reference to the following examples:
examples
As shown in figs. 1 to 7, a mining area fine land classification method based on multi-source data fusion includes the following steps:
A. Acquiring multi-source data, fusing the multi-source data to obtain high-resolution image data, segmenting the high-resolution image data to obtain image objects, calculating the multispectral spectral features and SAR texture features of each image object, and constructing a high-resolution image dataset from the image objects. For the multispectral remote sensing image of this embodiment, the Sentinel-2 L1C-level multispectral remote sensing image with the least cloud cover or/and a Gaofen-1 (GF-1) remote sensing image containing the four bands red, green, blue and near-infrared may be selected (the technical principle is introduced in this embodiment with the Sentinel-2 L1C-level image as the example). For the SAR image of this embodiment, a dual-polarized (VV and VH modes) Interferometric Wide swath (IW) mode ground-range image of Sentinel-1 from an adjacent date may be selected.
In some embodiments, step a comprises the following method:
A1. From the multispectral remote sensing images in the multi-source data, selecting the Sentinel-2 L1C-level multispectral remote sensing image or/and Gaofen-1 image with the least cloud cover; from the SAR image data in the multi-source data, selecting a Sentinel-1 Interferometric Wide swath (IW) mode dual-polarized ground-range image (VV and VH polarization modes) of the same or an adjacent date;
A2. Performing atmospheric correction on the multispectral remote sensing image in the high-resolution image data (in this embodiment, the Sentinel-2 L1C-level image) with SNAP software (this embodiment specifically uses the Sen2Cor plug-in of SNAP), discarding the 60 m resolution band data, keeping the 10 m and 20 m band data, and resampling bands 5, 6, 7, 8b, 11 and 12 with the Sen2Cor plug-in, obtaining a set of 10 multispectral bands at 10 m spatial resolution covering visible light, red edge, near-infrared and shortwave infrared. Preprocessing the SAR image data in the high-resolution image data (in this embodiment, the Sentinel-1 IW-mode dual-polarized ground-range image) with SNAP software, including orbit correction, radiometric calibration, thermal noise removal, terrain correction, filtering, reprojection and clipping, and resampling it to a spatial resolution of 10 m before geometric correction. In this embodiment, the preprocessed Sentinel-1 SAR image and Sentinel-2 multispectral image are finally geometrically corrected with the high-resolution image as reference, with a correction error of less than one pixel.
A3. Extracting 10 spectral bands (band 2, band 3, band 4, band 5, band 6, band 7, band 8a, band 8b, band 11 and band 12) and 12 spectral features from the multispectral remote sensing image in the high-resolution image data (in this embodiment, the Sentinel-2 multispectral image). The 12 spectral features are: Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), Ratio Vegetation Index (RVI), Difference Vegetation Index (DVI), Green Normalized Difference Vegetation Index (GNDVI), Atmospherically Resistant Vegetation Index (ARVI), Modified Soil-Adjusted Vegetation Index (MSAVI), Red-Edge Normalized Difference Vegetation Index (RENDVI), Normalized Difference Water Index (NDWI), Multi-Band Water Index (MBWI), Normalized Difference Built-up Index (NDBI) and Normalized Burn Index (NBI). Calculating the two backscattering coefficients (VH and VV) from the SAR image in the high-resolution image data (in this embodiment, the Sentinel-1 SAR image), selecting a 5 × 5 filter window, and computing 32 texture features of the SAR image with the Gray-Level Co-occurrence Matrix (GLCM) method at the four angles 0°, 45°, 90° and 135°. The texture features are the Mean, Variance, Homogeneity, Contrast, Dissimilarity, Entropy, Angular Second Moment and Correlation at each angle.
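As a concrete sketch of the spectral-feature computation, three of the 12 indices can be written as below. The formulas are the standard published forms, using the coefficients given with Table 1 (G = 2.5, C1 = 6, C2 = 7.5, L = 1); the exact expressions the patent uses are in Table 1, whose image is not reproduced here, so this is an illustration rather than the authoritative implementation.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    """Enhanced Vegetation Index, with the gain and aerosol coefficients
    quoted in the text (G = 2.5, C1 = 6, C2 = 7.5, L = 1)."""
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

def ndwi(green, nir):
    """Normalized Difference Water Index (McFeeters form, assumed)."""
    return (green - nir) / (green + nir)
```

In the method, these functions would be evaluated on the per-band surface reflectance of each segmented object (e.g. Sentinel-2 bands 8a, 4, 2 and 3 for NIR, red, blue and green).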
The statistics of spectral band and texture features of the invention are as follows:
TABLE 1 statistical table of spectral band and texture characteristics
In the statistical table of spectral band and texture characteristics, ρ_nir, ρ_red, ρ_blue, ρ_green, ρ_rededge and ρ_SWIR denote the surface reflectance of the near-infrared, red, blue, green, red-edge and shortwave-infrared bands, corresponding to bands 8a, 4, 2, 3, 5 and 11 of Sentinel-2, respectively. L is the adjustment factor for the soil background, set to 1; C1 and C2 are aerosol coefficients, set to 6 and 7.5, respectively; G is a gain coefficient, set to 2.5; γ is the optical-path effect factor, set to 1.0. σ⁰(dB) is the decibel conversion of the backscattering coefficient σ⁰. p(m, n) denotes the probability that, given the spatial distance d and direction θ, gray level m occurs as the starting point and gray level n as the end point (m = 1, 2, …, G; n = 1, 2, …, G), where G is the maximum gray level in the image region under consideration. μ_x and σ_x are the mean and variance of {p_x(i); i = 1, 2, …, G}; μ_y and σ_y are the mean and variance of {p_y(i); i = 1, 2, …, G}.
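To make the GLCM quantities p(m, n) concrete, the following pure-Python sketch builds a normalized symmetric co-occurrence matrix for one offset and evaluates four of the eight texture statistics using the standard Haralick definitions (the symmetric accumulation and these exact formulas are common conventions, assumed here rather than stated by the patent).

```python
import math

def glcm(img, dx, dy, levels):
    """Normalized symmetric grey-level co-occurrence matrix p(m, n)
    for pixel offset (dx, dy), e.g. (1, 0) for the 0-degree direction."""
    h, w = len(img), len(img[0])
    p = [[0.0] * levels for _ in range(levels)]
    total = 0
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                a, b = img[y][x], img[ny][nx]
                p[a][b] += 1   # count the pair (a, b) ...
                p[b][a] += 1   # ... and its symmetric counterpart
                total += 2
    return [[v / total for v in row] for row in p]

def texture_features(p):
    """Four of the eight GLCM statistics used by the method."""
    g = len(p)
    pairs = [(m, n) for m in range(g) for n in range(g)]
    return {
        "contrast": sum((m - n) ** 2 * p[m][n] for m, n in pairs),
        "homogeneity": sum(p[m][n] / (1 + (m - n) ** 2) for m, n in pairs),
        "entropy": -sum(p[m][n] * math.log(p[m][n])
                        for m, n in pairs if p[m][n] > 0),
        "asm": sum(p[m][n] ** 2 for m, n in pairs),  # angular second moment
    }
```

A sanity check on a perfectly uniform window: contrast and entropy are 0 while homogeneity and the angular second moment are 1, matching the intuition that these statistics measure local grey-level variation.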
A4. Automatically obtaining the optimal segmentation scale with the ESP (Estimation of Scale Parameter) tool, and performing object-oriented segmentation on the high-resolution image data with the Multi-resolution Segmentation algorithm to obtain the segmentation result. The parameters of the multi-scale segmentation algorithm are set as follows: scale parameter 50, shape factor 0.3, color factor 0.7, compactness factor 0.5, and smoothness factor 0.5.
In some embodiments, step A4 further comprises the following method: for the segmentation result, creating the minimum bounding rectangle of each segmented object to generate high-resolution image blocks, and constructing the high-resolution image dataset with the high-resolution image blocks as image objects.
In the high-resolution image dataset constructed in this embodiment, 10 bands and 12 spectral features (22 variables in total) are extracted from the multispectral remote sensing image, and 2 backscattering coefficients and 32 texture features (34 variables in total) are extracted from the SAR image. The high-resolution images in the dataset are segmented at multiple scales to obtain image objects; for each segmented object, the mean of each variable extracted from the multispectral and SAR images within the object is computed, yielding one feature column vector of 22 means and one of 34 means. The minimum bounding rectangle of each segmented object is then created from the multi-scale segmentation result to generate high-resolution image blocks (which facilitates extraction of depth features), thereby constructing the high-resolution image dataset.
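The two per-object operations just described, averaging each variable over the object's pixels and taking the object's minimum bounding rectangle, can be sketched as follows (coordinate conventions are an assumption for illustration):

```python
def object_mean_features(pixel_features):
    """Mean of each variable over an object's pixels -> feature column vector.
    pixel_features: one equal-length list of variable values per pixel."""
    n = len(pixel_features)
    dims = len(pixel_features[0])
    return [sum(px[d] for px in pixel_features) / n for d in range(dims)]

def min_bounding_rect(pixel_coords):
    """Minimum bounding rectangle (row_min, col_min, row_max, col_max) of a
    segmented object, used to crop a high-resolution image block."""
    rows = [r for r, _ in pixel_coords]
    cols = [c for _, c in pixel_coords]
    return min(rows), min(cols), max(rows), max(cols)
```

Applied to every segmented object, the first function yields the 22- and 34-element column vectors, and the second yields the crop window for the image block.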
B. Building a multi-branch convolutional neural network model, and performing depth feature extraction and depth feature fusion processing on the high-resolution image data set by using the multi-branch convolutional neural network model;
B1. The multi-branch convolutional neural network model comprises two 1D-Net feature extraction networks, which process the two feature column vectors respectively (the vectors extracted from the Sentinel-1 and Sentinel-2 images are each processed by one 1D-Net). A 1D-Net feature extraction network consists, in order, of a first convolutional layer, a first max-pooling layer, a second convolutional layer, a second max-pooling layer and a fully-connected layer (as shown in FIG. 3, 5 layers in total: 2 convolutional layers, 2 max-pooling layers and one fully-connected layer). The first convolutional layer contains four convolution kernels of size 3 with stride 1 and a ReLU activation function; it generates as many feature maps as it has kernels, and the first max-pooling layer downsamples these feature maps. The second convolutional layer contains eight convolution kernels of size 3 with stride 1 and a ReLU activation function; it likewise generates as many feature maps as it has kernels, and the second max-pooling layer downsamples them. The fully-connected layer re-extracts the features and outputs the depth semantic features obtained from the feature column vector (after some neurons are randomly discarded through a Dropout layer, the fully-connected layer re-extracts the features and outputs the depth semantic features).
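A minimal PyTorch sketch of one 1D-Net branch follows. The kernel counts, kernel size and stride come from the text; the padding, pooling size, Dropout rate and output width are assumptions, since the patent does not specify them.

```python
import torch
import torch.nn as nn

class OneDNet(nn.Module):
    """Sketch of the 1D-Net branch: conv(4 kernels, size 3, stride 1) ->
    maxpool -> conv(8 kernels, size 3, stride 1) -> maxpool -> dropout +
    fully-connected layer.  Padding=1, pool size 2, dropout 0.5 and
    out_dim=64 are illustrative assumptions."""

    def __init__(self, in_len, out_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 4, kernel_size=3, stride=1, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(4, 8, kernel_size=3, stride=1, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),                        # random neuron discarding
            nn.Linear(8 * (in_len // 4), out_dim),  # re-extract features
            nn.ReLU(),
        )

    def forward(self, x):  # x: (batch, 1, in_len)
        return self.head(self.features(x))

# One branch per source: 22 Sentinel-2 variables, 34 Sentinel-1 variables.
branch_s2 = OneDNet(22)
branch_s1 = OneDNet(34)
```

Each branch maps a batch of per-object feature column vectors to fixed-length depth semantic features.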
The multi-branch convolutional neural network model further comprises a dilated-convolution ResNet50 (Dilated-ResNet50) for processing the high-resolution image blocks. Because vegetation in mining areas is sparse, the receptive field must be enlarged without losing resolution so as to obtain multi-scale information, and the Dilated-ResNet50 therefore introduces dilated convolution. As shown in FIG. 4, the Dilated-ResNet50 comprises a Conv Block module and an Identity Block module, and performs depth feature extraction on the image objects;
In some embodiments, the Conv Block module and the Identity Block module of the Dilated-ResNet50 each comprise a main part and a residual edge. The main part of the Conv Block module includes convolution and normalization operations, and the module uses the ReLU activation function; the convolution operations include dimensionality reduction of the feature map with a 1 × 1 convolution kernel and convolution with a 3 × 3 dilated convolution kernel with dilation rate = 2. The residual-edge part of the Conv Block module includes a 1 × 1 convolution and a normalization operation.
Preferably, the Conv Block module and the Identity Block module can each be divided into a main part and a residual edge. In the main part, both modules perform two rounds of convolution, batch normalization (BN) and ReLU activation, followed by one more convolution and batch normalization (BN). The first convolution reduces the dimensionality of the feature map with a 1 × 1 convolution kernel, and the second convolution uses a 3 × 3 dilated convolution kernel with dilation rate = 2 (as shown in FIG. 5). In the residual-edge part, the Conv Block module has one 1 × 1 convolution and normalization; because this residual edge contains a convolution, the Conv Block module can change the width, height and number of channels of the output feature layer. The Identity Block module connects its input directly to the output; because its residual edge contains no convolution, the shapes of its input and output feature layers are identical, and it can be used to deepen the network. Depth feature extraction on the high-resolution image blocks is finally completed by the Dilated-ResNet50.
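The benefit of the dilation rate 2 claimed above (a larger receptive field at the same resolution and parameter count) can be checked numerically with the standard formula for the effective extent of a dilated kernel, k_eff = k + (k − 1)(d − 1):

```python
def effective_kernel_size(k, d):
    """Effective spatial extent of a k x k convolution kernel
    with dilation rate d: k + (k - 1) * (d - 1)."""
    return k + (k - 1) * (d - 1)

# A 3x3 kernel with dilation rate 2 samples a 5x5 neighbourhood with the
# same 9 weights, enlarging the receptive field without downsampling the
# feature map and at no extra parameter cost.
```

This is why the 3 × 3 kernels with dilation rate 2 in the Conv Block and Identity Block enlarge the receptive field without losing resolution.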
B2, the multi-branch convolution neural network model further comprises a multi-source feature depth fusion module, the multi-source feature depth fusion module performs feature fusion processing on the depth semantic features and the depth features obtained in the step B1, and fusion depth features are obtained after fusion;
In some embodiments, in step B2, the multi-source feature depth fusion module of the multi-branch convolutional neural network model further comprises a feature importance weight calculation module, through which the feature weight of each dimension is obtained as follows: rearrange the depth semantic features and the depth features obtained in step B1, average each of the three rearranged feature column vectors with global average pooling, and then compute, through two fully-connected layers, the three weight values of the importance of the multi-source data features;
then multiply the weight values by the corresponding feature column vectors of the depth semantic features and depth features extracted by the multi-branch convolutional neural network model, and fuse the weighted feature column vectors with the concat method, which increases the feature dimensionality of the image.
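The weighting-and-concat fusion just described can be sketched in NumPy as below. The sizes of the two fully-connected layers and the softmax normalization of the three weights are assumptions; the patent only specifies global average pooling, two fully-connected layers, weighting by products, and concat fusion.

```python
import numpy as np

def branch_weights(branch_feats, w1, b1, w2, b2):
    """Feature-importance weights: global average pooling of each branch,
    then two fully-connected layers.  The hidden width and the softmax
    normalization are illustrative assumptions."""
    pooled = np.array([f.mean() for f in branch_feats])  # one scalar per branch
    h = np.maximum(0.0, pooled @ w1 + b1)                # FC layer 1 + ReLU
    z = h @ w2 + b2                                      # FC layer 2
    e = np.exp(z - z.max())
    return e / e.sum()                                   # three weight values

def weighted_concat(branch_feats, weights):
    """Weight each branch's feature vector, then fuse with concat."""
    return np.concatenate([w * f for w, f in zip(weights, branch_feats)])

rng = np.random.default_rng(0)
# Two 1D-Net outputs plus the Dilated-ResNet50 output (sizes illustrative).
feats = [rng.normal(size=64), rng.normal(size=64), rng.normal(size=256)]
w = branch_weights(feats, rng.normal(size=(3, 8)), np.zeros(8),
                   rng.normal(size=(8, 3)), np.zeros(3))
fused = weighted_concat(feats, w)
```

The fused vector's length is the sum of the branch lengths, which is the dimensionality increase the text mentions before redundancy elimination.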
In order to save computing resources and running time, and because the spectral, texture and other features extracted from multi-source image data inevitably overlap, this embodiment further comprises the following method in step B2:
the multi-source feature depth fusion module further comprises a feature redundancy elimination module, which eliminates irrelevant or redundant features from the fused depth features (the result is then expanded to obtain the fused multi-source data features required for classification).
C. Making a sample dataset of each land-use type of the mining area, the sample dataset corresponding to the high-resolution image dataset, and performing model training on the multi-branch convolutional neural network model with the sample dataset;
the model training method of the multi-branch convolutional neural network model in the step C is as follows:
C1. The sample data in the sample dataset are collected mining-area remote sensing image data corresponding to the high-resolution image data of the high-resolution image dataset. The sample data are classified and labeled; the labeled classification categories comprise open stopes, refuse dumps, coal piles, mine buildings, trees, shrubs, high/medium/low-coverage grasslands, mixed grass, shrubs and trees, water bodies, roads and bare land. The sample data of each category are then randomly divided into a training set, a validation set and a test set in a 6:2:2 ratio;
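The per-category 6:2:2 random split can be sketched with the standard library alone (the fixed seed is an illustrative choice for reproducibility, not from the patent):

```python
import random

def split_622(samples_by_class, seed=0):
    """Randomly split each category's samples into train/val/test at 6:2:2,
    as described in step C1.  samples_by_class: {category: [samples]}."""
    rng = random.Random(seed)
    train, val, test = [], [], []
    for cls, samples in samples_by_class.items():
        idx = list(range(len(samples)))
        rng.shuffle(idx)
        n_train = int(len(samples) * 0.6)
        n_val = int(len(samples) * 0.2)
        train += [(cls, samples[i]) for i in idx[:n_train]]
        val += [(cls, samples[i]) for i in idx[n_train:n_train + n_val]]
        test += [(cls, samples[i]) for i in idx[n_train + n_val:]]
    return train, val, test
```

Splitting within each category (rather than over the pooled samples) keeps the class proportions of the three subsets close to those of the full dataset.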
C2. Carrying out iterative training on the multi-branch convolutional neural network model with the training set, and checking the model accuracy after each iteration with the validation set; the training process adopts a cross-entropy loss function (Adam is selected as the optimizer in this embodiment), whose expression is

L = -\frac{1}{S}\sum_{i=1}^{S} y_i \log \hat{y}_i

where y represents the true category of the image, \hat{y} represents the category predicted by the model, S is the number of image objects, and L is the model loss value;
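A minimal sketch of the cross-entropy loss averaged over S image objects, with integer true class labels and per-object predicted class probabilities (this concrete input format is an assumption for illustration):

```python
import math

def cross_entropy_loss(true_classes, pred_probs):
    """Average cross-entropy over S image objects.
    true_classes: class index per object; pred_probs: per-object list of
    predicted class probabilities (one probability per class)."""
    S = len(true_classes)
    return -sum(math.log(p[c]) for c, p in zip(true_classes, pred_probs)) / S
```

A perfect prediction gives a loss of 0, and a maximally uncertain two-class prediction gives ln 2 per object.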
C3. Setting the number of training iterations of the multi-branch convolutional neural network model (in this embodiment, epoch = 100), reducing the model loss value with a gradient descent algorithm, and simultaneously optimizing and updating the model parameters (the model parameters are the weight values of the connections between layers of the multi-branch convolutional neural network model);
and C4, inputting the fusion depth features of the image objects into a random forest classifier, training the random forest classifier by using a training set, and classifying the land types of the mining area by using the random forest classifier.
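Step C4 can be sketched with scikit-learn's random forest, which implements exactly the multi-tree voting mechanism described earlier. The fused depth features and labels below are random stand-ins, and n_estimators is an illustrative choice; the patent's actual settings are in Table 2, whose contents are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 96))     # stand-in fused depth feature per object
y = rng.integers(0, 12, size=300)  # stand-in labels for 12 land-use classes

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X, y)            # each tree is trained on a bootstrap sample
pred = rf.predict(X)    # majority vote over the 200 decision trees
```

In the actual pipeline, X would hold the fused depth features produced by the multi-branch network for each image object, and y the labeled mining-area land-use categories.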
The parameter configuration of the model of the embodiment is shown in table 2, and the server configuration is shown in table 3.
TABLE 2 parameter settings for a multi-branch convolutional neural network model
TABLE 3 Server configuration
C5. Performing accuracy evaluation of the multi-branch convolutional neural network model and the random forest classifier with the test set. The evaluation indices exemplified in this embodiment include: the confusion matrix; producer accuracy (the proportion of correctly classified samples of a category to the actual number of samples of that category); user accuracy (the proportion of correctly classified samples of a category to the number of samples predicted as that category); overall accuracy (the proportion of all correctly classified samples to the total number of samples); and the Kappa coefficient.
In this embodiment, the model accuracy is checked with four classification evaluation indices: producer accuracy, user accuracy, overall accuracy and the Kappa coefficient; after multiple rounds of iterative training, the model with the highest accuracy is selected for fine classification of mining-area land use.
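The four accuracy indices can all be derived from the confusion matrix; a minimal sketch follows, assuming the common convention that rows are reference (true) classes and columns are predicted classes:

```python
def accuracy_metrics(cm):
    """Producer/user/overall accuracy and Kappa coefficient from a
    confusion matrix cm whose rows are reference (true) classes and
    whose columns are predicted classes."""
    k = len(cm)
    n = sum(sum(row) for row in cm)
    row_tot = [sum(cm[i]) for i in range(k)]                       # actual
    col_tot = [sum(cm[i][j] for i in range(k)) for j in range(k)]  # predicted
    diag = [cm[i][i] for i in range(k)]
    producer = [diag[i] / row_tot[i] for i in range(k)]  # per-class recall
    user = [diag[j] / col_tot[j] for j in range(k)]      # per-class precision
    overall = sum(diag) / n
    p_e = sum(row_tot[i] * col_tot[i] for i in range(k)) / n ** 2
    kappa = (overall - p_e) / (1 - p_e)   # chance-corrected agreement
    return producer, user, overall, kappa
```

For example, the 2-class matrix [[40, 10], [5, 45]] gives producer accuracies 0.80 and 0.90, overall accuracy 0.85 and Kappa 0.70.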
D. Inputting the high-resolution image dataset into the trained multi-branch convolutional neural network model to obtain the fused depth features of each image object, and then inputting the fused depth features into the random forest classifier to classify the mining-area land-use types.
The above description is intended to be illustrative of the preferred embodiment of the present invention and should not be taken as limiting the invention, but rather, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention.

Claims (10)

1. A mining area fine land use classification method of multi-source data fusion is characterized in that: the method comprises the following steps:
A. acquiring multi-source data, fusing the multi-source data to obtain high-resolution image data, segmenting the high-resolution image data to obtain image objects, calculating the multispectral image spectral features and the SAR image texture features of each image object, and constructing a high-resolution image data set from the image objects;
B. building a multi-branch convolutional neural network model, and performing depth feature extraction and depth feature fusion processing on the high-resolution image data set by using the multi-branch convolutional neural network model;
b1, the multi-branch convolutional neural network model comprises two 1D-Net feature extraction networks, the two 1D-Net feature extraction networks respectively extract two feature column vectors of spectral features and texture features correspondingly, the 1D-Net feature extraction networks sequentially comprise a first convolution layer, a first maximum pooling layer, a second convolution layer, a second maximum pooling layer and a full-connection layer, the first convolution layer comprises four convolution kernels with the size of 3 and the step length of 1, an activation function is ReLU, feature maps with the number being the same as that of the convolution kernels can be generated through the first convolution layer, and the first maximum pooling layer carries out down-sampling on the feature maps of the first convolution layer; the second convolutional layer contains eight convolutional kernels with the size of 3 and the step length of 1, the activation function is ReLU, the feature maps with the number being the same as that of the convolutional kernels can be generated through the second convolutional layer, and the second maximum pooling layer performs downsampling on the feature maps of the second convolutional layer; the full-connection layer re-extracts the features and outputs the depth semantic features obtained by the feature column vectors;
the multi-branch convolution neural network model further comprises a dilated-convolution ResNet50, the dilated-convolution ResNet50 comprises a Conv Block module and an Identity Block module, and the dilated-convolution ResNet50 carries out depth feature extraction on the image object;
b2, the multi-branch convolution neural network model further comprises a multi-source feature depth fusion module, the multi-source feature depth fusion module performs feature fusion processing on the depth semantic features and the depth features obtained in the step B1, and fusion depth features are obtained after fusion;
C. making a sample data set of each land type of the mining area, wherein the sample data set corresponds to the high-resolution image data set, and performing model training on the multi-branch convolution neural network model through the sample data set;
D. and inputting the high-resolution image data set into the trained multi-branch convolutional neural network model to obtain the fusion depth characteristic of each image object, and then inputting the fusion depth characteristic into a random forest classifier to classify the types of the land used in the mining area.
2. The method of claim 1 for fine-use classification of a multi-source data-fused mine area, wherein: the step A comprises the following steps:
a1, selecting a Sentinel-2 L1C-level multispectral remote sensing image or/and a Gaofen-1 image with the minimum cloud content from multispectral remote sensing images in multi-source data; selecting an interferometric wide swath mode dual-polarized ground distance image of a same or adjacent date Sentinel-1 from SAR image data in the multi-source data;
a2, performing atmospheric correction on a multispectral remote sensing image in high-resolution image data through SNAP software, removing band data with the resolution of 60 meters, reserving band data with the resolution of 10 meters and 20 meters, and resampling 5, 6, 7, 8b, 11 and 12 bands to obtain a multispectral band set with the spatial resolution of 10 meters including visible light, red edge, near infrared and short wave infrared; preprocessing SAR image data in high-resolution image data by SNAP software, such as orbit correction, radiometric calibration, infrared thermal noise removal, terrain correction, filtering, re-projection and cutting, resampling the preprocessed SAR image data to enable the spatial resolution to reach 10 m, and then performing geometric correction;
a3, extracting 10 spectral bands and 12 spectral features from a multispectral remote sensing image in high-resolution image data; calculating to obtain two backscattering coefficients from an SAR image in high-resolution image data, selecting a 5 multiplied by 5 filter window, and calculating and extracting 32 texture features based on four angles of 0 degrees, 45 degrees, 90 degrees and 135 degrees by utilizing a gray level co-occurrence matrix method;
and A4, automatically obtaining the optimal segmentation scale by adopting an ESP (Estimation of Scale Parameter) tool, and performing object-oriented segmentation on the high-resolution image data by utilizing a multi-scale segmentation algorithm to obtain a segmentation result.
3. A method of fine-use classification of a multi-source data-fused mine area according to claim 1 or 2, characterized by: the model training method of the multi-branch convolution neural network model in the step C is as follows:
c1, sample data in the sample data set are collected mining area remote sensing image data, the sample data in the sample data set are classified and labeled, and the labeled classification categories comprise open stopes, refuse dumps, coal piles, mine buildings, trees, shrubs, grasslands with high/medium/low coverage, mixed grass, shrubs and trees, water bodies, roads and bare land; then, according to the following 6:2:2, randomly dividing the sample data of each category into a training set, a verification set and a test set;
c2, carrying out iterative training on the multi-branch convolutional neural network model by using a training set, checking the model precision after each iterative training by using a verification set, and adopting a cross entropy loss function in the training process, wherein the expression of the cross entropy loss function is as follows:

L = -\frac{1}{S}\sum_{i=1}^{S} y_i \log \hat{y}_i

where y represents the true category of the image, \hat{y} represents the model prediction category, S is the number of image objects, and L is the model loss value;
c3, setting iterative training times of the multi-branch convolutional neural network model, reducing a model loss value by using a gradient descent algorithm, and optimizing and updating parameters of the model;
and C4, inputting the fusion depth features of the image objects into a random forest classifier, training the random forest classifier by using a training set, and classifying the land types of the mining area by using the random forest classifier.
4. A method of classifying fine usage of a multisource data fused mine area according to claim 2, wherein: step A4 also includes the following method: creating a minimum bounding rectangle of each segmentation object for the segmentation result to generate a high-resolution image block, and constructing a high-resolution image data set by using the high-resolution image block as an image object.
5. The method of claim 2 for fine-use classification of a multisource data-fused mine area, wherein: in step B1, both the Conv Block module and the Identity Block module of the dilated-convolution ResNet50 include a main part and a residual edge, the main part of the Conv Block module includes convolution processing and normalization processing, the Conv Block module adopts an activation function ReLU, the convolution processing includes performing dimension reduction processing on a feature image by using a convolution kernel with a size of 1 × 1, and performing convolution processing by using a 3 × 3 dilated convolution kernel with dilation rate = 2; the residual edge portion of the Conv Block module includes a 1 × 1 convolution process and a normalization process.
6. The method of claim 2 for fine-use classification of a multisource data-fused mine area, wherein: in step B2, the multi-source feature depth fusion module of the multi-branch convolutional neural network model further includes a feature importance weight calculation module, and the feature weight of each dimension is obtained by the feature importance weight calculation module, and the method includes: rearranging the depth semantic features and the depth features obtained in the step B1, then respectively carrying out averaging on the rearranged depth semantic features and the rearranged depth features by using global average pooling by taking three feature column vectors as a unit, and then calculating through two full-connection layers to obtain three weight values of the importance of the multi-source data features;
and (3) solving the product of the weighted value and the depth semantic feature extracted by the multi-branch convolutional neural network model and the feature column vector of the depth feature, and fusing the weighted feature column vectors by using a concat method.
7. The method of claim 6, wherein the method comprises the following steps: the step B2 further includes the following steps:
the multi-source feature depth fusion module also comprises a feature redundancy elimination module, and the feature redundancy elimination module eliminates irrelevant or redundant features from the fusion depth features.
8. A method of fine-use classification of a multi-source data-fused mine area according to claim 3, wherein: also includes C5;
and C5, performing precision evaluation on the multi-branch convolutional neural network model and the random forest classifier by using the test set.
9. The method of claim 2 for fine-use classification of a multisource data-fused mine area, wherein: in step A3, the spectral features include a normalized difference vegetation index, an enhanced vegetation index, a ratio vegetation index, a difference vegetation index, a green normalized difference vegetation index, an atmospherically resistant vegetation index, a modified soil-adjusted vegetation index, a red-edge normalized difference vegetation index, a normalized difference water index, a multi-band water index, a normalized difference built-up index, and a normalized burn index.
10. The method of claim 2 for fine-use classification of a multisource data-fused mine area, wherein: in step A3, the texture features include the mean, variance, homogeneity, contrast, dissimilarity, entropy, angular second moment, and correlation at each angle.
CN202210769160.7A 2022-06-30 2022-06-30 Mining area fine land classification method based on multi-source data fusion Active CN115170979B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210769160.7A CN115170979B (en) 2022-06-30 2022-06-30 Mining area fine land classification method based on multi-source data fusion

Publications (2)

Publication Number Publication Date
CN115170979A true CN115170979A (en) 2022-10-11
CN115170979B CN115170979B (en) 2023-02-24

Family

ID=83488292

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210769160.7A Active CN115170979B (en) 2022-06-30 2022-06-30 Mining area fine land classification method based on multi-source data fusion

Country Status (1)

Country Link
CN (1) CN115170979B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0820225B1 (en) * 1995-03-31 2003-03-26 B. Eugene Guthery Fast acting and persistent topical antiseptic
CN110378224A * 2019-06-14 2019-10-25 Shenzhen Research Institute, The Hong Kong Polytechnic University Ground feature change detection method, detection system and terminal
CN111028277A * 2019-12-10 2020-04-17 The 54th Research Institute of China Electronics Technology Group Corporation SAR and optical remote sensing image registration method based on a pseudo-Siamese convolutional neural network
CN111191628A * 2020-01-06 2020-05-22 Hohai University Remote sensing image earthquake-damaged building identification method based on decision trees and feature optimization
CN111476170A * 2020-04-09 2020-07-31 Capital Normal University Remote sensing image semantic segmentation method combining deep learning and random forest
CN111767801A * 2020-06-03 2020-10-13 China University of Geosciences (Wuhan) Automatic water body extraction method and system for remote sensing images based on deep learning
CN112101190A * 2020-09-11 2020-12-18 Xidian University Remote sensing image classification method, storage medium and computing device
CN112580654A * 2020-12-25 2021-03-30 Southwest China Institute of Electronic Technology (the 10th Research Institute of China Electronics Technology Group Corporation) Semantic segmentation method for ground objects in remote sensing images
CN113591766A * 2021-08-09 2021-11-02 Research Institute of Forest Resource Information Techniques, Chinese Academy of Forestry UAV-oriented multi-source remote sensing tree species identification method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BAOGUANG SHI et al.: "An End-to-End Trainable Neural Network for Image-based Sequence Recognition and Its Application to Scene Text Recognition", arXiv *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115527123A * 2022-10-21 2022-12-27 Institute of Geographical Sciences, Hebei Academy of Sciences Land cover remote sensing monitoring method based on multi-source feature fusion
CN115527123B * 2022-10-21 2023-05-05 Institute of Geographical Sciences, Hebei Academy of Sciences Land cover remote sensing monitoring method based on multi-source feature fusion
CN116051398A * 2022-11-23 2023-05-02 Guangdong Provincial Institute of Land Resources Surveying and Mapping Construction method and device for a multi-source multi-modal remote sensing data investigation and monitoring feature library
CN116051398B * 2022-11-23 2023-09-22 Guangdong Provincial Institute of Land Resources Surveying and Mapping Construction method and device for a multi-source multi-modal remote sensing data investigation and monitoring feature library
CN115860695A * 2023-02-09 2023-03-28 Guangdong Zhihuan Chuangxin Environmental Technology Co., Ltd. Environmental protection informatization management system based on ecological space
CN115860695B * 2023-02-09 2023-05-09 Guangdong Zhihuan Chuangxin Environmental Technology Co., Ltd. Environmental protection informatization management system based on ecological space
CN116030355B * 2023-03-30 2023-08-11 Wuhan City Polytechnic Ground object classification method and system
CN116258971A * 2023-05-15 2023-06-13 Jiangxi Zhuomufeng Technology Co., Ltd. Multi-source fusion forestry remote sensing image intelligent interpretation method
CN116258971B * 2023-05-15 2023-08-08 Jiangxi Zhuomufeng Technology Co., Ltd. Multi-source fusion forestry remote sensing image intelligent interpretation method
CN118072179A * 2024-04-17 2024-05-24 Chengdu University of Technology Photovoltaic development suitability evaluation method based on multi-source remote sensing technology
CN118072179B * 2024-04-17 2024-06-28 Chengdu University of Technology Photovoltaic development suitability evaluation method based on multi-source remote sensing technology

Also Published As

Publication number Publication date
CN115170979B (en) 2023-02-24

Similar Documents

Publication Publication Date Title
CN115170979B (en) Mining area fine land classification method based on multi-source data fusion
CN108573276B (en) Change detection method based on high-resolution remote sensing image
Filippi et al. Fuzzy learning vector quantization for hyperspectral coastal vegetation classification
CN110929607B (en) Remote sensing identification method and system for urban building construction progress
CN104751478B (en) Object-oriented building change detection method based on multi-feature fusion
CN102646200B (en) Image classifying method and system for self-adaption weight fusion of multiple classifiers
CN107392130A Multispectral image classification method based on adaptive thresholding and convolutional neural networks
CN110287869A High-resolution remote sensing image crop classification method based on deep learning
CN111985543A (en) Construction method, classification method and system of hyperspectral image classification model
CN112131946B (en) Automatic extraction method for vegetation and water information of optical remote sensing image
CN107358203B (en) A kind of High Resolution SAR image classification method based on depth convolution ladder network
CN110309780A (en) High resolution image houseclearing based on BFD-IGA-SVM model quickly supervises identification
CN113312993B (en) Remote sensing data land cover classification method based on PSPNet
CN109146890A (en) The Anomaly target detection method of high spectrum image based on filter
CN112950780A (en) Intelligent network map generation method and system based on remote sensing image
CN115965812B (en) Evaluation method for classification of unmanned aerial vehicle images on wetland vegetation species and land features
CN108898070A Hyperspectral remote sensing Mikania micrantha extraction device and method based on an unmanned aerial vehicle platform
CN109961105A High-resolution satellite image classification method based on multi-task deep learning
CN111738052A (en) Multi-feature fusion hyperspectral remote sensing ground object classification method based on deep learning
Qiu Neuro-fuzzy based analysis of hyperspectral imagery
CN116091940B (en) Crop classification and identification method based on high-resolution satellite remote sensing image
Babu Optimized performance and utilization analysis of real-time multi spectral data/image categorization algorithms for computer vision applications
Zhao et al. Improving object-oriented land use/cover classification from high resolution imagery by spectral similarity-based post-classification
CN111798530A (en) Remote sensing image classification method
CN116310864A (en) Automatic identification method, system, electronic equipment and medium for crop lodging

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant