CN111898662B - Coastal wetland deep learning classification method, device, equipment and storage medium - Google Patents


Info

Publication number
CN111898662B
CN111898662B CN202010701215.1A CN202010701215A
Authority
CN
China
Prior art keywords
data
layer
processed
frequency
hyperspectral image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010701215.1A
Other languages
Chinese (zh)
Other versions
CN111898662A (en)
Inventor
陶然
李伟
赵旭东
张蒙蒙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202010701215.1A priority Critical patent/CN111898662B/en
Publication of CN111898662A publication Critical patent/CN111898662A/en
Application granted granted Critical
Publication of CN111898662B publication Critical patent/CN111898662B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/25Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N2021/1793Remote sensing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Chemical & Material Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Electromagnetism (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a coastal wetland deep learning classification method, device, equipment and storage medium. The method comprises the following steps: correcting and normalizing acquired original hyperspectral image data to obtain hyperspectral image data to be processed, and removing abnormal points from and normalizing acquired original lidar data to obtain lidar data to be processed; constructing three Octave convolution layers for each modality; based on the three Octave convolution layers of each modality, performing component separation, component combination and frequency-component synthesis on the hyperspectral image data to be processed and the lidar data to be processed to obtain feature fusion data; and extracting directional texture information from the feature fusion data and performing joint spatial, texture and spectral classification together with the hyperspectral data to be processed to obtain target joint classification features, thereby determining the target category. The method improves joint ground-object classification performance under different resolutions and different modalities and realizes high-precision collaborative classification.

Description

Coastal wetland deep learning classification method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of multi-sensor remote sensing combined classification, in particular to a coastal wetland deep learning classification method, a device, equipment and a storage medium.
Background
Wetlands lie in the transition zone between land and water and form the ecosystem with the richest biodiversity and the highest productivity in nature. Research on wetlands, and coastal wetlands in particular, is of great significance for protecting the ecological environment and maintaining the healthy development of human production and life. In recent years, coastal wetland ecosystems in China have suffered damage to varying degrees, which raises more urgent requirements for high-precision dynamic monitoring, fine classification and protection of wetlands. Remote sensing technology, with its advantages of economy, high efficiency and wide coverage, is an important means for dynamic monitoring, information extraction and interpretation of coastal wetlands. The combination of diversified spectral and radar imaging technologies with image processing techniques provides convenient, high-quality data for spatial and geographic databases.
High-dimensional data represented by hyperspectral images can simultaneously capture spatial, spectral and radiometric information of the observed object, so that the description of the objective world presents new multi-scale, multi-angle and multi-dimensional characteristics. Lidar data provide elevation information of the surveyed area, which helps describe the same scene better than optical sensors alone. Integrating and processing these different data sources combines different kinds of information and further improves the performance of Earth observation. Fusion of multi-source image data extracts salient features from each source image and then fuses these features into a single image with a suitable fusion method. Many signal processing methods, such as multi-scale decomposition, have been applied to multi-source remote sensing image fusion tasks to extract the salient features of the images: after the salient features are extracted with an image decomposition method, an appropriate fusion strategy is used to obtain the final fused image. The fused hyperspectral image has high spatial resolution and rich spectral information, which creates better conditions for in-depth research; however, the image-spectrum integrated nature of hyperspectral data and the massive data volume make labelling difficult, so it is hard to handle hyperspectral image fusion with conventional methods.
Disclosure of Invention
In view of the above, a method, an apparatus, a device and a storage medium for classifying coastal wetlands in a deep learning manner are provided to solve the problems of high cost and low accuracy of a hyperspectral image fusion technology in classification of coastal wetlands in the related art.
The invention adopts the following technical scheme:
in a first aspect, an embodiment of the present application provides a method for deep learning and classification of coastal wetlands, where the method includes:
correcting and normalizing the acquired original hyperspectral image data to obtain hyperspectral image data to be processed, and removing abnormal points and normalizing the acquired original lidar data to obtain lidar data to be processed;
constructing three layers of Octave convolution layers of each mode according to the spatial resolution and the number of channels of the hyperspectral image data to be processed and the spatial resolution and the number of channels of the lidar data to be processed;
based on the three Octave convolutional layers of each mode, performing component separation, component combination and frequency component synthesis on the hyperspectral image data to be processed and the lidar data to be processed to obtain feature fusion data;
directional texture information in the feature fusion data is extracted, space, texture and spectrum combined classification is carried out by combining with the hyperspectral data to be processed, target combined classification features are obtained, and therefore the target category is determined.
In a second aspect, an embodiment of the present application provides a coastal wetland deep learning classification device, which includes:
the preprocessing module is used for correcting and normalizing the acquired original hyperspectral image data to obtain hyperspectral image data to be processed, and performing exception point removal processing and normalization processing on the acquired original lidar data to obtain lidar data to be processed;
the convolutional layer construction module is used for constructing three layers of Octave convolutional layers in each mode according to the spatial resolution and the number of channels of the hyperspectral image data to be processed and the spatial resolution and the number of channels of the lidar data to be processed;
the data fusion module is used for carrying out component separation, component combination and frequency component synthesis on the hyperspectral image data to be processed and the lidar data to be processed based on the three Octave convolutional layers of each mode to obtain feature fusion data;
and the classification module is used for extracting directional texture information in the feature fusion data, and performing space, texture and spectrum combined classification by combining with the hyperspectral data to be processed to obtain target combined classification features so as to determine the target category.
In a third aspect, an embodiment of the present application provides an apparatus, including:
a processor, and a memory coupled to the processor;
the memory is used for storing a computer program, and the computer program is at least used for executing the coastal wetland deep learning classification method in the first aspect of the embodiment of the application;
the processor is used for calling and executing the computer program in the memory.
In a fourth aspect, the present application provides a storage medium, where the storage medium stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of the method for classifying coastal wetland deep learning according to the first aspect.
By adopting the technical scheme, the hyperspectral and laser radar combined classification can effectively combine and extract a plurality of fractional dimensional characteristics of different sensor data, further comprehensively utilize space, spectrum and texture characteristics, fully mine and utilize the integrity and reliability of multisource data, improve the combined ground object classification performance under different resolutions and different modes, and realize high-precision cooperative classification.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for classifying coastal wetland deep learning according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a coastal wetland deep learning classification device provided in the embodiment of the present application;
fig. 3 is a schematic structural diagram of an apparatus provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without making any creative effort, shall fall within the protection scope of the present invention.
First, an applicable scenario of the embodiment of the present application will be described. With the rise of deep learning, researchers at home and abroad have proposed a number of fusion methods based on deep learning. In fusion methods in which a convolutional neural network is used to acquire image features and reconstruct a fused image, only the result of the last layer is used as the image features, and this operation may lose a large amount of useful information obtained by the intermediate layers. Therefore, how to acquire multi-dimensional features from different convolutional layers to extract image features becomes a key problem. It is difficult to sufficiently extract features of different dimensions and different directions using only the spatial and spectral information of hyperspectral images and lidar data.
In addition, deep learning model training is data-driven, and a lack of fused data leads to insufficient training of the model. Therefore, how to adequately train the model in the absence of a fused data source is a problem to be solved by the embodiments of the present application. The hyperspectral image, owing to its high spectral resolution, narrow bandwidth and large information content, can be used to identify and detect ground targets and has strong diagnostic capability. However, hyperspectral imaging is affected by the time of day and the weather, and it is difficult to produce a high-quality image. Radar images, in contrast, provide more accurate elevation information and useful spatial contrast. By further combining visible-light images with higher spatial resolution, image results with high spectral resolution, high spatial resolution and all-day, all-weather characteristics can be obtained. Therefore, the fusion and multi-dimensional feature extraction of multi-source remote sensing data collected by multiple sensors has important research significance. For this reason, the embodiment of the application provides a coastal wetland deep learning classification method.
Examples
Fig. 1 is a flowchart of a method for classifying coastal wetland deep learning according to an embodiment of the present invention, where the method may be executed by a device for classifying coastal wetland deep learning according to an embodiment of the present invention, and the device may be implemented by software and/or hardware. Referring to fig. 1, the method may specifically include the following steps:
s101, correcting and normalizing the collected original hyperspectral image data to obtain hyperspectral image data to be processed, and removing abnormal points and normalizing the collected original lidar data to obtain the lidar data to be processed.
For example, the processing of the raw hyperspectral image data may include: carrying out geometric correction processing and radiation correction processing on the acquired original hyperspectral image data to obtain three-dimensional original hyperspectral image data; and carrying out normalization processing on the spectral reflectance value of the original hyperspectral image in the three-dimensional form to obtain hyperspectral image data to be processed.
The correction process may include geometric correction and radiometric correction. Specifically, the corrected raw hyperspectral image data are recorded as X_HSI, a three-dimensional cube of size R_H × C_H × B_H, where R_H, C_H and B_H are respectively the numbers of rows, columns and spectral channels of the hyperspectral image data; the spectral reflectance values of the hyperspectral image data are then normalized. In one specific example, R_H × C_H × B_H of the three-dimensional cube data may be 324 × 220 × 64.
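As a minimal illustration of this preprocessing step, the sketch below normalizes each spectral band of a corrected hyperspectral cube to [0, 1]; the NumPy usage, the function name and the per-band min-max scaling are assumptions for demonstration rather than details fixed by the method.

```python
import numpy as np

def normalize_hsi(x_hsi: np.ndarray) -> np.ndarray:
    """Normalize spectral reflectance values band by band to [0, 1].

    x_hsi: corrected hyperspectral cube of shape (R_H, C_H, B_H).
    """
    x = x_hsi.astype(np.float32)
    band_min = x.min(axis=(0, 1), keepdims=True)
    band_max = x.max(axis=(0, 1), keepdims=True)
    return (x - band_min) / (band_max - band_min + 1e-8)

# Example with the cube size quoted in the text (324 x 220 x 64).
x_hsi = np.random.rand(324, 220, 64).astype(np.float32) * 1000.0
x_hsi_norm = normalize_hsi(x_hsi)
```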
Optionally, the processing process of the original lidar data may specifically include: the difference is made between a digital surface model and a digital elevation model in original laser radar data to obtain a normalized digital surface model so as to remove abnormal points; and respectively carrying out amplitude normalization processing on each wave band by using three-wave band laser radar intensity map data in the original laser radar data to obtain laser radar data to be processed.
Specifically, the raw lidar data X_LiDAR include a digital surface model (DSM) image, a digital elevation model (DEM), a normalized digital surface model (nDSM) and three-band lidar intensity map data, each of size R_L × C_L, where R_L and C_L are respectively the numbers of rows and columns of the lidar data. Amplitude normalization and outlier-removal processing are performed on each band, and the normalized digital surface model is obtained by subtracting the digital elevation model from the digital surface model. In a specific example, R_L × C_L may be 324 × 220.
nDSM=DSM-DEM
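A small sketch of the lidar preprocessing described above: nDSM = DSM - DEM is computed and the amplitude of each band is normalized. The stacking of nDSM with the intensity bands and the min-max scaling are illustrative assumptions.

```python
import numpy as np

def preprocess_lidar(dsm, dem, intensity_bands):
    """Compute nDSM = DSM - DEM and normalize each band's amplitude to [0, 1].

    dsm, dem: (R_L, C_L) elevation rasters.
    intensity_bands: (R_L, C_L, 3) three-band lidar intensity data.
    """
    ndsm = dsm - dem  # removes the terrain trend, suppressing elevation outliers
    stack = np.concatenate([ndsm[..., None], intensity_bands], axis=-1)
    mins = stack.min(axis=(0, 1), keepdims=True)
    maxs = stack.max(axis=(0, 1), keepdims=True)
    return (stack - mins) / (maxs - mins + 1e-8)

dsm = np.random.rand(324, 220) * 50.0
dem = np.random.rand(324, 220) * 10.0
intensity = np.random.rand(324, 220, 3)
x_lidar = preprocess_lidar(dsm, dem, intensity)   # shape (324, 220, 4)
```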
S102, constructing three layers of Octave convolution layers of each mode according to the spatial resolution and the number of channels of the hyperspectral image data to be processed and the spatial resolution and the number of channels of the lidar data to be processed.
Specifically, based on the spatial resolution and channel number of the hyperspectral image data to be processed and of the lidar data to be processed, a linear scale representation of the input channels is obtained first, and then the three Octave convolution layers of the different modes are constructed. Octave convolution is a convolution form that separates high-frequency and low-frequency information within the convolution, which compresses the number of model parameters and improves test accuracy.
S103, based on the three Octave convolution layers of each mode, performing component separation, component combination and frequency component synthesis on the hyperspectral image data to be processed and the laser radar data to be processed to obtain feature fusion data.
Specifically, in a specific example, in an input layer of three Octave convolution layers, high-frequency components and low-frequency components of a two-dimensional image are separated, wherein the frequency of the high-frequency components is greater than a set frequency threshold value, and the frequency of the low-frequency components is less than the set frequency threshold value; combining high-frequency components and low-frequency components of each mode aiming at the hyperspectral image data and the laser radar data of each spatial resolution in the middle layer of the three-layer Octave convolution layer; and in the output layers of the three Octave convolution layers, frequency component synthesis is carried out on each high-frequency component and each low-frequency component with different resolutions and different frequencies to obtain feature fusion data. The feature fusion data can be high-spatial and spectral-resolution feature fusion data combining hyperspectral data information and lidar data information.
S103 will be described below with a specific example.
The input hyperspectral image X_HSI and lidar image X_LiDAR are given a linear scale representation, so that the feature tensor of each image can be separated into low-frequency and high-frequency components at the Octave input layer. Taking X_HSI as an example, the high-frequency component is the original image without Gaussian filtering, and the low-frequency component is the image obtained by Gaussian filtering. Since the low-frequency components of an image are usually redundant, the length and width of each low-frequency channel are set to 0.5 times those of the high-frequency channels. The original hyperspectral image X_HSI of size R_H × C_H × B_H is divided along the spectral-channel dimension, according to the low-frequency channel ratio α ∈ [0, 1], into a high-frequency component X_HSI^H, which captures the detail information of the image, and a low-frequency component X_HSI^L. The high-frequency component X_LiDAR^H and low-frequency component X_LiDAR^L of the lidar image are obtained in the same way. In one specific example, α may take 0.75.
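The component separation can be sketched as follows: a feature tensor is split along the channel dimension by the ratio α, and the low-frequency part is reduced to half the spatial size. Average pooling is used here as a simple stand-in for the Gaussian filtering mentioned above, and the tensor layout and channel counts are assumptions.

```python
import torch
import torch.nn.functional as F

def split_high_low(x: torch.Tensor, alpha: float = 0.75):
    """Split a (N, C, H, W) tensor into high- and low-frequency parts.

    The first (1 - alpha) * C channels stay at full resolution (high frequency);
    the remaining alpha * C channels are spatially halved (low frequency).
    Average pooling stands in for the Gaussian low-pass described in the text.
    """
    c = x.shape[1]
    c_low = int(round(alpha * c))
    x_high = x[:, : c - c_low]
    x_low = F.avg_pool2d(x[:, c - c_low:], kernel_size=2, stride=2)
    return x_high, x_low

x_hsi = torch.randn(1, 64, 324, 220)
h, l = split_high_low(x_hsi, alpha=0.75)   # h: (1, 16, 324, 220), l: (1, 48, 162, 110)
```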
The components X_HSI^H, X_HSI^L, X_LiDAR^H and X_LiDAR^L are used as the input to the second Octave convolution layer, whose convolution output consists of a low-frequency component Y^L and a high-frequency component Y^H. In the convolution operation, the convolution kernel W used to obtain Y^H and Y^L, corresponding to the high- and low-frequency parts X^H and X^L of the input, is composed of the four parts W^{H→H}, W^{H→L}, W^{L→H} and W^{L→L}, each of which is obtained by convolution with the corresponding part of the input.
In particular, take the low-frequency output of the hyperspectral image, Y_HSI^L, as an example:

Y_HSI^L = Y_HSI^{L→L} + Y_HSI^{H→L}

i.e. the output consists of two parts, a low-frequency partial convolution and a high-frequency partial convolution. To calculate Y_HSI^L, the convolution kernels W^{L→L} and W^{H→L} are initialized and convolved with the corresponding parts of the input data. The high-frequency part of the input image is first downsampled, and the output is

Y_HSI^{H→L}(p, q) = Σ_{(i,j)∈N_k} W^{H→L}(i, j)^T X_HSI^H(2p + i, 2q + j)

where (p, q) denotes the pixel coordinates, k is the convolution kernel size and N_k is the convolution neighborhood. For the high-frequency output, the low-frequency part of the input image is upsampled, and the output is

Y_HSI^{L→H}(p, q) = Σ_{(i,j)∈N_k} W^{L→H}(i, j)^T X_HSI^L(⌊p/2⌋ + i, ⌊q/2⌋ + j)

The high-frequency output Y_HSI^H and the corresponding outputs of the lidar branch can be obtained by the same principle.
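A compact sketch of one Octave convolution layer implementing the four-path update written above; the kernel size, channel counts, nearest-neighbour upsampling and average-pooling downsampling are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OctaveConv(nn.Module):
    """One Octave convolution layer:
    Y^H = conv(X^H; W^{H->H}) + up(conv(X^L; W^{L->H}))
    Y^L = conv(X^L; W^{L->L}) + conv(down(X^H); W^{H->L})
    """

    def __init__(self, in_h, in_l, out_h, out_l, k=3):
        super().__init__()
        pad = k // 2
        self.hh = nn.Conv2d(in_h, out_h, k, padding=pad)
        self.hl = nn.Conv2d(in_h, out_l, k, padding=pad)
        self.lh = nn.Conv2d(in_l, out_h, k, padding=pad)
        self.ll = nn.Conv2d(in_l, out_l, k, padding=pad)

    def forward(self, x_h, x_l):
        y_h = self.hh(x_h) + F.interpolate(self.lh(x_l), scale_factor=2, mode="nearest")
        y_l = self.ll(x_l) + self.hl(F.avg_pool2d(x_h, 2))
        return y_h, y_l

layer = OctaveConv(in_h=16, in_l=48, out_h=16, out_l=48)
y_h, y_l = layer(torch.randn(1, 16, 324, 220), torch.randn(1, 48, 162, 110))
```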
Optionally, in the output layer of the three-layer Octave convolution layer, frequency component synthesis is performed on each high-frequency component and low-frequency component with different resolutions and different frequencies to obtain feature fusion data, which may be specifically implemented by the following method: integrating high-frequency data of the hyperspectral image data and low-frequency data of the laser radar image data to obtain first fusion layer data; and integrating the data of the first fusion layer and the high-frequency data of the laser radar image to obtain data of a second fusion layer as feature fusion data.
The Octave convolution outputs Y^H and Y^L integrate the high-frequency and low-frequency components of the image; by setting the output low-frequency channel ratio α = 0, the combined high-frequency output Y_Merge is acquired. Specifically, according to the spatial-resolution difference between the hyperspectral image and the lidar image (usually the spatial resolution of the hyperspectral image is lower than that of the lidar image), the first layer integrates the high-frequency information X_HSI^H of the hyperspectral image with the low-frequency information X_LiDAR^L of the lidar to obtain the first fusion layer Y_Merge1; the second layer integrates the fusion layer Y_Merge1 with the high-frequency information of the lidar to output the combined high-frequency component Y_Merge2. The fused feature has size R_M × C_M × B_M and exhibits both high spatial resolution and high spectral resolution.
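The two-stage frequency-component synthesis can be sketched as below; channel concatenation is used as a simple stand-in for the "integration" of components, and the tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def hierarchical_fuse(hsi_high, lidar_low, lidar_high):
    """Two-stage frequency-component synthesis (output low-frequency ratio alpha = 0).

    Stage 1: hyperspectral high-frequency + upsampled lidar low-frequency.
    Stage 2: stage-1 result + lidar high-frequency.
    """
    lidar_low_up = F.interpolate(lidar_low, size=hsi_high.shape[-2:],
                                 mode="bilinear", align_corners=False)
    y_merge1 = torch.cat([hsi_high, lidar_low_up], dim=1)   # first fusion layer
    y_merge2 = torch.cat([y_merge1, lidar_high], dim=1)     # second fusion layer
    return y_merge2                                          # (N, B_M, R_M, C_M)

fused = hierarchical_fuse(torch.randn(1, 16, 324, 220),
                          torch.randn(1, 48, 162, 110),
                          torch.randn(1, 8, 324, 220))
```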
S104, directional texture information in the feature fusion data is extracted, space, texture and spectrum combined classification is carried out by combining with the hyperspectral data to be processed, target combined classification features are obtained, and therefore the target category is determined.
Specifically, a multi-fractional-dimension Gabor full-convolution network is designed on the basis of the feature fusion data, and three layers of fractional-dimension Gabor filters with different fractional orders are designed to extract the spatially directional texture features of the fused image. In each fractional Gabor convolution layer, fractional Gabor filtering is applied to the original image signal to extract directional texture information; the three layers of fractional Gabor features with different fractional orders are then comprehensively weighted and combined with the spectral features of the hyperspectral image to perform spatial-texture-spectral joint classification, yielding the final hyperspectral and lidar joint classification map. The two-dimensional fractional-domain Gabor filter is formed by multiplying a Gaussian function by a sinusoidal plane wave.
In the embodiment of the application, the hyperspectral and laser radar combined classification can effectively combine and extract a plurality of fractional dimensional features of different sensor data, so that space, spectrum and texture features are comprehensively utilized, the integrity and reliability of multisource data are fully excavated and utilized, the combined ground object classification performance under different resolutions and different modes is improved, and high-precision cooperative classification is realized.
The following is a detailed description:
(1) The basic concept of a two-dimensional Gabor filter is explained first. Over an m × n support, the filter can be written as

ψ(x, y) = (f^2 / (π γ η)) exp(-(α^2 x'^2 + β^2 y'^2)) exp(j 2π f x')
x' = x cosθ + y sinθ
y' = -x sinθ + y cosθ

wherein f is the frequency-domain variable, m and n give the size of the Gabor filter, θ is the angle between the Gaussian function and the plane wave, α and β are the scale coefficients of the Gaussian function in the two directions, and the corresponding γ and η are the scale coefficients of the Gaussian function in the two directions in the frequency domain (α = f/γ, β = f/η).
Traditional frequency-domain filtering is global filtering: it can obtain the whole frequency spectrum of a signal, but it cannot effectively process non-stationary signals and abrupt textures. In order to overcome this limitation of traditional frequency-domain filtering in two-dimensional signal processing and to better analyse the local characteristics of signals, the invention combines the fractional Fourier transform with Gabor filtering to improve the extraction of directional texture features from image data. The two-dimensional fractional Fourier transform kernel function is K_{p_x,p_y}(x, y, u, v), where (x, y) are the spatial-domain variables and (u, v) are the fractional-domain variables. When fractional Gabor filtering of two-dimensional signals is carried out using the properties of the kernel function and the separability of the Gaussian window function, the fractional Gabor transform can be performed first along one direction and then along the other direction to complete the two-dimensional fractional Gabor transform; the transform kernel is decomposed as K_{p_x}(x, u) and K_{p_y}(y, v), wherein:

K_p(x, u) = sqrt(1 - j cotφ) · exp(jπ(x^2 cotφ - 2xu cscφ + u^2 cotφ)),  φ = pπ/2
According to this principle, the two-dimensional fractional-domain Gabor filter applied in the embodiments of the present application is obtained by combining the separable fractional Fourier transform kernels K_{p_x}(x, u) and K_{p_y}(y, v) with the Gabor filter defined above, i.e. the Gaussian envelope of the Gabor filter modulates the fractional-domain transform kernel along each of the two directions.
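For illustration, the sketch below builds a bank of ordinary two-dimensional Gabor kernels using the variable names defined above (f, θ, γ, η); the fractional-domain variant would additionally involve the fractional Fourier kernel, which is not reproduced here. The kernel size and parameter values are assumptions.

```python
import numpy as np

def gabor_kernel(size, f, theta, gamma, eta):
    """Complex 2-D Gabor kernel with frequency f, orientation theta and Gaussian
    scale factors gamma, eta (so alpha = f/gamma, beta = f/eta as in the text)."""
    half = size // 2
    n, m = np.meshgrid(np.arange(-half, half + 1), np.arange(-half, half + 1))
    m_rot = m * np.cos(theta) + n * np.sin(theta)
    n_rot = -m * np.sin(theta) + n * np.cos(theta)
    envelope = np.exp(-((f / gamma) ** 2 * m_rot ** 2 + (f / eta) ** 2 * n_rot ** 2))
    carrier = np.exp(2j * np.pi * f * m_rot)
    return (f ** 2 / (np.pi * gamma * eta)) * envelope * carrier

# A small bank with 8 orientations.
bank = [gabor_kernel(11, f=0.25, theta=k * np.pi / 8, gamma=np.sqrt(2), eta=np.sqrt(2))
        for k in range(8)]
```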
Optionally, a fractional-domain Gabor convolution layer is designed according to the two-dimensional fractional-domain Gabor filter; the fractional-domain Gabor convolution layer is applied to perform a full convolution operation on the hyperspectral image data to be processed and the lidar data to be processed; a spectral convolution layer is set and applied, and its layer results are summed to obtain the spectral features of the hyperspectral image data; the weighted sum of the directional texture information of the fused data and the spectral features of the hyperspectral image data is taken as the joint feature; with the joint feature as input, the probability that each pixel belongs to each category is acquired; and the category with the highest probability is determined as the target category.
(2) A fractional-domain Gabor convolution layer is designed based on the two-dimensional fractional-domain Gabor filter. A fixed set of fractional transformation orders (p_x, p_y) is used in each convolution layer to extract image features in that fractional domain. The fractional-domain Gabor convolution kernel is obtained by point-wise multiplication of the two-dimensional fractional-domain Gabor filter with a classical convolution kernel:

C_{i,o}^{(k)} = c_{i,o} ∘ G_{i,o}^{(k)}

wherein G_{i,o}^{(k)} denotes the i-th fractional Gabor modulation kernel of the o-th channel of the k-th branch, c_{i,o} denotes the original convolution kernel of each branch, and ∘ denotes element-wise multiplication.
(3) Three fractional-domain Gabor convolution layers are designed with fractional orders (p_x1, p_y1), (p_x2, p_y2) and (p_x3, p_y3); their outputs are Y_Gabor1, Y_Gabor2 and Y_Gabor3, e.g.

Y_Gabor1 = W_Gabor Y_Merge + B

wherein W_Gabor denotes the two-dimensional fractional-domain Gabor filter bank and B is the bias term. In the fourth convolution layer, the results of the first three layers are added to obtain the multi-fractional-domain joint directional Gabor feature Y_Gabor of the fused data.
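A sketch of three Gabor-modulated convolution branches whose outputs are summed into the multi-fractional-domain joint directional feature; random tensors stand in for the fixed fractional Gabor modulation kernels, and all channel sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaborBranch(nn.Module):
    """One branch: a conv layer whose kernels are element-wise modulated by a
    fixed Gabor kernel (stand-in for one fractional order (p_x, p_y))."""

    def __init__(self, in_ch, out_ch, gabor: torch.Tensor):
        super().__init__()
        k = gabor.shape[-1]
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        self.register_buffer("gabor", gabor)   # (k, k), fixed modulation kernel

    def forward(self, x):
        w = self.conv.weight * self.gabor      # broadcast over (out_ch, in_ch, k, k)
        return F.conv2d(x, w, self.conv.bias, padding=self.conv.padding)

# Three branches for three fractional orders; their outputs are summed.
k = 5
branches = nn.ModuleList([GaborBranch(72, 32, torch.randn(k, k)) for _ in range(3)])
y_merge = torch.randn(1, 72, 324, 220)
y_gabor = sum(branch(y_merge) for branch in branches)   # multi-order joint feature
```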
(4) A full convolution operation is performed on the spectral information of the hyperspectral image using the raw hyperspectral data X_HSI. First, the whole hyperspectral image is input into the first full convolution layer with a convolution kernel size of 1 × 1 to obtain rich spectral features for each pixel; each channel of the convolution layer is

Y_i = f( Σ_j w_i ∗ X_j + b_i )

wherein Y_i is the i-th channel of the convolution output, w_i is the convolution kernel, X_j is the j-th channel of the output of the previous layer, b_i is the bias term of the i-th channel, and f(x) = max(0, x) is the activation function.
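A minimal sketch of this 1 × 1 spectral full convolution with ReLU activation; the channel counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

# 1x1 full convolution over the spectral dimension with ReLU activation,
# as in Y_i = max(0, sum_j w_i * X_j + b_i); channel counts are illustrative.
spectral_conv = nn.Sequential(
    nn.Conv2d(in_channels=64, out_channels=32, kernel_size=1),
    nn.ReLU(),
)
x_hsi = torch.randn(1, 64, 324, 220)       # whole hyperspectral image (N, B_H, R_H, C_H)
spectral_features = spectral_conv(x_hsi)   # per-pixel spectral features
```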
(5) A spectral convolution layer is designed, and in the fourth convolution layer the results of the first three layers are added to obtain the spectral feature Y_Spec of the hyperspectral image data:

Y_Spec = Σ_{k=1}^{3} Y_i^{(k)}

wherein Y_i^{(k)} denotes the i-th channel output of the k-th layer.
(6) The multi-fractional-domain joint directional Gabor feature Y_Gabor of the fused data and the spectral feature Y_Spec of the hyperspectral image data are combined into the joint feature

Y = λ_Gabor Y_Gabor + λ_Spec Y_Spec

wherein λ_Gabor and λ_Spec are the weighting factors of the Gabor feature and the spectral feature.
(7) The joint features are input, and the probability that the pixel located at (u, v) belongs to the k-th class is obtained as

P(Y(u, v) = k) = exp(Y_k(u, v)) / Σ_{i=1}^{n} exp(Y_i(u, v))

wherein Y(u, v) denotes the label of the pixel at position (u, v), Y_i(u, v) denotes the output feature of the corresponding channel of the fusion layer, and n is the total number of categories contained in the image. The class with the maximum probability at each pixel is taken as the final classification result.
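The weighted feature combination and per-pixel decision can be sketched as follows, assuming for illustration that both feature maps already have one channel per class; the weights λ_Gabor, λ_Spec and the class count are illustrative.

```python
import torch
import torch.nn.functional as F

def classify(y_gabor, y_spec, lambda_gabor=0.5, lambda_spec=0.5):
    """Joint feature Y = lambda_Gabor * Y_Gabor + lambda_Spec * Y_Spec, then
    per-pixel softmax over the class channels and argmax for the class map."""
    y = lambda_gabor * y_gabor + lambda_spec * y_spec   # (N, n_classes, H, W)
    prob = F.softmax(y, dim=1)                           # P(pixel (u, v) belongs to class k)
    return prob.argmax(dim=1)                            # (N, H, W) class labels

n_classes = 11
labels = classify(torch.randn(1, n_classes, 324, 220),
                  torch.randn(1, n_classes, 324, 220))
```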
In the above embodiment, the spatial resolution of the hyperspectral image is the same as that of the lidar image and the scene contains 11 classes of ground objects; this specifically describes the case of multi-source remote sensing data with the same spatial resolution.
Illustratively, the spatial resolution of the hyperspectral image may instead be 1/2 of that of the lidar image and 20 ground-object classes may be included; this illustrates the joint classification method provided by the invention on multi-source remote sensing data with different spatial resolutions. In this particular example, R_H × C_H × B_H of the three-dimensional cube data may be 4124 × 1202 × 48, and the raw lidar data X_LiDAR comprise 8248 × 2404 × 7 data formed by the digital surface model image, the digital elevation model, the normalized digital surface model and the three-band lidar intensity map data, i.e. seven bands. The other method steps are the same as in the above embodiment and are not repeated here.
In addition, the present application performs multi-order fractional Fourier transforms on the spectral curve of each pixel in the hyperspectral image and analyses the statistical distribution characteristics of the image in different transform domains, which significantly improves the discrimination between different classes of ground objects. Aimed at the multi-scale and multi-modal characteristics of multi-source remote sensing data in deep learning, an Octave convolutional neural network is used to perform frequency-domain analysis of the deep network, the low-frequency and high-frequency components of the multi-source remote sensing data are separated, features are extracted in different fractional domains, and the directional characteristics of the deep network are studied, thereby realizing high-precision collaborative classification. A hierarchical fusion module is constructed to complete a multi-channel, multi-source-domain and multi-hidden-layer feature fusion classification model, and the spatial, spectral, texture, elevation and other information of the multi-source remote sensing data are comprehensively utilized to realize high-precision collaborative classification.
Fig. 2 is a schematic structural diagram of a coastal wetland deep learning classification device according to an embodiment of the present invention, and the device is suitable for implementing a coastal wetland deep learning classification method according to an embodiment of the present invention. As shown in fig. 2, the apparatus may specifically include a preprocessing module 201, a convolutional layer construction module 202, a data fusion module 203, and a classification module 204.
The preprocessing module 201 is configured to perform correction processing and normalization processing on the acquired original hyperspectral image data to obtain hyperspectral image data to be processed, and perform exception point removal processing and normalization processing on the acquired original lidar data to obtain lidar data to be processed; the convolutional layer construction module 202 is configured to construct three layers of Octave convolutional layers in each mode according to the spatial resolution and the number of channels of the to-be-processed hyperspectral image data and the spatial resolution and the number of channels of the to-be-processed lidar data; the data fusion module 203 is used for performing component separation, component combination and frequency component synthesis on the hyperspectral image data to be processed and the lidar data to be processed based on three Octave convolution layers of each mode to obtain feature fusion data; the classification module 204 is configured to extract directional texture information in the feature fusion data, perform spatial, texture, and spectrum joint classification in combination with the to-be-processed hyperspectral data, obtain target joint classification features, and determine a target category.
By adopting the technical scheme, the hyperspectral and laser radar combined classification can effectively combine and extract a plurality of fractional dimensional characteristics of different sensor data, further comprehensively utilize space, spectrum and texture characteristics, fully mine and utilize the integrity and reliability of multisource data, improve the combined ground object classification performance under different resolutions and different modes, and realize high-precision cooperative classification.
Optionally, the preprocessing module 201 is specifically configured to:
performing geometric correction processing and radiation correction processing on the acquired original hyperspectral image data to obtain three-dimensional original hyperspectral image data;
and carrying out normalization processing on the spectral reflectance value of the original hyperspectral image in the three-dimensional form to obtain hyperspectral image data to be processed.
Optionally, the preprocessing module 201 is specifically configured to:
the difference is made between a digital surface model and a digital elevation model in original laser radar data to obtain a normalized digital surface model so as to remove abnormal points;
and respectively carrying out amplitude normalization processing on each wave band by using three-wave band laser radar intensity map data in the original laser radar data to obtain laser radar data to be processed.
Optionally, the data fusion module 203 includes:
the first fusion submodule is used for separating high-frequency components and low-frequency components of the two-dimensional image on an input layer of the three-layer Octave convolution layer, wherein the frequency of the high-frequency components is greater than a set frequency threshold value, and the frequency of the low-frequency components is less than the set frequency threshold value;
the second fusion submodule is used for combining high-frequency components and low-frequency components of each mode aiming at the hyperspectral image data and the laser radar data of each spatial resolution in the middle layer of the three-layer Octave convolution layer;
and the third fusion submodule is used for carrying out frequency component synthesis on each high-frequency component and low-frequency component with different resolutions and different frequencies on the output layer of the three-layer Octave convolution layer to obtain characteristic fusion data.
Optionally, the third fusion submodule is specifically configured to:
integrating high-frequency data of the hyperspectral image data and low-frequency data of the laser radar image data to obtain first fusion layer data;
and integrating the data of the first fusion layer and the high-frequency data of the laser radar image to obtain data of a second fusion layer as feature fusion data.
Optionally, the classification module 204 is specifically configured to:
designing a plurality of two-dimensional fractional domain Gabor filters with different fractional orders according to the feature fusion data to extract directional texture information of the feature fusion data;
designing a score domain Gabor convolution layer according to a two-dimensional score domain Gabor filter;
applying a fractional domain Gabor convolution layer, and performing full convolution operation on hyperspectral image data to be processed and laser radar data to be processed;
setting a spectrum convolution layer, and summing the hyperspectral image data to be processed by applying the spectrum convolution layer to obtain the spectral characteristics of the hyperspectral image data;
taking the weighted sum of the directional texture information of the fusion data and the spectral feature of the hyperspectral image data as a joint feature;
taking the joint characteristics as input, and acquiring the probability that each pixel point belongs to each category;
and determining the class with the highest probability as the target class.
Optionally, the two-dimensional fractional domain Gabor filter is a multiplication of a gaussian function and a sinusoidal plane wave.
The coastal wetland deep learning classification device provided by the embodiment of the invention can execute the coastal wetland deep learning classification method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
An embodiment of the present invention further provides an apparatus, please refer to fig. 3, fig. 3 is a schematic structural diagram of an apparatus, and as shown in fig. 3, the apparatus includes: a processor 310, and a memory 320 coupled to the processor 310; the memory 320 is used for storing a computer program, and the computer program is at least used for executing the coastal wetland deep learning classification method in the embodiment of the invention; a processor 310 for invoking and executing the computer program in the memory; the coastal wetland deep learning classification at least comprises the following steps: correcting and normalizing the collected original hyperspectral image data to obtain hyperspectral image data to be processed, and removing abnormal points and normalizing the collected original lidar data to obtain lidar data to be processed; constructing three layers of Octave convolution layers of each mode according to the spatial resolution and the number of channels of the hyperspectral image data to be processed and the spatial resolution and the number of channels of the laser radar data to be processed; based on the three-layer Octave convolution layer of each mode, performing component separation, component combination and frequency component synthesis on hyperspectral image data to be processed and laser radar data to be processed to obtain feature fusion data; directional texture information in the feature fusion data is extracted, and spatial, texture and spectrum combined classification is carried out by combining with the hyperspectral data to be processed to obtain target combined classification features so as to determine the target category.
The embodiment of the present invention further provides a storage medium, where the storage medium stores a computer program, and when the computer program is executed by a processor, the method implements the steps in the method for classifying coastal wetland deep learning according to the embodiment of the present invention: correcting and normalizing the collected original hyperspectral image data to obtain hyperspectral image data to be processed, and removing abnormal points and normalizing the collected original lidar data to obtain lidar data to be processed; constructing three layers of Octave convolution layers of each mode according to the spatial resolution and the number of channels of the hyperspectral image data to be processed and the spatial resolution and the number of channels of the laser radar data to be processed; based on the three Octave convolution layers of each mode, performing component separation, component combination and frequency component synthesis on hyperspectral image data to be processed and laser radar data to be processed to obtain feature fusion data; directional texture information in the feature fusion data is extracted, space, texture and spectrum combined classification is carried out by combining with the hyperspectral data to be processed, target combined classification features are obtained, and the target category is determined.
It is understood that the same or similar parts in the above embodiments may be mutually referred to, and the same or similar parts in other embodiments may be referred to for the content which is not described in detail in some embodiments.
It should be noted that the terms "first," "second," and the like in the description of the present invention are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present invention, the meaning of "a plurality" means at least two unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are exemplary and not to be construed as limiting the present invention, and that changes, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (8)

1. A coastal wetland deep learning classification method is characterized by comprising the following steps:
correcting and normalizing the acquired original hyperspectral image data to obtain hyperspectral image data to be processed, and removing abnormal points and normalizing the acquired original lidar data to obtain lidar data to be processed;
constructing three layers of Octave convolution layers of each mode according to the spatial resolution and the number of channels of the hyperspectral image data to be processed and the spatial resolution and the number of channels of the lidar data to be processed;
based on the three Octave convolutional layers of each mode, performing component separation, component combination and frequency component synthesis on the hyperspectral image data to be processed and the lidar data to be processed to obtain feature fusion data;
directional texture information in the feature fusion data is extracted, and space, texture and spectrum combined classification is carried out by combining with the hyperspectral data to be processed to obtain target combined classification features so as to determine the target category;
the three-layer Octave convolution layer based on each mode performs component separation, component combination and frequency component synthesis on the to-be-processed hyperspectral image data and the to-be-processed lidar data to obtain feature fusion data, and the method comprises the following steps:
separating high-frequency components and low-frequency components of the two-dimensional image on an input layer of the three-layer Octave convolution layer, wherein the frequency of the high-frequency components is greater than a set frequency threshold value, and the frequency of the low-frequency components is less than the set frequency threshold value;
combining high-frequency components and low-frequency components of each mode aiming at hyperspectral image data and laser radar data of each spatial resolution in an intermediate layer of the three-layer Octave convolutional layer;
performing frequency component synthesis on each high-frequency component and low-frequency component with different resolutions and different frequencies on an output layer of the three-layer Octave convolution layer to obtain feature fusion data;
wherein, in the output layer of three-layer Octave convolution layer, carry out frequency component synthesis to each high frequency component and low frequency component of different resolution and different frequency, obtain characteristic fusion data, include:
integrating high-frequency data of the hyperspectral image data and low-frequency data of the laser radar image data to obtain first fusion layer data;
and integrating the data of the first fusion layer and the high-frequency data of the laser radar image to obtain data of a second fusion layer as feature fusion data.
2. The method according to claim 1, wherein the step of performing correction processing and normalization processing on the collected original hyperspectral image data to obtain hyperspectral image data to be processed comprises the following steps:
performing geometric correction processing and radiation correction processing on the acquired original hyperspectral image data to obtain three-dimensional original hyperspectral image data;
and carrying out normalization processing on the spectral reflectance value of the original hyperspectral image in the three-dimensional form to obtain hyperspectral image data to be processed.
3. The method according to claim 1, wherein the performing outlier removal and normalization on the collected raw lidar data to obtain lidar data to be processed includes:
applying a difference between a digital surface model and a digital elevation model in the original laser radar data to obtain a normalized digital surface model so as to remove abnormal points;
and respectively carrying out amplitude normalization processing on each wave band by using the three-wave-band laser radar intensity map data in the original laser radar data to obtain laser radar data to be processed.
4. The method according to claim 1, wherein the extracting directional texture information in the feature fusion data, and performing spatial, texture and spectral joint classification in combination with to-be-processed hyperspectral data to obtain a target joint classification feature to determine a target category comprises:
designing a plurality of two-dimensional fractional domain Gabor filters with different fractional orders according to the feature fusion data to extract directional texture information of the feature fusion data;
setting a fractional domain Gabor convolution layer according to the two-dimensional fractional domain Gabor filter;
applying the fractional domain Gabor convolution layer to perform full convolution operation on hyperspectral image data to be processed and laser radar data to be processed;
setting a spectrum convolution layer, and applying the spectrum convolution layer to sum the hyperspectral image data to be processed so as to obtain the spectral characteristics of the hyperspectral image;
taking the weighted sum of the directional texture information of the fusion data and the spectral feature of the hyperspectral image data as a joint feature;
taking the combined features as input, and acquiring the probability that each pixel point belongs to each category;
and determining the class with the highest probability as the target class.
5. The method of claim 4, wherein the two-dimensional fractional domain Gabor filter is a multiplication of a Gaussian function and a sinusoidal plane wave.
6. A coastal wetland deep learning classification device, characterized by comprising:
a preprocessing module, configured to perform correction processing and normalization processing on collected original hyperspectral image data to obtain hyperspectral image data to be processed, and to perform outlier removal processing and normalization processing on collected original lidar data to obtain lidar data to be processed;
a convolutional layer construction module, configured to construct three-layer Octave convolutional layers for each modality according to the spatial resolution and the number of channels of the hyperspectral image data to be processed and the spatial resolution and the number of channels of the lidar data to be processed;
a data fusion module, configured to perform component separation, component combination and frequency component synthesis on the hyperspectral image data to be processed and the lidar data to be processed based on the three-layer Octave convolutional layers of each modality to obtain feature fusion data;
a classification module, configured to extract directional texture information from the feature fusion data, and to perform joint spatial, texture and spectral classification in combination with the hyperspectral data to be processed to obtain target joint classification features so as to determine the target category;
wherein performing component separation, component combination and frequency component synthesis on the hyperspectral image data to be processed and the lidar data to be processed based on the three-layer Octave convolutional layers of each modality to obtain the feature fusion data comprises:
separating high-frequency components and low-frequency components of the two-dimensional image at an input layer of the three-layer Octave convolutional layers, wherein the frequency of the high-frequency components is greater than a set frequency threshold, and the frequency of the low-frequency components is less than the set frequency threshold;
combining, at an intermediate layer of the three-layer Octave convolutional layers, the high-frequency components and low-frequency components of each modality for the hyperspectral image data and laser radar data of each spatial resolution;
performing, at the output layer of the three-layer Octave convolutional layers, frequency component synthesis on the high-frequency components and low-frequency components of different resolutions and different frequencies to obtain the feature fusion data;
wherein performing frequency component synthesis on the high-frequency components and low-frequency components of different resolutions and different frequencies at the output layer of the three-layer Octave convolutional layers to obtain the feature fusion data comprises:
integrating the high-frequency data of the hyperspectral image data and the low-frequency data of the laser radar image data to obtain first fusion layer data;
and synthesizing the first fusion layer data and the high-frequency data of the laser radar image to obtain second fusion layer data as the feature fusion data.
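As a sketch of the input-layer frequency separation performed by the data fusion module, the following assumes an average-pooling split into a half-resolution (octave) low-frequency component and a residual high-frequency component; this particular realization is an assumption, not the patented operator.

```python
import torch
import torch.nn.functional as F

def octave_split(x: torch.Tensor):
    """Split a (N, C, H, W) feature map into high- and low-frequency components."""
    low = F.avg_pool2d(x, kernel_size=2)                      # half-resolution, low-frequency
    low_up = F.interpolate(low, size=x.shape[-2:], mode="nearest")
    high = x - low_up                                         # residual high-frequency detail
    return high, low
```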
7. An apparatus, comprising:
a processor, and a memory coupled to the processor;
the memory is used for storing a computer program, and the computer program is at least used for executing the coastal wetland deep learning classification method of any one of claims 1 to 5;
the processor is used for calling and executing the computer program in the memory.
8. A storage medium storing a computer program, wherein the computer program is executed by a processor to implement the steps of the coastal wetland deep learning classification method according to any one of claims 1 to 5.
CN202010701215.1A 2020-07-20 2020-07-20 Coastal wetland deep learning classification method, device, equipment and storage medium Active CN111898662B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010701215.1A CN111898662B (en) 2020-07-20 2020-07-20 Coastal wetland deep learning classification method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111898662A CN111898662A (en) 2020-11-06
CN111898662B (en) 2023-01-06

Family

ID=73189546

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010701215.1A Active CN111898662B (en) 2020-07-20 2020-07-20 Coastal wetland deep learning classification method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111898662B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022109945A1 (en) * 2020-11-26 2022-06-02 深圳大学 Hyperspectral and lidar joint classification method based on scale adaptive filtering
CN112464891B (en) * 2020-12-14 2023-06-16 湖南大学 Hyperspectral image classification method
CN113361407A (en) * 2021-06-07 2021-09-07 上海海洋大学 PCANet-based space spectrum feature and hyperspectral sea ice image combined classification method
CN114707595B (en) * 2022-03-29 2024-01-16 中国科学院精密测量科学与技术创新研究院 Spark-based hyperspectral laser radar multichannel weighting system and method
CN116894972B (en) * 2023-06-25 2024-02-13 耕宇牧星(北京)空间科技有限公司 Wetland information classification method and system integrating airborne camera image and SAR image

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015207235A (en) * 2014-04-23 2015-11-19 日本電気株式会社 Data fusion device, land coverage classification system, method and program
CN109993220A (en) * 2019-03-23 2019-07-09 西安电子科技大学 Multi-source Remote Sensing Images Classification method based on two-way attention fused neural network
CN111191736A (en) * 2020-01-05 2020-05-22 西安电子科技大学 Hyperspectral image classification method based on depth feature cross fusion
CN111242228A (en) * 2020-01-16 2020-06-05 武汉轻工大学 Hyperspectral image classification method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN111898662A (en) 2020-11-06

Similar Documents

Publication Publication Date Title
CN111898662B (en) Coastal wetland deep learning classification method, device, equipment and storage medium
Javhar et al. Comparison of multi-resolution optical Landsat-8, Sentinel-2 and radar Sentinel-1 data for automatic lineament extraction: A case study of Alichur area, SE Pamir
Ahmadi et al. Fault-based geological lineaments extraction using remote sensing and GIS—a review
Kulkarni et al. Pixel level fusion techniques for SAR and optical images: A review
Wang et al. Review of pulse-coupled neural networks
CN107239751B (en) High-resolution SAR image classification method based on non-subsampled contourlet full convolution network
Small Spatiotemporal dimensionality and Time-Space characterization of multitemporal imagery
Fakiris et al. Object-based classification of sub-bottom profiling data for benthic habitat mapping. Comparison with sidescan and RoxAnn in a Greek shallow-water habitat
CN111008664B (en) Hyperspectral sea ice detection method based on space-spectrum combined characteristics
Wang et al. MCT-Net: Multi-hierarchical cross transformer for hyperspectral and multispectral image fusion
Wang et al. A novel image fusion method based on FRFT-NSCT
Dibs et al. Multi-fusion algorithms for detecting land surface pattern changes using multi-high spatial resolution images and remote sensing analysis
Zhao et al. Airborne multispectral LiDAR point cloud classification with a feature Reasoning-based graph convolution network
CN113610070A (en) Landslide disaster identification method based on multi-source data fusion
Bachofer et al. Multisensoral topsoil mapping in the semiarid Lake Manyara region, northern Tanzania
CN115471448A (en) Artificial intelligence-based thymus tumor histopathology typing method and device
CN112634159A (en) Hyperspectral image denoising method based on blind noise estimation
Zhang et al. A study on coastline extraction and its trend based on remote sensing image data mining
Deng et al. Multiple frame splicing and degradation learning for hyperspectral imagery super-resolution
CN112989940B (en) Raft culture area extraction method based on high-resolution third satellite SAR image
Teo et al. Pyramid-based image empirical mode decomposition for the fusion of multispectral and panchromatic images
Arellano Missing information in remote sensing: wavelet approach to detect and remove clouds and their shadows
CN115019178A (en) Hyperspectral image classification method based on large kernel convolution attention
Chen et al. Improving GPR imaging of the buried water utility infrastructure by integrating the multidimensional nonlinear data decomposition technique into the edge detection
Mei et al. Sensor-specific transfer learning for hyperspectral image processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant