CN113344030B - Remote sensing image feature fusion method and system based on decision correlation analysis - Google Patents

Remote sensing image feature fusion method and system based on decision correlation analysis

Info

Publication number
CN113344030B
CN113344030B CN202110509659.XA
Authority
CN
China
Prior art keywords
feature
features
sift
connection layer
remote sensing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110509659.XA
Other languages
Chinese (zh)
Other versions
CN113344030A (en
Inventor
杨松
庄立运
顾相平
王晓晖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huai'an Ideological And Technological Development Co ltd
Huaiyin Institute of Technology
Original Assignee
Huai'an Ideological And Technological Development Co ltd
Huaiyin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huai'an Ideological And Technological Development Co ltd, Huaiyin Institute of Technology filed Critical Huai'an Ideological And Technological Development Co ltd
Priority to CN202110509659.XA priority Critical patent/CN113344030B/en
Publication of CN113344030A publication Critical patent/CN113344030A/en
Application granted granted Critical
Publication of CN113344030B publication Critical patent/CN113344030B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/245Classification techniques relating to the decision surface
    • G06F18/2451Classification techniques relating to the decision surface linear, e.g. hyperplane
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a remote sensing image feature fusion method and system based on decision correlation analysis. The method comprises the following steps: generating SIFT features of the remote sensing image with Lowe's algorithm; encoding the SIFT features with the IFK method to obtain encoded SIFT features; extracting the remote sensing image with a VGG-VD-16 neural network model pre-trained on the ImageNet data set as the feature extractor to obtain the first fully connected layer feature F_i^{fc,1} and the second fully connected layer feature F_i^{fc,2}; fusing the encoded SIFT features with the first fully connected layer feature F_i^{fc,1} to obtain an intermediate feature; applying the DCA transformation to the intermediate feature and the second fully connected layer feature F_i^{fc,2}; and fusing the transformed intermediate feature with the transformed second fully connected layer feature F_i^{fc,2} to obtain a fusion result, which is input into a linear classifier to obtain the remote sensing image classification result.

Description

Remote sensing image feature fusion method and system based on decision correlation analysis
Technical Field
The invention belongs to the technical field of image feature fusion, and particularly relates to a remote sensing image feature fusion method and system based on decision correlation analysis.
Background
The high-resolution remote sensing image has rich space and semantic information, and plays an important role in the fields of environment monitoring, urban planning, agricultural management and the like. In order to better utilize these remote sensing images, it is necessary to classify the remote sensing images.
Traditional remote sensing image classification mostly clusters and fuses low-level features, but such local features cannot express image semantics well because of the gap between low-level features and high-level semantics. With the advent of deep learning tools, neural networks are increasingly used in image processing. Deep learning methods can adaptively learn image features suited to a specific scene classification task and achieve better classification performance than traditional scene classification methods. However, classifying images with deep learning still faces several problems: effective data sets are scarce and labeling is difficult, training a new model takes too long, and the deep features have too many dimensions to be convenient in applications.
Disclosure of Invention
Purpose of the invention: to solve the difficulty of classifying remote sensing images in the prior art, the invention provides a remote sensing image feature fusion method and system based on decision correlation analysis.
The technical scheme is as follows: a remote sensing image feature fusion method based on decision correlation analysis comprises the following steps:
step 1: generating SIFT features of the remote sensing image by using Lowe's algorithm;
step 2: the SIFT features are encoded by an IFK method, and the encoded SIFT features are obtained;
step 3: extracting the remote sensing image with a VGG-VD-16 neural network model pre-trained on the ImageNet data set as the feature extractor to obtain the first fully connected layer feature F_i^{fc,1} and the second fully connected layer feature F_i^{fc,2};
step 4: fusing the encoded SIFT features with the first fully connected layer feature F_i^{fc,1} to obtain the intermediate feature H_m; applying the DCA transformation to the intermediate feature H_m and the second fully connected layer feature F_i^{fc,2}; and fusing the transformed intermediate feature with the transformed second fully connected layer feature F_i^{fc,2} to obtain the fusion result;
step 5: inputting the fusion result into a linear classifier to obtain the remote sensing image classification result.
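As a minimal sketch of steps 1 to 5, assuming hypothetical stand-in extractors (the real SIFT + IFK and VGG-VD-16 fully connected features are replaced here by fixed-size random vectors) and an identity placeholder for the DCA step, the fusion arithmetic can be written as:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the real extractors (SIFT + IFK coding and the
# VGG-VD-16 fully connected layers); they return fixed-size random vectors
# so that only the fusion arithmetic of steps 4-5 is exercised here.
def ifk_encoded_sift(image):
    return rng.standard_normal(4096)

def vgg_fc_features(image):
    return rng.standard_normal(4096), rng.standard_normal(4096)

def fuse(image, k=1.0, alpha=0.5, beta=1.0, dca=lambda a, b: (a, b)):
    h_ifk = ifk_encoded_sift(image)          # steps 1-2: SIFT + IFK coding
    f_fc1, f_fc2 = vgg_fc_features(image)    # step 3: fc-layer features
    h_m = k * f_fc1 + alpha * h_ifk          # step 4: intermediate feature H_m
    h_m_t, f_fc2_t = dca(h_m, f_fc2)         # step 4: DCA transform (identity here)
    return h_m_t + beta * f_fc2_t            # fused feature fed to the classifier

fused = fuse(image=None)
print(fused.shape)  # (4096,)
```

The weight coefficients k, α, β and the 4096-dimensional feature size are illustrative defaults, not values fixed by the patent.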
Further, the encoded SIFT features are expressed as:

H_IFK(f_i^s) = [G_μ^X, G_σ^X]  (1)

wherein:

G_μ^X(i) = (1/(T√w_i)) Σ_{t=1}^{T} γ_t(i)(x_t − μ_i)/σ_i  (2)

G_σ^X(i) = (1/(T√(2w_i))) Σ_{t=1}^{T} γ_t(i)[(x_t − μ_i)²/σ_i² − 1]  (3)

In the above, G_μ^X(i) is the gradient representation with respect to μ_i and G_σ^X(i) is the gradient representation with respect to σ_i; X is the SIFT feature set of the remote sensing image, expressed as X = {x_t, t = 1, 2, ..., T}; w_i, μ_i and σ_i are the mixing weight, mean and variance of the i-th Gaussian mixture component, respectively; T is the number of SIFT features; γ_t(i) is the probability that the t-th SIFT feature x_t was generated by the i-th Gaussian mixture component.
Further, the step 4 specifically comprises:

fusing the first fully connected layer feature F_i^{fc,1} with the IFK-encoded SIFT features to obtain the intermediate feature H_m, expressed as follows:

H_m = kF_i^{fc,1} + αH_IFK(f_i^s)  (4)

wherein k and α are the weight coefficients of the first fully connected layer feature and of the SIFT features, respectively, in the fusion;
applying the discriminant correlation analysis (DCA) transformation to the intermediate feature H_m and the second fully connected layer feature F_i^{fc,2}, the two transformed features being expressed as:

DCA(H_m) = (H_m)* = W_1 H_m  (5)

DCA(F_i^{fc,2}) = (F_i^{fc,2})* = W_2 F_i^{fc,2}  (6)

wherein W_1 and W_2 are the transformation matrices of the two features, expressed as:

W_1 = Σ^{-1/2} U^T Λ_a^{-1/2} A^T,  W_2 = Σ^{-1/2} V^T Λ_b^{-1/2} B^T  (7)

wherein A is the matrix of leading eigenvectors of the covariance matrix of the intermediate feature set H_m, and B is the matrix of leading eigenvectors of the covariance matrix of the second fully connected layer feature set F_i^{fc,2}; Λ_a is the diagonal matrix of eigenvalues of the covariance matrix of H_m, and Λ_b is the diagonal matrix of eigenvalues of the covariance matrix of F_i^{fc,2}; U and V are unitary matrices and Σ is a diagonal matrix, obtained by singular value decomposition of the covariance matrix between the intermediate feature set H_m and the second fully connected layer feature set F_i^{fc,2};
fusing the transformed intermediate feature H_m with the transformed second fully connected layer feature F_i^{fc,2} to obtain the fusion result, expressed as:

F_fusion = (kF_i^{fc,1} + αH_IFK(f_i^s))* + β(F_i^{fc,2})*  (8)

wherein F_i^{fc,1} is the first fully connected layer feature, H_IFK(f_i^s) is the IFK-encoded SIFT feature, F_i^{fc,2} is the second fully connected layer feature, k, α and β are the weight coefficients of the respective features in the fusion, and * denotes a feature after the DCA transformation.
The invention also discloses a remote sensing image feature fusion system based on the decision correlation analysis, which comprises the following steps:
the SIFT feature generation module is used for generating SIFT features of the remote sensing image by adopting Lowe's algorithm;
the coding module is used for coding the SIFT features by using an IFK method to obtain coded SIFT features;
the remote sensing image feature extraction module, used for extracting the remote sensing image with a VGG-VD-16 neural network model pre-trained on the ImageNet data set as the feature extractor, obtaining the first fully connected layer feature F_i^{fc,1} and the second fully connected layer feature F_i^{fc,2};
the fusion module, used for fusing the encoded SIFT features with the first fully connected layer feature F_i^{fc,1} to obtain an intermediate feature, applying the DCA transformation to the intermediate feature and the second fully connected layer feature F_i^{fc,2}, and fusing the transformed intermediate feature with the transformed second fully connected layer feature F_i^{fc,2} to obtain a fusion result;
and the classifier, used for inputting the fusion result into a linear classifier to obtain the remote sensing image classification result.
Further, the classifier is a linear classifier in libsvm.
Further, the encoded SIFT features are expressed as:

H_IFK(f_i^s) = [G_μ^X, G_σ^X]  (1)

wherein:

G_μ^X(i) = (1/(T√w_i)) Σ_{t=1}^{T} γ_t(i)(x_t − μ_i)/σ_i  (2)

G_σ^X(i) = (1/(T√(2w_i))) Σ_{t=1}^{T} γ_t(i)[(x_t − μ_i)²/σ_i² − 1]  (3)

In the above, G_μ^X(i) is the gradient representation with respect to μ_i and G_σ^X(i) is the gradient representation with respect to σ_i; X is the SIFT feature set of the remote sensing image, expressed as X = {x_t, t = 1, 2, ..., T}; w_i, μ_i and σ_i are the mixing weight, mean and variance of the i-th Gaussian mixture component, respectively; T is the number of SIFT features; γ_t(i) is the probability that the t-th SIFT feature x_t was generated by the i-th Gaussian mixture component;
further, the intermediate feature H m The expression is as follows:
H m =kF i fc.1 +αH IFK (fi s ) (4)
where k and α represent the weight coefficient of the first fully connected layer and the weight coefficient of SIFT feature when features are fused.
Further, the transformed intermediate feature H_m and the transformed second fully connected layer feature F_i^{fc,2} are expressed as:

DCA(H_m) = (H_m)* = W_1 H_m  (5)

DCA(F_i^{fc,2}) = (F_i^{fc,2})* = W_2 F_i^{fc,2}  (6)

wherein W_1 and W_2 are the transformation matrices of the two features, expressed as:

W_1 = Σ^{-1/2} U^T Λ_a^{-1/2} A^T,  W_2 = Σ^{-1/2} V^T Λ_b^{-1/2} B^T  (7)

wherein A is the matrix of leading eigenvectors of the covariance matrix of the intermediate feature set H_m, and B is the matrix of leading eigenvectors of the covariance matrix of the second fully connected layer feature set F_i^{fc,2}; Λ_a is the diagonal matrix of eigenvalues of the covariance matrix of H_m, and Λ_b is the diagonal matrix of eigenvalues of the covariance matrix of F_i^{fc,2}; U and V are unitary matrices and Σ is a diagonal matrix, obtained by singular value decomposition of the covariance matrix between the intermediate feature set H_m and the second fully connected layer feature set F_i^{fc,2};
the fusion result is expressed as:
wherein F is i fc.1 For the first full link layer feature, H IFK (f i s ) Is SIFT feature after IFK coding, F i fc.2 For the second fully connected layer feature, k, α and β represent the weight coefficients of the features at the time of feature fusion, respectively, and represent features after DCA transformation.
Beneficial effects: the method achieves a good classification effect by fusing deep learning features with traditional features. The SIFT features represent the position and scale information of the image, while the neural network features describe its semantic information, building a bridge between low-level features and high-level semantics. The scale invariance of the SIFT features is exploited by fusing them with the neural network features extracted by the pre-trained model, yielding a good classification effect. Meanwhile, the decision correlation analysis method greatly reduces the feature dimension while maintaining good classification performance.
Drawings
FIG. 1 is a system block diagram;
FIG. 2 is a confusion matrix for classifying a UCM-21 dataset according to the present invention;
FIG. 3 is a confusion matrix for classifying an RS-19 dataset according to the present invention.
Detailed Description
The inventive method is further elucidated below in connection with the accompanying drawings.
The remote sensing image feature fusion method based on the decision correlation analysis shown in fig. 1 comprises the following steps:
A VGG-VD-16 neural network model pre-trained on the ImageNet data set is used as the feature extractor, and its first fully connected layer feature F_i^{fc,1} and second fully connected layer feature F_i^{fc,2} are extracted.
To address the scale problem caused by the unstable shooting height and direction of airborne remote sensing images, Lowe's method is used to generate the SIFT features of the remote sensing image, and the SIFT features of the i-th image are denoted f_i^s. SIFT features are highly stable local features that are invariant to rotation and scaling. The process of generating the SIFT features of an image with Lowe's method is briefly described as follows:
image positions are searched over all scale spaces, and potential interest points invariant to scale and rotation are identified with a Gaussian difference function, since the difference-of-Gaussians scale space contains a large number of extrema; unstable extreme points are then deleted, and a consistent direction is assigned to each key point according to the local characteristics of the image, achieving invariance to image rotation.
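The scale-space search described above can be illustrated with a minimal difference-of-Gaussians (DoG) pyramid. The parameters below (σ₀ = 1.6, k = √2, 4 levels) are common illustrative choices, not values fixed by this patent, and keypoint refinement and orientation assignment are omitted:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Minimal difference-of-Gaussians sketch of the scale-space extrema search
# performed by Lowe's SIFT detector.
def dog_pyramid(image, sigma0=1.6, k=2 ** 0.5, levels=4):
    blurred = [gaussian_filter(image.astype(float), sigma0 * k ** i)
               for i in range(levels)]
    # Adjacent Gaussian levels are subtracted; extrema of these DoG layers
    # (across space and scale) are the candidate interest points.
    return [b1 - b0 for b0, b1 in zip(blurred, blurred[1:])]

img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0            # a bright square produces clear DoG extrema
dogs = dog_pyramid(img)
print(len(dogs))  # 3
```

Four Gaussian levels yield three DoG layers; a full SIFT implementation repeats this per octave and then localizes and filters the extrema.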
The SIFT features are transformed with the IFK method. Let the SIFT features form the set X, X = {x_t, t = 1, 2, ..., T}, and let λ = {w_i, μ_i, σ_i, i = 1, 2, ..., M} be the Gaussian mixture model parameters, where w_i, μ_i and σ_i are the mixing weight, mean and variance of the i-th Gaussian mixture component, respectively, and M is the number of mixture components. The SIFT features encoded by the IFK method can be represented by the following Fisher vectors:

H_IFK(f_i^s) = [G_μ^X, G_σ^X]  (1)

G_μ^X(i) = (1/(T√w_i)) Σ_{t=1}^{T} γ_t(i)(x_t − μ_i)/σ_i  (2)

G_σ^X(i) = (1/(T√(2w_i))) Σ_{t=1}^{T} γ_t(i)[(x_t − μ_i)²/σ_i² − 1]  (3)

wherein G_μ^X(i) is the gradient representation with respect to μ_i, G_σ^X(i) is the gradient representation with respect to σ_i, T is the number of SIFT features and is determined by the image, and γ_t(i) is the probability that the t-th SIFT feature was generated by the i-th Gaussian mixture component.
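A compact numpy sketch of this Fisher-vector encoding under a diagonal GMM follows; the toy GMM parameters, descriptor dimension and descriptor count are illustrative stand-ins, not values fitted as in the patent:

```python
import numpy as np

# Fisher-vector (IFK) encoding of a descriptor set X under a diagonal GMM,
# following the standard formulation sketched in equations (1)-(3).
def fisher_vector(X, w, mu, sigma):
    T, D = X.shape                         # T descriptors of dimension D
    diff = X[:, None, :] - mu[None, :, :]  # (T, M, D)
    # posterior gamma_t(i): probability descriptor t came from component i
    log_p = (-0.5 * np.sum((diff / sigma) ** 2 + np.log(2 * np.pi * sigma ** 2),
                           axis=2) + np.log(w))
    gamma = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    gamma /= gamma.sum(axis=1, keepdims=True)          # (T, M)
    # gradients with respect to the means (eq. 2) and deviations (eq. 3)
    g_mu = (gamma[:, :, None] * diff / sigma).sum(0) / (T * np.sqrt(w)[:, None])
    g_sig = ((gamma[:, :, None] * ((diff / sigma) ** 2 - 1)).sum(0)
             / (T * np.sqrt(2 * w)[:, None]))
    return np.concatenate([g_mu.ravel(), g_sig.ravel()])  # length 2*M*D

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 8))           # 50 fake SIFT-like descriptors
w = np.full(4, 0.25)                       # 4 equally weighted components
mu = rng.standard_normal((4, 8))
sigma = np.ones((4, 8))
fv = fisher_vector(X, w, mu, sigma)
print(fv.shape)  # (64,)
```

The encoded dimension is 2·M·D (here 2 × 4 × 8 = 64), independent of the number of descriptors T, which is what makes the encoding usable as a fixed-length image feature.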
The first fully connected layer feature is fused with the IFK-encoded SIFT features to obtain the intermediate feature H_m, expressed as follows:

H_m = kF_i^{fc,1} + αH_IFK(f_i^s)  (4)

where k and α are the weight coefficients of the first fully connected layer feature and of the SIFT features, respectively, in the fusion.
The correlation between different deep learning features is strong, while the correlation between deep learning features and classical features is weak. To increase the within-class correlation across the different feature sets and reduce the between-class correlation, discriminant correlation analysis (DCA, Discriminant Correlation Analysis) is applied to the intermediate feature H_m and the second fully connected layer feature F_i^{fc,2}. The two transformed features are expressed as:

DCA(H_m) = (H_m)* = W_1 H_m  (5)

DCA(F_i^{fc,2}) = (F_i^{fc,2})* = W_2 F_i^{fc,2}  (6)

The transformed feature dimension is c − 1, where c is the number of categories in the remote sensing image set. W_1 and W_2 are the transformation matrices of the two features, expressed as:

W_1 = Σ^{-1/2} U^T Λ_a^{-1/2} A^T,  W_2 = Σ^{-1/2} V^T Λ_b^{-1/2} B^T  (7)

wherein A is the matrix of leading eigenvectors of the covariance matrix of the intermediate feature set H_m, and B is the matrix of leading eigenvectors of the covariance matrix of the second fully connected layer feature set F_i^{fc,2}; Λ_a is the diagonal matrix of eigenvalues of the covariance matrix of H_m, and Λ_b is the diagonal matrix of eigenvalues of the covariance matrix of F_i^{fc,2}; U and V are unitary matrices and Σ is a diagonal matrix, obtained by singular value decomposition of the covariance matrix between the intermediate feature set H_m and the second fully connected layer feature set F_i^{fc,2}.
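A simplified numpy sketch of such a DCA transform follows, based on the published discriminant correlation analysis algorithm (Haghighat et al., 2016) rather than claiming to reproduce the patent's exact W_1 and W_2: each set's between-class scatter is whitened, then the SVD of the cross-covariance of the reduced sets couples the two sets, so the output dimension is at most c − 1:

```python
import numpy as np

# Simplified DCA: features are columns of X and Y, one shared label per column.
def dca(X, Y, labels, eps=1e-8):
    classes = np.unique(labels)

    def between_class_whitener(Z):
        means = np.stack([Z[:, labels == k].mean(1) for k in classes], axis=1)
        centered = means - Z.mean(1, keepdims=True)
        Sb = centered @ centered.T              # between-class scatter
        vals, vecs = np.linalg.eigh(Sb)
        keep = vals > eps                       # at most c-1 nonzero directions
        return (vecs[:, keep] / np.sqrt(vals[keep])).T   # Lambda^{-1/2} A^T

    Wx, Wy = between_class_whitener(X), between_class_whitener(Y)
    Xp, Yp = Wx @ X, Wy @ Y                     # dimension reduced to <= c-1
    U, s, Vt = np.linalg.svd(Xp @ Yp.T)         # SVD of the cross-covariance
    W1 = (U / np.sqrt(s)).T @ Wx                # overall transform for set 1
    W2 = (Vt.T / np.sqrt(s)).T @ Wy             # overall transform for set 2
    return W1 @ X, W2 @ Y

rng = np.random.default_rng(2)
labels = np.repeat(np.arange(3), 10)            # 3 classes, 10 samples each
X = rng.standard_normal((20, 30)) + labels      # two feature sets with shared
Y = rng.standard_normal((25, 30)) + labels      # class structure
Xs, Ys = dca(X, Y, labels)
print(Xs.shape[0])  # 2 (= c - 1), matching the dimension noted above
```

With three classes the between-class scatter has rank two, so both transformed sets come out with c − 1 = 2 dimensions regardless of the input dimensions (20 and 25 here).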
The algorithm first uses the SIFT features as a supplement to the first fully connected layer feature extracted by VGG-VD-16; the resulting intermediate feature and the second fully connected layer feature extracted by VGG-VD-16 then undergo the DCA transformation, and the two transformed features are fused. The final fused feature is formulated as follows:

F_fusion = (kF_i^{fc,1} + αH_IFK(f_i^s))* + β(F_i^{fc,2})*  (8)

wherein F_i^{fc,1} is the first fully connected layer feature, H_IFK(f_i^s) is the IFK-encoded SIFT feature, F_i^{fc,2} is the second fully connected layer feature, k, α and β are the weight coefficients of the respective features in the fusion, and * denotes a feature after the DCA transformation.
The experiments were run on the Windows 10 operating system with MATLAB R2016a as the software platform, on a computer with an Intel i7-8700 CPU @ 3.20 GHz and 32 GB of memory. On the public data sets RS-19 and UCM-21, image classification based on the proposed feature fusion method was compared with other currently well-performing image classification methods, with a training proportion of 60% for RS-19 and 80% for UCM-21. The classification results were evaluated by the overall classification accuracy (OA) and the confusion matrix, as shown in Tables 1 and 2.
TABLE 1 Overall classification accuracy (OA) comparison on the UCM data set

Method            Bidirectional adaptive  GBRCN   Scenario(I)  Proposed
Overall accuracy  95.48                   94.53   96.88        96.19
TABLE 2 Overall classification accuracy (OA) comparison on the RS-19 data set

Method            Bidirectional adaptive  GBRCN   Scenario(I)  MSDS   Proposed
Overall accuracy  95.48                   94.35   96.88        97.61  97.17
Fig. 2 shows the confusion matrices of image classification based on the proposed feature fusion method on the UCM-21 data set with 80% of the images selected for training, where (a) in Fig. 2 is the case without SIFT features and (b) is the case with SIFT features added. Fig. 3 shows the corresponding confusion matrices for the RS-19 data set with 60% selected for training; Figs. 3 (a) and (b) are again the cases without and with SIFT features, respectively. As can be seen from Figs. 2 and 3, adding SIFT features greatly improves the scene classification accuracy.

Claims (5)

1. A remote sensing image feature fusion method based on decision correlation analysis, characterized in that it comprises the following steps:
step 1: generating SIFT features of the remote sensing image by using Lowe's algorithm;
step 2: the SIFT features are encoded by an IFK method, and the encoded SIFT features are obtained;
step 3: extracting the remote sensing image with a VGG-VD-16 neural network model pre-trained on the ImageNet data set as the feature extractor to obtain the first fully connected layer feature F_i^{fc,1} and the second fully connected layer feature F_i^{fc,2};
step 4: fusing the encoded SIFT features with the first fully connected layer feature F_i^{fc,1} to obtain the intermediate feature H_m; applying the DCA transformation to the intermediate feature H_m and the second fully connected layer feature F_i^{fc,2}; and fusing the transformed intermediate feature with the transformed second fully connected layer feature F_i^{fc,2} to obtain the fusion result;
step 5: inputting the fusion result into a linear classifier to obtain the remote sensing image classification result;
the step 4 specifically comprises:

fusing the first fully connected layer feature F_i^{fc,1} with the IFK-encoded SIFT features to obtain the intermediate feature H_m, expressed as follows:

H_m = kF_i^{fc,1} + αH_IFK(f_i^s)  (4)

wherein k and α are the weight coefficients of the first fully connected layer feature and of the SIFT features, respectively, in the fusion;
applying the discriminant correlation analysis (DCA) transformation to the intermediate feature H_m and the second fully connected layer feature F_i^{fc,2}, the two transformed features being expressed as:

DCA(H_m) = (H_m)* = W_1 H_m  (5)

DCA(F_i^{fc,2}) = (F_i^{fc,2})* = W_2 F_i^{fc,2}  (6)

wherein W_1 and W_2 are the transformation matrices of the two features, expressed as:

W_1 = Σ^{-1/2} U^T Λ_a^{-1/2} A^T,  W_2 = Σ^{-1/2} V^T Λ_b^{-1/2} B^T  (7)

wherein A is the matrix of leading eigenvectors of the covariance matrix of the intermediate feature set H_m, and B is the matrix of leading eigenvectors of the covariance matrix of the second fully connected layer feature set F_i^{fc,2}; Λ_a is the diagonal matrix of eigenvalues of the covariance matrix of H_m, and Λ_b is the diagonal matrix of eigenvalues of the covariance matrix of F_i^{fc,2}; U and V are unitary matrices and Σ is a diagonal matrix, obtained by singular value decomposition of the covariance matrix between the intermediate feature set H_m and the second fully connected layer feature set F_i^{fc,2};
fusing the transformed intermediate feature H_m with the transformed second fully connected layer feature F_i^{fc,2} to obtain the fusion result, expressed as:

F_fusion = (kF_i^{fc,1} + αH_IFK(f_i^s))* + β(F_i^{fc,2})*  (8)

wherein F_i^{fc,1} is the first fully connected layer feature, H_IFK(f_i^s) is the IFK-encoded SIFT feature, F_i^{fc,2} is the second fully connected layer feature, k, α and β are the weight coefficients of the respective features in the fusion, and * denotes a feature after the DCA transformation.
2. The remote sensing image feature fusion method based on decision correlation analysis according to claim 1, wherein the encoded SIFT features are expressed as:

H_IFK(f_i^s) = [G_μ^X, G_σ^X]  (1)

wherein:

G_μ^X(i) = (1/(T√w_i)) Σ_{t=1}^{T} γ_t(i)(x_t − μ_i)/σ_i  (2)

G_σ^X(i) = (1/(T√(2w_i))) Σ_{t=1}^{T} γ_t(i)[(x_t − μ_i)²/σ_i² − 1]  (3)

In the above, G_μ^X(i) is the gradient representation with respect to μ_i and G_σ^X(i) is the gradient representation with respect to σ_i; X is the SIFT feature set of the remote sensing image, expressed as X = {x_t, t = 1, 2, ..., T}; w_i, μ_i and σ_i are the mixing weight, mean and variance of the i-th Gaussian mixture component, respectively; T is the number of SIFT features; γ_t(i) is the probability that the t-th SIFT feature x_t was generated by the i-th Gaussian mixture component.
3. A remote sensing image feature fusion system based on decision correlation analysis, characterized in that it comprises:
the SIFT feature generation module is used for generating SIFT features of the remote sensing image by adopting Lowe's algorithm;
the coding module is used for coding the SIFT features by using an IFK method to obtain coded SIFT features;
the remote sensing image feature extraction module, used for extracting the remote sensing image with a VGG-VD-16 neural network model pre-trained on the ImageNet data set as the feature extractor, obtaining the first fully connected layer feature F_i^{fc,1} and the second fully connected layer feature F_i^{fc,2};
the fusion module, used for fusing the encoded SIFT features with the first fully connected layer feature F_i^{fc,1} to obtain an intermediate feature, applying the DCA transformation to the intermediate feature and the second fully connected layer feature F_i^{fc,2}, and fusing the transformed intermediate feature with the transformed second fully connected layer feature F_i^{fc,2} to obtain a fusion result;
and the classifier, used for inputting the fusion result into a linear classifier to obtain the remote sensing image classification result;
the intermediate feature H_m is expressed as follows:

H_m = kF_i^{fc,1} + αH_IFK(f_i^s)  (4)

wherein k and α are the weight coefficients of the first fully connected layer feature and of the SIFT features, respectively, in the fusion;
the transformed intermediate feature H_m and the transformed second fully connected layer feature F_i^{fc,2} are expressed as:

DCA(H_m) = (H_m)* = W_1 H_m  (5)

DCA(F_i^{fc,2}) = (F_i^{fc,2})* = W_2 F_i^{fc,2}  (6)

wherein W_1 and W_2 are the transformation matrices of the two features, expressed as:

W_1 = Σ^{-1/2} U^T Λ_a^{-1/2} A^T,  W_2 = Σ^{-1/2} V^T Λ_b^{-1/2} B^T  (7)

wherein A is the matrix of leading eigenvectors of the covariance matrix of the intermediate feature set H_m, and B is the matrix of leading eigenvectors of the covariance matrix of the second fully connected layer feature set F_i^{fc,2}; Λ_a is the diagonal matrix of eigenvalues of the covariance matrix of H_m, and Λ_b is the diagonal matrix of eigenvalues of the covariance matrix of F_i^{fc,2}; U and V are unitary matrices and Σ is a diagonal matrix, obtained by singular value decomposition of the covariance matrix between the intermediate feature set H_m and the second fully connected layer feature set F_i^{fc,2};
the fusion result is expressed as:

F_fusion = (kF_i^{fc,1} + αH_IFK(f_i^s))* + β(F_i^{fc,2})*  (8)

wherein F_i^{fc,1} is the first fully connected layer feature, H_IFK(f_i^s) is the IFK-encoded SIFT feature, F_i^{fc,2} is the second fully connected layer feature, k, α and β are the weight coefficients of the respective features in the fusion, and * denotes a feature after the DCA transformation.
4. The remote sensing image feature fusion system based on decision correlation analysis as recited in claim 3, wherein the classifier is a linear classifier in libsvm.
5. The remote sensing image feature fusion system based on decision correlation analysis as recited in claim 3, wherein the encoded SIFT features are expressed as:

H_IFK(f_i^s) = [G_μ^X, G_σ^X]  (1)

wherein:

G_μ^X(i) = (1/(T√w_i)) Σ_{t=1}^{T} γ_t(i)(x_t − μ_i)/σ_i  (2)

G_σ^X(i) = (1/(T√(2w_i))) Σ_{t=1}^{T} γ_t(i)[(x_t − μ_i)²/σ_i² − 1]  (3)

In the above, G_μ^X(i) is the gradient representation with respect to μ_i and G_σ^X(i) is the gradient representation with respect to σ_i; X is the SIFT feature set of the remote sensing image, expressed as X = {x_t, t = 1, 2, ..., T}; w_i, μ_i and σ_i are the mixing weight, mean and variance of the i-th Gaussian mixture component, respectively; T is the number of SIFT features; γ_t(i) is the probability that the t-th SIFT feature x_t was generated by the i-th Gaussian mixture component.
CN202110509659.XA 2021-05-11 2021-05-11 Remote sensing image feature fusion method and system based on decision correlation analysis Active CN113344030B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110509659.XA CN113344030B (en) 2021-05-11 2021-05-11 Remote sensing image feature fusion method and system based on decision correlation analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110509659.XA CN113344030B (en) 2021-05-11 2021-05-11 Remote sensing image feature fusion method and system based on decision correlation analysis

Publications (2)

Publication Number Publication Date
CN113344030A CN113344030A (en) 2021-09-03
CN113344030B true CN113344030B (en) 2023-11-03

Family

ID=77470517

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110509659.XA Active CN113344030B (en) 2021-05-11 2021-05-11 Remote sensing image feature fusion method and system based on decision correlation analysis

Country Status (1)

Country Link
CN (1) CN113344030B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2423871A1 (en) * 2010-08-25 2012-02-29 Lakeside Labs GmbH Apparatus and method for generating an overview image of a plurality of images using an accuracy information
CN107291855A (en) * 2017-06-09 2017-10-24 The 54th Research Institute of China Electronics Technology Group Corporation An image retrieval method and system based on salient objects
CN108830296A (en) * 2018-05-18 2018-11-16 Hohai University An improved high-resolution remote sensing image classification method based on deep learning
CN108932455A (en) * 2017-05-23 2018-12-04 Shanghai Jinghong Electronic Science and Technology Co., Ltd. Remote sensing image scene recognition method and device
WO2019042232A1 (en) * 2017-08-31 2019-03-07 Southwest Jiaotong University Fast and robust multimodal remote sensing image matching method and system
CN109544610A (en) * 2018-10-15 2019-03-29 Tianjin University An image registration method based on convolutional neural networks
CN110555446A (en) * 2019-08-19 2019-12-10 Beijing University of Technology Remote sensing image scene classification method based on multi-scale depth feature fusion and transfer learning
CN111767800A (en) * 2020-06-02 2020-10-13 South China Normal University Remote sensing image scene classification score fusion method, system, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Transferring deep convolutional neural networks for the scene classification of high-resolution remote sensing imagery; Fan Hu et al.; Remote Sensing; Vol. 7, No. 11; 14680-14707 *
Scene classification of high-resolution remote sensing images based on mid- and high-level feature fusion; Zhao Chunhui; Ma Bobo; Journal of Shenyang University (Natural Science Edition); Vol. 32, No. 3; 224-232 *

Also Published As

Publication number Publication date
CN113344030A (en) 2021-09-03

Similar Documents

Publication Publication Date Title
CN107122809B (en) Neural network feature learning method based on image self-coding
Yue-Hei Ng et al. Exploiting local features from deep networks for image retrieval
CN111460077B (en) Cross-modal Hash retrieval method based on class semantic guidance
CN109086405B (en) Remote sensing image retrieval method and system based on significance and convolutional neural network
Lian et al. Max-margin dictionary learning for multiclass image categorization
CN112307995B (en) Semi-supervised pedestrian re-identification method based on feature decoupling learning
CN110222218B (en) Image retrieval method based on multi-scale NetVLAD and depth hash
Shah et al. Max-margin contrastive learning
Adler et al. Probabilistic subspace clustering via sparse representations
CN105184298A (en) Image classification method through fast and locality-constrained low-rank coding process
CN112434628B (en) Small sample image classification method based on active learning and collaborative representation
Carneiro et al. A database centric view of semantic image annotation and retrieval
CN111444367A (en) Image title generation method based on global and local attention mechanism
CN107145841B (en) Low-rank sparse face recognition method and system based on matrix
CN113052017B (en) Unsupervised pedestrian re-identification method based on multi-granularity feature representation and domain self-adaptive learning
CN114329031B (en) Fine-granularity bird image retrieval method based on graph neural network and deep hash
Sun et al. Self-adaptive feature learning based on a priori knowledge for facial expression recognition
CN114780767B (en) Large-scale image retrieval method and system based on deep convolutional neural network
Chen et al. Semi-supervised dictionary learning with label propagation for image classification
CN113705709A (en) Improved semi-supervised image classification method, equipment and storage medium
Min et al. Laplacian regularized locality-constrained coding for image classification
Li et al. Image decomposition with multilabel context: Algorithms and applications
Liu et al. Action recognition based on features fusion and 3D convolutional neural networks
CN108388918B (en) Data feature selection method with structure retention characteristics
Yao A compressed deep convolutional neural networks for face recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant