CN116758353A - Remote sensing image target classification method based on domain specific information filtering

Remote sensing image target classification method based on domain specific information filtering

Info

Publication number
CN116758353A
Authority
CN
China
Prior art keywords
domain
specific information
image
information
domain specific
Prior art date
Legal status
Granted
Application number
CN202310731995.8A
Other languages
Chinese (zh)
Other versions
CN116758353B (en)
Inventor
赵文达
杨瑞凯
王海鹏
刘颢
杨向广
夏学知
何友
卢湖川
Current Assignee
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN202310731995.8A priority Critical patent/CN116758353B/en
Publication of CN116758353A publication Critical patent/CN116758353A/en
Application granted granted Critical
Publication of CN116758353B publication Critical patent/CN116758353B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/094 Adversarial learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The application belongs to the technical field of image information processing and provides a remote sensing image target classification method based on domain specific information filtering. Remote sensing image style information is filtered from two angles: separating domain-specific information from domain-invariant information through instance normalization, and removing domain-specific information through image reconstruction. The adversarial domain-specific information robustness module uses instance normalization to drive the data of each input instance toward a standard normal distribution, reducing the influence of fine perturbations. Significant changes in the domain-specific information of remote sensing images are simulated through data augmentation, and images are reconstructed by the domain-invariant information filtering generator from features with the domain-specific information filtered out, reducing the influence of significant changes in domain-specific information. The method effectively solves the problem that existing remote sensing target recognition methods perform poorly on unknown domains, so that the remote sensing image target model generalizes to datasets from different domains.

Description

Remote sensing image target classification method based on domain specific information filtering
Technical Field
The application relates to the technical field of image information processing, in particular to a remote sensing image target classification method based on domain specific information filtering.
Background
Remote sensing image target classification refers to the process of classifying and identifying targets in an image using remote sensing technology. Research on remote sensing image target classification is of great significance: on the one hand, remote sensing image target classification covers many application fields such as land planning, disaster detection, and military security; on the other hand, the variety and volume of remote sensing images are growing rapidly, and the inability of existing remote sensing target classification techniques to handle large-scale, multi-category, wide-area data is becoming increasingly apparent. Currently, the methods related to the present application fall into two areas: the first is remote sensing image target classification algorithms; the second is domain generalization methods.
Remote sensing image target classification algorithms mainly comprise traditional hand-crafted feature extraction methods and deep learning-based methods. Traditional remote sensing image target classification is mainly realized with hand-crafted feature extraction and statistical models: Chen Yunhao et al., in a study of object-oriented and rule-based remote sensing image classification, proposed building a multi-level feature system and computing features for classification; Luo Jiancheng et al., in a study of support vector machines applied to remote sensing image spatial feature extraction and classification, proposed SVM-based feature computation and classification. Among deep learning-based image target classification methods, Krizhevsky et al. proposed the AlexNet network based on convolution and fully connected layers in Imagenet classification with deep convolutional neural networks, and He et al. proposed the residual-connection-based ResNet network structure in Deep residual learning for image recognition. However, these network structures do not consider the dependence of convolutional neural networks on image surface information such as color, contrast, and texture, which changes as the image domain changes, resulting in poor generalization performance of the deep learning model.
Domain generalization addresses the domain difference problem: remote sensing images from different sources have obvious differences in domain-specific information (color, resolution, noise, contrast, and the like), called domain differences, whose existence causes an unexpected drop in performance when a deep learning model trained to good metrics on a known dataset is actually deployed to process new data. Blanchard et al., in Generalizing from several related classification tasks to a new unlabeled sample, proposed domain generalization for deep learning. Li et al., in Deep domain generalization via conditional invariant adversarial networks, proposed a conditionally invariant deep domain generalization method. Qiao et al., in Learning to learn single domain generalization, proposed generating augmented data with adversarial training for domain generalization training. These methods do not consider how remote sensing images change with imaging equipment, imaging conditions, and pre- and post-imaging processing, and provide no corresponding measures to improve the ability of deep learning models to handle domain changes in remote sensing images.
Deep learning network models easily overfit to domain-specific information and perform poorly on datasets from other, unknown domains. Changes in the domain-specific information of remote sensing images can be divided into different levels. Small changes include minor variations in brightness, contrast, and color caused by lighting conditions, atmospheric conditions, and the like; although slight, these disturbances can adversely affect remote sensing image classification, segmentation, and similar tasks. Large changes generally refer to significant differences between remote sensing images, such as differences in shooting angle and resolution. In this application, remote sensing image domain-specific information is filtered from two angles: separating domain-specific information from domain-invariant information through instance normalization, and removing domain-specific information through image reconstruction.
Disclosure of Invention
In order to reduce the influence of domain-specific information, the method filters remote sensing image style information from two angles: separating domain-specific information from domain-invariant information through instance normalization, and removing domain-specific information through image reconstruction. The adversarial domain-specific information robustness module uses instance normalization to drive the data of each input instance toward a standard normal distribution, reducing the influence of fine perturbations. Significant changes in the domain-specific information of remote sensing images are simulated through data augmentation, and images are reconstructed by the domain-invariant information filtering generator from features with the domain-specific information filtered out, reducing the influence of significant changes in domain-specific information.
The technical scheme of the application is as follows: a remote sensing image target classification method based on domain specific information filtering comprises the following steps: image copying and image augmentation are performed in sequence; the copied image is input to an adversarial domain-specific information robustness module; the augmented image is input to a domain-invariant information filtering generator; the adversarial domain-specific information robustness module and the domain-invariant information filtering generator each compute their losses; if the losses converge, the process ends, otherwise image copying is performed again; the adversarial domain-specific information robustness module and the domain-invariant information filtering generator use a feature extractor with shared weights;
the adversarial domain-specific information robustness module comprises an instance normalization module, a convolution layer, and a classifier; for the feature h of the input image x, the adversarial domain-specific information robustness module separates domain-specific information from domain-invariant information through the instance normalization module IN, obtaining the corresponding domain-specific information h_s and domain-invariant information h_c; the dimension of the feature h is ℝ^(C×H×W), where C represents the channel dimension of the feature and H and W represent its height and width dimensions, respectively; the domain-specific information h_s of the feature h is obtained from the per-channel mean μ(h) and standard deviation σ(h), and the domain-invariant information h_c is the feature h normalized by subtracting the mean and dividing by the standard deviation; the specific process is as follows:
μ(h) = (1/(H·W)) Σ_{i=1..H} Σ_{j=1..W} h_{i,j}   (1)
σ(h) = sqrt( (1/(H·W)) Σ_{i=1..H} Σ_{j=1..W} (h_{i,j} − μ(h))² + ε )   (2)
h_c = (h − μ(h)) / σ(h)   (3)
h_s = concat[μ(h), σ(h)]   (4)
where i and j index the position (i, j) of the feature h, the statistics are computed independently for each channel, and ε is a bias term;
the domain-specific information classifier CL obtains a classification result for domain-specific informationFeature fusion is carried out on the domain-invariant information through a convolution layer, and then the domain-invariant information is sent into a classifier shared with the domain-specific information classifier weight to obtain corresponding classification output
For the domain-specific information classification result ŷ_s, adversarial training is used so that the domain-specific information in the features obtained by the feature extractor does not influence the classifier output; the classifier therefore applies a maximum-entropy constraint to the output for the domain-specific information, as follows:
L_adv = −(1/N) Σ_{k=1..N} Σ_{m=1..M} (1/M) · log ŷ_s,k(m)   (5)
where N represents the number of images, M the number of categories, ŷ_s,k the domain-specific information classifier output corresponding to index k, and ŷ_s,k(m) its m-th component; L_adv is the cross entropy between the domain-specific information classification results and the discrete uniform distribution F(k; 0, 1);
for domain-invariant information outputClassification of losses L by cross entropy CE The constraint obtains the correct classification result, as shown below,
wherein y is k Representing the true value of the image with input index k,representing the classifier's predicted value for the image with input index k.
The domain-invariant information filtering generator comprises an image reconstruction decoder. The domain-invariant information filtering generator performs image reconstruction for the feature h' of the augmented image x' and constrains the reconstructed image using the input image x. The feature extractor of the domain-invariant information filtering generator shares weights with the feature extractor of the adversarial domain-specific information robustness module. The features obtained by the feature extractor are used to reconstruct an intermediate reconstructed image x_recon, and an image reconstruction loss L_recon is constructed from the mean square error between the intermediate reconstructed image x_recon and the input image x, as follows:
L_recon = (1/(H·W)) Σ_{i=1..H} Σ_{j=1..W} (x_{i,j} − x_recon_{i,j})²   (7)
where i and j index the pixel position (i, j) of the input image x and the intermediate reconstructed image x_recon, and H and W represent the pixel height and width of the input image.
The beneficial effects of the application are: the adversarial domain-specific information robustness module decouples domain-specific information from domain-invariant information and trains on each in a targeted manner, reducing the model's dependence on domain-specific information; the domain-invariant information filtering generator improves the feature extractor's robustness to changes in domain-specific information and yields features richer in domain-invariant information; the feature extraction capability of the remote sensing image target model is thereby enhanced. Existing remote sensing target recognition methods perform well only on datasets from a single domain and perform poorly on unknown domains. The method effectively solves this problem, and the remote sensing image target model generalizes to datasets from different domains.
Drawings
FIG. 1 is the overall training flow diagram of the adversarial domain-specific information robustness module and the domain-invariant information filtering generator; FE is the feature extractor; CL is the classifier; IN is the instance normalization module; dashed arrows represent shared weights;
FIG. 2 is a flow chart of the remote sensing image target classification method based on domain specific information filtering.
Detailed Description
The embodiments of the present application are further described below with reference to the drawings and the technical scheme.
Adverse effects on the convolutional neural network model caused by different levels of domain-specific information change in remote sensing images should be addressed in a targeted manner. The remote sensing image target classification method based on domain specific information filtering provided by the application is implemented with two main modules. Against the influence of fine perturbations of image domain-specific information, an adversarial domain-specific information robustness module is provided, with the first four modules of ResNet50 used as the feature extractor. For the feature h of the input image x, the adversarial domain-specific information robustness module uses the instance normalization module IN to separate domain-specific information from domain-invariant information, obtaining the corresponding domain-specific information h_s and domain-invariant information h_c. The dimension of the feature is ℝ^(C×H×W), where C represents the channel dimension and H and W represent the height and width dimensions, respectively. For the feature h, the domain-specific information h_s is represented by the per-channel mean μ(h) and standard deviation σ(h), and the domain-invariant information h_c is represented by the normalized feature obtained by subtracting the mean and dividing by the standard deviation. The specific process is as follows:
μ(h) = (1/(H·W)) Σ_{i=1..H} Σ_{j=1..W} h_{i,j}   (1)
σ(h) = sqrt( (1/(H·W)) Σ_{i=1..H} Σ_{j=1..W} (h_{i,j} − μ(h))² + ε )   (2)
h_c = (h − μ(h)) / σ(h)   (3)
h_s = concat[μ(h), σ(h)]   (4)
where i and j index the position (i, j) of the feature h, the statistics are computed independently for each channel, and ε is a bias preventing the standard deviation σ(h) from reaching 0.
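For illustration only, a minimal PyTorch sketch of this instance-normalization-based separation follows; the module name DomainInfoSeparator, the use of PyTorch, and the tensor shapes are assumptions of this sketch and are not specified by the patent.

```python
import torch
import torch.nn as nn


class DomainInfoSeparator(nn.Module):
    """Split a feature map h of shape (N, C, H, W) into domain-specific
    statistics h_s and instance-normalized domain-invariant content h_c,
    following equations (1)-(4)."""

    def __init__(self, eps: float = 1e-6):
        super().__init__()
        self.eps = eps  # bias preventing the standard deviation from reaching 0

    def forward(self, h: torch.Tensor):
        # Per-instance, per-channel mean and standard deviation over H x W.
        mu = h.mean(dim=(2, 3), keepdim=True)                                   # (N, C, 1, 1)
        sigma = torch.sqrt(h.var(dim=(2, 3), unbiased=False, keepdim=True) + self.eps)
        # Domain-invariant information: normalized feature, equation (3).
        h_c = (h - mu) / sigma
        # Domain-specific information: concatenated channel statistics, equation (4).
        h_s = torch.cat([mu.flatten(1), sigma.flatten(1)], dim=1)               # (N, 2C)
        return h_s, h_c
```

For a 1024×14×14 feature from the fourth convolution module of ResNet50, h_s is a 2048-dimensional vector per image, matching the 2048×1 vector described in the embodiment below.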
The domain-specific information classifier CL obtains a classification result ŷ_s for the domain-specific information; the domain-invariant information undergoes further feature fusion through a convolution layer and is then fed into a classifier that shares weights with the domain-specific information classifier, yielding the corresponding classification output ŷ_c.
For the domain-specific information classification output ŷ_s, the idea of adversarial training is used so that the domain-specific information in the features obtained by the feature extractor cannot influence the classifier output; the classifier therefore applies a maximum-entropy constraint to the output for the domain-specific information, as follows:
L_adv = −(1/N) Σ_{k=1..N} Σ_{m=1..M} (1/M) · log ŷ_s,k(m)   (5)
where N represents the number of images in the batch, M the number of categories, ŷ_s,k the domain-specific information classifier output corresponding to index k, and ŷ_s,k(m) its m-th component. L_adv can be understood as the cross entropy between the domain-specific information classification results and the discrete uniform distribution F(k; 0, 1); it prevents the classifier from classifying correctly through domain-specific information, so that the predicted values are uniformly distributed over the categories. The method therefore does not depend on domain-specific information but makes classification judgments through domain-invariant information.
For the domain-invariant information output ŷ_c, the method encourages classification from domain-invariant information, so the cross-entropy classification loss L_CE is used directly to constrain the network to obtain the correct classification results, as follows:
L_CE = −(1/N) Σ_{k=1..N} y_k · log ŷ_c,k   (6)
where y_k represents the true label of the image with input index k and ŷ_c,k represents the classifier's predicted value for the image with input index k.
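A hedged sketch of the two losses follows, assuming the domain-specific head outputs logits over M categories; the function names and signatures are illustrative assumptions, not the patent's code.

```python
import torch
import torch.nn.functional as F


def adversarial_max_entropy_loss(logits_s: torch.Tensor) -> torch.Tensor:
    """L_adv, equation (5): cross entropy between the domain-specific classifier
    output and a discrete uniform distribution over the categories, pushing the
    prediction made from domain-specific information toward maximum entropy."""
    log_probs = F.log_softmax(logits_s, dim=1)                     # (N, M)
    uniform = torch.full_like(log_probs, 1.0 / log_probs.size(1))  # 1/M per category
    return -(uniform * log_probs).sum(dim=1).mean()


def domain_invariant_cls_loss(logits_c: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """L_CE, equation (6): standard cross entropy between the domain-invariant
    classification output and the true labels y_k."""
    return F.cross_entropy(logits_c, targets)
```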
The domain-invariant information filtering generator performs image reconstruction for the feature h' of the augmented image x', and uses the input image x rather than the augmented image x' as the reconstruction constraint. The feature extractor of the domain-invariant information filtering generator shares weights with the feature extractor of the adversarial domain-specific information robustness module, thereby constraining the common feature extractor against significant changes in domain-specific information. As in the adversarial domain-specific information robustness module, the mean μ(h') and standard deviation σ(h') represent the domain-specific information of the augmented feature; this domain-specific information is removed from the feature to obtain its domain-invariant information, which is then used to reconstruct an image, giving the intermediate reconstructed image x_recon. The reconstruction loss L_recon is constructed from the mean square error between the intermediate reconstructed image and the original image, as follows:
L_recon = (1/(H·W)) Σ_{i=1..H} Σ_{j=1..W} (x_{i,j} − x_recon_{i,j})²   (7)
where i and j index the pixel position (i, j) of the input image and the intermediate reconstructed image, and H and W represent the pixel height and width of the input image.
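A minimal sketch of the reconstruction branch is shown below; the reconstruction decoder itself (bilinear-upsampling layers mirroring the feature extractor, per the embodiment below) is abstracted as a callable, and the helper names are assumptions.

```python
import torch
import torch.nn.functional as F


def remove_domain_specific(h_aug: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Strip the per-channel mean/standard-deviation statistics of the augmented
    feature h' so that only domain-invariant content reaches the decoder."""
    mu = h_aug.mean(dim=(2, 3), keepdim=True)
    sigma = torch.sqrt(h_aug.var(dim=(2, 3), unbiased=False, keepdim=True) + eps)
    return (h_aug - mu) / sigma


def reconstruction_loss(x: torch.Tensor, x_recon: torch.Tensor) -> torch.Tensor:
    """L_recon, equation (7): mean square error between the original input image x
    (not the augmented image x') and the intermediate reconstructed image x_recon."""
    return F.mse_loss(x_recon, x)
```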
The input image size was 224×224 pixels, the batch size was set to 36, parameters were updated with the Adam optimizer, the learning rate was set to 0.000125, and the learning rate decayed exponentially by a factor of 0.99 after each training round. The bias ε was set to 1e-6. The network model framework used by the proposed remote sensing image target classification method based on domain specific information filtering mainly uses ResNet50 as the feature extractor and classifier. The domain-specific and domain-invariant information used in the adversarial domain-specific information robustness module are computed from the output of the fourth convolution module of ResNet50, a feature of size 1024×14×14; mean and standard deviation statistics of size 1024×1 each are obtained and then concatenated along the channel dimension into a 2048×1 vector. The image decoder used in the domain-invariant information filtering generator mirrors each layer output of the feature extractor and upsamples at each stage through bilinear interpolation.
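The training configuration stated in this embodiment can be set up as follows; the use of torchvision's ResNet50 and the scheduler call are assumptions about one possible implementation, not the patent's code.

```python
import torch
import torchvision

# Backbone named in the embodiment; pretrained weights are not specified, so none are loaded here.
model = torchvision.models.resnet50(weights=None)

BATCH_SIZE = 36          # images per batch, each resized to 224 x 224 pixels
LEARNING_RATE = 0.000125
EPSILON = 1e-6           # bias term used in the instance-normalization statistics

optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
# Exponential learning-rate decay of 0.99 applied after each training round.
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)
```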
Tables 1 and 2 report experiments trained on the DIOR dataset and tested on the NWPU VHR-10, DOTA, and HRRSD datasets to verify the effectiveness of the domain-specific information classification adversarial loss L_adv and the reconstruction loss L_recon corresponding to the two modules. As can be seen from Tables 1 and 2, once appropriate hyperparameters are determined, the accuracy of the application on data from different domains reaches a relatively stable level.
TABLE 1 Top-1 accuracy (%) for different weights of the domain-specific information classification adversarial loss
TABLE 2 Top-1 accuracy (%) for different weights of the reconstruction loss
FIG. 1 is the overall training flowchart of the network model used, comprising the adversarial domain-specific information robustness module and the domain-invariant information filtering generator. The adversarial domain-specific information robustness module uses the instance normalization module to drive the data of each input instance toward a standard normal distribution, reducing the influence of fine perturbations. Significant changes in the domain-specific information of remote sensing images are simulated through data augmentation, and images are reconstructed by the domain-invariant information filtering generator from features with the domain-specific information filtered out, reducing the influence of significant changes in domain-specific information. The specific flow of FIG. 2 is: the input image is first copied, the image is then augmented, and losses are computed through the adversarial domain-specific information robustness module and the domain-invariant information filtering generator, respectively; if the losses converge the process ends, otherwise it repeats.
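A hedged end-to-end sketch of one training step in FIG. 2 follows, reusing the helpers sketched above; the augmentation transforms and the loss weights lambda_adv and lambda_recon are illustrative assumptions (Tables 1 and 2 study different loss weights, but the patent does not fix the augmentation transforms).

```python
import torch
import torchvision.transforms as T

# Augmentation simulating significant domain-specific changes; the specific
# transforms below are assumptions, not prescribed by the patent.
augment = T.Compose([
    T.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
    T.GaussianBlur(kernel_size=5),
])


def training_step(x, y, feature_extractor, separator, cls_s, cls_c, decoder,
                  optimizer, lambda_adv=1.0, lambda_recon=1.0):
    """One training iteration: copy and augment the input, run the adversarial
    branch on the copy and the reconstruction branch on the augmented image,
    then update the shared feature extractor with the summed losses."""
    x_copy, x_aug = x.clone(), augment(x)

    # Adversarial domain-specific information robustness branch.
    h = feature_extractor(x_copy)
    h_s, h_c = separator(h)
    loss_adv = adversarial_max_entropy_loss(cls_s(h_s))
    loss_ce = domain_invariant_cls_loss(cls_c(h_c), y)

    # Domain-invariant information filtering generator branch (shared feature extractor).
    h_aug = feature_extractor(x_aug)
    x_recon = decoder(remove_domain_specific(h_aug))
    loss_recon = reconstruction_loss(x, x_recon)

    loss = loss_ce + lambda_adv * loss_adv + lambda_recon * loss_recon
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```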

Claims (2)

1. A remote sensing image target classification method based on domain specific information filtering, characterized by comprising the following steps: image copying and image augmentation are performed in sequence; the copied image is input to an adversarial domain-specific information robustness module; the augmented image is input to a domain-invariant information filtering generator; the adversarial domain-specific information robustness module and the domain-invariant information filtering generator each compute their losses; if the losses converge, the process ends, otherwise image copying is performed again; the adversarial domain-specific information robustness module and the domain-invariant information filtering generator use a feature extractor with shared weights;
the adversarial domain-specific information robustness module comprises an instance normalization module, a convolution layer, and a classifier; for the feature h of the input image x, the adversarial domain-specific information robustness module separates domain-specific information from domain-invariant information through the instance normalization module IN, obtaining the corresponding domain-specific information h_s and domain-invariant information h_c; the dimension of the feature h is ℝ^(C×H×W), where C represents the channel dimension of the feature and H and W represent its height and width dimensions, respectively; the domain-specific information h_s of the feature h is obtained from the per-channel mean μ(h) and standard deviation σ(h), and the domain-invariant information h_c is the feature h normalized by subtracting the mean and dividing by the standard deviation; the specific process is as follows:
μ(h) = (1/(H·W)) Σ_{i=1..H} Σ_{j=1..W} h_{i,j}   (1)
σ(h) = sqrt( (1/(H·W)) Σ_{i=1..H} Σ_{j=1..W} (h_{i,j} − μ(h))² + ε )   (2)
h_c = (h − μ(h)) / σ(h)   (3)
h_s = concat[μ(h), σ(h)]   (4)
where i and j index the position (i, j) of the feature h, the statistics are computed independently for each channel, and ε is a bias term;
the domain-specific information classifier CL obtains a classification result for domain-specific informationAfter feature fusion is carried out on domain invariant information through a convolution layer, the domain invariant information is sent into a classifier which is shared with domain specific information classifier weight to obtain corresponding classification output +.>
for the domain-specific information classification result ŷ_s, adversarial training is used so that the domain-specific information in the features obtained by the feature extractor does not influence the classifier output; the classifier therefore applies a maximum-entropy constraint to the output for the domain-specific information, as follows:
L_adv = −(1/N) Σ_{k=1..N} Σ_{m=1..M} (1/M) · log ŷ_s,k(m)   (5)
where N represents the number of images, M the number of categories, ŷ_s,k the domain-specific information classifier output corresponding to index k, and ŷ_s,k(m) its m-th component; L_adv is the cross entropy between the domain-specific information classification results and the discrete uniform distribution F(k; 0, 1);
for domain-invariant information outputClassification of losses L by cross entropy CE The constraint obtains the correct classification result, as shown below,
wherein y is k Representing the true value of the image with input index k,representing the classifier's predicted value for the image with input index k.
2. The remote sensing image target classification method based on domain specific information filtering according to claim 1, characterized in that the domain-invariant information filtering generator comprises an image reconstruction decoder; the domain-invariant information filtering generator performs image reconstruction for the feature h' of the augmented image x' and constrains the reconstructed image using the input image x; the feature extractor of the domain-invariant information filtering generator shares weights with the feature extractor of the adversarial domain-specific information robustness module; the features obtained by the feature extractor are used to reconstruct an intermediate reconstructed image x_recon, and an image reconstruction loss L_recon is constructed from the mean square error between the intermediate reconstructed image x_recon and the input image x, as follows:
L_recon = (1/(H·W)) Σ_{i=1..H} Σ_{j=1..W} (x_{i,j} − x_recon_{i,j})²   (7)
where i and j index the pixel position (i, j) of the input image x and the intermediate reconstructed image x_recon, and H and W represent the pixel height and width of the input image.
CN202310731995.8A 2023-06-20 2023-06-20 Remote sensing image target classification method based on domain specific information filtering Active CN116758353B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310731995.8A CN116758353B (en) 2023-06-20 2023-06-20 Remote sensing image target classification method based on domain specific information filtering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310731995.8A CN116758353B (en) 2023-06-20 2023-06-20 Remote sensing image target classification method based on domain specific information filtering

Publications (2)

Publication Number Publication Date
CN116758353A true CN116758353A (en) 2023-09-15
CN116758353B CN116758353B (en) 2024-01-23

Family

ID=87949247

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310731995.8A Active CN116758353B (en) 2023-06-20 2023-06-20 Remote sensing image target classification method based on domain specific information filtering

Country Status (1)

Country Link
CN (1) CN116758353B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180218284A1 (en) * 2017-01-31 2018-08-02 Xerox Corporation Method and system for learning transferable feature representations from a source domain for a target domain
CN111738315A (en) * 2020-06-10 2020-10-02 西安电子科技大学 Image classification method based on countermeasure fusion multi-source transfer learning
CN112883908A (en) * 2021-03-16 2021-06-01 南京航空航天大学 Space-frequency characteristic consistency-based SAR image-to-optical image mapping method
CN115019106A (en) * 2022-06-27 2022-09-06 中山大学 Robust unsupervised domain self-adaptive image classification method and device based on anti-distillation
CN115272880A (en) * 2022-07-29 2022-11-01 大连理工大学 Multimode remote sensing target recognition method based on metric learning
CN115690479A (en) * 2022-05-23 2023-02-03 安徽理工大学 Remote sensing image classification method and system based on convolution Transformer
CN116152671A (en) * 2022-12-27 2023-05-23 南京工业大学 Cross-domain remote sensing image scene classification method based on rotation robust feature subclass center alignment
CN116188428A (en) * 2023-02-27 2023-05-30 中国计量大学 Bridging multi-source domain self-adaptive cross-domain histopathological image recognition method

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180218284A1 (en) * 2017-01-31 2018-08-02 Xerox Corporation Method and system for learning transferable feature representations from a source domain for a target domain
CN111738315A (en) * 2020-06-10 2020-10-02 西安电子科技大学 Image classification method based on countermeasure fusion multi-source transfer learning
CN112883908A (en) * 2021-03-16 2021-06-01 南京航空航天大学 Space-frequency characteristic consistency-based SAR image-to-optical image mapping method
CN115690479A (en) * 2022-05-23 2023-02-03 安徽理工大学 Remote sensing image classification method and system based on convolution Transformer
CN115019106A (en) * 2022-06-27 2022-09-06 中山大学 Robust unsupervised domain self-adaptive image classification method and device based on anti-distillation
CN115272880A (en) * 2022-07-29 2022-11-01 大连理工大学 Multimode remote sensing target recognition method based on metric learning
CN116152671A (en) * 2022-12-27 2023-05-23 南京工业大学 Cross-domain remote sensing image scene classification method based on rotation robust feature subclass center alignment
CN116188428A (en) * 2023-02-27 2023-05-30 中国计量大学 Bridging multi-source domain self-adaptive cross-domain histopathological image recognition method

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
HAOYU WANG等: "Hyperspectral Image Classification Based on Domain Adversarial Broad Adaptation Network", IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, vol. 60, pages 1 - 13, XP011900808, DOI: 10.1109/TGRS.2021.3128162 *
XIANGHONG FANG等: "DART: Domain-Adversarial Residual-Transfer networks for unsupervised cross-domain image classification", NEURAL NETWORKS, vol. 127, pages 182 - 192 *
XU HAI et al.: "Visual domain generalization techniques and research progress", Journal of Guangzhou University (Natural Science Edition), vol. 21, no. 02, pages 42-59 *
WANG WEI; ZHANG JIA'E: "Remote sensing image fusion algorithm combining guided filtering and sparse representation", Journal of Chinese Computer Systems, no. 03, pages 187-190 *
FAN BOWEN et al.: "Adversarial domain adaptation image classification based on domain-specific batch normalization", Artificial Intelligence and Robotics Research, vol. 12, no. 2, pages 107-114 *
CHEN DEHAI; PAN WEICHI; DING BOWEN; HUANG YANGUO: "Remote sensing image scene classification with recalibrated feature fusion adversarial domain adaptation", Computer Applications and Software, no. 05, pages 151-156 *

Also Published As

Publication number Publication date
CN116758353B (en) 2024-01-23

Similar Documents

Publication Publication Date Title
Jin et al. Deep learning for underwater image recognition in small sample size situations
CN108446589B (en) Face recognition method based on low-rank decomposition and auxiliary dictionary in complex environment
CN112967178B (en) Image conversion method, device, equipment and storage medium
Liu et al. The classification and denoising of image noise based on deep neural networks
CN109949200B (en) Filter subset selection and CNN-based steganalysis framework construction method
CN110490265A (en) A kind of image latent writing analysis method based on two-way convolution sum Fusion Features
CN114693607A (en) Method and system for detecting tampered video based on multi-domain block feature marker point registration
Hongmeng et al. A detection method for deepfake hard compressed videos based on super-resolution reconstruction using CNN
Qi et al. Research on the image segmentation of icing line based on NSCT and 2-D OSTU
CN109003247B (en) Method for removing color image mixed noise
CN114283058A (en) Image super-resolution reconstruction method based on countermeasure network and maximum mutual information optimization
CN112541566B (en) Image translation method based on reconstruction loss
CN113989256A (en) Detection model optimization method, detection method and detection device for remote sensing image building
CN116758353B (en) Remote sensing image target classification method based on domain specific information filtering
CN116310452B (en) Multi-view clustering method and system
CN112070714A (en) Method for detecting copied image based on local ternary counting characteristics
CN111461002A (en) Sample processing method for thermal imaging pedestrian detection
CN109165551B (en) Expression recognition method for adaptively weighting and fusing significance structure tensor and LBP characteristics
CN111340741A (en) Particle swarm optimization gray level image enhancement method based on quaternion and L1 norm
CN111047537A (en) System for recovering details in image denoising
CN116703750A (en) Image defogging method and system based on edge attention and multi-order differential loss
CN113378620B (en) Cross-camera pedestrian re-identification method in surveillance video noise environment
Liu et al. Adaptive Texture and Spectrum Clue Mining for Generalizable Face Forgery Detection
CN113160345A (en) ConvLSTM-based time series image reconstruction method
Ahmadia et al. The application of neural networks, image processing and cad-based environments facilities in automatic road extraction and vectorization from high resolution satellite images

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant