CN114610933B - Image classification method based on zero sample domain adaptation


Info

Publication number
CN114610933B
CN114610933B (application CN202210265349.2A)
Authority
CN
China
Prior art keywords
domain
feature
features
task
adopting
Prior art date
Legal status
Active
Application number
CN202210265349.2A
Other languages
Chinese (zh)
Other versions
CN114610933A (en)
Inventor
Liu Long
Chen Jinxin
Current Assignee
Xi'an University of Technology
Original Assignee
Xi'an University of Technology
Priority date
Filing date
Publication date
Application filed by Xi'an University of Technology
Priority to CN202210265349.2A
Publication of CN114610933A
Application granted
Publication of CN114610933B
Legal status: Active


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 - Information retrieval of still image data
    • G06F16/55 - Clustering; Classification
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G06N3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Databases & Information Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image classification method based on zero sample domain adaptation, which comprises the following steps: step 1, acquiring a picture data set; step 2, extracting semantic features of the image data with a feature extractor G_c, decomposing them by adversarial learning, and then performing feature refinement to obtain a feature C; step 3, extracting domain features with a feature extractor G_d, decomposing them by adversarial learning, and then performing feature refinement to obtain an attention feature f_M; step 4, matrix-multiplying feature C with the attention feature f_M to obtain a domain-specific feature with attention f_r; step 5, performing image classification with the classifier C using the final feature f_r. The invention solves the problems that the original domain adaptation technique requires unlabeled target-domain data in the same label space for training and strictly requires the source-domain and target-domain data to share the same label space.

Description

Image classification method based on zero sample domain adaptation
Technical Field
The invention belongs to the technical field of zero sample learning, and relates to an image classification method based on zero sample domain adaptation.
Background
With the advent of deep learning, neural networks have achieved great success in image classification, segmentation, object detection, and other tasks. Thanks to large amounts of labeled data and growing computing power, neural networks have even far exceeded human performance. In real life, however, large amounts of high-quality labeled data are often difficult to obtain, so a model trained in a label-rich scene suffers a significant performance drop when applied in a scene with little or no labeled data, and may even become completely unusable; for example, a scene segmentation model trained on a large number of synthetic images performs poorly on real-world images.
Transfer learning was proposed as an innovative way to address the problem that a trained model can only be applied to a single scene. It mainly studies how a learning system adjusts quickly to adapt when the scene or task changes. Transfer learning in the case where the data distributions of the target domain and the source domain differ but the task is the same is called domain adaptation. Typical domain adaptation aims to transfer knowledge learned from a source domain rich in labeled data to a target domain with scarce labels in the same label space. Moreover, if the model parameters can be continuously adjusted through domain adaptation to fit any new domain under the same labels, the model gains strong generalization ability across different scenes.
Typical domain adaptation methods assume that the target-domain data is available during the training phase. In practical applications, however, it is often infeasible to obtain unlabeled target-domain data that shares the same labels as the source-domain data of interest; this setting is referred to as zero sample domain adaptation.
Disclosure of Invention
The invention aims to provide an image classification method based on zero sample domain adaptation, which solves the problems that the original domain adaptation technique requires unlabeled target-domain data in the same label space for training and strictly requires the source-domain and target-domain data to have the same label space.
The technical scheme adopted by the invention is as follows:
an image classification method based on zero sample domain adaptation comprises the following steps:
step 1, acquiring a picture data set;
step 2, extracting semantic features of the image data with a feature extractor G_c, decomposing the semantic features by adversarial learning, and then performing feature refinement to obtain a feature C;
step 3, extracting domain features with a feature extractor G_d, decomposing the domain features by adversarial learning, and then performing feature refinement to obtain an attention feature f_M;
step 4, matrix-multiplying feature C with the attention feature f_M to obtain a domain-specific feature with attention f_r;
step 5, performing image classification with the classifier C using the final feature f_r.
The invention is also characterized in that:
the step 2 specifically comprises the following steps:
step 2.1, extracting the semantic features f_c = G_c(x) with the feature extractor G_c;
Step 2.2, decomposing the semantic features by adopting an countermeasure learning method to obtain domain-invariant semantic features f c The method comprises the steps of carrying out a first treatment on the surface of the The method comprises the following specific steps: constructing a domain discriminator D, and adopting an countermeasure learning method to the discriminator D and a feature extractor G of a transformer network structure c Performing countermeasure learning to remove domain related information so as to obtain domain-invariant semantic features f c The domain discriminator D consists of two fully connected layers; first the domain discriminator D is required to discriminate that the domain label is from feature f d Or f c The loss function of the domain discriminator is as follows:
representing the characteristic from the feature f d Is lost, is->Representing the characteristic from the feature f c Is a part of the loss, L D Is the total loss;
meanwhile, the feature extractor G_c is trained together with classifier C_r for the task of interest ToI and classifier C_ir for the irrelevant task IrT, preserving the class information in f_c by minimizing the classification error; both classifiers consist of one fully connected layer, with loss functions:
L_{C_r} = l(p_r, y_r), L_{C_ir} = l(p_ir, y_ir)
where l(·) denotes the cross-entropy loss, and p_r and p_ir are the predicted probability distributions of the task of interest and the irrelevant task, respectively;
step 2.3, passing the domain-invariant semantic features f_c through a Conv convolution layer to obtain the feature C.
The feature extractor G_c in step 2.1 adopts a Transformer network, and the specific feature-extraction procedure is: first, the input picture is partitioned into blocks by an image patch embedding module; the resulting picture blocks are then combined into a sequence, and position information is added by a position embedding module; the sequence extracted in the previous step is then fed into a multi-head self-attention module for feature extraction.
The step 3 specifically comprises the following steps:
step 3.1, extracting the domain features f_d = G_d(x) with the feature extractor G_d;
Step 3.2, constructing classifier C ir ,C r Classifier C ir ,C r Is composed of a layer of full-connection layer, and passes through a classifier C ir ,C r Sum domain feature extractor G d The antagonism learning between them separates class-related semantic information from domain features f d Removing to obtain the task unchanged domain feature f d
Specifically, classifier C is first performed ir ,C r Fixed, update only the domain feature extractor G d Disabling f by maximizing entropy of prediction class distribution d The maximum entropy loss function is:
the definition of the maximum entropy loss function is as follows:
n r and n ir The number of data samples in the task of interest and the task of independence, respectively;
after the domain feature extractor G_d has been updated, the classifiers are trained with G_d fixed, removing class-related features from the domain feature f_d by minimizing the classification loss:
L_C = l(p_r, y_r) + l(p_ir, y_ir)
where l(·) denotes the cross-entropy loss, and p_r and p_ir are the predicted probability distributions of the task of interest and the irrelevant task, respectively;
step 3.3, converting the extracted task-invariant domain feature f_d into two features A and B through two convolution layers, matrix-multiplying A and B, and obtaining the attention feature f_M through the attention map M. The attention map M is computed as:
m_ji = exp(A_i^T B_j) / Σ_i exp(A_i^T B_j)
where m_ji represents the importance of the i-th position in feature A to the j-th position in feature B, and A_i^T denotes the transpose of the i-th position in feature A.
The feature extractor G_d in step 3.1 adopts a ResNet network. The ResNet50 network consists of 7 parts: the first part contains no residual blocks and mainly performs convolution, regularization, activation, and max-pooling calculations on the input; the second, third, fourth, and fifth parts contain residual blocks, each composed of 3 convolution layers; these are followed by one fully connected layer, and the feature vector is finally extracted by max-pooling sampling.
Step 4 specifically comprises: matrix-multiplying feature C with the attention feature f_M, where γ is a dynamic weight. The output formula is:
f_r = γ·(f_M × C) + f_c
At initialization, f = f_c and γ is initialized to 0; through training, more and more weight is assigned to the domain-specific feature map, yielding the domain-specific feature with attention f_r. The whole process is driven by minimizing the classification loss:
L = L_dis + λ_r·L_{f_r}
where λ_r is a hyperparameter balancing the loss of the disentanglement stage and the loss L_{f_r} of the collaboration stage; it is set to λ_r = 3.
The beneficial effects of the invention are as follows:
1. The image classification method based on zero sample domain adaptation disclosed by the invention introduces a Transformer structure as the feature extraction network, which preserves semantic features better and achieves a better effect than a traditional CNN convolution structure.
2. An attention mechanism is introduced to highlight the important feature parts, reducing the negative transfer effect and strengthening the positive transfer effect.
3. The semantic information does not need to be known in advance; it is acquired through learning.
Drawings
FIG. 1 is a flow chart of an image classification method based on zero sample domain adaptation of the present invention;
FIG. 2 is a structural diagram of the feature extractor G_c of the present invention;
FIG. 3 is a network structure diagram of the feature decomposition phase in steps 2 and 3 of the present invention;
fig. 4 is a network structure diagram of the feature refinement stage in step 2 and step 3 of the present invention.
Detailed Description
The invention will be described in detail below with reference to the drawings and the detailed description.
The image classification method based on zero sample domain adaptation of the invention, as shown in FIG. 1, comprises the following steps:
step 1, acquiring the Office-Home data set, which consists of images from 4 different style domains: Art, Clipart, Product, and Real World. During training, the 4 domains serve in turn as the source domain and the target domain of the task of interest, and an equal number of images are randomly drawn from the data set as the source domain of the irrelevant task; the images drawn for the source and target domains of the irrelevant task must carry the same labels and differ only in style domain (a data-loading sketch follows this step list);
step 2, using a feature extractor G_c with a Transformer structure, extracting the semantic features f_c = G_c(x) of the image data, decomposing f_c by adversarial learning, and then performing feature refinement to obtain the feature C;
step 3, using a feature extractor G_d with a ResNet residual network structure, extracting the domain features f_d = G_d(x), likewise decomposing f_d by adversarial learning, and then performing feature refinement to obtain the attention feature f_M;
step 4, matrix-multiplying feature C with the attention feature f_M to obtain the domain-specific feature with attention f_r;
step 5, performing image classification with the classifier C (consisting of one fully connected layer) using the final feature f_r.
The step 2 specifically comprises the following steps:
step 2.1, extracting the semantic features f_c = G_c(x) of the data set with the feature extractor G_c. As shown in FIG. 2, the feature extractor adopts a Transformer network, and the specific feature-extraction procedure is: first, the input picture is partitioned into blocks by an image patch embedding module (Patch Embedding); the resulting picture blocks are combined into a sequence, and position information is added by a position embedding module (Position Embedding); the sequence extracted in the previous step is then fed into a multi-head self-attention module (Multi-head Self-attention) for feature extraction. The structure is shown in FIG. 2.
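For illustration, a minimal PyTorch sketch of such a Transformer feature extractor follows; the patch size, embedding dimension, depth, and class name are assumed values rather than the patent's exact configuration:

```python
import torch
import torch.nn as nn

class TransformerFeatureExtractor(nn.Module):
    """Sketch of G_c: patch embedding + position embedding + multi-head self-attention."""
    def __init__(self, img_size=224, patch_size=16, embed_dim=768, num_heads=12, depth=2):
        super().__init__()
        num_patches = (img_size // patch_size) ** 2
        # Image patch embedding: split the picture into blocks with a strided convolution
        self.patch_embed = nn.Conv2d(3, embed_dim, kernel_size=patch_size, stride=patch_size)
        # Learnable position embedding added to the patch sequence
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        x = self.patch_embed(x)               # (B, D, H/P, W/P)
        x = x.flatten(2).transpose(1, 2)      # (B, N, D): sequence of picture blocks
        x = x + self.pos_embed                # add position information
        x = self.encoder(x)                   # multi-head self-attention
        return x.mean(dim=1)                  # pooled semantic feature f_c
```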
Step 2.2, as in FIG. 3, a domain discriminator D is constructed, and a countermeasure learning method is employed for the discriminator D and a feature extractor G of the transducer network structure c Performing countermeasure learning to remove domain related information so as to obtain domain-invariant semantic features f c The domain discriminator D consists of two fully connected layers;
specifically, first the domain discriminator D is required to discriminate that the domain label is from the feature f d Or f c The loss function of the domain discriminator is as follows:
representing the characteristic from the feature f d Is lost, is->Representing the characteristic from the feature f c Is a part of the loss, L D Is the total loss.
Meanwhile, the feature extractor G_c is trained together with classifier C_r (for the task of interest ToI) and classifier C_ir (for the irrelevant task IrT), preserving the class information in f_c by minimizing the classification error; both classifiers consist of one fully connected layer, with loss functions:
L_{C_r} = l(p_r, y_r), L_{C_ir} = l(p_ir, y_ir)
where l(·) denotes the cross-entropy loss, and p_r and p_ir are the predicted probability distributions of the task of interest and the irrelevant task, respectively.
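The adversarial game of step 2.2 can be sketched as follows, assuming binary cross-entropy for the discriminator and standard cross-entropy for the two classifiers; the hidden width of D and the function names are hypothetical:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainDiscriminator(nn.Module):
    """Two fully connected layers, as described; the hidden width (256) is an assumption."""
    def __init__(self, in_dim=768, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, f):
        return self.net(f)

def discriminator_loss(D, f_d, f_c):
    """L_D = L_D^{f_d} + L_D^{f_c}: D learns which feature space each vector came from."""
    logits_d, logits_c = D(f_d.detach()), D(f_c.detach())
    loss_fd = F.binary_cross_entropy_with_logits(logits_d, torch.ones_like(logits_d))
    loss_fc = F.binary_cross_entropy_with_logits(logits_c, torch.zeros_like(logits_c))
    return loss_fd + loss_fc

def classification_loss(C_r, C_ir, f_c_toi, y_toi, f_c_irt, y_irt):
    """Cross-entropy on the ToI and IrT batches keeps class information in f_c
    while the adversarial game removes domain cues."""
    return F.cross_entropy(C_r(f_c_toi), y_toi) + F.cross_entropy(C_ir(f_c_irt), y_irt)
```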
Step 2.3, as in FIG. 4, domain invariant semantic features f c And obtaining the characteristic C through a Conv convolution layer.
The step 3 specifically comprises the following steps:
step 3.1, extracting the domain features f_d = G_d(x) with the feature extractor G_d, which adopts a ResNet50 network.
The ResNet50 network consists of 7 parts: the first part contains no residual blocks and mainly performs convolution, regularization, activation, and max-pooling calculations on the input; the second, third, fourth, and fifth parts contain residual blocks, each composed of 3 convolution layers; these are followed by one fully connected layer, and the feature vector is finally extracted by max-pooling sampling;
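A sketch of G_d built from torchvision's standard ResNet-50 is shown below; note that torchvision's implementation ends with average pooling rather than the max-pooling sampling described above, so this is an approximation:

```python
import torch.nn as nn
import torchvision.models as models

# Stem (conv/BN/ReLU/max-pool) plus the four residual stages, pooled to a vector.
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
G_d = nn.Sequential(*list(resnet.children())[:-1])  # drop the final fc layer

# G_d(x) has shape (B, 2048, 1, 1); flatten it to obtain the domain feature f_d.
```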
step 3.2, as shown in FIG. 3, constructing classifiers C_ir and C_r, each consisting of one fully connected layer; then, using the idea of adversarial learning, class-related semantic information is removed from the domain features through the adversarial game between the classifiers C_ir, C_r and the domain feature extractor G_d, yielding the task-invariant domain feature f_d.
Specifically, the classifiers C_ir and C_r are first fixed and only the domain feature extractor G_d is updated, disabling the class information in f_d; this is achieved by maximizing the entropy of the predicted class distribution.
The maximum entropy loss function is defined as:
L_E = -(1/n_r) Σ_i H(p_r^i) - (1/n_ir) Σ_j H(p_ir^j), with H(p) = -Σ_k p_k log p_k
where n_r and n_ir are the numbers of data samples in the task of interest and the irrelevant task, respectively.
After the domain feature extractor G_d has been updated, the classifiers are trained with G_d fixed, removing class-related features from the domain feature f_d by minimizing the classification loss:
L_C = l(p_r, y_r) + l(p_ir, y_ir)
where l(·) denotes the cross-entropy loss, and p_r and p_ir are the predicted probability distributions of the task of interest and the irrelevant task, respectively.
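The alternating update of step 3.2 can be sketched as follows; the function name is hypothetical, and the loss is written as a negative entropy so that minimizing it maximizes the entropy of the predicted class distributions:

```python
import torch.nn.functional as F

def negative_entropy_loss(logits_r, logits_ir):
    """Minimizing this term maximizes prediction entropy, pushing G_d to emit
    domain features f_d that carry no class information."""
    def neg_entropy(logits):
        p = F.softmax(logits, dim=1)
        return (p * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    return neg_entropy(logits_r) + neg_entropy(logits_ir)

# Alternating scheme: (1) freeze C_r, C_ir and update G_d with negative_entropy_loss;
# (2) freeze G_d and update C_r, C_ir with the cross-entropy loss above.
```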
Step 3.3, as in FIG. 4, since the semantic features belonging to the portion of the task of interest ToI are generally consistent with the semantic feature distribution of the portion of the unrelated task IrT, the semantic features of the unrelated task IrT are easily misleading the classifier C for the task of interest ToI r . To solve this problem we propose collaborative improvement of the feature map to pass through the features from domain f d Is directed to M highlight a significant portion of the target domain.
Specifically, the extracted task-invariant domain feature f_d is converted into two features A and B through two convolution layers; A and B are matrix-multiplied, and the attention feature f_M is obtained through the attention map M, computed as:
m_ji = exp(A_i^T B_j) / Σ_i exp(A_i^T B_j)
where m_ji represents the importance of the i-th position in feature A to the j-th position in feature B, and A_i^T denotes the transpose of the i-th position in feature A.
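A sketch of this attention computation, assuming 1x1 convolutions for the two transforms and the softmax form of M given above:

```python
import torch
import torch.nn as nn

class AttentionFeature(nn.Module):
    """Step 3.3 sketch: f_d -> A, B via two convolutions, M = softmax(A^T B) over
    positions, f_M = f_d aggregated with M. Kernel size and key width are assumptions."""
    def __init__(self, in_channels, key_channels=64):
        super().__init__()
        self.conv_a = nn.Conv2d(in_channels, key_channels, kernel_size=1)
        self.conv_b = nn.Conv2d(in_channels, key_channels, kernel_size=1)

    def forward(self, f_d):
        b, c, h, w = f_d.shape
        A = self.conv_a(f_d).flatten(2)              # (b, c', N) with N = h*w
        B = self.conv_b(f_d).flatten(2)              # (b, c', N)
        energy = torch.bmm(A.transpose(1, 2), B)     # (b, N, N): entries A_i^T B_j
        M = torch.softmax(energy, dim=1)             # m_ji: weight of position i for position j
        f_M = torch.bmm(f_d.flatten(2), M)           # attention-weighted aggregation of f_d
        return f_M.view(b, c, h, w), M
```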
Step 4 specifically comprises:
matrix-multiplying feature C with the attention feature f_M, where γ is a dynamic weight. The output formula is:
f_r = γ·(f_M × C) + f_c
At initialization, f = f_c and γ is initialized to 0; through training, more and more weight is assigned to the domain-specific feature map, yielding the domain-specific feature with attention f_r. The whole process is driven by minimizing the classification loss:
L = L_dis + λ_r·L_{f_r}
where λ_r is a hyperparameter balancing the loss of the disentanglement stage and the loss L_{f_r} of the collaboration stage; it is set to λ_r = 3.
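Finally, a sketch of the γ-weighted fusion of step 4; the exact tensor layout of the matrix multiplication is an assumption:

```python
import torch
import torch.nn as nn

class AttentiveFusion(nn.Module):
    """Step 4 sketch: f = gamma * (C x M) + f_c, with gamma a learnable scalar
    initialized to 0 so training starts from the domain-invariant feature f_c."""
    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))   # dynamic weight, initialized to 0

    def forward(self, C, M, f_c):
        b, ch, h, w = C.shape
        f_specific = torch.bmm(C.flatten(2), M)     # (b, ch, N) x (b, N, N): multiply C by M
        f_specific = f_specific.view(b, ch, h, w)
        return self.gamma * f_specific + f_c        # residual connection; equals f_c at start

# Training then minimizes L = L_dis + lambda_r * L_fr with lambda_r = 3, where L_fr is
# the classification loss computed on the fused feature f_r.
```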

Claims (4)

1. An image classification method based on zero sample domain adaptation, characterized by comprising the following steps:
step 1, acquiring a picture data set;
step 2, extracting semantic features of the image data with a feature extractor G_c, decomposing the semantic features by adversarial learning, and then performing feature refinement to obtain a feature C; the specific steps are:
step 2.1, extracting the semantic features f_c = G_c(x) with the feature extractor G_c;
step 2.2, decomposing the semantic features by adversarial learning to obtain domain-invariant semantic features f_c; the specific steps are: constructing a domain discriminator D consisting of two fully connected layers, and letting D play an adversarial game against the feature extractor G_c of the Transformer network structure, so that domain-related information is removed and domain-invariant semantic features f_c are obtained; the domain discriminator D is first required to discriminate whether a domain label comes from feature f_d or f_c, with the loss function:
L_D = L_D^{f_d} + L_D^{f_c}
where L_D^{f_d} denotes the loss for features from f_d, L_D^{f_c} denotes the loss for features from f_c, and L_D is the total loss;
meanwhile, the feature extractor G_c is trained together with classifier C_r for the task of interest ToI and classifier C_ir for the irrelevant task IrT, preserving the class information in f_c by minimizing the classification error; both classifiers consist of one fully connected layer, with loss functions:
L_{C_r} = l(p_r, y_r), L_{C_ir} = l(p_ir, y_ir)
where l(·) denotes the cross-entropy loss, and p_r and p_ir are the predicted probability distributions of the task of interest and the irrelevant task, respectively;
step 2.3, passing the domain-invariant semantic features f_c through a Conv convolution layer to obtain the feature C;
step 3, extracting domain features with a feature extractor G_d, decomposing the domain features by adversarial learning, and then performing feature refinement to obtain an attention feature f_M; the specific steps are:
step 3.1, extracting the domain features f_d = G_d(x) with the feature extractor G_d;
step 3.2, constructing classifiers C_ir and C_r, each consisting of one fully connected layer; through adversarial learning between the classifiers C_ir, C_r and the domain feature extractor G_d, class-related semantic information is removed from the domain features, yielding the task-invariant domain feature f_d;
specifically, the classifiers C_ir and C_r are first fixed and only the domain feature extractor G_d is updated, disabling the class information in f_d by maximizing the entropy of the predicted class distribution; the maximum entropy loss function is defined as:
L_E = -(1/n_r) Σ_i H(p_r^i) - (1/n_ir) Σ_j H(p_ir^j), with H(p) = -Σ_k p_k log p_k
where n_r and n_ir are the numbers of data samples in the task of interest and the irrelevant task, respectively;
after the domain feature extractor G_d has been updated, the classifiers are trained with G_d fixed, removing class-related features from the domain feature f_d by minimizing the classification loss:
L_C = l(p_r, y_r) + l(p_ir, y_ir)
where l(·) denotes the cross-entropy loss, and p_r and p_ir are the predicted probability distributions of the task of interest and the irrelevant task, respectively;
step 3.3, converting the extracted task-invariant domain feature f_d into two features A and B through two convolution layers, matrix-multiplying A and B, and obtaining the attention feature f_M through the attention map M, computed as:
m_ji = exp(A_i^T B_j) / Σ_i exp(A_i^T B_j)
where m_ji represents the importance of the i-th position in feature A to the j-th position in feature B, and A_i^T denotes the transpose of the i-th position in feature A;
step 4, matrix-multiplying feature C with the attention feature f_M to obtain a domain-specific feature with attention f_r;
step 5, performing image classification with the classifier C using the final feature f_r.
2. The image classification method based on zero sample domain adaptation according to claim 1, wherein the feature extractor G_c adopts a Transformer network, and the specific feature-extraction procedure is: first, the input picture is partitioned into blocks by an image patch embedding module; the resulting picture blocks are then combined into a sequence, and position information is added by a position embedding module; the sequence extracted in the previous step is then fed into a multi-head self-attention module for feature extraction.
3. The image classification method based on zero sample domain adaptation according to claim 1, wherein the feature extractor G_d adopts a ResNet network, the ResNet50 network consisting of 7 parts: the first part contains no residual blocks and mainly performs convolution, regularization, activation, and max-pooling calculations on the input; the second, third, fourth, and fifth parts contain residual blocks, each composed of 3 convolution layers; these are followed by one fully connected layer, and the feature vector is finally extracted by max-pooling sampling.
4. The image classification method based on zero sample domain adaptation according to claim 1, wherein step 4 specifically comprises:
matrix-multiplying feature C with the attention feature f_M, where γ is a dynamic weight; the output formula is:
f_r = γ·(f_M × C) + f_c
at initialization, f = f_c and γ is initialized to 0; through training, more and more weight is assigned to the domain-specific feature map, yielding the domain-specific feature with attention f_r; the whole process is driven by minimizing the classification loss:
L = L_dis + λ_r·L_{f_r}
where λ_r is a hyperparameter balancing the loss of the disentanglement stage and the loss L_{f_r} of the collaboration stage, set to λ_r = 3.
CN202210265349.2A 2022-03-17 2022-03-17 Image classification method based on zero sample domain adaptation Active CN114610933B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202210265349.2A (CN114610933B) | 2022-03-17 | 2022-03-17 | Image classification method based on zero sample domain adaptation

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202210265349.2A (CN114610933B) | 2022-03-17 | 2022-03-17 | Image classification method based on zero sample domain adaptation

Publications (2)

Publication Number | Publication Date
CN114610933A (en) | 2022-06-10
CN114610933B (en) | 2024-02-13

Family

ID=81864751

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202210265349.2A (CN114610933B, Active) | Image classification method based on zero sample domain adaptation | 2022-03-17 | 2022-03-17

Country Status (1)

Country Link
CN (1) CN114610933B (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021139069A1 * 2020-01-09 2021-07-15 Nanjing University of Information Science & Technology General target detection method for adaptive attention guidance mechanism
CN111368886A * 2020-02-25 2020-07-03 South China University of Technology Sample screening-based label-free vehicle picture classification method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yao Minghai; Huang Zhancong. Research on semi-supervised domain adaptation based on active learning. High Technology Letters, 2020, (08), full text. *

Also Published As

Publication number Publication date
CN114610933A (en) 2022-06-10

Similar Documents

Publication Publication Date Title
CN109754015B (en) Neural networks for drawing multi-label recognition and related methods, media and devices
Zhang et al. Efficient feature learning and multi-size image steganalysis based on CNN
CN113076994A (en) Open-set domain self-adaptive image classification method and system
Wan et al. Generative adversarial multi-task learning for face sketch synthesis and recognition
Gao et al. Co-saliency detection with co-attention fully convolutional network
CN112232151A (en) Iterative aggregation neural network high-resolution remote sensing scene classification method embedded with attention mechanism
CN112580480A (en) Hyperspectral remote sensing image classification method and device
CN110659663A (en) Unsupervised bidirectional reconstruction field self-adaption method
CN115563327A (en) Zero sample cross-modal retrieval method based on Transformer network selective distillation
Wei et al. Universal deep network for steganalysis of color image based on channel representation
Nguyen et al. Adaptive nonparametric image parsing
Zhao et al. Visible-infrared person re-identification based on frequency-domain simulated multispectral modality for dual-mode cameras
Qiu et al. High resolution remote sensing image denoising algorithm based on sparse representation and adaptive dictionary learning
Tian et al. Convolutional neural networks for steganalysis via transfer learning
CN114329031A (en) Fine-grained bird image retrieval method based on graph neural network and deep hash
CN114610933B (en) Image classification method based on zero sample domain adaptation
Wang et al. Recurrent multi-level residual and global attention network for single image deraining
CN117011638A (en) End-to-end image mask pre-training method and device
Bashir et al. Towards deep learning-based image steganalysis: practices and open research issues
Özyurt et al. A new method for classification of images using convolutional neural network based on Dwt-Svd perceptual hash function
CN112651329B (en) Low-resolution ship classification method for generating countermeasure network through double-flow feature learning
CN114780767A (en) Large-scale image retrieval method and system based on deep convolutional neural network
Liu et al. Adaptive Texture and Spectrum Clue Mining for Generalizable Face Forgery Detection
CN108052981B (en) Image classification method based on nonsubsampled Contourlet transformation and convolutional neural network
CN113313202A (en) Single-domain generalization method based on progressive unknown domain expansion

Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant