CN117370594A - Distribution difference adaptive image retrieval method based on space-frequency interaction - Google Patents

Distribution difference adaptive image retrieval method based on space-frequency interaction

Info

Publication number
CN117370594A
Authority
CN
China
Prior art keywords
hash
image
distribution
hash quantization
quantization
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311424869.4A
Other languages
Chinese (zh)
Inventor
张军
张智铭
杨召云
张旭鹏
高英杰
王仪诺
王朝权
张程
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei University of Technology
Original Assignee
Hebei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei University of Technology
Priority to CN202311424869.4A
Publication of CN117370594A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/5866 - Retrieval characterised by using metadata, using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/55 - Clustering; Classification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G06N 3/096 - Transfer learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Library & Information Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a distribution difference adaptive image retrieval method based on space-frequency interaction. Original images for training are first acquired, and strongly and weakly transformed images are obtained through data augmentation. A deep hash network is then constructed; the strongly and weakly transformed images are input into a student model and a teacher model respectively to obtain hash quantization codes, from which a self-distillation difference quantization loss, a hash proxy loss, and a binary cross-entropy loss are computed. A distribution migration module is then constructed, which uses the distribution center and dispersion of the hash quantization codes extracted by the student model to migrate the hash quantization codes extracted by the teacher model, yielding a distribution migration loss. A frequency component extraction module is constructed, which extracts the frequency-domain information of the hash quantization codes through a fast Fourier transform and extracts their frequency components through an arctangent transform, yielding a frequency component loss. Finally, an objective optimization function is constructed from all the losses, the student and teacher models are trained, and the trained student model or teacher model is used for image retrieval. By fully quantifying the difference information between hash codes, the method improves retrieval performance.

Description

Distribution difference adaptive image retrieval method based on space-frequency interaction

Technical Field

The invention belongs to the field of image retrieval within information retrieval, and specifically relates to a distribution difference adaptive image retrieval method based on space-frequency interaction.

Background

Image retrieval is a technique that, given a query image, finds and retrieves matching images; its main purpose is to return, from a large image database, the images most semantically related to the query. Image retrieval has broad application value and plays a key role in many fields, including image search engines, medical image analysis, and security surveillance. It helps people access and manage image data more easily, improving work efficiency and user experience and shortening decision-making processes. As databases grow, exhaustively searching a database for the required images consumes ever more manpower and time. Representing both the database images and the query image by semantic features converts the database search problem into a similarity judgment between semantic features, which greatly improves retrieval efficiency.

Hash algorithms have significant advantages in speed and storage and are therefore widely used in large-scale image retrieval. They fall into two categories: traditional hashing and deep hashing. Early methods were mostly traditional hashing algorithms built on image features: hand-crafted convolution kernels extract image features, and the similarity between features determines which database images match and are returned as retrieval results. Compared with earlier retrieval methods that relied on manually entered metadata and tags, traditional hashing is easier to implement, but limited by the hand-crafted kernels and shallow models, the generated hash codes carry only a small amount of semantic information. With the development of deep learning, the field of image retrieval has made tremendous progress. Thanks to the more powerful representation learning of deep neural networks, deep-learning-based image retrieval can obtain feature codes containing more high-level semantic information; then, for faster retrieval, the features extracted by the network are compressed into Hamming space, converting similarity computation between discrete quantized feature codes into Hamming-distance computation between binary hash codes.

Current distance quantification between hash codes falls into two categories: tuple losses and class-center losses; tuple losses include pairwise losses and triplet losses. A pairwise loss treats a pair of images as a unit, converts the extracted features into codes, and uses the distance between the codes as the loss. Because the relationship between two samples is only "similar" or "dissimilar", a pairwise loss pulls similar images together and pushes dissimilar images apart, but the difference in magnitude between the two terms causes an imbalance between positive and negative samples, and intra-class and inter-class relationships cannot be captured; moreover, computing the code distance between every pair of images incurs a huge time cost. Although a triplet loss alleviates the positive-negative imbalance to some extent, the numbers of intra-class and inter-class samples bias the trained model, and inter-class relationships still cannot be captured. A class-center loss constructs class centers in advance, either by definition or by clustering, and converts the code loss between image pairs into the distance between an image code and its class-center code. Compared with tuple losses, class-center losses require no pairwise distance computation over all samples, greatly reducing training time, and because of the relationships among class centers, the learned codes also carry a degree of class structure.

Existing deep-learning-based image retrieval methods focus on how to better quantify the differences between image codes. However, how to more fully exploit the intra-class and inter-class relationships between images during encoding, so that the codes more fully decouple category information, and how to exploit the distribution differences between categories, are also important for improving image retrieval performance.

Summary of the Invention

In view of the shortcomings of the prior art, the technical problem to be solved by the present invention is to provide a distribution difference adaptive image retrieval method based on space-frequency interaction.

The present invention solves the technical problem with the following technical solution:

A distribution difference adaptive image retrieval method based on space-frequency interaction, characterized in that the method comprises the following steps:

Step 1: obtain original images for training and perform data augmentation on them to obtain strongly transformed and weakly transformed images;

Step 2: construct a deep hash network; input the strongly and weakly transformed images into the student model and the teacher model of the deep hash network respectively to obtain the hash quantization codes extracted by each model; from the hash quantization codes extracted by the student model and the teacher model, obtain the self-distillation difference quantization loss L_Sdh, the hash proxy loss L_HP, and the binary cross-entropy loss L_bce-Q;

L_Sdh = 1 - cos(H_T, H_S)   (1)

L_HP = H(y, Softmax(P_T / T))   (4)

where H_T and H_S denote the hash quantization codes extracted by the teacher model and the student model, P_T is the proxy sample, T is the temperature-scale hyperparameter, H(·) denotes the quantified error between the true category label of the image and the predicted category label, y denotes the true category label sequence of the image, P(H_k = 1) denotes the probability that a hash bit takes the value 1, Ĥ_k denotes the maximum-likelihood estimate of the k-th hash bit, H_k denotes the k-th bit of the hash quantization code H_T, and K denotes the code length;

Step 3: construct a distribution migration module; the distribution migration module uses the distribution center and dispersion of the hash quantization codes extracted by the student model to migrate the hash quantization codes extracted by the teacher model, obtaining the migrated hash quantization codes; by quantifying the difference between the migrated hash quantization codes and the hash quantization codes extracted by the teacher model, the distribution migration loss is obtained; the distribution migration loss L_DIT is expressed as:

L_DIT = 1 - cos(H_T, H_T_S)   (12)

where H_T_S is the hash quantization code after range-constrained distribution migration;

Step 4: construct a frequency component extraction module; the frequency component extraction module extracts the frequency-domain information of the hash quantization codes through the fast Fourier transform, and then extracts the frequency components of the hash quantization codes through the arctangent transform;

F(x)(u, v) = Σ_{h=0..H-1} Σ_{w=0..W-1} x(h, w) · e^(-j2π(uh/H + vw/W))   (13)

where x denotes the hash quantization code input to the fast Fourier transform, F(x)(u, v) is the information of the hash quantization code at frequency-domain coordinates (u, v), (h, w) denotes the spatial-domain coordinates of the hash quantization code, x(h, w) denotes the value of the hash quantization code at spatial coordinates (h, w), and H and W denote the length and width of the hash quantization code;

PH = arctan( I(x′)(u, v) / R(x′)(u, v) )   (14)

where PH denotes the frequency components of the hash quantization code, R(x′)(u, v) is the real part of the hash quantization code at frequency-domain coordinates (u, v), and I(x′)(u, v) is the imaginary part of the hash quantization code at frequency-domain coordinates (u, v);

The frequency component loss L_ph is expressed as:

L_ph = 1 - cos(PH_T, PH_S)   (15)

where PH_T denotes the frequency components of the hash quantization code H_T, and PH_S denotes the frequency components of the hash quantization code H_S;

Step 5: construct an objective optimization function, train the student model and the teacher model, and measure the training loss through the objective optimization function; the objective optimization function is:

L = (1/N_B) · Σ ( L_HP + λ1·L_Sdh + λ2·L_bce-Q + λ3·L_DIT + λ4·L_ph )   (16)

where N_B is the total number of samples, and λ1, λ2, λ3, λ4 are weights;

Input the image to be queried into the trained student model or teacher model, and output the retrieved images.

Compared with the prior art, the advantages and beneficial effects of the present invention are as follows:

1. Current image retrieval methods based on self-distillation transform images through data augmentation so that the image distribution changes, and guide hash-code generation by quantifying the hash-code differences between image pairs. However, when the distribution difference between two images is too large, directly quantifying the hash codes of the two leads to the difference information produced by data augmentation not being fully utilized; insufficiently quantified difference information degrades retrieval performance and accuracy. A distribution migration module is therefore designed: it uses the distribution center and dispersion of the hash quantization codes extracted by the student model to migrate the hash quantization codes extracted by the teacher model, obtaining migrated hash quantization codes. By computing the similarity between the migrated hash quantization codes and the hash quantization codes extracted by the student model, it assists in quantifying the difference between the hash quantization codes extracted by the teacher model and the student model, thereby more fully exploiting the distribution difference information produced by data augmentation and improving retrieval performance.

2. Information changes in an image are often reflected more clearly in the frequency domain. When quantizing image codes, current deep hash networks consider only code quantization in the spatial domain, ignoring the relative transformation of the image itself. The present invention therefore designs a frequency component extraction module to analyze the frequency components of the hash quantization codes; by capturing the relative changes produced by image transformations, the generated image codes can contain high-level semantic information with more pronounced differences. The module first converts the spatial-domain image codes into frequency-domain form through a fast Fourier transform, then performs phase analysis through an arctangent transform, and captures the relative changes in the frequency domain by quantifying the phase differences between codes, thereby more fully quantifying the difference information produced by data augmentation.

3. Experiments were conducted on the single-label dataset ImageNet and the multi-label datasets MS COCO, NUS-WIDE, and NUS-WIDE_M. The results show that, compared with the currently popular image retrieval model DHD, the present invention improves performance on the more simply distributed single-label dataset at code lengths 32, 48, and 64, and achieves comparable results at code length 16. On the multi-label datasets, improvements of varying degrees are obtained at all code lengths. The results indicate that the proposed method adapts better to the relative changes produced by the self-distillation form of data augmentation and achieves better retrieval results.

Brief Description of the Drawings

Figure 1 is a schematic diagram of the structure of the deep hash network of the present invention in the training stage;

Figure 2 is a schematic diagram of the distribution migration module of the present invention;

Figure 3 is a schematic diagram of the frequency component extraction module of the present invention.

Detailed Description

The technical solution of the present invention is described in detail below with reference to the accompanying drawings and specific implementations, without thereby limiting the scope of protection of the present application.

The present invention is a distribution difference adaptive image retrieval method based on space-frequency interaction (hereinafter, the method; see Figures 1-3), comprising the following steps:

Step 1: obtain original images; a number of original images form a dataset.

This embodiment uses four datasets: MS COCO, ImageNet, NUS-WIDE, and NUS-WIDE_M. ImageNet is a single-label dataset and the other three are multi-label datasets; every image is resized to 256×256 pixels. Each dataset is divided into a database, a training set, and a test set. ImageNet contains 100 categories, with 128,503, 13,000, and 5,000 images in the database, training set, and test set respectively. NUS-WIDE and NUS-WIDE_M each contain 21 categories, with 149,736, 10,500, and 2,100 images in the database, training set, and test set respectively. MS COCO contains 80 categories, with 117,218, 10,000, and 5,000 images in the database, training set, and test set respectively.

The original images are augmented through operations such as random cropping, horizontal flipping, Gaussian blur, and brightness, contrast, and saturation transformations; the overall strength of the augmentation is characterized in the form of probabilities, yielding strongly transformed and weakly transformed images. In practical applications the query image may itself be transformed, and data augmentation can also simulate the image transformations encountered in practice.
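As a concrete illustration, the following is a minimal sketch of the strong/weak augmentation pipelines, assuming torchvision is used; the application probabilities and parameter values are illustrative assumptions, not values fixed by the patent.

```python
# Minimal sketch of the strong/weak augmentation branches (assumed
# torchvision implementation; probabilities and magnitudes are illustrative).
import torchvision.transforms as T

def make_transform(strong: bool) -> T.Compose:
    p = 0.8 if strong else 0.2  # higher probability => "strong" transformation
    return T.Compose([
        T.RandomResizedCrop(224),                              # random cropping
        T.RandomHorizontalFlip(p=0.5),                         # horizontal flipping
        T.RandomApply([T.ColorJitter(0.4, 0.4, 0.4)], p=p),    # brightness/contrast/saturation
        T.RandomApply([T.GaussianBlur(kernel_size=23)], p=p),  # Gaussian blur
        T.ToTensor(),
    ])

weak_transform = make_transform(strong=False)
strong_transform = make_transform(strong=True)
```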

Step 2: construct the deep hash network.

The deep hash network (Deep-Hash-Distillation, DHD) takes a Siamese network as its basic framework and comprises a student model and a teacher model that share parameters. Each model consists of a feature extraction network and a code generation network. The feature extraction network typically adopts ResNet50 or AlexNet; its extracted features are fed into the code generation network, which produces the hash quantization code. The code generation network consists of a fully connected layer, a layer normalization layer, and a tanh activation function; the tanh activation constrains the values of the hash quantization code output by the layer normalization layer to the range [-1, 1]. The strongly and weakly transformed images are input into the student model and the teacher model respectively to extract their hash quantization codes. For a distillation model, fixing a single branch effectively improves performance, and quantifying the distribution difference information between the two branches assists code generation. The DHD network realizes the self-distillation difference quantization loss between the student model and the teacher model through the distribution difference between the strongly and weakly transformed images, computed via cosine similarity as follows:

L_Sdh = 1 - cos(H_T, H_S)   (1)

H_T = tanh(h_T)   (2)

H_S = tanh(h_S)   (3)

where L_Sdh is the self-distillation difference quantization loss between the student model and the teacher model, h_T and h_S denote the hash quantization codes output by the layer normalization layers of the teacher model and the student model, H_T and H_S denote the hash quantization codes extracted by the teacher model and the student model, and tanh(·) denotes the tanh activation function;
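The following is a minimal PyTorch sketch of one such encoder (feature extraction network plus code generation network) and of the self-distillation loss of Eqs. (1)-(3); the class and function names are assumptions made for illustration, and the ResNet50 weights argument presumes a recent torchvision.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class HashEncoder(nn.Module):
    """Feature extraction network (ResNet50) + code generation network
    (fully connected layer, layer normalization, tanh); names assumed."""
    def __init__(self, code_len: int = 64):
        super().__init__()
        backbone = torchvision.models.resnet50(weights="IMAGENET1K_V1")
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop classifier head
        self.fc = nn.Linear(2048, code_len)
        self.ln = nn.LayerNorm(code_len)

    def forward(self, x):
        f = self.features(x).flatten(1)
        h = self.ln(self.fc(f))          # pre-activation code h (later the DIT-Block input)
        return h, torch.tanh(h)          # H = tanh(h), constrained to [-1, 1]

def self_distillation_loss(H_t: torch.Tensor, H_s: torch.Tensor) -> torch.Tensor:
    # L_Sdh = 1 - cos(H_T, H_S), averaged over the mini-batch
    return (1.0 - F.cosine_similarity(H_t, H_s, dim=1)).mean()
```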

In the code difference quantization stage, the similarity between the hash quantization code H_T and the proxy sample P_T is judged; the proxy sample P_T represents the center of the image samples. The hash proxy loss is computed as follows:

L_HP = H(y, Softmax(P_T / T))   (4)

where T is the temperature-scale hyperparameter, H(·) denotes the quantified error between the true category label of the image and the predicted category label (the predicted label is obtained through the Softmax function), and y denotes the true category label sequence of the image;
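A hedged sketch of the hash proxy loss follows. The text writes L_HP = H(y, Softmax(P_T / T)) without fully specifying how the proxy sample interacts with the code; the sketch below adopts one common realization, scoring the code against learnable class proxies by cosine similarity before the temperature-scaled cross entropy, which is an assumption rather than the patent's confirmed formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HashProxyLoss(nn.Module):
    """Assumed realization: learnable class proxies, cosine-similarity logits,
    temperature-scaled cross entropy H(y, Softmax(./T))."""
    def __init__(self, num_classes: int, code_len: int, temperature: float = 0.2):
        super().__init__()
        self.proxies = nn.Parameter(torch.randn(num_classes, code_len))
        self.temperature = temperature

    def forward(self, H_t: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # y holds class indices (single-label case assumed for brevity).
        logits = F.normalize(H_t, dim=1) @ F.normalize(self.proxies, dim=1).t()
        return F.cross_entropy(logits / self.temperature, y)
```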

The deep hashing algorithm reduces the quantization error by regression, minimizing the distance between the hash code and the binary target. Since hash quantization encodes each bit separately, code quantization is treated as binary classification, and the coding result of each bit is predicted by a Gaussian distribution estimator g(h), as follows:

g(h) = (1 / (σ·√(2π))) · exp( -(h - m)² / (2σ²) )   (5)

where m and σ are the mean and standard deviation of the Gaussian distribution estimator g(h); m takes the value +1 or -1: when h ≥ 0, m takes +1, and when h < 0, m takes -1;

In summary, the binary cross-entropy (BCE) loss is computed as follows:

L_bce-Q = -(1/K) · Σ_{k=1..K} [ P(H_k = 1)·log Ĥ_k + (1 - P(H_k = 1))·log(1 - Ĥ_k) ]   (6)

where P(H_k = 1) denotes the probability that the hash bit takes the value 1, Ĥ_k denotes the maximum-likelihood estimate of the k-th hash bit, H_k denotes the k-th bit of the hash quantization code H_T, and K denotes the code length;
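A hedged sketch of this quantization term: each bit's probability of binarizing to +1 is estimated from Gaussian likelihoods centered at m = +1 and m = -1, and binary cross entropy pulls the estimate toward the bit's sign; the σ default and the exact target convention are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def bce_quantization_loss(H: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    # Gaussian likelihood of each bit under the two binary targets +1 / -1 (Eq. (5)).
    g_pos = torch.exp(-((H - 1.0) ** 2) / (2 * sigma ** 2))
    g_neg = torch.exp(-((H + 1.0) ** 2) / (2 * sigma ** 2))
    p = g_pos / (g_pos + g_neg)          # estimated P(H_k = 1)
    target = (H >= 0).float()            # m = +1 when h >= 0, else m = -1
    return F.binary_cross_entropy(p, target)  # Eq. (6), averaged over bits
```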

Step 3: construct a distribution migration module to quantify the distribution migration loss. The distribution migration module uses the distribution center and dispersion of the hash quantization codes extracted by the student model to migrate the hash quantization codes extracted by the teacher model, obtaining the migrated hash quantization codes; by quantifying the difference between the migrated hash quantization codes and the hash quantization codes extracted by the teacher model, the distribution migration loss is obtained.

Existing deep hash networks consider only the transformation differences produced by data augmentation: they quantify the differences between transformed images through the teacher and student models, assisting code generation merely by directly quantifying the difference between the teacher's and student's hash quantization codes. However, when the distributions of the strongly and weakly transformed images differ greatly, the hash quantization codes obtained by directly quantifying them cannot fully exploit the distribution difference information between the two. How to fully exploit the distribution differences between strongly and weakly transformed images caused by data augmentation, and the intra-class and inter-class relationships between images, so that the network constructs hash codes carrying more image difference information, is therefore critical to improving retrieval performance. Accordingly, in the quantized code generation stage, the present invention uses a distribution migration module (Distribution Information Transformation Block, DIT-Block) to guide the hash quantization codes generated by the teacher model with the distribution information of the hash quantization codes extracted by the student model: the teacher's codes are migrated to obtain migrated hash quantization codes, and the distribution difference between the migrated codes and the teacher's codes is quantified to assist in extracting the difference between the student's and the teacher's hash quantization codes, thereby more fully exploiting the distribution difference information brought by the augmentation transformations. The DIT-Block makes the distribution differences produced by data augmentation receive more attention.

The inputs of the DIT-Block are the hash quantization codes extracted by the teacher and student models. Because the tanh activation constrains the range of the hash quantization codes and this range constraint causes a certain loss of information, the codes h_T and h_S output by the layer normalization layers are taken as the DIT-Block inputs. Assuming that the mean and variance of a hash quantization code represent its distribution center and dispersion respectively, the means and variances of h_T and h_S are first computed, and the distribution center and dispersion of h_S guide the migration of h_T, yielding the migrated hash quantization code, which serves as a constraint guidance term for quantifying the distribution difference between the teacher's and the student's hash quantization codes;

μ(x) = (1/(H·W)) · Σ_{h=1..H} Σ_{w=1..W} x_hw   (7)

σ(x) = (1/(H·W)) · Σ_{h=1..H} Σ_{w=1..W} (x_hw - μ(x))²   (8)

ĥ_T = (h_T - μ(h_T)) / (σ(h_T) + ∈)   (9)

h_T_S = σ(h_S) · ĥ_T + μ(h_S)   (10)

H_T_S = tanh(h_T_S)   (11)

where h_T_S denotes the migrated hash quantization code; μ(·) denotes the mean of a hash quantization code, i.e., its distribution center; σ(·) denotes the variance of a hash quantization code, i.e., its dispersion; x_hw denotes the value of the hash quantization code at coordinates (h, w); H and W denote the length and width of the hash quantization code; and ∈ is an offset value;

The DIT-Block thus realizes the distribution migration of the hash quantization code h_T, yielding the migrated code h_T_S; the tanh activation then applies the range constraint to h_T_S, yielding the hash quantization code H_T_S. The difference between H_T_S and H_T, i.e., the distribution migration loss, is quantified through cosine similarity as follows:

L_DIT = 1 - cos(H_T, H_T_S)   (12)
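The following is a minimal sketch of the DIT-Block and of Eq. (12); the statistics are computed over the code dimension of each sample, which for a 1-D code vector is the analog of the H×W statistics described above, and the function name and eps default are assumptions.

```python
import torch
import torch.nn.functional as F

def distribution_transfer_loss(h_t: torch.Tensor, h_s: torch.Tensor,
                               eps: float = 1e-5) -> torch.Tensor:
    # Distribution center (mean) and dispersion of teacher and student codes.
    mu_t, sd_t = h_t.mean(dim=1, keepdim=True), h_t.std(dim=1, keepdim=True)
    mu_s, sd_s = h_s.mean(dim=1, keepdim=True), h_s.std(dim=1, keepdim=True)
    # Migrate the teacher code with the student's statistics (Eqs. (7)-(10)).
    h_ts = sd_s * (h_t - mu_t) / (sd_t + eps) + mu_s
    H_ts = torch.tanh(h_ts)              # range constraint, Eq. (11)
    H_t = torch.tanh(h_t)
    # L_DIT = 1 - cos(H_T, H_T_S), Eq. (12)
    return (1.0 - F.cosine_similarity(H_t, H_ts, dim=1)).mean()
```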

Step 4: construct a frequency component extraction module to quantify the frequency component loss.

Data augmentation subjects the images to certain transformations. When quantizing codes, current image retrieval networks usually consider only the spatial-domain information differences between codes, ignoring their frequency-domain information, and therefore cannot fully exploit the relative change information produced by image transformations to capture the relative differences between image pairs. The present invention therefore extracts the frequency-domain information of the codes through a frequency component extraction module (FCE-Block), and attends to the frequency-domain information difference between the hash quantization code H_T extracted by the teacher model and the hash quantization code H_S extracted by the student model, so as to capture the influence that the relative transformation relationship produced by data augmentation exerts on the hash coding process, thereby improving retrieval accuracy;

The FCE-Block extracts the frequency-domain information of the hash quantization code through the fast Fourier transform, converting the spatial-domain representation of the image code into a frequency-domain representation;

F(x)(u, v) = Σ_{h=0..H-1} Σ_{w=0..W-1} x(h, w) · e^(-j2π(uh/H + vw/W))   (13)

where x denotes the hash quantization code input to the fast Fourier transform, F(x)(u, v) is the information of the hash quantization code at frequency-domain coordinates (u, v), (h, w) denotes the spatial-domain coordinates of the hash quantization code, x(h, w) denotes the value of the hash quantization code at spatial coordinates (h, w), and (u, v) denotes the frequency-domain coordinates of the hash quantization code;

The fast Fourier transform converts the original representation from the spatial domain to the frequency domain, after which phase analysis through the arctangent transform extracts the frequency components;

PH = arctan( I(x′)(u, v) / R(x′)(u, v) )   (14)

where PH denotes the frequency components of the hash quantization code, R(x′)(u, v) is the real part of the hash quantization code x at frequency-domain coordinates (u, v), and I(x′)(u, v) is the imaginary part of the hash quantization code x at frequency-domain coordinates (u, v);

The hash quantization codes H_T and H_S pass through the frequency component extraction module to obtain the frequency components PH_T and PH_S; the similarity between the frequency components is quantified through cosine similarity, attending to the frequency-domain information difference between the codes extracted by the teacher model and the student model and making fuller use of the relative transformation relationship that the augmentation transformations impose on the images. The frequency component loss is computed as:

L_ph = 1 - cos(PH_T, PH_S)   (15)

where L_ph is the frequency component loss; the closer the cosine similarity is to 1 (i.e., the smaller the loss), the more similar or correlated the two frequency components are;
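A minimal sketch of the FCE-Block and Eq. (15) follows; the code is viewed as an H×W map (the reshape to 8×8 assumes a 64-bit code), the 2D FFT supplies Eq. (13), and the phase is taken as the arctangent of the imaginary over the real part per Eq. (14).

```python
import torch
import torch.nn.functional as F

def frequency_component_loss(H_t: torch.Tensor, H_s: torch.Tensor,
                             h: int = 8, w: int = 8) -> torch.Tensor:
    def phase(H: torch.Tensor) -> torch.Tensor:
        x = H.view(H.size(0), h, w)          # spatial-domain code map (assumed shape)
        X = torch.fft.fft2(x)                # fast Fourier transform, Eq. (13)
        return torch.atan2(X.imag, X.real)   # frequency components PH, Eq. (14)
    ph_t = phase(H_t).flatten(1)
    ph_s = phase(H_s).flatten(1)
    # L_ph = 1 - cos(PH_T, PH_S), Eq. (15)
    return (1.0 - F.cosine_similarity(ph_t, ph_s, dim=1)).mean()
```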

Step 5: construct the objective optimization function.

Combining the hash proxy loss L_HP, the self-distillation difference quantization loss L_Sdh, the binary cross-entropy loss L_bce-Q, the distribution migration loss L_DIT, and the frequency component loss L_ph yields the objective optimization function:

L = (1/N_B) · Σ ( L_HP + λ1·L_Sdh + λ2·L_bce-Q + λ3·L_DIT + λ4·L_ph )   (16)

where N_B is the total number of samples and λ1, λ2, λ3, λ4 are weights. In this embodiment λ1 and λ2 are set to 0.1; when the feature extraction network adopts ResNet50, λ3 is set to 1, and when it adopts AlexNet, λ3 is set to 0.7. When the frequency-domain differences of the images in the dataset are small, for different code lengths the balance between frequency-domain and spatial-domain quantization is achieved by adjusting λ4, so that the retrieval effect is optimal;
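The combination of Eq. (16) then reduces to a weighted sum; in the sketch below, the pairing of λ1-λ4 with the individual terms follows the order in which the losses are listed above and is a plausible reading rather than a confirmed one, and the λ4 default is illustrative.

```python
def total_loss(L_hp, L_sdh, L_bce, L_dit, L_ph,
               lam1: float = 0.1, lam2: float = 0.1,
               lam3: float = 1.0, lam4: float = 0.5):
    # Eq. (16): weighted sum of the five losses, averaged per mini-batch upstream.
    return L_hp + lam1 * L_sdh + lam2 * L_bce + lam3 * L_dit + lam4 * L_ph
```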

The deep hash network is trained on the dataset using mini-batch samples; each sample consists of an input image x_i and its corresponding label y_i. The original images of the dataset are augmented to obtain strongly and weakly transformed images, which are input into the student model and the teacher model respectively; the training loss is measured through the objective optimization function until the loss converges, yielding the trained student and teacher models;

Input the image to be queried into the trained student model or teacher model, and output the retrieved images, completing the image retrieval.
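As a usage sketch, retrieval with the trained model amounts to binarizing codes and ranking the database by Hamming distance, as the background describes; the helper below assumes the HashEncoder sketched earlier and precomputed database codes.

```python
import torch

@torch.no_grad()
def retrieve(model, query_img: torch.Tensor, db_codes: torch.Tensor, topk: int = 10):
    _, H = model(query_img.unsqueeze(0))     # trained student or teacher model
    q = torch.sign(H)                        # binary query code in {-1, +1}
    b = torch.sign(db_codes)                 # binarized database codes, shape (N, K)
    # For {-1, +1} codes, Hamming distance = (K - <q, b>) / 2.
    hamming = (q.size(1) - q @ b.t()) / 2
    return hamming.squeeze(0).argsort()[:topk]   # indices of the best matches
```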

Aspects not described in the present invention follow the prior art.

Claims (4)

1. A distribution difference adaptive image retrieval method based on space-frequency interaction, characterized in that the method comprises the following steps:

Step 1: obtain original images for training and perform data augmentation on them to obtain strongly transformed and weakly transformed images;

Step 2: construct a deep hash network; input the strongly and weakly transformed images into the student model and the teacher model of the deep hash network respectively to obtain the hash quantization codes extracted by each model; from the hash quantization codes extracted by the student model and the teacher model, obtain the self-distillation difference quantization loss L_Sdh, the hash proxy loss L_HP, and the binary cross-entropy loss L_bce-Q:

L_Sdh = 1 - cos(H_T, H_S)   (1)

L_HP = H(y, Softmax(P_T / T))   (4)

where H_T and H_S denote the hash quantization codes extracted by the teacher model and the student model, P_T is the proxy sample, T is the temperature-scale hyperparameter, H(·) denotes the quantified error between the true category label of the image and the predicted category label, y denotes the true category label sequence of the image, P(H_k = 1) denotes the probability that a hash bit takes the value 1, Ĥ_k denotes the maximum-likelihood estimate of the k-th hash bit, H_k denotes the k-th bit of the hash quantization code H_T, and K denotes the code length;

Step 3: construct a distribution migration module; the distribution migration module uses the distribution center and dispersion of the hash quantization codes extracted by the student model to migrate the hash quantization codes extracted by the teacher model, obtaining the migrated hash quantization codes; by quantifying the difference between the migrated hash quantization codes and the hash quantization codes extracted by the teacher model, the distribution migration loss is obtained; the distribution migration loss L_DIT is expressed as:

L_DIT = 1 - cos(H_T, H_T_S)   (12)

where H_T_S is the hash quantization code after range-constrained distribution migration;

Step 4: construct a frequency component extraction module; the frequency component extraction module extracts the frequency-domain information of the hash quantization codes through the fast Fourier transform, and then extracts the frequency components of the hash quantization codes through the arctangent transform;

F(x)(u, v) = Σ_{h=0..H-1} Σ_{w=0..W-1} x(h, w) · e^(-j2π(uh/H + vw/W))   (13)

where x denotes the hash quantization code input to the fast Fourier transform, F(x)(u, v) is the information of the hash quantization code at frequency-domain coordinates (u, v), (h, w) denotes the spatial-domain coordinates of the hash quantization code, x(h, w) denotes the value of the hash quantization code at spatial coordinates (h, w), and H and W denote the length and width of the hash quantization code;

PH = arctan( I(x′)(u, v) / R(x′)(u, v) )   (14)

where PH denotes the frequency components of the hash quantization code, R(x′)(u, v) is the real part of the hash quantization code at frequency-domain coordinates (u, v), and I(x′)(u, v) is the imaginary part of the hash quantization code at frequency-domain coordinates (u, v);

the frequency component loss L_ph is expressed as:

L_ph = 1 - cos(PH_T, PH_S)   (15)

where PH_T denotes the frequency components of the hash quantization code H_T, and PH_S denotes the frequency components of the hash quantization code H_S;

Step 5: construct an objective optimization function, train the student model and the teacher model, and measure the training loss through the objective optimization function; the objective optimization function is:

L = (1/N_B) · Σ ( L_HP + λ1·L_Sdh + λ2·L_bce-Q + λ3·L_DIT + λ4·L_ph )   (16)

where N_B is the total number of samples and λ1, λ2, λ3, λ4 are weights;

input the image to be queried into the trained student model or teacher model, and output the retrieved images.

2. The distribution difference adaptive image retrieval method based on space-frequency interaction according to claim 1, characterized in that in Step 3 the hash quantization code H_T_S is obtained from the migrated hash quantization code h_T_S through the tanh activation function, and the migrated hash quantization code h_T_S is expressed as:

h_T_S = σ(h_S) · (h_T - μ(h_T)) / (σ(h_T) + ∈) + μ(h_S)

where μ(·) denotes the mean of a hash quantization code, characterizing its distribution center; σ(·) denotes the variance of a hash quantization code, characterizing its dispersion; and h_T is the hash quantization code output by the layer normalization layer of the teacher model.

3. The distribution difference adaptive image retrieval method based on space-frequency interaction according to claim 1 or 2, characterized in that the student model and the teacher model each comprise a feature extraction network and a code generation network; the feature extraction network adopts ResNet50 or AlexNet, and the code generation network comprises a fully connected layer, a layer normalization layer, and a tanh activation function.

4. The distribution difference adaptive image retrieval method based on space-frequency interaction according to claim 3, characterized in that in Step 1 the data augmentation includes random cropping, horizontal flipping, Gaussian blur, and brightness, contrast, and saturation transformations.
CN202311424869.4A 2023-10-31 2023-10-31 Distribution difference adaptive image retrieval method based on space-frequency interaction Pending CN117370594A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311424869.4A CN117370594A (en) Distribution difference adaptive image retrieval method based on space-frequency interaction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311424869.4A CN117370594A (en) Distribution difference adaptive image retrieval method based on space-frequency interaction

Publications (1)

Publication Number Publication Date
CN117370594A (en) 2024-01-09

Family

ID=89400187

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311424869.4A Pending CN117370594A (en) 2023-10-31 2023-10-31 Distributed difference self-adaptive image retrieval method based on space-frequency interaction

Country Status (1)

Country Link
CN (1) CN117370594A (en)

Similar Documents

Publication Publication Date Title
CN115759092A (en) Network threat information named entity identification method based on ALBERT
CN112733965A (en) Label-free image classification method based on small sample learning
CN112036511B (en) Image retrieval method based on attention mechanism graph convolutional neural network
CN113420775A (en) Image classification method under extremely small quantity of training samples based on adaptive subdomain field adaptation of non-linearity
CN114911967B (en) A 3D model sketch retrieval method based on adaptive domain enhancement
CN113836319B (en) Knowledge Completion Method and System for Integrating Entity Neighbors
CN114972904A (en) Zero sample knowledge distillation method and system based on triple loss resistance
CN113837290A (en) Unsupervised unpaired image translation method based on attention generator network
CN118260630A (en) A multimodal small sample electromagnetic signal classification method and device based on self-supervised learning
CN117150068A (en) Cross-modal retrieval method and system based on self-supervision comparison learning concept alignment
CN118656511A (en) A multimodal face retrieval method based on generative language model
López-Cifuentes et al. Attention-based knowledge distillation in scene recognition: The impact of a DCT-driven loss
CN111291705A (en) A cross-multi-object domain person re-identification method
CN116681128A (en) A neural network model training method and device for noisy multi-label data
CN116343294A (en) A person re-identification method suitable for domain generalization
CN104809468A (en) Multi-view classification method based on indefinite kernels
Shang et al. Cross-modal dual subspace learning with adversarial network
CN107291813A (en) Exemplary search method based on semantic segmentation scene
CN110705384A (en) Vehicle re-identification method based on cross-domain migration enhanced representation
CN117370594A (en) Distribution difference adaptive image retrieval method based on space-frequency interaction
CN117912597A (en) Molecular toxicity prediction method based on global attention mechanism
CN116756363A (en) Strong-correlation non-supervision cross-modal retrieval method guided by information quantity
CN115982355A (en) Emotion analysis method, system and equipment based on deep learning
CN116310407A (en) A semantic extraction method for heterogeneous data for multi-dimensional business of power distribution and utilization
CN115731600A (en) Small target detection method based on semi-supervision and feature fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination