CN112488231A - Cosine-metric supervised deep hashing algorithm with balanced similarity - Google Patents

Cosine-metric supervised deep hashing algorithm with balanced similarity

Info

Publication number
CN112488231A
CN112488231A (application CN202011443669.XA)
Authority
CN
China
Prior art keywords
similarity
hash
image
loss
cosine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011443669.XA
Other languages
Chinese (zh)
Inventor
毋立芳
陈禹锟
胡文进
简萌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology
Priority to CN202011443669.XA
Publication of CN112488231A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

A cosine-metric supervised deep hashing algorithm with balanced similarity, belonging to the field of image retrieval. Deep supervised hashing offers low storage cost, high computational efficiency, and related advantages. However, similarity preservation, quantization error, and unbalanced data remain significant challenges in deep supervised hashing. The invention provides a deep hashing scheme that preserves pairwise similarity to address these problems. The method uses a deep network as the base model for feature extraction and replaces the final classification layer with a hash layer that outputs the hash code. A loss function is designed that effectively preserves semantic similarity while handling class imbalance, hard/easy sample imbalance, and quantization loss during training. When the resulting hash codes are used for image retrieval, retrieval accuracy is effectively improved even on extremely unbalanced datasets.

Description

Cosine-metric supervised deep hashing algorithm with balanced similarity
Technical Field
The invention relates to the field of image retrieval, and in particular to a cosine-metric supervised deep hashing algorithm with balanced similarity.
Background
With the rapid development of multimedia processing technology, large-scale image search has become widespread in daily life. As one of the most effective techniques, hashing is receiving increasing attention from academia and industry because of its storage and computation advantages. Its purpose is to map high-dimensional images to compact binary codes while preserving image correlations. In particular, deep supervised hashing can improve retrieval performance by integrating image feature learning and hash coding through end-to-end learning.
Hash embedding is typically formulated as a discrete optimization, a standard NP-hard problem. Most deep supervised hashing methods therefore adopt continuous relaxation, replacing the discrete codes with approximate continuous embeddings. This process inevitably introduces approximation and quantization errors. Several existing deep supervised hashing methods use continuous relaxation with a squared loss or inner-product loss to maintain semantic similarity between the original space and Hamming space. Deep Supervised Hashing (DSH) measures sample similarity with a contrastive loss on Euclidean distance and applies a regularizer to the real-valued network output to approximate the desired hash codes. The Deep Quantization Network (DQN) preserves similarity with a pairwise cosine loss and reduces quantization error with a product quantization loss. With an inner-product cross-entropy loss, the inner product is likely to fall into the saturation region of the sigmoid function, possibly resulting in vanishing gradients. Most Euclidean or inner-product based metrics are strongly affected by the magnitude of the visual representation, leading to undesirable approximation and quantization errors. A cosine metric, by contrast, is not affected by feature magnitude when measuring image similarity.
In addition, in the supervised similarity matrix, similar image pairs are far fewer than dissimilar ones. This leads to positive/negative (pos/neg) and hard/easy imbalance problems. Pos/neg refers to the imbalance between positive and negative sample pairs; under it, network optimization is dominated by the majority samples. On the other hand, a sample pair is hard (easy) if it is difficult (easy) to push apart or pull together under a general discrimination strategy. Easy sample pairs contribute small gradients during training, so the model benefits little from them; hard sample pairs deserve more attention. In general, however, easy pairs greatly outnumber hard pairs. Although a single easy pair contributes less to the global gradient than a hard pair, the accumulated contribution of many easy pairs may exceed that of the few hard pairs. This slows convergence, may trap the model in local optima, and can even degrade retrieval performance to some extent. In this regard, HashNet and HashGAN address the data imbalance problem by weighting training pairs according to the importance of misclassifying each pair. Focal loss was proposed to address the severe pos/neg and hard/easy imbalance in single-stage object detection. Inspired by it, CMHH designs an exponential focal loss and an exponential quantization loss under a Bayesian learning framework to address hard/easy imbalance.
Disclosure of Invention
To solve the above problems, we invented a deep hashing algorithm that combines pairwise cosine similarity preservation with cosine-distance-entropy quantization. It preserves the original semantic distribution and reduces quantization error. Meanwhile, a weighted similarity measure with cosine metric entropy is designed, which adaptively reduces the impact of unbalanced data on similarity-preserving embedding.
The method comprises the following specific steps:
step 1, establishing a similarity matrix S of an image pair and image preprocessing: and regarding the images of the same class of the data set as similar images, and regarding the images of different classes as dissimilar images, so as to obtain a similarity matrix S. And preprocessing the input image, wherein the preprocessing step follows the unified setting of the current depth hash algorithm.
Step 2, build the deep network model: a deep network model is selected (AlexNet in this embodiment), and the last classification layer is replaced with a hash layer to output the hash code.
Step 3, design the loss function: to retain the similarity information of image pairs while addressing data imbalance, hard/easy imbalance, and quantization loss, the loss function comprises a pairwise similarity loss and a quantization loss on the generated hash codes.
Step 4, train the model: the preprocessed images are fed into the network model, the loss function is computed pairwise over the hash-layer outputs of each batch, and the model parameters are learned with stochastic gradient descent.
Step 5, generate image hash codes: database images are fed into the model trained in step 4 to obtain the corresponding hash codes.
Compared with the prior art, the invention has the following substantive features and technical advantages:
the invention provides a deep hashing algorithm that combines pairwise cosine similarity preservation with cosine-distance-entropy quantization. It preserves the original semantic distribution and reduces quantization error. In addition, the invention designs a weighted similarity measure with cosine metric entropy, which adaptively reduces the impact of unbalanced data on similarity-preserving embedding and alleviates the hard/easy sample imbalance problem.
Drawings
FIG. 1 is a flow chart of the algorithm of the present invention.
Fig. 2 is a diagram of a network architecture of the present invention.
Detailed Description
The invention provides a cosine-metric supervised deep hashing algorithm with balanced similarity. The overall structure of the invention is shown in Fig. 1. The embodiment is simulated in a Windows 10 and MATLAB environment. The concrete implementation steps are as follows:
step 1: and establishing a similarity matrix S of the image pair and image preprocessing, and regarding label images with the same category in the image training set as similar images and regarding completely different categories of images as dissimilar images. Image preprocessing follows the unified setting of the current depth hash algorithm.
Step 2: build the deep network model. AlexNet is adopted in this embodiment; the last classification layer is removed and a hash layer is added to produce the hash code. The hash layer is a fully connected layer whose output dimension equals the hash code length. No tanh activation is applied to constrain the hash layer.
Step 3: design the loss function. The loss is designed to retain the similarity information of image pairs while addressing data imbalance, hard/easy imbalance, and quantization loss; it comprises a pairwise similarity loss and a quantization loss on the generated hash codes.
Step 4: train the model. The preprocessed images are fed into the network model in batches, the loss function of step 3 is computed pairwise over the hash-layer outputs of each batch, and model parameters are learned with stochastic gradient descent.
Step 5: generate image hash codes. Database images are fed into the model trained in step 4 to obtain the corresponding hash codes.
In step 1, the similarity matrix S of image pairs is built. Following the settings commonly used by current deep hashing algorithms, image pairs in the training set that share a category are regarded as similar. When image x_i and image x_j are similar, their similarity label s_ij = 1; when they are dissimilar, s_ij = 0. For a single-label dataset, images of the same category are similar and images of different categories are dissimilar. For a multi-label dataset, images x_i and x_j are considered similar (s_ij = 1) if they share at least one category, and dissimilar (s_ij = 0) if they share none. Image preprocessing also follows the common settings: training images are uniformly scaled to 256x256 pixels, randomly cropped to 224x224 pixels, randomly flipped, and normalized in the standard way; test-set and database images are uniformly scaled to 256x256 pixels, center-cropped to 224x224 pixels, and normalized in the standard way.
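As an illustration of step 1 (not part of the original patent text), the following PyTorch-style sketch builds the similarity matrix S from one-hot or multi-hot label vectors and defines the preprocessing described above; all names are illustrative, and the normalization constants are the usual ImageNet values, which the patent does not specify.

```python
import torch
from torchvision import transforms

def similarity_matrix(labels: torch.Tensor) -> torch.Tensor:
    """labels: (N, C) one-hot or multi-hot label matrix.
    s_ij = 1 if images i and j share at least one category, else 0."""
    l = labels.float()
    return (l @ l.t() > 0).float()

# Training preprocessing: scale, random crop, random flip, normalize.
train_transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.RandomCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Test/database preprocessing: scale, center crop, normalize.
test_transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```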
In step 2, AlexNet is used in this embodiment. AlexNet is a standard baseline network model. The last classification layer is removed and a hash layer is added to produce the hash code. The hash layer is a fully connected layer whose output dimension equals the hash code length. Because the invention uses a cosine metric, no tanh activation is needed after the hash layer as a constraint. The invention places no special restriction on the choice of baseline network.
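A minimal sketch of the step 2 model, assuming a recent torchvision; the class name and bit width are illustrative, and only the replacement of AlexNet's final classification layer with a linear hash layer follows the patent.

```python
import torch.nn as nn
from torchvision import models

class CosineHashNet(nn.Module):
    """AlexNet backbone with the last classification layer replaced
    by a K-bit hash layer; no tanh constraint, per the description."""
    def __init__(self, k_bits: int = 32):
        super().__init__()
        backbone = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
        self.features = backbone.features
        self.avgpool = backbone.avgpool
        # All classifier layers except the final 1000-way layer.
        self.classifier = nn.Sequential(*list(backbone.classifier.children())[:-1])
        self.hash_layer = nn.Linear(4096, k_bits)

    def forward(self, x):
        x = self.features(x)
        x = self.avgpool(x).flatten(1)
        x = self.classifier(x)
        return self.hash_layer(x)  # continuous codes u_i
```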
In step 3, the loss function comprises two terms: the cosine similarity loss and the cosine-distance-entropy quantization loss.
Cosine similarity loss:
[Equation image in the original: the cosine similarity loss L_s, a sum over all image pairs (i, j) of the balance weight w_ij times a cosine-metric-entropy term built from cos(u_i, u_j), the similarity label s_ij, the margin m, the exponent γ, and the constant eps.]
where i and j index images i and j, and

cos(u_i, u_j) = <u_i, u_j> / (||u_i|| · ||u_j||)

is the cosine similarity measure. K is the length of the desired hash code, commonly 16, 32, 48, or 64; u_i and u_j are output vectors of the hash layer; <u_i, u_j> denotes the inner product and ||u_i|| the vector norm. m is a margin threshold parameter with value range [-1, +1]; the invention recommends m = 0. s_ij is the similarity label of images i and j obtained in step 1. eps = 10^-7 is a constant that prevents the logarithm from reaching negative infinity. w_ij is the similarity balance weight of a sample pair: within each batch, for a similar pair it is the total number of sample pairs divided by the number of similar pairs, and for a dissimilar pair it is the total number of sample pairs divided by the number of dissimilar pairs. Thus w_ij addresses the quantity imbalance between similar and dissimilar pairs. γ is the hyper-parameter of the cosine metric entropy used to address the hard/easy sample problem; it takes values in the range 1 to 3, with 2 recommended by the invention.
The cosine similarity loss uses pairwise cosine similarity to constrain the hash codes so that similarity is preserved: images with similar semantics are encoded closer together and images with dissimilar semantics farther apart. In Hamming space, cosine similarity is equivalent to inner-product similarity, and since the range of cosine similarity is (-1, +1), no tanh activation is needed as a constraint.
To address the class imbalance and hard/easy sample problems, a weighted similarity measure with cosine metric entropy is embedded into the cosine similarity loss to reduce the influence of data imbalance. The cosine metric entropy assigns larger weights to hard samples so that the network concentrates on them, and the hyper-parameter γ controls how quickly this entropy term scales.
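Because the patent renders its loss formulas as images, the exact functional form is not recoverable here. The sketch below is therefore only an assumed, focal-style combination of the ingredients the description names (pairwise cosine similarity, margin m, balance weights w_ij, exponent γ, and eps); it is not the patent's equation.

```python
import torch
import torch.nn.functional as F

def cosine_similarity_loss(u, s, m=0.0, gamma=2.0, eps=1e-7):
    """u: (B, K) hash-layer outputs; s: (B, B) 0/1 similarity labels."""
    un = F.normalize(u, dim=1)
    cos = un @ un.t()                      # pairwise cosine, in (-1, +1)
    p = (1 + cos) / 2                      # mapped to (0, 1)
    n = s.numel()
    n_sim = s.sum().clamp(min=1)
    # Balance weights w_ij: total pairs / similar pairs for similar pairs,
    # total pairs / dissimilar pairs for dissimilar pairs.
    w = torch.where(s > 0, n / n_sim, n / (n - n_sim).clamp(min=1))
    # Focal-style cosine metric entropy: hard pairs get larger weight.
    pos = -((1 - p) ** gamma) * torch.log(p + eps)      # pull similar pairs
    neg = -(p ** gamma) * torch.log(1 - p + eps)        # push dissimilar pairs
    neg = neg * (cos > m).float()          # margin: skip well-separated pairs
    return (w * torch.where(s > 0, pos, neg)).mean()
```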
Cosine distance entropy quantization loss:
[Equation image in the original: the cosine-distance-entropy quantization loss L_Q, defined for each sample from the cosine distance between the continuous output u_i and its discrete binarization b_i.]
where b_i is the discrete binarization of u_i.
The cosine-distance-entropy quantization loss controls the quantization error of the continuous relaxation and reduces the gap between cosine similarity and Hamming distance. Adding the cosine distance entropy to the quantization loss lets the quantization term converge faster according to the distance between the current real-valued hash output and the discrete hash code.
The final loss function is:
L = L_s + α·L_Q
where α is the hyper-parameter of the quantization loss term.
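Continuing the sketch above, an equally hedged illustration of the quantization term and total loss; the cosine-distance form of L_Q is an assumption based on the description, with b_i = sign(u_i).

```python
def quantization_loss(u, eps=1e-7):
    """Push each continuous code u_i toward its binarization b_i = sign(u_i)."""
    b = torch.sign(u)
    cos = F.cosine_similarity(u, b, dim=1)  # approaches 1 as u_i becomes binary
    return (-torch.log((1 + cos) / 2 + eps)).mean()

def total_loss(u, s, alpha=100.0):
    """L = L_s + alpha * L_Q, with alpha = 100 as recommended."""
    return cosine_similarity_loss(u, s) + alpha * quantization_loss(u)
```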
In step 4, the images are fed into the network in batches, and hash code generation is supervised by the similarity information of each batch's image pairs. The loss function is computed pairwise over the hash-layer outputs of each batch. Model parameters are learned with stochastic gradient descent, using a batch size of 64 and weight decay of 5×10^-4. The initial learning rate is typically between 0.01 and 0.0001; in this example it is 0.001. γ takes values between 1 and 3, with 2 recommended in this example; α may be taken as 50, 100, 150, or 200, with 100 recommended. The learning rate is reduced by a factor of 10 every 50 iteration epochs. The network is trained until the loss oscillates without further significant decrease, typically about 150 epochs, and the model parameters are saved after training.
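A training-loop sketch under the stated settings (batch size 64, SGD, weight decay 5×10⁻⁴, learning rate 0.001 reduced tenfold every 50 epochs, about 150 epochs); the data loader, momentum value, and file name are assumptions, and the helper functions come from the sketches above.

```python
import torch.optim as optim

def train(model, loader, epochs=150, lr=1e-3):
    opt = optim.SGD(model.parameters(), lr=lr,
                    momentum=0.9, weight_decay=5e-4)   # momentum value assumed
    sched = optim.lr_scheduler.StepLR(opt, step_size=50, gamma=0.1)
    for _ in range(epochs):
        for images, labels in loader:                  # batches of 64
            u = model(images)                          # hash-layer outputs
            s = similarity_matrix(labels)              # in-batch pair labels
            loss = total_loss(u, s)
            opt.zero_grad()
            loss.backward()
            opt.step()
        sched.step()
    torch.save(model.state_dict(), "cosine_hash.pth")
```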
In step 5, the database images are passed through the model trained in step 4 and the outputs are binarized with the sign function to obtain discrete hash codes. At retrieval time, a query image is fed into the model to obtain its hash code, which is compared with the hash codes in the database, and database images are returned in order of increasing Hamming distance.
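A retrieval sketch for step 5, reusing the imports above: codes are binarized with the sign function, and for ±1 codes of length K the Hamming distance equals (K − <q, d>)/2, so ranking needs only a matrix product. Function names are illustrative.

```python
def encode(model, images):
    """Binarize hash-layer outputs with the sign function."""
    with torch.no_grad():
        return torch.sign(model(images))

def retrieve(query_code, db_codes):
    """Return database indices sorted by increasing Hamming distance."""
    k = query_code.numel()
    hamming = (k - db_codes @ query_code) / 2
    return torch.argsort(hamming)      # nearest first
```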

Claims (6)

1. A cosine-metric supervised deep hashing algorithm with balanced similarity, characterized by comprising the following steps:
step 1, build a similarity matrix S of image pairs and preprocess the images: regard images of the same class in the dataset as similar and images of different classes as dissimilar, obtaining the similarity matrix S; data preprocessing adopts the settings commonly used by current deep hashing algorithms;
step 2, build a deep network model: select a deep network model, using AlexNet, and replace the last classification layer with a hash layer to obtain the hash code;
step 3, design a loss function: to retain the similarity information of image pairs and address data imbalance, hard/easy imbalance, and quantization loss, the loss function comprises a pairwise similarity loss and a quantization loss on the generated hash codes;
step 4, train the model: feed the preprocessed images into the network model, compute the loss function pairwise over the hash-layer outputs of each batch, and learn model parameters with stochastic gradient descent;
step 5, generate image hash codes: feed the database images into the model trained in step 4 to obtain the corresponding hash codes.
2. The method of claim 1, wherein: in step 1, the similarity matrix of image pairs is built as follows: images of the same class in the training set are regarded as similar; when image x_i and image x_j are similar, their similarity label s_ij = 1; when they are dissimilar, s_ij = 0; data preprocessing adopts the settings commonly used by current deep hashing algorithms, specifically scaling, cropping, flipping, and standard normalization.
3. The method of claim 1, wherein: in step 2, AlexNet is used; its classification layer is removed and a hash layer is added to generate the hash code.
4. The method of claim 1, wherein: in step 3, the loss function comprises two losses, namely the cosine similarity loss and the cosine-distance-entropy quantization loss;
4.1 cosine similarity loss
the pairwise cosine similarity is used to constrain the hash codes so that similarity is preserved: images with similar semantics are encoded closer together and images with dissimilar semantics farther apart; to address the class imbalance and hard/easy sample problems, a weighted similarity measure with cosine metric entropy is embedded into the cosine similarity loss to reduce the influence of data imbalance; the cosine similarity loss is:
[Equation image in the original: the cosine similarity loss L_s, a sum over all image pairs (i, j) of the balance weight w_ij times a cosine-metric-entropy term built from cos(u_i, u_j), the similarity label s_ij, the margin m, the exponent γ, and the constant eps.]
where i and j index images i and j, and

cos(u_i, u_j) = <u_i, u_j> / (||u_i|| · ||u_j||)

is the cosine similarity measure; K is the length of the desired hash code, commonly 16, 32, 48, or 64; u_i and u_j are output vectors of the hash layer; <u_i, u_j> denotes the inner product and ||u_i|| the vector norm; m is a margin threshold parameter with value 0; s_ij is the similarity label of images i and j obtained in step 1; eps = 10^-7 is a constant preventing negative infinity; w_ij is the similarity balance weight of a sample pair: within each batch, for a similar pair it is the total number of sample pairs divided by the number of similar pairs, and for a dissimilar pair it is the total number of sample pairs divided by the number of dissimilar pairs; thus w_ij addresses the quantity imbalance between similar and dissimilar pairs; γ is the hyper-parameter of the cosine metric entropy used to address the hard/easy sample problem, with value 2;
4.2 cosine distance entropy quantization loss is:
[Equation image in the original: the cosine-distance-entropy quantization loss L_Q, defined for each sample from the cosine distance between the continuous output u_i and its discrete binarization b_i.]
where b_i is the discrete binarization of u_i;
The loss function is then:
L = L_s + α·L_Q
where α is the hyper-parameter of the quantization loss term; α is taken as 100.
5. The method according to claim 1, wherein in step 4, using the deep neural network of step 2, the preprocessed images are input in batches, the loss function of step 3 is computed pairwise over the hash-layer outputs of each batch, the model is trained with stochastic gradient descent, and the trained model is saved.
6. The method according to claim 1, wherein in step 5, using the network parameters obtained in step 4, the database images are input into the network and the binarized hash-layer outputs are the hash codes; at retrieval time, a query image is fed into the model to obtain its hash code, which is compared with the hash codes in the database, and database images are returned in order of increasing Hamming distance.
CN202011443669.XA 2020-12-11 2020-12-11 Cosine-metric supervised deep hashing algorithm with balanced similarity Pending CN112488231A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011443669.XA CN112488231A (en) 2020-12-11 2020-12-11 Cosine-metric supervised deep hashing algorithm with balanced similarity

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011443669.XA CN112488231A (en) 2020-12-11 2020-12-11 Cosine-metric supervised deep hashing algorithm with balanced similarity

Publications (1)

Publication Number Publication Date
CN112488231A (en) 2021-03-12

Family

ID=74941243

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011443669.XA Pending CN112488231A (en) 2020-12-11 2020-12-11 Cosine-metric supervised deep hashing algorithm with balanced similarity

Country Status (1)

Country Link
CN (1) CN112488231A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113204522A (en) * 2021-07-05 2021-08-03 Ocean University of China Large-scale data retrieval method based on a hashing algorithm combined with a generative adversarial network
CN114547354A (en) * 2022-02-15 2022-05-27 South China Normal University Deep hashing method based on function-adaptive mapping
CN114564610A (en) * 2022-01-14 2022-05-31 Xiamen University of Technology Semi-supervised central product quantization retrieval method
CN116070277A (en) * 2023-03-07 2023-05-05 Zhejiang University Vertical federated learning privacy protection method and system based on deep hashing

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104657350A (en) * 2015-03-04 2015-05-27 Institute of Automation, Chinese Academy of Sciences Hash learning method for short text integrating latent semantic features
CN107679250A (en) * 2017-11-01 2018-02-09 Zhejiang University of Technology Multi-task hierarchical image retrieval method based on deep auto-encoding convolutional neural networks
CN109063112A (en) * 2018-07-30 2018-12-21 Chengdu Kuaiyan Technology Co., Ltd. Fast image retrieval method, model, and model construction method based on multi-task learning deep semantic hashing
CN110059807A (en) * 2019-04-26 2019-07-26 Tencent Technology (Shenzhen) Co., Ltd. Image processing method, device, and storage medium
CN110309333A (en) * 2019-05-28 2019-10-08 Beijing University of Technology Deep hashing image retrieval method based on cosine metric

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104657350A (en) * 2015-03-04 2015-05-27 Institute of Automation, Chinese Academy of Sciences Hash learning method for short text integrating latent semantic features
CN107679250A (en) * 2017-11-01 2018-02-09 Zhejiang University of Technology Multi-task hierarchical image retrieval method based on deep auto-encoding convolutional neural networks
CN109063112A (en) * 2018-07-30 2018-12-21 Chengdu Kuaiyan Technology Co., Ltd. Fast image retrieval method, model, and model construction method based on multi-task learning deep semantic hashing
CN110059807A (en) * 2019-04-26 2019-07-26 Tencent Technology (Shenzhen) Co., Ltd. Image processing method, device, and storage medium
CN110309333A (en) * 2019-05-28 2019-10-08 Beijing University of Technology Deep hashing image retrieval method based on cosine metric

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FENG Xingjie; CHENG Yiwei: "Image retrieval based on deep convolutional neural networks and hashing", Computer Engineering and Design, no. 03 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113204522A (en) * 2021-07-05 2021-08-03 Ocean University of China Large-scale data retrieval method based on a hashing algorithm combined with a generative adversarial network
CN113204522B (en) * 2021-07-05 2021-09-24 Ocean University of China Large-scale data retrieval method based on a hashing algorithm combined with a generative adversarial network
CN114564610A (en) * 2022-01-14 2022-05-31 Xiamen University of Technology Semi-supervised central product quantization retrieval method
CN114547354A (en) * 2022-02-15 2022-05-27 South China Normal University Deep hashing method based on function-adaptive mapping
CN116070277A (en) * 2023-03-07 2023-05-05 Zhejiang University Vertical federated learning privacy protection method and system based on deep hashing
CN116070277B (en) * 2023-03-07 2023-08-29 Zhejiang University Vertical federated learning privacy protection method and system based on deep hashing

Similar Documents

Publication Publication Date Title
CN110298037B (en) Convolutional neural network matching text recognition method based on enhanced attention mechanism
CN112488231A (en) Cosine-metric supervised deep hashing algorithm with balanced similarity
Hu et al. From hashing to cnns: Training binary weight networks via hashing
CN110222218B (en) Image retrieval method based on multi-scale NetVLAD and depth hash
Pan et al. Product quantization with dual codebooks for approximate nearest neighbor search
CN111008224B (en) Time sequence classification and retrieval method based on deep multitasking representation learning
WO2018103179A1 (en) Near-duplicate image detection method based on sparse representation
CN110929848A (en) Training and tracking method based on multi-challenge perception learning model
CN113505225B (en) Small sample medical relation classification method based on multi-layer attention mechanism
CN114358188A (en) Feature extraction model processing method, feature extraction model processing device, sample retrieval method, sample retrieval device and computer equipment
CN113806580B (en) Cross-modal hash retrieval method based on hierarchical semantic structure
CN108920446A (en) Processing method for engineering documents
CN114329109A (en) Multimodal retrieval method and system based on weakly supervised Hash learning
CN112163114B (en) Image retrieval method based on feature fusion
CN113656700A (en) Hash retrieval method based on multi-similarity consistent matrix decomposition
CN115795065A (en) Multimedia data cross-modal retrieval method and system based on weighted hash code
Krishnan et al. Mitigating sampling bias and improving robustness in active learning
CN113704473A (en) Media false news detection method and system based on long text feature extraction optimization
CN113743593B (en) Neural network quantization method, system, storage medium and terminal
US11763136B2 (en) Neural hashing for similarity search
CN115424275A (en) Fishing boat brand identification method and system based on deep learning technology
CN115239967A (en) Image generation method and device based on a Trans-CSN generative adversarial network
CN112101267B (en) Rapid face retrieval method based on deep learning and Hash coding
CN111291788A (en) Image description method, system, device and medium based on neural network
Qiang et al. Large-scale multi-label image retrieval using residual network with hash layer

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination