CN110489585B - Distributed image searching method based on supervised learning

Info

Publication number: CN110489585B
Authority: CN (China)
Prior art keywords: matrix, node, classification, hash code, constraint
Legal status: Active
Application number: CN201910609588.3A
Other languages: Chinese (zh)
Other versions: CN110489585A (en)
Inventors: 胡海峰 (Hu Haifeng), 熊键 (Xiong Jian)
Current Assignee: Nanjing University of Posts and Telecommunications
Original Assignee: Nanjing University of Posts and Telecommunications
Priority date: 2019-07-08
Filing date: 2019-07-08
Publication date: 2022-12-02
Application filed by Nanjing University of Posts and Telecommunications on 2019-07-08, with priority to application CN201910609588.3A.
Publication of CN110489585A: 2019-11-22. Application granted; publication of CN110489585B: 2022-12-02.
Legal status: Active.


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval of structured data, e.g. relational data
    • G06F 16/24 Querying
    • G06F 16/245 Query processing
    • G06F 16/2458 Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F 16/2471 Distributed queries
    • G06F 16/50 Information retrieval of still image data
    • G06F 16/51 Indexing; Data structures therefor; Storage structures
    • G06F 16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model based on distances to training or reference patterns
    • G06F 18/24147 Distances to closest patterns, e.g. nearest neighbour classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Probability & Statistics with Applications (AREA)
  • Mathematical Physics (AREA)
  • Fuzzy Systems (AREA)
  • Library & Information Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a distributed image searching method based on supervised learning. The method first applies classification labels to the images, videos and files in each node's database and initializes a classification matrix, a coding matrix, a hash code matrix and the corresponding Lagrange multipliers. An objective function is then constructed by minimizing the classification error and the reconstruction error, and is solved to update the parameter matrices. Each data node communicates with a central node, which judges whether the nodes' transformation matrices have converged to a common value and updates the Lagrange multipliers; finally, a similarity search is carried out. The invention addresses the problem that large-scale data is too large to store and process centrally, so that a centrally trained algorithm model is no longer suitable. Because the data nodes and the central node exchange only parameter matrices rather than raw data, the method effectively avoids excessive transmission overhead and keeps the data on each node independent.

Description

Distributed image searching method based on supervised learning
Technical Field
The invention relates to an image searching method, in particular to a distributed image searching method, and belongs to the field of machine learning.
Background
With the continued growth of social networks, electronic commerce and the mobile internet, the scale of data that must be stored and processed keeps increasing, and single-machine systems can no longer meet the demand. Internet companies such as Google and Alibaba have driven the rise of the two popular fields of cloud computing and big data, both of which are applications built on distributed storage. The core of cloud storage is a large-scale distributed storage system at the back end; big data must not only be stored in bulk but also analyzed with suitable frameworks and tools to extract the useful parts, and without distributed storage such analysis would be out of the question. Although distributed systems have been studied for many years, only with the recent rise of internet-scale data have they been deployed in engineering practice on a large scale. A distributed system uses multiple computers to cooperatively solve computation and storage problems that a single computer cannot handle; the essential difference from a single-machine system is the scale of the problem. It is a system composed of multiple nodes, where a node is typically a server or a process on a server; the nodes are not isolated but communicate over a network to exchange information. In addition, with the rapid development of mobile terminals such as smartphones, which store large amounts of pictures, text and video, a smartphone can itself be regarded as an independent node, and data processing capacity can be raised through base stations or through distributed cooperation among the phones themselves.
Supervised learning is a class of machine-learning algorithms that learns a model from training data and uses that model to infer outputs for new instances. The training data consist of input objects (usually vectors) and their expected outputs; the output may be a continuous value (regression analysis) or a predicted class label (classification). Another large class of algorithms, unsupervised learning, models and learns from training data that carry no labels; the most basic difference between the two is whether the modeled data are labeled. Compared with unsupervised learning, supervised learning can make full use of the known label information, fusing more information into the constructed model and effectively improving its reliability.
In addition, with the wide spread of the internet and the development of multimedia technology, data in every industry is growing rapidly, and modern information infrastructure must cope with enormous databases. Compared with the cost of storage, retrieving relevant content from large-scale databases is the more challenging task, especially when searching multimedia data such as audio, images and video. When traditional nearest-neighbor algorithms are applied to large-scale image retrieval, the feature dimension of the sample data can reach thousands, and the curse of dimensionality leads to heavy storage consumption and slow retrieval. In recent years the hash algorithm, as a representative approximate nearest-neighbor search technique, has been able to meet the demanding storage and search-time requirements of large-scale retrieval. Its purpose is to represent each image as a fixed-length binary code, i.e. a hash code, whose bits are typically written as -1/1 or 0/1. Hashing greatly reduces the storage space and retrieval time demanded by traditional retrieval while still achieving good accuracy, so it has become a powerful tool for big-data problems and has attracted wide attention in computer vision. However, most existing hash algorithms are centralized and suffer from problems such as a heavy computational load on a single node, so applying hashing in a distributed setting is an interesting open problem.
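As a toy illustration of why binary codes make large-scale search cheap (a sketch of the general idea only, not of the patent's algorithm; all sizes and names below are invented for the example): once images are hash codes, similarity reduces to a Hamming distance, i.e. a count of differing bits, independent of the original feature dimension.

```python
import numpy as np

rng = np.random.default_rng(0)
r = 64                                  # code length in bits
db_codes = rng.integers(0, 2, size=(100_000, r), dtype=np.uint8)  # database hash codes
query = rng.integers(0, 2, size=r, dtype=np.uint8)                # query hash code

# Hamming distance = number of bit positions where the codes differ
dists = np.count_nonzero(db_codes != query, axis=1)
top_k = np.argsort(dists)[:10]          # indices of the 10 nearest codes
print(top_k, dists[top_k])
```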
In summary, the prior art does not disclose how to implement distributed image search using a supervised hash algorithm.
Disclosure of Invention
The invention aims to provide a distributed image searching method based on supervised learning, mainly to address the facts that samples such as images, videos and texts are numerous, that semantic neighbors cannot be found accurately, and that training all samples together would make the transmission and computation loads excessive.
The invention provides a distributed image searching method based on supervised learning, which comprises the following steps:
Step 1: classify and label the images, videos and files in the database of each node;
Step 2: initialize a classification matrix, a coding matrix, a hash code matrix and the corresponding Lagrange multipliers;
Step 3: introduce a minimized classification error and a reconstruction error to construct an objective function;
Step 4: solve the objective function and update the classification matrix, coding matrix and hash code matrix;
Step 5: the data nodes communicate with the central node, it is judged whether the transformation matrices of the nodes have converged to a common value, and the Lagrange multipliers are updated;
Step 6: perform the proximity search.
As a further limitation of the present invention, in step 1 it is assumed that there are N nodes, each corresponding to a database X_i, where X_i denotes the database of the i-th node. The databases of different nodes are independent of one another, and the nodes do not wish to share information; each database carries c kinds of category labels, and different samples are marked with different labels.
As a further limitation of the present invention, in step 2 the classification matrix, coding matrix, hash code matrix and corresponding Lagrange multipliers are initialized at each node: the coding matrix is initialized as a d × r identity matrix with a corresponding d × r all-zero Lagrange multiplier; the classification matrix is initialized as an r × c identity matrix with a corresponding r × c all-zero Lagrange multiplier; and the hash code matrix is initialized as an r × n matrix in which every element has absolute value 1. Here d is the dimension of the original sample feature space, r is the number of coding bits, c is the number of classes, and n is the number of samples.
As a further limitation of the present invention, in step 3 a minimized classification error and a minimized reconstruction error are introduced into the objective function, and the original feature space is mapped to hash codes through the coding matrix so that the classification accuracy based on the hash codes is as high as possible, ensuring the validity of the codes. Meanwhile, an orthogonality constraint is added to reduce the correlation between hash code bits, and a discrete constraint is added to reduce quantization error, i.e. every hash code entry is forced to equal 1 or -1.
The constructed objective functions are, in sequence, as follows:
$$\min_{C_i}\ \|B_i - C_i^{\top} X_i\|_F^2 + \operatorname{tr}\!\big(\Pi_i^{\top}(C_i - Z)\big) + \frac{\rho}{2}\,\|C_i - Z\|_F^2 \qquad \text{s.t.}\ \ C_i = Z,\ \ C_i^{\top} C_i = I$$
In the above formula, X_i denotes the samples of the i-th node, i.e. the database X_i; C_i and B_i denote the coding matrix and the hash code matrix of the i-th node; Π_i denotes the dual variable (the Lagrange multiplier of the consistency constraint) and ρ the penalty parameter; Z is the global parameter introduced for consistency. The constraint consists of two parts: the first is the global consistency constraint of the alternating direction method of multipliers (ADMM), and the second is the constraint that the hash code bits be mutually independent:
$$\min_{W_i}\ \|Y_i - W_i^{\top} B_i\|_F^2 + \operatorname{tr}\!\big(\lambda^{\top}(W_i - U)\big) + \frac{\rho}{2}\,\|W_i - U\|_F^2 \qquad \text{s.t.}\ \ W_i = U$$
in the above formula Y i Sample flag, W, representing the ith node i 、B i The classification matrix and the hash code matrix of the ith node are respectively represented, lambda is a Lagrange multiplier, U is a global parameter introduced by the Alternating Direction Multiplier Method (ADMM) consistency, and the constraint is global consistency constraint.
$$\min_{B_i}\ \|Y_i - W_i^{\top} B_i\|_F^2 + v\,\|B_i - C_i^{\top} X_i\|_F^2 \qquad \text{s.t.}\ \ B_i \in \{-1,+1\}^{r \times n}$$
In the above formula, Y_i denotes the sample labels of the i-th node; W_i, B_i and C_i denote the classification matrix, the hash code matrix and the coding matrix of the i-th node; v is a balance parameter; and the added constraint ensures that every bit of the hash code stays discrete throughout the optimization.
As a further limitation of the present invention, in step 4 the solution of the coding matrix C involves minimizing a matrix trace under an orthogonality constraint, so it must be solved by singular value decomposition, as sketched below.
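For illustration, a minimal sketch of such an SVD-based update, assuming the C-subproblem reduces to the orthogonal-Procrustes form max_C tr(CᵀM) s.t. CᵀC = I; the aggregation matrix M below (and the fact that the quadratic terms are constant on the constraint set) is my assumption, not taken from the patent:

```python
import numpy as np

def update_C(M: np.ndarray) -> np.ndarray:
    """Solve max_C tr(C^T M) s.t. C^T C = I via a thin SVD (orthogonal Procrustes)."""
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ Vt

# Example with the dimensions used in the text: d-dimensional features, r bits.
d, r, n = 512, 64, 100
rng = np.random.default_rng(1)
X = rng.normal(size=(d, n))             # node samples
B = np.sign(rng.normal(size=(r, n)))    # current hash codes
Z, Pi, rho = np.eye(d, r), np.zeros((d, r)), 1.0
M = 2 * X @ B.T - Pi + rho * Z          # assumed collection of data/consensus terms
C = update_C(M)
assert np.allclose(C.T @ C, np.eye(r), atol=1e-8)   # columns stay orthonormal
```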
As a further limitation of the present invention, in step 5, when the coding matrix C and the classification matrix W are optimized in a distributed manner there is, in addition to the N data nodes, a central node that performs the global updates of W and C; parameter information is transmitted between the central node and the data nodes to keep the parameters consistent.
As a further limitation of the present invention, in step 6 a new query sample is broadcast to all nodes; after it is mapped by the coding matrix, the Hamming distances between the new sample and the node samples are computed, and the samples corresponding to the k smallest distances are returned as the result of the proximity search.
Compared with the prior art, the invention adopting the above technical scheme has the following technical effects:
1. The method overcomes the one-sidedness of many traditional neighbor-search approaches, which ignore label information and omit discretization in the intermediate steps and therefore perform poorly in practical approximate search;
2. It solves the problem that large-scale data exceeds the storage and computing capacity of a single node, a regime in which a centrally trained algorithm model is no longer suitable;
3. The nodes communicate only through parameter matrices and never exchange raw data, which effectively avoids excessive transmission overhead while maintaining good performance.
Drawings
FIG. 1 is a system block diagram of the present method.
FIG. 2 is a flow chart of the distributed training of the present method.
FIG. 3 is a flow chart of the neighbor search in the present method.
Detailed Description
The technical scheme of the invention is explained in further detail below with reference to the drawings:
A system framework diagram of the method is shown in FIG. 1. The whole method can be divided into a distributed training process and a proximity search process, whose flows are shown in FIG. 2 and FIG. 3 respectively: the first through fifth steps follow FIG. 2 and the sixth step follows FIG. 3.
In the first step, the images, videos, files and so on in each node's database are labeled by class.
Assume there are N nodes, each corresponding to a database X_i, where X_i denotes the database of the i-th node. The databases in different nodes are independent of one another, and the nodes do not wish to share information. Each database holds n samples with c kinds of category labels, and different samples are marked with different labels.
In the second step, the classification matrix, the coding matrix and the corresponding Lagrange multiplier matrices are initialized, and the hash code matrix is initialized.
The classification matrix, coding matrix and corresponding Lagrange multipliers are initialized in each node, together with the hash code matrix. The coding matrix C_i of the i-th node is initialized as a d × r identity matrix, and the Lagrange multiplier used when node i optimizes C is initialized as a d × r all-zero matrix; the classification matrix W_i is initialized as an r × c identity matrix, and the Lagrange multiplier corresponding to the optimization of W is initialized as an r × c all-zero matrix; the hash code matrix B is initialized as an r × n matrix in which every element has absolute value 1. Here d is the dimension of the original sample feature space, r is the number of coding bits, c is the number of classes, and n is the number of samples. The transformation matrices and the initial values of the Lagrange multipliers are identical across all nodes; a sketch of this initialization follows.
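The initialization above can be sketched as follows (a non-authoritative illustration; the function and variable names are mine, and only the dimensions and values are taken from the text):

```python
import numpy as np

def init_node(d: int, r: int, c: int, n: int, seed: int = 0):
    """Per-node initialization with the dimensions described in the text."""
    rng = np.random.default_rng(seed)
    C = np.eye(d, r)                    # d x r identity-like coding matrix
    Pi = np.zeros((d, r))               # multiplier for the C consistency constraint
    W = np.eye(r, c)                    # r x c identity-like classification matrix
    Lam = np.zeros((r, c))              # multiplier for the W consistency constraint
    B = np.sign(rng.standard_normal((r, n)))  # r x n hash codes, entries of absolute value 1
    B[B == 0] = 1
    return C, Pi, W, Lam, B
```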
In the third step, a minimized classification error and a reconstruction error are introduced, and the objective function is constructed with discretization and orthogonality constraints.
It should be noted that the key point of the invention is obtaining and using the coding matrix, so the original objective function and its detailed optimization procedure are not listed here; only the optimized sub-problems are given. The sub-problem constructed by the i-th node for optimizing C_i is as follows:
$$\min_{C_i}\ \|B_i - C_i^{\top} X_i\|_F^2 + \operatorname{tr}\!\big(\Pi_i^{\top}(C_i - Z)\big) + \frac{\rho}{2}\,\|C_i - Z\|_F^2 \qquad \text{s.t.}\ \ C_i = Z,\ \ C_i^{\top} C_i = I$$
In the above formula, X_i denotes the samples of the i-th node; C_i and B_i denote the coding matrix and the hash code matrix of the i-th node; ρ is the penalty parameter; Π_i denotes the dual variable (the Lagrange multiplier of the consistency constraint); and Z is the global parameter introduced for ADMM consistency. The constraint consists of two parts: the first is the global consistency constraint of the alternating direction method of multipliers (ADMM), and the second is the constraint that the hash code bits be mutually independent.
The sub-problem constructed by the i-th node for optimizing W_i is as follows:
$$\min_{W_i}\ \|Y_i - W_i^{\top} B_i\|_F^2 + \operatorname{tr}\!\big(\lambda^{\top}(W_i - U)\big) + \frac{\rho}{2}\,\|W_i - U\|_F^2 \qquad \text{s.t.}\ \ W_i = U$$
In the above formula, Y_i denotes the sample labels of the i-th node; W_i and B_i denote the classification matrix and the hash code matrix of the i-th node; λ is the Lagrange multiplier; U is the global parameter introduced for ADMM consistency; and the constraint is the global consistency constraint.
The sub-problem constructed by the i-th node for optimizing B_i is as follows:
$$\min_{B_i}\ \|Y_i - W_i^{\top} B_i\|_F^2 + v\,\|B_i - C_i^{\top} X_i\|_F^2 \qquad \text{s.t.}\ \ B_i \in \{-1,+1\}^{r \times n}$$
In the above formula, Y_i denotes the sample labels of the i-th node; W_i, B_i and C_i denote the classification matrix, the hash code matrix and the coding matrix of the i-th node; v is a balance parameter; and the added constraint ensures that every bit of the hash code stays discrete throughout the optimization.
In the fourth step, the objective functions are solved and the parameter matrices are updated.
The three sub-problems are solved separately: W_i and C_i are both optimized with the alternating direction method of multipliers (ADMM), while B_i is solved directly from the data on each node, so the optimization is fully distributed across the nodes, as sketched below.
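One local iteration might then look like the following sketch (my own derivation from the sub-problems above, not the patent's verbatim algorithm; in particular the B-step is simplified to a sign projection that drops the quadratic coupling in ||Y - WᵀB||², whereas a practical solver may use a bit-by-bit discrete update):

```python
import numpy as np

def update_W(B, Y, U, Lam, rho):
    """argmin_W ||Y - W^T B||_F^2 + tr(Lam^T (W - U)) + (rho/2)||W - U||_F^2
    (ridge-like closed form obtained by setting the gradient to zero)."""
    r = B.shape[0]
    return np.linalg.solve(2 * B @ B.T + rho * np.eye(r),
                           2 * B @ Y.T + rho * U - Lam)

def update_B(X, Y, C, W, v):
    """Sign-projection approximation of the discrete B-subproblem."""
    B = np.sign(W @ Y + v * (C.T @ X))
    B[B == 0] = 1                       # keep every bit in {-1, +1}
    return B
```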
In the fifth step, the data nodes communicate with the central node, it is judged whether the transformation matrices of the nodes have converged to a common value, and the Lagrange multipliers are updated.
The data nodes communicate with the global (central) node: each node transmits its locally computed parameter matrix to the central node, the central node performs the global optimization, and the globally optimized parameter matrix is transmitted back to every data node for the next iteration. This parameter exchange ensures that the training process satisfies the consistency requirement.
If the transformation matrices of the nodes have not yet converged to a common value, the Lagrange multipliers are updated iteratively and the third step is repeated; a sketch of this communication round follows.
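The communication round can be sketched as follows (assuming standard scaled consensus-ADMM updates; the patent does not spell out its averaging and stopping rules, so these are assumptions):

```python
import numpy as np

def central_round(C_list, Pi_list, rho, tol=1e-4):
    """One central-node round for the consensus constraints C_i = Z."""
    # Global update: scaled-ADMM consensus average of the node matrices.
    Z = sum(C + Pi / rho for C, Pi in zip(C_list, Pi_list)) / len(C_list)
    # Dual update at each node: penalize the remaining disagreement with Z.
    Pi_list = [Pi + rho * (C - Z) for C, Pi in zip(C_list, Pi_list)]
    # Consistency test: stop once every node's matrix is close to Z.
    converged = all(np.linalg.norm(C - Z) < tol for C in C_list)
    return Z, Pi_list, converged
```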
In the sixth step, the proximity search is performed.
A new query sample x_c is input to all distributed nodes. Suppose it reaches the i-th node: the coding matrix C_i obtained by the distributed training process is used to encode both the query sample x_c and the node's data samples, and the Hamming distance between x_c and each of them (i.e. the number of bit positions in which the hash codes differ) is computed. Once the distances between x_c and all other samples in each node are obtained, they are sorted, and the samples corresponding to the k smallest distances are the result of the proximity search.
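A sketch of this query path (the function and variable names are mine; merging per-node top-k candidates into a global top k is one plausible realization of the broadcast-and-sort description above):

```python
import numpy as np

def search_node(x_q, C, B, k):
    """Return (distance, local index) pairs for the k nearest codes on one node."""
    b_q = np.sign(C.T @ x_q)            # encode query: d-vector -> r-bit code
    b_q[b_q == 0] = 1
    dists = np.count_nonzero(B != b_q[:, None], axis=0)   # Hamming distances
    idx = np.argsort(dists)[:k]
    return [(int(dists[j]), int(j)) for j in idx]

def search_all(x_q, nodes, k):
    """Broadcast the query to all nodes (pairs of C_i, B_i) and merge candidates."""
    candidates = []
    for node_id, (C, B) in enumerate(nodes):
        candidates += [(d, node_id, j) for d, j in search_node(x_q, C, B, k)]
    return sorted(candidates)[:k]       # global top-k by Hamming distance
```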
The above description is only one embodiment of the present invention, but the scope of the invention is not limited thereto. Any modification or substitution that a person skilled in the art can readily conceive within the technical scope disclosed herein falls within the scope of the invention, which is therefore defined by the protection scope of the claims.

Claims (3)

1. A distributed image searching method based on supervised learning is characterized by comprising the following steps:
Step 1: classifying and labeling the images, videos and files in the database of each node;
Step 2: initializing a classification matrix, a coding matrix, a hash code matrix and the corresponding Lagrange multipliers;
Step 3: constructing an objective function by introducing a minimized classification error and a reconstruction error, wherein the original feature space is mapped to hash codes through the coding matrix so that the classification accuracy based on the hash codes is as high as possible, ensuring the validity of the hash codes; the constructed objective functions are, in sequence, as follows:
$$\min_{C_i}\ \|B_i - C_i^{\top} X_i\|_F^2 + \operatorname{tr}\!\big(\Pi_i^{\top}(C_i - Z)\big) + \frac{\rho}{2}\,\|C_i - Z\|_F^2 \qquad \text{s.t.}\ \ C_i = Z,\ \ C_i^{\top} C_i = I$$
In the above formula, X_i denotes the samples of the i-th node, i.e. the aforementioned database X_i; C_i and B_i denote the coding matrix and the hash code matrix of the i-th node; Π_i denotes the dual variable (the Lagrange multiplier of the consistency constraint) and ρ the penalty parameter; Z is the global parameter introduced for consistency. The constraint consists of two parts: the first is the global consistency constraint of the alternating direction method of multipliers (ADMM), and the second is the constraint that the hash code bits be mutually independent:
$$\min_{W_i}\ \|Y_i - W_i^{\top} B_i\|_F^2 + \operatorname{tr}\!\big(\lambda^{\top}(W_i - U)\big) + \frac{\rho}{2}\,\|W_i - U\|_F^2 \qquad \text{s.t.}\ \ W_i = U$$
In the above formula, Y_i denotes the sample labels of the i-th node; W_i and B_i denote the classification matrix and the hash code matrix of the i-th node; λ is the Lagrange multiplier; U is the global parameter introduced for ADMM consistency; and the constraint is the global consistency constraint;
$$\min_{B_i}\ \|Y_i - W_i^{\top} B_i\|_F^2 + v\,\|B_i - C_i^{\top} X_i\|_F^2 \qquad \text{s.t.}\ \ B_i \in \{-1,+1\}^{r \times n}$$
In the above formula, Y_i denotes the sample labels of the i-th node; W_i, B_i and C_i denote the classification matrix, the hash code matrix and the coding matrix of the i-th node; v is a balance parameter; and the added constraint ensures that every bit of the hash code stays discrete throughout the optimization;
Step 4: solving the objective functions and updating the classification matrix, the coding matrix and the hash code matrix, wherein solving the coding matrix C involves minimizing a matrix trace under an orthogonality constraint and therefore requires singular value decomposition;
Step 5: the data nodes communicate with the central node, it is judged whether the transformation matrices of the nodes have converged to a common value, and the Lagrange multipliers are updated; when the coding matrix C and the classification matrix W are optimized in a distributed manner there is, in addition to the N data nodes, a central node that globally updates W and C, and parameter information is transmitted between the central node and the data nodes to keep the parameters consistent;
Step 6: performing the proximity search: a new query sample is broadcast to all nodes and mapped by the coding matrix, the Hamming distances between the new sample and the node samples are computed, and the samples corresponding to the k smallest distances are taken as the result of the proximity search.
2. The supervised-learning-based distributed image searching method of claim 1, wherein in step 1 it is assumed that there are N nodes, each corresponding to a database X_i, where X_i denotes the database of the i-th node; the databases of different nodes are independent of one another, and the nodes do not wish to share information; each database has c kinds of category labels, and different samples are marked with different labels.
3. The supervised-learning-based distributed image searching method of claim 2, wherein in step 2 the classification matrix, coding matrix, hash code matrix and corresponding Lagrange multipliers are initialized: the coding matrix is initialized as a d × r identity matrix with a corresponding d × r all-zero Lagrange multiplier; the classification matrix is initialized as an r × c identity matrix with a corresponding r × c all-zero Lagrange multiplier; the hash code matrix is initialized as an r × n matrix in which every element has absolute value 1; d denotes the dimension of the original sample feature space, r the number of coding bits, c the number of classes, and n the number of samples.
Application CN201910609588.3A (priority date 2019-07-08, filing date 2019-07-08): Distributed image searching method based on supervised learning. Status: Active. Granted publication: CN110489585B (en).

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201910609588.3A | 2019-07-08 | 2019-07-08 | Distributed image searching method based on supervised learning

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201910609588.3A | 2019-07-08 | 2019-07-08 | Distributed image searching method based on supervised learning

Publications (2)

Publication Number | Publication Date
CN110489585A | 2019-11-22
CN110489585B | 2022-12-02

Family

ID=68546684

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201910609588.3A (Active) | Distributed image searching method based on supervised learning | 2019-07-08 | 2019-07-08

Country Status (1)

Country | Link
CN | CN110489585B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN111159443B * | 2019-12-31 | 2022-03-25 | Shenzhen Intellifusion Technologies Co., Ltd. (深圳云天励飞技术股份有限公司) | Image characteristic value searching method and device and electronic equipment
CN111553418B * | 2020-04-28 | 2022-12-16 | Tencent Technology (Shenzhen) Co., Ltd. (腾讯科技(深圳)有限公司) | Method and device for detecting neuron reconstruction errors and computer equipment
CN111881928B * | 2020-05-19 | 2022-07-29 | Hangzhou Zhongao Technology Co., Ltd. (杭州中奥科技有限公司) | Coding model training method and device, storage medium and electronic equipment
CN111832637B * | 2020-06-30 | 2022-08-30 | Nanjing University of Posts and Telecommunications (南京邮电大学) | Distributed deep learning classification method based on the alternating direction method of multipliers (ADMM)
CN112199520B * | 2020-09-19 | 2022-07-22 | Fudan University (复旦大学) | Cross-modal hash retrieval algorithm based on fine-grained similarity matrix
CN112965722B * | 2021-03-03 | 2022-04-08 | Shenzhen Huada Jiutian Technology Co., Ltd. (深圳华大九天科技有限公司) | Verilog-A model optimization method, electronic device and computer readable storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
US20150234779A1 * | 2014-02-20 | 2015-08-20 | Mitsubishi Electric Research Laboratories, Inc. | Method for Solving Quadratic Programs for Convex Sets with Linear Equalities by an Alternating Direction Method of Multipliers with Optimized Step Sizes
CN107315765A * | 2017-05-12 | 2017-11-03 | Nanjing University of Posts and Telecommunications (南京邮电大学) | Method for centralized-distributed proximity search over large-scale pictures

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Yue Gao et al., "Weakly Supervised Visual Dictionary Learning by Harnessing Image Attributes," IEEE Transactions on Image Processing, 2014-10-29, full text. *
Xie Hui (谢辉), "Content-based image re-ranking in search engines" (搜索引擎中基于内容的图像重排序), Journal of Computer Applications (计算机应用), 2013-02-15, full text. *

Also Published As

Publication number | Publication date
CN110489585A (en) | 2019-11-22

Similar Documents

Publication Publication Date Title
CN110489585B (en) Distributed image searching method based on supervised learning
Deng et al. Two-stream deep hashing with class-specific centers for supervised image search
Mandal et al. Generalized semantic preserving hashing for cross-modal retrieval
CN106777318B (en) Matrix decomposition cross-modal Hash retrieval method based on collaborative training
CN110765281A (en) Multi-semantic depth supervision cross-modal Hash retrieval method
CN107766555B (en) Image retrieval method based on soft-constraint unsupervised cross-modal hashing
Gu et al. Clustering-driven unsupervised deep hashing for image retrieval
Zhu et al. Multi-modal hashing for efficient multimedia retrieval: A survey
CN107122411B (en) Collaborative filtering recommendation method based on discrete multi-view Hash
CN112199532B (en) Zero sample image retrieval method and device based on Hash coding and graph attention machine mechanism
Guan et al. Efficient BOF generation and compression for on-device mobile visual location recognition
Wei et al. Projected residual vector quantization for ANN search
CN108256082A Multi-label image search method based on deep multi-similarity hashing
Akbarnejad et al. An efficient semi-supervised multi-label classifier capable of handling missing labels
CN113312505B (en) Cross-modal retrieval method and system based on discrete online hash learning
CN111832637B (en) Distributed deep learning classification method based on alternating direction multiplier method ADMM
CN116978011B (en) Image semantic communication method and system for intelligent target recognition
Liu et al. Deep cross-modal hashing based on semantic consistent ranking
CN111368176A (en) Cross-modal Hash retrieval method and system based on supervision semantic coupling consistency
Liang et al. Cross-media semantic correlation learning based on deep hash network and semantic expansion for social network cross-media search
CN108647295B (en) Image labeling method based on depth collaborative hash
Ou et al. Cross-modal generation and pair correlation alignment hashing
Chen et al. Multiple-instance ranking based deep hashing for multi-label image retrieval
CN116703531B (en) Article data processing method, apparatus, computer device and storage medium
Vural et al. Deep multi query image retrieval

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant