CN108595688A - Latent semantic cross-media hash retrieval method based on online learning - Google Patents
Latent semantic cross-media hash retrieval method based on online learning
- Publication number
- CN108595688A CN108595688A CN201810429547.1A CN201810429547A CN108595688A CN 108595688 A CN108595688 A CN 108595688A CN 201810429547 A CN201810429547 A CN 201810429547A CN 108595688 A CN108595688 A CN 108595688A
- Authority
- CN
- China
- Prior art keywords
- data
- image
- text
- hash
- function
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Computational Biology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Probability & Statistics with Applications (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses a latent semantic cross-media hashing method based on online learning that realizes cross-media retrieval between the image and text modalities. The method comprises the following steps: establish an image and text data set, extract the features of the data, remove their mean, and divide the set into a training set and a test set in a given proportion; map the discrete labels to a continuous latent semantic space and construct an objective function that preserves the similarity between data; solve the objective function with an iterative optimization scheme based on online learning, so that when new data are generated only the new data are used to update the hash functions, improving the efficiency of training; compute the hash codes of the image and text data in the test set with the hash functions, take the data of one modality in the test set as the query set and the data of the other modality as the target data set, compute the Hamming distances between each query and all data in the target set, sort them in ascending order, and return the top-ranked heterogeneous data as the cross-media retrieval result.
Description
Technical Field
The invention relates to the field of multimedia retrieval and pattern recognition, in particular to a latent semantic cross-media Hash retrieval method based on online learning.
Background
In recent years, the high efficiency and effectiveness of hashing methods on large-scale data sets have attracted extensive attention from researchers. The goal of a hashing method is to map data to a Hamming space while preserving the neighborhood similarity of the data in the original feature space or in the labels. The similarity between data can then be computed efficiently with XOR operations, which greatly accelerates retrieval while maintaining retrieval performance. However, most hashing methods apply only to a single modality, while with the rapid development of Internet technologies and digital devices, multimedia data on the network keep increasing. Data of different modalities can represent the same semantics, which limits the application of single-modality hashing: a user inputs data in a single modality but expects similar data of various modalities to be returned. The similarity between heterogeneous data cannot be measured directly, so how to measure it becomes a challenge; cross-media hashing methods map heterogeneous data into a shared Hamming space, where the similarity of heterogeneous data can be computed efficiently.
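As a concrete illustration of the XOR-based similarity computation mentioned above (an explanatory sketch, not part of the patent text), the Hamming distance between two binary hash codes stored as integers can be computed as:

```python
def hamming_distance(code_a: int, code_b: int) -> int:
    """Hamming distance between two binary hash codes stored as integers:
    XOR marks the bits that differ, and counting the set bits gives the distance."""
    return bin(code_a ^ code_b).count("1")
```

Because XOR and popcount are single machine instructions on modern hardware, this comparison is far cheaper than a Euclidean distance over real-valued features, which is the source of the speedup the text describes.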
Recently, researchers have proposed a variety of cross-media hashing methods and achieved satisfactory results. It has been shown that hash codes preserving high-level semantics can be generated from the supervisory information of the data (e.g., class labels) to improve retrieval performance. However, discrete labels cannot accurately measure the similarity between data, which reduces the discriminative power of the hash functions. Furthermore, despite some advances in cross-media hashing research, most existing approaches are batch-based: all training data must be available before the hash functions are learned. In practical applications, multimedia data on the network are generated continuously over time; for example, billions of images are uploaded to the Internet every day. After new data are generated, batch methods must retrain the hash functions on all accumulated training data, which destroys their efficiency, especially when new data arrive frequently. In addition, as new data keep arriving, the training set becomes very large: on the one hand, the memory occupied by the training data grows so large that not all data can be loaded into memory at once; on the other hand, even if memory suffices, the training time is often unacceptable. To solve these problems, the invention provides a cross-media hash retrieval method based on online learning that uses the discrete labels to learn a continuous latent semantic space, measuring the similarity between data more accurately and making the returned retrieval results more accurate; when new data are generated, the method updates the hash functions with the new data only, making the training of the hash functions more efficient and reducing the memory overhead.
Disclosure of Invention
The invention aims to provide a cross-media hash retrieval method based on online learning, characterized by comprising the following steps.
Step 1: image and text data pairs are collected from a network, a cross-modal retrieval database is constructed, the features of the image and text data in the database are extracted, the mean value of the image and text data is removed, and a data set is divided into a training set and a testing set.
Step 2: discrete labels of the data are mapped to a continuous latent semantic space, and an objective function is constructed based on similarity among the data of the space.
Step 3: the objective function is solved with an iterative optimization algorithm based on online learning, so that when new data are generated the hash functions are updated using the new data only.
Step 4: according to the modality of each data item in the test set, the data are mapped to the Hamming space with the hash function of the corresponding modality.
Step 5: data of one modality (such as images) in the test set are used as the query set, and data of the other modality (such as texts) as the target data set.
Step 6: the Hamming distance between each query and all data in the target data set is calculated and sorted in ascending order, and the top k data are returned as the cross-media retrieval result.
The supervised cross-media hash retrieval method based on online learning of claim 1, wherein step 1 comprises the following steps.
1) Image and text data are collected using a network and are made to correspond one-to-one.
2) Extracting SIFT feature points from all image data, performing K-means clustering on the feature points, and taking the cluster centers as visual words; then quantizing every feature point to its nearest visual word; finally generating the feature representation of each image with the term frequency-inverse document frequency method, in the same way the text data are processed. The image data are finally represented as a feature matrix whose row dimension is the dimensionality of the image representation and whose column count is the number of data in the training database.
3) Generating feature representations for all text data with a bag-of-words model, weighting each word with the term frequency-inverse document frequency method, and finally representing all text data as a feature matrix whose row dimension is the dimensionality of the text representation.
4) And carrying out mean value removing processing on the generated image and text feature representation.
5) The data set is divided into a training set and a test set according to a certain proportion.
The supervised cross-media hash retrieval method based on online learning of claim 1, wherein the step 2 comprises the following steps.
1) Establishing an objective function based on the features of the image and text data in the training set.
2) The objective function is defined as follows [equation omitted in source]:
wherein the formula involves the label matrix of all the data, the two mapping matrices, the hash codes of the data, the hash functions of the image and text modalities respectively, the weight parameters to be determined, and the Frobenius norm.
The supervised cross-media hash retrieval method based on online learning of claim 1, wherein step 3 comprises the following steps.
1) Dividing the data in the training database, ordered by collection time, into data blocks to simulate new data being generated continuously over time; the initial training set contains only the first data block, and one further data block is added to the training set at each round;
2) setting a threshold and a maximum number of iterations, and executing 3)-7) as long as the difference between the objective function values of two adjacent iterations is larger than the threshold and the iteration count is smaller than the maximum;
3) fixing the other variables and solving for the hash codes: when a new data block is generated, the new image and text data, their labels, and their hash codes are kept separate from the existing image and text data, labels, and hash codes; removing the constant terms, the objective function reduces to a subproblem [equation omitted in source] that can be solved bit by bit with the discrete cyclic coordinate descent method, yielding the update of this variable;
4) fixing the other variables and solving for the next variable: removing the constant terms, the objective function admits a closed-form solution [equation omitted in source]; the constant terms can be pre-computed before the update and stored in memory, so the update depends only on the new data;
5) fixing the other variables and solving for the next variable: as in 4), the constant terms can be pre-computed and stored in memory, so the update depends only on the new data;
6) fixing the other variables and solving for the next variable: as in 4), the constant terms can be pre-computed and stored in memory, so the update depends only on the new data;
7) fixing the other variables and solving for the last variable: the solution is analogous to the previous step, and the pre-computed constant terms again ensure that the update is associated only with the new data.
Compared with the background art, the invention has the following beneficial effects:
The invention provides a new content-based cross-media retrieval method. By mapping discrete labels to a continuous space, the similarity between data is measured more accurately. An optimization method based on online learning is provided: when new data are generated, only the new data are needed to update the hash functions, which improves the efficiency of the algorithm while preserving its performance. The method maps heterogeneous data into a shared Hamming space and is suitable for cross-media retrieval over real-world streaming network big data.
Drawings
FIG. 1 is a flow chart of a supervised cross-media hash retrieval method based on online learning according to the present invention.
Fig. 2 is a schematic diagram of the retrieval effect from an image to a text according to the cross-media retrieval method of the present invention.
Fig. 3 is a schematic diagram of the retrieval effect from text to image according to the cross-media retrieval method of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings in conjunction with specific embodiments.
The method maps heterogeneous data to the same Hamming space; when new data are generated, only the new data are used to update the hash functions of the different modalities. The similarity of data from different modalities is then measured in the learned shared Hamming space, achieving efficient cross-media retrieval.
Fig. 1 is a flowchart of a latent semantic cross-media hash retrieval method based on online learning according to the present invention, and the latent semantic cross-media hash retrieval method based on online learning according to the present invention includes the following steps.
Step 1: image and text data are collected from a network, a cross-modal retrieval database is constructed, the features of the image and the text data in the database are extracted, the mean value of the image and the text data is removed, and a data set is divided into a training set and a testing set.
In the invention, for image data, firstly extracting Scale-Invariant Feature Transform (SIFT) features, then clustering the SIFT features by using a K-means algorithm to obtain 500 clustering centers, and finally constructing the features Of the image data by using a Bag Of Visual Words (BOVW) algorithm; for text data, 1000 most representative Words are selected to construct 1000-dimensional BOW (Bag Of Words) features Of the text data.
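A minimal sketch of the TF-IDF weighting applied to both the visual-word histograms and the text bag-of-words counts (a simplified illustration of the standard formula, not the patent's exact implementation):

```python
import numpy as np

def tfidf(counts: np.ndarray) -> np.ndarray:
    """counts: (n_docs, n_words) raw word or visual-word counts.
    Returns term-frequency * inverse-document-frequency weights."""
    tf = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
    df = (counts > 0).sum(axis=0)                      # documents containing each word
    idf = np.log(counts.shape[0] / np.maximum(df, 1))  # rarer words weigh more
    return tf * idf
```

A word (or visual word) that occurs in every document receives an IDF of zero and thus carries no discriminative weight, which is the point of the scheme.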
The data set is divided into a training data set and a testing data set, and the division can be performed according to actual needs, for example, 99% of data pairs in the data set are randomly selected to form the training data set, and the remaining 1% of data forms the testing data set.
Step 2: discrete labels of the data are mapped to a continuous latent semantic space, and an objective function is constructed based on similarity among the data of the space.
The objective function is defined as follows [equation omitted in source]:
wherein the formula involves the label matrix of all the data, the two mapping matrices, the hash codes of the data, the hash functions of the image and text modalities respectively, the weight parameters to be determined, and the Frobenius norm.
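The idea of mapping discrete labels into a continuous latent semantic space can be sketched as follows. The projection here is a random stand-in; in the method it is learned jointly with the other variables, and the exact objective appears only as an equation image in the source:

```python
import numpy as np

rng = np.random.default_rng(0)
n, c, k = 6, 3, 4                    # samples, classes, latent dimension (illustrative sizes)
labels = rng.integers(0, c, n)
L = np.eye(c)[labels]                # discrete one-hot label matrix, n x c
P = rng.standard_normal((c, k))      # label-to-latent projection (random stand-in, learned in practice)
V = L @ P                            # continuous latent semantic representations
S = V @ V.T                          # graded pairwise similarities, unlike the 0/1 label overlap
```

In the continuous space, similarities between samples take graded values rather than the coarse match/no-match signal of discrete labels, which is what lets the learned hash codes rank neighbors more accurately.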
And step 3: and solving the objective function by using an iterative optimization algorithm based on online learning, so that when new data is generated, the hash function is updated only by using the new data.
To simulate data being generated as a stream, the data features in the training set are divided into several data blocks; at each round, one more data block is added to the existing training data to form the current training set.
The number of data blocks can be chosen according to actual requirements; for example, the features can be divided evenly into 16 data blocks.
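The "pre-computed constant terms" trick behind the online updates can be sketched with a ridge-regression-style mapping: sufficient statistics are accumulated from each new data block, so re-solving the mapping never touches the old raw data. The variable names and the regularizer below are illustrative assumptions, not the patent's exact equations:

```python
import numpy as np

class OnlineMapping:
    """Maintains C1 = sum_t X_t^T V_t and C2 = sum_t X_t^T X_t over data blocks,
    then solves W = (C2 + lam*I)^-1 C1 from the accumulated statistics alone."""

    def __init__(self, d: int, k: int, lam: float = 1e-3):
        self.C1 = np.zeros((d, k))
        self.C2 = np.zeros((d, d))
        self.lam = lam

    def update(self, X_new: np.ndarray, V_new: np.ndarray) -> np.ndarray:
        self.C1 += X_new.T @ V_new     # only the new block is needed here
        self.C2 += X_new.T @ X_new
        d = self.C2.shape[0]
        return np.linalg.solve(self.C2 + self.lam * np.eye(d), self.C1)
```

Feeding two blocks sequentially yields the same mapping as one batch solve on the concatenated data, which is why only the newest block ever has to reside in memory.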
The solving of the objective function specifically includes the following steps.
1) When a new data block is added to the training set, a threshold (e.g., 0.001) and a maximum number of iterations (e.g., 50) are set, and 2)-6) are executed as long as the difference between the objective function values of two adjacent iterations is larger than the threshold and the iteration count is smaller than the maximum.
2) Fixing the other variables, the hash codes are solved for [update equation omitted in source]; the problem can be solved bit by bit with the discrete cyclic coordinate descent method, yielding the update of this variable.
3) Fixing the other variables, the next variable is updated by its closed-form solution [equation omitted in source].
4) Fixing the other variables, the next variable is updated by its closed-form solution [equation omitted in source].
5) Fixing the other variables, the next variable is updated by its closed-form solution [equation omitted in source].
6) Fixing the other variables, the last variable is updated by its closed-form solution [equation omitted in source].
Step 4: according to the modality of each data item in the test set, the data are mapped to the Hamming space with the hash function of the corresponding modality.
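A common form of such a modality hash function is a linear projection followed by the sign function; the sketch below assumes that form, since the patent gives its hash functions only as equation images:

```python
import numpy as np

def hash_codes(X: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Map mean-removed features X (n, d) to r-bit codes via h(x) = sign(x W),
    encoding the sign as 0/1 bits; W (d, r) is the learned projection for one modality."""
    return (X @ W >= 0).astype(np.uint8)
```

Because each modality has its own projection but the codes land in the same Hamming space, image and text codes become directly comparable.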
Step 5: data of one modality (for example, images) in the test set are used as the query data set, and data of the other modality (for example, texts) as the target data set.
Step 6: and calculating the Hamming distance between one data in the query data set and all data in the target data set, sequencing the data in an ascending order, and returning the first k data as a cross-media retrieval result.
To verify the effectiveness of the method of the invention, experiments were carried out on the public standard data set NUS-WIDE. To give each class enough training samples, the 21 classes with the most data, totaling 195,969 image and text pairs, were selected from NUS-WIDE. The image data are represented by 500-dimensional bag-of-visual-words features and the texts by 1000-dimensional bag-of-words features. 99% of the image and text pairs were randomly selected to form the training set and the remaining 1% formed the test set; the training data were divided evenly into 16 data blocks to simulate data being generated as a stream. To evaluate the performance of the method objectively, mean average precision (MAP), widely used in the retrieval field, is adopted as the evaluation criterion. The MAP results on the NUS-WIDE data set for different hash code lengths r are shown in Table 1.
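MAP averages, over all queries, the average precision of each ranked result list; the standard definition can be sketched as follows (this is the usual retrieval metric, not code from the patent):

```python
import numpy as np

def average_precision(relevant: np.ndarray) -> float:
    """relevant: 0/1 relevance flags of a ranked result list, best match first."""
    if relevant.sum() == 0:
        return 0.0
    precision_at_hit = np.cumsum(relevant) / (np.arange(relevant.size) + 1)
    return float((precision_at_hit * relevant).sum() / relevant.sum())

def mean_average_precision(ranked_lists) -> float:
    """Mean of the per-query average precisions."""
    return float(np.mean([average_precision(np.asarray(r)) for r in ranked_lists]))
```

For example, a ranking whose first and third results are relevant scores (1/1 + 2/3)/2 ≈ 0.833 for that query.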
TABLE 1 MAP results on NUS-WIDE dataset
 | r=16 | r=24 | r=32 | r=64
---|---|---|---|---
Image retrieval text | 0.4116 | 0.4150 | 0.4174 | 0.4183
Text retrieval image | 0.4323 | 0.4370 | 0.4461 | 0.4478
Claims (4)
1. A cross-media hash retrieval method based on online learning, characterized by comprising the following steps:
step 1: collecting image and text data pairs from a network, constructing a cross-modal retrieval database, extracting the characteristics of the image and text data in the database, removing the mean value, and dividing a data set into a training set and a test set;
step 2: mapping discrete labels of the data to a continuous latent semantic space, and keeping similarity between the data based on the space to construct an objective function;
and step 3: solving the objective function by using an iterative optimization algorithm based on online learning, so that when new data is generated, only the hash function is updated by using the new data;
and 4, step 4: mapping the data to a Hamming space by using a corresponding modal hash function according to the mode of the data in the test set;
step 5: taking the data of one modality (such as images) in the test set as the query set, and the data of the other modality (such as texts) as the target data set;
step 6: calculating the Hamming distance between each query and all data in the target data set, sorting in ascending order, and returning the top k data as the cross-media retrieval result.
2. The supervised cross-media hash retrieval method based on online learning of claim 1, wherein the step 1 comprises:
1) collecting image and text data using a network and making the image and text data in one-to-one correspondence;
2) extracting SIFT feature points from all image data, performing K-means clustering on the feature points, and taking the cluster centers as visual words; then quantizing every feature point to its nearest visual word; finally generating the feature representation of each image with the term frequency-inverse document frequency method, in the same way the text data are processed, the image data being finally represented as a feature matrix whose row dimension is the dimensionality of the image representation and whose column count is the number of data in the training database;
3) generating feature representations for all text data with a bag-of-words model, weighting each word with the term frequency-inverse document frequency method, and finally representing all text data as a feature matrix whose row dimension is the dimensionality of the text representation;
4) carrying out mean value removing processing on the generated image and text feature representation;
5) the data set is divided into a training set and a test set according to a certain proportion.
3. The supervised cross-media hash retrieval method based on online learning of claim 1, wherein the step 2 comprises the following steps:
1) establishing an objective function based on the characteristics of the images and the text data in the training set;
2) the objective function is defined as follows [equation omitted in source]:
wherein the formula involves the label matrix of all the data, the two mapping matrices, the hash codes of the data, the hash functions of the image and text modalities respectively, the weight parameters to be determined, and the Frobenius norm.
4. The supervised cross-media hash retrieval method based on online learning of claim 1, wherein the step 3 comprises the following steps:
1) dividing the data in the training database, ordered by collection time, into data blocks to simulate new data being generated continuously over time, the initial training set containing only the first data block, and one further data block being added to the training set at each round;
2) setting a threshold and a maximum number of iterations, and executing 3)-7) as long as the difference between the objective function values of two adjacent iterations is larger than the threshold and the iteration count is smaller than the maximum;
3) fixing the other variables and solving for the hash codes: when a new data block is generated, the new image and text data, their labels, and their hash codes are kept separate from the existing image and text data, labels, and hash codes; removing the constant terms, the objective function reduces to a subproblem [equation omitted in source] that can be solved bit by bit with the discrete cyclic coordinate descent method, yielding the update of this variable;
4) fixing the other variables and solving for the next variable: removing the constant terms, the objective function admits a closed-form solution [equation omitted in source]; the constant terms can be pre-computed before the update and stored in memory, so the update depends only on the new data;
5) fixing the other variables and solving for the next variable: as in 4), the constant terms can be pre-computed and stored in memory, so the update depends only on the new data;
6) fixing the other variables and solving for the next variable: as in 4), the constant terms can be pre-computed and stored in memory, so the update depends only on the new data;
7) fixing the other variables and solving for the last variable: the solution is analogous to the previous step, and the pre-computed constant terms again ensure that the update is associated only with the new data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810429547.1A CN108595688A (en) | 2018-05-08 | 2018-05-08 | Latent semantic cross-media hash retrieval method based on online learning
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810429547.1A CN108595688A (en) | 2018-05-08 | 2018-05-08 | Latent semantic cross-media hash retrieval method based on online learning
Publications (1)
Publication Number | Publication Date |
---|---|
CN108595688A true CN108595688A (en) | 2018-09-28 |
Family
ID=63635729
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810429547.1A Pending CN108595688A (en) | 2018-05-08 | 2018-05-08 | Latent semantic cross-media hash retrieval method based on online learning
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108595688A (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109766455A (en) * | 2018-11-15 | 2019-05-17 | Discriminative full-similarity-preserving cross-modal hash retrieval method |
CN109871379A (en) * | 2018-12-10 | 2019-06-11 | Online hash nearest-neighbor retrieval method based on data-block learning |
CN109960732A (en) * | 2019-03-29 | 2019-07-02 | Robustly supervised deep discrete cross-modal hash retrieval method and system |
CN110020214A (en) * | 2019-04-08 | 2019-07-16 | Knowledge-fused social-network streaming event detection system |
CN110059198A (en) * | 2019-04-08 | 2019-07-26 | Similarity-preserving discrete hash retrieval method for cross-modal data |
CN110110100A (en) * | 2019-05-07 | 2019-08-09 | Discrete supervised cross-media hash retrieval method based on harmonious matrix decomposition |
CN110674323A (en) * | 2019-09-02 | 2020-01-10 | Unsupervised cross-modal hash retrieval method and system based on virtual label regression |
CN111639197A (en) * | 2020-05-28 | 2020-09-08 | Cross-modal multimedia data retrieval method and system with label-embedded online hashing |
CN111914108A (en) * | 2019-05-07 | 2020-11-10 | Discrete supervised cross-modal hash retrieval method based on semantic preservation |
CN112214623A (en) * | 2020-09-09 | 2021-01-12 | Efficient supervised image-embedding cross-media hash retrieval method for image-text samples |
CN113312505A (en) * | 2021-07-29 | 2021-08-27 | Cross-modal retrieval method and system based on discrete online hash learning |
CN114117153A (en) * | 2022-01-25 | 2022-03-01 | Online cross-modal retrieval method and system based on similarity relearning |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018010365A1 (en) * | 2016-07-11 | 2018-01-18 | 北京大学深圳研究生院 | Cross-media search method |
CN107729513A (en) * | 2017-10-25 | 2018-02-23 | Discrete supervised cross-modal hash retrieval method based on semantic alignment |
-
2018
- 2018-05-08 CN CN201810429547.1A patent/CN108595688A/en active Pending
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018010365A1 (en) * | 2016-07-11 | 2018-01-18 | 北京大学深圳研究生院 | Cross-media search method |
CN107729513A (en) * | 2017-10-25 | 2018-02-23 | Discrete supervised cross-modal hash retrieval method based on semantic alignment |
Non-Patent Citations (3)
Title |
---|
ZHUANG, YUETING ET AL.: "Cross-Media Hashing with Neural Networks" * |
YAO, TAO; KONG, XIANGWEI; FU, HAIYAN; TIAN, QI: "Cross-modal hash retrieval based on mapping dictionary learning" * |
LI, ZHIYI; HUANG, ZIFENG; XU, XIAOMIAN: "A survey of cross-modal retrieval models and feature extraction based on representation learning" * |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109766455A (en) * | 2018-11-15 | 2019-05-17 | Discriminative full-similarity-preserving cross-modal hash retrieval method |
CN109766455B (en) * | 2018-11-15 | 2021-09-24 | Discriminative full-similarity-preserving cross-modal hash retrieval method |
CN109871379A (en) * | 2018-12-10 | 2019-06-11 | Online hash nearest-neighbor retrieval method based on data-block learning |
CN109871379B (en) * | 2018-12-10 | 2022-04-01 | Online hash nearest-neighbor query method based on data-block learning |
CN109960732A (en) * | 2019-03-29 | 2019-07-02 | Robustly supervised deep discrete cross-modal hash retrieval method and system |
CN110020214A (en) * | 2019-04-08 | 2019-07-16 | Knowledge-fused social-network streaming event detection system |
CN110059198A (en) * | 2019-04-08 | 2019-07-26 | Similarity-preserving discrete hash retrieval method for cross-modal data |
CN110059198B (en) * | 2019-04-08 | 2021-04-13 | Similarity-preserving discrete hash retrieval method for cross-modal data |
CN111914108A (en) * | 2019-05-07 | 2020-11-10 | Discrete supervised cross-modal hash retrieval method based on semantic preservation |
CN110110100A (en) * | 2019-05-07 | 2019-08-09 | Discrete supervised cross-media hash retrieval method based on harmonious matrix decomposition |
CN110674323A (en) * | 2019-09-02 | 2020-01-10 | Unsupervised cross-modal hash retrieval method and system based on virtual label regression |
CN111639197B (en) * | 2020-05-28 | 2021-03-12 | Cross-modal multimedia data retrieval method and system with label-embedded online hashing |
CN111639197A (en) * | 2020-05-28 | 2020-09-08 | Cross-modal multimedia data retrieval method and system with label-embedded online hashing |
CN112214623A (en) * | 2020-09-09 | 2021-01-12 | Efficient supervised image-embedding cross-media hash retrieval method for image-text samples |
CN113312505A (en) * | 2021-07-29 | 2021-08-27 | Cross-modal retrieval method and system based on discrete online hash learning |
CN114117153A (en) * | 2022-01-25 | 2022-03-01 | Online cross-modal retrieval method and system based on similarity relearning |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108595688A (en) | Latent semantic cross-media hash retrieval method based on online learning | |
WO2020182019A1 (en) | Image search method, apparatus, device, and computer-readable storage medium | |
CN106202256B (en) | Web image retrieval method based on semantic propagation and mixed multi-instance learning | |
CN113190699A (en) | Remote sensing image retrieval method and device based on category-level semantic hash | |
CN111125411B (en) | Large-scale image retrieval method for deep strong correlation hash learning | |
CN110110100A (en) | Discrete supervised cross-media hash retrieval method based on harmonious matrix decomposition | |
CN103559504A (en) | Image target category identification method and device | |
CN114117153B (en) | Online cross-modal retrieval method and system based on similarity relearning | |
CN107291895B (en) | Quick hierarchical document query method | |
CN112819023A (en) | Sample set acquisition method and device, computer equipment and storage medium | |
CN109871454B (en) | Robust discrete supervision cross-media hash retrieval method | |
CN102289522A (en) | Method of intelligently classifying texts | |
CN105718532A (en) | Cross-media ranking method based on multi-depth network structure | |
CN104199965A (en) | Semantic information retrieval method | |
CN105740404A (en) | Label association method and device | |
CN112650923A (en) | Public opinion processing method and device for news events, storage medium and computer equipment | |
CN109829065B (en) | Image retrieval method, device, equipment and computer readable storage medium | |
CN114329109B (en) | Multimodal retrieval method and system based on weakly supervised Hash learning | |
CN112836509A (en) | Expert system knowledge base construction method and system | |
CN111950728A (en) | Image feature extraction model construction method, image retrieval method and storage medium | |
CN105183792B (en) | Distributed fast text classification method based on locality sensitive hashing | |
CN111259140A (en) | False comment detection method based on LSTM multi-entity feature fusion | |
CN110647995A (en) | Rule training method, device, equipment and storage medium | |
CN103778206A (en) | Method for providing network service resources | |
CN110442736B (en) | Semantic enhancer spatial cross-media retrieval method based on secondary discriminant analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20180928 |