CN114610941A - Cultural relic image retrieval system based on comparison learning - Google Patents
- Publication number
- CN114610941A CN114610941A CN202210253589.0A CN202210253589A CN114610941A CN 114610941 A CN114610941 A CN 114610941A CN 202210253589 A CN202210253589 A CN 202210253589A CN 114610941 A CN114610941 A CN 114610941A
- Authority
- CN
- China
- Prior art keywords
- image
- feature
- network
- query
- retrieval
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 239000013598 vector Substances 0.000 claims description 41
- 230000006870 function Effects 0.000 claims description 15
- 238000000034 method Methods 0.000 claims description 13
- 238000007781 pre-processing Methods 0.000 claims description 9
- 238000000605 extraction Methods 0.000 claims description 8
- 230000008569 process Effects 0.000 claims description 8
- 238000004364 calculation method Methods 0.000 claims description 6
- 230000009466 transformation Effects 0.000 claims description 6
- 230000000052 comparative effect Effects 0.000 claims description 5
- 238000005457 optimization Methods 0.000 claims description 3
- 238000005070 sampling Methods 0.000 claims description 3
- 238000000844 transformation Methods 0.000 claims description 3
- 238000005516 engineering process Methods 0.000 description 3
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 238000004458 analytical method Methods 0.000 description 1
- 230000015572 biosynthetic process Effects 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 238000012163 sequencing technique Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/51—Indexing; Data structures therefor; Storage structures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Library & Information Science (AREA)
- Databases & Information Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a cultural relic image retrieval system based on contrastive learning. First, a model is trained with a supervised contrastive learning algorithm to obtain a strong feature extractor, so that the extracted features accurately represent the semantic information contained in an image; retrieval is then performed by calculating the similarity between feature representations, and the accuracy of the retrieval results is further improved through average query expansion and database-side feature enhancement. The retrieval system of the invention trains the network with the supervised contrastive learning algorithm to obtain the feature extractor, extracts effective and discriminative feature representations from images, and further improves retrieval accuracy through average query expansion and database-side feature enhancement.
Description
Technical Field
The invention relates to feature extraction and contrastive matching technology for cultural relic image data, and in particular to an image retrieval system for cultural relic data based on a contrastive learning algorithm.
Background
Auditing in each link of the circulation of cultural relics among private collectors relies too heavily on empirical analysis and naked-eye judgment, leading to cumbersome procedures and low efficiency; this creates a need for computer-automated retrieval of cultural relic images. Image retrieval aims to index a query image into an image database and, according to some metric, output the images in the database that match or are similar to the query image. Given the large volume of image data and the high retrieval requirements, there is an urgent need for high-fidelity digital information acquisition technology suited to the diversity of civilian cultural relics and the complexity of their scenes, together with a method for extracting key feature information from cultural relic data and matching it by comparison.
Disclosure of Invention
To solve the problems in the prior art, the invention provides a cultural relic image retrieval system based on contrastive learning, which addresses prior-art shortcomings such as low retrieval accuracy and efficiency and high demands on computing power.
The technical scheme of the invention is as follows:
A cultural relic image retrieval system based on contrastive learning comprises a feature extractor and a retrieval module, wherein the feature extractor comprises preprocessing and feature extraction, and the retrieval module comprises sorting, indexing and similarity calculation. An image to be retrieved is input and undergoes preprocessing and feature extraction to obtain a corresponding feature vector; all images in the image database likewise undergo preprocessing and feature extraction to build a corresponding image feature library; the retrieval module then calculates the similarity between the feature vector of the query image and the feature vectors in the image feature library, and the images in the library are ranked and indexed by this similarity to obtain the image(s) matching the query image as the final retrieval result.
The feature extractor is network-trained with a supervised contrastive learning algorithm. The contrastive learning model adopts two completely symmetric branches with shared parameters; each branch comprises a data augmentation network, an encoder network and a projection network, where the encoder network and the projection network together form the feature extractor. For any image x, two different data augmentation modes form two enhanced views x_i and x_j. Since the upper and lower branches are completely symmetric, in the upper branch x_i is first converted by the encoder network into a corresponding representation h_i = f_θ(x_i); the projection network, a nonlinear transformation structure, then maps this to a final feature representation z_i = g_θ(h_i). Likewise, the enhanced view of the lower branch undergoes the two nonlinear transformations to obtain a final feature representation z_j = g_θ(f_θ(x_j)).
The network training is as follows: N samples are randomly drawn to form a Batch, denoted {x_k, y_k}, k = 1, 2, ..., N, where y_k is the label of x_k. Data augmentation yields 2N samples {x̃_k, ỹ_k}, k = 1, 2, ..., 2N, where x̃_{2k-1} and x̃_{2k} are the data pair obtained from the same sample by two random data augmentation modes, and the label information is never changed in the data augmentation process. For supervised contrastive learning, one sample corresponds to a plurality of positive samples: every sample in the Batch with the same label information is a positive sample, and every sample with different label information is a negative sample. The known label information is thus used effectively for supervision, pulling samples of the same class closer together in the representation space and pushing samples of different classes apart, which improves the discriminative power of the feature representation. The loss function for supervised contrastive learning is therefore defined as:

L_sup = Σ_{i=1}^{2N} (−1/|P(i)|) Σ_{j(i)∈P(i)} log [ exp(z_i · z_{j(i)} / τ) / Σ_{k=1}^{2N} 1_{k≠i} exp(z_i · z_k / τ) ]    (4)

where 1_{k≠i} ∈ {0, 1} is an indicator function taking the value 1 if and only if k ≠ i and 0 otherwise; τ > 0 is a temperature parameter; z_{j(i)} denotes a positive sample of z_i; z_i · z_{j(i)} denotes the inner product between the vectors; and |P(i)| denotes the total number of samples in the Batch having the same label information as sample z_i. The network is trained by optimizing the loss function of formula (4), and the trained encoder network and projection network serve as the feature extractor to extract features of the query image and the images in the image database.
The similarity calculation function between feature vectors adopts the dot product after L2 regularization of the feature vectors, i.e. the cosine similarity between the feature vectors:

s(z_i, z_j) = (z_i · z_j) / (‖z_i‖₂ ‖z_j‖₂)

where z_i and z_j are one-dimensional vectors and ‖·‖₂ denotes the L2 norm of a vector.
In the indexing and sorting process, average query expansion and database side feature enhancement are used to further improve the accuracy of the retrieval result.
In average query expansion, the images in the database are first ranked by the similarity between the feature vector of the original query Q_0 and the feature vectors in the feature library, and the top m (m < 50) results are returned; the original query Q_0 is then averaged with the m results to form a new query Q_avg, which is used to generate the final retrieval result:

z_avg = (1 / (m + 1)) (z_0 + Σ_{i=1}^{m} z_i)

where z_0 is the feature vector of the original query and z_i is the feature vector of the i-th result.
Database-side feature enhancement replaces each original image representation with a combination of the image in the database and the images close to it, with the aim of improving the quality of the image representation by exploiting the features of the image's neighbourhood. Pairwise similarities are first computed among the feature vectors in the image feature library; for any image, the k image features closest to it are summed, optionally weighting the sum according to the ranking of the features, e.g.:

z_new = Σ_{r=0}^{k-1} ((k − r) / k) · z_r

where r is the ranking of the image features and k is the total number of similar images considered.
Advantageous effects:
the cultural relic image retrieval system based on the contrast learning provided by the invention trains the network through the supervised contrast learning algorithm to obtain the feature extractor, can obtain effective and discriminative feature representation for image extraction, and further improves the retrieval accuracy through average query expansion and data end data feature enhancement. The user enters a cultural relic image as a query, and the retrieval system can accurately retrieve in the image database and return the result(s) matched with the query image. Both quantitative and qualitative results obtained on the common image dataset cifar10 indicate the effectiveness of the retrieval system.
Drawings
FIG. 1 shows the cultural relic image retrieval system based on contrastive learning;
FIG. 2 shows the contrastive learning model;
FIG. 3 shows quantitative and qualitative results of the system.
Detailed Description
The invention is described in detail below with reference to the drawings and the detailed description.
The cultural relic image retrieval system based on contrastive learning is shown in FIG. 1. An image to be retrieved is input and preprocessed (scale conversion, random flipping, and the like), and the corresponding query image features are extracted; meanwhile, all images in the image database are preprocessed and their features extracted to build the corresponding image feature library. The similarity between the query features and all features in the library is then calculated, and the images in the library are ranked and indexed by this similarity to obtain the image(s) matching the query image (one or more top-ranked images) as the final retrieval result. The implementation of the feature extractor training and of the retrieval module is described in detail below:
1. feature extractor
The feature extractor must extract features from the query image and the images in the image database that represent the image information effectively. This is the basis on which the subsequent retrieval module measures the similarity between images via the similarity between their feature representations, and it is the key to the retrieval accuracy of the whole system. Here the feature extractor is trained with a supervised contrastive learning algorithm.
The core idea of contrastive learning is to pull a sample closer to its positive samples while pushing it away from its negative samples. In the supervised contrastive learning algorithm, the label information in the dataset serves as supervision and each sample corresponds to multiple positive and negative samples; the trained feature extractor can therefore generate discriminative feature representations, which benefits the image retrieval task. As shown in FIG. 2, the contrastive learning model adopts two completely symmetric branches with shared parameters; each branch comprises data augmentation, an encoder network and a projection network, where the encoder network and the projection network together form the feature extractor.
For any image x, two different data augmentation modes form two enhanced views x_i and x_j. Since the upper and lower branches are completely symmetric, take the upper branch as an example: x_i is first converted by the encoder network (typically with ResNet as the model structure) into a corresponding feature representation h_i = f_θ(x_i). The projection network, a nonlinear transformation structure (a two-layer MLP of the form [FC → BN → ReLU → FC]), then maps this to a final feature representation z_i = g_θ(h_i). Likewise, the enhanced view of the lower branch undergoes the two nonlinear transformations to obtain a final feature representation z_j = g_θ(f_θ(x_j)). The goal of contrastive learning is to bring positive samples closer together and push negative samples farther apart in the representation space.
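By way of illustration only, the data flow z = g_θ(f_θ(x)) through the two symmetric, parameter-shared branches can be sketched as follows. This is a minimal NumPy sketch under stated assumptions: the encoder f_θ (ResNet in the invention) is replaced by a hypothetical linear map, and the projection g_θ by a two-layer MLP without batch normalisation; none of the names or shapes below come from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the shared-parameter networks (assumptions,
# not the invention's networks): a linear map for the encoder and an
# FC -> ReLU -> FC two-layer MLP for the projection network.
W_enc = rng.normal(size=(128, 64)) * 0.1
W1 = rng.normal(size=(64, 64)) * 0.1
W2 = rng.normal(size=(64, 32)) * 0.1

def f_theta(x):
    """Encoder network: enhanced view x -> representation h."""
    return x @ W_enc

def g_theta(h):
    """Projection network: representation h -> final feature z."""
    return np.maximum(h @ W1, 0.0) @ W2

x = rng.normal(size=(1, 128))                 # one (flattened) image
x_i = x + 0.1 * rng.normal(size=x.shape)      # "augmentation" 1: noise
x_j = np.flip(x, axis=1).copy()               # "augmentation" 2: flip

# Both branches apply the SAME parameters (completely symmetric, shared):
z_i = g_theta(f_theta(x_i))
z_j = g_theta(f_theta(x_j))
```

Both views thus land in the same 32-dimensional representation space, where the contrastive loss can compare them.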
During network training, N samples are randomly drawn to form a Batch, denoted {x_k, y_k}, k = 1, 2, ..., N, where y_k is the label of x_k. Data augmentation yields 2N samples {x̃_k, ỹ_k}, k = 1, 2, ..., 2N, where x̃_{2k-1} and x̃_{2k} are two augmentations of the same sample, and the label information is never changed in the data augmentation process. Without considering class supervision information, the data pair x̃_{2k-1} and x̃_{2k} are positive samples of each other, and any other 2N − 2 samples in the Batch are negative samples. In this case the algorithm is a self-supervised contrastive learning algorithm, and the loss function is defined as:

ℓ_{i,j} = −log [ exp(z_i · z_j / τ) / Σ_{k=1}^{2N} 1_{k≠i} exp(z_i · z_k / τ) ]

where 1_{k≠i} ∈ {0, 1} is an indicator function taking the value 1 if and only if k ≠ i and 0 otherwise; τ > 0 is a temperature parameter; z_j denotes the positive sample of z_i; and z_i · z_j denotes the inner product between the vectors. The numerator of the loss encourages the similarity between a sample and its positive sample to be as high as possible, i.e. their distance in the representation space to be as small as possible; the denominator encourages the similarity between a sample and its negative samples to be as low as possible, i.e. their distance in the representation space to be as large as possible.
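The self-supervised loss above can be sketched in NumPy as follows; this is an illustration under the assumption that rows 2k and 2k+1 of the embedding matrix are the two augmented views of sample k, not the invention's implementation.

```python
import numpy as np

def nt_xent_loss(z, tau=0.5):
    """Self-supervised contrastive loss over a batch of 2N embeddings,
    where rows 2k and 2k+1 are two augmented views of the same sample."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalise
    sim = z @ z.T / tau                # scaled pairwise inner products
    np.fill_diagonal(sim, -np.inf)     # indicator 1_{k != i}: drop self
    idx = np.arange(z.shape[0])
    pos = idx ^ 1                      # 0<->1, 2<->3, ...: paired views
    # log of (numerator / denominator) for each anchor i
    log_prob = sim[idx, pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()
```

For a batch where each pair of views is identical, each anchor's positive similarity is maximal and the loss reduces to log(e + 2) − 1 at τ = 1.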
The loss function of self-supervised contrastive learning treats each sample as its own class, so it cannot exploit the case where labels exist in the dataset, i.e. where multiple samples are known to belong to the same class. For supervised contrastive learning, one sample corresponds to a plurality of positive samples: every sample in the Batch with the same label information is a positive sample, and every sample with different label information is a negative sample. The known label information is thus used effectively for supervision, pulling samples of the same class closer together in the representation space and pushing samples of different classes apart, which improves the discriminative power of the feature representation. The loss function for supervised contrastive learning is therefore defined as:

L_sup = Σ_{i=1}^{2N} (−1/|P(i)|) Σ_{j(i)∈P(i)} log [ exp(z_i · z_{j(i)} / τ) / Σ_{k=1}^{2N} 1_{k≠i} exp(z_i · z_k / τ) ]    (4)

where P(i) is the set of positive samples of z_i and |P(i)| denotes the total number of samples in the Batch having the same label information as sample z_i. The network is trained by optimizing the loss function of formula (4), and the trained encoder network and projection network serve as the feature extractor to extract features of the query image and the images in the image database.
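A minimal NumPy sketch of the supervised loss of formula (4), assuming every sample in the Batch has at least one positive, might look like:

```python
import numpy as np

def supcon_loss(z, labels, tau=0.5):
    """Supervised contrastive loss: every other sample in the Batch that
    shares z_i's label is a positive; the log-probability is averaged
    over the positive set P(i) and then over the batch."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                # exclude k == i
    log_den = np.log(np.exp(sim).sum(axis=1))     # log denominator per i
    labels = np.asarray(labels)
    pos = labels[:, None] == labels[None, :]      # same-label mask P(i)
    np.fill_diagonal(pos, False)                  # i is not its own positive
    # assumes pos.sum(axis=1) >= 1 for every sample in the Batch
    per_sample = -(np.where(pos, sim - log_den[:, None], 0.0).sum(axis=1)
                   / pos.sum(axis=1))
    return per_sample.mean()
```

When each sample has exactly one positive, this coincides with the self-supervised loss above, which is a useful sanity check.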
2. Retrieval module
The feature extractor performs feature extraction on all images in the database to obtain the corresponding image feature library. At retrieval time, a query image is input and its features are extracted to obtain the corresponding feature vector. The retrieval module then calculates the similarity between the feature vector of the query image and the feature vectors in the image feature library, indexes and sorts by this similarity, and outputs the top-ranked images (the number is set manually) as the final result.
The similarity calculation function between feature vectors generally adopts the dot product after L2 regularization of the feature vectors, i.e. the cosine similarity:

s(z_i, z_j) = (z_i · z_j) / (‖z_i‖₂ ‖z_j‖₂)
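The similarity computation and ranking can be sketched as follows (a NumPy illustration; the function name and `top` parameter are hypothetical):

```python
import numpy as np

def cosine_retrieve(query, gallery, top=5):
    """Rank gallery feature vectors by cosine similarity to the query,
    i.e. the dot product after L2 regularisation of the vectors."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q                 # cosine similarity to each gallery item
    order = np.argsort(-sims)    # indices by descending similarity
    return order[:top], sims[order[:top]]
```

A gallery item identical in direction to the query scores 1, an orthogonal one scores 0, so the returned index order is the retrieval ranking.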
in the indexing and sorting process, average query expansion and database side feature enhancement are used to further improve the accuracy of the retrieval result. Mean query expansion, i.e. first from the original query Q0The similarity between the feature vectors in the database and the feature vectors in the feature library sorts the images in the database, returns the first m (m < 50) results, and then averages the original query Q0 with the m results to form a new query QavgAnd generating a final retrieval result by using the new query.
z_avg = (1 / (m + 1)) (z_0 + Σ_{i=1}^{m} z_i)

where z_0 is the feature vector of the original query and z_i is the feature vector of the i-th result. Database-side feature enhancement replaces each original image representation with a combination of the image in the database and the images close to it, with the aim of improving the quality of the image representation by exploiting the features of the image's neighbourhood. Pairwise similarities are first computed among the feature vectors in the image feature library; for any image, the k image features closest to it are summed, optionally weighting the sum according to the ranking of the features, e.g.:

z_new = Σ_{r=0}^{k-1} ((k − r) / k) · z_r
where r is the ranking of the image features and k is the total number of similar images considered.
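Both re-ranking steps can be sketched in NumPy as below. The descending rank weight (k − r)/k in `database_side_augmentation` is an assumed example of weighting by rank, since the text only states that the sum may be rank-weighted; the function names are illustrative.

```python
import numpy as np

def average_query_expansion(z0, feats, m=5):
    """New query: mean of the (L2-normalised) original query and its
    top-m nearest gallery features."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    q = z0 / np.linalg.norm(z0)
    top = np.argsort(-(f @ q))[:m]          # indices of the m best results
    return (q + f[top].sum(axis=0)) / (m + 1)

def database_side_augmentation(feats, k=3):
    """Replace each gallery feature by a rank-weighted sum of its k most
    similar features (rank 0 is the image itself); weight (k - r) / k
    is an assumed weighting, not taken from the patent."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sims = f @ f.T                          # pairwise similarities
    w = (k - np.arange(k)) / k              # descending weights 1, ..., 1/k
    out = np.zeros_like(f)
    for i in range(f.shape[0]):
        nbrs = np.argsort(-sims[i])[:k]     # k nearest, self included
        out[i] = (w[:, None] * f[nbrs]).sum(axis=0)
    return out
```

The expanded query is then used exactly like the original one in the cosine-similarity ranking step.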
In the cultural relic image retrieval system based on contrastive learning of this embodiment, the encoder network in the feature extractor adopts the ResNet50 architecture, cosine similarity is used to compute the similarity between image feature vectors, and the top five results by feature similarity are used by the retrieval module for average query expansion and database-side feature enhancement. The feature extractor is first trained with the supervised contrastive learning algorithm, and the trained extractor then extracts features from the images in the image database to build the feature library. When a user queries, the query image is input into the system, which preprocesses it and extracts the query features with the feature extractor, measures the similarity between the query features and the features in the library, sorts and indexes by feature similarity, and finally outputs to the user the image(s) matching the query image (one or more top-ranked images). The retrieval system achieves fast and effective retrieval; its retrieval accuracy on the public image dataset CIFAR-10 is shown in Table 1, and retrieval results with 10 outputs are shown in FIG. 3.
TABLE 1

Dataset | Precision@1 (%) | Precision@10 (%) | mAP@all (%)
---|---|---|---
CIFAR-10 | 98.0 | 100 | 98.1
While the methods and techniques of the invention have been described in terms of preferred embodiments, it will be apparent to those of ordinary skill in the art that variations and/or modifications of the methods and techniques described herein may be made without departing from the spirit and scope of the invention. All such similar substitutes and modifications apparent to those skilled in the art are expressly deemed to be within the spirit, scope and content of the invention. Matters not described in detail belong to techniques well known in the art.
Claims (7)
1. A cultural relic image retrieval system based on contrastive learning, characterized by comprising a feature extractor and a retrieval module, wherein the feature extractor comprises preprocessing and feature extraction, and the retrieval module comprises sorting, indexing and similarity calculation; an image to be retrieved is input and undergoes preprocessing and feature extraction to obtain a corresponding feature vector; all images in an image database undergo preprocessing and feature extraction to obtain a corresponding image feature library; the retrieval module then calculates the similarity between the feature vector of the query image and the feature vectors in the image feature library, and the images in the library are ranked and indexed by this similarity to obtain the image(s) matching the query image as the final retrieval result.
2. The cultural relic image retrieval system based on contrastive learning according to claim 1, characterized in that the feature extractor is network-trained with a supervised contrastive learning algorithm; the contrastive learning model adopts two completely symmetric branches with shared parameters, each branch comprising a data augmentation network, an encoder network and a projection network, wherein the encoder network and the projection network form the feature extractor; for any image x, two different data augmentation modes form two enhanced views x_i and x_j; since the upper and lower branches are completely symmetric, in the upper branch x_i is first converted by the encoder network into a corresponding representation h_i = f_θ(x_i); the projection network, a nonlinear transformation structure, then maps the feature representation to a final feature representation z_i = g_θ(h_i); likewise, the enhanced view of the lower branch undergoes the two nonlinear transformations to obtain a final feature representation z_j = g_θ(f_θ(x_j)).
3. The cultural relic image retrieval system based on contrastive learning according to claim 2, characterized in that the network training is as follows: N samples are randomly drawn to form a Batch, denoted {x_k, y_k}, k = 1, 2, ..., N, where y_k is the label of x_k; data augmentation yields 2N samples {x̃_k, ỹ_k}, k = 1, 2, ..., 2N, where x̃_{2k-1} and x̃_{2k} are the data pair obtained from the same sample by two random data augmentation modes, and the label information is never changed in the data augmentation process;

for supervised contrastive learning, one sample corresponds to a plurality of positive samples, i.e. every sample in the Batch with the same label information is a positive sample and every sample with different label information is a negative sample, so that the known label information is used effectively for supervision, samples of the same class are closer in the representation space, samples of different classes are far from each other in the representation space, and the discriminative power of the feature representation is improved; thus the loss function of supervised contrastive learning is defined as:

L_sup = Σ_{i=1}^{2N} (−1/|P(i)|) Σ_{j(i)∈P(i)} log [ exp(z_i · z_{j(i)} / τ) / Σ_{k=1}^{2N} 1_{k≠i} exp(z_i · z_k / τ) ]    (4)

where 1_{k≠i} ∈ {0, 1} is an indicator function taking the value 1 if and only if k ≠ i and 0 otherwise; τ > 0 is a temperature parameter; z_{j(i)} denotes a positive sample of z_i; z_i · z_{j(i)} denotes the inner product between the vectors; and |P(i)| denotes the total number of samples in the Batch having the same label information as sample z_i; the network is trained by optimizing the loss function of formula (4), and the trained encoder network and projection network serve as the feature extractor to extract features of the query image and the images in the image database.
4. The cultural relic image retrieval system based on contrastive learning according to claim 1, characterized in that the similarity calculation function between feature vectors adopts the dot product after L2 regularization of the feature vectors, i.e. the cosine similarity between the feature vectors:

s(z_i, z_j) = (z_i · z_j) / (‖z_i‖₂ ‖z_j‖₂)

where z_i and z_j are one-dimensional vectors and ‖·‖₂ denotes the L2 norm of a vector.
5. The cultural relic image retrieval system based on contrastive learning according to claim 1, characterized in that, in the indexing and sorting process, average query expansion and database-side feature enhancement are used to further improve the accuracy of the retrieval result.
6. The cultural relic image retrieval system based on contrastive learning according to claim 5, characterized in that, in average query expansion, the images in the database are first ranked by the similarity between the feature vector of the original query Q_0 and the feature vectors in the feature library, the top m (m < 50) results are returned, and the original query Q_0 is then averaged with the m results to form a new query Q_avg, which is used to generate the final retrieval result:

z_avg = (1 / (m + 1)) (z_0 + Σ_{i=1}^{m} z_i)

where z_0 is the feature vector of the original query and z_i is the feature vector of the i-th result.
7. The cultural relic image retrieval system based on contrastive learning according to claim 5, characterized in that database-side feature enhancement replaces each original image representation with a combination of the image in the database and the images close to it, aiming to improve the quality of the image representation by exploiting the features of the image's neighbourhood; pairwise similarities are first computed among the feature vectors in the image feature library, and for any image the k image features closest to it are summed, optionally weighting the sum according to the ranking of the features, e.g.:

z_new = Σ_{r=0}^{k-1} ((k − r) / k) · z_r

where r is the ranking of the image features and k is the total number of similar images considered.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210253589.0A CN114610941A (en) | 2022-03-15 | 2022-03-15 | Cultural relic image retrieval system based on comparison learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210253589.0A CN114610941A (en) | 2022-03-15 | 2022-03-15 | Cultural relic image retrieval system based on comparison learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114610941A true CN114610941A (en) | 2022-06-10 |
Family
ID=81862722
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210253589.0A Pending CN114610941A (en) | 2022-03-15 | 2022-03-15 | Cultural relic image retrieval system based on comparison learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114610941A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116580268A (en) * | 2023-07-11 | 2023-08-11 | 腾讯科技(深圳)有限公司 | Training method of image target positioning model, image processing method and related products |
-
2022
- 2022-03-15 CN CN202210253589.0A patent/CN114610941A/en active Pending
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116580268A (en) * | 2023-07-11 | 2023-08-11 | 腾讯科技(深圳)有限公司 | Training method of image target positioning model, image processing method and related products |
CN116580268B (en) * | 2023-07-11 | 2023-10-03 | 腾讯科技(深圳)有限公司 | Training method of image target positioning model, image processing method and related products |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110851645B (en) | Image retrieval method based on similarity maintenance under deep metric learning | |
CN111198959A (en) | Two-stage image retrieval method based on convolutional neural network | |
CN107291895B (en) | Quick hierarchical document query method | |
Champ et al. | A comparative study of fine-grained classification methods in the context of the LifeCLEF plant identification challenge 2015 | |
WO2001046858A1 (en) | Vector index creating method, similar vector searching method, and devices for them | |
CN106815362A (en) | One kind is based on KPCA multilist thumbnail Hash search methods | |
CN110458175B (en) | Unmanned aerial vehicle image matching pair selection method and system based on vocabulary tree retrieval | |
CN112434662B (en) | Tea leaf scab automatic identification algorithm based on multi-scale convolutional neural network | |
CN113392191B (en) | Text matching method and device based on multi-dimensional semantic joint learning | |
CN115048539B (en) | Social media data online retrieval method and system based on dynamic memory | |
CN112927783A (en) | Image retrieval method and device | |
JP5014479B2 (en) | Image search apparatus, image search method and program | |
CN110598022A (en) | Image retrieval system and method based on robust deep hash network | |
CN112785015A (en) | Equipment fault diagnosis method based on case reasoning | |
CN114676769A (en) | Visual transform-based small sample insect image identification method | |
CN116187444A (en) | K-means++ based professional field sensitive entity knowledge base construction method | |
Sadique et al. | Content-based image retrieval using color layout descriptor, gray-level co-occurrence matrix and k-nearest neighbors | |
CN114610941A (en) | Cultural relic image retrieval system based on comparison learning | |
CN111191033A (en) | Open set classification method based on classification utility | |
López-Cifuentes et al. | Attention-based knowledge distillation in scene recognition: the impact of a dct-driven loss | |
CN113342950A (en) | Answer selection method and system based on semantic union | |
CN113342949A (en) | Matching method and system of intellectual library experts and topic to be researched | |
CN116935411A (en) | Radical-level ancient character recognition method based on character decomposition and reconstruction | |
CN112084353A (en) | Bag-of-words model method for rapid landmark-convolution feature matching | |
CN113920303B (en) | Convolutional neural network based weak supervision type irrelevant image similarity retrieval system and control method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | ||
TA01 | Transfer of patent application right |
Effective date of registration: 20221101 Address after: 300072 Tianjin City, Nankai District Wei Jin Road No. 92 Applicant after: Tianjin University Applicant after: Yiyuan digital (Beijing) Technology Group Co.,Ltd. Address before: 300072 Tianjin City, Nankai District Wei Jin Road No. 92 Applicant before: Tianjin University |