WO2023093574A1 - News event search method and system based on a multi-level image-text semantic alignment model - Google Patents
News event search method and system based on a multi-level image-text semantic alignment model
- Publication number
- WO2023093574A1 (application PCT/CN2022/131992)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- text
- news
- image
- model
- modal
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 59
- 239000013598 vector Substances 0.000 claims abstract description 25
- 238000004422 calculation algorithm Methods 0.000 claims abstract description 19
- 230000006870 function Effects 0.000 claims description 37
- 238000001514 detection method Methods 0.000 claims description 30
- 238000012549 training Methods 0.000 claims description 24
- 238000004364 calculation method Methods 0.000 claims description 10
- 230000008569 process Effects 0.000 claims description 8
- 238000002372 labelling Methods 0.000 claims description 7
- 238000012216 screening Methods 0.000 claims description 7
- 238000013461 design Methods 0.000 claims description 6
- 238000005516 engineering process Methods 0.000 claims description 4
- 239000011159 matrix material Substances 0.000 claims description 4
- 238000003062 neural network model Methods 0.000 claims description 4
- 230000003247 decreasing effect Effects 0.000 claims description 3
- 230000002829 reductive effect Effects 0.000 claims description 3
- 239000013589 supplement Substances 0.000 claims description 3
- 230000000007 visual effect Effects 0.000 abstract description 10
- 230000006872 improvement Effects 0.000 abstract description 7
- 238000013528 artificial neural network Methods 0.000 description 17
- 238000010586 diagram Methods 0.000 description 13
- 238000000605 extraction Methods 0.000 description 12
- 238000011160 research Methods 0.000 description 6
- 238000013135 deep learning Methods 0.000 description 5
- 230000008901 benefit Effects 0.000 description 4
- 238000010801 machine learning Methods 0.000 description 4
- 230000015556 catabolic process Effects 0.000 description 3
- 238000006731 degradation reaction Methods 0.000 description 3
- 238000005457 optimization Methods 0.000 description 3
- 238000013527 convolutional neural network Methods 0.000 description 2
- 238000011156 evaluation Methods 0.000 description 2
- 238000013507 mapping Methods 0.000 description 2
- 238000000691 measurement method Methods 0.000 description 2
- 238000005070 sampling Methods 0.000 description 2
- 230000009466 transformation Effects 0.000 description 2
- 230000004913 activation Effects 0.000 description 1
- 238000013473 artificial intelligence Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000002457 bidirectional effect Effects 0.000 description 1
- 230000015572 biosynthetic process Effects 0.000 description 1
- 238000004140 cleaning Methods 0.000 description 1
- 230000019771 cognition Effects 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 238000009826 distribution Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 238000011478 gradient descent method Methods 0.000 description 1
- 230000002452 interceptive effect Effects 0.000 description 1
- 230000000670 limiting effect Effects 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 230000007246 mechanism Effects 0.000 description 1
- 238000003909 pattern recognition Methods 0.000 description 1
- 230000000750 progressive effect Effects 0.000 description 1
- 230000000306 recurrent effect Effects 0.000 description 1
- 238000010845 search algorithm Methods 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
- 238000003860 storage Methods 0.000 description 1
- 230000001360 synchronised effect Effects 0.000 description 1
- 238000003786 synthesis reaction Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/906—Clustering; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/907—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Definitions
- the present application relates to the field of computer technology, in particular to a news event search method based on a multi-level image-text semantic alignment model.
- Cross-modal retrieval aims to use data from one modality as a query to retrieve data from another modality.
- the most common case is image-text retrieval: given a piece of text, retrieve related images, or, conversely, given an image, retrieve related text.
- the main difficulty of cross-modal retrieval lies in the "heterogeneous gap".
- the heterogeneous gap refers to the inconsistency in representation between the query input and the retrieval result: the two kinds of data lie in different distribution spaces, so their similarity cannot be measured directly even though they are related at the high semantic level.
- the focus of research is how to represent the low-level features, how to model the high-level semantics, and how to find a suitable measurement method to calculate the association between modalities.
- cross-modal retrieval is achieved by projecting the features of different modalities into a common latent subspace and measuring the similarity of different modalities in it.
- the hidden topic space in cross-modal data is mined by a generative model, thereby mapping the underlying features of cross-modal data to the implicit semantic space.
- effective individual representations of each modality are extracted at the bottom layer, semantic associations between modalities are then established at the high level, and the high-level network is used to maximize the correlation between the representations of the different modalities.
- the method based on deep learning shows great advantages in the extraction, learning and representation of different modal information features such as pictures and texts, and has become a research hotspot in cross-modal retrieval in recent years.
- the main evaluation index of cross-modal retrieval is recall@K, and the recall rate is calculated based on whether the correct answer appears in the first K returned results.
- representation learning is a family of techniques for learning features, aimed at improving the expression of raw data.
- the main task of representation learning is to let the computer learn how to automatically extract suitable and useful data features and use the learned features to complete the target task.
- representation learning can be divided into two categories, supervised and unsupervised: the former learns features from labeled data, the latter from unlabeled data.
- Deep learning is a representation learning method with multi-level representation, which is used to represent more and more abstract concepts or patterns step by step, usually in the form of a multi-layer neural network.
- the deep architecture brings two main advantages: (1) it promotes the reuse of features; (2) the deep architecture can obtain the abstraction of higher-level features.
- in the CV field, the widely used approach for image input is to extract feature information with pre-trained deep neural network models (such as VGG, ResNet, etc.) for subsequent tasks; in the NLP field, feature extractors (such as RNN, Transformer, etc.) are likewise used to obtain vector representations of words and sentences for text input.
- ResNet and BERT are the most widely used pre-training models in the image and text fields respectively. Many research works first use them to obtain a baseline embedding representation, and then fine-tune them in downstream tasks to obtain the final embedding representation.
- Deep Metric Learning is a metric learning method whose goal is to learn a mapping from the original features to a low-dimensional dense vector space (called the embedding space) such that, in the embedding space, positive sample pairs are close together and negative sample pairs are far apart.
- metric learning methods use paired samples for loss calculation.
- This type of method is called pair-based deep metric learning.
- pair-based deep metric learning works as follows: during training, two samples are selected at random, the model extracts their features, and the distance between the features is computed. If the two samples belong to the same category, the distance between them should be as small as possible, even 0; if they belong to different categories, the distance should be as large as possible, even infinite.
- based on this idea, a loss function is constructed to measure the distance between sample pairs, and the resulting loss is used to update the model with various optimization methods.
- the essence of metric learning is the learning of similarity; the loss function guides the update of the neural network parameters, so optimizing metric learning is mainly a matter of loss function design.
- Softmax Loss is the most basic loss function for metric learning; it handles classification well but does not consider inter-class distance. The formula is shown in Equation 1-1: $L_{softmax} = -\frac{1}{m}\sum_{i=1}^{m}\log\frac{e^{W_{y_i}^{T}x_i+b_{y_i}}}{\sum_{j}e^{W_{j}^{T}x_i+b_j}}$
- W and b are the classification layer parameters, and m is the amount of training data.
- Center Loss additionally requires a certain distance between classes; $x_i$ denotes the feature before the fully connected layer and $c_{y_i}$ the feature center of the $y_i$-th category (Equation 1-2): $L_{center} = \frac{1}{2}\sum_{i=1}^{m}\lVert x_i - c_{y_i}\rVert_2^2$
- the triplet loss function [Hoffer E, Ailon N. Deep metric learning using triplet network[C]. International Workshop on Similarity-Based Pattern Recognition, 2015: 84-92.] is built from three parts: an anchor, a positive sample and a negative sample.
- the purpose of triplet loss is to learn so that the feature distance between same-class samples is as small as possible and the feature distance between different-class samples is as large as possible. The formula is shown in Equation 1-3: $L_{triplet} = [\,d(a,p) - d(a,n) + \alpha\,]_+$
- the common space feature learning method is mostly used for cross-modal image-text retrieval tasks, and high-quality and high-semantic cross-modal representations are obtained under the condition that the image-text model is independent and non-interactive.
- the representative method is VSE++ [Faghri F, Fleet D J, Kiros J R, et al. Vse++: Improving visual-semantic embeddings with hard negatives[J].arXiv preprint arXiv:1707.05612,2017.].
- VSE++ uses a ranking loss to make the distance between paired samples in the common space small and the distance between unmatched samples large, and at the same time uses hard negatives to improve the performance of the visual-semantic joint embedding.
- the model mainly consists of two parts. First, feature extraction is performed on images and texts through deep neural networks. Then, with the help of metric learning methods, a loss function is designed to learn an effective public representation space, that is, the joint embedding space.
- the VSE++ model uses VGG19 or ResNet152 for feature extraction for images, and GRU for feature extraction for text.
- VSE++ proposes a new loss function, the max hinge loss, which advocates paying more attention to hard samples during ranking so that the model can better learn the boundary between positive and negative samples.
- the max hinge loss is shown in Equation 1-4; it is the sum of two symmetric parts, the distance constraints from negative-sample images to the reference text and from negative-sample texts to the reference image: $L_{Rank} = \max_{c'}\,[\alpha + s(i,c') - s(i,c)]_+ + \max_{i'}\,[\alpha + s(c,i') - s(i,c)]_+$
- $[x]_+ = \max(0, x)$
- s(i,c) is the cosine similarity function that measures the similarity between the image and text modalities
- α is a set hyperparameter, namely the margin.
- the commonly used feature extraction method in NLP tasks is to combine word2vec and RNN.
- the BERT model based on Transformer is pre-trained on a very large corpus, and has shown a stronger ability in the task of text feature extraction.
- Equation 1-4 only considers the relations between modalities and ignores the relations within a modality; as a result there are too many parameters to tune, and the ranking loss struggles to optimize the image and text representations at the same time.
- Both words and sentences in a text object are effective descriptions of images, where words are low-level detailed descriptions and sentences correspond to high-level overviews of images.
- Existing cross-modal image-text retrieval models focus more on the alignment at the sentence level, which may lead to deviations in the prediction of image details.
- the main idea of another type of cross-modal image-text retrieval task is to fuse image-text features and calculate cross-modal similarity.
- a typical method is Stacked Cross Attention for Image-Text Matching (SCAN) [Lee K H, Xi C, Gang H, et al. Stacked Cross Attention for Image-Text Matching[J]. 2018.]; SCAN uses an attention mechanism to let the local information of image and text interact to obtain better feature representations, and at the same time builds a similarity function learned under the commonly used ranking loss.
- Figure 2 is the version of Image-Text, which uses images and text for attention calculation.
- s ij represents the similarity between the i-th image region and the j-th word.
- although combining image and text features can provide more cross-feature information for the hidden layers of the model, the top-level embedding vectors can no longer represent the image and text inputs independently.
- compared with common space feature learning, the search process of cross-modal similarity measurement methods is time-consuming: when a user enters a text query q, the system must compute the feature combination of q with every image online to obtain the similarity score between q and each image, so computation becomes a huge bottleneck that prevents practical application.
- the purpose of the present invention is to construct a multi-modal news image-text dataset, filling the vacancy of this type of dataset; to propose a multi-level visual-text semantic alignment model MSAVT (Multi-level Semantic Alignments for Visual and Text) for image-text matching; and to design and implement a cross-modal image-text search system for news events to meet current news retrieval needs.
- a news event search method based on a multi-level image-text semantic alignment model, comprising the following steps:
- Step 1) building a multi-modal news image-text dataset
- Step 1.1) news event selection
- Step 1.2) news data acquisition: use the event names obtained in step 1.1) as search terms, obtain the matching news report data returned by the search, and extract the accompanying picture and title text of each news report as one sample of the news event;
- Step 1.3) data labeling: the obtained data is preprocessed by algorithm, completing the algorithmic pre-screening of the dataset;
- Step 2) establishing a multi-level visual-text semantic alignment model MSAVT for image-text matching
- Step 2.1) extract image features and text features with deep neural network models;
- Step 2.2) map the extracted text features and image features into the joint embedding space of image semantics and text semantics;
- Step 2.3) propose a clustering loss that simultaneously establishes intra-modal constraints and inter-modal constraints;
- Step 2.4) for the image features, add a word detection loss to focus on word-level alignment;
- Step 2.5) use the clustering loss and the word detection loss as supplements to the ranking loss to obtain the final overall loss function;
- Step 3) realize cross-modal image-text search of news events
- in step 1.3) data labeling, the specific steps of the algorithmic pre-screening include:
- Step 1.3.1 use the pre-trained RoBERTa model to extract text features and the pre-trained ResNet50 model to extract image features;
- Step 1.3.2 Each event is regarded as a class, and the text center and picture center of the class are calculated by taking the average value of the text and picture features;
- Step 1.3.3) the 20% of samples whose image features or text features are closest to the corresponding center are regarded as high-confidence reliable data, and their union is kept;
- Step 1.3.4) The rest of the data is judged by manual annotation.
- the clustering loss in step 2.3) is:
- $r_{ik}$ is the vector representation of object i in cluster k
- $\mu_k$ is the center of the k-th cluster, defined as shown in formula 2-3:
- the variance σ is defined as shown in formula 2-4:
- the distance between clusters can be computed by formula 2-5:
- the word detection loss is used to evaluate whether, in a news image-text pair, the image contains the high-frequency words that appear in its title text.
- the attribute dictionary is composed of 1000 high-frequency words in the text data in the multimodal dataset.
- Step 2.4.I) multiply the image descriptor υ by the weight matrix W to obtain the probability score s of each word in the top-1k word set, defined as shown in formula 2-7:
- Step 2.4.II) compute in advance which attributes (i.e. high-frequency words) of the attribute dictionary are contained in the title text of each news image-text pair, use them as the labels of a classification problem, and compute the word detection loss $L_{word}$ with 1000 binary classifiers, as shown in formula 2-8:
- Step 2.4.1 Use the pre-trained ResNet-152 model and fix its weights as an image feature encoder
- Step 2.4.2 According to the overall loss function, use the BP algorithm to update the parameters of the model except ResNet-152;
- Step 2.4.3) the above training runs for 40 epochs with an initial learning rate of 0.001, divided by 10 every 20 epochs;
- Step 2.4.4) the weights of ResNet-152 are no longer fixed, and the entire architecture is fine-tuned end-to-end for 50 epochs of training.
- the initial learning rate is 0.00001, divided by 10 every 20 epochs.
- throughout training, the weight $\lambda_1$ is fixed to 1 and $\lambda_2$ is fixed to 0.1.
- in image-to-text search, the user uploads a news picture as the query; the system feeds it into the trained MSAVT model for forward propagation, computes its Euclidean coordinates in the joint embedding space, and returns the N nearest news headline texts, thereby realizing search for text by image;
- in text-to-image search, the user uploads a news headline text as the query; the system feeds it into the trained MSAVT model for forward propagation, computes its Euclidean coordinates in the joint embedding space, and returns the N nearest news pictures, thereby realizing search for images by text.
- the present invention also provides a news event search system based on the above news event search method: with the multi-level visual-text semantic alignment model MSAVT for image-text matching as the algorithmic core, front-end and back-end programming technologies are used to design and implement a cross-modal image-text search system for news events that exploits the relationship between the two different modal data in news reports, title text and accompanying pictures, to produce retrieval results.
- News has important social significance, and its expression is mostly a multi-modal form combining graphics and text.
- traditional single-modal retrieval has a single form, cannot effectively use the correlation between different modal information, and cannot meet the current needs of netizens for obtaining news.
- cross-modal image-text retrieval can exploit the low feature heterogeneity and high semantic correlation between the headline text and the picture of a news report, return retrieval results of different modalities, and enrich people's cognition of the same news event.
- the present invention constructs a multi-modal news image-text dataset, filling the vacancy of this type of dataset; the present invention proposes the multi-level visual-text semantic alignment model MSAVT (Multi-level Semantic Alignments for Visual and Text); and a cross-modal image-text search system for news events is designed and implemented to meet current news retrieval needs.
- the cross-modal retrieval model has higher image-text alignment accuracy and, when applied to cross-modal image-text retrieval of news events, shows significant improvement in recall at multiple levels and in mean average precision.
- the pre-trained BERT model is introduced to extract text features, which improves the generalization performance of the algorithm.
- the model adopts the common space feature learning method and can obtain the vector representations of images and texts independently, which means the vector representations of retrieval results can be stored in advance; retrieval is therefore fast and applicable in real scenarios.
- Fig. 1 is a schematic diagram of the common space feature learning method used in prior-art cross-modal image-text retrieval
- Fig. 2 is a schematic diagram of attention computation between images and texts in prior-art cross-modal image-text retrieval
- Fig. 3 is a schematic diagram of the workflow of the algorithmic pre-screening of the dataset
- Fig. 4 is a schematic diagram of extracting text features with RoBERTa
- Fig. 5 is a schematic diagram of a residual learning unit
- Fig. 6 is a schematic diagram of the structure of the ResNet model
- Fig. 7 is a schematic diagram of the two residual modules in ResNet
- Fig. 8 is a schematic diagram of image features extracted by ResNet-50
- Fig. 9 is a schematic structural diagram of the multi-level visual-text semantic alignment model MSAVT proposed by the present invention
- Fig. 10 is a schematic diagram of the word detection module
- Fig. 11 is a schematic diagram of the system application using the method of the present invention
- the news event search method based on the multi-level image-text semantic alignment model proposed by the present invention comprises the following steps:
- Step 1) building a multi-modal news image-text dataset
- step 1.2) use the event names obtained in step 1.1) as search terms, obtain the matching news report data returned by a Google News search through a crawler, and extract the picture and title text pair of each news report as one sample of the event.
- the obtained data is preprocessed by algorithm, using the difference between each sample and the cluster center as compactness information, thus completing the algorithmic pre-screening of the dataset.
- the specific steps of the algorithmic pre-screening include:
- Step 1.3.1 use the pre-trained RoBERTa model to extract text features and the pre-trained ResNet50 model to extract image features;
- RoBERTa is an improved version of BERT. By improving training tasks and data generation methods, training longer, using larger batches, and using more data, it refreshed the records of multiple NLP tasks when it was released.
- the headline text of the news is directly input into the pre-trained Chinese RoBERTa model. Since the BERT model splits Chinese text character by character, the model outputs a vector for each character; summing all character vectors of a sentence and taking the average yields the text feature of the sentence.
- the idea behind the residual network is to build a natural identity mapping into the neural network. Assuming the input and output dimensions of the nonlinear unit are the same, a residual learning unit can be expressed as $x^{(l)} = f\left(x^{(l-1)} + F(x^{(l-1)})\right)$
- $F$ is the residual function to be fitted by the neural network unit, f is the ReLU activation function, and $x^{(l-1)}$ and $x^{(l)}$ denote the input and output of the l-th residual unit, respectively.
- the residual learning unit is generally implemented in the form of a short-circuit connection (shortcut connection).
- ResNet solves the degradation problem of deep CNN network through residual learning, and has become the basic feature extraction network in the field of computer vision.
- ResNet-50 and ResNet-152 involved in the patent of the present invention refer to ResNet networks of different depths.
- the "50" in "ResNet-50” means that the model contains 50 weighted convolutional layers.
- Figure 6 describes the specific structure of multiple versions of ResNet.
- the left side of Figure 7 is the basic residual block, corresponding to the convolutional subnetwork in Figure 3
- the right side of Figure 7 is the bottleneck residual block, corresponding to the convolutional subnetwork in Figure 3
- in ResNet-18 and ResNet-34, the basic residual block on the left is used.
- the bottleneck residual block on the right is used in ResNet-50, ResNet-101, and ResNet-152.
- ResNet is a deep convolutional neural network stacked with these residual modules.
- when the input and output dimensions are the same, an identity mapping can be used, that is, the input is added directly to the output.
- when the dimensions differ, they cannot be added directly; a 1x1 convolution is generally used to raise the dimension of the input so that it matches the dimension of the residual.
- Step 1.3.2 Each event is regarded as a class, and the text center and picture center of the class are calculated by taking the average value of the text and picture features;
- Step 1.3.3) the 20% of samples whose image features or text features are closest to the corresponding center are regarded as high-confidence reliable data, and their union is kept;
- Step 1.3.4) the rest of the data is judged by manual supplementary labeling.
- Step 2) establish a multi-level visual-text semantic alignment model MSAVT (Multi-Level Semantic Alignments for Visual and Text) for image-text matching;
- the present invention proposes the multi-level visual-text semantic alignment model MSAVT for cross-modal image-text retrieval of news events.
- Fig. 9 is a schematic structural diagram of the MSAVT model proposed by the present invention; with reference to Fig. 9, the establishment and application of MSAVT are as follows:
- Step 2.1) extract image features and text features with deep neural network models
- the original image I of size 224×224 is input and, after data augmentation such as random cropping and horizontal flipping, fed into the ResNet-152 model, which outputs a 2048-dimensional vector serving as the visual descriptor of the image input.
- the visual descriptor ⁇ (I) is used in step 2.4) to compute the word detection loss.
- for text features, the news title text corresponding to the image is input into the BERT-base model, which tokenizes the text automatically. Since the BERT model splits Chinese text character by character, the model outputs a vector for each character; summing all character vectors of a sentence and averaging yields the text feature of the sentence, a 768-dimensional vector.
- Step 2.2) map the extracted text features and image features into the joint embedding space of image semantics and text semantics;
- the image feature vector is fed into an embedding module (a two-layer feed-forward neural network) that maps it into a 1024-dimensional embedding space.
- the 768-dimensional sentence vector output by the BERT model is fed into a gated recurrent neural network (GRU) that maps it into the 1024-dimensional embedding space. Image features and text features are thus mapped into a joint embedding space, and the similarity of vectors can be measured with indicators such as cosine similarity.
- Step 2.3) propose a clustering loss that simultaneously establishes intra-modal constraints and inter-modal constraints;
- $r_{ik}$ is the vector representation of object i in cluster k
- $\mu_k$ is the center of the k-th cluster, defined as shown in formula 2-3:
- the variance σ is defined as shown in formula 2-4:
- clustering loss makes the samples within the clusters closer, and in the learned joint embedding space, the distance between the same news event will be smaller, while the distance between different news events will be larger.
- the clustering loss builds constraints from the clustering view: it optimizes all samples of the selected cluster rather than one image-text pair per iteration, so it converges faster and performs better than the ranking loss.
- the common space feature learning method is mostly used for cross-modal image-text retrieval tasks.
- the model framework mainly includes two parts: first, image and text features are extracted with deep neural networks; then, with metric learning, a loss function is designed to learn an effective common representation space. Although such methods have achieved remarkable results, the problem of insufficient image-text alignment accuracy remains. Compared with the traditional ranking loss function, the clustering loss helps bring the related samples of a news event closer, and the word detection module helps focus on the fine-grained alignment of images and texts at the word level.
- Step 2.4) for the image features, add the word detection module loss to focus on word-level alignment;
- the word detection loss is used to evaluate whether, in a news image-text pair, the image contains the high-frequency words that appear in its headline text.
- the attribute dictionary consists of 1000 high-frequency words from the text data of the multimodal dataset. Specifically, when training on the multimodal dataset, given an image and its corresponding caption, the words of the top-1k word set are checked; for each attribute word, a simple binary classifier determines whether the image contains it.
- by adding the word detection module, 1000 labels are attached to each image. Compared with the original single task using only the ranking loss, 1000 strict constraint tasks are added, which effectively prevents the model from falling into a local optimum and better guides the direction of parameter convergence of ResNet-152.
- the specific method of attribute dictionary setting is as follows.
- Step 2.4.I) multiply the image descriptor υ by the weight matrix W to obtain the probability score s of each word in the top-1k word set, defined as shown in formula 2-7:
- Step 2.4.II) compute in advance which attributes (i.e. high-frequency words) of the attribute dictionary are contained in the title text of each news image-text pair, use them as the labels of a classification problem, and compute the word detection loss $L_{word}$ with 1000 binary classifiers, as shown in formula 2-8:
- Step 2.4.1 Use the pre-trained ResNet-152 model and fix its weights as an image feature encoder
- Step 2.4.2 According to the overall loss function, use the BP algorithm to update the parameters of the model except ResNet-152;
- Step 2.4.3) the above training runs for 40 epochs with an initial learning rate of 0.001, divided by 10 every 20 epochs;
- Step 2.4.4) the weights of ResNet-152 are no longer fixed, and the entire architecture is fine-tuned end-to-end for 50 epochs of training.
- the initial learning rate is 0.00001, divided by 10 every 20 epochs.
- throughout training, the weight $\lambda_1$ is fixed to 1 and $\lambda_2$ is fixed to 0.1.
- the clustering loss and word detection loss proposed in step 2.5) serve as supplements to the ranking loss of prior art 1, aiming to improve network convergence efficiency and image-text matching accuracy.
- the final overall loss function is shown in formula 2-9: $L = L_{Ranking} + \lambda_1 L_{cluster} + \lambda_2 L_{word}$
- Step 3 realize the cross-modal graphic and text search system of news events
- a news event cross-modal image-text search system is designed and implemented with Vue, SpringBoot and other front-end and back-end programming technologies; it effectively exploits the relationship between the two different modal data in news reports, headline text and pictures, and achieves richer retrieval results than single-modal retrieval systems.
- the image-text mutual search function is the core function of the system designed and realized by the present invention and the main value it provides to users; it comprises two main sub-modules, search for text by image and search for image by text.
- in search for text by image, the user uploads a news picture as the query; the system feeds it into the trained MSAVT model for forward propagation, computes its Euclidean coordinates in the joint embedding space, and returns the N nearest news headline texts, realizing the search-text-by-image function.
- the implementation of search for image by text is similar, except that the input and returned modalities are reversed.
- the pre-trained BERT model is used to improve the ability to extract text features
- each embodiment in this specification is described in a progressive manner, the same and similar parts of each embodiment can be referred to each other, and each embodiment focuses on the differences from other embodiments.
- the description is relatively simple, and for relevant parts, please refer to part of the description of the method embodiments.
- the device and system embodiments described above are only illustrative; the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, i.e. they can be located in one place or distributed over multiple network elements. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment, which those skilled in the art can understand and implement without creative effort.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Computational Linguistics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Library & Information Science (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Image Analysis (AREA)
Abstract
The present invention proposes a multi-level visual-text semantic alignment model MSAVT for image-text matching and provides a news event retrieval method based on it, realizing cross-modal image-text search of news events to meet current news retrieval needs. The cross-modal retrieval model provided by the present invention has higher image-text alignment accuracy and, when applied to cross-modal image-text retrieval of news events, shows significant improvement in indicators such as recall at multiple levels and mean average precision. At the same time, a pre-trained BERT model is introduced to extract text features, improving the generalization performance of the algorithm. The model adopts the common space feature learning method and can obtain vector representations of images and texts independently, which means the vector representations of retrieval results can be stored in advance; retrieval is therefore fast and applicable in real scenarios.
Description
This application relates to the field of computer technology, and in particular to a news event search method based on a multi-level image-text semantic alignment model.
Cross-modal retrieval
A modality is the form in which data exists, such as text, pictures or video. Cross-modal retrieval aims to use data of one modality as a query to retrieve data of another modality. The most common case is image-text retrieval: given a piece of text, retrieve related images, or conversely, given an image, retrieve related text. The main difficulty of cross-modal retrieval is the "heterogeneous gap": because the query input and the retrieval result are represented differently, the two kinds of data lie in different distribution spaces, and their similarity cannot be measured directly even though they are semantically related at a high level. Research therefore focuses on how to represent low-level features, how to model high-level semantics, and how to find a suitable measure of the association between modalities. There are currently four main lines of research.
1) Subspace methods
Projection matrices are learned from the pairwise co-occurrence information of sample pairs of different modalities; cross-modal retrieval is achieved by projecting the features of different modalities into a common latent subspace and measuring their similarity there.
2) Topic model methods
A generative model mines the latent topic space in cross-modal data, mapping the low-level features of cross-modal data into an implicit semantic space.
3) Hashing methods
Using the sample pair information of different modalities, hash transformations of the different modalities are learned that map the features of different modalities into a binary Hamming space, where fast cross-modal retrieval is then performed.
4) Deep learning methods
Using the feature extraction capability of deep neural networks, effective individual representations of each modality are extracted at the bottom layer, semantic associations between modalities are established at the high level, and the high-level network maximizes the correlation of the representations of the different modalities. Compared with traditional cross-modal retrieval methods, deep learning methods show great advantages in extracting, learning and representing the information features of different modalities such as pictures and text, and have become the research hotspot of cross-modal retrieval in recent years.
The main evaluation index of cross-modal retrieval is recall@K, the recall rate computed according to whether the correct answer appears in the top K returned results.
Representation learning
The performance of machine learning methods depends to a large extent on the choice of data representation (features). In machine learning, representation learning is a family of techniques for learning features, aimed at improving the expression of raw data. Its main task is to let the computer learn how to automatically extract suitable and useful data features and use the learned features to complete the target task. Representation learning can be divided into two categories, supervised and unsupervised: the former learns features from labeled data, the latter from unlabeled data.
With the growth of computer hardware computing power and the continuous development of neural network architectures, representation learning with deep architectures is widely applied to tasks in CV and NLP. Deep learning is representation learning with multiple levels of representation, used to express increasingly abstract concepts or patterns level by level, usually in the form of multi-layer neural networks. Deep architectures bring two main advantages: (1) they promote the reuse of features; (2) they yield abstractions of higher-level features. In the CV field, for image input, the widely used approach is to extract feature information with pre-trained deep neural network models (such as VGG and ResNet) for subsequent tasks; in the NLP field, for text input, feature extractors (such as RNN and Transformer) are likewise used to obtain vector representations of words and sentences. ResNet and BERT are currently the most widely used pre-trained models in the image and text fields respectively; much research first uses them to obtain a baseline embedding representation and then fine-tunes it in the downstream task to obtain the final embedding.
Metric learning
Metric learning [Bellet A, Habrard A, Sebban M. Metric learning[J]. Synthesis Lectures on Artificial Intelligence and Machine Learning, 2015, 9(1): 1-151.] [Kulis B. Metric learning: A survey[J]. Foundations and Trends in Machine Learning, 2012, 5(4): 287-364.] is the family of tasks that learn distance functions for different objectives, studying how to learn, for a specific task, a distance function that helps nearest-neighbor-based algorithms achieve good performance. Deep metric learning is a metric learning method whose goal is to learn a mapping from the original features to a low-dimensional dense vector space (called the embedding space) such that positive samples are close together in the embedding space and negative samples are far apart.
In deep learning, many metric learning methods compute the loss over pairs of samples; these methods are called pair-based deep metric learning. For example, during training two samples are selected at random, the model extracts their features, and the distance between the features is computed. If the two samples belong to the same category, their distance should be as small as possible, even 0; if they belong to different categories, their distance should be as large as possible, even infinite. A loss function is constructed on this idea to measure the distance between sample pairs, and the resulting loss is used to update the model with various optimization methods. The essence of metric learning is the learning of similarity; the loss function guides the update of the neural network parameters, so optimizing metric learning is mainly a matter of loss function design.
Commonly used loss functions in deep metric learning include:
1) Softmax Loss
Softmax Loss is the most basic loss function of metric learning; it handles classification well but does not consider inter-class distance. The formula is shown in Equation 1-1:
$L_{softmax} = -\frac{1}{m}\sum_{i=1}^{m}\log\frac{e^{W_{y_i}^{T}x_i+b_{y_i}}}{\sum_{j}e^{W_{j}^{T}x_i+b_j}} \quad (1\text{-}1)$
where W and b are the classification layer parameters and m is the amount of training data.
2) Center Loss
Center Loss considers not only classification correctness but also requires a certain distance between classes. With $x_i$ the feature before the fully connected layer and $c_{y_i}$ the feature center of the $y_i$-th category, the formula is shown in Equation 1-2:
$L_{center} = \frac{1}{2}\sum_{i=1}^{m}\lVert x_i - c_{y_i}\rVert_2^2 \quad (1\text{-}2)$
3) Triplet Loss
The triplet loss function [Hoffer E, Ailon N. Deep metric learning using triplet network[C]. International Workshop on Similarity-Based Pattern Recognition, 2015: 84-92.] is built from three parts: an anchor, a positive sample and a negative sample. The purpose of triplet loss is to learn so that the feature distance between same-class samples is as small as possible and the feature distance between different-class samples is as large as possible. The formula is shown in Equation 1-3:
$L_{triplet} = [\,d(a,p) - d(a,n) + \alpha\,]_+ \quad (1\text{-}3)$
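As an illustration, the following is a minimal PyTorch-style sketch of the triplet loss of Equation 1-3; the margin value 0.2 is an assumed example rather than a value taken from the patent.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss (Eq. 1-3): pull same-class features together and push
    different-class features at least `margin` further away."""
    d_pos = F.pairwise_distance(anchor, positive)  # anchor-positive distance
    d_neg = F.pairwise_distance(anchor, negative)  # anchor-negative distance
    return torch.clamp(d_pos - d_neg + margin, min=0).mean()
```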
Prior art 1 related to the present invention
Technical solution of prior art 1
Current cross-modal image-text retrieval tasks mostly adopt the common space feature learning method, obtaining high-quality, high-semantic cross-modal representations while the image and text models remain independent and non-interactive; the representative method is VSE++ [Faghri F, Fleet D J, Kiros J R, et al. Vse++: Improving visual-semantic embeddings with hard negatives[J]. arXiv preprint arXiv:1707.05612, 2017.]. VSE++ uses a ranking loss to make the distance between paired samples in the common space small and the distance between unmatched samples large, and at the same time uses hard negatives to improve the performance of the visual-semantic joint embedding.
As shown in Fig. 1, the model includes two main parts: first, features are extracted from images and texts separately with deep neural networks; then, with metric learning, a loss function is designed to learn an effective common representation space, the joint embedding space. For feature extraction, the VSE++ model uses VGG19 or ResNet152 for images and a GRU for text.
Hard negatives are negative samples that lie close to a positive sample. For similarity measurement, VSE++ proposes a new loss function, the max hinge loss, which advocates paying more attention to hard samples during ranking so that the model can better learn the boundary between positive and negative samples. Let (i, c) be a correct image-text pair, and let $i' = \arg\max_{j \ne i} s(j,c)$ and $c' = \arg\max_{d \ne c} s(i,d)$ be the hardest image and text relative to this positive pair. The max hinge loss is shown in Equation 1-4; it is the sum of two symmetric parts, the distance constraints from negative images to the reference text and from negative texts to the reference image:
$L_{Rank} = \max_{c'}\,[\alpha + s(i,c') - s(i,c)]_+ + \max_{i'}\,[\alpha + s(c,i') - s(i,c)]_+ \quad (1\text{-}4)$
where $[x]_+ = \max(0,x)$, s(i,c) is the cosine similarity function measuring the similarity between the image and text modalities, and α is a set hyperparameter, namely the margin.
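For illustration, a minimal PyTorch-style sketch of the max hinge loss of Equation 1-4 computed over a batch similarity matrix, mirroring the common VSE++-style implementation; the batch-matrix formulation and the margin value are assumptions.

```python
import torch

def max_hinge_loss(sim, margin=0.2):
    """Max hinge loss (Eq. 1-4). sim[i, j] is the cosine similarity between
    image i and text j; diagonal entries are the matched (positive) pairs."""
    pos = sim.diag().view(-1, 1)                       # s(i, c)
    cost_txt = (margin + sim - pos).clamp(min=0)       # negative texts per image
    cost_img = (margin + sim - pos.t()).clamp(min=0)   # negative images per text
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    cost_txt = cost_txt.masked_fill(mask, 0)           # drop the positive pairs
    cost_img = cost_img.masked_fill(mask, 0)
    # hard negatives: keep only the largest violation per row / column
    return cost_txt.max(dim=1)[0].sum() + cost_img.max(dim=0)[0].sum()
```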
Disadvantages of prior art 1
1) Text feature extraction can still be optimized
The usual feature extraction approach in NLP tasks combines word2vec and RNN. The Transformer-based BERT model, pre-trained on very large corpora, has shown stronger capability on the task of text feature extraction.
2) Loss function design can still be optimized
Equation 1-4 only considers the relations between modalities and ignores the relations within a modality. This leads to too many parameters to tune, and the ranking loss struggles to optimize the image and text representations at the same time.
3) Insufficient image-text alignment
Both the words and the sentences of a text object are effective descriptions of an image: words are low-level detailed descriptions, while sentences correspond to a high-level overview of the image. Existing cross-modal image-text retrieval models mostly attend only to sentence-level alignment, which may bias the prediction of image details.
Prior art 2 related to the present invention
Technical solution of prior art 2
The main idea of the other class of cross-modal image-text retrieval methods is to fuse image and text features and compute a cross-modal similarity.
A typical method is Stacked Cross Attention for Image-Text Matching (SCAN) [Lee K H, Xi C, Gang H, et al. Stacked Cross Attention for Image-Text Matching[J]. 2018.]; SCAN uses an attention mechanism to let the local information of image and text interact to obtain better feature representations, and builds a similarity function learned under the commonly used ranking loss.
Fig. 2 shows the Image-Text version, which computes attention between images and text.
(1) A bottom-up attention model detects and encodes image regions, obtaining image features $V = \{v_1, v_2, \ldots, v_k\}$, where each feature encodes one region of the image.
(2) A bidirectional GRU produces the text features; a sentence of length n yields a set of word vectors $E = \{e_1, e_2, \ldots, e_n\}$.
(3) The similarity between all image-word pairs is computed; $s_{ij}$ denotes the similarity between the i-th image region and the j-th word.
(4) The similarity scores are normalized.
(5) Attention is computed between every image region and the words of the sentence.
(6) The similarity between each image region and the sentence vector is computed.
(7) The similarities of the i image regions to the sentence are aggregated into the overall similarity of image I and text T.
Disadvantages of prior art 2
Although combining image and text features can provide more cross-feature information for the hidden layers of the model, the top-level embedding vectors cannot represent the image and text inputs independently. Compared with common space feature learning, the search process of cross-modal similarity measurement methods is time-consuming: when a user enters a text query q, the system must compute the feature combination of q with every image online to obtain the similarity score of q with each image; computation becomes a huge bottleneck that prevents practical application.
Summary of the invention
The purpose of the present invention is to construct a multi-modal news image-text dataset, filling the vacancy of this type of dataset, and to propose a multi-level visual-text semantic alignment model MSAVT (Multi-level Semantic Alignments for Visual and Text) for image-text matching; a cross-modal image-text search system for news events is designed and implemented to meet current news retrieval needs.
To achieve this purpose, the technical solution provided by the present invention is a news event search method based on a multi-level image-text semantic alignment model, comprising the following steps:
Step 1) construct a multi-modal news image-text dataset;
Step 1.1) news event selection;
after sorting and summarizing news events, the event names are obtained;
Step 1.2) news data acquisition
use the event names obtained in step 1.1) as search terms, search for the matching news report data, and extract the accompanying picture and title text pair of each news report as one sample of the news event;
Step 1.3) data labeling;
the obtained data is preprocessed by algorithm, completing the algorithmic pre-screening of the dataset;
Step 2) establish a multi-level visual-text semantic alignment model MSAVT for image-text matching;
Step 2.1) extract image features and text features with deep neural network models;
Step 2.2) map the extracted text features and image features into a joint embedding space of image semantics and text semantics;
Step 2.3) propose a clustering loss that simultaneously establishes intra-modal constraints and inter-modal constraints;
Step 2.4) for the image features, add a word detection loss to focus on word-level alignment;
Step 2.5) use the clustering loss and the word detection loss as supplements to the ranking loss to obtain the final overall loss function;
Step 3) realize cross-modal image-text search of news events;
cross-modal image-text search of news events is realized by searching text with an image or searching images with text.
The preferred technical solution provided by the present invention is:
in step 1.3) data labeling, the specific steps of the algorithmic pre-screening include:
Step 1.3.1) use the pre-trained RoBERTa model to extract text features and the pre-trained ResNet50 model to extract picture features;
Step 1.3.2) treat each event as one class, and compute the text center and picture center of the class by averaging the text and picture features;
Step 1.3.3) regard the 20% of samples whose image features or text features are closest to the corresponding center as high-confidence reliable data, and keep their union;
Step 1.3.4) judge the remaining data by manual supplementary labeling.
Another preferred technical solution provided by the present invention is:
the clustering loss in step 2.3) is:
assume the dataset has K clusters and each cluster contains N sample pairs; given object i in cluster k, the intra-cluster distance is computed by formula (2-2),
where $r_{ik}$ is the vector representation of object i in cluster k and $\mu_k$ is the center of the k-th cluster, defined as shown in formula 2-3:
the variance σ is defined as shown in formula 2-4:
the distance between clusters can be computed by formula 2-5:
by minimizing the intra-cluster distance and maximizing the inter-cluster distance, the clustering loss is obtained as defined in formula 2-6:
In step 2.4), the word detection loss is used to evaluate whether, in a news image-text pair, the image contains the high-frequency words that appear in its title text. An attribute dictionary is set up according to the dataset used; it consists of 1000 high-frequency words from the text data of the multimodal dataset. The word detection loss is computed as follows:
Step 2.4.I) multiply the image descriptor υ by the weight matrix W to obtain the probability score s of each word of the top-1k word set, defined as shown in formula 2-7:
$s = W\upsilon \quad (2\text{-}7)$
Step 2.4.II) compute in advance which attributes (i.e. high-frequency words) of the attribute dictionary are contained in the title text of each news image-text pair, use them as the labels of a classification problem, and compute the word detection loss $L_{word}$ with 1000 binary classifiers, as shown in formula 2-8:
where $s_i$ is the probability score of the i-th word and $t_i \in \{0,1\}$ indicates whether the i-th word appears in the title text.
The overall training steps on the dataset are as follows:
Step 2.4.1) use the pre-trained ResNet-152 model with fixed weights as the image feature encoder;
Step 2.4.2) according to the overall loss function, update the parameters of the model other than ResNet-152 with the BP algorithm;
Step 2.4.3) run the above training for 40 epochs with an initial learning rate of 0.001, divided by 10 every 20 epochs;
Step 2.4.4) stop fixing the weights of ResNet-152 and fine-tune the entire architecture end-to-end for 50 epochs of training, with an initial learning rate of 0.00001, divided by 10 every 20 epochs. Throughout training, the weight $\lambda_1$ is fixed to 1 and $\lambda_2$ is fixed to 0.1.
Another preferred technical solution provided by the present invention is:
in step 3),
in image-to-text search, the user uploads a news picture as the query; the system feeds it into the trained MSAVT model for forward propagation, computes its Euclidean coordinates in the joint embedding space, and returns the N nearest news headline texts, thereby realizing search for text by image;
in text-to-image search, the user uploads a news headline text as the query; the system feeds it into the trained MSAVT model for forward propagation, computes its Euclidean coordinates in the joint embedding space, and returns the N nearest news pictures, thereby realizing search for images by text.
The present invention also provides a news event search based on the above news event search method: with the multi-level visual-text semantic alignment model MSAVT for image-text matching as the algorithmic core, front-end and back-end programming technologies are used to design and implement a cross-modal image-text search system for news events that exploits the relationship between the two different modal data in news reports, title text and accompanying pictures, to produce retrieval results.
The beneficial effects of the present invention are:
News has important social significance, and it is mostly expressed in a multi-modal form combining pictures and text. Traditional single-modal retrieval has a single form, cannot effectively use the correlation between different modal information, and cannot meet the current needs of netizens for obtaining news. Cross-modal image-text retrieval can exploit the low feature heterogeneity and high semantic correlation between the headline text and the picture of a news report, return retrieval results of different modalities, and enrich people's cognition of the same news event. Therefore, the present invention constructs a multi-modal news image-text dataset, filling the vacancy of this type of dataset; the present invention proposes the multi-level visual-text semantic alignment model MSAVT (Multi-level Semantic Alignments for Visual and Text) for image-text matching; and a cross-modal image-text search system for news events is designed and implemented to meet current news retrieval needs.
In the MSAVT model, a clustering loss that imposes both intra-modal and inter-modal constraints is added, and a word detection module is introduced to attend to word-level image-text alignment, improving the traditional ranking loss function. Compared with existing technical solutions, this cross-modal retrieval model has higher image-text alignment accuracy and, when applied to cross-modal image-text retrieval of news events, shows significant improvement in indicators such as recall at multiple levels and mean average precision. At the same time, a pre-trained BERT model is introduced to extract text features, improving the generalization performance of the algorithm. The model adopts the common space feature learning method and can obtain the vector representations of images and texts independently, which means the vector representations of retrieval results can be stored in advance; retrieval is therefore fast and applicable in real scenarios.
Additional aspects and advantages of the present invention will be given in part in the following description; they will become apparent from the description or be learned through practice of the invention.
To explain the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of the common space feature learning method used in prior-art cross-modal image-text retrieval;
Fig. 2 is a schematic diagram of attention computation between images and texts in prior-art cross-modal image-text retrieval;
Fig. 3 is a schematic diagram of the workflow of the algorithmic pre-screening of the dataset;
Fig. 4 is a schematic diagram of extracting text features with RoBERTa;
Fig. 5 is a schematic diagram of a residual learning unit;
Fig. 6 is a schematic diagram of the structure of the ResNet model;
Fig. 7 is a schematic diagram of the two residual modules in ResNet;
Fig. 8 is a schematic diagram of extracting picture features with ResNet-50;
Fig. 9 is a schematic structural diagram of the multi-level visual-text semantic alignment model MSAVT proposed by the present invention;
Fig. 10 is a schematic diagram of the word detection module;
Fig. 11 is a schematic diagram of the application of a system using the method of the present invention.
Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the drawings, where identical or similar reference numbers throughout denote identical or similar elements or elements with identical or similar functions. The embodiments described below with reference to the drawings are exemplary and only serve to explain the present invention; they are not to be construed as limiting it.
The technical solutions in the embodiments of this application are described clearly and completely below with reference to the drawings.
The news event search method based on the multi-level image-text semantic alignment model proposed by the present invention comprises the following steps:
Step 1) construct a multi-modal news image-text dataset
Unlike traditional approaches, training a neural network requires the support of a large number of samples, and an available, high-quality multi-modal news image-text dataset is the first step of research on cross-modal news event search algorithms. There is currently no open-source multi-modal news event image-text dataset, so the dataset has to be built from scratch.
The specific steps for constructing the multi-modal news image-text dataset are as follows:
Step 1.1) news event selection
Considering the particularity of news as an information genre, mainstream domestic and foreign news websites of general, technology and finance orientation, such as BBC, China Daily, Global Times, TNW, VOA NEWS, People's Daily, Engadget, The New York Times and The Wall Street Journal, were selected, and more than 600 news headline texts were crawled, covering the main news categories of politics, technology, sports, entertainment, environment, economy and art. After manual sorting and summarization of the news events, 250 event names were obtained.
Step 1.2) news data acquisition
Use the event names obtained in step 1.1) as search terms, obtain the matching news report data returned by a Google News search through a crawler, and extract the picture and title text pair of each news report as one sample of the event.
Step 1.3) data labeling
To reduce the workload of manual data cleaning and improve efficiency, the obtained data is preprocessed by algorithm, using the difference between each sample and the cluster center as compactness information, thus completing the algorithmic pre-screening of the dataset.
As shown in Fig. 3, in step 1.3) data labeling, the specific steps of the algorithmic pre-screening include:
Step 1.3.1) use the pre-trained RoBERTa model to extract text features and the pre-trained ResNet50 model to extract picture features;
RoBERTa is an improved version of BERT; by improving the training tasks and the data generation method, training longer, using larger batches and using more data, it set new records on several NLP tasks at its release. Its improvements to the training procedure are mainly two: first, it removes BERT's Next Sentence Prediction training objective, which may harm model performance; second, it replaces BERT's static masking with dynamic masking, applying the mask each time the data is fed in, which avoids masking every sequence the same way in every training round.
RoBERTa's model structure is identical to BERT's; its text feature extraction steps are shown in Fig. 4.
The headline text of the news is directly input into the pre-trained Chinese RoBERTa model. Since the BERT model splits Chinese text character by character, the model outputs a vector for each character; summing all character vectors of a sentence and averaging yields the text feature of the sentence.
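For illustration, a sketch of this extraction with the Hugging Face transformers library; the checkpoint name hfl/chinese-roberta-wwm-ext is an assumed stand-in for the Chinese RoBERTa actually used, and the mean pooling here naively includes the special tokens.

```python
import torch
from transformers import BertTokenizer, BertModel

# Chinese RoBERTa released in BERT format (assumed checkpoint).
tokenizer = BertTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext")
model = BertModel.from_pretrained("hfl/chinese-roberta-wwm-ext")
model.eval()

def sentence_feature(title: str) -> torch.Tensor:
    """Average the per-character vectors of a news title into one 768-d feature."""
    inputs = tokenizer(title, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)            # (768,)
```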
The structure of the ResNet-50 model and the steps of extracting picture features with ResNet-50 are introduced as follows:
About ResNet:
Normally, increasing network depth in deep learning strengthens the feature extraction capability of a model, but further research on deep neural networks found that beyond a certain number of layers, performance instead degrades. To address this, He et al. proposed the deep residual network (ResNet) in "Deep residual learning for image recognition", solving the degradation problem of deep neural networks.
The idea of the residual network is to build a natural identity mapping into the neural network. Assuming the input and output dimensions of the nonlinear unit are the same, a residual learning unit can be expressed by the following formula: $x^{(l)} = f\left(x^{(l-1)} + F(x^{(l-1)})\right)$
where $F$ is the residual function to be fitted by the neural network unit, f is the ReLU activation function, and $x^{(l-1)}$ and $x^{(l)}$ denote the input and output of the l-th residual unit respectively. As shown in Fig. 5, a residual learning unit is generally implemented as a shortcut connection.
Practice has shown that ResNet solves the degradation problem of deep CNNs through residual learning and has become the basic feature extraction network for computer vision problems.
The ResNet-50 and ResNet-152 referred to in this patent are ResNet networks of different depths; the "50" in "ResNet-50" means the model contains 50 weighted convolutional layers. Fig. 6 describes the specific structure of several ResNet versions.
ResNet has two kinds of residual modules, the basic residual block and the bottleneck residual block, whose structures are shown in Fig. 7:
the left side of Fig. 7 is the basic residual block, corresponding to the convolutional subnetwork in Fig. 3;
the right side of Fig. 7 is the bottleneck residual block, corresponding to the convolutional subnetwork in Fig. 3.
ResNet-18 and ResNet-34 use the basic residual block on the left; ResNet-50, ResNet-101 and ResNet-152 use the bottleneck residual block on the right. ResNet is a deep convolutional neural network stacked from these residual modules. For the shortcut connection in a residual module, when the input and output dimensions are the same, an identity mapping can be used, adding the input directly to the output; when the dimensions differ they cannot be added directly, and a 1x1 convolution is generally used to raise the dimension of the input so that it matches the dimension of the residual.
The steps of extracting picture features with ResNet-50 are shown in Fig. 8.
A news picture of size 3×224×224 is input into the pre-trained ResNet-50 model, which outputs a 2048-dimensional vector, the feature vector of the image.
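For illustration, a sketch of this extraction with torchvision, removing the final classification layer so that the 2048-d pooled feature is returned; the preprocessing constants are the standard ImageNet values and are assumed rather than specified by the patent.

```python
import torch
from torchvision import models, transforms
from PIL import Image

resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
resnet.fc = torch.nn.Identity()  # keep the 2048-d pooled feature
resnet.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def image_feature(path: str) -> torch.Tensor:
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)  # (1,3,224,224)
    with torch.no_grad():
        return resnet(img).squeeze(0)  # (2048,)
```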
Step 1.3.2) treat each event as one class, and compute the text center and picture center of the class by averaging the text and picture features;
Step 1.3.3) regard the 20% of samples whose image features or text features are closest to the corresponding center as high-confidence reliable data, and keep their union (a sketch of this screening follows below);
Step 1.3.4) judge the remaining data by manual supplementary labeling;
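The following sketch illustrates this center-based pre-screening; the Euclidean distance and the per-event feature arrays are assumptions, as the patent does not fix the distance measure.

```python
import numpy as np

def prescreen(text_feats, img_feats, keep_ratio=0.2):
    """Per-event pre-screening: keep the union of samples whose text or image
    features lie closest to the corresponding class center; the rest go to
    manual supplementary labeling."""
    def closest(feats):
        center = feats.mean(axis=0)                    # class (event) center
        dist = np.linalg.norm(feats - center, axis=1)  # distance to the center
        k = max(1, int(len(feats) * keep_ratio))
        return set(np.argsort(dist)[:k])               # indices of nearest 20%
    text_feats, img_feats = np.asarray(text_feats), np.asarray(img_feats)
    return sorted(closest(text_feats) | closest(img_feats))
```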
Step 2) establish the multi-level visual-text semantic alignment model MSAVT (Multi-Level Semantic Alignments for Visual and Text) for image-text matching;
The present invention proposes the multi-level visual-text semantic alignment model MSAVT for cross-modal image-text retrieval of news events. Fig. 9 is a schematic structural diagram of the MSAVT model proposed by the present invention; with reference to Fig. 9, the establishment and application of MSAVT are as follows:
Addressing the insufficient alignment accuracy of existing cross-modal image-text retrieval models, i.e. the considerable room for improvement in the evaluation indicators, we improve on classic models represented by VSE++ and propose the multi-level visual-text semantic alignment model MSAVT (Multi-level Semantic Alignments for Visual and Text) for image-text matching. Two main improvements are made: first, a clustering loss that simultaneously establishes intra-modal and inter-modal constraints is proposed; second, a word detection module is added to the existing model, with a word detection loss function that attends to word-level alignment. Finally, we also introduce a pre-trained BERT model for text modeling, improving the generalization performance of the model. The concrete implementation steps after the improvement are described below.
Step 2.1) extract image features and text features with deep neural network models;
For image features, the original image I of size 224×224 is input and, after data augmentation such as random cropping and horizontal flipping, fed into the ResNet-152 model, which outputs a 2048-dimensional vector serving as the visual descriptor υ(I) of the image input, as shown in formula 2-1:
$\upsilon(I) = f_{img}(I) \quad (2\text{-}1)$
The visual descriptor υ(I) is used in step 2.4) to compute the word detection loss.
For text features, the news title text corresponding to the image is input into the BERT-base model, which tokenizes the text automatically. Since the BERT model splits Chinese text character by character, the model outputs a vector for each character; summing all character vectors of a sentence and averaging yields the text feature of the sentence, a 768-dimensional vector.
Step 2.2) map the extracted text features and image features into the joint embedding space of image semantics and text semantics;
The image feature vector is fed into an embedding module (a two-layer feed-forward neural network) that maps it into a 1024-dimensional embedding space. The 768-dimensional sentence vector output by the BERT model is fed into a gated recurrent neural network (GRU) that maps it into the 1024-dimensional embedding space. Image features and text features are thus mapped into one joint embedding space, and the similarity of vectors can be measured with indicators such as cosine similarity.
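For illustration, a sketch of the two embedding modules; the hidden width of the feed-forward network and the treatment of the sentence vector as a length-1 GRU input are assumptions.

```python
import torch.nn as nn

class ImageEmbed(nn.Module):
    """Two-layer feed-forward module: 2048-d image feature -> 1024-d joint space."""
    def __init__(self, dim_in=2048, dim_hidden=1024, dim_out=1024):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_in, dim_hidden), nn.ReLU(),
                                 nn.Linear(dim_hidden, dim_out))
    def forward(self, x):
        return self.net(x)

class TextEmbed(nn.Module):
    """GRU mapping the 768-d BERT sentence vector into the 1024-d joint space;
    the vector is treated here as a sequence of length 1."""
    def __init__(self, dim_in=768, dim_out=1024):
        super().__init__()
        self.gru = nn.GRU(dim_in, dim_out, batch_first=True)
    def forward(self, x):                # x: (batch, 768)
        _, h = self.gru(x.unsqueeze(1))  # run the GRU over a one-step sequence
        return h.squeeze(0)              # (batch, 1024)
```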
Step 2.3) propose a clustering loss that simultaneously establishes intra-modal constraints and inter-modal constraints;
In the news image-text dataset, many image-text pairs belong to one event, whereas the traditional ranking loss only considers the distance constraints that should hold between an image and a text, ignoring the distance constraints between images and between texts. We establish inter-modal and intra-modal relations simultaneously from the clustering perspective, grouping the pictures of a news event and the titles of its related reports into one cluster. Assume the dataset has K clusters and each cluster contains N sample pairs; given object i in cluster k, the intra-cluster distance can be computed (formula 2-2),
where $r_{ik}$ is the vector representation of object i in cluster k and $\mu_k$ is the center of the k-th cluster, defined as shown in formula 2-3:
the variance σ is defined as shown in formula 2-4:
By minimizing the intra-cluster distance and maximizing the inter-cluster distance, the clustering loss $L_{cluster}$ is obtained as defined in formula 2-6:
The clustering loss brings the samples within a cluster closer: in the learned joint embedding space, the distances within the same news event become smaller while the distances between different news events become larger. Compared with a pure ranking loss, the clustering loss builds constraints from the clustering view; it optimizes all samples of the selected cluster rather than one image-text pair per iteration, so it converges faster and performs better than the ranking loss.
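For illustration, a sketch of a clustering loss in this spirit; since formulas 2-2 to 2-6 are given only by reference, the concrete intra-cluster term (mean squared distance to the center) and the margin-based inter-cluster term below are assumptions.

```python
import torch

def clustering_loss(embeddings, cluster_ids, margin=1.0):
    """Assumed clustering loss: minimize the distance of samples to their own
    cluster center (intra-cluster) and push different cluster centers at least
    `margin` apart (inter-cluster)."""
    centers, intra = [], 0.0
    for k in cluster_ids.unique():
        members = embeddings[cluster_ids == k]
        mu = members.mean(dim=0)                                  # center (2-3)
        centers.append(mu)
        intra = intra + ((members - mu) ** 2).sum(dim=1).mean()   # intra (2-2)
    inter = 0.0
    for a in range(len(centers)):
        for b in range(a + 1, len(centers)):
            d = torch.norm(centers[a] - centers[b])               # inter (2-5)
            inter = inter + torch.clamp(margin - d, min=0)
    return intra + inter                                          # cf. (2-6)
```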
Current cross-modal image-text retrieval tasks mostly adopt the common space feature learning method. The model framework includes two main parts: features are first extracted from images and texts with deep neural networks, and then, with metric learning, a loss function is designed to learn an effective common representation space. Although such methods have achieved remarkable results, the problem of insufficient image-text alignment accuracy remains. Compared with the traditional ranking loss function, the clustering loss helps bring the related samples of a news event closer, and the word detection module helps attend to the fine-grained word-level alignment of images and texts.
Step 2.4) for the image features, add the word detection module loss to attend to word-level alignment;
Since the ranking loss constrains only the global representation level, it can hardly guide the direction of parameter updates for ResNet-152 with its huge number of parameters; in actual experiments, the model parameters converge with difficulty when only the ranking loss is used. We therefore design a word detection module, adding fine-grained word alignment on top of the coarse-grained sentence alignment; the design of the word detection module is shown in Fig. 10.
The word detection loss is used to evaluate whether, in a news image-text pair, the image contains the high-frequency words that appear in its title text. The attribute dictionary is set according to the dataset used in this work; it consists of 1000 high-frequency words from the text data of the multimodal dataset. Specifically, when training on the multimodal dataset, given an image and its corresponding title, the words of the top-1k word set are checked, and for each attribute word a simple binary classifier determines whether the image contains it. By adding the word detection module, 1000 corresponding labels are attached to each image. Compared with the original single task that uses only the ranking loss, 1000 strict constraint tasks are added, which effectively keeps the model from falling into a local optimum and better guides the direction of parameter convergence of ResNet-152. The attribute dictionary is set up as follows.
Step 2.4.I) multiply the image descriptor υ by the weight matrix W to obtain the probability score s of each word of the top-1k word set, defined as shown in formula 2-7:
$s = W\upsilon \quad (2\text{-}7)$
Step 2.4.II) compute in advance which attributes (i.e. high-frequency words) of the attribute dictionary are contained in the title text of each news image-text pair, use them as the labels of a classification problem, and compute the word detection loss $L_{word}$ with 1000 binary classifiers, as shown in formula 2-8:
where $s_i$ is the probability score of the i-th word and $t_i \in \{0,1\}$ indicates whether the i-th word appears in the title text (a sketch of this module follows below).
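For illustration, a sketch of the word detection head; realizing the 1000 binary classifiers as a single linear layer followed by element-wise binary cross-entropy is an assumption consistent with formulas 2-7 and 2-8.

```python
import torch.nn as nn

class WordDetection(nn.Module):
    """Score the 1000 attribute words from the visual descriptor (s = W·υ,
    formula 2-7) and apply 1000 binary classifiers (formula 2-8)."""
    def __init__(self, dim_visual=2048, num_words=1000):
        super().__init__()
        self.W = nn.Linear(dim_visual, num_words, bias=False)  # weight matrix W
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, visual_desc, targets):
        # targets[i, j] = 1.0 if high-frequency word j occurs in title i, else 0.0
        scores = self.W(visual_desc)      # s = W υ
        return self.bce(scores, targets)  # word detection loss L_word
```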
The overall training steps on the dataset are as follows:
Step 2.4.1) use the pre-trained ResNet-152 model with fixed weights as the image feature encoder;
Step 2.4.2) according to the overall loss function, update the parameters of the model other than ResNet-152 with the BP algorithm;
Step 2.4.3) run the above training for 40 epochs with an initial learning rate of 0.001, divided by 10 every 20 epochs;
Step 2.4.4) stop fixing the weights of ResNet-152 and fine-tune the entire architecture end-to-end for 50 epochs of training, with an initial learning rate of 0.00001, divided by 10 every 20 epochs. Throughout training, the weight $\lambda_1$ is fixed to 1 and $\lambda_2$ is fixed to 0.1 (a sketch of this schedule follows below).
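For illustration, a sketch of this two-stage schedule; the optimizer (Adam) and the run_epoch placeholder are assumptions, while the epoch counts, learning rates and decay follow the steps above.

```python
import torch
from torch.optim.lr_scheduler import StepLR

def run_stage(model, run_epoch, epochs, lr):
    """One training stage; `run_epoch(optimizer)` is a placeholder callable that
    performs one epoch (forward pass, overall loss 2-9, backward, step)."""
    params = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.Adam(params, lr=lr)             # optimizer assumed
    scheduler = StepLR(optimizer, step_size=20, gamma=0.1)  # /10 every 20 epochs
    for _ in range(epochs):
        run_epoch(optimizer)
        scheduler.step()

def train_msavt(model, resnet152, run_epoch):
    for p in resnet152.parameters():  # stage 1: frozen image encoder
        p.requires_grad = False
    run_stage(model, run_epoch, epochs=40, lr=1e-3)
    for p in resnet152.parameters():  # stage 2: end-to-end fine-tuning
        p.requires_grad = True
    run_stage(model, run_epoch, epochs=50, lr=1e-5)
```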
Step 2.5) the proposed clustering loss and word detection loss serve as supplements to the ranking loss of prior art 1, aiming to improve network convergence efficiency and image-text matching accuracy. The final overall loss function is shown in formula 2-9:
$L = L_{Ranking} + \lambda_1 L_{cluster} + \lambda_2 L_{word} \quad (2\text{-}9)$
Step 3) realize the cross-modal image-text search system for news events
As shown in Fig. 11, with the MSAVT model as the algorithmic core, a news event cross-modal image-text search system is designed and implemented with Vue, SpringBoot and other front-end and back-end programming technologies; it effectively exploits the relationship between the two different modal data in news reports, headline text and accompanying pictures, and achieves richer retrieval results than single-modal retrieval systems.
The image-text mutual search function is the core function of the system designed and implemented by the present invention and the main value it provides to users; it comprises two main sub-modules, search for text by image and search for image by text. In search for text by image, the user uploads a news picture as the query; the system feeds it into the trained MSAVT model for forward propagation, computes its Euclidean coordinates in the joint embedding space, and returns the N nearest news headline texts, realizing the search-text-by-image function. The implementation of search for image by text is similar, except that the input and returned modalities are reversed.
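For illustration, a sketch of the online search step: because the embeddings of the retrieval targets can be computed and stored offline, a query needs only one forward pass plus a nearest-neighbor scan in the joint space. The function and variable names are illustrative.

```python
import numpy as np

def search_text_by_image(query_vec, title_vecs, titles, n=10):
    """Image-to-text search: return the N news titles whose precomputed joint
    embeddings are nearest (in Euclidean distance) to the query embedding."""
    dists = np.linalg.norm(title_vecs - query_vec, axis=1)
    return [titles[i] for i in np.argsort(dists)[:n]]
```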
Key points of the present invention:
First, a multi-modal news image-text dataset is constructed independently for training the model;
Second, a pre-trained BERT model is used to improve the capability of extracting text features;
Third, a clustering loss that simultaneously establishes intra-modal and inter-modal constraints is proposed, improving the loss function;
Fourth, a word detection module is added during image feature extraction to attend to the word-level alignment of images and texts.
Those of ordinary skill in the art will understand that the drawings are only schematic diagrams of one embodiment, and the modules or processes in the drawings are not necessarily required for implementing the present invention.
From the description of the above embodiments, those skilled in the art will clearly understand that the present invention can be realized by software plus the necessary general-purpose hardware platform. Based on this understanding, the essence of the technical solution of the present invention, or the part contributing to the prior art, can be embodied in the form of a software product; the computer software product can be stored in a storage medium such as ROM/RAM, magnetic disk or optical disc, and includes instructions that cause a computer device (a personal computer, a server, a network device, etc.) to execute the methods described in the embodiments of the present invention or parts thereof.
The embodiments in this specification are described progressively; identical or similar parts of the embodiments can be referred to mutually, and each embodiment focuses on its differences from the others. In particular, the device and system embodiments are basically similar to the method embodiments, so their description is relatively brief; see the relevant parts of the method embodiments. The device and system embodiments described above are only illustrative: the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, i.e. they can be located in one place or distributed over multiple network elements. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment, which those skilled in the art can understand and implement without creative effort.
The above is only a preferred specific implementation of the present invention, but the scope of protection of the present invention is not limited to it; any change or replacement readily conceivable by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the scope of protection of the present invention. The scope of protection of the present invention shall therefore be subject to the scope of protection of the claims.
Claims (7)
- A news event search method based on a multi-level image-text semantic alignment model, characterized by comprising the following steps: Step 1) construct a multi-modal news image-text dataset; Step 1.1) news event selection: after sorting and summarizing news events, the event names are obtained; Step 1.2) news data acquisition: use the event names obtained in step 1.1) as search terms, search for the matching news report data, and extract the accompanying picture and title text pair of each news report as one sample of the news event; Step 1.3) data labeling: the obtained multi-modal news image-text dataset is preprocessed by algorithm, completing the algorithmic pre-screening of the dataset; Step 2) establish and train a multi-level visual-text semantic alignment model MSAVT for image-text matching; Step 2.1) extract image features and text features with deep neural network models; Step 2.2) map the extracted text features and image features into a joint embedding space of image semantics and text semantics; Step 2.3) propose a clustering loss that simultaneously establishes intra-modal constraints and inter-modal constraints; Step 2.4) for the image features, add a word detection loss to focus on word-level alignment; Step 2.5) use the clustering loss and the word detection loss as supplements to the ranking loss to obtain the final overall loss function, completing the training of the MSAVT model; Step 3) use the trained MSAVT model to realize cross-modal image-text search of news events, by searching text with an image or searching images with text.
- The news event search method based on a multi-level image-text semantic alignment model according to claim 1, characterized in that in step 1.3) data labeling, the specific steps of the algorithmic pre-screening include: Step 1.3.1) use the pre-trained RoBERTa model to extract text features and the pre-trained ResNet50 model to extract picture features; Step 1.3.2) treat each event as one class and compute the text center and picture center of the class by averaging the text and picture features; Step 1.3.3) regard the 20% of samples whose image features or text features are closest to the corresponding center as high-confidence reliable data and keep their union; Step 1.3.4) judge the remaining data by manual supplementary labeling.
- The news event search method based on a multi-level image-text semantic alignment model according to claim 1, characterized in that in step 2.4) the word detection loss is used to evaluate whether, in a news image-text pair, the image contains the high-frequency words that appear in its title text; an attribute dictionary is set according to the dataset used and consists of 1000 high-frequency words from the text data of the multimodal dataset; the word detection loss is computed as follows: Step 2.4.I) multiply the image descriptor υ by the weight matrix W to obtain the probability score s of each word of the top-1k word set, defined as shown in formula 2-7: s = Wυ (2-7); Step 2.4.II) compute in advance the high-frequency words of the attribute dictionary contained in the title text of each news image-text pair as the labels of a classification problem, and compute the word detection loss L_word with 1000 binary classifiers, as shown in formula 2-8, where s_i is the probability score of the i-th word and t_i ∈ {0,1} indicates whether the i-th word appears in the title text.
- The news event search method based on a multi-level image-text semantic alignment model according to claim 4, characterized in that the overall training steps of the model are as follows: Step 2.4.1) use the pre-trained ResNet-152 model with fixed weights as the image feature encoder; Step 2.4.2) according to the overall loss function, update the parameters of the model other than ResNet-152 with the BP algorithm; Step 2.4.3) run the above training for 40 epochs with an initial learning rate of 0.001, divided by 10 every 20 epochs; Step 2.4.4) stop fixing the weights of ResNet-152 and fine-tune the entire architecture end-to-end for 50 epochs of training, with an initial learning rate of 0.00001, divided by 10 every 20 epochs; throughout training, the weight λ1 is fixed to 1 and λ2 is fixed to 0.1.
- The news event search method based on a multi-level image-text semantic alignment model according to claim 1, characterized in that in step 3), in image-to-text search the user uploads a news picture as the query, the system feeds it into the trained MSAVT model for forward propagation, computes its Euclidean coordinates in the joint embedding space and returns the N nearest news headline texts, thereby realizing search for text by image; in text-to-image search the user uploads a news headline text as the query, the system feeds it into the trained MSAVT model for forward propagation, computes its Euclidean coordinates in the joint embedding space and returns the N nearest news pictures, thereby realizing search for images by text.
- A news event search using the news event search method based on a multi-level image-text semantic alignment model according to any one of claims 1-6, characterized in that, with the multi-level visual-text semantic alignment model MSAVT for image-text matching as the algorithmic core, front-end and back-end programming technologies are used to design and implement a cross-modal image-text search system for news events that exploits the relationship between the two different modal data in news reports, title text and accompanying pictures, to produce retrieval results.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111413975.3A CN114297473B (zh) | 2021-11-25 | 2021-11-25 | News event search method and system based on multi-level image-text semantic alignment model
CN202111413975.3 | 2021-11-25 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2023093574A1 true WO2023093574A1 (zh) | 2023-06-01 |
WO2023093574A9 WO2023093574A9 (zh) | 2023-08-10 |
Family
ID=80966465
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/131992 WO2023093574A1 (zh) | 2021-11-25 | 2022-11-15 | News event search method and system based on multi-level image-text semantic alignment model
Country Status (2)
Country | Link |
---|---|
CN (1) | CN114297473B (zh) |
WO (1) | WO2023093574A1 (zh) |
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116431855A (zh) * | 2023-06-13 | 2023-07-14 | 荣耀终端有限公司 | Image retrieval method and related device
CN116578738A (zh) * | 2023-07-14 | 2023-08-11 | 深圳须弥云图空间科技有限公司 | Image-text retrieval method and apparatus based on graph attention and generative adversarial networks
CN116579337A (zh) * | 2023-07-07 | 2023-08-11 | 南开大学 | Fake news detection method incorporating evidence credibility
CN116842141A (zh) * | 2023-08-28 | 2023-10-03 | 北京中安科技发展有限公司 | Digital intelligence research and judgment method based on police and smoke-alarm linkage
CN116912629A (zh) * | 2023-09-04 | 2023-10-20 | 小舟科技有限公司 | General image caption generation method based on multi-task learning and related apparatus
CN116933854A (zh) * | 2023-09-18 | 2023-10-24 | 腾讯科技(深圳)有限公司 | Processing method, apparatus, device and storage medium for an image generation model
CN116935329A (zh) * | 2023-09-19 | 2023-10-24 | 山东大学 | Weakly supervised text-based person retrieval method and system with class-level contrastive learning
CN116978048A (zh) * | 2023-09-25 | 2023-10-31 | 北京中关村科金技术有限公司 | Context content acquisition method, apparatus, electronic device and storage medium
CN117094396A (zh) * | 2023-10-19 | 2023-11-21 | 北京英视睿达科技股份有限公司 | Knowledge extraction method, apparatus, computer device and storage medium
CN117131214A (zh) * | 2023-10-26 | 2023-11-28 | 北京科技大学 | Zero-shot sketch retrieval method and system based on feature distribution alignment and clustering
CN117153393A (zh) * | 2023-08-30 | 2023-12-01 | 哈尔滨工业大学 | Cardiovascular disease risk prediction method based on multimodal fusion
CN117407558A (zh) * | 2023-12-14 | 2024-01-16 | 武汉理工大学三亚科教创新园 | Ocean remote sensing image-text retrieval method, apparatus, electronic device and storage medium
CN117611245A (zh) * | 2023-12-14 | 2024-02-27 | 浙江博观瑞思科技有限公司 | Data analysis management system and method for e-commerce operation activity planning
CN117609902A (zh) * | 2024-01-18 | 2024-02-27 | 知呱呱(天津)大数据技术有限公司 | Patent IPC classification method and system based on image-text multimodal hyperbolic embedding
CN117726721A (zh) * | 2024-02-08 | 2024-03-19 | 湖南君安科技有限公司 | Image generation method, device and medium based on topic driving and multimodal fusion
CN117746441A (zh) * | 2024-02-20 | 2024-03-22 | 浪潮电子信息产业股份有限公司 | Visual language understanding method, apparatus, device and readable storage medium
CN117808923A (zh) * | 2024-02-29 | 2024-04-02 | 浪潮电子信息产业股份有限公司 | Image generation method, system, electronic device and readable storage medium
CN117912373A (zh) * | 2024-03-20 | 2024-04-19 | 内江广播电视台 | Offline mobile news media intelligent presentation device and presentation method
CN117972133A (zh) * | 2024-03-21 | 2024-05-03 | 珠海泰坦软件系统有限公司 | Image-text retrieval method and system based on big data
CN118038497A (zh) * | 2024-04-10 | 2024-05-14 | 四川大学 | SAM-based text-information-driven pedestrian retrieval method and system
CN118114188A (zh) * | 2024-04-30 | 2024-05-31 | 江西师范大学 | Fake news detection method based on multi-view and hierarchical fusion
CN118133946A (zh) * | 2024-05-07 | 2024-06-04 | 烟台海颐软件股份有限公司 | Multimodal knowledge hierarchical recognition and controlled alignment method
CN118227744A (zh) * | 2024-05-27 | 2024-06-21 | 山东体育学院 | Fake news detection method
CN118296414A (zh) * | 2024-06-06 | 2024-07-05 | 中国科学技术大学 | Computable value system construction method based on hierarchical clustering and attribute mining
CN118506107A (zh) * | 2024-07-17 | 2024-08-16 | 烟台大学 | Robot classification and detection method and system based on multimodal multi-task learning
CN118507036A (zh) * | 2024-07-17 | 2024-08-16 | 长春理工大学中山研究院 | Emotional-semantic multimodal depression tendency recognition system
CN118551194A (zh) * | 2024-07-30 | 2024-08-27 | 中国科学院空天信息创新研究院 | Large language model data augmentation method for event extraction and apparatus therefor
CN118568650A (zh) * | 2024-08-05 | 2024-08-30 | 山东省计算中心(国家超级计算济南中心) | Industrial anomaly detection method and system based on fine-grained text prompt feature engineering
CN118656446A (zh) * | 2024-08-20 | 2024-09-17 | 华信咨询设计研究院有限公司 | News information extraction method, system and electronic device based on a large model
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114297473B (zh) * | 2021-11-25 | 2024-10-15 | 北京邮电大学 | News event search method and system based on multi-level image-text semantic alignment model
CN115033727B (zh) * | 2022-05-10 | 2023-06-20 | 中国科学技术大学 | Image-text matching method based on cross-modal confidence awareness
CN114625910B (zh) * | 2022-05-13 | 2022-08-19 | 中国科学技术大学 | Image-text cross-modal retrieval method based on a negative-aware attention framework
CN115048491B (zh) * | 2022-06-18 | 2024-09-06 | 哈尔滨工业大学 | Software cross-modal retrieval method based on hypothesis testing in heterogeneous semantic spaces
CN115909317B (zh) * | 2022-07-15 | 2024-07-05 | 广州珠江在线多媒体信息有限公司 | Learning method and system for joint 3D model-text representation
CN116167434B (zh) * | 2023-04-24 | 2023-07-04 | 清华大学 | Training method and apparatus for a weakly supervised vision-language pre-training model
CN117037209A (zh) * | 2023-07-24 | 2023-11-10 | 西北工业大学 | Pedestrian attribute cross-modal alignment method based on complete attribute recognition enhancement
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108319686B (zh) * | 2018-02-01 | 2021-07-30 | Peking University Shenzhen Graduate School | Adversarial cross-media retrieval method based on restricted text space |
CN109255047A (zh) * | 2018-07-18 | 2019-01-22 | Xidian University | Image-text mutual retrieval method based on complementary semantic alignment and symmetric retrieval |
CN113065012B (zh) * | 2021-03-17 | 2022-04-22 | Shandong Artificial Intelligence Institute | Image-text parsing method based on a multi-modal dynamic interaction mechanism |
CN113590865B (zh) * | 2021-07-09 | 2022-11-22 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Training method for an image search model, and image search method |
- 2021-11-25: CN application CN202111413975.3A filed (patent CN114297473B, status: active)
- 2022-11-15: PCT application PCT/CN2022/131992 filed (publication WO2023093574A1, status: unknown)
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103425757A (zh) * | 2013-07-31 | 2013-12-04 | Fudan University | Cross-media person news retrieval method and system fusing multi-modal information |
US20200311798A1 (en) * | 2019-03-25 | 2020-10-01 | Board Of Trustees Of The University Of Illinois | Search engine use of neural network regressor for multi-modal item recommendations based on visual semantic embeddings |
CN113239214A (zh) * | 2021-05-19 | 2021-08-10 | Institute of Automation, Chinese Academy of Sciences | Cross-modal retrieval method, system and device based on supervised contrastive learning |
CN113535949A (zh) * | 2021-06-15 | 2021-10-22 | Hangzhou Dianzi University | Multi-modal joint event detection method based on images and sentences |
CN113537304A (zh) * | 2021-06-28 | 2021-10-22 | Hangzhou Dianzi University | Cross-modal semantic clustering method based on bidirectional CNN |
CN113516118A (zh) * | 2021-07-29 | 2021-10-19 | Northwest University | Multi-modal cultural resource processing method with joint image-text embedding |
CN114297473A (zh) * | 2021-11-25 | 2022-04-08 | Beijing University of Posts and Telecommunications | News event search method and system based on multi-level image-text semantic alignment model |
Cited By (47)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116431855B (zh) * | 2023-06-13 | 2023-10-20 | Honor Device Co., Ltd. | Image retrieval method and related device |
CN116431855A (zh) * | 2023-06-13 | 2023-07-14 | Honor Device Co., Ltd. | Image retrieval method and related device |
CN116579337B (zh) * | 2023-07-07 | 2023-10-10 | Nankai University | Fake news detection method incorporating evidence credibility |
CN116579337A (zh) * | 2023-07-07 | 2023-08-11 | Nankai University | Fake news detection method incorporating evidence credibility |
CN116578738A (zh) * | 2023-07-14 | 2023-08-11 | Shenzhen Xumi Yuntu Space Technology Co., Ltd. | Image-text retrieval method and apparatus based on graph attention and generative adversarial networks |
CN116578738B (zh) * | 2023-07-14 | 2024-02-20 | Shenzhen Xumi Yuntu Space Technology Co., Ltd. | Image-text retrieval method and apparatus based on graph attention and generative adversarial networks |
CN116842141B (zh) * | 2023-08-28 | 2023-11-07 | Beijing Zhong'an Technology Development Co., Ltd. | Digital intelligence research and judgment method based on police and smoke-alarm linkage |
CN116842141A (zh) * | 2023-08-28 | 2023-10-03 | Beijing Zhong'an Technology Development Co., Ltd. | Digital intelligence research and judgment method based on police and smoke-alarm linkage |
CN117153393A (zh) * | 2023-08-30 | 2023-12-01 | Harbin Institute of Technology | Cardiovascular disease risk prediction method based on multi-modal fusion |
CN116912629A (zh) * | 2023-09-04 | 2023-10-20 | Xiaozhou Technology Co., Ltd. | General image captioning method based on multi-task learning, and related apparatus |
CN116912629B (zh) * | 2023-09-04 | 2023-12-29 | Xiaozhou Technology Co., Ltd. | General image captioning method based on multi-task learning, and related apparatus |
CN116933854A (zh) * | 2023-09-18 | 2023-10-24 | Tencent Technology (Shenzhen) Co., Ltd. | Processing method, apparatus, device and storage medium for an image generation model |
CN116933854B (zh) * | 2023-09-18 | 2024-03-29 | Tencent Technology (Shenzhen) Co., Ltd. | Processing method, apparatus, device and storage medium for an image generation model |
CN116935329A (zh) * | 2023-09-19 | 2023-10-24 | Shandong University | Weakly supervised text-based person retrieval method and system with class-level contrastive learning |
CN116935329B (zh) * | 2023-09-19 | 2023-12-01 | Shandong University | Weakly supervised text-based person retrieval method and system with class-level contrastive learning |
CN116978048A (zh) * | 2023-09-25 | 2023-10-31 | Beijing Zhongguancun Kejin Technology Co., Ltd. | Context content acquisition method and apparatus, electronic device, and storage medium |
CN116978048B (zh) * | 2023-09-25 | 2023-12-22 | Beijing Zhongguancun Kejin Technology Co., Ltd. | Context content acquisition method and apparatus, electronic device, and storage medium |
CN117094396B (zh) * | 2023-10-19 | 2024-01-23 | Beijing Yingshi Ruida Technology Co., Ltd. | Knowledge extraction method and apparatus, computer device, and storage medium |
CN117094396A (zh) * | 2023-10-19 | 2023-11-21 | Beijing Yingshi Ruida Technology Co., Ltd. | Knowledge extraction method and apparatus, computer device, and storage medium |
CN117131214B (zh) * | 2023-10-26 | 2024-02-09 | University of Science and Technology Beijing | Zero-shot sketch retrieval method and system based on feature distribution alignment and clustering |
CN117131214A (zh) * | 2023-10-26 | 2023-11-28 | University of Science and Technology Beijing | Zero-shot sketch retrieval method and system based on feature distribution alignment and clustering |
CN117407558B (zh) * | 2023-12-14 | 2024-03-26 | Sanya Science and Education Innovation Park of Wuhan University of Technology | Ocean remote sensing image-text retrieval method and apparatus, electronic device, and storage medium |
CN117407558A (zh) * | 2023-12-14 | 2024-01-16 | Sanya Science and Education Innovation Park of Wuhan University of Technology | Ocean remote sensing image-text retrieval method and apparatus, electronic device, and storage medium |
CN117611245A (zh) * | 2023-12-14 | 2024-02-27 | Zhejiang Boguan Ruisi Technology Co., Ltd. | Data analysis and management system and method for e-commerce operation campaign planning |
CN117611245B (zh) * | 2023-12-14 | 2024-05-31 | Zhejiang Boguan Ruisi Technology Co., Ltd. | Data analysis and management system and method for e-commerce operation campaign planning |
CN117609902A (zh) * | 2024-01-18 | 2024-02-27 | Zhiguagua (Tianjin) Big Data Technology Co., Ltd. | Patent IPC classification method and system based on image-text multi-modal hyperbolic embedding |
CN117609902B (zh) * | 2024-01-18 | 2024-04-05 | Beijing Zhiguagua Technology Co., Ltd. | Patent IPC classification method and system based on image-text multi-modal hyperbolic embedding |
CN117726721A (zh) * | 2024-02-08 | 2024-03-19 | Hunan Jun'an Technology Co., Ltd. | Image generation method, device, and medium based on topic driving and multi-modal fusion |
CN117726721B (zh) * | 2024-02-08 | 2024-04-30 | Hunan Jun'an Technology Co., Ltd. | Image generation method, device, and medium based on topic driving and multi-modal fusion |
CN117746441B (zh) * | 2024-02-20 | 2024-05-10 | Inspur Electronic Information Industry Co., Ltd. | Visual language understanding method and apparatus, device, and readable storage medium |
CN117746441A (zh) * | 2024-02-20 | 2024-03-22 | Inspur Electronic Information Industry Co., Ltd. | Visual language understanding method and apparatus, device, and readable storage medium |
CN117808923A (zh) * | 2024-02-29 | 2024-04-02 | Inspur Electronic Information Industry Co., Ltd. | Image generation method, system, electronic device, and readable storage medium |
CN117808923B (zh) * | 2024-02-29 | 2024-05-14 | Inspur Electronic Information Industry Co., Ltd. | Image generation method, system, electronic device, and readable storage medium |
CN117912373A (zh) * | 2024-03-20 | 2024-04-19 | Neijiang Radio and Television Station | Intelligent presentation apparatus and method for offline mobile news media |
CN117912373B (zh) * | 2024-03-20 | 2024-05-31 | Neijiang Radio and Television Station | Intelligent presentation method for offline mobile news media |
CN117972133B (zh) * | 2024-03-21 | 2024-05-31 | Zhuhai Titan Software Systems Co., Ltd. | Image-text retrieval method and system based on big data |
CN117972133A (zh) * | 2024-03-21 | 2024-05-03 | Zhuhai Titan Software Systems Co., Ltd. | Image-text retrieval method and system based on big data |
CN118038497A (zh) * | 2024-04-10 | 2024-05-14 | Sichuan University | SAM-based text-information-driven person retrieval method and system |
CN118114188A (zh) * | 2024-04-30 | 2024-05-31 | Jiangxi Normal University | Fake news detection method based on multi-view and hierarchical fusion |
CN118133946A (zh) * | 2024-05-07 | 2024-06-04 | Yantai Haiyi Software Co., Ltd. | Multi-modal knowledge hierarchical recognition and controlled alignment method |
CN118227744A (zh) * | 2024-05-27 | 2024-06-21 | Shandong Sport University | Fake news detection method |
CN118296414A (zh) * | 2024-06-06 | 2024-07-05 | University of Science and Technology of China | Computable value system construction method based on hierarchical clustering and attribute mining |
CN118506107A (zh) * | 2024-07-17 | 2024-08-16 | Yantai University | Robot classification and detection method and system based on multi-modal multi-task learning |
CN118507036A (zh) * | 2024-07-17 | 2024-08-16 | Zhongshan Institute of Changchun University of Science and Technology | Emotion-semantic multi-modal depression tendency recognition system |
CN118551194A (zh) * | 2024-07-30 | 2024-08-27 | Aerospace Information Research Institute, Chinese Academy of Sciences | Large language model data augmentation method and apparatus for event extraction |
CN118568650A (zh) * | 2024-08-05 | 2024-08-30 | Shandong Computer Science Center (National Supercomputer Center in Jinan) | Industrial anomaly detection method and system based on fine-grained text prompt feature engineering |
CN118656446A (zh) * | 2024-08-20 | 2024-09-17 | Huaxin Consulting & Design Institute Co., Ltd. | News information extraction method, system, and electronic device using a large model |
Also Published As
Publication number | Publication date |
---|---|
CN114297473A (zh) | 2022-04-08 |
WO2023093574A9 (zh) | 2023-08-10 |
CN114297473B (zh) | 2024-10-15 |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
WO2023093574A1 (zh) | News event search method and system based on multi-level image-text semantic alignment model | |
Li et al. | Visual to text: Survey of image and video captioning | |
WO2021223323A1 (zh) | Automatic image content description method based on Chinese visual vocabulary construction | |
Wang et al. | Application of convolutional neural network in natural language processing | |
Wang et al. | Multilayer dense attention model for image caption | |
Zhang et al. | Keywords extraction with deep neural network model | |
Feng et al. | Enhanced sentiment labeling and implicit aspect identification by integration of deep convolution neural network and sequential algorithm | |
Cai et al. | Intelligent question answering in restricted domains using deep learning and question pair matching | |
Ji et al. | Survey of visual sentiment prediction for social media analysis | |
CN113535949B (zh) | Multi-modal joint event detection method based on images and sentences | |
Salur et al. | A soft voting ensemble learning-based approach for multimodal sentiment analysis | |
CN111061939A (zh) | Keyword matching and recommendation method for scientific research and academic news based on deep learning | |
CN114357148A (zh) | Image-text retrieval method based on multi-level networks | |
CN116977992A (zh) | Text information recognition method, apparatus, computer device, and storage medium | |
Al-Tameemi et al. | Multi-model fusion framework using deep learning for visual-textual sentiment classification | |
Verma et al. | Automatic image caption generation using deep learning | |
Liu et al. | A multimodal approach for multiple-relation extraction in videos | |
CN118069927A (zh) | News recommendation method and system based on knowledge awareness and multi-interest user feature representation | |
Tian et al. | Scene graph generation by multi-level semantic tasks | |
Perdana et al. | Instance-based deep transfer learning on cross-domain image captioning | |
Cai et al. | Multi‐level deep correlative networks for multi‐modal sentiment analysis | |
Ren et al. | ABML: attention-based multi-task learning for jointly humor recognition and pun detection | |
CN116975403A (zh) | Content retrieval model, content retrieval processing method and apparatus, and computer device | |
Yang | Feature Extraction of English Semantic Translation Relying on Graph Regular Knowledge Recognition Algorithm | |
Chen et al. | Document-level multi-task learning approach based on coreference-aware dynamic heterogeneous graph network for event extraction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | EP: The EPO has been informed by WIPO that EP was designated in this application | Ref document number: 22897657; Country of ref document: EP; Kind code of ref document: A1 |
 | NENP | Non-entry into the national phase | Ref country code: DE |