CN113935329B - Asymmetric text matching method based on adaptive feature recognition and denoising - Google Patents
- Publication number: CN113935329B (application CN202111192675.7A)
- Authority: CN (China)
- Prior art keywords: document, representation, hash, matching, query
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/30—Semantic analysis (under G06F40/00—Handling natural language data)
- G06F18/2415—Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate (under G06F18/00—Pattern recognition)
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS; G06N3/02—Neural networks
- G06N3/044—Recurrent networks, e.g. Hopfield networks
- G06N3/045—Combinations of networks
- G06N3/048—Activation functions
- G06N3/084—Backpropagation, e.g. using gradient descent (under G06N3/08—Learning methods)
Abstract
The invention relates to an asymmetric text matching method based on adaptive feature recognition and denoising, and belongs to the technical field of natural language processing. The invention is designed to explicitly identify discriminative features and filter out irrelevant features in a context-aware manner for each asymmetric text pair. Specifically, a matching-adaptive twin cell is first designed to adaptively identify discriminative features, thereby deriving a corresponding hybrid representation for each text pair. Then, a locally constrained hash denoiser is provided, which performs feature-level denoising on the redundant long text by learning a discriminative low-dimensional binary code, thereby achieving better relevance learning. Extensive experiments on real data sets from four different downstream tasks show that, compared with the latest state-of-the-art methods, the disclosed method obtains large performance gains and provides support for subsequent downstream tasks such as information retrieval and answer selection.
Description
Technical Field
The invention relates to an asymmetric text matching method based on adaptive feature recognition and denoising, and belongs to the technical field of natural language processing.
Background
Text Matching (TM) is a valuable but challenging task in information retrieval and natural language processing. Given a pair of documents, TM aims to predict their semantic relationship. Note that efficient matching algorithms are an indispensable asset in many information retrieval systems, question-answering systems, and dialogue systems. In most application scenarios, the matched sequence pairs (e.g., query-document, keyword-document, and question-answer pairs) differ greatly in the amount of information they carry, i.e., the matching is asymmetric. For example, in the InsuranceQA dataset the average numbers of words in the two documents of a matching pair are 7.15 and 95.54, a difference of more than an order of magnitude. The asymmetry between short queries and long documents makes this a very challenging task. Asymmetric text matching has become a growing demand for many downstream tasks, such as information retrieval and answer selection. Here, asymmetric means that the documents involved in the match contain different amounts of information, e.g., a short query against a relatively long document.
Early solutions can be divided into two categories: representation-based models and interaction-based models. The former utilize Recurrent Neural Networks (RNNs) and Long Short-Term Memory networks (LSTMs) to learn latent representations of document pairs by processing each document independently; examples include DSSM, SNRM, and ARC-I. In contrast, the latter capture fine-grained interaction signals between the two documents. It is generally believed that using interaction signals can greatly improve relevance learning; examples include DRMM, KNRM, and ARC-II. Recently, with the advent of deep pre-trained Language Models (LMs) such as BERT, LM-based deep relevance models have pushed the state of the art forward considerably. Specifically, LMs are pre-trained on a large-scale corpus and then applied to TM tasks by computing contextual semantic representations of sentence pairs, with the goal of further eliminating lexical mismatch between documents and queries. Although these efforts achieve significant performance gains, their main drawback is that further feature recognition and denoising between asymmetric texts is omitted, which could help improve matching performance.
Disclosure of Invention
The invention provides an asymmetric text matching method based on adaptive feature recognition and denoising, and designs matching-adaptive twin cells (MAGS) for adaptively identifying discriminative features so as to derive a corresponding hybrid representation for each text pair.
The technical scheme of the invention is as follows: the asymmetric text matching method based on adaptive feature recognition and denoising comprises the following specific steps:
Step1, first pre-process the question-answer matching data sets and the query-document matching data set;
Step2, use a BERT-based context encoder to produce a context representation for each asymmetric text pair pre-processed in Step1; adaptively identify discriminative features with the matching-adaptive twin cell, thereby deriving a corresponding hybrid representation for each asymmetric text pair; apply the locally constrained hash denoiser, which performs feature-level denoising on the redundant long text by learning a discriminative low-dimensional binary code; finally, obtain the matching score of the asymmetric text pair with a similarity predictor.
As a further scheme of the present invention, in Step1, the question-answer matching data sets include InsuranceQA, WikiQA, and YahooQA, and the query-document matching data set is MS MARCO; the preprocessing comprises matching and deleting special characters in the text with regular expressions.
As a further aspect of the present invention, in Step2, performing context representation on each asymmetric text pair preprocessed in Step1 by using a BERT-based context encoder includes:
BERT is selected as the context encoder. Following the BERT input format, the special token [CLS] is prepended to each sequence, i.e., {[CLS], q_1, q_2, …, q_l} and {[CLS], d_1, d_2, …, d_t}. The BERT-based context encoder is then described as follows:
U_Q = BERT([CLS], q_1, q_2, …, q_l) (1)
V_D = BERT([CLS], d_1, d_2, …, d_t) (2)
where U_Q ∈ R^(l×d) and V_D ∈ R^(t×d) are the context representations of query Q and document D, respectively, and d is the output dimension of BERT. To reduce the number of parameters, prevent overfitting, and facilitate information interaction across the text pair, the query and the document share one context encoder.
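The shared-encoder setup can be illustrated with a minimal shape-only sketch. A stub stands in for BERT (a real system would call a pretrained model); the small dimension and the token names are illustrative assumptions. The point is that one encoder maps both sequences to one d-dimensional vector per token.

```python
import numpy as np

D = 8  # encoder output dimension (768 for BERT-base; small here)

def stub_encoder(tokens, rng):
    """Stand-in for BERT: returns one d-dimensional contextual vector
    per input token. Only the shapes mirror the patent's setup."""
    return rng.standard_normal((len(tokens), D))

rng = np.random.default_rng(0)
query = ["[CLS]", "q1", "q2", "q3"]
doc = ["[CLS]"] + [f"d{i}" for i in range(10)]
U_Q = stub_encoder(query, rng)  # same (shared) encoder for both inputs
V_D = stub_encoder(doc, rng)
print(U_Q.shape, V_D.shape)
```

Because the encoder is shared, U_Q and V_D live in the same d-dimensional space, which is what makes the word-level similarity matrix in the next step meaningful.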
As a further aspect of the present invention, in Step2, adaptively identifying discriminative features with the matching-adaptive twin cell, so as to derive the corresponding hybrid representation for each asymmetric text pair, comprises:
The feature recognition process is simulated with the matching-adaptive twin cells, called MAGS. The matching-adaptive twin cell is a parallel architecture with two MAG subunits: a query-side MAG and a document-side MAG. Since the query-side and document-side MAGs are the same, only the query-side MAG is described:
Given the extracted context representation U_Q = [u_1, …, u_l] of the query and the context representation V_D = [v_1, …, v_t] of the document, where l and t denote the lengths of the query text and the document text respectively, the MAG identifies discriminative features and synthesizes them into relevance features. Specifically, word-level similarity is first computed as follows:
S = U_Q V_D^T (3)
where S ∈ R^(l×t) is the similarity matrix over all word pairs of the two sequences. These similarity scores are then normalized, and a reference representation is derived for each word in query Q from V_D:
R_Q = softmax(S)V_D (4)
The purpose of this operation is to perform soft feature selection on V_D according to S; that is, the relevant information in document D is transferred into the representation of Q.
However, beyond this reference representation, the irrelevant information in Q also supports further relevance learning. A complementary feature is first constructed as the difference between the original and the reference representation: D_Q = U_Q − R_Q. Furthermore, to identify the discriminative features, a gating pattern derived from S is first used to select the important features in the two semantic signals R_Q and D_Q, as follows:
E = σ(W_1 S + B_1) (5)
F^(r) = R_Q ⊙ E (6)
F^(d) = D_Q ⊙ (1 − E) (7)
where σ(·) denotes the sigmoid activation function, W_1 and B_1 are a transformation matrix and a bias matrix respectively, and ⊙ is the element-wise product. The two parts F_i^(r) and F_i^(d) are then combined through an attention mechanism similar to equation (5):
p_i = σ(W_2 S_i + B_2) (8)
F_i^(c) = p_i ⊙ (F_i^(r) ⊕ F_i^(d)) (9)
where S_i, F_i^(r), and F_i^(d) are the i-th rows of the matrices S, F^(r), and F^(d) respectively, ⊕ is the vector concatenation operation, d denotes the output dimension of BERT, and W_2 and B_2 are again a transformation matrix and a bias matrix. Then, a highway network is used to generate the discriminative feature of each word:
p_i = relu(W_3 F_i^(c) + b_3) (10)
g_i = sigmoid(W_4 F_i^(c) + b_4) (11)
i_i = (1 − g_i) ⊙ F_i^(c) + g_i ⊙ p_i (12)
where W_3, W_4 ∈ R^(2d×2d) and W_5 ∈ R^(d×2d) are parameter matrices, and b_3, b_4, b_5 are bias vectors. The synthesized hybrid discriminative features are assembled into a matrix:
H_Q = [W_5 i_1 + b_5, …, W_5 i_l + b_5]^T (13)
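A minimal NumPy sketch of the query-side MAG described above, under assumed toy dimensions. The weight initializations are placeholders, the shape chosen for W_1 (mapping each word's t similarity scores to d gate values) and the W_5 orientation are assumptions made so the shapes compose, and equations (9) and (13) follow the surrounding description rather than a verbatim formula from the patent.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
relu = lambda x: np.maximum(x, 0.0)

rng = np.random.default_rng(1)
l, t, d = 4, 9, 8
U_Q = rng.standard_normal((l, d))  # query context representation
V_D = rng.standard_normal((t, d))  # document context representation

# Eq. (3)-(4): word-level similarity and soft feature selection
S = U_Q @ V_D.T                    # (l, t)
R_Q = softmax(S, axis=1) @ V_D     # reference representation, (l, d)
D_Q = U_Q - R_Q                    # complementary features

# Eq. (5)-(7): gate the two semantic signals (W1: t -> d, assumed shape)
W1 = rng.standard_normal((t, d))
E = sigmoid(S @ W1)                # (l, d)
F_r, F_d = R_Q * E, D_Q * (1.0 - E)

# Eq. (10)-(12): highway network over the concatenated signal
F_c = np.concatenate([F_r, F_d], axis=1)  # (l, 2d)
W3 = rng.standard_normal((2 * d, 2 * d))
W4 = rng.standard_normal((2 * d, 2 * d))
p = relu(F_c @ W3)
g = sigmoid(F_c @ W4)
I = (1.0 - g) * F_c + g * p               # (l, 2d)

# Eq. (13): project back to d dims -> hybrid representation H_Q
W5 = rng.standard_normal((2 * d, d))      # transpose convention of W_5
H_Q = I @ W5
print(H_Q.shape)
```

The document-side MAG runs the same flow with Q and D swapped (and its own, unshared parameters), producing H_D.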
As a further scheme of the present invention, in Step2, the locally constrained hash denoiser performs feature-level denoising on the redundant long text by learning a discriminative low-dimensional binary code, as follows:
The locally constrained hash denoiser defines an encoding function F_en, a hash function F_h, and a decoding function F_de.
(1) The encoding function F_en converts the representation H_D into a low-dimensional matrix B ∈ R^(t×h). Here, a feedforward network FFN(·) implemented as a three-layer multi-layer perceptron (MLP) models F_en. Moreover, to filter semantic noise and alleviate the vanishing-gradient problem, relu(·) is chosen as the activation function of the second layer, which can skip unnecessary features while preserving discriminative clues. The encoding process is summarized as follows:
B = F_en(H_D) = FFN(H_D) (14)
(2) The hash function F_h learns a discriminative binary matrix representation for denoising and efficient matching. The sgn(·) function is the natural choice for binarization, but sgn(·) is non-differentiable; therefore the smooth approximation tanh(·) replaces sgn(·) to support model training. Specifically, the hash function is expressed as follows:
B_D = F_h(B) = tanh(αB) (15)
Note that the hyper-parameter α is introduced to make the hash function more flexible and to generate balanced, discriminative hash codes. To ensure that the values of B_D approach {−1, 1}, an additional constraint is defined:
L_1 = ||B_D − B^(b)||_F^2 (16)
where B^(b) = sgn(B) denotes the binary matrix representation of H_D, ||·||_F denotes the Frobenius norm, and B_D is the context representation of document D after the hash denoiser, i.e., the code generated by the hash function.
(3) The decoding function F_de reconstructs H_D from B_D. It consists of a three-layer multi-layer perceptron that decodes the matrix B_D back to the original H_D; the reconstructed sequence matrix is defined as follows:
Ĥ_D = F_de(B_D) = FFN^T(B_D) (17)
where FFN^T acts as the decoder. To reduce the loss of semantics during reconstruction, the mean square error (MSE) is added as a constraint when training the model:
L_2 = MSE(H_D, Ĥ_D) (18)
It should be emphasized that H_Q also undergoes hash denoising: a single MLP layer updates the matrix representation H_Q of query Q to match the dimension h of the hash denoiser:
H_Q = MLP(H_Q) (19)
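The encode / hash / decode pipeline can be sketched in NumPy as follows. All weights and sizes are illustrative assumptions, the decoder reuses transposed encoder weights purely for brevity (the patent's decoder has its own parameters), and the two constraints are written as simple squared-error terms matching the description above.

```python
import numpy as np

rng = np.random.default_rng(2)
t, d, h, alpha = 9, 8, 4, 5.0
H_D = rng.standard_normal((t, d))  # hybrid document representation

# (1) encoding F_en: 3-layer MLP, relu on the second layer, tanh elsewhere
W_e1 = rng.standard_normal((d, d)) * 0.5
W_e2 = rng.standard_normal((d, d)) * 0.5
W_e3 = rng.standard_normal((d, h)) * 0.5
B = np.tanh(np.maximum(np.tanh(H_D @ W_e1) @ W_e2, 0.0) @ W_e3)  # (t, h)

# (2) hashing F_h: smooth tanh surrogate for sgn, eq. (15)
B_D = np.tanh(alpha * B)
B_b = np.sign(B)                   # target binary code B^(b)
L1 = np.sum((B_D - B_b) ** 2)      # constraint pushing B_D toward +/-1

# (3) decoding F_de: mirror MLP reconstructs H_D; MSE as second constraint
W_d1, W_d2, W_d3 = W_e3.T, W_e2.T, W_e1.T  # tied weights (simplification)
H_rec = np.tanh(np.maximum(np.tanh(B_D @ W_d1) @ W_d2, 0.0) @ W_d3)
L2 = np.mean((H_D - H_rec) ** 2)
print(B_D.shape, L1 >= 0, L2 >= 0)
```

Because tanh(αB) saturates quickly for α = 5, B_D is already close to a binary code during training, while remaining differentiable.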
as a further aspect of the present invention, in Step2, obtaining a matching score of an asymmetric text pair by using a similarity predictor includes:
context representation after passing through hash denoiser for query QAnd a context representation of document D after passing through a hash denoiserThe matching score G (Q, D) between query Q and document D is estimated by the MaxSim operator as follows:
where Norm (·) represents the L2 normalization, so that when the inner product of any two hidden representations is calculated, the result is [ -1,1]I.e., equivalent to the rest of the chord similarity,is H Q The vector representation of the ith word in (a),is B D The jth vector of (a).
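The MaxSim operator described above can be written compactly in NumPy. The toy inputs are assumptions; the function itself is just "L2-normalize both sides, take each query word's best cosine match in the document, and sum".

```python
import numpy as np

def maxsim_score(H_Q, B_D):
    """Sum over query words of the maximum cosine similarity against
    any document vector (the MaxSim operator)."""
    Hn = H_Q / np.linalg.norm(H_Q, axis=1, keepdims=True)
    Bn = B_D / np.linalg.norm(B_D, axis=1, keepdims=True)
    return float((Hn @ Bn.T).max(axis=1).sum())

rng = np.random.default_rng(3)
H_Q = rng.standard_normal((4, 6))  # l = 4 query vectors
B_D = rng.standard_normal((9, 6))  # t = 9 document vectors
score = maxsim_score(H_Q, B_D)
print(score)
```

Since each cosine term is at most 1, the score is bounded by l, and matching a sequence against itself attains that bound.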
As a further aspect of the present invention, in Step2, the model optimization includes:
In the training phase, a negative sampling strategy based on the triplet hinge loss is used:
L_3 = max{0, 0.1 − G(Q, D) + G(Q, D^−)} (21)
where D^− is a corresponding negative-sample document drawn from the training set, and G(Q, D) is the matching score between query Q and document D.
Finally, the hinge loss is combined with the two constraints in the hash denoiser; that is, the final optimization objective is the linear fusion of L_1, L_2, and L_3:
L(θ) = L_3 + δL_1 + γL_2 (22)
where δ and γ are tunable hyper-parameters that control the importance of the two constraints respectively, and θ is the parameter set; the parameters are updated end-to-end over mini-batches using Adam. Here B_D is the context representation of document D after the hash denoiser, i.e., the code generated by the hash function, and B^(b) is the hash code generated from document D by the sgn sign function.
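The training objective reduces to a few lines. The margin 0.1 is taken from the text; the default δ and γ below are the values reported later in the experimental settings, and the sample scores are illustrative.

```python
def hinge_loss(score_pos: float, score_neg: float, margin: float = 0.1) -> float:
    """Triplet hinge loss: penalize when the positive document does not
    beat the sampled negative by at least the margin."""
    return max(0.0, margin - score_pos + score_neg)

def total_loss(L3: float, L1: float, L2: float,
               delta: float = 1e-6, gamma: float = 0.003) -> float:
    """Linear fusion of the hinge loss and the two hash-denoiser constraints."""
    return L3 + delta * L1 + gamma * L2

print(hinge_loss(0.9, 0.3))    # well separated -> 0.0
print(hinge_loss(0.40, 0.38))  # margin violated -> 0.08
```

The tiny δ reflects that the binarization constraint is a Frobenius norm over the whole code matrix and would otherwise dominate the objective.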
The invention has the beneficial effects that:
For each asymmetric text pair, the invention explicitly identifies discriminative features and filters out irrelevant features in a context-aware manner. Specifically, a matching-adaptive twin cell (MAGS) is first designed to adaptively identify the discriminative features, thereby deriving a corresponding hybrid representation for each text pair. The invention then further provides a locally constrained hash denoiser, which performs feature-level denoising on the redundant long text by learning a discriminative low-dimensional binary code, thereby achieving better relevance learning. Extensive experiments on real data sets from four different downstream tasks show that the proposed invention achieves a large performance gain compared with the latest state-of-the-art alternatives.
Drawings
FIG. 1 is a schematic representation of a model of the present invention;
FIG. 2 is a diagram of the adaptive matching twin cell structure of the present invention;
FIG. 3 is a line graph of the superparameter sensitivity analysis of the present invention.
Detailed Description
Example 1: as shown in fig. 1-3, the asymmetric text matching method based on adaptive feature recognition and denoising specifically includes the following steps:
Step1, first pre-process the question-answer matching data sets and the query-document matching data set;
Step1.1, pre-process the question-answer matching data sets (InsuranceQA, WikiQA, and YahooQA) and the query-document matching data set (MS MARCO); special characters in the text are matched and deleted with regular expressions. The query-document matching data set (MS MARCO) is a collection of 8.8 million web passages with about one million queries and tuples of positive and negative paragraphs. The present invention reports results on the MS MARCO Dev set, which contains about 6,900 queries. The question-answer matching data set sizes are shown in Table 1:
TABLE 1 Statistics of the QA data sets (the InsuranceQA Test set includes Test1 and Test2)
Step2, use a BERT-based context encoder to produce a context representation for each asymmetric text pair pre-processed in Step1; adaptively identify discriminative features with the matching-adaptive twin cell, thereby deriving a corresponding hybrid representation for each asymmetric text pair; apply the locally constrained hash denoiser, which performs feature-level denoising on the redundant long text by learning a discriminative low-dimensional binary code; finally, obtain the matching score of the asymmetric text pair with a similarity predictor.
As a further aspect of the present invention, in Step2, performing context representation on each asymmetric text pair preprocessed in Step1 by using a BERT-based context encoder includes:
BERT is selected as the context encoder. Following the BERT input format, the special token [CLS] is prepended to each sequence, i.e., {[CLS], q_1, q_2, …, q_l} and {[CLS], d_1, d_2, …, d_t}. The BERT-based context encoder is then described as follows:
U_Q = BERT([CLS], q_1, q_2, …, q_l) (1)
V_D = BERT([CLS], d_1, d_2, …, d_t) (2)
where U_Q ∈ R^(l×d) and V_D ∈ R^(t×d) are the context representations of query Q and document D, respectively, and d is the output dimension of BERT. To reduce the number of parameters, prevent overfitting, and promote information interaction across the text pair, the query and the document share one context encoder.
As a further aspect of the present invention, in Step2, adaptively identifying discriminative features with the matching-adaptive twin cell, so as to derive the corresponding hybrid representation for each asymmetric text pair, comprises:
A human being can clearly identify the relationship between two sequences (e.g., query-document, keyword-document, and question-answer) at a glance. For example, a trained researcher can easily classify papers in his or her research direction from title and abstract alone, because he or she subconsciously identifies the distinguishing features while ignoring features irrelevant to the decision.
The matching-adaptive twin cells (called MAGS) are used to mimic this feature recognition process. The matching-adaptive twin cell is a parallel architecture with two MAG subunits: a query-side MAG and a document-side MAG. Since the query-side and document-side MAGs are the same, for simplicity the invention mainly describes the query-side MAG (FIG. 2 illustrates the overall architecture):
Given the extracted context representation U_Q = [u_1, …, u_l] of the query and the context representation V_D = [v_1, …, v_t] of the document, where l and t denote the lengths of the query text and the document text respectively, the MAG identifies discriminative features and synthesizes them into relevance features. Specifically, word-level similarity is first computed as follows:
S = U_Q V_D^T (3)
where S ∈ R^(l×t) is the similarity matrix over all word pairs of the two sequences. These similarity scores are then normalized, and a reference representation is derived for each word in query Q from V_D:
R_Q = softmax(S)V_D (4)
The purpose of this operation is to perform soft feature selection on V_D according to S; that is, the relevant information in document D is transferred into the representation of Q.
However, beyond this reference representation, the irrelevant information in Q also supports further relevance learning. A complementary feature is first constructed as the difference between the original and the reference representation: D_Q = U_Q − R_Q. Furthermore, to identify the discriminative features, a gating pattern derived from S is first used to select the important features in the two semantic signals R_Q and D_Q, as follows:
E = σ(W_1 S + B_1) (5)
F^(r) = R_Q ⊙ E (6)
F^(d) = D_Q ⊙ (1 − E) (7)
where σ(·) denotes the sigmoid activation function, W_1 and B_1 are a transformation matrix and a bias matrix respectively, and ⊙ is the element-wise product. The two parts F_i^(r) and F_i^(d) are then combined through an attention mechanism similar to equation (5):
p_i = σ(W_2 S_i + B_2) (8)
F_i^(c) = p_i ⊙ (F_i^(r) ⊕ F_i^(d)) (9)
where S_i, F_i^(r), and F_i^(d) are the i-th rows of the matrices S, F^(r), and F^(d) respectively, ⊕ is the vector concatenation operation, d denotes the output dimension of BERT, and W_2 and B_2 are again a transformation matrix and a bias matrix. Then, a highway network is used to generate the discriminative feature of each word:
p_i = relu(W_3 F_i^(c) + b_3) (10)
g_i = sigmoid(W_4 F_i^(c) + b_4) (11)
i_i = (1 − g_i) ⊙ F_i^(c) + g_i ⊙ p_i (12)
where W_3, W_4 ∈ R^(2d×2d) and W_5 ∈ R^(d×2d) are parameter matrices, and b_3, b_4, b_5 are bias vectors. The synthesized hybrid discriminative features are assembled into a matrix:
H_Q = [W_5 i_1 + b_5, …, W_5 i_l + b_5]^T (13)
The document-side MAG: similar to the query-side MAG, the document-side MAG unit runs the same flow with the roles of Q and D switched, but the parameters of the two subunits are not shared. The invention uses H_D to denote the discriminative features derived from the document-side MAG.
As a further scheme of the present invention, in Step2, the locally constrained hash denoiser performs feature-level denoising on the redundant long text by learning a discriminative low-dimensional binary code, as follows:
Since document D is much longer than query Q, the discriminative feature extraction performed by the document-side MAG still introduces much semantic noise. Here, the invention employs a locally constrained hash denoiser to further filter out irrelevant features. More specifically, the locally constrained hash denoiser defines an encoding function F_en, a hash function F_h, and a decoding function F_de.
(1) The encoding function F_en converts the representation H_D into a low-dimensional matrix B ∈ R^(t×h). Here, a feedforward network FFN(·) implemented as a three-layer multi-layer perceptron (MLP) models F_en. Moreover, to filter semantic noise and alleviate the vanishing-gradient problem, relu(·) is chosen as the activation function of the second layer (the other layers use tanh(·)), which can skip unnecessary features while preserving discriminative clues. The encoding process is summarized as follows:
B = F_en(H_D) = FFN(H_D) (14)
(2) The hash function F_h learns a discriminative binary matrix representation for denoising and efficient matching. The sgn(·) function is the natural choice for binarization, but sgn(·) is non-differentiable; therefore the smooth approximation tanh(·) replaces sgn(·) to support model training. Specifically, the hash function is expressed as follows:
B_D = F_h(B) = tanh(αB) (15)
Note that the hyper-parameter α is introduced to make the hash function more flexible and to generate balanced, discriminative hash codes. To ensure that the values of B_D approach {−1, 1}, an additional constraint is defined:
L_1 = ||B_D − B^(b)||_F^2 (16)
where B^(b) = sgn(B) denotes the binary matrix representation of H_D, ||·||_F denotes the Frobenius norm, and B_D is the context representation of document D after the hash denoiser, i.e., the code generated by the hash function.
(3) The decoding function F_de reconstructs H_D from B_D. It consists of a three-layer multi-layer perceptron that decodes the matrix B_D back to the original H_D; the reconstructed sequence matrix is defined as follows:
Ĥ_D = F_de(B_D) = FFN^T(B_D) (17)
where FFN^T acts as the decoder. To reduce the loss of semantics during reconstruction, the mean square error (MSE) is added as a constraint when training the model:
L_2 = MSE(H_D, Ĥ_D) (18)
It should be emphasized that H_Q also undergoes hash denoising: a single MLP layer updates the matrix representation H_Q of query Q to match the dimension h of the hash denoiser:
H_Q = MLP(H_Q) (19)
As a further aspect of the present invention, in Step2, obtaining the matching score of an asymmetric text pair with the similarity predictor comprises:
Given the context representation H_Q of query Q after the hash denoiser and the context representation B_D of document D after the hash denoiser, the matching score G(Q, D) between query Q and document D is estimated by the MaxSim operator as follows:
G(Q, D) = Σ_{i=1..l} max_{j=1..t} Norm(H_Q^i) · Norm(B_D^j) (20)
where Norm(·) denotes L2 normalization, so that the inner product of any two hidden representations lies in [−1, 1], i.e., it is equivalent to cosine similarity; H_Q^i is the vector representation of the i-th word in H_Q, and B_D^j is the j-th vector of B_D.
As a further aspect of the present invention, in Step2, the purpose of model optimization is to guide the relevance learning of ADDAX and help estimate the matching score of the asymmetric text pair. The model optimization includes:
In the training phase, a negative sampling strategy based on the triplet hinge loss is used:
L_3 = max{0, 0.1 − G(Q, D) + G(Q, D^−)} (21)
where D^− is a corresponding negative-sample document drawn from the training set, and G(Q, D) is the matching score between query Q and document D.
Finally, the hinge loss is combined with the two constraints in the hash denoiser; that is, the final optimization objective is the linear fusion of L_1, L_2, and L_3:
L(θ) = L_3 + δL_1 + γL_2 (22)
where δ and γ are tunable hyper-parameters that control the importance of the two constraints respectively, and θ is the parameter set; the parameters are updated end-to-end over mini-batches using Adam. Here B_D is the context representation of document D after the hash denoiser, i.e., the code generated by the hash function, and B^(b) is the hash code generated from document D by the sgn sign function.
In order to verify the effectiveness of the invention, the evaluation metrics, detailed experimental parameter settings, and comparison baselines are introduced below, and the experimental results are analyzed and discussed.
1. The evaluation metrics mainly adopt MRR (Mean Reciprocal Rank), P@1 (Precision at 1), and MAP (Mean Average Precision). In the experiments, the invention selects BERT-base as the context encoder in ADDAX. More specifically, the hidden dimension is set to h = 300. The mini-batch sizes for InsuranceQA, WikiQA, YahooQA, and MS MARCO were set to 32, 64, and 64, respectively. The dropout rate was set to 0.1. The learning rates for InsuranceQA, MS MARCO, WikiQA, and YahooQA were 5e-6, 5e-6, 1e-5, and 9e-6, respectively. The number of training epochs was 60 for InsuranceQA, 18 for WikiQA, and 9 for YahooQA. In addition, the number of iterations on MS MARCO was 200,000. The values of α, δ, and γ were set to 5, 1e-6, and 0.003, respectively.
2. Since asymmetric text matching has become a growing demand for many downstream tasks, such as information retrieval and answer selection, experiments were conducted on four real data sets, covering question-answering and document-retrieval tasks, to evaluate the validity of the proposed ADDAX. At the same time, the invention compares ADDAX with the most advanced baselines of two kinds: the first kind performs question-answer matching, and the other kind performs document retrieval.
Question-answer matching: the selected baseline models for answer selection fall into four categories: (a) conventional single models: IARNN-GATE, AP-CNN, RNN-POA, AP-BiLSTM, HD-LSTM, AP-LSTM, Multihop-Sequential-LSTM, HyperQA, MULT, TFM + HN, LSTM-CNN + HN; (b) single models that incorporate external knowledge: KAN, CKANN; (c) ensemble models: SUM(BASE,PTK) (LRXNET), SD (BiLSTM + TFM); (d) BERT-based models: HAS, BERT-pooling, and BERT-attention.
Document retrieval: the invention first takes BM25 as a baseline, a representative conventional retrieval method. Interaction-based neural ranking models are also included, such as KNRM, fastText + ConvKNRM, and Duet. Furthermore, since the proposed ADDAX uses BERT as the context encoder, the invention also selects several recent pre-trained language-model-based methods, including the BERT-base reranker, DeepCT, docT5query, ColBERT, TCT-ColBERT, COIL-tok, and COIL-full. In addition, the invention adds two dense retrievers for performance comparison, namely CLEAR and ADORE + STAR.
3. To verify the validity of the ADDAX proposed by the present invention and to take into account different task properties and data characteristics, the existing most advanced models in the four datasets are completely different. Table 2 summarizes the performance of the 22 methods of selecting answers on question-answer matches for the corresponding three data sets. The present invention chooses to discuss the experimental results separately on each data set.
Table 2 shows the performance comparison between the proposed ADDAX and several state-of-the-art baselines on the QA datasets, with inapplicable results indicated by "-". The best results are highlighted in bold.
Results on InsuranceQA. Table 2 summarizes the experimental results on the InsuranceQA dataset. The present invention observes that traditional single models, such as AP-CNN, AP-BiLSTM, Multihop-Sequential LSTM, and IARNN-GATE, have much lower P@1 values on both test sets than MULT, LSTM-CNN + HN, and TFM + HN. Furthermore, it is not surprising that BERT-based methods (e.g., BERT-pooling, BERT-attention, and HAS) consistently yield better performance than the traditional single models: because BERT is pre-trained on large-scale corpora, it can leverage rich public knowledge to help eliminate lexical mismatches. These phenomena are consistent with conclusions drawn from previous work. The single models that incorporate external knowledge (such as KAN, CKANN, and CKANN-L) are superior to both the traditional single models and the BERT-based models, because they can extract relevant information from external knowledge and knowledge graphs (KG) to enrich the semantic signals, which verifies the effectiveness of integrating external knowledge. At the same time, the present invention can see that the performance of ADDAX is significantly better than nearly all baselines on the InsuranceQA dataset (except CKANN on Test 2).
Results on WikiQA. From Table 2, the present invention analyzed the MAP and MRR performance of a total of 17 methods on WikiQA. First, the present invention observes that the single models with external knowledge obtain no significant advantage compared to some single models (e.g., MULT and Multihop-Sequential LSTM). For example, MULT achieves a performance gain of 1.13% in MAP over CKANN. Possible causes of this phenomenon are: (1) the lack of WikiQA training data leads to insufficient relevance learning; (2) integrating irrelevant external knowledge may introduce semantic noise. Second, regarding ensemble models, SUM(BASE,PTK) LRXNET is superior to SD (BiLSTM + TFM) in both MAP and MRR values. Evidently, the ensemble models achieve better matching performance than the traditional single models and the models with external knowledge. This observation indicates that integrating multiple models is critical to improving generalization capability. Third, among the state-of-the-art BERT-based methods, the present invention observes that BERT-pooling consistently performs worse than BERT-attention and HAS. This observation is consistent across the three datasets, suggesting that interaction modeling plays an important role in text matching. In contrast, ADDAX achieves better performance than all baselines on the WikiQA dataset.
Results on YahooQA. From the results in Table 2, the present invention observes a performance pattern similar to the InsuranceQA dataset. ADDAX is clearly superior to all baselines in MAP and MRR. Specifically, the MAP value of the proposed ADDAX is improved by 3.23% compared to CKANN (the best baseline).
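For reference, the MAP and MRR metrics reported in Table 2 follow their standard information-retrieval definitions; the sketch below is illustrative (the relevance lists are made-up examples, not data from the experiments):

```python
def average_precision(relevance):
    """AP for one query: mean of precision@k over the ranks of relevant results."""
    hits, precisions = 0, []
    for k, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / max(hits, 1)

def reciprocal_rank(relevance):
    """RR for one query: 1 / rank of the first relevant result (0 if none)."""
    for k, rel in enumerate(relevance, start=1):
        if rel:
            return 1.0 / k
    return 0.0

# Each list marks which ranked candidates are relevant for one query.
queries = [[0, 1, 0, 1], [1, 0, 0, 0]]
map_score = sum(average_precision(q) for q in queries) / len(queries)
mrr_score = sum(reciprocal_rank(q) for q in queries) / len(queries)
# Both evaluate to 0.75 for these toy rankings.
```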
Table 3. Experimental results on MS MARCO. The best performance is highlighted in bold. Δ% represents the relative improvement of ADDAX over each baseline model.
Experimental results on MS MARCO. Table 3 reports a performance comparison of the different document retrieval models on MS MARCO. From Table 3 it can be seen, first, that the performance of the conventional query-document matching technique (i.e., BM25) is consistently much worse than that of the deep learning solutions (e.g., KNRM and fastText + ConvKNRM), which is not surprising. Second, among all neural matching models, the methods based on pre-trained language models (e.g., BERT-base, ColBERT, and COIL-full) achieve better matching accuracy than KNRM, fastText + ConvKNRM, and Duet. This is because the powerful language expression capability of pre-trained language models greatly alleviates the vocabulary mismatch problem. Note that DeepCT and DocT5Query, although they can break the term-frequency constraint with pre-trained language models, still perform poorly on semantic matching. Furthermore, it is worth noting that the dense retrievers are almost competitive with the models based on pre-trained language models. Third, ADDAX always achieves the best performance on the MS MARCO dataset, improving MRR@10 by 1.2%-17.4% over all baselines. Overall, the above comparisons performed on two different tasks and datasets consistently show that the proposed ADDAX achieves significant performance gains. These results demonstrate that the adaptive matching twin cells and the hash denoiser used in ADDAX improve asymmetric text matching accuracy by performing feature recognition and denoising.
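MRR@10, the MS MARCO metric in Table 3, only credits a relevant passage ranked within the top 10; a minimal sketch with illustrative relevance lists:

```python
def mrr_at_k(ranked_relevance_lists, k=10):
    """Mean reciprocal rank truncated at depth k: hits below rank k score 0."""
    total = 0.0
    for relevance in ranked_relevance_lists:
        for rank, rel in enumerate(relevance[:k], start=1):
            if rel:
                total += 1.0 / rank
                break
    return total / len(ranked_relevance_lists)

# First query: first relevant passage at rank 3; second: first hit beyond the cutoff.
runs = [[0, 0, 1, 0], [0] * 10 + [1]]
score = mrr_at_k(runs)   # (1/3 + 0) / 2 = 1/6
```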
4. To verify that each module in the model of the invention contributes to the whole, the following comparison and ablation experiments were designed. More specifically, the present invention compares ADDAX with the following variants: (a) w/o MAG, removing the adaptive matching twin cells; (b) w/o FD, without the feature recognition described in equations 5-9; (c) w/o HW, omitting the fusion of the two semantic signals by the highway network and directly adding them instead; (d) w/o HD, excluding the locally constrained hash denoiser.
TABLE 4 ablation test results
Table 4 reports the results of these experiments on the MS MARCO and WikiQA datasets. The present invention can see that removing the adaptive matching twin cells causes the largest performance drop, followed by the hash denoiser. In particular, on WikiQA the MAP and MRR values of w/o MAG decrease by 8.77% and 8.48% respectively, while the MRR@10 value on MS MARCO decreases by 1.25%. This suggests that the adaptive matching twin cells play a crucial role in identifying discriminative features in ADDAX to improve matching accuracy. In addition, w/o HD also causes performance degradation, which demonstrates the effectiveness of performing feature-level denoising at the document end. More specifically, for each specific structure designed in the MAGs, the present invention also draws the following conclusions: (i) a performance degradation of w/o FD can be observed, which indicates that adaptively highlighting different kinds of semantic signals is important; (ii) the performance of w/o HW is also somewhat degraded, which shows that the highway network synthesizes the hybrid discriminative features more effectively.
5. In this section, the invention performs a sensitivity analysis of four important hyperparameters (α, δ, γ, and h) on the WikiQA test set. From Fig. 3, the present invention can see that increasing α to 5 (see Fig. 3 (c)) improves matching performance by learning a more robust hash function. In addition, Fig. 3 (b) plots the performance curve obtained by varying the δ value. The present invention observes that ADDAX is insensitive to δ within the range [1e-7, 1e-5], and the best matching accuracy is obtained at δ = 1e-6. Fig. 3 (a) plots the performance curve obtained by varying the γ value; when γ is greater than or less than 0.003, the performance becomes worse.
In order to select the most suitable dimension h of the low-dimensional space, the invention experiments with h in {64, 128, 256, 300, 512}. The results are shown in Fig. 3 (d). The present invention observes that ADDAX consistently achieves the best matching accuracy on the WikiQA dataset when h = 300. As h becomes smaller or larger, a certain performance degradation of ADDAX occurs. This may be because a small h produces an insufficient semantic signal, while a large value inevitably leads to overfitting of the model.
While the present invention has been described in detail with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, and various changes can be made without departing from the spirit of the present invention within the knowledge of those skilled in the art.
Claims (4)
1. An asymmetric text matching method based on adaptive feature recognition and denoising, characterized by comprising the following specific steps:
step1, firstly, preprocessing a question-answer matching data set and a query-document matching data set;
Step 2, performing context representation on each asymmetric text pair preprocessed in Step 1 by utilizing a BERT-based context encoder; adaptively identifying the discriminative features based on adaptive matching twin cells, thereby deriving a corresponding mixed representation for each asymmetric text pair; providing a locally constrained hash denoiser, which performs feature-level denoising on the redundant long text by learning distinctive low-dimensional binary codes; and finally obtaining the matching score of each asymmetric text pair by utilizing a similarity predictor;
in Step2, performing context representation on each asymmetric text pair preprocessed in Step1 by using a BERT-based context encoder comprises:
selecting BERT as the context encoder and following the format of BERT input, a special token [CLS] is placed at the beginning of each sequence, i.e., {[CLS], q_1, q_2, …, q_l} and {[CLS], d_1, d_2, …, d_t}; the BERT-based context encoder is described as follows:
U_Q = BERT([CLS], q_1, q_2, …, q_l) (1)
V_D = BERT([CLS], d_1, d_2, …, d_t) (2)
wherein U_Q ∈ R^{l×d} and V_D ∈ R^{t×d} denote the context representations of query Q and document D, respectively; d represents the output dimension of BERT; to reduce the number of parameters, prevent overfitting, and facilitate information interaction, the query and the document in each text pair share one context encoder;
in Step 2, adaptively identifying the discriminative features based on the adaptive matching twin cells, thereby deriving a corresponding mixed representation for each asymmetric text pair, comprises:
the feature recognition process is simulated using the adaptive matching twin cells, called MAGs; the adaptive matching twin cells form a parallel architecture with two subunits, namely a query-side MAG and a document-side MAG; since the query-side and document-side MAGs are identical, only the query-side MAG is described:
given the extracted context representation U_Q = [u_1, …, u_l] of the query and the context representation V_D = [v_1, …, v_t] of the document, where l and t respectively denote the length of the query text and the length of the document text, the MAG identifies the discriminative features and synthesizes them into relevance features; specifically, the word-level similarity is first calculated as follows:
S = U_Q V_D^T (3)
wherein S ∈ R^{l×t} is the similarity matrix of all word pairs in the two sequences; these similarity scores are then normalized, and a reference representation is derived for each word in query Q on the basis of V_D:
R_Q = softmax(S) V_D (4)
the purpose of this operation is to perform soft feature selection on V_D according to S; that is, the relevant information in document D is transferred into the representation of query Q;
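The soft feature selection of equations 3-4 can be sketched with NumPy; the random matrices below merely stand in for the BERT outputs U_Q and V_D, and the dot-product form of the similarity matrix is an assumption made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
l, t, d = 4, 6, 8                  # query length, document length, BERT output dim
U_Q = rng.standard_normal((l, d))  # stands in for the context representation of Q
V_D = rng.standard_normal((t, d))  # stands in for the context representation of D

# Word-level similarity matrix S in R^{l x t} (dot-product similarity assumed)
S = U_Q @ V_D.T

# Row-wise softmax normalization of the similarity scores
A = np.exp(S - S.max(axis=1, keepdims=True))
A /= A.sum(axis=1, keepdims=True)

# Eq. 4: soft feature selection over V_D yields a reference representation R_Q,
# transferring the relevant document information to each query word
R_Q = A @ V_D
```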
however, the information in Q that is not captured by this reference representation also facilitates further relevance learning; the supplementary features are therefore first constructed by taking the difference between the reference representation and the original representation: D_Q = U_Q − R_Q; furthermore, to identify the discriminative features, a pattern similar to S is first used to highlight the important features in the two semantic signals R_Q and D_Q, as follows:
E = σ(W_1 S + B_1) (5)
F^(r) = R_Q ⊙ E (6)
F^(d) = D_Q ⊙ (1 − E) (7)
where σ(·) denotes the sigmoid activation function, W_1 and B_1 are a transformation matrix and a bias matrix respectively, and ⊙ is the element-wise product operation; then, the two parts F_i^(r) and F_i^(d) are further combined through an attention mechanism similar to equation 5:
p_i = σ(W_2 S_i + B_2) (8)
F_i^(c) = [p_i ⊙ F_i^(r) ; (1 − p_i) ⊙ F_i^(d)] (9)
wherein S_i, F_i^(r), and F_i^(d) correspond to the i-th rows of the matrices S, F^(r), and F^(d) respectively, [;] is the vector concatenation operation, d represents the output dimension of BERT, and W_2 and B_2 are likewise a transformation matrix and a bias matrix respectively; then, a highway network is used to generate the discriminative feature of each word:
p_i = relu(W_3 F_i^(c) + b_3) (10)
g_i = sigmoid(W_4 F_i^(c) + b_4) (11)
i_i = (1 − g_i) ⊙ F_i^(c) + g_i ⊙ p_i (12)
wherein W_3, W_4 ∈ R^{2d×2d} and W_5 ∈ R^{d×2d} denote parameter matrices, and b_3, b_4, b_5 denote bias vectors; the synthesized mixed discriminative features are then formed into a matrix:
H = [W_5 i_1 + b_5, …, W_5 i_l + b_5] (13)
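The MAG computation above (equations 5-12) can be sketched as follows; the parameter shapes, the random stand-in inputs, and the exact form of the concatenation step are assumptions made for illustration only:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
l, t, d = 4, 6, 8
S   = rng.standard_normal((l, t))   # stands in for the word-level similarity matrix
R_Q = rng.standard_normal((l, d))   # stands in for the reference representation
D_Q = rng.standard_normal((l, d))   # stands in for the supplementary features U_Q - R_Q

# Eqs. 5-7: gate that highlights important features in the two semantic signals
W1 = rng.standard_normal((t, d)); B1 = np.zeros(d)
E   = sigmoid(S @ W1 + B1)
F_r = R_Q * E                       # eq. 6, element-wise product
F_d = D_Q * (1.0 - E)               # eq. 7

# Eq. 8 plus the concatenation of the two parts into a 2d-dim mixed feature
W2 = rng.standard_normal((t, d)); B2 = np.zeros(d)
P   = sigmoid(S @ W2 + B2)
F_c = np.concatenate([P * F_r, (1.0 - P) * F_d], axis=1)   # (l, 2d)

# Eqs. 10-12: highway network over the mixed features
W3 = rng.standard_normal((2 * d, 2 * d)); b3 = np.zeros(2 * d)
W4 = rng.standard_normal((2 * d, 2 * d)); b4 = np.zeros(2 * d)
p = np.maximum(F_c @ W3 + b3, 0.0)  # relu transform (eq. 10)
g = sigmoid(F_c @ W4 + b4)          # transform gate (eq. 11)
i = (1.0 - g) * F_c + g * p         # carry/transform mix (eq. 12)

# Projection back to d dimensions with W5 in R^{d x 2d}
W5 = rng.standard_normal((d, 2 * d)); b5 = np.zeros(d)
H = i @ W5.T + b5                   # discriminative word features, (l, d)
```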
in Step 2, providing a locally constrained hash denoiser that performs feature-level denoising on the redundant long text by learning distinctive low-dimensional binary codes specifically comprises:
the locally constrained hash denoiser defines an encoding function F_en, a hash function F_h, and a decoding function F_de;
(1) the encoding function F_en converts the representation H_D into a low-dimensional matrix B ∈ R^{t×h}; here, a feedforward neural network FFN(·), implemented as a three-layer multilayer perceptron (MLP), is used to model F_en; furthermore, in order to filter semantic noise and alleviate the gradient-vanishing problem, relu(·) is chosen as the activation function of the second layer, which can skip unnecessary features and preserve discriminative clues; the encoding process is summarized as follows:
B = F_en(H_D) = FFN(H_D) (14)
(2) the hash function F_h is used to learn a distinctive binary matrix representation for the purposes of denoising and efficient matching; the sgn(·) function is the natural choice for binarization, but sgn(·) is not differentiable; therefore, the approximation function tanh(·) is used in place of sgn(·) to support model training; specifically, the hash function is expressed as follows:
B_D = F_h(B) = tanh(αB) (15)
note that the hyperparameter α is introduced to make the hash function more flexible and to generate balanced, distinctive hash codes; to ensure that the values in B_D approach {−1, 1}, an additional constraint is defined:
L_1 = ||B^(b) − B_D||_F^2 (16)
wherein B^(b) = sgn(B) denotes the binary matrix representation of H_D, ||·||_F denotes the Frobenius norm, and B_D denotes the context representation of document D after passing through the hash denoiser, i.e., the codes generated by the hash function;
(3) the decoding function F_de reconstructs H_D from B_D; it consists of a three-layer multilayer perceptron that decodes the binary matrix B_D back to the original H_D; the reconstructed sequence matrix Ĥ_D is thus defined as follows:
Ĥ_D = F_de(B_D) = FFN^T(B_D) (17)
wherein FFN^T serves as the decoder; in order to reduce the loss of semantics during reconstruction, the mean square error (MSE) is added as a constraint when training the model:
L_2 = MSE(H_D, Ĥ_D) (18)
it is emphasized that hash denoising is not applied to H_Q; instead, the matrix representation H_Q of query Q is updated using a single MLP layer so as to match the dimension h of the hash denoiser:
H_Q = MLP(H_Q) (19).
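The three functions of the hash denoiser (equations 14-18) can be sketched end to end as follows; the layer widths, the weight scaling, and the exact constraint forms are illustrative assumptions, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(2)
t, d, h = 6, 8, 4                  # document length, feature dim, hash dim
H_D = rng.standard_normal((t, d))  # stands in for the mixed document representation

def ffn(x, sizes, rng):
    """Three-layer MLP; relu on the second layer, as the text prescribes."""
    for layer, (fan_in, fan_out) in enumerate(zip(sizes[:-1], sizes[1:])):
        x = x @ (rng.standard_normal((fan_in, fan_out)) * 0.1)
        if layer == 1:
            x = np.maximum(x, 0.0)  # relu skips unnecessary features
    return x

alpha = 5.0
B     = ffn(H_D, [d, d, d, h], rng)       # encoder F_en (eq. 14)
B_D   = np.tanh(alpha * B)                # differentiable hash codes (eq. 15)
B_b   = np.sign(B)                        # hard binary codes via sgn
L1    = np.linalg.norm(B_b - B_D) ** 2    # quantization constraint

H_hat = ffn(B_D, [h, d, d, d], rng)       # decoder F_de back to d dims (eq. 17)
L2    = np.mean((H_D - H_hat) ** 2)       # MSE reconstruction constraint
```

The tanh(αB) surrogate keeps training differentiable while the L1 term pulls the soft codes toward the hard sgn codes used at matching time.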
2. The asymmetric text matching method based on adaptive feature recognition and denoising according to claim 1, characterized in that: in Step 1, the question-answer matching datasets comprise InsuranceQA, WikiQA, and YahooQA, and the query-document matching dataset adopts MS MARCO; the preprocessing comprises matching and deleting special characters in the text using regular expressions.
3. The asymmetric text matching method based on adaptive feature recognition and denoising as claimed in claim 1, wherein: in Step2, obtaining a matching score of an asymmetric text pair by using a similarity predictor includes:
given the updated context representation H_Q of query Q (equation 19) and the context representation B_D of document D after passing through the hash denoiser, the matching score G(Q, D) between query Q and document D is estimated by the MaxSim operator as follows:
G(Q, D) = Σ_{i=1..l} max_{j=1..t} (H_Q)_i (B_D)_j^T (20)
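The MaxSim operator keeps, for each query term, only its best-matching document term, and sums those per-term maxima into the final score; a toy dot-product sketch (the matrix values are illustrative, and the exact similarity form here is an assumption based on MaxSim's standard ColBERT-style definition):

```python
import numpy as np

def maxsim(H_Q, B_D):
    """MaxSim: sum over query terms of each term's best document-term similarity."""
    sim = H_Q @ B_D.T            # term-by-term similarity matrix, (l, t)
    return float(sim.max(axis=1).sum())

H_Q = np.array([[1.0, 0.0],      # toy 2-term query representation, h = 2
                [0.0, 1.0]])
B_D = np.array([[1.0, 1.0],      # toy 2-term binary document codes
                [-1.0, 1.0]])
score = maxsim(H_Q, B_D)         # row maxima are 1 and 1, so the score is 2.0
```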
4. The asymmetric text matching method based on adaptive feature recognition and denoising as claimed in claim 1, wherein: in Step2, model optimization comprises the following steps:
in the training phase, a negative sampling strategy based on the triplet hinge loss is used:
L_3 = max{0, 0.1 − G(Q, D) + G(Q, D^−)} (21)
wherein D^− is a corresponding negative-sample document sampled from the training set, and G(Q, D) is the matching score between query Q and document D;
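The triplet hinge loss of equation 21 in a minimal sketch (the scores are illustrative values, not model outputs):

```python
def triplet_hinge_loss(score_pos, score_neg, margin=0.1):
    """Eq. 21: zero loss once the positive document beats the negative by the margin."""
    return max(0.0, margin - score_pos + score_neg)

well_separated = triplet_hinge_loss(0.9, 0.2)    # margin satisfied, loss 0.0
violated = triplet_hinge_loss(0.5, 0.45)         # margin violated, loss ≈ 0.05
```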
finally, the hinge loss is combined with the two constraints in the hash denoiser; that is, the final optimization objective is a linear fusion of L_1, L_2, and L_3:
L(θ) = L_3 + δ L_1 + γ L_2 (22)
where δ and γ are tunable hyperparameters that control the importance of the two constraints respectively, and θ is the parameter set, which is updated in an end-to-end fashion on mini-batches using Adam; B_D is the context representation of document D after passing through the hash denoiser, i.e., the codes generated by the hash function, and B^(b) denotes the hash codes generated for document D using the sgn function.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111192675.7A CN113935329B (en) | 2021-10-13 | 2021-10-13 | Asymmetric text matching method based on adaptive feature recognition and denoising |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113935329A CN113935329A (en) | 2022-01-14 |
CN113935329B true CN113935329B (en) | 2022-12-13 |
Family
ID=79278623
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111192675.7A Active CN113935329B (en) | 2021-10-13 | 2021-10-13 | Asymmetric text matching method based on adaptive feature recognition and denoising |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113935329B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111460176A (en) * | 2020-05-11 | 2020-07-28 | 南京大学 | Multi-document machine reading understanding method based on Hash learning |
CN111460077A (en) * | 2019-01-22 | 2020-07-28 | 大连理工大学 | Cross-modal Hash retrieval method based on class semantic guidance |
CN111737706A (en) * | 2020-05-11 | 2020-10-02 | 华南理工大学 | Front-end portrait encryption and identification method with biological feature privacy protection function |
CN112732748A (en) * | 2021-01-07 | 2021-04-30 | 西安理工大学 | Non-invasive household appliance load identification method based on adaptive feature selection |
CN112906716A (en) * | 2021-02-25 | 2021-06-04 | 北京理工大学 | Noisy SAR image target identification method based on wavelet de-noising threshold self-learning |
CN112925888A (en) * | 2019-12-06 | 2021-06-08 | 上海大岂网络科技有限公司 | Method and device for training question-answer response and small sample text matching model |
CN112989055A (en) * | 2021-04-29 | 2021-06-18 | 腾讯科技(深圳)有限公司 | Text recognition method and device, computer equipment and storage medium |
CN113064959A (en) * | 2020-01-02 | 2021-07-02 | 南京邮电大学 | Cross-modal retrieval method based on deep self-supervision sorting Hash |
Family Cites Families (38)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8819045B2 (en) * | 2011-03-28 | 2014-08-26 | Citrix Systems, Inc. | Systems and methods of UTF-8 pattern matching |
CN102855276A (en) * | 2012-07-20 | 2013-01-02 | 北京大学 | Method for judging polarity of comment text and application of method |
US9424598B1 (en) * | 2013-12-02 | 2016-08-23 | A9.Com, Inc. | Visual search in a controlled shopping environment |
CN104657350B (en) * | 2015-03-04 | 2017-06-09 | 中国科学院自动化研究所 | Merge the short text Hash learning method of latent semantic feature |
WO2016178984A1 (en) * | 2015-05-01 | 2016-11-10 | Ring-A-Ling, Inc. | Methods and systems for management of video and ring tones among mobile devices |
US10104228B2 (en) * | 2015-05-01 | 2018-10-16 | Vyng, Inc. | Methods and systems for management of media content associated with message context on mobile computing devices |
EP3185221B1 (en) * | 2015-12-23 | 2023-06-07 | Friedrich Kisters | Authentication apparatus and method for optical or acoustic character recognition |
US20170293678A1 (en) * | 2016-04-11 | 2017-10-12 | Nuance Communications, Inc. | Adaptive redo for trace text input |
CN106250777A (en) * | 2016-07-26 | 2016-12-21 | 合肥赛猊腾龙信息技术有限公司 | In the leakage-preventing system of data, a kind of document fingerprint extracts and matching process |
CN106599129B (en) * | 2016-12-02 | 2019-06-04 | 山东科技大学 | A kind of multi-beam point cloud data denoising method for taking lineament into account |
CN106776553A (en) * | 2016-12-07 | 2017-05-31 | 中山大学 | A kind of asymmetric text hash method based on deep learning |
CN108244744B (en) * | 2016-12-29 | 2021-06-08 | 中国移动通信有限公司研究院 | Motion state identification method, sole and shoe |
CN107184203B (en) * | 2017-07-03 | 2019-07-26 | 重庆大学 | Electrocardiosignal Feature point recognition method based on adaptive set empirical mode decomposition |
CN108074310B (en) * | 2017-12-21 | 2021-06-11 | 广东汇泰龙科技股份有限公司 | Voice interaction method based on voice recognition module and intelligent lock management system |
CN108319672B (en) * | 2018-01-25 | 2023-04-18 | 南京邮电大学 | Mobile terminal bad information filtering method and system based on cloud computing |
CN108491430B (en) * | 2018-02-09 | 2021-10-15 | 北京邮电大学 | Unsupervised Hash retrieval method based on clustering characteristic directions |
CN110147531B (en) * | 2018-06-11 | 2024-04-23 | 广州腾讯科技有限公司 | Method, device and storage medium for identifying similar text content |
CN110020002B (en) * | 2018-08-21 | 2024-01-12 | 山西掌柜鼎科技有限公司 | Query method, device, equipment and computer storage medium of event processing scheme |
CN109119085A (en) * | 2018-08-24 | 2019-01-01 | 深圳竹云科技有限公司 | A kind of relevant audio recognition method of asymmetric text based on wavelet analysis and super vector |
CN109033478B (en) * | 2018-09-12 | 2022-08-19 | 重庆工业职业技术学院 | Text information rule analysis method and system for search engine |
US10997463B2 (en) * | 2018-11-08 | 2021-05-04 | Adobe Inc. | Training text recognition systems |
CN109858018A (en) * | 2018-12-25 | 2019-06-07 | 中国科学院信息工程研究所 | A kind of entity recognition method and system towards threat information |
CN109960732B (en) * | 2019-03-29 | 2023-04-18 | 广东石油化工学院 | Deep discrete hash cross-modal retrieval method and system based on robust supervision |
CN109992648B (en) * | 2019-04-10 | 2021-07-02 | 北京神州泰岳软件股份有限公司 | Deep text matching method and device based on word migration learning |
CN110019685B (en) * | 2019-04-10 | 2021-08-20 | 鼎富智能科技有限公司 | Deep text matching method and device based on sequencing learning |
CN110166478B (en) * | 2019-05-30 | 2022-02-25 | 陕西交通电子工程科技有限公司 | Text content secure transmission method and device, computer equipment and storage medium |
CN110321562B (en) * | 2019-06-28 | 2023-06-02 | 广州探迹科技有限公司 | Short text matching method and device based on BERT |
CN110390023A (en) * | 2019-07-02 | 2019-10-29 | 安徽继远软件有限公司 | A kind of knowledge mapping construction method based on improvement BERT model |
CN110472230B (en) * | 2019-07-11 | 2023-09-05 | 平安科技(深圳)有限公司 | Chinese text recognition method and device |
CN110610001B (en) * | 2019-08-12 | 2024-01-23 | 大箴(杭州)科技有限公司 | Short text integrity recognition method, device, storage medium and computer equipment |
CN110717325B (en) * | 2019-09-04 | 2020-11-13 | 北京三快在线科技有限公司 | Text emotion analysis method and device, electronic equipment and storage medium |
CN110688861B (en) * | 2019-09-26 | 2022-12-27 | 沈阳航空航天大学 | Multi-feature fusion sentence-level translation quality estimation method |
CN111078911B (en) * | 2019-12-13 | 2022-03-22 | 宁波大学 | Unsupervised hashing method based on self-encoder |
CN111209401A (en) * | 2020-01-03 | 2020-05-29 | 西安电子科技大学 | System and method for classifying and processing sentiment polarity of online public opinion text information |
CN111581956B (en) * | 2020-04-08 | 2022-09-13 | 国家计算机网络与信息安全管理中心 | Sensitive information identification method and system based on BERT model and K nearest neighbor |
CN112199520B (en) * | 2020-09-19 | 2022-07-22 | 复旦大学 | Cross-modal Hash retrieval algorithm based on fine-grained similarity matrix |
CN113076398B (en) * | 2021-03-30 | 2022-07-29 | 昆明理工大学 | Cross-language information retrieval method based on bilingual dictionary mapping guidance |
CN113239181B (en) * | 2021-05-14 | 2023-04-18 | 电子科技大学 | Scientific and technological literature citation recommendation method based on deep learning |
Also Published As
Publication number | Publication date |
---|---|
CN113935329A (en) | 2022-01-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110298037B (en) | Convolutional neural network matching text recognition method based on enhanced attention mechanism | |
CN108984724B (en) | Method for improving emotion classification accuracy of specific attributes by using high-dimensional representation | |
CN109271522B (en) | Comment emotion classification method and system based on deep hybrid model transfer learning | |
Jang et al. | Recurrent neural network-based semantic variational autoencoder for sequence-to-sequence learning | |
Zhang et al. | Advanced data exploitation in speech analysis: An overview | |
US11862145B2 (en) | Deep hierarchical fusion for machine intelligence applications | |
CN111414461B (en) | Intelligent question-answering method and system fusing knowledge base and user modeling | |
CN109189925A (en) | Term vector model based on mutual information and based on the file classification method of CNN | |
CN110991190B (en) | Document theme enhancement system, text emotion prediction system and method | |
CN114564565A (en) | Deep semantic recognition model for public safety event analysis and construction method thereof | |
Zhou et al. | Master: Multi-task pre-trained bottlenecked masked autoencoders are better dense retrievers | |
CN111125333A (en) | Generation type knowledge question-answering method based on expression learning and multi-layer covering mechanism | |
Chattopadhyay et al. | A feature selection model for speech emotion recognition using clustering-based population generation with hybrid of equilibrium optimizer and atom search optimization algorithm | |
Ullah et al. | A deep neural network-based approach for sentiment analysis of movie reviews | |
CN115687595A (en) | Comparison and interpretation generation method based on template prompt and oriented to common sense question answering | |
Antit et al. | TunRoBERTa: a Tunisian robustly optimized BERT approach model for sentiment analysis | |
Wang et al. | Non-uniform speaker disentanglement for depression detection from raw speech signals | |
Sun et al. | Multi-classification speech emotion recognition based on two-stage bottleneck features selection and MCJD algorithm | |
CN113935329B (en) | Asymmetric text matching method based on adaptive feature recognition and denoising | |
Dwojak et al. | From dataset recycling to multi-property extraction and beyond | |
CN115952360A (en) | Domain-adaptive cross-domain recommendation method and system based on user and article commonality modeling | |
CN116257616A (en) | Entity relation extraction method and system for music field | |
Fan et al. | Large margin nearest neighbor embedding for knowledge representation | |
CN114757183A (en) | Cross-domain emotion classification method based on contrast alignment network | |
CN114510569A (en) | Chemical emergency news classification method based on Chinesebert model and attention mechanism |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |