CN115309865A - Interactive retrieval method, device, equipment and storage medium based on double-tower model - Google Patents

Interactive retrieval method, device, equipment and storage medium based on double-tower model

Info

Publication number
CN115309865A
Authority
CN
China
Prior art keywords
vector
target
retrieval
double
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210962907.0A
Other languages
Chinese (zh)
Inventor
丁嘉罗
董世超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202210962907.0A priority Critical patent/CN115309865A/en
Publication of CN115309865A publication Critical patent/CN115309865A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/334Query execution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to artificial intelligence technology, and discloses an interactive retrieval method, device, equipment and storage medium based on a double-tower model. The method comprises: acquiring a target retrieval item and a target sample set from the retrieval items in a retrieval log and their corresponding sample sets; performing model calculation on the target samples in the target sample set and on the target retrieval item respectively by using a pre-trained double-tower model to obtain a fusion sample vector and an attention vector; optimizing the double-tower model into a target double-tower model according to the similarity between the attention vector and the fusion sample vector; when the retrieval server is offline, calculating the fusion matching vector of the data to be matched by using the target double-tower model; when the retrieval server is online, calculating the attention vectors of the item to be retrieved by using the double-tower model and performing average pooling on them to obtain an average attention vector; and obtaining a retrieval result according to the similarity between the fusion matching vector and the average attention vector. The invention can improve the efficiency and accuracy of retrieval.

Description

Interactive retrieval method, device, equipment and storage medium based on double-tower model
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an interactive retrieval method and device based on a double-tower model, electronic equipment and a computer readable storage medium.
Background
Information retrieval is an important field within Natural Language Processing (NLP). It mainly involves storing, indexing and retrieving massive unstructured or semi-structured data, and aims to help users efficiently obtain the information they need from that data. In a typical industrial application scenario, a candidate subset that meets the user's needs must be screened out of hundreds of millions of candidate items according to the user's search terms, which places strict requirements on both the effectiveness and the efficiency of the solution.
At the present stage, with the rapid development of deep neural networks, many deep semantic matching schemes have emerged in the field of information retrieval. These schemes can generally be divided into two categories: representation-based and interaction-based. A representation-based scheme encodes the search term and the candidate data separately with two semantic models of similar structure, and then recalls a candidate subset according to the similarity of the encoded vectors; this is also called the double-tower model. Its advantage is that the massive candidate vectors can be computed offline in advance, so that online only the vector of the search term needs to be encoded and compared with the pre-computed candidate vectors. The scheme is fast, but because the search term and the candidate data do not interact during training, the model learns their correlation insufficiently, and the effect is often poor for complex long-text matching. An interaction-based model splices the search term and the candidate data at the input stage and feeds them as a whole into a more complex neural network, so it can learn deeper correlations between the two and achieve higher semantic matching precision; however, because the interactive model has to compute a large number of spliced vectors online, it is difficult for it to meet industrial performance requirements.
In addition, encoding the candidate data with a traditional semantic model requires integrating the candidate data into a single whole as input, whereas in many industrial application scenarios the candidate data has multi-channel, multi-field characteristics rather than being a simple long text. For example, in a Taobao product search scenario, each product has text information such as a title, an advertisement, a subtitle and a label, and these text fields carry different weights in the search process. If every text field is modeled separately, the whole retrieval scheme becomes complex and inefficient.
In summary, the prior art struggles to combine the high matching efficiency of the double-tower model with the good matching effect of the interactive model.
Disclosure of Invention
The invention provides an interactive retrieval method and device based on a double-tower model, electronic equipment and a computer readable storage medium, and mainly aims to solve the problem that it is difficult to combine the high matching efficiency of the double-tower model with the good matching effect of the interactive model.
In order to achieve the above object, the present invention provides an interactive search method based on a double-tower model, comprising:
constructing a double-tower model according to a pre-trained standard language model, acquiring a retrieval log, constructing a corresponding sample set according to each retrieval item in the retrieval log, selecting a target retrieval item from the retrieval items, and selecting a sample meeting a preset condition from the sample set corresponding to the target retrieval item as a target sample set;
performing multi-channel calculation on each target sample in the target sample set by using the double-tower model to obtain a target sample vector, and performing vector calculation on the target retrieval items by using the double-tower model to obtain a target retrieval vector;
splicing and network computing the target sample vector to obtain a fusion sample vector, initializing a plurality of parameter vectors, and performing attention computing according to the parameter vectors and the target retrieval vector to obtain an attention vector;
calculating the similarity of the attention vector and the fusion sample vector, determining the matching degree of the target sample and the target retrieval item according to the similarity, and optimizing the double-tower model according to the matching degree of the target sample set and the target retrieval item to obtain a target double-tower model;
when a preset retrieval server is offline, acquiring preset data to be matched, and calculating a fusion matching vector of the data to be matched by using the target double-tower model;
when the retrieval server is on line, acquiring an item to be retrieved, calculating the attention vector of the item to be retrieved by using the double-tower model, and performing average pooling on the attention vector of the item to be retrieved to obtain an average attention vector;
and selecting the data to be matched corresponding to the item to be retrieved according to the similarity between the fusion matching vector and the average attention vector, and taking the data to be matched corresponding to the item to be retrieved as a retrieval result.
Optionally, the building a double-tower model according to a pre-trained standard language model includes:
acquiring general training corpus data, and performing horizontal field training on a preset language model according to the general training corpus data to obtain a preliminary language model;
acquiring vertical training corpus data, and training the preliminary language model according to the vertical training corpus data to obtain the standard language model;
and constructing the double-tower model according to the standard language model and a preset attention mechanism and a network computing layer, wherein the double-tower model comprises a retrieval end model and a matching end model.
Optionally, the performing multi-channel computation on each target sample in the target sample set by using the double-tower model to obtain a target sample vector includes:
performing multi-channel identification on the target sample to obtain multi-channel data corresponding to the target sample;
and performing vector calculation on the multi-channel data by using a matching end model in the double-tower model to obtain a language vector of each channel, and taking the language vector as a target sample vector corresponding to the target sample.
Optionally, the splicing and network computing the target sample vector to obtain a fusion sample vector includes:
splicing the target sample vectors, and fully connecting and activating the spliced language vectors to obtain channel weights corresponding to each channel of the target sample;
and carrying out weighted summation according to the channel weight of each channel and the target sample vector to obtain a fusion sample vector corresponding to the target sample.
Optionally, the performing weighted summation according to the channel weight of each channel and the target sample vector to obtain a fusion sample vector corresponding to the target sample includes:
weighted summing the channel weights and the target sample vector for each channel by:
f(d) = \sum_{i=1}^{n} w_i \cdot v_i(d)
wherein f(d) is the fused sample vector corresponding to the target sample d; n is the total number of channels corresponding to the target sample; w_i is the channel weight corresponding to the ith channel of the target sample; and v_i(d) is the target sample vector corresponding to the ith channel of the target sample.
Optionally, the performing attention calculation according to the parameter vector and the target retrieval vector to obtain an attention vector includes:
generating a vector sequence set according to the parameter vectors and the target retrieval vector, and selecting target vector sequences from the vector sequence set one by one;
performing first weight calculation on the target vector sequence to obtain an updating sequence corresponding to the target vector sequence;
performing second weight calculation on the updating sequence to obtain a plurality of expression vectors corresponding to the updating sequence;
performing attention operation on the expression vectors corresponding to all the updating sequences to obtain an initial attention vector;
and performing activation calculation on the initial attention vector by using a preset activation function, and performing dot-product summation on the activation calculation result and the expression vectors corresponding to the updating sequence to obtain the attention vector.
Optionally, the determining, according to the similarity, a matching degree between the target sample and the target search entry includes:
determining a matching degree of the target sample and the target retrieval item by the following formula:
h(q, d) = \sum_{j=1}^{m} (P_j \cdot f(d))_{similarity}
wherein h(q, d) is the matching degree of the target sample d and the target retrieval item q; (P_j \cdot f(d))_{similarity} is the similarity between the jth attention vector P_j of the target retrieval item and the fused sample vector f(d) corresponding to the target sample d; and m is the number of attention vectors.
In order to solve the above problem, the present invention further provides an interactive search device based on a double-tower model, the device comprising:
the target sample set generation module is used for acquiring a retrieval log, constructing a corresponding sample set according to each retrieval item in the retrieval log, selecting a target retrieval item from the retrieval items, and selecting a sample meeting a preset condition from the sample set corresponding to the target retrieval item as a target sample set;
the double-tower model calculation module is used for constructing a double-tower model according to a pre-trained standard language model, performing multi-channel calculation on each target sample in the target sample set by using the double-tower model to obtain a target sample vector, and performing vector calculation on the target retrieval items by using the double-tower model to obtain a target retrieval vector;
the target double-tower model generation module is used for splicing and network computing the target sample vectors to obtain fusion sample vectors, initializing a plurality of parameter vectors, and performing attention computing according to the parameter vectors and the target retrieval vectors to obtain attention vectors; calculating the similarity of the attention vector and the fusion sample vector, determining the matching degree of the target sample and the target retrieval item according to the similarity, and optimizing the double-tower model according to the matching degree of the target sample set and the target retrieval item to obtain a target double-tower model;
the off-line calculation module is used for acquiring preset data to be matched when a preset retrieval server is off-line, and calculating a fusion matching vector of the data to be matched by using the target double-tower model;
the online calculation module is used for acquiring an item to be retrieved when the retrieval server is online, calculating the attention vector of the item to be retrieved by using the double-tower model, and performing average pooling on the attention vector of the item to be retrieved to obtain an average attention vector; and selecting the data to be matched corresponding to the item to be retrieved according to the similarity between the fusion matching vector and the average attention vector, and taking the data to be matched corresponding to the item to be retrieved as a retrieval result.
In order to solve the above problem, the present invention also provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to enable the at least one processor to perform the dual-tower model-based interactive search method described above.
In order to solve the above problem, the present invention further provides a computer-readable storage medium, in which at least one computer program is stored, and the at least one computer program is executed by a processor in an electronic device to implement the above-mentioned dual-tower model-based interactive search method.
In the embodiment of the invention, each retrieval item and its corresponding sample set are constructed from the retrieval log, a double-tower model is constructed from the language model, and similarity calculation is performed on the vectors corresponding to the sample set and the retrieval item with the double-tower model; the double-tower model can then be optimized according to the similarity results, so that the retrieval items and the samples interact in the training phase, which remedies the double-tower model's insufficient learning of the correlation between retrieval items and samples and improves the matching precision when the double-tower model is subsequently used. By initializing a plurality of parameter vectors and performing attention calculation on the retrieval vector according to these parameter vectors, the vector representation of the retrieval item is enriched; when the retrieval server is online, average pooling of the attention vectors of the item to be retrieved preserves part of that representation while speeding up online matching. Multi-channel calculation, splicing and network calculation are performed on the target samples with the double-tower model to obtain fusion sample vectors, which improves the accuracy of vector generation for samples with multi-channel, multi-field characteristics. Therefore, the interactive retrieval method, device, electronic equipment and computer readable storage medium based on the double-tower model can solve the problem that it is difficult to combine the high matching efficiency of the double-tower model with the good matching effect of the interactive model.
Drawings
Fig. 1 is a schematic flowchart of an interactive search method based on a double-tower model according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a method for constructing a two-tower model according to a pre-trained standard language model according to an embodiment of the present invention;
fig. 3 is a schematic flow chart of splicing and network computing the target sample vectors according to an embodiment of the present invention;
FIG. 4 is a functional block diagram of an interactive retrieving device based on a double-tower model according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device for implementing the interactive search method based on the double-tower model according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the application provides an interactive retrieval method based on a double-tower model. The execution subject of the interactive retrieval method based on the double-tower model includes, but is not limited to, at least one of the electronic devices that can be configured to execute the method provided by the embodiments of the present application, such as a server, a terminal, and the like. In other words, the interactive retrieval method based on the double tower model may be executed by software or hardware installed in a terminal device or a server device, and the software may be a block chain platform. The server includes but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like. The server may be an independent server, or may be a cloud server that provides basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, and the like.
Fig. 1 is a schematic flowchart of an interactive search method based on a double-tower model according to an embodiment of the present invention. In this embodiment, the interactive search method based on the double-tower model includes:
s1, constructing a double-tower model according to a pre-trained standard language model, obtaining a retrieval log, constructing a corresponding sample set according to each retrieval item in the retrieval log, selecting a target retrieval item from the retrieval items, and selecting a sample meeting a preset condition from the sample set corresponding to the target retrieval item as a target sample set.
In the embodiment of the present invention, the double-tower model may be composed of two Transformer-based language models, an attention mechanism, a neural network layer, and the like.
Referring to fig. 2, in the embodiment of the present invention, the building a double-tower model according to a pre-trained standard language model includes:
s21, acquiring general training corpus data, and performing horizontal field training on a preset language model according to the general training corpus data to obtain a preliminary language model;
s22, acquiring vertical training corpus data, and training the preliminary language model according to the vertical training corpus data to obtain the standard language model;
s23, constructing the double-tower model according to the standard language model and a preset attention mechanism and a network computing layer, wherein the double-tower model comprises a retrieval end model and a matching end model.
In the embodiment of the invention, the general training corpus data is based on a publicly available general-domain corpus; a Transformer-based language model is pre-trained on it in a semi-supervised or unsupervised manner, giving the model general semantic representation and information extraction capabilities. The vertical training corpus data is based on a vertical-domain corpus of the retrieval scenario; the Transformer model trained on the general training corpus data is fine-tuned on it in a semi-supervised or unsupervised manner, so that the model gains language representation capability for domain-specific information.
In the embodiment of the invention, the double-tower model comprises a retrieval-end model and a matching-end model, both of which can be built from the standard language model. An attention mechanism performs attention calculation on the output of the retrieval-end model, while pooling, full connection and activation perform network calculation on the output of the matching-end model; the retrieval result of the double-tower model is then obtained by similarity calculation between the attention calculation result and the network calculation result.
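By way of illustration only, the following Python sketch (PyTorch; module names, dimensions and the cosine-similarity choice are assumptions, not taken from the patent) shows one way such a double-tower skeleton could be wired together:

```python
import torch
import torch.nn as nn

class TwoTowerModel(nn.Module):
    """Skeleton of the double-tower model: a retrieval-end encoder and a matching-end
    encoder, an attention head on the retrieval output and a fusion head (pooling /
    full connection / activation) on the matching output, scored by similarity."""
    def __init__(self, retrieval_encoder: nn.Module, matching_encoder: nn.Module,
                 attention_head: nn.Module, fusion_head: nn.Module):
        super().__init__()
        self.retrieval_encoder = retrieval_encoder   # retrieval-end model (standard LM)
        self.matching_encoder = matching_encoder     # matching-end model (standard LM)
        self.attention_head = attention_head         # attention mechanism on the retrieval side
        self.fusion_head = fusion_head               # network-calculation layer on the matching side

    def score(self, query, candidate_channels):
        attn_vectors = self.attention_head(self.retrieval_encoder(query))     # (m, dim)
        fused = self.fusion_head(self.matching_encoder(candidate_channels))   # (dim,)
        sims = torch.cosine_similarity(attn_vectors, fused.unsqueeze(0), dim=-1)
        return sims.sum()   # matching degree derived from the similarity of the two towers
```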
In the embodiment of the invention, the retrieval log is a log produced during historical retrieval, recording the user's retrieval items and the subsequent click or browsing actions. The data selected and clicked by the user for a retrieval item is a positive sample; among the data generated for that retrieval item, the data not browsed by the user is taken as a negative sample, or alternatively the data generated by all retrieval items other than this retrieval item is taken as negative samples.
For example, suppose the retrieval log includes a retrieval item A and a retrieval item B, where retrieval item A generates data x (browsed) and data y (not browsed), and retrieval item B generates data u (browsed) and data v (not browsed). Then for retrieval item A the positive sample is data x and the negative samples are data y, data u and data v; for retrieval item B the positive sample is data u and the negative samples are data x, data y and data v.
In the embodiment of the present invention, selecting the samples meeting the preset condition from the sample set corresponding to the target retrieval item may mean taking the positive sample and all negative samples corresponding to the target retrieval item as the target sample set. Alternatively, the positive sample corresponding to the target retrieval item together with the negative sample closest to it may be used as the target sample set; this closest negative sample can be obtained by computing, with the double-tower model, the similarity between the target retrieval item and the corresponding positive sample as well as between the target retrieval item and each negative sample, and then selecting the negative sample whose similarity result is closest to that of the positive sample.
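A minimal Python sketch of this sample-set construction, assuming a hypothetical log record layout with `query`, `doc` and `clicked` fields (these names are illustrative, not part of the patent):

```python
def build_target_sample_set(search_log, target_query):
    """search_log: list of dicts such as {"query": "A", "doc": "x", "clicked": True}.
    Positive samples: data clicked for the target retrieval item.
    Negative samples: its non-clicked data plus data generated by other retrieval items."""
    positives, negatives = [], []
    for record in search_log:
        if record["query"] == target_query:
            (positives if record["clicked"] else negatives).append(record["doc"])
        else:
            negatives.append(record["doc"])
    return positives, negatives

# Mirrors the example above (retrieval items A/B, data x/y/u/v):
log = [
    {"query": "A", "doc": "x", "clicked": True},
    {"query": "A", "doc": "y", "clicked": False},
    {"query": "B", "doc": "u", "clicked": True},
    {"query": "B", "doc": "v", "clicked": False},
]
pos, neg = build_target_sample_set(log, "A")   # pos == ["x"], neg == ["y", "u", "v"]
```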
S2, performing multi-channel calculation on each target sample in the target sample set by using the double-tower model to obtain a target sample vector, and performing vector calculation on the target retrieval items by using the double-tower model to obtain a target retrieval vector.
In this embodiment of the present invention, the performing multi-channel computation on each target sample in the target sample set by using the double-tower model to obtain a target sample vector includes:
performing multi-channel identification on the target sample to obtain multi-channel data corresponding to the target sample;
and performing vector calculation on the multi-channel data by using a matching end model in the double-tower model to obtain a language vector of each channel, and taking the language vector as a target sample vector corresponding to the target sample.
In the embodiment of the invention, suppose the target sample is an online product. Each product has text information such as a title, an advertisement, a subtitle and a label, which together form multi-channel data, and each kind of text information carries a different weight in the subsequent retrieval process. Therefore, the target sample can undergo multi-channel identification to obtain the language vector of each channel, and the weight of each channel can be obtained from these language vectors, so that the finally generated fusion sample vector is more accurate.
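The multi-channel identification and per-channel vector calculation can be sketched as follows; the channel names and the stand-in `encode` function are assumptions for illustration and do not represent the matching-end model itself:

```python
import torch

CHANNELS = ["title", "advertisement", "subtitle", "label"]   # assumed channel fields

def encode(text, dim=768):
    # Stand-in for the matching-end language model: one vector per channel text.
    torch.manual_seed(abs(hash(text)) % (2 ** 31))
    return torch.randn(dim)

def multi_channel_vectors(sample):
    """Multi-channel identification followed by per-channel vector calculation."""
    return [encode(sample.get(channel, "")) for channel in CHANNELS]

sample = {"title": "wireless earphones", "advertisement": "limited-time offer",
          "subtitle": "noise cancelling", "label": "electronics"}
channel_vectors = multi_channel_vectors(sample)   # n = 4 target sample vectors
```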
In the embodiment of the invention, a retrieval end model (a Transformer-based language model) in the double-tower model performs vector calculation on a target retrieval item, so as to generate a target retrieval vector.
And S3, splicing and network computing the target sample vector to obtain a fusion sample vector, initializing a plurality of parameter vectors, and performing attention computing according to the parameter vectors and the target retrieval vector to obtain an attention vector.
Referring to fig. 3, in the embodiment of the present invention, the splicing and network computing the target sample vector to obtain a fusion sample vector includes:
s31, splicing the target sample vectors, and fully connecting and activating the spliced language vectors to obtain channel weights corresponding to each channel of the target sample;
and S32, carrying out weighted summation according to the channel weight of each channel and the target sample vector to obtain a fusion sample vector corresponding to the target sample.
In detail, in the embodiment of the present invention, the channel weights and the target sample vectors of the channels may be weighted and summed by the following formula:
f(d) = \sum_{i=1}^{n} w_i \cdot v_i(d)
wherein f(d) is the fused sample vector corresponding to the target sample d; n is the total number of channels corresponding to the target sample; w_i is the channel weight corresponding to the ith channel of the target sample; and v_i(d) is the target sample vector corresponding to the ith channel of the target sample.
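A small PyTorch sketch of the splicing, full connection, activation and weighted summation just described; the softmax activation and the dimensions are assumptions:

```python
import torch
import torch.nn as nn

class ChannelFusion(nn.Module):
    """Splice the n channel vectors, map them to n channel weights w_i,
    and return the weighted sum f(d) = sum_i w_i * v_i(d)."""
    def __init__(self, num_channels, dim):
        super().__init__()
        self.fc = nn.Linear(num_channels * dim, num_channels)   # full connection

    def forward(self, channel_vectors):
        # channel_vectors: (batch, num_channels, dim)
        batch, n, dim = channel_vectors.shape
        spliced = channel_vectors.reshape(batch, n * dim)               # splicing
        weights = torch.softmax(self.fc(spliced), dim=-1)               # activation -> w_i
        return (weights.unsqueeze(-1) * channel_vectors).sum(dim=1)     # fusion sample vector f(d)

fusion = ChannelFusion(num_channels=4, dim=768)
f_d = fusion(torch.randn(2, 4, 768))   # -> (2, 768)
```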
In this embodiment of the present invention, the performing attention calculation according to the parameter vector and the target retrieval vector to obtain an attention vector includes:
generating a vector sequence set according to the parameter vectors and the target retrieval vector, and selecting target vector sequences from the vector sequence set one by one;
performing first weight calculation on the target vector sequence to obtain an updating sequence corresponding to the target vector sequence;
performing second weight calculation on the updating sequence to obtain a plurality of expression vectors corresponding to the updating sequence;
performing attention operation on the expression vectors corresponding to all the updating sequences to obtain an initial attention vector;
and performing activation calculation on the initial attention vector by using a preset activation function, and performing dot-product summation on the activation calculation result and the expression vectors corresponding to the update sequence to obtain the attention vector.
In the embodiment of the present invention, the attention operation may be implemented by scaled dot-product attention or the like.
In the embodiment of the invention, the first weight calculation can be carried out by using a weight coefficient defined by the network to obtain a new vector sequence, and the second weight calculation can be carried out on the updating sequence by using three different weight matrices defined by the network to obtain three expression vectors corresponding to the updating sequence.
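Read this way, the attention step can be sketched as below; the number of parameter vectors, the single linear layer for the first weight calculation, the query/key/value projections as the three expression vectors, and the softmax activation are all assumptions made for illustration:

```python
import torch
import torch.nn as nn

class MultiVectorQueryAttention(nn.Module):
    """Produce m attention vectors P_1..P_m for one target retrieval vector."""
    def __init__(self, dim, num_vectors):
        super().__init__()
        self.params = nn.Parameter(torch.randn(num_vectors, dim))   # initialized parameter vectors
        self.first = nn.Linear(dim, dim)                            # first weight calculation
        self.q_proj = nn.Linear(dim, dim)                           # second weight calculation:
        self.k_proj = nn.Linear(dim, dim)                           # three expression vectors
        self.v_proj = nn.Linear(dim, dim)

    def forward(self, retrieval_vector):
        # retrieval_vector: (dim,) encoding of the target retrieval item
        attention_vectors = []
        for p in self.params:                                   # one vector sequence per parameter vector
            seq = torch.stack([p, retrieval_vector])            # (2, dim) target vector sequence
            upd = self.first(seq)                               # updating sequence
            q, k, v = self.q_proj(upd), self.k_proj(upd), self.v_proj(upd)
            scores = q @ k.T / (q.shape[-1] ** 0.5)             # attention operation (scaled dot product)
            weights = torch.softmax(scores, dim=-1)             # activation calculation
            attention_vectors.append((weights @ v).mean(dim=0)) # dot-product summation
        return torch.stack(attention_vectors)                   # (m, dim) attention vectors

attention_vectors = MultiVectorQueryAttention(dim=768, num_vectors=4)(torch.randn(768))
```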
S4, calculating the similarity between the attention vector and the fusion sample vector, determining the matching degree between the target sample and the target retrieval item according to the similarity, and optimizing the double-tower model according to the matching degree between the target sample set and the target retrieval item to obtain a target double-tower model.
In the embodiment of the invention, the similarity between the attention vector and the fusion sample vector can be calculated by means of the Euclidean distance, cosine similarity, Pearson coefficient and the like.
In this embodiment of the present invention, the matching degree between the target sample and the target search entry may be determined by the following formula:
h(q, d) = \sum_{j=1}^{m} (P_j \cdot f(d))_{similarity}
wherein h(q, d) is the matching degree of the target sample d and the target retrieval item q; P_j is the jth attention vector of the target retrieval item; f(d) is the fused sample vector corresponding to the target sample d; similarity denotes the similarity calculation; and m is the number of attention vectors.
In the embodiment of the invention, if the matching degree between a positive sample in the target sample set and the target retrieval item is smaller than the matching degree between a negative sample in the target sample set and the target retrieval item, the model training deviates; a penalty mechanism can then be added to perform a gradient update on the model, thereby updating the network parameters in the double-tower model and optimizing the model.
Further, in the embodiment of the present invention, after the network parameters in the double-tower model are updated, the updated model may be used to calculate the matching degree between the target sample set and the target retrieval item again, and whether to continue updating the double-tower model is decided according to whether the result meets a preset matching degree condition. For example, updating of the double-tower model may be stopped when the matching degree between the positive samples in the target sample set and the target retrieval item is greater than the matching degree between the negative samples in the target sample set and the target retrieval item.
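A hedged sketch of the matching-degree calculation and the penalty-based update; the summed cosine similarity and the margin form of the penalty are assumptions where the embodiment leaves the details open:

```python
import torch
import torch.nn.functional as F

def matching_degree(attention_vectors, fused_sample):
    """h(q, d): aggregate the similarity of each attention vector P_j with f(d)."""
    sims = F.cosine_similarity(attention_vectors, fused_sample.unsqueeze(0), dim=-1)   # (m,)
    return sims.sum()

def pairwise_penalty(h_pos, h_neg, margin=0.2):
    # Penalize the deviation where a negative sample matches the target retrieval item
    # better than the positive sample; back-propagation then updates the network parameters.
    return F.relu(margin - (h_pos - h_neg))

attention_vectors = torch.randn(4, 768)             # stand-in attention vectors of the retrieval item
f_pos, f_neg = torch.randn(768), torch.randn(768)   # fused vectors of a positive and a negative sample
loss = pairwise_penalty(matching_degree(attention_vectors, f_pos),
                        matching_degree(attention_vectors, f_neg))
# loss.backward() would perform the gradient update on the double-tower model
```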
In another optional embodiment of the present invention, when the target sample set contains a plurality of negative samples, multi-classification task training is performed on the samples in the target sample set, that is, the positive sample is distinguished from the plurality of negative samples in the target sample set; when the target sample set contains only one negative sample (a hard negative obtained from the matching degree results of the multi-classification task training) that is difficult to distinguish, a further binary classification task training is performed on the samples in the target sample set, that is, the positive sample is identified from the two samples. With this model training method the model achieves high precision, and its ability to distinguish hard negative samples is improved.
And S5, when a preset retrieval server is offline, acquiring preset data to be matched, and calculating a fusion matching vector of the data to be matched by using the target double-tower model.
In the embodiment of the invention, the data to be matched is data which needs to be subjected to multi-channel calculation and network fusion calculation and is used for matching with the items to be retrieved.
In the embodiment of the invention, the data to be matched is processed while the retrieval server is offline, which reduces the processing pressure of computing it online and improves the efficiency of entry retrieval.
In the embodiment of the present invention, the process of calculating the fusion matching vector of the data to be matched by using the target double-tower model is similar to the process of performing multi-channel calculation on each target sample in the target sample set by using the double-tower model in step S2 to obtain a target sample vector, and performing splicing and network calculation on the target sample vector in step S3 to obtain a fusion sample vector, which is not described herein in detail.
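The offline stage can therefore be sketched as pre-computing one fusion matching vector per piece of data to be matched; `encode_candidate` below stands in for the target double-tower model's matching-end pipeline and is an assumption:

```python
import torch

@torch.no_grad()
def build_offline_index(candidates, encode_candidate):
    """While the retrieval server is offline, pre-compute the fusion matching vector of
    every piece of data to be matched and stack them for online similarity lookup."""
    return torch.stack([encode_candidate(c) for c in candidates])   # (num_candidates, dim)

# Usage sketch with a random stand-in encoder:
offline_index = build_offline_index(["product 1", "product 2"], lambda c: torch.randn(768))
```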
And S6, when the retrieval server is on line, acquiring an item to be retrieved, calculating the attention vector of the item to be retrieved by using the double-tower model, and performing average pooling on the attention vector of the item to be retrieved to obtain an average attention vector.
In the embodiment of the present invention, the average pooling of the attention vectors of the item to be retrieved can be performed by using the following formula:
\bar{P} = \frac{1}{m} \sum_{k=1}^{m} P_k
wherein \bar{P} is the average attention vector; P_k is the kth attention vector of the item to be retrieved; and m is the total number of attention vectors.
In the embodiment of the invention, the average attention vector is generated from a plurality of attention vectors, so the amount of information contained in the vector representation is expanded and the vector representation of the item to be retrieved is enriched.
And S7, selecting the data to be matched corresponding to the items to be retrieved according to the similarity between the fusion matching vector and the average attention vector, and taking the data to be matched corresponding to the items to be retrieved as a retrieval result.
In the embodiment of the present invention, a fusion matching vector corresponding to the maximum similarity between the fusion matching vector and the average attention vector may be selected, and the data to be matched corresponding to the fusion matching vector may be used as the search result.
In the embodiment of the invention, the similarity between the fusion matching vector and the average attention vector is calculated directly, which improves vector matching efficiency; in addition, the fusion matching vectors of the data to be matched are calculated in advance while the retrieval server is offline, which shortens the time needed to calculate them online and improves retrieval efficiency.
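Putting the online steps together, a minimal retrieval sketch might look as follows (cosine similarity and the top-k selection are assumptions; the embodiment only requires selecting by similarity):

```python
import torch

@torch.no_grad()
def online_retrieve(query_attention_vectors, offline_index, top_k=1):
    """Average-pool the attention vectors of the item to be retrieved, then rank the
    pre-computed fusion matching vectors by similarity and return the best matches."""
    avg_attention = query_attention_vectors.mean(dim=0, keepdim=True)        # average attention vector
    sims = torch.cosine_similarity(avg_attention, offline_index, dim=-1)     # (num_candidates,)
    return torch.topk(sims, k=top_k).indices                                 # retrieval result indices

# Usage sketch with random stand-in vectors:
result = online_retrieve(torch.randn(4, 768), torch.randn(1000, 768), top_k=5)
```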
According to the embodiment of the invention, each retrieval item and its corresponding sample set are constructed from the retrieval log, a double-tower model is constructed from the language model, and similarity calculation is performed on the vectors corresponding to the sample set and the retrieval item with the double-tower model; the double-tower model can then be optimized according to the similarity results, so that the retrieval items and the samples interact in the training phase, which remedies the double-tower model's insufficient learning of the correlation between retrieval items and samples and improves the matching precision when the double-tower model is subsequently used. By initializing a plurality of parameter vectors and performing attention calculation on the retrieval vector according to these parameter vectors, the vector representation of the retrieval item is enriched; when the retrieval server is online, average pooling of the attention vectors of the item to be retrieved preserves part of that representation while speeding up online matching. Multi-channel calculation, splicing and network calculation are performed on the target samples with the double-tower model to obtain fusion sample vectors, which improves the accuracy of vector generation for samples with multi-channel, multi-field characteristics. Therefore, the interactive retrieval method based on the double-tower model can solve the problem that it is difficult to combine the high matching efficiency of the double-tower model with the good matching effect of the interactive model.
Fig. 4 is a functional block diagram of an interactive search device based on a double-tower model according to an embodiment of the present invention.
The interactive search apparatus 100 based on the double tower model according to the present invention may be installed in an electronic device. According to the implemented functions, the interactive retrieving apparatus 100 based on the double tower model may include a target sample set generating module 101, a double tower model calculating module 102, a target double tower model generating module 103, an offline calculating module 104 and an online calculating module 105. The module of the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device and that can perform a fixed function, and that are stored in a memory of the electronic device.
In the present embodiment, the functions of the respective modules/units are as follows:
the target sample set generating module 101 is configured to obtain a retrieval log, construct a corresponding sample set according to each retrieval item in the retrieval log, select a target retrieval item from the retrieval items, and select a sample meeting a preset condition from the sample set corresponding to the target retrieval item as a target sample set;
the double-tower model calculation module 102 is configured to construct a double-tower model according to a pre-trained standard language model, perform multi-channel calculation on each target sample in the target sample set by using the double-tower model to obtain a target sample vector, and perform vector calculation on the target retrieval items by using the double-tower model to obtain a target retrieval vector;
the target double-tower model generation module 103 is configured to splice and perform network calculation on the target sample vectors to obtain fused sample vectors, initialize multiple parameter vectors, and perform attention calculation according to the parameter vectors and the target search vectors to obtain attention vectors; calculating the similarity of the attention vector and the fusion sample vector, determining the matching degree of the target sample and the target retrieval item according to the similarity, and optimizing the double-tower model according to the matching degree of the target sample set and the target retrieval item to obtain a target double-tower model;
the offline calculation module 104 is configured to, when a preset retrieval server is offline, acquire preset data to be matched, and calculate a fusion matching vector of the data to be matched by using the target double-tower model;
the online calculation module 105 is configured to, when the retrieval server is online, obtain an item to be retrieved, calculate an attention vector of the item to be retrieved by using the double-tower model, and perform average pooling on the attention vector of the item to be retrieved to obtain an average attention vector; and selecting the data to be matched corresponding to the item to be retrieved according to the similarity between the fusion matching vector and the average attention vector, and taking the data to be matched corresponding to the item to be retrieved as a retrieval result.
In detail, when the modules in the interactive retrieval device 100 based on the double-tower model according to the embodiment of the present invention are used, the same technical means as the interactive retrieval method based on the double-tower model shown in the drawings is adopted, and the same technical effects can be produced, which is not described herein again.
Fig. 5 is a schematic structural diagram of an electronic device for implementing an interactive search method based on a double-tower model according to an embodiment of the present invention.
The electronic device 1 may comprise a processor 10, a memory 11, a communication bus 12 and a communication interface 13, and may further comprise a computer program, such as a double tower model based interactive retrieval program, stored in the memory 11 and executable on the processor 10.
In some embodiments, the processor 10 may be composed of an integrated circuit, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same function or different functions, and includes one or more Central Processing Units (CPUs), a microprocessor, a digital processing chip, a graphics processor, a combination of various control chips, and the like. The processor 10 is the control unit of the electronic device; it connects the various components of the whole electronic device through various interfaces and lines, and executes the functions of the electronic device and processes data by running or executing programs or modules stored in the memory 11 (for example, executing the interactive search program based on the double-tower model) and calling data stored in the memory 11.
The memory 11 includes at least one type of readable storage medium including flash memory, removable hard disks, multimedia cards, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disks, optical disks, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device, for example a removable hard disk of the electronic device. The memory 11 may also be an external storage device of the electronic device in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, provided on the electronic device. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device. The memory 11 may be used not only to store application software installed in the electronic device and various types of data, such as codes of an interactive search program based on a double tower model, but also to temporarily store data that has been output or will be output.
The communication bus 12 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus. The bus may be divided into an address bus, a data bus, a control bus, etc. The bus is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like.
The communication interface 13 is used for communication between the electronic device and other devices, and includes a network interface and a user interface. Optionally, the network interface may include a wired interface and/or a wireless interface (e.g., a WI-FI interface, a Bluetooth interface, etc.), which are typically used to establish a communication connection between the electronic device and other electronic devices. The user interface may include a display and an input unit such as a keyboard, and optionally a standard wired interface or a wireless interface. Optionally, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is used for displaying information processed in the electronic device and for displaying a visualized user interface.
Fig. 5 only shows an electronic device with components, and it will be understood by a person skilled in the art that the structure shown in fig. 5 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than shown, or a combination of certain components, or a different arrangement of components.
For example, although not shown, the electronic device may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management and the like are realized through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The interactive retrieving program based on the two-tower model stored in the memory 11 of the electronic device 1 is a combination of a plurality of instructions, which when executed in the processor 10, can realize:
constructing a double-tower model according to a pre-trained standard language model, acquiring a retrieval log, constructing a corresponding sample set according to each retrieval item in the retrieval log, selecting a target retrieval item from the retrieval items, and selecting a sample meeting a preset condition from the sample set corresponding to the target retrieval item as a target sample set;
performing multi-channel calculation on each target sample in the target sample set by using the double-tower model to obtain a target sample vector, and performing vector calculation on the target retrieval items by using the double-tower model to obtain a target retrieval vector;
splicing and network computing the target sample vector to obtain a fusion sample vector, initializing a plurality of parameter vectors, and performing attention computing according to the parameter vectors and the target retrieval vector to obtain an attention vector;
calculating the similarity of the attention vector and the fusion sample vector, determining the matching degree of the target sample and the target retrieval item according to the similarity, and optimizing the double-tower model according to the matching degree of the target sample set and the target retrieval item to obtain a target double-tower model;
when a preset retrieval server is offline, acquiring preset data to be matched, and calculating a fusion matching vector of the data to be matched by using the target double-tower model;
when the retrieval server is on line, acquiring an item to be retrieved, calculating the attention vector of the item to be retrieved by using the double-tower model, and performing average pooling on the attention vector of the item to be retrieved to obtain an average attention vector;
and selecting the data to be matched corresponding to the items to be retrieved according to the similarity between the fusion matching vector and the average attention vector, and taking the data to be matched corresponding to the items to be retrieved as a retrieval result.
Specifically, the specific implementation method of the instruction by the processor 10 may refer to the description of the relevant steps in the embodiment corresponding to the drawings, which is not described herein again.
Further, the integrated modules/units of the electronic device 1, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. The computer readable storage medium may be volatile or non-volatile. For example, the computer-readable medium may include: any entity or device capable of carrying said computer program code, a recording medium, a usb-disk, a removable hard disk, a magnetic diskette, an optical disk, a computer Memory, a Read-Only Memory (ROM).
The present invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor of an electronic device, implements:
constructing a double-tower model according to a pre-trained standard language model, acquiring a retrieval log, constructing a corresponding sample set according to each retrieval item in the retrieval log, selecting a target retrieval item from the retrieval items, and selecting a sample meeting a preset condition from the sample set corresponding to the target retrieval item as a target sample set;
performing multi-channel calculation on each target sample in the target sample set by using the double-tower model to obtain a target sample vector, and performing vector calculation on the target retrieval items by using the double-tower model to obtain a target retrieval vector;
splicing and network computing the target sample vector to obtain a fusion sample vector, initializing a plurality of parameter vectors, and performing attention computing according to the parameter vectors and the target retrieval vector to obtain an attention vector;
calculating the similarity of the attention vector and the fusion sample vector, determining the matching degree of the target sample and the target retrieval item according to the similarity, and optimizing the double-tower model according to the matching degree of the target sample set and the target retrieval item to obtain a target double-tower model;
when a preset retrieval server is offline, acquiring preset data to be matched, and calculating a fusion matching vector of the data to be matched by using the target double-tower model;
when the retrieval server is on line, acquiring an item to be retrieved, calculating the attention vector of the item to be retrieved by using the double-tower model, and performing average pooling on the attention vector of the item to be retrieved to obtain an average attention vector;
and selecting the data to be matched corresponding to the item to be retrieved according to the similarity between the fusion matching vector and the average attention vector, and taking the data to be matched corresponding to the item to be retrieved as a retrieval result.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The block chain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, an encryption algorithm and the like. A block chain (Blockchain), which is essentially a decentralized database, is a series of data blocks associated by using a cryptographic method, and each data block contains information of a batch of network transactions, so as to verify the validity (anti-counterfeiting) of the information and generate a next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
The embodiments of the present application can acquire and process the related data based on artificial intelligence technology. Artificial Intelligence (AI) is the theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the same, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions can be made to the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. An interactive retrieval method based on a double-tower model, which is characterized by comprising the following steps:
constructing a double-tower model according to a pre-trained standard language model, acquiring a retrieval log, constructing a corresponding sample set according to each retrieval item in the retrieval log, selecting a target retrieval item from the retrieval items, and selecting a sample meeting a preset condition from the sample set corresponding to the target retrieval item as a target sample set;
performing multi-channel calculation on each target sample in the target sample set by using the double-tower model to obtain a target sample vector, and performing vector calculation on the target retrieval items by using the double-tower model to obtain a target retrieval vector;
performing splicing and network calculation on the target sample vectors to obtain a fused sample vector, initializing a plurality of parameter vectors, and performing attention calculation according to the parameter vectors and the target retrieval vector to obtain an attention vector;
calculating the similarity between the attention vector and the fused sample vector, determining the matching degree between the target sample and the target retrieval item according to the similarity, and optimizing the double-tower model according to the matching degree between the target sample set and the target retrieval item to obtain a target double-tower model;
when a preset retrieval server is offline, acquiring preset data to be matched, and calculating a fusion matching vector of the data to be matched by using the target double-tower model;
when the retrieval server is online, acquiring an item to be retrieved, calculating the attention vector of the item to be retrieved by using the double-tower model, and performing average pooling on the attention vector of the item to be retrieved to obtain an average attention vector;
and selecting the data to be matched corresponding to the items to be retrieved according to the similarity between the fusion matching vector and the average attention vector, and taking the data to be matched corresponding to the items to be retrieved as a retrieval result.
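As a reading aid for the first step of claim 1, the following sketch groups a retrieval log by retrieval item and keeps the samples of the target retrieval item that satisfy a preset condition. The log record fields and the example click-based condition are assumptions; the claim does not specify them.

```python
from collections import defaultdict

def build_target_sample_set(retrieval_log, target_retrieval_item, condition):
    """Group log records by retrieval item, then keep the samples for the target
    retrieval item that satisfy the preset condition."""
    sample_sets = defaultdict(list)
    for record in retrieval_log:      # e.g. {"query": ..., "doc": ..., "clicked": ...}
        sample_sets[record["query"]].append(record)
    return [r for r in sample_sets[target_retrieval_item] if condition(r)]

# Example: treat clicked results as the samples meeting the preset condition (an assumption).
# target_samples = build_target_sample_set(log, "some query",
#                                          condition=lambda r: r.get("clicked", False))
```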
2. The interactive search method based on double-tower model as claimed in claim 1, wherein said building double-tower model according to pre-trained standard language model comprises:
acquiring general training corpus data, and performing horizontal field training on a preset language model according to the general training corpus data to obtain a preliminary language model;
acquiring vertical training corpus data, and training the preliminary language model according to the vertical training corpus data to obtain the standard language model;
and constructing the double-tower model according to the standard language model, a preset attention mechanism and a network computing layer, wherein the double-tower model comprises a retrieval end model and a matching end model.
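Claim 2 describes two stages of continued training (horizontal field, then vertical field) before the two towers are assembled. The sketch below is only a structural outline under assumed names: `continue_pretraining` is a placeholder for whatever language-model training objective is used, and the retrieval-end and matching-end models are represented as simple module containers with an assumed dimension and number of parameter vectors.

```python
import copy
import torch.nn as nn

def continue_pretraining(language_model, corpus, epochs=1):
    """Placeholder for continued language-model training on a corpus;
    the real objective, optimizer and batching are omitted."""
    for _ in range(epochs):
        for _batch in corpus:
            pass  # forward pass, language-model loss, backward pass, optimizer step
    return language_model

def build_double_tower(pretrained_lm: nn.Module, general_corpus, vertical_corpus,
                       dim: int = 768, num_parameter_vectors: int = 4):
    # Stage 1: horizontal-field training on the general corpus -> preliminary language model
    preliminary_lm = continue_pretraining(pretrained_lm, general_corpus)
    # Stage 2: vertical-field training -> standard language model
    standard_lm = continue_pretraining(preliminary_lm, vertical_corpus)

    # Retrieval-end model: the standard language model plus the initialized
    # parameter vectors used by the attention mechanism.
    retrieval_end = nn.ModuleDict({
        "encoder": standard_lm,
        "parameter_vectors": nn.Embedding(num_parameter_vectors, dim),
    })
    # Matching-end model: a separate copy of the standard language model plus a
    # placeholder network computing layer for channel fusion.
    matching_end = nn.ModuleDict({
        "encoder": copy.deepcopy(standard_lm),
        "network_computing": nn.Linear(dim, dim),
    })
    return retrieval_end, matching_end
```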
3. The interactive retrieving method based on the double-tower model as claimed in claim 1, wherein said performing multi-channel computation on each target sample in the target sample set by using the double-tower model to obtain a target sample vector comprises:
performing multi-channel identification on the target sample to obtain multi-channel data corresponding to the target sample;
and performing vector calculation on the multi-channel data by using a matching end model in the double-tower model to obtain a language vector of each channel, and taking the language vector as a target sample vector corresponding to the target sample.
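One possible reading of claim 3 is sketched below: the target sample is split into channels and each channel is encoded separately by the matching-end model, giving one target sample vector per channel. The concrete channel names (title, body, keywords) and the `matching_encoder` callable are illustrative assumptions.

```python
def multi_channel_sample_vectors(target_sample, matching_encoder):
    """Split a target sample into channels and encode each channel with the
    matching-end model, yielding one target sample vector per channel."""
    channels = {
        "title": target_sample.get("title", ""),
        "body": target_sample.get("body", ""),
        "keywords": " ".join(target_sample.get("keywords", [])),
    }
    return {name: matching_encoder(text) for name, text in channels.items()}
```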
4. The interactive retrieval method based on the double-tower model as claimed in claim 1, wherein the splicing and network computing the target sample vector to obtain a fused sample vector comprises:
splicing the target sample vectors, and performing full-connection and activation calculation on the spliced vectors to obtain the channel weight corresponding to each channel of the target sample;
and carrying out weighted summation according to the channel weight of each channel and the target sample vector to obtain a fusion sample vector corresponding to the target sample.
5. The interactive retrieving method based on the double-tower model as claimed in claim 1, wherein the performing a weighted summation according to the channel weight of each channel and the target sample vector to obtain a fused sample vector corresponding to the target sample comprises:
performing a weighted summation of the channel weights and the target sample vector for each channel by:
f(d) = \sum_{i=1}^{n} w_i \cdot v_i
wherein f(d) is the fused sample vector corresponding to the target sample d; n is the total number of channels corresponding to the target sample; w_i is the channel weight corresponding to the i-th channel of the target sample; and v_i is the target sample vector corresponding to the i-th channel of the target sample.
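Claims 4 and 5 together describe the fusion step: splice the per-channel vectors, derive channel weights through a fully connected layer and an activation, then take the weighted sum. The following PyTorch sketch implements that pattern; the layer sizes and the choice of softmax as the activation are assumptions not stated in the claims.

```python
import torch
import torch.nn as nn

class ChannelFusion(nn.Module):
    def __init__(self, num_channels: int, dim: int):
        super().__init__()
        self.fc = nn.Linear(num_channels * dim, num_channels)

    def forward(self, channel_vectors: torch.Tensor) -> torch.Tensor:
        # channel_vectors: (batch, num_channels, dim) target sample vectors
        spliced = channel_vectors.flatten(start_dim=1)        # splicing
        weights = torch.softmax(self.fc(spliced), dim=-1)     # channel weights w_1 ... w_n
        # f(d) = sum_i w_i * v_i
        return (weights.unsqueeze(-1) * channel_vectors).sum(dim=1)

# fusion = ChannelFusion(num_channels=3, dim=768)
# f_d = fusion(torch.randn(2, 3, 768))   # (2, 768) fused sample vectors
```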
6. The interactive searching method based on double tower model of claim 1, wherein said performing attention calculation according to said parameter vector and said target search vector to obtain an attention vector comprises:
generating a vector sequence set according to the parameter vectors and the target retrieval vector, and selecting target vector sequences from the vector sequence set one by one;
performing first weight calculation on the target vector sequence to obtain an updating sequence corresponding to the target vector sequence;
performing second weight calculation on the updating sequence to obtain a plurality of expression vectors corresponding to the updating sequence;
performing attention operation on the expression vectors corresponding to all the updating sequences to obtain an initial attention vector;
and performing activation calculation on the initial attention vector by using a preset activation function, and performing dot-product summation on the activation calculation result and the expression vectors corresponding to the updating sequence to obtain the attention vector.
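Claim 6 describes several weighting stages before the final activation and dot-product summation. The sketch below collapses those stages into a single scaled dot-product attention step in which each initialized parameter vector attends over the token vectors of the target retrieval item; it should be read as an approximation of the claimed procedure rather than a faithful implementation.

```python
import torch

def attention_vectors(parameter_vectors: torch.Tensor,
                      retrieval_token_vectors: torch.Tensor) -> torch.Tensor:
    """parameter_vectors: (m, dim); retrieval_token_vectors: (seq_len, dim)."""
    dim = parameter_vectors.size(-1)
    scores = parameter_vectors @ retrieval_token_vectors.T / dim ** 0.5   # (m, seq_len)
    weights = torch.softmax(scores, dim=-1)        # activation over the token positions
    return weights @ retrieval_token_vectors       # dot-product summation -> (m, dim)
```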
7. The interactive searching method based on the double-tower model as claimed in any one of claims 1 to 6, wherein the determining the matching degree of the target sample and the target searching item according to the similarity comprises:
determining a matching degree of the target sample and the target retrieval item by the following formula:
h(q, d) = \frac{1}{m} \sum_{j=1}^{m} \mathrm{sim}(P_j, f(d))
wherein h(q, d) is the matching degree of the target sample d and the target retrieval item q; m is the number of attention vectors of the target retrieval item; and sim(P_j, f(d)) is the similarity between the j-th attention vector P_j of the target retrieval item and the fused sample vector f(d) corresponding to the target sample d.
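Under the reconstruction of the formula above, the matching degree can be computed as in the sketch below; both the use of cosine similarity and the averaging over the attention vectors are assumptions rather than details stated in the claim.

```python
import torch
import torch.nn.functional as F

def matching_degree(attention_vectors: torch.Tensor,
                    fused_sample_vector: torch.Tensor) -> torch.Tensor:
    """attention_vectors: (m, dim); fused_sample_vector: (dim,). Returns h(q, d)."""
    sims = F.cosine_similarity(attention_vectors,
                               fused_sample_vector.unsqueeze(0), dim=-1)   # (m,)
    return sims.mean()
```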
8. An interactive search apparatus based on a double tower model, the apparatus comprising:
the target sample set generation module is used for acquiring a retrieval log, constructing a corresponding sample set according to each retrieval item in the retrieval log, selecting a target retrieval item from the retrieval items, and selecting a sample meeting a preset condition from the sample set corresponding to the target retrieval item as a target sample set;
the double-tower model calculation module is used for constructing a double-tower model according to a pre-trained standard language model, performing multi-channel calculation on each target sample in the target sample set by using the double-tower model to obtain a target sample vector, and performing vector calculation on the target retrieval items by using the double-tower model to obtain a target retrieval vector;
the target double-tower model generation module is used for splicing and network computing the target sample vectors to obtain fused sample vectors, initializing a plurality of parameter vectors, and performing attention computing according to the parameter vectors and the target retrieval vectors to obtain attention vectors; calculating the similarity of the attention vector and the fusion sample vector, determining the matching degree of the target sample and the target retrieval item according to the similarity, and optimizing the double-tower model according to the matching degree of the target sample set and the target retrieval item to obtain a target double-tower model;
the off-line calculation module is used for acquiring preset data to be matched when a preset retrieval server is off-line and calculating a fusion matching vector of the data to be matched by using the target double-tower model;
the online calculation module is used for acquiring an item to be retrieved when the retrieval server is online, calculating the attention vector of the item to be retrieved by using the double-tower model, and performing average pooling on the attention vector of the item to be retrieved to obtain an average attention vector; and selecting the data to be matched corresponding to the item to be retrieved according to the similarity between the fusion matching vector and the average attention vector, and taking the data to be matched corresponding to the item to be retrieved as a retrieval result.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the interactive retrieval method based on the double-tower model according to any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored which, when executed by a processor, implements the interactive retrieval method based on the double-tower model according to any one of claims 1 to 7.
CN202210962907.0A 2022-08-11 2022-08-11 Interactive retrieval method, device, equipment and storage medium based on double-tower model Pending CN115309865A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210962907.0A CN115309865A (en) 2022-08-11 2022-08-11 Interactive retrieval method, device, equipment and storage medium based on double-tower model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210962907.0A CN115309865A (en) 2022-08-11 2022-08-11 Interactive retrieval method, device, equipment and storage medium based on double-tower model

Publications (1)

Publication Number Publication Date
CN115309865A true CN115309865A (en) 2022-11-08

Family

ID=83861186

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210962907.0A Pending CN115309865A (en) 2022-08-11 2022-08-11 Interactive retrieval method, device, equipment and storage medium based on double-tower model

Country Status (1)

Country Link
CN (1) CN115309865A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116933896A (en) * 2023-09-15 2023-10-24 上海燧原智能科技有限公司 Super-parameter determination and semantic conversion method, device, equipment and medium
CN116933896B (en) * 2023-09-15 2023-12-15 上海燧原智能科技有限公司 Super-parameter determination and semantic conversion method, device, equipment and medium

Similar Documents

Publication Publication Date Title
CN112632385B (en) Course recommendation method, course recommendation device, computer equipment and medium
CN110163252B (en) Data classification method and device, electronic equipment and storage medium
CN113378970B (en) Sentence similarity detection method and device, electronic equipment and storage medium
CN111767375A (en) Semantic recall method and device, computer equipment and storage medium
CN113821622B (en) Answer retrieval method and device based on artificial intelligence, electronic equipment and medium
CN111831924A (en) Content recommendation method, device, equipment and readable storage medium
CN115392237B (en) Emotion analysis model training method, device, equipment and storage medium
CN114298122A (en) Data classification method, device, equipment, storage medium and computer program product
CN113886708A (en) Product recommendation method, device, equipment and storage medium based on user information
CN114077841A (en) Semantic extraction method and device based on artificial intelligence, electronic equipment and medium
CN114358023B (en) Intelligent question-answer recall method, intelligent question-answer recall device, computer equipment and storage medium
CN116821373A (en) Map-based prompt recommendation method, device, equipment and medium
CN113344125B (en) Long text matching recognition method and device, electronic equipment and storage medium
CN114461777A (en) Intelligent question and answer method, device, equipment and storage medium
CN112598039B (en) Method for obtaining positive samples in NLP (non-linear liquid) classification field and related equipment
CN115309865A (en) Interactive retrieval method, device, equipment and storage medium based on double-tower model
WO2023272862A1 (en) Risk control recognition method and apparatus based on network behavior data, and electronic device and medium
CN113656690A (en) Product recommendation method and device, electronic equipment and readable storage medium
CN116628162A (en) Semantic question-answering method, device, equipment and storage medium
CN116450829A (en) Medical text classification method, device, equipment and medium
CN115186188A (en) Product recommendation method, device and equipment based on behavior analysis and storage medium
CN114548114A (en) Text emotion recognition method, device, equipment and storage medium
CN114741608A (en) News recommendation method, device, equipment and storage medium based on user portrait
CN114595321A (en) Question marking method and device, electronic equipment and storage medium
CN113705692A (en) Emotion classification method and device based on artificial intelligence, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination