CN112182126A - Model training method and device for determining matching degree, electronic equipment and readable storage medium - Google Patents

Model training method and device for determining matching degree, electronic equipment and readable storage medium

Info

Publication number
CN112182126A
Authority
CN
China
Prior art keywords
poi
query information
comment
target
vector
Prior art date
Legal status
Pending
Application number
CN202010990380.3A
Other languages
Chinese (zh)
Inventor
孙兴武
唐弘胤
张富峥
王仲远
Current Assignee
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN202010990380.3A priority Critical patent/CN112182126A/en
Publication of CN112182126A publication Critical patent/CN112182126A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/9537 Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
    • G06F 16/29 Geographical information databases
    • G06F 16/9035 Filtering based on additional data, e.g. user or group profiles
    • G06F 16/908 Retrieval characterised by using metadata automatically derived from the content

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Remote Sensing (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the present disclosure provide a model training method and apparatus for determining a matching degree, an electronic device, and a readable storage medium. The method includes: acquiring query information, POI search results related to the query information, and a comment set corresponding to each POI in the POI search results; determining, in the comment set corresponding to each POI, a target comment corresponding to each POI, where the target comment satisfies a preset relevance condition with the query information; generating a first vector according to the target comment corresponding to each POI, the first vector representing the semantic relevance between the target comment and the query information; and training a target model for determining the matching degree, using as training data the first vector, a second vector corresponding to the query information, a third vector corresponding to each POI in the POI search results, and relevance labeling information between the query information and each POI. Embodiments of the present disclosure can improve the success rate and accuracy of searching.

Description

Model training method and device for determining matching degree, electronic equipment and readable storage medium
Technical Field
Embodiments of the present disclosure relate to the field of Internet technologies, and in particular to a model training method and apparatus for determining a matching degree, an electronic device, and a readable storage medium.
Background
Determining the relevance of text is a fundamental problem in natural language processing, and how to measure the relevance between sentences or phrases is particularly important in systems such as information retrieval systems and dialog systems.
O2O (Online to Offline) combines offline business opportunities with the Internet, making the Internet the front desk for offline transactions. In an O2O search scenario, user search requests are relatively clear, usually centered on structured merchants and commodities, and merchant search and commodity search account for about 40% of the entire search traffic. How to judge more accurately the relevance between a user's query term and a searched merchant or commodity is therefore a key factor affecting search accuracy and the user's search experience.
However, because a merchant's name or profile often contains only a small amount of information, it is difficult to retrieve matching search results when the query term (Query) is long and complicated.
Disclosure of Invention
Embodiments of the present disclosure provide a model training method, apparatus, electronic device, and readable storage medium for determining a matching degree, so as to improve the success rate and accuracy of searching.
According to a first aspect of embodiments of the present disclosure, there is provided a model training method for determining a degree of matching, the method including:
acquiring query information, POI search results related to the query information and a comment set corresponding to each POI in the POI search results;
determining a target comment corresponding to each POI in a comment set corresponding to each POI, wherein preset relevant conditions are met between the target comment and the query information;
generating a first vector according to the target comment corresponding to each POI, wherein the first vector is used for expressing the semantic relevance of the target comment and the query information;
and taking the first vector, the second vector corresponding to the query information, the third vector corresponding to each POI in the POI search result and the correlation labeling information of the query information and each POI as training data to train a target model for determining the matching degree.
According to a second aspect of embodiments of the present disclosure, there is provided a method for determining a degree of matching, the method including:
receiving current query information input by a user;
acquiring a candidate POI set related to the current query information;
and inputting each candidate POI in the candidate POI set and the current query information into a target model, and outputting the matching degree between each candidate POI and the current query information through the target model, wherein the target model is obtained by training according to the model training method for determining the matching degree.
According to a third aspect of embodiments of the present disclosure, there is provided a model training apparatus for determining a degree of matching, including:
the data acquisition module is used for acquiring query information, POI search results related to the query information and a comment set corresponding to each POI in the POI search results;
the target determining module is used for determining a target comment corresponding to each POI in the comment set corresponding to each POI, and preset relevant conditions are met between the target comment and the query information;
the vector representing module is used for generating a first vector according to the target comment corresponding to each POI, and the first vector is used for representing the semantic relevance of the target comment and the query information;
and the model training module is used for training a target model for determining the matching degree by taking the first vector, the second vector corresponding to the query information, the third vector corresponding to each POI in the POI search result and the relevance marking information of the query information and each POI as training data.
According to a fourth aspect of embodiments of the present disclosure, there is provided an apparatus for determining a degree of matching, including:
the information receiving module is used for receiving current query information input by a user;
a candidate obtaining module, configured to obtain a candidate POI set related to the current query information;
and the matching calculation module is used for inputting each candidate POI in the candidate POI set and the current query information into a target model respectively, outputting the matching degree between each candidate POI and the current query information through the target model, and obtaining the target model by training according to the model training method for determining the matching degree.
According to a fifth aspect of embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the processor implements the aforementioned model training method for determining a degree of match when executing the program.
According to a sixth aspect of embodiments of the present disclosure, there is provided a readable storage medium, wherein instructions, when executed by a processor of an electronic device, enable the electronic device to perform the aforementioned model training method for determining a degree of matching.
Embodiments of the present disclosure provide a model training method and apparatus for determining a matching degree, an electronic device, and a readable storage medium.
In the process of training the target model for determining the matching degree, the training data is determined from the query information, the POI search results related to the query information, and the comment information corresponding to each POI in the POI search results, and target comment information that is semantically relevant to the query information is fused into the training process. The trained target model can therefore identify deeper association relationships (implicit in the comment information) between POI search results and query information, beyond mere textual relevance; and because the comment information comes from a large number of users, it can reflect users' real intentions. As a result, even when the query information is relatively complex and the title or profile of a POI does not contain content related to the query information, the target model of the embodiments of the present disclosure can still retrieve POI search results related to the query information, which improves the success rate and accuracy of search and thereby improves the user experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings needed to be used in the description of the embodiments of the present disclosure will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present disclosure, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
FIG. 1 shows a flow chart of steps of a model training method for determining a degree of match in one embodiment of the present disclosure;
FIG. 2 is a flow chart illustrating steps of a method for determining a degree of match in one embodiment of the present disclosure;
FIG. 3 is a block diagram of a model training apparatus for determining a degree of match in one embodiment of the present disclosure;
FIG. 4 is a block diagram illustrating an apparatus for determining a degree of match in one embodiment of the present disclosure;
fig. 5 shows a block diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
Technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are some, but not all, of the embodiments of the present disclosure. All other embodiments, which can be obtained by a person skilled in the art without making creative efforts based on the embodiments of the present disclosure, belong to the protection scope of the embodiments of the present disclosure.
Example one
Referring to fig. 1, a flow chart illustrating steps of a model training method for determining a degree of matching in one embodiment of the present disclosure is shown, the method comprising:
step 101, acquiring query information, POI search results related to the query information and a comment set corresponding to each POI in the POI search results;
step 102, determining a target comment corresponding to each POI in a comment set corresponding to each POI, wherein preset relevant conditions are met between the target comment and the query information;
step 103, generating a first vector according to the target comment corresponding to each POI, wherein the first vector is used for expressing the semantic relevance between the target comment and the query information;
and 104, taking the first vector, the second vector corresponding to the query information, the third vector corresponding to each POI in the POI search result and the relevance marking information of the query information and each POI as training data, and training a target model for determining the matching degree.
The embodiments of the present disclosure provide a model training method for determining a matching degree; in a POI search scenario, the target model can be used to calculate the matching degree between query information and a POI (Point of Interest) search result. It can be understood that the embodiments of the present disclosure are described by taking an e-commerce search scenario as an example; in practical applications, a POI may be a house, a merchant, a commodity, a mailbox, a bus station, or the like.
The query information is information input by a user in a search box of a search engine. It should be noted that, the specific type of the query information is not limited by the embodiments of the present disclosure. The query information may be text information, voice information, image information, etc. When the query information is text information, the text information can be analyzed to obtain query words or query sentences; when the query information is voice information, voice recognition can be carried out on the voice information to obtain corresponding text information, and then the text information is analyzed; when the query information is image information, if the image information contains text content, text recognition can be carried out on the image information to obtain corresponding text information, and then the text information is analyzed; if the image information does not include text content, the image information may be subjected to image processing to identify content included in the image information, and the content may be used as query information.
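As an illustration only, the handling of the different query types described above could be sketched as follows; asr_transcribe, ocr_extract, and image_caption are hypothetical placeholders for the speech-recognition, text-recognition, and image-understanding components, none of which are specified by the disclosure.

```python
# Minimal sketch of normalizing different query input types into query text.
# asr_transcribe, ocr_extract and image_caption are hypothetical placeholders,
# not components disclosed in this patent.

def normalize_query(query, query_type, asr_transcribe, ocr_extract, image_caption):
    """Return query text regardless of whether the input is text, voice or image."""
    if query_type == "text":
        return query
    if query_type == "voice":
        return asr_transcribe(query)                     # speech -> text
    if query_type == "image":
        text = ocr_extract(query)                        # try to read text in the image
        return text if text else image_caption(query)    # otherwise describe the image content
    raise ValueError(f"unsupported query type: {query_type}")
```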
The disclosed embodiments may collect historical operational data of a user to construct training data for training a target model. The historical operation data includes user generated content data, user search behavior data, and the like. The historical operational data may be derived from a user log, e.g., a search log and/or click log of the user may be obtained, etc.
The historical operation data of the user comprises search data of the user, and each piece of search data comprises query information and search results, so that query information, POI search results related to the query information and a comment set corresponding to each POI in the POI search results can be extracted from each piece of search data.
In one example, the query information is "a place suitable for lovers to chat and drink coffee". Because this query is complicated, it is difficult to find matching POI search results based only on the names and profiles of POIs. To solve this problem, when constructing the training data, the embodiments of the present disclosure fully mine the comment information of each POI, which contains varied content that can reflect user intentions, and mine the association between the comment information and the query information, so that the constructed training data contains deeper (implicit) association relationships and the trained target model can identify deeper correlations between query information and search results.
The amount of comment information corresponding to each POI is large, and some comments have little relevance to the query information. Therefore, in the comment set corresponding to each POI, the embodiments of the present disclosure determine the target comments that satisfy a preset relevance condition with the query information; a target comment that satisfies the preset relevance condition has a strong correlation with the query information. For example, suppose a certain POI includes comment information that is strongly correlated with the query information "a place suitable for lovers to chat and drink coffee"; that comment information can be used as a target comment of the POI.
In an optional embodiment of the present disclosure, the determining, in the comment set corresponding to each POI, a target comment corresponding to each POI in step 102 may include:
step S11, for the comment set corresponding to each POI, semantic matching is carried out on each comment and the query information one by one, and the semantic relevance of each comment and the query information is obtained;
s12, ranking the comment sets corresponding to the POIs according to the semantic relevance to obtain a ranking result of each comment set;
and step S13, regarding the ranking result of the comment set corresponding to each POI, taking at least one comment of which the ranking result meets the preset requirement as a target comment corresponding to each POI.
Semantic matching, i.e., identifying the semantic relationship between two texts, is one of the fundamental tasks of natural language processing. In the embodiments of the present disclosure, for the comment set corresponding to each POI, semantic matching is performed between each comment and the query information one by one to obtain the semantic relevance between each comment and the query information.
It should be noted that the method for semantic matching is not limited in the embodiments of the present disclosure. For example, a semantic matching method based on a keyword may be employed, and the determination is assisted by word-level information such as word weight. Or, a semantic matching method based on a semantic matching model can be adopted, and the semantic relevance of non-keyword hit can be better calculated.
After the semantic relevance between each comment and the query information is computed, the comment set corresponding to each POI is ranked according to the semantic relevance to obtain a ranking result for each comment set. In one example, the query information is "a place suitable for lovers to chat and drink coffee". Because the query information is complicated, it is difficult to retrieve matching POI search results based on the names and profiles of POIs. The embodiments of the present disclosure can therefore extract keywords from the query information and retrieve POI search results related to the query information according to the keywords. For example, the keyword "drink coffee" can be extracted, and POI search results related to "drink coffee" can be retrieved, such as the following POIs: "Carving Time Cafe", "Intermittent Coffee", "Know a Cafe", and so on. Each POI in the POI search results corresponds to its own comment set. The embodiments of the present disclosure rank the comment set corresponding to each POI in the POI search results to obtain a ranking result for each comment set. For example, for the comment set corresponding to the POI "Carving Time Cafe", the comments are ranked according to their semantic relevance to the query information, and at least one comment whose ranking result meets a preset requirement is taken as a target comment that satisfies the preset relevance condition with the query information; for instance, the top-K comments (K is a positive integer, e.g., K = 10) in the ranking result are taken as the target comments of the POI "Carving Time Cafe". Similarly, for the POI "Intermittent Coffee", the top-K comments in the ranking result of its comment set are taken as its target comments. By analogy, the comment set corresponding to each POI in the POI search results for the query information "a place suitable for lovers to chat and drink coffee" is ranked, and the target comments corresponding to each POI are determined. Each POI may correspond to one or more target comments.
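A minimal sketch of steps S11 to S13 is given below: each comment is scored against the query with a pluggable relevance function and the top-K comments per POI are kept as target comments; semantic_relevance is an assumed placeholder for whichever keyword-based or model-based matcher is used.

```python
# Sketch of steps S11-S13: rank each POI's comments by semantic relevance to the
# query and keep the top-K as target comments. `semantic_relevance` is an assumed
# placeholder for a keyword-based or model-based matcher.

def select_target_comments(query, poi_comments, semantic_relevance, k=10):
    """poi_comments: dict mapping POI id -> list of comment strings."""
    target_comments = {}
    for poi_id, comments in poi_comments.items():
        scored = [(semantic_relevance(query, c), c) for c in comments]   # step S11
        scored.sort(key=lambda x: x[0], reverse=True)                    # step S12
        target_comments[poi_id] = [c for _, c in scored[:k]]             # step S13
    return target_comments
```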
After the target comment corresponding to each POI is determined in its comment set, the target comment corresponding to each POI is vectorized to generate a first vector, which represents the semantic relevance between the target comment and the query information; the query information is vectorized to generate a second vector; and each POI in the POI search results is vectorized to generate a third vector corresponding to each POI. In addition, after acquiring the query information, the POI search results related to the query information, and the comment set corresponding to each POI in the POI search results, the embodiments of the present disclosure label the relevance between the query information and each POI in the POI search results to obtain relevance labeling information.
Therefore, the first vector, the second vector corresponding to the query information, the third vector corresponding to each POI in the POI search result, and the relevance labeling information of the query information and each POI can be used as training data to train a target model for determining the matching degree.
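As an illustration of how one training example could be organized, the following sketch groups the four components named above; the class and field names are assumptions for illustration, not terminology from the disclosure.

```python
# Illustrative container for one training example: the comment relevance vector
# (first vector), query vector (second vector), POI vector (third vector), and
# the relevance label between the query and this POI. Names are assumptions.
from dataclasses import dataclass
import numpy as np

@dataclass
class TrainingExample:
    first_vector: np.ndarray    # relevance of the POI's target comments to the query
    second_vector: np.ndarray   # vectorized query information
    third_vector: np.ndarray    # vectorized POI (e.g. name/profile embedding)
    relevance_label: float      # labeled relevance between the query and this POI
```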
In an optional embodiment of the present disclosure, the generating a first vector according to the target comment corresponding to each POI in step 103 may include:
step S21, inputting the target comment corresponding to each POI and the query information into a deep reading understanding model, and outputting the relevance vector representation of each target comment corresponding to each POI and the query information through the deep reading understanding model;
step S22, according to the relevance vector representation between each target comment corresponding to each POI and the query information, generating a first vector for representing the relevance between all the target comments corresponding to each POI and the query information.
Machine reading comprehension is one of the core tasks of natural language processing and is also an important task for evaluating a model's text understanding capability; in essence, it can be regarded as a sentence-relation matching task.
And for POI search results related to the query information, inputting the target comment corresponding to each POI and the query information into a deep reading understanding model, and outputting a relevance vector representation of each target comment corresponding to each POI and the query information through the deep reading understanding model.
The deep reading comprehension model mainly includes the following functional modules: an input encoding module, a content interaction module, and an output prediction module. The input encoding module receives the input query information and the target comments corresponding to the current POI, and converts the query information and each target comment into word vectors, obtaining word vector representations of the query information and of each target comment. The content interaction module extracts the correlation between the query information and each target comment; specifically, it calculates the similarity between each word in the query information and each word in the target comment, generating a similarity matrix, from which the relevance vector representation between each target comment corresponding to each POI and the query information can be output.
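The content-interaction step can be pictured, for a single target comment, as a word-by-word similarity matrix between the query and the comment; the sketch below uses cosine similarity over word embeddings and max-pooling, which are assumptions rather than the specific formulation of the disclosure.

```python
# Sketch of the content-interaction module for one target comment: a similarity
# matrix between every query word and every comment word, using cosine similarity
# over word embeddings (an assumption; the disclosure does not fix the function).
import numpy as np

def similarity_matrix(query_vectors, comment_vectors):
    """query_vectors: (m, d) word embeddings; comment_vectors: (n, d) word embeddings."""
    q = query_vectors / (np.linalg.norm(query_vectors, axis=1, keepdims=True) + 1e-8)
    c = comment_vectors / (np.linalg.norm(comment_vectors, axis=1, keepdims=True) + 1e-8)
    return q @ c.T                     # (m, n) matrix of word-to-word similarities

def relevance_vector(query_vectors, comment_vectors):
    """Pool the similarity matrix into one relevance vector for this comment."""
    sim = similarity_matrix(query_vectors, comment_vectors)
    return sim.max(axis=1)             # best match in the comment per query word, shape (m,)
```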
Then, according to the relevance vector representation between each target comment corresponding to each POI and the query information, a first vector representing the relevance between all the target comments corresponding to each POI and the query information is generated.
In an optional embodiment of the present disclosure, the step S22 of generating, according to the vector representation of the relevance between each target comment corresponding to each POI and the query information, a first vector for representing the relevance between all target comments corresponding to each POI and the query information includes:
and performing attention calculation on the relevance vector representation of each target comment corresponding to each POI and the query information to obtain a first vector for representing the relevance of all the target comments corresponding to each POI and the query information.
The embodiments of the present disclosure introduce an attention mechanism when calculating the first vector because, when a deep neural network processes a complex task (for example, a large amount of input information), merely converting the input into a vector representation through the network hardly reflects all the semantic information of the input. With an attention mechanism, only the information relevant to the task needs to be encoded and secondary information can be discarded, reducing the amount of input. Based on the attention mechanism, the model can focus on the related parts of the query information and the target comments, ignoring useless information and reinforcing important information.
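A minimal sketch of the attention pooling described above, assuming each target comment has already been reduced to a fixed-length relevance vector; the scoring vector w is an illustrative learnable parameter.

```python
# Minimal sketch of attention pooling: weight each target comment's relevance
# vector by an attention score and sum them into a single first vector.
# The scoring vector `w` is illustrative; any attention parameterization could be used.
import numpy as np

def attention_pool(relevance_vectors, w):
    """relevance_vectors: (num_comments, d); w: (d,) attention scoring vector."""
    scores = relevance_vectors @ w                       # one score per comment
    scores = scores - scores.max()                       # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()      # softmax over comments
    return weights @ relevance_vectors                   # (d,) first vector
```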
In addition, the embodiments of the present disclosure can also use the multi-document cross-verification capability of the deep reading comprehension model to cross-verify all the target comments corresponding to each POI, and finally compute the vector representation of the target comments corresponding to each POI.
After vectorizing the target comment corresponding to each POI to obtain a first vector, vectorizing the query information to obtain a second vector, vectorizing each POI in the POI search result to obtain a third vector corresponding to each POI, the first vector, the second vector, the third vector, and the relevance labeling information of the query information and each POI may be used as training data to train a target model for determining a matching degree.
It should be noted that the embodiments of the present disclosure do not limit the model structure or the training method of the target model. The target model may be a classification model that fuses a plurality of neural networks, including but not limited to at least one of the following, or a combination, superposition, or nesting of at least two of the following: CNN (Convolutional Neural Network), LSTM (Long Short-Term Memory) network, RNN (Recurrent Neural Network), attention-based neural network, and the like.
In an optional embodiment of the present disclosure, the training a target model for determining a matching degree by using the first vector, the second vector corresponding to the query information, the third vector corresponding to each POI in the POI search result, and the relevance labeling information of the query information and each POI as training data may include:
step S31, calculating the matching degree between the query information and each POI in the POI search results according to the first vector, the second vector and the third vector based on the initial model parameters;
and step S32, adjusting the initial model parameters according to the error between the matching degree and the correlation labeling information, and iterating until the error meets a preset condition to obtain a trained target model.
First, the embodiments of the present disclosure construct and initialize the target model and set the initial model parameters of the initial model; then, the training data are input into the initial model one by one, and the initial model is iteratively optimized with a gradient descent algorithm according to the error between the model output and the relevance labeling information in the training data. The model parameters are adjusted until the error of the optimized model satisfies a preset condition, at which point the iterative optimization stops and the model obtained from the last optimization is taken as the trained target model.
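A schematic PyTorch-style version of steps S31 and S32 is sketched below; the scorer architecture (an MLP over the concatenated first, second, and third vectors) and the binary cross-entropy loss are assumptions, since the disclosure does not fix the model structure or training method.

```python
# Schematic training loop for steps S31-S32. The MLP scorer and the BCE loss
# against the relevance label are assumptions, not the claimed architecture.
import torch
import torch.nn as nn

class MatchingModel(nn.Module):
    def __init__(self, dim_first, dim_second, dim_third, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim_first + dim_second + dim_third, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, first_vec, second_vec, third_vec):
        x = torch.cat([first_vec, second_vec, third_vec], dim=-1)
        return torch.sigmoid(self.mlp(x)).squeeze(-1)      # matching degree in [0, 1]

def train(model, loader, epochs=5, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCELoss()
    for _ in range(epochs):
        for first_vec, second_vec, third_vec, label in loader:
            degree = model(first_vec, second_vec, third_vec)   # step S31: matching degree
            loss = loss_fn(degree, label)                      # error vs. relevance labels
            opt.zero_grad()
            loss.backward()                                    # step S32: adjust parameters
            opt.step()
    return model
```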
The embodiments of the present disclosure extract, through the deep reading comprehension model, the features in user comments that are relevant to the query information, and then use these features to obtain the vector representation of the target comments related to the query information, thereby realizing the relevance calculation between the query information and the POI search results. Since a merchant's name and profile usually contain only basic information while user comments contain more diverse information, when the input query information is a complex query such as "suitable for lovers to chat and drink coffee", the embodiments of the present disclosure can still match POI search results that correspond to the query information and meet the user's needs.
In an optional embodiment of the present disclosure, after obtaining the trained target model, the method may further include:
and performing reinforcement learning on the target model based on the strategy gradient to obtain an optimized target model, wherein the reward value of the reinforcement learning is the matching degree output by the target model, the action of the reinforcement learning is whether to select the current comment as the target comment, and the state of the reinforcement learning is the target comment selected by history.
In the embodiment of the present disclosure, in the process of performing semantic matching on each comment and the query information one by one for the comment set corresponding to each POI, in order to reduce data processing cost, an unsupervised semantic matching model may be used. After the matching degree between the query information and each POI in the POI search results is calculated according to the first vector, the second vector and the third vector, the calculated matching degree is further subjected to reinforcement learning joint training so as to optimize model parameters of a target model.
Preferably, the disclosed embodiments employ a policy-gradient-based reinforcement learning algorithm: the reward (reward value) is the matching degree calculated by the target model, the action is whether the current comment is selected as a target comment, the state is the set of target comments already selected, and a policy gradient algorithm is used for solving.
The policy gradient method outputs actions, or probabilities of actions, directly from the state, and uses the reward value to increase or decrease the likelihood of selecting an action: the probability of selecting a good action next time is increased, and the probability of selecting a bad action next time is decreased.
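The joint training could be instantiated as a REINFORCE-style update along the following lines; the policy network, the per-comment features, and the reward_fn wrapper around the target model are all illustrative assumptions rather than the claimed implementation.

```python
# Compact REINFORCE-style sketch of the policy-gradient step described above.
# `comment_features` is an assumed per-comment feature tensor, and `reward_fn`
# returns the matching degree produced by the target model for the selected
# comments; both are illustrative.
import torch
import torch.nn as nn

class SelectionPolicy(nn.Module):
    def __init__(self, feature_dim):
        super().__init__()
        self.scorer = nn.Linear(feature_dim, 1)

    def forward(self, comment_features):
        # Probability of selecting each comment as a target comment.
        return torch.sigmoid(self.scorer(comment_features)).squeeze(-1)

def reinforce_step(policy, optimizer, comment_features, reward_fn):
    probs = policy(comment_features)                   # action probabilities per comment
    dist = torch.distributions.Bernoulli(probs)
    actions = dist.sample()                            # 1 = select comment, 0 = skip
    reward = float(reward_fn(actions))                 # matching degree from the target model
    loss = -(dist.log_prob(actions).sum() * reward)    # policy-gradient objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward
```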
In summary, in the process of training the target model for determining the matching degree, the training data is determined from the query information, the POI search results related to the query information, and the comment information corresponding to each POI in the POI search results, and target comment information that is semantically relevant to the query information is fused into the training process. The trained target model can therefore identify deeper association relationships (implicit in the comment information) between POI search results and query information, beyond mere textual relevance, and because the comment information comes from a large number of users, it can reflect users' real intentions. As a result, even when the query information is relatively complex and the title or profile of a POI does not contain content related to the query information, the target model of the embodiments of the present disclosure can still retrieve POI search results related to the query information, improving the success rate and accuracy of search and thereby improving the user experience.
Example two
Referring to fig. 2, a flow chart illustrating steps of a method of determining a degree of match in one embodiment of the present disclosure is shown, the method comprising:
step 201, receiving current query information input by a user;
step 202, obtaining a candidate POI set related to the current query information;
step 203, inputting each candidate POI in the candidate POI set and the current query information into a target model, and outputting the matching degree between each candidate POI and the current query information through the target model, wherein the target model is obtained by training according to the model training method for determining the matching degree.
After the target model is obtained through training by the model training method for determining the matching degree of the embodiment of the disclosure, the matching degree between the query information and the POI can be calculated by using the target model.
The method for determining the matching degree provided by the present disclosure can be applied to electronic devices, which specifically include but are not limited to: smart phones, tablet computers, electronic book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop portable computers, car-mounted computers, desktop computers, set-top boxes, smart televisions, wearable devices, and the like.
The current Query information may be search information, such as Query, input by the user. Optionally, the current query information may be a part of or the whole content of the search information input by the user. For example, if the search information input by the user is "red wine recommendation", the current query information may be "red wine recommendation" or "red wine".
The candidate POI set may be the recall results retrieved by a search engine for the current query information. Based on the method for determining the matching degree provided by the embodiments of the present disclosure, the matching degree between the current query information and each candidate POI in the candidate POI set can be determined, and the recall results can then be ranked according to these matching degrees to obtain the target POIs recommended to the user.
In a specific application, the number of recall results is usually large and includes many search results with low relevance. To improve search efficiency, the candidate POI set may instead be a coarse-ranking result obtained by preliminarily ranking the recall results. Then, based on the method for determining the matching degree provided by the embodiments of the present disclosure, the matching degree between the current query information and each candidate POI in the candidate POI set is determined on the basis of the coarse-ranking result, and the coarse-ranking result is further ranked according to the matching degree to obtain the target POIs recommended to the user.
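At inference time, the re-ranking described above could look like the following sketch, where compute_matching_degree is an assumed wrapper that runs the trained target model on the current query and one candidate POI.

```python
# Sketch of steps 201-203: score each candidate POI against the current query with
# the trained target model and re-rank the (coarse-ranked) candidates by matching
# degree. `compute_matching_degree` is an assumed wrapper around the target model.

def rerank_candidates(current_query, candidate_pois, compute_matching_degree, top_n=10):
    scored = [(compute_matching_degree(current_query, poi), poi) for poi in candidate_pois]
    scored.sort(key=lambda x: x[0], reverse=True)        # highest matching degree first
    return [poi for _, poi in scored[:top_n]]            # target POIs recommended to the user
```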
Application scenarios of the disclosed embodiments include, but are not limited to, natural language processing, spam filtering, web searching, chat robots, and the like. The method is particularly suitable for determining the matching degree between the Query word Query and the searched merchant POI/SPU (Standard Product Unit) in the O2O search scene so as to improve the accuracy of the search result of the O2O search scene.
It is noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the disclosed embodiments are not limited by the described order of acts, as some steps may occur in other orders or concurrently with other steps in accordance with the disclosed embodiments. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required of the disclosed embodiments.
EXAMPLE III
Referring to fig. 3, a block diagram of a model training apparatus for determining a matching degree in an embodiment of the present disclosure is shown as follows.
The data acquisition module 301 is configured to acquire query information, POI search results related to the query information, and a comment set corresponding to each POI in the POI search results;
a target determining module 302, configured to determine a target comment corresponding to each POI in the comment set corresponding to each POI, where a preset relevant condition is satisfied between the target comment and the query information;
a vector representing module 303, configured to generate a first vector according to the target comment corresponding to each POI, where the first vector is used to represent semantic relevance between the target comment and the query information;
a model training module 304, configured to train a target model for determining a matching degree by using the first vector, the second vector corresponding to the query information, the third vector corresponding to each POI in the POI search result, and the relevance labeling information of the query information and each POI as training data.
Optionally, the goal determining module 302 includes:
the semantic matching sub-module is used for carrying out semantic matching on each comment and the query information one by one for the comment set corresponding to each POI to obtain the semantic relevancy of each comment and the query information;
the comment ordering submodule is used for ordering the comment sets corresponding to the POIs according to the semantic relevance to obtain an ordering result of each comment set;
and the target determining submodule is used for taking at least one comment of which the sequencing result meets the preset requirement as the target comment corresponding to each POI according to the sequencing result of the comment set corresponding to each POI.
Optionally, the vector representing module 303 includes:
the first calculation sub-module is used for inputting the target comment corresponding to each POI and the query information into a deep reading understanding model, and outputting a relevance vector representation of each target comment corresponding to each POI and the query information through the deep reading understanding model;
and the second calculation submodule is used for generating a first vector for expressing the relevance between all the target comments corresponding to each POI and the query information according to the relevance vector expression between each target comment corresponding to each POI and the query information.
Optionally, the second computation sub-module is specifically configured to perform attention computation on the vector representation of the relevance between each target comment corresponding to each POI and the query information, so as to obtain a first vector used for representing the relevance between all the target comments corresponding to each POI and the query information.
Optionally, the model training module 304 includes:
the initial calculation sub-module is used for calculating the matching degree between the query information and each POI in the POI search results according to the first vector, the second vector and the third vector based on initial model parameters;
and the iteration optimization submodule is used for adjusting the initial model parameters according to the error between the matching degree and the correlation labeling information, and iterating until the error meets a preset condition to obtain a trained target model.
Optionally, the apparatus further comprises:
and the reinforcement learning module is used for performing reinforcement learning on the target model based on the strategy gradient to obtain an optimized target model, wherein the reward value of the reinforcement learning is the matching degree output by the target model, the action of the reinforcement learning is whether the current comment is selected as the target comment, and the state of the reinforcement learning is the target comment selected by history.
With the target model trained by the above model training apparatus for determining the matching degree, even when the query information is complex and the title or profile of a POI does not contain content related to the query information, the target model of the embodiments of the present disclosure can still retrieve POI search results related to the query information, thereby improving the success rate and accuracy of search and improving the user experience.
Example four
Referring to fig. 4, a block diagram of an apparatus for determining a matching degree in an embodiment of the present disclosure is shown as follows.
An information receiving module 401, configured to receive current query information input by a user;
a candidate obtaining module 402, configured to obtain a candidate POI set related to the current query information;
a matching calculation module 403, configured to input each candidate POI in the candidate POI set and the current query information into a target model, and output a matching degree between each candidate POI and the current query information through the target model, where the target model is obtained by training according to the model training method for determining the matching degree.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
An embodiment of the present disclosure also provides an electronic device, referring to fig. 5, including: a processor 501, a memory 502 and a computer program 5021 stored on the memory and executable on the processor, which when executed by the processor implements the model training method for determining the degree of match of the foregoing embodiments.
Embodiments of the present disclosure also provide a readable storage medium, and instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the model training method for determining a matching degree of the foregoing embodiments.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. In addition, embodiments of the present disclosure are not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the embodiments of the present disclosure as described herein, and any descriptions of specific languages are provided above to disclose the best modes of the embodiments of the present disclosure.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the present disclosure may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the disclosure, various features of the embodiments of the disclosure are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that is, claimed embodiments of the disclosure require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of an embodiment of this disclosure.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
The various component embodiments of the disclosure may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. It will be appreciated by those skilled in the art that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components in a sequencing device according to embodiments of the present disclosure. Embodiments of the present disclosure may also be implemented as an apparatus or device program for performing a portion or all of the methods described herein. Such programs implementing embodiments of the present disclosure may be stored on a computer readable medium or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit embodiments of the disclosure, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. Embodiments of the disclosure may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The above description is only for the purpose of illustrating the preferred embodiments of the present disclosure and is not to be construed as limiting the embodiments of the present disclosure, and any modifications, equivalents, improvements and the like that are made within the spirit and principle of the embodiments of the present disclosure are intended to be included within the scope of the embodiments of the present disclosure.
The above description is only a specific implementation of the embodiments of the present disclosure, but the scope of the embodiments of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the embodiments of the present disclosure, and all the changes or substitutions should be covered by the scope of the embodiments of the present disclosure. Therefore, the protection scope of the embodiments of the present disclosure shall be subject to the protection scope of the claims.

Claims (16)

1. A model training method for determining a degree of match, the method comprising:
acquiring query information, POI search results related to the query information and a comment set corresponding to each POI in the POI search results;
determining a target comment corresponding to each POI in a comment set corresponding to each POI, wherein preset relevant conditions are met between the target comment and the query information;
generating a first vector according to the target comment corresponding to each POI, wherein the first vector is used for expressing the semantic relevance of the target comment and the query information;
and taking the first vector, the second vector corresponding to the query information, the third vector corresponding to each POI in the POI search result and the correlation labeling information of the query information and each POI as training data to train a target model for determining the matching degree.
2. The method of claim 1, wherein the determining the target comment corresponding to each POI in the set of comments corresponding to each POI comprises:
for the comment set corresponding to each POI, performing semantic matching on each comment and the query information one by one to obtain the semantic relevance of each comment and the query information;
sequencing the comment sets corresponding to the POIs according to the semantic relevance to obtain a sequencing result of each comment set;
and regarding the ranking result of the comment set corresponding to each POI, taking at least one comment of which the ranking result meets the preset requirement as a target comment corresponding to each POI.
3. The method of claim 1, wherein generating a first vector according to the target comment corresponding to each POI comprises:
inputting the target comment corresponding to each POI and the query information into a deep reading understanding model, and outputting a relevance vector representation of each target comment corresponding to each POI and the query information through the deep reading understanding model;
and generating a first vector for representing the relevance of all the target comments corresponding to each POI and the query information according to the relevance vector representation of each target comment corresponding to each POI and the query information.
4. The method of claim 3, wherein generating a first vector representing the relevance of all the target comments corresponding to each POI to the query information according to the relevance vector representation of each target comment corresponding to each POI to the query information comprises:
and performing attention calculation on the relevance vector representation of each target comment corresponding to each POI and the query information to obtain a first vector for representing the relevance of all the target comments corresponding to each POI and the query information.
5. The method according to claim 1, wherein training a target model for determining the matching degree by using the first vector, the second vector corresponding to the query information, the third vector corresponding to each POI in the POI search results, and the relevance labeling information of the query information and each POI as training data comprises:
calculating the matching degree between the query information and each POI in the POI search results according to the first vector, the second vector and the third vector based on initial model parameters;
and adjusting the initial model parameters according to the error between the matching degree and the correlation labeling information, and iterating until the error meets a preset condition to obtain a trained target model.
6. The method of claim 5, wherein after obtaining the trained target model, the method further comprises:
and performing reinforcement learning on the target model based on the strategy gradient to obtain an optimized target model, wherein the reward value of the reinforcement learning is the matching degree output by the target model, the action of the reinforcement learning is whether to select the current comment as the target comment, and the state of the reinforcement learning is the target comment selected by history.
7. A method for determining a degree of match, the method comprising:
receiving current query information input by a user;
acquiring a candidate POI set related to the current query information;
inputting each candidate POI in the candidate POI set and the current query information into a target model, and outputting a matching degree between each candidate POI and the current query information through the target model, wherein the target model is trained according to the model training method for determining the matching degree of any one of claims 1 to 6.
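At serving time (claim 7), the trained model is applied to every candidate POI and the candidates are ranked by matching degree. The sketch below assumes a hypothetical build_vectors helper that produces the first, second, and third vectors described in the earlier claims.

```python
# Sketch of claim 7: score every candidate POI against the current query with the
# trained target model and rank by matching degree. build_vectors is a hypothetical
# helper standing in for the feature construction of claims 1-4.
def rank_candidates(query: str, candidate_pois: list, target_model, build_vectors):
    """Return (poi, matching_degree) pairs sorted from best to worst match."""
    scored = []
    for poi in candidate_pois:
        first_vec, query_vec, poi_vec = build_vectors(query, poi)
        degree = float(target_model(first_vec, query_vec, poi_vec))  # matching degree
        scored.append((poi, degree))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```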
8. A model training apparatus for determining a degree of matching, the apparatus comprising:
the data acquisition module is used for acquiring query information, POI search results related to the query information and a comment set corresponding to each POI in the POI search results;
the target determining module is used for determining a target comment corresponding to each POI in the comment set corresponding to each POI, wherein the target comment and the query information satisfy a preset relevance condition;
the vector representing module is used for generating a first vector according to the target comment corresponding to each POI, wherein the first vector is used for representing the semantic relevance between the target comment and the query information;
and the model training module is used for training a target model for determining the matching degree by taking the first vector, the second vector corresponding to the query information, the third vector corresponding to each POI in the POI search results, and the relevance labeling information between the query information and each POI as training data.
9. The apparatus of claim 8, wherein the goal determination module comprises:
the semantic matching sub-module is used for performing semantic matching between each comment and the query information one by one for the comment set corresponding to each POI to obtain a semantic relevance between each comment and the query information;
the comment ranking sub-module is used for ranking the comments in the comment set corresponding to each POI according to the semantic relevance to obtain a ranking result of each comment set;
and the target determining sub-module is used for, according to the ranking result of the comment set corresponding to each POI, taking at least one comment whose rank meets a preset requirement as the target comment corresponding to that POI.
10. The apparatus of claim 8, wherein the vector representation module comprises:
the first calculation sub-module is used for inputting the target comments corresponding to each POI and the query information into a deep reading comprehension model, and outputting, through the deep reading comprehension model, a relevance vector representation between each target comment corresponding to each POI and the query information;
and the second calculation sub-module is used for generating, from the relevance vector representation between each target comment corresponding to each POI and the query information, a first vector representing the relevance between all the target comments corresponding to each POI and the query information.
11. The apparatus according to claim 10, wherein the second calculation sub-module is specifically configured to perform attention computation on the relevance vector representation between each target comment corresponding to each POI and the query information to obtain a first vector representing the relevance between all the target comments corresponding to each POI and the query information.
12. The apparatus of claim 8, wherein the model training module comprises:
the initial calculation sub-module is used for calculating, based on initial model parameters, the matching degree between the query information and each POI in the POI search results according to the first vector, the second vector and the third vector;
and the iterative optimization sub-module is used for adjusting the initial model parameters according to the error between the matching degree and the relevance labeling information, and iterating until the error meets a preset condition to obtain a trained target model.
13. The apparatus of claim 12, further comprising:
and the reinforcement learning module is used for performing reinforcement learning on the target model based on a policy gradient to obtain an optimized target model, wherein the reward of the reinforcement learning is the matching degree output by the target model, the action of the reinforcement learning is whether to select the current comment as a target comment, and the state of the reinforcement learning is the set of target comments selected so far.
14. An apparatus for determining a degree of matching, the apparatus comprising:
the information receiving module is used for receiving current query information input by a user;
a candidate obtaining module, configured to obtain a candidate POI set related to the current query information;
a matching calculation module, configured to input each candidate POI in the candidate POI set and the current query information into a target model, and output a matching degree between each candidate POI and the current query information through the target model, wherein the target model is trained according to the model training method for determining the matching degree of any one of claims 1 to 6.
15. An electronic device, comprising:
a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the model training method for determining the matching degree according to any one of claims 1 to 6.
16. A readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the model training method for determining the matching degree according to any one of claims 1 to 6.
CN202010990380.3A 2020-09-18 2020-09-18 Model training method and device for determining matching degree, electronic equipment and readable storage medium Pending CN112182126A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010990380.3A CN112182126A (en) 2020-09-18 2020-09-18 Model training method and device for determining matching degree, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010990380.3A CN112182126A (en) 2020-09-18 2020-09-18 Model training method and device for determining matching degree, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN112182126A 2021-01-05

Family

ID=73956853

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010990380.3A Pending CN112182126A (en) 2020-09-18 2020-09-18 Model training method and device for determining matching degree, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN112182126A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113360769A (en) * 2021-06-28 2021-09-07 北京百度网讯科技有限公司 Information query method and device, electronic equipment and storage medium
CN113554280A (en) * 2021-06-30 2021-10-26 北京百度网讯科技有限公司 Training method, device, equipment and storage medium for power grid system scheduling model

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103970784A (en) * 2013-01-31 2014-08-06 百度在线网络技术(北京)有限公司 Retrieval method and equipment
CN105468790A (en) * 2015-12-30 2016-04-06 北京奇艺世纪科技有限公司 Comment information retrieval method and comment information retrieval apparatus
CN109446399A (en) * 2018-10-16 2019-03-08 北京信息科技大学 A kind of video display entity search method
US20200004835A1 (en) * 2018-06-28 2020-01-02 Microsoft Technology Licensing, Llc Generating candidates for search using scoring/retrieval architecture
US20200005149A1 (en) * 2018-06-28 2020-01-02 Microsoft Technology Licensing, Llc Applying learning-to-rank for search
CN111259271A (en) * 2018-12-03 2020-06-09 阿里巴巴集团控股有限公司 Comment information display method and device, electronic equipment and computer readable medium
CN111339452A (en) * 2020-02-18 2020-06-26 北京字节跳动网络技术有限公司 Method, terminal, server and system for displaying search result
CN111460264A (en) * 2020-03-30 2020-07-28 口口相传(北京)网络技术有限公司 Training method and device of semantic similarity matching model

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103970784A (en) * 2013-01-31 2014-08-06 百度在线网络技术(北京)有限公司 Retrieval method and equipment
CN105468790A (en) * 2015-12-30 2016-04-06 北京奇艺世纪科技有限公司 Comment information retrieval method and comment information retrieval apparatus
US20200004835A1 (en) * 2018-06-28 2020-01-02 Microsoft Technology Licensing, Llc Generating candidates for search using scoring/retrieval architecture
US20200005149A1 (en) * 2018-06-28 2020-01-02 Microsoft Technology Licensing, Llc Applying learning-to-rank for search
CN109446399A (en) * 2018-10-16 2019-03-08 北京信息科技大学 A kind of video display entity search method
CN111259271A (en) * 2018-12-03 2020-06-09 阿里巴巴集团控股有限公司 Comment information display method and device, electronic equipment and computer readable medium
CN111339452A (en) * 2020-02-18 2020-06-26 北京字节跳动网络技术有限公司 Method, terminal, server and system for displaying search result
CN111460264A (en) * 2020-03-30 2020-07-28 口口相传(北京)网络技术有限公司 Training method and device of semantic similarity matching model

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113360769A (en) * 2021-06-28 2021-09-07 北京百度网讯科技有限公司 Information query method and device, electronic equipment and storage medium
CN113360769B (en) * 2021-06-28 2024-02-09 北京百度网讯科技有限公司 Information query method, device, electronic equipment and storage medium
CN113554280A (en) * 2021-06-30 2021-10-26 北京百度网讯科技有限公司 Training method, device, equipment and storage medium for power grid system scheduling model
CN113554280B (en) * 2021-06-30 2023-06-16 北京百度网讯科技有限公司 Training method, device, equipment and storage medium of power grid system scheduling model

Similar Documents

Publication Publication Date Title
CN109918673B (en) Semantic arbitration method and device, electronic equipment and computer-readable storage medium
CN111832290B (en) Model training method and device for determining text relevance, electronic equipment and readable storage medium
CN103870973A (en) Information push and search method and apparatus based on electronic information keyword extraction
CN114238573A (en) Information pushing method and device based on text countermeasure sample
CN110955750A (en) Combined identification method and device for comment area and emotion polarity, and electronic equipment
CN110678882A (en) Selecting answer spans from electronic documents using machine learning
WO2021007159A1 (en) Identifying entity attribute relations
CN110334186A (en) Data query method, apparatus, computer equipment and computer readable storage medium
CN110362662A (en) Data processing method, device and computer readable storage medium
CN113821588A (en) Text processing method and device, electronic equipment and storage medium
CN111782793A (en) Intelligent customer service processing method, system and equipment
CN112182126A (en) Model training method and device for determining matching degree, electronic equipment and readable storage medium
CN116662495A (en) Question-answering processing method, and method and device for training question-answering processing model
CN110275953B (en) Personality classification method and apparatus
Deshai et al. Deep learning hybrid approaches to detect fake reviews and ratings
Sayeed et al. BERT: A Review of Applications in Sentiment Analysis
CN117609612A (en) Resource recommendation method and device, storage medium and electronic equipment
Ali et al. Identifying and Profiling User Interest over time using Social Data
CN116561271A (en) Question and answer processing method and device
CN116186220A (en) Information retrieval method, question and answer processing method, information retrieval device and system
CN112115258B (en) Credit evaluation method and device for user, server and storage medium
CN114647739A (en) Entity chain finger method, device, electronic equipment and storage medium
Chhabra et al. Exploring Hugging Face Transformer Library Impact on Sentiment Analysis: A Case Study
CN111369315A (en) Resource object recommendation method and device, and data prediction model training method and device
Chhabra et al. 6 Exploring Hugging Face Transformer Library Impact on Sentiment Analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination