WO2021128729A1 - Method, apparatus, device, and computer storage medium for determining search results - Google Patents
- Publication number
- WO2021128729A1 (PCT/CN2020/092742)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- model
- search
- query
- user
- sub
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9538—Presentation of query results
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/903—Querying
- G06F16/9035—Filtering based on additional data, e.g. user or group profiles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
Definitions
- This application relates to the field of computer application technology, and in particular to a method, apparatus, device, and computer storage medium for determining search results in the field of intelligent search.
- Traditional related-entity recommendation considers only the user's current search keyword: for the same keyword, the same related entities are recommended to all users. This cannot solve the problem of inaccurate related entities for ambiguous search keywords. For example, if the user's current search keyword is "Chicago", it is impossible to know whether the user means the city, the movie, or the opera, so the recommended related entities cannot accurately reflect the user's needs.
- The present application provides a method, apparatus, device, and computer storage medium for determining search results, so as to provide users with search results that more accurately reflect their needs.
- this application provides a method for determining search results, which includes:
- acquiring the user's current query, the user's search history information within a first time period, the user's search history information within a second time period, and candidate search results of the current query, and inputting them into a search result ranking model, which scores the candidate search results to determine the search result corresponding to the current query, where the second duration is greater than the first duration;
- the search result ranking model determines the scores of the candidate search results according to a first similarity and a second similarity: the first similarity is the similarity between the integration of the vector representation of the current query with the vector representation of the user's search history information within the first time period, and the vector representation of the candidate search result;
- the second similarity is the similarity between the integration of the vector representation of the current query with the vector representation of the user's search history information within the second time period, and the vector representation of the candidate search result.
- the search history information of the user within the first time period includes: a query sequence before the current query in the same session and a clicked search result corresponding to each query in the query sequence;
- the search history information of the user in the second time period includes: queries searched by the user in the second time period and search results clicked on.
- the vector representation of the search history information of the user within the first time period is obtained in the following manner:
- the vector representation of each query in the query sequence and the vector representation of the clicked search result corresponding to each query are weighted using an attention mechanism to obtain a vector representation of the search history information of the user within the first time period.
- the vector representation of the search history information of the user in the second time period is obtained in the following manner:
- the word set of the search history is encoded using the Distributed Bag of Words version of Paragraph Vector (PV-DBOW) to obtain the vector representation of the search history information of the user in the second time period.
- the candidate search results include related webpages or related entities
- the vector representation of related entities is: an integrated vector representation of the identification, name, and entity description of the related entities.
- the method further includes:
- the search result corresponding to the current query is displayed on the search result page.
- this application provides a method for training a search result ranking model, the method including:
- the training samples include: a sample query, the user's search history information within the first time period before entering the sample query, the user's search history information within the second time period before entering the sample query, the search results corresponding to the sample query, and the clicked status of the search results;
- the input of the ranking model includes the sample query, the user's search history information within the first time period before entering the sample query, the user's search history information within the second time period before entering the sample query, and the search results corresponding to the sample query;
- the output of the ranking model includes the score of each search result;
- the ranking model's score for each search result is determined according to a first similarity and a second similarity: the first similarity is the similarity between the integration of the vector representation of the sample query with the vector representation of the search history information within the first time period, and the vector representation of the search result; the second similarity is the similarity between the integration of the vector representation of the sample query with the vector representation of the search history information within the second time period, and the vector representation of the search result;
- the training objective includes: maximizing the correlation between the clicked status of a search result and the score of that search result;
- the search history information within the first time period before the user enters the sample query includes: the query sequence before the sample query in the same session and the clicked search result corresponding to each query in the query sequence;
- the search history information within the second time period before the user enters the sample query includes: the queries searched by the user during that period and the search results clicked on.
- the vector representation of the search history information in the first time period before the user inputs the sample query is obtained in the following manner:
- the vector representation of each query in the query sequence and the vector representation of the clicked search result corresponding to each query are weighted using an attention mechanism to obtain the vector representation of the search history information of the user within the first time period.
- the vector representation of the search history information within the second period of time before the user enters the sample query is obtained in the following manner:
- the word set is encoded using the Distributed Bag of Words version of Paragraph Vector (PV-DBOW) to obtain the vector representation of the user's search history information within the second time period before entering the sample query.
- the search results include related webpages or related entities
- the vector representation of related entities is: an integrated vector representation of the identification, name, and entity description of the related entities.
- the search results include: a first type of search result and a second type of search result;
- the ranking model includes: a shared vector sub-model, a first ranking sub-model, and a second ranking sub-model;
- Joint training is performed on the first ranking sub-model and the second ranking sub-model to achieve a preset training goal, and the training goal includes: maximizing the correlation between the clicked status of the first type of search result and the score of the first type of search result, and maximizing the correlation between the clicked status of the second type of search result and the score of the second type of search result;
- the search result ranking model is obtained by using one of the first ranking sub-model and the second ranking sub-model and the shared vector sub-model.
- the joint training of the first ranking sub-model and the second ranking sub-model includes:
- in each training round, the first ranking sub-model and the second ranking sub-model are both trained, and the model parameters of all sub-models are updated using the outputs of the first ranking sub-model and the second ranking sub-model.
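The joint-training loop described above can be sketched as follows. This is a minimal illustration only: the linear shared sub-model, linear heads, and squared-error objective are stand-ins for the real vector sub-model and ranking sub-models, and all names (`joint_train`, `heads`, `batches`) are assumptions of the sketch.

```python
import numpy as np

def joint_train(shared_W, heads, batches, lr=0.05):
    """Alternate mini-batches between two task-specific heads; every step
    updates the shared sub-model's parameters as well as the active head's,
    so both tasks shape the shared representation."""
    for task, (x, y) in batches:                          # task is 0 or 1
        h = shared_W @ x                                  # shared vector sub-model
        pred = heads[task] @ h                            # task-specific sub-model
        err = pred - y                                    # squared-error gradient seed
        heads[task] -= lr * err * h                       # update the active head
        shared_W -= lr * err * np.outer(heads[task], x)   # update shared parameters
    return shared_W, heads
```

With a shared representation, gradients from both tasks accumulate in `shared_W`, which is the point of the multi-task setup.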
- this application provides a device for determining search results, the device including:
- the acquiring unit is configured to acquire the current query of the user, the search history information of the user in the first time period, the search history information of the user in the second time period, and the candidate search results of the current query;
- the determining unit is used to input the current query, the user's search history information within the first time period, the user's search history information within the second time period, and the candidate search results of the current query into a search result ranking model, and to determine the search result corresponding to the current query according to the scores the model assigns to the candidate search results, where the second duration is greater than the first duration;
- the search result ranking model determines the scores of the candidate search results according to a first similarity and a second similarity: the first similarity is the similarity between the integration of the vector representation of the current query with the vector representation of the user's search history information within the first time period, and the vector representation of the candidate search result; the second similarity is the similarity between the integration of the vector representation of the current query with the vector representation of the user's search history information within the second time period, and the vector representation of the candidate search result.
- this application provides a device for training a search result ranking model, the device including:
- the sample obtaining unit is used to obtain training samples by using the search log.
- the training samples include: a sample query, the user's search history information within the first time period before entering the sample query, the user's search history information within the second time period before entering the sample query, the search results corresponding to the sample query, and the clicked status of the search results;
- the model training unit is used to train a ranking model with the training samples to achieve a preset training goal; the input of the ranking model includes the sample query, the user's search history information within the first time period before entering the sample query, the user's search history information within the second time period before entering the sample query, and the search results corresponding to the sample query, and the output of the ranking model includes a score for each search result; the ranking model's score for each search result is determined according to a first similarity and a second similarity, where the first similarity is the similarity between the integration of the vector representation of the sample query with the vector representation of the search history information within the first time period, and the vector representation of the search result, and the second similarity is the similarity between the integration of the vector representation of the sample query with the vector representation of the search history information within the second time period, and the vector representation of the search result; the training goal includes: maximizing the correlation between the clicked status of a search result and its score;
- the model acquisition unit is used to obtain the search result ranking model from the trained ranking model.
- this application provides an electronic device, including:
- at least one processor; and
- a memory communicatively connected with the at least one processor; wherein,
- the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the method as described above.
- the present application provides a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to make the computer execute the method described above.
- When determining search results, this application comprehensively considers both the search context reflected in the user's short-term search history and the personalized preferences reflected in the user's long-term search history, thereby improving the accuracy of the search results.
- For related-entity recommendation, this can eliminate the ambiguity of the current query and provide more accurate related entities that meet the user's search needs.
- FIG. 1 is an example diagram of related entity recommendation on a search result page;
- FIG. 2 is a flowchart of a method for recommending related entities according to Embodiment 1 of this application;
- FIG. 3 is a schematic structural diagram of an entity sorting model provided in Embodiment 1 of this application.
- FIG. 5 is a schematic structural diagram of an entity sorting model provided in Embodiment 3 of this application.
- Fig. 6 is a structural diagram of an apparatus for determining search results provided by an embodiment of the application.
- FIG. 7 is a structural diagram of an apparatus for training a search result ranking model provided by an embodiment of the application.
- Fig. 8 is a block diagram of an electronic device used to implement an embodiment of the present application.
- the method provided in this application is applied to a search engine of a computer system and executed by a computer or a processor; it can be deployed on the server side to improve the determination of search results by using the user's historical queries.
- the browser or client sends the current query to the server, and the server determines the search result using the method provided in the embodiment of this application, and then sends the search result to the browser.
- the search results involved in the embodiments of the present application may include related entities or related webpages.
- the method provided by this application can be used to recommend related entities for the current query, and can also be used to recommend related webpages for the current query.
- the related entity recommendation for the current query is taken as an example, and the principles for the recommendation of related webpages are similar.
- the application will be described in detail below in conjunction with embodiments.
- FIG. 2 is a flowchart of a method for recommending related entities provided in Embodiment 1 of this application. As shown in FIG. 2, the method may include the following steps:
- the user's current query (search keyword), the user's search history information within the first time period, the user's search history information within the second time period, and the candidate related entities of the current query are acquired, where the second duration is greater than the first duration.
- the traditional entity recommendation system recommends based only on the current query, that is, the query currently entered by the user; it cannot understand the user's real search needs, so the recommended related entities are inaccurate and do not meet the user's needs.
- search history can provide very valuable clues, which can better help capture the real needs of users.
- Search history can be divided into two types: short-term search history and long-term search history.
- the short-term search history may correspond to the search history information of the user in this application in the first time period
- the long-term search history may correspond to the search history information of the user in this application in the second time period.
- the short-term search history may include previous user behaviors in the same search session as the current query, for example, the query sequence before the current query in the same session and the clicked search result corresponding to each query in the query sequence.
- the clicked search result may be the clicked webpage in the search result page, or the clicked related entity.
- the short-term search history can be regarded as the contextual information of the current query, reflecting the user's short-term, immediate interests. If the user searched for "Dream Girl" in the same session before searching for "Chicago", the user is more likely to be interested in the movie. Similarly, if the user previously clicked on search results and recommended entities related to opera, the user may be more interested in the opera.
- the above-mentioned "session" refers to a search session, and any widely used method for determining a search session can be used here. If the user has had no search behavior for the first time period (for example, 30 minutes), the next search behavior is taken as the start of a new session; in other words, consecutive search behaviors within 30 minutes of one another belong to the same session.
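The 30-minute session rule above can be sketched as follows; the function name and data layout are assumptions of this illustration, not the patent's implementation.

```python
from datetime import datetime, timedelta

def split_sessions(timestamps, gap=timedelta(minutes=30)):
    """Group a user's chronologically ordered search timestamps into sessions.
    A new session starts whenever the gap since the previous search exceeds
    the first time period (30 minutes here, as in the embodiment)."""
    sessions = []
    for t in timestamps:
        if sessions and t - sessions[-1][-1] <= gap:
            sessions[-1].append(t)       # continues the current session
        else:
            sessions.append([t])         # no activity for 30 min: new session
    return sessions
```

For example, searches at 10:00 and 10:10 form one session, while a search at 11:00 (50 minutes later) opens a second session.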
- Long-term search history refers to all of the user's search behaviors within the second time period before the current query, across all of the user's sessions during that period, including the entered queries, the webpages clicked on the search result pages, and the related entities clicked.
- the long-term search history reflects the user's long-term inherent interest preferences. If a user frequently searches for opera-related queries, clicks on opera-related web pages and related entities, then when the user searches for "Chicago", it is best to recommend opera-related entities to the user. Long-term search history is very helpful to build a personalized entity recommendation system.
- the above-mentioned first duration can be selected at minute or hour granularity, for example, 30 minutes.
- the above-mentioned second duration can be selected at day or month granularity, for example, 3 months.
- the application embodiment does not limit how the candidate related entities of the current query are acquired. For example, entities that co-occur with the current query within a window of preset length can be selected from the text set, and those whose co-occurrence count meets a preset threshold are taken as the candidate related entities of the current query.
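The windowed co-occurrence selection can be sketched as follows; treating the query as a single token and the specific window length and threshold are illustrative assumptions.

```python
from collections import Counter

def candidate_entities(texts, query, entities, window=5, min_count=2):
    """Collect entities that co-occur with the query within a fixed-size
    token window, keeping those whose co-occurrence count meets a preset
    threshold."""
    counts = Counter()
    for tokens in texts:
        positions = [i for i, tok in enumerate(tokens) if tok == query]
        for i in positions:
            lo, hi = max(0, i - window), i + window + 1
            for tok in set(tokens[lo:hi]):        # count each text window once
                if tok in entities and tok != query:
                    counts[tok] += 1
    return [e for e, c in counts.items() if c >= min_count]
```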
- the user's current query, the user's search history information within the first time period, the user's search history information within the second time period, and the candidate related entities of the current query are input into the entity ranking model, and the related entities recommended for the current query are determined according to the scores the entity ranking model assigns to the candidate related entities.
- the entity ranking model determines the scores of the candidate related entities according to a first similarity and a second similarity: the first similarity is the similarity between the integration of the vector representation of the current query with the vector representation of the user's search history information within the first time period, and the vector representation of the candidate related entity; the second similarity is the similarity between the integration of the vector representation of the current query with the vector representation of the user's search history information within the second time period, and the vector representation of the candidate related entity.
- the entity ranking model is used to score each candidate related entity of the current query, so as to determine the recommended related entity of the current query according to the score.
- candidate related entities whose scores meet certain requirements may be used as recommendation related entities, and the display positions of the recommendation related entities may be further ranked according to the scores.
- the score meeting certain requirements may include: the score is ranked in the top M, and M is a preset positive integer; or the score exceeds a preset threshold.
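The two selection criteria above (top-M by score, or score above a threshold) and the score-ordered display can be sketched as follows; the function and parameter names are assumptions.

```python
def select_recommendations(scored, top_m=None, threshold=None):
    """Pick recommended entities from (entity, score) pairs: either the
    top-M by score, or all entities whose score exceeds a preset threshold.
    The survivors stay ordered by score for display ranking."""
    ranked = sorted(scored, key=lambda p: p[1], reverse=True)
    if top_m is not None:
        ranked = ranked[:top_m]                       # scores ranked in top M
    if threshold is not None:
        ranked = [(e, s) for e, s in ranked if s > threshold]
    return [e for e, _ in ranked]
```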
- the entity ranking model is pre-trained from training data.
- the training method of the entity ranking model will be described in detail in the second embodiment.
- for the input consisting of the user's current query, the user's search history information within the first time period, the user's search history information within the second time period, and the candidate related entities of the current query, the trained entity ranking model can output a score for each candidate related entity.
- the structure of the entity ranking model can be as shown in Figure 3; it consists of a vector sub-model and a ranking sub-model.
- the current query (search keyword), the user's search history information within the first time period, and the user's search history information within the second time period are used as the input of the vector sub-model;
- the vector sub-model outputs the integration of the vector representation of the current query with the vector representation of the user's search history information within the first time period, and the integration of the vector representation of the current query with the vector representation of the user's search history information within the second time period;
- the output of the vector sub-model and the candidate related entities of the current query are used as the input of the ranking sub-model, which outputs the scores of the candidate related entities.
- the input current query, denoted q_t in the figure, can be encoded by a neural network to obtain the vector representation v_q of the current query;
- the neural network is preferably a BiLSTM (Bidirectional Long Short-Term Memory network).
- denote q_t = [w_1, w_2, ..., w_n]; each word w_i is transformed into a vector representation through a word-vector matrix, and then a forward LSTM and a backward LSTM separately encode the search query q_t into hidden vectors h_fwd and h_bwd; finally h_fwd and h_bwd are concatenated as the vector representation of the search query: v_q = [h_fwd ; h_bwd], where [;] denotes vector concatenation.
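The BiLSTM encoding of q_t can be sketched in numpy as follows (a minimal sketch: the gate ordering, parameter shapes, and the choice of final hidden states as h_fwd and h_bwd are assumptions of this illustration).

```python
import numpy as np

def lstm(xs, Wx, Wh, b):
    """Single-layer LSTM forward pass over a list of word vectors; the
    stacked weights hold the gates in order [input, forget, cell, output].
    Returns the final hidden state."""
    H = Wh.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for x in xs:
        z = Wx.T @ x + Wh.T @ h + b                      # all four gates, (4H,)
        i, f = sig(z[:H]), sig(z[H:2 * H])
        g, o = np.tanh(z[2 * H:3 * H]), sig(z[3 * H:])
        c = f * c + i * g                                # cell-state update
        h = o * np.tanh(c)                               # hidden-state update
    return h

def bilstm_encode(word_vecs, params_fwd, params_bwd):
    """v_q = [h_fwd ; h_bwd]: the forward LSTM reads the words left to
    right, the backward LSTM reads them right to left, and the two final
    hidden states are concatenated."""
    h_fwd = lstm(word_vecs, *params_fwd)
    h_bwd = lstm(word_vecs[::-1], *params_bwd)
    return np.concatenate([h_fwd, h_bwd])
```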
- C_i may include the clicked webpages and/or the clicked related entities corresponding to query q_i;
- each q_i can be encoded by a neural network (in a manner consistent with q_t) to obtain a vector representation v_qi; the clicked webpages in C_i and the clicked related entities in C_i are then also represented as vectors;
- l denotes the number of clicked webpages in C_i, and g denotes the number of clicked related entities in C_i.
- a neural network can be used to encode the title of each clicked webpage to obtain the vector representation of each clicked webpage.
- the vector representations of the entity's identification, name, and entity description can be concatenated and, combined with an offset (bias) parameter, passed through an activation function to obtain the vector representation of each clicked entity.
- This method of determining the vector representation of an entity can effectively solve the OOV (Out-Of-Vocabulary) problem and the ambiguity problem.
- the entity description can be understood as the "description text" of the entity to describe the meaning of an entity.
- the first sentence of the encyclopedia entry corresponding to the entity can be used, or the abstract of the encyclopedia entry corresponding to the entity can be used.
- the function symbol Attention w ( ⁇ , ⁇ ) represents a weighted representation method based on the attention mechanism.
- v a is the model parameter, which is learned during the model training process.
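The attention-based weighting Attention_w can be sketched as follows; scoring each history vector by a dot product with the learned parameter v_a is an assumption of this sketch (the patent only names an attention mechanism with parameter v_a).

```python
import numpy as np

def attention_pool(vectors, v_a):
    """Attention_w(v_a; .): score each history vector against the learned
    parameter v_a, softmax-normalize the scores, and return the weighted
    sum of the vectors."""
    H = np.stack(vectors)                 # (n, dim) history vectors
    scores = H @ v_a                      # one relevance score per vector
    w = np.exp(scores - scores.max())     # subtract max for stability
    w = w / w.sum()                       # attention weights, sum to 1
    return w @ H                          # weighted representation, (dim,)
```

With v_a = 0 every vector gets equal weight, so the result reduces to a plain average of the history vectors.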
- the user's search history information within the second time period represents the user's personalized preference; it is denoted u in the figure and regarded as a user representation;
- the word set of this history is encoded using PV-DBOW (the Distributed Bag of Words version of Paragraph Vector) to obtain the vector representation of the history, denoted v_u in the figure;
- PV-DBOW is a relatively mature processing method that outputs a corresponding vector representation for an input word set.
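A toy PV-DBOW trainer is sketched below to show the idea: each document's paragraph vector is trained to predict the words of that document through a softmax, ignoring word order. This is an illustrative reimplementation (in practice one would use an off-the-shelf implementation such as gensim's Doc2Vec with dm=0); all names and hyperparameters here are assumptions.

```python
import numpy as np

def train_pv_dbow(docs, dim=8, epochs=100, lr=0.1, seed=0):
    """Train paragraph vectors ("v_u") that predict their document's words
    via a softmax over the vocabulary. Returns the paragraph vectors and
    the last epoch's average cross-entropy loss."""
    rng = np.random.default_rng(seed)
    vocab = sorted({w for d in docs for w in d})
    widx = {w: i for i, w in enumerate(vocab)}
    D = rng.normal(size=(len(docs), dim))            # paragraph vectors
    W = 0.1 * rng.normal(size=(dim, len(vocab)))     # softmax weights
    avg_loss = float("inf")
    for _ in range(epochs):
        losses = []
        for d, doc in enumerate(docs):
            for w in doc:
                logits = D[d] @ W
                p = np.exp(logits - logits.max())
                p /= p.sum()
                losses.append(-np.log(p[widx[w]]))
                grad = p
                grad[widx[w]] -= 1.0                 # dCE/dlogits = p - onehot
                D[d] -= lr * (W @ grad)              # update paragraph vector
                W -= lr * np.outer(D[d], grad)       # update softmax weights
        avg_loss = float(np.mean(losses))
    return D, avg_loss
```

After training, the per-word loss should fall below the uniform baseline log(|vocab|), indicating the paragraph vectors have absorbed each document's word distribution.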
- v_q and v_c are concatenated to obtain an integrated vector representation v_cq, and v_q and v_u are concatenated to obtain an integrated vector representation v_uq; other vector integration methods can also be used;
- the vector sub-model thus outputs two integrated vector representations: v_cq and v_uq.
- v cq , v uq and candidate related entities e are input to the ranking sub-model.
- the ranking sub-model uses two similarities to rank candidate related entities e: the first similarity and the second similarity.
- the first similarity is the similarity between the integration of the vector representation of the current query and the vector representation of the user's search history information within the first time period and the vector representation of the candidate related entities.
- the first similarity P(e|S_t, q_t) can be calculated as the similarity (for example, the inner product) between v_s and v_e;
- v_e denotes the vector representation of the related entity e, obtained by concatenating the vector representations of the entity's identification, name, and description and applying an activation function with an offset parameter.
- v_s is the vector representation obtained by mapping v_cq through a fully connected (FC) layer, which can be calculated as: v_s = σ(W_s · v_cq + b_s), where W_s is a parameter matrix, b_s is the offset (bias) vector, and σ(·) is the activation function; W_s and b_s are model parameters learned during model training.
- the second similarity is the similarity between the integration of the vector representation of the current query with the vector representation of the user's search history information within the second time period, and the vector representation of the candidate related entity; P(e|u, q_t) can be calculated as the similarity (for example, the inner product) between v_p and v_e;
- v_p is the vector representation obtained by mapping v_uq through a fully connected (FC) layer, which can be calculated as: v_p = σ(W_u · v_uq + b_u), where W_u is a parameter matrix, b_u is the offset (bias) vector, and σ(·) is the activation function; W_u and b_u are model parameters learned during model training.
- when scoring the candidate related entity e, the ranking sub-model can combine the above first similarity and second similarity, for example as: score(e) = σ(W_f · [P(e|S_t, q_t); P(e|u, q_t)] + b_f), where W_f is a parameter matrix, b_f is the offset value, and σ(·) is the activation function; W_f and b_f are model parameters learned during model training.
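The two-branch scoring can be sketched end to end as follows. The inner product as the similarity, sigmoid activations, and a weight vector `w_f` (producing a scalar score) in place of the matrix W_f are assumptions of this sketch; only the parameter roles (W_s, b_s, W_u, b_u, b_f) come from the text.

```python
import numpy as np

def score_entity(v_cq, v_uq, v_e, params):
    """Score a candidate related entity e from the two integrated vectors:
    map each through its FC layer, take the similarity with v_e, then
    combine the two similarities into a single score."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    W_s, b_s, W_u, b_u, w_f, b_f = params
    v_s = sig(W_s @ v_cq + b_s)      # v_s = sigma(W_s . v_cq + b_s)
    v_p = sig(W_u @ v_uq + b_u)      # v_p = sigma(W_u . v_uq + b_u)
    sim1 = float(v_s @ v_e)          # first similarity,  P(e | S_t, q_t)
    sim2 = float(v_p @ v_e)          # second similarity, P(e | u, q_t)
    return float(sig(w_f @ np.array([sim1, sim2]) + b_f))
```

The sigmoid keeps the final score in (0, 1), so candidate entities can be ranked directly by it.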
- the determined recommendation related entities are displayed in the search result page.
- the recommendation related entity may be displayed in the search result page of the current query.
- the recommended related entities are displayed in the right area of the search result page of the current query.
- it can also be displayed in other positions on the search result page, and the present application does not limit the display position of the recommended related entity in the search result page.
- suppose the user enters the current query "Chicago", has also searched for "Titanic" and "Moulin Rouge" in the same session, and has searched for and clicked on a lot of movie-related content within the past three months.
- in this case, the candidate related entities related to movies receive higher scores, so they will be taken as the recommended related entities.
- in this way, the ambiguity in the current query can be eliminated: the recommendation considers both the contextual information and the personalized preferences reflected by the user's long-term search history, so that related-entity recommendation better matches the user's actual needs.
- Fig. 4 is a flow chart of the method for training an entity ranking model provided in the second embodiment of the application. As shown in Fig. 4, the method may include the following steps:
- the search log is used to obtain training samples.
- the acquired training samples include: a sample query, the user's search history information within the first time period before entering the sample query, the user's search history information within the second time period before entering the sample query, the search results corresponding to the sample query, and the clicked status of the search results; the second duration is greater than the first duration.
- the search logs for a continuous period of time are obtained, and the above-mentioned training samples are extracted from them.
- the search history information within the first time period before the user enters the sample query may include previous user behaviors in the same search session as the sample query, for example the query sequence before the sample query in the same session and the clicked search result corresponding to each query in the query sequence;
- as a preferred embodiment, in addition to the clicked search results used as positive examples in the training samples, search results that were not clicked can also be collected as negative examples.
- the search result can be a webpage in a search result page or a related entity.
- the search history information within the second time period before the user enters the sample query may include all of the user's search behaviors in all sessions during that period, including the entered queries, the webpages clicked on the search result pages, and the related entities clicked.
- the ranking model is trained using the training samples to achieve a preset training goal.
- the input of the ranking model includes the sample query, the search history information of the user in the first time period before entering the sample query, the search history information of the user in the second time period before entering the sample query, and the related entities corresponding to the sample query.
- the output includes a score for each related entity.
- the ranking model’s scoring of each related entity is determined based on the first similarity and the second similarity.
- the first similarity is the similarity between the integration of the vector representation of the sample query with the vector representation of the search history information within the first time period, and the vector representation of the related entity corresponding to the sample query; the second similarity is the similarity between the integration of the vector representation of the sample query with the vector representation of the search history information within the second time period, and the vector representation of the related entity corresponding to the sample query.
- the training objectives include: maximizing the correlation between the clicked status of the related entity and the score of the related entity.
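The two-similarity scoring described above can be illustrated with a minimal sketch. The cosine similarity and the equal-weight combination are assumptions for illustration only; in the patent, the combination is learned during training:

```python
import numpy as np

def cosine(a, b):
    # cosine similarity between two vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def score_entity(v_cq, v_uq, v_e):
    # First similarity: integrated (query + short-term history) vs. entity.
    sim1 = cosine(v_cq, v_e)
    # Second similarity: integrated (query + long-term history) vs. entity.
    sim2 = cosine(v_uq, v_e)
    # Equal weighting is an illustrative stand-in for the learned combination.
    return 0.5 * sim1 + 0.5 * sim2

v_cq = np.array([0.2, 0.5, 0.1])    # toy integrated short-term vector
v_uq = np.array([0.3, 0.4, 0.2])    # toy integrated long-term vector
v_e  = np.array([0.25, 0.45, 0.15]) # toy related-entity vector
s = score_entity(v_cq, v_uq, v_e)
```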
- the ranking model may be trained as shown in FIG. 3: the sample query is taken as q_t in FIG. 3, the user's search history information in the first time period before the sample query as S_t, and the user's search history information in the second time period before the sample query as u. After processing by the vector sub-model, two integrated vector representations are obtained: v_cq and v_uq, where v_cq is the integration of the vector representation v_q of q_t and the vector representation v_c of S_t, and v_uq is the integration of v_q and the vector representation v_u of u.
- for details of this processing, refer to the relevant description in the first embodiment, which will not be repeated here.
- taking v_cq, v_uq, and the related entity e as input, the sorting sub-model obtains the first similarity P(e|S_t, q_t) and the second similarity P(e|u, q_t), and scores e accordingly.
- the model parameters can be updated iteratively through pairwise ranking learning and stochastic gradient descent.
- the training target of the ranking sub-model may adopt a form of minimizing a preset loss function.
- the loss function Loss can be determined by the negative log likelihood function of the clicked entity in the training sample, for example:
- e+ is the clicked related entity of the sample query;
- the summation in the loss runs over the set of training samples used for entity ranking;
- E_t is the set of related entities of the sample query;
- γ is a preset parameter;
- the iteration stop condition may include, but is not limited to: Loss_e converges, Loss_e is less than a preset threshold, or the number of iterations reaches a preset threshold.
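The negative log-likelihood loss described above can be sketched as follows. The softmax form with a sharpening parameter gamma is an assumption consistent with the surrounding description (the clicked entity e+ against all candidates in E_t); the function name is hypothetical:

```python
import math

def entity_loss(scores, clicked_idx, gamma=10.0):
    # Negative log-likelihood of the clicked entity under a softmax
    # over the scores of all candidate entities, sharpened by gamma:
    # Loss_e = -log( exp(gamma * s[clicked]) / sum_e exp(gamma * s[e]) )
    exp_scores = [math.exp(gamma * s) for s in scores]
    z = sum(exp_scores)
    return -math.log(exp_scores[clicked_idx] / z)

scores = [0.9, 0.2, 0.1]            # ranking-model scores for candidates in E_t
loss = entity_loss(scores, clicked_idx=0)
```

Minimizing this loss pushes the clicked entity's score above the scores of the non-clicked candidates, which realizes the stated training objective.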
- an entity ranking model is obtained by using the trained ranking model.
- the model shown in FIG. 3 can be used as the entity sorting model.
- the entity sorting model includes a vector sub-model and a sorting sub-model.
- the present application further provides a preferred training method in the third embodiment: a multi-task learning framework in which the entity ranking model is obtained through joint training of multi-task models.
- the entity ranking model tends to recommend entities based on the most frequently mentioned meaning of the query, because the entity click data corresponding to rarely mentioned meanings is very sparse.
- most search engines provide users with diverse search results; therefore, when searching, users can find results matching their own information needs more easily in web search results than in entity recommendation results.
- the overall model may include a shared vector sub-model, a first ranking sub-model, and a second ranking sub-model.
- the first sorting sub-model is the sorting sub-model described in the second embodiment and serves as the main task, for ranking related entities; the second ranking sub-model serves as an auxiliary task, for ranking web pages. This kind of multi-task training can exploit the correlation between the task models to improve the scalability and ranking effect of the model.
- the training sample further includes the webpage search result corresponding to the sample query and the clicked status corresponding to the webpage.
- this model can be as shown in Figure 5.
- the sample query is taken as q_t in FIG. 5, the user's search history information in the first time period before the sample query as S_t, and the user's search history information in the second time period before the sample query as u. After processing by the vector sub-model, two integrated vector representations are obtained: v_cq and v_uq, where v_cq is the integration of the vector representation v_q of q_t and the vector representation v_c of S_t, and v_uq is the integration of v_q and the vector representation v_u of u.
- for details of this processing, refer to the relevant description in the first embodiment, which will not be repeated here.
- the first sorting sub-model is similar to the sorting sub-model in the second embodiment: v_cq, v_uq, and the related entity e are used as its input, it obtains the first similarity P(e|S_t, q_t) and the second similarity P(e|u, q_t) according to the method described in the first embodiment, and it then scores e.
- the v_cq and v_uq output by the vector sub-model are also used as input of the second ranking sub-model, together with the web search result d (hereinafter referred to as the candidate web page).
- the second ranking sub-model uses two similarities to rank d: the first similarity and the second similarity.
- the first similarity in the second ranking sub-model is the similarity between the integration of the vector representation of the current query and the vector representation of the user's search history information within the first time period and the vector representation of the candidate web page.
- the first similarity P(d|S_t, q_t) can be calculated using the following formula:
- v_d represents the vector representation of the candidate web page d;
- v_r is a vector representation obtained by mapping v_cq through a fully connected layer (FC layer), which can be calculated using the following formula:
- W_d is the parameter matrix, b_d is the offset vector, and σ(·) is the activation function; W_d and b_d are model parameters learned during model training.
- the second similarity is the similarity between the integration of the vector representation of the current query and the vector representation of the user's search history information within the second time period and the vector representation of the candidate webpage.
- the second similarity P(d|u, q_t) can be calculated using the following formula:
- v_m is a vector representation obtained by mapping v_uq through a fully connected layer (FC layer), which can be calculated using the following formula:
- W_m is the parameter matrix, b_m is the offset vector, and σ(·) is the activation function; W_m and b_m are model parameters learned during model training.
- the second ranking sub-model can comprehensively use the above-mentioned first similarity and second similarity when scoring the candidate webpage d, which can specifically be:
- W_g is the parameter matrix, b_g is the offset value, and σ(·) is the activation function; W_g and b_g are model parameters learned during model training.
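The FC mappings and the combined scoring of a candidate web page d described above can be sketched as follows. The dimensions, the use of tanh for the hidden activation σ(·), the sigmoid output, and the random parameter values are all illustrative assumptions; in the patent, W_d, b_d, W_m, b_m, W_g, and b_g are learned during training:

```python
import numpy as np

rng = np.random.default_rng(0)

def fc(v, W, b):
    # fully connected layer; tanh stands in for the activation sigma(.)
    return np.tanh(W @ v + b)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

dim = 4
W_d, b_d = rng.normal(size=(dim, dim)), rng.normal(size=dim)  # learned in training
W_m, b_m = rng.normal(size=(dim, dim)), rng.normal(size=dim)  # learned in training
w_g, b_g = rng.normal(size=2), 0.1                            # learned gate params

v_cq = rng.normal(size=dim)        # integrated query + short-term history
v_uq = rng.normal(size=dim)        # integrated query + long-term history
v_d  = rng.normal(size=dim)        # candidate web page representation

v_r = fc(v_cq, W_d, b_d)           # map v_cq into the web-page space
v_m = fc(v_uq, W_m, b_m)           # map v_uq into the web-page space
sim1 = cosine(v_r, v_d)            # first similarity, P(d | S_t, q_t)
sim2 = cosine(v_m, v_d)            # second similarity, P(d | u, q_t)
# combine the two similarities into one score with a sigmoid gate
score = 1.0 / (1.0 + np.exp(-(w_g @ np.array([sim1, sim2]) + b_g)))
```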
- in each iteration, one of the first ranking sub-model and the second ranking sub-model can be selected for training, either randomly or alternately; the output of the selected sub-model is then used to update the model parameters of the selected sub-model and the shared vector sub-model.
- for example, when the first ranking sub-model is selected, the training samples are used to obtain P(e|S_t, q_t) and P(e|u, q_t), and the resulting loss is used to update the model parameters of the shared vector sub-model and the first ranking sub-model; when the second ranking sub-model is selected, the training samples are used to obtain P(d|S_t, q_t) and P(d|u, q_t), and the resulting loss is used to update the model parameters of the shared vector sub-model and the second ranking sub-model.
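The random and alternating selection strategies described above can be sketched as follows. The function name and task labels are hypothetical, and the actual forward pass and parameter updates are only indicated by a comment:

```python
import itertools
import random

def joint_train(steps, mode="alternate", seed=0):
    # Returns the sequence of sub-models trained at each step, illustrating
    # the random vs. alternating selection strategies for joint training.
    rnd = random.Random(seed)
    toggle = itertools.cycle(["entity", "webpage"])
    order = []
    for _ in range(steps):
        if mode == "alternate":
            task = next(toggle)       # alternate between the two sub-models
        else:
            task = rnd.choice(["entity", "webpage"])  # random selection
        order.append(task)
        # here: forward pass of the selected sub-model, compute its loss,
        # update its parameters and the shared vector sub-model's parameters
    return order
```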
- d+ is the clicked webpage of the sample query;
- the corresponding training-sample set is the set of samples used for web page ranking, i.e., the training samples of the second ranking sub-model;
- D_t is the set of candidate web pages of the sample query, and γ is a preset parameter.
- an integrated loss function can be used, for example Loss = Loss_e + λ·Loss_d, where λ is a hyperparameter that can be set manually as an experimental or empirical value.
- the loss is calculated in each iteration and then used to update the model parameters of the shared vector sub-model, the first sorting sub-model, and the second sorting sub-model, until the training goal is reached: for example, the loss converges, the loss is less than a preset threshold, or the number of iterations reaches a preset threshold.
- the entity ranking model is obtained from the shared vector sub-model and the first ranking sub-model. That is, although multi-task training is used (the second ranking sub-model assists the training of the first ranking sub-model), the final entity ranking model used for recommending related entities does not use the second ranking sub-model.
- a related web page ranking model can also be obtained, that is, a related web page ranking model can be obtained by using the shared sub-model and the second ranking sub-model.
- the first ranking sub-model assists the training of the second ranking sub-model, that is, the recommendation of related entities assists the ranking of related web pages.
- given the current query, the user's search history information in the first time period, the user's search history information in the second time period, and the set of candidate related web pages of the current query, the related web page ranking model obtained in this way outputs scores for the web pages in the set; these scores can then serve as a basis for selecting and ranking the related web pages of the current query to be displayed.
- the foregoing embodiment is described by taking the related entities and related webpages as the first-type search results and the second-type search results respectively as examples.
- this application is not limited to these two types of search results, and other types of search results can be used as the first type of search result and the second type of search result.
- FIG. 6 is a structural diagram of an apparatus for determining search results provided by an embodiment of the application.
- the apparatus may include: an acquisition unit 01 and a determination unit 02, and may also include a display unit 03.
- the main functions of each component are as follows:
- the obtaining unit 01 is used to obtain the user's current query, the user's search history information in the first time period, the user's search history information in the second time period, and the candidate search results of the current query.
- the determining unit 02 is used to input the current query, the user's search history information in the first time period, the user's search history information in the second time period, and the candidate search results of the current query into the search result ranking model, and to determine the search result corresponding to the current query according to the scores given by the search result ranking model to the candidate search results; the second duration is greater than the first duration.
- the search result ranking model determines the score of the candidate results based on the first similarity and the second similarity.
- the first similarity is the similarity between the integration of the vector representation of the current query and the vector representation of the user's search history information within the first time period, and the vector representation of the candidate search result;
- the second similarity is the similarity between the integration of the vector representation of the current query and the vector representation of the user's search history information within the second time period and the vector representation of the candidate search results.
- the search history information of the user in the first time period includes: the query sequence before the current query in the same session and the clicked search result corresponding to each query in the query sequence.
- the vector representation of the user's search history information within the first time period is obtained as follows: the vector representation of each query in the query sequence and the vector representation of the clicked search result corresponding to each query are weighted using an attention mechanism, yielding the vector representation of the user's search history information in the first time period.
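The attention-based weighting above can be illustrated with a minimal sketch. Dot-product attention against the current query vector is an assumption, since the text does not fix the attention form here; the function name is hypothetical:

```python
import numpy as np

def attention_pool(v_q, history_vecs):
    # Weight each history vector (a query or clicked-result representation)
    # by its softmax-normalized dot product with the current query vector,
    # then sum them into one pooled short-term-history vector.
    H = np.stack(history_vecs)             # (n, dim)
    logits = H @ v_q                       # relevance of each history item
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()               # softmax attention weights
    return weights @ H                     # (dim,) pooled vector

v_q = np.array([1.0, 0.0])                         # current query vector
history = [np.array([0.9, 0.1]), np.array([0.0, 1.0])]
v_c = attention_pool(v_q, history)
```

History items more similar to the current query receive larger weights, so the pooled vector leans toward the session context that matters for disambiguation.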
- the user's search history information in the second time period may include the queries searched by the user in the second time period and the search results clicked (for example, clicked webpages and related entities).
- the vector representation of the user's search history information in the second time period is obtained by: obtaining the set of queries and the set of clicked search results of the user within the second time period; performing word segmentation on the query set and the search result set and taking the union to obtain a word set; and encoding the word set using PV-DBOW to obtain the vector representation of the user's search history information in the second time period.
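The segmentation-and-union step above can be sketched as follows. Whitespace splitting stands in for a real Chinese word segmenter, and the resulting word set would then be encoded with a PV-DBOW model (for example, gensim's Doc2Vec with dm=0), which is omitted here; all names are hypothetical:

```python
def build_word_set(queries, clicked_results, segment):
    # Segment every query and clicked result, then take the union of all
    # tokens to form the word set that is fed to the PV-DBOW encoder.
    words = set()
    for text in list(queries) + list(clicked_results):
        words.update(segment(text))
    return words

# whitespace segmentation stands in for a real word segmenter
segment = str.split
queries = ["chicago weather", "chicago movie cast"]
clicked = ["chicago weather forecast", "chicago 2002 film"]
word_set = build_word_set(queries, clicked, segment)
```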
- the aforementioned candidate search results may include related webpages or related entities.
- the vector representation of related entities is: an integrated vector representation of the identification, name, and entity description of the related entities.
- the specific process by which the determining unit 02 uses the search result ranking model to determine search results can be found in the relevant description of the first embodiment, and is not detailed here.
- the display unit 03 is used to display the search result corresponding to the current query in the search result page, that is, the search result corresponding to the current query is included in the search result page and sent to the browser or client.
- FIG. 7 is a structural diagram of an apparatus for training a search result ranking model provided by an embodiment of the application.
- the apparatus may include: a sample acquisition unit 11, a model training unit 12, and a model acquisition unit 13.
- the main functions of each component are as follows:
- the sample obtaining unit 11 is used to obtain training samples by using the search log.
- the training samples include: a sample query, the search history information in the first time period before the user enters the sample query, the search history information in the second time period before the user enters the sample query, the search results corresponding to the sample query, and the clicked status of the search results.
- the model training unit 12 is used to train the ranking model with training samples to achieve the preset training goals;
- the input of the ranking model includes the sample query, the user's search history information in the first time period before entering the sample query, the user's search history information in the second time period before entering the sample query, and the search results corresponding to the sample query;
- the output of the ranking model includes the score of each search result; the ranking model’s score of each search result is determined based on the first similarity and the second similarity.
- the first similarity is the similarity between the integration of the vector representation of the sample query and the vector representation of the search history information within the first time period, and the vector representation of the search result; the second similarity is the similarity between the integration of the vector representation of the sample query and the vector representation of the search history information within the second time period, and the vector representation of the search result; the training objective includes: maximizing the correlation between the clicked status of the search results and the scores of the search results.
- the model obtaining unit 13 is configured to obtain the search result ranking model by using the ranking model obtained through training.
- the search history information in the first time period before the user inputs the sample query includes: the query sequence before the sample query in the same session and the clicked search result corresponding to each query in the query sequence.
- the vector representation of the search history information in the first time period before the user enters the sample query is obtained as follows: the vector representation of each query in the query sequence and the vector representation of the clicked search result corresponding to each query are weighted using an attention mechanism, yielding the vector representation of the search history information in the first time period before the user enters the sample query.
- the search history information of the user within the second time period before inputting the sample query includes: the query searched by the user during the second time period before inputting the sample query and the search result clicked.
- the vector representation of the search history information in the second time period before the user enters the sample query is obtained by: obtaining the set of queries searched and the set of search results clicked by the user in the second time period before entering the sample query; performing word segmentation on the query set and the search result set and taking the union to obtain a word set; and encoding the word set using PV-DBOW to obtain the vector representation of the search history information in the second time period before the user enters the sample query.
- the specific process by which the model training unit 12 trains the search result ranking model can be found in the relevant description of the second embodiment, and is not detailed here.
- the above-mentioned search results include: first-type search results and second-type search results.
- the sorting model includes: a shared vector sub-model, a first sorting sub-model, and a second sorting sub-model.
- the model training unit 12 is specifically used to input the sample query, the user's search history information in the first time period before entering the sample query, the user's search history information in the second time period before entering the sample query, and the search results corresponding to the sample query into the shared vector sub-model, obtaining from the shared vector sub-model the integration of the vector representation of the sample query and the vector representation of the search history information in the first time period, as well as the integration of the vector representation of the sample query and the vector representation of the search history information in the second time period.
- the model acquisition unit 13 is specifically configured to obtain the search result ranking model by using one of the first ranking sub-model and the second ranking sub-model and the shared vector sub-model after the training of the model training unit is completed.
- when performing joint training on the first ranking sub-model and the second ranking sub-model, the model training unit 12 specifically executes:
- training both the first ranking sub-model and the second ranking sub-model each time, and updating the model parameters of all sub-models using the outputs of the first ranking sub-model and the second ranking sub-model.
- the specific process of training the search result ranking model by the model training unit 12 in this manner can refer to the record in the third embodiment, which is not described in detail here.
- the present application also provides an electronic device and a readable storage medium.
- FIG. 8 it is a block diagram of an electronic device according to a method for determining a search result according to an embodiment of the present application.
- Electronic devices are intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers.
- Electronic devices can also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices.
- the components shown herein, their connections and relationships, and their functions are merely examples, and are not intended to limit the implementation of the application described and/or required herein.
- the electronic device includes: one or more processors 801, a memory 802, and interfaces for connecting various components, including a high-speed interface and a low-speed interface.
- the various components are connected to each other using different buses, and can be installed on a common motherboard or installed in other ways as needed.
- the processor may process instructions executed in the electronic device, including instructions stored in or on the memory to display graphical information of the GUI on an external input/output device (such as a display device coupled to an interface).
- in other embodiments, multiple processors and/or multiple buses can be used together with multiple memories, if desired.
- multiple electronic devices can be connected, and each device provides part of the necessary operations (for example, as a server array, a group of blade servers, or a multi-processor system).
- a processor 801 is taken as an example.
- the memory 802 is a non-transitory computer-readable storage medium provided by this application.
- the memory stores instructions executable by at least one processor, so that the at least one processor executes the method for determining search results provided in this application.
- the non-transitory computer-readable storage medium of the present application stores computer instructions, and the computer instructions are used to make a computer execute the method for determining search results provided by the present application.
- the memory 802 as a non-transitory computer-readable storage medium, can be used to store non-transitory software programs, non-transitory computer executable programs and modules, such as program instructions/modules corresponding to the method for determining search results in the embodiments of the present application.
- the processor 801 executes various functional applications and data processing of the server by running non-transient software programs, instructions, and modules stored in the memory 802, that is, realizing the method of determining search results in the foregoing method embodiments.
- the memory 802 may include a program storage area and a data storage area.
- the program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created according to the use of the electronic device.
- the memory 802 may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or other non-transitory solid-state storage devices.
- the memory 802 may optionally include a memory remotely provided with respect to the processor 801, and these remote memories may be connected to the electronic device through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
- the electronic device of the method for determining search results may further include: an input device 803 and an output device 804.
- the processor 801, the memory 802, the input device 803, and the output device 804 may be connected through a bus or in other ways. In FIG. 8, the connection through a bus is taken as an example.
- the input device 803 can receive input digital or character information and generate key signal input related to user settings and function control of the electronic device; examples include a touch screen, a keypad, a mouse, a track pad, a touch pad, an indicator stick, one or more mouse buttons, a trackball, and a joystick.
- the output device 804 may include a display device, an auxiliary lighting device (for example, LED), a tactile feedback device (for example, a vibration motor), and the like.
- the display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some embodiments, the display device may be a touch screen.
- Various implementations of the systems and techniques described herein can be realized in digital electronic circuit systems, integrated circuit systems, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that can be executed and/or interpreted on a programmable system including at least one programmable processor, which may be dedicated or general-purpose, and which can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
- The terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, device, and/or apparatus (for example, magnetic disks, optical disks, memory, or programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including machine-readable media that receive machine instructions as machine-readable signals. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
- the systems and techniques described here can be implemented on a computer that has: a display device for displaying information to the user (for example, a CRT (cathode ray tube) or LCD (liquid crystal display) monitor); and a keyboard and a pointing device (for example, a mouse or a trackball) through which the user can provide input to the computer.
- Other types of devices can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (for example, visual feedback, auditory feedback, or tactile feedback), and input from the user can be received in any form (including acoustic input, voice input, or tactile input).
- the systems and technologies described herein can be implemented in a computing system that includes back-end components (for example, as a data server), or middleware components (for example, an application server), or front-end components (for example, a user computer with a graphical user interface or a web browser through which the user can interact with the implementations described herein), or any combination of such back-end, middleware, and front-end components.
- the components of the system can be connected to each other through any form or medium of digital data communication (for example, a communication network). Examples of communication networks include: local area network (LAN), wide area network (WAN), and the Internet.
- the computer system can include clients and servers.
- the client and server are generally far away from each other and usually interact through a communication network.
- the relationship between the client and the server is generated by computer programs that run on the corresponding computers and have a client-server relationship with each other.
- the method, apparatus, device, and computer storage medium provided by the embodiments of the present application can have the following advantages:
- this application comprehensively considers both the search context information reflected in the user's short-term search history and the personalized preferences reflected in the long-term search history when determining search results, thereby improving the accuracy of the search results and making them better match the user's search needs.
- in entity recommendation, this can eliminate the ambiguity of the current query and provide more accurate related entities that meet the user's search needs.
- a multi-task learning framework is used to implement auxiliary training between different types of search results; for example, related web pages assist the training of the ranking model for related entities. In this way, the correlation between different tasks is used to improve the scalability and accuracy of the model.
- the multi-task model learning framework adopted in this application can alleviate the problem of sparse click data in the main task with the help of auxiliary tasks (that is, the second ranking sub-model is used as the auxiliary task of the first ranking sub-model).
- the multi-task model framework in this application implements knowledge transfer by sharing vector representations, and the joint learning of multiple related tasks can improve the generalization ability of the model. After experimental verification, a better training effect is obtained.
Claims (22)
- A method for determining search results, characterized in that the method comprises: obtaining a user's current query, the user's search history information within a first time period, the user's search history information within a second time period, and candidate search results of the current query, inputting them into a search result ranking model, and determining the search result corresponding to the current query according to the scores given by the search result ranking model to the candidate search results, the second time period being longer than the first time period; wherein the score given by the search result ranking model to a candidate result is determined according to a first similarity and a second similarity, the first similarity being the similarity between the integration of the vector representation of the current query and the vector representation of the user's search history information within the first time period and the vector representation of the candidate search result, and the second similarity being the similarity between the integration of the vector representation of the current query and the vector representation of the user's search history information within the second time period and the vector representation of the candidate search result.
- The method according to claim 1, wherein the user's search history information within the first time period comprises: the query sequence preceding the current query in the same session and the clicked search result corresponding to each query in the query sequence; and the user's search history information within the second time period comprises: the queries searched and the search results clicked by the user within the second time period.
- The method according to claim 2, wherein the vector representation of the user's search history information within the first time period is obtained by: weighting the vector representation of each query in the query sequence and the vector representation of the clicked search result corresponding to each query using an attention mechanism, to obtain the vector representation of the user's search history information within the first time period.
- The method according to claim 2, wherein the vector representation of the user's search history information within the second time period is obtained by: obtaining the set of queries searched and the set of search results clicked by the user within the second time period; performing word segmentation on the query set and the search result set and taking the union to obtain a word set; and encoding the word set using the Distributed Bag of Words version of Paragraph Vector (PV-DBOW) to obtain the vector representation of the user's search history information within the second time period.
- The method according to claim 1, wherein the candidate search results comprise related web pages or related entities; and the vector representation of a related entity is an integrated vector representation of the identifier, name, and entity description of the related entity.
- The method according to claim 1 or 5, further comprising: displaying the search result corresponding to the current query in a search result page.
- A method for training a search result ranking model, characterized in that the method comprises: obtaining training samples from a search log, the training samples comprising: a sample query, the user's search history information within a first time period before entering the sample query, the user's search history information within a second time period before entering the sample query, the search results corresponding to the sample query, and the clicked status of the search results; training a ranking model with the training samples to reach a preset training goal, wherein the input of the ranking model comprises the sample query, the user's search history information within the first time period before entering the sample query, the user's search history information within the second time period before entering the sample query, and the search results corresponding to the sample query, and the output of the ranking model comprises a score for each search result; the ranking model's score for each search result is determined according to a first similarity and a second similarity, the first similarity being the similarity between the integration of the vector representation of the sample query and the vector representation of the search history information within the first time period and the vector representation of the search result, and the second similarity being the similarity between the integration of the vector representation of the sample query and the vector representation of the search history information within the second time period and the vector representation of the search result; the training goal comprises: maximizing the correlation between the clicked status of the search results and the scores of the search results; and obtaining the search result ranking model from the trained ranking model.
- The method according to claim 7, wherein the user's search history information within the first time period before entering the sample query comprises: the query sequence preceding the sample query in the same session and the clicked search result corresponding to each query in the query sequence; and the user's search history information within the second time period before entering the sample query comprises: the queries searched and the search results clicked by the user within the second time period before entering the sample query.
- The method according to claim 8, wherein the vector representation of the user's search history information within the first time period before entering the sample query is obtained by: weighting the vector representation of each query in the query sequence and the vector representation of the clicked search result corresponding to each query using an attention mechanism, to obtain the vector representation of the user's search history information within the first time period.
- The method according to claim 8, wherein the vector representation of the user's search history information within the second time period before entering the sample query is obtained by: obtaining the set of queries searched and the set of search results clicked by the user within the second time period before entering the sample query; performing word segmentation on the query set and the search result set and taking the union to obtain a word set; and encoding the word set using the Distributed Bag of Words version of Paragraph Vector (PV-DBOW) to obtain the vector representation of the user's search history information within the second time period before entering the sample query.
- The method according to claim 7, wherein the search results comprise related web pages or related entities; and the vector representation of a related entity is an integrated vector representation of the identifier, name, and entity description of the related entity.
- The method according to any one of claims 7 to 11, wherein the search results comprise: first-type search results and second-type search results, and the ranking model comprises: a shared vector sub-model, a first ranking sub-model, and a second ranking sub-model; the sample query, the user's search history information within the first time period before entering the sample query, the user's search history information within the second time period before entering the sample query, and the search results corresponding to the sample query are input into the shared vector sub-model, to obtain, as output by the shared vector sub-model, the integration of the vector representation of the sample query and the vector representation of the search history information within the first time period, as well as the integration of the vector representation of the sample query and the vector representation of the search history information within the second time period; the output of the shared vector sub-model and the first-type search results of the sample query are input into the first ranking sub-model to obtain scores for the first-type search results; the output of the shared vector sub-model and the second-type search results of the sample query are input into the second ranking sub-model to obtain scores for the second-type search results; the first ranking sub-model and the second ranking sub-model are jointly trained to reach a preset training goal, the training goal comprising: maximizing the correlation between the clicked status of the first-type search results and the scores of the first-type search results, and maximizing the correlation between the clicked status of the second-type search results and the scores of the second-type search results; and after training, the search result ranking model is obtained from one of the first ranking sub-model and the second ranking sub-model together with the shared vector sub-model.
- The method according to claim 12, wherein jointly training the first ranking sub-model and the second ranking sub-model comprises: in each training iteration, randomly selecting one of the first ranking sub-model and the second ranking sub-model for training, and updating the model parameters of the selected sub-model and the shared vector sub-model with the output of the selected sub-model; or, in each training iteration, alternately selecting one of the first ranking sub-model and the second ranking sub-model for training, and updating the model parameters of the selected sub-model and the shared vector sub-model with the output of the selected sub-model; or, in each training iteration, training both the first ranking sub-model and the second ranking sub-model, and updating the model parameters of all sub-models with the outputs of the first ranking sub-model and the second ranking sub-model.
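The three joint-training schedules enumerated in this claim (random selection, alternating selection, training both) can be sketched as a selection helper that decides which sub-model(s) to update at each iteration. The sub-model labels and the `strategy` strings are illustrative names, not from the patent.

```python
import random

def pick_submodels(step, strategy, rng=random.Random(0)):
    """Choose which ranking sub-model(s) to update at one training
    iteration, mirroring the three schedules in the claim:
    'random'    - pick one of the two sub-models at random;
    'alternate' - switch between the two sub-models every iteration;
    'both'      - train both sub-models and update all parameters.
    In every case the shared vector sub-model is also updated with
    the output of whichever sub-model(s) were selected."""
    if strategy == "random":
        return [rng.choice(["first", "second"])]
    if strategy == "alternate":
        return ["first" if step % 2 == 0 else "second"]
    if strategy == "both":
        return ["first", "second"]
    raise ValueError(f"unknown strategy: {strategy}")
```

Because the shared vector sub-model receives gradients regardless of which branch is chosen, all three schedules keep the shared representation consistent across the two ranking tasks.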
- An apparatus for determining search results, comprising: an acquisition unit configured to acquire a user's current query, the user's search history information within a first duration, the user's search history information within a second duration, and candidate search results of the current query; and a determination unit configured to input the current query, the user's search history information within the first duration, the user's search history information within the second duration, and the candidate search results of the current query into a search result ranking model, and to determine the search result corresponding to the current query according to the search result ranking model's scores for the candidate search results, the second duration being longer than the first duration; wherein the search result ranking model's score for a candidate search result is determined from a first similarity and a second similarity, the first similarity being the similarity between the integration of the vector representation of the current query with the vector representation of the user's search history information within the first duration and the vector representation of the candidate search result, and the second similarity being the similarity between the integration of the vector representation of the current query with the vector representation of the user's search history information within the second duration and the vector representation of the candidate search result.
- The apparatus according to claim 14, wherein the user's search history information within the first duration comprises: the query sequence preceding the current query in the same session and the clicked search results corresponding to the queries in the query sequence; and the user's search history information within the second duration comprises: the queries searched and the search results clicked by the user within the second duration.
- The apparatus according to claim 14, wherein the candidate search results comprise related web pages or related entities, and the vector representation of a related entity is an integrated vector representation of the identifier, name and entity description of the related entity.
- An apparatus for training a search result ranking model, comprising: a sample acquisition unit configured to acquire training samples from a search log, the training samples comprising: a sample query, the user's search history information within a first duration before input of the sample query, the user's search history information within a second duration before input of the sample query, search results corresponding to the sample query, and the clicked states of the search results; a model training unit configured to train a ranking model with the training samples until a preset training objective is reached; wherein the input of the ranking model comprises the sample query, the user's search history information within the first duration before input of the sample query, the user's search history information within the second duration before input of the sample query, and the search results corresponding to the sample query, and the output of the ranking model comprises a score for each search result; the ranking model's score for each search result is determined from a first similarity and a second similarity, the first similarity being the similarity between the integration of the vector representation of the sample query with the vector representation of the search history information within the first duration and the vector representation of the search result, and the second similarity being the similarity between the integration of the vector representation of the sample query with the vector representation of the search history information within the second duration and the vector representation of the search result; the training objective comprises maximizing the correlation between the clicked states of the search results and the scores of the search results; and a model acquisition unit configured to obtain the search result ranking model from the trained ranking model.
- The apparatus according to claim 17, wherein the user's search history information within the first duration before input of the sample query comprises: the query sequence preceding the sample query in the same session and the clicked search results corresponding to the queries in the query sequence; and the user's search history information within the second duration before input of the sample query comprises: the queries searched and the search results clicked by the user within the second duration before input of the sample query.
- The apparatus according to claim 17 or 18, wherein the search results comprise first-type search results and second-type search results, and the ranking model comprises a shared vector sub-model, a first ranking sub-model and a second ranking sub-model; the model training unit is specifically configured to: input the sample query, the user's search history information within the first duration before input of the sample query, the user's search history information within the second duration before input of the sample query, and the search results corresponding to the sample query into the shared vector sub-model to obtain, as output of the shared vector sub-model, the integration of the vector representation of the sample query with the vector representation of the search history information within the first duration, and the integration of the vector representation of the sample query with the vector representation of the search history information within the second duration; input the output of the shared vector sub-model and the first-type search results of the sample query into the first ranking sub-model to obtain scores for the first-type search results; input the output of the shared vector sub-model and the second-type search results of the sample query into the second ranking sub-model to obtain scores for the second-type search results; and jointly train the first ranking sub-model and the second ranking sub-model until a preset training objective is reached, the training objective comprising: maximizing the correlation between the clicked states of the first-type search results and the scores of the first-type search results, and maximizing the correlation between the clicked states of the second-type search results and the scores of the second-type search results; and the model acquisition unit is specifically configured to obtain, after the model training unit finishes training, the search result ranking model from one of the first ranking sub-model and the second ranking sub-model together with the shared vector sub-model.
- The apparatus according to claim 19, wherein, when jointly training the first ranking sub-model and the second ranking sub-model, the model training unit specifically: in each training iteration, randomly selects one of the first ranking sub-model and the second ranking sub-model for training, and updates the model parameters of the selected sub-model and the shared vector sub-model with the output of the selected sub-model; or, in each training iteration, alternately selects one of the first ranking sub-model and the second ranking sub-model for training, and updates the model parameters of the selected sub-model and the shared vector sub-model with the output of the selected sub-model; or, in each training iteration, trains both the first ranking sub-model and the second ranking sub-model, and updates the model parameters of all sub-models with the outputs of the first ranking sub-model and the second ranking sub-model.
- An electronic device, comprising: at least one processor; and a memory communicatively connected with the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, enable the at least one processor to perform the method according to any one of claims 1 to 13.
- A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause a computer to perform the method according to any one of claims 1 to 13.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/602,304 US11734373B2 (en) | 2019-12-27 | 2020-05-28 | Method, apparatus, device and computer storage medium for determining search result |
EP20904682.0A EP3937032A4 (en) | 2019-12-27 | 2020-05-28 | Search result determination method, device, apparatus, and computer storage medium |
JP2022506430A JP7379804B2 (ja) | 2019-12-27 | 2020-05-28 | Method, apparatus, device, and computer storage medium for determining search results |
KR1020217039536A KR20220003085A (ko) | 2019-12-27 | 2020-05-28 | Method, apparatus, device and computer recording medium for determining search results |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911373544.1 | 2019-12-27 | ||
CN201911373544.1A CN111177551B (zh) | 2019-12-27 | 2019-12-27 | Method, apparatus, device and computer storage medium for determining search results |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2021128729A1 true WO2021128729A1 (zh) | 2021-07-01 |
WO2021128729A9 WO2021128729A9 (zh) | 2021-11-04 |
Family
ID=70655783
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/092742 WO2021128729A1 (zh) | 2019-12-27 | 2020-05-28 | 确定搜索结果的方法、装置、设备和计算机存储介质 |
Country Status (6)
Country | Link |
---|---|
US (1) | US11734373B2 (zh) |
EP (1) | EP3937032A4 (zh) |
JP (1) | JP7379804B2 (zh) |
KR (1) | KR20220003085A (zh) |
CN (1) | CN111177551B (zh) |
WO (1) | WO2021128729A1 (zh) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111177551B (zh) | 2019-12-27 | 2021-04-16 | 百度在线网络技术(北京)有限公司 | Method, apparatus, device and computer storage medium for determining search results |
CN112163147A (zh) * | 2020-06-09 | 2021-01-01 | 中森云链(成都)科技有限责任公司 | A recommendation method for website session scenarios |
CN111783452B (zh) * | 2020-06-30 | 2024-04-02 | 北京百度网讯科技有限公司 | Model training method, information processing method, apparatus, device and storage medium |
CN111897943A (zh) * | 2020-08-17 | 2020-11-06 | 腾讯科技(深圳)有限公司 | Session record search method, apparatus, electronic device and storage medium |
CN112100480B (zh) * | 2020-09-15 | 2024-07-30 | 北京百度网讯科技有限公司 | Search method, apparatus, device and storage medium |
CN112231545B (zh) * | 2020-09-30 | 2023-12-22 | 北京三快在线科技有限公司 | Method, apparatus, device and storage medium for ranking aggregated-block sets |
CN112632406B (zh) * | 2020-10-10 | 2024-04-09 | 咪咕文化科技有限公司 | Query method and apparatus, electronic device and storage medium |
CN112364235A (zh) * | 2020-11-19 | 2021-02-12 | 北京字节跳动网络技术有限公司 | Search processing method, model training method, apparatus, medium and device |
KR102373486B1 (ko) * | 2021-08-09 | 2022-03-14 | 쿠팡 주식회사 | Method and apparatus for providing brand information |
CN116501951A (zh) * | 2022-01-19 | 2023-07-28 | 南京中兴新软件有限责任公司 | Search result ranking method, search system, and computer-readable storage medium |
CN115238126A (zh) * | 2022-07-28 | 2022-10-25 | 腾讯科技(深圳)有限公司 | Search result re-ranking method, apparatus, device and computer storage medium |
WO2024096370A1 (ko) * | 2022-11-01 | 2024-05-10 | 한화솔루션(주) | Platform server for integrated search of distributed knowledge information in a single window and recommendation of knowledge information based on user search patterns, and service providing system and method including the same |
CN115994664B (zh) * | 2023-01-04 | 2023-08-08 | 浙江中兴慧农信息科技有限公司 | Method, apparatus and device for intelligent recommendation in a shared cold-storage model |
CN117331893B (zh) * | 2023-09-20 | 2024-10-15 | 中移互联网有限公司 | Search method and apparatus, electronic device and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103577489A (zh) * | 2012-08-08 | 2014-02-12 | 百度在线网络技术(北京)有限公司 | Web browsing history query method and apparatus |
CN105677780A (zh) * | 2014-12-31 | 2016-06-15 | Tcl集团股份有限公司 | Extensible user intent mining method and system |
CN109033140A (zh) * | 2018-06-08 | 2018-12-18 | 北京百度网讯科技有限公司 | Method, apparatus, device and computer storage medium for determining search results |
US20190130013A1 (en) * | 2017-10-26 | 2019-05-02 | Salesforce.com. inc. | User clustering based on query history |
CN111177551A (zh) * | 2019-12-27 | 2020-05-19 | 百度在线网络技术(北京)有限公司 | Method, apparatus, device and computer storage medium for determining search results |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8719025B2 (en) * | 2012-05-14 | 2014-05-06 | International Business Machines Corporation | Contextual voice query dilation to improve spoken web searching |
CN105528388B (zh) * | 2015-11-04 | 2020-12-11 | 百度在线网络技术(北京)有限公司 | Search recommendation method and apparatus |
CN106649605B (zh) * | 2016-11-28 | 2020-09-29 | 百度在线网络技术(北京)有限公司 | Method and apparatus for triggering promotion keywords |
JP6744353B2 (ja) | 2017-04-06 | 2020-08-19 | NAVER Corporation | Personalized product recommendation using deep learning |
CN107506402B (zh) * | 2017-08-03 | 2021-06-11 | 北京百度网讯科技有限公司 | Search result ranking method, apparatus, device and computer-readable storage medium |
CN110196904B (zh) * | 2018-02-26 | 2023-04-18 | 佛山市顺德区美的电热电器制造有限公司 | Method and apparatus for acquiring recommendation information, and computer-readable storage medium |
CN108345702A (zh) * | 2018-04-10 | 2018-07-31 | 北京百度网讯科技有限公司 | Entity recommendation method and apparatus |
CN110245289A (zh) * | 2019-05-20 | 2019-09-17 | 中国平安财产保险股份有限公司 | Information search method and related device |
2019
- 2019-12-27 CN CN201911373544.1A patent/CN111177551B/zh active Active

2020
- 2020-05-28 US US17/602,304 patent/US11734373B2/en active Active
- 2020-05-28 WO PCT/CN2020/092742 patent/WO2021128729A1/zh unknown
- 2020-05-28 KR KR1020217039536A patent/KR20220003085A/ko not_active Application Discontinuation
- 2020-05-28 EP EP20904682.0A patent/EP3937032A4/en active Pending
- 2020-05-28 JP JP2022506430A patent/JP7379804B2/ja active Active
Also Published As
Publication number | Publication date |
---|---|
EP3937032A4 (en) | 2022-06-29 |
WO2021128729A9 (zh) | 2021-11-04 |
JP7379804B2 (ja) | 2023-11-15 |
KR20220003085A (ko) | 2022-01-07 |
US11734373B2 (en) | 2023-08-22 |
CN111177551A (zh) | 2020-05-19 |
CN111177551B (zh) | 2021-04-16 |
JP2022540508A (ja) | 2022-09-15 |
EP3937032A1 (en) | 2022-01-12 |
US20220237251A1 (en) | 2022-07-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021128729A1 (zh) | Method, apparatus, device and computer storage medium for determining search results | |
AU2020321751B2 (en) | Neural network system for text classification | |
CN109033140B (zh) | Method, apparatus, device and computer storage medium for determining search results | |
KR102354716B1 (ko) | Context-dependent search techniques using deep learning models | |
US11232154B2 (en) | Neural related search query generation | |
Wu et al. | Fine-grained image captioning with global-local discriminative objective | |
US8984012B2 (en) | Self-tuning alterations framework | |
US20190251422A1 (en) | Deep neural network architecture for search | |
WO2018018626A1 (en) | Conversation oriented machine-user interaction | |
JP6361351B2 (ja) | Method, program and computing system for ranking utterance words | |
CN110297890B (zh) | Image acquisition using interactive natural language dialogue | |
US20170255693A1 (en) | Providing images for search queries | |
WO2021139209A1 (zh) | Method, apparatus, device and computer storage medium for query auto-completion | |
CN110929114A (zh) | Tracking digital dialog state and generating responses using dynamic memory networks | |
WO2020086131A1 (en) | Method and system for decoding user intent from natural language queries | |
CN111125538B (zh) | A search method that enhances personalized retrieval with entity information | |
CN112507091A (zh) | Method, apparatus, device and storage medium for retrieving information | |
CN112579750A (zh) | Method, apparatus, device and storage medium for retrieving similar medical records | |
WO2021051587A1 (zh) | Semantic recognition-based search result ranking method and apparatus, electronic device, and storage medium | |
US12099803B2 (en) | Training a model in a data-scarce environment using added parameter information | |
CN113595770B (zh) | Group click-through rate estimation method and apparatus, electronic device, and storage medium | |
US8577909B1 (en) | Query translation using bilingual search refinements | |
CN111881255B (zh) | Synonymous text acquisition method and apparatus, electronic device, and storage medium | |
US20240338553A1 (en) | Recommending backgrounds based on user intent | |
CN117668342A (zh) | Training method for a two-tower model and product recall method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20904682; Country of ref document: EP; Kind code of ref document: A1 |
ENP | Entry into the national phase | Ref document number: 2020904682; Country of ref document: EP; Effective date: 20211006 |
ENP | Entry into the national phase | Ref document number: 20217039536; Country of ref document: KR; Kind code of ref document: A |
ENP | Entry into the national phase | Ref document number: 2022506430; Country of ref document: JP; Kind code of ref document: A |
NENP | Non-entry into the national phase | Ref country code: DE |