CN111241388A - Multi-policy recall method and device, electronic equipment and readable storage medium - Google Patents

Multi-policy recall method and device, electronic equipment and readable storage medium

Info

Publication number
CN111241388A
CN111241388A (application CN201911286330.0A)
Authority
CN
China
Prior art keywords
recall
user information
vector
strategy
target user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201911286330.0A
Other languages
Chinese (zh)
Inventor
苏义伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN201911286330.0A
Publication of CN111241388A
Legal status: Withdrawn (current)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 - Details of database functions independent of the retrieved data types
    • G06F16/95 - Retrieval from the web
    • G06F16/953 - Querying, e.g. by the use of web search engines
    • G06F16/9535 - Search customisation based on user profiles and personalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The disclosure provides a multi-policy recall method and apparatus, an electronic device, and a readable storage medium. The method comprises: receiving an access request that comprises target user information; for each of at least one recall strategy, generating a recall object set corresponding to the target user information under that strategy, as the initial recall object set of the strategy; for each recall strategy, determining the matching degree between the target user information and the recall strategy according to a user behavior vector corresponding to the target user information and the vectors of the objects in the initial recall object set of the strategy, wherein the user behavior vector of the target user information is generated from the vector of each object in the historical behavior sequence corresponding to the target user information; for each recall strategy, determining the recall number of the strategy according to the matching degree between the target user information and the recall strategy; and generating a comprehensive recall object set according to the recall number and the initial recall object set of each strategy. The present disclosure may improve the accuracy of recall.

Description

Multi-policy recall method and device, electronic equipment and readable storage medium
Technical Field
The present disclosure relates to the field of personalized recommendation technologies, and in particular, to a multi-policy recall method and apparatus, an electronic device, and a readable storage medium.
Background
With the rapid development of the Internet, search engine technology has gradually matured and become a main entrance for people to find information. Generally, a search engine performs a recall for the query entered by a user, ranks the recalled query results, and finally displays the top-ranked results to the user.
In the prior art, a commonly used recall method may include the following steps: first, a plurality of corresponding object sets are recalled through a plurality of recall strategies; for example, three object sets are obtained by recalling with three recall strategies, namely user-interest recall, collaborative filtering, and hot-spot recall; then, an appropriate number of objects is selected from each object set according to experience; finally, a final set of recalled objects is generated from the objects selected from each object set.
After studying this scheme, the inventor found that, because the number of objects taken from each set is chosen by experience, the scheme cannot accurately reflect user preference, cannot meet the personalized requirements of users, and therefore has low recall accuracy.
Disclosure of Invention
The present disclosure provides a multi-policy recall method and apparatus, an electronic device, and a readable storage medium, which determine the matching degree between target user information and each recall policy according to a user behavior vector and the vectors of the objects in the initial recall object set, and determine the recall number of each recall policy according to that matching degree. Because the user behavior vector reflects the preference of the user, the personalized requirements of the user are satisfied and the recall accuracy can be improved.
According to a first aspect of the present disclosure, there is provided a multi-policy recall method, the method comprising:
receiving an access request, the access request comprising: target user information;
generating a recall object set corresponding to the target user information under each recall strategy on the basis of at least one recall strategy, wherein the recall object set is used as an initial recall object set of the recall strategy;
for each recall strategy, determining the matching degree of the target user information and the recall strategy according to a user behavior vector corresponding to the target user information and a vector of an object in the initial recall object set of the recall strategy, wherein the user behavior vector of the target user information is generated according to the vector of each object in a historical behavior sequence corresponding to the target user information;
for each recall strategy, determining the recall number of the recall strategy according to the matching degree of the target user information and the recall strategy;
and generating a comprehensive recall object set according to the recall quantity of each recall strategy and the initial recall object set.
According to a second aspect of the present disclosure, there is provided a multi-policy recall apparatus comprising:
an access request receiving module, configured to receive an access request, where the access request includes: target user information;
the initial recall module is used for generating a recall object set corresponding to the target user information under each recall strategy based on at least one recall strategy and taking the recall object set as the initial recall object set of the recall strategy;
a matching degree determining module, configured to determine, for each recall policy, a matching degree between the target user information and the recall policy according to a user behavior vector corresponding to the target user information and a vector of an object in a set of initial recall objects of the recall policy, where the user behavior vector of the target user information is generated according to a vector of each object in a history behavior sequence corresponding to the target user information;
the recall quantity determining module is used for determining the recall quantity of the recall strategy according to the matching degree of the target user information and the recall strategy aiming at each recall strategy;
and the comprehensive recall module is used for generating a comprehensive recall object set according to the recall quantity of each recall strategy and the initial recall object set.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
a processor, a memory, and a computer program stored on the memory and executable on the processor, the processor implementing the aforementioned multi-policy recall method when executing the program.
According to a fourth aspect of the present disclosure, there is provided a readable storage medium having instructions therein which, when executed by a processor of an electronic device, enable the electronic device to perform the aforementioned multi-policy recall method.
The disclosure provides a multi-policy recall method and apparatus, an electronic device, and a readable storage medium. An access request can be received, the access request comprising target user information; based on at least one recall strategy, a recall object set corresponding to the target user information under each recall strategy is generated as the initial recall object set of that strategy; for each recall strategy, the matching degree between the target user information and the recall strategy is determined according to a user behavior vector corresponding to the target user information and the vectors of the objects in the initial recall object set of the strategy, wherein the user behavior vector of the target user information is generated according to the vector of each object in the historical behavior sequence corresponding to the target user information; for each recall strategy, the recall number of the recall strategy is determined according to the matching degree between the target user information and the recall strategy; and a comprehensive recall object set is generated according to the recall number and the initial recall object set of each recall strategy. The matching degree between the target user information and each recall strategy can thus be determined from the user behavior vector and the vectors of the objects in the initial recall object set, and the recall number of each recall strategy is determined according to that matching degree; because the user behavior vector reflects the preference of the user, the personalized requirements of the user are satisfied and the recall accuracy can be improved.
Drawings
In order to more clearly illustrate the technical solutions of the present disclosure, the drawings needed in the description of the present disclosure are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present disclosure, and that other drawings can be obtained from them by those skilled in the art without inventive effort.
FIG. 1 illustrates a flow chart of steps of a multi-policy recall method of the present disclosure;
FIG. 2 illustrates a flowchart of the steps of the present disclosure to determine the degree of match of targeted user information and recall policies;
FIG. 3 illustrates a flowchart of the steps of the present disclosure to determine the number of recalls of a recall policy;
FIG. 4 illustrates a flowchart of the steps of the present disclosure to determine a vector for each object in a historical sequence of behaviors;
FIG. 5 illustrates a flow chart of steps of the present disclosure to determine vectors of objects in the initial recall object set of a recall policy;
FIG. 6 is a flowchart illustrating the steps of determining a user behavior vector corresponding to target user information according to the present disclosure;
FIG. 7 illustrates a flowchart of the steps of the present disclosure for determining a set of integrated recall objects;
FIG. 8 illustrates a block diagram of a multi-policy recall device of the present disclosure;
FIG. 9 is a block diagram illustrating a match determination module of the present disclosure;
FIG. 10 illustrates a block diagram of a recall quantity determination module of the present disclosure;
FIG. 11 illustrates a block diagram structure for determining a vector for each object in a sequence of historical behaviors of the present disclosure;
FIG. 12 illustrates a block diagram for determining vectors of objects in the initial recall object set of a recall policy according to the present disclosure;
FIG. 13 illustrates a block diagram of a module for determining a set of integrated recall objects of the present disclosure;
FIG. 14 illustrates a block diagram of an integrated recall module of the present disclosure;
fig. 15 shows a block diagram of an electronic device of the present disclosure.
Detailed Description
The technical solutions in the present disclosure will be described clearly and completely with reference to the accompanying drawings in the present disclosure, and it is obvious that the described embodiments are some, not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
The embodiments of the present disclosure can be applied to a background server of a personalized recommendation platform. The background server recommends objects suitable for a user to that user through a client, so that the user's click-through rate and order rate on the recommended objects are higher, which ultimately yields a higher economic return. In the personalized recommendation process, recall is a very important link, and its accuracy affects the click-through rate and the order rate. The embodiments of the present disclosure focus on a multi-policy recall method: the server receives an access request sent by the client used by a user and generates a comprehensive recall object set to recommend to that user.
Referring to fig. 1, a flowchart illustrating steps of the multi-policy recall method of the present disclosure is shown, as follows:
step 101, receiving an access request, wherein the access request comprises: and target user information.
Wherein the access request may be generated by a client used by the user. For example, a user installs a client of a certain personalized recommendation platform on a mobile terminal, so that the user can input target user information and corresponding verification information in an interface provided by the client and click a login control, and at this time, the client generates an access request including the target user information.
The target user information in the access request may be a unique identifier indicating the user's identity, for example a mailbox account or mobile phone account used by the user to log in to the client, or a unique identifier assigned by the personalized recommendation platform.

The verification information input by the user may be information for verifying the identity of the user, such as a password or a verification code.
Step 102, generating, based on at least one recall strategy, a recall object set corresponding to the target user information under each recall strategy, as the initial recall object set of that recall strategy.
The recall policy is a rule for determining a part of objects from a large number of objects, and may include, but is not limited to: collaborative filtering, hotspot recall, user interest recall. It will be appreciated that the specific rules for different recall policies differ, and thus the recall objectives differ, as do the objects contained in the resulting recall object set.
For collaborative filtering, the set of recalled objects to recommend to a user may be determined based on similarities between users and between objects. For example, for User1, first determine the set of objects OBJS1 that User1 is interested in; then determine the set of users UserS2 who are interested in the same objects as User1, and the set of objects OBJS2 that each User2 in UserS2 is interested in; finally, take the set OBJS2 of objects that each User2 is interested in as the set of recalled objects to recommend to User1.
For hot-spot recall, indexes such as sales volume and click volume that represent the popularity of an object can be counted, so that several objects with the highest popularity index values are recalled and recommended to the user. For example, taking sales volume as the index, suppose there are objects OBJ1, OBJ2, OBJ3, OBJ4, OBJ5, OBJ6, OBJ7, OBJ8, OBJ9, and OBJ10 whose sales volumes are SVO1, SVO2, SVO3, SVO4, SVO5, SVO6, SVO7, SVO8, SVO9, and SVO10, with SVO1 > SVO2 > SVO3 > SVO4 > SVO5 > SVO6 > SVO7 > SVO8 > SVO9 > SVO10; then the top 5 objects by sales volume, OBJ1, OBJ2, OBJ3, OBJ4, and OBJ5, can be recommended as the recall object set to the visiting user.
For user-interest recall, the similarity between the user feature vector and each object may be calculated, and the several objects with the highest similarity are recalled. For example, suppose there are objects OBJ1, OBJ2, OBJ3, OBJ4, OBJ5, OBJ6, OBJ7, OBJ8, OBJ9, and OBJ10 whose similarities to a user's feature vector are SID1, SID2, SID3, SID4, SID5, SID6, SID7, SID8, SID9, and SID10, with SID1 > SID2 > SID3 > SID4 > SID5 > SID6 > SID7 > SID8 > SID9 > SID10; then the 5 objects with the highest similarity, OBJ1, OBJ2, OBJ3, OBJ4, and OBJ5, can be taken as the recall object set to recommend to the user.
Recall policies are well known in the art and at least one recall policy referred to in this disclosure may be any recall policy, the choice of which is not limited by this disclosure.
In addition, the objects mentioned in the different recall strategies can be any recommendation objects involved in the personalized recommendation platform. For example, when the personalized recommendation platform is a network sales platform, the recommendation object may be a merchant or a commodity on the network sales platform.
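As an illustrative sketch only (the function names, the use of numpy, and the data layout are assumptions and not part of the disclosure), the following shows how a sales-based hot-spot recall and a similarity-based user-interest recall might each produce an initial recall object set; a collaborative-filtering strategy would produce its set in the same form, a ranked list of object identifiers:

    import numpy as np

    def hot_recall(sales_by_object, top_n):
        # Rank objects by sales volume in descending order and keep the top N
        # as the initial recall object set of the hot-spot recall strategy.
        ranked = sorted(sales_by_object.items(), key=lambda kv: kv[1], reverse=True)
        return [obj_id for obj_id, _ in ranked[:top_n]]

    def interest_recall(user_feature_vector, object_vectors, top_n):
        # Rank objects by cosine similarity between the user feature vector and
        # each object vector, and keep the N most similar objects.
        def cosine(a, b):
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
        scored = [(obj_id, cosine(user_feature_vector, vec))
                  for obj_id, vec in object_vectors.items()]
        scored.sort(key=lambda kv: kv[1], reverse=True)
        return [obj_id for obj_id, _ in scored[:top_n]]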
Step 103, for each recall strategy, determining the matching degree between the target user information and the recall strategy according to the user behavior vector corresponding to the target user information and the vector of the object in the initial recall object set of the recall strategy, wherein the user behavior vector of the target user information is generated according to the vector of each object in the history behavior sequence corresponding to the target user information.
The user behavior vector is generated according to the historical behavior sequence of the user; because the historical behavior sequence contains objects that the user has previously visited or ordered, it can reflect the preference of the user. The user behavior vector may be the sum of the vectors of the objects in the user behavior sequence. For example, for a user behavior sequence containing 3 objects whose vectors are [OV1_1, OV1_2, ..., OV1_s], [OV2_1, OV2_2, ..., OV2_s], and [OV3_1, OV3_2, ..., OV3_s], the 3 vectors can be summed to obtain the user behavior vector [OV1_1+OV2_1+OV3_1, OV1_2+OV2_2+OV3_2, ..., OV1_s+OV2_s+OV3_s].

To avoid the entries of the user behavior vector becoming too large when the user behavior sequence is very long, which would increase the subsequent amount of computation, the user behavior vector may also be the average of the vectors of all objects in the user behavior sequence. For the same sequence of 3 objects, the 3 vectors can be averaged to obtain the user behavior vector [(OV1_1+OV2_1+OV3_1)/3, (OV1_2+OV2_2+OV3_2)/3, ..., (OV1_s+OV2_s+OV3_s)/3].
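A minimal sketch of the two options above, summing or averaging the object vectors of the historical behavior sequence; numpy and the function name are assumptions:

    import numpy as np

    def user_behavior_vector(object_vectors, average=True):
        # object_vectors: equal-length vectors, one per object in the user's
        # historical behavior sequence.
        stacked = np.stack([np.asarray(v) for v in object_vectors])
        return stacked.mean(axis=0) if average else stacked.sum(axis=0)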
It can be understood that the vector of the object in the initial recall object set and the vector of the object in the user behavior sequence can both uniquely represent the object, and the vector of the object and the unique identifier of the object are identity information of two different forms of the object respectively. The vector of the object may be a vector representation of the unique identifier of the object, and may be obtained by converting the unique identifier of the object into a vector. The objects in the initial recall object set and the objects in the user behavior sequence can be part of the objects involved in the personalized recommendation platform, so that the vector of each object can be generated in advance for all the objects involved in the personalized recommendation platform.
For each recall policy, after acquiring a user behavior vector and a vector of an object in an initial recall object set of the recall policy, a matching degree of target user information and the recall policy may be determined. The degree of matching of the target user information and the recall policy may represent a degree of preference of the user for the recall policy. If the matching degree is larger, the preference degree of the user to the recall strategy is higher, namely the user likes the object recalled by the recall strategy more; if the matching degree is smaller, the preference degree of the user on the recall strategy is lower, namely, the user dislikes the object recalled by the recall strategy.
The user's preference for a recall policy can be embodied by the number of objects in the user behavior sequence that are identical or similar to objects in the recall object set of that policy. The more such objects there are, the higher the user's preference for the recall policy and the higher the similarity between the user behavior vector and the vectors of the objects in the initial recall object set; the fewer such objects, the lower the preference and the lower the similarity. Therefore, the similarity between the user behavior vector and the objects in the initial recall object set can be used as the matching degree between the target user information and the recall policy.
Step 104, for each recall strategy, determining the recall number of the recall strategy according to the matching degree between the target user information and the recall strategy.
The recall number and the matching degree are in a positive relation: if the matching degree is larger, the recall number is larger; if the matching degree is smaller, the recall number is smaller. For a recall strategy, if the matching degree between the target user information and the recall strategy is larger, a larger recall number can be allocated to that strategy; if it is smaller, a smaller recall number can be allocated. For example, for 3 recall policies RES1, RES2, and RES3, the normalized matching degrees between the target user information and each recall policy are MAD1, MAD2, and MAD3, with MAD1 > MAD2 > MAD3, where MAD1 is 0.6, MAD2 is 0.3, and MAD3 is 0.1. If the total recall number is 100, then 100 can first be divided into three recall numbers of different sizes, 60, 30, and 10; recall number 60 is then assigned to the policy with matching degree MAD1, 30 to the policy with MAD2, and 10 to the policy with MAD3.
It should be noted that the present disclosure does not limit the specific algorithm for determining the number of recalls, as long as the above-mentioned forward relationship between the number of recalls and the degree of matching is ensured.
Step 105, generating a comprehensive recall object set according to the recall number and the initial recall object set of each recall strategy.
Specifically, for the initial recall object set of each recall strategy, since the objects in the set are ordered, the top-ranked objects can be selected from the initial recall object set according to the recall number of that strategy to serve as the recall object set of the strategy; finally, the recall object sets of all recall strategies together form the comprehensive recall object set.
For example, there are three recall strategies RES1, RES2, and RES3 whose recall numbers are 100, 150, and 160, respectively. The initial recall object set of RES1 includes REO1_1, REO1_2, ..., REO1_M; the initial recall object set of RES2 includes REO2_1, REO2_2, ..., REO2_L; and the initial recall object set of RES3 includes REO3_1, REO3_2, ..., REO3_N. Thus the recall object set of RES1 includes REO1_1, REO1_2, ..., REO1_100; the recall object set of RES2 includes REO2_1, REO2_2, ..., REO2_150; and the recall object set of RES3 includes REO3_1, REO3_2, ..., REO3_160. The resulting comprehensive recall object set includes REO1_1, ..., REO1_100, REO2_1, ..., REO2_150, REO3_1, ..., REO3_160.
After the comprehensive recall object set is generated, it may be ranked and recommended to the user. To improve the click-through rate or the order rate, the click-through rate or order rate of each object in the comprehensive recall object set can be estimated, and the set can be sorted according to the estimated values.

The click-through rate or order rate can be estimated by a deep learning model that is trained in advance on a large number of training samples. For a deep learning model used to predict click-through rate, the training samples may include various features of the object and its click-through rate for supervised training; for a deep learning model used to predict order rate, the training samples may include various features of the object and its order rate for supervised training.
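As a sketch of this re-ranking step, assuming a hypothetical predict_ctr function standing in for the pre-trained deep learning model:

    def rerank_by_ctr(comprehensive_recall_set, object_features, predict_ctr):
        # Estimate a click-through rate for every recalled object and sort the
        # comprehensive recall object set in descending order of that estimate.
        scored = [(obj_id, predict_ctr(object_features[obj_id]))
                  for obj_id in comprehensive_recall_set]
        scored.sort(key=lambda kv: kv[1], reverse=True)
        return [obj_id for obj_id, _ in scored]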
In another embodiment of the present disclosure, referring to a flowchart of the step of determining the matching degree between the target user information and the recall policy shown in fig. 2, step 103 in fig. 1 includes sub-steps 1031 to 1033:
and a substep 1031, selecting a target number of objects from the initial recall object set of the recall strategy as a reference object set of the recall strategy for each recall strategy.
The target number is the number of objects included in the reference object set of each recall policy, and may be set according to an actual application scenario, which is not limited by the present disclosure. For example, when 300 objects are included in the initial recall object set, the target number may be set to 10.
It is to be appreciated that each recall strategy recalls objects according to some criterion, so that the objects in its initial recall object set are ordered, and the present disclosure may take the top-ranked target number of objects. For example, for a hot-spot recall based on sales volume, the objects in the initial recall object set are arranged in descending order of sales volume, so the top 10 objects can be taken as the reference object set.
Sub-step 1032, for each of the recall policies, generating a recall vector for the recall policy from the vector for each object in the reference object set of the recall policy.
Specifically, for each recall policy, the vectors of all objects in its reference object set may be summed to obtain the recall vector of the policy. For example, for a reference object set containing 10 objects whose vectors are [OV1_1, OV1_2, ..., OV1_s], [OV2_1, OV2_2, ..., OV2_s], ..., [OV10_1, OV10_2, ..., OV10_s], the 10 vectors can be summed to obtain the recall vector of the recall policy: [OV1_1+OV2_1+...+OV10_1, OV1_2+OV2_2+...+OV10_2, ..., OV1_s+OV2_s+...+OV10_s].

To avoid the entries of the recall vector becoming too large when the reference object set contains many objects, which would increase the subsequent amount of computation, the recall vector may also be the average of the vectors of all objects in the reference object set. For the same reference object set of 10 objects, the 10 vectors can be averaged to obtain the recall vector [(OV1_1+OV2_1+...+OV10_1)/10, (OV1_2+OV2_2+...+OV10_2)/10, ..., (OV1_s+OV2_s+...+OV10_s)/10].
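A sketch of sub-steps 1031 and 1032 combined, using the averaged form; numpy, the function name, and the default target number are assumptions:

    import numpy as np

    def recall_vector(initial_recall_set, object_vector_set, target_number=10):
        # Reference object set: the top-ranked `target_number` objects of the
        # strategy's initial recall object set.
        reference = initial_recall_set[:target_number]
        # Recall vector: the average of the vectors of the reference objects.
        return np.mean([object_vector_set[obj_id] for obj_id in reference], axis=0)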
Sub-step 1033, for each recall policy, determining the matching degree between the target user information and the recall policy according to the similarity between the user behavior vector and the recall vector of the recall policy.
Specifically, the similarity between the user behavior vector and the recall vector may be a Euclidean-distance-based similarity, a cosine similarity, a Manhattan-distance-based similarity, or the like; the present disclosure does not limit the specific calculation method. Taking the Euclidean-distance-based similarity as an example, the similarity between the user behavior vector and the recall vector can be calculated according to the following formula:
\[ \mathrm{SIM}_i = \frac{1}{1 + \sqrt{\sum_{s=1}^{S} \left( \mathrm{UBV}_s - \mathrm{RECV}_s \right)^2}} \qquad (1) \]
where SIM_i is the similarity between the recall vector of the i-th recall strategy and the user behavior vector, S is the length of the recall vector and of the user behavior vector, UBV_s is the s-th entry of the user behavior vector, and RECV_s is the s-th entry of the recall vector.
In practical application, the similarity obtained by the formula (1) can be directly used as the matching degree of the target user information and the ith recall strategy, or the similarity obtained by the formula (1) is transformed to obtain the matching degree of the target user information and the ith recall strategy. It can be understood that if the similarity is larger, the matching degree is larger; if the similarity is smaller, the matching degree is smaller.
In the embodiments of the present disclosure, the matching degree between the target user information and a recall strategy can be determined according to the similarity between the user behavior vector and the recall vector of the recall strategy, which improves the accuracy of the matching degree.
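A sketch of sub-step 1033 under the Euclidean-distance-based form of formula (1); numpy and the function name are assumptions:

    import numpy as np

    def matching_degree(user_behavior_vec, recall_vec):
        # Euclidean-distance-based similarity: the smaller the distance between
        # the user behavior vector and the recall vector, the larger the value,
        # i.e. the better the user's preferences match this recall strategy.
        distance = np.linalg.norm(np.asarray(user_behavior_vec) - np.asarray(recall_vec))
        return 1.0 / (1.0 + distance)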
In another embodiment of the present disclosure, referring to the flowchart of the step of determining the number of recalls of the recall policy shown in fig. 3, step 104 in fig. 1 includes sub-steps 1041 to 1042:
substep 1041, performing normalization processing on the matching degree of the target user information and each recall strategy to obtain a normalized matching degree of the target user information and each recall strategy.
The normalization process converts the matching degree between the target user information and each recall policy into a value between 0 and 1, such that the matching degrees over all recall policies sum to 1. The simplest normalization can be implemented with the following formula:
\[ \mathrm{NMAD}_i = \frac{\mathrm{MAD}_i}{\sum_{j=1}^{I} \mathrm{MAD}_j} \qquad (2) \]
where NMAD_i is the normalized matching degree between the target user information and the i-th recall strategy, MAD_i is the matching degree between the target user information and the i-th recall strategy, and I is the number of recall strategies.
For 3 recall policies RES1, RES2, and RES3, whose matching degrees with the target user information are 0.8, 0.7, and 0.5, respectively, the normalized matching degrees obtained from formula (2) are 0.8/(0.8+0.7+0.5) = 0.4, 0.7/(0.8+0.7+0.5) = 0.35, and 0.5/(0.8+0.7+0.5) = 0.25.
In addition, a Softmax function can be adopted to perform normalization processing on the matching degree of the target user information and each recall strategy, and the following formula is specifically adopted:
\[ \mathrm{NMAD}_i = \frac{e^{\mathrm{MAD}_i}}{\sum_{j=1}^{I} e^{\mathrm{MAD}_j}} \qquad (3) \]
the variables in formula (3) and the variables in formula (2) have the same meaning, and are not described herein again.
For 3 recall policies RES1, RES2, and RES3, whose matching degrees with the target user information are 0.8, 0.7, and 0.5, respectively, the normalized matching degrees obtained from formula (3) are e^0.8/(e^0.8+e^0.7+e^0.5) ≈ 0.38, e^0.7/(e^0.8+e^0.7+e^0.5) ≈ 0.34, and e^0.5/(e^0.8+e^0.7+e^0.5) ≈ 0.28.
Compared with formula (2), formula (3) uses a Softmax function to compute the normalized matching degree, so that a recall strategy whose matching degree with the target user information is 0 is still allocated a small recall number instead of being directly allocated 0, which avoids the situation where that recall strategy contributes no recall objects at all.
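A sketch of the two normalization options of formulas (2) and (3); the function names are assumptions:

    import math

    def normalize_simple(matching_degrees):
        # Formula (2): divide each matching degree by the sum of all matching degrees.
        total = sum(matching_degrees)
        return [m / total for m in matching_degrees]

    def normalize_softmax(matching_degrees):
        # Formula (3): Softmax, so a strategy whose matching degree is 0 still
        # receives a small non-zero normalized matching degree.
        exps = [math.exp(m) for m in matching_degrees]
        total = sum(exps)
        return [e / total for e in exps]

For the matching degrees 0.8, 0.7, and 0.5 used above, normalize_simple returns 0.4, 0.35, and 0.25, and normalize_softmax returns roughly 0.38, 0.34, and 0.28.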
Sub-step 1042, for each recall policy, calculating the product of the normalized matching degree and a preset total recall number to obtain the recall number of that policy.
Specifically, based on the matching degree of formula (2), the recall number of the recall strategy can be calculated according to the following formula:
\[ \mathrm{RECN}_i = \mathrm{NUM} \times \frac{\mathrm{MAD}_i}{\sum_{j=1}^{I} \mathrm{MAD}_j} \qquad (4) \]
where NUM is the preset total recall number and RECN_i is the recall number of the i-th recall policy.
According to formula (4), the sum of the recall numbers of all recall strategies equals the preset total recall number, so the preset total can be reasonably and completely distributed to the recall strategies as far as possible. For example, for 3 recall policies RES1, RES2, and RES3, the normalized matching degrees with the target user information calculated by formula (2) are 0.4, 0.35, and 0.25, respectively, so a preset total recall number of 100 can be assigned to RES1, RES2, and RES3 based on formula (4), giving recall numbers 100 × 0.4 = 40, 100 × 0.35 = 35, and 100 × 0.25 = 25.
Furthermore, based on the matching degree of formula (3), the recall number of the recall strategy can be calculated according to the following formula:
\[ \mathrm{RECN}_i = \mathrm{NUM} \times \frac{e^{\mathrm{MAD}_i}}{\sum_{j=1}^{I} e^{\mathrm{MAD}_j}} \qquad (5) \]
the variables in formula (5) and the variables in formula (4) have the same meaning, and are not described herein again.
According to formula (5), the sum of the recall numbers of all recall strategies also equals the preset total recall number, so the preset total can be reasonably and completely distributed to the recall strategies as far as possible. For example, for 3 recall policies RES1, RES2, and RES3, the normalized matching degrees with the target user information calculated by formula (3) are 0.38, 0.34, and 0.28, respectively, so a preset total recall number of 100 can be assigned to RES1, RES2, and RES3 based on formula (5), giving recall numbers 100 × 0.38 = 38, 100 × 0.34 = 34, and 100 × 0.28 = 28.
By normalizing the matching degrees, the embodiments of the present disclosure ensure that the preset total recall number is, as far as possible, reasonably and completely distributed among the recall strategies.
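A sketch of sub-step 1042; how fractional recall numbers are rounded is not specified by the disclosure, so the largest-remainder handling below is an assumption made so that the recall numbers still sum exactly to the preset total:

    def allocate_recall_numbers(normalized_degrees, total_num):
        # Formulas (4)/(5): recall number = normalized matching degree * NUM.
        raw = [degree * total_num for degree in normalized_degrees]
        counts = [int(value) for value in raw]
        # Hand out the remainder to the strategies with the largest fractional parts.
        remainder = total_num - sum(counts)
        order = sorted(range(len(raw)), key=lambda i: raw[i] - counts[i], reverse=True)
        for i in order[:remainder]:
            counts[i] += 1
        return counts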
In another embodiment of the present disclosure, referring to the flowchart of the step of determining a vector for each object in the historical behavior sequence shown in fig. 4, before step 101 in fig. 1, the method further includes steps 106 to 107:
and 106, acquiring a user information set according to a preset time period, wherein the user information set at least comprises the target user information.
The time period may be, for example but not limited to, one day, one week, or one month. When the time period is one day, the information of all users who were active on the personalized recommendation platform during that day can be acquired in the early morning of each day to form the user information set.
It can be understood that, because step 103 needs the vector of each object in the historical behavior sequence corresponding to the target user information, step 106 obtains a user information set that contains the target user information, and step 107 generates the vector of each object in the historical behavior sequence of every piece of user information in that set, so that step 103 can directly obtain the vectors of the objects in the historical behavior sequence of the target user information from the vectors provided by step 107.
Step 107, inputting the historical behavior sequence of each piece of user information in the user information set into a vector generation model, and predicting to obtain the vector of each object in the historical behavior sequence of the user information, wherein the vector generation model is obtained in advance through unsupervised training.
The vector generation model may be a deep learning model trained on a large number of objects, so that through learning each object can be converted into a unique vector; the length of the vector may be set according to the actual application, for example to 128.
Common deep learning models that generate vectors may include, but are not limited to: graph Embedding model, Word2vec model, etc.
The present disclosure can generate, offline and periodically, the vectors of all objects contained in the historical behavior sequences of all active users, so that the object vectors can be read directly when needed; compared with computing the object vectors in real time during recall, this reduces the time consumed by a recall.
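A sketch of the offline vector generation, treating every user's historical behavior sequence as a "sentence" of object identifiers and training a Word2vec-style model; gensim and its parameter values are assumptions (a Graph Embedding model could equally be used):

    from gensim.models import Word2Vec

    def build_object_vectors(behavior_sequences, vector_size=128):
        # behavior_sequences: one list of object identifiers (strings) per piece
        # of user information, trained without labels (unsupervised).
        model = Word2Vec(sentences=behavior_sequences, vector_size=vector_size,
                         window=5, min_count=1, sg=1, epochs=5)
        # Object vector set: object identifier -> learned vector.
        return {obj_id: model.wv[obj_id] for obj_id in model.wv.index_to_key}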
In another embodiment of the present disclosure, referring to the flowchart of the step of determining a vector of objects in the initial recall object set of the recall policy shown in fig. 5, after step 107 in fig. 4, the method further comprises step 108:
step 108, aiming at each object in the initial recall object set of the recall strategy, obtaining a vector of the object from an object vector set, wherein the object vector set is generated according to the vector of each object in the history behavior sequence of each user information and the object.
In the present disclosure, the vectors of the objects generated in step 107 and the objects may be stored in the object vector set according to the corresponding relationship in advance, so that the vectors corresponding to the objects are directly queried during recalling, which helps to reduce the time consumed by the recall.
In another embodiment of the present disclosure, referring to the flowchart of the step of determining the user behavior vector corresponding to the target user information shown in fig. 6, after 107 in fig. 4, the method further includes the step 109:
step 109, obtaining the user behavior vector of the target user information from a user behavior vector set, where the user behavior vector set is generated according to the user behavior vector of each piece of user information and the user information, and the user behavior vector of each piece of user information is generated according to the vector of each object in the historical behavior sequence of each piece of user information.
In this disclosure, the user behavior vector corresponding to each piece of user information may be determined in advance according to the vector of the object included in the historical behavior sequence corresponding to each piece of user information generated in step 107, and the user information and the user behavior vector may be stored in the user behavior vector set according to the corresponding relationship, so that the user behavior vector corresponding to the target user information is directly queried during recall, which is beneficial to reducing the time consumed by recall.
The user behavior vector may be the sum of the vectors of the objects in the user behavior sequence. For example, for a user behavior sequence containing 3 objects whose vectors are [OV1_1, OV1_2, ..., OV1_s], [OV2_1, OV2_2, ..., OV2_s], and [OV3_1, OV3_2, ..., OV3_s], the 3 vectors can be summed to obtain the user behavior vector [OV1_1+OV2_1+OV3_1, OV1_2+OV2_2+OV3_2, ..., OV1_s+OV2_s+OV3_s].

To avoid the entries of the user behavior vector becoming too large when the user behavior sequence is very long, which would increase the subsequent amount of computation, the user behavior vector may also be the average of the vectors of all objects in the user behavior sequence. For the same sequence of 3 objects, the 3 vectors can be averaged to obtain the user behavior vector [(OV1_1+OV2_1+OV3_1)/3, (OV1_2+OV2_2+OV3_2)/3, ..., (OV1_s+OV2_s+OV3_s)/3].
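A sketch of precomputing the user behavior vector set from the object vector set, so that recall-time lookup is a single dictionary access; numpy and the names are assumptions:

    import numpy as np

    def build_user_behavior_vectors(behavior_sequences_by_user, object_vector_set):
        # behavior_sequences_by_user: user information -> historical behavior
        # sequence (list of object identifiers). Each user behavior vector is the
        # average of the vectors of the objects in that user's sequence.
        return {user: np.mean([object_vector_set[obj] for obj in sequence], axis=0)
                for user, sequence in behavior_sequences_by_user.items()}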
In another embodiment of the present disclosure, referring to the flowchart of the step of determining a set of integrated recall objects illustrated in fig. 7, step 105 in fig. 1 includes sub-steps 1051 to 1052:
substep 1051, for each recall strategy, obtaining the recall quantity object from the initial recall object set of the recall strategy, and obtaining the recall object set of the recall strategy.
Specifically, for the initial recall object set of each recall policy, since the objects in the set are ordered, the top-ranked objects can be selected from the initial recall object set according to the recall number to serve as the recall object set of that policy. For example, there are three recall strategies RES1, RES2, and RES3 whose recall numbers are 100, 150, and 160, respectively. The initial recall object set of RES1 includes REO1_1, REO1_2, ..., REO1_M; that of RES2 includes REO2_1, REO2_2, ..., REO2_L; and that of RES3 includes REO3_1, REO3_2, ..., REO3_N. Thus the recall object set of RES1 includes REO1_1, REO1_2, ..., REO1_100; that of RES2 includes REO2_1, REO2_2, ..., REO2_150; and that of RES3 includes REO3_1, REO3_2, ..., REO3_160.
Sub-step 1052, merging the recall object sets of the recall policies into a comprehensive recall object set.
The recall object sets of the recall strategies obtained in sub-step 1051 are merged, so that the resulting comprehensive recall object set includes REO1_1, REO1_2, ..., REO1_100, REO2_1, REO2_2, ..., REO2_150, REO3_1, REO3_2, ..., REO3_160.
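A sketch of sub-steps 1051 and 1052 together; the function name and data layout are assumptions:

    def build_comprehensive_recall_set(initial_recall_sets, recall_numbers):
        # initial_recall_sets: recall strategy -> ordered list of recalled objects.
        # recall_numbers: recall strategy -> recall number determined in step 104.
        merged = []
        for strategy, objects in initial_recall_sets.items():
            # Keep the top-ranked objects of each strategy, then merge them all.
            merged.extend(objects[:recall_numbers[strategy]])
        return merged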
It is to be appreciated that in certain embodiments of the present disclosure, the objects within the set of integrated recall objects may also be ranked once after they are obtained. For example, the objects in the integrated recall object set may be re-ordered by click-through rate.
According to the embodiments of the present disclosure, the recall object set corresponding to each recall strategy can be accurately obtained according to the recall number of that strategy, and the comprehensive recall object set is finally generated from the recall object sets of all recall strategies.
In summary, the present disclosure provides a multi-policy recall method, comprising: receiving an access request, the access request comprising target user information; generating, based on at least one recall strategy, a recall object set corresponding to the target user information under each recall strategy as the initial recall object set of that strategy; for each recall strategy, determining the matching degree between the target user information and the recall strategy according to a user behavior vector corresponding to the target user information and the vectors of the objects in the initial recall object set of the strategy, wherein the user behavior vector of the target user information is generated according to the vector of each object in the historical behavior sequence corresponding to the target user information; for each recall strategy, determining the recall number of the recall strategy according to the matching degree between the target user information and the recall strategy; and generating a comprehensive recall object set according to the recall number and the initial recall object set of each recall strategy. The matching degree between the target user information and each recall strategy can be determined from the user behavior vector and the vectors of the objects in the initial recall object set, and the recall number of each recall strategy is determined according to that matching degree; because the user behavior vector reflects the preference of the user, the personalized requirements of the user are satisfied and the recall accuracy can be improved.
Referring to FIG. 8, a block diagram of the multi-policy recall device of the present disclosure is shown, as follows:
an access request receiving module 201, configured to receive an access request, where the access request includes: and target user information.
An initial recall module 202, configured to generate, based on at least one recall policy, a recall object set corresponding to the target user information under each recall policy, as the initial recall object set of the recall policy.
A matching degree determining module 203, configured to determine, for each recall policy, a matching degree between the target user information and the recall policy according to a user behavior vector corresponding to the target user information and a vector of an object in a set of initial recall objects of the recall policy, where the user behavior vector of the target user information is generated according to a vector of each object in a history behavior sequence corresponding to the target user information.
A recall number determining module 204, configured to determine, for each recall policy, the recall number of the recall policy according to the matching degree between the target user information and the recall policy.
And the comprehensive recall module 205 is configured to generate a comprehensive recall object set according to the number of recalls of each recall policy and the initial recall object set.
In another embodiment of the present disclosure, referring to a structural diagram of the matching degree determining module shown in fig. 9, the matching degree determining module 203 in fig. 8 includes a reference object set determining sub-module 2031, a recall vector determining sub-module 2032, and a matching degree determining sub-module 2033:
the reference object set determining sub-module 2031 is configured to, for each recall policy, select a target number of objects from the initial recall object set of the recall policy as a reference object set of the recall policy.
A recall vector determining submodule 2032, configured to, for each recall policy, generate a recall vector of the recall policy according to a vector of each object in a reference object set of the recall policy.
The matching degree determining sub-module 2033 is configured to determine, for each recall policy, a matching degree between the target user information and the recall policy according to a similarity between the user behavior vector and a recall vector of the recall policy.
In another embodiment of the present disclosure, referring to the architecture diagram of the recall number determination module shown in fig. 10, the recall number determination module 204 in fig. 8 includes a matching degree normalization sub-module 2041 and a recall number determination sub-module 2042:
the matching degree normalization sub-module 2041 is configured to perform normalization processing on the matching degree of the target user information and each recall policy to obtain a normalized matching degree of the target user information and each recall policy.
The recall number determining submodule 2042 is configured to calculate, for each recall policy, a product of the normalized matching degree and a preset recall total number, so as to obtain the recall number of the recall policy.
In another embodiment of the present disclosure, referring to the block diagram for determining the vector of each object in the historical behavior sequence shown in fig. 11, the apparatus further includes a user information set obtaining module 206 and an object vector predicting module 207:
a user information set obtaining module 206, configured to obtain a user information set according to a preset time period, where the user information set at least includes the target user information.
And the object vector prediction module 207 is configured to input the historical behavior sequence of each piece of user information in the user information set into a vector generation model, and predict a vector of each object in the historical behavior sequence of the user information, where the vector generation model is obtained in advance through unsupervised training.
In another embodiment of the present disclosure, referring to a block diagram of determining vectors of objects in an initial recall object set of a recall policy shown in fig. 12, the apparatus further includes a user information set obtaining module 206, an object vector predicting module 207, and an object vector obtaining module 208:
a user information set obtaining module 206, configured to obtain a user information set according to a preset time period, where the user information set at least includes the target user information.
And the object vector prediction module 207 is configured to input the historical behavior sequence of each piece of user information in the user information set into a vector generation model, and predict a vector of each object in the historical behavior sequence of the user information, where the vector generation model is obtained in advance through unsupervised training.
An object vector obtaining module 208, configured to, for each object in an initial recall object set of the recall policy, obtain a vector of the object from an object vector set, where the object vector set is generated according to the vector of each object and the object in the history behavior sequence of each user information.
In another embodiment of the present disclosure, referring to the block diagram of determining the user behavior vector corresponding to the target user information shown in fig. 13, the apparatus further includes a user information set obtaining module 206, an object vector predicting module 207, and a user behavior vector obtaining module 209:
a user information set obtaining module 206, configured to obtain a user information set according to a preset time period, where the user information set at least includes the target user information.
And the object vector prediction module 207 is configured to input the historical behavior sequence of each piece of user information in the user information set into a vector generation model, and predict a vector of each object in the historical behavior sequence of the user information, where the vector generation model is obtained in advance through unsupervised training.
A user behavior vector obtaining module 209, configured to obtain a user behavior vector of the target user information from a user behavior vector set, where the user behavior vector set is generated according to the user behavior vector of each piece of user information and the user information, and the user behavior vector of each piece of user information is generated according to a vector of each object in the historical behavior sequence of each piece of user information.
In another embodiment of the present disclosure, referring to the structure diagram of determining an integrated recall object set shown in fig. 14, the integrated recall module 205 in fig. 8 includes a recall sub-module 2051 and an integrated recall sub-module 2052:
and a recalling submodule 2051, configured to, for each recall policy, obtain the objects of the recall number from the primary recall object set of the recall policy, and obtain a recalling object set of the recall policy.
And an integrated recall submodule 2052 for merging the recall object sets of the recall policies into an integrated recall object set.
In summary, the present disclosure provides a multi-policy recall apparatus, the apparatus comprising: an access request receiving module, configured to receive an access request, where the access request includes target user information; an initial recall module, configured to generate, based on at least one recall strategy, a recall object set corresponding to the target user information under each recall strategy as the initial recall object set of that strategy; a matching degree determining module, configured to determine, for each recall policy, the matching degree between the target user information and the recall policy according to a user behavior vector corresponding to the target user information and the vectors of the objects in the initial recall object set of the recall policy, where the user behavior vector of the target user information is generated according to the vector of each object in the historical behavior sequence corresponding to the target user information; a recall number determining module, configured to determine, for each recall policy, the recall number of the recall policy according to the matching degree between the target user information and the recall policy; and a comprehensive recall module, configured to generate a comprehensive recall object set according to the recall number and the initial recall object set of each recall policy. The matching degree between the target user information and each recall policy can be determined from the user behavior vector and the vectors of the objects in the initial recall object set, and the recall number of each recall policy is determined according to that matching degree; because the user behavior vector reflects the preference of the user, the personalized requirements of the user are satisfied and the recall accuracy can be improved.
The present disclosure also provides an electronic device, referring to fig. 15, including: a processor 301, a memory 302, and a computer program 3021 stored on the memory 302 and executable on the processor, the processor 301 implementing the multi-policy recall method of the foregoing embodiments when executing the program.
The present disclosure also provides a readable storage medium, wherein the instructions of the storage medium, when executed by the processor of the electronic device, enable the electronic device to execute the multi-policy recall method of the aforementioned embodiments.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the above description. Moreover, this disclosure is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the present disclosure as described herein, and any descriptions above of specific languages are provided for disclosure of enablement and best mode of the present disclosure.
In the description provided herein, numerous specific details are set forth. It can be appreciated, however, that the present disclosure may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the disclosure, various features of the disclosure are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that is, the claimed disclosure requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this disclosure.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and placed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component and may be further divided into sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Various component embodiments of the disclosure may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components in a multi-policy recall apparatus according to the present disclosure. The present disclosure may also be embodied as an apparatus or device program for performing a portion or all of the methods described herein. Such programs implementing the present disclosure may be stored on a computer-readable medium or may be in the form of one or more signals. Such a signal may be downloaded from an internet web site or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the disclosure, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The disclosure may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third and so on does not indicate any ordering; these words may be interpreted as names.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The above description is only exemplary of the present disclosure and should not be taken as limiting the disclosure; any modification, equivalent replacement, or improvement made within the spirit and principles of the present disclosure shall be included in the scope of the present disclosure.
The above description is only for the specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present disclosure, and all the changes or substitutions should be covered by the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A multi-policy recall method, the method comprising:
receiving an access request, the access request comprising: target user information;
generating, based on at least one recall strategy, a recall object set corresponding to the target user information under each recall strategy, and taking the recall object set as an initial recall object set of the recall strategy;
for each recall strategy, determining the matching degree of the target user information and the recall strategy according to a user behavior vector corresponding to the target user information and a vector of an object in the initial recall object set of the recall strategy, wherein the user behavior vector of the target user information is generated according to a vector of each object in a historical behavior sequence corresponding to the target user information;
for each recall strategy, determining the recall quantity of the recall strategy according to the matching degree of the target user information and the recall strategy;
and generating a comprehensive recall object set according to the recall quantity of each recall strategy and the initial recall object set.
2. The method according to claim 1, wherein the step of determining, for each of the recall policies, the matching degree of the target user information and the recall policy according to a user behavior vector corresponding to the target user information and a vector of an object in the initial recall object set of the recall policy comprises:
for each recall strategy, selecting a target number of objects from the initial recall object set of the recall strategy as a reference object set of the recall strategy;
for each recall strategy, generating a recall vector of the recall strategy according to the vector of each object in the reference object set of the recall strategy;
and for each recall strategy, determining the matching degree of the target user information and the recall strategy according to the similarity between the user behavior vector and the recall vector of the recall strategy.
3. The method according to claim 1, wherein the step of determining, for each of the recall policies, the recall quantity of the recall policy according to the matching degree of the target user information and the recall policy comprises:
normalizing the matching degree of the target user information and each recall strategy to obtain a normalized matching degree of the target user information and each recall strategy;
and for each recall strategy, calculating the product of the normalized matching degree and a preset total recall quantity to obtain the recall quantity of the recall strategy.
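As a hypothetical worked example of this claim: suppose three recall strategies have matching degrees of 0.48, 0.24 and 0.08 with the target user information; dividing each by their sum 0.80 gives normalized matching degrees of 0.6, 0.3 and 0.1, and with a preset total recall quantity of 100 the recall quantities of the three strategies are 60, 30 and 10 objects respectively. The numbers are illustrative only.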
4. The method of claim 1, further comprising:
acquiring a user information set according to a preset time period, wherein the user information set at least comprises the target user information;
and inputting the historical behavior sequence of each piece of user information in the user information set into a vector generation model to predict the vector of each object in the historical behavior sequence of the user information, wherein the vector generation model is obtained in advance through unsupervised training.
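The claim does not name a specific vector generation model. One common unsupervised choice that is consistent with training on historical behavior sequences is a skip-gram ("item2vec"-style) model; the sketch below, which assumes the gensim library and uses hypothetical object ids, shows how such object vectors might be produced.

```python
from gensim.models import Word2Vec

# Each "sentence" is the historical behavior sequence (object ids) of one piece
# of user information. The data here is hypothetical.
behavior_sequences = [
    ["obj_12", "obj_7", "obj_33", "obj_7"],
    ["obj_7", "obj_51", "obj_12"],
]

# Unsupervised training of the assumed vector generation model (skip-gram).
model = Word2Vec(
    sentences=behavior_sequences,
    vector_size=64,   # embedding dimension
    window=5,         # context window within a behavior sequence
    min_count=1,
    sg=1,             # skip-gram
    epochs=10,
)

# Vector of every object appearing in the historical behavior sequences.
object_vectors = {obj: model.wv[obj] for obj in model.wv.index_to_key}
```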
5. The method of claim 4, wherein the vector of an object in the initial recall object set of the recall policy is obtained by:
for each object in the initial recall object set of the recall strategy, acquiring the vector of the object from an object vector set, wherein the object vector set is generated according to each object in the historical behavior sequence of each piece of user information and the vector of that object.
6. The method according to claim 4, wherein the user behavior vector corresponding to the target user information is obtained by:
acquiring the user behavior vector of the target user information from a user behavior vector set, wherein the user behavior vector set is generated according to each piece of user information and the user behavior vector of that user information, and the user behavior vector of each piece of user information is generated according to the vector of each object in the historical behavior sequence of that user information.
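Claim 6 leaves open how the object vectors of a historical behavior sequence are aggregated into the user behavior vector. A minimal sketch, assuming simple mean pooling and hypothetical data, could build the user behavior vector set as follows.

```python
import numpy as np

# Hypothetical object vectors (e.g., produced by the vector generation model).
object_vectors = {
    "obj_7":  np.array([0.1, 0.3]),
    "obj_12": np.array([0.2, 0.1]),
    "obj_33": np.array([0.0, 0.4]),
    "obj_51": np.array([0.5, 0.2]),
}

def user_behavior_vector(history, object_vectors):
    """Aggregate the object vectors of a historical behavior sequence
    (mean pooling is an assumption; the claim does not fix the aggregation)."""
    vecs = [object_vectors[o] for o in history if o in object_vectors]
    return np.mean(vecs, axis=0) if vecs else None

# Hypothetical historical behavior sequences keyed by user information.
user_histories = {
    "user_a": ["obj_12", "obj_7", "obj_33"],
    "user_b": ["obj_7", "obj_51"],
}

# The "user behavior vector set" of the claim: user information -> user behavior vector.
user_behavior_vectors = {
    user: user_behavior_vector(history, object_vectors)
    for user, history in user_histories.items()
}
```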
7. The method of claim 1, wherein the step of generating a comprehensive recall object set according to the recall quantity of each of the recall policies and the initial recall object set comprises:
for each recall strategy, acquiring the recall quantity of objects from the initial recall object set of the recall strategy to obtain a recall object set of the recall strategy;
and merging the recall object sets of the recall strategies into the comprehensive recall object set.
8. A multi-policy recall apparatus, the apparatus comprising:
an access request receiving module, configured to receive an access request, where the access request includes: target user information;
the initial recall module is used for generating a recall object set corresponding to the target user information under each recall strategy based on at least one recall strategy and taking the recall object set as the initial recall object set of the recall strategy;
a matching degree determining module, configured to determine, for each recall policy, a matching degree between the target user information and the recall policy according to a user behavior vector corresponding to the target user information and a vector of an object in the initial recall object set of the recall policy, wherein the user behavior vector of the target user information is generated according to a vector of each object in a historical behavior sequence corresponding to the target user information;
the recall quantity determining module is used for determining, for each recall strategy, the recall quantity of the recall strategy according to the matching degree of the target user information and the recall strategy;
and the comprehensive recall module is used for generating a comprehensive recall object set according to the recall quantity of each recall strategy and the initial recall object set.
9. An electronic device, comprising:
a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the processor implements the multi-policy recall method of any of claims 1-7 when executing the program.
10. A readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the multi-policy recall method of any one of claims 1 to 7.
CN201911286330.0A 2019-12-13 2019-12-13 Multi-policy recall method and device, electronic equipment and readable storage medium Withdrawn CN111241388A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911286330.0A CN111241388A (en) 2019-12-13 2019-12-13 Multi-policy recall method and device, electronic equipment and readable storage medium


Publications (1)

Publication Number Publication Date
CN111241388A (en) 2020-06-05

Family

ID=70863924

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911286330.0A Withdrawn CN111241388A (en) 2019-12-13 2019-12-13 Multi-policy recall method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111241388A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109190043A (en) * 2018-09-07 2019-01-11 北京三快在线科技有限公司 Recommended method and device, storage medium, electronic equipment and recommender system
CN110008375A (en) * 2019-03-22 2019-07-12 广州新视展投资咨询有限公司 Video is recommended to recall method and apparatus
CN110083688A (en) * 2019-05-10 2019-08-02 北京百度网讯科技有限公司 Search result recalls method, apparatus, server and storage medium
CN110532479A (en) * 2019-09-05 2019-12-03 北京思维造物信息科技股份有限公司 A kind of information recommendation method, device and equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
原福永 (Yuan Fuyong) et al.: "A multi-strategy adaptive recommendation model fusing user experience" *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022110789A1 (en) * 2020-11-27 2022-06-02 北京搜狗科技发展有限公司 Entry recommendation method and apparatus, and apparatus for recommending entries
CN112765241A (en) * 2021-02-04 2021-05-07 腾讯科技(深圳)有限公司 Recall data determining method, apparatus and storage medium
CN112765241B (en) * 2021-02-04 2024-06-11 腾讯科技(深圳)有限公司 Recall data determining method, recall data determining device and storage medium
CN113205362A (en) * 2021-04-30 2021-08-03 北京有竹居网络技术有限公司 Method, apparatus, device, storage medium and program product for determining a promoter
CN113641721A (en) * 2021-10-13 2021-11-12 中航信移动科技有限公司 Air ticket display method and device, electronic equipment and storage medium
CN113641721B (en) * 2021-10-13 2022-03-04 中航信移动科技有限公司 Air ticket display method and device, electronic equipment and storage medium
CN116501976A (en) * 2023-06-25 2023-07-28 浙江天猫技术有限公司 Data recommendation, model training, similar user analysis methods, apparatus and media
CN116501976B (en) * 2023-06-25 2023-11-17 浙江天猫技术有限公司 Data recommendation, model training, similar user analysis methods, apparatus and media

Similar Documents

Publication Publication Date Title
CN111241388A (en) Multi-policy recall method and device, electronic equipment and readable storage medium
JP6578244B2 (en) Determining suitability accuracy based on historical data
CN109241415B (en) Project recommendation method and device, computer equipment and storage medium
US9460475B2 (en) Determining connectivity within a community
US9443004B2 (en) Social graph data analytics
US20180341898A1 (en) Demand forecast
Wu et al. A novel method for calculating service reputation
CN107808314B (en) User recommendation method and device
Xu et al. Integrated collaborative filtering recommendation in social cyber-physical systems
JP6985518B2 (en) Client, server, and client-server systems adapted to generate personalized recommendations
CN111461812A (en) Object recommendation method and device, electronic equipment and readable storage medium
CN109858919B (en) Abnormal account number determining method and device, and online ordering method and device
US20100161544A1 (en) Context-based interests in computing environments and systems
CN111369313A (en) Processing method and device for house-ordering failure order, computer equipment and storage medium
CN110766513A (en) Information sorting method and device, electronic equipment and readable storage medium
KR20110096488A (en) Collaborative networking with optimized inter-domain information quality assessment
CN108667877B (en) Method and device for determining recommendation information, computer equipment and storage medium
CN111797320A (en) Data processing method, device, equipment and storage medium
CN111259272B (en) Search result ordering method and device
CN111523964A (en) Clustering-based recall method and apparatus, electronic device and readable storage medium
CN111967948A (en) Bank product recommendation method and device, server and storage medium
CN111160951A (en) Method and device for predicting excitation result, electronic equipment and readable storage medium
US8856110B2 (en) Method and apparatus for providing a response to a query
CN109829593B (en) Credit determining method and device for target object, storage medium and electronic device
US8175902B2 (en) Semantics-based interests in computing environments and systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20200605