CN114998024A - Product recommendation method, device, equipment and medium based on click rate

Info

Publication number: CN114998024A
Application number: CN202210683138.0A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 唐珊珊, 王凯, 周洪菊, 赵培
Current and original assignee: Industrial and Commercial Bank of China Ltd (ICBC)
Legal status: Pending
Prior art keywords: product, products, probability, list, sorted list

Classifications

    • G06Q 40/06: Asset management; Financial planning or analysis
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06Q 40/04: Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange
    • G06Q 40/08: Insurance

Abstract

The disclosure provides a product recommendation method, device, equipment and medium based on click rate, relating to the technical field of artificial intelligence. The method comprises: obtaining a plurality of pieces of historical transaction data of products purchased by a user to form a training data set; obtaining m products to be recommended; extracting the product features and user behavior features of each of the m products according to the training data set to generate m feature vectors; calculating the posterior probabilities respectively corresponding to the m feature vectors using a naive Bayes algorithm, and obtaining a first sorted list of the m products according to the posterior probabilities; inputting the m feature vectors into a deep learning network model to obtain estimated click rates of the m products, and obtaining a second sorted list of the m products according to the estimated click rates, the deep learning network model being pre-trained on the training data set; and correcting the first sorted list and the second sorted list to obtain a third sorted list, and recommending the m products in sequence according to the third sorted list.

Description

Product recommendation method, device, equipment and medium based on click rate
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, and in particular, to a product recommendation method and apparatus, an electronic device, a storage medium, and a program product based on a click rate.
Background
With the rise of product recommendation in financial services, recommendation of the three major product categories of fund, insurance and financing (hereinafter referred to as base warranty products) has become popular. If the products that a customer is interested in and prefers can be placed at the head of the recommendation list, this will promote an increase in transaction volume for that type of product. In the prior art, a common practice is to estimate the Click-Through Rate (CTR) of a base warranty product according to the attributes of the product, and then form a recommendation list of the products to be recommended in descending order of estimated CTR.
At present, methods for ranking base warranty products after CTR estimation mainly fall into two categories: one is traditional machine learning, represented by collaborative filtering, matrix factorization, Logistic Regression (LR), Gradient Boosting Decision Tree plus Logistic Regression (GBDT+LR), and Factorization Machines (FM); the other is deep learning, represented by DeepFM. The core of both is to learn product features to produce an estimated CTR for each product, and then sort the products from high to low by estimated CTR.
However, in implementing the concept of the present disclosure, the inventors found that the related art has at least the following problems: (1) among the traditional machine learning methods, collaborative filtering and matrix factorization can only give the similarity between the products to be recommended and the products the user has purchased historically; LR depends heavily on manual feature selection; GBDT+LR can select features automatically but cannot handle high-dimensional discrete features, so the model generalizes poorly; FM can handle sparse features but does not consider deeper relationships among features. (2) In the deep learning methods, the core is the design of the neural network model, and a currently strong method is the DeepFM model. Although DeepFM considers both shallow and deep feature interactions, its recommendation list is obtained simply by sorting products in descending order of estimated CTR, and no correction is applied to the estimated CTR or the sorting result, so the recommendation effect is sometimes poor.
Disclosure of Invention
In view of the foregoing, the present disclosure provides a product recommendation method, apparatus, electronic device, storage medium, and program product based on click rate.
According to a first aspect of the present disclosure, there is provided a product recommendation method based on click rate, including: acquiring a plurality of pieces of historical transaction data of products purchased by a user to form a training data set; obtaining m products to be recommended, where m ≥ 2 and m is an integer; extracting the product features and user behavior features of each of the m products according to the training data set to generate m feature vectors; calculating the posterior probabilities respectively corresponding to the m feature vectors using a naive Bayes algorithm, and obtaining a first sorted list of the m products according to the posterior probabilities; inputting the m feature vectors into a deep learning network model to obtain estimated click rates of the m products, and obtaining a second sorted list of the m products according to the estimated click rates, the deep learning network model being pre-trained based on the training data set; and correcting the first sorted list and the second sorted list to obtain a third sorted list, and recommending the m products in sequence according to the third sorted list.
According to an embodiment of the present disclosure, the product characteristics of each product include a plurality of sub-characteristics, the sub-characteristics including a product type, a product goodness, a product sales amount, a marketing novelty, and a discount degree, each sub-characteristic including a plurality of attribute values; the user behavior characteristic of each product is a discrete characteristic, which characterizes whether the user has purchased the product.
According to the embodiment of the disclosure, the posterior probabilities respectively corresponding to the m feature vectors are calculated by using a naive Bayes algorithm, which includes: calculating the prior probability of each discrete feature in the training data set; calculating the conditional probability of different attribute values of each sub-feature under each discrete feature according to the prior probability; and calculating the posterior probability of each product in the m products belonging to each discrete feature according to the prior probability and the conditional probability.
According to the embodiment of the present disclosure, when the discrete features include a first discrete feature and a second discrete feature, the first discrete feature represents that the user has purchased the corresponding product, the second discrete feature represents that the user has not purchased the corresponding product, and the m products include a first product, the method further includes, after calculating the posterior probability that the first product belongs to each discrete feature: judging whether the posterior probability that the first product belongs to the first discrete feature is greater than the posterior probability that the first product belongs to the second discrete feature; if so, determining the posterior probability of belonging to the first discrete feature as the probability that the first product is recommended; otherwise, determining the probability that the first product is recommended according to the complementary-event probability of the posterior probability of belonging to the second discrete feature; and determining a first sorted list of the m products based on the probability that the first product is recommended.
According to an embodiment of the disclosure, when the posterior probability that the first product belongs to the first discrete feature is not greater than the posterior probability that the first product belongs to the second discrete feature, the probability that the first product is recommended is determined according to the following formula:

p_k = 1 - P(Y = -1 \mid X = x_k)

where p_k represents the probability that the first product is recommended, and P(Y = -1 \mid X = x_k) represents the posterior probability that the first product belongs to the second discrete feature.
According to an embodiment of the present disclosure, when a prior probability or a conditional probability is 0, Laplace smoothing is used for preprocessing.
According to the embodiment of the disclosure, the deep learning network model is constructed from a DeepFM network model, and the activation function \sigma_2(x) of the hidden layer of the DeepFM network model is given by the formula shown as an image in the original publication (Figure BDA0003696351520000033), where x represents the input of a hidden layer neuron and \sigma_2(x) represents the output of the hidden layer neuron.
According to an embodiment of the present disclosure, a deep learning network model is trained by: splitting a training data set into a training set, a verification set and a test set; training the constructed deep learning network model by using a training set and a verification set, and updating network parameters of the deep learning network model according to a preset loss function; the effectiveness of the deep learning network model is evaluated using a test set.
According to an embodiment of the present disclosure, modifying the first sorted list and the second sorted list to obtain a third sorted list includes: judging whether the first sorted list and the second sorted list are completely consistent, and if so, determining the first sorted list or the second sorted list as a third sorted list; and if not, multiplying the elements in the first sorted list and the second sorted list respectively to obtain a third sorted list.
According to the embodiment of the present disclosure, after generating m feature vectors, the method further includes: and extracting product identifications of the m products according to the training data set, and establishing one-to-one mapping between the product identifications of the m products and the m characteristic vectors.
According to the embodiment of the present disclosure, recommending the m products in sequence according to the third sorted list includes: recommending, in sequence, the product identification list mapped from the third sorted list.
According to an embodiment of the disclosure, the method further comprises: obtaining authorization of a user to a plurality of pieces of historical transaction data of a product purchased by the user; after authorization of the user, a plurality of historical transaction data are obtained.
A second aspect of the present disclosure provides a product recommendation device based on click rate, including: the data acquisition module is used for acquiring a plurality of historical transaction data of products purchased by a user to form a training data set; the product acquisition module is used for acquiring m products to be recommended, wherein m is greater than or equal to 2 and is an integer; the vector generation module is used for extracting the product characteristics and the user behavior characteristics of each product in the m products according to the training data set to generate m characteristic vectors; the first sequencing module is used for calculating the posterior probabilities corresponding to the m feature vectors by using a naive Bayes algorithm, and obtaining a first sequencing list of the m products according to the posterior probabilities; the second sorting module is used for inputting the m characteristic vectors into the deep learning network model to obtain click rate pre-estimated values of the m products, and obtaining a second sorting list of the m products according to the click rate pre-estimated values, wherein the deep learning network model is obtained by pre-training based on a training data set; and the product recommending module is used for correcting the first sorted list and the second sorted list to obtain a third sorted list, and recommending m products in sequence according to the third sorted list.
A third aspect of the present disclosure provides an electronic device, comprising: one or more processors; a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the click-through-rate-based product recommendation method described above.
The fourth aspect of the present disclosure also provides a computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to perform the above click-through rate-based product recommendation method.
The fifth aspect of the present disclosure also provides a computer program product, which includes a computer program, and when the computer program is executed by a processor, the computer program implements the above click-through rate-based product recommendation method.
Compared with the prior art, the product recommendation method, the product recommendation device, the electronic equipment, the storage medium and the program product based on the click rate have at least the following beneficial effects:
(1) by combining traditional machine learning with deep learning, the method estimates the click rates of a plurality of products to be marketed and then recommends base warranty product information; it requires no manual feature screening, can handle both shallow and deep feature relationships, and further corrects the estimated CTR given by the model, which can improve recommendation accuracy;
(2) constructing a federated learning platform with a company possessing user geographic-location characteristics helps improve the accuracy of the bank's abnormal-user early-warning model, making it possible to preserve assets, stop losses in time and prevent such transactions, and even to provide partial information on abnormal users to other external monitoring parties.
Drawings
The foregoing and other objects, features and advantages of the disclosure will be apparent from the following description of embodiments of the disclosure, which proceeds with reference to the accompanying drawings, in which:
FIG. 1 schematically illustrates an application scenario suitable for implementing a click-through rate based product recommendation method and apparatus according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of a method of click-through rate based product recommendation in accordance with an embodiment of the present disclosure;
FIG. 3 schematically shows a flow chart of a posterior probability calculation process according to an embodiment of the present disclosure;
FIG. 4 schematically illustrates a flow chart of a first sorted list determination process according to an embodiment of the disclosure;
FIG. 5 schematically illustrates a flow diagram of a deep learning network model training process according to an embodiment of the disclosure;
FIG. 6 schematically illustrates a flow chart of a third sorted-list modification process according to an embodiment of the disclosure;
FIG. 7 schematically illustrates a flow diagram of a product identification mapping establishment procedure in accordance with an embodiment of the present disclosure;
FIG. 8 schematically illustrates a flow chart of a product identification list recommendation process according to an embodiment of the present disclosure;
FIG. 9 schematically illustrates a block diagram of a click-through rate based product recommendation device according to an embodiment of the present disclosure;
FIG. 10 schematically illustrates a block diagram of an electronic device suitable for implementing a click-through rate based product recommendation method in accordance with an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have a alone, B alone, C alone, a and B together, a and C together, B and C together, and/or A, B, C together, etc.).
Some block diagrams and/or flow diagrams are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the instructions, which execute via the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks. The techniques of this disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). In addition, the techniques of this disclosure may take the form of a computer program product on a computer-readable storage medium having instructions stored thereon for use by or in connection with an instruction execution system.
In the technical scheme of the disclosure, the collection, storage, use, processing, transmission, provision, disclosure, application and other processing of the personal information of the related user are all in accordance with the regulations of related laws and regulations, necessary confidentiality measures are taken, and the customs of the public order is not violated.
In the technical scheme of the disclosure, before the personal information of the user is acquired or collected, the authorization or the consent of the user is acquired.
The embodiment of the disclosure provides a product recommendation method, device, equipment, storage medium and program product based on click rate, relating to the technical field of artificial intelligence. The method comprises the following steps: acquiring a plurality of pieces of historical transaction data of products purchased by a user to form a training data set; obtaining m products to be recommended, where m ≥ 2 and m is an integer; extracting the product features and user behavior features of each of the m products according to the training data set to generate m feature vectors; calculating the posterior probabilities respectively corresponding to the m feature vectors using a naive Bayes algorithm, and obtaining a first sorted list of the m products according to the posterior probabilities; inputting the m feature vectors into a deep learning network model to obtain estimated click rates of the m products, and obtaining a second sorted list of the m products according to the estimated click rates, the deep learning network model being pre-trained based on the training data set; and correcting the first sorted list and the second sorted list to obtain a third sorted list, and recommending the m products in sequence according to the third sorted list.
Fig. 1 schematically illustrates an application scenario suitable for implementing a click-through rate based product recommendation method and apparatus according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of an application scenario in which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, the application scenario 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104, and a server 105. Network 104 is the medium used to provide communication links between terminal devices 101, 102, 103 and server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have installed thereon various communication client applications, such as shopping-like applications, web browser applications, search-like applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only).
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (for example only) providing support for websites browsed by users using the terminal devices 101, 102, 103. The backend management server may analyze and process the received data such as the user request, and feed back a processing result (for example, a web page, information, or data obtained or generated according to the user request) to the terminal device.
It should be noted that the product recommendation method based on click rate provided by the embodiment of the present disclosure may be generally executed by the server 105. Accordingly, the click-through rate based product recommendation device provided by the embodiments of the present disclosure may be generally disposed in the server 105. The product recommendation method based on click rate provided by the embodiment of the present disclosure may also be executed by a server or a server cluster different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the product recommendation device based on click rate provided by the embodiment of the present disclosure may also be disposed in a server or a server cluster different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
The click-through rate based product recommendation method according to the embodiment of the present disclosure will be described in detail below with reference to fig. 2 to 8 based on the application scenario described in fig. 1.
FIG. 2 schematically shows a flowchart of a click-through rate based product recommendation method according to an embodiment of the present disclosure.
As shown in fig. 2, the click-through-rate-based product recommendation method of this embodiment may include operations S210 to S260.
In operation S210, a plurality of pieces of historical transaction data of products purchased by a user are acquired, constituting a training data set.
In an embodiment of the present disclosure, before acquiring a plurality of pieces of historical transaction data of a product purchased by a user, the method may further include: obtaining authorization of a user to a plurality of pieces of historical transaction data of a product purchased by the user; after authorization of the user, a plurality of historical transaction data are obtained. Thus, a request for obtaining a plurality of pieces of historical transaction data of a product purchased by a user can be sent to the user before obtaining the plurality of pieces of historical transaction data of the product purchased by the user. The operation S210 is performed in case that the user agrees or authorizes that a plurality of pieces of historical transaction data of his/her purchased product can be acquired.
In operation S220, m products to be recommended are obtained, where m is an integer greater than or equal to 2.
The number m of the products is preset, and the embodiment is mainly used for recommending the optimal sequence of the m products for the user so as to meet the interest and the demand of the user, improve the marketing efficiency and avoid the interference of invalid recommendation information on the user.
In operation S230, product features and user behavior features of each of the m products are extracted according to the training data set, and m feature vectors are generated.
In operation S240, posterior probabilities respectively corresponding to the m feature vectors are calculated using a naive bayes algorithm, and a first ranking table of the m products is obtained according to the posterior probabilities.
In operation S250, the m feature vectors are input into the deep learning network model to obtain click rate estimated values of m products, and a second ordered list of the m products is obtained according to the click rate estimated values, where the deep learning network model is obtained by pre-training based on a training data set.
In operation S260, the first sorted list and the second sorted list are modified to obtain a third sorted list, and m products are recommended in sequence according to the third sorted list.
Through the embodiment of the disclosure, the click rates of a plurality of products to be marketed are estimated by combining traditional machine learning with deep learning, and base warranty product information is then recommended. A final sorted list is obtained from the sorted lists produced by two different algorithms, the posterior probability and the deep neural network, so that products are recommended to the user in order, improving the marketing efficiency and quality of the products. In addition, the method requires no manual feature screening, can handle both shallow and deep feature relationships, and further corrects the estimated click rate given by the model, which can improve recommendation accuracy.
In the disclosed embodiment, the product features of each product include a plurality of sub-features, the sub-features including product type, product goodness, product sales volume, marketing novelty and discount degree, and each sub-feature includes a plurality of attribute values. The user behavior feature of each product is a discrete feature that characterizes whether the user has purchased the product.
Specifically, a formal description of the technical problem to be solved by the present embodiment is given below. For base warranty products, assume that a user has purchased one or more products; after the user's authorization is obtained and a plurality of pieces of historical transaction data are acquired, the product features and user behavior features of the product can be extracted from each piece of historical transaction data. For ease of description, the product features and user behavior features of a product are formally represented as:
x_i = (x_i^{(1)}, x_i^{(2)}, \ldots, x_i^{(n)}), \quad y_i \in \{-1, +1\}, \quad i = 1, 2, \ldots, N
where N is the total number of pieces of historical transaction data acquired; n is the number of sub-features contained in the product feature of one piece of historical transaction data, with n ≥ 2 and n an integer; x_i is the overall feature of the i-th product; x_i^{(j)} is the j-th sub-feature of product i, j = 1, 2, \ldots, n, and x_i^{(j)} \in \{a_{j1}, a_{j2}, \ldots, a_{jp}\}, where a_{jl} is the l-th attribute value under the j-th sub-feature, l = 1, 2, \ldots, p; p is the number of attribute values under the j-th sub-feature, with p a positive integer; y_i is the user behavior feature corresponding to product i, a discrete feature taking the value -1 or +1; y_i = +1 indicates that the user clicked to purchase product i, and y_i = -1 indicates that the user did not click to purchase product i. The recommendation problem for base warranty products can then be formally expressed as follows.
inputting:
(1) a training data set T formed by a plurality of pieces of historical transaction data of products purchased by a user:
T = \{(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)\}
(2) m feature vectors generated from the product features of the m base warranty products to be recommended and the user behavior features of the currently given user:
\{x_1, x_2, \ldots, x_m\}

where x_t is the t-th feature vector, corresponding to the t-th product to be recommended, t = 1, 2, \ldots, m. At this point the products to be recommended have not been sorted, so the index t reflects only the order of generation.
(3) the product identifiers id_1, id_2, \ldots, id_m of the m base warranty products to be recommended, with a one-to-one mapping established between the product identifiers of the m base warranty products and the m feature vectors in (2).
And (3) outputting:
a recommendation list of length m, whose content is the product identifiers corresponding to the m products to be recommended, ordered from high to low by estimated click rate.
Based on the above formal description of the problem, the embodiment of the disclosure first calculates, using a naive Bayes algorithm, the posterior probabilities respectively corresponding to the m feature vectors, and obtains a first sorted list of the m base warranty products to be recommended based on the posterior probabilities; then the training data set is used to train a deep neural network, yielding a trained deep learning network model, which is used to estimate the click rate of each of the m base warranty products, and a second sorted list of the m products is obtained based on the estimated click rates; finally, the sorted lists obtained by the two different algorithms are corrected to obtain the final sorted list, according to which the m products are recommended in sequence.
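As a concrete illustration of the inputs and outputs formalized above, the following Python sketch shows one possible way to hold the training data set T, the m candidate feature vectors with their product identifiers, and the resulting recommendation list; the type and class names (FeatureVector, RecommendationInput, and so on) are illustrative assumptions and do not appear in the original filing.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

# One historical transaction record (x_i, y_i): the product's sub-feature
# attribute values and the user behavior label (+1 purchased, -1 not purchased).
FeatureVector = Tuple[str, ...]            # (a_{1l}, a_{2l}, ..., a_{nl})
TrainingSample = Tuple[FeatureVector, int]

@dataclass
class RecommendationInput:
    training_set: List[TrainingSample]      # T = {(x_1, y_1), ..., (x_N, y_N)}
    candidate_vectors: List[FeatureVector]  # m feature vectors of products to recommend
    product_ids: List[str]                  # id_1, ..., id_m, aligned with candidate_vectors

    def id_mapping(self) -> Dict[int, str]:
        """One-to-one mapping between feature-vector index t and product identifier."""
        return {t: pid for t, pid in enumerate(self.product_ids)}

# The method's output is a recommendation list of length m: product identifiers
# ordered from highest to lowest estimated click-through probability.
RecommendationList = List[str]
```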
FIG. 3 schematically shows a flow chart of a posterior probability calculation process according to an embodiment of the disclosure.
As shown in fig. 3, in the embodiment of the present disclosure, the calculating, by using a naive bayes algorithm, posterior probabilities corresponding to the m feature vectors in operation S240 may specifically include operation S2401 to operation S2403.
In operation S2401, a prior probability of each discrete feature in the training data set is calculated.
The prior probabilities are calculated according to the following formulas:

P(Y = +1) = \frac{\sum_{i=1}^{N} I(y_i = +1)}{N}, \qquad P(Y = -1) = \frac{\sum_{i=1}^{N} I(y_i = -1)}{N}

where P(Y = +1) represents the prior probability that the user has purchased a product; P(Y = -1) represents the prior probability that the user has not purchased a product; I(y_i = +1) is the indicator that the user purchased product i in a given piece of historical transaction data, so the sum counts the purchased records; and I(y_i = -1) is defined analogously for records in which the user did not purchase product i.
In operation S2402, a conditional probability of a different attribute value of each sub-feature under each discrete feature is calculated according to the prior probability.
The conditional probabilities are calculated according to the following formulas:

P(X^{(j)} = a_{jl} \mid Y = +1) = \frac{\sum_{i=1}^{N} I(x_i^{(j)} = a_{jl},\, y_i = +1)}{\sum_{i=1}^{N} I(y_i = +1)}

P(X^{(j)} = a_{jl} \mid Y = -1) = \frac{\sum_{i=1}^{N} I(x_i^{(j)} = a_{jl},\, y_i = -1)}{\sum_{i=1}^{N} I(y_i = -1)}

where P(X^{(j)} = a_{jl} \mid Y = +1) represents the conditional probability of the l-th attribute value under the j-th sub-feature given that the user purchased product i, and P(X^{(j)} = a_{jl} \mid Y = -1) represents the conditional probability of the l-th attribute value under the j-th sub-feature given that the user did not purchase product i; here x_i^{(j)} = a_{jl} denotes that the j-th sub-feature of the i-th record takes the l-th attribute value.
Further, based on the calculation results, when a prior probability or a conditional probability is 0, Laplace smoothing may be applied as preprocessing.
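A minimal Python sketch of operations S2401 and S2402 is given below: counting-based estimation of the prior and conditional probabilities with Laplace smoothing. The function name, the smoothing constant lambda_ = 1.0, and the choice to smooth every estimate (rather than only the zero ones) are illustrative assumptions, not details taken from the original filing.

```python
from collections import Counter, defaultdict
from typing import Dict, List, Tuple

def estimate_priors_and_conditionals(
    training_set: List[Tuple[Tuple[str, ...], int]],   # T = [(x_i, y_i)], y_i in {-1, +1}
    lambda_: float = 1.0,                               # Laplace smoothing constant (assumption)
) -> Tuple[Dict[int, float], Dict[Tuple[int, int, str], float]]:
    """Estimate P(Y = y) and P(X^(j) = a | Y = y) from the training data set."""
    n_features = len(training_set[0][0])
    label_counts = Counter(y for _, y in training_set)
    # Attribute values observed under each sub-feature j (needed for smoothing).
    values_per_feature = [{x[j] for x, _ in training_set} for j in range(n_features)]

    n_labels = 2  # two discrete features: purchased (+1) and not purchased (-1)
    priors = {
        y: (label_counts[y] + lambda_) / (len(training_set) + n_labels * lambda_)
        for y in (+1, -1)
    }

    # Joint counts of (sub-feature index j, label y, attribute value a).
    joint_counts: Dict[Tuple[int, int, str], int] = defaultdict(int)
    for x, y in training_set:
        for j, a in enumerate(x):
            joint_counts[(j, y, a)] += 1

    conditionals: Dict[Tuple[int, int, str], float] = {}
    for j in range(n_features):
        p_j = len(values_per_feature[j])  # number of attribute values under sub-feature j
        for y in (+1, -1):
            for a in values_per_feature[j]:
                conditionals[(j, y, a)] = (
                    (joint_counts[(j, y, a)] + lambda_) / (label_counts[y] + p_j * lambda_)
                )
    return priors, conditionals
```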
In operation S2403, a posterior probability that each of the m products belongs to each of the discrete features is calculated according to the prior probability and the conditional probability.
Specifically, for the k-th feature vector x_k = (x_k^{(1)}, x_k^{(2)}, \ldots, x_k^{(n)}) among the given m feature vectors, the posterior probabilities of the k-th product to be recommended are calculated according to the following formulas:

P(Y = +1 \mid X = x_k) = \frac{P(Y = +1) \prod_{j=1}^{n} P(X^{(j)} = x_k^{(j)} \mid Y = +1)}{\sum_{c \in \{+1, -1\}} P(Y = c) \prod_{j=1}^{n} P(X^{(j)} = x_k^{(j)} \mid Y = c)}

P(Y = -1 \mid X = x_k) = \frac{P(Y = -1) \prod_{j=1}^{n} P(X^{(j)} = x_k^{(j)} \mid Y = -1)}{\sum_{c \in \{+1, -1\}} P(Y = c) \prod_{j=1}^{n} P(X^{(j)} = x_k^{(j)} \mid Y = c)}

where P(Y = +1 \mid X = x_k) represents the posterior probability that the k-th product belongs to the first discrete feature, and P(Y = -1 \mid X = x_k) represents the posterior probability that the k-th product belongs to the second discrete feature.
Fig. 4 schematically illustrates a flow chart of a first ranked list determination process according to an embodiment of the present disclosure.
For convenience of description, in the embodiment of the present disclosure, when the discrete features include a first discrete feature and a second discrete feature, the first discrete feature represents that the user has purchased the corresponding product, the second discrete feature represents that the user has not purchased the corresponding product, and the m products include the first product, as shown in fig. 4, after the posterior probability that the first product belongs to each discrete feature is calculated in operation S240, operations S2404 to S2405 may be further included.
In operation S2404, it is determined whether the posterior probability that the first product belongs to the first discrete feature is greater than the posterior probability that the first product belongs to the second discrete feature, and if so, the posterior probability that the first product belongs to the first discrete feature is determined as a recommended probability of the first product; otherwise, the probability of the first product being recommended is determined according to the complementary event probabilities of the posterior probabilities belonging to the second discrete features.
In operation S2405, a first ranked list of m products is determined according to a probability that the first product is recommended.
In the disclosed embodiment, when the posterior probability that the first product belongs to the first discrete feature is not greater than the posterior probability that the first product belongs to the second discrete feature, the probability that the first product is recommended is determined according to the following formula:

p_k = 1 - P(Y = -1 \mid X = x_k)

where p_k represents the probability that the first product is recommended, and P(Y = -1 \mid X = x_k) represents the posterior probability that the first product belongs to the second discrete feature.
Specifically, denote the posterior probability that the first product belongs to the first discrete feature by P(Y = +1 \mid X = x_k), the posterior probability that it belongs to the second discrete feature by P(Y = -1 \mid X = x_k), and the probability that the first product is recommended by p_k. If P(Y = +1 \mid X = x_k) > P(Y = -1 \mid X = x_k), then let p_k = P(Y = +1 \mid X = x_k); otherwise, let p_k = 1 - P(Y = -1 \mid X = x_k).
Then, the m products to be recommended are sorted from large to small according to the calculated probabilities, yielding a probability list L_p and a corresponding product identification list L_id, i.e. the first sorted list.
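The following hedged Python sketch, continuing from the estimation helper above, illustrates operations S2403 to S2405: computing the two posterior probabilities for a candidate product, deriving its recommendation probability p_k, and sorting the candidates to obtain L_p and L_id. The function names, the log-space computation, and the use of fully normalized posteriors are illustrative assumptions.

```python
import math
from typing import Dict, List, Tuple

def recommendation_probability(
    x_k: Tuple[str, ...],
    priors: Dict[int, float],
    conditionals: Dict[Tuple[int, int, str], float],
) -> float:
    """Posterior-based probability p_k that a candidate product is recommended."""
    # Class scores P(Y=c) * prod_j P(X^(j) = x_k^(j) | Y=c), computed in log space.
    log_scores = {}
    for y in (+1, -1):
        log_scores[y] = math.log(priors[y]) + sum(
            math.log(conditionals.get((j, y, a), 1e-12)) for j, a in enumerate(x_k)
        )
    # Normalize to posteriors (assumption: posteriors are normalized over both classes).
    total = sum(math.exp(v) for v in log_scores.values())
    post_pos = math.exp(log_scores[+1]) / total
    post_neg = math.exp(log_scores[-1]) / total
    # If the "purchased" posterior dominates, use it directly; otherwise use the
    # complementary-event probability of the "not purchased" posterior.
    return post_pos if post_pos > post_neg else 1.0 - post_neg

def first_sorted_list(
    candidate_vectors: List[Tuple[str, ...]],
    product_ids: List[str],
    priors: Dict[int, float],
    conditionals: Dict[Tuple[int, int, str], float],
) -> Tuple[List[float], List[str]]:
    """Return (L_p, L_id): probabilities and product identifiers sorted from high to low."""
    probs = [recommendation_probability(x, priors, conditionals) for x in candidate_vectors]
    order = sorted(range(len(probs)), key=lambda t: probs[t], reverse=True)
    return [probs[t] for t in order], [product_ids[t] for t in order]
```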
In the embodiment of the disclosure, the deep learning network model is constructed from a DeepFM network model, and the activation function \sigma_2(x) of the hidden layer of the constructed DeepFM network model is given by the formula shown as an image in the original publication (Figure BDA0003696351520000136), where x represents the input of a hidden layer neuron and \sigma_2(x) represents the output of the hidden layer neuron.
The activation function is different from that of a conventional deep learning network model.
FIG. 5 schematically shows a flow diagram of a deep learning network model training process according to an embodiment of the disclosure.
As shown in fig. 5, in the embodiment of the present disclosure, the deep learning network model in operation S250 is trained through the following operations S2501 to S2503.
In operation S2501, a training data set is split into a training set, a validation set, and a test set.
In operation S2502, the constructed deep learning network model is trained using the training set and the verification set, and network parameters of the deep learning network model are updated according to a preset loss function.
In operation S2503, the effectiveness of the deep learning network model is evaluated using the test set.
Thus, the training data set T = \{(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)\} is split into a training set, a verification set and a test set for model training and evaluation.
And then, for m products to be recommended, obtaining click rate pre-estimated values of the m products by using the trained deep learning network model.
Then, the products are sorted from large to small according to the estimated click rates, yielding a probability list L_p^2 and a corresponding product identification list L_id^2, i.e. the second sorted list.
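Because this text does not reproduce the network structure or the custom hidden-layer activation \sigma_2(x), the sketch below is only a simplified DeepFM-style stand-in written in PyTorch (an assumed library choice). The embedding dimension, ReLU activation, binary cross-entropy loss, Adam optimizer and the 8:1:1 split are illustrative assumptions rather than details of the patent; the sketch covers operations S2501 to S2503 and the construction of the second sorted list.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, random_split

class TinyDeepFM(nn.Module):
    """Simplified DeepFM-style CTR model: first-order term + FM second-order term + small MLP."""
    def __init__(self, field_sizes, embed_dim=8):
        super().__init__()
        self.embeds = nn.ModuleList(nn.Embedding(s, embed_dim) for s in field_sizes)
        self.linear = nn.ModuleList(nn.Embedding(s, 1) for s in field_sizes)
        self.bias = nn.Parameter(torch.zeros(1))
        self.mlp = nn.Sequential(
            nn.Linear(len(field_sizes) * embed_dim, 32), nn.ReLU(),  # ReLU is an assumption
            nn.Linear(32, 1),
        )

    def forward(self, x):  # x: (batch, n_fields) integer-encoded attribute values
        emb = torch.stack([e(x[:, i]) for i, e in enumerate(self.embeds)], dim=1)
        first = self.bias + sum(l(x[:, i]) for i, l in enumerate(self.linear)).squeeze(-1)
        fm = 0.5 * (emb.sum(dim=1).pow(2) - emb.pow(2).sum(dim=1)).sum(dim=1)
        deep = self.mlp(emb.flatten(start_dim=1)).squeeze(-1)
        return torch.sigmoid(first + fm + deep)  # estimated click rate in (0, 1)

def train_and_rank(x_all, y_all, x_candidates, product_ids, field_sizes, epochs=10):
    """S2501-S2503 plus scoring: split the data, train, evaluate, and rank the m candidates.

    y_all must already be mapped from {-1, +1} to {0.0, 1.0} for the BCE loss.
    """
    dataset = TensorDataset(x_all, y_all)
    n = len(dataset)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    train_ds, val_ds, test_ds = random_split(dataset, [n_train, n_val, n - n_train - n_val])

    model, loss_fn = TinyDeepFM(field_sizes), nn.BCELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        model.train()
        for xb, yb in DataLoader(train_ds, batch_size=64, shuffle=True):
            optimizer.zero_grad()
            loss_fn(model(xb), yb).backward()
            optimizer.step()
        # The validation set (val_ds) would drive early stopping / parameter selection here.

    model.eval()
    with torch.no_grad():
        test_losses = [loss_fn(model(xb), yb).item()
                       for xb, yb in DataLoader(test_ds, batch_size=256)]
        test_loss = sum(test_losses) / max(len(test_losses), 1)  # effectiveness on the test set
        ctr = model(x_candidates)                                 # click-rate estimates, shape (m,)

    order = torch.argsort(ctr, descending=True)
    second_sorted_ids = [product_ids[i] for i in order.tolist()]
    return second_sorted_ids, ctr[order].tolist(), test_loss
```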
Fig. 6 schematically illustrates a flow chart of a third sorted-list modification process according to an embodiment of the disclosure.
As shown in fig. 6, in the embodiment of the present disclosure, the modifying the first sorted list and the second sorted list in operation S260 to obtain a third sorted list may specifically include operation S2601.
In operation S2601, it is determined whether the first sorted list and the second sorted list are completely consistent, and if so, the first sorted list or the second sorted list is determined as a third sorted list; and if not, multiplying the elements in the first sorted list and the second sorted list respectively to obtain a third sorted list.
Specifically, the first sorted list L_id obtained above and the second sorted list L_id^2 are compared, with the following results:

(1) if the two are completely consistent, then L_id is the final sorted list;

(2) otherwise, the elements of the first sorted list and of the second sorted list are multiplied one by one (i.e. the corresponding probability lists L_p and L_p^2 are multiplied element by element) to obtain a probability list L_p^3; the probabilities in L_p^3 are sorted from large to small, and the correspondingly sorted product identification list, denoted L_id^3, is the final sorted list.
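A hedged Python sketch of this correction step (operation S2601) follows. The element-wise product of the two probability lists and the use of the first list's positions when mapping back to product identifiers are one reading of the text above; the function name is illustrative.

```python
from typing import List

def third_sorted_list(
    l_id_1: List[str], l_p_1: List[float],   # first sorted list and its probabilities (L_id, L_p)
    l_id_2: List[str], l_p_2: List[float],   # second sorted list and its estimated CTRs (L_id^2, L_p^2)
) -> List[str]:
    """Correct the first and second sorted lists into the final (third) sorted list."""
    if l_id_1 == l_id_2:                      # completely consistent: keep either list
        return list(l_id_1)
    # Otherwise multiply the probability entries element by element and re-sort.
    l_p_3 = [p1 * p2 for p1, p2 in zip(l_p_1, l_p_2)]
    order = sorted(range(len(l_p_3)), key=lambda i: l_p_3[i], reverse=True)
    # Positions are interpreted relative to the first list's ordering (assumption).
    return [l_id_1[i] for i in order]
```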
FIG. 7 schematically shows a flow diagram of a product identification mapping establishment procedure according to an embodiment of the disclosure.
As shown in fig. 7, in the embodiment of the present disclosure, after the m feature vectors are generated in operation S230, operation S2301 may also be included.
In operation S2301, product identifiers of m products are extracted according to the training data set, and one-to-one mapping between the product identifiers of the m products and the m feature vectors is established.
FIG. 8 schematically shows a flow diagram of a product identification list recommendation process according to an embodiment of the present disclosure.
As shown in fig. 8, further recommending m products in sequence according to the third sorted list in operation S260 may further include operation S2602.
In operation S2602, a product identifier list mapped to the third sorted list is recommended in turn.
In this way, the product information recommended to the user, specifically the product identification list, is intuitive and concise.
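For completeness, a minimal sketch of operations S2301 and S2602: establishing the one-to-one mapping between product identifiers and feature vectors, then recommending the mapped identifier list in order. The function names and the console output are illustrative assumptions.

```python
from typing import Dict, List, Tuple

def build_id_mapping(product_ids: List[str],
                     feature_vectors: List[Tuple[str, ...]]) -> Dict[str, Tuple[str, ...]]:
    """One-to-one mapping from product identifier to feature vector (operation S2301)."""
    assert len(product_ids) == len(feature_vectors)
    return dict(zip(product_ids, feature_vectors))

def recommend_in_order(third_sorted_ids: List[str]) -> None:
    """Recommend the m products in the order given by the third sorted list (operation S2602)."""
    for rank, product_id in enumerate(third_sorted_ids, start=1):
        print(f"Recommendation {rank}: product {product_id}")
```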
Based on the product recommendation method based on the click rate, the present disclosure also provides a product recommendation device based on the click rate, which will be described in detail below with reference to fig. 9.
FIG. 9 schematically shows a block diagram of a click-through rate based product recommendation device according to an embodiment of the present disclosure.
As shown in fig. 9, the product recommendation apparatus 900 based on click rate of this embodiment includes a data acquisition module 910, a product acquisition module 920, a vector generation module 930, a first sorting module 940, a second sorting module 950, and a product recommendation module 960.
The data acquisition module 910 is configured to acquire a plurality of pieces of historical transaction data of products purchased by a user, and form a training data set. In an embodiment, the data obtaining module 910 may be configured to perform the operation S210 described above, which is not described herein again.
The product obtaining module 920 is configured to obtain m products to be recommended, where m is an integer greater than or equal to 2. In an embodiment, the product obtaining module 920 may be configured to perform the operation S220 described above, which is not described herein again.
The vector generating module 930 is configured to extract product features and user behavior features of each of the m products according to the training data set, and generate m feature vectors. In an embodiment, the vector generation module 930 may be configured to perform the operation S230 described above, which is not described herein again.
The first sorting module 940 is configured to calculate posterior probabilities respectively corresponding to the m feature vectors using a naive bayes algorithm, and obtain a first sorting table of the m products according to the posterior probabilities. In an embodiment, the first sorting module 940 may be configured to perform the operation S240 described above, and is not described herein again.
The second sorting module 950 is configured to input the m feature vectors into the deep learning network model to obtain click rate pre-estimated values of the m products, and obtain a second sorted list of the m products according to the click rate pre-estimated values, where the deep learning network model is obtained by pre-training based on a training data set. In an embodiment, the second sorting module 950 may be configured to perform the operation S250 described above, which is not described herein again.
And the product recommending module 960 is configured to modify the first sorted list and the second sorted list to obtain a third sorted list, and sequentially recommend m products according to the third sorted list. In an embodiment, the product recommendation module 960 may be configured to perform the operation S260 described above, which is not described herein again.
According to an embodiment of the present disclosure, any multiple modules of the data obtaining module 910, the product obtaining module 920, the vector generating module 930, the first ordering module 940, the second ordering module 950, and the product recommending module 960 may be combined into one module to be implemented, or any one of the modules may be split into multiple modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of the other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of the data obtaining module 910, the product obtaining module 920, the vector generating module 930, the first ordering module 940, the second ordering module 950 and the product recommending module 960 may be implemented at least partially as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or by any one of three implementations of software, hardware and firmware, or by a suitable combination of any several of them. Alternatively, at least one of the data acquisition module 910, the product acquisition module 920, the vector generation module 930, the first ordering module 940, the second ordering module 950 and the product recommendation module 960 may be implemented at least in part as a computer program module that, when executed, may perform a corresponding function.
FIG. 10 schematically illustrates a block diagram of an electronic device suitable for implementing a click-through rate based product recommendation method in accordance with an embodiment of the present disclosure.
As shown in fig. 10, an electronic device 1000 according to an embodiment of the present disclosure includes a processor 1001 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)1002 or a program loaded from a storage section 1008 into a Random Access Memory (RAM) 1003. Processor 1001 may include, for example, a general purpose microprocessor (e.g., CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., Application Specific Integrated Circuit (ASIC)), among others. The processor 1001 may also include onboard memory for caching purposes. The processor 1001 may include a single processing unit or multiple processing units for performing different actions of a method flow according to embodiments of the present disclosure.
In the RAM 1003, various programs and data necessary for the operation of the electronic apparatus 1000 are stored. The processor 1001, ROM 1002, and RAM 1003 are connected to each other by a bus 1004. The processor 1001 performs various operations of the method flow according to the embodiments of the present disclosure by executing programs in the ROM 1002 and/or the RAM 1003. Note that the programs may also be stored in one or more memories other than the ROM 1002 and the RAM 1003. The processor 1001 may also perform various operations of the method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
Electronic device 1000 may also include an input/output (I/O) interface 1005, the input/output (I/O) interface 1005 also being connected to bus 1004, according to an embodiment of the present disclosure. The electronic device 1000 may also include one or more of the following components connected to the I/O interface 1005: an input section 1006 including a keyboard, a mouse, and the like; an output section 1007 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage portion 1008 including a hard disk and the like; and a communication section 1009 including a network interface card such as a LAN card, a modem, or the like. The communication section 1009 performs communication processing via a network such as the internet. A drive 1010 is also connected to the I/O interface 1005 as necessary. A removable medium 1011 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 1010 as necessary, so that a computer program read out therefrom is mounted into the storage section 1008 as necessary.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement a click-through-rate-based product recommendation method according to an embodiment of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present disclosure, a computer-readable storage medium may include the ROM 1002 and/or the RAM 1003 described above and/or one or more memories other than the ROM 1002 and the RAM 1003.
Embodiments of the present disclosure also include a computer program product comprising a computer program containing program code for performing the method illustrated in the flow chart. When the computer program product runs in a computer system, the program code is used for causing the computer system to implement the click-through rate based product recommendation method provided by the embodiment of the disclosure.
The computer program performs the above-described functions defined in the system/apparatus of the embodiments of the present disclosure when executed by the processor 1001. The systems, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
In one embodiment, the computer program may be hosted on a tangible storage medium such as an optical storage device, a magnetic storage device, or the like. In another embodiment, the computer program may also be transmitted in the form of a signal on a network medium, distributed, downloaded and installed via the communication part 1009, and/or installed from the removable medium 1011. The computer program containing program code may be transmitted using any suitable network medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
In such an embodiment, the computer program may be downloaded and installed from a network through the communication part 1009 and/or installed from the removable medium 1011. The computer program performs the above-described functions defined in the system of the embodiment of the present disclosure when executed by the processor 1001. The above described systems, devices, apparatuses, modules, units, etc. may be implemented by computer program modules according to embodiments of the present disclosure.
In accordance with embodiments of the present disclosure, program code for executing computer programs provided by embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, these computer programs may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. The programming languages include, but are not limited to, Java, C++, Python, the "C" language, and the like. The program code may execute entirely on the user computing device, partly on the user device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It will be appreciated by a person skilled in the art that the features recited in the various embodiments and/or claims of the present disclosure may be combined and/or integrated in various ways, even if such combinations or integrations are not explicitly recited in the present disclosure. In particular, various combinations and/or integrations of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit or teaching of the present disclosure. All such combinations and/or integrations fall within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (16)

1. A product recommendation method based on click rate comprises the following steps:
acquiring a plurality of pieces of historical transaction data of products purchased by a user to form a training data set;
obtaining m products to be recommended, wherein m is more than or equal to 2 and is an integer;
extracting product features and user behavior features of each of the m products according to the training data set to generate m feature vectors;
calculating posterior probabilities respectively corresponding to the m feature vectors by using a naive Bayes algorithm, and obtaining a first sorted list of the m products according to the posterior probabilities;
inputting the m feature vectors into a deep learning network model to obtain click rate estimates of the m products, and obtaining a second sorted list of the m products according to the click rate estimates, wherein the deep learning network model is obtained by pre-training based on the training data set;
and correcting the first sorted list and the second sorted list to obtain a third sorted list, and sequentially recommending the m products according to the third sorted list.
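Purely for orientation, the sketch below shows how the steps of claim 1 fit together in code; every name (recommend_products, naive_bayes_score, deep_model_ctr) is hypothetical, the two scoring functions stand in for the naive Bayes and deep learning stages detailed in the later claims, and the correction step follows one plausible reading of claim 9 rather than the patented implementation.

```python
# Illustrative sketch only; hypothetical names, not the claimed implementation.
import numpy as np

def recommend_products(feature_vectors, product_ids, naive_bayes_score, deep_model_ctr):
    """feature_vectors: array of shape (m, d); product_ids: list of m identifications.
    naive_bayes_score / deep_model_ctr: callables returning one score per product."""
    posterior = np.asarray(naive_bayes_score(feature_vectors))   # basis of the first sorted list
    ctr = np.asarray(deep_model_ctr(feature_vectors))            # basis of the second sorted list
    first_sorted_list = np.argsort(-posterior)
    second_sorted_list = np.argsort(-ctr)

    # Correction into a third sorted list (one reading of claim 9: keep either list if they
    # agree, otherwise re-rank by the element-wise product of the two score vectors).
    if np.array_equal(first_sorted_list, second_sorted_list):
        third_sorted_list = first_sorted_list
    else:
        third_sorted_list = np.argsort(-(posterior * ctr))

    return [product_ids[i] for i in third_sorted_list]           # recommend in this order
```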
2. The method of claim 1, wherein the product features of each product comprise a plurality of sub-features, the sub-features comprising a product type, a product rating, a product sales volume, a marketing novelty and a discount degree, each of the sub-features comprising a plurality of attribute values;
the user behavior feature of each product is a discrete feature which represents whether the user has purchased the product.
3. The method of claim 2, wherein calculating the posterior probabilities respectively corresponding to the m feature vectors by using a naive Bayes algorithm comprises:
calculating a prior probability of each of the discrete features in the training data set;
calculating the conditional probability of different attribute values of each sub-feature under each discrete feature according to the prior probability;
and calculating the posterior probability of each product in the m products belonging to each discrete feature according to the prior probability and the conditional probability.
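As a minimal illustration of the prior, conditional and posterior computations recited above, the sketch below assumes each feature vector is a tuple of discrete attribute values and the class is the binary user behavior feature; all function names are hypothetical.

```python
# Illustrative naive Bayes estimation; hypothetical names, simplified data layout.
from collections import Counter, defaultdict

def fit_naive_bayes(train_vectors, train_labels):
    """Estimate priors P(c) and conditionals P(attribute value | c) from the training data set.
    train_vectors: list of tuples of discrete attribute values (one entry per sub-feature).
    train_labels:  list of 0/1 user behavior labels (purchased / not purchased)."""
    n = len(train_labels)
    class_counts = Counter(train_labels)
    priors = {c: cnt / n for c, cnt in class_counts.items()}

    counts = defaultdict(int)                    # counts[(sub_feature_index, value, class)]
    for x, c in zip(train_vectors, train_labels):
        for i, v in enumerate(x):
            counts[(i, v, c)] += 1
    conditionals = {k: cnt / class_counts[k[2]] for k, cnt in counts.items()}
    return priors, conditionals

def posterior_scores(x, priors, conditionals):
    """Unnormalised posterior P(c | x) proportional to P(c) * product_i P(x_i | c) for each class c."""
    scores = {}
    for c, prior in priors.items():
        score = prior
        for i, v in enumerate(x):
            score *= conditionals.get((i, v, c), 0.0)   # zero if unseen; see claim 6 on smoothing
        scores[c] = score
    return scores
```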
4. The method of claim 3, wherein the discrete features comprise a first discrete feature and a second discrete feature, the first discrete feature characterizing that the user has purchased a corresponding product, the second discrete feature characterizing that the user has not purchased the corresponding product, and the m products comprise a first product, and wherein, after calculating the posterior probability that the first product belongs to each of the discrete features, the method further comprises:
judging whether the posterior probability of the first product belonging to the first discrete feature is greater than the posterior probability of the first product belonging to the second discrete feature; if so, determining the posterior probability belonging to the first discrete feature as the probability that the first product is recommended; otherwise, determining the probability that the first product is recommended according to the complementary event probability of the posterior probability belonging to the second discrete feature;
and determining the first sorted list of the m products according to the probability that the first product is recommended.
5. The method of claim 4, wherein, when the posterior probability that the first product belongs to the first discrete feature is not greater than the posterior probability that the first product belongs to the second discrete feature, the probability that the first product is recommended is determined according to the following equation:
p_k = 1 − P(c_2 | x_k)

wherein p_k represents the probability that the first product is recommended, and P(c_2 | x_k) represents the posterior probability that the first product belongs to the second discrete feature.
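Read literally, the rule of claims 4 and 5 can be expressed in a few lines; the helper name is hypothetical and the two posteriors are assumed to come from the naive Bayes stage of claim 3.

```python
def recommended_probability(posterior_purchased, posterior_not_purchased):
    """Use the posterior of the 'purchased' feature when it dominates; otherwise take the
    complementary-event probability of the 'not purchased' posterior (claims 4-5)."""
    if posterior_purchased > posterior_not_purchased:
        return posterior_purchased
    return 1.0 - posterior_not_purchased
```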
6. The method of claim 3, wherein when the prior probability or conditional probability is 0, a Laplace smoothing method is used for preprocessing.
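A standard add-one (Laplace) smoothing rule consistent with claim 6 is sketched below; the add-alpha generalisation and the parameter names are illustrative rather than taken from the patent.

```python
def laplace_smoothed(count, total, n_categories, alpha=1.0):
    """Add-alpha (Laplace) smoothing so that a zero count never yields a zero prior or
    conditional probability: (count + alpha) / (total + alpha * n_categories)."""
    return (count + alpha) / (total + alpha * n_categories)
```

For example, with alpha = 1 a conditional probability whose raw count is zero becomes 1 / (count(c) + V), where V is the number of distinct attribute values of that sub-feature, so the posterior product in claim 3 never collapses to zero.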
7. The method of claim 1, wherein the deep learning network model is constructed based on a DeepFM network model, and an activation function σ_2(x) of hidden layers of the DeepFM network model is given by:
Figure FDA0003696351510000023
wherein x represents the input of a hidden layer neuron, and σ_2(x) represents the output of the hidden layer neuron.
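Because the specific hidden-layer activation σ_2(x) is given only by the formula above, the sketch below leaves it as a placeholder; the use of PyTorch, the layer sizes and the ReLU stand-in are assumptions made purely to show where a custom activation sits in the deep part of a DeepFM-style click-through-rate model.

```python
# Illustrative deep component of a DeepFM-style model; NOT the patented network.
import torch
import torch.nn as nn

def sigma2(x: torch.Tensor) -> torch.Tensor:
    """Placeholder for the claimed hidden-layer activation sigma_2(x); substitute the
    formula from claim 7 here. ReLU is used only as a stand-in."""
    return torch.relu(x)

class DeepComponent(nn.Module):
    """MLP part of a DeepFM-style CTR model with a pluggable hidden activation."""
    def __init__(self, input_dim: int, hidden_dims=(128, 64)):
        super().__init__()
        dims = (input_dim, *hidden_dims)
        self.hidden = nn.ModuleList(
            [nn.Linear(dims[i], dims[i + 1]) for i in range(len(dims) - 1)]
        )
        self.out = nn.Linear(dims[-1], 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer in self.hidden:
            x = sigma2(layer(x))           # custom hidden-layer activation
        return torch.sigmoid(self.out(x))  # CTR estimate in [0, 1]
```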
8. The method of claim 7, wherein the deep learning network model is trained by:
splitting the training data set into a training set, a validation set and a test set;
training the constructed deep learning network model by using the training set and the validation set, and updating network parameters of the deep learning network model according to a preset loss function;
evaluating the effectiveness of the deep learning network model using the test set.
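A minimal sketch of this training procedure follows; the 80/10/10 split, binary cross-entropy loss, Adam optimiser and the reuse of the hypothetical DeepComponent above are all illustrative assumptions, not details taken from the patent.

```python
# Illustrative training loop for a CTR model; assumed hyperparameters throughout.
import torch
from torch.utils.data import TensorDataset, DataLoader, random_split

def train_ctr_model(model, features, clicks, epochs=10):
    """model: torch.nn.Module mapping (batch, d) features to (batch, 1) CTR estimates.
    features: float tensor of shape (n, d); clicks: float tensor of shape (n,) with 0./1. labels."""
    dataset = TensorDataset(features, clicks)
    n = len(dataset)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    train_set, val_set, test_set = random_split(dataset, [n_train, n_val, n - n_train - n_val])

    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)  # assumed optimiser
    loss_fn = torch.nn.BCELoss()                               # assumed preset loss function

    for _ in range(epochs):
        model.train()
        for x, y in DataLoader(train_set, batch_size=256, shuffle=True):
            optimiser.zero_grad()
            loss = loss_fn(model(x).squeeze(1), y)
            loss.backward()
            optimiser.step()                                   # update the network parameters
        model.eval()
        with torch.no_grad():                                  # validation loss, e.g. for model selection
            val_loss = sum(loss_fn(model(x).squeeze(1), y).item()
                           for x, y in DataLoader(val_set, batch_size=256))

    with torch.no_grad():                                      # effectiveness on the held-out test set
        test_loss = sum(loss_fn(model(x).squeeze(1), y).item()
                        for x, y in DataLoader(test_set, batch_size=256))
    return model, test_loss
```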
9. The method of claim 1, wherein correcting the first sorted list and the second sorted list to obtain the third sorted list comprises:
judging whether the first sorted list and the second sorted list are completely consistent; if so, determining the first sorted list or the second sorted list as the third sorted list; and if not, multiplying the first sorted list and the second sorted list element by element to obtain the third sorted list.
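A worked toy example of this correction, under the same element-wise-product reading used above; all numbers are invented for illustration.

```python
# Toy numbers only: three candidate products A, B, C.
posterior = [0.6, 0.3, 0.8]   # naive Bayes recommended probabilities for A, B, C
ctr       = [0.5, 0.7, 0.4]   # deep-model click rate estimates for A, B, C
# first sorted list  (by posterior): C, A, B
# second sorted list (by CTR):       B, A, C   -> the lists differ, so combine the scores
combined = [p * c for p, c in zip(posterior, ctr)]   # [0.30, 0.21, 0.32]
# third sorted list (by combined score): C, A, B
```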
10. The method of claim 1, wherein, after generating the m feature vectors, the method further comprises:
extracting product identifications of the m products according to the training data set, and establishing a one-to-one mapping between the product identifications of the m products and the m feature vectors.
11. The method of claim 10, wherein sequentially recommending the m products according to the third sorted list comprises:
and sequentially recommending the product identifications mapped to the third sorted list.
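A minimal sketch of claims 10 and 11, assuming the one-to-one mapping is kept as a simple index-to-identification dictionary; the names are hypothetical.

```python
def ordered_recommendations(product_ids, third_sorted_list):
    """Map product identifications to feature vectors by shared index (claim 10) and
    emit the identifications in the order given by the third sorted list (claim 11)."""
    id_by_index = dict(enumerate(product_ids))           # one-to-one mapping
    return [id_by_index[i] for i in third_sorted_list]   # recommend in sequence
```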
12. The method of claim 1, wherein the method further comprises:
obtaining authorization from the user for the plurality of pieces of historical transaction data of products purchased by the user;
and acquiring the plurality of pieces of historical transaction data after the authorization from the user is obtained.
13. A click-through rate based product recommendation device comprising:
the data acquisition module is used for acquiring a plurality of pieces of historical transaction data of products purchased by a user to form a training data set;
the product acquisition module is used for acquiring m products to be recommended, wherein m is greater than or equal to 2 and is an integer;
the vector generation module is used for extracting product features and user behavior features of each of the m products according to the training data set to generate m feature vectors;
the first sorting module is used for calculating the posterior probabilities respectively corresponding to the m feature vectors by using a naive Bayes algorithm, and obtaining a first sorted list of the m products according to the posterior probabilities;
the second sorting module is used for inputting the m feature vectors into a deep learning network model to obtain click rate estimates of the m products, and obtaining a second sorted list of the m products according to the click rate estimates, wherein the deep learning network model is obtained by pre-training based on the training data set;
and the product recommending module is used for correcting the first sorted list and the second sorted list to obtain a third sorted list, and sequentially recommending the m products according to the third sorted list.
14. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-12.
15. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the method of any one of claims 1 to 12.
16. A computer program product comprising a computer program which, when executed by a processor, implements a method according to any one of claims 1 to 12.
CN202210683138.0A 2022-06-15 2022-06-15 Product recommendation method, device, equipment and medium based on click rate Pending CN114998024A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210683138.0A CN114998024A (en) 2022-06-15 2022-06-15 Product recommendation method, device, equipment and medium based on click rate

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210683138.0A CN114998024A (en) 2022-06-15 2022-06-15 Product recommendation method, device, equipment and medium based on click rate

Publications (1)

Publication Number Publication Date
CN114998024A true CN114998024A (en) 2022-09-02

Family

ID=83035023

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210683138.0A Pending CN114998024A (en) 2022-06-15 2022-06-15 Product recommendation method, device, equipment and medium based on click rate

Country Status (1)

Country Link
CN (1) CN114998024A (en)

Similar Documents

Publication Publication Date Title
US11734609B1 (en) Customized predictive analytical model training
US10354184B1 (en) Joint modeling of user behavior
JP7160980B2 (en) INFORMATION PROVIDING DEVICE, INFORMATION PROVIDING METHOD AND PROGRAM
US20210056458A1 (en) Predicting a persona class based on overlap-agnostic machine learning models for distributing persona-based digital content
US9269049B2 (en) Methods, apparatus, and systems for using a reduced attribute vector of panel data to determine an attribute of a user
US11334750B2 (en) Using attributes for predicting imagery performance
US10937070B2 (en) Collaborative filtering to generate recommendations
US20190080352A1 (en) Segment Extension Based on Lookalike Selection
US20210342744A1 (en) Recommendation method and system and method and system for improving a machine learning system
CN111400613A (en) Article recommendation method, device, medium and computer equipment
CN112529665A (en) Product recommendation method and device based on combined model and computer equipment
CN112598472A (en) Product recommendation method, device, system, medium and program product
CN111429214B (en) Transaction data-based buyer and seller matching method and device
CN111966886A (en) Object recommendation method, object recommendation device, electronic equipment and storage medium
US20230099627A1 (en) Machine learning model for predicting an action
CN116308641A (en) Product recommendation method, training device, electronic equipment and medium
WO2023284516A1 (en) Information recommendation method and apparatus based on knowledge graph, and device, medium, and product
CN111400567B (en) AI-based user data processing method, device and system
CN114998024A (en) Product recommendation method, device, equipment and medium based on click rate
CN113744030A (en) Recommendation method, device, server and medium based on AI user portrait
CN113392200A (en) Recommendation method and device based on user learning behaviors
KR20200029647A (en) Generalization method for curated e-Commerce system by user personalization
CN110197056B (en) Relation network and associated identity recognition method, device, equipment and storage medium
US11842533B2 (en) Predictive search techniques based on image analysis and group feedback
Sharma Identifying Factors Contributing to Lead Conversion Using Machine Learning to Gain Business Insights

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination