CN112837116A - Product recommendation method and device - Google Patents

Product recommendation method and device

Info

Publication number
CN112837116A
CN112837116A CN202110045020.0A
Authority
CN
China
Prior art keywords
user
information
product
target
products
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110045020.0A
Other languages
Chinese (zh)
Inventor
李沫含
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Agricultural Bank of China
Original Assignee
Agricultural Bank of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Agricultural Bank of China filed Critical Agricultural Bank of China
Priority to CN202110045020.0A
Publication of CN112837116A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0631Item recommendations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a product recommendation method and device, and the method comprises the following steps: simultaneously inputting the characteristic information of N products in a product set and the characteristic information of a target user into a deep neural network model to obtain reward indexes of the N products; n is an integer greater than 1; taking the product with the largest reward index in the N products as a target product; recommending the target product to the target user and acquiring feedback information of the target user; and updating the deep neural network model according to the feedback information. Therefore, the product recommendation method provided by the embodiment of the application comprehensively considers the characteristic information of a plurality of products when obtaining the reward index of a certain product, so that the precision of the product recommendation method in the application is improved.

Description

Product recommendation method and device
Technical Field
The present application relates to the field of computers, and in particular, to a method and an apparatus for recommending a product.
Background
At present, in order to meet the personalized requirements of users, products with a higher matching degree can be recommended to users according to the degree of matching between product features and user features. However, in some practical application scenarios, the widespread associations between different products are ignored, so the accuracy of recommending products to a user based only on the matching degree between product features and user features is low. Therefore, improving the accuracy of product recommendation for users has become a technical problem to be urgently solved in this field.
Disclosure of Invention
In order to solve the technical problem, the present application provides a product recommendation method and device, which are used for obtaining a reward index of a certain product according to feature information of a plurality of products, so as to recommend the product.
In order to achieve the above purpose, the technical solutions provided in the embodiments of the present application are as follows:
the embodiment of the application provides a product recommendation method, which comprises the following steps:
simultaneously inputting the characteristic information of N products in a product set and the characteristic information of a target user into a deep neural network model to obtain reward indexes of the N products; n is an integer greater than 1;
taking the product with the largest reward index in the N products as a target product;
recommending the target product to the target user and acquiring feedback information of the target user;
and updating the deep neural network model according to the feedback information.
Optionally, the method further comprises:
acquiring original information of a user;
obtaining user characteristic information according to the user original information;
initializing the deep neural network model through the user characteristic information;
the user characteristic information comprises at least one of age, gender, browsing history and purchasing history of the user.
Optionally, the obtaining feedback information of the target user includes: and obtaining feedback information of the target user within a preset time limit.
Optionally, the feedback information includes: and at least one of browsing information of the target product by the user, collection information of the target product by the user, forwarding information of the target product by the user and purchasing information of the target product by the user.
Optionally, the updating the deep neural network model according to the feedback information includes:
and updating a strategy function in the deep neural network model according to the feedback information.
An embodiment of the present application further provides a product recommendation device, the device includes:
the reward acquisition module is used for simultaneously inputting the characteristic information of N products in the product set and the characteristic information of a target user to the deep neural network model to acquire reward indexes of the N products; n is an integer greater than 1;
the target product obtaining module is used for taking the product with the largest reward index in the N products as a target product;
the recommending module is used for recommending the target product to the target user and acquiring feedback information of the target user;
and the updating module is used for updating the deep neural network model according to the feedback information.
Optionally, the apparatus further comprises:
the original information acquisition module is used for acquiring original information of a user;
the characteristic information obtaining module is used for obtaining user characteristic information according to the user original information;
the initialization module is used for initializing the deep neural network model through the user characteristic information;
the user characteristic information comprises at least one of age, gender, browsing history and purchasing history of the user.
Optionally, the recommendation module is specifically configured to: recommending the target product to the target user, and obtaining feedback information of the target user within a preset time limit.
Optionally, the feedback information includes: and at least one of browsing information of the target product by the user, collection information of the target product by the user, forwarding information of the target product by the user and purchasing information of the target product by the user.
Optionally, the update module is specifically configured to:
and updating a strategy function in the deep neural network model according to the feedback information.
According to the technical scheme, the method has the following beneficial effects:
the embodiment of the application provides a product recommendation method and a product recommendation device, and the method comprises the following steps: simultaneously inputting the characteristic information of N products in a product set and the characteristic information of a target user into a deep neural network model to obtain reward indexes of the N products; n is an integer greater than 1; taking the product with the largest reward index in the N products as a target product; recommending the target product to the target user and acquiring feedback information of the target user; and updating the deep neural network model according to the feedback information.
Therefore, according to the product recommendation method provided by the embodiment of the application, the reward indexes of the N products are obtained by simultaneously inputting the characteristic information of the N products into the deep neural network model, so that the deep neural network can obtain the reward index of any one of the N products through the characteristic information of the N products. Therefore, when the method provided by the embodiment of the application obtains the reward index of a certain product, the characteristic information of a plurality of products is comprehensively considered, and therefore the precision of the product recommendation method in the application is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art based on these drawings without creative effort.
Fig. 1 is a schematic flowchart of a product recommendation method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a product recommendation device according to an embodiment of the present application.
Detailed Description
To help better understand the solution provided by the embodiment of the present application, before describing the method provided by the embodiment of the present application, an application scenario of the solution of the embodiment of the present application is described first.
At present, in order to meet the personalized requirements of users, products with a higher matching degree can be recommended to users according to the degree of matching between product features and user features. However, in some practical application scenarios, the widespread associations between different products are ignored, so the accuracy of recommending products to a user based only on the matching degree between product features and user features is low. Therefore, improving the accuracy of product recommendation for users has become a technical problem to be urgently solved in this field.
In order to solve the above technical problem, an embodiment of the present application provides a product recommendation method and apparatus, where the method includes: simultaneously inputting the characteristic information of N products in a product set and the characteristic information of a target user into a deep neural network model to obtain reward indexes of the N products; n is an integer greater than 1; taking the product with the largest reward index in the N products as a target product; recommending the target product to the target user and acquiring feedback information of the target user; and updating the deep neural network model according to the feedback information.
Therefore, according to the product recommendation method provided by the embodiment of the application, the reward indexes of the N products are obtained by simultaneously inputting the characteristic information of the N products into the deep neural network model, so that the deep neural network can obtain the reward index of any one of the N products through the characteristic information of the N products. Therefore, when the method provided by the embodiment of the application obtains the reward index of a certain product, the characteristic information of a plurality of products is comprehensively considered, and therefore the precision of the product recommendation method in the application is improved.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, embodiments accompanying the drawings are described in detail below.
Referring to fig. 1, a schematic flowchart of a product recommendation method according to an embodiment of the present application is shown. As shown in fig. 1, the product recommendation method provided in the embodiment of the present application includes the following steps:
s101: simultaneously inputting the characteristic information of N products in the product set and the characteristic information of a target user into the deep neural network model to obtain reward indexes of the N products; n is an integer greater than 1.
S102: and taking the product with the largest reward index in the N products as a target product.
S103: recommending the target product to the target user and obtaining feedback information of the target user.
S104: and updating the deep neural network model according to the feedback information.
It should be noted that the reward index of product A in the embodiment of the present application is a reward given to the deep neural network model, which may be obtained from the feedback information of the user after product A is recommended to the target user.
It should be noted that, in the embodiment of the present application, the feature information of N products in the product set and the feature information of the target user are simultaneously input into the deep neural network model, and the obtained reward indexes of the N products may form an N-dimensional vector, in which each element represents the reward index of one product. It should be noted that in some practical application scenarios (e.g., bank product recommendation), there are widespread associations between products; for example, the feature information of product A may affect the reward index of product B. Therefore, in the method provided by the present application, the reward index of any one of the N products is obtained from the feature information of all N products, that is, the reward index of a certain product is obtained by integrating the feature information of a plurality of products, thereby improving the accuracy of the product recommendation method in the present application. As an example, N in the embodiment of the present application may be 4.
It should be noted that, in the embodiment of the present application, when the reward index of product A among the N products is obtained, the feature information of the other N-1 products may influence the reward index of product A. For example, when the N products are product A, product B, and product C, the reward index of product A is 8 under the influence of the feature information of product B and product C; when the N products are product A, product C, and product D, the reward index of product A is 6 under the influence of the feature information of product C and product D. It should be noted that the reward index in the embodiment of the present application may be a positive number or a negative number, which is not limited herein.
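As an illustrative aid only, the following Python (PyTorch) sketch shows one possible form of a deep neural network model that maps the concatenated feature information of a target user and N products to an N-dimensional vector of reward indexes, from which the target product with the largest reward index is selected. The network architecture, dimensions, and names (RewardNet, USER_DIM, PROD_DIM) are assumptions and are not specified by the present application.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions; the application does not fix these values.
N = 4            # number of candidate products (example value given above)
USER_DIM = 8     # length of the user feature vector
PROD_DIM = 6     # length of each product feature vector

class RewardNet(nn.Module):
    """Maps (user features + N product feature vectors) to N reward indexes."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(USER_DIM + N * PROD_DIM, 64),
            nn.ReLU(),
            nn.Linear(64, N),   # one reward index per product
        )

    def forward(self, user_feat, product_feats):
        # user_feat: (batch, USER_DIM); product_feats: (batch, N, PROD_DIM)
        x = torch.cat([user_feat, product_feats.flatten(1)], dim=1)
        return self.mlp(x)      # (batch, N) reward indexes

model = RewardNet()
rewards = model(torch.randn(1, USER_DIM), torch.randn(1, N, PROD_DIM))
target_product = rewards.argmax(dim=1)   # index of the product with the largest reward index
```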
It should be noted that, in the product recommendation method provided in the embodiment of the present application, after a target product is recommended to a target user, feedback information of the target user is also obtained, and the deep neural network model is updated according to the feedback information. Therefore, the method provided by the embodiment of the application is interactive dynamic recommendation, the interaction degree between the recommendation method and the user is enhanced, the product can be dynamically recommended according to the interest change of the user, and the product recommendation accuracy of the method provided by the application is improved.
As a possible implementation manner, in order to avoid recommending duplicate products across multiple recommendations, the method provided in the embodiment of the present application may randomly select different sets of N products from the product set, so as to recommend different target products to the user. In this way, the method provided by the embodiment of the present application avoids repeatedly recommending the same products, giving the user a better experience.
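As a minimal sketch of this implementation manner (the helper name sample_candidates and the example product identifiers are hypothetical), different N products could be drawn at random from the product set while excluding products already recommended:

```python
import random

def sample_candidates(product_pool, n=4, exclude=()):
    """Randomly draw n distinct products that have not yet been recommended."""
    remaining = [p for p in product_pool if p not in exclude]
    return random.sample(remaining, k=min(n, len(remaining)))

# Example: avoid re-recommending a product already shown to the user.
candidates = sample_candidates(
    ["fund_A", "deposit_B", "loan_C", "card_D", "fund_E"],
    n=4, exclude={"fund_A"},
)
```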
In an embodiment of the present application, as a possible implementation manner, the method provided in the embodiment of the present application may further include: acquiring original information of a user; acquiring user characteristic information according to user original information; and initializing the deep neural network model through the user characteristic information. In an embodiment of the present application, the user characteristic information includes at least one of an age, a gender, a browsing history, and a purchase history of the user. It should be noted that the original information of the user in the embodiment of the present application may be obtained by updating the original information database of the user. In the embodiment of the present application, obtaining the user feature information according to the user original information may be: and cleaning the original information of the user to obtain the characteristic information of the user. It should be noted that, in the embodiment of the present application, the cleaning of the original information of the user may be at least one of filling missing values and deleting abnormal points, so as to obtain features that can be identified by the deep neural network model in the embodiment of the present application.
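The following Python (pandas) sketch illustrates the cleaning step described above, i.e., filling missing values and deleting abnormal points from the user's original information; the concrete filling rules and the outlier criterion (e.g., an age range of 0-120) are assumptions for illustration only.

```python
import pandas as pd

def clean_user_info(raw: pd.DataFrame) -> pd.DataFrame:
    """Fill missing values and drop abnormal points so the features can be fed to the model."""
    df = raw.copy()
    for col in df.columns:
        if pd.api.types.is_numeric_dtype(df[col]):
            df[col] = df[col].fillna(df[col].median())   # fill missing numeric values
        else:
            df[col] = df[col].fillna("unknown")          # fill missing categorical values
    if "age" in df.columns:
        df = df[(df["age"] >= 0) & (df["age"] <= 120)]   # delete abnormal points
    return df

features = clean_user_info(
    pd.DataFrame({"age": [25, None, 300], "gender": ["F", None, "M"]})
)
```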
As a possible implementation manner, in an embodiment of the present application, obtaining feedback information of a target user includes: and obtaining feedback information of the target user within a preset time limit.
As a possible implementation manner, the feedback information in the embodiment of the present application may include: at least one of browsing information of the target product by the user, collecting information of the target product by the user, forwarding information of the target product by the user and purchasing information of the target product by the user.
As a possible implementation manner, in the embodiment of the present application, updating the deep neural network model according to the feedback information may include: updating the policy function in the deep neural network model according to the feedback information. Specifically, the deep Q network is an artificial intelligence algorithm whose core is to continuously issue instructions to an agent so as to obtain the maximum reward defined by the user. The algorithm requires a mapping function: when the agent is in a certain state or environment, it takes the corresponding optimal action for that environment; afterwards, the environment or other factors give the agent a reward value or a penalty value, whose magnitude is determined by the effect produced by the action. This environment-to-action mapping function, hereinafter referred to as the policy function, is then updated according to the feedback.
The policy function in the embodiment of the present application may be:
$Q^*(s_t, a) = Q(s_t, a) + \alpha\left(r(s_t, a) + \max_{a_i} Q(s_{t+1}, a_i) - Q(s_t, a)\right)$ (1)
where Q(s_t, a) denotes the policy function for taking action a in state s_t, s_t denotes the state determined by the user's feedback information, α denotes the learning rate, r denotes the reward function, i denotes the identifier of the i-th action, s_{t+1} denotes the user's state after obtaining the bank product, and a_i denotes a currently selectable action.
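As a minimal sketch of equation (1), the policy-function update can be written with a dictionary-backed Q table; the state and action labels and the learning-rate value are hypothetical.

```python
from collections import defaultdict

Q = defaultdict(float)   # Q[(state, action)] -> policy-function value
ALPHA = 0.1              # learning rate (the coefficient α in equation (1))

def update_policy(s_t, a, reward, s_next, actions):
    """One application of equation (1)."""
    best_next = max(Q[(s_next, a_i)] for a_i in actions)
    Q[(s_t, a)] += ALPHA * (reward + best_next - Q[(s_t, a)])

update_policy(
    s_t="browsed_fund", a="recommend_fund_A", reward=1.0,
    s_next="purchased_fund",
    actions=["recommend_fund_A", "recommend_card_D"],
)
```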
The embodiment of the application also provides a parameter calculation model of the deep neural network model:
$\theta = \arg\min_{\theta} \left( y_j - q_{eval}(s_t, a_t) \right)^2$ (2)
$y_j = r + \alpha \max_{a_i} q_{target}(s_{j+1}, a_i)$ (3)
where θ denotes the parameters of the deep neural network model, q_eval and q_target both denote estimates based on user features, s_t denotes the feature information of the user, a_t denotes the bank product recommended by the system at time t, y_j is an intermediate variable, r denotes the reward function, s_{j+1} denotes the user's feedback information after obtaining the bank product, a_i denotes a currently recommendable bank product, and α denotes the learning rate.
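A minimal PyTorch sketch of equations (2) and (3) is given below, assuming q_eval and q_target are networks that output one estimate per candidate product; the network shapes, the value of the coefficient α, and the function name dqn_step are illustrative assumptions.

```python
import torch

def dqn_step(q_eval, q_target, optimizer, s_t, a_t, r, s_next, alpha=0.9):
    """One parameter update of θ following equations (2) and (3)."""
    with torch.no_grad():
        y_j = r + alpha * q_target(s_next).max(dim=1).values        # equation (3)
    q_sa = q_eval(s_t).gather(1, a_t.unsqueeze(1)).squeeze(1)       # q_eval(s_t, a_t)
    loss = ((y_j - q_sa) ** 2).mean()                               # equation (2), minimised over θ
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage sketch: a batch of 2 user states, 4 candidate products.
q_eval = torch.nn.Linear(8, 4)
q_target = torch.nn.Linear(8, 4)
q_target.load_state_dict(q_eval.state_dict())
opt = torch.optim.Adam(q_eval.parameters(), lr=1e-3)
dqn_step(q_eval, q_target, opt,
         s_t=torch.randn(2, 8), a_t=torch.tensor([0, 2]),
         r=torch.tensor([1.0, -0.5]), s_next=torch.randn(2, 8))
```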
It should be noted that, when selecting an action, an ε-greedy method is adopted to help the algorithm converge to a better strategy more quickly. The idea is to select, with a probability ε whose value lies between 0 and 1, the action that maximizes the deep neural network Q value, and to select a random action with probability 1 − ε. In the early stage of use, ε is set to 0.2, that is, the algorithm starts in a random exploration phase; as the algorithm is gradually updated, ε is continuously increased, with an upper limit of 0.8, so that the algorithm can still explore other actions and discover new actions that maximize the reward.
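A minimal sketch of the ε-greedy selection and of growing ε from 0.2 toward the 0.8 ceiling follows; the step size by which ε grows is an assumed value, since the application only specifies the initial value and the upper limit.

```python
import random

def epsilon_greedy(q_values, epsilon):
    """With probability epsilon take the action with the largest Q value, otherwise a random action."""
    if random.random() < epsilon:
        return max(range(len(q_values)), key=lambda i: q_values[i])
    return random.randrange(len(q_values))

def update_epsilon(epsilon, step=0.01, upper=0.8):
    """Grow epsilon from its initial value of 0.2 toward the 0.8 upper limit."""
    return min(upper, epsilon + step)

eps = 0.2
action = epsilon_greedy([0.3, 1.2, -0.4, 0.8], eps)
eps = update_epsilon(eps)
```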
In the embodiment of the present application, the target network does not need to be trained separately, and the loss function Loss adopts the least-squares form min Σ(q_target − q_eval)², where q_target and q_eval each denote an estimate based on user features. Let r be the reward function in reinforcement learning. The deep Q network is updated as follows: construct a deep neural network as the evaluated network Q, with parameters θ; copy the deep neural network Q as the target network; input the initial user state s_1 and loop over the time steps t from 1 to T, inputting the user state at each time t; select the user action a_t by the ε-greedy algorithm; record the next user state s_{t+1} and the reward value r(s_t, a_t) of the current action, and take (s_t, a_t, r(s_t, a_t), s_{t+1}) as a training sample; let y_j = r + α max_{a_i} q_target(s_{j+1}, a_i) and train θ = argmin_θ (y_j − q_eval(s_t, a_t))²; every C steps (C is a value preset by the user; as an example, C may be 50), directly copy the parameter values of q_eval to q_target and update the probability value ε; then end the loop.
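Putting the above pieces together, the following PyTorch sketch mirrors the update procedure of the deep Q network described above (evaluated network, target network copied every C steps, ε-greedy selection, and the least-squares loss). The environment stand-in fake_env, the network sizes, the reward values, and the ε growth step are all hypothetical; in the real system the next state and reward would come from the target user's feedback information.

```python
import copy
import random
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, C, T = 8, 4, 50, 200   # C = 50 as in the example above

q_eval = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(), nn.Linear(32, N_ACTIONS))
q_target = copy.deepcopy(q_eval)             # copy Q as the target network
optimizer = torch.optim.Adam(q_eval.parameters(), lr=1e-3)
epsilon, alpha = 0.2, 0.9

def fake_env(state, action):
    """Stand-in for real user feedback: returns the next state and a reward value."""
    return torch.randn(STATE_DIM), random.choice([-1.0, 1.0])

state = torch.randn(STATE_DIM)               # initial user state s_1
for t in range(1, T + 1):
    # epsilon-greedy action selection
    if random.random() < epsilon:
        action = int(q_eval(state).argmax())
    else:
        action = random.randrange(N_ACTIONS)
    next_state, reward = fake_env(state, action)     # record s_{t+1} and r(s_t, a_t)

    # y_j = r + alpha * max_i q_target(s_{j+1}, a_i); train theta on (y_j - q_eval(s_t, a_t))^2
    with torch.no_grad():
        y_j = reward + alpha * q_target(next_state).max()
    loss = (y_j - q_eval(state)[action]) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    if t % C == 0:                                   # every C steps copy q_eval into q_target
        q_target.load_state_dict(q_eval.state_dict())
        epsilon = min(0.8, epsilon + 0.05)           # update the probability value epsilon
    state = next_state
```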
In summary, according to the product recommendation method provided in the embodiment of the present application, the reward indexes of the N products are obtained by simultaneously inputting the feature information of the N products into the deep neural network model, so that the deep neural network can obtain the reward index of any one of the N products through the feature information of the N products. Therefore, when the method provided by the embodiment of the application obtains the reward index of a certain product, the characteristic information of a plurality of products is comprehensively considered, and therefore the precision of the product recommendation method in the application is improved.
Based on the product recommendation method provided by the above embodiment, the embodiment of the present application further provides a product recommendation device:
referring to fig. 2, the drawing is a schematic structural diagram of a product recommendation device according to an embodiment of the present application. As shown in fig. 2, the product recommendation device provided in the embodiment of the present application includes:
the reward obtaining module 100 is configured to simultaneously input feature information of N products in a product set and feature information of a target user to a deep neural network model, and obtain reward indexes of the N products; n is an integer greater than 1;
A target product obtaining module 200, configured to take the product with the largest reward index among the N products as a target product;
the recommending module 300 is used for recommending the target product to the target user and obtaining feedback information of the target user;
and an updating module 400, configured to update the deep neural network model according to the feedback information.
As a possible implementation manner, the product recommendation device provided in the embodiment of the present application may further include: the original information acquisition module is used for acquiring original information of a user; the characteristic information acquisition module is used for acquiring user characteristic information according to the user original information; and the initialization module is used for initializing the deep neural network model through the user characteristic information. It should be noted that the user characteristic information in the embodiment of the present application includes at least one of an age, a sex, a browsing history, and a purchase history of the user.
As a possible implementation manner, the recommendation module provided in the embodiment of the present application is specifically configured to: recommending the target product to the target user, and obtaining feedback information of the target user within a preset time limit.
As a possible implementation manner, the feedback information provided in the embodiment of the present application includes: at least one of browsing information of the target product by the user, collecting information of the target product by the user, forwarding information of the target product by the user and purchasing information of the target product by the user.
As a possible implementation manner, the update module provided in the embodiment of the present application is specifically configured to: and updating the strategy function in the deep neural network model according to the feedback information.
To sum up, the product recommendation device provided in the embodiment of the present application obtains the reward indexes of the N products by simultaneously inputting the feature information of the N products into the deep neural network model, so that the deep neural network can obtain the reward index of any one of the N products through the feature information of the N products. Therefore, when the device provided by the embodiment of the present application obtains the reward index of a certain product, the feature information of a plurality of products is comprehensively considered, thereby improving the product recommendation accuracy in the present application.
As can be seen from the above description of the embodiments, those skilled in the art can clearly understand that all or part of the steps in the above embodiment methods can be implemented by software plus a necessary general hardware platform. Based on such understanding, the technical solution of the present application may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network communication device such as a media gateway, etc.) to execute the method according to the embodiments or some parts of the embodiments of the present application.
It should be noted that, in the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. The method disclosed by the embodiment corresponds to the system disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the system part for description.
It should also be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing description of the disclosed embodiments enables those skilled in the art to make or use the present application. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for recommending products, the method comprising:
simultaneously inputting the characteristic information of N products in a product set and the characteristic information of a target user into a deep neural network model to obtain reward indexes of the N products; n is an integer greater than 1;
taking the product with the largest reward index in the N products as a target product;
recommending the target product to the target user and acquiring feedback information of the target user;
and updating the deep neural network model according to the feedback information.
2. The method of claim 1, further comprising:
acquiring original information of a user;
obtaining user characteristic information according to the user original information;
initializing the deep neural network model through the user characteristic information;
the user characteristic information comprises at least one of age, gender, browsing history and purchasing history of the user.
3. The method of claim 1, wherein the obtaining feedback information of the target user comprises: and obtaining feedback information of the target user within a preset time limit.
4. The method of claim 1, wherein the feedback information comprises: and at least one of browsing information of the target product by the user, collection information of the target product by the user, forwarding information of the target product by the user and purchasing information of the target product by the user.
5. The method of any one of claims 1 to 4, wherein said updating the deep neural network model according to the feedback information comprises:
and updating a strategy function in the deep neural network model according to the feedback information.
6. A product recommendation device, the device comprising:
the reward acquisition module is used for simultaneously inputting the characteristic information of N products in the product set and the characteristic information of a target user to the deep neural network model to acquire reward indexes of the N products; n is an integer greater than 1;
the target product obtaining module is used for taking the product with the largest reward index in the N products as a target product;
the recommending module is used for recommending the target product to the target user and acquiring feedback information of the target user;
and the updating module is used for updating the deep neural network model according to the feedback information.
7. The apparatus of claim 6, further comprising:
the original information acquisition module is used for acquiring original information of a user;
the characteristic information obtaining module is used for obtaining user characteristic information according to the user original information;
the initialization module is used for initializing the deep neural network model through the user characteristic information;
the user characteristic information comprises at least one of age, gender, browsing history and purchasing history of the user.
8. The apparatus of claim 6, wherein the recommendation module is specifically configured to: recommending the target product to the target user, and obtaining feedback information of the target user within a preset time limit.
9. The apparatus of claim 6, wherein the feedback information comprises: and at least one of browsing information of the target product by the user, collection information of the target product by the user, forwarding information of the target product by the user and purchasing information of the target product by the user.
10. The apparatus according to any one of claims 6 to 9, wherein the update module is specifically configured to:
and updating a strategy function in the deep neural network model according to the feedback information.
CN202110045020.0A 2021-01-13 2021-01-13 Product recommendation method and device Pending CN112837116A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110045020.0A CN112837116A (en) 2021-01-13 2021-01-13 Product recommendation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110045020.0A CN112837116A (en) 2021-01-13 2021-01-13 Product recommendation method and device

Publications (1)

Publication Number Publication Date
CN112837116A true CN112837116A (en) 2021-05-25

Family

ID=75928055

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110045020.0A Pending CN112837116A (en) 2021-01-13 2021-01-13 Product recommendation method and device

Country Status (1)

Country Link
CN (1) CN112837116A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180108048A1 (en) * 2016-10-19 2018-04-19 Samsung Sds Co., Ltd. Method, apparatus and system for recommending contents
CN108182621A (en) * 2017-12-07 2018-06-19 合肥美的智能科技有限公司 The Method of Commodity Recommendation and device for recommending the commodity, equipment and storage medium
CN110598120A (en) * 2019-10-16 2019-12-20 信雅达系统工程股份有限公司 Behavior data based financing recommendation method, device and equipment
CN111104595A (en) * 2019-12-16 2020-05-05 华中科技大学 Deep reinforcement learning interactive recommendation method and system based on text information

Similar Documents

Publication Publication Date Title
CN111061946B (en) Method, device, electronic equipment and storage medium for recommending scenerized content
US11971884B2 (en) Interactive search experience using machine learning
CN109408731A (en) A kind of multiple target recommended method, multiple target recommended models generation method and device
WO2022166115A1 (en) Recommendation system with adaptive thresholds for neighborhood selection
US20160132601A1 (en) Hybrid Explanations In Collaborative Filter Based Recommendation System
KR102203253B1 (en) Rating augmentation and item recommendation method and system based on generative adversarial networks
CN109726331B (en) Object preference prediction method, device and computer readable medium
CN112800893A (en) Human face attribute editing method based on reinforcement learning
Paleti et al. Approaching the cold-start problem using community detection based alternating least square factorization in recommendation systems
CN114764471A (en) Recommendation method, recommendation device and storage medium
CN115867919A (en) Graph structure aware incremental learning for recommendation systems
CN112925892A (en) Conversation recommendation method and device, electronic equipment and storage medium
CN111682972A (en) Method and device for updating service prediction model
CN113688306A (en) Recommendation strategy generation method and device based on reinforcement learning
CN112837116A (en) Product recommendation method and device
CN115599990A (en) Knowledge perception and deep reinforcement learning combined cross-domain recommendation method and system
KR20200142871A (en) Method and apparatus for recommending items using explicit and implicit feedback
CN115545121A (en) Model training method and device
CN115730143A (en) Recommendation system, method, terminal and medium based on task alignment meta learning and augmentation graph
CN115391662A (en) Personalized recommendation method and device based on article attribute sampling and storage medium
CN113626721B (en) Regrettful exploration-based recommendation method and device, electronic equipment and storage medium
KR102612986B1 (en) Online recomending system, method and apparatus for updating recommender based on meta-leaining
CN112861001B (en) Recommendation value generation method and device for digital content, electronic equipment and storage medium
CN116541716B (en) Recommendation model training method and device based on sequence diagram and hypergraph
CN114971817B (en) Product self-adaptive service method, medium and device based on user demand portrait

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination