CN113869992B - Artificial intelligence based product recommendation method and device, electronic equipment and medium - Google Patents
- Publication number
- CN113869992B (application CN202111469251.0A)
- Authority
- CN
- China
- Prior art keywords
- vector representation
- user
- node
- nodes
- product
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0631—Item recommendations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/088—Non-supervised learning, e.g. competitive learning
Abstract
The invention relates to the technical field of artificial intelligence, and provides an artificial intelligence based product recommendation method and related equipment. A graph neural network with users, agents and products as nodes is constructed. After a user vector representation, an agent vector representation and a product vector representation are obtained, the graph neural network is iteratively trained, and the user vector representation, the agent vector representation and the product vector representation are updated in each round of iterative training. A cross entropy loss function and an unsupervised loss function are constructed according to the updated user vector representation, the updated agent vector representation and the updated product vector representation, and optimization training is performed on the graph neural network based on the two loss functions to obtain a product recommendation model. Finally, the product recommendation model is used to recommend products for the user to be recommended. The invention improves the product recommendation effect.
Description
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a product recommendation method and device based on artificial intelligence, electronic equipment and a medium.
Background
A life insurance agent usually needs to promote customer consumption by recommending customer-locking products to customers, so as to lock in customers, improve user stickiness, and pave the way for subsequent long-term insurance sales.
In the process of implementing the invention, the inventor found that, when facing multiple products available for sale, a life insurance agent generally recommends the same customer-locking product to similar users based on the similarity between users. However, this recommendation method cannot accurately select a customer-locking product that meets the users' needs, so the product recommendation effect is poor.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a product recommendation method, device, electronic device and storage medium based on artificial intelligence, which can improve the product recommendation effect.
A first aspect of the invention provides an artificial intelligence based product recommendation method, the method comprising:
acquiring user vector representation based on user information, acquiring agent vector representation based on agent information, and acquiring product vector representation based on product information;
constructing a graph neural network with users, agents and products as nodes;
iteratively training the graph neural network, and updating the user vector representation, updating the agent vector representation, and updating the product vector representation each time the iterative training is performed;
constructing a cross entropy loss function and an unsupervised loss function according to the updated user vector representation, the updated agent vector representation and the updated product vector representation;
performing optimization training on the graph neural network based on the cross entropy loss function and the unsupervised loss function to obtain a product recommendation model;
and recommending the product for the user to be recommended by using the product recommendation model.
In an optional embodiment, the updating the user vector representation at each iterative training comprises:
determining similar neighbor nodes and heterogeneous neighbor nodes of the user node;
acquiring a first vector representation of the similar neighbor node after the previous round of training is finished and acquiring a second vector representation of the heterogeneous neighbor node after the previous round of training is finished;
calculating a first attention weight of the similar neighbor node during the current training round, and calculating a second attention weight of the heterogeneous neighbor node during the current training round;
calculating to obtain a first aggregation vector representation according to the first vector representation and the first attention weight;
calculating to obtain a second aggregate vector representation according to the second vector representation and the second attention weight;
and fusing the first aggregation vector representation and the second aggregation vector representation to obtain an updated user vector representation.
where h_u^(k-1) denotes the user vector representation of user node u at the end of round k-1 of training, a_s is the attention-weight parameter of the similar neighbor nodes solved by user node u in round k of training and is a learning parameter in model training, N(u) denotes the neighbor nodes of user node u, and || denotes vector concatenation;
where h_u^(k-1) denotes the user vector representation of user node u at the end of round k-1 of training, and a_d is the attention-weight parameter of the heterogeneous neighbor nodes solved by user node u in round k of training, which is a learning parameter in model training.
In an optional embodiment, the fusing the first aggregated vector representation and the second aggregated vector representation to obtain an updated user vector representation includes:
calculating a first calculation weight of the similar neighbor node in the training of the current round;
calculating a second calculation weight of the heterogeneous neighbor node in the current training round;
fusing the first aggregation vector representation and the second aggregation vector representation by the first calculation weight and the second calculation weight to obtain a target fusion vector representation;
and carrying out nonlinear transformation on the target fusion vector representation and the user vector representation of the user node after the previous training round is finished to obtain the updated user vector representation.
In an optional embodiment, the constructing the cross-entropy loss function and the unsupervised loss function according to the updated user vector representation, the updated agent vector representation, and the updated product vector representation includes:
generating a target user vector representation according to the updated user vector representation and the corresponding updated agent vector representation;
determining a target user corresponding to the target user vector representation and determining a target product corresponding to the updated product vector representation;
calculating a predicted purchase probability of the target user to purchase the target product according to the target user vector representation and the updated product vector representation;
constructing a cross entropy loss function based on the predicted purchase probability and the corresponding real purchase label;
and randomly negative sampling the graph neural network, and constructing an unsupervised loss function based on the node vector representation of the negative sampling.
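The two losses above can be sketched as follows: binary cross entropy on the predicted purchase probability, plus an unsupervised loss over randomly negative-sampled nodes. The concrete form of the unsupervised loss (a GraphSAGE-style positive/negative link objective) is an assumption, since this embodiment does not spell it out:

```python
import numpy as np

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def cross_entropy(p, y):
    """Binary cross entropy between predicted purchase probability p and real label y."""
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def unsupervised_loss(z_u, z_pos, z_negs):
    """Pull a node's vector toward a linked node (z_pos) and push it away from
    randomly negative-sampled unlinked nodes (z_negs) -- an assumed
    GraphSAGE-style form, not necessarily the patent's exact objective."""
    loss = -np.log(sigmoid(z_u @ z_pos))
    loss += -sum(np.log(sigmoid(-z_u @ z_n)) for z_n in z_negs)
    return loss

l_ce = cross_entropy(0.5, 1)  # = ln 2 when the model is maximally unsure
z_u, z_pos = np.array([1.0, 0.0]), np.array([0.0, 1.0])
l_un = unsupervised_loss(z_u, z_pos, [np.array([0.0, -1.0])])
```

Minimizing the sum of both terms trains the network to predict purchases while keeping linked nodes close in the embedding space.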
In an optional embodiment, the constructing a graph neural network with users, agents and products as nodes comprises:
constructing an initial network structure diagram, wherein nodes in the initial network structure diagram are agents, users and products;
establishing an edge between nodes corresponding to users who have bought the same product in the same time period;
establishing an edge between nodes corresponding to the agents belonging to the same unit;
establishing an edge between nodes corresponding to products of the same category;
establishing an edge between nodes corresponding to users and products with purchasing relations;
and establishing an edge between the agent with the interactive relation and the node corresponding to the user to obtain the graph neural network.
In an optional embodiment, the recommending a product for the user to be recommended by using the product recommendation model includes:
acquiring user vector representation of the user to be recommended;
determining a target agent interacting with the user to be recommended;
acquiring the updated agent vector representation corresponding to the target agent and a plurality of updated product vector representations obtained during the last round of iterative training of the graph neural network;
generating an input vector representation according to the user vector representation of the user to be recommended, the agent vector representation of the target agent and the updated product vector representations;
inputting the input vector representation into the product recommendation model, and acquiring a plurality of prediction probabilities output by the product recommendation model;
and recommending products for the user to be recommended according to the plurality of prediction probabilities.
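The inference steps above can be sketched as follows; the sigmoid-of-dot-product scorer merely stands in for the trained product recommendation model, and the vector dimensions are illustrative assumptions:

```python
import numpy as np

def recommend(user_vec, agent_vec, product_vecs, score_fn, top_k=1):
    """Concatenate the to-be-recommended user's vector with the interacting
    (target) agent's vector to form the input representation, score every
    candidate product, and return the top_k product ids by predicted probability."""
    q = np.concatenate([user_vec, agent_vec])
    probs = {pid: score_fn(q, pv) for pid, pv in product_vecs.items()}
    return sorted(probs, key=probs.get, reverse=True)[:top_k]

# Toy scorer standing in for the trained model's output probability.
score = lambda q, pv: 1.0 / (1.0 + np.exp(-(q @ pv)))
products = {"p1": np.array([1.0, 0.0, 0.0, 0.0]),
            "p2": np.array([0.0, 0.0, 1.0, 0.0])}
best = recommend(np.array([2.0, 0.0]), np.array([0.0, 1.0]), products, score)
```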
A second aspect of the present invention provides an artificial intelligence based product recommendation apparatus, the apparatus comprising:
the acquisition module is used for acquiring user vector representation based on the user information, acquiring agent vector representation based on the agent information and acquiring product vector representation based on the product information;
the first building module is used for building a graph neural network with users, agents and products as nodes;
the updating module is used for performing iterative training on the graph neural network, and updating the user vector representation, the agent vector representation and the product vector representation during each iterative training;
the second construction module is used for constructing a cross entropy loss function and an unsupervised loss function according to the updated user vector representation, the updated agent vector representation and the updated product vector representation;
the optimization module is used for performing optimization training on the graph neural network based on the cross entropy loss function and the unsupervised loss function to obtain a product recommendation model;
and the recommending module is used for recommending the product for the user to be recommended by using the product recommending model.
A third aspect of the invention provides an electronic device comprising a processor for implementing the artificial intelligence based product recommendation method when executing a computer program stored in a memory.
A fourth aspect of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the artificial intelligence based product recommendation method.
In summary, the product recommendation method, apparatus, electronic device and storage medium based on artificial intelligence of the present invention construct a graph neural network with users, agents and products as nodes, so that the graph neural network has rich information expression. After the user vector representation, agent vector representation and product vector representation are obtained, the graph neural network is iteratively trained, and the user vector representation, the agent vector representation and the product vector representation are updated in each round of iterative training, so that the vector representations of the nodes become better. A cross entropy loss function and an unsupervised loss function are constructed according to the updated user vector representation, the updated agent vector representation and the updated product vector representation, and the graph neural network is trained with both loss functions, so that the graph neural network can capture the information in the graph topology more efficiently. The training effect of the graph neural network is therefore improved, and a product recommendation model with better performance is obtained. Finally, when the product recommendation model is used to recommend products for the user to be recommended, the product recommendation effect is improved and the recommendation accuracy is higher.
Drawings
FIG. 1 is a flowchart of an artificial intelligence based product recommendation method according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of a graph neural network constructed in accordance with the present invention.
Fig. 3 is a block diagram of an artificial intelligence based product recommendation apparatus according to a second embodiment of the present invention.
Fig. 4 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a detailed description of the present invention will be given below with reference to the accompanying drawings and specific embodiments. It should be noted that the embodiments of the present invention and features of the embodiments may be combined with each other without conflict.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
The product recommendation method based on artificial intelligence provided by the embodiment of the invention is executed by the electronic equipment, and correspondingly, the product recommendation device based on artificial intelligence operates in the electronic equipment.
Example one
FIG. 1 is a flowchart of an artificial intelligence based product recommendation method according to an embodiment of the present invention. The artificial intelligence based product recommendation method specifically comprises the following steps; according to different requirements, the order of the steps in the flowchart can be changed and some steps can be omitted.
S11, obtaining user vector representation based on the user information, obtaining agent vector representation based on the agent information, and obtaining product vector representation based on the product information.
The user information, the agent information and the product information can be acquired from a database stored locally on the electronic device. The database comprises a user information database, an agent information database and a product information database: first information of users is recorded in the user information database, second information of agents is recorded in the agent information database, and third information of products is recorded in the product information database.
The first information may include data that can be used to characterize a user profile, such as gender, age, occupation, wealth level and personal preferences; the second information may include data that can be used to characterize an agent profile, such as gender, years of employment, job level and sales volume; the third information may include data that can be used to describe a product, such as manufacturer, production date, batch, shelf life, place of production and raw materials.
After data cleaning and normalization are performed on the plurality of data items in the first information, the results are concatenated to obtain the user vector representation; the agent vector representation is obtained from the second information, and the product vector representation from the third information, in the same way.
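As a minimal sketch of this normalize-then-concatenate step (the field names, value ranges and the choice of min-max normalization are illustrative assumptions, not prescribed by this embodiment):

```python
import numpy as np

def build_vector(fields, ranges):
    """Min-max normalize each numeric field, then concatenate into one vector.
    Real profiles would also encode categorical data (gender, occupation, ...)
    e.g. as one-hot features before concatenation."""
    parts = []
    for name, value in fields.items():
        lo, hi = ranges[name]
        parts.append((value - lo) / (hi - lo) if hi > lo else 0.0)
    return np.array(parts)

# A toy user vector built from two numeric profile fields.
user_vec = build_vector({"age": 30, "wealth_level": 3},
                        {"age": (18, 80), "wealth_level": (1, 5)})
```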
And S12, constructing a graph neural network with the user, the agent and the product as nodes.
In a life insurance recommendation scenario, there are multiple association relations such as purchasing, selling and interacting among agents, users and products, and these association relations are also stored locally on the electronic device.
The agent, the user and the product are respectively used as nodes in the graph neural network, and if an incidence relation exists between the nodes in the graph neural network, an edge is established between the nodes with the incidence relation.
In an optional embodiment, the constructing a graph neural network with users, agents and products as nodes comprises:
constructing an initial network structure diagram, wherein nodes in the initial network structure diagram are agents, users and products;
establishing an edge between nodes corresponding to users who have bought the same product in the same time period;
establishing an edge between nodes corresponding to the agents belonging to the same unit;
establishing an edge between nodes corresponding to products of the same category;
establishing an edge between nodes corresponding to users and products with purchasing relations;
and establishing an edge between the agent with the interactive relation and the node corresponding to the user to obtain the graph neural network.
The same time period may refer to the same week, month, or quarter. A unit may be a group, a division, or a department.
As shown in fig. 2, it is assumed that the local database of the electronic device records user 1, user 2, user 3, agent 1, agent 2, product 1, product 2 and product 3. User 1 interacted with agent 1 and purchased product 1 and product 3; user 2 interacted with agent 2 and purchased product 3, and user 1 and user 2 purchased product 3 in the same time period; user 3 interacted with agent 2 and purchased product 2; product 1 and product 2 are products of the same category. The constructed graph neural network therefore includes the user 1 node, the user 2 node, the user 3 node, the agent 1 node, the agent 2 node, the product 1 node, the product 2 node and the product 3 node.
Since the user 1 and the user 2 purchase the product 3 in the same time period, an edge is established between the user 1 node and the user 2 node, an edge is established between the user 1 node and the product 3 node, and an edge is established between the user 2 node and the product 3 node. The user 1 also purchases the product 1, an edge is established between the user 1 node and the product 1 node, the user 3 purchases the product 2, and an edge is established between the user 3 node and the product 2 node.
Since agent 1 and agent 2 belong to two different units, no edge is established between the agent 1 node and the agent 2 node.
Product 1 and product 2 are products of the same category, and an edge is established between the product 1 node and the product 2 node.
Since agent 1 has interacted with user 1, and agent 2 has interacted with user 2 and user 3, an edge is established between the agent 1 node and the user 1 node, and edges are established between the agent 2 node and the user 2 node and between the agent 2 node and the user 3 node.
According to the optional implementation mode, the association relations of purchase, sale, interaction and the like among agents, users and products are fully utilized to construct the heterogeneous graph structure, and the information expression of the graph neural network is richer.
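The five edge-building rules above can be sketched with plain Python sets; the input record formats (purchase triples, unit and category labels, interaction pairs) are assumptions for illustration:

```python
from itertools import combinations

def build_edges(purchases, agent_unit, product_category, interactions):
    """purchases: (user, product, period) triples; agent_unit / product_category:
    node -> label; interactions: (agent, user) pairs.
    Returns an undirected edge set of sorted node-id tuples."""
    edge = lambda a, b: tuple(sorted((a, b)))
    edges = set()
    same_buy = {}
    for user, product, period in purchases:
        edges.add(edge(user, product))               # user-product purchase edge
        same_buy.setdefault((product, period), []).append(user)
    for users in same_buy.values():                  # users who bought the same
        for u, v in combinations(users, 2):          # product in the same period
            edges.add(edge(u, v))
    for a, b in combinations(agent_unit, 2):         # agents in the same unit
        if agent_unit[a] == agent_unit[b]:
            edges.add(edge(a, b))
    for p, q in combinations(product_category, 2):   # products of the same category
        if product_category[p] == product_category[q]:
            edges.add(edge(p, q))
    for agent, user in interactions:                 # agent-user interaction edge
        edges.add(edge(agent, user))
    return edges

# The example of FIG. 2:
edges = build_edges(
    purchases=[("u1", "p1", "T1"), ("u1", "p3", "T1"),
               ("u2", "p3", "T1"), ("u3", "p2", "T2")],
    agent_unit={"a1": "unit1", "a2": "unit2"},
    product_category={"p1": "catA", "p2": "catA", "p3": "catB"},
    interactions=[("a1", "u1"), ("a2", "u2"), ("a2", "u3")],
)
```

Note that no edge is produced between "a1" and "a2" (different units), matching the example.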
S13, performing iterative training on the graph neural network, and updating the user vector representation, the agent vector representation and the product vector representation during each iterative training.
A first round of training of the graph neural network may be performed based on the user vector representation, the agent vector representation and the product vector representation. During the second round of training, the user vector representation, the agent vector representation and the product vector representation from the first round are updated; during the third round, those obtained at the end of the second round are updated; during the fourth round, those obtained at the end of the third round are updated; and so on, until in the last round of training the user vector representation, agent vector representation and product vector representation obtained at the end of the penultimate round are updated.
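This round-by-round refresh can be sketched as a loop in which round k consumes only the vectors produced at the end of round k-1. The `update_fn` signature is an assumption; it stands in for the attention-based neighbor aggregation described in the following embodiments:

```python
def train_rounds(nodes, init_vectors, num_rounds, update_fn):
    """Iteratively refresh node vectors: every node's round-k vector is
    computed from the complete set of round-(k-1) vectors."""
    vectors = dict(init_vectors)
    for _ in range(num_rounds):
        vectors = {n: update_fn(n, vectors) for n in nodes}
    return vectors

# Toy update: average a node's previous vector with its neighbors' previous vectors.
neighbors = {"u1": ["u2"], "u2": ["u1"]}
avg = lambda n, vecs: sum(
    [vecs[n]] + [vecs[m] for m in neighbors[n]]) / (1 + len(neighbors[n]))
final = train_rounds(["u1", "u2"], {"u1": 0.0, "u2": 1.0}, 3, avg)
```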
In an optional embodiment, the updating the user vector representation at each iterative training comprises:
determining similar neighbor nodes and heterogeneous neighbor nodes of the user node;
acquiring a first vector representation of the similar neighbor node after the previous round of training is finished and acquiring a second vector representation of the heterogeneous neighbor node after the previous round of training is finished;
calculating a first attention weight of the similar neighbor node during the current training round, and calculating a second attention weight of the heterogeneous neighbor node during the current training round;
calculating to obtain a first aggregation vector representation according to the first vector representation and the first attention weight;
calculating to obtain a second aggregate vector representation according to the second vector representation and the second attention weight;
and fusing the first aggregation vector representation and the second aggregation vector representation to obtain an updated user vector representation.
The similar neighbor nodes of a user node are the other user nodes that have an edge to that user node, and the vector representations of the similar neighbor nodes are the user vector representations of those other user nodes. For example, for the user 1 node, the similar neighbor node is the user 2 node, and the vector representation of this similar neighbor node is the user vector representation of the user 2 node. For the user 2 node, the similar neighbor node is the user 1 node, and its vector representation is the user vector representation of the user 1 node. For the user 3 node, the set of similar neighbor nodes is empty.
The heterogeneous neighbor node of the user node is a product node with an edge established between the heterogeneous neighbor node and the user node, and the vector representation of the heterogeneous neighbor node of the user node is the product vector representation of the product node. It should be noted that, since a user usually interacts with an agent, the heterogeneous neighbor nodes of the user node only include the product node and do not include the agent node. That is, the agent vector representations of the agent neighbor nodes are not aggregated, but only the product vector representations of the product neighbor nodes are aggregated.
Hereinafter, user node u is taken as an example to illustrate the process of obtaining the updated user vector representation after the current round of iterative training, where K denotes the total number of rounds and k denotes the current round.
The similar neighbor nodes of user node u are defined as N_s(u), and the heterogeneous neighbor nodes of user node u are defined as N_d(u).
The user vector representation h_u^(k) of user node u in round k aggregates the vector representations of the neighbor nodes (similar neighbor nodes and heterogeneous neighbor nodes) with the user vector representation h_u^(k-1) of user node u in round k-1.
To distinguish the contributions of the different neighbor nodes to user node u, aggregation calculation is performed separately on the similar neighbor nodes and the heterogeneous neighbor nodes of user node u: the aggregation calculation on the similar neighbor nodes yields the first aggregation vector representation, and the aggregation calculation on the heterogeneous neighbor nodes yields the second aggregation vector representation.
The first attention weight and the first aggregation vector representation are computed as
α_{u,i}^(k) = softmax_{i ∈ N(u)}( a_s · [h_u^(k-1) || h_i^(k-1)] ),
agg_s^(k)(u) = Σ_{i ∈ N_s(u)} α_{u,i}^(k) · h_i^(k-1),
where α_{u,i}^(k) denotes the first attention weight of the i-th similar neighbor node of user node u in round k of training, h_i^(k-1) denotes the user vector representation (first vector representation) of the i-th similar neighbor node at the end of round k-1 of training, h_u^(k-1) denotes the user vector representation of user node u at the end of round k-1 of training, a_s is the attention-weight parameter of the similar neighbor nodes solved by user node u in round k of training and is a learning parameter in model training, N(u) denotes the neighbor nodes (including the similar neighbor nodes and the heterogeneous neighbor nodes) of user node u, and || denotes vector concatenation.
The second attention weight and the second aggregation vector representation are computed as
β_{u,j}^(k) = softmax_{j ∈ N(u)}( a_d · [h_u^(k-1) || W_{u→p} · h_j^(k-1)] ),
agg_d^(k)(u) = Σ_{j ∈ N_d(u)} β_{u,j}^(k) · W_{u→p} · h_j^(k-1),
where β_{u,j}^(k) denotes the second attention weight of the j-th heterogeneous neighbor node of user node u in round k of training, h_j^(k-1) denotes the product vector representation (second vector representation) of the j-th heterogeneous neighbor node at the end of round k-1 of training, a_d is the attention-weight parameter of the heterogeneous neighbor nodes solved by user node u in round k of training and is a learning parameter in model training, and W_{u→p} denotes the transformation matrix from user node u to product node j.
Since the user vector representation and the product vector representation belong to different vector spaces, the heterogeneous neighbor transformation matrix W_{u→p} from the user node to the product node needs to be set; the heterogeneous neighbor transformation matrix W_{u→p} is a learning parameter during model training.
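The attention-weighted neighbor aggregation described above can be sketched numerically as follows; the exact score function (dot product of a parameter vector with the concatenated pair, softmax over neighbors) is an assumption in the spirit of graph attention networks, and all array shapes are illustrative:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def aggregate(h_self, neighbor_vecs, a):
    """Attention-weighted aggregation of neighbor vectors.
    Score for each neighbor i: a . [h_self || h_i]; weights via softmax;
    output is the weighted sum (a first/second aggregation vector)."""
    scores = np.array([a @ np.concatenate([h_self, h]) for h in neighbor_vecs])
    weights = softmax(scores)
    return weights @ np.stack(neighbor_vecs)

h_u = np.array([1.0, 0.0])
neigh = [np.array([0.0, 2.0]), np.array([2.0, 0.0])]
a = np.zeros(4)                 # zero parameters -> uniform attention weights
agg = aggregate(h_u, neigh, a)  # reduces to the plain mean of the neighbors
```

With a learned, nonzero parameter vector `a`, the weights shift toward the neighbors most relevant to the center node.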
In an optional embodiment, the fusing the first aggregated vector representation and the second aggregated vector representation to obtain an updated user vector representation includes:
calculating a first calculation weight of the similar neighbor node in the training of the current round;
calculating a second calculation weight of the heterogeneous neighbor node in the current training round;
fusing the first aggregation vector representation and the second aggregation vector representation by the first calculation weight and the second calculation weight to obtain a target fusion vector representation;
and carrying out nonlinear transformation on the target fusion vector representation and the user vector representation of the user node after the previous training round is finished to obtain the updated user vector representation.
After the first aggregation vector representation agg_s^(k)(u) of user node u and the second aggregation vector representation agg_d^(k)(u) of user node u are obtained, the first aggregation vector representation corresponding to the similar nodes and the second aggregation vector representation corresponding to the heterogeneous nodes are fused to obtain the vector representation of user node u that fuses all neighbor nodes, i.e., the target fusion vector representation:
z_u^(k) = λ_s^(k) · agg_s^(k)(u) + λ_d^(k) · agg_d^(k)(u),
where λ_s^(k) denotes the first calculation weight of the vector representation of the similar neighbor nodes of user node u in round k of training, λ_d^(k) denotes the second calculation weight of the vector representation of the heterogeneous neighbor nodes of user node u in round k of training, q is the training parameter needed to solve the calculation weights, and z_u^(k) denotes the target fusion vector representation of the neighbor nodes of user node u.
The first calculation weight and the second calculation weight may be found by an attention mechanism.
After the user vector representation h_u^(k-1) of user node u at the end of round k-1 of training is obtained, the user vector representation h_u^(k-1) and the target fusion vector representation z_u^(k) of user node u in round k of training are concatenated, and nonlinear transformation is performed on the concatenated vector representation to obtain the updated user vector representation h_u^(k) of user node u.
The updated user vector representation h_u^(k) of user node u is expressed by the following formula:
h_u^(k) = σ( W · [h_u^(k-1) || z_u^(k)] ),
where σ is the activation function and W is the matrix used for the linear transformation, which is a learning parameter of the model to be trained.
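A minimal sketch of this fuse-then-transform step; the choice of tanh as the activation, the toy weight values and the toy transformation matrix are assumptions:

```python
import numpy as np

def fuse_and_update(h_prev, agg_same, agg_hetero, w_same, w_hetero, W):
    """Fuse the two aggregation vectors with their calculation weights,
    concatenate with the previous-round user vector, and apply a nonlinear
    transformation to obtain the updated user vector representation."""
    z = w_same * agg_same + w_hetero * agg_hetero      # target fusion vector
    return np.tanh(W @ np.concatenate([h_prev, z]))    # updated representation

h_prev = np.array([0.5, -0.5])
agg_s, agg_d = np.array([1.0, 0.0]), np.array([0.0, 1.0])
W = np.hstack([np.eye(2), np.eye(2)])   # toy linear map from R^4 to R^2
h_new = fuse_and_update(h_prev, agg_s, agg_d, 0.5, 0.5, W)
```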
Similarly, in the process of updating the agent vector representation in each round of iterative training, the similar neighbor nodes of an agent node are defined as N_s(a) and the heterogeneous neighbor nodes as N_d(a), and the similar and heterogeneous neighbor nodes of the agent node are determined: the similar neighbor nodes of an agent node are the other agent nodes that have an edge to the agent node, and the heterogeneous neighbor nodes of an agent node are the user nodes and product nodes that have an edge to the agent node.
In the process of updating the product vector representation in each round of iterative training, the similar neighbor nodes of a product node are defined as N_s(p) and the heterogeneous neighbor nodes as N_d(p), and the similar and heterogeneous neighbor nodes of the product node are determined: the similar neighbor nodes of a product node are the other product nodes that have an edge to the product node, and the heterogeneous neighbor nodes of a product node are the user nodes and agent nodes that have an edge to the product node.
The process of updating the agent vector representation and the product vector representation is analogous to the process of updating the user vector representation, and is not described in detail herein.
In this optional embodiment, the accuracy of the vector representations of the user nodes, product nodes and agent nodes is improved by aggregating the vector representations of the neighbor nodes; in addition, the heterogeneous neighbor transformation matrix M solves the problem of information loss caused by training heterogeneous nodes in the same vector space, further improving the accuracy of the node vector representations.
In the process of iteratively training the graph neural network, the user vector representation, the agent vector representation and the product vector representation are all updated, and the updated user vector representation and the updated agent vector representation are spliced to obtain the final user vector representation (that is, the target user vector representation), making the method suitable for customer-retention product recommendation scenarios with agent intervention. The agent vector representation can also be reused in other business scenarios, such as agent clustering analysis.
And S14, constructing a cross entropy loss function and an unsupervised loss function according to the updated user vector representation, the updated agent vector representation and the updated product vector representation.
At the end of each round of training, an updated user vector representation, an updated agent vector representation, and an updated product vector representation are obtained. In order to determine the condition for ending the iterative training, a loss function needs to be constructed according to the updated user vector representation, the updated agent vector representation, and the updated product vector representation, and the iterative training process is ended by minimizing the loss function.
In an optional embodiment, the constructing the cross-entropy loss function and the unsupervised loss function according to the updated user vector representation, the updated agent vector representation, and the updated product vector representation includes:
generating a target user vector representation according to the updated user vector representation and the corresponding updated agent vector representation;
determining a target user corresponding to the target user vector representation and determining a target product corresponding to the updated product vector representation;
calculating a predicted purchase probability of the target user to purchase the target product according to the target user vector representation and the updated product vector representation;
constructing a cross entropy loss function based on the predicted purchase probability and the corresponding real purchase label;
and performing random negative sampling on the graph neural network, and constructing an unsupervised loss function based on the vector representations of the negatively sampled nodes.
And splicing the updated user vector representation and the corresponding updated agent vector representation to obtain the target user vector representation.
The target user vector representation and the corresponding updated product vector representation are respectively input into an MLP (multilayer perceptron) to obtain vector representations with the same dimension, and an inner product operation is then performed on the two resulting vectors to obtain the predicted purchase probability ŷ_uap that user u, who interacts with agent a, purchases product p.
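A sketch of this scoring step (NumPy; single-layer MLPs with tanh and a sigmoid squashing of the inner product are assumptions — the patent specifies only an MLP projection to a common dimension followed by an inner product):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predicted_purchase_probability(user_vec, product_vec, W_u, W_p):
    """Project both vectors into a common dimension with one-layer MLPs,
    then squash their inner product into a probability."""
    zu = np.tanh(W_u @ user_vec)      # same output dimension for both
    zp = np.tanh(W_p @ product_vec)
    return float(sigmoid(zu @ zp))

rng = np.random.default_rng(1)
user_vec = rng.standard_normal(6)     # target user vector (user ‖ agent)
product_vec = rng.standard_normal(4)  # updated product vector
W_u = rng.standard_normal((5, 6))     # MLP weights: 6 -> 5
W_p = rng.standard_normal((5, 4))     # MLP weights: 4 -> 5
p = predicted_purchase_probability(user_vec, product_vec, W_u, W_p)
```

Because the two input vectors generally have different dimensions (the target user vector is a concatenation), the per-side MLPs are what make the inner product well-defined.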
where y_uap is the true purchase label indicating whether user u, who interacts with agent a, purchased product p: if user u interacting with agent a purchased product p, the corresponding true purchase label is 1; if user u interacting with agent a did not purchase product p, the corresponding true purchase label is 0.
According to the principle that adjacent nodes in the graph neural network are similar, several nodes are randomly negatively sampled from the graph neural network, and the unsupervised loss function L_u for a node vector z_v is constructed, expressed as follows:

L_u = -log(σ(z_v · z_u)) - Q · E_{v_n ~ P_n(v)} [log(σ(-z_v · z_{v_n}))]

where z_u is the vector representation of a neighbor node of v (the two vectors are dot-multiplied, passed through the activation function σ, and then log-transformed), z_{v_n} is the vector representation of a randomly negatively sampled non-neighbor node of v, and Q is the number of negatively sampled nodes.
Random negative sampling means that, for any node, the nodes adjacent to it and the nodes not adjacent to it are identified (that is, the nodes with and without an edge established to it), the number of adjacent nodes is counted, and non-adjacent nodes are then randomly sampled so that the number of randomly sampled non-adjacent nodes equals the number of adjacent nodes.
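A sketch of this sampling rule (plain Python; the adjacency structure and node names are illustrative):

```python
import random

def random_negative_sample(node, adjacency, all_nodes, seed=42):
    """For one node, sample as many non-neighbors as it has neighbors."""
    neighbors = adjacency[node]
    non_neighbors = [n for n in all_nodes
                     if n != node and n not in neighbors]
    rng = random.Random(seed)
    return rng.sample(non_neighbors, min(len(neighbors), len(non_neighbors)))

adjacency = {"user1": {"user2", "product1", "product3"},
             "user2": {"user1", "product3"}}
all_nodes = ["user1", "user2", "user3", "agent1", "agent2",
             "product1", "product2", "product3"]
# user2 has 2 neighbors, so 2 non-neighbors are drawn
negatives = random_negative_sample("user2", adjacency, all_nodes)
```

Balancing positives (neighbors) and negatives (sampled non-neighbors) keeps the unsupervised loss from being dominated by the far larger set of non-adjacent pairs.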
In this optional embodiment, on the basis of constructing the cross entropy loss function, an unsupervised loss function is constructed based on the principle that adjacent nodes are closer, so that the graph neural network is trained with the cross entropy loss function and the unsupervised loss function jointly. This allows the graph neural network to capture the information in the graph topology more efficiently and improves the training effect of the graph neural network.
And S15, performing optimization training on the neural network of the graph based on the cross entropy loss function and the unsupervised loss function to obtain a product recommendation model.
The cross entropy loss function and the unsupervised loss function are summed to obtain the final loss function; the graph neural network is then optimized with minimization of the final loss function as the training target, and the graph neural network corresponding to the minimum final loss is determined as the product recommendation model.
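A minimal sketch of the summed training objective (plain Python; the per-node unsupervised terms are assumed to be precomputed elsewhere, and all names are illustrative):

```python
import math

def total_loss(y_true, y_pred, unsup_terms, eps=1e-12):
    """Final loss = binary cross-entropy over purchase labels plus the
    sum of per-node unsupervised loss terms."""
    ce = -sum(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
              for y, p in zip(y_true, y_pred))
    return ce + sum(unsup_terms)

# three (user, agent, product) triples with predicted probabilities,
# plus two precomputed unsupervised terms
loss = total_loss([1, 0, 1], [0.9, 0.2, 0.7], [0.05, 0.02])
```

Minimizing this single scalar lets one optimizer drive both the supervised purchase prediction and the neighborhood-similarity objective at once.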
The vector representations obtained by training the graph neural network are used directly to predict whether a user purchases a product; that is, the graph neural network is trained with supervision from the purchase labels. This differs from two-stage training schemes such as node2vec (where learning the node representations is an independent training stage, unrelated to the final task of predicting whether the user purchases the product), so better node vector representations can be obtained, and the resulting product recommendation model has a better recommendation effect.
And S16, recommending products for the user to be recommended by using the product recommendation model.
After the electronic equipment obtains the product recommendation model through training, the product recommendation model can be used for product recommendation, and therefore the product recommendation accuracy is improved.
In an optional embodiment, the recommending a product for the user to be recommended by using the product recommendation model includes:
acquiring user vector representation of the user to be recommended;
determining a target agent interacting with the user to be recommended;
obtaining updated agent vector representations and a plurality of updated product vector representations corresponding to the target agents during the last iterative training of the graph neural network;
generating an input vector representation according to the user vector representation of the user to be recommended, the agent vector representation of the target agent and the updated product vector representations;
inputting the input vector representation into the product recommendation model, and acquiring a plurality of prediction probabilities output by the product recommendation model;
and recommending products for the user to be recommended according to the plurality of prediction probabilities.
The user to be recommended refers to a user needing product recommendation, data which are used for depicting the user portrait of the user to be recommended, such as gender, age, occupation, wealth level, personal preference and the like of the user to be recommended, are obtained, and the data which are used for depicting the user portrait of the user to be recommended are subjected to data cleaning and normalization and then are spliced to obtain user vector representation of the user to be recommended. When the iterative training is finished, the graph neural network obtains updated agent vector representation of each agent node and updated product vector representation of each product node.
Determining a target agent interacting with a user to be recommended, then determining updated agent vector representation of the target agent, and forming a triple (user vector representation of the user to be recommended, updated agent vector representation of the target agent and updated product vector representation of each product node) by the user vector representation of the user to be recommended, the updated agent vector representation of the target agent and the updated product vector representation of each product node. And inputting each triad into the product recommendation model, so that the prediction probability is output through the product recommendation model.
The prediction probability is used for representing the possibility that the user to be recommended interacting with the target agent purchases the corresponding product, the higher the prediction probability is, the higher the possibility that the corresponding product is purchased is, and the lower the prediction probability is, the lower the possibility that the corresponding product is purchased is. And recommending the product corresponding to the maximum prediction probability to the user to be recommended.
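The triple-scoring and argmax recommendation step might be sketched as follows (the toy scorer stands in for the trained product recommendation model and is purely illustrative):

```python
import math

def toy_score(u, a, p):
    """Toy stand-in for the trained model: combine the user and agent
    vectors, dot with the product vector, squash to (0, 1)."""
    combined = [ui + ai for ui, ai in zip(u, a)]
    s = sum(x * y for x, y in zip(combined, p))
    return 1.0 / (1.0 + math.exp(-s))

def recommend(user_vec, agent_vec, product_vecs, score):
    """Score every (user, agent, product) triple and return the product
    with the highest predicted purchase probability."""
    best_product, best_p = None, -1.0
    for name, pv in product_vecs.items():
        prob = score(user_vec, agent_vec, pv)
        if prob > best_p:
            best_product, best_p = name, prob
    return best_product, best_p

products = {"product1": [0.9, 0.1], "product2": [0.1, 0.9], "product3": [0.5, 0.5]}
best, prob = recommend([1.0, 0.0], [0.5, 0.0], products, toy_score)
```

In practice the loop would be batched, but the logic is the same: one probability per candidate product, then recommend the maximum.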
The artificial intelligence based product recommendation method of the invention constructs a graph neural network with users, agents and products as nodes, so the information expression of the graph neural network is rich. After the user vector representation, agent vector representation and product vector representation are obtained, the graph neural network is iteratively trained, and the user vector representation, agent vector representation and product vector representation are updated during each iterative training, yielding better node vector representations. A cross entropy loss function and an unsupervised loss function are constructed according to the updated user vector representation, the updated agent vector representation and the updated product vector representation, and the graph neural network is trained with the cross entropy loss function and the unsupervised loss function jointly, so that it captures the information in the graph topology more efficiently. The training effect of the graph neural network is thereby improved and a product recommendation model with better performance is obtained; finally, when the product recommendation model is used to recommend products for the user to be recommended, the product recommendation effect is improved and the recommendation accuracy is higher.
Example two
Fig. 3 is a block diagram of an artificial intelligence based product recommendation apparatus according to a second embodiment of the present invention.
In some embodiments, the artificial intelligence based product recommender 30 may comprise a plurality of functional modules comprised of computer program segments. The computer programs of the various program segments in the artificial intelligence based product recommendation device 30 may be stored in a memory of an electronic device and executed by at least one processor to perform the functions of artificial intelligence based product recommendation (described in detail in fig. 1).
In this embodiment, the artificial intelligence based product recommendation device 30 may be divided into a plurality of functional modules according to the functions performed by the device. The functional module may include: an obtaining module 301, a first building module 302, an updating module 303, a second building module 304, an optimizing module 305, and a recommending module 306. The module referred to herein is a series of computer program segments capable of being executed by at least one processor and capable of performing a fixed function and is stored in memory. In the present embodiment, the functions of the modules will be described in detail in the following embodiments.
The obtaining module 301 is configured to obtain user vector representation based on user information, obtain agent vector representation based on agent information, and obtain product vector representation based on product information.
The user information, the agent information and the product information can be acquired from a database which is locally stored in the electronic equipment, wherein the database comprises a user information database, an agent information database and a product information database, the first information of the user is recorded in the user information database, the second information of the agent is recorded in the agent information database, and the third information of the product is recorded in the product database.
The first information may include data used to characterize the user portrait, such as gender, age, occupation, wealth level and personal preferences; the second information may include data used to characterize the agent portrait, such as gender, years of employment, job level and sales; and the third information may include data used to describe the product, such as product manufacturer, date of manufacture, batch, shelf life, place of manufacture and raw materials.
And after data cleaning and normalization are carried out on a plurality of data in the first information, splicing is carried out to obtain user vector representation, after data cleaning and normalization are carried out on a plurality of data in the second information, splicing is carried out to obtain agent vector representation, and after data cleaning and normalization are carried out on a plurality of data in the third information, splicing is carried out to obtain product vector representation.
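The clean-normalize-concatenate step might be sketched as follows (min-max normalization is an assumption — the patent only says "data cleaning and normalization" — and the field names and ranges are hypothetical):

```python
def build_vector(record, numeric_fields, ranges):
    """Clean (treat missing values as 0), min-max normalize each numeric
    field, then concatenate into one flat feature vector."""
    vec = []
    for field in numeric_fields:
        value = record.get(field)
        if value is None:  # simple cleaning rule for missing data
            vec.append(0.0)
            continue
        lo, hi = ranges[field]
        vec.append((value - lo) / (hi - lo) if hi > lo else 0.0)
    return vec

user = {"age": 35, "wealth_level": 3}
vec = build_vector(user, ["age", "wealth_level"],
                   {"age": (0, 100), "wealth_level": (1, 5)})
```

The same routine applies to agent and product records; only the field lists and ranges differ, so all three node types end up with fixed-length initial vectors.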
The first building module 302 is configured to build a graph neural network with a user, an agent, and a product as nodes.
In the life insurance recommendation scenario, there are multiple association relations such as purchasing, selling and interacting among agents, users and products, and these association relations among agents, users and products are also stored locally on the electronic device.
The agent, the user and the product are respectively used as nodes in the graph neural network, and if an incidence relation exists between the nodes in the graph neural network, an edge is established between the nodes with the incidence relation.
In an optional embodiment, the first building module 302 building a graph neural network with users, agents and products as nodes includes:
constructing an initial network structure diagram, wherein nodes in the initial network structure diagram are agents, users and products;
establishing an edge between nodes corresponding to users who have bought the same product in the same time period;
establishing an edge between nodes corresponding to the agents belonging to the same unit;
establishing an edge between nodes corresponding to products of the same category;
establishing an edge between nodes corresponding to users and products with purchasing relations;
and establishing an edge between the agent with the interactive relation and the node corresponding to the user to obtain the graph neural network.
The same time period may refer to weekly, monthly or quarterly. The unit may be a group, a department or a division.
As shown in fig. 2, assume that the local database of the electronic device records user 1, user 2, user 3, agent 1, agent 2, product 1, product 2 and product 3; user 1 interacted with agent 1 and purchased product 1 and product 3; user 2 interacted with agent 2 and purchased product 3; the time when user 1 purchased product 3 and the time when user 2 purchased product 3 fall in the same time period; user 3 interacted with agent 2 and purchased product 2; and product 1 and product 2 are products of the same category. The established graph neural network then includes the user 1 node, user 2 node, user 3 node, agent 1 node, agent 2 node, product 1 node, product 2 node and product 3 node.
Since the user 1 and the user 2 purchase the product 3 in the same time period, an edge is established between the user 1 node and the user 2 node, an edge is established between the user 1 node and the product 3 node, and an edge is established between the user 2 node and the product 3 node. The user 1 also purchases the product 1, an edge is established between the user 1 node and the product 1 node, the user 3 purchases the product 2, and an edge is established between the user 3 node and the product 2 node.
If the agent 1 and the agent 2 belong to two different units, an edge is not established between the agent 1 node and the agent 2 node.
Product 1 and product 2 are products of the same category, and an edge is established between the product 1 node and the product 2 node.
Since agent 1 has interacted with user 1, and agent 2 has interacted with user 2 and user 3, an edge is established between the agent 1 node and the user 1 node, and edges are established between the agent 2 node and the user 2 node and between the agent 2 node and the user 3 node, respectively.
According to the optional implementation mode, the association relations of purchase, sale, interaction and the like among agents, users and products are fully utilized to construct the heterogeneous graph structure, and the information expression of the graph neural network is richer.
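Under the assumptions of the worked example above (node names and timestamps are illustrative), the five edge rules might be sketched as:

```python
from itertools import combinations

def build_edges(purchases, same_period, agent_units, product_category, interactions):
    """Apply the five edge rules; nodes are plain strings, edges are
    unordered pairs stored as frozensets."""
    edges = set()
    # users who bought the same product in the same time period
    for (u1, p1, t1), (u2, p2, t2) in combinations(purchases, 2):
        if p1 == p2 and same_period(t1, t2):
            edges.add(frozenset((u1, u2)))
    # user-product purchase edges
    for u, p, _ in purchases:
        edges.add(frozenset((u, p)))
    # agents belonging to the same unit
    for (a1, unit1), (a2, unit2) in combinations(agent_units.items(), 2):
        if unit1 == unit2:
            edges.add(frozenset((a1, a2)))
    # products of the same category
    for (p1, c1), (p2, c2) in combinations(product_category.items(), 2):
        if c1 == c2:
            edges.add(frozenset((p1, p2)))
    # agent-user interaction edges
    for a, u in interactions:
        edges.add(frozenset((a, u)))
    return edges

edges = build_edges(
    purchases=[("user1", "product1", 1), ("user1", "product3", 2),
               ("user2", "product3", 2), ("user3", "product2", 3)],
    same_period=lambda t1, t2: t1 == t2,
    agent_units={"agent1": "unitA", "agent2": "unitB"},
    product_category={"product1": "life", "product2": "life", "product3": "health"},
    interactions=[("agent1", "user1"), ("agent2", "user2"), ("agent2", "user3")],
)
```

With these inputs the result reproduces fig. 2: nine edges, including user1-user2 and product1-product2, but no agent1-agent2 edge because the two agents belong to different units.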
The updating module 303 is configured to perform iterative training on the graph neural network, and update the user vector representation, update the agent vector representation, and update the product vector representation during each iterative training.
A first round of training of the graph neural network may be performed based on the user vector representation, the agent vector representation, and the product vector representation; updating user vector representation, agent vector representation and product vector representation during the first round of training when the graph neural network is subjected to the second round of training; when the third round of training of the graph neural network is finished, updating user vector representation, agent vector representation and product vector representation obtained when the second round of training is finished; when the fourth round of training of the graph neural network is finished, updating user vector representation, agent vector representation and product vector representation obtained when the third round of training is finished; and so on; and when the last round of training of the graph neural network is finished, updating the user vector representation, the agent vector representation and the product vector representation obtained when the penultimate round of training is finished.
In an alternative embodiment, the updating module 303 updates the user vector representation at each iterative training including:
determining similar neighbor nodes and heterogeneous neighbor nodes of the user node;
acquiring a first vector representation of the similar neighbor node after the previous round of training is finished and acquiring a second vector representation of the heterogeneous neighbor node after the previous round of training is finished;
calculating a first attention weight of the similar neighbor node during the current training round, and calculating a second attention weight of the heterogeneous neighbor node during the current training round;
calculating to obtain a first aggregation vector representation according to the first vector representation and the first attention weight;
calculating to obtain a second aggregate vector representation according to the second vector representation and the second attention weight;
and fusing the first aggregation vector representation and the second aggregation vector representation to obtain an updated user vector representation.
The similar neighbor nodes of the user node refer to other user nodes with edges established between the user node and the similar neighbor nodes of the user node, and the vector representation of the similar neighbor nodes of the user node refers to the user vector representation of the other user nodes. For example, for the user 1 node, the homogeneous neighbor node of the user 1 node refers to the user 2 node, and the vector representation of the homogeneous neighbor node of the user 1 node is the user vector representation of the user 2 node. For the user 2 node, the similar neighbor node of the user 2 node refers to the user 1 node, and the vector representation of the similar neighbor node of the user 2 node is the user vector representation of the user 1 node. For the user 3 node, the similar neighbor nodes of the user 3 node are empty.
The heterogeneous neighbor node of the user node is a product node with an edge established between the heterogeneous neighbor node and the user node, and the vector representation of the heterogeneous neighbor node of the user node is the product vector representation of the product node. It should be noted that, since a user usually interacts with an agent, the heterogeneous neighbor nodes of the user node only include the product node and do not include the agent node. That is, the agent vector representations of the agent neighbor nodes are not aggregated, but only the product vector representations of the product neighbor nodes are aggregated.
Hereinafter, user node u is taken as an example to explain how the updated user vector representation is obtained after the current round of iterative training ends.
Define the homogeneous neighbor nodes of user node u as N_s(u) and the heterogeneous neighbor nodes of user node u as N_d(u).
The user vector representation h_u^(l) of user node u in the l-th round aggregates the vector representations of its neighbor nodes (homogeneous neighbor nodes and heterogeneous neighbor nodes) together with the vector representation h_u^(l-1) of user node u in the (l-1)-th round.
To distinguish between the two types of neighbor nodes of user node u, aggregation is computed separately over the homogeneous neighbor nodes and the heterogeneous neighbor nodes of user node u: aggregating the homogeneous neighbor nodes of user node u yields the first aggregation vector, and aggregating the heterogeneous neighbor nodes of user node u yields the second aggregation vector.
α_uv^(l) denotes the first attention weight of homogeneous neighbor node v of user node u in the l-th round of training, and h_v^(l-1) denotes the user vector representation of homogeneous neighbor node v at the end of the (l-1)-th round of training (the first vector representation).
h_u^(l-1) denotes the user vector representation of user node u at the end of the (l-1)-th round of training; a_s is the parameter used to solve the attention weights of the homogeneous neighbor nodes of user node u in the l-th round of training, a learning parameter in model training; N(u) denotes the neighbor nodes of user node u (including both the homogeneous and the heterogeneous neighbor nodes); and ‖ denotes vector concatenation.
β_up^(l) denotes the second attention weight of heterogeneous neighbor node p of user node u in the l-th round of training; h_p^(l-1) denotes the product vector representation of heterogeneous neighbor node p at the end of the (l-1)-th round of training (the second vector representation); and M_up denotes the transformation matrix from user node u to product node p.
h_u^(l-1) denotes the user vector representation of user node u at the end of the (l-1)-th round of training, and a_d is the parameter used to solve the attention weights of the heterogeneous neighbor nodes of user node u in the l-th round of training, a learning parameter in model training.
Since the user vector representation and the product vector representation belong to different vector spaces, a heterogeneous neighbor transformation matrix M_up from user node u to product node p needs to be set; the heterogeneous neighbor transformation matrix M_up is a learning parameter during model training.
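A sketch of heterogeneous-neighbor aggregation with a transformation matrix (NumPy; the softmax attention form, dimensions and parameter names are assumptions based on the description, not the patent's exact formulas):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def aggregate_heterogeneous(h_user, product_vecs, M, a):
    """Map each product neighbor into the user vector space with the
    transformation matrix M, score it against the user vector with
    attention parameter a, and return the attention-weighted sum
    (the second aggregation vector)."""
    mapped = [M @ hp for hp in product_vecs]  # product space -> user space
    scores = np.array([a @ np.concatenate([h_user, m]) for m in mapped])
    weights = softmax(scores)                 # attention weights over neighbors
    return sum(w * m for w, m in zip(weights, mapped))

rng = np.random.default_rng(2)
h_user = rng.standard_normal(4)                            # user space: 4-d
product_vecs = [rng.standard_normal(3) for _ in range(2)]  # product space: 3-d
M = rng.standard_normal((4, 3))  # heterogeneous neighbor transformation matrix
a = rng.standard_normal(8)       # attention parameter over concatenated pair
h_d = aggregate_heterogeneous(h_user, product_vecs, M, a)
```

Applying M before scoring is what lets user vectors and product vectors, which live in different spaces, be compared and summed without information loss.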
In an optional embodiment, the fusing the first aggregated vector representation and the second aggregated vector representation to obtain an updated user vector representation includes:
calculating a first calculation weight of the similar neighbor node in the training of the current round;
calculating a second calculation weight of the heterogeneous neighbor node in the current training round;
fusing the first aggregation vector representation and the second aggregation vector representation by the first calculation weight and the second calculation weight to obtain a target fusion vector representation;
and carrying out nonlinear transformation on the target fusion vector representation and the user vector representation of the user node after the previous training round is finished to obtain the updated user vector representation.
After the first aggregation vector representation h_s^(l) of user node u and the second aggregation vector representation h_d^(l) of user node u are obtained, the first aggregation vector representation corresponding to the homogeneous nodes and the second aggregation vector representation corresponding to the heterogeneous nodes are fused to obtain the vector representation of user node u that fuses all its neighbor nodes, i.e., the target fusion vector representation h̃_u^(l).
where λ_s^(l) denotes the first calculation weight, in the l-th round of training, of the aggregated vector representation of the homogeneous neighbor nodes of user node u, and λ_d^(l) denotes the second calculation weight of the aggregated vector representation of the heterogeneous neighbor nodes. q is the training parameter needed to solve the calculation weights, and h̃_u^(l) denotes the target fusion vector representation of the neighbor nodes of user node u.
The first calculation weight and the second calculation weight may be found by an attention mechanism.
After the user vector representation h_u^(l-1) of user node u at the end of the (l-1)-th round of training is obtained, h_u^(l-1) is concatenated with the target fusion vector representation h̃_u^(l) of user node u in the l-th round of training, and a nonlinear transformation is applied to the concatenated vector representation to obtain the updated user vector representation h_u^(l) of user node u.
The updated user vector representation h_u^(l) of user node u is expressed using the following formula: h_u^(l) = σ(W · [h_u^(l-1) ‖ h̃_u^(l)]), where σ is the activation function and W is the matrix used for linear transformation, a learning parameter of the model to be trained.
Similarly, in the process of updating the agent vector representation during each iterative training, the homogeneous neighbor nodes N_s(a) and the heterogeneous neighbor nodes N_d(a) of each agent node a are determined. The homogeneous neighbor nodes of an agent node are the other agent nodes with edges established to the agent node, and the heterogeneous neighbor nodes of an agent node are the user nodes and product nodes with edges established to the agent node.
In the process of updating the product vector representation during each iterative training, the homogeneous neighbor nodes N_s(p) and the heterogeneous neighbor nodes N_d(p) of each product node p are determined. The homogeneous neighbor nodes of a product node are the other product nodes with edges established to the product node, and the heterogeneous neighbor nodes of a product node are the user nodes and agent nodes with edges established to the product node.
The process of updating the agent vector representation and the product vector representation is analogous to the process of updating the user vector representation, and is not described in detail herein.
In this optional embodiment, the accuracy of the vector representations of the user nodes, product nodes and agent nodes is improved by aggregating the vector representations of the neighbor nodes; in addition, the heterogeneous neighbor transformation matrix M solves the problem of information loss caused by training heterogeneous nodes in the same vector space, further improving the accuracy of the node vector representations.
In the process of iteratively training the graph neural network, the user vector representation, the agent vector representation and the product vector representation are all updated, and the updated user vector representation and the updated agent vector representation are spliced to obtain the final user vector representation (that is, the target user vector representation), making the method suitable for customer-retention product recommendation scenarios with agent intervention. The agent vector representation can also be reused in other business scenarios, such as agent clustering analysis.
The second constructing module 304 is configured to construct a cross entropy loss function and an unsupervised loss function according to the updated user vector representation, the updated agent vector representation, and the updated product vector representation.
At the end of each round of training, an updated user vector representation, an updated agent vector representation, and an updated product vector representation are obtained. In order to determine the condition for ending the iterative training, a loss function needs to be constructed according to the updated user vector representation, the updated agent vector representation, and the updated product vector representation, and the iterative training process is ended by minimizing the loss function.
In an alternative embodiment, the second constructing module 304 constructs the cross-entropy loss function and the unsupervised loss function according to the updated user vector representation, the updated agent vector representation, and the updated product vector representation, including:
generating a target user vector representation according to the updated user vector representation and the corresponding updated agent vector representation;
determining a target user corresponding to the target user vector representation and determining a target product corresponding to the updated product vector representation;
calculating a predicted purchase probability of the target user to purchase the target product according to the target user vector representation and the updated product vector representation;
constructing a cross entropy loss function based on the predicted purchase probability and the corresponding real purchase label;
and randomly negative sampling the graph neural network, and constructing an unsupervised loss function based on the node vector representation of the negative sampling.
And splicing the updated user vector representation and the corresponding updated agent vector representation to obtain the target user vector representation.
Respectively inputting the target user vector representation and the corresponding updated product vector representation into an MLP (multilayer perceptron) to obtain two vector representations of the same dimension, and then performing an inner product operation on the two vectors to obtain the predicted purchase probability that the user interacting with the agent purchases the product.
wherein the real purchase label indicates whether the user interacting with the agent has purchased the product: if the user interacting with the agent has purchased the product, the corresponding real purchase label is 1; if the user interacting with the agent has not purchased the product, the corresponding real purchase label is 0.
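As an illustrative sketch of this scoring step (hypothetical NumPy code with made-up layer sizes; the patent does not disclose the MLP dimensions or weights, so everything below is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, b1, w2, b2):
    """Two-layer perceptron projecting x into the shared scoring space."""
    h = np.maximum(x @ w1 + b1, 0.0)          # ReLU hidden layer
    return h @ w2 + b2

def predicted_purchase_probability(target_user_vec, product_vec, params_u, params_p):
    u = mlp(target_user_vec, *params_u)       # both projections share one dimension
    p = mlp(product_vec, *params_p)
    logit = float(u @ p)                      # inner product of the two projections
    return 1.0 / (1.0 + np.exp(-logit))       # sigmoid -> probability in (0, 1)

# Hypothetical sizes: 16-d target-user vector, 8-d product vector, 4-d shared space.
params_u = (rng.normal(scale=0.1, size=(16, 4)), np.zeros(4),
            rng.normal(scale=0.1, size=(4, 4)), np.zeros(4))
params_p = (rng.normal(scale=0.1, size=(8, 4)), np.zeros(4),
            rng.normal(scale=0.1, size=(4, 4)), np.zeros(4))
prob = predicted_purchase_probability(rng.normal(size=16), rng.normal(size=8),
                                      params_u, params_p)
```

The resulting probability is the quantity compared against the 0/1 real purchase label when the cross entropy loss is built.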
According to the principle that adjacent nodes in the graph neural network are similar, a plurality of nodes are randomly negative-sampled from the graph neural network, so that the unsupervised loss function J(z_u) of a node vector z_u is constructed, expressed as follows:

J(z_u) = -log( σ(z_u · z_v) ) - Q · E_{v_n∼P_n(v)} [ log( σ(-z_u · z_{v_n}) ) ]

wherein z_v is the vector representation of a neighbor node of the node u; the dot product of the two vectors is passed through the activation function σ and then log-transformed; z_{v_n} is the vector representation of a randomly negative-sampled non-neighbor node of u; P_n(v) is the negative sampling distribution; and Q is the number of negatively sampled nodes.
The random negative sampling means that, for any node, the nodes adjacent to it and the nodes not adjacent to it are obtained, that is, the nodes with which it has an established edge and those with which it has no established edge. The number of adjacent nodes is counted, and non-adjacent nodes are then randomly sampled, so that the number of randomly sampled non-adjacent nodes is the same as the number of adjacent nodes.
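The sampling rule and the unsupervised term described above can be sketched in plain Python (a simplified, assumed implementation: the graph is a neighbor dict, embeddings are plain lists, and the loss follows the GraphSAGE-style log-sigmoid form):

```python
import math
import random

def negative_sample(adj, node):
    """Sample as many non-neighbors of `node` as it has neighbors."""
    neighbors = adj[node]
    non_neighbors = [v for v in adj if v != node and v not in neighbors]
    return random.sample(non_neighbors, min(len(neighbors), len(non_neighbors)))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def unsupervised_loss(z, adj, node):
    """Pull neighbors together, push sampled non-neighbors apart."""
    zu = z[node]
    pos = -sum(math.log(sigmoid(dot(zu, z[v]))) for v in adj[node])
    neg = -sum(math.log(sigmoid(-dot(zu, z[v]))) for v in negative_sample(adj, node))
    return pos + neg

# Toy graph: "a"-"b" are linked, "c" is the only non-neighbor of "a".
adj = {"a": {"b"}, "b": {"a"}, "c": set()}
z = {"a": [1.0, 0.0], "b": [1.0, 0.0], "c": [-1.0, 0.0]}
loss = unsupervised_loss(z, adj, "a")
```

Minimizing this quantity drives adjacent nodes' vectors together and sampled non-adjacent nodes' vectors apart, which is the stated purpose of the negative sampling.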
In this optional implementation, on the basis of constructing the cross entropy loss function, an unsupervised loss function is constructed based on the principle that adjacent nodes are closer, so that the graph neural network is trained by the cross entropy loss function and the unsupervised loss function jointly. The graph neural network can thus capture the information in the graph topology more efficiently, and its training effect is improved.
The optimization module 305 is configured to perform optimization training on the graph neural network based on the cross entropy loss function and the unsupervised loss function to obtain a product recommendation model.
The cross entropy loss function and the unsupervised loss function are summed to obtain a final loss function; the graph neural network is then optimized with minimization of the final loss function as the training target, and the graph neural network corresponding to the minimum final loss function is determined as the product recommendation model.
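A minimal sketch of the combined objective (illustrative only; in practice the per-example cross entropy and the unsupervised term would both be computed from the model's outputs during each training round):

```python
import math

def cross_entropy(labels, probs, eps=1e-12):
    """Binary cross entropy between real purchase labels and predicted probabilities."""
    return -sum(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
                for y, p in zip(labels, probs)) / len(labels)

def final_loss(ce_loss, unsup_loss):
    """The two losses are summed; minimizing this sum is the training target."""
    return ce_loss + unsup_loss

# Toy values: three (label, probability) pairs plus an assumed unsupervised term.
loss = final_loss(cross_entropy([1, 0, 1], [0.9, 0.2, 0.7]), 0.35)
```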
The vector representations obtained by training the graph neural network are used directly to predict whether a user purchases a product; that is, the graph neural network is trained end-to-end under supervision of the purchase labels. This differs from two-stage training schemes such as node2vec (in which learning the node representations is an independent training stage unrelated to the final purchase-prediction task), so better node vector representations can be obtained and the resulting product recommendation model achieves a better recommendation effect.
The recommending module 306 is configured to recommend a product for the user to be recommended by using the product recommending model.
After the electronic equipment obtains the product recommendation model through training, the product recommendation model can be used for product recommendation, and therefore the product recommendation accuracy is improved.
In an optional embodiment, the recommending module 306 recommending the product for the user to be recommended by using the product recommendation model includes:
acquiring user vector representation of the user to be recommended;
determining a target agent interacting with the user to be recommended;
obtaining updated agent vector representations and a plurality of updated product vector representations corresponding to the target agents during the last iterative training of the graph neural network;
generating an input vector representation according to the user vector representation of the user to be recommended, the agent vector representation of the target agent and the updated product vector representations;
inputting the input vector representation into the product recommendation model, and acquiring a plurality of prediction probabilities output by the product recommendation model;
and recommending products for the user to be recommended according to the plurality of prediction probabilities.
The user to be recommended refers to a user who needs product recommendation. Data depicting the user portrait of the user to be recommended, such as gender, age, occupation, wealth level and personal preference, are acquired; after data cleaning and normalization, these data are spliced to obtain the user vector representation of the user to be recommended. When the iterative training ends, the graph neural network yields an updated agent vector representation for each agent node and an updated product vector representation for each product node.
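The profile-to-vector step might look like the following sketch (the field names, value ranges and one-hot vocabularies are hypothetical; the actual attributes and normalization bounds are deployment-specific):

```python
def build_user_vector(profile, numeric_fields, categorical_fields, ranges, vocab):
    """Clean, normalize, and splice user-portrait attributes into one vector."""
    vec = []
    for field in numeric_fields:                          # e.g. age, wealth level
        lo, hi = ranges[field]
        value = min(max(profile.get(field, lo), lo), hi)  # cleaning: clip outliers
        vec.append((value - lo) / (hi - lo))              # min-max normalization
    for field in categorical_fields:                      # e.g. gender, occupation
        vec.extend(1.0 if profile.get(field) == opt else 0.0
                   for opt in vocab[field])               # one-hot splice
    return vec

user_vec = build_user_vector({"age": 30, "gender": "F"},
                             ["age"], ["gender"],
                             {"age": (0, 100)}, {"gender": ["F", "M"]})
```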
Determining a target agent interacting with a user to be recommended, then determining updated agent vector representation of the target agent, and forming a triple (user vector representation of the user to be recommended, updated agent vector representation of the target agent and updated product vector representation of each product node) by the user vector representation of the user to be recommended, the updated agent vector representation of the target agent and the updated product vector representation of each product node. And inputting each triad into the product recommendation model, so that the prediction probability is output through the product recommendation model.
The prediction probability is used for representing the possibility that the user to be recommended interacting with the target agent purchases the corresponding product, the higher the prediction probability is, the higher the possibility that the corresponding product is purchased is, and the lower the prediction probability is, the lower the possibility that the corresponding product is purchased is. And recommending the product corresponding to the maximum prediction probability to the user to be recommended.
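Putting the recommendation steps together, a simplified sketch (the `model` callable here is a stand-in for the trained product recommendation model, which is not reproduced; `toy_model` is a dummy scorer for illustration):

```python
def recommend(user_vec, agent_vec, product_vecs, model):
    """Score every product for one (user, agent) pair; return the best product id."""
    probabilities = {
        product_id: model(user_vec, agent_vec, product_vec)  # one triple per product
        for product_id, product_vec in product_vecs.items()
    }
    best = max(probabilities, key=probabilities.get)  # highest predicted probability
    return best, probabilities

# Dummy stand-in model: scores a product by its first vector component.
toy_model = lambda u, a, p: p[0]
best, probs = recommend([0.2], [0.1], {"A": [0.1], "B": [0.9]}, toy_model)
```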
The artificial intelligence based product recommendation device constructs a graph neural network with users, agents and products as nodes, whose information expression is rich. After the user vector representation, the agent vector representation and the product vector representation are obtained, the graph neural network is iteratively trained, and the user vector representation, the agent vector representation and the product vector representation are updated during each round of iterative training, so that the node vector representations become better. A cross entropy loss function and an unsupervised loss function are constructed according to the updated user vector representation, the updated agent vector representation and the updated product vector representation, and the graph neural network is trained by combining the two loss functions, so that it can capture the information in the graph topology more efficiently. The training effect of the graph neural network is thereby improved and a product recommendation model with better performance is obtained; finally, when the product recommendation model is used to recommend products for the user to be recommended, the recommendation effect is improved and the recommendation accuracy is higher.
EXAMPLE III
The present embodiment provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps in the above artificial intelligence based product recommendation method embodiment, such as S11-S16 shown in fig. 1:
S11, acquiring user vector representation based on the user information, acquiring agent vector representation based on the agent information, and acquiring product vector representation based on the product information;
S12, constructing a graph neural network with users, agents and products as nodes;
S13, performing iterative training on the graph neural network, and updating the user vector representation, the agent vector representation and the product vector representation during each iterative training;
S14, constructing a cross entropy loss function and an unsupervised loss function according to the updated user vector representation, the updated agent vector representation and the updated product vector representation;
S15, carrying out optimization training on the graph neural network based on the cross entropy loss function and the unsupervised loss function to obtain a product recommendation model;
and S16, recommending products for the user to be recommended by using the product recommendation model.
Alternatively, the computer program, when executed by the processor, implements the functions of the modules/units in the above device embodiment, for example, modules 301 to 306 in fig. 3:
the obtaining module 301 is configured to obtain user vector representation based on user information, obtain agent vector representation based on agent information, and obtain product vector representation based on product information;
the first construction module 302 is configured to construct a graph neural network with a user, an agent, and a product as nodes;
the updating module 303 is configured to perform iterative training on the graph neural network, and update the user vector representation, the agent vector representation, and the product vector representation during each iterative training;
the second construction module 304 is configured to construct a cross entropy loss function and an unsupervised loss function according to the updated user vector representation, the updated agent vector representation, and the updated product vector representation;
the optimization module 305 is configured to perform optimization training on the graph neural network based on the cross entropy loss function and the unsupervised loss function to obtain a product recommendation model;
the recommending module 306 is configured to recommend a product for the user to be recommended by using the product recommending model.
Example four
Fig. 4 is a schematic structural diagram of an electronic device according to the fourth embodiment of the present invention. In the preferred embodiment of the present invention, the electronic device 4 comprises a memory 41, at least one processor 42, at least one communication bus 43, and a transceiver 44.
It will be appreciated by those skilled in the art that the configuration of the electronic device shown in fig. 4 does not constitute a limitation of the embodiment of the present invention; the configuration may be bus-type or star-type, and the electronic device 4 may include more or fewer hardware or software components than shown, or a different arrangement of components.
In some embodiments, the electronic device 4 is a device capable of automatically performing numerical calculation and/or information processing according to instructions set or stored in advance, and the hardware includes but is not limited to a microprocessor, an application specific integrated circuit, a programmable gate array, a digital processor, an embedded device, and the like. The electronic device 4 may also include a client device, which includes, but is not limited to, any electronic product capable of interacting with a client through a keyboard, a mouse, a remote controller, a touch pad, or a voice control device, for example, a personal computer, a tablet computer, a smart phone, a digital camera, etc.
It should be noted that the electronic device 4 is only an example, and other existing or future electronic products adaptable to the present invention should also be included in the scope of protection of the present invention and are incorporated herein by reference.
In some embodiments, the memory 41 stores a computer program that, when executed by the at least one processor 42, performs all or part of the steps of the artificial intelligence based product recommendation method as described. The memory 41 includes a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-Time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disk memory, a magnetic disk memory, a magnetic tape memory, or any other computer-readable medium capable of carrying or storing data.
Further, the computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the blockchain node, and the like.
The blockchain is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms. A blockchain (Blockchain) is essentially a decentralized database: a chain of data blocks associated by cryptographic methods, where each data block contains information on a batch of network transactions, used to verify the validity (anti-counterfeiting) of the information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
In some embodiments, the at least one processor 42 is a Control Unit (Control Unit) of the electronic device 4, connects various components of the electronic device 4 by various interfaces and lines, and executes various functions and processes data of the electronic device 4 by running or executing programs or modules stored in the memory 41 and calling data stored in the memory 41. For example, the at least one processor 42, when executing the computer program stored in the memory, implements all or a portion of the steps of the artificial intelligence based product recommendation method described in embodiments of the invention; or implement all or part of the functionality of an artificial intelligence based product recommendation device. The at least one processor 42 may be composed of an integrated circuit, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital Processing chips, graphics processors, and combinations of various control chips.
In some embodiments, the at least one communication bus 43 is arranged to enable connection communication between the memory 41 and the at least one processor 42, etc.
Although not shown, the electronic device 4 may further include a power source (such as a battery) for supplying power to the components, and preferably, the power source may be logically connected to the at least one processor 42 through a power management device, so as to implement functions of managing charging, discharging, and power consumption through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device 4 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
The integrated unit implemented in the form of a software functional module may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, an electronic device, or a network device) or a processor (processor) to execute parts of the methods according to the embodiments of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or that the singular does not exclude the plural. A plurality of units or means recited in the specification may also be implemented by one unit or means through software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.
Claims (8)
1. An artificial intelligence based product recommendation method, the method comprising:
acquiring user vector representation based on user information, acquiring agent vector representation based on agent information, and acquiring product vector representation based on product information;
the method for constructing the graph neural network with the users, the agents and the products as nodes comprises the following steps: constructing an initial network structure diagram, wherein nodes in the initial network structure diagram are agents, users and products; establishing an edge between nodes corresponding to users who have bought the same product in the same time period; establishing an edge between nodes corresponding to the agents belonging to the same unit; establishing an edge between nodes corresponding to products of the same category; establishing an edge between nodes corresponding to users and products with purchasing relations; establishing an edge between the agent with the interaction relation and the node corresponding to the user to obtain a graph neural network;
iteratively training the graph neural network, and updating the user vector representation, updating the agent vector representation, and updating the product vector representation each time the iterative training is performed, wherein the updating the user vector representation comprises: determining similar neighbor nodes and heterogeneous neighbor nodes of the user nodes, wherein the similar neighbor nodes of the user nodes refer to other user nodes with edges established between the similar neighbor nodes of the user nodes and the user nodes, and the heterogeneous neighbor nodes of the user nodes refer to product nodes with edges established between the heterogeneous neighbor nodes of the user nodes and the user nodes; acquiring a first vector representation of the similar neighbor node of the user node after the previous round of training is finished, and acquiring a second vector representation of the heterogeneous neighbor node of the user node after the previous round of training is finished; calculating a first attention weight of a similar neighbor node of the user node during the current training, and calculating a second attention weight of a heterogeneous neighbor node of the user node during the current training; calculating to obtain a first aggregation vector representation of the user node according to the first vector representation of the user node and the first attention weight of the user node; calculating to obtain a second aggregation vector representation of the user node according to the second vector representation of the user node and a second attention weight of the user node; fusing the first aggregation vector representation of the user node and the second aggregation vector representation of the user node to obtain an updated user vector representation; said updating the agent vector representation comprises: determining similar neighbor nodes and heterogeneous neighbor nodes of the agent nodes, wherein the similar neighbor 
nodes of the agent nodes refer to other agent nodes with edges established between the similar neighbor nodes and the agent nodes, and the heterogeneous neighbor nodes of the agent nodes refer to user nodes and product nodes with edges established between the heterogeneous neighbor nodes and the agent nodes; acquiring a third vector representation of the similar neighbor node of the agent node after the previous round of training is finished, and acquiring a fourth vector representation of the heterogeneous neighbor node of the agent node after the previous round of training is finished; calculating a third attention weight of a similar neighbor node of the agent node during the current round of training, and calculating a fourth attention weight of a heterogeneous neighbor node of the agent node during the current round of training; calculating to obtain a third aggregated vector representation of the agent node according to the third vector representation of the agent node and a third attention weight of the agent node; calculating to obtain a fourth aggregation vector representation of the agent node according to a fourth vector representation of the agent node and a fourth attention weight of the agent node; fusing the third aggregation vector representation of the agent node and the fourth aggregation vector representation of the agent node to obtain an updated agent vector representation; the updating the product vector representation comprises: determining similar neighbor nodes and heterogeneous neighbor nodes of the product nodes, wherein the similar neighbor nodes of the product nodes refer to other product nodes with edges established between the similar neighbor nodes of the product nodes and the product nodes, and the heterogeneous neighbor nodes of the product nodes refer to user nodes and agent nodes with edges established between the heterogeneous neighbor nodes of the product nodes and the product nodes; acquiring a fifth vector representation of 
the similar neighbor node of the product node after the previous round of training is finished, and acquiring a sixth vector representation of the heterogeneous neighbor node of the product node after the previous round of training is finished; calculating a fifth attention weight of a similar neighbor node of the product node during the current training round, and calculating a sixth attention weight of a heterogeneous neighbor node of the product node during the current training round; calculating to obtain a fifth aggregation vector representation of the product node according to the fifth vector representation of the product node and the fifth attention weight of the product node; calculating to obtain a sixth aggregation vector representation of the product node according to a sixth vector representation of the product node and a sixth attention weight of the product node; fusing the fifth aggregation vector representation of the product node and the sixth aggregation vector representation of the product node to obtain an updated product vector representation;
constructing a cross entropy loss function and an unsupervised loss function according to the updated user vector representation, the updated agent vector representation and the updated product vector representation;
performing optimization training on the graph neural network based on the cross entropy loss function and the unsupervised loss function to obtain a product recommendation model;
and recommending the product for the user to be recommended by using the product recommendation model.
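The per-round node update recited in claim 1 (aggregate homogeneous and heterogeneous neighbors with attention weights, then fuse the two aggregation vectors) can be sketched as follows; the dot-product scoring, the equal fusion weights, and the tanh nonlinearity are assumptions for illustration, since the claim does not fix them:

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    return [e / sum(exps) for e in exps]

def aggregate(center, neighbor_vecs, score):
    """Attention-weighted sum of neighbor vectors (one aggregation vector)."""
    weights = softmax([score(center, n) for n in neighbor_vecs])
    return [sum(w * n[i] for w, n in zip(weights, neighbor_vecs))
            for i in range(len(center))]

def update_user_vector(h_u, homo_neighbors, hetero_neighbors, score, alpha=0.5):
    """One round: aggregate both neighbor types, fuse, apply a nonlinearity."""
    agg_homo = aggregate(h_u, homo_neighbors, score)      # first aggregation vector
    agg_hetero = aggregate(h_u, hetero_neighbors, score)  # second aggregation vector
    fused = [alpha * a + (1 - alpha) * b for a, b in zip(agg_homo, agg_hetero)]
    return [math.tanh(f + h) for f, h in zip(fused, h_u)]  # combine with last round

dot = lambda a, b: sum(x * y for x, y in zip(a, b))
new_h = update_user_vector([0.1, 0.2], [[0.3, 0.1]], [[0.0, 0.5]], dot)
```

The agent-node and product-node updates follow the same pattern with their own neighbor sets.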
2. The artificial intelligence based product recommendation method of claim 1, wherein the first attention weight α_{u,k,t} is calculated by the following formula:

α_{u,k,t} = softmax_{k∈N_o(u)}( LeakyReLU( b_{u,t}^T [ h_u^{t-1} || h_k^{t-1} ] ) )

wherein N_o(u) represents the homogeneous neighbor nodes of the user node, h_u^{t-1} represents the user vector representation of the user node u at the end of the (t-1)-th round of training, b_{u,t} is the attention weight parameter of the homogeneous neighbor nodes solved in the t-th round of training of the user node u and is a learning parameter in model training, k represents a neighbor node of the user node u, and || represents vector splicing;

and the second attention weight β_{u,j,t} is calculated by the following formula:

β_{u,j,t} = softmax_{j∈N_e(u)}( LeakyReLU( C_{u,t}^T [ h_u^{t-1} || W^{UP} h_j^{t-1} ] ) )

wherein N_e(u) represents the heterogeneous neighbor nodes of the user node, C_{u,t} is the attention weight parameter of the heterogeneous neighbor nodes solved in the t-th round of training of the user node u and is a learning parameter in model training;

t represents the training round, h_i^{t-1} represents the vector representation of the i-th homogeneous neighbor node at the end of the (t-1)-th round of training, h_j^{t-1} represents the vector representation of the j-th heterogeneous neighbor node at the end of the (t-1)-th round of training, W^{UP} is the heterogeneous neighbor transformation matrix from the user node u to the product node P and is a learning parameter during model training, and h_{N(u)}^{t-1} represents the vector representation of the neighbor nodes of the user node u at the end of the (t-1)-th round of training.
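A plain-Python sketch of a GAT-style attention-weight computation consistent with the symbols above (the LeakyReLU-of-concatenation scoring is an assumption, since the patent's original formula images are not reproduced here):

```python
import math

def leaky_relu(x, slope=0.2):
    return x if x > 0 else slope * x

def attention_weights(h_u, neighbor_vecs, b):
    """alpha_k = softmax_k(LeakyReLU(b . [h_u || h_k])) over the neighbor set."""
    scores = [leaky_relu(sum(p * q for p, q in zip(b, h_u + h_k)))
              for h_k in neighbor_vecs]          # h_u + h_k is vector splicing
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    return [e / sum(exps) for e in exps]

# 1-d node vectors, so the learned vector b has length 2 (spliced dimension).
weights = attention_weights([1.0], [[1.0], [0.0]], [1.0, 1.0])
```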
3. The artificial intelligence based product recommendation method of claim 2, wherein the fusing the first aggregated vector representation and the second aggregated vector representation to obtain an updated user vector representation comprises:
calculating a first calculation weight of the similar neighbor node in the training of the current round;
calculating a second calculation weight of the heterogeneous neighbor node in the current training round;
fusing the first aggregation vector representation and the second aggregation vector representation by the first calculation weight and the second calculation weight to obtain a target fusion vector representation;
and carrying out nonlinear transformation on the target fusion vector representation and the user vector representation of the user node after the previous training round is finished to obtain the updated user vector representation.
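The fusion of claim 3 might be sketched as below (a softmax over two per-type scores and a tanh transform are assumptions; the claim only requires computed weights, a weighted fusion, and a nonlinear transformation with the previous round's user vector):

```python
import math

def fuse(agg_homo, agg_hetero, s_homo, s_hetero, h_prev):
    """Weight the two aggregation vectors, fuse, then apply a nonlinearity."""
    e1, e2 = math.exp(s_homo), math.exp(s_hetero)
    w1, w2 = e1 / (e1 + e2), e2 / (e1 + e2)  # first / second calculation weights
    fused = [w1 * a + w2 * b for a, b in zip(agg_homo, agg_hetero)]
    return [math.tanh(f + h) for f, h in zip(fused, h_prev)]  # updated user vector

# Equal scores -> equal weights -> each fused component is 0.5 here.
out = fuse([1.0, 0.0], [0.0, 1.0], 0.0, 0.0, [0.0, 0.0])
```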
4. The artificial intelligence based product recommendation method of claim 1, wherein constructing a cross-entropy loss function and an unsupervised loss function based on the updated user vector representation, the updated agent vector representation, and the updated product vector representation comprises:
generating a target user vector representation according to the updated user vector representation and the corresponding updated agent vector representation;
determining a target user corresponding to the target user vector representation and determining a target product corresponding to the updated product vector representation;
calculating a predicted purchase probability of the target user to purchase the target product according to the target user vector representation and the updated product vector representation;
constructing a cross entropy loss function based on the predicted purchase probability and the corresponding real purchase label;
and randomly negative sampling the graph neural network, and constructing an unsupervised loss function based on the node vector representation of the negative sampling.
5. The artificial intelligence based product recommendation method of any one of claims 1-4, wherein the recommending products for the user to be recommended using the product recommendation model comprises:
acquiring user vector representation of the user to be recommended;
determining a target agent interacting with the user to be recommended;
obtaining updated agent vector representations and a plurality of updated product vector representations corresponding to the target agents during the last iterative training of the graph neural network;
generating an input vector representation according to the user vector representation of the user to be recommended, the agent vector representation of the target agent and the updated product vector representations;
inputting the input vector representation into the product recommendation model, and acquiring a plurality of prediction probabilities output by the product recommendation model;
and recommending products for the user to be recommended according to the plurality of prediction probabilities.
6. An artificial intelligence based product recommendation apparatus, the apparatus comprising:
the acquisition module is used for acquiring user vector representation based on the user information, acquiring agent vector representation based on the agent information and acquiring product vector representation based on the product information;
the first construction module is used for constructing a graph neural network with users, agents and products as nodes, which comprises: constructing an initial network structure diagram, wherein the nodes in the initial network structure diagram are agents, users and products; establishing an edge between nodes corresponding to users who have bought the same product in the same time period; establishing an edge between nodes corresponding to agents belonging to the same unit; establishing an edge between nodes corresponding to products of the same category; establishing an edge between nodes corresponding to users and products having a purchase relationship; and establishing an edge between nodes corresponding to agents and users having an interaction relationship, to obtain the graph neural network;
the update module is used for iteratively training the graph neural network and updating the user vector representation, the agent vector representation and the product vector representation in each round of iterative training, wherein updating the user vector representation comprises: determining similar neighbor nodes and heterogeneous neighbor nodes of the user node, wherein the similar neighbor nodes of the user node are other user nodes having edges with the user node, and the heterogeneous neighbor nodes of the user node are product nodes having edges with the user node; obtaining a first vector representation of the similar neighbor nodes of the user node at the end of the previous round of training, and obtaining a second vector representation of the heterogeneous neighbor nodes of the user node at the end of the previous round of training; calculating a first attention weight of the similar neighbor nodes of the user node in the current round of training, and calculating a second attention weight of the heterogeneous neighbor nodes of the user node in the current round of training; calculating a first aggregated vector representation of the user node according to the first vector representation and the first attention weight; calculating a second aggregated vector representation of the user node according to the second vector representation and the second attention weight; and fusing the first aggregated vector representation and the second aggregated vector representation of the user node to obtain an updated user vector representation;
updating the agent vector representation comprises: determining similar neighbor nodes and heterogeneous neighbor nodes of the agent node, wherein the similar neighbor nodes of the agent node are other agent nodes having edges with the agent node, and the heterogeneous neighbor nodes of the agent node are user nodes and product nodes having edges with the agent node; obtaining a third vector representation of the similar neighbor nodes of the agent node at the end of the previous round of training, and obtaining a fourth vector representation of the heterogeneous neighbor nodes of the agent node at the end of the previous round of training; calculating a third attention weight of the similar neighbor nodes of the agent node in the current round of training, and calculating a fourth attention weight of the heterogeneous neighbor nodes of the agent node in the current round of training; calculating a third aggregated vector representation of the agent node according to the third vector representation and the third attention weight; calculating a fourth aggregated vector representation of the agent node according to the fourth vector representation and the fourth attention weight; and fusing the third aggregated vector representation and the fourth aggregated vector representation of the agent node to obtain an updated agent vector representation;
updating the product vector representation comprises: determining similar neighbor nodes and heterogeneous neighbor nodes of the product node, wherein the similar neighbor nodes of the product node are other product nodes having edges with the product node, and the heterogeneous neighbor nodes of the product node are user nodes and agent nodes having edges with the product node; obtaining a fifth vector representation of the similar neighbor nodes of the product node at the end of the previous round of training, and obtaining a sixth vector representation of the heterogeneous neighbor nodes of the product node at the end of the previous round of training; calculating a fifth attention weight of the similar neighbor nodes of the product node in the current round of training, and calculating a sixth attention weight of the heterogeneous neighbor nodes of the product node in the current round of training; calculating a fifth aggregated vector representation of the product node according to the fifth vector representation and the fifth attention weight; calculating a sixth aggregated vector representation of the product node according to the sixth vector representation and the sixth attention weight; and fusing the fifth aggregated vector representation and the sixth aggregated vector representation of the product node to obtain an updated product vector representation;
the second construction module is used for constructing a cross entropy loss function and an unsupervised loss function according to the updated user vector representation, the updated agent vector representation and the updated product vector representation;
the optimization module is used for performing optimization training on the graph neural network based on the cross entropy loss function and the unsupervised loss function to obtain a product recommendation model;
and the recommending module is used for recommending products for the user to be recommended by using the product recommendation model.
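The graph construction performed by the first construction module (five edge rules over user, agent, and product nodes) can be sketched with a plain adjacency map. Function and parameter names are hypothetical, and the grouped inputs (e.g. buyers of the same product in the same time period) are assumed to be precomputed upstream.

```python
from collections import defaultdict
from itertools import combinations

def build_graph(same_period_buyer_groups, same_unit_agent_groups,
                same_category_product_groups, purchases, interactions):
    # Nodes are (type, id) pairs; the five edge rules of the claim are
    # applied to an undirected adjacency map.
    adj = defaultdict(set)

    def connect(a, b):
        adj[a].add(b)
        adj[b].add(a)

    # Rule 1: users who bought the same product in the same time period.
    for group in same_period_buyer_groups:
        for u, v in combinations(group, 2):
            connect(('user', u), ('user', v))
    # Rule 2: agents belonging to the same unit.
    for group in same_unit_agent_groups:
        for a, b in combinations(group, 2):
            connect(('agent', a), ('agent', b))
    # Rule 3: products of the same category.
    for group in same_category_product_groups:
        for p, q in combinations(group, 2):
            connect(('product', p), ('product', q))
    # Rule 4: user-product purchase relations.
    for u, p in purchases:
        connect(('user', u), ('product', p))
    # Rule 5: agent-user interaction relations.
    for a, u in interactions:
        connect(('agent', a), ('user', u))
    return adj
```

Typing nodes as `(type, id)` pairs keeps users, agents, and products with clashing identifiers distinct, which is what makes the later split into similar versus heterogeneous neighbors straightforward.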
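The attention-weighted neighbor aggregation used by the update module can likewise be sketched. The claim does not spell out the attention computation or the fusion operator, so this sketch substitutes simple dot-product attention and averaging; both are stated assumptions, not the claimed formulas.

```python
import numpy as np

def softmax(scores):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

def update_vector(node_vec, similar_neighbor_vecs, hetero_neighbor_vecs):
    # Dot-product attention stands in for the learned attention weights:
    # each weight compares the node's current vector with a neighbor's
    # vector representation from the previous training round.
    def aggregate(neighbor_vecs):
        neighbors = np.asarray(neighbor_vecs, dtype=float)
        weights = softmax(neighbors @ node_vec)   # attention weights
        return weights @ neighbors                # aggregated vector

    agg_similar = aggregate(similar_neighbor_vecs)  # e.g. "first" aggregation
    agg_hetero = aggregate(hetero_neighbor_vecs)    # e.g. "second" aggregation
    # Fuse the two aggregated representations (here by averaging) to get
    # the node's updated vector representation.
    return 0.5 * (agg_similar + agg_hetero)
```

The same routine serves user, agent, and product nodes; only the neighbor sets differ, which mirrors how the claim repeats the first/second, third/fourth, and fifth/sixth aggregation steps per node type.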
7. An electronic device, comprising a processor and a memory, wherein the processor is configured to implement the artificial intelligence based product recommendation method of any one of claims 1-5 when executing the computer program stored in the memory.
8. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the artificial intelligence based product recommendation method of any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111469251.0A CN113869992B (en) | 2021-12-03 | 2021-12-03 | Artificial intelligence based product recommendation method and device, electronic equipment and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113869992A CN113869992A (en) | 2021-12-31 |
CN113869992B (en) | 2022-03-18
Family
ID=78985836
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111469251.0A Active CN113869992B (en) | 2021-12-03 | 2021-12-03 | Artificial intelligence based product recommendation method and device, electronic equipment and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113869992B (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110866190A (en) * | 2019-11-18 | 2020-03-06 | 支付宝(杭州)信息技术有限公司 | Method and device for training neural network model for representing knowledge graph |
CN110969516A (en) * | 2019-12-25 | 2020-04-07 | 清华大学 | Commodity recommendation method and device |
CN111429161A (en) * | 2020-04-10 | 2020-07-17 | 杭州网易再顾科技有限公司 | Feature extraction method, feature extraction device, storage medium, and electronic apparatus |
US10878505B1 (en) * | 2020-07-31 | 2020-12-29 | Agblox, Inc. | Curated sentiment analysis in multi-layer, machine learning-based forecasting model using customized, commodity-specific neural networks |
CN112488355A (en) * | 2020-10-28 | 2021-03-12 | 华为技术有限公司 | Method and device for predicting user rating based on graph neural network |
CN112507224A (en) * | 2020-12-11 | 2021-03-16 | 南京大学 | Service recommendation method of man-machine object fusion system based on heterogeneous network representation learning |
CN112989842A (en) * | 2021-02-25 | 2021-06-18 | 电子科技大学 | Construction method of universal embedded framework of multi-semantic heterogeneous graph |
CN113095439A (en) * | 2021-04-30 | 2021-07-09 | 东南大学 | Heterogeneous graph embedding learning method based on attention mechanism |
CN113254803A (en) * | 2021-06-24 | 2021-08-13 | 暨南大学 | Social recommendation method based on multi-feature heterogeneous graph neural network |
CN113343041A (en) * | 2021-06-21 | 2021-09-03 | 北京邮电大学 | Message reply relation judgment system based on graph model representation learning |
CN113409121A (en) * | 2021-06-29 | 2021-09-17 | 南京财经大学 | Cross-border e-commerce recommendation method based on heterogeneous graph expression learning |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110704626B (en) * | 2019-09-30 | 2022-07-22 | 北京邮电大学 | Short text classification method and device |
CN111241311B (en) * | 2020-01-09 | 2023-02-03 | 腾讯科技(深圳)有限公司 | Media information recommendation method and device, electronic equipment and storage medium |
CN111814921B (en) * | 2020-09-04 | 2020-12-18 | 支付宝(杭州)信息技术有限公司 | Object characteristic information acquisition method, object classification method, information push method and device |
CN112381179B (en) * | 2020-12-11 | 2024-02-23 | 杭州电子科技大学 | Heterogeneous graph classification method based on double-layer attention mechanism |
CN112883170B (en) * | 2021-01-20 | 2023-08-18 | 中国人民大学 | User feedback guided self-adaptive dialogue recommendation method and system |
CN113177159B (en) * | 2021-05-11 | 2022-08-05 | 清华大学 | Binding recommendation method based on multichannel hypergraph neural network |
Non-Patent Citations (1)
Title |
---|
Deep graph neural network recommendation method based on multi-dimensional social relationship embedding; He Haochen et al.; Journal of Computer Applications; 2020-10-10 (No. 10); pp. 13-21 * |
Also Published As
Publication number | Publication date |
---|---|
CN113869992A (en) | 2021-12-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Min | Artificial intelligence in supply chain management: theory and applications | |
CN111553759A (en) | Product information pushing method, device, equipment and storage medium | |
Rodger | Application of a fuzzy feasibility Bayesian probabilistic estimation of supply chain backorder aging, unfilled backorders, and customer wait time using stochastic simulation with Markov blankets | |
Miao et al. | Context‐based dynamic pricing with online clustering | |
Chang | A novel supplier selection method that integrates the intuitionistic fuzzy weighted averaging method and a soft set with imprecise data | |
Felfernig et al. | An overview of recommender systems in requirements engineering | |
CA3007940A1 (en) | Systems and methods of utilizing multiple forecast models in forecasting customer demands for products at retail facilities | |
US11631031B2 (en) | Automated model generation platform for recursive model building | |
US20180204163A1 (en) | Optimizing human and non-human resources in retail environments | |
CN114663198A (en) | Product recommendation method, device and equipment based on user portrait and storage medium | |
Demizu et al. | Inventory management of new products in retailers using model-based deep reinforcement learning | |
JP2021528707A (en) | Configuration price quote with advanced approval control | |
Liu et al. | Multi-objective product configuration involving new components under uncertainty | |
Narechania | Machine Learning as Natural Monopoly | |
Vijayakumar | Digital twin in consumer choice modeling | |
CN113901236A (en) | Target identification method and device based on artificial intelligence, electronic equipment and medium | |
Simchi-Levi et al. | Online learning and optimization for revenue management problems with add-on discounts | |
Zhang et al. | A new customization model for enterprises based on improved framework of customer to business: A case study in automobile industry | |
US20230186331A1 (en) | Generalized demand estimation for automated forecasting systems | |
US20210035186A1 (en) | Data Representations for Collection of Complex Asset System Data | |
Hua et al. | Markdowns in e-commerce fresh retail: A counterfactual prediction and multi-period optimization approach | |
CN113850654A (en) | Training method of item recommendation model, item screening method, device and equipment | |
Zhou et al. | Scheduling just-in-time part replenishment of the automobile assembly line with unrelated parallel machines | |
Neumann et al. | Genetic algorithms for planning and scheduling engineer-to-order production: a systematic review | |
CN111652282B (en) | Big data-based user preference analysis method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||