CN114297493A - Object recommendation method, object recommendation device, electronic equipment and storage medium


Info

Publication number
CN114297493A
Authority
CN
China
Prior art keywords: scene, information, feature, user, determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111628802.3A
Other languages
Chinese (zh)
Inventor
杨天持
张路浩
方瑞玉
胡懋地
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN202111628802.3A
Publication of CN114297493A
Legal status: Pending

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses an object recommendation method, an object recommendation device, electronic equipment and a storage medium, belonging to the technical field of artificial intelligence. The method comprises the following steps: acquiring reference information, where the reference information comprises scene information of a plurality of scenes, or comprises at least one of user information of a target user, object information of a plurality of objects, and scene information of the plurality of scenes; determining reference features based on the reference information, where the reference features comprise scene features of each scene, or comprise at least one of user features of the target user, object features of each object, and scene features of each scene; determining an object to be recommended from the plurality of objects based on the reference features; and recommending the object to be recommended to the target user. Because the object to be recommended is determined based on the scene information, or based on the scene information combined with at least one of the user information and the object information, recommendation accuracy is improved, which in turn increases the time, frequency, and so on with which users use the application program.

Description

Object recommendation method, object recommendation device, electronic equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of artificial intelligence, in particular to an object recommendation method, an object recommendation device, electronic equipment and a storage medium.
Background
With the development of artificial intelligence technology, application programs are becoming increasingly powerful. Many applications on the market have a recommendation function; for example, a video application has a video recommendation function, and a food-related application has a food recommendation function.
In the related art, there is a class of applications having an object recommendation function. In such an application, objects that the user has historically selected are recommended to the user, so that the user first selects a target object from the recommended objects and then selects a target resource from the various resources issued by that object. Because object recommendation is performed only according to the objects historically selected by the user (that is, only the user information and the object information), the accuracy is low, which affects how the user uses the application.
Disclosure of Invention
The embodiment of the application provides an object recommendation method, an object recommendation device, an electronic device and a storage medium, which can be used for solving the problems in the related art.
In one aspect, an embodiment of the present application provides an object recommendation method, where the method includes:
acquiring reference information, where the reference information comprises scene information of a plurality of scenes, or comprises at least one of user information of a target user, object information of a plurality of objects, and scene information of the plurality of scenes, the scenes represent types of interactive behaviors, and an interactive behavior is a behavior of a user selecting, in an environment, a resource issued by an object;
determining reference features based on the reference information, wherein the reference features comprise scene features of each scene, or the reference features comprise at least one of user features of the target user, object features of each object and scene features of each scene;
determining an object to be recommended from the plurality of objects based on the reference features;
and recommending the object to be recommended to the target user.
In another aspect, an embodiment of the present application provides an object recommendation apparatus, where the apparatus includes:
an acquisition module, configured to acquire reference information, where the reference information comprises scene information of a plurality of scenes, or comprises at least one of user information of a target user, object information of a plurality of objects, and scene information of the plurality of scenes, the scenes represent types of interactive behaviors, and an interactive behavior is a behavior of a user selecting, in an environment, a resource issued by an object;
a determining module, configured to determine reference features based on the reference information, where the reference features include scene features of the respective scenes, or the reference features include at least one of user features of the target user, object features of the respective objects, and scene features of the respective scenes;
the determining module is further used for determining an object to be recommended from the plurality of objects based on the reference features;
and the recommending module is used for recommending the object to be recommended to the target user.
On the other hand, an embodiment of the present application provides an electronic device, where the electronic device includes a processor and a memory, where the memory stores at least one program code, and the at least one program code is loaded and executed by the processor, so that the electronic device implements any one of the object recommendation methods described above.
In another aspect, a computer-readable storage medium is provided, in which at least one program code is stored, and the at least one program code is loaded and executed by a processor to make a computer implement any one of the object recommendation methods described above.
In another aspect, a computer program or a computer program product is provided, in which at least one computer instruction is stored, and the at least one computer instruction is loaded and executed by a processor, so as to enable a computer to implement any one of the object recommendation methods described above.
The technical scheme provided by the embodiment of the application at least has the following beneficial effects:
the technical scheme provided by the embodiment of the application is that the object to be recommended is determined from a plurality of objects based on the scene characteristics of each scene, or based on at least one of the user characteristics of the target user, the object characteristics of each object and the scene characteristics of each scene, so that the object to be recommended is determined based on the scene information or the comprehensive user information, at least one of the object information and the scene information, the accuracy is improved, and the time, frequency and the like of using the application program by the user are improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 is a schematic diagram of an implementation environment of an object recommendation method according to an embodiment of the present application;
FIG. 2 is a flowchart of an object recommendation method provided in an embodiment of the present application;
FIG. 3 is a schematic view of a scenario provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of another scenario provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of a heterogeneous scene hypergraph provided by an embodiment of the present application;
fig. 6 is a schematic diagram of determining an object to be recommended according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an object recommendation device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a terminal device according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an implementation environment of an object recommendation method provided in an embodiment of the present application, where the implementation environment includes an electronic device 11 as shown in fig. 1, and the object recommendation method in the embodiment of the present application may be executed by the electronic device 11. Illustratively, the electronic device 11 may include at least one of a terminal device or a server.
The terminal device may be at least one of a smartphone, a desktop computer, a tablet computer, an e-book reader, and a laptop computer. The server may be one server, a server cluster formed by multiple servers, or any one of a cloud computing platform and a virtualization center, which is not limited in the embodiments of the present application. The server can be in communication connection with the terminal device through a wired network or a wireless network. The server may have functions of data processing, data storage, data transceiving, and the like, which are not limited in the embodiments of the present application.
Based on the foregoing implementation environment, an embodiment of the present application provides an object recommendation method, which may be executed by the electronic device 11 in fig. 1. Taking the flowchart of the object recommendation method shown in fig. 2 as an example, the method includes steps 201 to 204.
Step 201, obtaining reference information, where the reference information includes scene information of multiple scenes, or the reference information includes at least one of user information of a target user, object information of multiple objects, and scene information of multiple scenes, where the scenes represent types of interactive behaviors, and the interactive behaviors are behaviors of a user selecting resources issued by the objects in an environment.
The number of the target users is not limited in the embodiment of the application. The target user may be any one user or a plurality of users. The user information includes at least one of attribute information such as gender, age, birthday, height, and the like.
The object may be referred to as a Point Of Interest (POI). In the embodiment of the present application, the object may be a store, and in this case, the object information includes at least one item of attribute information such as a store area, a store position, and an affiliated industry. In this embodiment, the object may also be a media information Uploader (Up master), and in this case, the object information includes at least one item of attribute information such as a gender, an account name, and a profile.
The scene represents the type of the interactive behavior, and the interactive behavior is the behavior of a user selecting, in an environment, a resource issued by an object. The environment refers to the environment in which the user is located, and one environment may be represented by environment information; for example, the environment information of the environment in which the user is located includes at least one item of attribute information such as time, place, and weather. When the object is a shop, the resources issued by the object may be goods; when the object is a media information uploader, the resources issued by the object may be media information.
Referring to fig. 3, fig. 3 is a schematic view of a scenario provided in an embodiment of the present application. Two interactive behaviors are included in fig. 3, denoted interactive behavior 1 and interactive behavior 2, respectively. Interactive behavior 1 is: Zhang San (i.e., the user) selects pizza and juice (i.e., resources) issued by a pizza shop (i.e., the object) in an environment of dinner, office building, weekday, and the like. Interactive behavior 2 is: Li Si (i.e., the user) selects pasta (i.e., a resource) issued by the pizza shop (i.e., the object) in an environment of lunch, office building, and the like. "Scenario 1: weekday, fast food" is used to characterize the type of interactive behavior 1 and the type of interactive behavior 2. It can be understood that scenario 1 may correspond to other interactive behaviors besides interactive behaviors 1 and 2, and the embodiment of the present application does not limit the number of interactive behaviors corresponding to scenario 1.
Next, please refer to fig. 4, which is a schematic view of another scenario provided in an embodiment of the present application. Two interactive behaviors are included in fig. 4, denoted interactive behavior 3 and interactive behavior 4, respectively. Interactive behavior 3 is: Zhang San (i.e., the user) selects coffee and juice (i.e., resources) issued by a coffee shop (i.e., the object) in an environment of afternoon, office building, and the like. Interactive behavior 4 is: Wang Wu (i.e., the user) selects milk tea (i.e., a resource) issued by a milk tea shop (i.e., the object) in an environment of afternoon, school, and the like. "Scenario 2: afternoon, refreshment" is used to characterize the type of interactive behavior 3 and the type of interactive behavior 4. It can be understood that scenario 2 may correspond to other interactive behaviors besides interactive behaviors 3 and 4, and the embodiment of the present application does not limit the number of interactive behaviors corresponding to scenario 2.
It should be noted that the scenario in the embodiment of the present application corresponds to at least one interactive behavior. The scene is determined based on the interactive behavior, and the scene is obtained by abstracting and summarizing the user, the environment, the object and the resource in the interactive behavior so as to represent the type of the interactive behavior by the scene. Accordingly, the scene information includes at least one item of attribute information such as a user type, an environment type, an object type, a resource type, and the like.
Optionally, a quadruple <U, P, I, C> is given, where U represents the user information of a plurality of users, P represents the object information of a plurality of objects, I represents the resource information of a plurality of resources, and C represents the environment information of a plurality of environments. One interactive behavior may then be defined as τ = <u, p, i, c>, where τ represents the interactive behavior, u ∈ U represents the user information of a user, p ∈ P represents the object information of the object selected by the user, i ∈ I represents the resource information of the resource, issued by that object, that the user selects, and c ∈ C represents the environment information of the environment.
For an interactive behavior, a scene may be determined based on the interactive behavior, which may be expressed as s = ψ(τ), where s represents the scene, ψ represents a scene function for determining a scene based on an interactive behavior, and τ represents the interactive behavior.
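To make these definitions concrete, the following minimal Python sketch (not part of the disclosure; all identifiers and the rule-based scene function are illustrative assumptions) represents an interactive behavior as the quadruple τ = <u, p, i, c> and maps it to a scene with a toy stand-in for ψ:

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    """One interactive behavior, the quadruple tau = <u, p, i, c>."""
    user: dict      # u: user information, e.g. {"gender": ..., "age": ...}
    obj: dict       # p: object information of the selected object
    resource: dict  # i: resource information of the selected resource
    env: dict       # c: environment information, e.g. {"time": ..., "place": ...}

def scene_of(tau: Interaction) -> str:
    """Toy stand-in for the scene function psi: abstracts the user,
    environment, object, and resource of an interactive behavior into
    a scene label. The disclosure does not fix psi's form."""
    day_type = "weekday" if tau.env.get("weekday", True) else "weekend"
    return f"{day_type}, {tau.obj.get('industry', 'unknown')}"

tau = Interaction(
    user={"name": "Zhang San"},
    obj={"name": "pizza shop", "industry": "fast food"},
    resource={"name": "pizza"},
    env={"time": "dinner", "place": "office building", "weekday": True},
)
print(scene_of(tau))  # -> "weekday, fast food"
```

In practice the mapping from interactive behaviors to scenes would be learned or curated rather than hard-coded; the sketch only fixes the shape of the data flowing through ψ.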
In the embodiment of the application, scene information of a plurality of scenes can be acquired, and at least one of user information of a target user, object information of a plurality of objects, and scene information of a plurality of scenes can also be acquired.
Alternatively, the user information of the target user, the object information of the plurality of objects, and the scene information of the plurality of scenes may be stored in a medium, and obtained by obtaining the medium. The medium is not limited in the embodiments of the present application; it may be, for example, a heterogeneous scene hypergraph. For convenience of description, the following optional embodiments are illustrated by taking the case where the medium is a heterogeneous scene hypergraph as an example.
The heterogeneous scene hypergraph comprises user nodes, object nodes and scene edges. The number of the user nodes is multiple, one user node represents the user information of one user, the multiple user nodes comprise target user nodes, and the target user nodes represent the user information of the target user. The number of the object nodes is also multiple, and one object node represents the object information of one object. The number of the scene edges is also multiple, and one scene edge represents the scene information of one scene. Optionally, a scene edge may be a closed graph, and the scene edge represents a type of an interactive behavior, so the closed graph includes a user node corresponding to a user in the interactive behavior and an object node corresponding to an object.
In the embodiment of the application, a heterogeneous scene hypergraph can be constructed based on user information of a plurality of users, object information of a plurality of objects and scene information of a plurality of scenes, and the user information of a target user, the object information of the plurality of objects and the scene information of the plurality of scenes on the heterogeneous scene hypergraph can be acquired by acquiring the heterogeneous scene hypergraph.
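Purely as an illustration of the structure just described, a heterogeneous scene hypergraph could be held in memory as follows; the patent does not fix a storage format, so the class layout and identifiers below are assumptions:

```python
from collections import defaultdict

class SceneHypergraph:
    """Minimal heterogeneous scene hypergraph: user, object, and resource
    nodes, plus scene hyperedges that each enclose the nodes of the
    interactive behaviors they summarize."""
    def __init__(self):
        self.users = {}      # user_id -> user information
        self.objects = {}    # object_id -> object information
        self.resources = {}  # resource_id -> resource information
        self.scenes = {}     # scene_id -> scene information
        # scene hyperedge: scene_id -> set of (node_type, node_id) members
        self.scene_edges = defaultdict(set)
        # ordinary pairwise edges: selection and publishing
        self.select_edges = set()   # (user_id, object_id)
        self.publish_edges = set()  # (object_id, resource_id)

    def add_interaction(self, scene_id, user_id, object_id, resource_id):
        """Register one interactive behavior under a scene hyperedge."""
        self.scene_edges[scene_id].update(
            {("user", user_id), ("object", object_id), ("resource", resource_id)}
        )
        self.select_edges.add((user_id, object_id))
        self.publish_edges.add((object_id, resource_id))

g = SceneHypergraph()
g.add_interaction("weekday, fast food", "zhang_san", "pizza_shop", "pizza")
g.add_interaction("weekday, fast food", "li_si", "pizza_shop", "pasta")
```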
And step 202, determining reference characteristics based on the reference information, wherein the reference characteristics comprise scene characteristics of each scene, or the reference characteristics comprise at least one of user characteristics of a target user, object characteristics of each object and scene characteristics of each scene.
In the embodiment of the application, the user characteristics of the target user are determined based on the user information of the target user, the object characteristics of each object are determined based on the object information of each object, and the scene characteristics of each scene are determined based on the scene information of each scene.
Optionally, the heterogeneous scene hypergraph includes user information of the target user, object information of the plurality of objects, and scene information of the plurality of scenes, and the user characteristics of the target user, the object characteristics of each object, and the scene characteristics of each scene may be determined based on the heterogeneous scene hypergraph.
The determination of the user characteristics of the target user (see implementations a1-A3), the determination of the object characteristics of each object (see implementations B1-B4) and the determination of the scene characteristics of each scene (see implementations C1-C2) will be described below, respectively.
Implementation a1, the reference information includes user information of the target user, and the determining the reference feature based on the reference information includes: determining a first characteristic of a target user based on user information of the target user; acquiring at least one piece of first associated information, and determining a second characteristic of the target user based on the at least one piece of first associated information, wherein any piece of first associated information comprises object information of any object selected by the target user; based on the first characteristic of the target user and the second characteristic of the target user, a user characteristic of the target user is determined.
In the embodiment of the application, the heterogeneous scene hypergraph includes user information of a plurality of users, and the user information of a target user can be determined from the user information of the plurality of users, wherein the target user is at least one of the plurality of users. Then, a first characteristic of the target user is determined based on the user information of the target user. Wherein the user information of the target user includes at least one attribute information, the first characteristic of the target user may be determined based on the user information of the target user according to formula (1) shown below.
$u = h(e_1, e_2, \ldots, e_F)$   Formula (1)

where $u$ denotes the first feature of the target user, $h$ denotes an aggregation function, $e_1, e_2, \ldots, e_F$ respectively denote the features corresponding to the attribute information, and $F$ denotes the number of attribute information items. Optionally, the aggregation function may be the average function, the layer normalization function, or the composition of the layer normalization function and the average function, that is, $h = \mathrm{LayerNorm} \circ \mathrm{AVG}$, where $\mathrm{LayerNorm}$ denotes the layer normalization function, $\circ$ denotes function composition, and $\mathrm{AVG}$ denotes the average function.
Alternatively, the characteristic corresponding to the attribute information is determined according to formula (2) shown below.
$e_i = \frac{1}{q}\, x_i M_i, \quad i = 1, 2, \ldots, F$   Formula (2)

where $e_i$ denotes the feature corresponding to the $i$-th attribute information, $i$ takes any value from 1 to $F$, $F$ denotes the number of attribute information items, $q$ denotes the number of non-zero elements in $x_i$, $x_i \in \mathbb{R}^{1 \times d_i}$ denotes the one-hot encoding (One-Hot Encoding) or multi-hot encoding (Multi-Hot Encoding) of the $i$-th attribute information ($\mathbb{R}$ denotes the real numbers, and $1 \times d_i$ is the dimension of the encoding), and $M_i \in \mathbb{R}^{d_i \times d_e}$ denotes the embedding matrix of the $i$-th attribute information, $d_i \times d_e$ being the dimension of the embedding matrix.
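A minimal sketch of formulas (1) and (2), assuming numpy, a parameter-free layer normalization, and randomly initialized embedding matrices in place of learned ones (in a real model the $M_i$ would be trained parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def attribute_feature(x: np.ndarray, M: np.ndarray) -> np.ndarray:
    """Formula (2): e_i = (1/q) * x_i @ M_i, where q is the number of
    non-zero elements in the one-/multi-hot vector x_i. For a one-hot
    vector this is a row lookup; for a multi-hot vector it averages the
    rows of the embedding matrix selected by the active categories."""
    q = np.count_nonzero(x)
    return (x @ M) / max(q, 1)

def aggregate(features) -> np.ndarray:
    """Formula (1) with h = LayerNorm o AVG: average the per-attribute
    features, then apply a parameter-free layer normalization."""
    avg = np.mean(features, axis=0)
    return (avg - avg.mean()) / (avg.std() + 1e-6)

d_e = 8                                    # embedding dimension (assumed)
x_gender = np.array([0.0, 1.0])            # one-hot, d_1 = 2
x_tags = np.array([1.0, 0.0, 1.0, 0.0])    # multi-hot, d_2 = 4, q = 2
M_gender = rng.normal(size=(2, d_e))       # random stand-ins for the
M_tags = rng.normal(size=(4, d_e))         # learned embedding matrices M_i
u = aggregate([attribute_feature(x_gender, M_gender),
               attribute_feature(x_tags, M_tags)])  # first feature, formula (1)
```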
The heterogeneous scene hypergraph of the embodiment of the application further comprises a selection edge, wherein two ends of the selection edge are respectively a user node and an object node, namely the heterogeneous scene hypergraph comprises the user node, the selection edge and the object node, and the user node, the selection edge and the object node represent any user to select any object. And determining a target user node-selecting edge-object node from the heterogeneous scene hypergraph, wherein the target user node is a user node corresponding to a target user.
Wherein the number of target user nodes-selection edges-object nodes is at least one. For any "target user node-selection edge-object node", the object information represented by the object node is used as a first association information. In this way, the respective first association information can be determined.
In the embodiment of the application, the first characteristics of each object selected by the target user may be respectively determined based on each piece of first associated information, and the second characteristics of the target user may be determined based on the first characteristics of each object selected by the target user.
The object information includes at least one item of attribute information, and the first feature of the object may be expressed as $p = h(e_1, e_2, \ldots, e_F)$, where $p$ denotes the first feature of the object, $h$ denotes an aggregation function, $e_1, e_2, \ldots, e_F$ respectively denote the features corresponding to the attribute information, and $F$ denotes the number of attribute information items; the features corresponding to the attribute information can be determined according to formula (2) above.
In the embodiment of the present application, the second feature of the target user is determined based on the first feature of each object selected by the target user according to formula (3) shown below.
$u_p = \mathrm{Transformer}(p_1, p_2, \ldots, p_n)$   Formula (3)

where $u_p$ denotes the second feature of the target user, $\mathrm{Transformer}$ denotes the model parameters of a transformation model based on the self-attention mechanism, and $p_1, p_2, \ldots, p_n$ denote the first features of the objects selected by the target user.
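A hedged sketch of formula (3) using PyTorch's built-in Transformer encoder; the disclosure only states that a self-attention transformation model is used, so the depth, number of heads, and the mean-pooling over the output sequence are illustrative assumptions:

```python
import torch
import torch.nn as nn

d_e = 8  # embedding dimension, matching the sketch above

# Single-layer Transformer encoder as a stand-in for formula (3).
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_e, nhead=2, batch_first=True),
    num_layers=1,
)

p_seq = torch.randn(1, 5, d_e)    # first features p_1..p_5 of selected objects
u_p = encoder(p_seq).mean(dim=1)  # u_p: the target user's second feature
```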
After the first characteristic of the target user and the second characteristic of the target user are determined, the first characteristic of the target user and the second characteristic of the target user can be fused to obtain the user characteristic of the target user. The embodiment of the present application does not limit the fusion manner, and the fusion manner may be, for example, addition, splicing, and the like.
Implementation a2, the reference information includes user information of the target user, and the determining the reference feature based on the reference information includes: determining a first characteristic of a target user based on user information of the target user; acquiring at least one piece of second associated information, and determining a third feature of the target user based on the at least one piece of second associated information, wherein any piece of second associated information comprises scene information of any scene and object information of any object when the target user selects any object in any scene; and determining the user characteristics of the target user based on the first characteristics of the target user and the third characteristics of the target user.
The first feature of determining the target user based on the user information of the target user has been introduced in implementation a1, and will not be described herein.
In the embodiment of the application, the heterogeneous scene hypergraph further comprises scene edges, the scene edges represent scene information of one scene, and the scene edges can be a closed graph containing user nodes and object nodes, so that the heterogeneous scene hypergraph comprises the user nodes, the scene edges and the object nodes, and the user nodes, the scene edges and the object nodes represent any user to select any object in any scene. A target user node-scene edge-object node may be determined from the heterogeneous scene hypergraph.
Wherein the number of the target user nodes-scene edges-object nodes is at least one. For any "target user node-scene edge-object node", a second association information is determined by using the scene information represented by the scene edge and the object information represented by the object node. In this way, the respective second association information can be determined.
In the embodiment of the application, the first characteristics of each scene where the target user is located and the first characteristics of each object selected by the target user in each scene may be respectively determined based on each piece of second associated information, and the third characteristics of the target user may be determined based on the first characteristics of each scene where the target user is located and the first characteristics of each object selected by the target user in each scene.
The scene information includes at least one item of attribute information, and the first feature of the scene may be expressed as $s = h(e_1, e_2, \ldots, e_F)$, where $s$ denotes the first feature of the scene, $h$ denotes an aggregation function, $e_1, e_2, \ldots, e_F$ respectively denote the features corresponding to the attribute information, and $F$ denotes the number of attribute information items; the features corresponding to the attribute information can be determined according to formula (2) above.
In the embodiment of the present application, the third feature of the target user is determined according to the following formula (4) based on the first feature of each scene where the target user is located and the first feature of each object selected by the target user in each scene.
$u_{sp} = \mathrm{AVG}(\{p_j \odot s_j \mid j = 1, 2, \ldots, n\})$   Formula (4)

where $u_{sp}$ denotes the third feature of the target user, $\mathrm{AVG}$ denotes the average function, $p_j$ denotes the first feature of the object selected by the target user in the $j$-th scene, $s_j$ denotes the first feature of the $j$-th scene in which the target user is located, $n$ is the number of scenes in which the target user is located, and $\odot$ denotes the Hadamard product, i.e., the element-wise multiplication of two vectors.
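Formulas (4) through (7) all follow the same pattern, the average of Hadamard products of paired first features, so a single helper covers them; the tensor shapes below are assumptions for illustration:

```python
import torch

def hadamard_avg(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Shared pattern of formulas (4)-(7): average of element-wise
    (Hadamard) products over n paired first features.
    a, b: (n, d_e) tensors of paired first features."""
    return (a * b).mean(dim=0)

p = torch.randn(3, 8)      # objects selected by the user in 3 scenes
s = torch.randn(3, 8)      # first features of those 3 scenes
u_sp = hadamard_avg(p, s)  # formula (4): the user's third feature
```

Formula (5) is the same call with resource features in place of object features, and formulas (6) and (7) pair user or object features with the scene's single first feature broadcast across the n rows.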
After the first characteristic of the target user and the third characteristic of the target user are determined, the first characteristic of the target user and the third characteristic of the target user can be fused to obtain the user characteristic of the target user. The embodiment of the present application does not limit the fusion manner, and the fusion manner may be, for example, addition, splicing, and the like.
Implementation a3, the reference information includes user information of the target user, and the determining the reference feature based on the reference information includes: determining a first characteristic of a target user based on user information of the target user; acquiring at least one piece of third association information, and determining a fourth feature of the target user based on the at least one piece of third association information, wherein any piece of third association information comprises scene information of any scene and resource information of any resource when the target user selects any resource in any scene; and determining the user characteristics of the target user based on the first characteristics of the target user and the fourth characteristics of the target user.
The first feature of determining the target user based on the user information of the target user has been introduced in implementation a1, and will not be described herein.
The heterogeneous scene hypergraph in the embodiment of the application further comprises a plurality of resource nodes, wherein one resource node represents resource information of one resource, and the resource information comprises at least one attribute information. The resource in the embodiment of the application is a resource issued by an object, and when the object is a shop, the resource may be clothes, food, electric appliances and the like, and when the object is a media information uploader, the resource may be a video, an image, an article and the like.
Since the scene represents the type of the behavior of the user selecting the resource issued by the object in the environment, the scene edge can be a closed graph containing the user node, the object node and the resource node, and therefore, the heterogeneous scene hypergraph comprises the user node, the scene edge and the resource node, and the user node, the scene edge and the resource node represent that any user selects any resource in any scene. A target user node-scene edge-resource node may be determined from the heterogeneous scene hypergraph.
Wherein the number of the target user nodes-scene edges-resource nodes is at least one. For any 'target user node-scene edge-resource node', determining third associated information by using the scene information represented by the scene edge and the resource information represented by the resource node. In this way, the respective third associated information can be determined.
In the embodiment of the application, the first characteristics of each scene where the target user is located and the first characteristics of each resource selected by the target user in each scene may be respectively determined based on each third related information, and the fourth characteristics of the target user may be determined based on the first characteristics of each scene where the target user is located and the first characteristics of each resource selected by the target user in each scene.
The resource information includes at least one item of attribute information, and the first feature of the resource may be expressed as $i = h(e_1, e_2, \ldots, e_F)$, where $i$ denotes the first feature of the resource, $h$ denotes an aggregation function, $e_1, e_2, \ldots, e_F$ respectively denote the features corresponding to the attribute information, and $F$ denotes the number of attribute information items; the features corresponding to the attribute information can be determined according to formula (2) above.
In the embodiment of the present application, the fourth feature of the target user is determined according to the following formula (5) based on the first feature of each scene where the target user is located and the first feature of each resource selected by the target user in each scene.
$u_{si} = \mathrm{AVG}(\{i_j \odot s_j \mid j = 1, 2, \ldots, n\})$   Formula (5)

where $u_{si}$ denotes the fourth feature of the target user, $\mathrm{AVG}$ denotes the average function, $i_j$ denotes the first feature of the resource selected by the target user in the $j$-th scene, $s_j$ denotes the first feature of the $j$-th scene in which the target user is located, $n$ is the number of scenes in which the target user is located, and $\odot$ denotes the Hadamard product.
After the first characteristic of the target user and the fourth characteristic of the target user are determined, the first characteristic of the target user and the fourth characteristic of the target user can be fused to obtain the user characteristic of the target user. The embodiment of the present application does not limit the fusion manner, and the fusion manner may be, for example, addition, splicing, and the like.
It is to be understood that the manner of determining the user features of the target user may also vary depending on the application scenario. For example, in one application scenario, the first feature of the target user may be directly used as the user features of the target user, and in another application scenario, the user features of the target user may be determined based on the second feature of the target user and the third feature of the target user. Therefore, the embodiment of the present application does not limit the manner of determining the user features of the target user; they may be determined based on at least one of the first feature, the second feature, the third feature, and the fourth feature of the target user.
Optionally, after the first feature, the second feature, the third feature, and the fourth feature of the target user are determined, the second feature, the third feature, and the fourth feature of the target user are concatenated; the concatenated features are then fused by a multilayer perceptron, and the fused features are added to the first feature of the target user to obtain the user features of the target user. That is,

$u' = u + \mathrm{MLP}(u_p \oplus u_{sp} \oplus u_{si})$

where $u'$ denotes the user features of the target user, $u$ denotes the first feature of the target user, $\mathrm{MLP}$ denotes the multilayer perceptron, $u_p$ denotes the second feature of the target user, $u_{sp}$ denotes the third feature of the target user, $u_{si}$ denotes the fourth feature of the target user, and $\oplus$ denotes the concatenation operation.
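A sketch of this fusion step, assuming PyTorch and a hypothetical two-layer perceptron (the disclosure says only "multilayer perceptron", without fixing layer sizes or activations):

```python
import torch
import torch.nn as nn

d_e = 8
# Hypothetical two-layer perceptron standing in for the MLP.
mlp = nn.Sequential(nn.Linear(3 * d_e, d_e), nn.ReLU(), nn.Linear(d_e, d_e))

u = torch.randn(d_e)                             # first feature u
u_p, u_sp, u_si = (torch.randn(d_e) for _ in range(3))
u_prime = u + mlp(torch.cat([u_p, u_sp, u_si]))  # u' = u + MLP(u_p + u_sp + u_si concatenated)
```

The object features and scene features described below are fused with the same pattern, concatenating four parts for $p'$ and two parts for $s'$ before the perceptron and the residual addition.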
Implementation B1, the reference information includes object information of a plurality of objects, and the determining the reference feature based on the reference information includes: for any one object, determining a first feature of any one object based on object information of any one object; acquiring at least one piece of fourth associated information corresponding to any one object, and determining a second feature of any one object based on the at least one piece of fourth associated information corresponding to any one object, wherein any one piece of fourth associated information corresponding to any one object comprises user information of any one user when any one object is selected by any one user; an object feature of any one of the objects is determined based on the first feature of any one of the objects and the second feature of any one of the objects.
In the embodiment of the application, the heterogeneous scene hypergraph comprises object information of a plurality of objects. For any object, the first feature of the object may be determined based on the object information of the object, where the determination manner of the first feature of the object is described in implementation a1, and is not described herein again.
The heterogeneous scene hypergraph in the embodiment of the application comprises user nodes, selection edges, and object nodes. For any object, "user node-selection edge-any object node" can be determined from the heterogeneous scene hypergraph, where "any object node" is the object node corresponding to that object. The number of such "user node-selection edge-any object node" instances is at least one. For any "user node-selection edge-any object node", a piece of fourth association information is determined by using the user information characterized by the user node. In this way, the respective pieces of fourth association information can be determined.
In this embodiment of the application, the first characteristics of each user when any one object is selected by each user may be respectively determined based on each fourth associated information, the second characteristics of any one object may be determined based on the first characteristics of each user when any one object is selected by each user, and a determination manner of the second characteristics of any one object is similar to a determination manner of the second characteristics of the target user, which is not described herein again.
After the first feature of any object and the second feature of any object are determined, the first feature of any object and the second feature of any object may be fused to obtain the object feature of any object. The embodiment of the present application does not limit the fusion manner, and the fusion manner may be, for example, addition, splicing, and the like.
Implementation B2, the reference information includes object information of a plurality of objects, and the determining the reference feature based on the reference information includes: for any one object, determining a first feature of any one object based on object information of any one object; acquiring at least one piece of fifth associated information corresponding to any one object, and determining a third feature of any one object based on the at least one piece of fifth associated information corresponding to any one object, wherein any one piece of fifth associated information corresponding to any one object comprises scene information of any one scene and user information of any one user when any one object is selected by any one user in any one scene; an object feature of any one of the objects is determined based on the first feature of any one of the objects and the third feature of any one of the objects.
For any object, the first feature of the object may be determined based on the object information of the object, where the determination manner of the first feature of the object is described in implementation a1, and is not described herein again.
The heterogeneous scene hypergraph in the embodiment of the application comprises user nodes, scene edges, and object nodes. For any object, "user node-scene edge-any object node" can be determined from the heterogeneous scene hypergraph. The number of such "user node-scene edge-any object node" instances is at least one. For any "user node-scene edge-any object node", a piece of fifth association information is determined by using the user information characterized by the user node and the scene information characterized by the scene edge. In this way, the respective pieces of fifth association information can be determined.
In the embodiment of the application, the first characteristics of each scene where any object is located and the first characteristics of each user when any object is selected by each user in each scene may be respectively determined based on each fifth related information, and the third characteristics of any object may be determined based on the first characteristics of each scene where any object is located and the first characteristics of each user when any object is selected by each user in each scene. The determination manner of the third feature of any object is similar to that of the third feature of the target user, and is not described herein again.
After the first feature of any object and the third feature of any object are determined, the first feature of any object and the third feature of any object may be fused to obtain the object feature of any object. The embodiment of the present application does not limit the fusion manner, and the fusion manner may be, for example, addition, splicing, and the like.
Implementation B3, the reference information includes object information of a plurality of objects, and the determining the reference feature based on the reference information includes: for any one object, determining a first feature of any one object based on object information of any one object; acquiring at least one piece of sixth associated information corresponding to any one object, and determining the fourth feature of any one object based on the at least one piece of sixth associated information corresponding to any one object, wherein any one piece of sixth associated information corresponding to any one object comprises resource information of any one resource issued by any one object; an object feature of any one of the objects is determined based on the first feature of any one of the objects and the fourth feature of any one of the objects.
For any object, the first feature of the object may be determined based on the object information of the object, where the determination manner of the first feature of the object is described in implementation a1, and is not described herein again.
The heterogeneous scene hypergraph of the embodiment of the application further comprises a publishing edge, wherein the two ends of the publishing edge are respectively an object node and a resource node, namely the heterogeneous scene hypergraph comprises an object node, a publishing edge and a resource node, and the object node, the publishing edge and the resource node represent that any object publishes any resource. Any one object node-publishing edge-resource node may be determined from the heterogeneous scene hypergraph. Wherein the number of any object node-issuing edge-resource node is at least one. For any "any object node-publishing edge-resource node", a sixth association information is determined by using the resource information characterized by the resource node. In this way, the respective sixth association information can be determined.
In the embodiment of the present application, the first characteristics of each resource issued by any object may be determined based on each sixth related information, and the fourth characteristics of any object may be determined based on the first characteristics of each resource issued by any object. The determination manner of the fourth feature of any object is similar to that of the second feature of the target user, and is not described herein again.
After the first feature of any object and the fourth feature of any object are determined, the first feature of any object and the fourth feature of any object may be fused to obtain the object feature of any object. The embodiment of the present application does not limit the fusion manner, and the fusion manner may be, for example, addition, splicing, and the like.
Implementation B4, the reference information includes object information of a plurality of objects, and the determining the reference feature based on the reference information includes: for any one object, determining a first feature of any one object based on object information of any one object; acquiring at least one piece of seventh associated information corresponding to any one object, and determining a fifth feature of any one object based on the at least one piece of seventh associated information corresponding to any one object, wherein any one piece of seventh associated information corresponding to any one object comprises scene information of any one scene and resource information of any one resource when any one object releases any one resource in any one scene; an object feature of any one of the objects is determined based on the first feature of any one of the objects and the fifth feature of any one of the objects.
For any object, the first feature of the object may be determined based on the object information of the object, where the determination manner of the first feature of the object is described in implementation a1, and is not described herein again.
The heterogeneous scene hypergraph in the embodiment of the application comprises object nodes, scene edges and resource nodes. For any object, any object node-scene edge-resource node can be determined from the heterogeneous scene hypergraph. Wherein the number of any one object node-scene edge-resource node is at least one. For any "any object node-scene edge-resource node", a seventh association information is determined by using the scene information represented by the scene edge and the resource information represented by the resource node. In this way, the respective seventh associated information can be determined.
In this embodiment of the application, the first characteristics of each scene where any object is located and the first characteristics of each resource issued by any object in each scene may be respectively determined based on each seventh related information, and the fifth characteristics of any object may be determined based on the first characteristics of each scene where any object is located and the first characteristics of each resource issued by any object in each scene. The determination manner of the fifth feature of any object is similar to that of the third feature of the target user, and is not described herein again.
After the first feature of any object and the fifth feature of any object are determined, the first feature of any object and the fifth feature of any object may be fused to obtain the object feature of any object. The embodiment of the present application does not limit the fusion manner, and the fusion manner may be, for example, addition, splicing, and the like.
It will be appreciated that the manner in which the object characteristics of any one object are determined may also vary from application scenario to application scenario. For example, in an application scenario, the first feature of any one object may be directly taken as the object feature of any one object. Therefore, the embodiment of the present application does not limit the determination method of the object feature of any object, and the object feature of any object may be determined based on at least one of the first feature of any object, the second feature of any object, the third feature of any object, the fourth feature of any object, and the fifth feature of any object.
Alternatively, the object features of any one object may be determined based on the first feature, the second feature, the third feature, the fourth feature, and the fifth feature of the object according to the formula

$p' = p + \mathrm{MLP}(p_u \oplus p_{su} \oplus p_i \oplus p_{si})$

where $p'$ denotes the object features of the object, $p$ denotes the first feature of the object, $\mathrm{MLP}$ denotes the multilayer perceptron, $p_u$ denotes the second feature of the object, $p_{su}$ denotes the third feature of the object, $p_i$ denotes the fourth feature of the object, $p_{si}$ denotes the fifth feature of the object, and $\oplus$ denotes the concatenation operation.
Implementation C1, the reference information includes scene information of a plurality of scenes, and the determining the reference feature based on the reference information includes: for any one scene, determining a first feature of any one scene based on scene information of any one scene; acquiring at least one eighth associated information corresponding to any one scene, and determining a second feature of any one scene based on the at least one eighth associated information corresponding to any one scene, wherein any one eighth associated information corresponding to any one scene comprises scene information of any one scene and user information of any one user corresponding to any one scene; scene features of any one scene are determined based on the first features of any one scene and the second features of any one scene.
In the embodiment of the application, the heterogeneous scene hypergraph comprises a plurality of scene edges, and any scene edge represents scene information of a scene. For any scene, the first feature of the scene may be determined based on the scene information of the scene, where the determination manner of the first feature of the scene has been described above, and is not described herein again.
For any scene edge, the scene edge may be a closed graph including at least one user node, and a user corresponding to the user node included in the scene edge is a user corresponding to a scene corresponding to the scene edge. And determining eighth association information by using the scene information represented by the scene edge and the user information represented by a user node contained in the scene edge. In this way, the respective eighth association information can be determined.
In the embodiment of the present application, the first feature of any scene and the first features of the users corresponding to any scene may be determined based on the eighth related information, and the second feature of any scene may be determined according to the following formula (6) based on the first feature of any scene and the first features of the users corresponding to any scene.
$s_u = \mathrm{AVG}(\{u_j \odot s \mid j = 1, 2, \ldots, n\})$   Formula (6)

where $s_u$ denotes the second feature of the scene, $\mathrm{AVG}$ denotes the average function, $u_j$ denotes the first feature of the $j$-th user corresponding to the scene, $s$ denotes the first feature of the scene, $n$ is the number of users corresponding to the scene, and $\odot$ denotes the Hadamard product.
After the first feature of any scene and the second feature of any scene are determined, the first feature of any scene and the second feature of any scene may be fused to obtain the scene feature of any scene. The embodiment of the present application does not limit the fusion manner, and the fusion manner may be, for example, addition, splicing, and the like.
Implementation C2, the reference information includes scene information of a plurality of scenes, and the determining the reference feature based on the reference information includes: for any one scene, determining a first feature of any one scene based on scene information of any one scene; acquiring at least one ninth associated information corresponding to any one scene, and determining a third feature of any one scene based on the at least one ninth associated information corresponding to any one scene, wherein any one ninth associated information corresponding to any one scene comprises scene information of any one scene and object information of any one object corresponding to any one scene; scene features of any one scene are determined based on the first features of any one scene and the third features of any one scene.
In this embodiment of the application, for any scene, the first feature of the scene may be determined based on the scene information of the scene, where the determination manner of the first feature of the scene has been described above, and is not described herein again.
For any scene edge, the scene edge may be a closed graph including at least one object node, and an object corresponding to the object node included in the scene edge is an object corresponding to a scene corresponding to the scene edge. And determining ninth associated information by using the scene information represented by the scene edge and the object information represented by an object node contained in the scene edge. In this way, the respective ninth associated information can be determined.
In the embodiment of the present application, the first feature of any scene and the first feature of each object corresponding to any scene may be determined based on each of the ninth related information, and the third feature of any scene may be determined according to the following formula (7) based on the first feature of any scene and the first feature of each object corresponding to any scene.
$s_p = \mathrm{AVG}(\{p_j \odot s \mid j = 1, 2, \ldots, n\})$   Formula (7)

where $s_p$ denotes the third feature of the scene, $\mathrm{AVG}$ denotes the average function, $p_j$ denotes the first feature of the $j$-th object corresponding to the scene, $s$ denotes the first feature of the scene, $n$ is the number of objects corresponding to the scene, and $\odot$ denotes the Hadamard product.
After the first feature of any scene and the third feature of any scene are determined, the first feature of any scene and the third feature of any scene may be fused to obtain the scene feature of any scene. The embodiment of the present application does not limit the fusion manner, and the fusion manner may be, for example, addition, splicing, and the like.
It is to be understood that the scene characteristics of any one scene may be determined based on at least one of the first characteristics of any one scene, the second characteristics of any one scene, and the third characteristics of any one scene.
Optionally, after the first feature, the second feature, and the third feature of any scene are determined, the second feature and the third feature of the scene are first concatenated, the concatenated features are then fused by using the multilayer perceptron, and the fused features are added to the first feature of the scene to obtain the scene feature of the scene. That is,

s' = s + MLP(s_u ⊕ s_p)

where s' denotes the scene feature of any scene, s denotes the first feature of any scene, MLP denotes the multilayer perceptron, s_u denotes the second feature of any scene, s_p denotes the third feature of any scene, and ⊕ denotes the concatenation operation.
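The fusion above can likewise be sketched in Python; the single hidden layer, ReLU activation, and layer sizes are illustrative assumptions, since the embodiment does not limit the structure of the multilayer perceptron.

```python
import numpy as np

def fuse_scene_feature(s, s_u, s_p, W1, b1, W2, b2):
    """Scene feature s' = s + MLP(concat(s_u, s_p)), with a one-hidden-layer MLP."""
    x = np.concatenate([s_u, s_p])      # concatenation ⊕ of second and third features
    h = np.maximum(W1 @ x + b1, 0.0)    # hidden layer with ReLU
    return s + (W2 @ h + b2)            # residual addition with the first feature

# Illustrative usage with feature dimension d = 8 and hidden width 16.
d = 8
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(16, 2 * d)), np.zeros(16)
W2, b2 = rng.normal(size=(d, 16)), np.zeros(d)
s_prime = fuse_scene_feature(rng.normal(size=d), rng.normal(size=d),
                             rng.normal(size=d), W1, b1, W2, b2)
```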
Step 203, determining an object to be recommended from a plurality of objects based on the reference features.
In the embodiment of the application, an algorithm or a model can be utilized to determine the object to be recommended from a plurality of objects based on the reference features. Namely, an algorithm or a model is used for determining the object to be recommended from a plurality of objects based on the scene characteristics of each scene. Or determining the object to be recommended from a plurality of objects by using an algorithm or a model based on at least one of the user characteristics of the target user, the object characteristics of each object and the scene characteristics of each scene. Optionally, the object to be recommended is determined from the plurality of objects based on the user characteristics of the target user, the object characteristics of each object, and the scene characteristics of each scene using an algorithm or a model.
In one possible implementation manner, determining an object to be recommended from a plurality of objects based on the reference feature includes: determining index information of each object based on the reference features, wherein the index information of any object is used for representing the matching degree of the target user and any object; and screening the objects to be recommended with index information meeting screening conditions from the multiple objects based on the index information of each object.
In the embodiment of the application, the index information of each object may be determined based on the scene characteristics of each scene, or based on at least one of the user characteristics of the target user, the object characteristics of each object, and the scene characteristics of each scene. The index information of any object may be a probability of 0 or more and 1 or less, or may be 0 or a positive number. The larger the index information of any object is, the higher the matching degree of the representation target user and any object is.
For any object, if the index information of the object is greater than the first threshold, it indicates that the index information of the object meets the screening condition, and the object can be taken as the object to be recommended. If the index information of the object is not larger than the first threshold value, the index information of the object does not meet the screening condition, and the object cannot be used as the object to be recommended. In this way, the objects to be recommended are screened out from the plurality of objects.
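A minimal sketch of this screening step follows; the object identifiers, index values, and threshold are invented for the example.

```python
def screen_objects(index_info: dict, threshold: float) -> list:
    """Keep the objects whose index information exceeds the first threshold."""
    return [obj for obj, score in index_info.items() if score > threshold]

# Illustrative usage: p1 and p3 pass the screening condition.
to_recommend = screen_objects({"p1": 0.92, "p2": 0.31, "p3": 0.77}, threshold=0.5)
```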
In the embodiment of the application, the scene to be recommended can be determined from a plurality of scenes based on the scene features of the scenes, and then the object to be recommended can be determined from a plurality of objects based on the scene features of the scene to be recommended. Or determining a scene to be recommended from a plurality of scenes based on the user characteristics of the target user and the scene characteristics of each scene, and then determining an object to be recommended from a plurality of objects based on the scene characteristics of the scene to be recommended. The method can also be used for determining the scene to be recommended from a plurality of scenes based on the scene characteristics of the scenes, and then determining the object to be recommended from a plurality of objects based on the scene characteristics of the scene to be recommended and the object characteristics of the objects.
In a possible implementation manner, the reference features include a user feature of the target user, an object feature of each object, and a scene feature of each scene, and the determining, based on the reference features, an object to be recommended from a plurality of objects includes: determining a scene to be recommended from a plurality of scenes based on the user characteristics of the target user and the scene characteristics of each scene; and determining the object to be recommended from the plurality of objects based on the scene characteristics of the scene to be recommended and the object characteristics of the objects.
In the embodiment of the application, the index information of each scene is determined based on the user characteristics of the target user and the scene characteristics of each scene, and the index information of any scene represents the matching degree of the target user and any scene. The index information of any scene may be a probability of 0 or more and 1 or less, or may be 0 or a positive number. The larger the index information of any scene is, the higher the matching degree of the representation target user and any scene is.
For any scene, if the index information of the scene is greater than the second threshold, it is indicated that the index information of the scene meets the target condition, and the scene can be taken as a scene to be recommended. If the index information of the scene is not greater than the second threshold, it is indicated that the index information of the scene does not meet the target condition, and the scene cannot be used as a scene to be recommended. By the method, the scenes to be recommended are screened out from the multiple scenes.
Optionally, determining a scene to be recommended from a plurality of scenes based on the user characteristics of the target user and the scene characteristics of each scene, including: acquiring environment information of a plurality of environments, and respectively determining the environment characteristics of each environment based on the environment information of each environment; and determining a scene to be recommended from a plurality of scenes based on the user characteristics of the target user, the scene characteristics of each scene and the environment characteristics of each environment.
The heterogeneous scene hypergraph of the embodiment of the application further comprises a plurality of environment edges, one environment edge represents environment information of one environment, and the environment information comprises at least one item of attribute information such as time, place, and weather. The environment edge may be a closed graph including user nodes, object nodes, and resource nodes.

It can be understood that, since the environment edge and the scene edge can both be closed graphs including the user node, the object node, and the resource node, in order to simplify the heterogeneous scene hypergraph, the environment edge and the scene edge can be represented by using the same closed graph. For example, the closed graph itself is the environment edge, and the color of the closed graph is the scene edge.
Wherein, for any environment, a first feature of the environment may be determined based on the environment information of the environment, and an environment feature of the environment may be determined based on the first feature of the environment. Since the environment information includes at least one piece of attribute information, the first feature of the environment may be expressed as c = h(e_1, e_2, …, e_F), where c denotes the first feature of the environment, h denotes an aggregation function, e_1, e_2, …, e_F respectively denote the features corresponding to the pieces of attribute information, and F denotes the number of pieces of attribute information; the feature corresponding to each piece of attribute information can be determined according to formula (2) mentioned above.
In the embodiment of the present application, the index information of each scene may be determined according to the following formula (8) based on the user characteristics of the target user, the scene characteristics of each scene, and the environment characteristics of each environment.
g(s | c, u) = sum(c ⊙ u ⊙ s)  formula (8)

where g(s | c, u) denotes the index information of a scene, sum denotes the summation function, c denotes the environment feature of an environment, u denotes the user feature of the target user, s denotes the scene feature of a scene, and ⊙ denotes the Hadamard product.
After the index information of each scene is determined, the scene to be recommended is determined from the plurality of scenes based on the index information of each scene. Optionally, the index information of the scenes is sorted in descending order, and the top-ranked scenes are selected as the scenes to be recommended according to formula (9) shown below.

{ŝ_j | j = 1, 2, …, k_s} = top-k_s({g(s | c, u)})  formula (9)

where ŝ_j denotes a scene to be recommended, j is a serial number, k_s denotes the number of scenes to be recommended, and top-k_s denotes sorting the index information of the scenes in descending order and selecting the k_s top-ranked scenes as the scenes to be recommended.
After the scene to be recommended is determined, the index information of each object in the scene to be recommended is determined according to formula (10) shown below based on the scene features of the scene to be recommended and the object features of the objects.

f(p | c, u; ŝ) = sum(c ⊙ u ⊙ ŝ ⊙ p)  formula (10)

where f(p | c, u; ŝ) denotes the index information of an object in the scene to be recommended, sum denotes the summation function, c denotes the environment feature of an environment, u denotes the user feature of the target user, ŝ denotes the scene feature of the scene to be recommended, p denotes the object feature of an object, and ⊙ denotes the Hadamard product. Formula (10) reuses formula (8) and formula (9), and the scene to be recommended can be determined by using formula (8) and formula (9).
After the index information of each object in the scene to be recommended is determined, the object to be recommended is determined from the plurality of objects based on the index information of each object in the scene to be recommended. Optionally, the index information of each object in the scene to be recommended is sorted from large to small, and a plurality of objects before sorting are selected as the objects to be recommended according to a formula (11) shown below.
{p̂_j | j = 1, 2, …, k_p} = top-k_p({f(p | c, u; ŝ)})  formula (11)

where p̂_j denotes an object to be recommended, j is a serial number, k_p denotes the number of objects to be recommended, ŝ denotes any scene to be recommended in the set Ŝ of scenes to be recommended, and top-k_p denotes sorting the index information of the objects in the scene to be recommended ŝ in descending order and selecting the k_p top-ranked objects as the objects to be recommended.
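The two-stage retrieval of formulas (8)-(11) can be sketched as follows; for simplicity the sketch scores all objects under each recommended scene rather than only the objects corresponding to that scene, and the feature dimension, counts, and random inputs are illustrative assumptions.

```python
import numpy as np

def two_stage_recommend(c, u, scene_feats, object_feats, k_s, k_p):
    """Scene-then-object retrieval following formulas (8)-(11).

    c: (d,) environment feature; u: (d,) user feature.
    scene_feats: (S, d) scene features; object_feats: (P, d) object features.
    Returns, for each of the k_s top scenes, the indices of its k_p top objects.
    """
    # Formula (8): g(s | c, u) = sum(c ⊙ u ⊙ s) for every scene.
    g = (scene_feats * (c * u)).sum(axis=1)
    # Formula (9): keep the k_s scenes with the largest index information.
    top_scenes = np.argsort(-g)[:k_s]
    results = {}
    for s_idx in top_scenes:
        s_hat = scene_feats[s_idx]
        # Formula (10): f(p | c, u; ŝ) = sum(c ⊙ u ⊙ ŝ ⊙ p) for every object.
        f = (object_feats * (c * u * s_hat)).sum(axis=1)
        # Formula (11): keep the k_p objects with the largest index information.
        results[int(s_idx)] = np.argsort(-f)[:k_p].tolist()
    return results

rng = np.random.default_rng(2)
recs = two_stage_recommend(rng.normal(size=8), rng.normal(size=8),
                           rng.normal(size=(6, 8)), rng.normal(size=(20, 8)),
                           k_s=2, k_p=3)
```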
In one possible implementation, determining the reference feature based on the reference information includes: acquiring a recommendation model, wherein the recommendation model comprises a feature extraction sub-model and a feature processing sub-model which are sequentially connected; determining, by the feature extraction submodel, reference features based on the reference information; determining an object to be recommended from a plurality of objects based on the reference features, including: and determining the object to be recommended from the plurality of objects based on the reference features by the feature processing submodel.
At least one object to be recommended may be determined from the plurality of objects based on the scene characteristics of each scene, or based on at least one of the user characteristics of the target user, the object characteristics of each object, and the scene characteristics of each scene, using the recommendation model. The recommendation model comprises a feature extraction sub-model and a feature processing sub-model which are connected in sequence. The embodiment of the application does not limit the model structure and the model size of the recommended model.
Optionally, the heterogeneous scene hypergraph is input into the recommendation model, the feature extraction submodel determines the user features of the target user based on the user information of the target user, determines the object features of the objects based on the object information of the objects, and determines the scene features of the scenes based on the scene information of the scenes. And determining the index information of each object by the feature processing submodel based on the user features of the target user, the object features of each object and the scene features of each scene, and screening the objects to be recommended, of which the index information meets the screening conditions, from the multiple objects based on the index information of each object.
The recommendation model can be obtained based on neural network model training. Optionally, a sample heterogeneous scene hypergraph is obtained, which may be recorded as a positive-sample heterogeneous scene hypergraph; the positive-sample heterogeneous scene hypergraph includes user information of a plurality of sample users, object information of a plurality of sample objects, and scene information of a plurality of sample scenes. At least one of a sample user, a sample object, a sample scene, and the like in the positive-sample heterogeneous scene hypergraph is replaced to obtain a negative-sample heterogeneous scene hypergraph. The sample heterogeneous scene hypergraph is similar to the heterogeneous scene hypergraph described above and the implementation principles are similar, so the description is not repeated here.
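A minimal sketch of this negative-sample construction follows; it replaces exactly one element of the triple, whereas the embodiment allows replacing at least one, and the identifiers are invented for the example.

```python
import random

def corrupt_triple(triple, users, objects, scenes, rng=random):
    """Build a negative sample by replacing one element of a
    (scene, object, user) positive triple with a randomly drawn one."""
    scene, obj, user = triple
    slot = rng.choice(["scene", "object", "user"])
    if slot == "scene":
        scene = rng.choice([s for s in scenes if s != scene])
    elif slot == "object":
        obj = rng.choice([o for o in objects if o != obj])
    else:
        user = rng.choice([u for u in users if u != user])
    return (scene, obj, user)

neg = corrupt_triple(("s1", "p1", "u1"),
                     users=["u1", "u2"], objects=["p1", "p2"], scenes=["s1", "s2"])
```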
In the embodiment of the application, the neural network model is used for determining the index information of each sample object corresponding to the positive sample based on the heterogeneous scene hypergraph of the positive sample, and the neural network model is used for determining the index information of each sample object corresponding to the negative sample based on the heterogeneous scene hypergraph of the negative sample. The index information of any sample object is used to characterize the matching degree between the target sample user (one sample user among multiple sample users) and any sample object, where the manner of determining the index information of each sample object is described in step 202 and step 203, and is not described herein again.
Next, the loss value of the neural network model is determined using the index information of each sample object corresponding to the positive and negative samples according to the following formula (12). The neural network model is then trained based on the loss value to obtain the recommendation model.
ŷ = sigmoid(f(p | c, u; s))

L = −(1 / (N⁺ + N⁻)) Σ_j [y_j · log ŷ_j + (1 − y_j) · log(1 − ŷ_j)]  formula (12)

where f(p | c, u; s) denotes the index information of any sample object corresponding to a positive sample or a negative sample, sigmoid denotes the activation function, ŷ denotes the probability of any sample object corresponding to a positive sample or a negative sample, L denotes the loss value of the neural network model, N⁺ denotes the number of sample objects corresponding to the positive samples, N⁻ denotes the number of sample objects corresponding to the negative samples, y_j denotes the labeling information of the j-th sample object, and ŷ_j denotes the probability of the j-th sample object.
And step 204, recommending the object to be recommended to the target user.
In the embodiment of the application, k_s scenes to be recommended can be determined, and for any scene to be recommended, k_p objects to be recommended can be determined. Thus, k_s·k_p objects to be recommended can be recommended to the target user.
The recommendation method of the embodiment of the present application is described above from the perspective of method steps, and will be further described below with reference to fig. 5 and 6.
Referring to fig. 5, fig. 5 is a schematic diagram of a heterogeneous scene hypergraph according to an embodiment of the present application. The heterogeneous scene hypergraph of the embodiment of the application comprises three types of nodes, namely user nodes, object nodes, and resource nodes, and further comprises four types of edges, namely selection edges, publishing edges, environment edges, and scene edges. The selection edge is represented by a thick solid line, the publishing edge is represented by a dotted line, and the environment edge and the scene edge are the same edge, represented by a closed graph.
The closed graph corresponding to environment edge c_1 and scene edge s_1 comprises user node u_1, object node p_1, resource node i_1, resource node i_2, the selection edge between user node u_1 and object node p_1, the publishing edge between object node p_1 and resource node i_1, and the publishing edge between object node p_1 and resource node i_2. The closed graph corresponding to environment edge c_2 and scene edge s_2 comprises user node u_2, object node p_1, resource node i_3, the selection edge between user node u_2 and object node p_1, and the publishing edge between object node p_1 and resource node i_3. The closed graph corresponding to environment edge c_3 and scene edge s_3 comprises user node u_1, object node p_2, resource node i_4, resource node i_5, the publishing edge between object node p_2 and resource node i_4, and the publishing edge between object node p_2 and resource node i_5.
Next, please refer to fig. 6; fig. 6 is a schematic diagram illustrating determining an object to be recommended according to an embodiment of the present application. For the environment c, the features corresponding to each piece of attribute information of the environment c are obtained by looking up the coding table, and the features corresponding to the pieces of attribute information of the environment c are normalized to obtain the feature corresponding to the environment c, which is the first feature of the environment c mentioned above.
Based on the same principle, for the user u, by looking up the coding table and normalizing, the feature corresponding to the user u, that is, the first feature of the user u mentioned above, can be obtained. For the scene s, by looking up the coding table and normalizing, the feature corresponding to the scene s, i.e. the first feature of the scene s mentioned above, can be obtained. For the object p, by looking up the coding table and normalizing, the corresponding feature of the object p, i.e. the first feature of the object p mentioned above, can be obtained. For the resource i, by looking up the coding table and normalizing, the corresponding characteristic of the resource i, i.e. the first characteristic of the resource i mentioned above, can be obtained.
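A minimal sketch of the look-up-and-normalize step follows; L2 normalization is an assumption here, since the embodiment does not specify the normalization, and the table size and dimension are illustrative.

```python
import numpy as np

def lookup_feature(table: np.ndarray, idx: int) -> np.ndarray:
    """Look up an encoding (embedding) table and L2-normalize the row, giving
    the first feature of a user, object, scene, resource, or attribute."""
    v = table[idx]
    return v / (np.linalg.norm(v) + 1e-12)

# Illustrative usage: a coding table for 100 users with feature dimension 8.
rng = np.random.default_rng(3)
user_table = rng.normal(size=(100, 8))
u1 = lookup_feature(user_table, idx=0)
```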
For the environment c, based on the first feature of the environment c, another feature corresponding to the environment c may be determined, which is the above-mentioned environment feature of the environment c.
For the user u, multiplying the first feature of the object p by the first feature of the scene s may determine a feature corresponding to the user u, which is the above-mentioned third feature of the user u. Based on the first feature of the object p alone, another feature corresponding to the user u may be determined, which is the above-mentioned second feature of the user u. Multiplying the first feature of the resource i by the first feature of the scene s may determine yet another feature corresponding to the user u, which is the above-mentioned fourth feature of the user u. Then, after the second feature, the third feature, and the fourth feature of the user u are fused through a multilayer perceptron, the fused features are combined with the first feature of the user u to obtain the user feature of the user u mentioned above.
For the scene s, multiplying the first feature of the user u by the first feature of the scene s may determine a feature corresponding to the scene s, which is the above-mentioned second feature of the scene s. Multiplying the first feature of the object p by the first feature of the scene s may determine another feature corresponding to the scene s, which is the above-mentioned third feature of the scene s. Then, after the second feature and the third feature of the scene s are fused through a multilayer perceptron, the fused features are combined with the first feature of the scene s to obtain the scene feature of the scene s mentioned above.
For the object p, based on the first feature of the user u, a feature corresponding to the object p may be determined, which is the above-mentioned second feature of the object p. Based on the first feature of the resource i, another feature corresponding to the object p may be determined, which is the above-mentioned fourth feature of the object p. Multiplying the first feature of the user u by the first feature of the scene s may determine yet another feature corresponding to the object p, which is the above-mentioned third feature of the object p. Multiplying the first feature of the resource i by the first feature of the scene s may determine a further feature corresponding to the object p, which is the above-mentioned fifth feature of the object p. Then, after the second feature, the third feature, the fourth feature, and the fifth feature of the object p are fused through a multilayer perceptron, the fused features are combined with the first feature of the object p to obtain the object feature of the object p mentioned above.
Next, based on the environment features of the environment c, the user features of the user u, and the scene features of the scene s, a scene s to be recommended is determined from each scene s, and the scene features of the scene s to be recommended are determined. Then, the object p to be recommended is determined from the objects p based on the scene features of the scene s to be recommended and the object features of the objects p. The method for determining the scene to be recommended and the object to be recommended is described above with reference to step 203, and is not described herein again.
In the embodiment of the present application, 8 days of interactive behaviors are obtained, and a positive sample can be obtained based on each interactive behavior, where one positive sample comprises a user, an object, and a scene, that is, a scene-object-user triple. At least one of the user, the object, and the scene in a positive sample is replaced to obtain a negative sample.
In the embodiment of the present application, 4 data sets are constructed using the 8 days of positive and negative samples, the 4 data sets are respectively recorded as 1 day, 3 days, 5 days, and 7 days, and each data set comprises a training set and a test set. The training set corresponding to n days (where n is 1, 3, 5, or 7) is constructed using the positive and negative samples of days 1 to n, and the test set corresponding to n days is constructed using the positive and negative samples of day n+1. For example, the training set corresponding to 5 days is constructed using the positive and negative samples of days 1-5, and the test set corresponding to 5 days is constructed using the positive and negative samples of day 6.
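The rolling construction of the data sets can be sketched as follows; the sample contents are placeholders invented for the example.

```python
def build_dataset(samples_by_day: dict, n: int):
    """Train on days 1..n, test on day n+1 (n in {1, 3, 5, 7})."""
    train = [s for day in range(1, n + 1) for s in samples_by_day[day]]
    test = list(samples_by_day[n + 1])
    return train, test

# e.g. the "5 days" data set: train on days 1-5, test on day 6.
samples_by_day = {d: [f"sample-{d}-{i}" for i in range(3)] for d in range(1, 9)}
train_5d, test_5d = build_dataset(samples_by_day, n=5)
```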
The number of positive samples, the number of negative samples, the number of users, the number of objects, the number of scene-object-user triples, and the number of scenes in each training set and each test set are counted, and, for each data set, the numbers of newly added users, newly added objects, newly added scene-object-user triples, and newly added scenes in the test set compared with the training set are counted, so as to obtain Table 1 shown below.
TABLE 1

(Table 1 is rendered as an image in the original publication; it lists, for each training set and test set, the statistics described above.)
For each training set in Table 1, training is performed using 50%, 75%, and 100% of the training set respectively to obtain a recommendation model, the recommendation objects for the users in each test set are output by the recommendation model, and the Area Under the Curve (AUC) is calculated using the output of the recommendation model and the test set, giving the AUC value corresponding to the object recommendation method of the embodiment of the present application (recorded as "the method"). Meanwhile, the AUC values of other methods are calculated in the same way, including the deep factorization machine (DeepFM) and DeepFM_S, Automatic Feature Interaction Learning based on the self-attention neural network (AutoInt) and AutoInt_S, the neighbor-based end-to-end interaction model for recommendation (NIRec) and NIRec_S, the heterogeneous graph network for intent recommendation (MEIRec), the Hierarchical Attention Network (HAN), the Heterogeneous Graph Attention Network (HGAT), the self-attention based graph neural network for hypergraphs (Hyper-SAGNN), and Hyper-SAGNN_S. The method is compared with these other methods to obtain the improvement (in %) of the method, as shown in Table 2 below.
TABLE 2

(Table 2 is rendered as images in the original publication; it reports the AUC value of each method on each data set and the improvement of the method.)
As can be seen from Table 2, compared with the other methods, the method achieves an improvement in AUC value, which indicates that the recommendation method of the embodiment of the present application can improve accuracy.
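For reference, the AUC computation can be reproduced with a standard library call; the labels and scores below are invented for the example.

```python
from sklearn.metrics import roc_auc_score

# labels: 1 for positive samples, 0 for negative; scores: model outputs ŷ.
labels = [1, 0, 1, 1, 0]
scores = [0.91, 0.35, 0.62, 0.78, 0.44]
auc = roc_auc_score(labels, scores)  # area under the ROC curve
```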
In addition, the recall indexes of the method, HAN, and HGAT are calculated respectively. The recall index may be the HR@K index, exemplified here by the HR@K indices HR@10-S and HR@100-P, as shown in Table 3 below.
TABLE 3

(Table 3 is rendered as an image in the original publication; it reports the HR@10-S and HR@100-P values of the method, HAN, and HGAT.)
As can be seen from Table 3, the HR@10-S and HR@100-P of the method are both larger compared with HAN and HGAT, indicating that the method of the embodiment of the present application has higher accuracy.
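The HR@K index can be sketched generically as follows; how the S and P variants group scenes and objects is not detailed here, so the sketch shows only the generic hit-rate computation with invented inputs.

```python
def hit_rate_at_k(ranked_lists, ground_truth, k: int) -> float:
    """HR@K: fraction of test cases whose ground-truth item appears in the
    top-K of the model's ranked list."""
    hits = sum(1 for ranks, truth in zip(ranked_lists, ground_truth)
               if truth in ranks[:k])
    return hits / len(ground_truth)

# Illustrative usage: the first case hits, the second misses -> 0.5.
hr10 = hit_rate_at_k([["p3", "p1", "p7"], ["p2", "p9", "p4"]],
                     ["p1", "p5"], k=10)
```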
The method determines the object to be recommended from the plurality of objects based on the scene features of each scene, or based on at least one of the user features of the target user, the object features of each object, and the scene features of each scene. The object to be recommended is thus determined based on the scene information, or based on the scene information integrated with at least one of the user information and the object information, which improves accuracy and in turn increases the duration, frequency, and the like of the user's use of the application program.
Fig. 7 is a schematic structural diagram of an object recommendation apparatus according to an embodiment of the present application, and as shown in fig. 7, the apparatus includes:
an obtaining module 701, configured to obtain reference information, where the reference information includes scene information of multiple scenes, or the reference information includes at least one of user information of a target user, object information of multiple objects, and scene information of multiple scenes, where a scene represents a type of an interactive behavior, and the interactive behavior is a behavior of a user selecting a resource issued by an object in an environment;
a determining module 702, configured to determine reference features based on the reference information, where the reference features include scene features of the respective scenes, or the reference features include at least one of user features of the target user, object features of the respective objects, and scene features of the respective scenes;
a determining module 702, configured to determine an object to be recommended from a plurality of objects based on the reference features;
a recommending module 703, configured to recommend the object to be recommended to the target user.
In a possible implementation manner, the reference information includes user information of the target user, and the determining module 702 is configured to determine the first feature of the target user based on the user information of the target user; acquiring at least one piece of first associated information, and determining a second characteristic of the target user based on the at least one piece of first associated information, wherein any piece of first associated information comprises object information of any object selected by the target user; based on the first characteristic of the target user and the second characteristic of the target user, a user characteristic of the target user is determined.
In a possible implementation manner, the reference information includes user information of the target user, and the determining module 702 is configured to determine the first feature of the target user based on the user information of the target user; acquiring at least one piece of second associated information, and determining a third feature of the target user based on the at least one piece of second associated information, wherein any piece of second associated information comprises scene information of any scene and object information of any object when the target user selects any object in any scene; and determining the user characteristics of the target user based on the first characteristics of the target user and the third characteristics of the target user.
In a possible implementation manner, the reference information includes user information of the target user, and the determining module 702 is configured to determine the first feature of the target user based on the user information of the target user; acquiring at least one piece of third association information, and determining a fourth feature of the target user based on the at least one piece of third association information, wherein any piece of third association information comprises scene information of any scene and resource information of any resource when the target user selects any resource in any scene; and determining the user characteristics of the target user based on the first characteristics of the target user and the fourth characteristics of the target user.
In a possible implementation, the reference information includes object information of a plurality of objects, and the determining module 702 is configured to determine, for any one object, a first feature of any one object based on the object information of any one object; acquiring at least one piece of fourth associated information corresponding to any one object, and determining a second feature of any one object based on the at least one piece of fourth associated information corresponding to any one object, wherein any one piece of fourth associated information corresponding to any one object comprises user information of any one user when any one object is selected by any one user; an object feature of any one of the objects is determined based on the first feature of any one of the objects and the second feature of any one of the objects.
In a possible implementation, the reference information includes object information of a plurality of objects, and the determining module 702 is configured to determine, for any one object, a first feature of any one object based on the object information of any one object; acquiring at least one piece of fifth associated information corresponding to any one object, and determining a third feature of any one object based on the at least one piece of fifth associated information corresponding to any one object, wherein any one piece of fifth associated information corresponding to any one object comprises scene information of any one scene and user information of any one user when any one object is selected by any one user in any one scene; an object feature of any one of the objects is determined based on the first feature of any one of the objects and the third feature of any one of the objects.
In a possible implementation, the reference information includes object information of a plurality of objects, and the determining module 702 is configured to determine, for any one object, a first feature of any one object based on the object information of any one object; acquiring at least one piece of sixth associated information corresponding to any one object, and determining the fourth feature of any one object based on the at least one piece of sixth associated information corresponding to any one object, wherein any one piece of sixth associated information corresponding to any one object comprises resource information of any one resource issued by any one object; an object feature of any one of the objects is determined based on the first feature of any one of the objects and the fourth feature of any one of the objects.
In a possible implementation, the reference information includes object information of a plurality of objects, and the determining module 702 is configured to determine, for any one object, a first feature of any one object based on the object information of any one object; acquiring at least one piece of seventh associated information corresponding to any one object, and determining a fifth feature of any one object based on the at least one piece of seventh associated information corresponding to any one object, wherein any one piece of seventh associated information corresponding to any one object comprises scene information of any one scene and resource information of any one resource when any one object releases any one resource in any one scene; an object feature of any one of the objects is determined based on the first feature of any one of the objects and the fifth feature of any one of the objects.
In a possible implementation manner, the reference information includes scene information of a plurality of scenes, and the determining module 702 is configured to determine, for any one of the scenes, a first feature of any one of the scenes based on the scene information of any one of the scenes; acquiring at least one eighth associated information corresponding to any one scene, and determining a second feature of any one scene based on the at least one eighth associated information corresponding to any one scene, wherein any one eighth associated information corresponding to any one scene comprises scene information of any one scene and user information of any one user corresponding to any one scene; scene features of any one scene are determined based on the first features of any one scene and the second features of any one scene.
In a possible implementation manner, the reference information includes scene information of a plurality of scenes, and the determining module 702 is configured to determine, for any one of the scenes, a first feature of any one of the scenes based on the scene information of any one of the scenes; acquiring at least one ninth associated information corresponding to any one scene, and determining a third feature of any one scene based on the at least one ninth associated information corresponding to any one scene, wherein any one ninth associated information corresponding to any one scene comprises scene information of any one scene and object information of any one object corresponding to any one scene; scene features of any one scene are determined based on the first features of any one scene and the third features of any one scene.
In one possible implementation, the reference features include user features of the target user, object features of each object, and scene features of each scene; a determining module 702, configured to determine a scene to be recommended from multiple scenes based on a user characteristic of a target user and a scene characteristic of each scene; and determining the object to be recommended from the plurality of objects based on the scene characteristics of the scene to be recommended and the object characteristics of the objects.
In a possible implementation manner, the determining module 702 is configured to obtain environment information of multiple environments, and determine the environment characteristics of each environment based on the environment information of each environment; and determining a scene to be recommended from a plurality of scenes based on the user characteristics of the target user, the scene characteristics of each scene and the environment characteristics of each environment.
In a possible implementation manner, the determining module 702 is configured to determine index information of each object based on the reference features, where the index information of any object is used to represent a matching degree between a target user and any object; and screening the objects to be recommended with index information meeting screening conditions from the multiple objects based on the index information of each object.
In a possible implementation manner, the determining module 702 is configured to obtain a recommendation model, where the recommendation model includes a feature extraction sub-model and a feature processing sub-model that are sequentially connected; determining, by the feature extraction submodel, reference features based on the reference information; and determining the object to be recommended from the plurality of objects based on the reference features by the feature processing submodel.
The device determines the object to be recommended from the plurality of objects based on the scene features of each scene, or based on at least one of the user features of the target user, the object features of each object, and the scene features of each scene. The object to be recommended is thus determined based on the scene information, or based on the scene information integrated with at least one of the user information and the object information, which improves accuracy and in turn increases the duration, frequency, and the like of the user's use of the application program.
It should be understood that, when the apparatus provided in fig. 7 implements its functions, it is only illustrated by the division of the functional modules, and in practical applications, the above functions may be distributed by different functional modules according to needs, that is, the internal structure of the apparatus is divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus and method embodiments provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments for details, which are not described herein again.
Fig. 8 shows a block diagram of a terminal device 800 according to an exemplary embodiment of the present application. The terminal device 800 may be a portable mobile terminal such as: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal device 800 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, etc.
In general, the terminal device 800 includes: a processor 801 and a memory 802.
The processor 801 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so forth. The processor 801 may be implemented in at least one hardware form of a DSP (digital signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 801 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 801 may be integrated with a GPU (Graphics Processing Unit) which is responsible for rendering and drawing the content required to be displayed by the display screen. In some embodiments, the processor 801 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 802 may include one or more computer-readable storage media, which may be non-transitory. Memory 802 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 802 is used to store at least one instruction for execution by processor 801 to implement the object recommendation method provided by method embodiments herein.
In some embodiments, the terminal device 800 may further include: a peripheral interface 803 and at least one peripheral. The processor 801, memory 802 and peripheral interface 803 may be connected by bus or signal lines. Various peripheral devices may be connected to peripheral interface 803 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 804, a display screen 805, a camera assembly 806, an audio circuit 807, a positioning assembly 808, and a power supply 809.
The peripheral interface 803 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 801 and the memory 802. In some embodiments, the processor 801, memory 802, and peripheral interface 803 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 801, the memory 802, and the peripheral interface 803 may be implemented on separate chips or circuit boards, which are not limited by this embodiment.
The Radio Frequency circuit 804 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 804 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 804 converts an electrical signal into an electromagnetic signal to be transmitted, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 804 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 804 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 804 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 805 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 805 is a touch display, the display 805 also has the ability to capture touch signals on or above the surface of the display 805. The touch signal may be input to the processor 801 as a control signal for processing. At this point, the display 805 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 805 may be one, and is disposed on the front panel of the terminal device 800; in other embodiments, the number of the display screens 805 may be at least two, and the at least two display screens are respectively disposed on different surfaces of the terminal device 800 or are in a folding design; in other embodiments, the display 805 may be a flexible display, disposed on a curved surface or a folded surface of the terminal device 800. Even further, the display 805 may be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The Display 805 can be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and other materials.
The camera assembly 806 is used to capture images or video. Optionally, camera assembly 806 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 806 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 807 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 801 for processing or inputting the electric signals to the radio frequency circuit 804 to realize voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different positions of the terminal device 800. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 801 or the radio frequency circuit 804 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 807 may also include a headphone jack.
The positioning component 808 is used to locate the current geographic Location of the terminal device 800 to implement navigation or LBS (Location Based Service). The Positioning component 808 may be a Positioning component based on the Global Positioning System (GPS) in the united states, the beidou System in china, or the galileo System in russia.
The power supply 809 is used to supply power to various components in the terminal device 800. The power supply 809 can be ac, dc, disposable or rechargeable. When the power supply 809 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal device 800 also includes one or more sensors 810. The one or more sensors 810 include, but are not limited to: acceleration sensor 811, gyro sensor 812, pressure sensor 813, optical sensor 814, and proximity sensor 815.
The acceleration sensor 811 can detect the magnitude of acceleration in three coordinate axes of the coordinate system established with the terminal apparatus 800. For example, the acceleration sensor 811 may be used to detect the components of the gravitational acceleration in three coordinate axes. The processor 801 may control the display 805 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 811. The acceleration sensor 811 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 812 may detect a body direction and a rotation angle of the terminal device 800, and the gyro sensor 812 may cooperate with the acceleration sensor 811 to acquire a 3D motion of the user on the terminal device 800. From the data collected by the gyro sensor 812, the processor 801 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 813 may be disposed on the side bezel of terminal device 800 and/or underneath display screen 805. When the pressure sensor 813 is arranged on the side frame of the terminal device 800, the holding signal of the user to the terminal device 800 can be detected, and the processor 801 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 813. When the pressure sensor 813 is disposed at a lower layer of the display screen 805, the processor 801 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 805. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The optical sensor 814 is used to collect the ambient light intensity. In one embodiment, the processor 801 may control the display brightness of the display 805 based on the ambient light intensity collected by the optical sensor 814. Specifically, when the ambient light intensity is high, the display brightness of the display screen 805 is increased; when the ambient light intensity is low, the display brightness of the display 805 is reduced. In another embodiment, processor 801 may also dynamically adjust the shooting parameters of camera head assembly 806 based on the ambient light intensity collected by optical sensor 814.
The proximity sensor 815, also called a distance sensor, is generally provided on the front panel of the terminal apparatus 800. The proximity sensor 815 is used to collect the distance between the user and the front surface of the terminal device 800. In one embodiment, when the proximity sensor 815 detects that the distance between the user and the front surface of the terminal device 800 gradually decreases, the processor 801 controls the display 805 to switch from the bright screen state to the dark screen state; when the proximity sensor 815 detects that the distance between the user and the front surface of the terminal device 800 is gradually increased, the processor 801 controls the display 805 to switch from the breath-screen state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 8 is not limiting of terminal device 800 and may include more or fewer components than shown, or may combine certain components, or may employ a different arrangement of components.
Fig. 9 is a schematic structural diagram of a server provided in an embodiment of the present application. The server 900 may vary considerably in configuration or performance, and may include one or more processors 901 (for example, CPUs) and one or more memories 902, where the one or more memories 902 store at least one program code, and the at least one program code is loaded and executed by the one or more processors 901 to implement the object recommendation method provided by the foregoing method embodiments. Certainly, the server 900 may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for input and output, and the server 900 may also include other components for implementing device functions, which are not described herein again.
In an exemplary embodiment, there is also provided a computer-readable storage medium having at least one program code stored therein, the at least one program code being loaded and executed by a processor to cause an electronic device to implement any one of the object recommendation methods described above.
Alternatively, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, there is also provided a computer program or a computer program product having at least one computer instruction stored therein, the at least one computer instruction being loaded and executed by a processor to cause a computer to implement any one of the object recommendation methods described above.
It should be understood that reference to "a plurality" herein means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The above description is only exemplary of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like that are made within the principles of the present application should be included in the protection scope of the present application.

Claims (18)

1. An object recommendation method, characterized in that the method comprises:
acquiring reference information, wherein the reference information comprises scene information of a plurality of scenes, or the reference information comprises at least one of user information of a target user, object information of a plurality of objects, and scene information of the plurality of scenes, the scenes represent types of interactive behaviors, and an interactive behavior is a behavior of a user selecting, in an environment, a resource issued by an object;
determining reference features based on the reference information, wherein the reference features comprise scene features of each scene, or the reference features comprise at least one of user features of the target user, object features of each object and scene features of each scene;
determining an object to be recommended from the plurality of objects based on the reference features;
and recommending the object to be recommended to the target user.
2. The method of claim 1, wherein the reference information comprises the user information of the target user, and wherein the determining reference features based on the reference information comprises:
determining a first feature of the target user based on the user information of the target user;
acquiring at least one piece of first associated information, and determining a second feature of the target user based on the at least one piece of first associated information, wherein any piece of first associated information comprises object information of any object selected by the target user;
determining the user feature of the target user based on the first feature of the target user and the second feature of the target user.
3. The method of claim 1, wherein the reference information comprises the user information of the target user, and wherein the determining reference features based on the reference information comprises:
determining a first feature of the target user based on the user information of the target user;
acquiring at least one piece of second associated information, and determining a third feature of the target user based on the at least one piece of second associated information, wherein any piece of second associated information comprises scene information of any scene and object information of any object selected by the target user in the scene;
determining the user feature of the target user based on the first feature of the target user and the third feature of the target user.
4. The method of claim 1, wherein the reference information comprises the user information of the target user, and wherein the determining reference features based on the reference information comprises:
determining a first feature of the target user based on the user information of the target user;
acquiring at least one piece of third associated information, and determining a fourth feature of the target user based on the at least one piece of third associated information, wherein any piece of third associated information comprises scene information of any scene and resource information of any resource selected by the target user in the scene;
determining the user feature of the target user based on the first feature of the target user and the fourth feature of the target user.
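Claims 2 to 4 share one pattern: a first feature derived from the user's own information is fused with a feature pooled from associated records (selected objects in claim 2, scene-object pairs in claim 3, scene-resource pairs in claim 4). A minimal sketch of that fusion follows; mean-pooling and concatenation are assumed operators, as the claims do not fix a particular fusion.

```python
import numpy as np

def fuse_user_feature(first_feature, associated_features):
    """first_feature: vector from the user's own information.
    associated_features: one vector per associated record, e.g. per object
    the target user selected (claim 2) or per scene/object or scene/resource
    pair (claims 3 and 4)."""
    pooled = np.mean(np.stack(associated_features), axis=0)  # aggregate history
    return np.concatenate([first_feature, pooled])           # fused user feature
```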
5. The method of claim 1, wherein the reference information comprises the object information of the plurality of objects, and wherein the determining reference features based on the reference information comprises:
for any object, determining a first feature of the object based on the object information of the object;
acquiring at least one piece of fourth associated information corresponding to the object, and determining a second feature of the object based on the at least one piece of fourth associated information, wherein any piece of fourth associated information corresponding to the object comprises user information of any user who has selected the object;
determining the object feature of the object based on the first feature of the object and the second feature of the object.
6. The method of claim 1, wherein the reference information comprises the object information of the plurality of objects, and wherein the determining reference features based on the reference information comprises:
for any object, determining a first feature of the object based on the object information of the object;
acquiring at least one piece of fifth associated information corresponding to the object, and determining a third feature of the object based on the at least one piece of fifth associated information, wherein any piece of fifth associated information corresponding to the object comprises scene information of any scene and user information of any user who has selected the object in the scene;
determining the object feature of the object based on the first feature of the object and the third feature of the object.
7. The method of claim 1, wherein the reference information comprises the object information of the plurality of objects, and wherein the determining reference features based on the reference information comprises:
for any object, determining a first feature of the object based on the object information of the object;
acquiring at least one piece of sixth associated information corresponding to the object, and determining a fourth feature of the object based on the at least one piece of sixth associated information, wherein any piece of sixth associated information corresponding to the object comprises resource information of any resource issued by the object;
determining the object feature of the object based on the first feature of the object and the fourth feature of the object.
8. The method of claim 1, wherein the reference information comprises the object information of the plurality of objects, and wherein the determining reference features based on the reference information comprises:
for any object, determining a first feature of the object based on the object information of the object;
acquiring at least one piece of seventh associated information corresponding to the object, and determining a fifth feature of the object based on the at least one piece of seventh associated information, wherein any piece of seventh associated information corresponding to the object comprises scene information of any scene and resource information of any resource issued by the object in the scene;
determining the object feature of the object based on the first feature of the object and the fifth feature of the object.
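Claims 5 to 8 mirror the same construction on the object side; in the scene-aware variants (claims 6 and 8) each associated record additionally carries a scene feature, which can be folded in before pooling. A sketch under those assumptions:

```python
import numpy as np

def fuse_object_feature(first_feature, records):
    """records: list of (scene_feature, partner_feature) pairs, where the
    partner is a user who selected the object (claims 5-6) or a resource
    the object issued (claims 7-8); the scene-free variants may pass a
    zero vector for the scene part."""
    joined = [np.concatenate([scene, partner]) for scene, partner in records]
    pooled = np.mean(np.stack(joined), axis=0)
    return np.concatenate([first_feature, pooled])  # fused object feature
```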
9. The method of claim 1, wherein the reference information comprises the scene information of the plurality of scenes, and wherein the determining reference features based on the reference information comprises:
for any scene, determining a first feature of the scene based on the scene information of the scene;
acquiring at least one piece of eighth associated information corresponding to the scene, and determining a second feature of the scene based on the at least one piece of eighth associated information, wherein any piece of eighth associated information corresponding to the scene comprises the scene information of the scene and user information of any user corresponding to the scene;
determining the scene feature of the scene based on the first feature of the scene and the second feature of the scene.
10. The method of claim 1, wherein the reference information comprises the scene information of the plurality of scenes, and wherein the determining reference features based on the reference information comprises:
for any scene, determining a first feature of the scene based on the scene information of the scene;
acquiring at least one piece of ninth associated information corresponding to the scene, and determining a third feature of the scene based on the at least one piece of ninth associated information, wherein any piece of ninth associated information corresponding to the scene comprises the scene information of the scene and object information of any object corresponding to the scene;
determining the scene feature of the scene based on the first feature of the scene and the third feature of the scene.
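Claims 9 and 10 build the scene feature the same way, pooling over the users and objects associated with the scene; read together, they amount to neighbor aggregation on a user-object-scene interaction graph. The sketch below merges both claims for brevity; the pooling operator is again an assumption.

```python
import numpy as np

def fuse_scene_feature(first_feature, user_features, object_features):
    """user_features: vectors of users associated with the scene (claim 9);
    object_features: vectors of objects associated with the scene (claim 10)."""
    user_part = np.mean(np.stack(user_features), axis=0)
    object_part = np.mean(np.stack(object_features), axis=0)
    return np.concatenate([first_feature, user_part, object_part])
```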
11. The method according to any one of claims 1 to 10, wherein the reference features comprise the user feature of the target user, the object features of the respective objects, and the scene features of the respective scenes; and the determining an object to be recommended from the plurality of objects based on the reference features comprises:
determining a scene to be recommended from the plurality of scenes based on the user feature of the target user and the scene features of the respective scenes;
and determining the object to be recommended from the plurality of objects based on the scene feature of the scene to be recommended and the object features of the respective objects.
12. The method according to claim 11, wherein the determining a scene to be recommended from the plurality of scenes based on the user feature of the target user and the scene features of the respective scenes comprises:
acquiring environment information of a plurality of environments, and determining an environment feature of each environment based on the environment information of that environment;
and determining the scene to be recommended from the plurality of scenes based on the user feature of the target user, the scene features of the respective scenes, and the environment features of the respective environments.
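For illustration, claims 11 and 12 describe a two-stage selection: first pick the scene that best matches the user (optionally conditioned on environment features, per claim 12), then pick the object that best matches that scene. A minimal sketch, with dot-product matching and additive environment fusion as assumptions:

```python
import numpy as np

def two_stage_recommend(user_feat, scene_feats, object_feats, env_feats=None):
    if env_feats is not None:                     # claim 12: fold in environments
        user_feat = user_feat + np.mean(np.stack(env_feats), axis=0)
    scene_idx = int(np.argmax(scene_feats @ user_feat))   # scene to be recommended
    object_idx = int(np.argmax(object_feats @ scene_feats[scene_idx]))
    return scene_idx, object_idx                  # object to be recommended
```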
13. The method according to any one of claims 1 to 10, wherein the determining an object to be recommended from the plurality of objects based on the reference features comprises:
determining index information of each object based on the reference features, wherein the index information of any object is used for representing a degree of matching between the target user and the object;
and screening, from the plurality of objects based on the index information of each object, an object whose index information meets a screening condition as the object to be recommended.
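One plausible reading of claim 13 is sketched below, with a sigmoid-of-dot-product index and a threshold-plus-top-k screening condition; both are assumptions of this sketch rather than claim language.

```python
import numpy as np

def screen_objects(user_feat, object_feats, threshold=0.5, top_k=10):
    index = 1.0 / (1.0 + np.exp(-(object_feats @ user_feat)))  # matching degree per object
    passing = np.flatnonzero(index >= threshold)                # screening condition
    return passing[np.argsort(index[passing])[::-1][:top_k]]   # objects to recommend
```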
14. The method according to any one of claims 1 to 10, wherein the determining reference features based on the reference information comprises:
acquiring a recommendation model, wherein the recommendation model comprises a feature extraction sub-model and a feature processing sub-model connected in sequence;
determining, by the feature extraction sub-model, the reference features based on the reference information;
and the determining an object to be recommended from the plurality of objects based on the reference features comprises:
determining, by the feature processing sub-model, the object to be recommended from the plurality of objects based on the reference features.
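Claim 14's recommendation model, with a feature extraction sub-model feeding a feature processing sub-model, maps naturally onto a two-part neural network. A hedged PyTorch sketch; the embedding and MLP layers and all sizes are assumptions, not the claimed architecture.

```python
import torch
from torch import nn

class RecommendationModel(nn.Module):
    """Feature extraction sub-model and feature processing sub-model
    connected in sequence, as in claim 14 (layer choices are assumed)."""
    def __init__(self, vocab_size=1000, dim=32):
        super().__init__()
        self.feature_extraction = nn.Embedding(vocab_size, dim)  # reference info -> features
        self.feature_processing = nn.Sequential(                 # features -> per-object index
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, reference_ids):
        features = self.feature_extraction(reference_ids)
        return self.feature_processing(features).squeeze(-1)

scores = RecommendationModel()(torch.tensor([3, 17, 256]))  # one index per candidate object
```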
15. An object recommendation apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire reference information, wherein the reference information comprises scene information of a plurality of scenes, or the reference information comprises at least one of user information of a target user, object information of a plurality of objects, and scene information of the plurality of scenes, the scenes represent types of interactive behaviors, and the interactive behaviors are behaviors of a user selecting, in an environment, resources issued by the objects;
a determining module, configured to determine reference features based on the reference information, wherein the reference features comprise scene features of the respective scenes, or the reference features comprise at least one of a user feature of the target user, object features of the respective objects, and scene features of the respective scenes;
the determining module being further configured to determine an object to be recommended from the plurality of objects based on the reference features;
and a recommending module, configured to recommend the object to be recommended to the target user.
16. An electronic device, comprising a processor and a memory, wherein at least one program code is stored in the memory, and the at least one program code is loaded and executed by the processor to cause the electronic device to implement the object recommendation method according to any one of claims 1 to 14.
17. A computer-readable storage medium having at least one program code stored therein, the at least one program code being loaded and executed by a processor to cause a computer to implement the object recommendation method of any one of claims 1 to 14.
18. A computer program product having at least one computer instruction stored therein, the at least one computer instruction being loaded and executed by a processor to cause a computer to implement the object recommendation method of any of claims 1 to 14.
CN202111628802.3A 2021-12-28 2021-12-28 Object recommendation method, object recommendation device, electronic equipment and storage medium Pending CN114297493A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111628802.3A CN114297493A (en) 2021-12-28 2021-12-28 Object recommendation method, object recommendation device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111628802.3A CN114297493A (en) 2021-12-28 2021-12-28 Object recommendation method, object recommendation device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114297493A (en) 2022-04-08

Family

ID=80971420

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111628802.3A Pending CN114297493A (en) 2021-12-28 2021-12-28 Object recommendation method, object recommendation device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114297493A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115270686A (en) * 2022-06-24 2022-11-01 无锡芯光互连技术研究院有限公司 Chip layout method based on graph neural network


Similar Documents

Publication Publication Date Title
CN110097019B (en) Character recognition method, character recognition device, computer equipment and storage medium
CN110471858B (en) Application program testing method, device and storage medium
CN111506758B (en) Method, device, computer equipment and storage medium for determining article name
CN109784351B (en) Behavior data classification method and device and classification model training method and device
CN111931877B (en) Target detection method, device, equipment and storage medium
CN108320756B (en) Method and device for detecting whether audio is pure music audio
CN111897996A (en) Topic label recommendation method, device, equipment and storage medium
CN111738365B (en) Image classification model training method and device, computer equipment and storage medium
CN112052354A (en) Video recommendation method, video display method and device and computer equipment
CN114547428A (en) Recommendation model processing method and device, electronic equipment and storage medium
CN110942046A (en) Image retrieval method, device, equipment and storage medium
CN112131473B (en) Information recommendation method, device, equipment and storage medium
CN113886609A (en) Multimedia resource recommendation method and device, electronic equipment and storage medium
CN114117206A (en) Recommendation model processing method and device, electronic equipment and storage medium
CN112053360B (en) Image segmentation method, device, computer equipment and storage medium
CN114691860A (en) Training method and device of text classification model, electronic equipment and storage medium
CN111782950A (en) Sample data set acquisition method, device, equipment and storage medium
CN111563201A (en) Content pushing method, device, server and storage medium
CN113343709B (en) Method for training intention recognition model, method, device and equipment for intention recognition
CN114741602A (en) Object recommendation method, and training method, device and equipment of target model
CN114817709A (en) Sorting method, device, equipment and computer readable storage medium
CN114297493A (en) Object recommendation method, object recommendation device, electronic equipment and storage medium
CN113139614A (en) Feature extraction method and device, electronic equipment and storage medium
CN114764480A (en) Group type identification method and device, computer equipment and medium
CN112287193A (en) Data clustering method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination