CN117972220B - Item recommendation method, item recommendation device, and computer-readable storage medium - Google Patents
- Publication number: CN117972220B
- Application number: CN202410384820.9A
- Authority: CN (China)
- Prior art keywords: user, item, matrix, representing, social
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F16/9535—Search customisation based on user profiles and personalisation
- G06F16/9536—Search customisation based on social or collaborative filtering
- G06N3/042—Knowledge-based neural networks; Logical representations of neural networks
- G06N3/045—Combinations of networks
- G06N3/0464—Convolutional networks [CNN, ConvNet]
- G06N3/08—Learning methods
- G06Q30/0631—Item recommendations
Abstract
Embodiments of the present disclosure provide an item recommendation method, an item recommendation device, and a computer-readable storage medium. The item recommendation method includes the following steps: obtaining a user social embedding matrix from the historical interaction information between users and items; adding the user social embedding matrix and the user collaborative embedding matrix element-wise to generate a user fusion embedding matrix; inputting the user fusion embedding matrix, the item collaborative embedding matrix, and the user-item interaction graph into a lightweight graph convolution network to generate a user fusion global embedded representation and an item global embedded representation; using a gate mechanism to balance the weights of the user's embedded representations under the two domains to obtain a user global embedded representation; performing contrastive learning in both a data-augmentation mode and a non-data-augmentation mode to determine a first loss function and a second loss function; performing iterative training based on the first, second, and recommendation loss functions to generate an item recommendation model from the user global embedded representation and the item global embedded representation; and recommending items of interest to the user based on the item recommendation model.
Description
Technical Field
Embodiments of the present disclosure relate to the field of electronic digital data processing technology, and in particular, to an item recommendation method, an item recommendation device, and a computer-readable storage medium.
Background
With the development of the internet, the amount of available information has grown exponentially, and recommendation systems emerged to screen content of interest to users out of this massive volume of information. The collaborative filtering algorithm proposed in 1992 marked the beginning of recommendation systems.
As recommendation systems developed, collaborative filtering based on graph neural networks became their core. The idea of collaborative filtering is essentially to enhance a user's representation with the items the user has interacted with, and to enhance an item's representation with the users who have interacted with it. Most graph collaborative filtering algorithms focus on how collaborative signals are aggregated. However, graph neural networks have an inherent drawback: as the number of propagation layers increases, an over-smoothing phenomenon occurs, degrading the quality of the recommendations. In addition, increasing the number of layers increases the time complexity. Moreover, the user-user and item-item relationships captured this way are only first-order, which leads to insufficient mining of the information in the user-item interaction graph.
In addition, recommendation systems need to be trained with abundant labeled data (i.e., user-item interactions). However, recommendation algorithms often face a data sparsity problem: user-item interaction records are typically very sparse, which makes models prone to overfitting and poor generalization.
Disclosure of Invention
Embodiments described herein provide an item recommendation method, an item recommendation device, and a computer-readable storage medium storing a computer program.
According to a first aspect of the present disclosure, an item recommendation method is provided. The item recommendation method includes the following steps: establishing a social network of users according to the historical interaction information of users with items to obtain a user social embedding matrix, where the user social embedding matrix is the embedded representation of the users under the social domain; performing element-wise addition of the user social embedding matrix and the user collaborative embedding matrix to generate a user fusion embedding matrix, where the user collaborative embedding matrix is the embedded representation of the users under the collaborative domain; inputting the user fusion embedding matrix, the item collaborative embedding matrix, and the user-item interaction graph into a lightweight graph convolution network to generate a user fusion global embedded representation and an item global embedded representation, where the item collaborative embedding matrix is the embedded representation of the items under the collaborative domain; using a gate mechanism to balance the weights of the user's embedded representations under the collaborative and social domains to obtain a user global embedded representation; performing contrastive learning in a data-augmentation mode using the user fusion embedding matrix and the item collaborative embedding matrix to determine a first loss function; performing contrastive learning in a non-data-augmentation mode using the user fusion global embedded representation and the item global embedded representation to determine a second loss function; performing iterative training based on the first loss function, the second loss function, and a recommendation loss function to generate an item recommendation model from the user global embedded representation and the item global embedded representation; and recommending items of interest to the user based on the item recommendation model.
In some embodiments of the present disclosure, establishing a social network of users from the historical interaction information of users with items to obtain a user social embedding matrix includes: constructing a user-item interaction matrix from the historical interaction information of users with items; establishing a social network from the user-item interaction matrix to generate a social matrix, where the social matrix represents the social relationships among users in the social network; generating a user structure embedding matrix from the social matrix and the user collaborative embedding matrix; calculating an interest reliability index for each edge in the social network using the user structure embedding matrix; and generating a user social embedding matrix from the social graph and the user collaborative embedding matrix through a graph attention neural network.
In some embodiments of the present disclosure, establishing a social network from the user-item interaction matrix to generate a social matrix includes: multiplying the user-item interaction matrix by its transpose to obtain an intermediate matrix; determining whether the value of each intermediate element in the intermediate matrix is within a preset range; setting the value of the corresponding element in the social matrix to a first value in response to the value of any intermediate element being within the preset range; and setting the value of the corresponding element in the social matrix to a second value in response to the value of any intermediate element being outside the preset range.
In some embodiments of the present disclosure, calculating an interest reliability index for each edge in the social network using the user structure embedding matrix includes: for elements in the social matrix having the first value, calculating the similarity between the u-th user and the v-th user associated with the element according to the following formula:

$$sim_{uv} = \frac{z_u \cdot z_v}{\|z_u\|_2 \, \|z_v\|_2},$$

calculating the interest reliability index of the edge connecting the u-th user and the v-th user according to the following formula:

$$c_{uv} = \frac{1 + sim_{uv}}{2},$$

and taking the interest reliability index as the weight of the edge connecting the u-th user and the v-th user;

where $z_u$ denotes the row of the user structure embedding matrix corresponding to the u-th user, $z_v$ denotes the row of the user structure embedding matrix corresponding to the v-th user, $sim_{uv}$ denotes the similarity between the u-th user and the v-th user, $c_{uv}$ denotes the interest reliability index of the edge connecting the u-th user and the v-th user, and $\|\cdot\|_2$ denotes the L2 norm.
In some embodiments of the present disclosure, performing contrastive learning with the user fusion embedding matrix and the item collaborative embedding matrix in a data-augmentation mode to determine the first loss function includes: adding a first noise to the user fusion embedding matrix and the item collaborative embedding matrix to construct a first view; adding a second noise to the user fusion embedding matrix and the item collaborative embedding matrix to construct a second view; inputting the noised user fusion embedding matrix, the noised item collaborative embedding matrix, and the user-item interaction graph into the lightweight graph convolution network under the first view and the second view respectively, to generate noised user fusion global embedded representations and noised item global embedded representations; under each of the first view and the second view, using the gate mechanism to balance the weights of the user's embedded representations under the collaborative and social domains to obtain noised user global embedded representations; and taking $(h'_u, h''_u)$ and $(h'_i, h''_i)$ as positive pairs and $(h'_u, h''_v)$ and $(h'_i, h''_j)$ as negative pairs, performing contrastive learning between the first view and the second view, thereby calculating the first loss function as:

$$\mathcal{L}_1 = \mathcal{L}_1^{user} + \mathcal{L}_1^{item},$$

where

$$\mathcal{L}_1^{user} = \sum_{u \in U} -\log \frac{\exp\!\big(sim(h'_u, h''_u)/\tau\big)}{\sum_{v \in U} \exp\!\big(sim(h'_u, h''_v)/\tau\big)},$$

$$\mathcal{L}_1^{item} = \sum_{i \in J} -\log \frac{\exp\!\big(sim(h'_i, h''_i)/\tau\big)}{\sum_{j \in J} \exp\!\big(sim(h'_i, h''_j)/\tau\big)},$$

in which $h'_u$ denotes the element corresponding to the u-th user in the noised user global embedded representation under the first view, $h''_u$ denotes the element corresponding to the u-th user in the noised user global embedded representation under the second view, $h'_i$ denotes the element corresponding to the i-th item in the noised item global embedded representation under the first view, $h''_i$ denotes the element corresponding to the i-th item in the noised item global embedded representation under the second view, $h''_v$ denotes the element corresponding to the v-th user in the noised user global embedded representation under the second view, $h''_j$ denotes the element corresponding to the j-th item in the noised item global embedded representation under the second view, $sim(\cdot,\cdot)$ denotes similarity, $\tau$ denotes the temperature coefficient, $\mathcal{L}_1$ denotes the first loss function, U denotes the set of users, and J denotes the set of items.
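The data-augmentation contrast above is an InfoNCE-style objective. The sketch below (with assumed function and variable names, not taken from the patent) computes it for one side, users or items: row u of the first noised view and row u of the second form the positive pair, and every row of the second view serves as a candidate in the softmax denominator.

```python
import numpy as np

def info_nce(view1, view2, tau=0.2):
    """InfoNCE loss between two noised views of the same embedding matrix.
    Row u of view1 and row u of view2 are the positive pair; all rows of
    view2 act as candidates in the softmax denominator."""
    a = view1 / np.linalg.norm(view1, axis=1, keepdims=True)
    b = view2 / np.linalg.norm(view2, axis=1, keepdims=True)
    sim = a @ b.T / tau                       # cosine similarity / temperature
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))        # -log softmax of the positives
```

Calling this once for the noised user global representations and once for the noised item global representations, then summing, gives a loss of the shape described above.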
In some embodiments of the present disclosure, each of the first noise and the second noise is generated by:

$$\Delta = \epsilon \cdot \bar{\Delta} \odot \mathrm{sign}(E),$$

where $E$ denotes the matrix to which noise is to be added, $\bar{\Delta}$ denotes randomly generated noise of the same dimensions as $E$ with L2-norm normalization applied, sign assigns 1 and -1 according to the sign of each element of $E$, $\epsilon$ denotes the noise magnitude, $\odot$ denotes the Hadamard (element-wise) product, and $\Delta$ denotes the generated noise.
In some embodiments of the present disclosure, balancing the weights of the user's embedded representations under the collaborative and social domains using the gate mechanism to obtain the noised user global embedded representation includes generating the noised user global embedded representation by:

$$H' = \mathrm{Gate}(H'_f, E_s),$$

where $H'$ denotes the noised user global embedded representation, Gate denotes the gate mechanism, $H'_f$ denotes the noised user fusion global embedded representation, and $E_s$ denotes the user social embedding matrix.

In the Gate function, the following formula is performed for each pair of corresponding elements of $H'_f$ and $E_s$:

$$g = \sigma(W_1 h + W_2 s), \qquad h^{out} = g \odot h + (1 - g) \odot s,$$

where $h$ denotes an element of $H'_f$, $s$ denotes an element of $E_s$, $W_1$ and $W_2$ denote transformation matrices for balancing the weights of the user's embedded representations under the collaborative and social domains, $\sigma$ denotes an activation function, and $h^{out}$ denotes the resulting component of $H'$.
In some embodiments of the present disclosure, performing contrastive learning with the user fusion global embedded representation and the item global embedded representation in a non-data-augmentation mode to determine the second loss function includes: taking $(h_u^{(k)}, h_i^{(k-1)})$ and $(h_i^{(k)}, h_u^{(k-1)})$ as positive pairs and $(h_u^{(k)}, h_j^{(k-1)})$ and $(h_i^{(k)}, h_v^{(k-1)})$ as negative pairs, performing contrastive learning, thereby calculating the second loss function as:

$$\mathcal{L}_2 = \mathcal{L}_2^{user} + \mathcal{L}_2^{item},$$

where

$$\mathcal{L}_2^{user} = \sum_{(u,i) \in R} -\log \frac{\exp\!\big(sim(h_u^{(k)}, h_i^{(k-1)})/\tau\big)}{\sum_{j \in J} \exp\!\big(sim(h_u^{(k)}, h_j^{(k-1)})/\tau\big)},$$

$$\mathcal{L}_2^{item} = \sum_{(u,i) \in R} -\log \frac{\exp\!\big(sim(h_i^{(k)}, h_u^{(k-1)})/\tau\big)}{\sum_{v \in U} \exp\!\big(sim(h_i^{(k)}, h_v^{(k-1)})/\tau\big)},$$

where k is an even number, $h_u^{(k)}$ denotes the element corresponding to the u-th user in an even layer of the user fusion global embedded representation, $h_u^{(k-1)}$ denotes the element corresponding to the u-th user in an odd layer of the user fusion global embedded representation, $h_i^{(k-1)}$ denotes the element corresponding to the i-th item in an odd layer of the item global embedded representation, $h_i^{(k)}$ denotes the element corresponding to the i-th item in an even layer of the item global embedded representation, $h_v^{(k-1)}$ denotes the element corresponding to the v-th user in an odd layer of the user fusion global embedded representation, $h_j^{(k-1)}$ denotes the element corresponding to the j-th item in an odd layer of the item global embedded representation, $\tau$ denotes the temperature coefficient, $\mathcal{L}_2$ denotes the second loss function, U denotes the set of users, J denotes the set of items, R denotes the set of user-item pairs having an interaction history, the u-th user has an interaction history with the i-th item, and the v-th user has no interaction history with the j-th item.
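One plausible reading of this non-augmented, cross-layer contrast can be sketched as follows; the function name, the pairing direction, and the use of all items as negatives are assumptions. For each interacting pair (u, i), the even-layer user embedding and the odd-layer item embedding form the positive pair:

```python
import numpy as np

def cross_layer_user_item_loss(U_even, I_odd, R, tau=0.2):
    """One direction of the cross-layer contrast: for each interacting
    (u, i) pair in R, the even-layer user embedding and the odd-layer item
    embedding are the positive pair; all items' odd-layer embeddings serve
    as softmax candidates (InfoNCE with temperature tau)."""
    a = U_even / np.linalg.norm(U_even, axis=1, keepdims=True)
    b = I_odd / np.linalg.norm(I_odd, axis=1, keepdims=True)
    sim = a @ b.T / tau                          # user x item similarities
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    users, items = np.nonzero(R)                 # interacting (u, i) pairs
    return -np.mean(log_prob[users, items])
```

The symmetric item-side term would swap the roles of the even-layer item embeddings and odd-layer user embeddings.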
According to a second aspect of the present disclosure, an item recommendation device is provided. The item recommendation device includes at least one processor; and at least one memory storing a computer program. The computer program, when executed by at least one processor, causes the item recommendation device to perform the steps of the item recommendation method according to the first aspect of the present disclosure.
According to a third aspect of the present disclosure, there is provided a computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the steps of the item recommendation method according to the first aspect of the present disclosure.
Drawings
To describe the technical solutions of the embodiments of the present disclosure more clearly, the drawings of the embodiments are briefly described below. It should be understood that the drawings described below relate only to some embodiments of the present disclosure and are not limitations of the present disclosure. In the drawings:
FIG. 1 is an exemplary flow chart of an item recommendation method according to an embodiment of the present disclosure;
FIG. 2 is an exemplary flow chart of a process of obtaining a user social embedding matrix in the embodiment shown in FIG. 1;
Fig. 3 is a schematic block diagram of an item recommendation device according to an embodiment of the present disclosure.
It is noted that the elements in the drawings are schematic and are not drawn to scale.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings. It will be apparent that the described embodiments are some, but not all, of the embodiments of the present disclosure. All other embodiments, which can be made by those skilled in the art based on the described embodiments of the present disclosure without the need for creative efforts, are also within the scope of the protection of the present disclosure.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the presently disclosed subject matter belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the specification and relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein. In addition, terms such as "first" and "second" are used merely to distinguish one component (or portion of a component) from another component (or another portion of a component).
As described above, conventional recommendation systems based on graph neural networks may suffer from over-smoothing as the number of propagation layers increases, and may fail to fully mine the information in the user-item interaction graph. Accordingly, the present disclosure proposes generating a social network from user interests to fully explore higher-order user-user relationships and to effectively fuse information from the social domain (the social network) and the collaborative domain (the user-item interaction network). In addition, to address the data sparsity problem of existing recommendation systems, the present disclosure proposes constructing contrastive learning tasks in a data-augmentation mode and a non-data-augmentation mode simultaneously, so as to perform multi-view learning. The recommendation quality of the recommendation system can thereby be improved.
FIG. 1 illustrates an exemplary flowchart of an item recommendation method 100 according to an embodiment of the present disclosure.
At block S102 of FIG. 1, a social network of users is established based on the historical interaction information of users with items to obtain a user social embedding matrix. The user social embedding matrix is the embedded representation of the users under the social domain. Herein, the number of users is denoted by n and the number of items by m. An item may be a physical item or a virtual item. A user's historical interactions with an item may include purchasing the item, clicking on the item, viewing the item, and so on.
FIG. 2 illustrates an exemplary flow chart of a process of obtaining a user social embedding matrix in the embodiment illustrated in FIG. 1.
At block S202, a user-item interaction matrix may be constructed from the historical interaction information of users with items. Herein, $R \in \mathbb{R}^{n \times m}$ denotes the user-item interaction matrix. In R, the element $r_{ui}$ is a non-zero element (e.g., 1) if there is a historical interaction between the u-th user and the i-th item, and a zero element (e.g., 0) if there is no historical interaction between them.
At block S204, a social network may be established from the user-item interaction matrix to generate a social matrix. Here, the social matrix represents the social relationships between users in the social network. In some embodiments of the present disclosure, the social matrix may be generated as follows.

The user-item interaction matrix is multiplied by its transpose to obtain an intermediate matrix, i.e., $P = R \cdot R^{T}$, where $P \in \mathbb{R}^{n \times n}$ denotes the intermediate matrix and $R$ denotes the user-item interaction matrix.
Then, it is determined whether the value of each intermediate element in the intermediate matrix P is within a preset range. The preset range has a lower limit and an upper limit. The lower limit is set to reduce complexity, while the upper limit is set because users with a large number of common interactions can already be learned directly from the user-item interaction graph.
If the value of any intermediate element in the intermediate matrix is within the preset range, the value of the element in the social matrix corresponding to the intermediate element is set to a first value (e.g., 1). If the value of any intermediate element in the intermediate matrix is outside of the preset range, the value of the element in the social matrix corresponding to the intermediate element is set to a second value (e.g., 0). Here, "corresponding" means that the positions of the elements in the matrix are the same.
The social matrix may be calculated by:

$$s_{uv} = \begin{cases} 1, & L\_threshold \le p_{uv} \le U\_threshold \\ 0, & \text{otherwise} \end{cases}$$

where $p_{uv}$ denotes the element in the u-th row and v-th column of the intermediate matrix P, and $s_{uv}$ denotes the element in the u-th row and v-th column of the social matrix $S \in \mathbb{R}^{n \times n}$, representing the social relationship between the u-th user and the v-th user: if $s_{uv} = 1$, the two users have a unidirectional social relationship; if $s_{uv} = 0$, they do not. $L\_threshold$ denotes the lower limit and $U\_threshold$ denotes the upper limit.
Users whose number of common interactions lies between the lower limit and the upper limit ($s_{uv} = 1$) are connected to generate the social network.
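The construction above can be sketched as follows; the threshold values used here are placeholders, not values from the patent:

```python
import numpy as np

def build_social_matrix(R, l_threshold=2, u_threshold=10):
    """Build the social matrix S from a binary user-item interaction matrix
    R (n_users x n_items): users whose number of common interactions lies
    within [l_threshold, u_threshold] get s_uv = 1, otherwise 0."""
    P = R @ R.T                                   # P_uv = #co-interacted items
    S = ((P >= l_threshold) & (P <= u_threshold)).astype(np.int64)
    np.fill_diagonal(S, 0)                        # no self-edges
    return S

# toy interaction matrix: 3 users, 4 items
R = np.array([[1, 1, 1, 0],
              [1, 1, 0, 0],
              [0, 0, 0, 1]])
S = build_social_matrix(R, l_threshold=2, u_threshold=10)
```

Here users 0 and 1 share two common items and become social neighbors, while user 2 shares none and stays isolated.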
To fully explore the higher order relationships between users, embodiments of the present disclosure consider both the interests of the users and the social network structure, and therefore, at block S206, generate a user structure embedding matrix from the social matrix and the user collaborative embedding matrix. In one example, the user structure embedding matrix may be generated by:
$$Z = S \cdot E_u,$$

where $E_u \in \mathbb{R}^{n \times d}$ denotes the user collaborative embedding matrix and $Z \in \mathbb{R}^{n \times d}$ denotes the user structure embedding matrix. $E_u$ carries the users' interest information and S carries the social structure information. d denotes the embedding dimension of each user.
At block S208, an interest reliability index (Interest Confidence) is calculated for each edge in the social network using the user structure embedding matrix.
In some embodiments of the present disclosure, when calculating the interest reliability index for each edge in the social network using the user structure embedding matrix, the similarity between the u-th user and the v-th user associated with an element having the first value in the social matrix may be calculated according to the following formula:

$$sim_{uv} = \frac{z_u \cdot z_v}{\|z_u\|_2 \, \|z_v\|_2}, \tag{3}$$

Then, the interest reliability index of the edge connecting the u-th user and the v-th user is calculated according to the following formula:

$$c_{uv} = \frac{1 + sim_{uv}}{2}, \tag{4}$$

The interest reliability index is then taken as the weight of the edge connecting the u-th user and the v-th user.

In formulas (3) and (4), $z_u$ denotes the row of the user structure embedding matrix Z corresponding to the u-th user, $z_v$ denotes the row corresponding to the v-th user, $sim_{uv}$ denotes the similarity between the u-th user and the v-th user, $c_{uv}$ denotes the interest reliability index of the edge connecting the u-th user and the v-th user, and $\|\cdot\|_2$ denotes the L2 norm.
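A minimal sketch of the edge-weight computation: the user structure embeddings are taken as the rows of S·E_u, and each social edge gets the cosine similarity of its endpoints mapped into [0, 1] (the mapping into [0, 1] is an assumption):

```python
import numpy as np

def interest_confidence(S, E_u):
    """Weight each social edge (u, v) by the cosine similarity between the
    user structure embeddings z_u and z_v (rows of Z = S @ E_u), mapped
    into [0, 1] via (1 + sim) / 2."""
    Z = S @ E_u                                   # user structure embeddings
    w = {}
    for u, v in zip(*np.nonzero(S)):
        sim = Z[u] @ Z[v] / (np.linalg.norm(Z[u]) * np.linalg.norm(Z[v]))
        w[(u, v)] = (1.0 + sim) / 2.0
    return w

S = np.array([[0, 1], [1, 0]])
E_u = np.array([[1.0, 0.0], [0.0, 1.0]])
weights = interest_confidence(S, E_u)
```

With these toy inputs the two users' structure embeddings are orthogonal, so each edge gets weight 0.5.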
At block S210, a user social embedding matrix is generated from the social graph and the user collaborative embedding matrix through a graph attention neural network (GAT). In one example, the user social embedding matrix may be generated by:

$$E_s = \mathrm{GAT}(\mathcal{G}_s, E_u),$$

where $\mathcal{G}_s$ denotes the social graph (with the interest reliability indexes as edge weights) and $E_s$ denotes the user social embedding matrix, i.e., the embedded representation of the users under the social domain. The graph attention neural network may be an existing graph attention neural network or one developed in the future.
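A full GAT layer learns its attention coefficients; as a minimal stand-in, the sketch below aggregates neighbors' collaborative embeddings with softmax-normalized edge weights (e.g., the interest reliability indexes), which captures the shape of the computation without learned parameters:

```python
import numpy as np

def social_attention_layer(adj_w, E):
    """One attention-style propagation over the social graph: each user's
    social embedding is a weighted average of its neighbors' collaborative
    embeddings, with edge weights normalized per user via softmax."""
    n = adj_w.shape[0]
    E_s = np.zeros_like(E)
    for u in range(n):
        nbrs = np.nonzero(adj_w[u])[0]
        if len(nbrs) == 0:
            E_s[u] = E[u]                 # isolated user keeps own embedding
            continue
        w = np.exp(adj_w[u, nbrs])
        w = w / w.sum()                   # softmax over the user's neighbors
        E_s[u] = w @ E[nbrs]
    return E_s

adj = np.array([[0.0, 0.9], [0.9, 0.0]])  # interest-confidence edge weights
E = np.array([[1.0, 0.0], [0.0, 1.0]])
E_social = social_attention_layer(adj, E)
```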
Returning to FIG. 1, at block S104, the user social embedding matrix is added element-wise to the user collaborative embedding matrix to generate the user fusion embedding matrix. The user collaborative embedding matrix is the embedded representation of the users under the collaborative domain. In one example, the user fusion embedding matrix may be generated by:

$$E_f = E_s \oplus E_u,$$

where $\oplus$ denotes element-wise addition, $E_s$ denotes the user social embedding matrix, $E_u$ denotes the user collaborative embedding matrix, and $E_f$ denotes the user fusion embedding matrix.
At block S106, the user fusion embedding matrix, the item collaborative embedding matrix, and the user-item interaction graph are input into a lightweight graph convolution network (LightGCN) to generate the user fusion global embedded representation and the item global embedded representation. The item collaborative embedding matrix is the embedded representation of the items under the collaborative domain. In one example, the user fusion global embedded representation and the item global embedded representation may be generated by:

$$(H_u, H_i) = \mathrm{LGCN}(E_f, E_i, \mathcal{G}),$$

where $H_u$ denotes the user fusion global embedded representation, $H_i$ denotes the item global embedded representation, $E_f$ denotes the user fusion embedding matrix, $E_i$ denotes the item collaborative embedding matrix, $\mathcal{G}$ denotes the user-item interaction graph, and LGCN denotes the LightGCN encoder.
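LightGCN propagation is simple enough to sketch directly: symmetric-normalized neighbor averaging on the bipartite user-item graph, with no feature transforms or nonlinearities, and a final mean over all layer outputs (the layer count and mean pooling are the usual LightGCN defaults, assumed here):

```python
import numpy as np

def lightgcn(E_user, E_item, R, n_layers=2):
    """LightGCN-style propagation: A = D_u^{-1/2} R D_i^{-1/2}; at each
    layer users gather from items and items from users; the final embedding
    is the mean over all layer outputs (including layer 0)."""
    du = R.sum(1, keepdims=True)
    di = R.sum(0, keepdims=True)
    du[du == 0] = 1
    di[di == 0] = 1
    A = R / np.sqrt(du) / np.sqrt(di)      # symmetric-normalized adjacency
    U, I = [E_user], [E_item]
    for _ in range(n_layers):
        U.append(A @ I[-1])                # users gather from items
        I.append(A.T @ U[-2])              # items gather from users
    return sum(U) / len(U), sum(I) / len(I)
```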
Although social information is incorporated into the graph representation learning of collaborative domains, collaborative domains are still dominated by the interests of users, and the inventors of the present disclosure consider the representation of social domains to be also very important, so the present disclosure proposes the use of a portal mechanism to aggregate user embeddings of social and collaborative domains. The gating mechanism can precisely balance information between two domains. At block S107, the weights of the embedded representations of the user under the collaborative and social domains are balanced using a door mechanism to obtain a user global embedded representation. The user global embedded representation may be generated by:
Z_U = Gate(Z_F, E_S) ,
wherein Z_U represents the user global embedded representation, Gate represents the gating mechanism, Z_F represents the user fusion global embedded representation, and E_S represents the user social embedding matrix.
In the Gate function, the following formula is performed for each element of Z_F and E_S:
g_u = σ(W_1 h_u + W_2 e_u), z_u = g_u ⊙ h_u + (1 − g_u) ⊙ e_u ,
wherein h_u represents an element in Z_F, e_u represents an element in E_S, W_1 and W_2 represent transformation matrices for balancing the weights of the embedded representations of the user under the collaborative and social domains, σ represents an activation function, and z_u represents a component of Z_U.
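A minimal sketch of such a gating step follows (the gate formula here is the standard form and the weight matrices W1 and W2 are hypothetical learned parameters, shown untrained):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Sketch of block S107: a learned per-element gate g in (0, 1) interpolates
# between the collaborative-domain representation Z_F and the social-domain
# representation E_S to produce the user global embedded representation.
def gate(Z_F, E_S, W1, W2):
    g = sigmoid(Z_F @ W1 + E_S @ W2)   # per-element gate
    return g * Z_F + (1.0 - g) * E_S   # balanced user global representation
```

Because the gate is a convex combination per element, every output value lies between the corresponding collaborative and social values.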
The inner product of the user global embedded representation Z_U and the item global embedded representation Z_I is taken as the predicted user-item interaction matrix.
At block S108, contrast learning is performed in a data augmentation manner using the user fusion embedding matrix and the item collaborative embedding matrix to determine a first loss function.
In some embodiments of the present disclosure, a first noise may be added to the user fusion embedding matrix and the item collaborative embedding matrix to construct a first view, and a second noise may be added to the user fusion embedding matrix and the item collaborative embedding matrix to construct a second view.
In some embodiments of the present disclosure, each of the first noise and the second noise is generated by:
Δ = sign(E) ⊙ rand_like(E) · ε , (10)
wherein E represents the matrix to which noise is to be added, rand_like(E) represents randomly generated noise of the same dimension as E after L2 norm normalization, sign assigns positive and negative numbers to 1 and −1, respectively, ε represents the noise magnitude, ⊙ represents the Hadamard product, and Δ represents the generated noise.
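Formula (10) can be sketched as follows (a NumPy illustration of the SimGCL-style noise construction the text describes; the epsilon value and the random seed are arbitrary):

```python
import numpy as np

# Sketch of formula (10): rand_like draws random noise of E's shape and
# L2-normalizes each row; sign keeps the perturbation in the same orthant
# as E; epsilon scales the perturbation magnitude.
def make_noise(E, eps=0.1, rng=np.random.default_rng(0)):
    noise = rng.random(E.shape)
    noise /= np.linalg.norm(noise, axis=1, keepdims=True)  # L2 normalization
    return np.sign(E) * noise * eps  # Delta = sign(E) ⊙ rand_like(E) · ε
```

Each row of the resulting Δ has L2 norm ε (for E with no zero entries), so the perturbation has a controlled, uniform magnitude.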
For example, in the first view, when noise is added to the user fusion embedding matrix E_F, E_F is substituted into (10) to generate Δ_F'. The user fusion embedding matrix after noise addition is E_F' = E_F + Δ_F'. When noise is added to the item collaborative embedding matrix E_I, E_I is substituted into (10) to generate Δ_I'. The item collaborative embedding matrix after noise addition is E_I' = E_I + Δ_I'.
In the second view, when noise is added to the user fusion embedding matrix E_F, E_F is substituted into (10) to generate Δ_F''. The user fusion embedding matrix after noise addition is E_F'' = E_F + Δ_F''. When noise is added to the item collaborative embedding matrix E_I, E_I is substituted into (10) to generate Δ_I''. The item collaborative embedding matrix after noise addition is E_I'' = E_I + Δ_I''.
Since rand_like randomly generates noise, the noise generated under the first view and the noise generated under the second view are different.
Then, under the first view and the second view respectively, the noise-added user fusion embedding matrix, the noise-added item collaborative embedding matrix, and the user item interaction graph are input into the lightweight graph convolution network to generate a noise-added user fusion global embedded representation and a noise-added item global embedded representation. For example, in the first view, E_F', E_I', and G are substituted into (7) to generate Z_F' and Z_I', wherein Z_F' represents the noise-added user fusion global embedded representation under the first view and Z_I' represents the noise-added item global embedded representation under the first view. In the second view, E_F'', E_I'', and G are substituted into (7) to generate Z_F'' and Z_I'', wherein Z_F'' represents the noise-added user fusion global embedded representation under the second view and Z_I'' represents the noise-added item global embedded representation under the second view.
Then, under the first view and the second view respectively, the gate mechanism is used to balance the weights of the embedded representations of the user under the collaborative domain and the social domain to obtain a noise-added user global embedded representation. In some embodiments of the present disclosure, under the first view, the noise-added user global embedded representation may be generated by:
Z_U' = Gate(Z_F', E_S) ,
wherein Z_U' represents the noise-added user global embedded representation under the first view, Gate represents the gating mechanism, Z_F' represents the noise-added user fusion global embedded representation, and E_S represents the user social embedding matrix.
In the Gate function, the following formula is performed for each element of Z_F' and E_S:
g_u' = σ(W_1 h_u' + W_2 e_u), z_u' = g_u' ⊙ h_u' + (1 − g_u') ⊙ e_u ,
wherein h_u' represents an element in Z_F', e_u represents an element in E_S, W_1 and W_2 represent transformation matrices for balancing the weights of the embedded representations of the user under the collaborative and social domains, σ represents an activation function, and z_u' represents a component of Z_U'.
In the second view, the noisy user global embedded representation may be generated by:
,
Wherein, Representing noisy user global embedded representations, gate representing Gate mechanisms,/>User fusion global embedded representation after representation plus noise,/>Representing the user social embedding matrix.
In the Gate function, forAnd/>Performs the following formula for each element:
,
Wherein, ,/>Representation/>Element in/>Representation/>Element in/>AndTransformation matrix representing weights for balancing embedded representations of users under collaborative and social domains, σ representing an activation function,/>Representation/>Is a component of the group.
For contrast learning between views, the goal is to maximize the similarity between the embedded representations of the same user/item in different views. Therefore, (z_u', z_u'') and (z_i', z_i'') are taken as positive samples, and (z_u', z_v'') and (z_i', z_j'') are taken as negative samples, to perform contrast learning between the first view and the second view, and the first loss function is calculated as:
L_N = L_N^U + L_N^I ,
wherein,
L_N^U = Σ_{u∈U} −log( exp(s(z_u', z_u'')/τ) / Σ_{v∈U} exp(s(z_u', z_v'')/τ) ) ,
L_N^I = Σ_{i∈J} −log( exp(s(z_i', z_i'')/τ) / Σ_{j∈J} exp(s(z_i', z_j'')/τ) ) ,
wherein z_u' represents the element corresponding to the u-th user in the noise-added user global embedded representation under the first view, z_u'' represents the element corresponding to the u-th user in the noise-added user global embedded representation under the second view, z_i' represents the element corresponding to the i-th item in the noise-added item global embedded representation under the first view, z_i'' represents the element corresponding to the i-th item in the noise-added item global embedded representation under the second view, z_v'' represents the element corresponding to the v-th user in the noise-added user global embedded representation under the second view, z_j'' represents the element corresponding to the j-th item in the noise-added item global embedded representation under the second view, s(·,·) represents a similarity function, τ represents the temperature coefficient, L_N represents the first loss function, U represents the user set, and J represents the item set.
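Each term of this loss is a standard InfoNCE objective, which can be sketched as follows (a simplified NumPy illustration assuming cosine similarity; averaging instead of summing over users is a simplification):

```python
import numpy as np

# InfoNCE sketch over two noised views Z1 and Z2 (same row order): the same
# row across views is the positive pair; every other row in Z2 is a negative.
# tau is the temperature coefficient.
def info_nce(Z1, Z2, tau=0.2):
    Z1 = Z1 / np.linalg.norm(Z1, axis=1, keepdims=True)  # cosine similarity
    Z2 = Z2 / np.linalg.norm(Z2, axis=1, keepdims=True)
    sim = Z1 @ Z2.T / tau                      # pairwise similarity matrix
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))         # positives are on the diagonal
```

Aligning the two views (positives on the diagonal) yields a lower loss than a misaligned pairing, which is what drives the representations of the same user/item together.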
Contrast learning is constructed in block S108 by adding noise for data augmentation, and the inventors of the present disclosure found that data augmentation changes the embedded representations in the main framework. Therefore, in addition to constructing contrast learning in the data augmentation manner, the present disclosure also proposes constructing contrast learning without data augmentation. Performing multi-view learning in both the data augmentation manner and the non-data augmentation manner combines the advantages of different contrast learning strategies, striking a balance between preserving the semantic information of the original data and achieving good generalization capability.
At block S110, contrast learning is performed using the user fusion global embedded representation and the item global embedded representation in a non-data augmentation manner to determine a second loss function.
The inventors of the present disclosure believe that the embedded representations of different layers of users and items carry different semantic information. For a user, the embedded representations of even layers correspond to a set of users with interests similar to the user, and the embedded representations of odd layers correspond to a set of items the user is interested in interacting with. The hierarchical meaning for an item is similar to that for a user.
Accordingly, some embodiments of the present disclosure propose to take (z_u^(k), z_i^(k+1)) and (z_i^(k), z_u^(k+1)) as positive samples and (z_v^(k), z_j^(k+1)) and (z_j^(k), z_v^(k+1)) as negative samples for contrast learning, and the second loss function is calculated as:
L_S = L_S^U + L_S^I ,
wherein,
L_S^U = Σ_{(u,i)∈R} −log( exp(s(z_u^(k), z_i^(k+1))/τ) / Σ_{(v,j)∉R} exp(s(z_v^(k), z_j^(k+1))/τ) ) ,
L_S^I = Σ_{(u,i)∈R} −log( exp(s(z_i^(k), z_u^(k+1))/τ) / Σ_{(v,j)∉R} exp(s(z_j^(k), z_v^(k+1))/τ) ) ,
wherein k is an even number, z_u^(k) represents the element corresponding to the u-th user in an even layer of the user fusion global embedded representation, z_u^(k+1) represents the element corresponding to the u-th user in an odd layer of the user fusion global embedded representation, z_i^(k+1) represents the element corresponding to the i-th item in an odd layer of the item global embedded representation, z_i^(k) represents the element corresponding to the i-th item in an even layer of the item global embedded representation, z_v^(k) represents the element corresponding to the v-th user in an even layer of the user fusion global embedded representation, z_v^(k+1) represents the element corresponding to the v-th user in an odd layer of the user fusion global embedded representation, z_j^(k) represents the element corresponding to the j-th item in an even layer of the item global embedded representation, z_j^(k+1) represents the element corresponding to the j-th item in an odd layer of the item global embedded representation, s(·,·) represents a similarity function, τ represents the temperature coefficient, L_S represents the second loss function, U represents the user set, J represents the item set, R represents the set of user-item pairs with an interaction history, the u-th user has an interaction history with the i-th item, and the v-th user has no interaction history with the j-th item.
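A sketch of this cross-layer contrast follows (a hypothetical NumPy illustration: the even-layer user embeddings and odd-layer item embeddings stand in for the per-layer LightGCN outputs, and for simplicity all items in the row serve as negatives rather than only the non-interacting pairs):

```python
import numpy as np

# For each interacting pair (u, i), the user's even-layer embedding and the
# item's odd-layer embedding form a positive pair; other items act as
# negatives. tau is the temperature coefficient.
def cross_layer_loss(Z_u_even, Z_i_odd, interactions, tau=0.2):
    Zu = Z_u_even / np.linalg.norm(Z_u_even, axis=1, keepdims=True)
    Zi = Z_i_odd / np.linalg.norm(Z_i_odd, axis=1, keepdims=True)
    sim = Zu @ Zi.T / tau
    loss = 0.0
    for u, i in interactions:  # (u, i) pairs with an interaction history
        loss += -sim[u, i] + np.log(np.exp(sim[u]).sum())
    return loss / len(interactions)
```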
At block S112, iterative training is performed based on the first loss function, the second loss function, and the recommendation loss function to generate an item recommendation model from the user global embedded representation and the item global embedded representation.
For the main recommendation task, embodiments of the present disclosure employ a Bayesian Personalized Ranking (BPR) loss function, as shown in equation (21):
L_BPR = Σ_{(u,i,j)∈O} −ln σ(z_u^T z_i − z_u^T z_j) , (21)
wherein σ represents the activation function, z_u represents the element corresponding to the u-th user in the user global embedded representation, the superscript T represents transposition, z_i represents the element corresponding to the i-th item in the item global embedded representation, z_j represents the element corresponding to the j-th item in the item global embedded representation, and O is a triplet set obtained by negative sampling. For each interaction pair (u, i) in the training set, a j-th item that the u-th user has not interacted with is randomly selected as a negative sample.
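Formula (21) can be sketched as follows (a NumPy illustration of the standard BPR loss over sampled triples; the embedding values are made up):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# BPR sketch: for each sampled triple (u, i, j), the observed item i should
# score higher than the negative item j under the inner-product predictor.
def bpr_loss(Z_U, Z_I, triples):
    loss = 0.0
    for u, i, j in triples:
        x_ui = Z_U[u] @ Z_I[i]   # score of the observed item
        x_uj = Z_U[u] @ Z_I[j]   # score of the sampled negative item
        loss += -np.log(sigmoid(x_ui - x_uj))
    return loss / len(triples)
```

The loss is small when positives already rank above negatives and large when the ranking is inverted, which is exactly the pairwise preference the main task optimizes.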
To jointly optimize the recommendation loss and the contrast losses, a multi-task learning approach may be used, with the total loss function shown in equation (22):
L = L_BPR + λ_1 L_N + λ_2 L_S + λ_3 ‖Θ‖_2^2 , (22)
wherein Θ represents the embedding parameters of the users and the items, λ_1 and λ_2 control the weights of L_N and L_S, and λ_3 is the regularization coefficient.
After iterative training, the item recommendation model obtains the trained user global embedded representation Z_U and item global embedded representation Z_I, and takes the inner product of Z_U and Z_I as the predicted user-item interaction matrix.
At block S114, items of interest are recommended to the user based on the item recommendation model. In some embodiments of the present disclosure, the items with the highest scores for a given user (Top-K items) in the predicted user-item interaction matrix may be recommended to that user.
To evaluate the performance of an item recommendation method under the Top-K recommendation task according to embodiments of the present disclosure, embodiments of the present disclosure employ four metrics: Hit Ratio, Precision, Recall, and Normalized Discounted Cumulative Gain (NDCG). For each user in the test set, all candidate items are ranked to ensure the reliability of the results. The specific calculation of each metric is as follows:
HR@K = (1/|U|) Σ_{u∈U} 1(|R_u ∩ T_u| > 0) ,
Precision@K = (1/|U|) Σ_{u∈U} |R_u ∩ T_u| / K ,
Recall@K = (1/|U|) Σ_{u∈U} |R_u ∩ T_u| / |T_u| ,
NDCG@K = (1/|U|) Σ_{u∈U} ( Σ_{k=1}^{K} 1(r_{u,k} ∈ T_u) / log2(k+1) ) / ( Σ_{k=1}^{min(K,|T_u|)} 1 / log2(k+1) ) ,
wherein U represents the user set, R_u represents the Top-K item set recommended for the u-th user by the item recommendation method according to an embodiment of the present disclosure, T_u represents the item set truly clicked by the u-th user in the test set, r_{u,k} represents the k-th item of the recommendation list R_u, and 1(·) is an indicator function that equals 1 if its condition holds and 0 otherwise.
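For a single user, the four metrics can be sketched as follows (a plain-Python illustration of the standard definitions; `rec` is the ranked Top-K list and `truth` is the user's true click set):

```python
import numpy as np

# Per-user Top-K metrics: Hit Ratio, Precision, Recall, NDCG.
def metrics_at_k(rec, truth, k):
    rec_k = rec[:k]
    hits = [1 if item in truth else 0 for item in rec_k]  # rank-ordered hits
    hit_ratio = 1.0 if sum(hits) > 0 else 0.0
    precision = sum(hits) / k
    recall = sum(hits) / len(truth)
    # DCG discounts hits by log2(rank + 1); IDCG is the ideal ordering.
    dcg = sum(h / np.log2(r + 2) for r, h in enumerate(hits))
    idcg = sum(1 / np.log2(r + 2) for r in range(min(k, len(truth))))
    ndcg = dcg / idcg if idcg > 0 else 0.0
    return hit_ratio, precision, recall, ndcg
```

Averaging these per-user values over the user set gives the dataset-level metrics named above.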
Fig. 3 shows a schematic block diagram of an item recommendation device 300 according to an embodiment of the present disclosure. As shown in fig. 3, the item recommendation device 300 may include a processor 310 and a memory 320 storing a computer program. The computer program, when executed by the processor 310, causes the item recommendation device 300 to perform the steps of the item recommendation method 100 as shown in fig. 1. In one example, the item recommendation device 300 may be a computer device or a cloud computing node. The item recommendation device 300 may establish a social network of the user based on historical interaction information of the user with items to obtain a user social embedding matrix. The user social embedding matrix is an embedded representation of the user under the social domain. The item recommendation device 300 may add the user social embedding matrix to the user collaborative embedding matrix at the element level to generate a user fusion embedding matrix. The user collaborative embedding matrix is an embedded representation of the user under the collaborative domain. The item recommendation device 300 may input the user fusion embedding matrix, the item collaborative embedding matrix, and the user item interaction graph into a lightweight graph convolutional network to generate a user fusion global embedded representation and an item global embedded representation. The item collaborative embedding matrix is an embedded representation of the items under the collaborative domain. The item recommendation device 300 may use a gate mechanism to balance the weights of the embedded representations of the user under the collaborative and social domains to obtain a user global embedded representation. The item recommendation device 300 may perform contrast learning using the user fusion embedding matrix and the item collaborative embedding matrix in a data augmentation manner to determine the first loss function.
The item recommendation device 300 may perform contrast learning using the user fusion global embedded representation and the item global embedded representation in a non-data augmentation manner to determine the second loss function. The item recommendation device 300 may perform iterative training based on the first loss function, the second loss function, and the recommendation loss function to generate an item recommendation model from the user global embedded representation and the item global embedded representation. The item recommendation device 300 may recommend items of interest to the user based on the item recommendation model.
In embodiments of the present disclosure, processor 310 may be, for example, a Central Processing Unit (CPU), a microprocessor, a Digital Signal Processor (DSP), a processor of a multi-core based processor architecture, or the like. Memory 320 may be any type of memory implemented using data storage technology including, but not limited to, random access memory, read only memory, semiconductor-based memory, flash memory, disk storage, and the like.
Further, in embodiments of the present disclosure, the item recommendation apparatus 300 may also include an input device 330, such as a microphone, keyboard, mouse, etc., for inputting historical interaction information of the user with the item. In addition, the item recommendation apparatus 300 may further include an output device 340, such as a speaker, display, etc., for outputting recommended items.
In other embodiments of the present disclosure, there is also provided a computer readable storage medium storing a computer program, wherein the computer program is capable of implementing the steps of the item recommendation method as shown in fig. 1 to 2 when being executed by a processor.
In summary, the item recommendation method and the item recommendation device according to the embodiments of the present disclosure generate a social network from user interests to fully explore high-order user-user relationships and to effectively fuse information from the social domain (the social network) and the collaborative domain (the user-item interaction network). The item recommendation method and the item recommendation device according to the embodiments of the present disclosure also construct contrast learning tasks in both a data augmentation manner and a non-data augmentation manner simultaneously, so as to perform multi-view learning. The item recommendation method and the item recommendation device according to the embodiments of the present disclosure can thereby improve the recommendation effect and provide a better user experience.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus and methods according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
As used herein and in the appended claims, the singular forms of words include the plural and vice versa, unless the context clearly dictates otherwise. Thus, when referring to the singular, the plural of the corresponding term is generally included. Similarly, the terms "comprising" and "including" are to be construed as being inclusive rather than exclusive. Likewise, the terms "comprising" and "or" should be interpreted as inclusive, unless such an interpretation is expressly prohibited herein. Where the term "example" is used herein, particularly when it follows a set of terms, the "example" is merely exemplary and illustrative and should not be considered exclusive or broad.
Further aspects and scope of applicability will become apparent from the description provided herein. It is to be understood that various aspects of the application may be implemented alone or in combination with one or more other aspects. It should also be understood that the description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.
While several embodiments of the present disclosure have been described in detail, it will be apparent to those skilled in the art that various modifications and variations can be made to the embodiments of the present disclosure without departing from the spirit and scope of the disclosure. The scope of the present disclosure is defined by the appended claims.
Claims (7)
1. An item recommendation method, characterized in that the item recommendation method comprises:
Establishing a social network of a user according to historical interaction information of the user and an article to obtain a user social embedding matrix, wherein the user social embedding matrix is an embedding representation of the user in a social domain;
element-level addition is carried out on the user social embedding matrix and the user collaborative embedding matrix to generate a user fusion embedding matrix, wherein the user collaborative embedding matrix is an embedding representation of the user under a collaborative domain;
Inputting the user fusion embedding matrix, the article collaborative embedding matrix and the user article interaction diagram into a lightweight diagram convolution network to generate a user fusion global embedding representation and an article global embedding representation, wherein the article collaborative embedding matrix is the embedding representation of the article under the collaborative domain;
using a gate mechanism to balance weights of the embedded representations of the user under the collaborative domain and the social domain to obtain a user global embedded representation;
performing contrast learning in a data augmentation manner using the user fusion embedding matrix and the item collaborative embedding matrix to determine a first loss function, comprising:
adding first noise to the user fusion embedding matrix and the article collaborative embedding matrix to construct a first view;
adding a second noise to the user fusion embedding matrix and the article collaborative embedding matrix to construct a second view;
inputting the noise-added user fusion embedding matrix, the noise-added item collaborative embedding matrix, and the user item interaction diagram into the lightweight graph convolution network under the first view and the second view respectively, to generate a noise-added user fusion global embedded representation and a noise-added item global embedded representation;
using a gate mechanism to balance weights of the embedded representations of the user under the collaborative domain and the social domain under the first view and the second view respectively, to obtain a noise-added user global embedded representation;
taking (z_u', z_u'') and (z_i', z_i'') as positive samples and (z_u', z_v'') and (z_i', z_j'') as negative samples to perform contrast learning between the first view and the second view, thereby calculating the first loss function as:
L_N = L_N^U + L_N^I ,
wherein,
L_N^U = Σ_{u∈U} −log( exp(s(z_u', z_u'')/τ) / Σ_{v∈U} exp(s(z_u', z_v'')/τ) ) ,
L_N^I = Σ_{i∈J} −log( exp(s(z_i', z_i'')/τ) / Σ_{j∈J} exp(s(z_i', z_j'')/τ) ) ,
wherein z_u' represents the element corresponding to the u-th user in the noise-added user global embedded representation under the first view, z_u'' represents the element corresponding to the u-th user in the noise-added user global embedded representation under the second view, z_i' represents the element corresponding to the i-th item in the noise-added item global embedded representation under the first view, z_i'' represents the element corresponding to the i-th item in the noise-added item global embedded representation under the second view, z_v'' represents the element corresponding to the v-th user in the noise-added user global embedded representation under the second view, z_j'' represents the element corresponding to the j-th item in the noise-added item global embedded representation under the second view, s(·,·) represents a similarity function, τ represents the temperature coefficient, L_N represents the first loss function, U represents a user set, and J represents an item set;
each of the first noise and the second noise is generated by:
Δ = sign(E) ⊙ rand_like(E) · ε ,
wherein E represents a matrix to which noise is to be added, rand_like(E) represents randomly generated noise of the same dimension as E after L2 norm normalization, sign assigns positive and negative numbers to 1 and −1, respectively, ε represents the noise magnitude, ⊙ represents the Hadamard product, and Δ represents the generated noise;
performing contrast learning using the user fusion global embedded representation and the item global embedded representation in a non-data augmentation manner to determine a second loss function, comprising:
taking (z_u^(k), z_i^(k+1)) and (z_i^(k), z_u^(k+1)) as positive samples and (z_v^(k), z_j^(k+1)) and (z_j^(k), z_v^(k+1)) as negative samples to perform contrast learning, whereby the second loss function is calculated as:
L_S = L_S^U + L_S^I ,
wherein,
L_S^U = Σ_{(u,i)∈R} −log( exp(s(z_u^(k), z_i^(k+1))/τ) / Σ_{(v,j)∉R} exp(s(z_v^(k), z_j^(k+1))/τ) ) ,
L_S^I = Σ_{(u,i)∈R} −log( exp(s(z_i^(k), z_u^(k+1))/τ) / Σ_{(v,j)∉R} exp(s(z_j^(k), z_v^(k+1))/τ) ) ,
wherein k is an even number, z_u^(k) represents the element corresponding to the u-th user in an even layer of the user fusion global embedded representation, z_u^(k+1) represents the element corresponding to the u-th user in an odd layer of the user fusion global embedded representation, z_i^(k+1) represents the element corresponding to the i-th item in an odd layer of the item global embedded representation, z_i^(k) represents the element corresponding to the i-th item in an even layer of the item global embedded representation, z_v^(k) represents the element corresponding to the v-th user in an even layer of the user fusion global embedded representation, z_v^(k+1) represents the element corresponding to the v-th user in an odd layer of the user fusion global embedded representation, z_j^(k) represents the element corresponding to the j-th item in an even layer of the item global embedded representation, z_j^(k+1) represents the element corresponding to the j-th item in an odd layer of the item global embedded representation, s(·,·) represents a similarity function, τ represents the temperature coefficient, L_S represents the second loss function, U represents a user set, J represents a set of items, R represents a set of user-item pairs having an interaction history, the u-th user has an interaction history with the i-th item, and the v-th user has no interaction history with the j-th item;
performing iterative training based on the first loss function, the second loss function, and the recommendation loss function to generate an item recommendation model from the user global embedded representation and the item global embedded representation; and
Recommending the interested items to the user based on the item recommendation model.
2. The item recommendation method of claim 1, wherein establishing a social network of the user to obtain the user social embedding matrix based on historical interaction information of the user with the item comprises:
constructing a user-article interaction matrix according to the historical interaction information of the user and the article;
Establishing the social network according to the user-object interaction matrix to generate a social matrix, wherein the social matrix represents social relations among users in the social network;
generating a user structure embedding matrix according to the social matrix and the user collaborative embedding matrix;
calculating an interest reliability index for each edge in the social network by using the user structure embedding matrix;
the social user embedding matrix is generated by a graph attention neural network using a social graph and the collaborative user embedding matrix.
3. The item recommendation method of claim 2, wherein establishing the social network from the user-item interaction matrix to generate a social matrix comprises:
multiplying the user-item interaction matrix by a transpose of the user-item interaction matrix to obtain an intermediate matrix;
determining whether the value of each intermediate element in the intermediate matrix is within a preset range;
setting the value of an element corresponding to any intermediate element in the social matrix to be a first value in response to the value of any intermediate element in the intermediate matrix being within the preset range;
And setting the value of an element corresponding to any intermediate element in the social matrix to be a second value in response to the value of any intermediate element in the intermediate matrix being outside the preset range.
4. The item recommendation method of claim 3, wherein calculating an interest reliability index for each edge in the social network using the user structure embedding matrix comprises:
for an element having the first value in the social matrix, calculating a similarity between the u-th user and the v-th user associated with the element according to:
s_{uv} = (p_u · p_v) / (‖p_u‖ ‖p_v‖) ,
calculating an interest reliability index connecting edges of the u-th user and the v-th user according to:
,
taking the interest reliability index as a weight of the edge connecting the u-th user and the v-th user to generate the social graph;
wherein p_u represents the element of the user structure embedding matrix corresponding to the u-th user, p_v represents the element of the user structure embedding matrix corresponding to the v-th user, s_{uv} represents the similarity between the u-th user and the v-th user, and β_{uv} represents the interest reliability index of the edge connecting the u-th user and the v-th user.
5. The item recommendation method of claim 1, wherein balancing weights of the embedded representations of the user under the collaborative domain and the social domain using a gate mechanism to obtain a noise-added user global embedded representation comprises:
generating the noise-added user global embedded representation by:
Z_U' = Gate(Z_F', E_S) ,
wherein Z_U' represents the noise-added user global embedded representation, Gate represents the gating mechanism, Z_F' represents the noise-added user fusion global embedded representation, and E_S represents the user social embedding matrix;
in the Gate function, the following formula is performed for each element of Z_F' and E_S:
g_u' = σ(W_1 h_u' + W_2 e_u), z_u' = g_u' ⊙ h_u' + (1 − g_u') ⊙ e_u ,
wherein h_u' represents an element in Z_F', e_u represents an element in E_S, W_1 and W_2 represent transformation matrices for balancing the weights of the embedded representations of the user under the collaborative domain and the social domain, σ represents an activation function, and z_u' represents a component of Z_U'.
6. An article recommendation device, characterized in that the article recommendation device comprises:
At least one processor; and
At least one memory storing a computer program;
wherein the computer program, when executed by the at least one processor, causes the item recommendation device to perform the steps of the item recommendation method according to any one of claims 1 to 5.
7. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the item recommendation method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410384820.9A CN117972220B (en) | 2024-04-01 | 2024-04-01 | Item recommendation method, item recommendation device, and computer-readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410384820.9A CN117972220B (en) | 2024-04-01 | 2024-04-01 | Item recommendation method, item recommendation device, and computer-readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117972220A CN117972220A (en) | 2024-05-03 |
CN117972220B true CN117972220B (en) | 2024-06-21 |
Family
ID=90859915
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410384820.9A Active CN117972220B (en) | 2024-04-01 | 2024-04-01 | Item recommendation method, item recommendation device, and computer-readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117972220B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118227891B (en) * | 2024-05-20 | 2024-08-20 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Item recommending method and device based on multiple modes and computer readable storage medium |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114817663A (en) * | 2022-05-05 | 2022-07-29 | 杭州电子科技大学 | Service modeling and recommendation method based on class perception graph neural network |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11645524B2 (en) * | 2019-05-10 | 2023-05-09 | Royal Bank Of Canada | System and method for machine learning architecture with privacy-preserving node embeddings |
US11720590B2 (en) * | 2020-11-06 | 2023-08-08 | Adobe Inc. | Personalized visualization recommendation system |
- 2024-04-01: CN application CN202410384820.9A granted as patent CN117972220B (Active)
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114817663A (en) * | 2022-05-05 | 2022-07-29 | 杭州电子科技大学 | Service modeling and recommendation method based on class perception graph neural network |
Non-Patent Citations (1)
Title |
---|
IDVT: Interest-aware Denoising and View-guided Tuning for Social Recommendation; Dezhao Yang et al.; https://arxiv.org/pdf/2308.15926; 2023-08-30; pp. 1-9 *
Also Published As
Publication number | Publication date |
---|---|
CN117972220A (en) | 2024-05-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Guo et al. | Combining geographical and social influences with deep learning for personalized point-of-interest recommendation | |
CN117972220B (en) | Item recommendation method, item recommendation device, and computer-readable storage medium | |
Sun et al. | Fused adaptive lasso for spatial and temporal quantile function estimation | |
US20160321362A1 (en) | Determining a company rank utilizing on-line social network data | |
CN114281976A (en) | Model training method and device, electronic equipment and storage medium | |
CN114020999A (en) | Community structure detection method and system for movie social network | |
CN113590976A (en) | Recommendation method of space self-adaptive graph convolution network | |
Alhamdani et al. | Recommender system for global terrorist database based on deep learning | |
CN111340601B (en) | Commodity information recommendation method and device, electronic equipment and storage medium | |
Chen et al. | Recommendation Algorithm in Double‐Layer Network Based on Vector Dynamic Evolution Clustering and Attention Mechanism | |
Zhu | Network Course Recommendation System Based on Double‐Layer Attention Mechanism | |
Yıldız | On the performance of the Jackknifed Liu-type estimator in linear regression model | |
Li et al. | From edge data to recommendation: A double attention-based deformable convolutional network | |
CN111930926B (en) | Personalized recommendation algorithm combined with comment text mining | |
Zheng | Non-dominated differential context modeling for context-aware recommendations | |
Liu | POI recommendation model using multi-head attention in location-based social network big data | |
Deng et al. | A Trust-aware Neural Collaborative Filtering for Elearning Recommendation. | |
Liu | More on Liu-type estimator in linear regression | |
Liu et al. | Geographically weighted regression model-assisted estimation in survey sampling | |
Dong et al. | A hierarchical network with user memory matrix for long sequence recommendation | |
Liu et al. | Collaborative social deep learning for celebrity recommendation | |
Wang et al. | Few-Shot aerial image classification with deep economic network and teacher knowledge | |
Feng et al. | Variable selection for partially varying coefficient single-index model | |
Zhang | Research collaboration prediction and recommendation based on network embedding in co‐authorship networks | |
Dana | What makes improper linear models tick? | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||