CN111104604B - Lightweight socialization recommendation method based on Hash learning - Google Patents


Info

Publication number: CN111104604B
Application number: CN201911165736.3A
Authority: CN (China)
Prior art keywords: user, matrix, model, social, scoring
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN111104604A
Inventors: 邬俊, 罗芳媛
Current Assignee: Beijing Jiaotong University
Original Assignee: Beijing Jiaotong University
Application filed by Beijing Jiaotong University; priority to CN201911165736.3A
Publication of application CN111104604A; application granted; publication of grant CN111104604B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/95: Retrieval from the web
    • G06F16/953: Querying, e.g. by the use of web search engines
    • G06F16/9536: Search customisation based on social or collaborative filtering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00: Commerce
    • G06Q30/06: Buying, selling or leasing transactions
    • G06Q30/0601: Electronic shopping [e-shopping]
    • G06Q30/0631: Item recommendations
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00: Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/01: Social networking
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention provides a lightweight social recommendation method based on hash learning. The method comprises the following steps: constructing a user-item scoring matrix and a user-user social network, and generating a social corpus by applying truncated random walks and negative sampling to the user-user social network; training a hybrid model of discrete matrix decomposition and continuous network embedding on the user-item scoring matrix and the social corpus to obtain a binarized user feature matrix and a binarized item feature matrix; and estimating users' preference scores for unscored items from the user feature matrix and the item feature matrix, and recommending to each user the one or more unscored items with the highest estimated scores. The method performs on par with mainstream real-valued recommendation methods, but thanks to its lightweight model design, the resulting binarized user and item features incur much lower computation and storage costs.

Description

Lightweight socialization recommendation method based on Hash learning
Technical Field
The invention relates to the field of computer applications, and in particular to a lightweight social recommendation method based on hash learning.
Background
As an effective supplementary means of information retrieval systems, recommendation systems play an important role in providing personalized information services. Collaborative filtering is a core technology for constructing a personalized recommendation system; among the many collaborative filtering methods, matrix decomposition is one of the most popular methods at present. The core idea of matrix decomposition is to map users and items to the same low-dimensional hidden space by decomposing a partially observed "user-item" interaction matrix (UI matrix for short), and then predict unobserved "user-item" correlations according to the inner product between the user and item hidden feature vectors. Typically, observed "user-item" interaction records account for only a small portion of the UI matrix, a so-called "data sparsity" problem, which severely constrains the performance of the matrix factorization model.
With the popularization of social media, some researchers have tried to alleviate the sparsity of the UI matrix by exploiting social relations among users, giving rise to social recommendation systems. Traditional social recommendation methods directly extend the matrix decomposition model and use social data through heuristic strategies; representative methods include the SoRec and SoReg models. In recent years, researchers have combined matrix decomposition models with network embedding models in order to exploit and mine social data more thoroughly; representative methods include the CUNE model and GraphRec.
On the other hand, as the numbers of online users and items keep growing, recommendation systems face serious real-time challenges. Against this background, discrete collaborative filtering models have been developed that replace real-valued user and item hidden representations in Euclidean space with binary codes in Hamming space, thereby saving computation and storage costs. However, binary codes carry less information than real-valued representations, so recommendation accuracy suffers slightly; in other words, discrete collaborative filtering trades performance for efficiency. To compensate for this performance loss, researchers further designed the discrete social recommendation (DSR) model. In essence, DSR is a binarized version of the traditional social recommendation model SoRec; it cannot process social data with the latest network embedding techniques, and its recommendation accuracy still needs improvement.
The prior-art discrete social recommendation model DSR has two disadvantages:
1) The DSR model learns users' social representations by variable sharing, which considers only the direct connection between each user and its first-order neighbors and ignores indirect connections to higher-order neighbors, so the learned social features leave room for improvement;
2) Because of the shared-variable design, the social features learned by the DSR model are also binary; however, the user social representation is only a byproduct of the modeling process and does not take part in the final recommendation computation, while a binary representation carries less information than a real-valued one, causing unnecessary encoding loss.
Disclosure of Invention
The embodiment of the invention provides a lightweight social recommendation method based on hash learning, which aims to overcome the problems in the prior art.
In order to achieve the above purpose, the present invention adopts the following technical scheme.
A lightweight social recommendation method based on hash learning comprises the following steps:
S1, constructing a user-item scoring matrix that records users' scoring behavior on items, and normalizing the scoring data in the user-item scoring matrix;
S2, constructing a user-user social network that records the connection relations among users, and generating a social corpus by applying truncated random walks and negative sampling to the user-user social network;
S3, training a hybrid model of discrete matrix decomposition and continuous network embedding on the user-item scoring matrix and the social corpus to obtain a binarized user feature matrix and a binarized item feature matrix;
S4, estimating users' preference scores for unscored items from the user feature matrix and the item feature matrix, and recommending to each user the one or more unscored items with the highest estimated scores.
Preferably, constructing the user-item scoring matrix recording users' scoring behavior on items and normalizing its scoring data comprises:
constructing a user-item scoring matrix R ∈ [0,1]^{m×n}, where m and n denote the numbers of users and items respectively. The entries of R record users' scoring behavior on items; the scoring data is normalized and quantized into decimals, where the closer a value is to 1, the more the user likes the item, and 0 denotes no score.
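A minimal sketch of this construction (assuming a 5-star raw rating scale, which the text itself does not fix) could look like:

```python
import numpy as np

def build_rating_matrix(ratings, m, n, r_max=5.0):
    """Construct the user-item scoring matrix R in [0,1]^{m x n}.

    `ratings` holds (user, item, score) triples; scores are normalized by
    an assumed maximum rating r_max (e.g. a 5-star scale), so values close
    to 1 mean the user likes the item and 0 marks "no score"."""
    R = np.zeros((m, n))
    for u, i, r in ratings:
        R[u, i] = r / r_max
    return R

R = build_rating_matrix([(0, 1, 5), (1, 0, 3)], m=2, n=2)
# R[0, 1] -> 1.0, R[1, 0] -> 0.6, unobserved entries stay 0
```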
Preferably, constructing the user-user social network recording the connection relations among users and generating a social corpus by applying truncated random walks and negative sampling to it comprises:
constructing a user-user social network whose social data record the connection relations among users, marked 1 if two users are friends and 0 otherwise; and applying truncated random walks and negative sampling to the user-user social network to generate a social corpus 𝒟 = {(u, C_u, N_u)}, where C_u and N_u denote the context user set and the negative sample set of user u, respectively.
Preferably, the generation of the social corpus 𝒟 comprises:
S2-1: generating a context user set C_u for each user. Run truncated random walks on the user-user social network to obtain a node sequence for each user, then slide a window along the user's node sequence to find its context user set: when the sliding window stops at some position in the node sequence, the user at the middle position is called the central user u, and the users at the other positions inside the window are its context users. During the random walk, the probability that user u jumps to user v is defined in terms of co(u, v), the number of co-scoring actions of users u and v, the out-degree d⁺(u) of user u, and the friend set F(u) of user u.
Assuming the truncated random walk sequence has length L: for user u, compute the transition probabilities from u to its friends with the probability transition formula and select the friend v with the highest probability as the next-hop node; that node likewise computes its own transition probabilities to its friends and selects the most probable friend as its next hop, and so on, until a node sequence of length L is generated, from which user u's context user set C_u is taken.
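The walk and window search of S2-1 can be sketched as follows; since the exact transition formula is only referenced above, the sketch substitutes the co-scoring count co(u, v) as the score ranking the next hop (an assumption for illustration only):

```python
def truncated_walk(start, friends, co, L):
    """Truncated walk of length L: at every step, move to the friend with
    the highest transition probability. The exact transition formula is not
    reproduced here; as a stand-in, the score is taken to be the co-rating
    count co[(u, v)], one of the quantities the formula is built from."""
    seq, u = [start], start
    while len(seq) < L and friends.get(u):
        u = max(friends[u], key=lambda v: co.get((u, v), 0))
        seq.append(u)
    return seq

def context_sets(seq, window=2):
    """Slide a window over the node sequence; for each centre position, the
    users at the other positions inside the window are context users."""
    ctx = {}
    for idx, u in enumerate(seq):
        lo, hi = max(0, idx - window), min(len(seq), idx + window + 1)
        ctx.setdefault(u, set()).update(
            v for j, v in enumerate(seq[lo:hi], start=lo) if j != idx)
    return ctx
```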
S2-2: generating a negative sample set N_u for each user. For any user u, the negative sample set N_u is drawn according to how frequently non-context users appear in the social corpus and how active they are in the scoring data. For a given user v ∉ C_u, the probability of v being selected as a negative sample of user u is defined in terms of f(v), the frequency with which v appears in the social corpus, r(v), the number of items v has rated in the scoring data, and the whole user set 𝒰; the hyper-parameter a is an empirical value.
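Since the exact selection formula is not reproduced above, the following sketch assumes a word2vec-style distribution built from the two stated ingredients, frequency f(v) and activity r(v), raised to an empirical exponent a:

```python
import numpy as np

def negative_sampling_probs(users, f, r, a=0.75):
    """Sampling distribution over candidate negative users. The exact
    formula is not reproduced in the text; this sketch assumes the common
    word2vec-style form P(v) proportional to (f(v) * r(v)) ** a, combining
    corpus frequency f(v) with rating activity r(v); the exponent a plays
    the role of the empirical hyper-parameter a."""
    w = np.array([(f[v] * r[v]) ** a for v in users], dtype=float)
    return w / w.sum()

p = negative_sampling_probs([0, 1], f={0: 10, 1: 1}, r={0: 5, 1: 1})
# more frequent and more active users are sampled more often
```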
S2-3: the context user set C_u and the negative sample set N_u of each user together form the social corpus 𝒟.
Preferably, training the hybrid model of discrete matrix decomposition and continuous network embedding on the user-item scoring matrix and the social corpus to obtain the binarized user and item feature matrices comprises:
defining the objective function of the hybrid model as the combination of three terms: the loss L_MF of the discrete matrix decomposition model, the loss L_NE of the continuous network embedding model, and a smoothing term L_S that connects the two models.
The loss function of the discrete matrix decomposition model is the squared reconstruction error over the observed scores,

L_MF = Σ_{(u,i)∈Ω} (R_{ui} − b_u^T d_i)²

s.t. B ∈ {±1}^{f×m}, D ∈ {±1}^{f×n},
B1_m = 0, D1_n = 0, BB^T = mI_f, DD^T = nI_f,

where Ω is the set of (u, i) index pairs with observed scores, and b_u (the u-th column of B) and d_i (the i-th column of D) are the binarized feature vectors of user u and item i. In the constraints, B1_m = 0 and D1_n = 0 control feature-code balance, while BB^T = mI_f and DD^T = nI_f control feature-code independence. Matrix B is the binarized user feature matrix and matrix D the binarized item feature matrix.
the continuous network embedded model is a neural network comprising a hidden layer, and is provided withRepresenting a connection weight matrix between an input layer and a hidden layer of a neural network, < >>Representing a connection weight matrix between the hidden layer and the output layer; for a user u, it corresponds to two socialization features w u And v u From the ith column of matrix W and the ith row of matrix V, W, respectively u Called input vector, v u Referred to as an output vector;
the loss function of the continuous network embedding model is defined as follows:
wherein, the liquid crystal display device comprises a liquid crystal display device,and->Mean vectors representing all positive and negative samples of user u, respectively, σ (z) =1/(1+e) -z ) For converting an input variable into a probability output; lambda (lambda) w ,λ v Is a super parameter and is used for adjusting the proportion of the regular term in the loss function;
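A hedged sketch of such a loss in the standard skip-gram-with-negative-sampling form, using the mean positive and mean negative vectors named above (the patent's exact expression may differ):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def embed_loss(w_u, v_pos_mean, v_neg_mean, lam_w=0.0, lam_v=0.0):
    """Skip-gram-style loss with negative sampling: pull the input vector
    w_u towards the mean positive output vector, push it away from the mean
    negative output vector, with L2 terms weighted by lam_w and lam_v."""
    loss = -np.log(sigmoid(w_u @ v_pos_mean)) - np.log(sigmoid(-w_u @ v_neg_mean))
    loss += lam_w * (w_u @ w_u) + lam_v * (v_pos_mean @ v_pos_mean + v_neg_mean @ v_neg_mean)
    return loss
```

A user vector aligned with its context mean and anti-aligned with its negative mean yields a lower loss than the reverse configuration.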
The smoothing term between the discrete matrix decomposition model and the continuous network embedding model aligns each user's binarized preference feature b_u with its real-valued social feature w_u, penalizing the difference between them.
After merging the terms, the objective function of the hybrid model is expressed as

min L_MF + α L_NE + β L_S
s.t. B ∈ {±1}^{f×m}, D ∈ {±1}^{f×n},
B1_m = 0, D1_n = 0, BB^T = mI_f, DD^T = nI_f,

where α, β > 0 are hyper-parameters adjusting the weight of each term in the objective. Two continuous variables X ∈ ℝ^{f×m} and Y ∈ ℝ^{f×n} are then defined, and the balance and decorrelation constraints are relaxed onto them: X1_m = 0, XX^T = mI_f and Y1_n = 0, YY^T = nI_f. Replacing the hard constraints on B and D with the similarity terms tr(B^T X) and tr(D^T Y), the objective of the hybrid model is equivalently transformed into the optimization problem

min L_MF + α L_NE + β L_S − λ_B tr(B^T X) − λ_D tr(D^T Y)
s.t. B ∈ {±1}^{f×m}, D ∈ {±1}^{f×n},
X1_m = 0, Y1_n = 0, XX^T = mI_f, YY^T = nI_f,

where λ_B, λ_D > 0 are hyper-parameters regulating the degree of relaxation of the target variables.
the training process of the discrete matrix decomposition and continuous network embedded mixed model comprises the following steps: initializing model parameters B, D, W, V, X and Y, and entering an iterative training process: fixing D, W, V, X, Y, optimizing each b in parallel u For each b using DCD algorithm u Carrying out bit-by-bit updating to obtain updated B; fixing B, W, V, X, Y, optimizing each d in parallel i For each d using DCD algorithm i Carrying out bit-by-bit updating to obtain updated D; b, D, X and Y are fixed, W and V are updated by utilizing an SGD algorithm, wherein a BP algorithm is adopted in gradient calculation; b, D, W, V and Y are fixed, and X is updated by means of SVD algorithm; b, D, W, V and X are fixed, and Y is updated by means of SVD algorithm; repeating the steps, continuously and alternately updating the parameters B, D, W, V, X and Y until convergence conditions are met, stopping the training process, and finally outputting the binarized user characteristic matrix B and the object characteristic matrix D.
Preferably, the training process of the hybrid model specifically comprises the following steps:
S3-1: model initialization. Relax the optimization problem into real-valued space and alternately optimize each parameter with the SGD algorithm to obtain the optimal continuous-space solution (P*, Q*, W*, V*), then initialize the discrete model by the rules:
B = sgn(P*), D = sgn(Q*),
W = W*, V = V*, X = P*, Y = Q*.
S3-2: fix D, W, V, X, Y and update B. The objective of the hybrid model reduces to an optimization problem over B alone, where Ω_u denotes the set of items i whose (u, i) scores are observed for user u.
Update each b_u bitwise with the DCD algorithm. Let b_{uk} and d_{ik} denote the k-th bits of b_u and d_i, and let b_{u¬k} and d_{i¬k} denote the vectors formed by the remaining bits. The update rule sets b_{uk} ← sgn(K(b̂_{uk}, b_{uk})), where K(a, b) = a if a ≠ 0 and K(a, b) = b otherwise; when b̂_{uk} = 0, b_{uk} is not updated.
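As an illustration of the bitwise DCD idea on the plain squared reconstruction loss (the full objective adds smoothing and relaxation terms, omitted here for brevity):

```python
import numpy as np

def dcd_update_user(b_u, D, r_u, rated):
    """Illustrative bitwise DCD pass for one user code b_u, minimizing
    sum over rated items of (r_ui - b_u^T d_i)^2. For each bit k, compute
    b_hat as the correlation of d_ik with the residual that excludes bit k,
    then apply b_uk = sgn(K(b_hat, b_uk)): the bit is left unchanged when
    b_hat == 0."""
    f = b_u.shape[0]
    for k in range(f):
        b_hat = 0.0
        for i in rated:
            d_i = D[:, i]
            resid = r_u[i] - b_u @ d_i + b_u[k] * d_i[k]  # residual excluding bit k
            b_hat += resid * d_i[k]
        if b_hat != 0:          # K(a, b) = a if a != 0 else b
            b_u[k] = 1.0 if b_hat > 0 else -1.0
    return b_u
```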
S3-3: fix B, W, V, X, Y and update D. As with the update of B, the objective of the joint model reduces to an optimization problem over D alone, where Ω_i denotes the set of users u whose (u, i) scores are observed for item i. Each d_i is updated bit by bit with the DCD algorithm: d_{ik} ← sgn(K(d̂_{ik}, d_{ik})); likewise, d_{ik} is updated only when d̂_{ik} ≠ 0, and is otherwise left unchanged.
S3-4: fix B, D, X, Y and update W, V. The objective of the hybrid model reduces to the continuous network embedding loss plus the smoothing term, which is minimized over W and V with the SGD algorithm, the gradients being computed with the BP algorithm.
S3-5: fix B, D, W, V, Y and update X. The objective of the hybrid model reduces to maximizing tr(B^T X) subject to X1_m = 0 and XX^T = mI_f, whose closed-form solution is

X = √m [P_b P̂_b][Q_b Q̂_b]^T,

where P_b and Q_b denote the left and right singular matrices obtained by singular value decomposition (SVD) of the row-centred matrix B̄ = B(I_m − (1/m)1_m 1_m^T), P̂_b denotes the singular vectors corresponding to zero singular values, and Q̂_b is obtained by Gram-Schmidt orthogonalization of [Q_b 1].
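For the full-rank case the closed-form SVD update can be sketched as follows; the zero-singular-value branch with Gram-Schmidt completion is omitted:

```python
import numpy as np

def update_X(B):
    """Sketch of the closed-form update for the auxiliary variable X
    (full-rank case): X is the matrix nearest to B that satisfies
    X 1_m = 0 and X X^T = m I_f, obtained by centring B's rows and taking
    the orthogonal (polar) factor of the SVD, scaled by sqrt(m). When zero
    singular values occur, the basis must instead be completed via
    Gram-Schmidt orthogonalization of [Q_b, 1]; that branch is omitted."""
    f, m = B.shape
    Bc = B - B.mean(axis=1, keepdims=True)    # enforce X 1_m = 0
    U, s, Vt = np.linalg.svd(Bc, full_matrices=False)
    return np.sqrt(m) * U @ Vt
```

The symmetric update for Y in step S3-6 follows by replacing B with D and m with n.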
S3-6: fix B, D, W, V, X and update Y. The objective of the hybrid model reduces to maximizing tr(D^T Y) subject to Y1_n = 0 and YY^T = nI_f, whose closed-form solution is

Y = √n [P_d P̂_d][Q_d Q̂_d]^T,

where P_d and Q_d denote the left and right singular matrices obtained by SVD of the row-centred matrix D̄ = D(I_n − (1/n)1_n 1_n^T), P̂_d denotes the singular vectors corresponding to zero singular values, and Q̂_d is obtained by Gram-Schmidt orthogonalization of [Q_d 1].
S3-7: and repeating the steps S3-2 to S3-6 until convergence conditions are met, stopping the training process, and finally outputting the binarized user characteristic matrix B and the object characteristic matrix D.
Preferably, the convergence condition includes: the objective function value is smaller than a certain preset threshold value; alternatively, each bit in matrices B and D is no longer changed.
Preferably, estimating users' preference scores for unscored items from the user and item feature matrices and recommending the one or more unscored items with the highest estimated scores comprises:
reconstructing the scoring matrix R̂ = B^T D from the binarized user feature matrix B and item feature matrix D, where each reconstructed score estimates the user's preference for the item; sorting each row of the reconstruction matrix R̂ in descending order; and recommending to each user the one or more unscored items with the highest estimated scores.
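Step S4 can be sketched as follows; a plain matrix product stands in for the XOR/popcount logic that ±1 codes enable in production:

```python
import numpy as np

def recommend(B, D, R, u, top_k=3):
    """Reconstruct user u's scores as inner products of binary codes and
    return the top-k unscored items. With +/-1 codes the inner product
    reduces to XOR/popcount bit operations in production; a plain matmul
    is used here for clarity."""
    scores = (B[:, u] @ D).astype(float)  # u-th column of B against all of D
    scores[R[u] > 0] = -np.inf            # exclude items the user already scored
    return list(np.argsort(-scores)[:top_k])

B = np.array([[1.0], [1.0]])                         # one user, f = 2
D = np.array([[1.0, 1.0, -1.0], [1.0, -1.0, -1.0]])  # three items
R = np.array([[0.8, 0.0, 0.0]])                      # item 0 already scored
# recommend(B, D, R, u=0, top_k=2) -> [1, 2]
```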
According to the technical scheme provided by the embodiment of the invention, the lightweight social recommendation method based on hash learning can learn binarized user and item features from scoring data and social data simultaneously, and then recommend items to users rapidly and effectively by means of bitwise logic operations. On the premise of guaranteeing recommendation accuracy, the method greatly reduces the online computation and storage costs of the model.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a process flow diagram of a lightweight social recommendation method based on hash learning provided by an embodiment of the invention;
FIG. 2 is a training workflow diagram of a discrete matrix decomposition and continuous network embedded hybrid model provided by an embodiment of the present invention;
FIG. 3 shows the results of a comparison experiment between the method of the embodiment of the present invention and conventional discrete recommendation methods;
FIG. 4 shows the results of a comparison experiment between the method and its real-valued counterpart in terms of recommendation accuracy, storage overhead, and time overhead.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for explaining the present invention and are not to be construed as limiting the present invention.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or coupled. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
For the purpose of facilitating an understanding of the embodiments of the invention, reference will now be made to the drawings of several specific embodiments illustrated in the drawings and in no way should be taken to limit the embodiments of the invention.
Supplementing "user-item" scoring data with "user-user" social data has become one of the effective means of improving recommendation performance, but the computational efficiency of current social recommendation methods is severely limited by the growing numbers of users and items, especially in mobile recommendation scenarios where computing and storage resources are scarce. In view of this, embodiments of the present invention provide a lightweight social recommendation method based on hash learning that remedies the two drawbacks of DSR. On the one hand, the method uses a network embedding model to process users' social relations and can effectively mine higher-order neighbor relations among users, thereby further strengthening users' social features. On the other hand, the method adopts a hybrid "discrete-continuous" modeling approach that learns binary user preference features and real-valued user social features simultaneously and minimizes the difference between them through feature alignment. Under this unified optimization framework, the discrete and continuous learning tasks reinforce each other while the binarization encoding loss is minimized, yielding more accurate recommendation results.
The lightweight recommendation method refers to an online recommendation method which occupies less computing and storage resources.
In the embodiment of the invention, the network refers to a social network: nodes represent users, and links represent social relations among users, such as friend relations (user A follows user B), forwarding relations (user A forwards a post of user B), and comment relations (user A comments on a post of user B). By means of the network embedding model, real-valued feature vectors that preserve the users' social structure information can be obtained.
The processing flow of the lightweight social recommendation method based on hash learning provided by the embodiment of the invention is shown in fig. 1, and comprises the following processing steps:
S1, construct a "user-item" scoring matrix and normalize the scoring data to obtain R ∈ [0,1]^{m×n} (where m and n denote the numbers of users and items respectively), which records users' scoring behavior on items; each score is quantized to a decimal, with values closer to 1 indicating that the user prefers the item and vice versa, and 0 indicating no score.
S2, construct a "user-user" social network recording the connection relations among users; if two users are friends, the relation is marked 1, otherwise 0. Apply truncated random walks and negative sampling to the "user-user" social network to generate a social corpus 𝒟 = {(u, C_u, N_u)}, where C_u and N_u denote the context user set and the negative sample set of user u, respectively.
Step S3, train the hybrid model of discrete matrix decomposition and continuous network embedding on the scoring matrix R and the social corpus 𝒟 to obtain the binarized user feature matrix B ∈ {±1}^{f×m} and item feature matrix D ∈ {±1}^{f×n}, where f is the dimension of the feature space.
S4, reconstruct the scoring matrix R̂ = B^T D from the user feature matrix and the item feature matrix, and sort the reconstructed scores of R̂ row by row in descending order; each reconstructed score is the predicted preference of a user for an item, and the one or more unscored items with the highest predicted scores are recommended to the user.
S2 social corpusThe generation specifically comprises the following steps:
s2-1: generating a set of contextual users for a userRunning a truncated random walk on a 'user-user' social network to obtain a node sequence of each user, and then searching a context user set of each user from the node sequence of the user by utilizing a sliding window; when the sliding window stops at a certain position in the node sequence, the user in the middle position is called central user u, the user in other positions in the window is called contextual user +. >In the random walk process, the probability of user u jumping to user v is defined as follows:
where co (u, v) represents the number of co-scored actions of user u and user v, d + (u) represents the degree of departure of user u,representing the set of friends of user u.
Let the length of truncated random walk sequence be L, for user u, calculate the probability of transitioning from user u to its friends according to the probability transition formula, then select friend v with the highest probability as its next hop node, this next hop node also calculates the probability of transitioning itself to its friends according to the probability transition formula, select friend with the highest probability as its next hop node, and so on until a node sequence with length L is generated, take this node sequence as the user u's context user set
S2-2: generating a negative set of samples for a userFor any user u, according to the occurrence frequency of the non-contextual user in the social corpus and the activity degree of the non-contextual user in the scoring data, acquiring a negative sample set +.>Given a certain user +.>The probability of being selected as a negative sample for user u is defined as follows:
wherein f (v) represents the frequency of occurrence of the user v in the social corpus, r (v) represents the number of items evaluated by the user v in the scoring data, Representing the whole user set, wherein the hyper-parameter a is an experience value;
S2-3: the contextual user set C_u and the negative sample set N_u of each user together constitute the user's social corpus.
S3, defining an objective function of the discrete matrix decomposition and continuous network embedded hybrid model as follows:
where the first and second terms respectively denote the loss functions of the discrete matrix decomposition model and the continuous network embedding model, and the third is a smoothing term between the discrete matrix decomposition model and the continuous network embedding model, used to connect the two models.
the loss function of the discrete matrix decomposition model is defined as follows:
s.t.B∈{±1} f×m ,D∈{±1} f×n
B1 m =0,D1 n =0,BB T =mI f ,DD T =nI f
where Ω is the set of (u, i) index pairs corresponding to observed scores, b_u is the u-th column of matrix B, and d_i is the i-th column of matrix D; they are the binarized feature vectors of user u and item i, respectively. Among the constraints, B1_m = 0 and D1_n = 0 control feature-code balance, while BB^T = mI_f and DD^T = nI_f control feature-code independence. Matrix B denotes the binarized user feature matrix and matrix D denotes the binarized item feature matrix.
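For illustration, the squared loss over observed scores with ±1 codes can be evaluated as below; mapping the inner product b_u^T d_i / f from [-1, 1] onto the normalized score range [0, 1] is an assumption of this sketch, not a formula taken from the patent text.

```python
import numpy as np

def dmf_loss(R_obs, B, D):
    """Squared loss of the discrete MF model over observed scores.

    R_obs: list of (u, i, r) triples with r in [0, 1].
    B: f x m matrix of +-1 user codes; D: f x n matrix of +-1 item codes.
    Prediction maps b_u^T d_i / f from [-1, 1] to [0, 1] (an assumption).
    """
    f = B.shape[0]
    loss = 0.0
    for u, i, r in R_obs:
        pred = (B[:, u] @ D[:, i] / f + 1.0) / 2.0  # predicted score in [0, 1]
        loss += (r - pred) ** 2
    return loss
```

Identical user and item codes give the maximal predicted score of 1, and fully opposite codes give 0.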
The continuous network embedding model is in fact a neural network; for simplicity of exposition it is assumed here to contain only one hidden layer. Let W denote the connection weight matrix between the input layer and the hidden layer of the neural network, and V the connection weight matrix between the hidden layer and the output layer. Each user u corresponds to two socialization features w_u and v_u, taken from the u-th column of matrix W and the u-th row of matrix V respectively; w_u is called the input vector and v_u the output vector. The goal of the network embedding model is to make a user's input vector as similar as possible to the output vectors of its contextual users, and as dissimilar as possible to the output vectors of its non-contextual users.
The specific loss function of the continuous network embedding model is defined as follows:
where the two mean vectors denote the averages of the output vectors of all positive and negative samples of user u respectively, and σ(z) = 1/(1 + e^(-z)) converts an input variable into a probability output; λ_w and λ_v are hyper-parameters used to adjust the weight of the regularization terms in the loss function.
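A minimal sketch of this loss, using the per-user mean positive and negative output vectors and σ(z) as defined above. The matrix shapes (W of size f x m with one column per user, V of size m x f with one row per user) follow from the column/row convention in the text; the function names are hypothetical.

```python
import numpy as np

def sigma(z):
    """Logistic function sigma(z) = 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def cne_loss(W, V, pos, neg, lam_w=0.01, lam_v=0.01):
    """Loss of the continuous network-embedding model.

    W: f x m input vectors (column u = w_u); V: m x f output vectors
    (row u = v_u).  pos / neg: dict user -> list of contextual /
    negative user ids.  Uses the mean positive and mean negative output
    vector per user, as described in the text.
    """
    loss = 0.0
    for u in pos:
        v_pos = np.mean([V[v] for v in pos[u]], axis=0)
        v_neg = np.mean([V[v] for v in neg[u]], axis=0)
        w_u = W[:, u]
        # pull w_u toward mean positive output, push away from mean negative
        loss -= np.log(sigma(w_u @ v_pos)) + np.log(sigma(-(w_u @ v_neg)))
    return loss + lam_w * np.sum(W ** 2) + lam_v * np.sum(V ** 2)
```

The loss approaches zero when each input vector aligns with its contextual users' output vectors and opposes its negative samples' output vectors.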
The smoothing term between the matrix factorization model and the continuous network embedding model is defined as:
The smoothing term connects the two models, so that each user's binary preference features stay as similar as possible to the same user's real-valued socialization features.
After merging the terms, the objective function of the discrete matrix decomposition and continuous network embedding hybrid model is expressed as follows:
s.t.B∈{±1} f×m ,D∈{±1} f×n
B1 m =0,D1 n =0,BB T =mI f ,DD T =nI f
Wherein alpha, beta > 0 is super parameter, which is used to regulate the specific gravity of each item in the objective function; to facilitate solving the discrete optimization problem described above, two continuous variables are first definedAndthereby relaxing the balance constraint and the decorrelation constraint toAnd->Since the two norms of B and D are constant, there is no effect on the optimization, tr (B T X) and tr (D) T Y) replace->And->Thus, the objective function of the discrete matrix decomposition and continuous network embedded hybrid model is equivalently transformed into the following optimization problem:
s.t.B∈{±1} f×m ,D∈{±1} f×n
X1 m =0,Y1 n =0,XX T =mI f ,YY T =nI f
where λ_B, λ_D > 0 are hyper-parameters used to regulate the degree of relaxation of the target variables.
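The trace substitution follows from a one-line expansion: since B has ±1 entries, ‖B‖_F² = mf, and the constraint XX^T = mI_f gives ‖X‖_F² = tr(XX^T) = mf, so both norms are constant:

```latex
\|B - X\|_F^2
  = \|B\|_F^2 + \|X\|_F^2 - 2\,\mathrm{tr}(B^{\top}X)
  = 2mf - 2\,\mathrm{tr}(B^{\top}X).
```

Minimizing ‖B − X‖_F² over the relaxed constraint set is therefore equivalent to maximizing tr(B^T X); the analogous identity with constant 2nf holds for D and Y.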
The workflow of training the discrete matrix decomposition and continuous network embedding hybrid model in S3 is shown in fig. 2. First the model parameters B, D, W, V, X, Y are initialized, and then the iterative training process begins: fix D, W, V, X, Y and optimize each b_u in parallel, updating each b_u bit by bit with the DCD algorithm to obtain the updated B; fix B, W, V, X, Y and optimize each d_i in parallel, updating each d_i bit by bit with the DCD algorithm to obtain the updated D; fix B, D, X, Y and update W and V with the SGD algorithm, computing gradients with the BP algorithm; fix B, D, W, V, Y and update X by means of the SVD algorithm; fix B, D, W, V, X and update Y by means of the SVD algorithm. These steps are repeated, alternately updating the parameters B, D, W, V, X and Y until a stopping condition is met, for example the objective function value falls below a preset threshold or no bit of B and D changes any more; finally the binarized user feature matrix B and item feature matrix D are output.
The method specifically comprises the following steps:
S3-1: model initialization. The optimization problem is relaxed into the real-valued space, and each parameter is alternately optimized with the SGD (stochastic gradient descent) algorithm to obtain an optimal solution (P*, Q*, W*, V*) in the continuous space. The discrete model is then initialized according to the following rules:
B = sgn(P*), D = sgn(Q*),
W = W*, V = V*, X = P*, Y = Q*
S3-2: fixing D, W, V, X, Y and updating B; the objective function of the discrete matrix decomposition and continuous network embedding hybrid model is equivalent to the following optimization problem:
where Ω_u denotes the set of observed score indices associated with user u, i.e. the items i such that (u, i) ∈ Ω.
The invention adopts the DCD (Discrete Coordinate Descent) algorithm to update b_u bit by bit. Let b_uk and d_ik denote the k-th bits of b_u and d_i respectively, and let b_u\k and d_i\k denote the vectors formed by the remaining hash bits after removing b_uk and d_ik. The specific update rule for b_uk is as follows:
where K(a, b) = a when a ≠ 0, and K(a, b) = b otherwise; that is, when the update quantity is zero, b_uk is not updated.
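The bit-by-bit DCD update can be illustrated as follows. The patent's closed-form per-bit rule is not reproduced in this text, so this sketch instead flips each bit only when doing so strictly lowers the squared loss of the assumed rating model, which realizes the same "leave b_uk unchanged when the update quantity is zero" behavior of the K(a, b) rule:

```python
import numpy as np

def dcd_update_bu(b_u, D_cols, ratings, f):
    """Bitwise (discrete coordinate descent) update of one user's code.

    D_cols: f x |Omega_u| matrix of codes of the items the user rated;
    ratings: the corresponding observed scores in [0, 1].  The rating
    model (b_u^T d_i / f mapped to [0, 1]) is an illustrative
    assumption.  Each bit keeps its current value unless flipping it
    strictly decreases the squared loss (ties leave the bit unchanged).
    """
    def loss(b):
        preds = (b @ D_cols / f + 1.0) / 2.0
        return float(np.sum((ratings - preds) ** 2))

    b = b_u.copy()
    for k in range(f):
        best = loss(b)
        b[k] = -b[k]              # tentatively flip bit k
        if loss(b) >= best:       # tie or worse: keep the old bit
            b[k] = -b[k]
    return b
```

Each pass touches every bit once; in the full algorithm these per-user updates run in parallel across users.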
S3-3: fixing B, W, V, X, Y and updating D; similarly to updating B, the objective function of the joint model is first rewritten as the following equivalent optimization problem:
where Ω_i denotes the set of observed score indices associated with item i, i.e. the users u such that (u, i) ∈ Ω; the discrete coordinate descent (DCD) algorithm is likewise used to update d_i bit by bit; the specific update rule for d_ik is as follows:
Likewise, d_ik is updated only when the update quantity is nonzero; otherwise d_ik is left unchanged.
S3-4: fixing B, D, X, Y, updating W, V, the objective function of the hybrid model is equivalent to the following optimization problem:
This is a standard neural network optimization problem; W and V can be updated with the stochastic gradient descent (SGD) algorithm, with the gradients computed by the BP (Back Propagation) algorithm.
S3-5: fixing B, D, W, V, Y, updating X, the objective function of the hybrid model is equivalent to the following optimization problem:
The specific update rule for X is as follows:
where P_b and Q_b respectively denote the left and right singular matrices obtained through SVD (Singular Value Decomposition); P̂_b denotes the matrix of left singular vectors corresponding to zero singular values; further, Q̂_b is obtained by Gram-Schmidt orthogonalization of [Q_b 1].
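The X-step admits a closed form via SVD. The sketch below assumes the row-centred matrix BJ has full row rank f, so the Gram-Schmidt completion described above is not needed; J = I − (1/m)11^T is the centring matrix, and the resulting X satisfies both relaxed constraints by construction.

```python
import numpy as np

def update_X(B):
    """Closed-form update of the relaxed variable X (full-rank sketch).

    Solves max tr(B^T X) s.t. X 1_m = 0 and X X^T = m I_f via the SVD
    of the row-centred matrix B J.  Assumes B J has full row rank f;
    the rank-deficient case additionally needs the Gram-Schmidt
    completion of [Q_b 1] described in the text.
    """
    f, m = B.shape
    J = np.eye(m) - np.ones((m, m)) / m       # centring matrix, J 1 = 0
    P, s, Qt = np.linalg.svd(B @ J, full_matrices=False)
    assert np.all(s > 1e-10), "rank-deficient case not handled in this sketch"
    return np.sqrt(m) * P @ Qt                # X = sqrt(m) * P_b Q_b^T
```

Because the right singular vectors of BJ are orthogonal to the all-ones vector, X1_m = 0 holds automatically, and orthonormality of P and the rows of Qt gives XX^T = mI_f.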
S3-6: fixing B, D, W, V, X, updating Y, the objective function of the hybrid model is equivalent to the following optimization problem:
The specific update rule for Y is as follows:
where P_d and Q_d respectively denote the left and right singular matrices obtained through SVD (Singular Value Decomposition); P̂_d denotes the matrix of left singular vectors corresponding to zero singular values; further, Q̂_d is obtained by Gram-Schmidt orthogonalization of [Q_d 1].
S3-7: steps S3-2 to S3-6 are repeated until the convergence condition is met, for example the objective function value falls below a preset threshold or no bit of B and D changes any more; the training process then stops, and finally the binarized user feature matrix B and item feature matrix D are output.
Experiments were carried out on the FilmTrust, CiaoDVD and Epinions datasets, comparing the proposed method (Discrete Matrix factorization with network Embedding, DME for short) with two mainstream discrete recommendation methods: the currently best-performing discrete socialized recommendation method (Discrete Social Recommendation, DSR for short), published at the artificial intelligence top-level conference AAAI 2019, and the classical discrete collaborative filtering method (Discrete Collaborative Filtering, DCF for short), published at the information retrieval top-level conference SIGIR 2016. In addition, the proposed method was compared with its real-valued version (Matrix factorization with network Embedding, ME for short) in terms of recommendation performance, computation cost and storage cost.
The FilmTrust dataset originates from a movie rating website; the rows of the UI matrix represent viewers, the columns represent movies, and the rating range is 0.5 to 4.0. User social relations (follower vs. followee) serve as auxiliary information. The dataset includes 1,508 users, 2,071 items, 35,497 scoring records and 1,853 friend connections; the density of the "user-item" interaction data is 1.14%, and the density of the "user-user" social data is 0.42%.
The CiaoDVD dataset originates from a video review website; the rows of the UI matrix represent reviewers, the columns represent videos, and the rating range is 1.0 to 5.0. User social relations (truster vs. trustee) serve as auxiliary information. The dataset includes 17,615 users, 16,121 items, 72,665 scoring records and 40,133 friend connections; the density of the "user-item" interaction data is 0.03%, and the density of the "user-user" social data is 0.65%.
The Epinions dataset originates from an online commodity review website; the rows of the UI matrix represent reviewers, the columns represent commodities, and the rating range is 1.0 to 5.0. User social relations (truster vs. trustee) serve as auxiliary information. The dataset includes 40,163 users, 139,738 items, 664,824 scoring records and 487,183 friend connections; the density of the "user-item" interaction data is 0.01%, and the density of the "user-user" social data is 0.03%.
Fig. 3 shows the comparative results of the DME, DSR and DCF discrete recommendation methods on the FilmTrust, CiaoDVD and Epinions datasets, with Normalized Discounted Cumulative Gain (NDCG) as the evaluation index, where * marks the optimal value. Fig. 4 compares the lightweight method DME with its real-valued version ME in terms of model performance, memory and time consumption, where ↓ and ↑ denote the percentage of performance decrease or increase, and × denotes the multiple by which storage or time overhead improves. The experimental results show that, compared with the current mainstream discrete recommendation methods, the proposed method improves performance to a considerable extent (the higher the NDCG, the better); compared with its real-valued version, it greatly reduces computation and storage overhead while achieving comparable recommendation performance.
In summary, the hash-learning-based lightweight socialized recommendation method of the embodiment of the invention can learn binarized features of users and items from scoring data and social data simultaneously, and then recommend items to users rapidly and effectively by means of logical operations. On the premise of guaranteeing recommendation accuracy, the method greatly reduces the online computation and storage cost of the model.
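The "logical operations" of step S4 amount to ranking items by the binary inner product b_u^T d_i, which with ±1 codes equals f − 2·Hamming(b_u, d_i), so the ordering can be computed with XOR and popcount on bit-packed codes. A dense-matrix sketch (the function name is hypothetical):

```python
import numpy as np

def recommend(B, D, rated, u, top_n=3):
    """Rank user u's unscored items by binary-code inner product (S4).

    B: f x m matrix of +-1 user codes; D: f x n matrix of +-1 item
    codes; rated: dict user -> set of already-scored item indices.
    With +-1 codes, b_u^T d_i = f - 2 * Hamming(b_u, d_i), so this
    ordering equals a Hamming-distance ordering, which production
    systems compute with XOR + popcount on bit-packed codes.
    """
    scores = B[:, u] @ D                       # inner products, shape (n,)
    ranked = [int(i) for i in np.argsort(-scores)]   # stable descending sort
    return [i for i in ranked if i not in rated.get(u, set())][:top_n]
```

Already-scored items are filtered out, and the one or more unscored items with the highest estimated scores are returned.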
The hash-learning-based lightweight socialized recommendation method of the embodiment of the invention seamlessly integrates a discrete matrix decomposition model and a continuous network embedding model. Thanks to the network embedding model, higher-order neighbor relations among users can be processed, and the resulting hidden user features have stronger characterization capability; by adopting a "discrete-continuous" hybrid modeling approach within a single optimization framework, the discrete preference learning task and the continuous social representation learning task promote each other, and the resulting binarized user and item features have lower coding loss.
Those of ordinary skill in the art will appreciate that: the drawing is a schematic diagram of one embodiment and the modules or flows in the drawing are not necessarily required to practice the invention.
From the above description of embodiments, it will be apparent to those skilled in the art that the present invention may be implemented in software plus a necessary general hardware platform. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the embodiments or some parts of the embodiments of the present invention.
In this specification, the embodiments are described in a progressive manner; identical and similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the others. In particular, the apparatus and system embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments. The apparatus and system embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Those of ordinary skill in the art can understand and implement the invention without creative effort.
The present invention is not limited to the above-mentioned embodiments, and any changes or substitutions that can be easily understood by those skilled in the art within the technical scope of the present invention are intended to be included in the scope of the present invention. Therefore, the protection scope of the present invention should be subject to the protection scope of the claims.

Claims (7)

1. A lightweight socialization recommendation method based on hash learning is characterized by comprising the following steps:
s1, constructing a user-article scoring matrix for recording scoring behaviors of a user on articles, and normalizing scoring data in the user-article scoring matrix;
s2, constructing a user-user social network for recording the connection relation among users, and generating a social corpus by applying truncated random walk and negative sampling to the user-user social network;
s3, training a discrete matrix decomposition and continuous network embedding mixed model according to the user-object scoring matrix and the social corpus to obtain a binarized user feature matrix and an object feature matrix;
s4, estimating the preference scores of the users for the unscored items according to the user characteristic matrix and the item characteristic matrix, and recommending one or more unscored items with the highest estimated scores to the users;
The training of the discrete matrix decomposition and continuous network embedding mixed model according to the user-object scoring matrix and the social corpus to obtain a binarized user feature matrix and object feature matrix comprises the following steps:
the objective function of the discrete matrix decomposition and continuous network embedded hybrid model is defined as follows:
wherein the first and second terms respectively denote the loss functions of the discrete matrix decomposition model and the continuous network embedding model, and the third is a smoothing term between the discrete matrix decomposition model and the continuous network embedding model, used to connect the two models;
the loss function of the discrete matrix decomposition model is defined as follows:
s.t.B∈{±1} f×m ,D∈{±1} f×n
B1 m =0,D1 n =0,BB T =mI f ,DD T =nI f
where Ω is the set of (u, i) index pairs corresponding to observed scores, b_u is the u-th column of matrix B, and d_i is the i-th column of matrix D; they are the binarized feature vectors of user u and item i, respectively; among the constraints, B1_m = 0 and D1_n = 0 control feature-code balance, and BB^T = mI_f and DD^T = nI_f control feature-code independence; matrix B denotes the binarized user feature matrix, and matrix D denotes the binarized item feature matrix;
the continuous network embedding model is a neural network comprising one hidden layer; let W denote the connection weight matrix between the input layer and the hidden layer of the neural network, and V the connection weight matrix between the hidden layer and the output layer; each user u corresponds to two socialization features w_u and v_u, taken from the u-th column of matrix W and the u-th row of matrix V respectively; w_u is called the input vector and v_u the output vector;
the loss function of the continuous network embedding model is defined as follows:
where the two mean vectors denote the averages of the output vectors of all positive and negative samples of user u respectively, and σ(z) = 1/(1 + e^(-z)) converts an input variable into a probability output; λ_w and λ_v are hyper-parameters used to adjust the weight of the regularization terms in the loss function;
the smoothing term between the matrix decomposition model and the continuous network embedding model is defined as follows:
after merging the terms, the objective functions of the discrete matrix decomposition and continuous network embedded hybrid model are expressed as follows:
s.t.B∈{±1} f×m ,D∈{±1} f×n
B1 m =0,D1 n =0,BB T =mI f ,DD T =nI f
wherein α, β > 0 are hyper-parameters used to adjust the weight of each term in the objective function; two continuous variables X and Y are defined, and the balance constraint and the decorrelation constraint are relaxed onto X and Y; since the norms of B, D, X and Y are constant, tr(B^T X) and tr(D^T Y) replace the corresponding distance terms, and the objective function of the discrete matrix decomposition and continuous network embedding hybrid model is equivalently transformed into the following optimization problem:
s.t.B∈{±1} f×m ,D∈{±1} f×n
X1 m =0,Y1 n =0,XX T =mI f ,YY T =nI f
wherein λ_B, λ_D > 0 are hyper-parameters used to regulate the degree of relaxation of the target variables;
the training process of the discrete matrix decomposition and continuous network embedded mixed model comprises the following steps: initializing model parameters B, D, W, V, X and Y, and entering an iterative training process: fixing D, W, V, X, Y, optimizing each b in parallel u For each b using DCD algorithm u Carrying out bit-by-bit updating to obtain updated B; fixing B, W, V, X, Y, optimizing each d in parallel i For each d using DCD algorithm i Carrying out bit-by-bit updating to obtain updated D; b, D, X and Y are fixed, W and V are updated by utilizing an SGD algorithm, wherein a BP algorithm is adopted in gradient calculation; b, D, W, V and Y are fixed, and X is updated by means of SVD algorithm; b, D, W, V and X are fixed, and Y is updated by means of SVD algorithm;repeating the steps, continuously and alternately updating the parameters B, D, W, V, X and Y until convergence conditions are met, stopping the training process, and finally outputting the binarized user characteristic matrix B and the object characteristic matrix D.
2. The method of claim 1, wherein said constructing a user-item scoring matrix for recording user scoring actions on items and normalizing scoring data in said user-item scoring matrix comprises:
constructing a user-item scoring matrix R ∈ [0, 1]^(m×n), where m and n respectively denote the number of users and the number of items; the scoring data in the user-item scoring matrix R records users' scoring behaviors on items; the scoring data is normalized and quantized into decimals, where the closer a value is to 1, the more the user likes the item, and 0 denotes no score.
3. The method of claim 1, wherein the constructing a user-user social network for recording connection relationships between users, generating social corpus by applying truncated random walk and negative sampling to the user-user social network, comprises:
constructing a user-user social network, wherein the social data in the user-user social network records the connection relations between users: if two users are friends, the social data is marked as 1, and otherwise the social data is marked as 0; truncated random walk and negative sampling are applied to the user-user social network to generate a social corpus, in which C_u and N_u respectively denote the contextual user set and the negative sample set of user u.
4. The method of claim 3, wherein the generation step of the social corpus comprises:
s2-1: generating a contextual user set for each user: a truncated random walk is run on the user-user social network to obtain a node sequence for each user, and a sliding window is used to extract each user's contextual user set from that node sequence; when the sliding window stops at a position in the node sequence, the user at the middle position is called the central user u, and the users at the other positions in the window are called contextual users and form the set C_u; during the random walk, the probability of user u jumping to user v is defined as follows:
where co(u, v) denotes the number of co-scoring actions of user u and user v, d+(u) denotes the out-degree of user u, and F(u) denotes the set of friends of user u;
let the length of the truncated random walk be L; for user u, the probability of transitioning from user u to each of its friends is computed according to the probability transition formula, and the friend v with the highest probability is selected as the next-hop node; this next-hop node likewise computes its own transition probabilities and selects its highest-probability friend as its next hop, and so on, until a node sequence of length L is generated, which is taken as the basis of the contextual user set C_u of user u;
s2-2: generating a negative sample set for each user: for any user u, a negative sample set N_u is obtained according to the frequency of occurrence of non-contextual users in the social corpus and their activity in the scoring data; for a given non-contextual user v, the probability of being selected as a negative sample for user u is defined as follows:
wherein f(v) denotes the frequency of occurrence of user v in the social corpus, r(v) denotes the number of items rated by user v in the scoring data, U denotes the whole user set, and the hyper-parameter a is an empirical value;
s2-3: generating the social corpus from the user's contextual user set C_u and negative sample set N_u.
5. The method of claim 1, wherein the training process of the discrete matrix decomposition and continuous network embedded hybrid model specifically comprises:
s3-1: model initialization, the optimization problem is relaxed into real value space, and the SGD algorithm is used for alternately optimizing each parameter to obtain an optimal solution (P) under continuous space * ,Q * ,W * ,V * ) Initializing a discrete model according to the following rules:
B=sgn(P * ),D=sgn(Q * ),
W=W * ,V=V * ,X=P * ,Y=Q *
s3-2: fixing D, W, V, X, Y and updating B; the objective function of the discrete matrix decomposition and continuous network embedding hybrid model is equivalent to the following optimization problem:
wherein Ω_u denotes the set of observed score indices associated with user u, i.e. the items i such that (u, i) ∈ Ω;
the DCD algorithm is adopted to update b_u bit by bit; b_uk and d_ik denote the k-th bits of b_u and d_i respectively, and b_u\k and d_i\k denote the vectors formed by the remaining hash bits after removing b_uk and d_ik; the specific update rule for b_uk is as follows:
where K(a, b) = a when a ≠ 0, and K(a, b) = b otherwise; if the update quantity is zero, b_uk is not updated;
s3-3: b, W, V, X, Y are fixed, and D is updated; similar to update B, the objective function of the joint model is equivalent to the following optimization problem:
wherein Ω_i denotes the set of observed score indices associated with item i, i.e. the users u such that (u, i) ∈ Ω; the DCD algorithm is used to update d_i bit by bit; the specific update rule for d_ik is as follows:
likewise, d_ik is updated only when the update quantity is nonzero; otherwise d_ik is left unchanged;
s3-4: fixing B, D, X, Y, updating W, V, the objective function of the hybrid model is equivalent to the following optimization problem:
updating W and V by adopting an SGD algorithm, wherein gradient calculation is realized by means of a BP algorithm;
s3-5: fixing B, D, W, V, Y, updating X, the objective function of the hybrid model is equivalent to the following optimization problem:
s.t.X1 m =0,XX T =mI f
the specific update rule for X is as follows:
wherein P_b and Q_b respectively denote the left and right singular matrices obtained through SVD (Singular Value Decomposition); P̂_b denotes the matrix of left singular vectors corresponding to zero singular values; further, Q̂_b is obtained by Gram-Schmidt orthogonalization of [Q_b 1];
S3-6: fixing B, D, W, V, X, updating Y, the objective function of the hybrid model is equivalent to the following optimization problem:
s.t.Y1 n =0,YY T =nI f
the specific update rule for Y is as follows:
wherein P_d and Q_d respectively denote the left and right singular matrices obtained through SVD; P̂_d denotes the matrix of left singular vectors corresponding to zero singular values; further, Q̂_d is obtained by Gram-Schmidt orthogonalization of [Q_d 1];
S3-7: and repeating the steps S3-2 to S3-6 until convergence conditions are met, stopping the training process, and finally outputting the binarized user characteristic matrix B and the object characteristic matrix D.
6. The method of claim 1, wherein the convergence condition comprises: the objective function value is smaller than a certain preset threshold value; alternatively, each bit in matrices B and D is no longer changed.
7. The method of claim 1, wherein estimating the user's preference score for the unobserved items based on the binarized user feature matrix and the item feature matrix and recommending one or more unobserved items with highest estimated scores to the user comprises:
reconstructing a scoring matrix from the binarized user feature matrix B and the item feature matrix D, and sorting the reconstructed scores in each row of the reconstructed scoring matrix in descending order, wherein the reconstructed scores represent the estimated scores of the user's preference degree for the items; one or more unscored items with the highest estimated scores are recommended to the user.
CN201911165736.3A 2019-11-25 2019-11-25 Lightweight socialization recommendation method based on Hash learning Active CN111104604B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911165736.3A CN111104604B (en) 2019-11-25 2019-11-25 Lightweight socialization recommendation method based on Hash learning

Publications (2)

Publication Number Publication Date
CN111104604A CN111104604A (en) 2020-05-05
CN111104604B true CN111104604B (en) 2023-07-21

Family

ID=70421219

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911165736.3A Active CN111104604B (en) 2019-11-25 2019-11-25 Lightweight socialization recommendation method based on Hash learning

Country Status (1)

Country Link
CN (1) CN111104604B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11836159B2 (en) 2019-10-11 2023-12-05 Visa International Service Association System, method, and computer program product for analyzing a relational database using embedding learning
CN113377973B (en) * 2021-06-10 2022-06-14 电子科技大学 Article recommendation method based on countermeasures hash
CN113627598B (en) * 2021-08-16 2022-06-07 重庆大学 Twin self-encoder neural network algorithm and system for accelerating recommendation
CN113887719B (en) * 2021-09-13 2023-04-28 北京三快在线科技有限公司 Model compression method and device
CN114564742A (en) * 2022-02-18 2022-05-31 北京交通大学 Lightweight federated recommendation method based on Hash learning
CN116401458B (en) * 2023-04-17 2024-01-09 南京工业大学 Recommendation method based on Lorenz chaos self-adaption

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6321221B1 (en) * 1998-07-17 2001-11-20 Net Perceptions, Inc. System, method and article of manufacture for increasing the user value of recommendations
CN107122411A (en) * 2017-03-29 2017-09-01 浙江大学 A kind of collaborative filtering recommending method based on discrete multi views Hash
CN110321494A (en) * 2019-06-26 2019-10-11 北京交通大学 Socialization recommended method based on matrix decomposition Yu internet startup disk conjunctive model

Also Published As

Publication number Publication date
CN111104604A (en) 2020-05-05

Similar Documents

Publication Publication Date Title
CN111104604B (en) Lightweight socialization recommendation method based on Hash learning
CN110929164B (en) Point-of-interest recommendation method based on user dynamic preference and attention mechanism
CN110321494B (en) Socialized recommendation method based on matrix decomposition and network embedding combined model
Huang et al. LoAdaBoost: Loss-based AdaBoost federated machine learning with reduced computational complexity on IID and non-IID intensive care data
Wang et al. A deep convolutional neural network for topology optimization with perceptible generalization ability
Chen et al. Deep reinforcement learning in recommender systems: A survey and new perspectives
CN109635204A (en) Online recommender system based on collaborative filtering and long short-term memory networks
CN112215604B (en) Method and device for identifying transaction mutual-party relationship information
CN111079409B (en) Sentiment classification method utilizing context and aspect memory information
Liu et al. Dynamic knowledge graph reasoning based on deep reinforcement learning
Yang et al. A surrogate-assisted particle swarm optimization algorithm based on efficient global optimization for expensive black-box problems
Cho et al. Adversarial tableqa: Attention supervision for question answering on tables
Wu et al. Estimating fund-raising performance for start-up projects from a market graph perspective
Yousefi et al. A robust hybrid artificial neural network double frontier data envelopment analysis approach for assessing sustainability of power plants under uncertainty
Huang et al. On the improvement of reinforcement active learning with the involvement of cross entropy to address one-shot learning problem
Chuang et al. TPR: Text-aware preference ranking for recommender systems
Kang et al. Multitype drug interaction prediction based on the deep fusion of drug features and topological relationships
Wang et al. Session-based recommendation with time-aware neural attention network
CN114298783A (en) Commodity recommendation method and system based on matrix decomposition and fusion of user social information
Yang et al. Time-aware dynamic graph embedding for asynchronous structural evolution
Zhang et al. MIRN: A multi-interest retrieval network with sequence-to-interest EM routing
Luan et al. LRP‐based network pruning and policy distillation of robust and non‐robust DRL agents for embedded systems
Liu et al. Job and employee embeddings: A joint deep learning approach
Liang et al. A normalizing flow-based co-embedding model for attributed networks
CN115344794A (en) Scenic spot recommendation method based on knowledge graph semantic embedding

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant