CN115964568A - Personalized recommendation method based on edge cache - Google Patents
- Publication number
- CN115964568A (application number CN202310097744.9A)
- Authority
- CN
- China
- Prior art keywords
- service
- user
- group
- vector
- layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention discloses a personalized recommendation method based on edge caching. The method performs personalized recommendation by mining users' social relationships, and uses edge caching to reduce transmission delay and cost, thereby meeting users' demand for low-delay, high-accuracy services. The method comprises the following steps: constructing user embedding vectors and service embedding vectors; compressing the user embedding vectors and dividing the users into groups; constructing group embedding vectors and obtaining each group's preference feature vector with a gated recurrent unit; outputting service caching probabilities through a fully connected layer and a sigmoid function to realize edge caching; obtaining users' social-relationship features; and compressing the service embedding vectors with a multilayer perceptron network, feeding the compressed service embedding vectors and the users' social-relationship features into a fully connected layer, outputting each service's recommendation probability through a sigmoid function, and realizing personalized recommendation according to a Top-k strategy.
Description
Technical Field
The invention relates to the fields of edge computing and recommender systems, and in particular to a personalized recommendation method based on edge caching.
Background
With the rapid development of emerging technologies such as 5G, cloud computing, edge computing and big data, the number and variety of services are growing explosively. Faced with so many services, however, users find it difficult to locate services that match their own preferences. A recommender system is an effective way to relieve this information overload: it extracts user preferences from information such as user profiles and historical behavior records, and recommends items or services the user may be interested in.
Most traditional recommender systems adopt a two-tier cloud-client architecture: user data are uploaded from the client to a cloud server, model building and computation are completed in the cloud, and the recommendation results are returned to the user. However, as the service scale expands and user logs grow, the cloud server load increases further, making it difficult to meet users' demand for low-delay, high-accuracy services. In addition, user and service information is often sparse, which leads to high-dimensional, sparse user and service feature vectors and makes good recommendation quality hard to obtain.
Disclosure of Invention
In view of the foregoing problems, the present invention aims to provide a personalized recommendation method based on edge caching. By adjusting the recommendation application architecture, the method deploys a service matching model in the cloud, divides users into different groups, and, by mining group preferences, caches the services that meet the needs of the majority of users in an edge server, reducing the delay and cost of service transmission. Meanwhile, user behavior is captured at the edge, the users' social relationships are modeled with a hypergraph, and neighborhood-level features among users are computed with a convolution network, realizing personalized recommendation and further improving recommendation accuracy.
Technical scheme: in order to solve the above technical problems, the invention adopts the following technical scheme:
in a first aspect, the present invention provides a personalized recommendation method based on edge caching, including:
step 1, constructing a user embedded vector and a service embedded vector according to a user historical service call record;
step 2, based on the user embedded vector, predicting by using a service cache probability prediction model to obtain a service cache probability prediction value:
201. compressing the user-embedded vector using a stacked noise reduction encoder;
202. based on the compressed user embedded vector, realizing group division through a DBSCAN clustering algorithm to obtain a group division result;
203. according to the group division result, fusing the service call records of users in the same group within the same time period and splicing the records of different time periods to construct the group's embedding vector; compressing the group embedding vector with a multilayer perceptron network and modeling the dynamic change of the group preference features with a gated recurrent unit to obtain the group's preference feature vector; outputting the service caching probability through a fully connected layer and a sigmoid function;
step 3, utilizing a Top-k strategy to realize edge caching according to the service caching probability predicted value;
and 4, predicting by using a service recommendation probability prediction model based on the service embedding vector and the obtained social relationship characteristics of the user to obtain a service recommendation probability prediction value:
401. modeling the social relationships among users with a hypergraph, constructing a hyperedge for each user, wherein each hyperedge comprises all neighbor user nodes of that user, and mining high-order connectivity information in the hypergraph with a graph convolution network, so as to obtain the users' social-relationship features;
402. compressing the service embedded vector by using a multilayer perceptron network, splicing the compressed service embedded vector and the social relation characteristics of the user, inputting the service embedded vector into a full connection layer, and outputting the recommended probability of the service through a sigmoid function;
step 5, using a Top-k strategy, sorting the service recommendation probability predictions in descending order and recommending the first k services to the user;
and 6, if the recommended service is cached in the edge server, the edge server provides the service for the user, otherwise, the cloud server provides the service for the user.
In some embodiments, in step 1, a user embedding vector u_i is constructed from the user's historical service call records, where the j-th component of u_i is the number of times user i has interacted with service j;
a service embedding vector s_j is likewise constructed from the historical service call records, where the i-th component of s_j is the number of times service j has interacted with user i.
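As a concrete illustration of step 1, the construction of these interaction-count vectors can be sketched as follows (a minimal sketch: the function and variable names are illustrative and not from the patent):

```python
from collections import Counter

def build_embeddings(call_records, n_users, n_services):
    """Build user/service embedding vectors as interaction-count matrices.

    call_records: iterable of (user_id, service_id) pairs (0-indexed).
    Returns (U, S): U[i][j] is the number of times user i called service j,
    and S is the transposed view (service j's interactions with each user).
    """
    counts = Counter(call_records)
    U = [[counts.get((i, j), 0) for j in range(n_services)]
         for i in range(n_users)]
    S = [[U[i][j] for i in range(n_users)] for j in range(n_services)]
    return U, S

records = [(0, 1), (0, 1), (1, 0), (2, 2)]
U, S = build_embeddings(records, n_users=3, n_services=3)
# U[0][1] == 2: user 0 called service 1 twice; S is the transpose of U.
```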
In some embodiments, step 2 specifically includes:
201. compressing the user embedding vector with a stacked denoising autoencoder comprises:
the sparse user embedding vector u_i is compressed from a high-dimensional space into a low-dimensional hidden space with a stacked denoising autoencoder; for each layer of the stacked denoising autoencoder, the input h_{l-1} and output h_l satisfy:
h_l = f(W_l h_{l-1} + b_l), where h_0 = u_i
where l ∈ {1, 2, …, L}, and W_l and b_l are the parameters to be learned in the l-th layer; the first L/2 layers of the stacked denoising autoencoder form the encoder and the last L/2 layers form the decoder, and the objective function is defined as:
argmin ||u_i − h_L||^2
the stacked denoising autoencoder is trained by back propagation, and the output feature x_u of its L/2-th layer is used as the clustering sample;
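The layer-wise compression h_l = f(W_l h_{l-1} + b_l) and the reconstruction objective can be sketched as a plain forward pass (training by back propagation and the denoising corruption step are omitted; the layer sizes and all names are illustrative assumptions):

```python
import numpy as np

def sdae_forward(u, weights, biases, f=np.tanh):
    """Forward pass of an L-layer stacked autoencoder:
    h_l = f(W_l h_{l-1} + b_l), with h_0 = u.
    Returns the mid-layer code (output of layer L/2) and the reconstruction h_L."""
    h = u
    hidden = []
    for W, b in zip(weights, biases):
        h = f(W @ h + b)
        hidden.append(h)
    code = hidden[len(weights) // 2 - 1]  # output of layer L/2
    return code, h

rng = np.random.default_rng(0)
dims = [8, 4, 2, 4, 8]  # encoder 8->4->2, decoder 2->4->8, so L = 4
weights = [rng.normal(0, 0.1, (dims[k + 1], dims[k])) for k in range(4)]
biases = [np.zeros(dims[k + 1]) for k in range(4)]
u = rng.random(8)
code, recon = sdae_forward(u, weights, biases)
loss = np.sum((u - recon) ** 2)  # the objective ||u_i - h_L||^2
```

In a real system the weights would be learned by minimizing this loss over all users, and `code` would be the clustering sample x_u.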
202. based on the compressed user embedding vectors, group division via the DBSCAN clustering algorithm comprises the following steps:
based on the compressed user embedding vectors, the Euclidean distance d_ij between each pair of sample points is calculated; when d_ij is not greater than the neighborhood distance threshold r, point j is considered to be contained in the neighborhood of point i;
if and only if the number of sample points contained in the neighborhood of point i is greater than or equal to the in-neighborhood sample threshold M, a new class with point i as a core point is created;
points that are directly density-reachable or density-reachable from a core point are repeatedly searched for and added to the corresponding class, and classes whose core points are density-reachable from one another are merged into the same class, until no new point can be added to any existing class; the group division of the users is then complete.
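The neighborhood and core-point rules above amount to classic DBSCAN; a minimal, self-contained sketch follows (illustrative names; a production system would use an optimized library implementation):

```python
import math

def dbscan(points, r, M):
    """Minimal DBSCAN. labels[i] = cluster id (0, 1, ...) or -1 for noise.
    r: neighborhood distance threshold; M: minimum samples (incl. the point)."""
    n = len(points)

    def neighbors(i):
        return [j for j in range(n) if math.dist(points[i], points[j]) <= r]

    labels = [None] * n
    cid = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < M:
            labels[i] = -1            # noise (may later become a border point)
            continue
        labels[i] = cid               # new class with i as a core point
        queue = [j for j in nbrs if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cid       # border point: density-reachable, not core
            if labels[j] is not None:
                continue
            labels[j] = cid
            jn = neighbors(j)
            if len(jn) >= M:          # j is also a core point: keep expanding
                queue.extend(jn)
        cid += 1
    return labels

pts = [(0, 0), (0.3, 0), (0.2, 0.2), (5, 5), (5.2, 5.1), (5.1, 4.9), (10, 10)]
labels = dbscan(pts, r=0.5, M=2)
# -> two clusters of three points each, plus one noise point
```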
Step 203 comprises:
the users are divided into m groups according to the group division result; the behaviors of users in the same group within the same time period are fused, and the service call records of different time periods are spliced, to obtain the embedding vector of group i, whose component for service j at time period a records the number of interactions between the users in group i and service j during period a;
the embedding vector of group i is fed into a multilayer perceptron network and compressed from a high-dimensional space into a low-dimensional hidden space; the input h^(l-1) and output h^(l) of each layer satisfy:
h^(l) = f(W^(l) h^(l-1) + b^(l))
where l ∈ {1, 2, …, L} and W^(l), b^(l) are the parameters to be learned in the l-th layer; after L layers, a low-dimensional embedded representation is obtained and used as the subsequent input;
the output of the multilayer perceptron network is fed into a gated recurrent unit, whose processing can be written as:
z_t = σ(W_z [h_{t−1}, x_t] + b_z)
r_t = σ(W_r [h_{t−1}, x_t] + b_r)
h̃_t = tanh(W_h [r_t ⊙ h_{t−1}, x_t] + b_h)
h_t = (1 − z_t) ⊙ h_{t−1} + z_t ⊙ h̃_t
where t ∈ {1, 2, …, T}; z_t is the update gate, which controls how much historical information is kept in the current state; r_t is the reset gate, which controls whether the current candidate state h̃_t depends on the state h_{t−1} of the previous moment; σ is the sigmoid activation function; W_z, W_r and W_h are parameter matrices; b_z, b_r and b_h are biases; x_t is the input at time t; and h_t is the state of the t-th hidden node. The dynamic change of the group preference features is modeled by this gated recurrent network, yielding the group preference feature vector h_T.
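One GRU step matching this description can be sketched as follows, assuming the common formulation in which each gate applies a single parameter matrix to the concatenation [h_{t-1}, x_t] (an assumption, since the patent does not spell out the exact gate equations; all names and sizes are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, P):
    """One GRU step: z is the update gate (how much history to keep),
    r is the reset gate (whether the candidate depends on h_{t-1})."""
    hx = np.concatenate([h_prev, x_t])
    z = sigmoid(P["Wz"] @ hx + P["bz"])                                # update gate
    r = sigmoid(P["Wr"] @ hx + P["br"])                                # reset gate
    cand = np.tanh(P["Wh"] @ np.concatenate([r * h_prev, x_t]) + P["bh"])
    return (1 - z) * h_prev + z * cand

rng = np.random.default_rng(1)
d_in, d_h = 4, 3
P = {k: rng.normal(0, 0.1, (d_h, d_h + d_in)) for k in ("Wz", "Wr", "Wh")}
P.update({k: np.zeros(d_h) for k in ("bz", "br", "bh")})

h = np.zeros(d_h)
for x in rng.random((5, d_in)):   # run T = 5 time steps
    h = gru_step(x, h, P)         # final h plays the role of the group
                                  # preference feature vector
```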
Based on the group preference feature vector h_T, the service caching probability is output with a fully connected layer and a sigmoid function:
ŷ_e = σ(W_e h_T + b_e)
where W_e and b_e are the parameters of the fully connected layer.
In some embodiments, a binary cross-entropy loss function is selected as the loss function L_e for training the service caching probability prediction model:
L_e = −(1/N) Σ_{i=1}^{N} [ y_{e,i} log ŷ_{e,i} + (1 − y_{e,i}) log(1 − ŷ_{e,i}) ]
where N denotes the number of services, y_{e,i} denotes the actual value and ŷ_{e,i} denotes the predicted value; the model parameters are trained through back propagation.
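The binary cross-entropy loss averaged over N services can be computed as follows (a sketch; the `eps` guard against log(0) is an implementation detail, not from the patent):

```python
import math

def bce_loss(y_true, y_pred, eps=1e-12):
    """Binary cross-entropy averaged over N items:
    L = -(1/N) * sum(y*log(p) + (1-y)*log(1-p))."""
    n = len(y_true)
    return -sum(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
                for y, p in zip(y_true, y_pred)) / n

loss = bce_loss([1, 0, 1, 0], [0.9, 0.1, 0.8, 0.2])
# loss = -(ln 0.9 + ln 0.9 + ln 0.8 + ln 0.8) / 4 ≈ 0.1643
```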
In some embodiments, step 3, implementing edge caching by using Top-k policy according to the service caching probability prediction value includes:
Top-k matching is performed for each group, and the results are fused according to the group sizes; services that meet the preferences of the majority of users are deployed in the edge server, realizing edge caching based on group preference awareness and service representation learning.
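A possible reading of the per-group Top-k matching and size-based fusion is sketched below (the size-weighted sum is an assumption: the patent only says results are fused "according to the group scale", without giving the fusion formula):

```python
import heapq

def edge_cache_plan(group_probs, group_sizes, k):
    """Top-k edge caching with group-size-weighted fusion.

    group_probs: {group_id: {service_id: predicted cache probability}}
    group_sizes: {group_id: number of users in the group}
    Fuses each group's Top-k scores, weighted by group size, and returns
    the k services to deploy on the edge server."""
    total = sum(group_sizes.values())
    fused = {}
    for g, probs in group_probs.items():
        w = group_sizes[g] / total
        top = heapq.nlargest(k, probs.items(), key=lambda kv: kv[1])
        for s, p in top:
            fused[s] = fused.get(s, 0.0) + w * p
    best = heapq.nlargest(k, fused.items(), key=lambda kv: kv[1])
    return [s for s, _ in best]

plan = edge_cache_plan(
    {"g1": {"s1": 0.9, "s2": 0.4, "s3": 0.1},
     "g2": {"s1": 0.2, "s2": 0.8, "s3": 0.7}},
    {"g1": 30, "g2": 10}, k=2)
# the larger group g1 dominates, so s1 is cached first, then s2
```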
In some embodiments, step 401, modeling social relationships among users using a hypergraph, constructing a hyper-edge for each user, where each hyper-edge contains all neighboring user nodes of the user, and mining high-order connected information in the hypergraph using a graph convolution network, so as to obtain user social relationship characteristics, includes:
the hypergraph is formalized as G = (V, E): the node set V represents user features, and the hyperedge set E represents the users' social relationships; for a target user, the set of all of its neighbor user nodes forms the hyperedge for that user. The hypergraph is represented by an incidence matrix H_G ∈ R^{|V|×|E|}, where each element h(v, e) of the matrix is:
h(v, e) = 1 if vertex v belongs to hyperedge e, and h(v, e) = 0 otherwise.
high-order connectivity information in the hypergraph is mined with the graph convolution network:
X^(l) = σ( D^{−1} H_G H_G^T X^(l−1) Θ^(l−1) )
where l ∈ {1, 2, …, L}, Θ^(l−1) denotes the parameters to be learned in the (l−1)-th layer, and D is the degree matrix whose diagonal entries are the vertex degrees in H_G; multiplication by H_G^T aggregates vertex features into hyperedge features, and multiplication by H_G aggregates hyperedge features back into vertex features. After L layers, the user social-relationship feature x_u^G is obtained.
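One hypergraph-convolution layer in this spirit, with vertex-to-hyperedge aggregation via H^T and hyperedge-to-vertex aggregation via H, can be sketched as follows (the simple vertex-degree normalization is an assumption; practical variants also normalize by hyperedge degree):

```python
import numpy as np

def hypergraph_conv(H, X, Theta, sigma=np.tanh):
    """One hypergraph convolution layer.
    H^T @ X aggregates vertex features into hyperedge features;
    H @ (...) aggregates them back to vertices; D^-1 normalizes by
    vertex degree; Theta is the learnable projection."""
    deg = np.maximum(H.sum(axis=1), 1)     # vertex degrees (guard empty rows)
    D_inv = np.diag(1.0 / deg)
    return sigma(D_inv @ H @ H.T @ X @ Theta)

# 4 users, 2 hyperedges: e0 = neighbors of user 0 -> {0,1,2}; e1 -> {2,3}
H = np.array([[1, 0],
              [1, 0],
              [1, 1],
              [0, 1]], dtype=float)
rng = np.random.default_rng(2)
X = rng.random((4, 3))             # initial user features
Theta = rng.normal(0, 0.1, (3, 3))
X1 = hypergraph_conv(H, X, Theta)  # one layer of social-relationship features
```

Stacking L such layers gives the final user social-relationship features.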
In some embodiments, step 402, compressing the service embedding vector with a multilayer perceptron network, splicing the compressed service embedding vector with the user social-relationship feature, feeding the result into a fully connected layer and outputting the recommendation probability of the service through a sigmoid function, comprises:
ŷ_s = σ(W_s [x_s ; x_u^G] + b_s)
where ŷ_s is the service recommendation probability prediction value, x_u^G is the user social-relationship feature, x_s is the compressed service embedding vector, W_s is a parameter matrix and b_s is a bias.
In some embodiments, the binary cross-entropy loss function is selected as the loss function L_s for training the service recommendation probability prediction model:
L_s = −(1/N) Σ_{i=1}^{N} [ y_{s,i} log ŷ_{s,i} + (1 − y_{s,i}) log(1 − ŷ_{s,i}) ]
where y_{s,i} denotes the actual value and ŷ_{s,i} denotes the predicted value; the model parameters are trained through back propagation.
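The recommendation head of step 402, which concatenates the compressed service embedding with the user's social-relationship feature before one fully connected layer and a sigmoid, can be sketched as follows (all dimensions and names are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def recommend_probability(x_s, x_social, W_s, b_s):
    """y_s = sigmoid(W_s [x_s ; x_social] + b_s): score one service for one
    user by concatenating the compressed service embedding with the user's
    social-relationship feature and applying a fully connected layer."""
    z = np.concatenate([x_s, x_social])
    return sigmoid(W_s @ z + b_s)

rng = np.random.default_rng(3)
x_s = rng.random(4)                # compressed service embedding
x_u = rng.random(3)                # user social-relationship feature
W_s = rng.normal(0, 0.1, (1, 7))   # 7 = 4 + 3 after concatenation
b_s = np.zeros(1)
p = recommend_probability(x_s, x_u, W_s, b_s)  # probability in (0, 1)
```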
In a second aspect, the present invention provides an edge cache-based personalized recommendation apparatus, including a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate in accordance with the instructions to perform the steps of the method according to the first aspect.
In a third aspect, the invention provides a storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of the first aspect.
Beneficial effects: the invention provides a personalized recommendation method based on edge caching with the following advantages. For edge caching, deploying services at the edge, closer to the user, reduces service response delay and transmission cost. Meanwhile, deep embedded clustering is used to group users and better mine their common preferences, and the services that meet the needs of the majority of users are deployed on edge nodes, further improving the service cache hit rate; the gated recurrent network used can effectively model the dynamic preferences of a group. For personalized recommendation, the nonlinear social relationships among users are modeled with a hypergraph, and the rich information in these social relationships is mined with a graph convolution network, which alleviates the data sparsity problem and further improves recommendation accuracy.
Drawings
FIG. 1 is a flow chart of a method of an embodiment of the present invention;
FIG. 2 is a diagram of a model architecture according to an embodiment of the present invention.
Detailed Description
To illustrate the technical solution of the present invention more clearly, the invention is further described below with reference to the accompanying drawings. The following description covers only some of the embodiments; it will be apparent to those skilled in the art that the technical solution of the invention can be applied to other similar situations without creative effort.
In the description of the present invention, reference to the description of "one embodiment", "some embodiments", "illustrative embodiments", "examples", "specific examples", or "some examples", etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Example 1
In some specific embodiments, the present embodiment provides a personalized recommendation method based on edge caching, as shown in the method flowchart in fig. 1 and the model architecture diagram in fig. 2, specifically including:
step 1, characteristic engineering: and constructing a user and service embedded vector as the input of a subsequent network according to the historical service call record of the user.
In one scenario, the number of users is n = 1200 and the number of services is m = 1920. The user embedding vector u_i is constructed from the user's historical service call records, where its j-th component is the number of times user i has interacted with service j; the service embedding vector s_j is obtained likewise, where its i-th component is the number of times service j has interacted with user i.
Step 2, user grouping: the user embedding vectors are compressed with a stacked denoising autoencoder, and group division is realized with the DBSCAN clustering algorithm.
The sparse user embedding vectors are compressed from a high-dimensional space into a low-dimensional hidden space with a stacked denoising autoencoder; for each layer of the stacked denoising autoencoder, the input and output can be expressed as:
h_l = f(W_l h_{l-1} + b_l), where h_0 = u_i
where l ∈ {1, 2, …, L}, and W_l and b_l are the parameters to be learned in the l-th layer; the first L/2 layers form the encoder and the last L/2 layers the decoder, with the objective function defined as:
argmin ||u_i − h_L||^2
The model is trained by back propagation, and the output feature x_u of the L/2-th layer of the stacked denoising autoencoder is used as the clustering sample.
The users are divided with the DBSCAN clustering algorithm. The Euclidean distance d_ij between each pair of sample points is calculated; when d_ij is not greater than the neighborhood distance threshold of 0.5, point j is considered to be contained in the neighborhood of point i. If and only if the number of sample points contained in the neighborhood of point i is greater than or equal to the in-neighborhood sample threshold of 5, a new class with point i as a core point is created. Points that are directly density-reachable or density-reachable from a core point are repeatedly searched for and added to the corresponding classes, and classes whose core points are density-reachable from one another are merged into the same class. The algorithm ends when no new point can be added to any existing class.
Step 3, edge caching: according to the obtained group division result, the service call records of users in the same group within the same time period are fused and those of different time periods are spliced to construct the group's embedded representation; the embedding vector is compressed with a multilayer perceptron network and fed into a gated recurrent unit to model the dynamic change of the group preference features and obtain the group's preference feature vector. The service caching probability is then output through a fully connected layer and a sigmoid function, and edge caching is realized according to a Top-k strategy.
According to the clustering result, the users are divided into groups; the behaviors of users with similar features within the same time period are fused, and those of different time periods are spliced, to obtain the embedding vector of group i, whose component for service j at time period a records the number of interactions between the users in group i and service j during period a. The vector is fed into a multilayer perceptron network and compressed from a high-dimensional space into a low-dimensional hidden space, with the input and output of each layer given by:
h^(l) = f(W^(l) h^(l-1) + b^(l))
where l ∈ {1, 2, …, L} and W^(l), b^(l) are the parameters to be learned in the l-th layer; after L layers, a low-dimensional embedded representation is obtained and fed into the gated recurrent unit.
The gated recurrent unit processes this sequence as:
z_t = σ(W_z [h_{t−1}, x_t] + b_z)
r_t = σ(W_r [h_{t−1}, x_t] + b_r)
h̃_t = tanh(W_h [r_t ⊙ h_{t−1}, x_t] + b_h)
h_t = (1 − z_t) ⊙ h_{t−1} + z_t ⊙ h̃_t
where t ∈ {1, 2, …, T}; z_t is the update gate, controlling how much historical information is kept in the current state; r_t is the reset gate, controlling whether the current candidate state h̃_t depends on the previous state h_{t−1}; σ is the sigmoid activation function; W_z, W_r and W_h are parameter matrices; b_z, b_r and b_h are biases; x_t is the input at time t; and h_t is the state of the t-th hidden node. The dynamic change of the group preference features is modeled by this gated recurrent network to obtain the group preference feature vector h_T.
Based on the group preference feature vector h_T, the service caching probability is output with a fully connected layer and a sigmoid function:
ŷ_e = σ(W_e h_T + b_e)
The binary cross-entropy loss function is selected as the loss function L_e for training the service cache probability prediction model:

L_e = −(1/N) Σ_{i=1}^N [ y_{e,i} log ŷ_{e,i} + (1 − y_{e,i}) log(1 − ŷ_{e,i}) ]
where N denotes the number of services, y_{e,i} denotes the actual value, and ŷ_{e,i} denotes the predicted value; the model parameters are trained through back propagation.
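The loss can be sketched as follows (a minimal NumPy sketch; the toy labels and predicted probabilities are illustrative):

```python
import numpy as np

def bce_loss(y_true, y_pred, eps=1e-12):
    """Binary cross-entropy averaged over the N services."""
    y_pred = np.clip(y_pred, eps, 1 - eps)  # guard against log(0)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1.0, 0.0, 1.0, 0.0])  # actual cache labels for 4 services
y_pred = np.array([0.9, 0.1, 0.8, 0.2])  # predicted cache probabilities
print(round(bce_loss(y_true, y_pred), 4))  # 0.1643
```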
Top-k matching is performed for each group, the results are fused according to the group sizes, and the services that meet the preferences of the majority are deployed on the edge server, realizing edge caching based on group preference perception and service representation learning.
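The per-group Top-k matching and size-weighted fusion might look like this (a pure-Python sketch; the group sizes, probabilities, and the `match_k`/`cache_k` parameters are illustrative assumptions):

```python
from collections import Counter

def edge_cache_plan(group_probs, group_sizes, cache_k, match_k=3):
    """Top-k match per group, then fuse the matches weighted by group
    size so that majority preferences decide what is cached."""
    votes = Counter()
    for gid, probs in group_probs.items():
        top = sorted(probs, key=probs.get, reverse=True)[:match_k]
        for svc in top:
            votes[svc] += group_sizes[gid]    # weight vote by group size
    return [svc for svc, _ in votes.most_common(cache_k)]

group_probs = {
    "g1": {"s1": 0.9, "s2": 0.7, "s3": 0.2, "s4": 0.6},
    "g2": {"s1": 0.3, "s2": 0.8, "s3": 0.9, "s4": 0.1},
}
plan = edge_cache_plan(group_probs, {"g1": 50, "g2": 10}, cache_k=2)
print(plan)  # ['s1', 's2'] on CPython (ties keep first-seen order)
```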
Step 4, mining the user social relationships: model the social relationships among users with a hypergraph, construct a hyperedge for each user, wherein each hyperedge contains all neighbor user nodes of that user, and mine the high-order connectivity information in the hypergraph with a graph convolution network, thereby obtaining the user social relationship features.
The hypergraph is formalized as G = (V, E), where V is the node set representing user features and E is the hyperedge set representing user social relationships; for a target user, the set of all its neighbor user nodes constitutes the hyperedge for that user. The hypergraph is represented by an adjacency matrix H_G ∈ R^{|V|×|E|}, where each element h(v, e) of the adjacency matrix is 1 if vertex v belongs to hyperedge e, and 0 otherwise.
Mining the high-order connectivity information in the hypergraph using a graph convolution network:

X^(l) = σ(D^(−1) H_G H_G^T X^(l−1) Θ^(l−1))
where l ∈ {1, 2, ..., L}; Θ^(l−1) represents the parameters to be learned at the (l−1)-th layer; D is the degree matrix whose diagonal values are the degrees in H_G. The multiplication by H_G^T represents the aggregation from vertex features to hyperedge features, and the multiplication by H_G represents the aggregation from hyperedge features back to vertex features. After the L layers, the user social relationship feature x̃_u is obtained.
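One propagation layer of this vertex-to-hyperedge-to-vertex aggregation can be sketched as follows (a NumPy sketch over a toy 4-user hypergraph; the matrices H, Θ and the choice σ = tanh are illustrative assumptions):

```python
import numpy as np

# Toy hypergraph: 4 users (vertices), 2 hyperedges (a user plus its neighbors).
H = np.array([[1, 0],
              [1, 1],
              [1, 1],
              [0, 1]], dtype=float)   # h(v, e) = 1 iff vertex v is in hyperedge e

X = np.eye(4)                          # initial vertex (user) features
Theta = np.full((4, 2), 0.5)           # parameters to be learned (fixed here)

D = np.diag(H.sum(axis=1))             # vertex degree matrix of H_G
edge_feat = H.T @ X                    # vertex features -> hyperedge features
vertex_feat = H @ edge_feat            # hyperedge features -> vertex features
X_next = np.tanh(np.linalg.inv(D) @ vertex_feat @ Theta)
print(X_next.shape)  # (4, 2): social-relationship features after one layer
```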
Step 5, personalized recommendation: compress the service embedding vector obtained in step 1 using a multilayer perceptron network, input the compressed service embedding vector together with the user social relationship features obtained in step 4 into a fully connected layer, output the probability of service recommendation through a sigmoid function, and realize personalized recommendation according to a Top-k strategy.
Compressing the service embedding vector obtained in step 1 with a multilayer perceptron network, splicing it with the user social relationship features, inputting the result into a fully connected layer, and outputting the probability that the service is recommended through a sigmoid function gives:

ŷ_s = σ(W_s[x̃_u, x_s] + b_s)
where ŷ_s is the service recommendation probability prediction value, x̃_u is the user social relationship feature, x_s is the compressed service embedding vector, W_s is a parameter matrix, and b_s is a bias.
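The recommendation head can be sketched as follows (a minimal NumPy sketch; the feature dimensions and random parameters are illustrative assumptions):

```python
import numpy as np

def recommend_prob(social_feat, service_emb, W, b):
    """Concatenate the user social-relationship feature with the compressed
    service embedding, then apply a fully connected layer + sigmoid."""
    z = np.concatenate([social_feat, service_emb])
    return 1.0 / (1.0 + np.exp(-(W @ z + b)))

rng = np.random.default_rng(2)
social_feat = rng.random(4)            # x̃_u: user social relationship feature
service_emb = rng.random(3)            # x_s: compressed service embedding
W = rng.standard_normal(7) * 0.1       # one output unit over the 7-dim concat
p = recommend_prob(social_feat, service_emb, W, 0.0)
print(0.0 < p < 1.0)  # True: a valid recommendation probability
```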
The cross-entropy loss function is selected as the loss function for training the service recommendation probability prediction model:

L_s = −[ y_s log ŷ_s + (1 − y_s) log(1 − ŷ_s) ]
where y_s denotes the actual value and ŷ_s denotes the predicted value; the model parameters are trained through back propagation.
According to the Top-k strategy, the recommendation probabilities of the services are sorted in descending order, and the top k services are selected and recommended to the user.
If a recommended service is cached on the edge server, the edge server provides the service to the user; otherwise, the cloud server provides the service to the user.
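The Top-k selection followed by edge-or-cloud routing might look like this (a pure-Python sketch with illustrative service names, probabilities, and k):

```python
def serve(recommendations, edge_cache):
    """Route each recommended service: edge server if cached there, else cloud."""
    return {s: ("edge" if s in edge_cache else "cloud") for s in recommendations}

probs = {"s1": 0.4, "s2": 0.95, "s3": 0.7, "s4": 0.1}   # predicted probabilities
top_k = sorted(probs, key=probs.get, reverse=True)[:3]  # Top-k strategy, k = 3
routes = serve(top_k, edge_cache={"s2", "s4"})
print(top_k)   # ['s2', 's3', 's1']
print(routes)  # {'s2': 'edge', 's3': 'cloud', 's1': 'cloud'}
```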
Example 2
In a second aspect, the present embodiment provides an edge cache-based personalized recommendation apparatus, including a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate in accordance with the instructions to perform the steps of the method according to embodiment 1.
Example 3
In a third aspect, the present embodiment provides a storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of embodiment 1.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be understood that the above examples are given only for clarity of illustration and are not intended to limit the embodiments; they are neither required for nor exhaustive of all embodiments. Obvious variations or modifications may be made without departing from the scope of the invention.
Claims (10)
1. A personalized recommendation method based on edge cache is characterized by comprising the following steps:
step 1, constructing a user embedded vector and a service embedded vector according to a user historical service call record;
step 2, based on the user embedded vector, predicting by using a service cache probability prediction model to obtain a service cache probability prediction value:
201. compressing the user-embedded vector using a stacked noise reduction encoder;
202. based on the compressed user embedded vector, realizing group division through a DBSCAN clustering algorithm to obtain a group division result;
203. according to the group division result, fusing the service call records of the users in the same group at the same time period, splicing the service call records of different time periods, constructing an embedded vector of the group, compressing the embedded vector of the group by using a multilayer perceptron network, and modeling the dynamic change of the group preference characteristics by using a gated recurrent unit to obtain the preference characteristic vector of the group; outputting the probability of the service cache through a full connection layer and a sigmoid function;
step 3, utilizing a Top-k strategy to realize edge caching according to the service caching probability predicted value;
and 4, predicting by using a service recommendation probability prediction model based on the service embedding vector and the obtained social relationship characteristics of the user to obtain a service recommendation probability prediction value:
401. modeling social relationships among users by using a hypergraph, constructing a hyperedge for each user, wherein each hyperedge comprises all neighbor user nodes of the user, and mining high-order connectivity information in the hypergraph by using a graph convolution network, thereby obtaining the social relationship characteristics of the user;
402. compressing the service embedding vector by using a multilayer perceptron network, splicing the compressed service embedding vector and the social relation characteristics of the user, inputting the service embedding vector into a full connection layer, and outputting the recommended probability of the service through a sigmoid function;
step 5, utilizing a Top-k strategy to perform descending ordering on the service recommendation probability predicted values, and selecting the first k services to recommend to a user;
and 6, if the recommended service is cached in the edge server, providing the service for the user by the edge server, otherwise, providing the service for the user from the cloud server.
2. The personalized recommendation method based on the edge cache as claimed in claim 1, wherein in step 1, the user embedded vector u_i is constructed according to the user historical service call records, where element u_{ij} represents the number of times user i interacts with service j;
3. The personalized recommendation method based on edge cache of claim 1, wherein 201, compressing the user embedded vector by using the stacked noise reduction encoder comprises:
compressing the sparse user embedded vector u_i from the high-dimensional space to a low-dimensional hidden space using the stacked noise reduction encoder, the input h_{l-1} and output h_l of each layer of the stacked noise reduction encoder being expressed as:

h_l = f(W_l h_{l-1} + b_l), where h_0 = u_i
where l ∈ {1, 2, ..., L}, W_l and b_l are the parameters to be learned at layer l; the front L/2 layers of the stacked noise reduction encoder form the encoder and the rear L/2 layers form the decoder, and the objective function is defined as:
argmin ||u_i − h_L||^2
training the stacked noise reduction encoder model by back propagation, with the output feature x_u of the L/2-th layer of the stacked noise reduction encoder used as the clustering samples;
and/or 202, based on the compressed user embedded vector, the group division is realized through a DBSCAN clustering algorithm, and the method comprises the following steps:
based on the compressed user embedded vectors, calculating the Euclidean distance d_{ij} between each pair of sample points; when d_{ij} is not larger than the neighborhood distance threshold r, point j is considered to be contained in the neighborhood of point i;
if and only if the number of sample points contained in the neighborhood of point i is larger than or equal to the in-neighborhood sample threshold M, creating a new class with point i as a core point;
repeatedly searching for points that are directly density-reachable or density-reachable from the core points and adding them to the corresponding class, and merging classes whose core points are density-reachable from each other into the same class, until no new point can be added to any existing class, completing the group division of the users;
and/or, step 203 comprises:
dividing the users into m groups according to the group division result, fusing the behaviors of the users in the same group at the same time period, and splicing the service call records of different time periods to obtain the embedded vector x_i of group i, where element x_{ij}^a represents the number of times the users in the i-th group interact with service j at time period a;
inputting the embedded vectors of the group i into a multi-layer perceptron network, compressing the embedded vectors from a high-dimensional space to a low-dimensional hidden space, and inputting each layerAnd output->Comprises the following steps:
where L is in the range of {1,2,. Cndot., L }, W ∈ (l) 、b (l) Is the parameter to be learned at the l-th layer,after passing through the L layer, a low-dimensional embedded representation is obtained->As a subsequent input;
inputting the output of the multilayer perceptron network into the gated recurrent unit, whose processing is expressed as:

z_t = σ(W_z[h_{t-1}, x_t] + b_z)
r_t = σ(W_r[h_{t-1}, x_t] + b_r)
h̃_t = tanh(W_h[r_t ⊙ h_{t-1}, x_t] + b_h)
h_t = (1 − z_t) ⊙ h_{t-1} + z_t ⊙ h̃_t

where t ∈ {1, 2, ..., T}; z_t is the update gate, which controls how much historical information is kept for the current state; r_t is the reset gate, which controls whether the current candidate state h̃_t depends on the state h_{t-1} of the previous moment; σ is the sigmoid activation function; W_z, W_r and W_h are parameter matrices; b_z, b_r and b_h are biases; x_t is the input at time t; and h_t is the state of the t-th hidden node; modeling the dynamic change of the group preference features through the gated recurrent network yields the preference feature vector h_T of the group;
outputting, based on the preference feature vector h_T of the group, the probability of service caching using a fully connected layer and a sigmoid function:

ŷ_e = σ(W_e h_T + b_e)

where ŷ_e is the service cache probability prediction value, W_e is a parameter matrix, and b_e is a bias.
4. The personalized recommendation method based on the edge cache as claimed in claim 1, wherein a binary cross-entropy loss function is selected as the loss function L_e for training the service cache probability prediction model:

L_e = −(1/N) Σ_{i=1}^N [ y_{e,i} log ŷ_{e,i} + (1 − y_{e,i}) log(1 − ŷ_{e,i}) ]

where N denotes the number of services, y_{e,i} denotes the actual value, and ŷ_{e,i} denotes the predicted value.
5. The personalized recommendation method based on the edge cache according to claim 1, wherein step 3, implementing the edge cache by using a Top-k policy according to the service cache probability prediction value comprises:
and performing TOP-k matching for each group, performing result fusion according to the group scale, deploying the service meeting the preference of most people in the edge server, and realizing edge cache based on group preference perception and service representation learning.
6. The personalized recommendation method based on the edge cache as claimed in claim 1, wherein 401, modeling social relationships among users by using a hypergraph, constructing a hyperedge for each user, wherein each hyperedge contains all neighbor user nodes of the user, and mining high-order connectivity information in the hypergraph by using a graph convolution network, thereby obtaining the social relationship characteristics of the users, comprises:
the hypergraph is formalized as G = (V, E), the node set V represents user features, and the hyperedge set E represents user social relationships; for a target user, the set of all its neighbor user nodes constitutes the hyperedge for that user; the hypergraph is represented by an adjacency matrix H_G ∈ R^{|V|×|E|}, where each element h(v, e) of the adjacency matrix is 1 if vertex v belongs to hyperedge e and 0 otherwise;
mining the high-order connectivity information in the hypergraph using the graph convolution network:

X^(l) = σ(D^(−1) H_G H_G^T X^(l−1) Θ^(l−1))
where l ∈ {1, 2, ..., L}; Θ^(l−1) represents the parameters to be learned at the (l−1)-th layer; D is the degree matrix whose diagonal values are the degrees in H_G; the multiplication by H_G^T represents the aggregation from vertex features to hyperedge features, and the multiplication by H_G represents the aggregation from hyperedge features back to vertex features; after the L layers, the user social relationship feature x̃_u is obtained.
7. The personalized recommendation method based on the edge cache as claimed in claim 1, wherein step 402, compressing the service embedding vector by using a multilayer perceptron network, splicing the compressed service embedding vector and the user social relationship features, inputting the result into a full connection layer, and outputting the probability that the service is recommended through a sigmoid function, comprises:

ŷ_s = σ(W_s[x̃_u, x_s] + b_s)

where ŷ_s is the service recommendation probability prediction value, x̃_u is the user social relationship feature, x_s is the compressed service embedding vector, W_s is a parameter matrix, and b_s is a bias;
8. The personalized recommendation method based on the edge cache as claimed in claim 1, wherein a cross-entropy loss function is selected as the loss function of the service recommendation probability prediction model training:

L_s = −[ y_s log ŷ_s + (1 − y_s) log(1 − ŷ_s) ]

where y_s denotes the actual value and ŷ_s denotes the predicted value;
9. The personalized recommendation device based on the edge cache is characterized by comprising a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate in accordance with the instructions to perform the steps of the method according to any one of claims 1 to 8.
10. A storage medium having a computer program stored thereon, the computer program, when being executed by a processor, performing the steps of the method of any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310097744.9A CN115964568A (en) | 2023-02-10 | 2023-02-10 | Personalized recommendation method based on edge cache |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115964568A true CN115964568A (en) | 2023-04-14 |
Family
ID=87361551
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115964568A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116226540A (en) * | 2023-05-09 | 2023-06-06 | 浙江大学 | End-to-end federation personalized recommendation method and system based on user interest domain |
CN116610868A (en) * | 2023-07-13 | 2023-08-18 | 支付宝(杭州)信息技术有限公司 | Sample labeling method, end-edge cloud cooperative training method and device |
CN117493697A (en) * | 2024-01-02 | 2024-02-02 | 西安电子科技大学 | Web API recommendation method and system based on multi-mode feature fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||