CN107659444A - Differential privacy prediction system and method for privacy-preserving collaborative Web service quality - Google Patents

Differential privacy prediction system and method for privacy-preserving collaborative Web service quality

Info

Publication number
CN107659444A
Authority
CN
China
Prior art keywords
user
qos
value
data
service
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710875787.XA
Other languages
Chinese (zh)
Inventor
毛睿
李荣华
陆敏华
王毅
罗秋明
商烁
刘刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN201710875787.XA
Priority to PCT/CN2017/113486 (WO2019056573A1)
Publication of CN107659444A
Pending legal-status Critical Current


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/50: Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L 41/5003: Managing SLA; Interaction between SLA and QoS
    • H04L 41/5009: Determining service level performance parameters or violations of service level contracts, e.g. violations of agreed response time or mean time between failures [MTBF]
    • H04L 41/14: Network analysis or design
    • H04L 41/147: Network analysis or design for predicting network behaviour
    • H04L 41/142: Network analysis or design using statistical or mathematical methods

Abstract

The invention discloses a differential privacy prediction system for privacy-preserving collaborative Web service quality, comprising a data collection module, a data disguising module, a collaborative filtering module and a prediction result module. The data collection module is used by each user to collect quality of service (QoS) values locally; the data disguising module disguises the collected QoS values; the collaborative filtering module collaboratively filters the QoS values disguised by the data disguising module; and the prediction result module predicts results from the QoS values filtered by the collaborative filtering module. In addition, the invention discloses a differential privacy prediction method for privacy-preserving collaborative Web service quality. The invention introduces differential privacy into a collaborative Web service QoS prediction framework for the first time, allowing users to obtain maximum privacy protection while ensuring the availability of the data. Experimental results show that the system and method of the present invention provide secure and accurate collaborative Web service QoS prediction.

Description

Differential privacy prediction system and method for privacy-preserving collaborative Web service quality
Technical Field
The invention belongs to the field of computers and particularly relates to privacy protection systems, specifically to a differential privacy prediction system for privacy-preserving collaborative Web service quality; the invention further relates to a differential privacy prediction method for privacy-preserving collaborative Web service quality.
Background
Quality of service (QoS) is widely used to describe the non-functional characteristics of Web services. Web service selection, composition, and recommendation techniques based on QoS are widely discussed in recent papers. The premise of these methods is that accurate QoS values for Web services are always available, but obtaining accurate QoS values is not an easy task. On the one hand, QoS values published by service providers or third-party communities are not accurate for service users, as they are susceptible to the uncertain Internet environment. On the other hand, it is impractical for a service user to directly assess the QoS of all available services due to time, cost and other resource constraints. To address this problem, a breakthrough is personalized collaborative Web service QoS prediction. The basic idea is that users with similar characteristics tend to observe similar QoS values for the same service; therefore, when we want to predict the QoS value a particular user would observe for a Web service, the values observed by similar users can be substituted.
In this way, different users are also typically given different QoS prediction values for the same service, and the final prediction value is actually dependent on their particular context. Based on these provided QoS values, various techniques have been employed to improve the quality, particularly the accuracy of the prediction.
Collaborative Web services QoS prediction has become an important tool for generating accurate personalized QoS. Although much effort has been made in research to improve the accuracy of collaborative QoS prediction, there is insufficient effort to protect user privacy in this process. In fact, the observed QoS values may be sensitive information, and thus users may be reluctant to share them with others. For example, the observed response time fed back by a user is typically dependent on her location, which means that the user's location can be inferred from the QoS information she provides. Therefore, one problem is whether the recommendation system can accurately predict the personalized QoS for the user on the premise of protecting the privacy of the user.
Homomorphic encryption, which allows computation on ciphertext, is a straightforward way to achieve privacy. However, these operations require significant computational cost and continuous communication between the parties, even without considering the difficulty of mapping some complex computations into the encrypted domain. Therefore, it is not feasible to address our problem with homomorphic encryption.
Another technique, random perturbation, proposed by Polat et al., adds randomness drawn from a specific distribution to the raw data to prevent information leakage while, it is claimed, still allowing accurate recommendations. However, the range of the randomness α is chosen empirically and there is no provable privacy guarantee. Moreover, for applications that cluster the perturbed data, an adversary can infer the user's private data with up to 70% accuracy.
Thus, although the random-perturbation approach to privacy preservation is insecure, it motivates us to design a lightweight and provable random perturbation. Specifically, we develop a privacy-preserving QoS prediction model for users based on differential privacy; the model strongly protects private data, offers a provable privacy guarantee, and represents the current state of the art in private data protection. Differential privacy has attracted extensive attention because it provides an effective way to minimize the noise added to the original data.
Despite the widespread interest in differential privacy, its application to QoS prediction remains quite limited. Reference 1 [F. McSherry and I. Mironov. Differentially Private Recommender Systems: Building Privacy into the Netflix Prize Contenders. SIGKDD 2009: 627-636] and reference 2 [A. Machanavajjhala, A. Korolova and A. D. Sarma. Personalized Social Recommendations: Accurate or Private? PVLDB 2011, 4(7): 440-450] are two differential-privacy-based privacy-preserving recommendation systems and the work most relevant to our problem. Machanavajjhala et al. [reference 2] studied personalized privacy protection for social recommendations, which is based entirely on the user's social graph. With differential privacy, sensitive links in a social graph can be effectively protected, meaning that an attacker cannot infer the presence of a single link in the graph by passively observing the recommendations. However, quality recommendations can only be achieved with weak privacy parameters, or only for a small fraction of users. McSherry and Mironov [reference 1] applied differential privacy to collaborative filtering [R. M. Bell and Y. Koren. Scalable Collaborative Filtering with Jointly Derived Neighborhood Interpolation Weights. ICDM 2007: 43-52], the usual solution for recommendation systems. They split the recommendation algorithm into two parts: a learning stage executed under a differential privacy guarantee, and an individual recommendation stage that uses the learning result for individual prediction. Unlike the work in reference 1 and reference 2, the present invention focuses on the privacy guarantee of the data distribution rather than of knowledge learning, and explores approaches other than the one studied in reference 1, such as the latent factor model.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a differential privacy prediction system for privacy-preserving collaborative Web service quality, which introduces differential privacy into a collaborative Web service QoS prediction framework for the first time and allows users to obtain maximum privacy protection while ensuring the availability of the data. The invention further provides a differential privacy prediction method for privacy-preserving collaborative Web service quality. Experimental results show that the system and method provide secure and accurate collaborative Web service QoS prediction.
In order to solve the technical problem, the invention provides a differential privacy prediction system for privacy-preserving collaborative Web service quality, comprising a data collection module, a data disguising module, a collaborative filtering module and a prediction result module;
the data collection module is used for locally collecting a quality of service value, namely a QoS value, by each user;
the data disguising module is used for disguising the collected quality of service values;
the collaborative filtering module is used for collaboratively filtering the service quality value disguised and collected by the data disguising module;
the predicted result module predicts a result according to the quality of service value filtered by the collaborative filtering module.
As a preferred technical scheme of the invention, the collaborative filtering module adopts a neighborhood-based collaborative filtering module or a model-based collaborative filtering module.
As the preferred technical scheme of the invention, the data disguising module achieves the purpose of disguising data by randomly interfering the original data; randomness should ensure that sensitive information cannot be derived from perturbed data, including quality of service values for each individual user; when the number of users is very large, the aggregated information of the users can still be evaluated with high accuracy.
As a preferred technical solution of the present invention, the data disguising module adopts the following data disguising method:
r_ui denotes the quality of service (QoS) value collected by user u for Web service i, and r_u denotes the entire vector of QoS values evaluated by user u; similarly, I_ui and I_u denote, respectively, a binary element and a binary vector indicating whether a QoS value is present, and R_u denotes the disguised data; ε-differential privacy for each user u is achieved by the following equation:
R_ui = r_ui + Laplace(Δf/ε)
where ε is the privacy parameter controlling the privacy level, and Δf is defined as the maximum difference between QoS values, i.e.:
Δf = max(r_ui − r_uj)
where r_ui is the QoS value collected by user u for Web service i and r_uj is the QoS value collected by user u for Web service j;
and the meaning of Laplace() is given as follows: if the probability density function of a random variable x is
f(x | μ, b) = (1/(2b)) · exp(−|x − μ|/b),
then the random variable x follows a Laplace(μ, b) distribution, where μ and b are the location parameter and the scale parameter, respectively; letting μ = 0, the distribution is a symmetric exponential distribution with standard deviation √2·b; to add noise that obeys the Laplace distribution, we let b = Δf/ε, and the generated noise is denoted Laplace(Δf/ε).
As a preferred technical solution of the present invention, the privacy parameter ε is given by each user, and by using differential privacy, the random noise added to the observed QoS values is the minimum required to maintain considerable accuracy under the specified privacy level.
As a preferred technical solution of the present invention, the prediction method of the prediction result module is specifically: after the QoS value of a certain service has been obtained through collaborative filtering, other users' QoS values for the same service are retrieved and the user with the closest value is selected, which indicates that the two users have similar interests; a similar recommendation is then made on this basis, and that user's QoS value is adopted as the prediction result for the former user; based on the QoS value predicted by the prediction result module, the server runs applications including selection, composition and recommendation based on the QoS value.
In addition, the invention also provides a differential privacy prediction method for the privacy protection and the cooperative Web service quality, which comprises the following steps:
firstly, collecting data;
secondly, data disguising;
thirdly, collaborative filtering, namely adopting a neighborhood-based collaborative filtering method or a model-based collaborative filtering method;
and fourthly, predicting a result.
As a preferred technical solution of the present invention, in the second step, the data disguising is performed by the following method:
r_ui denotes the quality of service (QoS) value collected by user u for Web service i, and r_u denotes the entire vector of QoS values evaluated by user u; similarly, I_ui and I_u denote, respectively, a binary element and a binary vector indicating whether a QoS value is present, and R_u denotes the disguised data; ε-differential privacy for each user u is achieved by the following equation:
R_ui = r_ui + Laplace(Δf/ε)
where ε is the privacy parameter controlling the privacy level, and Δf is defined as the maximum difference between QoS values, i.e.:
Δf = max(r_ui − r_uj)
where r_ui is the QoS value collected by user u for Web service i and r_uj is the QoS value collected by user u for Web service j;
and the meaning of Laplace() is given as follows: if the probability density function of a random variable x is
f(x | μ, b) = (1/(2b)) · exp(−|x − μ|/b),
then the random variable x follows a Laplace(μ, b) distribution, where μ and b are the location parameter and the scale parameter, respectively; letting μ = 0, the distribution is a symmetric exponential distribution with standard deviation √2·b; to add noise that obeys the Laplace distribution, we let b = Δf/ε, and the generated noise is denoted Laplace(Δf/ε).
As a preferred technical solution of the present invention, in the third step, the neighborhood-based collaborative filtering method includes the steps of:
(1) Normalization: z-score normalization is performed on the QoS values using the following equation:
q_ui = (r_ui − r̄_u) / ω_u
where r_ui is the QoS value collected by user u for Web service i, r̄_u is the mean of the QoS vector r_u, and ω_u is the standard deviation of the QoS vector r_u; after normalization, the QoS data has zero mean and unit variance;
(2) Data disguising: the normalized QoS values are disguised according to the following formula:
Q_ui = q_ui + Laplace(Δf/ε)
where ε is the privacy parameter set by user u and Δf is defined by the distribution of QoS values, i.e., Δf = max(r_ui − r_uj); after disguising, the user sends her disguised values Q_ui to the server, and the randomness conceals the sensitive information of the original data q_ui;
(3) Neighborhood-based collaborative filtering: the similarity between two users u and v is calculated over the services they have both invoked, using the following equation:
Sim(u,v) = Σ_{i∈S} (r_{u,i} − r̄_u)(r_{v,i} − r̄_v) / ( √(Σ_{i∈S} (r_{u,i} − r̄_u)²) · √(Σ_{i∈S} (r_{v,i} − r̄_v)²) )
where S = S_u ∩ S_v is the set of services invoked by both user u and user v, r_{u,i} is the QoS value of service i observed by user u, and r̄_u is the average QoS value of all services observed by user u;
since only the disguised values Q_ui are available at the server, the similarity values are approximately calculated using Q_ui as follows: according to the z-score normalization, r_{u,i} − r̄_u = ω_u · q_{u,i}, and substituting this into the formula above gives
Sim(u,v) = Σ_{i∈S} q_{u,i} q_{v,i} / ( √(Σ_{i∈S} q_{u,i}²) · √(Σ_{i∈S} q_{v,i}²) );
during z-score normalization the data acquires unit variance, so Σ_{i∈S} q_{u,i}² ≈ |S|, and it is easy to obtain
Sim(u,v) ≈ (1/|S|) · Σ_{i∈S} q_{u,i} q_{v,i};
it can be shown that the scalar product between two vectors remains approximately unchanged despite data disguising; thus we obtain
Sim(u,v) ≈ (1/|S|) · Σ_{i∈S} Q_{u,i} Q_{v,i};
Sim(u,v) ranges over [−1, 1], with larger values indicating that two users or services are more similar; based on the similarity values, the QoS value of service i observed by user u can be directly predicted from the similar users of user u by the following equation:
q'_{u,i} = Σ_{v∈N(u)} Sim(u,v) · Q_{v,i} / Σ_{v∈N(u)} |Sim(u,v)|
where N(u) denotes the set of the top-k users most similar to u.
as a preferred technical solution of the present invention, in the third step, the model-based collaborative filtering method specifically comprises: by usingFactorization of matrices MF, assuming sparse matrix Q n*m Represents the observed QoS values of n users and m services, where each element q ij Reflecting QoS values of user i using service j, using input matrix Q n*m MF aims at serving the users to the matrix Q n*m The factorization is into two matrices of lower dimension d: user factor matrix U n*d And a service factor matrix V m*d (ii) a Then, Q n*m The null element in (b) may be approximated as the product of U and V, i.e., the unknown QoS value q' ij ByTo estimate;
MF is often converted to an optimization problem and local optimal solutions are obtained by iteration; the objective function or loss function of the MF is defined as:
first partIs the squared difference between the existing QoS matrix and the prediction matrix, but only for elements that have been evaluated by the user; the latter part lambda (| | U) i || 2 +||V j || 2 ) Is a regularization term added to handle overfitting due to input sparsity; by processing the optimization, a user factor matrix U is finally obtained n*d And a service factor matrix V m*d (ii) a This problem is solved by using a random gradient descent SGD, whose iterative equation is as follows:
where γ is the learning rate and λ' is the regularization coefficient; the choice of two parameters will significantly affect the result, which will diverge rather than converge when the value of γ is large; to obtain convergence, γ is empirically set to 0.001, and likewise λ' is empirically set to 0.01, although longer training times are required; the iteration will terminate when the objective function value is less than a certain threshold.
Compared with the prior art, the invention has the following beneficial effects. The invention performs privacy-preserving collaborative Web service QoS prediction based on differential privacy. The invention provides a privacy-preserving collaborative QoS prediction framework that protects the private data of users while retaining the ability to generate accurate QoS predictions. The invention introduces differential privacy, a strict and provable privacy protection technique, as preprocessing for QoS data prediction. The invention implements the proposed method based on a general method called the Laplace mechanism and performs extensive experiments to study its performance on real data sets. The privacy-accuracy trade-off is evaluated under different conditions, and the results show that under some constraints the method can achieve performance better than the baseline. The invention has the following advantages:
1. for the method proposed by the present invention, the privacy-preserving algorithm can be parameterized and used to match the prediction to its non-private analogues. Although there are some specialized analytical requirements, the method itself is relatively straightforward and readily available.
2. By integrating privacy protection into the application, the computation can be given unrestricted access to the original data, while only its final output, rather than the entire data set, needs to satisfy the privacy criterion.
3. The present invention tests the method with a real dataset. The result shows that the prediction accuracy of the disguised data of the invention is very close to that of the private data of the user.
Drawings
The invention is further illustrated with reference to the following figures and examples.
FIG. 1 is a block flow diagram of a differential privacy prediction system for privacy preserving collaborative Web quality of service according to the present invention.
Fig. 2 is a schematic diagram of a privacy preserving collaborative QoS prediction model.
FIG. 3 compares privacy and accuracy between the differential-privacy-based QoS prediction and the original methods under different privacy levels in the experiments of the present invention; Fig. 3(a) shows response time (RT) and Fig. 3(b) shows throughput (TP).
FIG. 4 compares the impact of the number of services between the differential-privacy-based QoS prediction and the original methods under different privacy levels in the experiments of the present invention; Fig. 4(a) shows response time (RT) and Fig. 4(b) shows throughput (TP).
FIG. 5 compares the impact of the number of users between the differential-privacy-based QoS prediction and the original methods under different privacy levels in the experiments of the present invention; Fig. 5(a) shows response time (RT) and Fig. 5(b) shows throughput (TP).
FIG. 6 shows the results of the accuracy comparison at different densities between the differential-privacy-based QoS prediction and the original methods under different privacy levels in the experiments of the present invention; Fig. 6(a) shows response time (RT) and Fig. 6(b) shows throughput (TP).
Detailed Description
The present invention will now be described in further detail with reference to the accompanying drawings. These drawings are simplified schematic views, and merely illustrate the basic structure of the present invention, and therefore, they show only the components related to the present invention.
1. System model and problem definition
1. Differential privacy
It is necessary to distinguish between differential privacy and conventional cryptographic systems. Differential privacy gives a strictly quantitative definition of privacy disclosure under a very strict attack model, and demonstrates that: based on the idea of differential privacy, users can maximally obtain privacy protection and ensure the usability of data. The method has the following advantages: although the data is distorted, the noise required for the disturbance is independent of the data size. We can achieve a high level of privacy protection by adding a very small amount of noise. Although many privacy preserving methods have been proposed, such as k-anonymity and l-diversity, differential privacy is still considered to be the most stringent and robust privacy preserving model on its solid mathematical basis.
2.1 Security Definitions under differential privacy
Differential privacy has two preconditions. One is that the output of any computation (e.g., SUM) should not be affected by operations like inserting or deleting records. Another is that it gives a strictly quantitative definition of the privacy disclosure under a very strict attack model: an attacker cannot distinguish records with a probability greater than epsilon even if she knows the entire data set except the target. The formula is defined as follows:
definition 1: (epsilon-differential privacy) if D1 and D2 differ by at most one element for all datasets and all S e Range (K), then the random function K gives epsilon-differential privacy,
d is a database of rows, D1 is a subset of D2, and the larger data set D2 contains exactly one additional row. The probability space Pr [ ] in any case is over the coin flip of K. The privacy parameter ε >0 is public, with smaller ε yielding a stronger privacy guarantee.
Since differential privacy is defined probabilistically, any method that achieves it is necessarily random. Some of these methods rely on adding controlled noise, such as the Laplace mechanism [C. Dwork, F. McSherry, K. Nissim and A. Smith. Calibrating Noise to Sensitivity in Private Data Analysis. TCC 2006: 265-284]. Others, such as the exponential mechanism and posterior sampling, sample from a problem-dependent distribution. We explain the construction in detail in the following section.
2.2 Laplace mechanism of Global sensitivity
In addition to the definition of differential privacy, Dwork et al. [C. Dwork, F. McSherry, K. Nissim and A. Smith. Calibrating Noise to Sensitivity in Private Data Analysis. TCC 2006: 265-284] also show that differential privacy can be achieved by adding random noise that obeys a Laplace distribution. If the probability density function of a random variable is
f(x | μ, b) = (1/(2b)) · exp(−|x − μ|/b),
the random variable follows a Laplace(μ, b) distribution, where μ and b are the location parameter and the scale parameter, respectively. For simplicity, we assume μ = 0, so the distribution can be considered a symmetric exponential distribution with standard deviation √2·b.
To add noise that obeys the Laplace distribution, let b = Δf/ε; the generated noise is denoted
Laplace(Δf/ε)
Here, Δf is the global sensitivity, defined below, and ε is the privacy parameter controlling the privacy level. As the equation shows, the added noise is proportional to Δf and inversely proportional to ε.
Definition 2 (Global sensitivity): for f: D → R^d, the L_k-sensitivity of f is defined as
Δf = max_{D1, D2} ||f(D1) − f(D2)||_k
over all D1, D2 differing in at most one element, where ||·||_k denotes the L_k norm.
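As a short check that the mechanism meets Definition 1, the following derivation (a standard argument added here for illustration, not part of the original text) shows why Laplace noise with scale b = Δf/ε yields ε-differential privacy for a scalar function g:

    \[
    \frac{\Pr[K(D_1)=x]}{\Pr[K(D_2)=x]}
      = \frac{\exp(-|x-g(D_1)|/b)}{\exp(-|x-g(D_2)|/b)}
      = \exp\!\Big(\frac{|x-g(D_2)|-|x-g(D_1)|}{b}\Big)
      \le \exp\!\Big(\frac{|g(D_1)-g(D_2)|}{b}\Big)
      \le \exp\!\Big(\frac{\Delta f}{b}\Big) = e^{\varepsilon}
    \]
    % where K(D) = g(D) + Laplace(b), the first inequality is the triangle inequality,
    % and the last step uses |g(D_1)-g(D_2)| <= \Delta f together with b = \Delta f / \varepsilon.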
3.1 System model
It has been shown in [S. Zhang, J. Ford and F. Makedon. Deriving Private Information from Randomly Perturbed Ratings. SDM 2006] that random perturbation is unsafe because the original data can be inferred by clustering techniques, but the system model proposed in [J. Zhu, P. He, Z. Zheng and M. R. Lyu. A Privacy-Preserving QoS Prediction Framework for Web Service Recommendation. ICWS 2015] is mature and applicable to many scenarios, and therefore this model is applied here. As shown in Fig. 2, each user (USER1, USER2, ..., USERn) invokes services, collects QoS values locally and masquerades the QoS values she observes, and then sends all of her masqueraded QoS values to the server. The QoS values can then be uploaded safely, because the server cannot derive any personally sensitive information from the masqueraded data. However, the data masquerading scheme should still allow the server to perform collaborative filtering (neighborhood-based or model-based) on the masqueraded data. Based on the predicted QoS values (QoS Prediction), the server may run various applications, such as selection, composition, and recommendation based on the QoS values.
Data masquerading is a key component of privacy-preserving collaborative Web service QoS prediction. The basic idea of data masquerading is to randomly perturb the original data while preserving the following properties:
a) Randomness should be able to guarantee that sensitive information (e.g. QoS value of each individual user) cannot be derived from the perturbed data;
b) Although personal information is limited, when the number of users is very large, the aggregated information of the users can be evaluated with high accuracy.
This property is useful for calculations based on aggregated information. Without knowing the exact values of the individual data items, we can still derive meaningful results, since the required aggregate information can be estimated from the perturbed data.
Another important point of our approach is the trade-off between accuracy and privacy. The more random numbers, the greater the gap between the masquerading data and the original data, which provides a higher level of privacy protection. Conversely, the fewer the random numbers, the more apparent the data characteristics. For context-based calculations, this indicates that the results are more accurate. Dealing with the balance between accuracy and privacy is an open question. In the present invention, privacy is parameterized as ε and is given by each user. By utilizing differential privacy, the random number added in the observed QoS value is a minimum value that maintains a fair degree of accuracy with respect to the specific privacy.
2. The invention relates to a differential privacy prediction system for privacy protection and Web service quality coordination
As shown in fig. 1, the differential privacy prediction system for privacy protection and collaborative Web service quality according to the present invention includes a data collection module, a data disguising module, a collaborative filtering module, and a prediction result module;
the data collection module is used for locally collecting the service quality value by each user;
the data disguising module is used for disguising the collected service quality value; the data disguising module achieves the purpose of disguising data by randomly interfering the original data; randomness should ensure that sensitive information cannot be derived from the perturbed data, including quality of service values for each individual user; when the number of users is very large, the aggregated information of the users can still be evaluated with high accuracy.
The collaborative filtering module is used for collaboratively filtering the service quality value disguised and collected by the data disguising module; the collaborative filtering module adopts a neighborhood-based collaborative filtering module or a model-based collaborative filtering module.
The prediction result module predicts a result according to the service quality value filtered by the collaborative filtering module. Based on the quality of service values predicted by the prediction results module, the server runs applications that include selections, combinations, and recommendations based on the quality of service values.
3. The invention relates to a differential privacy prediction method for privacy protection and Web service quality
The invention discloses a differential privacy prediction method for privacy protection and Web service quality, which comprises the following steps:
firstly, collecting data;
secondly, data disguising;
thirdly, collaborative filtering, namely adopting a neighborhood-based collaborative filtering method or a model-based collaborative filtering method;
and fourthly, predicting a result.
The data disguising in the second step adopts the following method:
r_ui denotes the quality of service (QoS) value collected by user u for Web service i, and r_u denotes the entire vector of QoS values evaluated by user u; similarly, I_ui and I_u denote, respectively, a binary element and a binary vector indicating whether a QoS value is present, and R_u denotes the disguised data; ε-differential privacy for each user u is achieved by the following equation:
R_ui = r_ui + Laplace(Δf/ε)
where ε is the privacy parameter controlling the privacy level, and Δf is defined as the maximum difference between QoS values, i.e.:
Δf = max(r_ui − r_uj)
where r_ui is the QoS value collected by user u for Web service i and r_uj is the QoS value collected by user u for Web service j;
and the meaning of Laplace() is given as follows: if the probability density function of a random variable x is
f(x | μ, b) = (1/(2b)) · exp(−|x − μ|/b),
then the random variable x follows a Laplace(μ, b) distribution, where μ and b are the location parameter and the scale parameter, respectively; letting μ = 0, the distribution is a symmetric exponential distribution with standard deviation √2·b; to add noise that obeys the Laplace distribution, we let b = Δf/ε, and the generated noise is denoted Laplace(Δf/ε).
The privacy parameter ε is given by each user, and by using differential privacy, the random noise added to the observed QoS values is the minimum required to maintain considerable accuracy under the specified privacy level.
The third step is collaborative filtering. Collaborative filtering (CF) is a mature technology employed by most modern recommendation systems. In collaborative Web service QoS prediction, a user is required to provide the observed QoS values of the services she uses to the recommendation system. Based on the collected QoS values, the recommendation system can predict the QoS of all available services for the user through appropriate algorithms. The more QoS values a user provides, the higher the prediction accuracy. In the present invention, we employ two representative collaborative filtering methods: neighborhood-based collaborative filtering and model-based collaborative filtering. We show below how differential privacy can be integrated into these two representative collaborative filtering methods for Web service QoS prediction.
1. Differential privacy based on data masquerading
We use r_ui to denote the QoS value collected by user u for Web service i and r_u to denote the entire vector of QoS values evaluated by user u; similarly, I_ui and I_u denote a binary element and a binary vector indicating whether a QoS value exists, and c_u = |I_u| is the number of QoS values evaluated by user u. In our discussion, differential privacy is the key technology for data masquerading. The Laplace mechanism [C. Dwork, F. McSherry, K. Nissim and A. Smith. Calibrating Noise to Sensitivity in Private Data Analysis. TCC 2006: 265-284] achieves ε-differential privacy by adding noise drawn from a Laplace distribution.
Definition 3 (Laplace mechanism [C. Dwork. Differential Privacy. Encyclopedia of Cryptography and Security. 2011: 338-340]): given a function g: D → R^d, the following computation maintains ε-differential privacy:
X = g(x) + Laplace(Δf/ε)
where ε is the privacy parameter controlling the privacy level, and a smaller ε provides a stronger privacy guarantee. Δf is the global sensitivity; here, we use the L_1 norm to compute Δf:
Δf = max_{D1, D2} ||g(D1) − g(D2)||_1
For simplicity, ε-differential privacy for each user u is achieved by the following equation:
R_ui = r_ui + Laplace(Δf/ε)
where Δf is defined as the maximum difference between QoS values, i.e.:
Δf = max(r_ui − r_uj)
with r_ui the QoS value collected by user u for Web service i and r_uj the QoS value collected by user u for Web service j.
After masquerading, each user sends her masqueraded QoS values R_u to the server, and the randomness conceals the sensitive information of the original data r_ui. However, we can still estimate the aggregated information of the users. Thus, R_ui can be accessed directly and independently to perform QoS prediction.
2. Collaborative Web services QoS prediction
Next, we will show how to extend the two representative collaborative filtering methods to perform differential privacy based QoS prediction from masquerading data.
1) Neighborhood-based collaborative filtering
Here, we divide all the processes into three parts: z-score normalization, data masquerading and QoS prediction.
First step, z-score normalization: to eliminate the variance between user data and improve accuracy, each user performs z-score normalization on the observed QoS data, using the following equation:
q_ui = (r_ui − r̄_u) / ω_u
where r_ui is the QoS value collected by user u for Web service i, r̄_u is the mean of the QoS vector r_u, and ω_u is the standard deviation of the QoS vector r_u. After normalization, the QoS data has zero mean and unit variance.
Second step, data disguising: the normalized QoS values are disguised according to the following formula:
Q_ui = q_ui + Laplace(Δf/ε)
where ε is a privacy parameter set by user u, and Δf is defined according to the distribution of QoS values, i.e., Δf = max(r_ui − r_uj). After disguising, the user sends her disguised values Q_ui to the server, and the randomness conceals the sensitive information of the original data q_ui. However, the aggregated information of the users can still be estimated. Thus, Q_ui can be accessed directly to perform QoS prediction.
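A sketch of these two user-side steps combined (our illustration; names such as normalize_and_mask are hypothetical): the observed values are z-score normalized, Laplace noise is added to the normalized values, and only the masked values are uploaded while the mean and standard deviation stay local.

    import numpy as np

    def normalize_and_mask(r_u, epsilon, rng=None):
        """User-side pipeline: z-score normalize observed QoS values, then add Laplace noise."""
        rng = np.random.default_rng() if rng is None else rng
        r_u = np.asarray(r_u, dtype=float)

        mean_u = r_u.mean()                       # \bar{r}_u
        std_u = r_u.std()                         # omega_u
        q_u = (r_u - mean_u) / std_u              # normalized values: zero mean, unit variance

        delta_f = r_u.max() - r_u.min()           # sensitivity from the distribution of QoS values
        Q_u = q_u + rng.laplace(0.0, delta_f / epsilon, size=q_u.shape)

        # Only Q_u is uploaded; mean_u and std_u remain on the user's device.
        return Q_u, mean_u, std_u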
Third step, QoS prediction: in the process of QoS prediction, two types of similarity are calculated to improve prediction accuracy: user similarity and service similarity. In particular, the similarity between two users u and v is calculated over the services they have both invoked, using the following equation:
Sim(u,v) = Σ_{i∈S} (r_{u,i} − r̄_u)(r_{v,i} − r̄_v) / ( √(Σ_{i∈S} (r_{u,i} − r̄_u)²) · √(Σ_{i∈S} (r_{v,i} − r̄_v)²) )
where S = S_u ∩ S_v is the set of services invoked by both user u and user v, r_{u,i} is the QoS value of service i observed by user u, and r̄_u is the average QoS value of all services observed by user u.
However, because of the masquerading of QoS values, only the masqueraded values Q_{u,i}, rather than the true values q_{u,i}, are available at the server side. Therefore, we consider using Q_{u,i} to approximately calculate the similarity values, as follows.
According to the z-score normalization, r_{u,i} − r̄_u = ω_u · q_{u,i}, and substituting this into the equation above, the similarity can be calculated as
Sim(u,v) = Σ_{i∈S} q_{u,i} q_{v,i} / ( √(Σ_{i∈S} q_{u,i}²) · √(Σ_{i∈S} q_{v,i}²) )
Also, we observe that during z-score normalization the data acquires unit variance, so Σ_{i∈S} q_{u,i}² ≈ |S|; then it is easy to obtain
Sim(u,v) ≈ (1/|S|) · Σ_{i∈S} q_{u,i} q_{v,i}
Next, we demonstrate that the scalar product between two vectors remains approximately unchanged despite data masquerading. For clarity, we denote the two vectors as a = (a_1, a_2, ..., a_n) and b = (b_1, b_2, ..., b_n). After masquerading, the two vectors become A = (A_1, A_2, ..., A_n) and B = (B_1, B_2, ..., B_n), where A_i = a_i + Laplace(Δf_a/ε_a) and B_i = b_i + Laplace(Δf_b/ε_b). We have
A·B = Σ a_i b_i + Σ a_i Laplace(Δf_b/ε_b) + Σ b_i Laplace(Δf_a/ε_a) + Σ Laplace(Δf_a/ε_a) Laplace(Δf_b/ε_b)
Because a_i and Laplace(Δf_b/ε_b) are independent, and Laplace(Δf_b/ε_b) is a symmetric distribution with μ = 0, we can derive Σ a_i Laplace(Δf_b/ε_b) ≈ 0; likewise, Σ b_i Laplace(Δf_a/ε_a) ≈ 0 and Σ Laplace(Δf_a/ε_a) Laplace(Δf_b/ε_b) ≈ 0. Therefore, we obtain the following equation:
A·B ≈ Σ a_i b_i = a·b
Furthermore, we can also derive
Sim(u,v) ≈ (1/|S|) · Σ_{i∈S} Q_{u,i} Q_{v,i}
Note that Sim(u,v) ranges over [−1, 1], with larger values indicating more similarity between two users (or services). Based on the above similarity values, the QoS value of service i observed by user u can be directly predicted; the similar users of user u are utilized by the following equation:
q'_{u,i} = Σ_{v∈N(u)} Sim(u,v) · Q_{v,i} / Σ_{v∈N(u)} |Sim(u,v)|
where N(u) denotes the set of the top-k users most similar to u.
as with the user-based QoS forecast, the project-based QoS forecast may also be computed in such a way that the two approaches may be combined to improve the accuracy of the QoS forecast.
2) Model-based collaborative filtering
Matrix factorization (MF) [Z. Zheng, H. Ma, M. R. Lyu and I. King. QoS-Aware Web Service Recommendation by Collaborative Filtering. TSC 2011, 4(2): 140-152] is a typical solution for model-based collaborative filtering; the accuracy of prediction can be effectively improved by learning the latent factors of the model.
Assume a sparse matrix Q_{n×m} represents the observed QoS values of n users and m services, where each element q_ij reflects the QoS value of user i using service j. Given the input matrix Q_{n×m}, MF aims to factorize Q_{n×m} into two matrices of lower dimension d: a user factor matrix U_{n×d} and a service factor matrix V_{m×d}. Then, the null elements in Q_{n×m} can be approximated by the product of U and V, i.e., the unknown QoS value q'_ij is estimated by U_i·V_j^T.
MF is usually converted into an optimization problem, and a locally optimal solution is obtained by iteration. The objective function (or loss function) of MF is defined as:
L(U, V) = Σ_{(i,j): I_ij = 1} [ (q_ij − U_i·V_j^T)² + λ·(||U_i||² + ||V_j||²) ]
The first part, (q_ij − U_i·V_j^T)², is the squared difference between the existing QoS matrix and the prediction matrix, computed only over elements that have been evaluated by users. The latter part, λ(||U_i||² + ||V_j||²), is a regularization term added to handle overfitting due to input sparsity. By solving this optimization, we finally obtain the user factor matrix U_{n×d} and the service factor matrix V_{m×d}.
Alternating least squares (ALS) and stochastic gradient descent (SGD) are two common methods to solve this optimization problem. Since ALS requires computing matrix inverses and is therefore more costly, we use SGD to solve the problem. The iterative equations of SGD are as follows:
e_ij = q_ij − U_i·V_j^T
U_i ← U_i + γ·(e_ij·V_j − λ'·U_i)
V_j ← V_j + γ·(e_ij·U_i − λ'·V_j)
where γ is the learning rate and λ' is the regularization coefficient. The choice of the two parameters significantly affects the result: when γ is too large, the result diverges rather than converges. To achieve convergence, we empirically set γ to 0.001, although a longer training time is required; likewise, λ' is empirically set to 0.01.
In the first iteration, U and V are set randomly, but an appropriate initialization speeds up the later computation; therefore, we initialize U and V around the average of all observed QoS values. The iteration terminates when the objective function value is less than a certain threshold.
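A compact SGD sketch of this factorization (our illustration, not the patent's code), using the update rules and the empirical settings γ = 0.001 and λ' = 0.01 given above; for brevity the factors are initialized with small random values rather than around the mean of the observed data.

    import numpy as np

    def mf_sgd(Q, I, d=20, gamma=0.001, lam=0.01, epochs=100, tol=1e-3, rng=None):
        """Factorize the (masked) QoS matrix Q into U (n x d) and V (m x d) by SGD.

        Q : n x m matrix of observed (masked) QoS values; I : boolean mask of observed entries.
        """
        rng = np.random.default_rng() if rng is None else rng
        n, m = Q.shape
        U = rng.normal(0.0, 0.1, size=(n, d))
        V = rng.normal(0.0, 0.1, size=(m, d))

        users, items = np.where(I)
        for _ in range(epochs):
            loss = 0.0
            for u, i in zip(users, items):
                e = Q[u, i] - U[u] @ V[i]                  # prediction error e_ui
                U[u] += gamma * (e * V[i] - lam * U[u])    # U_u <- U_u + gamma*(e*V_i - lambda'*U_u)
                V[i] += gamma * (e * U[u] - lam * V[i])    # V_i <- V_i + gamma*(e*U_u - lambda'*V_i)
                loss += e ** 2 + lam * (U[u] @ U[u] + V[i] @ V[i])
            if loss < tol:                                 # stop when the objective falls below a threshold
                break
        return U, V

    # Unknown entries are then estimated as U @ V.T.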
In the fourth step, after the QoS value of a certain service has been obtained through collaborative filtering, other users' QoS values for the same service are retrieved and the user with the closest value is selected, which indicates that the two users have similar interests; a similar recommendation is then made on this basis, and that user's corresponding value is adopted as the prediction result for the former user.
4. Experiment of the invention
In this section, we performed three series of experiments on real datasets to evaluate our privacy preserving QoS prediction framework. The first series of experiments investigated the balance between privacy and accuracy when using the proposed method. Two additional series of experiments investigated some important data features, including the effect of size and density on the performance of our method.
TABLE 1 data set statistics
4.1 Experimental configuration
We first note that [Z. Zheng, Y. Zhang and M. R. Lyu. Investigating QoS of Real-World Web Services. TSC 2014, 7(1): 32-39; Z. Zheng, Y. Zhang and M. R. Lyu. Distributed QoS Evaluation for Real-World Web Services. ICWS 2010: 83-90] introduces a real Web service QoS data set comprising QoS values of 5,825 real Web services observed by 339 users. This data set is very useful for studying the accuracy of QoS prediction. From the data set, we focus on two representative QoS attributes: response time (RT) and throughput (TP). Table 1 describes the statistics of the data set; AVE and STD are the mean and standard deviation, respectively, and density refers to the ratio of observed data to all data. More details of the data set can be found in [Z. Zheng, Y. Zhang and M. R. Lyu. Investigating QoS of Real-World Web Services. TSC 2014, 7(1): 32-39; Z. Zheng, Y. Zhang and M. R. Lyu. Distributed QoS Evaluation for Real-World Web Services. ICWS 2010: 83-90].
We use cross-validation to train and evaluate QoS predictions. The data set is relatively complete, but in practice, due to limited time and resources, users usually call only a few services, and the data density is generally below 10%. To simulate this sparsity in our experiments, we randomly removed entries from the complete data set, leaving only a small density of historical QoS values as our training set. The deleted data is used as a test set for accuracy evaluation.
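A small sketch of this sparsification procedure (our illustration; names are hypothetical): entries are randomly removed from the complete matrix until the target training density remains, and the removed entries form the test set.

    import numpy as np

    def split_by_density(Q_full, density=0.10, rng=None):
        """Keep a `density` fraction of observed entries for training; the removed ones form the test set."""
        rng = np.random.default_rng() if rng is None else rng
        observed = np.argwhere(~np.isnan(Q_full))         # (user, service) indices of observed QoS values
        rng.shuffle(observed)                             # shuffle rows in place
        n_train = int(len(observed) * density)
        train_idx, test_idx = observed[:n_train], observed[n_train:]

        Q_train = np.full_like(Q_full, np.nan)
        Q_train[tuple(train_idx.T)] = Q_full[tuple(train_idx.T)]
        return Q_train, test_idx                          # test entries are later compared against Q_full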
Then, we run the QoS prediction algorithms on the training set and predict the test set. We implemented and evaluated four algorithms. UIPCC, proposed in [Z. Zheng, H. Ma, M. R. Lyu and I. King. WSRec: A Collaborative Filtering Based Web Service Recommender System. ICWS 2009], is a representative implementation of neighborhood-based collaborative filtering, and MF, introduced in [Z. Zheng, H. Ma, M. R. Lyu and I. King. QoS-Aware Web Service Recommendation by Collaborative Filtering. TSC 2011, 4(2): 140-152], is an implementation of model-based collaborative filtering. LUIPCC and LMF are the corresponding differential-privacy-integrated versions implemented via the Laplace mechanism.
To quantify the accuracy of QoS prediction, we adopt the root mean square error (RMSE), a metric widely used in related work (e.g., [A. Berlioz, A. Friedman, M. A. Kaafar, R. Boreli and S. Berkovsky. Applying Differential Privacy to Matrix Factorization. RecSys 2015; F. McSherry and I. Mironov. Differentially Private Recommender Systems: Building Privacy into the Netflix Prize Contenders. SIGKDD 2009: 627-636]):
RMSE = √( (1/|R|) · Σ_{(u,i)∈R} (q'_ui − q_ui)² )
where R consists of all values that need to be predicted and |R| is the number of elements in R; q'_ui is the predicted value for an entry of R, and q_ui is the corresponding value in the test set. In general, a smaller RMSE indicates a better prediction result.
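A one-function sketch of this metric (our code, not the patent's):

    import numpy as np

    def rmse(predicted, actual):
        """Root mean square error over the set R of predicted entries."""
        predicted = np.asarray(predicted, dtype=float)
        actual = np.asarray(actual, dtype=float)
        return float(np.sqrt(np.mean((predicted - actual) ** 2)))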
Note that the default parameter settings are shown in table 2. We empirically select the parameters of UIPCC and MF. By default, ε is set to 0.5, which protects sufficient privacy.
TABLE 2 Parameter settings
Method    Parameters
UIPCC     k = 20, λ = 0.1
MF        d = 20, γ = 0.001, λ' = 0.01
Laplace   ε = 0.5
4.2 privacy and accuracy
Fig. 3 compares our differential-privacy-based QoS prediction with the original methods under different privacy levels, for RT and TP respectively. By introducing differential privacy into QoS prediction, a user achieves privacy protection, but users who adopt our approach do need to consider the balance between privacy and accuracy. On the one hand, a user can obtain stronger privacy protection by adding more Laplace noise, which certainly reduces the utility of the data; at the other extreme, the user can obtain 100% accuracy by adding no Laplace noise at all. To study how accuracy varies, we ran the QoS prediction algorithms on the training set and predicted the test set, with the privacy parameter ε incremented in steps of 0.5 over the range 0.5 to 4. We can observe that the RMSE of both LUIPCC and LMF drops as ε increases: a larger ε implies a looser privacy constraint, the utility of the data is less restricted, and the user therefore obtains better accuracy. It is also worth noting that when ε becomes large (e.g., greater than 2.0) in Fig. 3, our privacy-preserving methods LUIPCC and LMF achieve almost the same or even higher accuracy than UIPCC; in particular, for the largest ε values the prediction accuracy of LMF is better than that of UIPCC. In addition, we found MF to be superior to UIPCC, which demonstrates the advantage of the model-based approach in capturing the underlying structure of the QoS data. Another fact worth noting is that although a recent work [J. Zhu, P. He, Z. Zheng and M. R. Lyu. A Privacy-Preserving QoS Prediction Framework for Web Service Recommendation. ICWS 2015] claims better performance than the original algorithms (UIPCC and MF), the randomness it adds to prevent information leakage is not large enough, and an adversary can accurately infer the user's private data by applying clustering [S. Zhang, J. Ford and F. Makedon. Deriving Private Information from Randomly Perturbed Ratings. SDM 2006].
In summary, our differential-privacy-based algorithms provide privacy-preserving QoS prediction with parameterized privacy. The results show that predictions made from the disguised user data are very close to those made from the users' private data under loose privacy constraints.
4.3 impact data size
To evaluate the impact of data size, we designed experiments that vary the number of services and the number of users, respectively. In Fig. 4, the number of users is fixed at 339 and the number of services varies from 1000 to 5000 in steps of 1000, where the services are randomly selected from the original data set. The other parameter settings of the experiment are shown in Table 2. We used the same experimental setup in Fig. 5, where all 5,825 services are included and the number of users varies.
It is clear that both the number of services and the number of users have a positive influence on the accuracy of the algorithm, which means that the more data is given, the better the prediction. In other words, with more data, we can provide better accuracy.
Another finding is that the trends of the original algorithms and our differential-privacy-based algorithms are the same, for example the trends of UIPCC and LUIPCC or of MF and LMF, although the accuracy differs considerably across data sizes. This means that the noise required for data masquerading is independent of the data size, so the user can achieve a high level of privacy protection by adding a very small amount of noise.
4.4 Effect of Density
In addition to the data size, the density, denoted θ, is also a major factor in algorithm performance. Fig. 6 shows the results of the accuracy comparison at different densities. Although the effect of density on the original algorithms is not significant, it does have a significant effect on our differential-privacy-based algorithms: data sets of higher density perform better. This result means that density is also a key factor in determining the performance of the differential privacy method. More importantly, as the density becomes larger, the gap between the traditional approach and our differential-privacy-based approach becomes smaller and smaller. More specifically, when the density is set to 5 in Fig. 6, the gap between LUIPCC and UIPCC is 5; however, as the density increases to 30, the gap between LUIPCC and UIPCC decreases to 1. Users are therefore advised to use a higher-density data set to bring the prediction closer to the original result.
5. Conclusion
The invention introduces differential privacy into a collaborative Web service QoS prediction framework for the first time. Differential privacy gives a strict quantitative definition of privacy leakage under very strict constraints. Based on the idea of differential privacy, users can obtain maximum privacy protection by ensuring the availability of data. Experimental results show that the system and the method provide safe and accurate QoS prediction of the cooperative Web service.
In light of the foregoing description of the preferred embodiment of the present invention, many modifications and variations will be apparent to those skilled in the art without departing from the spirit and scope of the invention. The technical scope of the present invention is not limited to the content of the specification, and must be determined according to the scope of the claims.

Claims (10)

1. A differential privacy prediction system for privacy protection and collaborative Web service quality is characterized by comprising a data collection module, a data disguising module, a collaborative filtering module and a prediction result module;
the data collection module is used for locally collecting a quality of service value, namely a QoS value, by each user;
the data disguising module is used for disguising the collected service quality value;
the collaborative filtering module is used for collaboratively filtering the service quality value disguised and collected by the data disguising module;
the predicted result module predicts a result according to the quality of service value filtered by the collaborative filtering module.
2. The system of claim 1, wherein the collaborative filtering module employs a neighborhood-based collaborative filtering module or a model-based collaborative filtering module.
3. The system of claim 1, wherein the data disguise module serves the purpose of disguising data by randomly disturbing the original data; randomness should ensure that sensitive information cannot be derived from perturbed data, including quality of service values for each individual user; when the number of users is very large, the aggregated information of the users can still be evaluated with high accuracy.
4. The system of claim 1, wherein the data masquerading module employs the following data masquerading method:
r_ui denotes the quality of service (QoS) value collected by user u for Web service i, and r_u denotes the entire vector of QoS values evaluated by user u; similarly, I_ui and I_u denote, respectively, a binary element and a binary vector indicating whether a QoS value is present, and R_u denotes the disguised data; ε-differential privacy for each user u is achieved by the following equation:
R_ui = r_ui + Laplace(Δf/ε)
where ε is the privacy parameter controlling the privacy level, and Δf is defined as the maximum difference between QoS values, i.e.:
Δf = max(r_ui − r_uj)
where r_ui is the QoS value collected by user u for Web service i and r_uj is the QoS value collected by user u for Web service j;
and the meaning of Laplace() is given as follows: if the probability density function of a random variable x is
f(x | μ, b) = (1/(2b)) · exp(−|x − μ|/b),
then the random variable x follows a Laplace(μ, b) distribution, where μ and b are the location parameter and the scale parameter, respectively; letting μ = 0, the distribution is a symmetric exponential distribution with standard deviation √2·b; to add noise that obeys the Laplace distribution, we let b = Δf/ε, and the generated noise is denoted Laplace(Δf/ε).
5. The system according to claim 4, characterized in that the privacy parameter ε is given by each user, and by using differential privacy, the random noise added to the observed QoS values is the minimum required to maintain considerable accuracy under the specified privacy level.
6. The system of claim 1, wherein the prediction method of the prediction result module is specifically: after the QoS value of a certain service has been obtained through collaborative filtering, other users' QoS values for the same service are retrieved and the user with the closest value is selected, which indicates that the two users have similar interests; a similar recommendation is then made on this basis, and that user's QoS value is adopted as the prediction result for the former user; based on the QoS value predicted by the prediction result module, the server runs applications including selection, composition and recommendation based on the QoS value.
7. A differential privacy prediction method for privacy protection and Web service quality is characterized by comprising the following steps:
firstly, collecting data;
secondly, data disguising;
thirdly, collaborative filtering, namely adopting a neighborhood-based collaborative filtering method or a model-based collaborative filtering method;
and fourthly, predicting a result.
8. The method of claim 7, wherein, in the second step, the data masquerading employs the following method:
r_ui denotes the quality of service (QoS) value collected by user u for Web service i, and r_u denotes the entire vector of QoS values evaluated by user u; similarly, I_ui and I_u denote, respectively, a binary element and a binary vector indicating whether a QoS value is present, and R_u denotes the disguised data; ε-differential privacy for each user u is achieved by the following equation:
R_ui = r_ui + Laplace(Δf/ε)
where ε is the privacy parameter controlling the privacy level, and Δf is defined as the maximum difference between QoS values, i.e.:
Δf = max(r_ui − r_uj)
where r_ui is the QoS value collected by user u for Web service i and r_uj is the QoS value collected by user u for Web service j;
and the meaning of Laplace() is given as follows: if the probability density function of a random variable x is
f(x | μ, b) = (1/(2b)) · exp(−|x − μ|/b),
then the random variable x follows a Laplace(μ, b) distribution, where μ and b are the location parameter and the scale parameter, respectively; letting μ = 0, the distribution is a symmetric exponential distribution with standard deviation √2·b; to add noise that obeys the Laplace distribution, we let b = Δf/ε, and the generated noise is denoted Laplace(Δf/ε).
9. The method of claim 7, wherein, in the third step, the neighborhood-based collaborative filtering method comprises the following steps:
(1) Normalization: z-score normalization is performed on the QoS values using the following equation:
q_ui = (r_ui − r̄_u) / ω_u
where r_ui represents the QoS value collected by user u for web service i, r̄_u is the mean of the QoS vector r_u, and ω_u is the standard deviation of r_u; after normalization, the QoS data has zero mean and unit variance;
(2) Data disguising: the normalized QoS value is disguised according to the following formula:
Q_ui = q_ui + Laplace(Δf/ε)
where ε is a privacy parameter set by user u, and Δf is defined according to the distribution of the QoS values, i.e. Δf = max(r_ui − r_uj); after disguising, the user sends the disguised value Q_ui to the server, while the original data q_ui, which carries the sensitive information, is retained locally;
(3) Neighborhood-based collaborative filtering: the similarity between two users u and v is calculated from the services they have both invoked, using the following equation:
Sim(u, v) = Σ_{i∈S} (r_{u,i} − r̄_u)(r_{v,i} − r̄_v) / ( √(Σ_{i∈S} (r_{u,i} − r̄_u)²) · √(Σ_{i∈S} (r_{v,i} − r̄_v)²) )
where S = S_u ∩ S_v is the set of services commonly invoked by user u and user v, r_{u,i} is the QoS value of service i observed by user u, and r̄_u is the average QoS value of all services observed by user u;
using Q_ui, the similarity value is approximately calculated as follows: after z-score normalization, r̄_u is approximately 0 and the variance of q_u is 1, so substituting the normalized values into the above formula gives
Sim(u, v) = (1/|S|) Σ_{i∈S} q_{u,i} q_{v,i}
it can be shown that the scalar product between the two vectors remains essentially unchanged despite data disguising; thus
Sim(u, v) ≈ (1/|S|) Σ_{i∈S} Q_{u,i} Q_{v,i}
Sim(u, v) ranges over [−1, 1], with larger values indicating that two users (or services) are more similar; based on the similarity values, the QoS value of service i observed by user u can be predicted directly by exploiting the similar users of user u through the following equation:
r̂_{u,i} = r̄_u + Σ_{v∈N(u)} Sim(u, v) · (r_{v,i} − r̄_v) / Σ_{v∈N(u)} |Sim(u, v)|
where N(u) denotes the set of users most similar to user u.
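A sketch of the three steps of claim 9, assuming dense per-user QoS vectors and an externally supplied Δf; the function names and the example ε, Δf, and service indices are illustrative. The approximate similarity is computed as the scalar product of the disguised, normalized vectors over the commonly invoked services divided by |S|, following the claim's argument.

```python
import numpy as np

def z_normalize(r_u: np.ndarray) -> np.ndarray:
    """Step (1): z-score normalization -> zero mean, unit variance."""
    return (r_u - r_u.mean()) / r_u.std()

def disguise(q_u: np.ndarray, epsilon: float, delta_f: float) -> np.ndarray:
    """Step (2): add Laplace(Δf/ε) noise to the normalized values."""
    return q_u + np.random.laplace(0.0, delta_f / epsilon, size=q_u.shape)

def approx_sim(Q_u: np.ndarray, Q_v: np.ndarray, common: np.ndarray) -> float:
    """Step (3): approximate Pearson similarity as the scalar product of the
    disguised, normalized vectors over the commonly invoked services, / |S|."""
    return float(Q_u[common] @ Q_v[common]) / len(common)

# Example with two users and the services both have invoked (indices 0..3)
r_u = np.array([0.3, 0.5, 0.9, 1.4]); r_v = np.array([0.4, 0.6, 1.1, 1.3])
Q_u = disguise(z_normalize(r_u), epsilon=1.0, delta_f=2.0)
Q_v = disguise(z_normalize(r_v), epsilon=1.0, delta_f=2.0)
sim = approx_sim(Q_u, Q_v, common=np.arange(4))
```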
10. The method according to claim 7, wherein, in the third step, the model-based collaborative filtering method is specifically: matrix factorization (MF) is used; assuming a sparse matrix Q_{n×m} represents the observed QoS values of n users and m services, where each element q_ij reflects the QoS value of user i using service j, MF takes Q_{n×m} as input and aims to factorize it into two matrices of lower dimension d: a user factor matrix U_{n×d} and a service factor matrix V_{m×d}; the empty elements of Q_{n×m} can then be approximated by the product of U and V, i.e. the unknown QoS value q'_ij is estimated by q'_ij = U_i · V_j^T;
MF is usually converted into an optimization problem whose locally optimal solution is obtained by iteration; the objective (loss) function of MF is defined as:
L = Σ_{observed (i,j)} (q_ij − U_i · V_j^T)² + λ (||U_i||² + ||V_j||²)
the first part, the squared difference between the existing QoS matrix and the prediction matrix, is computed only over the elements that have been evaluated by users; the latter part, λ(||U_i||² + ||V_j||²), is a regularization term added to handle the overfitting caused by input sparsity; by solving this optimization, the user factor matrix U_{n×d} and the service factor matrix V_{m×d} are finally obtained; the problem is solved by stochastic gradient descent (SGD), whose iterative equations are as follows:
e_ij = q_ij − U_i · V_j^T
U_i ← U_i + γ (e_ij · V_j − λ' · U_i)
V_j ← V_j + γ (e_ij · U_i − λ' · V_j)
where γ is the learning rate and λ' is the regularization coefficient; the choice of these two parameters significantly affects the result: when γ is too large the iteration diverges rather than converges; to obtain convergence, γ is empirically set to 0.001 and λ' to 0.01, although this requires a longer training time; the iteration terminates when the objective function value falls below a given threshold.
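A minimal sketch of the SGD-based matrix factorization in claim 10, using γ = 0.001 and λ' = 0.01 as stated; the observed-entry mask, random initialization, epoch cap, and threshold value are implementation assumptions rather than parts of the claim. Unknown QoS values would then be estimated as the dot product U_i · V_j.

```python
import numpy as np

def mf_sgd(Q: np.ndarray, observed: np.ndarray, d: int = 10,
           gamma: float = 0.001, lam: float = 0.01,
           max_epochs: int = 500, threshold: float = 1e-3):
    """Factorize the QoS matrix Q (n users x m services) into U (n x d) and
    V (m x d) by SGD over the observed entries, with L2 regularization."""
    n, m = Q.shape
    rng = np.random.default_rng(0)
    U = rng.normal(scale=0.1, size=(n, d))
    V = rng.normal(scale=0.1, size=(m, d))
    idx = list(zip(*np.nonzero(observed)))            # (i, j) pairs with known q_ij
    for _ in range(max_epochs):
        loss = 0.0
        for i, j in idx:
            e = Q[i, j] - U[i] @ V[j]                 # e_ij = q_ij - U_i . V_j
            u_old = U[i].copy()
            U[i] += gamma * (e * V[j] - lam * U[i])   # U_i update
            V[j] += gamma * (e * u_old - lam * V[j])  # V_j update
            loss += e * e + lam * (U[i] @ U[i] + V[j] @ V[j])
        if loss < threshold:                          # terminate below the threshold
            break
    return U, V
```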
CN201710875787.XA 2017-09-25 2017-09-25 Secret protection cooperates with the difference privacy forecasting system and method for Web service quality Pending CN107659444A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710875787.XA CN107659444A (en) 2017-09-25 2017-09-25 Secret protection cooperates with the difference privacy forecasting system and method for Web service quality
PCT/CN2017/113486 WO2019056573A1 (en) 2017-09-25 2017-11-29 Differential privacy-based system and method for collaborative web quality-of-service prediction for privacy protection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710875787.XA CN107659444A (en) 2017-09-25 2017-09-25 Secret protection cooperates with the difference privacy forecasting system and method for Web service quality

Publications (1)

Publication Number Publication Date
CN107659444A true CN107659444A (en) 2018-02-02

Family

ID=61129864

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710875787.XA Pending CN107659444A (en) 2017-09-25 2017-09-25 Secret protection cooperates with the difference privacy forecasting system and method for Web service quality

Country Status (2)

Country Link
CN (1) CN107659444A (en)
WO (1) WO2019056573A1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109257217A (en) * 2018-09-19 2019-01-22 河海大学 Web service QoS prediction technique based on secret protection under mobile peripheral surroundings
CN109543094A (en) * 2018-09-29 2019-03-29 东南大学 A kind of secret protection content recommendation method based on matrix decomposition
CN110022531A (en) * 2019-03-01 2019-07-16 华南理工大学 A kind of localization difference privacy municipal refuse data report and privacy calculation method
CN110443430A (en) * 2019-08-13 2019-11-12 汕头大学 A kind of service quality prediction technique based on block chain
WO2020062165A1 (en) * 2018-09-29 2020-04-02 区链通网络有限公司 Method, node and system for training reinforcement learning model, and storage medium
CN112288154A (en) * 2020-10-22 2021-01-29 汕头大学 Block chain service reliability prediction method based on improved neural collaborative filtering
CN112395638A (en) * 2019-08-16 2021-02-23 国际商业机器公司 Collaborative AI with respect to privacy-assured transactional data
CN116595254A (en) * 2023-05-18 2023-08-15 杭州绿城信息技术有限公司 Data privacy and service recommendation method in smart city

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112214788B (en) * 2020-08-28 2023-07-25 国网江西省电力有限公司信息通信分公司 Ubiquitous power Internet of things dynamic data publishing method based on differential privacy
CN112700067A (en) * 2021-01-14 2021-04-23 安徽师范大学 Method and system for predicting service quality under unreliable mobile edge environment
CN113204793A (en) * 2021-06-09 2021-08-03 辽宁工程技术大学 Recommendation method based on personalized differential privacy protection
CN114091100B (en) * 2021-11-23 2024-05-03 北京邮电大学 Track data collection method and system meeting local differential privacy
CN115455483B (en) * 2022-09-21 2023-12-26 广州大学 Big data frequency number estimation method based on local differential privacy
CN116132347B (en) * 2023-04-06 2023-06-27 湖南工商大学 Bi-LSTM-based service QoS prediction method in computing network convergence environment
CN116489636A (en) * 2023-04-21 2023-07-25 北京交通大学 Personalized differential privacy protection method under cloud-edge cooperative scene
CN116341014B (en) * 2023-05-29 2023-08-29 之江实验室 Multiparty federal private data resource interaction method, device and medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104394509B (en) * 2014-11-21 2018-10-30 西安交通大学 A kind of efficient difference disturbance location intimacy protection system and method
CN106209813B (en) * 2016-07-05 2019-05-07 中国科学院计算技术研究所 A kind of method for secret protection and device based on position anonymity

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109257217A (en) * 2018-09-19 2019-01-22 河海大学 Web service QoS prediction technique based on secret protection under mobile peripheral surroundings
CN109543094A (en) * 2018-09-29 2019-03-29 东南大学 A kind of secret protection content recommendation method based on matrix decomposition
WO2020062165A1 (en) * 2018-09-29 2020-04-02 区链通网络有限公司 Method, node and system for training reinforcement learning model, and storage medium
CN110022531B (en) * 2019-03-01 2021-01-19 华南理工大学 Localized differential privacy urban garbage data report and privacy calculation method
CN110022531A (en) * 2019-03-01 2019-07-16 华南理工大学 A kind of localization difference privacy municipal refuse data report and privacy calculation method
CN110443430B (en) * 2019-08-13 2023-08-22 汕头大学 Block chain-based service quality prediction method
CN110443430A (en) * 2019-08-13 2019-11-12 汕头大学 A kind of service quality prediction technique based on block chain
CN112395638A (en) * 2019-08-16 2021-02-23 国际商业机器公司 Collaborative AI with respect to privacy-assured transactional data
CN112395638B (en) * 2019-08-16 2024-04-26 国际商业机器公司 Collaborative AI with respect to transaction data with privacy guarantee
CN112288154A (en) * 2020-10-22 2021-01-29 汕头大学 Block chain service reliability prediction method based on improved neural collaborative filtering
CN112288154B (en) * 2020-10-22 2023-11-03 汕头大学 Block chain service reliability prediction method based on improved neural collaborative filtering
CN116595254A (en) * 2023-05-18 2023-08-15 杭州绿城信息技术有限公司 Data privacy and service recommendation method in smart city
CN116595254B (en) * 2023-05-18 2023-12-12 杭州绿城信息技术有限公司 Data privacy and service recommendation method in smart city

Also Published As

Publication number Publication date
WO2019056573A1 (en) 2019-03-28

Similar Documents

Publication Publication Date Title
CN107659444A (en) Secret protection cooperates with the difference privacy forecasting system and method for Web service quality
CN107679415A (en) Secret protection cooperates with the collaborative filtering method based on model of Web service prediction of quality
CN107609421A (en) Secret protection cooperates with the collaborative filtering method based on neighborhood of Web service prediction of quality
Truex et al. LDP-Fed: Federated learning with local differential privacy
Meng et al. Personalized privacy-preserving social recommendation
Sun et al. LDP-FL: Practical private aggregation in federated learning with local differential privacy
Arachchige et al. A trustworthy privacy preserving framework for machine learning in industrial IoT systems
Ruzafa-Alcázar et al. Intrusion detection based on privacy-preserving federated learning for the industrial IoT
Zhu et al. A privacy-preserving QoS prediction framework for web service recommendation
Dhinakaran et al. Protection of data privacy from vulnerability using two-fish technique with Apriori algorithm in data mining
Liu et al. Differential private collaborative Web services QoS prediction
JP2016531513A (en) Method and apparatus for utility-aware privacy protection mapping using additive noise
Singh et al. Differentially-private federated neural architecture search
KR20150115772A (en) Privacy against interference attack against mismatched prior
Kuang et al. A privacy protection model of data publication based on game theory
Chen et al. Privacy and fairness in Federated learning: on the perspective of Tradeoff
Mireshghallah et al. A principled approach to learning stochastic representations for privacy in deep neural inference
US20230325511A1 (en) Cyber threat scoring, cyber security training and proactive defense by machine and human agents incentivized with digital assets
Firdaus et al. A secure federated learning framework using blockchain and differential privacy
Kang et al. Weighted distributed differential privacy ERM: Convex and non-convex
Galli et al. Group privacy for personalized federated learning
Ni et al. Federated learning model with adaptive differential privacy protection in medical IoT
Wang et al. Protecting data privacy in federated learning combining differential privacy and weak encryption
Liu et al. Privacy-preserving collaborative web services QoS prediction via differential privacy
Jiang et al. Differential privacy in privacy-preserving big data and learning: Challenge and opportunity

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20180202