CN109241764B - User requirement track privacy protection method - Google Patents

User requirement track privacy protection method

Info

Publication number
CN109241764B
Authority
CN
China
Prior art date
Legal status
Active
Application number
CN201810751655.0A
Other languages
Chinese (zh)
Other versions
CN109241764A (en
Inventor
曹斌
闫春柳
吕劭鹏
徐烨
张钦宇
Current Assignee
Shenzhen Graduate School Harbin Institute of Technology
Original Assignee
Shenzhen Graduate School Harbin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Shenzhen Graduate School Harbin Institute of Technology
Priority to CN201810751655.0A
Publication of CN109241764A
Application granted
Publication of CN109241764B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 - Protecting data
    • G06F21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218 - Protecting access to data via a platform to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245 - Protecting personal data, e.g. for financial or medical purposes
    • G06F21/6281 - Protecting access to data at program execution time, where the protection is within the operating system

Abstract

The invention provides a user requirement track privacy protection method. It is assumed that a user has M requirements: S = {s_1, s_2, ..., s_M}; the user's demands are represented as a discrete-time trajectory over times T = {1, 2, ...}, and an event <s, t> represents a demand s at time t. The replacement set O is consistent with the user's real requirement set S; the target event is S_tar; O_pre denotes the subset of alternatives output for query events before the current demand query; O_cur denotes the alternative for the user's demand at the current time and is known to both the attacker and the user. The invention has the beneficial effects that: the track privacy of the user's query content in the social network can be effectively protected, and Laplace noise (differential privacy) is added to the confidence between demand things, so that the user's track privacy is further protected; by adopting the game-based privacy protection method, the user's demand privacy is protected while the user's service quality is well guaranteed.

Description

User requirement track privacy protection method
Technical Field
The invention relates to a privacy protection method, in particular to a user requirement track privacy protection method.
Background
At present, existing technologies basically protect the location track privacy of users, and location privacy protection techniques fall broadly into three categories: K-anonymity generalization, noise (dummy location) techniques, and dynamic pseudonym methods. In 2003, Gruteser et al. first applied a K-anonymity method to protect users' true locations: at a given time and place, each user must be indistinguishable from K-1 other users, so an attacker cannot single out the target user and infer the user's location. Later, some researchers applied K-anonymity to track privacy protection; in 2009, Ghinita et al. proposed constructing an anonymous area based on the user's moving speed, but the method is easy to attack and cannot guarantee the user's location privacy well. The core of the noise-based track protection method is to send some false locations to the service provider along with the user's real location; in 2010, Suzuki et al. proposed constraints such as the user's moving speed and the road network when generating false locations, so that the false locations are closer to the user's real locations in certain characteristics and an attacker cannot easily distinguish the real ones. The pseudonym method replaces the user's identity with a pseudonym when the user sends a request to the service provider; some scholars found that if a user always uses one pseudonym an attacker can still uncover the user's privacy, so the pseudonym must be replaced dynamically. In 2007, Freudiger et al. proposed the Mix-zones dynamic pseudonym technique, which can effectively protect users' location track privacy.
However, at present there is no dedicated method for protecting the privacy of people's requirement tracks, so people's requirement privacy can be seriously disclosed, which severely affects their lives. How to protect the privacy of people's query-content tracks therefore needs to be solved.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a user requirement track privacy protection method.
The invention provides a method for protecting user requirement track privacy,
suppose a user has M requirements: S = {s_1, s_2, ..., s_M}; the user's demands are represented as a discrete-time trajectory over times T = {1, 2, ...}; an event <s, t> represents a demand s at time t;
the replacement set O is consistent with the user's real requirement set S; the target event is S_tar; O_pre denotes the subset of alternatives output for query events before the current demand query; O_cur denotes the alternative for the user's demand at the current time and is known to both the attacker and the user; p(O_cur | S_tar, O_pre) denotes the probability that, given the prior knowledge O_pre and the target requirement S_tar, the protection mechanism generates O_cur; the current demand replacement only targets time t;
privacy is quantified as the attacker's error in inferring the user's S_tar; ω(S_tar | O_pre) is the attacker's prior knowledge, i.e. the probability distribution over the user's real requirements deduced from observations of past events, which the attacker holds before observing the current event;
Ŝ_tar denotes the attacker's estimate of S_tar; like S_tar, its values are elements of the user requirement set S;
q(Ŝ_tar | O_cur, O_pre) denotes the probability with which the attacker estimates the user's real target, given the prior knowledge O_pre and the current observation O_cur;
d_p(Ŝ_tar, S_tar) denotes the privacy gain, which is greater than or equal to 0; if Ŝ_tar = S_tar, the confidence between the two is 1 and the privacy gain is 0; the value of d_p is decided by the user: if the user is sensitive to a certain requirement, the user selects a replacement with lower confidence to substitute for the real requirement; the privacy of the user, i.e. the attacker's inference error, is defined as:
Privacy(p, q) = Σ_{O_pre, S_tar, O_cur, Ŝ_tar} ω(S_tar | O_pre) · p(O_cur | S_tar, O_pre) · q(Ŝ_tar | O_cur, O_pre) · d_p(Ŝ_tar, S_tar)
as a further refinement of the present invention, the mean confidence distance reflecting QoS loss is defined as:
d_q(S_tar, (O_cur, O_pre)) = 1 - c_L(S_tar, (O_cur, O_pre))
where c_L(S_tar, (O_cur, O_pre)) is the confidence after adding Laplace noise; when S_tar and (O_cur, O_pre) are the same, the confidence distance is 0 and there is no quality loss; when the two differ, d_q is positive, and the larger its value, the greater the quality loss and the greater the privacy of the requirement;
defining QoS loss for a user:
Q_loss(p) = Σ_{O_pre, S_tar, O_cur} ω(S_tar | O_pre) · p(O_cur | S_tar, O_pre) · d_q(S_tar, (O_cur, O_pre))
the maximum QoS loss that a user can accept needs to be within a certain range:
Q_loss(p) ≤ Q_loss^max
as a further improvement of the invention, the QoS of the user is protected while the privacy of the user requirement is protected; it is considered that an attacker knows the strategy p of the protection mechanism and formulates the attack strategy q accordingly; the equilibrium point of the game is to find the best p* and q*: p* is the protection strategy that maximizes the privacy of the user, and q* is the attack strategy that minimizes the privacy of the user and corresponds to p*;
because the contradiction between the user and the attacker is the user's privacy, the above problems form a zero-sum game in which the user is the leader and the attacker is the follower;
assuming that the maximum quality-of-service loss the user can accept is Q_loss^max;
Let P and Q be represented as the policy spaces of the user and attacker, respectively:
P = { p(O_cur | S_tar, O_pre) : p ≥ 0, Σ_{O_cur} p(O_cur | S_tar, O_pre) = 1, Q_loss(p) ≤ Q_loss^max }
Q = { q(Ŝ_tar | O_cur, O_pre) : q ≥ 0, Σ_{Ŝ_tar} q(Ŝ_tar | O_cur, O_pre) = 1 }
the attacker knows the user's prior knowledge ω(S_tar | O_pre) and the protection probability distribution p(O_cur | S_tar, O_pre); given ω(S_tar | O_pre), the distance functions d_p and d_q, and the maximum acceptable quality loss Q_loss^max, the optimal protection mechanism of the user and the optimal attack mechanism of the attacker are calculated by establishing two linear programs;
let Π(p, q) denote the user's demand privacy, i.e. the privacy objective Privacy(p, q) defined above;
Optimal policy for the user:
p* = argmax_{p ∈ P} min_{q ∈ Q} Π(p, q)
best strategy for the attacker:
q* = argmin_{q ∈ Q} Π(p*, q)
The Nash equilibrium exists and is unique.
As a further improvement of the present invention, the mobility of the user is modeled as a first-order Markov chain.
As a further improvement of the present invention, if the user wants to protect the demands at times t-1 and t, then S_tar = (s_{t-1}, s_t).
The invention has the beneficial effects that: by the scheme, the track privacy of the user's query content in the social network can be effectively protected, and Laplace noise (differential privacy) is added to the confidence between demand things so as to further protect the user's track privacy; by adopting the game-based privacy protection method, the user's demand privacy is protected while the user's service quality is well guaranteed.
Drawings
Fig. 1 is a schematic diagram of a relationship between privacy and service quality of a user requirement track privacy protection method according to the present invention.
Fig. 2 is a schematic diagram of the relationship between privacy and service quality (different transition matrices) of a user requirement trajectory privacy protection method according to the present invention.
FIG. 3 is a schematic diagram of the relationship between privacy and service quality of the user requirement track privacy protection method according to the present invention (present-future scenario).
Detailed Description
The invention is further described with reference to the following description and embodiments in conjunction with the accompanying drawings.
As shown in fig. 1, a method for protecting track privacy of a user requirement specifically includes the following steps:
Suppose a user has M requirements: S = {s_1, s_2, ..., s_M}; the user's demands are represented as a discrete-time trajectory over times T = {1, 2, ...}. An event <s, t> represents a demand s at time t. The mobility of the user is probabilistic in nature; in the model of the invention, it is modeled as a first-order Markov chain (other mobility models are also possible).
The protection mechanism is the same as the single-demand protection mechanism and still uses alternatives. The replacement set O is consistent with the user's real demand set S. The target event is S_tar; for example, if the user wants to protect the demands at times t-1 and t, then S_tar = (s_{t-1}, s_t). O_pre denotes the subset of alternatives output for query events before the current demand query, e.g. at times t-2 and t-1, O_pre = (O_{t-2}, O_{t-1}). O_cur denotes the alternative(s) for the user's demand at the current time (possibly one or several alternatives), e.g. O_cur = (O_t), and is known to both the attacker and the user. p(O_cur | S_tar, O_pre) denotes the probability that, given the prior knowledge O_pre and the target requirement S_tar, the protection mechanism generates O_cur. The times of S_tar and O_cur are not necessarily the same: the goal may be to protect the demands of several previous instants, while the current demand replacement only targets time t.
Privacy is quantified as the attacker's error in inferring the user's S_tar. ω(S_tar | O_pre) is the attacker's prior knowledge, i.e. the probability distribution over the user's real needs deduced from observations of past events; the attacker holds this prior knowledge before observing the current event.
Ŝ_tar denotes the attacker's estimate of S_tar; like S_tar, its values are elements of the user's requirement set S.
q(Ŝ_tar | O_cur, O_pre) denotes the probability with which the attacker estimates the user's real target, given the prior knowledge O_pre and the current observation O_cur.
d_p(Ŝ_tar, S_tar) denotes the privacy gain, which is greater than or equal to 0. If Ŝ_tar = S_tar, the confidence between the two is 1 and the privacy gain is 0. The value of d_p is decided by the user: if the user is sensitive to a certain demand, the user selects a replacement with lower confidence to substitute for the real demand. The privacy of the user, i.e. the attacker's inference error, is defined as:
Privacy(p, q) = Σ_{O_pre, S_tar, O_cur, Ŝ_tar} ω(S_tar | O_pre) · p(O_cur | S_tar, O_pre) · q(Ŝ_tar | O_cur, O_pre) · d_p(Ŝ_tar, S_tar)
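As a purely illustrative sketch (not part of the patent), the expected inference error above can be computed for a single step as follows; the array names, shapes, and the suppression of the conditioning on O_pre are assumptions.

```python
def expected_privacy(omega, p, q, d_p):
    """User privacy as the attacker's expected inference error.

    omega[s]     : prior probability of true demand s (conditioning on O_pre suppressed)
    p[o][s]      : probability the protection mechanism outputs o given true demand s
    q[s_hat][o]  : probability the attacker guesses s_hat after observing o
    d_p[s_hat][s]: privacy gain when the attacker guesses s_hat and the truth is s
    """
    n = len(omega)
    priv = 0.0
    for s in range(n):
        for o in range(n):
            for s_hat in range(n):
                priv += omega[s] * p[o][s] * q[s_hat][o] * d_p[s_hat][s]
    return priv
```

For instance, with d_p[s_hat][s] = 0 when s_hat equals s and 1 otherwise, the returned value is simply the probability that the attacker guesses the user's demand wrongly.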
protection of multiple requirements, as with a single requirement, also presents a quality of service loss problem. The mean confidence distance is defined to reflect the QoS loss as:
d_q(S_tar, (O_cur, O_pre)) = 1 - c_L(S_tar, (O_cur, O_pre))
where c_L(S_tar, (O_cur, O_pre)) is the confidence after adding Laplace noise; when S_tar and (O_cur, O_pre) are the same, i.e. no noise is added, the mean confidence distance is 0 and there is no quality loss. When the two differ, d_q is positive: the larger its value, the greater the quality loss and the greater the privacy of the requirement. Defining the QoS loss of the user:
Q_loss(p) = Σ_{O_pre, S_tar, O_cur} ω(S_tar | O_pre) · p(O_cur | S_tar, O_pre) · d_q(S_tar, (O_cur, O_pre))
similar to single demand protection, the maximum QoS loss that a user can accept needs to be within a certain range:
Q_loss(p) ≤ Q_loss^max
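Under the same illustrative conventions as the sketch above (array names are assumptions, not the patent's code), the expected QoS loss and the acceptance test against the user's budget might look like:

```python
def expected_qos_loss(omega, p, d_q):
    """Expected quality-of-service loss of protection mechanism p.

    omega[s]: prior of true demand s; p[o][s]: Pr(output o | true s);
    d_q[o][s]: confidence distance between replacement o and true demand s.
    """
    n = len(omega)
    return sum(omega[s] * p[o][s] * d_q[o][s] for o in range(n) for s in range(n))

def qos_acceptable(omega, p, d_q, q_loss_max):
    """True if the mechanism stays within the maximum acceptable QoS loss."""
    return expected_qos_loss(omega, p, d_q) <= q_loss_max
```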
The invention aims to design a protection mechanism that protects the user's demand privacy while also protecting the user's QoS. The design process needs to consider that an attacker knows the strategy p of the protection mechanism of the invention and formulates the attack strategy q according to p. The equilibrium point of the game is to find the best p* and q*: p* is the protection strategy that maximizes the user's privacy, and q* is the attack strategy that minimizes the user's privacy and corresponds to p*.
Because the contradictions between the user and the attacker are the privacy of the user, the above problems constitute a zero-sum game, the user is the leader, and the attacker is the follower. Assuming that the user receives a maximum loss of quality of service
Figure BDA0001725696720000073
Let P and Q be represented as the policy spaces of the user and attacker, respectively:
P = { p(O_cur | S_tar, O_pre) : p ≥ 0, Σ_{O_cur} p(O_cur | S_tar, O_pre) = 1, Q_loss(p) ≤ Q_loss^max }
Q = { q(Ŝ_tar | O_cur, O_pre) : q ≥ 0, Σ_{Ŝ_tar} q(Ŝ_tar | O_cur, O_pre) = 1 }
The attacker knows the user's prior knowledge ω(S_tar | O_pre) and the protection probability distribution p(O_cur | S_tar, O_pre). Given ω(S_tar | O_pre), the distance functions d_p and d_q, and the maximum acceptable quality loss Q_loss^max, the invention calculates the optimal protection mechanism of the user and the optimal attack mechanism of the attacker by establishing two linear programs. Let Π(p, q) denote the user's demand privacy, i.e. the privacy objective Privacy(p, q) defined above.
Optimal policy for the user:
p* = argmax_{p ∈ P} min_{q ∈ Q} Π(p, q)
best strategy for the attacker:
q* = argmin_{q ∈ Q} Π(p*, q)
The Nash equilibrium exists and is unique, and single-demand privacy protection of a user is a special case of multi-demand track privacy protection.
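A minimal sketch of one way the user's (leader's) linear program could be set up with scipy is given below. It assumes a single-step setting with the conditioning on O_pre suppressed, a prior omega, gain/loss matrices d_p and d_q, and a quality budget q_loss_max; it follows the standard leader-follower zero-sum formulation for optimal obfuscation and is not the patent's reference implementation.

```python
import numpy as np
from scipy.optimize import linprog

def optimal_protection(omega, d_p, d_q, q_loss_max):
    """Solve the user's linear program for an optimal protection mechanism.

    omega[s]      : prior over the N true demands
    d_p[s_hat, s] : privacy gain when the attacker guesses s_hat and the truth is s
    d_q[o, s]     : confidence (QoS) distance between replacement o and true demand s
    q_loss_max    : maximum acceptable expected QoS loss

    Returns (p, privacy) where p[o, s] = Pr(output o | true demand s) and privacy
    is the guaranteed expected inference error against a best-response attacker.
    """
    n = len(omega)
    n_p = n * n                       # variables p[o, s], flattened as index o * n + s
    c = np.zeros(n_p + n)
    c[n_p:] = -1.0                    # maximize sum_o x_o  <=>  minimize -sum_o x_o
    A_ub, b_ub = [], []
    # For every observation o and every attacker guess s_hat:
    #   x_o <= sum_s omega[s] * p[o, s] * d_p[s_hat, s]
    for o in range(n):
        for s_hat in range(n):
            row = np.zeros(n_p + n)
            for s in range(n):
                row[o * n + s] = -omega[s] * d_p[s_hat, s]
            row[n_p + o] = 1.0
            A_ub.append(row)
            b_ub.append(0.0)
    # QoS budget: sum_{o, s} omega[s] * p[o, s] * d_q[o, s] <= q_loss_max
    row = np.zeros(n_p + n)
    for o in range(n):
        for s in range(n):
            row[o * n + s] = omega[s] * d_q[o, s]
    A_ub.append(row)
    b_ub.append(q_loss_max)
    # Each true demand s must map to a probability distribution over outputs o.
    A_eq, b_eq = [], []
    for s in range(n):
        row = np.zeros(n_p + n)
        for o in range(n):
            row[o * n + s] = 1.0
        A_eq.append(row)
        b_eq.append(1.0)
    bounds = [(0.0, 1.0)] * n_p + [(0.0, None)] * n
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=bounds)
    return res.x[:n_p].reshape(n, n), -res.fun
```

In this formulation the attacker's best response is captured by the per-observation variables x_o, so a single LP yields the leader's maximin strategy; tightening q_loss_max pulls p back toward the identity mapping and reproduces the privacy-versus-quality trade-off discussed above.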
Analysis of experimental scenarios and results:
scenario (1): protecting past-present needs
Based on the previous analysis, the user can protect his trajectory privacy according to his own needs. First, the privacy and QoS issues of protecting the user's past and present needs are analyzed. The user's demand at time t is denoted S_t, and the demands at k past times are S_τ, τ ∈ {t-k, ..., t-1}. The present invention protects the user's past and present needs, namely S_tar = (S_τ, S_t); at this time, the privacy gain is:
d_p((Ŝ_τ, Ŝ_t), (S_τ, S_t))
The prior knowledge is the set of alternatives O_τ (the substitutes for the user's past demands), and O_t is the substitute for the user's current demand. The QoS loss is:
d_q((S_τ, S_t), (O_t, O_τ))
the privacy of the user's past and present needs is:
Privacy = Σ_{O_τ, S_τ, S_t, O_t, Ŝ_τ, Ŝ_t} ω(S_τ, S_t | O_τ) · p(O_t | S_τ, S_t, O_τ) · q(Ŝ_τ, Ŝ_t | O_t, O_τ) · d_p((Ŝ_τ, Ŝ_t), (S_τ, S_t))
the QoS loss for a user is:
Q_loss = Σ_{O_τ, S_τ, S_t, O_t} ω(S_τ, S_t | O_τ) · p(O_t | S_τ, S_t, O_τ) · d_q((S_τ, S_t), (O_t, O_τ))
First, a first-order Markov chain is established for the user's requirements, namely the requirements at times t-1 and t; the probability transition matrix is P, and P is aperiodic and irreducible, so the steady-state probabilities can be solved. The prior probability is:
ω(S_{t-1}, S_t | O_{t-1}) = Pr(S_t | S_{t-1}, O_{t-1}) · Pr(S_{t-1} | O_{t-1})    (14)
where the demand at the present moment is independent of the observation at the past moment, so:
Pr(S_t | S_{t-1}, O_{t-1}) = Pr(S_t | S_{t-1})    (15)
The probability in the above equation can be obtained from the probability transition matrix. According to the Bayesian formula:
Pr(S_{t-1} | O_{t-1}) = Pr(O_{t-1} | S_{t-1}) · Pr(S_{t-1}) / Pr(O_{t-1})    (16)
where Pr(S_{t-1}) can be obtained by calculating the steady-state probabilities of the transition matrix. Therefore, once Pr(O_{t-1} | S_{t-1}) is obtained, the prior probability can be solved. When t = 2, Pr(O_{t-1} | S_{t-1}) is the probability used when protecting a single moment, which is known from previous work, so the prior probability can then be obtained.
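For illustration only (function and variable names are assumptions, not the patent's code), the steady-state probabilities Pr(S_{t-1}) of an aperiodic, irreducible transition matrix P can be obtained, for example, by solving the stationarity equations together with the normalization constraint:

```python
import numpy as np

def steady_state(P):
    """Stationary distribution pi of a row-stochastic transition matrix P (pi @ P = pi)."""
    n = P.shape[0]
    # Stack the stationarity equations (P.T - I) @ pi = 0 with sum(pi) = 1
    # and solve the overdetermined system in the least-squares sense.
    A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
    b = np.concatenate([np.zeros(n), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi
```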
The experimental conditions are as follows:
The user wants to protect the requirements at times t-1 and t; the user has 10 requirements at each of times t-1 and t (so a Markov transition matrix P can be obtained and the steady-state probability of each requirement can be computed), and the two moments have 5 requirements in common. There are 20 fuzzy (replacement) requirements; in the experiment, each real requirement of the user is blurred into 6 other requirements (i.e. different real requirements of the user are blurred into different requirements). Differential privacy (Laplace noise) is added with the noise coefficient set to 0.4, and the confidence lies between 0 and 1. The user's quality-of-service loss lies between 0 and 1, and the user privacy obtained by simulation is normalized to between 0 and 1.
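As an illustrative sketch of the noise step in these experiments (treating the stated coefficient 0.4 as the Laplace scale, which is an assumption since the text does not say whether it is a scale or a privacy budget), confidence values can be perturbed and clipped back to [0, 1] as follows:

```python
import numpy as np

def laplace_noised_confidence(confidence, scale=0.4, rng=None):
    """Add Laplace noise to confidence values and clip the result to [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = np.asarray(confidence, dtype=float) + rng.laplace(0.0, scale, size=np.shape(confidence))
    return np.clip(noisy, 0.0, 1.0)
```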
Simulation analysis:
From the analysis of fig. 1, in the beginning stage the user's privacy increases as the QoS loss increases; when the QoS loss reaches a certain value, the user's privacy no longer changes and the equilibrium point of the game is reached. In the experiment the transition matrix is randomly generated, and the three lines in fig. 2 correspond to different transition matrices of the same user. As fig. 2 shows, the user's privacy differs for different transition matrices; since the transition matrix of the user's requirements carries prior knowledge, it has a large influence on the user's privacy.
Scenario (2): protecting present-future needs
The user's demand at time t is S_t and the demand at time t+1 is S_{t+1}. The goal of the user is to protect present and future needs, i.e. S_tar = (S_t, S_{t+1}), and the corresponding fuzzy requirement is O_cur = (O_t, O_{t+1}). The user's past needs are not involved in protecting present and future needs, so prior knowledge of past needs is not considered.
The privacy of a user is defined as:
Privacy = Σ_{S_t, S_{t+1}, O_t, O_{t+1}, Ŝ_t, Ŝ_{t+1}} ω(S_t, S_{t+1}) · p(O_t, O_{t+1} | S_t, S_{t+1}) · q(Ŝ_t, Ŝ_{t+1} | O_t, O_{t+1}) · d_p((Ŝ_t, Ŝ_{t+1}), (S_t, S_{t+1}))
the QoS loss is defined as:
Q_loss = Σ_{S_t, S_{t+1}, O_t, O_{t+1}} ω(S_t, S_{t+1}) · p(O_t, O_{t+1} | S_t, S_{t+1}) · d_q((S_t, S_{t+1}), (O_t, O_{t+1}))
The game-based method is still adopted to optimize the users' privacy and QoS.
The experimental conditions are as follows:
These are basically the same as the experimental conditions of the past-present protection model, but when protecting present-future demand privacy a transition matrix does not need to be established for past demands, and prior knowledge of past demands is not required; prior knowledge of the present and future demands, i.e. ω(S_tar), is still required, the same as when protecting a single demand of the user. The specific experimental conditions are as follows: the user wants to protect the requirements at times t and t+1; the user has 10 requirements at each of times t and t+1, and the two moments have 5 requirements in common. There are 20 fuzzy requirements, and in the experiment each real requirement of the user is blurred into 6 other requirements; differential privacy (Laplace noise) is also added, the noise coefficient is set to 0.4, and the confidence lies between 0 and 1. The user's quality-of-service loss lies between 0 and 1, and the user privacy obtained by simulation is normalized to between 0 and 1.
Analysis of results:
Fig. 3 plots the privacy of a single user against the QoS loss. As fig. 3 shows, when the QoS loss is small the user's privacy increases as the QoS loss increases; when the loss reaches a certain value, the user's privacy no longer changes and the equilibrium point of the game is reached, which is similar to the previous experimental result.
The invention provides a user requirement track privacy protection method, i.e. a method for protecting the track privacy of people's query content in a social network. It can protect people's requirement track privacy and effectively protect the track privacy of users' query content in the social network, and it adds Laplace noise (differential privacy) to the confidence between demand things so as to further protect the user's track privacy. By adopting the game-based privacy protection method, the user's demand privacy is protected while the user's quality of service is well guaranteed.
Compared with the prior art, the user requirement track privacy protection method provided by the invention has the following differences:
(1) Different application scenarios: the invention targets the requirement track privacy of users in the social network.
(2) The method protects the privacy track by applying association rules and differential privacy techniques.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the protection scope of the invention.

Claims (5)

1. A user requirement track privacy protection method is characterized by comprising the following steps:
suppose a user has M requirements: S = {s_1, s_2, ..., s_M}; the user's demands are represented as a discrete-time trajectory over times T = {1, 2, ...}; an event <s, t> represents a demand s at time t;
the replacement set O is consistent with the user's real requirement set S; the target event is S_tar; O_pre denotes the subset of alternatives output for query events before the current demand query; O_cur denotes the alternative for the user's demand at the current time and is known to both the attacker and the user; p(O_cur | S_tar, O_pre) denotes the probability that, given the prior knowledge O_pre and the target requirement S_tar, the protection mechanism generates O_cur; the current demand replacement only targets time t;
privacy is quantified as the attacker's error in inferring the user's S_tar; ω(S_tar | O_pre) is the attacker's prior knowledge, i.e. the probability distribution over the user's real requirements deduced from observations of past events, which the attacker holds before observing the current event;
Ŝ_tar denotes the attacker's estimate of S_tar; like S_tar, its values are elements of the user requirement set S;
q(Ŝ_tar | O_cur, O_pre) denotes the probability with which the attacker estimates the user's real target, given the prior knowledge O_pre and the current observation O_cur;
d_p(Ŝ_tar, S_tar) denotes the privacy gain, which is greater than or equal to 0; if Ŝ_tar = S_tar, the confidence between the two is 1 and the privacy gain is 0; the value of d_p is decided by the user: if the user is sensitive to a certain requirement, the user selects a replacement with lower confidence to substitute for the real requirement; the privacy of the user, i.e. the attacker's inference error, is defined as:
Privacy(p, q) = Σ_{O_pre, S_tar, O_cur, Ŝ_tar} ω(S_tar | O_pre) · p(O_cur | S_tar, O_pre) · q(Ŝ_tar | O_cur, O_pre) · d_p(Ŝ_tar, S_tar)
2. the user-required trajectory privacy protection method of claim 1, wherein:
the mean confidence distance is defined to reflect the QoS loss as:
d_q(S_tar, (O_cur, O_pre)) = 1 - c_L(S_tar, (O_cur, O_pre))
where c_L(S_tar, (O_cur, O_pre)) is the confidence after adding Laplace noise; when S_tar and (O_cur, O_pre) are the same, the confidence distance is 0 and there is no quality loss; when the two differ, d_q is positive, and the larger its value, the greater the quality loss and the greater the privacy of the requirement;
defining QoS loss for a user:
Q_loss(p) = Σ_{O_pre, S_tar, O_cur} ω(S_tar | O_pre) · p(O_cur | S_tar, O_pre) · d_q(S_tar, (O_cur, O_pre))
the maximum QoS loss that a user can accept needs to be within a certain range:
Q_loss(p) ≤ Q_loss^max
3. the user-required trajectory privacy protection method of claim 2, wherein:
the QoS of the user is protected while the privacy of the user is protected; it is considered that an attacker knows the strategy p of the protection mechanism and formulates the attack strategy q accordingly; the equilibrium point of the game is to find the best p* and q*: p* is the protection strategy that maximizes the privacy of the user, and q* is the attack strategy that minimizes the privacy of the user and corresponds to p*;
because the contradiction between the user and the attacker is the user's privacy, the above problems form a zero-sum game in which the user is the leader and the attacker is the follower;
assuming that the maximum quality-of-service loss the user can accept is Q_loss^max;
Let P and Q be represented as the policy spaces of the user and attacker, respectively:
P = { p(O_cur | S_tar, O_pre) : p ≥ 0, Σ_{O_cur} p(O_cur | S_tar, O_pre) = 1, Q_loss(p) ≤ Q_loss^max }
Q = { q(Ŝ_tar | O_cur, O_pre) : q ≥ 0, Σ_{Ŝ_tar} q(Ŝ_tar | O_cur, O_pre) = 1 }
the attacker knows the user's prior knowledge ω(S_tar | O_pre) and the protection probability distribution p(O_cur | S_tar, O_pre); given ω(S_tar | O_pre), the distance functions d_p and d_q, and the maximum acceptable quality loss Q_loss^max, the optimal protection mechanism of the user and the optimal attack mechanism of the attacker are calculated by establishing two linear programs;
let Π(p, q) denote the user's demand privacy, i.e. the privacy objective defined in claim 1;
Optimal policy for the user:
p* = argmax_{p ∈ P} min_{q ∈ Q} Π(p, q)
best strategy for the attacker:
q* = argmin_{q ∈ Q} Π(p*, q)
the Nash equilibrium exists and is unique.
4. The user-required trajectory privacy protection method of claim 1, wherein: the mobility of the user is modeled as a first-order Markov chain.
5. The user-required trajectory privacy protection method of claim 1, wherein: if the user wants to protect the demands at times t-1 and t, then S_tar = (s_{t-1}, s_t).
CN201810751655.0A 2018-07-10 2018-07-10 User requirement track privacy protection method Active CN109241764B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810751655.0A CN109241764B (en) 2018-07-10 2018-07-10 User requirement track privacy protection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810751655.0A CN109241764B (en) 2018-07-10 2018-07-10 User requirement track privacy protection method

Publications (2)

Publication Number Publication Date
CN109241764A (en) 2019-01-18
CN109241764B (en) 2021-08-17

Family

ID=65071987

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810751655.0A Active CN109241764B (en) 2018-07-10 2018-07-10 User requirement track privacy protection method

Country Status (1)

Country Link
CN (1) CN109241764B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110087194B (en) * 2019-04-25 2021-05-11 东华大学 Game-based position data poisoning attack prototype system in Internet of vehicles
CN112235787B (en) * 2020-09-30 2023-04-28 南京工业大学 Position privacy protection method based on generation countermeasure network
CN112241554A (en) * 2020-10-30 2021-01-19 浙江工业大学 Model stealing defense method and device based on differential privacy index mechanism
CN112364379B (en) * 2020-11-18 2024-03-22 浙江工业大学 Differential privacy-based position privacy protection method for guaranteeing service quality

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103607371A (en) * 2013-07-02 2014-02-26 燕山大学 Method for protecting Internet user privacy through third-party platform
CN104050267A (en) * 2014-06-23 2014-09-17 中国科学院软件研究所 Individuality recommendation method and system protecting user privacy on basis of association rules
CN104680072A (en) * 2015-03-16 2015-06-03 福建师范大学 Personalized track data privacy protection method based on semantics
EP3062547A1 (en) * 2015-02-26 2016-08-31 Alcatel Lucent User tracking
CN106874782A (en) * 2015-12-11 2017-06-20 北京奇虎科技有限公司 The seamless application method and mobile terminal of a kind of mobile terminal
CN107862219A (en) * 2017-11-14 2018-03-30 哈尔滨工业大学深圳研究生院 The guard method of demand privacy in a kind of social networks

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103607371A (en) * 2013-07-02 2014-02-26 燕山大学 Method for protecting Internet user privacy through third-party platform
CN104050267A (en) * 2014-06-23 2014-09-17 中国科学院软件研究所 Individuality recommendation method and system protecting user privacy on basis of association rules
EP3062547A1 (en) * 2015-02-26 2016-08-31 Alcatel Lucent User tracking
CN104680072A (en) * 2015-03-16 2015-06-03 福建师范大学 Personalized track data privacy protection method based on semantics
CN106874782A (en) * 2015-12-11 2017-06-20 北京奇虎科技有限公司 The seamless application method and mobile terminal of a kind of mobile terminal
CN107862219A (en) * 2017-11-14 2018-03-30 哈尔滨工业大学深圳研究生院 The guard method of demand privacy in a kind of social networks

Also Published As

Publication number Publication date
CN109241764A (en) 2019-01-18

Similar Documents

Publication Publication Date Title
CN109241764B (en) User requirement track privacy protection method
Yang et al. Location privacy preservation mechanism for location-based service with incomplete location data
CN113609523B (en) Vehicle networking private data protection method based on block chain and differential privacy
CN107872449B (en) Denial of service attack defense method based on predictive control
CN114884682B (en) Crowd sensing data stream privacy protection method based on self-adaptive local differential privacy
Kuang et al. A privacy protection model of data publication based on game theory
Misra et al. Extracting mobility pattern from target trajectory in wireless sensor networks
Xie et al. Detecting latent attack behavior from aggregated Web traffic
CN116800488A (en) Group cooperation privacy game method based on blockchain
Shivashankar et al. Privacy preservation of data using modified rider optimization algorithm: optimal data sanitization and restoration model
Liu et al. Dynamic User Clustering for Efficient and Privacy-Preserving Federated Learning
Niu et al. A framework for personalized location privacy
CN113312635B (en) Multi-agent fault-tolerant consistency method based on state privacy protection
Kökciyan et al. Turp: Managing trust for regulating privacy in internet of things
CN109711197B (en) User privacy protection method for continuous query attack of road network
Li et al. A personalized trajectory privacy protection method
Peng et al. Location correlated differential privacy protection based on mobile feature analysis
Finner et al. False Discovery Rate Control of Step‐Up‐Down Tests with Special Emphasis on the Asymptotically Optimal Rejection Curve
Li et al. Quantifying location privacy risks under heterogeneous correlations
Sirisala et al. A novel trust recommendation model in online social networks using soft computing methods
CN114826649B (en) Website fingerprint confusion method based on countermeasure patches
Sarode et al. Combination of Fitness-Mated Lion Algorithm with Neural Network for Optimal Query Ordering Data Aggregation Model in WSN
Zhang et al. An Adaptive Recommendation Method Based on Small-World Implicit Trust Network.
Wen et al. Protecting locations with differential privacy against location-dependent attacks in continuous lbs queries
CN114510731A (en) Smart home security access control method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant