CN112235787B - Location privacy protection method based on a generative adversarial network - Google Patents
- Publication number: CN112235787B
- Application number: CN202011059560.6A
- Authority: CN (China)
- Prior art keywords: user, privacy, granularity, protection, protection strategy
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H04W 12/02 — Protecting privacy or anonymity, e.g. protecting personally identifiable information [PII] (H04W: Wireless communication networks; H04W 12/00: Security arrangements; Authentication; Protecting privacy or anonymity)
- H04W 4/025 — Services making use of location information using location based information parameters (H04W 4/02: Services making use of location information)
Abstract
The invention discloses a location privacy protection method based on a generative adversarial network, relating to a user-centric location privacy protection mechanism. The invention designs a trusted third-party server based on the Stackelberg game model and introduces a generative adversarial network into the generation of the protection strategy, which markedly shortens the generation time of the protection mechanism while losing only an allowable amount of service quality, and keeps the utility cost as low as possible.
Description
Technical Field
The invention relates to a location privacy protection method based on a generative adversarial network, and belongs to the field of privacy protection.
Background
In recent years, with the development of Internet and communication technology, intelligent mobile devices such as smartphones and smart watches have become widespread and have enriched people's lives. With the progress of mobile positioning technology and devices, location-based services (LBS) have penetrated daily life, and location has become an essential piece of information in social life. However, to obtain an LBS, a user must report his or her location and query attributes to the service provider, which exposes the user's location privacy and other personally sensitive information. By collecting the information in a user's LBS requests, such as locations or points of interest (POIs), a malicious attacker can obtain and infer the user's private information, whose disclosure would cause immeasurable loss to the user. The emergence of big data technology and the rise of machine learning, with their powerful ability to analyze massive data, further aggravate the privacy problem.
Location offsetting and obfuscation techniques protect user privacy by adding noise to reduce location accuracy, for example by moving the real location within a small range or replacing it with a region; at the same time, care must be taken to balance privacy against utility.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: given that the time complexity of the existing linear-programming solution is too high in practical scenarios, an efficient algorithm for generating a location privacy protection strategy is provided, which shortens the generation time of the strategy while losing only an allowable amount of service quality, and keeps the utility cost as low as possible.
To solve this technical problem, the invention adopts the following technical scheme, comprising the following specific steps:
The invention provides a location privacy protection method based on a generative adversarial network, which comprises the following steps:
Step one: the location privacy protection server acquires the user's location privacy background knowledge and grids it;
Step two: construct a system of linear programming equations and solve for a preliminary location protection strategy;
Step three: re-grid the preliminary location protection strategy at a finer granularity than in step one;
Step four: train a generative adversarial network and solve for the final protection strategy result.
Furthermore, the background knowledge acquired in step one is the two-dimensional probability distribution of the user over different positions within a certain region, with location coordinate points expressed as longitude and latitude.
Furthermore, in the location privacy protection method based on a generative adversarial network provided by the invention, the specific steps of gridding the locations in step one and step three are:
Step 1: acquire the longitude and latitude coordinates (xL, yL) and (xR, yR) of the lower-left and upper-right corners of the region;
Step 2: from the division granularity a, b, calculate the grid cell size (x0, y0), where a·b ≤ 150 in step one and a·b ≤ 10^4 in step three:
x0 = (xR − xL)/a, y0 = (yR − yL)/b
Step 3: convert the longitude and latitude coordinates of each original location point into grid coordinates:
(int((xn − xL)/x0), int((yn − yL)/y0))
where int() is the rounding-down function and n denotes a location point;
Step 4: calculate the probability of each grid cell (i, j):
π(i, j) = Σ_n π(n)·exact((int((xn − xL)/x0), int((yn − yL)/y0)), (i, j))
where π(n) is the prior probability of location point n, and exact(x, y) is defined as: output 1 when x = y and 0 otherwise;
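For illustration, the following Python sketch implements the gridding of steps 1–4 under assumed inputs; the function and variable names (grid_prior, points, probs) and the example region and prior are not part of the patent.

```python
import numpy as np

def grid_prior(points, probs, x_L, y_L, x_R, y_R, a, b):
    """Discretize a prior over (longitude, latitude) points onto an a x b grid.

    points: array of shape (n, 2) holding (x, y) = (longitude, latitude) per point
    probs:  array of shape (n,) holding the prior probability of each point
    Returns an (a, b) array of grid-cell probabilities that sums to 1.
    """
    x0 = (x_R - x_L) / a                      # step 2: grid cell width
    y0 = (y_R - y_L) / b                      # step 2: grid cell height
    grid = np.zeros((a, b))
    for (x, y), p in zip(points, probs):
        i = min(int((x - x_L) / x0), a - 1)   # step 3: cell index (clamped at the border)
        j = min(int((y - y_L) / y0), b - 1)
        grid[i, j] += p                       # step 4: accumulate probability per cell
    return grid

# Example: a coarse step-one grid with a*b = 150
rng = np.random.default_rng(0)
pts = rng.uniform([116.0, 39.5], [116.5, 40.0], size=(1000, 2))
pri = np.full(1000, 1.0 / 1000)
coarse = grid_prior(pts, pri, 116.0, 39.5, 116.5, 40.0, 10, 15)
print(coarse.shape, coarse.sum())             # (10, 15) 1.0
```

The same routine covers both the coarse step-one grid (a·b ≤ 150) and the finer step-three grid (a·b ≤ 10^4); only a and b change.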
Furthermore, in the location privacy protection method based on a generative adversarial network provided by the invention, the specific steps of step two are:
Define the real location of the user as s, which is perturbed to a false location o by the location protection strategy; after receiving the false location o, the attacker infers the user's location from background knowledge, and the inferred result is denoted ŝ.
For a particular location s, the privacy protection level l is expressed as the distance between the location ŝ inferred by the attacker and s, where d(ŝ, s) is defined as the Euclidean distance between ŝ and s:
l(s) = Σ_o Σ_ŝ p(o|s)·q(ŝ|o)·d(ŝ, s) (1)
In formula (1), p(o|s) is the protection strategy of the LPS (Location Perturbation Server) and q(ŝ|o) = Pr{Ŝ = ŝ | O = o} is the inference strategy of the attacker;
p(o|s) = Pr{O = o | S = s} (2)
Extending from a single position s_i to the whole location set S, the privacy protection level L of the user over the whole region is obtained:
L = Σ_{s_i∈S} π(s_i)·l(s_i) (4)
where π(s_i) is the gridded prior probability of location s_i obtained in step one.
To guarantee user privacy, a lower bound L_min is set for L, and L ≥ L_min is required.
The optimal attack strategy of the attacker is:
q* = argmin_q L (5)
The optimal protection strategy of the user is:
p* = argmax_p L (6)
subject to Q_loss ≤ Q_loss^max (7)
From formulas (5), (6) and (7), the optimal protection strategy of the user can be derived as:
p* = argmax_p min_q L (8)
subject to Q_loss ≤ Q_loss^max (9)
Q_loss denotes the quality-of-service cost of the location privacy protection server, and Q_loss^max is its maximum allowed value. Formulas (8) and (9) form an optimization problem; substituting the prior distribution π obtained in step one and the known L_min yields the optimal protection strategy p.
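Once the strategies are represented as matrices, the quantities appearing in (1)–(9) can be evaluated numerically. The sketch below is one possible NumPy formulation under assumed array shapes and function names (privacy_level, service_quality_loss, best_attack); it illustrates the definitions, not the patent's solver.

```python
import numpy as np

def privacy_level(prior, p, q, dist):
    """Privacy level L in the spirit of formula (4): expected Euclidean error
    of the attacker's guess.

    prior: (n,)   gridded prior pi(s) from step one
    p:     (n, m) protection strategy p(o|s)
    q:     (m, n) attacker inference strategy q(s_hat|o)
    dist:  (n, n) Euclidean distances d(s_hat, s) between grid cells
    """
    return np.einsum('s,so,ot,ts->', prior, p, q, dist)

def service_quality_loss(prior, p, dist_so):
    """Q_loss: expected distance between the reported location o and the true s."""
    return np.einsum('s,so,so->', prior, p, dist_so)

def best_attack(prior, p, dist):
    """Attacker best response in the spirit of formula (5): for each observed o,
    guess the s_hat minimizing the expected distance to the true location."""
    w = prior[:, None] * p                    # unnormalized posterior weight of s given o
    exp_err = dist @ w                        # expected error of each guess s_hat, per o
    q = np.zeros((p.shape[1], p.shape[0]))
    q[np.arange(p.shape[1]), exp_err.argmin(axis=0)] = 1.0
    return q
```

With these pieces, the linear-programming step amounts to choosing p so as to maximize privacy_level under the best_attack response while keeping service_quality_loss below its bound.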
Furthermore, in the location privacy protection method based on a generative adversarial network provided by the invention, the specific steps of step four are:
Step 401: take the finer-grained p generated in step three as the sample set R of the generative adversarial network, set an upper limit N on the number of iterations, and set the iteration counter n = 0;
Step 402: the generator G receives the input noise z and outputs a corresponding probability distribution x = G(z);
Step 403: the discriminator compares the received G(z) with the samples in R and outputs D(G(z)) according to the similarity of the two probability distributions;
Step 404: according to formula (4), calculate the privacy protection level L(z) corresponding to G(z);
Step 405: calculate the discrimination factor c from L(z), where the softplus function used is defined as
softplus(a, b) = ln(1 + e^(a−b)) (11)
Step 406: output the final result of the discriminator according to D(G(z)) and the discrimination factor c:
y = c·D(G(z)) (12)
Step 407: judge whether n equals N; if not, set n = n + 1 and return to step 402 to continue training; if yes, end the training.
By adopting the above technical means, the invention has the following technical effect compared with the prior art:
On the premise of guaranteeing both privacy and practicability, the proposed method shortens the computation time compared with the traditional linear programming algorithm, so that the privacy protection model is more practical.
Drawings
Fig. 1 is a system framework diagram according to the present invention.
Fig. 2 is a framework diagram of the generative adversarial network according to the present invention.
Detailed Description
The technical scheme of the invention is further described in detail below with reference to the accompanying drawings:
it will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The invention provides a location privacy protection method based on a generative adversarial network, which specifically comprises the following steps:
Step one: the location privacy protection server acquires the user's location privacy background knowledge and grids it at a coarse granularity. The background knowledge is the two-dimensional probability distribution of the user over different positions within a certain region, with location coordinate points expressed as longitude and latitude.
Step two: construct a system of linear programming equations and solve for a preliminary location protection strategy. The specific steps of step two include:
Define the real location of the user as s, which is perturbed to a false location o by the location protection strategy; after receiving o, the attacker infers the user's location from background knowledge, and the inferred result is denoted ŝ.
For a particular location s, the privacy protection level l is expressed as the distance between the location ŝ inferred by the attacker and s, where d(ŝ, s) is defined as the Euclidean distance between ŝ and s:
l(s) = Σ_o Σ_ŝ p(o|s)·q(ŝ|o)·d(ŝ, s) (1)
where p(o|s) is the protection strategy of the location perturbation server and q(ŝ|o) = Pr{Ŝ = ŝ | O = o} is the inference strategy of the attacker;
p(o|s) = Pr{O = o | S = s} (2)
Extending from a single position s_i to the whole set S, the privacy protection level L of the user over the whole region is obtained:
L = Σ_{s_i∈S} π(s_i)·l(s_i) (4)
where π(s_i) is the gridded prior probability of location s_i obtained in step one.
To guarantee user privacy, a lower bound L_min is set for L, and L ≥ L_min is required.
The optimal attack strategy of the attacker is:
q* = argmin_q L (5)
The optimal protection strategy of the user is:
p* = argmax_p L (6)
subject to Q_loss ≤ Q_loss^max (7)
From formulas (5), (6) and (7), the optimal protection strategy of the user can be derived as:
p* = argmax_p min_q L (8)
subject to Q_loss ≤ Q_loss^max (9)
Formulas (8) and (9) form an optimization problem; substituting the prior distribution obtained in step one and the known L_min yields the optimal protection strategy p.
Step three: re-grid the background knowledge and the preliminary location protection strategy at a finer granularity. The gridding method is the same as that used for the locations in step one; the division granularity a, b in step one generally follows a·b ≤ 150, and in step three generally follows a·b ≤ 10^4.
The specific steps of the gridding algorithm are as follows:
Step 1: acquire the longitude and latitude coordinates (xL, yL) and (xR, yR) of the lower-left and upper-right corners of the region;
Step 2: from the division granularity a, b, calculate the grid cell size:
x0 = (xR − xL)/a, y0 = (yR − yL)/b
Step 3: convert the longitude and latitude coordinates of each original location point into grid coordinates:
(int((xn − xL)/x0), int((yn − yL)/y0))
where int() is the rounding-down function.
Step 4: calculate the probability of each grid cell (i, j):
π(i, j) = Σ_n π(n)·exact((int((xn − xL)/x0), int((yn − yL)/y0)), (i, j))
where exact(x, y) is defined as: output 1 when x = y and 0 otherwise.
Step four: train a generative adversarial network and solve for the final protection strategy result. The specific steps include:
Step 401: take the finer-grained p generated in step three as the sample set R of the generative adversarial network, set an upper limit N on the number of iterations, and set the iteration counter n = 0;
Step 402: the generator G receives the input noise z and outputs a corresponding probability distribution x = G(z);
Step 403: the discriminator compares the received G(z) with the samples in R and outputs D(G(z)) according to the similarity of the two probability distributions;
Step 404: according to formula (4), calculate the privacy protection level L(z) corresponding to G(z);
Step 405: calculate the discrimination factor c from L(z), where the softplus function used is defined as
softplus(a, b) = ln(1 + e^(a−b)) (11)
Step 406: output the final result of the discriminator according to D(G(z)) and the discrimination factor c:
y = c·D(G(z)) (12)
Step 407: judge whether n equals N; if not, set n = n + 1 and return to step 402 to continue training; if yes, end the training.
Aiming at the problems that the traditional linear-programming location privacy protection algorithm has excessive computational complexity and poor practicability in real scenarios, the invention designs a trusted third-party server based on the Stackelberg game model and introduces a generative adversarial network into the generation of the protection strategy, so that the generation time of the protection mechanism is markedly shortened while only an allowable amount of service quality is lost, and the utility cost is kept as low as possible.
A specific embodiment of the invention is described below with reference to a concrete scenario, comprising the following steps:
Step one: establish the location privacy protection model according to Fig. 1, in which the location perturbation server is responsible for receiving the user's LBS request, perturbing the real location contained in the request, and then sending it to the LBS server; it also accepts the return value of the LBS server and transmits it back to the user.
Step two: according to the actual situation, set the maximum service quality cost Q_loss^max and the minimum location privacy protection level L_min for the location privacy protection server.
Since the LPS perturbs the user location from the real location s to the false location o, the query results from the LBS server are all based on o. In most LBS scenarios, the farther o is from s, the worse the quality of service, so the quality-of-service cost Q_loss can be expressed as:
Q_loss = Σ_s Σ_o π(s)·p(o|s)·d(s, o) (13)
Obviously Q_loss cannot be too large, otherwise the results returned by the LBS server lose their usable value. The invention assumes that the maximum service quality cost the user can bear is Q_loss^max, so Q_loss ≤ Q_loss^max.
Similarly, for a particular location s, the privacy level l can be expressed as the distance between the location ŝ inferred by the attacker and s, where d(ŝ, s) is defined as the Euclidean distance between ŝ and s:
l(s) = Σ_o Σ_ŝ p(o|s)·q(ŝ|o)·d(ŝ, s) (14)
p(o|s) = Pr{O = o | S = s} (15)
In practice, the user's position s within the region is not a single value; the possible positions form a set S = {s1, s2, s3, …, sn}, and both the perturbed false location o and the location ŝ inferred by the attacker lie in this set, where n is the total number of possible positions. Extending from a single position s_i to the whole set S, the privacy protection level L of the user over the whole region is obtained:
L = Σ_{s_i∈S} π(s_i)·l(s_i) (16)
To guarantee user privacy, a lower bound L_min is set for L, and L ≥ L_min is required.
Step three: grid and summarize the user's background knowledge and input the resulting prior into the location privacy protection server. The granularities a and b can be adjusted as needed, subject to a·b ≤ 150.
Step four: calculate the preliminary location privacy protection strategy p corresponding to the user.
Step five: re-grid the user's background knowledge and p at a finer granularity; in this step the granularities a, b ∈ (40, 100) are generally set.
Step six: set the upper limit of training iterations, start training the generative adversarial network, and wait for the output result.
Step seven: use the generated location protection strategy to establish a profile associated with the user's identity on the server; when an LBS request from the user is received, perturb the location and send it to the LBS server; after the return information of the LBS server is received, return it to the user.
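As an illustration of step seven, the sketch below shows how a perturbation server could use the generated strategy p to answer a user request; the class and function names are hypothetical and the toy strategy is made up.

```python
import numpy as np

class LocationPerturbationServer:
    """Holds the generated protection strategy p(o|s) for one user profile."""

    def __init__(self, p, rng=None):
        self.p = p                          # (n_true_cells, n_false_cells), rows sum to 1
        self.rng = rng or np.random.default_rng()

    def perturb(self, s_cell):
        """Sample a false cell o ~ p(.|s) for the user's true grid cell."""
        return self.rng.choice(self.p.shape[1], p=self.p[s_cell])

    def handle_request(self, s_cell, query, lbs_query):
        """Forward the perturbed location to the LBS and relay the answer back."""
        o_cell = self.perturb(s_cell)
        return lbs_query(o_cell, query)     # the user only ever reveals o_cell

# Usage with a dummy LBS backend
p = np.full((4, 4), 0.25)                   # toy uniform strategy over 4 cells
lps = LocationPerturbationServer(p)
answer = lps.handle_request(2, "nearest cafe", lambda o, q: f"results for cell {o}: {q}")
print(answer)
```

In this setup the strategy matrix p is computed offline (steps one to six), so answering a request only requires one categorical sample per query.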
The foregoing describes only some embodiments of the present invention. It should be noted that those skilled in the art can make modifications and adaptations without departing from the principles of the present invention, and such modifications and adaptations are intended to fall within the scope of the present invention.
Claims (1)
1. A location privacy protection method based on a generative adversarial network, comprising the following steps:
Step one: the location privacy protection server acquires the user's location privacy background knowledge and grids it; the acquired background knowledge is the two-dimensional probability distribution of the user over different positions within a certain region, with location coordinate points expressed as longitude and latitude;
Step two: construct a system of linear programming equations and solve for a preliminary location protection strategy;
Step three: re-grid the preliminary location protection strategy at a finer granularity than in step one;
Step four: train a generative adversarial network and solve for the final protection strategy result;
The specific steps of gridding the locations in step one and step three comprise:
Step 1: acquire the longitude and latitude coordinates (xL, yL) and (xR, yR) of the lower-left and upper-right corners of the region;
Step 2: from the division granularity a, b, calculate the grid cell size (x0, y0):
x0 = (xR − xL)/a, y0 = (yR − yL)/b
Step 3: convert the longitude and latitude coordinates of each original location point i into grid coordinates:
(int((xi − xL)/x0), int((yi − yL)/y0))
where int() is the rounding-down function and i denotes a location point;
Step 4: calculate the probability of each grid cell (u, v):
π(u, v) = Σ_i π(i)·exact((int((xi − xL)/x0), int((yi − yL)/y0)), (u, v))
where π(i) is the prior probability of location point i, and exact(x, y) is defined as: output 1 when x = y and 0 otherwise;
wherein a·b ≤ 150 when gridding in step one and a·b ≤ 10^4 when gridding in step three;
The specific steps of step two comprise:
Define the real location of the user as s, which is perturbed to a false location o by the location protection strategy; after receiving the false location o, the attacker infers the user's location from background knowledge, and the inferred result is denoted ŝ;
For a particular location s, the privacy protection level l is expressed as the distance between the location ŝ inferred by the attacker and s, where d(ŝ, s) is defined as the Euclidean distance between ŝ and s:
l(s) = Σ_o Σ_ŝ p(o|s)·q(ŝ|o)·d(ŝ, s) (1)
In formula (1), p(o|s) is the protection strategy of the location perturbation server and q(ŝ|o) = Pr{Ŝ = ŝ | O = o} is the inference strategy of the attacker;
p(o|s) = Pr{O = o | S = s} (2)
Pr{ } denotes the probability of an event, and O denotes the set of false locations;
Extending from a single position s_i to the whole location set S, the privacy protection level L of the user over the whole region is obtained:
L = Σ_{s_i∈S} π(s_i)·l(s_i) (4)
where π(s_i) is the gridded prior probability of location s_i obtained in step one;
To guarantee user privacy, a lower bound L_min is set for L, and L ≥ L_min is required;
The optimal attack strategy of the attacker is:
q* = argmin_q L (5)
The optimal protection strategy of the user is:
p* = argmax_p L (6)
subject to Q_loss ≤ Q_loss^max (7)
From formulas (5), (6) and (7), the optimal protection strategy of the user can be derived as:
p* = argmax_p min_q L (8)
subject to Q_loss ≤ Q_loss^max (9)
Q_loss denotes the quality-of-service cost of the location privacy protection server, and Q_loss^max is its maximum allowed value; formulas (8) and (9) form an optimization problem, and substituting the prior distribution obtained in step one and the known L_min yields the optimal protection strategy p;
The specific steps of step four comprise:
Step 401: take the finer-grained p generated in step three as the sample set R of the generative adversarial network, set an upper limit N on the number of iterations, and set the iteration counter n1 = 0;
Step 402: the generator G receives the input noise z and outputs a corresponding probability distribution x = G(z);
Step 403: the discriminator compares the received G(z) with the samples in R and outputs D(G(z)) according to the similarity of the two probability distributions;
Step 404: according to formula (4), calculate the privacy protection level L(z) corresponding to G(z);
Step 405: calculate the discrimination factor c from L(z), where the softplus function used is defined as
softplus(a, b) = ln(1 + e^(a−b)) (11)
Step 406: output the final result of the discriminator according to D(G(z)) and the discrimination factor c:
y = c·D(G(z)) (12)
Step 407: judge whether n1 equals N; if not, set n1 = n1 + 1 and return to step 402 to continue training; if yes, end the training.
Priority Applications (1)
- CN202011059560.6A — priority date 2020-09-30, filing date 2020-09-30 — Location privacy protection method based on a generative adversarial network
Publications (2)
- CN112235787A — published 2021-01-15
- CN112235787B — granted 2023-04-28
Family
- ID=74120951
Legal Events
- PB01 — Publication
- SE01 — Entry into force of request for substantive examination
- GR01 — Patent grant