CN112235787B - Location privacy protection method based on a generative adversarial network - Google Patents

Location privacy protection method based on a generative adversarial network

Info

Publication number
CN112235787B
CN112235787B CN202011059560.6A
Authority
CN
China
Prior art keywords
user
privacy
granularity
protection
protection strategy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011059560.6A
Other languages
Chinese (zh)
Other versions
CN112235787A (en)
Inventor
白光伟
魏礼奇
赵志宏
沈航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Tech University
Original Assignee
Nanjing Tech University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Tech University filed Critical Nanjing Tech University
Priority to CN202011059560.6A priority Critical patent/CN112235787B/en
Publication of CN112235787A publication Critical patent/CN112235787A/en
Application granted granted Critical
Publication of CN112235787B publication Critical patent/CN112235787B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W12/00 Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W12/02 Protecting privacy or anonymity, e.g. protecting personally identifiable information [PII]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 Services making use of location information
    • H04W4/025 Services making use of location information using location based information parameters

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention discloses a location privacy protection method based on a generative adversarial network, relating to a user-centric location privacy protection mechanism. The invention designs a third-party trusted server based on the Stackelberg game model and introduces a generative adversarial network into the generation of the protection strategy; while losing a tolerable amount of service quality, it significantly shortens the time needed to generate the protection mechanism and reduces the utility cost as much as possible.

Description

Location privacy protection method based on a generative adversarial network
Technical Field
The invention relates to a location privacy protection method based on a generative adversarial network, and belongs to the field of privacy protection.
Background
In recent years, with the development of Internet and communication technology, intelligent mobile devices such as smartphones and smart watches have become widespread and have enriched people's lives. With advances in mobile positioning technology and equipment, Location-Based Services (LBS) have permeated daily life, and location has become essential information in social life. However, to obtain an LBS, a user must report his or her location and query attributes to the service provider, and these include the user's location privacy and other personally sensitive information. By collecting the information in a user's LBS request, such as the location or POI (point of interest), a malicious attacker can obtain and infer the user's private information, whose disclosure could cause immeasurable loss to the user. The emergence of big data technology and the rise of machine learning, with their powerful ability to analyze massive data, further aggravate the privacy problem.
Location perturbation and obfuscation techniques protect user privacy by adding noise to reduce location accuracy, for example by shifting the real location within a small range or by replacing it with a region; at the same time, care must be taken to balance privacy against utility.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: in view of the excessive time complexity of existing linear programming solutions in practical scenarios, an efficient algorithm is provided for generating a location privacy protection strategy, shortening the generation time of the strategy while losing a tolerable amount of service quality, and reducing the utility cost as much as possible.
The invention adopts the following technical scheme to solve the technical problem, comprising the following specific steps:
the invention provides a location privacy protection method based on a generative adversarial network, comprising the following steps:
step one: the location privacy protection server acquires the user's location privacy background knowledge and discretizes it into a coarse grid;
step two: construct a system of linear programming equations and solve for a preliminary location protection strategy;
step three: re-grid the preliminary location protection strategy at a finer granularity than that used in step one;
step four: train a generative adversarial network and solve for the final protection strategy result.
Furthermore, the background knowledge acquired in step one is the two-dimensional probability distribution of the user over the positions in a given area; each position coordinate point is expressed as longitude and latitude.
Furthermore, in the location privacy protection method based on a generative adversarial network provided by the invention, the specific steps for gridding the positions in step one and step three are:
step 1: acquire the longitude and latitude coordinates (x_L, y_L) and (x_R, y_R) of the lower-left and upper-right corners of the area;
step 2: from the division granularity a, b, compute the grid cell size (x_0, y_0), where in step one the granularity satisfies a·b ≤ 150 and in step three a·b ≤ 10^4:
x_0 = (x_R - x_L)/a,  y_0 = (y_R - y_L)/b
step 3: convert the longitude and latitude coordinates of an original point into grid coordinates:
x_n' = int((x_n - x_L)/x_0),  y_n' = int((y_n - y_L)/y_0)
where int() is the rounding-down function and n denotes a position point;
step 4: calculate the probability of each grid cell (i, j):
π(i, j) = (1/N) · Σ_n exact((x_n', y_n'), (i, j))
where exact(x, y) is defined to output 1 when x = y and 0 otherwise;
step 5: summarize the probabilities of all grid cells into the two-dimensional probability distribution π.
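Steps 1 to 5 above can be sketched in Python. This is an illustrative reconstruction, not code from the patent: the corner coordinates, granularities, and sample points are made-up values, and the formulas follow the gridding described above (cell size (x_R - x_L)/a, floor division to grid coordinates, empirical frequencies as probabilities).

```python
import numpy as np

def gridify(points, corner_lower_left, corner_upper_right, a, b):
    """Discretize longitude/latitude points into an a-by-b grid and
    return the empirical two-dimensional probability distribution pi."""
    x_L, y_L = corner_lower_left
    x_R, y_R = corner_upper_right
    x0 = (x_R - x_L) / a          # grid cell width   (step 2)
    y0 = (y_R - y_L) / b          # grid cell height  (step 2)
    pi = np.zeros((a, b))
    for x, y in points:           # steps 3-4: map each point to a cell
        i = min(int((x - x_L) / x0), a - 1)   # clamp points on the border
        j = min(int((y - y_L) / y0), b - 1)
        pi[i, j] += 1
    return pi / len(points)       # step 5: normalize counts to probabilities

# hypothetical 2x2 grid over the unit square with four sample points
pi = gridify([(0.1, 0.1), (0.2, 0.3), (0.9, 0.9), (0.6, 0.2)],
             (0.0, 0.0), (1.0, 1.0), a=2, b=2)
print(pi)
```

The border clamp is a practical addition so that a point lying exactly on the upper-right corner still falls in the last cell; the patent's int() formula alone would index one cell past the grid there.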
Furthermore, in the location privacy protection method based on a generative adversarial network provided by the invention, the specific steps of step two are:
define the user's real position as s, which the location protection strategy perturbs to a false position o; after receiving the false position o, the attacker infers the user's position from background knowledge, with inferred result ŝ.
For a particular position s, the privacy protection level l is defined as the expected Euclidean distance between the position ŝ inferred by the attacker and s:
l(s) = Σ_o p(o|s) · Σ_ŝ q(ŝ|o) · d(ŝ, s)   (1)
in formula (1), p(o|s) is the protection strategy of the LPS (Location Perturbation Server) and q(ŝ|o) is the attacker's inference strategy;
p(o|s) = Pr{O = o | S = s}   (2)
q(ŝ|o) = Pr{Ŝ = ŝ | O = o}   (3)
Expanding position s_i to the whole position set S gives the privacy protection level L of the user in the whole area:
L = Σ_{s∈S} π(s) · l(s)   (4)
To guarantee user privacy, a lower bound L_min must be set for L, with L ≥ L_min.
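The privacy protection level of formula (4) can be evaluated numerically. The sketch below is an illustration, not code from the patent: the distributions π, p, q and the one-dimensional coordinates are made-up values, and the expected-distance form of l(s) and L follows the definitions above.

```python
import numpy as np

def privacy_level(pi, p, q, d):
    """L = sum_s pi(s) * sum_o p(o|s) * sum_sh q(sh|o) * d(sh, s)."""
    # pi: (S,)   prior over true positions
    # p:  (S, O) protection strategy p(o|s)
    # q:  (O, S) attacker inference q(sh|o)
    # d:  (S, S) distance d(sh, s)
    return float(np.einsum('s,so,oh,hs->', pi, p, q, d))

# toy example: two positions at coordinates 0 and 1
pos = np.array([0.0, 1.0])
d = np.abs(pos[:, None] - pos[None, :])    # d[sh, s] = |sh - s|
pi = np.array([0.5, 0.5])
p = np.eye(2)                              # no perturbation: o = s
q = np.eye(2)                              # attacker inverts perfectly
print(privacy_level(pi, p, q, d))          # no privacy: expected distance 0
q_blind = np.full((2, 2), 0.5)             # attacker guesses uniformly
print(privacy_level(pi, p, q_blind, d))    # positive privacy level
```

The two prints illustrate why the constraint L ≥ L_min bites: against a perfect inference strategy an unperturbed p yields L = 0, which would violate any positive L_min.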
The optimal attack strategy of the attacker is as follows:
Figure BDA0002711870260000031
the optimal protection policy for the user is:
Figure BDA0002711870260000032
Subject to
Figure BDA0002711870260000033
according to the three formulas (5), (6) and (7), the optimal protection strategy of the user can be obtained through inference:
Figure BDA0002711870260000034
Subject to
Figure BDA0002711870260000035
Q loss representing the quality of service cost of the location privacy preserving server, and (8) and (9) forming an optimization solving problem, and obtaining in the step one
Figure BDA0002711870260000036
And L is known to be min Substituting the protection strategy to obtain the optimal protection strategy p.
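Under one simplifying assumption, namely fixing the attacker's inference strategy q so that the inner minimization disappears, the problem in (8) and (9) becomes a plain linear program in the variables p(o|s), which scipy can solve directly. The instance below is a made-up two-position example, not data from the patent.

```python
import numpy as np
from scipy.optimize import linprog

# toy setting: two true positions S and two false positions O at coords 0 and 1
pos = np.array([0.0, 1.0])
d = np.abs(pos[:, None] - pos[None, :])       # Euclidean distance in 1-D
pi = np.array([0.5, 0.5])                     # prior over true positions
q = np.full((2, 2), 0.5)                      # fixed (assumed) attacker inference
L_min = 0.4                                   # required privacy lower bound

S, O = 2, 2
# decision vector x = [p(o0|s0), p(o1|s0), p(o0|s1), p(o1|s1)]
# objective (8): minimize Q_loss = sum_s pi(s) sum_o p(o|s) d(s, o)
c = (pi[:, None] * d).ravel()
# privacy constraint (9): L >= L_min, written as -L <= -L_min, where
# L = sum_s pi(s) sum_o p(o|s) * (sum_sh q(sh|o) d(sh, s))
w = np.einsum('oh,hs->so', q, d)              # w[s, o] = expected dist given o
A_ub = -(pi[:, None] * w).ravel()[None, :]
b_ub = [-L_min]
# each row of p must be a probability distribution: sum_o p(o|s) = 1
A_eq = np.kron(np.eye(S), np.ones(O))
b_eq = np.ones(S)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * (S * O), method="highs")
p_opt = res.x.reshape(S, O)
print(res.success, res.fun)
```

With a blind uniform attacker the privacy constraint is slack, so the optimum reports the true location (Q_loss = 0); the LP only becomes interesting, and expensive at fine granularity, when S and O grow, which is exactly the cost the patent's GAN stage is meant to avoid.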
Furthermore, in the location privacy protection method based on a generative adversarial network provided by the invention, the specific steps of step four are:
step 401: take the finer-granularity p generated in step three as the sample set R of the generative adversarial network, set an upper limit N on the number of iterations, and set the iteration counter n = 0;
step 402: the generator G receives input noise z and outputs a corresponding probability distribution x = G(z);
step 403: the discriminator compares the received G(z) with the samples in R and outputs D(G(z)) according to the similarity of the two probability distributions;
step 404: according to formula (4), calculate the privacy protection level L(z) corresponding to G(z);
step 405: calculate the discrimination factor c from L(z):
[equation (10): c is computed from L(z) and L_min via the softplus function]
where the softplus function is defined as
softplus(a, b) = ln(1 + e^(a-b))   (11)
step 406: output the final result of the discriminator according to D(G(z)) and the discrimination factor:
y = c · D(G(z))   (12)
step 407: judge whether n equals N; if not, let n = n + 1 and return to step 402 to continue training; if so, end the training.
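Steps 405 and 406 can be sketched numerically: the two-argument softplus of formula (11) and the adjusted discriminator output of formula (12). The image for the discrimination-factor formula did not survive extraction, so the mapping from L(z) to c below (a softplus of L(z) against L_min, squashed into (0, 1)) is an assumption for illustration only, not the patent's actual formula.

```python
import math

def softplus(a, b):
    """softplus(a, b) = ln(1 + e^(a - b))  (formula 11)."""
    return math.log(1.0 + math.exp(a - b))

def discrimination_factor(L_z, L_min):
    # ASSUMPTION: the patent's formula for c is not recoverable from the
    # text; here c grows with how far L(z) exceeds L_min and lies in (0, 1).
    return 1.0 - math.exp(-softplus(L_z, L_min))

def discriminator_output(D_Gz, L_z, L_min):
    """y = c * D(G(z))  (formula 12)."""
    return discrimination_factor(L_z, L_min) * D_Gz

# generated distributions that fall short of L_min are penalized:
print(discriminator_output(0.9, L_z=0.2, L_min=0.5))  # low privacy, damped
print(discriminator_output(0.9, L_z=0.8, L_min=0.5))  # adequate privacy
```

Whatever its exact form, the role of c is clear from steps 405 and 406: it couples the privacy-level constraint into the discriminator's score, so the generator is pushed toward distributions that both resemble the LP solution and satisfy L ≥ L_min.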
By adopting the above technical scheme, the invention has the following technical effect compared with the prior art:
on the premise of guaranteeing privacy and utility, the proposed method shortens the computation time compared with the traditional linear programming algorithm, making the privacy protection model more practical.
Drawings
Fig. 1 is a system framework diagram of the present invention.
Fig. 2 is a framework diagram of the generative adversarial network of the present invention.
Detailed Description
The technical scheme of the invention is further described in detail below with reference to the accompanying drawings:
it will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The invention provides a location privacy protection method based on a generative adversarial network, which specifically comprises the following steps:
step one: the location privacy protection server acquires the user's location privacy background knowledge and discretizes it into a coarse grid. The background knowledge is the two-dimensional probability distribution of the user over the positions in a given area; position coordinate points are expressed as longitude and latitude.
Step two: construct a system of linear programming equations and solve for a preliminary location protection strategy. The specific steps of step two are:
define the user's real position as s, which the location protection strategy perturbs to a false position o; after receiving o, the attacker infers the user's position from background knowledge, with inferred result ŝ.
For a particular position s, the privacy protection level l is expressed as the expected Euclidean distance between the position ŝ inferred by the attacker and s:
l(s) = Σ_o p(o|s) · Σ_ŝ q(ŝ|o) · d(ŝ, s)   (1)
in formula (1), p(o|s) is the protection strategy of the LPS and q(ŝ|o) is the attacker's inference strategy.
p(o|s) = Pr{O = o | S = s}   (2)
q(ŝ|o) = Pr{Ŝ = ŝ | O = o}   (3)
Expanding position s_i to the whole set S gives the privacy protection level L of the users in the whole area:
L = Σ_{s∈S} π(s) · l(s)   (4)
To guarantee user privacy, a lower bound L_min must be set for L, with L ≥ L_min.
The attacker's optimal attack strategy is:
q* = argmin_q Σ_{s∈S} π(s) · p(o|s) · Σ_ŝ q(ŝ|o) · d(ŝ, s)   (5)
The user's optimal protection strategy is:
p* = argmax_p min_q L   (6)
Subject to
Q_loss ≤ Q_loss^max,  Σ_o p(o|s) = 1,  p(o|s) ≥ 0   (7)
From the three formulas (5), (6) and (7), the user's optimal protection strategy can be derived as:
p* = argmin_p Q_loss   (8)
Subject to
L ≥ L_min,  Σ_o p(o|s) = 1,  p(o|s) ≥ 0   (9)
Formulas (8) and (9) form an optimization problem; substituting the distribution π obtained in step one and the known L_min yields the optimal protection strategy p.
Step three: re-grid the preliminary protection strategy at a finer granularity. The gridding method is the same as that used for the positions in step one; the division granularities a and b generally satisfy a·b ≤ 150 in step one and a·b ≤ 10^4 in step three.
The specific steps of the gridding algorithm are:
step 1: acquire the longitude and latitude coordinates (x_L, y_L) and (x_R, y_R) of the lower-left and upper-right corners of the area;
step 2: from the division granularity a, b, compute the grid cell size:
x_0 = (x_R - x_L)/a,  y_0 = (y_R - y_L)/b
step 3: convert the longitude and latitude coordinates of an original point into grid coordinates:
x_n' = int((x_n - x_L)/x_0),  y_n' = int((y_n - y_L)/y_0)
where int() is the rounding-down function.
step 4: calculate the probability of each grid cell (i, j):
π(i, j) = (1/N) · Σ_n exact((x_n', y_n'), (i, j))
where exact(x, y) outputs 1 when x = y and 0 otherwise.
step 5: summarize the probabilities of all grid cells into the two-dimensional probability distribution π.
Step four: train a generative adversarial network and solve for the final protection strategy result. The specific steps comprise:
step 401: take the finer-granularity p generated in step three as the sample set R of the generative adversarial network, set an upper limit N on the number of iterations, and set the iteration counter n = 0;
step 402: the generator G receives input noise z and outputs a corresponding probability distribution x = G(z);
step 403: the discriminator compares the received G(z) with the samples in R and outputs D(G(z)) according to the similarity of the two probability distributions;
step 404: according to formula (4), calculate the privacy protection level L(z) corresponding to G(z);
step 405: calculate the discrimination factor c from L(z):
[equation (10): c is computed from L(z) and L_min via the softplus function]
where the softplus function is defined as
softplus(a, b) = ln(1 + e^(a-b))   (11)
step 406: output the final result of the discriminator according to D(G(z)) and the discrimination factor:
y = c · D(G(z))   (12)
step 407: judge whether n equals N; if not, let n = n + 1 and return to step 402 to continue training; if so, end the training.
In view of the excessive computational complexity and low practicability of traditional linear-programming location privacy protection algorithms in real scenarios, the invention designs a third-party trusted server based on the Stackelberg game model and introduces a generative adversarial network into the generation of the protection strategy, which significantly shortens the generation time of the protection mechanism while losing a tolerable amount of service quality, and reduces the utility cost as much as possible.
The following describes a specific embodiment of the invention in a concrete scenario, comprising the following steps:
step one: establishing a location privacy protection model according to the attached figure 1, wherein a location perturbation server is responsible for receiving an LBS request of a user, perturbing a real location contained in the request and then sending the perturbed real location to the LBS server; and at the same time, the return value of the LBS server is accepted and transmitted back to the user.
Step two: according to actual conditions, set the maximum service-quality cost Q_loss^max and the minimum location privacy protection level L_min for the location privacy protection server.
Since the LPS perturbs the user's location from the real position s to the false position o, the query results from the LBS server are all based on o. In most LBS scenarios, the farther o is from s, the worse the quality of service; the quality-of-service cost Q_loss can therefore be expressed as:
Q_loss = Σ_{s∈S} π(s) · Σ_o p(o|s) · d(s, o)   (13)
Obviously Q_loss cannot be too large, otherwise the results returned by the LBS server lose usable value. The invention assumes the maximum service-quality cost the user can bear is Q_loss^max, so that Q_loss ≤ Q_loss^max.
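The quality-of-service cost Q_loss above can be checked numerically. The sketch below uses made-up positions and strategies, not data from the patent, and the expected-distance form follows the definition of Q_loss just given.

```python
import numpy as np

def q_loss(pi, p, d):
    """Q_loss = sum_s pi(s) * sum_o p(o|s) * d(s, o)."""
    # pi: (S,) prior; p: (S, O) protection strategy; d: (S, O) distance
    return float(np.einsum('s,so,so->', pi, p, d))

pos = np.array([0.0, 1.0])                 # two grid-cell centers
d = np.abs(pos[:, None] - pos[None, :])    # d[s, o] = |s - o|
pi = np.array([0.5, 0.5])
print(q_loss(pi, np.eye(2), d))            # reporting the true location: cost 0
print(q_loss(pi, np.full((2, 2), 0.5), d)) # uniform perturbation: positive cost
```

The two extremes bracket the trade-off: the identity strategy has zero utility cost but no privacy, while heavy perturbation raises both privacy and Q_loss, hence the cap Q_loss ≤ Q_loss^max.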
Similarly, for a particular position s, the privacy level l is expressed as the expected Euclidean distance between the position ŝ inferred by the attacker and s:
l(s) = Σ_o p(o|s) · Σ_ŝ q(ŝ|o) · d(ŝ, s)   (14)
in formula (14), p(o|s) is the protection strategy of the LPS and q(ŝ|o) is the attacker's inference strategy.
p(o|s) = Pr{O = o | S = s}   (15)
q(ŝ|o) = Pr{Ŝ = ŝ | O = o}   (16)
In practice, the user's position within the area is not a single value; the positions form a set S = {s_1, s_2, s_3, ..., s_n}, and the perturbed false position o and the position ŝ inferred by the attacker likewise lie in sets, o ∈ O and ŝ ∈ Ŝ, where n is the total number of possible positions. Expanding position s_i to the whole set S gives the privacy protection level L of the users in the whole area:
L = Σ_{s∈S} π(s) · l(s)   (17)
To guarantee user privacy, a lower bound L_min must be set for L, with L ≥ L_min.
Step three: grid and summarize the user's background knowledge into the two-dimensional distribution π and input it to the location privacy protection server. The granularities a and b can be adjusted as needed, subject to a·b ≤ 150.
Step four: calculate the preliminary location privacy protection strategy p for the user.
Step five: re-grid the user's background knowledge and p at a finer granularity; in this step the granularities are generally set to a, b ∈ (40, 100).
Step six: set an upper limit on the number of training iterations, start training the generative adversarial network, and wait for the output result.
Step seven: use the generated location protection strategy to establish a profile associated with the user's identity on the server; when the user's LBS request is received, perturb the location and send it to the LBS server; after the LBS server's reply is received, return it to the user.
The foregoing is only a partial embodiment of the invention. It should be noted that those skilled in the art can make improvements and modifications without departing from the principles of the invention, and such improvements and modifications are also considered to be within the scope of protection of the invention.

Claims (1)

1. A location privacy protection method based on a generative adversarial network, comprising the following steps:
step one: the location privacy protection server acquires the user's location privacy background knowledge and grids it; the acquired background knowledge is the two-dimensional probability distribution of the user over positions in a given area, and position coordinate points are expressed as longitude and latitude;
step two: construct a system of linear programming equations and solve for a preliminary location protection strategy;
step three: re-grid the preliminary location protection strategy at a finer granularity than that used in step one;
step four: train a generative adversarial network and solve for the final protection strategy result;
the specific steps of gridding the positions in step one and step three comprise:
step 1: acquire the longitude and latitude coordinates (x_L, y_L) and (x_R, y_R) of the lower-left and upper-right corners of the area;
step 2: from the division granularity a, b, compute the grid cell size (x_0, y_0):
x_0 = (x_R - x_L)/a,  y_0 = (y_R - y_L)/b
step 3: convert the longitude and latitude coordinates of an original point into grid coordinates:
x_i' = int((x_i - x_L)/x_0),  y_i' = int((y_i - y_L)/y_0)
where int() is the rounding-down function and i denotes a position point;
step 4: calculate the probability of each grid cell (j, k):
π(j, k) = (1/N) · Σ_i exact((x_i', y_i'), (j, k))
where exact(x, y) is defined to output 1 when x = y and 0 otherwise;
step 5: summarize the probabilities of all grid cells into a two-dimensional probability distribution π;
wherein the granularity satisfies a·b ≤ 150 in step one and a·b ≤ 10^4 in step three;
The specific steps of the second step comprise:
defining the real position of the user as s, and disturbing to a false position o after a position protection strategy; the attacker deduces the position of the user according to background knowledge after receiving the false position o, and the deduced result is that
Figure QLYQS_5
For a particular location s, we use the privacy protection level l to represent attacker speculationTo the position
Figure QLYQS_6
Distance from s>
Figure QLYQS_7
Is>
Figure QLYQS_8
Is defined as +.>
Figure QLYQS_9
Euclidean distance from s:
Figure QLYQS_10
in the formula (1), p (o|s) is a protection strategy of the position disturbance server,
Figure QLYQS_11
inference policies for an attacker;
p(o|s)=Pr{O=o|S=s} (2)
Figure QLYQS_12
pr { } represents the probability of event occurrence, O represents the set of false positions;
position s i Expanding to the whole position set S, the privacy protection level L of the user in the whole area can be obtained:
Figure QLYQS_13
in order to ensure user privacy, a lower limit value L needs to be set for L min And L is greater than or equal to L min
The optimal attack strategy of the attacker is as follows:
Figure QLYQS_14
the optimal protection policy for the user is:
Figure QLYQS_15
Subject to
Figure QLYQS_16
according to the three formulas (5), (6) and (7), the optimal protection strategy of the user can be obtained through inference:
Figure QLYQS_17
Subject to
Figure QLYQS_18
Q loss representing the quality of service cost of the location privacy preserving server, and (8) and (9) forming an optimization solving problem, and obtaining in the step one
Figure QLYQS_19
And L is known to be min Substituting to obtain the optimal protection strategy p;
the specific steps of step four comprise:
step 401: take the finer-granularity p generated in step three as the sample set R of the generative adversarial network, set an upper limit N on the number of iterations, and set the iteration counter n_1 = 0;
step 402: the generator G receives input noise z and outputs a corresponding probability distribution x = G(z);
step 403: the discriminator compares the received G(z) with the samples in R and outputs D(G(z)) according to the similarity of the two probability distributions;
step 404: according to formula (4), calculate the privacy protection level L(z) corresponding to G(z);
step 405: calculate the discrimination factor c from L(z):
[equation (10): c is computed from L(z) and L_min via the softplus function]
where the softplus function is defined as
softplus(a, b) = ln(1 + e^(a-b))   (11)
step 406: output the final result of the discriminator according to D(G(z)) and the discrimination factor:
y = c · D(G(z))   (12)
step 407: judge whether n_1 equals N; if not, let n_1 = n_1 + 1 and return to step 402 to continue training; if so, end the training.
CN202011059560.6A 2020-09-30 2020-09-30 Position privacy protection method based on generation countermeasure network Active CN112235787B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011059560.6A CN112235787B (en) 2020-09-30 2020-09-30 Position privacy protection method based on generation countermeasure network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011059560.6A CN112235787B (en) 2020-09-30 2020-09-30 Position privacy protection method based on generation countermeasure network

Publications (2)

Publication Number Publication Date
CN112235787A CN112235787A (en) 2021-01-15
CN112235787B true CN112235787B (en) 2023-04-28

Family

ID=74120951

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011059560.6A Active CN112235787B (en) 2020-09-30 2020-09-30 Position privacy protection method based on generation countermeasure network

Country Status (1)

Country Link
CN (1) CN112235787B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112866993B (en) * 2021-02-06 2022-10-21 北京信息科技大学 Time sequence position publishing method and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107368752B (en) * 2017-07-25 2019-06-28 北京工商大学 A kind of depth difference method for secret protection based on production confrontation network
CN109241764B (en) * 2018-07-10 2021-08-17 哈尔滨工业大学(深圳) User requirement track privacy protection method
CN110460600B (en) * 2019-08-13 2021-09-03 南京理工大学 Joint deep learning method capable of resisting generation of counterattack network attacks
CN110636065B (en) * 2019-09-23 2021-12-07 哈尔滨工程大学 Location point privacy protection method based on location service
CN111666588B (en) * 2020-05-14 2023-06-23 武汉大学 Emotion differential privacy protection method based on generation countermeasure network

Also Published As

Publication number Publication date
CN112235787A (en) 2021-01-15

Similar Documents

Publication Publication Date Title
CN112714106B (en) Block chain-based federal learning casual vehicle carrying attack defense method
Shi et al. Implicit authentication through learning user behavior
CN110795768B (en) Model learning method, device and system based on private data protection
CN112185395B (en) Federal voiceprint recognition method based on differential privacy
US20070130147A1 (en) Exponential noise distribution to optimize database privacy and output utility
CN110020546B (en) Privacy data grading protection method
CN110602145B (en) Track privacy protection method based on location-based service
CN107689950A (en) Data publication method, apparatus, server and storage medium
CN114598539B (en) Root cause positioning method and device, storage medium and electronic equipment
CN103049704A (en) Self-adaptive privacy protection method and device for mobile terminal
US20220405310A1 (en) Computer-based systems configured for efficient entity resolution for database merging and reconciliation
CN114662157B (en) Block compressed sensing indistinguishable protection method and device for social text data stream
CN114328640A (en) Differential privacy protection and data mining method and system based on mobile user dynamic sensitive data
CN108418835A (en) A kind of Port Scan Attacks detection method and device based on Netflow daily record datas
CN113507704A (en) Mobile crowd sensing privacy protection method based on double attribute decision
CN112235787B (en) Position privacy protection method based on generation countermeasure network
Murakami et al. Designing a location trace anonymization contest
CN117009095B (en) Privacy data processing model generation method, device, terminal equipment and medium
CN116828453B (en) Unmanned aerial vehicle edge computing privacy protection method based on self-adaptive nonlinear function
Li et al. Privacy measurement method using a graph structure on online social networks
CN116506206A (en) Big data behavior analysis method and system based on zero trust network user
CN113868695B (en) Block chain-based trusted privacy protection method in crowd-sourced data aggregation
Jiang [Retracted] Research on Machine Learning Algorithm for Internet of Things Information Security Management System Research and Implementation
CN112182638B (en) Histogram data publishing method and system based on localized differential privacy model
CN110990869B (en) Power big data desensitization method applied to privacy protection

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant