CN111353173A - Sensitive tag track data publishing method using graph differential privacy model - Google Patents

Sensitive tag track data publishing method using graph differential privacy model

Info

Publication number
CN111353173A
CN111353173A
Authority
CN
China
Prior art keywords
vertex
data
privacy
track
weight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010164862.3A
Other languages
Chinese (zh)
Other versions
CN111353173B (en)
Inventor
姚琳
陈振宇
孙云栋
吴国伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University of Technology
Priority to CN202010164862.3A
Publication of CN111353173A
Application granted
Publication of CN111353173B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245Protecting personal data, e.g. for financial or medical purposes
    • G06F21/6254Protecting personal data, e.g. for financial or medical purposes by anonymising data, e.g. decorrelating personal data from the owner's identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Bioethics (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention belongs to the technical field of information security and relates to a sensitive-tag trajectory data publishing method based on a graph differential privacy model, comprising three stages: clustering and generalization of trajectories, graph-based spatio-temporal point differential privacy, and publication of trajectory data with privacy tags. In the first stage, all spatio-temporal points are divided into hotspot regions and outliers, and each specific location in the trajectory data is then generalized to the center of its hotspot region. In the second stage, a noise graph is built: hotspots, outliers, and privacy tags are mapped to a directed weighted graph; different privacy budgets are set for different vertices to control the magnitude of the added noise and preserve the utility of the data as far as possible; Laplace noise is then added to achieve differential privacy. In the publication stage, each record in the generalized trajectory data set is first restored, a head vertex is selected, and trajectory data are generated by a heuristic method, yielding anonymized data while ensuring high data utility.

Description

Sensitive tag track data publishing method using graph differential privacy model
Technical Field
The invention relates to a sensitive-tag trajectory data publishing method based on a graph differential privacy model, and belongs to the technical field of information security.
Background
With the explosive growth of location-aware devices and wireless communications, the spatio-temporal trajectories of moving objects can be collected and analyzed easily. Applications that rely on trajectory data to provide rich services to users, such as location-based social networks, location-based services, and mobile health, are becoming increasingly common in daily life. In such data-driven applications, a trajectory typically consists of a tag and a series of spatio-temporal points. Trajectory data come from many sources, such as GPS devices, mobile devices such as mobile phones, base stations, and even locations shared on social networks. With the development of data acquisition equipment and the accumulation of trajectory data, the large amount of information hidden in these data is attracting increasing attention from governments, enterprises, and research institutions.
Trajectory data can be used to provide better services to users: location-based services (LBS) can use a user's trajectories to recommend merchants that better match a query, or services that meet the user's needs, such as location-based dining, entertainment, and travel services. As mobile devices become an ever larger part of people's lives, mobile health has received more attention because of its convenience. In mobile health, doctors and patients do not need to meet face to face for every consultation; instead, the patient's location and other physiological information are obtained remotely through mobile devices, enabling long-term monitoring of the patient, which is of great help in the diagnosis and treatment of chronic diseases, mental illness, and other special groups. Trajectory data also have important applications in disaster management, where they support the early detection and containment of large-scale infectious diseases; in epidemic monitoring, for example, information that includes trajectory data allows an outbreak to be detected early. In addition, trajectory data are widely used in route recommendation, traffic condition prediction, taxi services, and similar applications.
However, much of this data is sensitive personal information. Trajectory data contain abundant spatio-temporal information: by analyzing large volumes of trajectories, private information such as a user's habits, frequently visited places, and social relations can be obtained, and an attacker can infer information such as home and work addresses, places of interest, or regular activity areas. In addition, the privacy tags of some users (such as diseases) are published together with their trajectories, forming trajectory data with privacy tags, so that an attacker who analyzes the private information in a trajectory may also be able to infer the corresponding tag. In the applications above, many privacy tags are published with the trajectory, belong to the same moving object as the trajectory data, or can be inferred from the trajectory data. Even after user identities have been correctly but simply anonymized, publishing trajectory data can still leak the user's personal information. This is because, given background knowledge such as frequently visited locations, the location data in these trajectories can act as a quasi-identifier for some users. The user's privacy, such as published sensitive tags (e.g., diseases) and location privacy, can then be exposed through a series of background-knowledge attacks.
de Montjoye et al. used a few points of a trajectory as background knowledge and showed experimentally that 95% of users can be uniquely identified from only four spatio-temporal points of their trajectory. The privacy protection problem in trajectory data publication has therefore become a major concern for both researchers and users.
Differential privacy was first proposed by Dwork in 2006. To overcome the shortcomings of partition-based privacy protection approaches, differential privacy attracted the attention of researchers and has become a research hotspot. Differential privacy makes no assumption about the background knowledge an attacker may possess, and is achieved by adding noise to the trajectory data. Differential-privacy-based protection methods can be divided into interactive and non-interactive methods. In the non-interactive setting, the data owner anonymizes the data and publishes an anonymized version for data analysis and data mining; the published data may not contain the full details of the original data but rather statistical information derived from it. In the interactive setting, the data owner or a trusted third party holds the data; a user who wants to use the data submits queries to an interface provided by the data owner and receives responses, rather than the data itself, and noise is added to each response to meet the privacy requirement. Only the non-interactive differential privacy setting for data publication is considered here.
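By way of background, the Laplace mechanism referred to above can be illustrated with the following minimal Python sketch. It is a generic illustration of the mechanism, not the patented method, and the sensitivity and budget values in the example are arbitrary.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a noisy value satisfying epsilon-differential privacy.

    The noise is drawn from a Laplace distribution with scale
    sensitivity / epsilon, the standard calibration of the Laplace mechanism.
    """
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: releasing a count query (sensitivity 1) with privacy budget 0.5
noisy_count = laplace_mechanism(true_value=128, sensitivity=1.0, epsilon=0.5)
```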
Disclosure of Invention
In order to effectively address the privacy leakage problem in publishing sensitive-tag trajectory data, the invention provides a graph-based differential privacy model (PTDP) that resists differential attacks on spatio-temporal points and on the sensitive attribute (SA) in sensitive-tag trajectory data publication. It comprises three stages: clustering and generalization of trajectories, graph-based spatio-temporal point differential privacy, and publication of trajectory data with privacy tags. After these three stages of processing, trajectory data with sensitive tags can be published safely while preserving high data utility.
The technical scheme of the invention is as follows:
Table 1 defines the variables commonly used in the present invention.
Table 1: commonly used variables and their descriptions (the table is rendered as an image in the original publication).
A sensitive-tag trajectory data publishing method using a graph differential privacy model comprises the following steps (a sketch of the assumed input data structure is given after the step listing):
(1) clustering and generalization of trajectories
(1.1) first, acquiring the original trajectory data set D, and then searching for candidate location regions containing hotspot locations and outliers using the DBSCAN algorithm; a hotspot location is a region where spatio-temporal points are relatively dense, and an outlier is a spatio-temporal point that is far from the other points in the data set or inconsistent with them under some metric;
(1.2) obtaining the outlier set O and the cluster set C of hotspot locations, and generalizing each specific location in the trajectory data by replacing the position of each spatio-temporal point in the original trajectory data with the center of its corresponding hotspot location;
(1.3) obtaining the generalized trajectory data set D' = {C, O};
(2) graph-based spatio-temporal point differential privacy
(2.1) establishing a noise graph: mapping the hotspots, the outliers, and the privacy tags (SA) to a directed weighted graph; all distinct SA values in D' are mapped to head vertices in G, where G is the graph to which D' is mapped; a head vertex is treated as the beginning of each record, and the vertex weight of each head vertex is the number of trajectories in the original data carrying that SA value;
(2.2) mapping each position in all the trajectories in D' to a trajectory vertex;
(2.3) setting different privacy budget values ε for different vertices; the ε of each head vertex is set according to the privacy degree of its SA value, the privacy degree is set according to the requirements of the data owners, and different ε values are set for different SA values;
for each trajectory vertex v, ε is determined by a vote of the data owners {v1, v2, v3, ..., vn} whose trajectories pass through v, according to the voting formula given in the original publication (rendered there as an image), where w_{e(vi, v)} denotes the weight of the edge between vi and v, w_{vi} denotes the vertex weight of vi, and ε_{vi} denotes the privacy budget of vi;
(2.4) generating the noise graph G': for each vertex vi, Laplace noise calibrated to its privacy budget ε_{vi} is added to its vertex weight w_{vi}, giving the noisy vertex weight w̃_{vi} = w_{vi} + Lap(1/ε_{vi}) (the formula is rendered as an image in the original publication);
(3) publishing trajectory data with privacy tags
(3.1) restoring each trajectory in D': for each trajectory in D', starting from a head node (i.e., an SA node), the SA of the record is determined, and the other vertices are then traversed along the edges of G' until a vertex with no outgoing edge is reached, at which point the trajectory is complete; while generating a trajectory, if the weight of a traversed vertex is greater than 0, it is decreased by 1, while the edge weights remain unchanged, and the generated trajectory has the same SA value and the same spatio-temporal points as the trajectory in the original D'; if a vertex weight was reduced by negative noise during noise addition and the weight of a vertex on the path reaches 0 while the trajectories are generated one by one, such vertices are deleted when generating the trajectory identical to the original D';
(3.2) generating trajectory data by a heuristic method: first, the head vertices whose vertex weights are non-zero are collected as a candidate set S, and the vertex vi with the largest vertex weight in S is selected heuristically; then, among the vertices reachable from vi, the one connected by the edge with the largest weight is selected; each time such a vertex is selected, its vertex weight is decreased by 1; the selection is repeated until the currently selected vertex has no outgoing edge, and the selected vertices whose weight is zero are then deleted, completing one trajectory; the trajectory generation process is repeated until no vertices remain in G';
(3.3) obtaining the anonymized data.
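By way of illustration, the input assumed by the above steps — a record carrying a sensitive attribute (SA) and a sequence of spatio-temporal points — could be represented as follows. The field names and example values are illustrative assumptions, not prescribed by the invention.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TaggedTrajectory:
    """One record of the original data set D: a sensitive attribute plus its trajectory."""
    sa: str                                   # sensitive attribute value, e.g. a disease label
    points: List[Tuple[float, float, float]]  # (latitude, longitude, timestamp) spatio-temporal points

# A toy data set D with two records
D = [
    TaggedTrajectory(sa="flu",      points=[(38.914, 121.614, 0.0), (38.920, 121.600, 1.0)]),
    TaggedTrajectory(sa="diabetes", points=[(38.880, 121.558, 0.0), (38.914, 121.614, 2.0)]),
]
```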
The invention has the following beneficial effects. Trajectory data can generate enormous social benefits: analysis and mining of trajectory data produce significant value in government decision-making, scientific research, medical treatment, traffic management, epidemic detection, and many other areas. Trajectory data can also be used to provide better, more convenient services to users, but this convenience brings a series of privacy leakage problems, because trajectory data are the private information of each user and much of it is sensitive. Such data must be processed before being published, and their utility must still be guaranteed. The sensitive-tag trajectory data publishing method based on a graph differential privacy model provided by the invention therefore effectively solves the privacy leakage problem in publishing sensitive-tag trajectory data while ensuring the utility of the data.
Drawings
Fig. 1 is an architecture diagram of the PTDP model according to the present invention.
FIG. 2 is a flow chart of the clustering and generalization of trajectories according to the present invention.
FIG. 3 is a flow chart of graph-based differential privacy in accordance with the present invention.
Fig. 4 is a flowchart of publishing track data with privacy tags according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail by examples and drawings.
A sensitive-tag trajectory data publishing method based on a graph differential privacy model comprises three stages: clustering and generalization of trajectories, graph-based spatio-temporal point differential privacy, and publication of trajectory data with privacy tags.
In the first stage, referring to fig. 2, the specific operation process of clustering and generalization of the trajectories is as follows:
Step 1: acquire the original trajectory data set D.
Step 2: search for candidate location regions containing hotspot locations and outliers using the DBSCAN algorithm. Hotspot locations are regions where spatio-temporal points are relatively dense, such as densely visited places like buildings (malls, hospitals, etc.) or road junctions. Outliers are spatio-temporal points that are far from the other points in the data set or inconsistent with them under some metric.
Step 3: obtain the outlier set O and the cluster set C of hotspot locations.
Step 4: after obtaining the outlier set O and the cluster set C, generalize the trajectories by replacing the position of each spatio-temporal point in the original trajectory data with the center of its corresponding hotspot location.
Step 5: obtain the generalized trajectory data set D' = {C, O}.
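Steps 1 to 5 could be sketched in Python using scikit-learn's DBSCAN as follows; the parameter values and the simple per-point generalization are illustrative assumptions rather than values fixed by the invention.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_and_generalize(points, eps=0.01, min_samples=5):
    """Steps 1-5 (sketch): find hotspot clusters and outliers with DBSCAN,
    then generalize every clustered point to the center of its hotspot.

    points: numpy array of shape (n, 2) holding the spatial coordinates of
            the spatio-temporal points in the original data set D.
    Returns the generalized points, the hotspot centers C and the outlier set O.
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)

    centers = {}                          # hotspot label -> cluster center
    for lab in set(labels) - {-1}:        # DBSCAN marks outliers with label -1
        centers[lab] = points[labels == lab].mean(axis=0)

    generalized = np.array([
        centers[lab] if lab != -1 else p  # outliers keep their exact position
        for p, lab in zip(points, labels)
    ])
    outliers = points[labels == -1]
    return generalized, centers, outliers
```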
In the second stage, referring to fig. 3, the specific operation process of the graph-based spatio-temporal point differential privacy is as follows:
Before the specific steps of this stage are introduced, the following concepts are needed. The spatio-temporal points and the SA values in D' are both mapped to a vertex set V; each vertex is associated with the number of spatio-temporal points or the number of SA values in D', called its vertex weight. The transitions between these points are mapped to the edge set E of the graph, and an edge weight represents the number of transitions between the two vertices in D'.
Step 6: establish the noise graph and map all distinct SA values in D' to head vertices in G. A head vertex is also treated as the beginning of each record. The vertex weight of each head vertex is the number of trajectories in the original data carrying that SA value.
Step 7: map each position in all the trajectories in D' to a trajectory vertex, where the positions are the hotspots and outliers obtained by the clustering in step 2. The vertex weight of a trajectory vertex is the number of occurrences of that spatio-temporal point in all trajectories. An edge between two vertices vi and vj means that some trajectory contains a transition from vi to vj, and its edge weight is the total number of such transitions across all trajectories.
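Steps 6 and 7 could be sketched as follows: each SA value becomes a head vertex, each generalized location becomes a trajectory vertex, and every consecutive pair in a record increments the weight of the corresponding directed edge. The dictionary-based graph representation is an assumption made for illustration only.

```python
from collections import defaultdict

def build_graph(records):
    """Steps 6-7 (sketch): map generalized records to a directed weighted graph G.

    records: iterable of (sa_value, [loc1, loc2, ...]) pairs from D', where each
             location is hashable, e.g. a (lat, lon) tuple.
    Returns (vertex_weight, edge_weight) dictionaries: SA values become head
    vertices whose weight counts the records carrying that SA value, locations
    become trajectory vertices counting occurrences, and edge weights count
    transitions.
    """
    vertex_weight = defaultdict(int)
    edge_weight = defaultdict(int)

    for sa, locations in records:
        head = ("SA", sa)                       # head vertex for this record's SA value
        vertex_weight[head] += 1
        previous = head
        for loc in locations:
            v = ("LOC", loc)                    # trajectory vertex for a generalized location
            vertex_weight[v] += 1
            edge_weight[(previous, v)] += 1     # directed transition previous -> v
            previous = v

    return vertex_weight, edge_weight
```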
Step 8: set different privacy budget values ε for different vertices. The ε of each head vertex is set according to the privacy degree of its SA value, and different ε values are set for different SA values according to the requirements of the data owner.
For each trajectory vertex v, its ε cannot be determined by a single user, since each trajectory vertex involves multiple users. Therefore, ε is determined by a vote of the data owners {v1, v2, v3, ..., vn} whose trajectories pass through v, according to the voting formula given in the original publication (rendered there as an image), where w_{e(vi, v)} denotes the weight of the edge between vi and v, w_{vi} denotes the vertex weight of vi, and ε_{vi} denotes the privacy budget of vi.
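Because the voting formula itself appears only as an image in the original filing, the following sketch uses an edge-weight-weighted average of the voters' budgets as a plausible stand-in; it should not be read as the exact patented formula.

```python
def vote_epsilon(voters):
    """Step 8 (sketch): combine the privacy budgets of the data owners
    v_1, ..., v_n whose trajectories pass through a trajectory vertex v.

    voters: list of (edge_weight, vertex_weight, epsilon) triples, one per v_i,
            i.e. w_{e(v_i, v)}, w_{v_i} and eps_{v_i} as defined above.
    ASSUMPTION: the patented voting formula is only available as an image, so
    an edge-weight-weighted average of the voters' budgets is used here as a
    stand-in; the vertex weights are carried along but unused.
    """
    total = sum(w_e for w_e, _, _ in voters)
    if total == 0:
        raise ValueError("trajectory vertex has no incident voting edges")
    return sum(w_e * eps for w_e, _, eps in voters) / total

# Example: three voters with edge weights 3, 1, 2 and budgets 0.5, 1.0, 0.8
eps_v = vote_epsilon([(3, 10, 0.5), (1, 4, 1.0), (2, 7, 0.8)])
```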
Step 9: generate the noise graph G'. For each vertex vi, Laplace noise calibrated to its privacy budget ε_{vi} is added to its vertex weight w_{vi}, giving the noisy vertex weight w̃_{vi} = w_{vi} + Lap(1/ε_{vi}) (the formula is rendered as an image in the original publication).
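Step 9 could be sketched as follows, adding Laplace noise with scale 1/ε_{vi} to each vertex weight; the unit sensitivity is an assumption consistent with count-valued weights.

```python
import numpy as np

def add_noise_to_weights(vertex_weight, vertex_epsilon, rng=None):
    """Step 9 (sketch): perturb every vertex weight of G with Laplace noise,
    producing the noisy graph G'.

    vertex_weight:  dict vertex -> weight w_{v_i}
    vertex_epsilon: dict vertex -> privacy budget eps_{v_i}
    ASSUMPTION: noise scale 1/eps_{v_i}, i.e. unit sensitivity for the
    count-valued vertex weights.
    """
    rng = rng or np.random.default_rng()
    return {
        v: w + rng.laplace(loc=0.0, scale=1.0 / vertex_epsilon[v])
        for v, w in vertex_weight.items()
    }
```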
in the third stage, referring to fig. 4, a specific process of publishing track data with a privacy tag is as follows:
and 10, restoring each track in the D'. For each track in D ', starting with a head node (namely an SA node), the SA of each record is determined, and then traversing other vertexes through the edge in G' until a vertex without an edge is reached, so that the track is determined completely. When generating the track, if the vertex weight of the vertex is greater than 0, the weight of the vertex passed through in the traversal process is reduced by 1, while the weight of the edge is unchanged, and the generated track has the same SA value and the same space-time point as the track in the original D'. However, there is also a special case that if the vertex weight is reduced due to the negative noise at the time of noise addition, and the vertex weight of the vertices existing on the path may be 0 during the trace-by-trace generation, these vertices will be deleted when the same trace as in the original D' is generated.
Step 11: generate trajectory data by a heuristic method. First, the head vertices whose vertex weights are non-zero are collected as a candidate set S, and the vertex vi with the largest vertex weight in S is selected heuristically. Then, among the vertices reachable from vi, the one connected by the edge with the largest weight is selected. Each time such a vertex is selected, its vertex weight is decreased by 1. This selection is repeated until the currently selected vertex has no outgoing edge. At that point, after the selected vertices whose weight is zero are deleted, a complete trajectory has been generated. The trajectory generation process is repeated until no vertices remain in G'.
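The heuristic generation of step 11 could be sketched as follows; the adjacency representation, the ("SA", value) head-vertex convention from the earlier sketch, and the immediate deletion of exhausted vertices are illustrative simplifications.

```python
def generate_trajectories(vertex_weight, edge_weight):
    """Step 11 (sketch): regenerate trajectories greedily from the noisy graph G'.

    vertex_weight: dict vertex -> (noisy) weight; head vertices are the
                   ("SA", value) tuples used in the graph-construction sketch.
    edge_weight:   dict (u, v) -> weight of the directed edge u -> v.
    """
    successors = {}                              # vertex -> [(edge weight, successor)]
    for (u, v), w in edge_weight.items():
        successors.setdefault(u, []).append((w, v))

    weights = dict(vertex_weight)
    trajectories = []
    while True:
        heads = [v for v in weights if v[0] == "SA" and weights[v] > 0]
        if not heads:                            # no head vertex left to start from
            break
        current = max(heads, key=lambda v: weights[v])
        trajectory = []
        while True:
            weights[current] -= 1                # every visited vertex loses weight 1
            trajectory.append(current)
            if weights[current] <= 0:
                del weights[current]             # drop vertices whose weight is exhausted
            choices = [(w, v) for w, v in successors.get(current, []) if v in weights]
            if not choices:                      # current vertex has no usable outgoing edge
                break
            current = max(choices, key=lambda wv: wv[0])[1]   # follow the heaviest edge
        trajectories.append(trajectory)
    return trajectories
```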
Step 12: obtain the anonymized data.

Claims (1)

1. A sensitive-tag trajectory data publishing method based on a graph differential privacy model, characterized by comprising the following specific steps:
(1) clustering and generalization of trajectories
(1.1) first, acquiring the original trajectory data set D, and then searching for candidate location regions containing hotspot locations and outliers using the DBSCAN algorithm; a hotspot location is a region where spatio-temporal points are relatively dense, and an outlier is a spatio-temporal point that is far from the other points in the data set or inconsistent with them under some metric;
(1.2) obtaining the outlier set O and the cluster set C of hotspot locations, and generalizing each specific location in the trajectory data by replacing the position of each spatio-temporal point in the original trajectory data with the center of its corresponding hotspot location;
(1.3) obtaining the generalized trajectory data set D' = {C, O};
(2) graph-based spatio-temporal point differential privacy
(2.1) establishing a noise graph: mapping the hotspots, the outliers, and the privacy tags (SA) to a directed weighted graph; all distinct SA values in D' are mapped to head vertices in G, where G is the graph to which D' is mapped; a head vertex is treated as the beginning of each record, and the vertex weight of each head vertex is the number of trajectories in the original data carrying that SA value;
(2.2) mapping each position in all the trajectories in D' to a trajectory vertex;
(2.3) setting different privacy budget values ε for different vertices; the ε of each head vertex is set according to the privacy degree of its SA value, the privacy degree is set according to the requirements of the data owners, and different ε values are set for different SA values;
for each trajectory vertex v, ε is determined by a vote of the data owners {v1, v2, v3, ..., vn} whose trajectories pass through v, according to the voting formula given in the original publication (rendered there as an image), wherein w_{e(vi, v)} denotes the weight of the edge between vi and v, w_{vi} denotes the vertex weight of vi, and ε_{vi} denotes the privacy budget of vi;
(2.4) generating the noise graph G': for each vertex vi, Laplace noise calibrated to its privacy budget ε_{vi} is added to its vertex weight w_{vi}, giving the noisy vertex weight w̃_{vi} = w_{vi} + Lap(1/ε_{vi}) (the formula is rendered as an image in the original publication);
(3) publishing trajectory data with privacy tags
(3.1) restoring each trajectory in D': for each trajectory in D', starting from a head node (i.e., an SA node), the SA of the record is determined, and the other vertices are then traversed along the edges of G' until a vertex with no outgoing edge is reached, at which point the trajectory is complete; while generating a trajectory, if the weight of a traversed vertex is greater than 0, it is decreased by 1, while the edge weights remain unchanged, and the generated trajectory has the same SA value and the same spatio-temporal points as the trajectory in the original D'; if a vertex weight was reduced by negative noise during noise addition and the weight of a vertex on the path reaches 0 while the trajectories are generated one by one, such vertices are deleted when generating the trajectory identical to the original D';
(3.2) generating trajectory data by a heuristic method: first, the head vertices whose vertex weights are non-zero are collected as a candidate set S, and the vertex vi with the largest vertex weight in S is selected heuristically; then, among the vertices reachable from vi, the one connected by the edge with the largest weight is selected; each time such a vertex is selected, its vertex weight is decreased by 1; the selection is repeated until the currently selected vertex has no outgoing edge, and the selected vertices whose weight is zero are then deleted, completing one trajectory; the trajectory generation process is repeated until no vertices remain in G';
(3.3) obtaining the anonymized data.
CN202010164862.3A 2020-03-11 2020-03-11 Sensitive tag track data publishing method using graph differential privacy model Active CN111353173B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010164862.3A CN111353173B (en) 2020-03-11 2020-03-11 Sensitive tag track data publishing method using graph differential privacy model

Publications (2)

Publication Number Publication Date
CN111353173A (en) 2020-06-30
CN111353173B CN111353173B (en) 2022-09-20

Family

ID=71197339

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010164862.3A Active CN111353173B (en) Sensitive tag track data publishing method using graph differential privacy model

Country Status (1)

Country Link
CN (1) CN111353173B (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110750806A (en) * 2019-07-16 2020-02-04 黑龙江省科学院自动化研究所 TP-MFSA (TP-Multi-function document analysis) inhibition release-based high-dimensional position track data privacy protection release system and method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Feng Dengguo et al., "Research on location trajectory publication technology based on the differential privacy model", Journal of Electronics & Information Technology *
Liu Xiaoqian et al., "Differentially private data publishing method based on clustering anonymization", Journal on Communications *
Xu Zhenqiang et al., "Research progress on privacy protection techniques for trajectory data publication", Journal of Geomatics Science and Technology *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114092729A (en) * 2021-09-10 2022-02-25 南方电网数字电网研究院有限公司 Heterogeneous electricity consumption data publishing method based on cluster anonymization and differential privacy protection
CN113869384A (en) * 2021-09-17 2021-12-31 大连理工大学 Privacy protection image classification method based on domain self-adaption
CN113869384B (en) * 2021-09-17 2024-05-10 大连理工大学 Privacy protection image classification method based on field self-adaption
CN114021191A (en) * 2021-11-05 2022-02-08 江苏安泰信息科技发展有限公司 Safe production informatization sensitive data management method and system
CN114626033A (en) * 2022-03-07 2022-06-14 福建中信网安信息科技有限公司 Implementation method and terminal of data security room

Also Published As

Publication number Publication date
CN111353173B (en) 2022-09-20

Similar Documents

Publication Publication Date Title
CN111353173B (en) Sensitive tag track data publishing method using graph differential privacy model
Hou et al. Survey on data analysis in social media: A practical application aspect
Shaham et al. Privacy preserving location data publishing: A machine learning approach
Ghasemzadeh et al. Anonymizing trajectory data for passenger flow analysis
Al-Hussaeni et al. Privacy-preserving trajectory stream publishing
Cunningham et al. Real-world trajectory sharing with local differential privacy
Yang et al. Local trajectory privacy protection in 5G enabled industrial intelligent logistics
Pellet et al. Localising social network users and profiling their movement
US9635507B2 (en) Mobile device analytics
Sui et al. A study of enhancing privacy for intelligent transportation systems: $ k $-correlation privacy model against moving preference attacks for location trajectory data
Xu et al. Sume: Semantic-enhanced urban mobility network embedding for user demographic inference
To et al. A Hilbert-based framework for preserving privacy in location-based services
Buchel et al. Geospatial analysis
Houfaf-Khoufaf et al. Geographically masking addresses to study COVID-19 clusters
Yan et al. Perturb and optimize users’ location privacy using geo-indistinguishability and location semantics
Zhao et al. A Privacy‐Preserving Trajectory Publication Method Based on Secure Start‐Points and End‐Points
Olawoyin et al. Privacy preservation of COVID-19 contact tracing data
Ho et al. Clustering indoor location data for social distancing and human mobility to combat COVID-19
Sai et al. User motivation based privacy preservation in location based social networks
Liu et al. Trajectory privacy data publishing scheme based on local optimisation and R-tree
Yang et al. P4mobi: A probabilistic privacy-preserving framework for publishing mobility datasets
Lin Geo-indistinguishable masking: enhancing privacy protection in spatial point mapping
Li et al. Spatial data analysis for intelligent buildings: Awareness of context and data uncertainty
Eusuf et al. A Web-based system for efficient contact tracing query in a large spatio-temporal database
Liu et al. Integration of museum user behavior information based on wireless network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant