CN110457940B - Differential privacy measurement method based on graph theory and mutual information quantity - Google Patents
Differential privacy measurement method based on graph theory and mutual information quantity
- Publication number
- CN110457940B
- Authority
- CN
- China
- Prior art keywords
- graph
- privacy
- information
- differential privacy
- data set
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/62—Protecting access to data via a platform, e.g. using keys or access control rules
- G06F21/6218—Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
- G06F21/6227—Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database where protection concerns the structure of data, e.g. records, types, queries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/62—Protecting access to data via a platform, e.g. using keys or access control rules
- G06F21/6218—Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
- G06F21/6245—Protecting personal data, e.g. for financial or medical purposes
Abstract
The invention discloses a differential privacy measurement method based on graph theory and mutual information quantity. The method recasts the differential privacy protection framework with an information-theoretic communication model: an information communication model of differential privacy is constructed in which the original data set is the information source, the published data set is the information sink, and the query mechanism together with the noise mechanism forms the communication channel. On top of this communication model, a mutual-information calculation method for privacy leakage is given by exploiting the properties of graphs together with information entropy. The resulting bound on privacy leakage depends only on the number of attributes, the number of attribute values, and the differential privacy budget parameter of the original data set, and it holds for arbitrarily distributed original data sets and adversaries with arbitrary attack capability. The proposed measurement method thus provides a mutual-information upper bound on the privacy leakage of differential privacy protection, imposes few restrictions, applies to all channels, and does not depend on the distribution of the original data set.
Description
Technical Field
The invention relates to the technical field of information security, in particular to a differential privacy measurement method based on graph theory and mutual information quantity.
Background
The arrival of the big data era and the popularization of the mobile internet have created enormous commercial and social value, but have also raised widespread concern about privacy. Data collection, storage, and mining have become more covert and diversified, so privacy leakage and privacy theft occur more frequently and cause greater harm. On the one hand, a data owner who publishes data containing private information without any protective processing may leak personal privacy; on the other hand, a malicious attacker can steal sensitive information from published data using mature technologies such as data mining. Solving the privacy leakage problem is therefore urgent.
Data privacy protection has been studied for a long time and can be traced back to the notion of database privacy proposed by the statistician Dalenius in 1977, who argued that an attacker accessing a database should not be able to obtain exact information about any individual, even with background knowledge. Under this definition, corresponding privacy protection models and methods were proposed in succession. Early privacy protection technologies were mainly based on anonymity models, whose basic idea is to hide a record within a group of records: the quasi-identifiers of the records are anonymized so that all records are partitioned into several equivalence classes. Although traditional anonymity models and their derived algorithms can protect personal privacy to a certain extent, they cannot resist background-knowledge attacks, homogeneity attacks, or similarity attacks. In 2006, Dwork of Microsoft Research proposed the concept of differential privacy protection, which assumes the strongest possible background-knowledge attack in its model and ensures that neighboring data sets differing in at most one record are probabilistically indistinguishable in their outputs.
Differential privacy protection is a privacy protection technology based on data distortion: privacy is protected by adding noise disturbance to the original data set or to statistical results, while certain data attributes or statistical properties of the data set are kept unchanged. Differential privacy protection techniques ensure that changing a single record in a data set does not noticeably affect query results, so queries on neighboring data sets remain probabilistically indistinguishable even if the attacker has unlimited background knowledge.
Differential privacy protection can be divided into two broad categories according to the implementation environment: interactive differential privacy and non-interactive differential privacy. In the interactive mechanism, a user submits a query request through a query interface, the data owner runs the query on the original data set, adds noise disturbance to the result, and returns the noisy result to the user. In the non-interactive mechanism, the data manager directly publishes a data set that satisfies differential privacy protection, and user queries are then answered on the published data set.
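For illustration, the following is a minimal sketch of an interactive ε-differentially private count query using the Laplace mechanism; the data set, predicate, and function names are assumptions made for this example and are not part of the claimed method.

```python
import numpy as np

def laplace_count_query(dataset, predicate, epsilon):
    """Answer a count query under epsilon-differential privacy.

    A count query has sensitivity 1 (adding or removing one record changes
    the count by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for record in dataset if predicate(record))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Interactive use: the data owner answers a submitted query with a noisy result.
dataset = [{"age": 34}, {"age": 47}, {"age": 52}, {"age": 29}]
noisy_answer = laplace_count_query(dataset, lambda r: r["age"] >= 40, epsilon=0.5)
print(noisy_answer)
```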
The privacy budget parameter ε of differential privacy represents the strength of privacy protection, but its selection is highly dependent on experience, and an effective information-theoretic quantification method for assessing the differential privacy strength and the amount of privacy leakage in advance is still lacking. How to quantify the privacy leakage with information-theoretic methods, and how to quantify an upper bound on the degree of differential privacy protection for a given data set, are therefore key to optimizing differential privacy algorithms and designing privacy risk assessment schemes.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a differential privacy measurement method based on graph theory and mutual information quantity that addresses the difficulties in quantifying privacy leakage in a differential privacy protection mechanism: (1) at present, the strength and effect of differential privacy protection can only be evaluated a posteriori and depend heavily on an empirically selected privacy budget parameter ε, so it is difficult to quantify the protection strength and the amount of privacy leakage in advance; (2) in a differential privacy protection mechanism, once the privacy budget ε is exhausted, differential privacy protection is destroyed and the privacy protection algorithm loses its meaning. Existing privacy measurement methods are mainly based on information-entropy privacy metric models; how to combine Shannon information theory with differential privacy to quantify the privacy leakage of a differential privacy protection mechanism, and to prove an upper bound on that leakage, is the main problem solved by the invention.
The invention is realized as follows: the differential privacy measurement method based on graph theory and mutual information quantity comprises the following steps:
step 1: firstly, reconstructing a differential privacy protection framework by using an information theory communication model, constructing an information communication model of differential privacy, expressing an original data set in a differential privacy protection mechanism as an information source, expressing a released data set as an information sink, and expressing the differential privacy protection mechanism as a communication channel;
step 2: constructing a privacy quantification model, in which the communication channel of the differential privacy communication model is modelled as a query mechanism followed by a noise mechanism;
step 3: regarding the information source and the information sink as graph structures, and regarding the channel transfer matrix as a composite graph of the source graph and the sink graph;
step 4: converting the channel matrix M into a maximum diagonal matrix M' by moving the maximum element of each of the first n rows of M onto the diagonal, whereby M' still satisfies ε-differential privacy and the conditional entropy H(X|Y) between the original data set and the published data set is unchanged;
step 5: transforming the maximum diagonal matrix M' into a Hamming matrix M″ on the basis of the distance regularity and vertex transitivity of the graph, so that the elements on the diagonal are all equal to the maximum element of the matrix, while ε-differential privacy and the conditional entropy H(X|Y) between the original data set and the published data set remain unchanged;
step 6: using the automorphism and adjacency relations of the graph, proving by inequality scaling that the privacy leakage of the differential privacy protection mechanism has an upper bound, and giving a formula for calculating that upper bound.
In step 2, in the privacy quantification model based on the communication mechanism, the original data set of the differential privacy protection mechanism is represented as the information source, the published data set as the information sink, and the query mechanism together with the noise mechanism as the communication channel.
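Under this communication-model view, the privacy leakage quantified by the metric model is the mutual information between the source and the sink; for reference, with X denoting the original data set and Y the published data set, the standard identity is

```latex
I(X;Y) \;=\; H(X) \;-\; H(X \mid Y)
       \;=\; \sum_{x,y} p(x,y)\,\log_2 \frac{p(x,y)}{p(x)\,p(y)}
```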
The privacy quantification method based on graph theory in steps 4 and 5 uses the automorphism, vertex-transitivity, and distance-regularity properties of the graph to give an upper bound on privacy leakage. An automorphism of a graph G is a permutation σ on the vertex set V(G) such that for any vertices v, v' ∈ V(G), v ~ v' if and only if σ(v) ~ σ(v'). The graph G is vertex-transitive if for any vertices v, v' ∈ V(G) there exists an automorphism σ such that σ(v) = v'. The graph G is distance-regular if there exist integers b_d and c_d (d ∈ {0, 1, ..., d_max}) such that for any vertices v, v' with d(v, v') = d, the vertex v' has b_d neighbours in the set V_⟨d+1⟩(v) and c_d neighbours in the set V_⟨d-1⟩(v), where V_⟨k⟩(v) denotes the set of vertices at distance k from v.
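As an illustration of these properties (a sketch under assumed parameters u and v, not part of the claimed method), the Hamming graph whose vertices are the v^u possible records of a data set with u attributes, each taking v values, is vertex-transitive and distance-regular: the number of vertices at distance d from any fixed vertex is N_d = C(u,d)·(v−1)^d, which the following code verifies by brute force.

```python
from itertools import product
from math import comb

def hamming_distance(x, y):
    """Number of coordinates in which two u-tuples differ."""
    return sum(a != b for a, b in zip(x, y))

u, v = 3, 4                                   # u attributes, each taking v values
vertices = list(product(range(v), repeat=u))  # all v**u possible records

for base in (vertices[0], vertices[-1]):      # any base vertex gives the same counts
    counts = {}
    for other in vertices:
        d = hamming_distance(base, other)
        counts[d] = counts.get(d, 0) + 1
    for d in range(u + 1):
        assert counts[d] == comb(u, d) * (v - 1) ** d   # N_d = C(u, d) * (v - 1)^d
print("N_d = C(u,d)*(v-1)^d holds for every distance d")
```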
According to the method, the differential privacy protection framework is recast with an information-theoretic communication model: an information communication model of differential privacy is constructed in which the original data set is the information source, the published data set is the information sink, and the query mechanism together with the noise mechanism forms the communication channel. The source and the sink are further regarded as graphs, the channel transfer matrix is regarded as a composite graph of the source graph and the sink graph, and the channel transfer matrix is converted into a Hamming graph on the basis of distance regularity and vertex transitivity, yielding a mutual-information quantification method for the privacy leakage of differential privacy. Using the automorphism and adjacency relations of the graph, the privacy leakage of the differential privacy protection mechanism is proved, by inequality scaling, to have an upper bound, and a formula for calculating this upper bound is given. The proposed differential privacy measurement model is based on the information communication model; a mutual-information calculation method for privacy leakage is given by exploiting the properties of graphs together with information entropy; and the bound on the privacy leakage depends only on the number of attributes, the number of attribute values, and the differential privacy budget parameter of the original data set, holding for arbitrarily distributed original data sets and adversaries with arbitrary attack capability.
Compared with traditional a posteriori evaluation methods that select the privacy budget empirically, the combination of graph theory and mutual information in this scheme not only provides a concrete privacy metric and quantification method, but also addresses the upper bound of privacy leakage under queries: using the automorphism structure and adjacency relations of the graph, the privacy leakage of the differential privacy protection mechanism is proved, by inequality scaling, to have an upper bound, and a calculation formula for this upper bound is given. Analysis shows that the differential privacy measurement method provides a mutual-information upper bound on the privacy leakage of differential privacy protection, imposes few restrictions, applies to all channels, and does not depend on the distribution of the original data set.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a diagram of a differential privacy metric model of the present invention;
fig. 3 is a diagram of channel matrix transformation in a communication model of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings and embodiments.
An embodiment of the invention provides a differential privacy measurement method based on graph theory and mutual information quantity; its technical workflow is shown in Fig. 1. The method comprises six steps: constructing the differential privacy channel model and the privacy quantification model, giving the measurement method based on graph theory and mutual information, proving that the differential privacy protection mechanism has an upper bound on privacy leakage, and giving the calculation formula.
The channel matrix transformation in the communication model is shown in Fig. 3 and includes two steps: first, the channel matrix M is converted into a maximum diagonal matrix M' by moving the maximum element of each of the first n rows of M onto the diagonal; second, on the basis of the distance regularity and vertex transitivity of the graph, the maximum diagonal matrix M' is transformed into a Hamming matrix M″ such that the elements on the diagonal are all equal to the maximum element of the matrix.
The differential privacy measurement model based on graph theory and mutual information quantity is shown in Fig. 2. Based on this measurement model, the privacy leakage between the original data set and the published data set is quantified, and the maximum privacy leakage of the published data set about the original data set is denoted ML. First, the original data set and the published data set are regarded as undirected graphs, the channel matrix (channel graph) is constructed from the two graph structures, and the channel matrix M is converted, using the distance regularity and vertex transitivity of the graph, into a Hamming matrix (Hamming graph). Then a lower bound on the conditional entropy between the original data set and the published data set is proved from the adjacency relations of the Hamming matrix and is further obtained through the symmetry and automorphisms of the Hamming matrix; finally, the upper bound on the privacy leakage for an arbitrarily distributed input data set is calculated by the mutual information calculation method.
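The matrix view can be illustrated with a minimal numerical sketch (the parameters are assumptions, and randomized response merely stands in for a generic ε-differentially private channel; this is not the claimed construction): permuting the columns of the channel matrix so that each row maximum lies on the diagonal leaves the conditional entropy H(X|Y) unchanged.

```python
import numpy as np

def conditional_entropy(channel, prior):
    """H(X|Y) in bits for a channel matrix M[i, j] = Pr(Y=j | X=i) and a prior on X."""
    joint = prior[:, None] * channel          # Pr(X=i, Y=j)
    p_y = joint.sum(axis=0)                   # Pr(Y=j)
    post = np.divide(joint, p_y, out=np.zeros_like(joint), where=p_y > 0)  # Pr(X=i | Y=j)
    terms = np.where(joint > 0, joint * np.log2(post), 0.0)
    return -float(terms.sum())

eps, k = 1.0, 4
# Randomized response over k symbols satisfies eps-differential privacy:
# within every column, any two entries differ by a factor of at most e^eps.
M = np.full((k, k), 1.0 / (k - 1 + np.exp(eps)))
np.fill_diagonal(M, np.exp(eps) / (k - 1 + np.exp(eps)))

# Scramble the output labels, then permute the columns so that each row maximum
# lies on the diagonal (the "maximum diagonal matrix" M' of step 4).
rng = np.random.default_rng(0)
scrambled = M[:, rng.permutation(k)]
M_prime = scrambled[:, scrambled.argmax(axis=1)]

prior = np.full(k, 1.0 / k)  # uniform prior over the inputs
print(conditional_entropy(M, prior), conditional_entropy(M_prime, prior))
# Both values are equal: relabelling outputs (a column permutation) keeps H(X|Y).
```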
By the definition of conditional entropy,

H(X|Y) = -Σ_j p(y_j) Σ_i p(x_i|y_j) log₂ p(x_i|y_j).

From the definition of information entropy and the maximum-entropy property of the uniform distribution, and because every element satisfies M″_{i,j} ≤ max_{M″}, where max_{M″} denotes the largest element of the Hamming matrix M″, it follows that

H_{M″}(X|Y) ≥ -log₂ max_{M″}.

Since converting the channel matrix into the Hamming matrix leaves the conditional entropy between the original data set and the published data set unchanged, i.e. H_M(X|Y) = H_{M″}(X|Y), we obtain

H(X|Y) ≥ -log₂ max_{M″}.

By the extended definition of differential privacy, if the channel matrix M″ satisfies ε-differential privacy, then for any column j and any pair of rows i and h (i ≠ h),

M″_{h,j} ≤ e^{ε·d(i,h)} · M″_{i,j},

where d(i,h) is the graph distance between the records indexed by i and h. Taking h = j, the elements on the diagonal of M″ are all equal to the maximum element, so for every element M″_{i,j},

max_{M″} ≤ e^{ε·d(i,j)} · M″_{i,j}.

Because every row of M″ is a probability distribution, Σ_j M″_{i,j} = 1, and therefore

Σ_j max_{M″} · e^{-ε·d(i,j)} ≤ Σ_j M″_{i,j} = 1.

Grouping the elements by their graph distance from vertex i, with X_⟨d⟩(i) denoting the set of vertices at distance d from i and S_G the set of distances occurring in the graph G,

Σ_j e^{-ε·d(i,j)} = Σ_{d∈S_G} |X_⟨d⟩(i)| · e^{-ε·d},

and rearranging the inequality gives

max_{M″} ≤ 1 / Σ_{d∈S_G} |X_⟨d⟩(i)| · e^{-ε·d}.

If the input graph structure of the communication model is distance-regular and vertex-transitive, then for each d ∈ S_G the value |X_⟨d⟩(i)| is the same for every vertex i and depends only on d; it is denoted N_d, i.e. N_d = |X_⟨d⟩(i)|. Therefore

max_{M″} ≤ 1 / Σ_{d∈S_G} N_d · e^{-ε·d}.

Every element j at distance d from the element represented by the u-tuple i is obtained by changing the values of exactly d individuals in that u-tuple. There are C(u,d) ways to choose the d individuals to change and (v-1) possible new values for each, so

N_d = C(u,d) · (v-1)^d.

Then

Σ_{d=0}^{u} N_d · e^{-ε·d} = Σ_{d=0}^{u} C(u,d) · ((v-1)·e^{-ε})^d = (1 + (v-1)·e^{-ε})^u,

and therefore

max_{M″} ≤ 1 / (1 + (v-1)·e^{-ε})^u.

Combining this with H(X|Y) ≥ -log₂ max_{M″} yields

H(X|Y) ≥ u · log₂ (1 + (v-1)·e^{-ε}).

When the probability distribution of the original data set is uniform, the information entropy attains its maximum, i.e. H(X) = log₂ n = log₂ v^u = u·log₂ v. By the definition of mutual information,

I(X;Y) = H(X) − H(X|Y) ≤ u·log₂ v − u·log₂(1 + (v-1)·e^{-ε}) = u·log₂ ( v·e^ε / (e^ε + v − 1) ),

so the privacy leakage ML of the differential privacy protection mechanism is bounded above by u·log₂ ( v·e^ε / (e^ε + v − 1) ).
It follows from the above proof that when the probability distribution of the original data set is uniform, the original data set has the maximum information entropy and the mutual-information leakage is largest; the result therefore still holds when the original data set follows an arbitrary distribution, so the upper bound on the mutual information is valid for any distribution over the original data set. Furthermore, since the differential privacy mechanism is still satisfied in the proposed model, the mutual-information bound is valid for any background knowledge that an adversary may have.
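To make the bound concrete, the following sketch evaluates the closed form u·log₂(v·e^ε/(e^ε + v − 1)) derived above and compares it with the exact mutual information of one ε-differentially private channel under a uniform prior; the parameter values and the comparison mechanism (randomized response applied independently to each of the u attributes) are illustrative assumptions, not part of the claims.

```python
import numpy as np
from itertools import product

def leakage_upper_bound(u, v, eps):
    """Upper bound on I(X;Y) in bits: u * log2(v * e^eps / (e^eps + v - 1))."""
    return u * np.log2(v * np.exp(eps) / (np.exp(eps) + v - 1))

def mutual_information(channel, prior):
    """Exact I(X;Y) in bits for channel M[i, j] = Pr(Y=j | X=i) and a prior on X."""
    joint = prior[:, None] * channel
    p_y = joint.sum(axis=0)
    ratio = joint / (prior[:, None] * p_y[None, :])
    return float(np.where(joint > 0, joint * np.log2(ratio), 0.0).sum())

u, v, eps = 3, 4, 1.0
# Per-attribute randomized response: each of the u attribute values is kept
# with probability e^eps / (e^eps + v - 1), otherwise replaced by another value.
Z = np.exp(eps) + v - 1
rr = np.full((v, v), 1.0 / Z)
np.fill_diagonal(rr, np.exp(eps) / Z)

records = list(product(range(v), repeat=u))  # all v**u possible records
M = np.array([[np.prod([rr[a, b] for a, b in zip(x, y)]) for y in records]
              for x in records])             # rows at graph distance d differ by <= e^(eps*d)
prior = np.full(len(records), 1.0 / len(records))  # uniform (entropy-maximizing) prior

print(mutual_information(M, prior), "<=", leakage_upper_bound(u, v, eps))
```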
The present invention is described in detail above with reference to the specific drawings, which are not to be construed as limiting the invention. Many variations and modifications may be made by one of ordinary skill in the art without departing from the principles of the present invention, which should also be considered within the scope of the present invention.
Claims (3)
1. A differential privacy measurement method based on graph theory and mutual information quantity, characterized by comprising the following steps:
step 1: firstly, reconstructing a differential privacy protection framework by using an information theory communication model, constructing an information communication model of differential privacy, representing an original data set in a differential privacy protection mechanism as an information source, representing a released data set as an information sink, and representing the differential privacy protection mechanism as a communication channel;
step 2: constructing a privacy quantification model, in which the communication channel of the differential privacy communication model is modelled as a query mechanism followed by a noise mechanism;
step 3: regarding the information source and the information sink as graph structures, and regarding the channel transfer matrix as a composite graph of the source graph and the sink graph;
step 4: converting the channel matrix M into a maximum diagonal matrix M' by moving the maximum element of each of the first n rows of M onto the diagonal, whereby M' still satisfies ε-differential privacy and the conditional entropy H(X|Y) between the original data set and the published data set is unchanged;
step 5: transforming the maximum diagonal matrix M' into a Hamming matrix M″ on the basis of the distance regularity and vertex transitivity of the graph, so that the elements on the diagonal are all equal to the maximum element of the matrix, while ε-differential privacy and the conditional entropy H(X|Y) between the original data set and the published data set remain unchanged, a graph G being vertex-transitive if for any vertices v, v' ∈ V(G) there exists an automorphism σ such that σ(v) = v';
step 6: using the automorphism and adjacency relations of the graph, proving by inequality scaling that the privacy leakage of the differential privacy protection mechanism has an upper bound, and giving a formula for calculating that upper bound;
defined by conditional entropy:
then the method is obtained by the definition of information entropy and the principle of uniformly distributing maximum entropy
Also due to M i,j ≤max M″ Therefore, it is
After the channel matrix is converted into the hamming matrix, the conditional entropy between the original data set and the transmitted data set is unchanged, i.e. H M (X|Y)=H M″ (X | Y), so
H(X|Y)≥-log 2 max M
As defined by the extended definition of differential privacy, assuming that the channel matrix M satisfies ε -differential privacy, for any column j, and any pair of rows i and h (i-h), there is
When h = j, the elements on the diagonal of the matrix M "are equal and equal to the maximum element value, so for each element M ″ i,j Is provided with
max M″ ≤e εd(i,j) M″ i,j
Since any row element in the matrix M' is probability distribution, then sigma j M″ i,j Is not less than 1, therefore
And is obtained by grouping the distance of the graphic structure elements
Obtained by inequality transformation
If the input graph structure of the communication model is a distance regular graph and point transfer, for each d ∈ S G ,|X <d> (i) The values of | are identical and depend only on d, and are denoted as N d I.e. N d =|X <d> (i) L, |; therefore, it is
By changing the value of an individual in the u-tuple representing i, each element j at a distance d from x can be obtained; these individuals have(v-1) possible choices for each choice, therefore
Then
Therefore, it is
And because of
Therefore, it is
When the probability distribution of the original data set is uniform, the entropy of the information has a maximum value, i.e., H (X) = log 2 n=log 2 v u (ii) a According to the definition of mutual information quantity
2. The differential privacy measurement method based on graph theory and mutual information quantity according to claim 1, characterized in that: in step 2, in the privacy quantification model based on the communication mechanism, the original data set of the differential privacy protection mechanism is represented as the information source, the published data set as the information sink, and the query mechanism and the noise mechanism as the communication channel.
3. The differential privacy measurement method based on graph theory and mutual information quantity according to claim 1, characterized in that: the graph-theoretic privacy quantification method in steps 4 and 5 uses the automorphism, vertex-transitivity, and distance-regularity properties of the graph to give an upper bound on privacy leakage, wherein an automorphism of the graph G is a permutation σ on the vertex set V(G) such that for any vertices v, v' ∈ V(G), v ~ v' if and only if σ(v) ~ σ(v'); the graph G is vertex-transitive if for any vertices v, v' ∈ V(G) there exists an automorphism σ such that σ(v) = v'; and the graph G is distance-regular if there exist integers b_d and c_d (d ∈ {0, 1, ..., d_max}) such that for any vertices v, v' with d(v, v') = d, the vertex v' has b_d neighbours in the set V_⟨d+1⟩(v) and c_d neighbours in the set V_⟨d-1⟩(v), where V_⟨k⟩(v) denotes the set of vertices at distance k from v.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910621081.XA CN110457940B (en) | 2019-07-10 | 2019-07-10 | Differential privacy measurement method based on graph theory and mutual information quantity |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110457940A CN110457940A (en) | 2019-11-15 |
CN110457940B true CN110457940B (en) | 2023-04-11 |
Family
ID=68482567
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910621081.XA Active CN110457940B (en) | 2019-07-10 | 2019-07-10 | Differential privacy measurement method based on graph theory and mutual information quantity |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110457940B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117240982B (en) * | 2023-11-09 | 2024-01-26 | 沐城测绘(北京)有限公司 | Video desensitization method based on privacy protection |
CN117371046B (en) * | 2023-12-07 | 2024-03-01 | 清华大学 | Multi-party collaborative optimization-oriented data privacy enhancement method and device |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015026384A1 (en) * | 2013-08-19 | 2015-02-26 | Thomson Licensing | Method and apparatus for utility-aware privacy preserving mapping against inference attacks |
WO2015077542A1 (en) * | 2013-11-22 | 2015-05-28 | The Trustees Of Columbia University In The City Of New York | Database privacy protection devices, methods, and systems |
CN109766710A (en) * | 2018-12-06 | 2019-05-17 | 广西师范大学 | The difference method for secret protection of associated social networks data |
Non-Patent Citations (4)
Title |
---|
DP2G_(sister): a differential privacy publishing model for social network graphs; Yin Yiping et al.; Information Technology and Network Security; 2018-06-10 (No. 06); full text *
Group Differential Privacy-Preserving Disclosure of Multi-level Association Graphs; Balaji Palanisamy et al.; IEEE; 2017-07-17; full text *
Differential privacy data publishing for social networks based on hierarchical random graphs; Zhang Wei et al.; Journal of Nanjing University of Posts and Telecommunications (Natural Science Edition); 2016-06-29 (No. 03); full text *
Information entropy models and measurement methods for privacy protection; Peng Changgen; Journal of Software; 2016-12-31; pp. 1891-1902 *
Also Published As
Publication number | Publication date |
---|---|
CN110457940A (en) | 2019-11-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113557512B (en) | Secure multi-party arrival frequency and frequency estimation | |
Zhang et al. | A survey on federated learning | |
Gkoulalas-Divanis et al. | Modern privacy-preserving record linkage techniques: An overview | |
Ding et al. | A novel privacy preserving framework for large scale graph data publishing | |
CN110413652B (en) | Big data privacy retrieval method based on edge calculation | |
CN107092837A (en) | A kind of Mining Frequent Itemsets and system for supporting difference privacy | |
Yin et al. | GANs Based Density Distribution Privacy‐Preservation on Mobility Data | |
Liu et al. | Face image publication based on differential privacy | |
CN110602145B (en) | Track privacy protection method based on location-based service | |
Liu et al. | A Density-based Clustering Method for K-anonymity Privacy Protection. | |
Liu et al. | Hybrid differential privacy based federated learning for Internet of Things | |
US11023594B2 (en) | Locally private determination of heavy hitters | |
CN110457940B (en) | Differential privacy measurement method based on graph theory and mutual information quantity | |
Ding et al. | Privacy preserving similarity joins using MapReduce | |
WO2023134076A1 (en) | Data protection method and system, and storage medium | |
Kuang et al. | A privacy protection model of data publication based on game theory | |
Zhou et al. | Optimizing the numbers of queries and replies in convex federated learning with differential privacy | |
CN114662157B (en) | Block compressed sensing indistinguishable protection method and device for social text data stream | |
Ahuja et al. | A neural approach to spatio-temporal data release with user-level differential privacy | |
Zhang et al. | APDP: Attribute-based personalized differential privacy data publishing scheme for social networks | |
Telikani et al. | An edge-aided parallel evolutionary privacy-preserving algorithm for Internet of Things | |
Ning et al. | Allocation of carbon quotas with local differential privacy | |
Gao et al. | Compressed sensing-based privacy preserving in labeled dynamic social networks | |
Ning et al. | Dp-agm: a differential privacy preserving method for binary relationship in mobile networks | |
Song et al. | Digital Privacy Under Attack: Challenges and Enablers |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |