CN108363928B - Adaptive differential privacy protection method in associated medical data

Adaptive differential privacy protection method in associated medical data

Info

Publication number
CN108363928B
CN108363928B
Authority
CN
China
Prior art keywords
attribute
medical
sensitive
user
privacy protection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810129671.6A
Other languages
Chinese (zh)
Other versions
CN108363928A (en)
Inventor
李先贤
罗春枫
王利娥
刘鹏
于东然
赵华兴
唐雨薇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangxi Normal University
Original Assignee
Guangxi Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangxi Normal University
Priority to CN201810129671.6A
Publication of CN108363928A
Application granted
Publication of CN108363928B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245Protecting personal data, e.g. for financial or medical purposes

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Bioethics (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Medical Treatment And Welfare Office Work (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses an adaptive differential privacy protection method for associated medical data. For privacy problems caused by associations between attributes, rough set theory is used to analyze the association between the quasi-identifier attributes and the sensitive attributes, and a local differential privacy method is proposed to protect the quasi-identifier attributes, which preserves data utility better than traditional differential privacy. For the privacy disclosure caused by associations between records and the data utility problems caused by the differing sensitivity of different diseases, a global differential privacy protection method is proposed to protect the sensitive attributes. The invention can effectively improve the security of medical data privacy protection.

Description

Adaptive differential privacy protection method in associated medical data
Technical Field
The invention relates to the technical field of data privacy protection, and in particular to an adaptive differential privacy protection method for associated medical data.
Background
Advances in information technology have promoted the development of electronic medical records: more and more medical institutions adopt electronic medical record systems, generating massive medical data, and analyzing and mining these records provides evidence for clinical decision support, clinical pathway optimization, personalized medicine and other applications. Because these data contain a large amount of sensitive information, privacy-preserving data mining and privacy-preserving data publishing have received much attention. However, electronic medical data exhibit complex association characteristics, including associations between records (such as heredity and complications) and between attributes (such as age and disease), which make privacy protection for the records more difficult. Kifer et al. first pointed out in 2011 that if correlations between records are ignored, differential privacy does not provide sufficient privacy assurance, because attackers can exploit these correlations to improve their inferences about the attack target. Correlations between attributes matter as well: for example, a smoker can easily have bronchitis, so the smoking attribute also needs protection, because it increases the probability that an attacker can deduce whether a patient has bronchitis.
In existing privacy protection research there are many studies on the privacy protection of medical data, but privacy protection for associated medical data is currently an unexplored field. Because the associations in medical data can increase the success rate of an attacker inferring that a particular person has a particular disease, it is necessary to take these associations into account when protecting privacy. This raises several main privacy challenges:
(1) Because electronic medical data have multi-dimensional attributes, redundant records, high sensitivity and similar characteristics, directly applying the mainstream k-anonymity model, which was designed for relational data, leads to high information loss and low data utility.
(2) Because associations exist between records, such as genetic and complication relations, an attacker's background knowledge is greatly increased, and directly applying the other mainstream privacy model, differential privacy, cannot meet the expected privacy requirement.
(3) Because associations also exist between quasi-identifier attributes and sensitive attributes, privacy protection should be applied to the associated quasi-identifier attributes in addition to the sensitive attributes; how to guarantee data utility while protecting the quasi-identifier attributes remains a challenge.
In current research on privacy protection methods for the medical industry, apart from privacy protection algorithms for genome-associated data, privacy protection techniques for electronic medical data mainly apply k-anonymity, l-diversity, ρ-uncertainty, differential privacy and their extensions, without considering the associations within the data. Since the correlations in electronic medical data do exist, ignoring them greatly increases an attacker's background knowledge and leads to privacy disclosure. Recent studies on privacy protection of associated data mainly include: the Bayesian differential privacy proposed by Yang et al., which determines the privacy budget ε by defining Bayesian differential privacy leakage; and the correlated differential privacy proposed by Zhu et al., which determines the magnitude of the added noise by computing a correlated sensitivity. However, due to the characteristics of electronic medical data, these privacy protection methods for associated data cannot be directly applied to associated medical data.
Disclosure of Invention
The invention aims to solve the problem that existing privacy protection methods for associated data cannot be directly applied to associated medical data, and provides an adaptive differential privacy protection method for associated medical data.
To solve the above problem, the invention is realized by the following technical scheme:
the self-adaptive differential privacy protection method in the associated medical data comprises the following steps:
step 1, a user submits a query request against an original medical data set;
step 2, judging whether the user queries a quasi-identifier attribute or a sensitive attribute; when the user queries a quasi-identifier attribute, going to step 3A and adopting a local differential privacy protection strategy; when the user queries a sensitive attribute, going to step 3B and adopting a global differential privacy protection strategy;
step 3A, local differential privacy protection strategy:
step 3A-1, performing relevance analysis between the quasi-identifier attributes and the sensitive attributes by using rough set theory, and determining whether each quasi-identifier attribute in the original medical data set is related to a sensitive attribute;
step 3A-2, classifying the medical records in the original medical data set that satisfy the query request according to the quasi-identifier attribute queried by the user: when a medical record satisfying the query request contains a quasi-identifier attribute related to a sensitive attribute, it is classified into the data subset needing protection; otherwise, it is classified into the data subset not needing protection;
step 3A-3, adding Laplace noise to the medical records in the data subset needing protection and returning them to the user, and directly returning the medical records in the data subset not needing protection to the user;
step 3B, global differential privacy protection strategy:
step 3B-1, calculating the correlated sensitivity CS_Q of the medical records in the original medical data set that satisfy the query request;
step 3B-2, determining the privacy budget ε_i of each sensitive attribute according to the given total privacy budget;
step 3B-3, adding Laplace noise Lap(CS_Q/ε_i) to the query result over the medical records in the original medical data set that satisfy the query request, and returning the noisy result to the user.
Compared with the prior art, the invention applies different privacy protection strategies according to the type of the queried attribute. For quasi-identifier attributes, the association between the quasi-identifier attributes and the sensitive attributes is analyzed using rough set theory, and local differential privacy is applied to the quasi-identifier attributes according to this association, which improves data utility. For sensitive attributes, the associations between records are considered: correlated differential privacy is used to compute the correlated sensitivity CS_Q of the sensitive attribute, different privacy budgets ε_i are then allocated according to the sensitive attributes, and Laplace noise Lap(CS_Q/ε_i) is added to the statistical query, achieving adaptive differential privacy. The invention can effectively improve the security of medical data privacy protection.
Drawings
FIG. 1 is a flow chart of a method of adaptive differential privacy protection in correlated medical data.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings in conjunction with specific examples.
The invention is oriented to interactive counting queries: a user submits a query request to the system, and the system adds noise to the objects that need protection before returning the query result. Because interactive queries operate on the original medical data set, the invention adopts different protection strategies for the quasi-identifier attributes and the sensitive attributes: for quasi-identifier attributes, a local differential privacy protection strategy; for sensitive attributes, a global (adaptive) differential privacy protection strategy.
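As a concrete illustration of this dispatch, the following minimal Python sketch routes a counting query to one of the two strategies; the handlers local_dp_answer and global_dp_answer are hypothetical stubs standing in for the procedures detailed in the steps below, and the attribute names follow Table 1.

# Sketch of the interactive dispatch between the two strategies;
# the two handlers are stubs fleshed out by steps 3A and 3B below.
QUASI_IDENTIFIERS = {"sex", "age", "body_temperature"}
SENSITIVE = {"disease"}

def local_dp_answer(records, attribute, predicate, epsilon):
    raise NotImplementedError("step 3A: local differential privacy")

def global_dp_answer(records, attribute, predicate, epsilon):
    raise NotImplementedError("step 3B: global differential privacy")

def answer_query(records, attribute, predicate, epsilon):
    if attribute in QUASI_IDENTIFIERS:
        return local_dp_answer(records, attribute, predicate, epsilon)
    if attribute in SENSITIVE:
        return global_dp_answer(records, attribute, predicate, epsilon)
    raise ValueError("unknown attribute: " + attribute)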
Referring to fig. 1, a method for adaptive differential privacy protection in associated medical data specifically includes the following steps:
step 1: the user makes a query request. In the original medical data set shown in table 1, the name is ID, gender, age, and body temperature are quasi-identifier attributes, and diseases are sensitive attributes.
No.  Name    Sex  Age  Body temperature  Disease
1    Bob     F    25   High              Influenza
2    Alice   F    8    Normal            Cancer
3    Mike    M    35   Normal            Heart disease
4    Lonia   M    21   High              Cancer
5    Jasper  F    13   Normal            Heart disease
6    Jake    F    41   Very high         Influenza
7    Linda   M    56   High              Cancer
8    Helen   F    60   Normal            Influenza
9    David   M    37   Very high         Heart disease

Table 1: Raw table data
Step 2: it is determined whether the user is querying for quasi-identifier attributes or query-sensitive attributes. If the quasi-identifier attribute is inquired, adopting a local differential privacy protection strategy aiming at the quasi-identifier attribute; and if the sensitive attribute is queried, adopting a global differential privacy protection strategy aiming at the sensitive attribute.
(1) Adopting the local differential privacy protection strategy for quasi-identifier attributes:
and step 3: although the quasi-identifier attribute is a non-sensitive attribute, some quasi-identifier attributes increase the probability of leakage of the sensitive attribute, and the quasi-identifier attribute is considered to be sensitive. For example, if the user Bob has influenza, the attribute of age is not related to the influenza, the attribute of age of the user Bob is non-sensitive, but the user Alice has cancer, the attribute of age is highly related to the cancer, and the attribute of age of the user Alice is sensitive, so that for each disease in the original table data, correlation analysis is performed on the identifier attribute and the sensitive attribute by using rough set theory to obtain which quasi-identifier attributes are related to each disease, for example, how many people are between 20 and 40 years old in the first 3 medical records in table 1, and which attribute of age of the user is sensitive is firstly analyzed according to the disease suffered by each user. Firstly, the 1 st medical record is analyzed, the user Bob of the 1 st medical record has influenza, the attribute of age is determined to be related to the influenza according to the roughness and the theory, and the 2 nd medical record and the 3 rd medical record are the same in the following specific way:
step 3.1: some of the attributes in the raw medical data set are defined according to rough set theory. In rough set theory, the quasi-identifier attribute is called a conditional attribute and the sensitive attribute is called a decision attribute. In the present embodiment, for the disease influenza in the 1 st medical record, the user set in the original table is U ═ e1,e2,e3,…,u9The condition attribute set is C ═ gender, age, body temperature, the decision attribute is D ═ influenza, let C be C ═ influenza }1Gender ═ C2Age, C3Body temperature. The following steps are all processed around the attribute of the flu to find out which sensitive attributes are related to the flu.
Step 3.2: each attribute is classified. In this example, the gender can be divided into two categories (male and female); the age can be divided into three categories (0-20 years old, 21-40 years old, 41-60 years old); body temperature can be divided into three categories (normal, high, very high); the diseases can be divided into two categories (with influenza, without influenza). To obtain U/C1={{e3,e4,e7,e9},{e1,e2,e5,e6,e8}},U/C2={{e2,e5},{e1,e3,e4,e9},{e6,e7,e8}},U/C3={{e2,e3,e5,e8},{e1,e4,e7},{e6,e9}},U/D={{e1,e6,e8},{e2,e3,e4,e5,e7,e9}}。
Step 3.3: and acquiring a knowledge base set. The set of intersections of these three classes is denoted as U/C { { e { ]1},{e2},{e3,e4,e9},{e5},{e7},{e6,e8And the knowledge base is the U/C and the union of the sets in the U/C. Such as: U/C1In { e1,e2,e5,e6,e8Denotes female, U/C2In { e2,e5Denotes a person between the ages of 0-20, then the union of these two sets { e }2,e5It means women between 0-20 years old, which is a knowledge.
Step 3.4: delete the age attribute and get another knowledge base. The steps are the same as above.
Step 3.5: it is determined whether an age attribute is associated with influenza. In the original medical data set, the set of records for flu-related disease is { e }1,e6,e8If in both repositories { e }1,e6,e8The upper and lower approximations are all { e }1,e6,e8It indicates that the age attribute is irrelevant to the sensitive attribute influenza, and vice versa.
Upper and lower approximations are core concepts of rough set theory. Some target sets cannot be composed exactly from the sets in the knowledge base no matter how those sets are combined, which is why upper and lower approximations are introduced. The lower approximation of a target set is the union of all knowledge-base sets contained in it; the upper approximation is the union of all knowledge-base sets that intersect it. For example, {e1, e6, e7} cannot be composed exactly from the knowledge base; its lower approximation is {e6, e7} and its upper approximation is {e1, e2, e6, e7}.
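A minimal sketch of these approximation operators and of the relevance test of step 3.5, under the conventions above; a knowledge base is represented here by its list of elementary classes, since every knowledge-base set is a union of them.

def lower_approximation(classes, target):
    """Union of all elementary classes fully contained in the target set."""
    return set().union(*(c for c in classes if c <= target))

def upper_approximation(classes, target):
    """Union of all elementary classes that intersect the target set."""
    return set().union(*(c for c in classes if c & target))

def attribute_irrelevant(kb_full, kb_reduced, flu_set):
    """Step 3.5: the deleted attribute is irrelevant to influenza iff both
    approximations equal the influenza set in both knowledge bases."""
    return all(
        lower_approximation(kb, flu_set) == flu_set == upper_approximation(kb, flu_set)
        for kb in (kb_full, kb_reduced)
    )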
Step 4: According to the quasi-identifier attribute queried by the user, divide the original medical data set into two parts: a data subset needing protection, whose medical records contain a quasi-identifier attribute related to a sensitive attribute, and a data subset not needing protection, whose medical records contain no such attribute.
For example, for the query over the first 3 medical records of Table 1 on ages 20-40: as shown above, the age in the 1st record is not sensitive, while the ages in the 2nd and 3rd records are sensitive. Therefore the 1st record falls into the data subset not needing protection, and the 2nd and 3rd records fall into the data subset needing protection.
And 5: and (6) local noise adding. And adding Laplace (Laplacian) noise to the query result of the set needing protection, and not adding noise to the set needing no protection.
The Laplace distribution has two parameters: a scale parameter, equal to the query sensitivity divided by the privacy budget, and a location parameter, which defaults to 0. Here the query sensitivity is 1 and the privacy budget is given.
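A sketch of steps 5-6, assuming numpy: only the count over the protected subset is perturbed (location 0, scale = sensitivity / budget), and the exact count over the unprotected subset is added back.

import numpy as np

def local_dp_count(n_protected, n_unprotected, epsilon, sensitivity=1.0):
    """Noisy count for the local strategy: Laplace noise on the
    protected part only, with loc=0 and scale=sensitivity/epsilon."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return n_protected + noise + n_unprotected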
Step 6: and returning the counting result to the user.
(2) Adopting the global differential privacy protection strategy for sensitive attributes:
and step 3: and acquiring a correlation matrix. Obtaining the evidence of the correlation degree is to analyze the correlation degree between the records. This can be done in various ways by our background knowledge and data characteristics, the most typical one is that we already know this matrix of relevance as background knowledge.
For example: the number of people with heart diseases in the user lookup table is known by background knowledge, and the correlation degree between the people with heart diseases is obtained to obtain a correlation degree matrix delta which belongs to delta.
Figure BDA0001574509340000051
There are: 1) deltaij=δjiIndicating that the association between two records is independent of their order; 2) the element on the diagonal is 1, indicating that each record is fully relevant to itself; 3) threshold delta0To eliminate weak correlation, | δ in Δij|≥δ0If is | δij|<δ0Then, let δij0; 4) only parts are interrelated.
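A sketch of how such a matrix can be normalized to satisfy properties 1)-3), assuming numpy and that the raw pairwise correlations have already been estimated or are known as background knowledge:

import numpy as np

def clean_correlation_matrix(delta, delta0):
    """Enforce symmetry, unit diagonal, and the weak-correlation threshold."""
    delta = (delta + delta.T) / 2.0        # 1) make delta_ij = delta_ji
    np.fill_diagonal(delta, 1.0)           # 2) each record fully relates to itself
    delta[np.abs(delta) < delta0] = 0.0    # 3) drop entries with |delta_ij| < delta0
    return delta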
Step 4: Calculate the correlated sensitivity.
Step 4.1: the recording sensitivity of each record was calculated.
For a correlation matrix Δ and a query Q, the record sensitivity of the i-th record is

$$CS_i=\sum_{j=1}^{n}\delta_{ij}\,\bigl\|Q(D_j)-Q(D_{-j})\bigr\|_1$$

where Q(D_j) denotes the query over the data set D, Q(D_{-j}) denotes the query over the data set that differs from D by the deleted record j, n is the number of records in D, and δij ∈ Δ is the degree of association between the i-th and j-th records. The record sensitivity represents the impact of the current record r_i on all records in the data set. This concept combines the number of related records with their degrees of correlation; when the records in the data set D are mutually independent, the record sensitivity CS_i equals the global sensitivity.
Step 4.2: the associated sensitivities of all recordings are calculated.
The correlated sensitivity is the maximum of all record sensitivities. For a query Q, the correlated sensitivity equals the maximum record sensitivity:

$$CS_Q=\max_{i\in Q} CS_i$$

where Q also denotes the record set of the query Q.
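For a counting query, deleting record j changes the count by exactly 1 when j satisfies the query predicate and by 0 otherwise, so the per-record term ||Q(D_j) - Q(D_{-j})||_1 reduces to an indicator. Under that assumption, a sketch of steps 4.1-4.2, with `matches[j]` marking whether record j satisfies the predicate:

import numpy as np

def record_sensitivity(i, delta, matches):
    """CS_i: summed |delta_ij| over the records j that the count query touches."""
    return sum(abs(delta[i, j]) for j, hit in enumerate(matches) if hit)

def correlated_sensitivity(delta, matches):
    """CS_Q: maximum record sensitivity over the records in the query set."""
    return max(record_sensitivity(i, delta, matches)
               for i, hit in enumerate(matches) if hit)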
And 5: calculating the privacy budget epsilon allocated for each diseasei
Although the sensitive attributes all need protection, adding the same noise to all of them, i.e. allocating the same privacy budget, would over-protect some sensitive attributes while protecting others insufficiently. We therefore allocate the privacy budget according to the distribution of diseases: for the disease attribute, there is reason to believe that the more frequently a disease occurs, the lower its sensitivity. Assume the total privacy budget is ε, the data set size is n, and the i-th disease occurs m_i times; then the privacy budget obtained by the i-th disease is

$$\varepsilon_i=\frac{m_i}{n}\,\varepsilon$$

where m_i is the number of occurrences of the sensitive attribute value in the original medical data set, n is the size of the data set, and ε is the total privacy budget.
It can be seen that the more frequently a disease occurs, the larger the privacy budget allocated to it. This is because a higher frequency of occurrence means lower sensitivity, and by the definition of differential privacy, a larger privacy budget means lower privacy and less added noise.
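A sketch of this allocation, taking n as the data set size so that the per-disease budgets ε_i = (m_i / n)·ε sum to the total budget ε:

from collections import Counter

def allocate_budgets(diseases, total_epsilon):
    """epsilon_i = (m_i / n) * epsilon for each distinct disease value."""
    n = len(diseases)
    return {d: (m / n) * total_epsilon for d, m in Counter(diseases).items()}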
Step 6: laplace noise is added to the accurate counting result.
According to the correlated sensitivity CS_Q calculated in step 4 and the privacy budget ε_i calculated in step 5, the added noise is

$$\mathrm{Lap}\!\left(\frac{CS_Q}{\varepsilon_i}\right)$$
and 7: and returning the counting result to the user.
For example: we query how many of the 9 medical records have heart disease. The correlated sensitivity calculated in step 4 is CS_Q, the privacy budget obtained in step 5 is ε_i, and the count query result obtained in step 6 is the exact count plus Lap(CS_Q/ε_i) noise, as shown in Table 2:
Table 2: Count query results
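Putting the pieces together, a self-contained sketch of the whole global strategy for the heart-disease count over Table 1; the identity matrix stands in for the background-knowledge Δ, so the numbers are illustrative rather than the patent's.

import numpy as np
from collections import Counter

diseases = ["influenza", "cancer", "heart", "cancer", "heart",
            "influenza", "cancer", "influenza", "heart"]   # Table 1, e1..e9
matches = [d == "heart" for d in diseases]
n = len(diseases)

delta = np.eye(n)  # placeholder Delta; real delta_ij comes from background knowledge

# Step 4: correlated sensitivity CS_Q = max_i sum_j |delta_ij| over matching j
cs_q = max(sum(abs(delta[i, j]) for j, hit in enumerate(matches) if hit)
           for i, hit in enumerate(matches) if hit)

# Step 5: budget for "heart" is (m_i / n) * total epsilon
eps_i = (Counter(diseases)["heart"] / n) * 1.0

# Steps 6-7: noisy count returned to the user
noisy_count = sum(matches) + np.random.laplace(0.0, cs_q / eps_i)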
Since associations among electronic medical data do exist, they must be considered when implementing privacy protection. For privacy problems caused by associations between attributes, the association between the quasi-identifier attributes and the sensitive attributes is analyzed using rough set theory, and a local differential privacy method is proposed to protect the quasi-identifier attributes, preserving data utility better than traditional differential privacy. For the privacy disclosure caused by associations between records, and for the data utility problems caused by the differing sensitivity of different diseases, an adaptive differential privacy protection method is proposed to protect the sensitive attributes.
It should be noted that although the above-described embodiments of the present invention are illustrative, the present invention is not limited thereto. Other embodiments made by those skilled in the art in light of the teachings of the present invention, without departing from its principles, are also considered to be within the scope of the present invention.

Claims (1)

1. An adaptive differential privacy protection method for associated medical data, characterized by comprising the following steps:
step 1, a user submits a query request against an original medical data set;
step 2, judging whether the user queries a quasi-identifier attribute or a sensitive attribute; when the user queries a quasi-identifier attribute, going to step 3A and adopting a local differential privacy protection strategy; when the user queries a sensitive attribute, going to step 3B and adopting a global differential privacy protection strategy;
step 3A, local differential privacy protection strategy:
step 3A-1, performing relevance analysis between the quasi-identifier attributes and the sensitive attributes by using rough set theory, and determining whether each quasi-identifier attribute in the original medical data set is related to a sensitive attribute;
step 3A-2, classifying the medical records in the original medical data set that satisfy the query request according to the quasi-identifier attribute queried by the user: when a medical record satisfying the query request contains a quasi-identifier attribute related to a sensitive attribute, it is classified into the data subset needing protection; otherwise, it is classified into the data subset not needing protection;
step 3A-3, adding Laplace noise to the medical records in the data subset needing protection and returning them to the user, and directly returning the medical records in the data subset not needing protection to the user;
step 3B, global differential privacy protection strategy:
step 3B-1, calculating the correlated sensitivity CS_Q of the medical records in the original medical data set that satisfy the query request;
step 3B-2, determining the privacy budget ε_i of each sensitive attribute according to the given total privacy budget;
step 3B-3, adding Laplace noise Lap(CS_Q/ε_i) to the query result over the medical records in the original medical data set that satisfy the query request, and returning the noisy result to the user.
CN201810129671.6A 2018-02-08 2018-02-08 Adaptive differential privacy protection method in associated medical data Active CN108363928B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810129671.6A CN108363928B (en) 2018-02-08 2018-02-08 Adaptive differential privacy protection method in associated medical data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810129671.6A CN108363928B (en) 2018-02-08 2018-02-08 Adaptive differential privacy protection method in associated medical data

Publications (2)

Publication Number Publication Date
CN108363928A CN108363928A (en) 2018-08-03
CN108363928B (en) 2021-08-03

Family

ID=63005171

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810129671.6A Active CN108363928B (en) 2018-02-08 2018-02-08 Adaptive differential privacy protection method in associated medical data

Country Status (1)

Country Link
CN (1) CN108363928B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020046398A1 (en) * 2018-08-31 2020-03-05 Google Llc Privacy-first on-device federated health modeling and intervention
CN110968887B (en) * 2018-09-28 2022-04-05 第四范式(北京)技术有限公司 Method and system for executing machine learning under data privacy protection
CN111177521A (en) * 2018-10-24 2020-05-19 北京搜狗科技发展有限公司 Method and device for determining query term classification model
CN109388972A (en) * 2018-10-29 2019-02-26 山东科技大学 Medical data Singular variance difference method for secret protection based on OPTICS cluster
US11647041B2 (en) * 2019-04-08 2023-05-09 United States Of America As Represented By The Secretary Of The Air Force System and method for privacy preservation in cyber threat
CN110188567B (en) * 2019-05-23 2022-12-20 复旦大学 Associated access control method for preventing sensitive data jigsaw
CN110472437B (en) * 2019-07-29 2023-07-04 上海电力大学 Periodic sensitivity differential privacy protection method for user power consumption data
CN110889141B (en) * 2019-12-11 2022-02-08 百度在线网络技术(北京)有限公司 Data distribution map privacy processing method and device and electronic equipment
CN111079179A (en) * 2019-12-16 2020-04-28 北京天融信网络安全技术有限公司 Data processing method and device, electronic equipment and readable storage medium
CN111797428B (en) * 2020-06-08 2024-02-27 武汉大学 Medical self-correlation time sequence data differential privacy release method
CN112329069B (en) * 2020-11-30 2022-05-03 海南大学 User difference privacy protection method across data, information and knowledge modes
CN112749407A (en) * 2020-12-18 2021-05-04 广东精点数据科技股份有限公司 Data desensitization device based on medical data
CN113553363B (en) * 2021-09-23 2021-12-14 支付宝(杭州)信息技术有限公司 Query processing method and device
CN113742781B (en) * 2021-09-24 2024-04-05 湖北工业大学 K anonymous clustering privacy protection method, system, computer equipment and terminal
CN115587385A (en) * 2022-10-09 2023-01-10 九有技术(深圳)有限公司 Data desensitization method combining localized differential privacy and centralized differential privacy

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101727534A (en) * 2008-10-30 2010-06-09 北大方正集团有限公司 Patient document retrieval authorization control method and system
CN103533049A (en) * 2013-10-14 2014-01-22 无锡中盛医疗设备有限公司 Electronic privacy information protection system for intelligent medical care
CN105608389A (en) * 2015-10-22 2016-05-25 广西师范大学 Differential privacy protection method of medical data dissemination

Also Published As

Publication number Publication date
CN108363928A (en) 2018-08-03

Similar Documents

Publication Publication Date Title
CN108363928B (en) Adaptive differential privacy protection method in associated medical data
Loukides et al. Disassociation for electronic health record privacy
Poulis et al. Anonymizing data with relational and transaction attributes
Poulis et al. Anonymizing datasets with demographics and diagnosis codes in the presence of utility constraints
Kim et al. A framework to preserve the privacy of electronic health data streams
Loukides et al. Utility-preserving transaction data anonymization with low information loss
Puri et al. Privacy preserving publication of relational and transaction data: Survey on the anonymization of patient data
Poulis et al. Distance-based k^ m-anonymization of trajectory data
Ruggieri et al. Anti-discrimination analysis using privacy attack strategies
Gkoulalas-Divanis et al. Utility-guided clustering-based transaction data anonymization.
CN112131608B (en) Classification tree differential privacy protection method meeting LKC model
Mueller et al. SoK: Differential privacy on graph-structured data
Gkountouna et al. Anonymizing collections of tree-structured data
Loukides et al. Utility-aware anonymization of diagnosis codes
CN112035880A (en) Track privacy protection service recommendation method based on preference perception
Yang et al. Associated attribute-aware differentially private data publishing via microaggregation
Bewong et al. A relative privacy model for effective privacy preservation in transactional data
Shaham et al. Machine learning aided anonymization of spatiotemporal trajectory datasets
Loukides et al. Preventing range disclosure in k-anonymised data
Liu et al. A dynamic privacy protection mechanism for spatiotemporal crowdsourcing
Jafer et al. Privacy-aware filter-based feature selection
Sánchez et al. A semantic-preserving differentially private method for releasing query logs
CN107832633B (en) Privacy protection method for relation transaction data release
Usha et al. Sensitive attribute based non-homogeneous anonymization for privacy preserving data mining
Lu et al. A semantic-based k-anonymity scheme for health record linkage

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant