US20210049282A1 - Simulated risk contribution - Google Patents

Simulated risk contribution

Info

Publication number
US20210049282A1
US20210049282A1 (application US16/991,199)
Authority
US
United States
Prior art keywords
quasi
identifying
values
data
risk
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US16/991,199
Other languages
English (en)
Inventor
David Nicholas Maurice Di Valentino
Muhammad Oneeb Rehman Mian
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Privacy Analytics Inc
Original Assignee
Privacy Analytics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Privacy Analytics Inc filed Critical Privacy Analytics Inc
Priority to CA3089835A priority Critical patent/CA3089835A1/en
Priority to US16/991,199 priority patent/US20210049282A1/en
Assigned to Privacy Analytics Inc. reassignment Privacy Analytics Inc. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: DI VALENTINO, DAVID NICHOLAS MAURICE, MIAN, MUHAMMAD ONEEB REHMAN
Publication of US20210049282A1 publication Critical patent/US20210049282A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50 Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/57 Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
    • G06F21/577 Assessing vulnerabilities and evaluating computer system security
    • G06F21/60 Protecting data
    • G06F21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218 Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245 Protecting personal data, e.g. for financial or medical purposes
    • G06F21/6254 Protecting personal data by anonymising data, e.g. decorrelating personal data from the owner's identification
    • G06F2221/00 Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/03 Indexing scheme relating to G06F21/50, monitoring users, programs or devices to maintain the integrity of platforms
    • G06F2221/034 Test or assess a computer or a system

Definitions

  • the present disclosure relates to datasets containing personally identifiable or confidential information and in particular to risk assessment of the datasets.
  • the information score is defined as a number of bits of information provided by the quasi-identifying value.
  • the population distribution is a single-variable or multi-variable distribution, which maps a value to the probability of an individual or entity having that value.
  • system and method further comprising creating an aggregate result of a plurality of re-identification metrics for a plurality of data subject profiles on a larger dataset.
  • creating the aggregate result for the data subjects as a single-value result.
  • the aggregate result is one of a type of disclosure risk metric, or an arithmetic average.
  • the multi-valued summary is an array or matrix of results.
  • creating the aggregate information scores is a summation of information scores for the subject.
  • the method further comprising: aggregating information scores within the record; aggregating information scores from related records within a child table associated with the record; and aggregating information scores from the child table.
  • the calculated re-identification metric is defined as a value associated with anonymity, equivalence class size, or re-identification risk.
  • the anonymity value is a metric measured in bits: if the anonymity value is greater than zero, there are many individuals or entities who would match this record in the population; if the anonymity value is equal to zero, the individual is unique in the population; and if the anonymity value is less than zero, the individual or entity is unlikely to exist in the dataset or population.
  • system and method further comprising generating a histogram from a plurality of calculated anonymity values to estimate a number of data subjects who are unique in the dataset.
  • aspects of the present invention comprise computing devices utilizing computer-readable media to implement methods arranged for deriving risk contribution models from a dataset. Rather than inspect the entire data model in order to identify all quasi-identifying fields, the computing device develops a list of commonly-occurring but difficult-to-detect quasi-identifying fields. For each such field, the computing device creates a distribution of values/information values from other sources. Then, when risk measurement is performed, random simulated values (or information values) are selected for these fields. Quasi-identifying values are then selected for each field with multiplicity equal to the associated randomly-selected count. These are incorporated into the overall risk measurement and utilized in the anonymization process. In typical implementations, the overall average of re-identification risk measurement results proves to be generally consistent with the results obtained on the fully-classified data model.
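  • The following Python sketch illustrates this simulation step. It is a minimal illustration only; the field names, distributions, and helper structure are assumptions made for exposition, not the patent's reference implementation.

```python
import random

# Hypothetical pre-built distributions for a hard-to-detect quasi-identifying
# field, assumed to be derived from other sources (census data, prior studies).
# value_dist maps a field to candidate values and their probabilities;
# count_dist gives the per-subject count distribution for that field.
value_dist = {"med_code": (["A01", "B02", "C03"], [0.5, 0.3, 0.2])}
count_dist = {"med_code": ([0, 1, 2, 3], [0.1, 0.4, 0.3, 0.2])}

def simulate_field(field):
    """Draw a random count, then that many random values, for one field."""
    counts, count_weights = count_dist[field]
    n = random.choices(counts, weights=count_weights, k=1)[0]
    values, value_weights = value_dist[field]
    return random.choices(values, weights=value_weights, k=n)

# Each simulated subject receives quasi-identifying values with multiplicity
# equal to the randomly selected count; these feed the risk measurement.
simulated_profile = {field: simulate_field(field) for field in value_dist}
print(simulated_profile)
```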
  • simulated contributions can simplify classification, reduce manual effort, and speed up the computing device's execution of the anonymization process of the dataset. This can, overall, save computing resources by reducing processor and memory usage during the anonymization process. Furthermore, additional resources can be focused on automation for de-identification, where the identifiers are transformed. Rather than a prescriptive approach, de-identification can be customized to maintain maximum data utility in the most desired fields.
  • FIG. 1 shows an example data subject profile that may be processed by the disclosed method and system
  • FIG. 2 shows a flowchart for a method of estimating disclosure risk of a single individual or entity in a dataset
  • FIG. 3 shows a representation of complex schema aggregation method
  • FIG. 4 shows another representation of a complex schema aggregation method
  • FIG. 5 illustrates quasi-identifier or confidential groups
  • FIG. 6 illustrates measurement of information and probability on a simple subject profile
  • FIG. 7 shows a graph of the relative error of a low risk data set
  • FIG. 8 shows a graph of the relative error of a medium risk data set
  • FIG. 9 shows a graph of the relative error of a high-risk data set
  • FIG. 10 shows a system for determining disclosure risk
  • FIG. 11 shows an illustrative process flow chart for deriving risk contribution models from data.
  • FIG. 12 shows an illustrative algorithm for simulating L2 contributions to risk measurement
  • FIGS. 13 and 15 show flowcharts of illustrative methods in which distribution identifiers and values are simulated for use in an anonymization or confidentialization process
  • FIG. 14 is a chart showing an illustrative comparison of the true and simulated average risk measurement values considering patient height, weight, medical codes (e.g., MedDRA HLT) and concomitant medication codes (e.g., 4-digit ATC);
  • FIG. 16 shows an illustrative approach for leveraging simulated risk contributions of “core” quasi-identifiers and actual risk contributions of non-core quasi-identifiers to compute a single disclosure risk measurement
  • FIG. 17 shows an illustrative approach for combining simulated risk contributions and risk contributions from synthetic data to form a single risk measurement.
  • Embodiments are described below, by way of example only, with reference to FIGS. 1-17 .
  • An information theory-based replacement is provided for traditional risk measures such as k-anonymity, expected number of correct re-identifications, and re-identification risk.
  • Methods based on k-anonymity compare records or data subjects within a dataset to one another. If the dataset is a sample of an electronic database, then risk associated with the dataset is extrapolated to the larger population contained in the electronic database.
  • In contrast, the disclosed computer system and computer-implemented method directly estimate the risk of a record against a population rather than comparing individuals against one another, which allows a single record to be processed, and a risk assessment provided, without processing an entire dataset.
  • The system and method are effective at generating a risk measure because they can account for unequal probabilities of matching records.
  • Entropy has been proposed for use in disclosure control of aggregate data, which predicts an attacker's ability to impute a missing value or values from views on the same data. Entropy can be used to estimate the average amount of information in QI and how the size of the population limits the amount of information that can be released about each subject.
  • the system and method disclosed take as input one or more subject profiles to determine risk of the dataset.
  • the individual person is a subject or patient present in a dataset.
  • the data of a subject profile is a description of the individual in structured form.
  • the structure may be expressed in a database, extensible mark-up language (XML), JavaScript Object Notation (JSON), or another structured format.
  • the subject profile consists of fields and associated values that describe the subject. For example, a subject profile may contain date of birth, province or state of residence, and gender. Furthermore, a subject profile may contain “longitudinal data” (or temporal data) which either changes in time or describes an event at a particular time.
  • Examples of longitudinal data might be information about a hospital visit (admission data, length of stay, diagnosis), financial transactions (vendor, price, date, time, store location), or an address history (address, start date, end date). It is noted that the term “individual” as used herein may include and/or be applicable to one or more entities in some cases, as will be evident from the accompanying description and context of a given use.
  • Element 102 contains the top-level subject information such as demographic information.
  • Element 104 contains longitudinal data describing various doctors' visits. There are many doctors' visits related to a single subject. For each doctors' visit, there are child elements 106 , 108 , 110 , which describe the treatment from each visit. Notice again there may be many treatments for a single visit. In a database, elements 106 , 108 , and 110 would normally be in a single table. Connected to the subject demographics there are also a number of vaccination events listed 112 .
  • a data subject profile may in fact be data extracted from a text file and assigned to certain meaningful fields. If a dataset is being processed that contains multiple individuals, they are not required to have the same fields. Not requiring the same fields to be present enables processing of unstructured, semi-structured, and textual datasets, where individuals may not have the same schema.
  • In a database, XML, or JSON format there is a schema which defines which fields exist, what they contain, and any relationships between fields, elements, records, or tables.
  • the relationships are usually of the form 1-to-1 or 1-to-many. For example, consider the relationship between a subject and DOB or Gender (1-to-1), or a subject and some financial transactions (1-to-many). There are scenarios where many-to-many and many-to-one relations exist and these should not be excluded; however, the disclosed examples will focus on the more common relationships within a subject profile.
  • each field in a schema is classified into direct-identifiers (DI), quasi-identifiers (aka indirect identifiers) (QI), and non-identifiers (NI).
  • QIs may be assumed to incorporate any relevant confidential attributes needed to estimate disclosure risk.
  • the system can generically apply to any value regardless of classification; however, QIs (or QI fields) are referred to throughout, as these are what is normally utilized in risk measurement.
  • a population distribution for each QI in the schema is retrieved ( 202 ) from a storage device.
  • a population distribution may be associated with one or more QIs and multiple distributions may be required for the schema.
  • the population distribution is associated with the type of data contained in the dataset. For example, the population distribution may be from census data, which can be determined based upon the QI in the schema.
  • the association of the dataset with population distributions may be determined automatically by analyzing content of the dataset or by predefined associations.
  • a population distribution maps a value to a probability, which represents the probability of someone in the population having this value.
  • each value in a data subject profile is assigned an information or disclosure score ( 204 ).
  • information scores are measured in bits and based on information theory.
  • Although a postal code calculation of information bits is described, the method of determining the number of information bits is applicable to other QIs in a similar manner.
  • Aggregation of information scores is performed to create a single information score from several values ( 206 ). There are several different aggregation techniques, each of which serves to model certain types of relationships. Aggregation techniques can be composed, where one aggregation technique uses the results of other aggregation techniques. Regardless of the complexity of a schema, the end result is a single information score, measured in bits, which describes the accumulated or total information available for re-identification of the data subject. The resulting single value is referred to as the given_bits.
  • Anonymity can then be calculated using given_bits and the population size as input ( 208 ).
  • the population is the group of subjects from which the subject profile (or dataset) is sampled. For example, if a dataset contains a random sample of voters then the population is the total number of voters.
  • Negative anonymity suggests a person is unique, usually even on a subset of their subject profile. The magnitude of negative anonymity indicates how much suppression or generalization by de-identification techniques will be required to make the person look like another person in the population. Anonymity can be used to establish the probability that someone else would look like this person. Negative anonymity can be used to determine if there is sufficient information to link records across datasets with a significant confidence level.
  • Anonymity can be converted to equivalence or similarity class size and re-identification risk. All of these metrics are established standards.
  • a result of the process defined here is that the risk is measured on an individual, not on a dataset.
  • Other methodologies focus on measuring re-identification metrics on datasets but cannot necessarily assign a risk to a data subject in a dataset or an individual data subject (i.e. dataset of 1 data subject). This enables processing subject profiles individually, leading to linear time processing, instead of other k-anonymity methods, which are usually quadratic or worse processing times. Furthermore, this enables measuring re-identification metric of profiles coming from text documents, which are not contained in a dataset or having a common schema.
  • Re-identification Risk can be one of a maximum risk or an average risk of someone randomly choosing a record from the dataset and trying to re-identify it in the population. In the case of average risk, it may be calculated as

    reid_risk_avg = (1/n) × Σ_i reid_risk_i

    where n is the total number of data subjects in the sample, i iterates over each data subject, and reid_risk_i is the risk of re-identification for subject i.
  • Re-identification Risk can also be an average risk of someone randomly choosing a subject in the population and trying to re-identify their record in the dataset. This average is the number of equivalence classes divided by the population size. The equation is

    reid_risk_avg = (1/N) × Σ_i (1/k_i)

    where n is the total number of data subjects in the sample, N is the population size, i iterates over each data subject, and k_i and K_i are the number of records matching subject i in the sample and in the population, respectively.
  • anonymity may be aggregated into a histogram. Since anonymity is normally a real value (i.e. continuous or decimal), converting anonymity values into integer values allows the anonymity profile of a dataset to be concisely expressed. In part, this is because anonymity is on a logarithmic scale, expressing magnitudes of difference. However, operations like round, round-up (ceil), and round-down (floor) will change the average risk profile of the histogram.
  • This histogram is an effective tool for estimating the number of data subjects with a particular anonymity. A common use for this would be to estimate the number of data subjects who are unique.
  • the second histogram models sample and population anonymity and maintains the average risk profile of the population-to-sample re-identification.
  • a two-dimensional histogram describes the population and sample anonymity as a matrix of values, the row and column number represent integer anonymity values for the population and sample, while the cells contain real values indicating the number of people with this (population, sample) anonymity.
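  • As an illustration, the following Python sketch builds such a two-dimensional histogram. The anonymity pairs are invented for exposition, and floor is chosen as the integer conversion; as noted above, the choice of rounding operation alters the average risk profile.

```python
import math
from collections import defaultdict

# Illustrative (population_anonymity, sample_anonymity) pairs, one per subject.
anonymities = [(3.2, 1.7), (3.9, 1.1), (-0.4, -1.2), (0.2, 0.0)]

histogram = defaultdict(float)
for pop_anon, samp_anon in anonymities:
    # Integer (row, column) cell; floor is one possible rounding choice.
    cell = (math.floor(pop_anon), math.floor(samp_anon))
    histogram[cell] += 1.0  # cells hold real-valued counts of people

# Example use: estimate how many data subjects are unique in the population
# (population anonymity at or below zero).
unique_estimate = sum(n for (pop, _), n in histogram.items() if pop <= 0)
print(dict(histogram), unique_estimate)
```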
  • a population distribution defines a mapping of quasi-identifying values to the probabilities of those values occurring in the range, region, or demographic profile covering the data subjects associated with/contained within the dataset.
  • The algorithm is agnostic of the source of the priors; however, a number of methods are defined to obtain priors, including Estimated Sample Distribution (ESD) measurement.
  • a population distribution may be derived from census data or other pre-existing data sources.
  • the probability of a value, pr(v), is defined as

    pr(v) = populationHaving(v) / population
  • a population distribution may be approximated using the distribution from the dataset.
  • the method for estimating population distributions using sample data is provided by determining the sample distribution, which is a map of values to the number of people with each value. Each value is classified as common or rare. Common values occur when more than X individuals have that value in the sample distribution. Rare values occur when a value is associated with X or fewer data subjects in the sample distribution, where X is normally set to 1. Thus, the total number of values is the sum of the rare values and common values.
  • the total number of values is estimated including unseen values, that is, values that did not occur in the data (sample) but occur in the population. Estimation of the total number of values can use, but is not limited to, species estimators such as the bias-corrected Chao estimator or the Abundance Coverage-based Estimator (ACE). These estimators are dependent on the distribution selected.
  • a distribution may be compared against a standard distribution, such as a uniform distribution or normal distribution. If they match in shape within a certain tolerance (error), then information about the sample distribution can be used to estimate the number of values that have not been seen. Assuming all unseen values are in fact rare values, the number of rare values in the population is calculated as rareValues_pop = rareValues_sample + unseenValues, where unseenValues is the estimated total number of values minus the number of values observed in the sample.
  • the resulting population distribution for a common value is the probability of value occurring in the sample distribution.
  • pr_pop(v_common) = pr_sample(v), where pr_sample(v) is the sample probability and pr_pop(v) is the population probability.
  • pr_pop(v_rare) = pr_sample(v_rare) × rareValues_sample / rareValues_pop
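  • The following Python sketch of the ESD method follows the common/rare split and the two probability formulas above, with the bias-corrected Chao estimator standing in for the species estimator; the smoothing and distribution-shape comparison described in the text are omitted for brevity.

```python
from collections import Counter

def esd(sample_values, X=1):
    """Estimated Sample Distribution: approximate population probabilities."""
    counts = Counter(sample_values)
    n = len(sample_values)
    common = {v: c for v, c in counts.items() if c > X}
    rare = {v: c for v, c in counts.items() if c <= X}

    # Estimate the total number of values, including unseen ones, with the
    # bias-corrected Chao species estimator (f1 singletons, f2 doubletons).
    f1 = sum(1 for c in counts.values() if c == 1)
    f2 = sum(1 for c in counts.values() if c == 2)
    est_total = len(counts) + f1 * (f1 - 1) / (2 * (f2 + 1))

    # Assume all unseen values are rare values.
    rare_pop = len(rare) + (est_total - len(counts))

    pr_pop = {}
    for v, c in common.items():
        pr_pop[v] = c / n                      # pr_pop(v_common) = pr_sample(v)
    for v, c in rare.items():
        pr_pop[v] = (c / n) * len(rare) / rare_pop  # scaled rare probability
    return pr_pop

print(esd(["a", "a", "a", "b", "b", "c", "d"]))
```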
  • a population distribution may be approximated using a uniform distribution. Given the size of the value space (how many values are possible), the probability of any given value is assumed to be 1/NumberOfValues. On average this leads to an overestimate of the risk of re-identification (a conservative assumption); however, in any individual case it can underestimate or overestimate the probability of a value and lead to under- or overestimation of risk.
  • a distribution may be based on known or published averages. This average may be returned as the probability for a value occurring, which satisfy the value specificity. For example, a publication may claim that “80% of Canadians see a doctor at least once a year”. The probability would be 80% and the specificity is 1 year. The population distribution can return that the year (date without month or day) of a doctor's visit has an 80% probability (i.e. 80% of the population visited a doctor that year).
  • a distribution based on known or published averages may be made more granular (more specific) by combining a known average and uniform distribution over the specificity.
  • For example, 80% is the probability and 1 year is the specificity; if the values are recorded in days, a uniform distribution over the specificity gives each day a probability of 0.8 × 1/365.
  • a joint distribution may be used to more accurately model probabilities and correlations between values.
  • the probability of set/combination of quasi-identifier values occurring can be expressed as the joint distribution over two or more quasi-identifying values.
  • a joint quasi-identifier may be defined as a tuple of values, for example a zip code and date of birth (90210, Apr. 1, 1965).
  • a joint distribution of the quasi-identifiers can be used to calculate the probability of this combination of values occurring.
  • a joint distribution may be acquired by any methods for acquiring a population distribution.
  • a method for assigning an information score can incorporate the expected (probable or likely) knowledge of an average adversary.
  • the expected information from value v can be calculated as I_expected(v) = k(v) × I(v), where I(v) is the information score of the value and k(v) is the probability that an adversary knows the value (defined below).
  • Assigning an information score can incorporate the probability of knowing a value and compute the weighted average risk of all combinations of knowledge scenarios.
  • a knowledge scenario (KS) is the set of values known by an adversary (KS ⁇ V).
  • the set of all knowledge scenarios is the power set of V (i.e., ℘(V)).
  • Let the probability of a particular value v_i being known be k(v_i).
  • Let risk(KS) be the risk associated with a knowledge scenario.
  • the weighted average over all knowledge scenarios is

    risk_avg = Σ_{KS ∈ ℘(V)} pr(KS) × risk(KS), where pr(KS) = Π_{v ∈ KS} k(v) × Π_{v ∉ KS} (1 − k(v))
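  • The weighted average can be computed directly by enumerating the power set, as in the Python sketch below. The risk(KS) function and knowledge probabilities are caller-supplied placeholders; since the power set grows exponentially, this brute-force form is only practical for small sets of values.

```python
from itertools import combinations

def weighted_risk(values, k, risk):
    """Average risk over all knowledge scenarios KS in the power set of V,
    weighted by the probability of each scenario occurring."""
    total = 0.0
    for r in range(len(values) + 1):
        for ks in combinations(values, r):
            # pr(KS): product of k(v) for known values and 1 - k(v) otherwise.
            p = 1.0
            for v in values:
                p *= k[v] if v in ks else (1.0 - k[v])
            total += p * risk(frozenset(ks))
    return total

# Illustrative only: risk grows with scenario size; k gives knowledge odds.
print(weighted_risk(["dob", "zip"], {"dob": 0.9, "zip": 0.5},
                    lambda ks: 0.01 * (1 + len(ks))))
```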
  • Values can be aggregated into a single information score for a data subject. This score is referred to as the given_bits for the data subject.
  • Simple Mutual Information is a method where information scores are aggregated yet account for correlations.
  • correlation is expressed as mutual information.
  • the relationship between two values is expressed in pointwise mutual information. If the values are correlated, that is, they tend to co-occur, then the total information from the two values is less than the sum of the two independent values. This occurs because one value may be inferred from the other, so knowing the second value does not increase information.
  • pr(v i , v j ) is the value from the joint distribution that is calculated.
  • the given_bits for values 1 . . . n is calculated. This may be done via the method of Aggregation of Total Knowledge but is not limited to this.
  • given_bits′ = given_bits + Σ_{(v_i, v_j) ∈ SPV} PMI(v_i, v_j)
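  • The sketch below demonstrates the correction with the standard sign convention for pointwise mutual information, under which the correction for a positively correlated pair is subtracted from the sum of the independent scores; the probabilities are invented for illustration.

```python
import math

def pmi(pr_joint, pr_i, pr_j):
    """Pointwise mutual information, in bits, between two values."""
    return math.log2(pr_joint / (pr_i * pr_j))

# Correlated pair: a ZIP code and a city that co-occur far more often than
# independence predicts, so the pair carries less total information than
# the sum of the two independent information scores.
pr_zip, pr_city = 1 / 1000, 1 / 400
pr_both = 1 / 1000  # the city is implied by the ZIP code in this example
given_bits = -math.log2(pr_zip) - math.log2(pr_city)   # ~18.61 bits
adjusted = given_bits - pmi(pr_both, pr_zip, pr_city)  # ~9.97 bits
print(given_bits, adjusted)  # adjusted equals -log2(pr_both), as expected
```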
  • a general and extensible method for aggregating information scores in a complex schema consisting of multiple tables (or table-like elements) is described.
  • a dataset may be expressed as a schema, which has tables and relations between tables.
  • the model is described as if it was in a database forming a directed acyclic graph.
  • the top or root table 302 would be the subject table, since all measurements are based on subjects as shown in FIG. 3 .
  • a complex schema usually has a top-level table 302 containing key data for each data subject. Each record in this table 302 refers to a different data subject.
  • While the top-level table 302 is a parent table, child tables can also be parents depending on perspective.
  • Child tables 306 and 310 link to parent tables 302 on one or multiple keys. For each record in a parent table 302 there may be zero or more records in the child tables 306 and 310 . Information from related records, for example within a child table 306 and 310 about the same parent record, is aggregated into tables 308 and 312 . Information from child tables is aggregated into table 304 . The aggregation process can be repeated for recursive data structures. A traversal method, such as infix traversal, may be utilized.
  • Total Information: the information in each record is summed to obtain the total information contained in all child records for the given parent. This is effectively aggregation of total information.
  • Maximum Adversary Power X: select the X records with the most information in them related to the given parent, as defined by the information score, and total (sum) the information in those X records.
  • Table Aggregation is applied to information scores from child tables (result of related records aggregation) relating to a single parent record.
  • a parent record may have multiple child records in multiple child tables.
  • the purpose of aggregation is to determine how much of this information from these child tables is aggregated up to the parent record. This resulting information is added to the information of the parent record.
  • Total Information: the information from each child table for this parent record is summed and added to the information of the parent record.
  • Maximum Table: add the information from the child table which has the highest information contribution to the parent record.
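  • The Python sketch below illustrates two of the related-record aggregation strategies above, combined with the Total Information variant of table aggregation; the table names and per-record scores are invented for illustration.

```python
def total_information(scores):
    """Total Information: sum over all related child records."""
    return sum(scores)

def max_adversary_power(scores, x):
    """Maximum Adversary Power X: sum the X most informative records."""
    return sum(sorted(scores, reverse=True)[:x])

# Per-record information scores (bits) in two child tables of one parent.
visits_bits = [4.1, 2.3, 6.0]
vaccinations_bits = [1.2, 1.5]
parent_bits = 3.7  # the parent record's own information score

# Related-record aggregation within each child table, then Total
# Information table aggregation up to the parent record.
child_contribution = (max_adversary_power(visits_bits, 2)
                      + max_adversary_power(vaccinations_bits, 2))
print(parent_bits + child_contribution)  # aggregated score for the subject
```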
  • FIG. 4 shows another representation of a complex schema aggregation method.
  • the previous complex schema aggregation is particularly easy to implement and quite efficient on databases.
  • a variation of the previous complex schema aggregation allows better modelling of the risks associated with multiple tables. This is important when the events for adversary power may be spread across different tables; however, this method is best implemented using subject profiles that are a single data structure (not spread across different tables).
  • all related records from child tables 306 and 310 are collected together into an aggregate table 404 .
  • The difference is that related records are not combined from a single table into an information score; instead, all records are pushed or included into a single collection of records (from the child tables) and each child record identifies which table it is from.
  • Aggregating all information from child records can be fulfilled by any of the methods described for related record aggregation, such as total power, average adversary power X, and maximum adversary power X. Note that the adversary power aggregation is now over all child claims instead of being limited to a single table.
  • The Back Fill Adversary Power is a variant of Average Adversary Power X. Under many circumstances it behaves as average adversary power X and Maximum Table would have behaved under the first aggregation scheme; however, in cases where the information is spread across different tables and adversary power X cannot be fulfilled by a single table, it includes X events.
  • average adversary power X is calculated for each table.
  • this method calculates u, which is the average information in a QI. The algorithm refers to u_t as the information in an average data element for table t. The data element and information values are initially set to 0.
  • a QI groups mechanism can be used to approximate known correlation by only including one of the correlated variables in the risk measurement.
  • a group of QIs is defined as a set of (table, column) tuples and effectively replaces these QIs with a single pseudo QI.
  • the pseudo QI must also have a place in the data structure (particular table that it will be placed into).
  • the information score of the pseudo QI may be defined by many procedures. One procedure is that the information score of the pseudo QI is the maximum information score of any QI contained within it (in the tuple of table and columns).
  • FIG. 5 illustrates QI groups.
  • a single pseudo QI is created from Table 502 (QI 1 , QI 2 , and QI 3 ) and Table 504 (QI A, QI B and QI C). The resulting pseudo QI is the maximum of all of the information values. Creation of QI groups happens after assigning information scores to each value but before aggregating information scores.
  • There are many uses of QI groups. One common structure in medical databases stores the diagnosis encoding in multiple columns, depending on the encoding scheme (e.g. International Statistical Classification of Diseases (ICD)-9, ICD-10, Medical Dictionary for Regulatory Activities (MedDRA)). For any single record one or more of the columns may have values; however, there is usually never a single completely populated column.
  • Measuring the risk on a single sparse column would underestimate the risk. Measuring the risk on all columns would overestimate the risk (including the same diagnosis multiple times if two encodings are present). Instead, with a QI group, the most informative diagnosis will be used and the other encodings are subsumed by it.
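  • A sketch of the maximum-score procedure for a QI group, using invented per-encoding information scores for a sparse diagnosis record:

```python
def pseudo_qi_score(group_scores):
    """One procedure named above: the pseudo QI takes the maximum
    information score of any QI contained in the group."""
    return max(score for score in group_scores.values() if score is not None)

# Sparse diagnosis encodings: only some columns are populated per record.
record = {"icd9_bits": 7.2, "icd10_bits": None, "meddra_bits": 6.8}
print(pseudo_qi_score(record))  # 7.2 -- the other encodings are subsumed
```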
  • probabilities may be utilized instead of information scores.
  • FIG. 6 shows the parallel of using probability and information theory to estimate the risk of re-identification.
  • the schema 602 identifies the QIs that are present in a record; in this example, patient ID, age, ZIP code, gender, and diagnosis.
  • the data 604 provides the information associated with the subject record.
  • Information scores 606 are assigned to each QI and then aggregated into a total 607 , which in this example is 11 bits.
  • Probabilities 608 are assigned for each value and are aggregated into a product 609 , which in this example is 1/2048.
  • Graphic 610 illustrates how the inclusion of each QI narrows the possible number of population matches.
  • A probability is assigned to each value; it is assumed that the distributions already return probabilities. The probabilities can then be aggregated, where an addition on a logarithmic scale is the same as multiplication on a linear scale, by the known mathematical identity log2(a × b) = log2(a) + log2(b).
  • An expected number of matching people in the population is calculated by

    expected_matches = population_size × Π_i pr(v_i) = population_size × 2^(−given_bits)

  • The anonymity, equivalence class size, and re-identification risk are then calculated by

    a = log2(expected_matches)
    k = max(1, expected_matches)
    reid_risk = min(1, 1/expected_matches)
  • Aggregation is then performed as previously described as the same re-identification metrics are provided.
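  • The Python sketch below ties the probability and information formulations together, following the equations above; the probabilities mirror the FIG. 6 example (a product of 1/2048, i.e. 11 bits) and the population size is invented for illustration.

```python
import math

def risk_metrics(value_probs, population_size):
    """Expected matches, anonymity, k, and re-identification risk for one
    subject profile, from per-value population probabilities."""
    given_bits = sum(-math.log2(p) for p in value_probs)  # summed info scores
    expected_matches = population_size * 2 ** (-given_bits)
    anonymity = math.log2(expected_matches)
    k = max(1.0, expected_matches)
    reid_risk = min(1.0, 1.0 / expected_matches)
    return given_bits, anonymity, k, reid_risk

probs = [1 / 2, 1 / 8, 1 / 4, 1 / 2, 1 / 16]  # multiply to 1/2048 (11 bits)
print(risk_metrics(probs, population_size=100_000))
```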
  • FIGS. 7 to 9 show the relative error of some methods when compared against the actual population risk and varying the sampling fraction.
  • FIG. 7 shows a graph 700 of a low risk dataset; plotted results are the Estimated Sample Distribution (ESD), simple mutual information (MI known), known population distributions (known), and the Zayatz-Korte method (currently one of the most accurate estimation techniques).
  • FIG. 8 shows a graph 800 of a medium risk data set.
  • FIG. 9 shows a graph 900 of a high-risk data set.
  • the Zayatz-Korte method often has much higher relative error than the ESD.
  • the Zayatz-Korte method shows an increase in risk as sampling fraction decreases.
  • the ESD method provides consistent results almost without regard for sampling fraction.
  • the ESD method provides conservative estimates on the high-risk data shown in FIG. 9 when compared to the baseline.
  • FIG. 10 shows a system for performing risk assessment of a dataset.
  • the system 1000 is executed on a computer comprising a processor 1002 , memory 1004 , and input/output interface 1006 .
  • the memory 1004 contains instructions which, when executed, provide a risk assessment module 1010 which performs an assessment of re-identification risk.
  • the risk assessment may also include a de-identification module 1016 for performing further de-identification of the database or dataset based upon the assessed risk.
  • a storage device 1050 , either connected directly to the system 1000 or accessed through a network (not shown), stores the dataset 1052 and possibly the sample population distribution 1054 (from which the dataset is derived).
  • a display device 1030 allows the user to access data and execute the risk assessment process.
  • Input devices such as a keyboard and/or mouse provide user input to the I/O module 1006 .
  • the user input enables selection of desired parameters utilized in performing risk assessment but may also be selected remotely through a web-based interface.
  • the instructions for performing the risk assessment may be provided on a computer readable memory.
  • the computer readable memory may be external or internal to the system 1000 and provided by any type of memory such as read-only memory (ROM) or random-access memory (RAM).
  • the databases may be provided by a storage device such as a compact disc (CD), digital versatile disc (DVD), non-volatile storage such as a hard drive, USB flash memory, or external networked storage.
  • the memory may be non-transitory and does not include waves, signals, and/or other transitory and/or intangible communication media.
  • One or more components of the system or functions of the system may be performed, accessed, or retrieved remotely through a network.
  • Each element in the embodiments of the present disclosure may be implemented as hardware, software/program, or any combination thereof.
  • Software code, either in its entirety or a part thereof, may be stored in a computer readable medium or memory (e.g., as a ROM, for example a non-volatile memory such as flash memory, CD ROM, DVD ROM, Blu-ray™, a semiconductor ROM, USB, or a magnetic recording medium, for example a hard disk).
  • the program may be in the form of source code, object code, a code intermediate source and object code such as partially compiled form, or in any other form.
  • FIGS. 1-17 may include components not shown in the drawings.
  • elements in the drawings are not necessarily to scale, are only schematic and are non-limiting of the elements' structures. It will be apparent to persons skilled in the art that a number of variations and modifications can be made without departing from the scope of the invention as defined in the claims.
  • a population distribution for each quasi-identifier (QI) in a schema is retrieved by a computing device from a storage device.
  • a population distribution may be associated with one or more QIs and multiple distributions may be required for the schema.
  • the population distribution is associated with the type of data contained in the dataset. For example, the population distribution may be from census data, which can be determined based upon the QI in the schema.
  • the association of the dataset with population distributions may be determined automatically by analyzing content of the dataset or by a predefined association.
  • a population distribution maps a value to a probability, which represents the probability of someone in the population having this value.
  • a population distribution defines a mapping of quasi-identifying values to the probabilities of those values occurring in the range, region, or demographic profile covering the data subjects associated with/contained within the dataset.
  • the dataset is agnostic of the source of the population distribution; however, a number of methods are defined to obtain population distributions, including Estimated Sample Distribution (ESD) measurement.
  • a population distribution may be derived from census data or other pre-existing data sources.
  • a population distribution may be approximated using the distribution from the dataset.
  • a distribution may be based on un-structured data as well, for example using natural language processing or other suitable functionalities. The distributions from un-structured data can be combined with other distributions for a given QI.
  • a distribution may be based on known or published averages.
  • a distribution based on known or published averages may be made more granular (more specific) by combining a known average and uniform distribution over the specificity.
  • O(1800) quasi-identifiers are identified and considered for transformation in risk mitigation.
  • These QIs can be cross sectional or longitudinal in nature.
  • Level 1 (L1) QIs are cross sectional in nature, such that they are found once for each individual or subject in the data set.
  • Level 2 (L2) QIs are longitudinal in nature, such that more than one instance may be recorded for each individual or subject in the data set.
  • the high-level process flow for deriving risk contribution models for identifying variables is shown in the diagram 1100 in FIG. 11 .
  • the identification of quasi-identifiers in the data can be achieved either manually or automatically, with the latter being achievable using a deterministic algorithm, a set of pre-defined rules, or an algorithm that leverages probabilistic decision-making, for example through machine learning or artificial intelligence.
  • Rather than inspect the entire data model in order to identify all quasi-identifying fields, the computing device develops a list of commonly-occurring but difficult-to-detect quasi-identifying fields. For each such field, the computing device creates a distribution of values/information values from other sources. Then, when risk measurement is performed, random simulated values (or information values) are selected for these fields. Quasi-identifying values are then selected for each field with multiplicity equal to the associated randomly-selected count. These values are incorporated into the overall risk measurement and utilized in the anonymization process. In typical implementations, the overall average of disclosure risk measurement results proves to be generally consistent with the results obtained on the fully-classified data model.
  • the computing device may also automatically identify quasi-identifying fields within the dataset, using a deterministic algorithm, a set of pre-defined rules, or an algorithm that leverages probabilistic decision-making, for example through machine learning or artificial intelligence.
  • the combination of automatic identification of quasi-identifying fields with simulated values may be used in conjunction with, or in lieu of, real data by the computing device to avoid complex risk measurements, or to speed up processing of large/complex datasets.
  • Possible additional uses of automatic identification of quasi-identifying fields with simulated values by the computing device include, but are not limited to, the detection and inclusion in risk measurement of personal information within streaming data, or the performance of on-device processing of data, such as disclosure risk measurement or anonymization. Simulated values and distributions derived as part of the process explained above can also help feed into a natural language processing algorithm or other algorithms to detect identifying and other information in un-structured data as well.
  • simulated contributions can simplify classification, reduce manual effort, and speed up the computing device's execution of the anonymization process of the dataset. This can, overall, save computing resources by reducing processor and memory usage during the anonymization process. Furthermore, additional resources can be focused on automation for de-identification, where the identifiers are transformed. Rather than a prescriptive approach, de-identification can be customized to maintain maximum data utility in the most desired fields.
  • a computing device such as a remote service or a local computing device operating an application, is configured to generate value distributions and then select quasi-identifying fields in order to streamline a data anonymization process which utilizes the classified data in subsequent processing (e.g., performing de-identification, risk assessment, etc.).
  • two distinct steps are performed to streamline data classification. The first is an up-front, one-time (or infrequently recurring) step of generating value distributions. The second, on a per-measurement basis, either precedes or embellishes the first step of the previous submission.
  • For any quasi-identifying field which is to be simulated, a population distribution must be created. These distributions can be obtained from a variety of sources, including, but not limited to, a single large dataset, an aggregation of small datasets, census or other data sources, research papers, unstructured data, etc. A population distribution may also be derived from other distributions, including but not limited to joint distributions. The distribution may comprise the distribution of actual values, the distribution of the raw information values of the actual values, or the distribution of knowable information values of the actual values.
  • a second distribution reflects the number of longitudinal quasi-identifying values held by individuals in the population.
  • Longitudinal quasi-identifying values are those of which a person has an unknown number, such as medical diagnoses, as opposed to those which always have a cardinality of one, such as date of birth.
  • the counts may be sourced from a single dataset, an aggregation of multiple datasets, or other external sources.
  • the raw population distributions may be processed in various manners, such as by smoothing algorithms.
  • a single set of distributions may be created for multiple risk measurement projects or created in a bespoke manner for each individual risk measurement project.
  • a computing device can be configured to store the source(s) of the two types of distributions as a whole, or the source(s) of actual values, frequency of values, the information values of the actual values, or the number of longitudinal quasi-identifying values held by individuals in the population.
  • distributions may also be compared or validated against historical/prior information by the computing device, such that any new data/evidence obtained can be used by the computing device to generate or update a posterior risk estimate.
  • Such an extension can be used in applications including, but not limited to, Bayesian risk estimation, and anonymization of streaming data.
  • When a dataset is received for a risk measurement assessment, for each data subject the computing device randomly selects a value for each demographic quasi-identifying field from the associated population distribution. A random count of longitudinal values is also drawn from the distribution of counts for that data subject (either a single count for that data subject which is shared across all longitudinal quasi-identifying values, or a separate count for each longitudinal quasi-identifying field). Quasi-identifying values are then selected for each field with multiplicity equal to the associated randomly-selected count. Once the identifying variables are sufficiently identified in the dataset, the computing device then proceeds with the remainder of the process and retrieves the appropriate population distributions for the randomly-generated quasi-identifying fields. Other (true) quasi-identifying fields use their own population distributions as applicable.
  • Cross sectional (or L1) QIs are those that are found once for each individual or subject in the data set.
  • subject height and weight at intake tend to be included in risk measurement and appear as a measured value in many clinical trials.
  • certain assumptions can be made about the height and weight distributions that enables modeling on a per-participant basis.
  • height and weight measurements tend to follow unimodal distributions centered about global averages, and given an assumption of independence in the present risk measurement methodology, correlations between height and weight can be safely ignored if values are generated randomly for each participant.
  • Although the simulated heights and weights for individual subjects may vary meaningfully from their true values, taken in aggregate their contribution to average risk may closely mirror that of the real data.
  • histograms can be built using the desired L1 quantities for each participant by aggregating L1 data across a number of completed studies, such that the resultant histograms can be used by the computing device to derive probability densities, specifically representing the probability of having a certain value of the desired quantity.
  • Sample frequencies can also be computed from this aggregated data, which can be used directly in risk measurement.
  • These estimates may also be used by the computing device in the context of Bayesian risk estimation, wherein the given data/evidence is compared to historical/prior information to generate a posterior risk estimate.
  • Such an implementation would have applications within the anonymization of streaming data, for example.
  • the longitudinal QIs that tend to enter risk measurement take the form of dates and codes—for example, in clinical data, codified fields related to subject medical history and concomitant medications are present, but in practice other L2 QIs may also be subject to risk measurement, including but not limited to transactional data such as that, for example, associated with financial transactions. As a matter of convention, such L2 quantities will be referred to as “claims” going forward.
  • Models can be built using an approach similar to that for the cross sectional (L1) variables, wherein subject claims from different studies are aggregated together, whether in a stratified or non-stratified fashion, from which distributions can be drawn representing the number of claims or transactions per participant or individual, as well as the sample frequencies of each claim. These distributions can then be used by the computing device to derive approximate probability density functions for the number of claims and the frequency of each claim, from which each participant receives a simulated number of claims, as well as a simulated prior value for each claim.
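  • The model-building step can be sketched in Python as follows; the claim records are invented for illustration and stratification is omitted.

```python
from collections import Counter

# Claims aggregated across completed studies: (subject_id, claim_code) pairs.
claims = [(1, "C01"), (1, "C02"), (2, "C01"), (2, "C01"), (3, "C03")]

per_subject = Counter(subject for subject, _ in claims)  # claims per person
count_freq = Counter(per_subject.values())  # distribution of claim counts
claim_freq = Counter(code for _, code in claims)  # sample frequency per claim

# Approximate probability densities: number of claims per participant, and
# the frequency of each claim, used to draw simulated counts and priors.
count_pdf = {k: v / len(per_subject) for k, v in count_freq.items()}
claim_pdf = {c: v / len(claims) for c, v in claim_freq.items()}
print(count_pdf, claim_pdf)
```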
  • the computing device simulates distributions of identifiers from data collected from other (similar) sources. For example, in some scenarios, such as SIPD (structured individual patient data) from clinical trials, there are known elements to include for risk measurement. If one class of identifiers dominates the manual classification stage, then the use of simulated risk contributions by the computing device reduces the amount of work and manual effort necessary to classify those identifiers. Furthermore, by simulating the contribution of some of the main drivers of risk, such as those that dominate the classification stage, the requirement of classifying these identifiers is eliminated.
  • step 1310 the computing device classifies the remaining identifying variables that were not contained in the simulation of step 1305 .
  • the classified identifiers are also used to de-identify to reduce risk below the threshold in subsequent steps.
  • step 1315 the computing device performs de-identification by determining a candidate de-identification solution.
  • in step 1320 the computing device performs risk assessment by calculating risk from classified risk drivers plus simulated contributions.
  • step 1325 the computing device compares the risk assessment to a risk threshold. When the comparison indicates that the risk threshold is not met, then the process reverts back to step 1315 in which de-identification is performed. When the comparison indicates that the risk threshold is met, then the anonymization process is concluded, as shown in step 1330 .
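  • The control flow of steps 1305-1330 can be sketched as follows; every component is stubbed out as a placeholder callable, and the threshold value is invented for illustration.

```python
RISK_THRESHOLD = 0.09  # illustrative threshold

def anonymize(dataset, simulate, classify, de_identify, measure_risk):
    """Steps 1305-1330: simulate core QI contributions, classify the rest,
    then iterate de-identification until the measured risk meets threshold."""
    simulated = simulate(dataset)        # step 1305: simulate core identifiers
    classified = classify(dataset)       # step 1310: classify the remainder
    solution = None
    while True:
        solution = de_identify(classified, solution)  # step 1315
        risk = measure_risk(classified, simulated)    # step 1320
        if risk <= RISK_THRESHOLD:                    # step 1325
            return solution                           # step 1330: concluded

# Trivial stubs to exercise the control flow.
risks = iter([0.25, 0.12, 0.07])
print(anonymize(None,
                simulate=lambda d: "sim",
                classify=lambda d: "cls",
                de_identify=lambda c, s: (s or 0) + 1,
                measure_risk=lambda c, s: next(risks)))  # -> 3 passes
```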
  • the selection of the random values can be injected into the step in which the population distribution data associated with the dataset is retrieved. Retrieving population distributions still occurs, but only for the identified actual quasi-identifying fields. Additionally, when applying information values to each quasi-identifying value in the dataset, random counts of longitudinal fields are created, and information values are directly sourced from the distribution rather than quasi-identifying values. Re-identification risk measurement then proceeds as with the previous submission.
  • Random simulated quasi-identifier values can be applied to a direct-comparison-based risk measurement algorithm. While the previous submissions describe the case of simulated risk contributions applied to and evaluated against an average disclosure risk measurement, the same approach could be used by the computing device to evaluate an expected maximum risk measurement, either through a single run, or as a Monte Carlo simulation. While the use of simulated risk contributions by the device would not identify which records exceed a maximum risk threshold, an expected count of the number of data subjects who would exceed this value could be evaluated.
  • FIG. 14 is a chart 1400 showing an illustrative comparison of the true and simulated average risk measurement values considering patient height, weight, medical codes (e.g., MedDRA HLT) and concomitant medication codes (e.g., 4-digit ATC).
  • Comparisons of true and simulated average risk values including subject height, weight, medical history codes, and concomitant medication codes are shown in FIG. 14 for a number of real clinical trials datasets.
  • verbatim medical terms have been generalized to high-level terms using the MedDRA dictionary, and drug names have been generalized to 4-digit ATC codes.
  • the reference populations used to determine risk were reflective of the true values determined for each study.
  • the risk measurement values are also scaled assuming a 30% chance that a re-identification or disclosure attack will be performed, which is a reasonable estimate for release of anonymized structured data on a clinical data portal.
  • the simulated risk values presented in FIG. 14 are more conservative with respect to the real values, but in all cases, when the true risk is below threshold, the simulated risk is also below threshold. In some respects, this is an agreeable result, as a systematic underestimation of risk from the simulation would be more problematic from a liability perspective.
  • FIG. 15 is a flowchart 1500 that shows the overall process flow.
  • full datasets are classified using data from one or more prior datasets 1510 .
  • a current dataset 1515 may also be selected for validation as indicated at decision block 1520 . If so selected, the data is used to classify a full dataset at block 1525 .
  • Population distributions are built at block 1530 using the classified full datasets and/or census-type data 1535 . The built distributions are stored as value distributions 1540 .
  • the data is utilized to classify a minimal dataset at block 1545 .
  • the value distributions 1540 and classified minimal dataset are inputs to block 1550 at which risk is calculated for each subject.
  • An average risk is calculated at block 1555 and provided to block 1560 , model validation.
  • If the model is validated, the subject data is included in a de-identified dataset 1570 . If not, then a modified de-identification solution may be implemented at block 1575 and the calculation of subject and average risk is repeated in a loop.
  • the classified full dataset at block 1525 may also be utilized to calculate risk for each subject at block 1580 and an average risk calculated at block 1585 .
  • the calculations may be used for model validation at block 1560 , as shown.
  • the simulation of quasi-identifiers in risk measurement and mitigation can be further extended to contexts such as incremental/streaming data and risk monitoring.
  • quasi-identifiers may occur infrequently or sparsely enough in the data that it is not possible to compute robust estimates of their relative contribution to disclosure risk.
  • the use of simulated risk contributions for the detected identifiers could allow the computing device to perform dynamic or real-time disclosure risk calculation and anonymization of data, thereby preventing identity disclosure.
  • the use of periodically updated, probabilistic models also lends itself to Bayesian formulations of disclosure risk, such that new data/evidence can be applied to historical/prior information to generate more accurate posterior risk estimates.
  • a computing device can compare simulated distributions with actual distributions of incremental data to determine whether further disclosure risk control is necessary or whether an existing de-identification strategy is still applicable to the new data. This can save processing in the context of incremental/streaming data.
  • an interim clinical dataset may contain incomplete descriptions of patient visits, or less detailed information on medical conditions, treatments, or medications, as compared to the final clinical dataset.
  • the computing device can consider the contributions to disclosure risk of both the identifying information recorded in the data, as well as simulated quasi-identifiers for information that has not yet been seen, in order to provide a reasonably accurate estimate of the disclosure risk expected from a full data release.
  • the computing device may use this information to de-identify the incremental data in a manner that brings the estimated final disclosure risk below threshold, in order to ensure that only a suitable amount of information is disclosed in the incremental release.
  • the computing device can remove any simulated components that have been supplanted by real data and update the disclosure risk using this newly-available information.
  • the complete dataset can then be de-identified by the computing device in a manner that accounts for the new information available for the final data release and is also consistent with the de-identification strategy employed in the previous, incremental release.
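The bookkeeping described in the preceding items might be sketched as follows; the multiplicative per-field risk model and the names estimate_release_risk and supplant are illustrative assumptions.

```python
def estimate_release_risk(recorded, simulated, population_size):
    """Combine match probabilities for fields already recorded with
    simulated probabilities for fields not yet seen (both dicts map
    field name -> probability; independence between fields assumed)."""
    p = 1.0
    for prob in list(recorded.values()) + list(simulated.values()):
        p *= prob
    return 1.0 / max(population_size * p, 1.0)

def supplant(recorded, simulated, field, real_prob):
    """When final data arrives, drop the simulated component for a field
    and update the estimate with the newly available information."""
    simulated.pop(field, None)
    recorded[field] = real_prob
```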
  • the computing device can also update the simulated data models and distributions to derive more accurate estimates of disclosure risk and produce updated de-identification strategies for future data releases.
  • the use of simulated quasi-identifiers can serve as an efficient way for a computing device to estimate or anticipate the disclosure risk associated with a given data request. For example, given some amount of quasi-identifying information requested for a number of data subjects, the estimated disclosure risk can be computed by an external or embedded computing device before any actual data access occurs. If the expected disclosure risk of the data is above a specified threshold, the user can be prevented from accessing or downloading the data. The computing device can then simulate the conditions that would result in an estimated disclosure risk which falls below threshold and require the user to confirm that the proposed level of de-identification is acceptable. At that point, the true data can be retrieved from the data repository, de-identified, and provided to the user.
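A sketch of such a pre-access gate is shown below, assuming a hypothetical simulator object that wraps the simulated quasi-identifier models; its methods estimate_risk and propose_solution, and the threshold value, are inventions of this sketch.

```python
THRESHOLD = 0.09  # assumed policy threshold

def handle_request(requested_fields, n_subjects, simulator):
    """Gate a data request before any real data is touched."""
    risk = simulator.estimate_risk(requested_fields, n_subjects)
    if risk <= THRESHOLD:
        return ("grant", None)
    # Otherwise, simulate de-identification conditions that bring the
    # estimated risk below threshold and ask the user to confirm them
    # before the true data is retrieved and de-identified.
    proposal = simulator.propose_solution(requested_fields, n_subjects,
                                          THRESHOLD)
    return ("confirm", proposal)
```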
  • the use of simulated quasi-identifiers can serve as a more accurate way to estimate the disclosure risk associated with a given data request.
  • the computing device can utilize simulated quasi-identifiers in a manner to account for the various cohorts (i.e. heterogeneity) within the pooled data, to ensure that the expected disclosure risk of the released data is not under-estimated.
  • the computing device can then simulate the conditions that would result in all cohorts with an expected disclosure risk below a specified threshold and require the user to confirm that the proposed level of de-identification is acceptable.
  • the computing device can identify which cohorts have an expected disclosure risk above a specified threshold and prevent the user from accessing or downloading the data.
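A minimal per-cohort sketch follows; the cohort_of and estimate_risk callables and the threshold are assumptions standing in for the models described above.

```python
from collections import defaultdict

def risky_cohorts(records, cohort_of, estimate_risk, threshold=0.09):
    """Check each cohort's expected disclosure risk separately, so that
    heterogeneity in pooled data cannot hide a risky subgroup behind a
    safe pooled average."""
    cohorts = defaultdict(list)
    for record in records:
        cohorts[cohort_of(record)].append(record)
    flagged = {}
    for name, members in cohorts.items():
        risk = estimate_risk(members)
        if risk > threshold:
            flagged[name] = risk  # access blocked until mitigated
    return flagged
```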
  • simulated quasi-identifiers can serve as an efficient way to estimate the disclosure risk associated with a given query or request.
  • in such a system, the computing device can compute the estimated disclosure risk using simulated quasi-identifiers and select the appropriate subset of records (or all records) and the de-identification strategy that would meet the disclosure risk requirements and privacy budget of the data recipient.
  • simulated quasi-identifiers can ensure that a response to a targeted query is calculated using the subset of data that meets the disclosure controls in an expedited fashion.
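One way such a selection could be sketched, under the simplifying assumption that per-record risks accumulate against the recipient's privacy budget, is a greedy safest-first pass; the names and the additive budget model are assumptions of this sketch.

```python
def select_records(records, record_risk, privacy_budget):
    """Order records safest-first and keep the largest prefix whose
    cumulative expected risk fits within the recipient's budget."""
    kept, spent = [], 0.0
    for record in sorted(records, key=record_risk):
        risk = record_risk(record)
        if spent + risk > privacy_budget:
            break
        kept.append(record)
        spent += risk
    return kept
```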
  • disclosure risk may be generalized to other metrics of interest in a given use scenario. For example, differential privacy mechanisms and other privacy-preserving techniques may be advantageously configured and/or modified to utilize the present simulated quasi-identifiers to thereby lower disclosure risk.
  • FIG. 16 is a diagram of an illustrative example of such an expedited risk measurement and mitigation system 1600 that is configured to combine simulated risk contributions and actual risk contributions computed from a dataset.
  • the system utilizes system actions 1605 and user actions 1610 to perform risk mitigation by focusing on transforming the user-identified quasi-identifiers. This technique removes, in typical implementations, the need for the user to transform, or even identify, any of the “core” identifiers in the data.
  • the system enables the user to identify the “core” quasi-identifiers for which they wish to account using simulated risk contributions from a dataset 1615.
  • the system 1600 requests that the remaining quasi-identifying information such as age, gender, race, ethnicity, etc. be identified and classified by the user, as indicated at block 1620 .
  • the system can perform a disclosure risk measurement by combining simulated risk contributions for the “core” quasi-identifiers (block 1625 ) with risk contributions from the remaining user-identified and -classified fields (block 1630 ).
  • An aggregated risk is computed by the system based on core and non-core contributions (block 1635 ).
  • the system transforms non-core quasi-identifiers and direct identifiers to mitigate risk (block 1640 ) and a de-identified dataset 1645 may be exported by the system.
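The aggregation at block 1635 might be sketched as below; treating each contribution as a per-field match probability and multiplying them (i.e., assuming independence between fields) is an assumption of this sketch, not a statement of the patented combination rule.

```python
def aggregate_risk(core_simulated, non_core_measured, population_size):
    """Block 1635 sketch: combine simulated contributions for "core"
    quasi-identifiers with measured contributions from the
    user-classified fields (each dict maps field -> match probability)."""
    p = 1.0
    for prob in (*core_simulated.values(), *non_core_measured.values()):
        p *= prob
    return 1.0 / max(population_size * p, 1.0)
```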
  • FIG. 17 is a diagram of an illustrative system 1700 which is configured to combine simulated risk contributions with synthetic data.
  • the system may be configured to simulate the contribution of quasi-identifiers to disclosure risk, in some embodiments of the present invention, by programmatically generating synthetic data using a synthetic data component 1705 .
  • the synthetic data may be used to replace certain fields in a clinical trial dataset 1710 containing personally-identifying information.
  • the system aggregates the synthetic data with simulated risk contributions from a simulated risk component 1715 .
  • in system 1700, fields considered under the scope of simulated risk contributions do not need to be identified or otherwise manipulated by the user, as the statistical models underpinning the simulation process already account for their contribution to disclosure risk.
  • the user input required at block 1720 to identify the fields to be synthesized may be minimized or, in some cases, reduced to zero.
  • the system may autonomously identify such fields based on their content and formatting and synthesize the identified fields, accordingly, at block 1730 .
  • the system computes risk contributions of the identified fields at block 1735 .
  • the system identifies fields to be simulated at block 1740 and simulates risk contributions for each field at block 1745 .
  • the outputs of the simulated risk component 1715 and the synthetic data component 1705 are aggregated at block 1750 to produce a final risk estimate 1755.
  • Synthetic data may be generated by the synthetic data component to emulate certain types of personal information, such as clinical visit dates, treatment dates, etc., which conventionally require stronger underlying assumptions and prior knowledge to simulate effectively. For example, in the case of dates recorded in a clinical trial dataset, the start and end dates of the trial, as well as the length of the period in which participants are recruited into the study, would typically be needed to properly scope the risk contribution simulation. Synthetic data instead advantageously allows for the substitution of real personal information with generated data that possesses similar statistical properties to the original data (such as trends within a given field and its correlations to other fields) but is not attributable back to the original data subjects. In this way, the system can combine synthetic data and simulated risk contributions to produce a robust estimate of the disclosure risk associated with the dataset, with a minimum of user intervention.
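A minimal sketch of date synthesis in the spirit of component 1705 follows; the resampling-with-jitter scheme and all names are assumptions chosen for brevity.

```python
import random
from datetime import timedelta

def synthesize_visit_dates(real_dates, trial_start, trial_end, jitter=3):
    """Replace real visit dates with draws that preserve the empirical
    distribution of offsets from the trial start, so trends survive
    while no generated date is attributable to an original subject."""
    offsets = [(d - trial_start).days for d in real_dates]
    span = (trial_end - trial_start).days
    synthetic = []
    for _ in real_dates:
        o = random.choice(offsets) + random.randint(-jitter, jitter)
        synthetic.append(trial_start + timedelta(days=max(0, min(span, o))))
    return synthetic
```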
  • simulated risk contributions can also be extended beyond anonymizing structured data to encompass a system which performs disclosure risk measurement and anonymization of unstructured data, such as clinical study reports (CSRs).
  • instances of personal information such as patient demographics, medical histories, drug codes, etc. are embedded in unstructured documents in a manner that oftentimes requires a lengthy initial phase of detection leveraging automated or semi-automated natural language processing (NLP) technologies before risk measurement and mitigation can begin.
  • a suitably high recall of this embedded personal information is required before the risk measurement and de-identification processes produce reliable and accurate assessments of the original and mitigated disclosure risk.
  • a system combining simulated risk contributions and modern NLP technologies, using unstructured data as input, may allow for a substantial decrease in the amount of time and effort required to reach a state of readiness for risk measurement and mitigation.
  • the system may produce simulated risk contributions for personal information such as demographics, medical history codes, concomitant medication codes, etc. The remaining detection process would therefore be limited to capturing fields such as dates, which feature regular and repeating formats and content, and which can be captured almost fully automatically by leveraging NLP technologies embedded within the system, with minimal additional user input or effort. Simulated quasi-identifier distributions and models can also be used to improve the detection of personally-identifying information in unstructured data, by using the simulated distributions as a form of gazetteer to inform the natural language processing technologies. By combining these simulated risk contributions with the remaining detected personally-identifying information, the system may compute an estimate of disclosure risk, with risk mitigation focusing on transforming the detected personally-identifying information in a way that achieves a suitably low disclosure risk.
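As a closing illustration of the gazetteer idea, simulated value distributions can seed a naive lookup over unstructured text; a real system would sit on top of proper NLP tooling, and the substring matching and min_prob parameter here are assumptions.

```python
def gazetteer_hits(text, value_distributions, min_prob=0.01):
    """Scan unstructured text for values appearing in the simulated
    quasi-identifier distributions; matches become candidate personal
    information for the downstream NLP detection pipeline."""
    lowered = text.lower()
    hits = []
    for field, dist in value_distributions.items():
        for value, prob in dist.items():
            if prob >= min_prob and str(value).lower() in lowered:
                hits.append((field, value, prob))
    return hits
```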
US16/991,199 2019-08-12 2020-08-12 Simulated risk contribution Pending US20210049282A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CA3089835A CA3089835A1 (en) 2019-08-12 2020-08-12 Simulated risk contributions
US16/991,199 US20210049282A1 (en) 2019-08-12 2020-08-12 Simulated risk contribution

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962885435P 2019-08-12 2019-08-12
US16/991,199 US20210049282A1 (en) 2019-08-12 2020-08-12 Simulated risk contribution

Publications (1)

Publication Number Publication Date
US20210049282A1 true US20210049282A1 (en) 2021-02-18

Family

ID=72050778

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/991,199 Pending US20210049282A1 (en) 2019-08-12 2020-08-12 Simulated risk contribution

Country Status (3)

Country Link
US (1) US20210049282A1 (de)
EP (1) EP3779757B1 (de)
CA (1) CA3089835A1 (de)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8316054B2 (en) * 2008-09-22 2012-11-20 University Of Ottawa Re-identification risk in de-identified databases containing personal information
US9087216B2 (en) * 2013-11-01 2015-07-21 Anonos Inc. Dynamic de-identification and anonymity
US9990515B2 (en) * 2014-11-28 2018-06-05 Privacy Analytics Inc. Method of re-identification risk measurement and suppression on a longitudinal dataset
US10572679B2 (en) * 2015-01-29 2020-02-25 Affectomatics Ltd. Privacy-guided disclosure of crowd-based scores computed based on measurements of affective response
US20170103232A1 (en) * 2015-07-15 2017-04-13 Privacy Analytics Inc. Smart suppression using re-identification risk measurement
US20170177907A1 (en) * 2015-07-15 2017-06-22 Privacy Analytics Inc. System and method to reduce a risk of re-identification of text de-identification tools
US20180114037A1 (en) * 2015-07-15 2018-04-26 Privacy Analytics Inc. Re-identification risk measurement estimation of a dataset
US10924934B2 (en) * 2017-11-17 2021-02-16 Arm Ip Limited Device obfuscation in electronic networks
US20220222374A1 (en) * 2019-04-30 2022-07-14 Sensyne Health Group Limited Data protection

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210165913A1 (en) * 2019-12-03 2021-06-03 Accenture Global Solutions Limited Controlling access to de-identified data sets based on a risk of re-identification
US20210279367A1 (en) * 2020-03-09 2021-09-09 Truata Limited System and method for objective quantification and mitigation of privacy risk
US11768958B2 (en) * 2020-03-09 2023-09-26 Truata Limited System and method for objective quantification and mitigation of privacy risk
CN115935359A (zh) * 2023-01-04 2023-04-07 北京微步在线科技有限公司 A file processing method and apparatus, computer device, and storage medium

Also Published As

Publication number Publication date
CA3089835A1 (en) 2021-02-12
EP3779757A1 (de) 2021-02-17
EP3779757B1 (de) 2023-10-04

Similar Documents

Publication Publication Date Title
US10685138B2 (en) Re-identification risk measurement estimation of a dataset
US10818383B2 (en) Hospital matching of de-identified healthcare databases without obvious quasi-identifiers
EP3779757B1 (de) Simulated risk contributions
Gutman et al. A Bayesian procedure for file linking to analyze end-of-life medical costs
US8326849B2 (en) System and method for optimizing the de-identification of data sets
US8316054B2 (en) Re-identification risk in de-identified databases containing personal information
US10061894B2 (en) Systems and methods for medical referral analytics
US11664098B2 (en) Determining journalist risk of a dataset using population equivalence class distribution estimation
CA2913647C (en) Method of re-identification risk measurement and suppression on a longitudinal dataset
CN111971675A (zh) Data product release method or system
CA2734545A1 (en) A system and method for evaluating marketer re-identification risk
US10430716B2 (en) Data driven featurization and modeling
US20170124351A1 (en) Re-identification risk prediction
JP6956107B2 (ja) 明確な照合情報を持たない識別不能のヘルスケアデータベースの病院マッチング
Hotz et al. Balancing data privacy and usability in the federal statistical system
WO2015154058A1 (en) Systems and methods for medical referral analytics
US20240070323A1 (en) Method and system for modelling re-identification attacker's contextualized background knowledge
Hernandez-Matamoros et al. Comparative Analysis of Local Differential Privacy Schemes in Healthcare Datasets
US20230153757A1 (en) System and Method for Rapid Informatics-Based Prognosis and Treatment Development
Zhou Synthetic Data Sharing and Estimation of Viable Dynamic Treatment Regimes with Observational Data
WO2023081919A1 (en) Systems and methods for de-identifying patient data

Legal Events

Date Code Title Description
AS Assignment

Owner name: PRIVACY ANALYTICS INC., CANADA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DI VALENTINO, DAVID NICHOLAS MAURICE;MIAN, MUHAMMAD ONEEB REHMAN;REEL/FRAME:053469/0381

Effective date: 20200812

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER