EP4226268A1 - Method for evaluating the risk of re-identification of anonymized data - Google Patents
Method for evaluating the risk of re-identification of anonymized dataInfo
- Publication number
- EP4226268A1 (application EP21810398.4A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- individuals
- original
- anonymous
- individual
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
- 238000000034 method Methods 0.000 title claims abstract description 85
- 230000008569 process Effects 0.000 claims description 10
- 238000004458 analytical method Methods 0.000 claims description 8
- 238000000513 principal component analysis Methods 0.000 claims description 6
- 230000009466 transformation Effects 0.000 claims description 6
- 238000013528 artificial neural network Methods 0.000 claims description 3
- 238000013500 data storage Methods 0.000 claims description 3
- 238000003672 processing method Methods 0.000 claims description 2
- 230000001131 transforming effect Effects 0.000 claims description 2
- 238000012545 processing Methods 0.000 description 6
- 238000005516 engineering process Methods 0.000 description 4
- 238000013459 approach Methods 0.000 description 3
- 230000008901 benefit Effects 0.000 description 3
- 230000033228 biological regulation Effects 0.000 description 3
- 238000011156 evaluation Methods 0.000 description 3
- 238000000556 factor analysis Methods 0.000 description 3
- 230000000717 retained effect Effects 0.000 description 3
- 230000008878 coupling Effects 0.000 description 2
- 238000010168 coupling process Methods 0.000 description 2
- 238000005859 coupling reaction Methods 0.000 description 2
- 238000011160 research Methods 0.000 description 2
- 238000003860 storage Methods 0.000 description 2
- 230000004075 alteration Effects 0.000 description 1
- 230000002860 competitive effect Effects 0.000 description 1
- 238000004590 computer program Methods 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 238000009826 distribution Methods 0.000 description 1
- 230000007613 environmental effect Effects 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 230000036541 health Effects 0.000 description 1
- 230000014759 maintenance of location Effects 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000000491 multivariate analysis Methods 0.000 description 1
- 238000011158 quantitative evaluation Methods 0.000 description 1
- 238000012502 risk assessment Methods 0.000 description 1
- 238000011426 transformation method Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/62—Protecting access to data via a platform, e.g. using keys or access control rules
- G06F21/6218—Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
- G06F21/6245—Protecting personal data, e.g. for financial or medical purposes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/52—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow
- G06F21/54—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow by adding security routines or objects to programs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/57—Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
- G06F21/575—Secure boot
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/602—Providing cryptographic facilities or services
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/62—Protecting access to data via a platform, e.g. using keys or access control rules
- G06F21/6218—Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
- G06F21/6245—Protecting personal data, e.g. for financial or medical purposes
- G06F21/6254—Protecting personal data, e.g. for financial or medical purposes by anonymising data, e.g. decorrelating personal data from the owner's identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/088—Non-supervised learning, e.g. competitive learning
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W12/00—Security arrangements; Authentication; Protecting privacy or anonymity
- H04W12/02—Protecting privacy or anonymity, e.g. protecting personally identifiable information [PII]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L2209/00—Additional information or applications relating to cryptographic mechanisms or cryptographic arrangements for secret or secure communication H04L9/00
- H04L2209/42—Anonymization, e.g. involving pseudonyms
Definitions
- the invention generally relates to the anonymization of sensitive data intended to be shared with third parties, for example, for research, analysis or exploitation purposes. More particularly, the invention relates to a method for evaluating the risk of re-identification of anonymized data.
- data is a source of performance for organizations and constitutes an important asset for them.
- Data provides crucial and valuable information for the production of quality goods and services, as well as for decision-making, and gives organizations a competitive advantage that allows them to survive and stand out from the competition.
- the sharing of data, for example in the form of open data, is today perceived as offering many opportunities, in particular for the extension of human knowledge, innovation and the creation of new products and services.
- the data may contain personal data, which is subject to regulations relating to the protection of privacy.
- in France, the use, storage and sharing of personal data are governed by the European GDPR ("General Data Protection Regulation") and by the French law known as the "Loi Informatique et Libertés" (Data Protection Act).
- Certain data, such as those relating to the state of health, private and family life, assets and others, are particularly sensitive and must be subject to special precautions.
- Data anonymization can be defined as a process that removes the association between the identifying dataset and the data subject.
- the process of anonymization aims to prevent the singling out of an individual within a dataset, the linking of two records within the same dataset or between two distinct datasets when one of the records corresponds to individual-specific data, and the inference of information from the dataset.
- the data is presented in a form that should not allow individuals to be identified, even when combined with other data.
- the anonymization method called "k-anonymization” is one of the most widely used methods. This method seeks to make each record of a data set indistinguishable from at least k-1 other records of this data set.
- the so-called "L-diversity" anonymization method is an extension of the "k-anonymization" method which provides better data protection by requiring, in each group of k records, called a "k-group", the presence of at least L distinct values of the sensitive attribute.
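The two properties above can be sketched with a minimal, illustrative Python check (the helper names and toy records are assumptions for illustration, not part of the patent text):

```python
from collections import Counter

def k_anonymity(records, quasi_ids):
    """Smallest equivalence-class size over the quasi-identifier columns."""
    groups = Counter(tuple(r[c] for c in quasi_ids) for r in records)
    return min(groups.values())

def l_diversity(records, quasi_ids, sensitive):
    """Smallest number of distinct sensitive values within any k-group."""
    classes = {}
    for r in records:
        key = tuple(r[c] for c in quasi_ids)
        classes.setdefault(key, set()).add(r[sensitive])
    return min(len(v) for v in classes.values())

rows = [
    {"zip": "750*", "age": "30-39", "diagnosis": "flu"},
    {"zip": "750*", "age": "30-39", "diagnosis": "asthma"},
    {"zip": "920*", "age": "40-49", "diagnosis": "flu"},
    {"zip": "920*", "age": "40-49", "diagnosis": "flu"},
]
print(k_anonymity(rows, ["zip", "age"]))               # 2 -> dataset is 2-anonymous
print(l_diversity(rows, ["zip", "age"], "diagnosis"))  # 1 -> second k-group lacks diversity
```

A dataset can thus satisfy k-anonymity while failing L-diversity: the second k-group above contains only one sensitive value, so an attacker who locates an individual in that group learns the diagnosis.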
- the main known anonymization algorithms modify data by deleting, generalizing or replacing personal information in individual records.
- An alteration of the informative content of the data may be the consequence of excessive anonymization.
- it is important that anonymized data remains quality data that retains a maximum of informative content. It is on this condition that anonymized data remain useful for the extraction of knowledge through analysis and reconciliation with other data.
- the degree of reliability of the anonymization algorithm is directly related to the risk of re-identification of anonymized data.
- This risk includes the risk of individualization, that is to say, the possibility of isolating an individual, the risk of correlation, that is to say, the possibility of linking distinct sets of data concerning the same individual, and the risk of inference, that is, the possibility of inferring information about an individual.
- Different methods for evaluating the risk of re-identification of a set of data having undergone anonymization processing, also referred to as "metrics" below, have been proposed; they provide quantitative evaluations of this risk.
- Probabilistic matching makes it possible to establish probabilities of links between records. Two records are considered linked when the probability of a link between them exceeds a certain threshold. Probabilistic matching is described by Fellegi I.P. et al., Jaro M.A., and Winkler W.E. in their respective articles "A theory of record linkage", Journal of the American Statistical Association 64, 1969, pp. 1183-1210, "Advances in record-linkage methodology as applied to matching the 1985 Census of Tampa, Florida", Journal of the American Statistical Association 84, 1989, pp. 414-420, and "Advanced methods for record linkage", Proceedings of the American Statistical Association Section on Survey Research Methods, 1995, pp. 467-472. Distance-based matching is described by Pagliuca D. et al.
- the objective of the present invention is to provide a new method for evaluating the risk of re-identification of anonymized data during a match-seeking attack comprising a deterministic search based on external sources of information and a distance-based search.
- the invention relates to a computer-implemented data processing method for evaluating a risk of re-identification of anonymized data, the method providing a protection rate representative of the risk of re-identification in the case of a match-seeking attack comprising a deterministic search based on at least one external source of information and a distance-based match search, the method comprising the steps of E) grouping an original data set comprising a plurality of original individuals and an anonymized data set comprising a plurality of anonymous individuals, the anonymous individuals being produced by anonymizing the original individuals; F) identifying in said original data set at-risk original individuals as being original individuals having at least one remarkable, or unique, value in at least one considered variable, or at least one combination of remarkable, or unique, values in a set of considered variables, in a deterministic matching search, and to which only one respective approaching anonymous individual can be associated by the deterministic matching search; and G) evaluating a re-identification failure rate for the original and anonymized datasets.
- an anonymous individual is considered to be an approaching anonymous individual for a considered at-risk original individual when 1) the anonymous individual has the same modality as the at-risk original individual for a variable considered in the match search, in the case where the variable is a qualitative variable, or when 2) the anonymous individual has, for the considered variable, a value equal, to within a tolerance interval, to the value of the same variable of the at-risk original individual, in the case where the variable considered in the deterministic match search is a continuous variable.
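This matching rule can be sketched as a simple predicate (an illustrative sketch; the function name is an assumption, and the default tolerance of 2.5% of the variable's variance is taken from a later passage of the description):

```python
def matches(anon_value, orig_value, *, qualitative, variance=None, tol=0.025):
    """Does an anonymous individual's value 'approach' an original one?"""
    if qualitative:
        # Case 1): qualitative variable -> same modality required.
        return anon_value == orig_value
    # Case 2): continuous variable -> equal to within a tolerance interval,
    # here +/- 2.5% of the variable's variance (illustrative choice).
    return abs(anon_value - orig_value) <= tol * variance

print(matches("engineer", "engineer", qualitative=True))       # True
print(matches(50.4, 50.0, qualitative=False, variance=100.0))  # True  (0.4 <= 2.5)
print(matches(53.0, 50.0, qualitative=False, variance=100.0))  # False (3.0 >  2.5)
```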
- step G) comprises the sub-steps of a) linking the set of original data to the set of anonymized data; b) transforming the original individuals and the anonymous individuals into Euclidean space, the original and anonymous individuals being represented by coordinates in Euclidean space; c) identifying for each original individual one or more closest anonymous individuals on the basis of a distance, by the so-called "k-NN" method; and d) calculating the re-identification failure rate as the percentage of cases where a closest anonymous individual identified in sub-step c) for a considered original individual is not the valid anonymous individual corresponding to this original individual.
- the aforementioned distance is a Euclidean distance.
- the transformation of sub-step b) is carried out by a factorial method and/or using an artificial neural network called an “auto-encoder”.
- the factorial method used for the transformation of sub-step b) is a method called "Principal Component Analysis" when the individuals include variables of continuous type, a method called "Multiple Correspondence Analysis" when the individuals include variables of qualitative type, or a method called "Factor Analysis of Mixed Data" when the individuals include mixed continuous/qualitative variables.
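Sub-steps b) to d) can be sketched for the continuous case with a NumPy-only PCA projection followed by a nearest-neighbor search. This is an illustrative sketch on toy data: MCA or FAMD would replace PCA for qualitative or mixed variables, and fitting the axes jointly on both sets is one possible choice the text does not fix.

```python
import numpy as np

def pca_project(X, n_axes=2):
    # Identify the significant axes of variance and project onto them
    # (Principal Component Analysis, for continuous variables).
    Xc = X - X.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    axes = eigvecs[:, np.argsort(eigvals)[::-1][:n_axes]]
    return Xc @ axes

rng = np.random.default_rng(0)
orig = rng.normal(size=(50, 5))                        # original individuals IO
anon = orig + rng.normal(scale=0.3, size=orig.shape)   # toy "anonymized" IA

# Sub-step b): represent IO and IA by coordinates in Euclidean space.
coords = pca_project(np.vstack([orig, anon]))
po, pa = coords[:50], coords[50:]

# Sub-step c): for each IO, find the closest IA by Euclidean distance (1-NN).
dists = np.linalg.norm(po[:, None, :] - pa[None, :, :], axis=2)
nearest = dists.argmin(axis=1)

# Sub-step d): failure rate = share of IO whose nearest IA is not its own pair.
txP1 = np.mean(nearest != np.arange(len(orig)))
print(f"re-identification failure rate txP1 = {txP1:.0%}")
```

The stronger the anonymization noise, the more often the nearest anonymous individual is not the true pair, and the higher txP1 becomes.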
- the invention also relates to a data anonymization computer system comprising a data storage device storing program instructions for implementing the method as described briefly above.
- the invention also relates to a computer program product comprising a medium in which are recorded program instructions readable by a processor for implementing the method as described briefly above.
- Fig.1 is a flowchart showing the major steps included in a particular embodiment of the method according to the invention.
- Fig.2 represents an illustrative diagram of a method used in the particular embodiment of the method of the invention of Fig.1 to evaluate a re-identification failure rate of an attacker performing a distance-based match search.
- FIG.3 shows an example of a general architecture of a data anonymization computer system in which the method according to the invention is implemented.
- Assessing the risk of re-identification requires comparing a set of original data made up of so-called original individuals with a set of anonymized data made up of so-called anonymous individuals.
- Individuals are typically data records.
- Each anonymized individual in the anonymized dataset represents an anonymized version of a corresponding original individual.
- a pair formed by an original individual and a corresponding anonymous individual is referred to as an “original/anonymous pair”.
- Re-identification risk is the risk that an attacker will successfully link an original individual to their anonymized record, i.e. the corresponding anonymous individual, thereby forming a valid original/anonymous pair.
- the method according to the invention for the evaluation of the risk of re-identification of data provides a metric, based on an individual-centric approach, which makes it possible to quantify the risk of re-identification of personal data during a match search comprising a deterministic search based on external sources of information and a search based on distance.
- a particular embodiment of the method of the invention, designated MR3, is now described; it has an interesting applicability in the context of an attack combining a deterministic correspondence search based on one or more external sources of information and a distance-based match search.
- MR3 essentially comprises ten steps S3-1 to S3-10.
- the first step S3-1 performs data join processing and combines a set of original data EDO comprising a plurality of original individuals IO with a set of anonymized data EDA comprising a plurality of anonymized individuals IA.
- EDA anonymized data is that provided by an anonymization process that has processed the original EDO data and corresponds to it.
- the second step S3-2 is a step of identifying individuals of origin at risk, hereinafter designated IOr, in the EDO set considered which comprises M individuals of origin IO.
- the original individuals IO having at least one remarkable or unique value in at least one considered variable, or at least one combination of remarkable or unique values in a set of considered variables, are sought in the deterministic matching search. Those original individuals IO having such a remarkable or unique value or combination of values are identified as the original individuals IO exposed to a risk of re-identification. It is considered here that R at-risk original individuals IOr are identified among the M original individuals IO considered.
- the third step S3-3 is a step of identifying anonymous individuals close to the original individuals at risk IOr identified in step S3-2, hereinafter designated IAP.
- approaching anonymous individuals IAP are sought for each of the R at-risk original individuals IOr.
- for qualitative variables, the anonymous individuals IA retained as approaching anonymous individuals IAP are those having the same modalities as the considered at-risk original individual IOr.
- for continuous variables, the anonymous individuals IA retained as approaching anonymous individuals IAP are those whose variables have values equivalent to those of the variables of the original individuals, that is to say, equal to within a tolerance interval.
- the tolerance interval could be predefined at plus or minus (+/-) 2.5% for example of the variance of the variable considered.
- the fourth step S3-4 is a step of identifying, according to the results of step S3-3, the individuals potentially most exposed among the original individuals at risk IOr identified in step S3-2.
- in this step S3-4, only the at-risk original individuals IOr having a unique approaching anonymous individual IAP are retained as being potentially the most exposed to risks of re-identification.
- These selected at-risk original individuals are referred to below as IOrs. It is considered here that RS such original individuals have been identified.
- the unique approaching anonymous individuals corresponding to the RS original individuals IOrs are designated IAprs.
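Steps S3-2 to S3-4 can be sketched as follows (illustrative function names and toy qualitative data; the deterministic search is reduced here to exact modality matching):

```python
from collections import Counter

def at_risk_originals(originals, variables):
    # Step S3-2: keep originals whose combination of values over `variables`
    # is unique within the original set (remarkable/unique combination).
    keys = [tuple(ind[v] for v in variables) for ind in originals]
    counts = Counter(keys)
    return [ind for ind, key in zip(originals, keys) if counts[key] == 1]

def most_exposed(at_risk, anonymous, variables):
    # Steps S3-3/S3-4: keep only at-risk originals with exactly one
    # approaching anonymous individual (same modalities on `variables`).
    pairs = []
    for ind in at_risk:
        close = [a for a in anonymous if all(a[v] == ind[v] for v in variables)]
        if len(close) == 1:
            pairs.append((ind, close[0]))
    return pairs

EDO = [{"job": "mayor", "zip": "75001"},
       {"job": "clerk", "zip": "75001"},
       {"job": "clerk", "zip": "75001"}]
EDA = [{"job": "mayor", "zip": "75001"},
       {"job": "clerk", "zip": "750**"}]

IOr = at_risk_originals(EDO, ["job", "zip"])        # only the mayor is unique
print(len(most_exposed(IOr, EDA, ["job", "zip"])))  # 1 exposed original/anonymous pair
```

The mayor record is the only unique combination of job and zip in the original set, and exactly one anonymous record shares its modalities, so it is retained as an IOrs with its IAprs.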
- the fifth to eighth steps S3-5 to S3-8 implement a method, designated MR1, making it possible to evaluate, for the sets EDO and EDA, a re-identification failure rate txP1 of an attacker during a distance-based match-seeking attack.
- step S3-5 the original data set EDO comprising the original individuals IO is linked to the anonymized data set EDA comprising the anonymized individuals IA.
- Step S3-6 performs transformation processing of individuals IO and IA in Euclidean space.
- various transformation methods may be used.
- a factorial method or an artificial neural network called an "autoencoder" can be used to convert the individuals IO and IA into coordinates in a Euclidean space.
- PCA: Principal Component Analysis
- MCA (ACM in French): Multiple Correspondence Analysis
- a factorial method is used in step S3-6.
- significant axes of variance are identified in the data sets by multivariate data analysis. These significant axes of variance determine the axes of Euclidean space onto which individuals IO and IA are projected.
- the transformation of individuals IO and IA in Euclidean space makes it possible to calculate the mathematical distance between individuals, from their coordinates.
- the method of the invention provides for a privileged use of a Euclidean distance as the mathematical distance.
- the use of various other mathematical distances, such as a Manhattan distance, a Mahalanobis distance and the like, is included within the scope of the present invention.
- in step S3-7, the "k nearest neighbors" method, called "k-NN", is used to identify the anonymous individuals IA closest to the original individuals IO, with a mathematical distance such as a Euclidean distance.
- in step S3-8, based on the distance measurement results obtained in the previous step S3-7, the re-identification failure rate txP1 of an attacker employing a distance-based match search is calculated.
- the re-identification failure rate txP1 is represented by the percentage of cases where an original individual IO and the closest anonymous individual IA identified in step S3-7 do not form a valid original/anonymous pair.
- Fig.2 The processing performed by the fifth to eighth steps S3-5 to S3-8 described above is shown in Fig.2.
- the original individuals IO and the anonymous individuals IA are represented by black circles and white circles, respectively, in a Euclidean space with coordinate axes A1 and A2.
- in order to re-identify the valid original/anonymous pair (IOi, IAi), the attacker must perform a matching of individuals and uses for this a mathematical distance between them, such as a Euclidean distance.
- the attacker identifies the anonymous individual IAk as the closest anonymous individual to the original individual IOi, as shown schematically in Fig.2, and associates the anonymous individual IAk with the original individual IOi.
- Fig.2 thus shows the case of an attacker who failed to identify the valid original/anonymous pair (IOi, IAi) on the basis of distance.
- the re-identification failure rate txP1 is equal to 95%.
- the ninth step S3-9 is a step of evaluating the number m of successful re-identifications by the attacker on the original individuals IOrs, from the re-identification failure rate txP1 obtained in step S3-8 and the number RS of original individuals IOrs.
- the identified valid anonymous individuals IA are the unique approaching anonymous individuals IAprs (step S3-4) of the original individuals IOrs.
- the tenth step S3-10 calculates a protection rate, hereinafter referred to as txP3, for the considered original data set EDO.
- the protection rate txP3 therefore corresponds to the percentage of IO individuals that have not been re-identified by the attacker in the original EDO dataset.
- here, RS = 4 individuals, for example, are identified as fulfilling the above condition.
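Steps S3-9 and S3-10 reduce to simple arithmetic. The sketch below assumes, as one plausible reading the text does not spell out explicitly, that m = RS × (1 − txP1) and that the protection rate txP3 is the share of the M original individuals not re-identified:

```python
# Illustrative arithmetic for steps S3-9 and S3-10 (formulas assumed,
# not stated verbatim in the description).
M = 100      # original individuals IO in the set EDO (assumed example size)
RS = 4       # at-risk originals with a unique approaching anonymous individual
txP1 = 0.95  # distance-based re-identification failure rate (example value)

m = RS * (1 - txP1)   # expected successful re-identifications by the attacker
txP3 = (M - m) / M    # protection rate: share of IO not re-identified
print(f"m = {m:.1f}, txP3 = {txP3:.1%}")  # m = 0.2, txP3 = 99.8%
```

With a high failure rate txP1, even the most exposed individuals are rarely re-identified, and the protection rate txP3 stays close to 100%.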
- a general architecture of a data anonymization computer system SAD, in which the method according to the invention for evaluating the risk of re-identification is implemented, is shown by way of example in FIG.3.
- the SAD system is implemented here in a local computer system DSL and comprises two software modules MAD and MET.
- the MAD and MET software modules are hosted in data storage devices SD, such as memory and/or hard disk, of the local computer system DSL.
- the local computer system DSL also hosts an original database BDO in which original data DO is stored and an anonymized database BDA in which anonymized data DA is stored.
- the MAD software module implements a data anonymization process which processes the original data DO and outputs the anonymized data DA.
- the software module MET implements the method according to the invention for the evaluation of the risk of re-identification of the data.
- the software module MET receives as input original data DO and anonymized data DA and provides as output a protection rate TP against the risk of re-identification.
- the implementation of the method according to the invention is ensured by the execution of code instructions of the software module MET by a processor (not shown) of the local computer system DSL.
- the protection rate TP provided by the software module MET provides a measure of the performance of the data anonymization process implemented by the software module MAD.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Computer Security & Cryptography (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Computer Hardware Design (AREA)
- Bioethics (AREA)
- Medical Informatics (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- Mathematical Physics (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Molecular Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
- Storage Device Security (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR2010259A FR3114892A1 (en) | 2020-10-07 | 2020-10-07 | PROCEDURE FOR ASSESSING THE RISK OF RE-IDENTIFICATION OF ANONYMIZED DATA |
PCT/FR2021/000114 WO2022074302A1 (en) | 2020-10-07 | 2021-10-07 | Method for evaluating the risk of re-identification of anonymized data |
Publications (1)
Publication Number | Publication Date |
---|---|
EP4226268A1 true EP4226268A1 (en) | 2023-08-16 |
Family
ID=74553910
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP21810398.4A Withdrawn EP4226268A1 (en) | 2020-10-07 | 2021-10-07 | Method for evaluating the risk of re-identification of anonymized data |
EP21810059.2A Withdrawn EP4226267A1 (en) | 2020-10-07 | 2021-10-07 | Method for evaluating the risk of re-identification of anonymised data |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP21810059.2A Withdrawn EP4226267A1 (en) | 2020-10-07 | 2021-10-07 | Method for evaluating the risk of re-identification of anonymised data |
Country Status (5)
Country | Link |
---|---|
US (2) | US20230367901A1 (en) |
EP (2) | EP4226268A1 (en) |
CA (2) | CA3194570A1 (en) |
FR (1) | FR3114892A1 (en) |
WO (2) | WO2022074301A1 (en) |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR3048101A1 (en) * | 2016-02-22 | 2017-08-25 | Digital & Ethics | METHOD AND DEVICE FOR EVALUATING THE ROBUSTNESS OF AN ANONYMOUSING OF A SET OF DATA |
US11188678B2 (en) * | 2018-05-09 | 2021-11-30 | Fujitsu Limited | Detection and prevention of privacy violation due to database release |
-
2020
- 2020-10-07 FR FR2010259A patent/FR3114892A1/en active Pending
-
2021
- 2021-10-07 WO PCT/FR2021/000113 patent/WO2022074301A1/en unknown
- 2021-10-07 CA CA3194570A patent/CA3194570A1/en active Pending
- 2021-10-07 CA CA3194820A patent/CA3194820A1/en active Pending
- 2021-10-07 EP EP21810398.4A patent/EP4226268A1/en not_active Withdrawn
- 2021-10-07 US US18/030,558 patent/US20230367901A1/en active Pending
- 2021-10-07 EP EP21810059.2A patent/EP4226267A1/en not_active Withdrawn
- 2021-10-07 US US18/030,545 patent/US20240005035A1/en active Pending
- 2021-10-07 WO PCT/FR2021/000114 patent/WO2022074302A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CA3194820A1 (en) | 2022-04-14 |
CA3194570A1 (en) | 2022-04-14 |
WO2022074301A1 (en) | 2022-04-14 |
WO2022074302A1 (en) | 2022-04-14 |
US20240005035A1 (en) | 2024-01-04 |
US20230367901A1 (en) | 2023-11-16 |
EP4226267A1 (en) | 2023-08-16 |
FR3114892A1 (en) | 2022-04-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10019653B2 (en) | Method and system for predicting personality traits, capabilities and suggested interactions from images of a person | |
US8301498B1 (en) | Video content analysis for automatic demographics recognition of users and videos | |
WO2017202006A1 (en) | Data processing method and device, and computer storage medium | |
US20200285960A1 (en) | Effective user modeling with time-aware based binary hashing | |
Csányi et al. | Challenges and open problems of legal document anonymization | |
US20080080745A1 (en) | Computer-Implemented Method for Performing Similarity Searches | |
Jusas et al. | Methods and tools of digital triage in forensic context: Survey and future directions | |
CN111859451A (en) | Processing system of multi-source multi-modal data and method applying same | |
US20090132264A1 (en) | Media asset evaluation based on social relationships | |
Osia et al. | Privacy-preserving deep inference for rich user data on the cloud | |
Grubl et al. | Applying artificial intelligence for age estimation in digital forensic investigations | |
Papapetrou et al. | Social context discovery from temporal app use patterns | |
EP4226268A1 (en) | Method for evaluating the risk of re-identification of anonymized data | |
EP3752948A1 (en) | Automatic processing method for anonymizing a digital data set | |
US11314897B2 (en) | Data identification method, apparatus, device, and readable medium | |
Erfanian et al. | Chameleon: Foundation Models for Fairness-aware Multi-modal Data Augmentation to Enhance Coverage of Minorities | |
Marturana et al. | A machine learning‐based approach to digital triage | |
Pushpalatha et al. | An information theoretic similarity measure for unified multimedia document retrieval | |
Erol et al. | Detecting personal health data disclosures in turkish social data | |
US20230379178A1 (en) | System for dynamic data aggregation and prediction for assessment of electronic non-fungible resources | |
Ganga et al. | Sentimental Analysis on Cosmetics using Machine Learning | |
Jeǵou | Efficient similarity search | |
Dantcheva | Computer vision for deciphering and generating faces | |
Mewada et al. | SUH-AIFRD: A self-training-based hybrid approach for individual fake reviewer detection | |
US20200175410A1 (en) | Computer architecture for generating hierarchical clusters in a correlithm object processing system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: UNKNOWN |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20230505 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
DAV | Request for validation of the european patent (deleted) | ||
DAX | Request for extension of the european patent (deleted) | ||
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20231128 |