EP4226267A1 - Method for evaluating the risk of re-identification of anonymised data - Google Patents
Method for evaluating the risk of re-identification of anonymised data
- Publication number
- EP4226267A1 (application EP21810059.2A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- individuals
- original
- anonymous
- data
- individual
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
- method: title, claims, abstract, description (79)
- transforming: claims, abstract, description (3)
- process: claims, description (10)
- analysis method: claims, description (8)
- principal component analysis: claims, description (7)
- evaluation: claims, description (5)
- transformation: claims, description (5)
- artificial neural network: claims, description (3)
- data storage: claims, description (3)
- processing method: claims, description (2)
- algorithm: description (6)
- processing: description (5)
- approach: description (4)
- engineering process: description (4)
- regulation: description (3)
- factor analysis: description (3)
- benefit: description (2)
- joining: description (2)
- research: description (2)
- storage: description (2)
- alteration: description (1)
- calculation method: description (1)
- competitive effect: description (1)
- computer program: description (1)
- development: description (1)
- diagram: description (1)
- environmental effect: description (1)
- experiment: description (1)
- extraction: description (1)
- health: description (1)
- maintenance of location: description (1)
- manufacturing process: description (1)
- modification: description (2)
- multivariate analysis: description (1)
- quantitative evaluation: description (1)
- transformation method: description (1)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/62—Protecting access to data via a platform, e.g. using keys or access control rules
- G06F21/6218—Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
- G06F21/6245—Protecting personal data, e.g. for financial or medical purposes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/52—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow
- G06F21/54—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow by adding security routines or objects to programs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/57—Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
- G06F21/575—Secure boot
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/602—Providing cryptographic facilities or services
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/62—Protecting access to data via a platform, e.g. using keys or access control rules
- G06F21/6218—Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
- G06F21/6245—Protecting personal data, e.g. for financial or medical purposes
- G06F21/6254—Protecting personal data, e.g. for financial or medical purposes by anonymising data, e.g. decorrelating personal data from the owner's identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/088—Non-supervised learning, e.g. competitive learning
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W12/00—Security arrangements; Authentication; Protecting privacy or anonymity
- H04W12/02—Protecting privacy or anonymity, e.g. protecting personally identifiable information [PII]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L2209/00—Additional information or applications relating to cryptographic mechanisms or cryptographic arrangements for secret or secure communication H04L9/00
- H04L2209/42—Anonymization, e.g. involving pseudonyms
Definitions
- the invention generally relates to the anonymization of sensitive data intended to be shared with third parties, for example, for research, analysis or exploitation purposes. More particularly, the invention relates to a method for evaluating the risk of re-identification of anonymized data.
- data is a source of performance for organizations and constitutes an important asset for them.
- Data provides crucial and valuable information for the production of quality goods and services, as well as for decision-making. It offers a competitive advantage that allows organizations to survive and stand out from the competition.
- the sharing of data, for example in the form of "open data", is today perceived as offering many opportunities, in particular for the extension of human knowledge, innovation and the creation of new products and services.
- the data may contain personal data, which is subject to regulations relating to the protection of privacy.
- the use, storage and sharing of personal data are subject in France to the European GDPR ("General Data Protection Regulation") and to the French "Informatique et Libertés" (Data Protection) Act.
- Certain data, such as those relating to the state of health, private and family life, assets and others, are particularly sensitive and must be subject to special precautions.
- Data anonymization can be defined as a process that removes the association between an identifying dataset and the data subject.
- the anonymization process aims to prevent the singling out of an individual within a dataset, the linking of two records within the same dataset or across two distinct datasets when one of the records corresponds to individual-specific data, and the inference of information from the dataset.
- the data is presented in a form that should not allow individuals to be identified, even by combination with other data.
- the anonymization method called "k-anonymization” is one of the most widely used methods. This method seeks to make each record of a data set indistinguishable from at least k-1 other records of this data set.
- the so-called "L-diversity" anonymization method is an extension of the "k-anonymization" method which provides better data protection by requiring, in each group of k records (called a "k-group"), the presence of at least L distinct sensitive attribute values.
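As an illustration of these two properties, the following sketch checks k-anonymity and L-diversity on a toy dataset; the record layout, field names and helper functions are illustrative assumptions, not material from the patent.

```python
from collections import defaultdict

def k_groups(records, quasi_ids):
    """Group records by their quasi-identifier values."""
    groups = defaultdict(list)
    for rec in records:
        groups[tuple(rec[q] for q in quasi_ids)].append(rec)
    return groups

def is_k_anonymous(records, quasi_ids, k):
    """True if every quasi-identifier combination occurs at least k times."""
    return all(len(g) >= k for g in k_groups(records, quasi_ids).values())

def is_l_diverse(records, quasi_ids, sensitive, l):
    """True if every k-group contains at least l distinct sensitive values."""
    return all(len({rec[sensitive] for rec in g}) >= l
               for g in k_groups(records, quasi_ids).values())

# Toy dataset with generalized quasi-identifiers (age range, truncated zip).
records = [
    {"age": "30-40", "zip": "750**", "disease": "flu"},
    {"age": "30-40", "zip": "750**", "disease": "asthma"},
    {"age": "30-40", "zip": "750**", "disease": "flu"},
    {"age": "40-50", "zip": "920**", "disease": "diabetes"},
    {"age": "40-50", "zip": "920**", "disease": "flu"},
]
print(is_k_anonymous(records, ["age", "zip"], 2))           # True
print(is_l_diverse(records, ["age", "zip"], "disease", 2))  # True
```

Each k-group here has at least 2 records and at least 2 distinct disease values, so the toy dataset is 2-anonymous and 2-diverse.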
- the main known anonymization algorithms modify data by deleting, generalizing or replacing personal information in individual records.
- An alteration of the informative content of the data may be the consequence of excessive anonymization.
- it is important that anonymized data remains quality data that retains a maximum of informative content. It is on this condition that anonymized data remain useful for the extraction of knowledge through analysis and reconciliation with other data.
- the degree of reliability of the anonymization algorithm is directly related to the risk of re-identification of anonymized data.
- This risk includes the risk of individualization, i.e. the possibility of isolating an individual, the risk of correlation, i.e. the possibility of linking distinct sets of data concerning the same individual, and the risk of inference, that is, the possibility of inferring information about an individual.
- Different methods for evaluating the risk of re-identification of a data set that has undergone anonymization processing, referred to below as "metrics", have been proposed; they provide quantitative evaluations of this risk.
- Probabilistic matching makes it possible to establish probabilities of links between records. Two records are considered linked when the probability of a link between them exceeds a certain threshold. Probabilistic matching is described by Fellegi I.P. et al., Jaro M.A., and Winkler W.E. in their respective articles "A theory of record linkage", Journal of the American Statistical Association 64, 1969, pp. 1183-1210, "Advances in record-linkage methodology as applied to matching the 1985 Census of Tampa, Florida", Journal of the American Statistical Association 84, 1989, pp. 414-420, and "Advanced methods for record linkage", Proceedings of the American Statistical Association Section on Survey Research Methods, 1995, pp. 467-472. Distance-based matching is described by Pagliuca D. et al.
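To make the distance-based attack model concrete, the following sketch simulates an attacker who links each original record to its nearest anonymized record by Euclidean distance. The data, the perturbation standing in for anonymization, and the function name are illustrative assumptions, not material from the patent.

```python
import numpy as np

def distance_attack(originals, anonymized):
    """For each original record (row), return the index of the nearest
    anonymized record by Euclidean distance: the attacker's guess."""
    # Pairwise distances, shape (n_originals, n_anonymized).
    diffs = originals[:, None, :] - anonymized[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    return dists.argmin(axis=1)

rng = np.random.default_rng(0)
originals = rng.normal(size=(5, 3))
# Simulate a weak anonymization as a small perturbation of each record.
anonymized = originals + rng.normal(scale=0.01, size=originals.shape)

guesses = distance_attack(originals, anonymized)
reid_rate = np.mean(guesses == np.arange(len(originals)))
print(f"re-identification rate: {reid_rate:.0%}")
```

With such a small perturbation, each original record's nearest anonymized record is its own anonymized version, so the attack re-identifies everyone; stronger anonymization pushes the rate down.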
- the aim of the present invention is to provide a new method for evaluating the risk of re-identification of anonymized data during a distance-based matching search attack.
- the invention relates to a computer-implemented data processing method for evaluating a risk of re-identification of anonymized data, the method providing a protection rate representative of the risk of re-identification in the case of a distance-based match-seeking attack, the method comprising the steps of: a) linking an original data set comprising a plurality of original individuals to an anonymized data set comprising a plurality of anonymous individuals, the anonymous individuals being produced by a process of anonymization of the original individuals; b) transforming the original individuals and the anonymous individuals into a Euclidean space, the original individuals and anonymous individuals being represented by coordinates in that space; c) identifying, for each original individual, one or more nearest anonymous individuals on the basis of a distance, by a so-called "k-NN" method; and d) calculating the protection rate as an average number of anonymous individuals closest to the original individual under consideration that are not the valid anonymous individual corresponding to that original individual, the closer anonymous individuals being those identified in step c) whose distance to the original individual under consideration is less than its distance to its valid anonymous individual.
- the aforementioned distance is a Euclidean distance.
- the transformation of step b) is carried out by a factorial method and/or using an artificial neural network known as an "auto-encoder".
- the factorial method used in step b) is a so-called “Principal Component Analysis” method when the individuals include variables of the continuous type, a so-called “Multiple Correspondence Analysis” method when the individuals include qualitative type variables or a so-called “Mixed Data Factor Analysis” method when individuals include mixed “continuous/qualitative” type variables.
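As a minimal sketch of the continuous-variable case only, the code below projects individuals onto their principal axes via an SVD-based Principal Component Analysis; the MCA and FAMD variants for qualitative and mixed variables are not shown, and the function name is an illustrative assumption. In the method itself, original and anonymous individuals would be projected onto common axes, for example fitted on the joined data of step a).

```python
import numpy as np

def pca_transform(X, n_components):
    """Project the rows of X onto the first n_components principal axes.
    Minimal PCA via SVD of the centered data matrix."""
    X_centered = X - X.mean(axis=0)
    # Rows of Vt are the principal axes, ordered by explained variance.
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    return X_centered @ Vt[:n_components].T

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))       # 100 individuals, 5 continuous variables
coords = pca_transform(X, 2)        # each individual becomes a 2-D point
print(coords.shape)
```

The resulting coordinates place every individual in a Euclidean space, which is what enables the distance calculations of the later steps.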
- the invention also relates to a data anonymization computer system comprising a data storage device storing program instructions for implementing the method as described briefly above.
- the invention also relates to a computer program product comprising a medium in which are recorded program instructions readable by a processor for implementing the method as described briefly above.
- Fig.1 is a flowchart showing a particular embodiment of the method according to the invention.
- Fig.2 represents an illustrative diagram relating to the embodiment of the method according to the invention of Fig.1.
- FIG.3 shows an example of a general architecture of a data anonymization computer system in which the method according to the invention is implemented.
- Assessing the risk of re-identification requires comparing a set of original data made up of so-called original individuals with a set of anonymized data made up of so-called anonymous individuals.
- Individuals are typically data records.
- Each anonymized individual in the anonymized dataset represents an anonymized version of a corresponding original individual.
- a pair formed by an original individual and a corresponding anonymous individual is referred to as an “original/anonymous pair”.
- Re-identification risk is the risk that an attacker will successfully link an original individual to their anonymized record, i.e. the corresponding anonymous individual, thus forming a valid original/anonymous pair.
- the method according to the invention for the evaluation of the risk of re-identification of data provides a metric, based on an individual-centric approach, which makes it possible to quantify the risk of re-identification of personal data during a distance-based match search.
- a particular embodiment of the method of the invention, designated MR2, is now described; it is particularly applicable in the context of a distance-based match-seeking attack.
- This particular embodiment MR2 is built with a decidedly different approach compared to the known methods of the state of the art, by establishing a protection rate which is based on the evaluation of a density of presence of anonymous individuals in the immediate environment of the original individuals.
- this embodiment MR2 comprises five steps S2-1 to S2-5.
- the first step S2-1 is a data joining step.
- a set of original data EDO comprising a plurality of original individuals IO is linked to a set of anonymized data EDA comprising a plurality of anonymous individuals IA.
- the anonymized data EDA is the data produced by an anonymization process applied to the original data EDO, to which it corresponds.
- the second step S2-2 transforms the individuals IO and IA into a Euclidean space.
- various transformation methods may be used.
- a factorial method or an artificial neural network known as an "auto-encoder" can be used to convert the individuals IO and IA into coordinates in a Euclidean space.
- PCA: Principal Component Analysis
- MCA: Multiple Correspondence Analysis ("ACM" in French)
- a factorial method is used in step S2-2.
- significant axes of variance are identified in the data sets by multivariate data analysis. These significant axes of variance determine the axes of Euclidean space onto which individuals IO and IA are projected.
- the transformation of the individuals IO and IA in Euclidean space makes it possible to calculate the mathematical distance between the individuals, from their coordinates.
- the method of the invention provides for a privileged use of a Euclidean distance as a mathematical distance.
- the use of various other mathematical distances, such as a Manhattan distance, a Mahalanobis distance and the like, is included within the scope of the present invention.
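The three distances mentioned can be sketched as follows; note that with the identity covariance matrix, the Mahalanobis distance reduces to the Euclidean distance.

```python
import numpy as np

def euclidean(a, b):
    """Straight-line distance between two coordinate vectors."""
    return np.linalg.norm(a - b)

def manhattan(a, b):
    """Sum of absolute coordinate differences (L1 distance)."""
    return np.abs(a - b).sum()

def mahalanobis(a, b, cov_inv):
    """Distance scaled by the inverse covariance matrix of the data."""
    d = a - b
    return np.sqrt(d @ cov_inv @ d)

a = np.array([1.0, 2.0])
b = np.array([4.0, 6.0])
print(euclidean(a, b))                 # 5.0 (a 3-4-5 triangle)
print(manhattan(a, b))                 # 7.0
print(mahalanobis(a, b, np.eye(2)))    # identity covariance: same as Euclidean
```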
- the third step S2-3 is a step of calculating a mathematical distance, such as a Euclidean distance, between each original individual IOi and its valid anonymous individual IAi.
- the fourth step S2-4 is a step of counting, for each original individual IOi, the number Nj of invalid anonymous individuals IAj separated from the original individual IOi by a mathematical distance dij that is less than the distance calculated in step S2-3 between IOi and its valid anonymous individual IAi.
- the "k-nearest neighbours" method, known as "k-NN", is used here to identify, for each original individual, one or more closest anonymous individuals on the basis of a mathematical distance, such as a Euclidean distance.
- the original individual IOi is all the better protected against re-identification as the number Nj is high. Indeed, since the Nj invalid anonymous individuals IAj are closer, in terms of mathematical distance, to the original individual IOi than the valid anonymous individual IAi, a distance-based attack will preferentially select one of the Nj invalid anonymous individuals IAj as the corresponding anonymous individual.
- the number Nj represents the number of possible matches for the attacker before selecting the valid anonymous individual IAi.
- the fifth step S2-5 is a step for calculating the data protection rate against re-identification, designated here txP2, for the data set considered.
- the protection rate txP2 is calculated here as being a median number Nm of invalid anonymous individuals IAj present around an original individual in the considered data set.
- the denser the environment of the original individual IOi with invalid anonymous individuals IAj, the more difficult this individual IOi will be to re-identify.
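Putting steps S2-2 to S2-5 together, here is a sketch of an MR2-style protection rate. The claim speaks of an average while this embodiment uses the median number Nm; the sketch follows the embodiment. Array shapes, the perturbation standing in for anonymization, and the function name protection_rate are illustrative assumptions.

```python
import numpy as np

def protection_rate(originals, anonymized):
    """MR2-style protection rate (sketch): for each original individual i,
    count the invalid anonymous individuals closer to i than its valid
    anonymous counterpart (same index), then take the median count Nm."""
    diffs = originals[:, None, :] - anonymized[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)     # d[i, j]
    valid = np.diag(dists)                    # d[i, i]: distance to valid match
    closer = dists < valid[:, None]           # strictly closer invalid matches
    counts = closer.sum(axis=1)               # Nj for each original individual
    return float(np.median(counts)), counts

rng = np.random.default_rng(2)
originals = rng.normal(size=(50, 3))
# Heavy perturbation: anonymous individuals stray far from their originals,
# so many invalid individuals end up closer than the valid one.
anonymized = originals + rng.normal(scale=2.0, size=originals.shape)

txP2, counts = protection_rate(originals, anonymized)
print(f"protection rate txP2 (median N): {txP2}")
```

With no perturbation at all, every valid match is at distance zero, no invalid individual is closer, and the rate is 0: the data is fully exposed to a distance-based attack.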
- a general architecture of a data anonymization computer system SAD, in which the method according to the invention for evaluating the risk of re-identification is implemented, is shown by way of example in FIG.3.
- the SAD system is implemented here in a local computer system DSL and comprises two software modules MAD and MET.
- the MAD and MET software modules are hosted in data storage devices SD, such as memory and/or hard disk, of the local computer system DSL.
- the local computer system DSL also hosts an original database BDO in which original data DO is stored and an anonymized database BDA in which anonymized data DA is stored.
- the MAD software module implements a data anonymization process which processes the original data DO and outputs the anonymized data DA.
- the software module MET implements the method according to the invention for the evaluation of the risk of re-identification of the data.
- the software module MET receives as input original data DO and anonymized data DA and provides as output a protection rate TP against the risk of re-identification.
- the implementation of the method according to the invention is ensured by the execution of code instructions of the software module MET by a processor (not shown) of the local computer system DSL.
- the protection rate TP provided by the software module MET provides a measure of the performance of the data anonymization process implemented by the software module MAD.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Computer Security & Cryptography (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- Bioethics (AREA)
- Medical Informatics (AREA)
- Databases & Information Systems (AREA)
- Data Mining & Analysis (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
- Storage Device Security (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Description
Claims
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR2010259A FR3114892A1 (en) | 2020-10-07 | 2020-10-07 | PROCEDURE FOR ASSESSING THE RISK OF RE-IDENTIFICATION OF ANONYMIZED DATA |
PCT/FR2021/000113 WO2022074301A1 (en) | 2020-10-07 | 2021-10-07 | Method for evaluating the risk of re-identification of anonymised data |
Publications (1)
Publication Number | Publication Date |
---|---|
EP4226267A1 true EP4226267A1 (en) | 2023-08-16 |
Family
ID=74553910
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP21810059.2A Withdrawn EP4226267A1 (en) | 2020-10-07 | 2021-10-07 | Method for evaluating the risk of re-identification of anonymised data |
EP21810398.4A Withdrawn EP4226268A1 (en) | 2020-10-07 | 2021-10-07 | Method for evaluating the risk of re-identification of anonymized data |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP21810398.4A Withdrawn EP4226268A1 (en) | 2020-10-07 | 2021-10-07 | Method for evaluating the risk of re-identification of anonymized data |
Country Status (5)
Country | Link |
---|---|
US (2) | US20230367901A1 (en) |
EP (2) | EP4226267A1 (en) |
CA (2) | CA3194570A1 (en) |
FR (1) | FR3114892A1 (en) |
WO (2) | WO2022074302A1 (en) |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR3048101A1 (en) * | 2016-02-22 | 2017-08-25 | Digital & Ethics | METHOD AND DEVICE FOR EVALUATING THE ROBUSTNESS OF AN ANONYMOUSING OF A SET OF DATA |
US11188678B2 (en) * | 2018-05-09 | 2021-11-30 | Fujitsu Limited | Detection and prevention of privacy violation due to database release |
-
2020
- 2020-10-07 FR FR2010259A patent/FR3114892A1/en active Pending
-
2021
- 2021-10-07 WO PCT/FR2021/000114 patent/WO2022074302A1/en active Application Filing
- 2021-10-07 CA CA3194570A patent/CA3194570A1/en active Pending
- 2021-10-07 US US18/030,558 patent/US20230367901A1/en active Pending
- 2021-10-07 EP EP21810059.2A patent/EP4226267A1/en not_active Withdrawn
- 2021-10-07 CA CA3194820A patent/CA3194820A1/en active Pending
- 2021-10-07 WO PCT/FR2021/000113 patent/WO2022074301A1/en unknown
- 2021-10-07 US US18/030,545 patent/US20240005035A1/en active Pending
- 2021-10-07 EP EP21810398.4A patent/EP4226268A1/en not_active Withdrawn
Also Published As
Publication number | Publication date |
---|---|
US20240005035A1 (en) | 2024-01-04 |
FR3114892A1 (en) | 2022-04-08 |
WO2022074302A1 (en) | 2022-04-14 |
WO2022074301A1 (en) | 2022-04-14 |
US20230367901A1 (en) | 2023-11-16 |
CA3194820A1 (en) | 2022-04-14 |
CA3194570A1 (en) | 2022-04-14 |
EP4226268A1 (en) | 2023-08-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP2017091515A (en) | Computer-implemented system and method for automatically identifying attributes for anonymization | |
FR2984559A1 (en) | IDENTIFICATION OF INDIVIDUALS BY SECURE CALCULATION | |
EP2924609B1 (en) | Method for enrolment of data in a database for the protection of said data | |
CN111859451B (en) | Multi-source multi-mode data processing system and method for applying same | |
CA2743954C (en) | Identification or authorisation method, and associated system and secure module | |
WO2013189881A1 (en) | Secure method of processing data | |
Osia et al. | Privacy-preserving deep inference for rich user data on the cloud | |
WO2009067159A2 (en) | Media asset evaluation based on social relationships | |
WO2018138423A1 (en) | Automatic detection of frauds in a stream of payment transactions by neural networks integrating contextual information | |
EP4226267A1 (en) | Method for evaluating the risk of re-identification of anonymised data | |
EP3752948A1 (en) | Automatic processing method for anonymizing a digital data set | |
US11314897B2 (en) | Data identification method, apparatus, device, and readable medium | |
FR3048101A1 (en) | METHOD AND DEVICE FOR EVALUATING THE ROBUSTNESS OF AN ANONYMOUSING OF A SET OF DATA | |
WO2020165519A1 (en) | Method for constructing behavioural software signatures | |
CH717260A2 (en) | Computer-implemented method for analogue document retrieval. | |
Boudewijn et al. | Privacy Measurements in Tabular Synthetic Data: State of the Art and Future Research Directions | |
Marturana et al. | A machine learning‐based approach to digital triage | |
US11436515B2 (en) | Computer architecture for generating hierarchical clusters in a correlithm object processing system | |
WO2022008845A1 (en) | Method and system for anonymisation of time series | |
WO2021009364A1 (en) | Method for identifying outlier data in a set of input data acquired by at least one sensor | |
FR3134674A1 (en) | Method and device for communicating data representative of graphic objects generated from data representative of a set of electronic messages | |
FR3080930A1 (en) | COMPUTER BASED DATA SYSTEM | |
FR3067899A1 (en) | METHOD AND MODULE FOR MANAGING SECURE DATA TRANSMISSIONS AND CORRESPONDING PROGRAM. | |
EP2477148A1 (en) | Method and system for private data protection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: UNKNOWN |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
| 17P | Request for examination filed | Effective date: 20230505 |
| AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| DAV | Request for validation of the european patent (deleted) | |
| DAX | Request for extension of the european patent (deleted) | |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
| 18D | Application deemed to be withdrawn | Effective date: 20231128 |