CN113938265A - Information de-identification method and device and electronic equipment - Google Patents
- Publication number
- CN113938265A (application number CN202010677009.1A)
- Authority
- CN
- China
- Prior art keywords: attack, defense, determining, model, identification
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L9/00—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
- H04L9/002—Countermeasures against attacks on cryptographic mechanisms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/18—Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
Abstract
The application provides an information de-identification method, an information de-identification apparatus, and an electronic device, relating to the technical field of information security and used to achieve information de-identification. In the embodiment of the application, the attack success probability of the attack side and the defense success probability of the defense side are determined according to the first model parameter and the second model parameter, and the attack capability of the attack side and the defense capability of the defense side are determined from them. Target model parameters are then determined from the selectable model parameters that make the defense capability greater than the attack capability; a de-identification model is established according to the target model parameters, and the target data are de-identified. This improves the rationality of the model parameter setting of the de-identification model, so that the model can not only complete the de-identification of the target data and reduce the risk of re-identification, but also ensure the usability of the target data.
Description
[ technical field ]
The present application relates to the field of information security technologies, and in particular, to an information de-identification method and apparatus, and an electronic device.
[ background of the invention ]
With the rapid development of network information and computer technology, information in society and in networks is moving continuously toward information sharing and mutual benefit from shared resources. Meanwhile, in order to reduce the risk of personal information leakage during information sharing, personal information needs to be de-identified before it is published. Information de-identification refers to the process of removing the association between a set of identifiable information and the subjects to which that information corresponds, so as to prevent leakage of personal information. When information is published, the commonly used de-identification models mainly include the k-anonymity model, the l-diversity model, and the t-closeness model. The main idea of these de-identification models is as follows: each data value under a data attribute is required to belong to an equivalence group of at least k (or l or t) records, so that an attacker cannot relate the data to its corresponding subject.
The existing method for establishing a de-identification model has the following problem: when the model is established, the value of k (or l or t) is chosen according to the subjective judgment of the person building the model. An unreasonable k (or l or t) value means that the resulting de-identification model either fails to hide the information, or hides it at the cost of severe information distortion, greatly reducing the usability of the information.
[ summary of the invention ]
The embodiments of the application provide an information de-identification method, an information de-identification apparatus, and an electronic device, so that the value of the model parameter k (or l or t) in a de-identification model is determined reasonably, allowing the de-identification model to complete information hiding while guaranteeing the usability of the information to the greatest extent.
In a first aspect, an embodiment of the present application provides an information de-identification method, including: determining the attack success probability of the attack side according to the first model parameter, and determining the defense success probability of the defense side according to the second model parameter; determining the attack capability of an attack side and the defense capability of a defense side according to the attack success probability and the defense success probability; determining optional model parameters which enable the defense capacity to be larger than the attack capacity, and determining target model parameters from the optional model parameters; establishing a de-identification model according to the target model parameters; and carrying out de-identification on the target data by utilizing the de-identification model.
In one possible implementation manner, an attack channel is formed by simulation between a first input variable and an output variable of an attack side, and a defense channel is formed by simulation between a second input variable and an output variable of a defense side; determining the attack capability of the attack side and the defense capability of the defense side according to the attack success probability and the defense success probability, wherein the determination comprises the following steps: determining the first channel capacity of an attack channel according to the attack success probability and the defense success probability; determining the attack capability of an attack side according to the first channel capacity; determining the second channel capacity of the defending channel according to the attack success probability and the defending success probability; and determining the defense capability of a defense side according to the second channel capacity.
In one possible implementation manner, determining the first channel capacity of the attack channel according to the attack success probability and the defense success probability includes: determining a first joint probability distribution between the first input variable and the output variable of the attack channel according to the attack success probability and the defense success probability; and determining first mutual information between a first input variable and an output variable according to the first joint probability distribution, and determining the first channel capacity based on the first mutual information.
In one possible implementation manner, determining the second channel capacity of the defensive channel according to the attack success probability and the defensive success probability includes: determining a second joint probability distribution between the second input variable and the output variable of the defending channel according to the attack success probability and the defending success probability; and determining second mutual information between a second input variable and an output variable according to the second joint probability distribution, and determining the second channel capacity based on the second mutual information.
In one possible implementation manner, in the representation of the attack capability, the second model parameter is a variable, and the first model parameter is a first fixed value; in the representation of the defense ability, the first model parameter is a variable, and the second model parameter is a second fixed value; determining optional model parameters that make the defense capability greater than the attack capability, including: and when the first fixed value and the second fixed value are the same, determining selectable model parameters which enable the defense capacity to be larger than the attack capacity.
In one possible implementation manner, the size of the equivalence group in the de-identification model is determined according to the target model parameter; and establishing a de-identification model according to the determined size of the equivalence group.
In one possible implementation manner, the identifying target data using the identifying model includes: grouping the target data according to the size of the equivalence group in the de-identification model; and according to the grouping result, performing de-identification processing on the data corresponding to the target attribute in each group of data to obtain de-identified target data.
In a second aspect, an embodiment of the present application provides an information de-identification apparatus, including: the determining module is used for determining the probability of attack success of the attack side according to the first model parameter and determining the probability of defense success of the defense side according to the second model parameter; determining the attack capability of an attack side and the defense capability of a defense side according to the attack success probability and the defense success probability; determining optional model parameters which enable the defense capacity to be larger than the attack capacity, and determining target model parameters from the optional model parameters; the model establishing module is used for establishing a de-identification model according to the target model parameters; and the de-identification module is used for carrying out de-identification on the target data by utilizing the de-identification model.
In a third aspect, an embodiment of the present application provides an electronic device, including: at least one processor; and at least one memory communicatively coupled to the processor, wherein: the memory stores program instructions executable by the processor, which when called by the processor are capable of performing the method as described above.
In a fourth aspect, embodiments of the present application provide a non-transitory computer-readable storage medium storing computer instructions that cause the computer to perform the method as described above.
In the above technical scheme, the attack success probability of the attack side and the defense success probability of the defense side are determined according to the first model parameter and the second model parameter, and the attack capability of the attack side and the defense capability of the defense side are then determined. Target model parameters are determined from the selectable model parameters that make the defense capability greater than the attack capability; a de-identification model is established according to the target model parameters, and the target data are de-identified using the de-identification model. Based on the scheme of the embodiments of the application, the rationality of the model parameter setting of the de-identification model can be improved, so that the de-identification model can complete the de-identification of the target data, reduce the risk of re-identification, and ensure the usability of the target data.
[ description of the drawings ]
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a flow chart of one embodiment of a method for de-identifying information of the present application;
FIG. 2 is a flow chart of another embodiment of a method for de-identifying information of the present application;
FIG. 3 is a graph showing the average mutual information amount variation of an attack channel in the information de-identification method of the present application;
FIG. 4 is a graph illustrating the average mutual information change of defense channels in the information de-identification method of the present application;
FIG. 5 is a graph of a combination of attack and defense capabilities in the information de-identification method of the present application;
FIG. 6 is a schematic structural diagram of an embodiment of an information de-identification apparatus according to the present application;
fig. 7 is a schematic structural diagram of an embodiment of an electronic device according to the present application.
[ detailed description ]
For better understanding of the technical solutions of the present application, the following detailed descriptions of the embodiments of the present application are provided with reference to the accompanying drawings.
It should be understood that the embodiments described are only a few embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the examples of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
When publishing data, the common de-identification models are the k-anonymity model, the l-diversity model, the t-closeness model, and so on. The main idea of these de-identification models is as follows: each data value under a data attribute is required to belong to an equivalence group of at least k (or l or t) records, so that an attacker cannot relate the data to its corresponding subject.
For the k-anonymity model, each data value under a data attribute is required to belong to a group of at least k records; for the l-diversity model, at least l records; and for the t-closeness model, at least t records. Here k, l, and t are the model parameters of the respective de-identification models, and the size of the equivalence group in each de-identification model is determined by its model parameter.
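The equivalence-group requirement described above can be sketched in a few lines of Python. The function name and the toy records below are illustrative, not taken from the patent; the sketch only checks the k-anonymity condition that every combination of quasi-identifier values occurs in at least k records.

```python
from collections import Counter

def satisfies_k_anonymity(records, quasi_identifiers, k):
    """Check that every combination of quasi-identifier values appears
    in at least k records, i.e. every equivalence group has size >= k."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

rows = [
    {"age": "10-15", "zip": "100*"},
    {"age": "10-15", "zip": "100*"},
    {"age": "20-25", "zip": "200*"},
    {"age": "20-25", "zip": "200*"},
]
print(satisfies_k_anonymity(rows, ["age", "zip"], 2))  # True
print(satisfies_k_anonymity(rows, ["age", "zip"], 3))  # False
```

Each of the two equivalence groups above contains exactly two records, so the table is 2-anonymous but not 3-anonymous.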
Fig. 1 is a flowchart of an embodiment of an information de-identification method according to the present application, and as shown in fig. 1, the information de-identification method may include:
Step 101: determine the attack success probability of the attack side according to the first model parameter, and determine the defense success probability of the defense side according to the second model parameter.
In this embodiment, the first model parameter is a model parameter of the attack-side de-identification model, and a value thereof represents a size of an equivalence group in the attack-side de-identification model. The second model parameter is a model parameter of the defensive side de-identification model, and the value of the second model parameter represents the size of the equivalent group in the defensive side de-identification model.
According to the size of the equivalence group in the attack side de-identification model, the attack success probability of the attack side and the attack failure probability can be determined; according to the size of the equivalence group in the defense side identification model, the success probability of defense of the defense side and the failure probability of defense can be determined.
In one specific implementation, if the first model parameter is K1, then the size of the equivalence group in the attack-side de-identification model is K1; the probability of attack success on the attack side is 1/K1, and accordingly the probability of attack failure is 1 - 1/K1. If the second model parameter is K2, then the size of the equivalence group in the defense-side de-identification model is K2; the probability of successful defense on the defense side is 1 - 1/K2, and accordingly the probability of defense failure is 1/K2.
The above K1 and K2 are merely exemplary and do not limit this embodiment. In practice, the first model parameter and the second model parameter may be set according to the specific de-identification model used.
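As a minimal illustration of the formulas above (the function names are ours, not the patent's), the success and failure probabilities follow directly from K1 and K2:

```python
def attack_probabilities(k1):
    """Attack side: with an equivalence group of size k1, a re-identification
    guess succeeds with probability 1/k1. Returns (success, failure)."""
    p_success = 1.0 / k1
    return p_success, 1.0 - p_success

def defense_probabilities(k2):
    """Defense side: with an equivalence group of size k2, defense succeeds
    with probability 1 - 1/k2. Returns (success, failure)."""
    p_success = 1.0 - 1.0 / k2
    return p_success, 1.0 - p_success

print(attack_probabilities(4))   # (0.25, 0.75)
print(defense_probabilities(4))  # (0.75, 0.25)
```

Note that a larger equivalence group lowers the attacker's success probability and raises the defender's, which is the trade-off the later steps balance.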
Step 102: determine the attack capability of the attack side and the defense capability of the defense side according to the attack success probability and the defense success probability.
Firstly, an attack channel is formed by simulation between a first input variable and an output variable of an attack side, and a defense channel is formed by simulation between a second input variable and an output variable of a defense side.
In this embodiment, the attack side refers to a party that initiates the re-identification attack. When the attack side attacks, the attack process can be regarded as a communication process inside one channel. When the output information of the channel is consistent with the input information, the channel communication is considered to be successful, namely, the attack side attacks successfully once. Correspondingly, the defense side refers to a party resisting the re-identification attack, and the defense process can also be regarded as a communication process inside one channel, and the channel communication is successful, namely the defense is successful.
Based on the above, the present embodiment analyzes the attack side and the defense side by constructing the attack channel and the defense channel.
Specifically, an attack event on an attack side is used as a first input variable, and a defense event on a defense side is used as a second input variable. Meanwhile, an output variable is constructed according to the corresponding relation between the first input variable and the second input variable.
Further, any two random variables can form a channel, and in combination with the above content of this embodiment, an attack channel is formed by simulation using the first input variable and the output variable, and a defense channel is formed by simulation using the second input variable and the output variable.
Then, according to the probability of successful attack and the probability of successful defense, determining the first channel capacity of the attack channel; and determining the attack capability of the attack side according to the first channel capacity. Determining the second channel capacity of the defending channel according to the attack success probability and the defending success probability; and determining the defense capability of the defense side according to the second channel capacity.
For the attack side, the attack capability refers to the capability of successfully completing the re-identification attack. The first channel capacity represents the maximum information rate transmitted without errors in the attack channel, i.e. the maximum capacity for successful communication in the attack channel. Therefore, the first channel capacity is proportional to the attack capability of the attack side, and therefore, the value of the attack capability of the attack side is expressed by the value of the first channel capacity in the present embodiment.
Similarly, for the defense side, the defense capability refers to the capability of successfully defending against the re-identification attack. The second channel capacity represents the maximum information rate transmitted error-free in the defense channel, i.e. the maximum capacity for successful communication in the defense channel. The second channel capacity is therefore proportional to the defense capability of the defense side, and so the value of the second channel capacity is used in this embodiment to express the magnitude of the defense side's defense capability.
Specifically, a first joint probability distribution between a first input variable and an output variable of an attack channel is determined according to the probability of attack success and the probability of defense success. First mutual information between the first input variable and the output variable is determined according to the first joint probability distribution. A first channel capacity is determined based on the first mutual information. The size of the first channel capacity may be used to represent the size of the attack-side attack capability.
Similarly, a second joint probability distribution between the second input variable and the output variable of the defense channel is determined according to the attack success probability and the defense success probability. Second mutual information between the second input variable and the output variable is determined according to the second joint probability distribution, and the second channel capacity is determined based on the second mutual information. The size of the second channel capacity may be used to represent the size of the defense side's defense capability.
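The mutual-information step can be sketched as follows. This is the standard definition I(X;Z) = sum over x,z of p(x,z) * log2(p(x,z) / (p(x) p(z))) applied to a 2 x 2 joint distribution; it is not code from the patent, and the patent additionally maximizes this quantity over input distributions to obtain the channel capacity.

```python
import math

def mutual_information(joint):
    """Mutual information I(X;Z) in bits for a 2x2 joint distribution,
    where joint[x][z] = P(X=x, Z=z)."""
    px = [sum(row) for row in joint]            # marginal P(X=x)
    pz = [sum(col) for col in zip(*joint)]      # marginal P(Z=z)
    mi = 0.0
    for x in range(2):
        for z in range(2):
            p = joint[x][z]
            if p > 0:
                mi += p * math.log2(p / (px[x] * pz[z]))
    return mi

# Independent input and output: no information gets through (0 bits).
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))
# Noiseless channel: input determines output exactly (1 bit).
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))
```

The two extreme cases bracket the behavior of the attack and defense channels: the closer the joint distribution is to the diagonal case, the higher the corresponding channel capacity and thus the corresponding capability.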
Step 103: determine the selectable model parameters that make the defense capability greater than the attack capability, and determine the target model parameters from the selectable model parameters.
In this embodiment, for the attack side, the first model parameter is determined, and the attack capability thereof is determined by the second model parameter of the defense side. Accordingly, for the defense side, the second model parameters are determined, and the defense capability thereof is determined by the first model parameters of the attack side. That is, in the attack capability representation on the attack side, the second model parameter is a variable, and the first model parameter is a first fixed value. Similarly, in the representation of the defense capability of the defense side, the first model parameter is a variable, and the second model parameter is a second fixed value.
Based on the above, when determining the target model parameters, the first fixed value and the second fixed value are first set equal. The attack capability of the attack side can then be represented as a first variation curve with the second model parameter as the variable, and the defense capability of the defense side as a second variation curve with the first model parameter as the variable. The selectable model parameters that make the defense capability greater than the attack capability are determined from the first and second variation curves, and the smallest selectable model parameter is selected as the target model parameter of this embodiment.
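The selection rule can be sketched as follows. The two capability functions passed in below are hypothetical monotone stand-ins for the channel-capacity curves of the previous steps, used only for illustration; the rule itself, keep the candidates where defense exceeds attack and then take the smallest, follows the text above.

```python
def select_target_parameter(candidates, attack_capability, defense_capability):
    """Return the smallest candidate parameter for which the defense
    capability strictly exceeds the attack capability, or None if
    no candidate qualifies."""
    selectable = [k for k in candidates
                  if defense_capability(k) > attack_capability(k)]
    return min(selectable) if selectable else None

# Toy stand-ins for the capacity curves (illustrative only): attack
# capability falls with k while defense capability rises with k.
k_star = select_target_parameter(range(2, 11),
                                 attack_capability=lambda k: 1 / k,
                                 defense_capability=lambda k: 1 - 1 / k)
print(k_star)  # 3
```

With these stand-ins, k = 2 gives a tie (0.5 each), so the smallest parameter where defense strictly wins is k = 3.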
Step 104: establish a de-identification model according to the target model parameters.
And determining the size of the equivalence group in the de-identification model according to the target model parameters, and establishing the de-identification model.
Specifically, for the k-anonymity model, if the target model parameter is k, the size of the equivalence group in the k-anonymity model is determined to be k; for the l-diversity model, if the target model parameter is l, the size of the equivalence group in the l-diversity model is determined to be l; and for the t-closeness model, if the target model parameter is t, the size of the equivalence group in the t-closeness model is determined to be t. Other de-identification models may of course also be used; the method is the same as above and is not repeated here.
Step 105: de-identify the target data using the de-identification model.
First, the target data is grouped according to the size of the equivalence groups in the de-identified model.
Specifically, under each data attribute in the target data, data with the same value and similar values are divided into the same equivalent group. The number of data records in each equivalence group is determined by the size of the equivalence group.
And then, according to the grouping result, carrying out de-identification on the data corresponding to the target attribute in each group of data to obtain de-identified target data.
In this embodiment, an optional manner is to perform generalization processing on the data according to the value range of the data corresponding to the target attribute in each equivalence group, so as to obtain the target data without identification.
For example, when the data corresponding to the age attribute in an equivalence group of size 4 are 14, 11, 10, and 15, the data are generalized so that the de-identified target data are [10-15], [10-15], [10-15], and [10-15].
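The age example can be reproduced with a small generalization routine. The function below is an illustrative sketch, not the patent's algorithm (sorting before grouping is our assumption): it replaces each value in an equivalence group with the group's min-max range.

```python
def generalize(values, group_size):
    """Sort values, split them into consecutive equivalence groups of the
    given size, and replace each value with its group's 'min-max' label."""
    values = sorted(values)
    out = []
    for i in range(0, len(values), group_size):
        group = values[i:i + group_size]
        out.extend([f"{min(group)}-{max(group)}"] * len(group))
    return out

print(generalize([14, 11, 10, 15], 4))  # ['10-15', '10-15', '10-15', '10-15']
```

After generalization, all four records share the same age interval, so age can no longer distinguish them.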
Another optional mode is to perform shielding processing on the data according to the data corresponding to the target attribute in each equivalence group to obtain the de-identified target data.
For example, when the data corresponding to the identity-number attribute in an equivalence group of size 4 are 145864199602270020, 115248199805260247, 105428189506120451, and 155856200612030020, the data are masked; keeping, for example, the first six and last four digits, the de-identified target data are 145864********0020, 115248********0247, 105428********0451, and 155856********0020.
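The masking step can be sketched similarly. Keeping the first six and last four digits is our reading of the example, not a rule stated by the patent:

```python
def mask_id(id_number, keep_front=6, keep_back=4):
    """Replace the middle characters of an identifier with asterisks,
    keeping keep_front leading and keep_back trailing characters."""
    hidden = len(id_number) - keep_front - keep_back
    return id_number[:keep_front] + "*" * hidden + id_number[-keep_back:]

print(mask_id("145864199602270020"))  # 145864********0020
```

Unlike generalization, masking removes the sensitive middle digits outright rather than coarsening them, which suits identifier-like attributes that have no useful numeric range.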
In this embodiment, according to the first model parameter and the second model parameter, the probability of successful attack by the attack side and the probability of successful defense by the defense side are determined, and then an attack channel and a defense channel are constructed. And determining the attack capability of the attack side according to the first channel capacity of the attack channel. And determining the defense capability of the defense side according to the second channel capacity of the defense channel. Target model parameters are determined from the selectable model parameters that cause the defense capability to be greater than the attack capability. And establishing a de-identification model according to the target model parameters, and performing de-identification on the target data. The rationality of the model parameter setting of the de-identification model is improved, so that the de-identification model can not only complete de-identification of the target data, play a role in hiding the data, but also ensure the usability of the target data.
In another embodiment of the present application, the establishment of attack and defense channels is further described.
First, a first input variable and a second input variable are determined.
The attack event on the attack side is taken as the variable X, i.e. the first input variable. When the attack side is evaluated as having attacked successfully, this is recorded as X = 1; when it is evaluated as having failed, it is recorded as X = 0. The defense event on the defense side is taken as the variable Y, i.e. the second input variable. When the defense side is evaluated as having defended successfully, this is recorded as Y = 1; when it is evaluated as having failed, it is recorded as Y = 0.
Then, a variable Z, i.e., an output variable, is constructed according to the correspondence between the first input variable and the second input variable. And forming an attack channel and a defense channel by using the output variable simulation.
Specifically, the first input variable is used as the input end and the output variable as the output end, and the attack channel is formed by simulation. When the attack side attacks successfully, the attack channel is considered to have successfully passed the input information to the output, i.e. the event {Z = 0, X = 0} or the event {Z = 1, X = 1} occurs. At this time, the defense side is considered to have failed to defend, i.e. Y = 0.
When the second input variable is used as the input end and the output variable as the output end, the defense channel can likewise be formed by simulation. When the defense side defends successfully, the defense channel is considered to have successfully passed the input information to the output, i.e. the event {Z = 0, Y = 0} or the event {Z = 1, Y = 1} occurs. At this time, the attack side is considered to have failed to attack, i.e. X = 0.
In another embodiment of the present application, a specific method for determining the attack capability of the attack side and the defense capability of the defense side is provided.
Fig. 2 is a flowchart of another embodiment of the information de-identification method of the present application. As shown in the figure, in this embodiment, the steps of determining the attack capability of the attack side and the defense capability of the defense side are as follows:
Step 201: determine a first joint probability distribution between the first input variable and the output variable of the attack channel, and a second joint probability distribution between the second input variable and the output variable of the defense channel, according to the attack success probability and the defense success probability.
After the attacking and defending parties compete for a long enough time, the probability distribution and the joint probability distribution of the first input variable and the second input variable can be obtained as follows:
P_S(attack success) = P_S(X = 1) = p
P_S(attack failure) = P_S(X = 0) = 1 − p
P_S(defense success) = P_S(Y = 1) = q
P_S(defense failure) = P_S(Y = 0) = 1 − q
P_S(attack success, defense success) = P_S(X = 1, Y = 1) = a
P_S(attack success, defense failure) = P_S(X = 1, Y = 0) = b
P_S(attack failure, defense success) = P_S(X = 0, Y = 1) = c
P_S(attack failure, defense failure) = P_S(X = 0, Y = 0) = d
Note that a + b + c + d = 1.
For the attack side, the 2 × 2 transition probability matrix of the attack channel is A = [A(x, z)] = [P_S(z | x)] (x, z = 0 or 1). The transition matrix of the attack channel formed by the first input variable and the output variable is therefore:

A = [ d/(1 − p)   c/(1 − p) ]
    [ a/p         b/p       ]

(rows indexed by x = 0, 1 and columns by z = 0, 1, derived from the joint distribution of (X, Z) and the marginals p = a + b, 1 − p = c + d).
then, a first joint probability distribution (X, Z) between a first input variable and an output variable of the attack channel is:
P_S(X = 0, Z = 0) = P_S(X = 0, Y = 0) = d
P_S(X = 0, Z = 1) = P_S(X = 0, Y = 1) = c
P_S(X = 1, Z = 0) = P_S(X = 1, Y = 1) = a
P_S(X = 1, Z = 1) = P_S(X = 1, Y = 0) = b
For the defense side, the 2 × 2 transition probability matrix of the defense channel is B = [B(y, z)] = [P_S(z | y)] (y, z = 0 or 1). The transition matrix of the defense channel formed by the second input variable and the output variable is therefore:

B = [ d/(1 − q)   b/(1 − q) ]
    [ a/q         c/q       ]

(rows indexed by y = 0, 1 and columns by z = 0, 1, derived from the joint distribution of (Y, Z) and the marginals q = a + c, 1 − q = b + d).
then, a second joint probability distribution (Y, Z) between a second input variable and an output variable of the defensive channel is:
P_S(Y = 0, Z = 0) = P_S(X = 0, Y = 0) = d
P_S(Y = 0, Z = 1) = P_S(X = 1, Y = 0) = b
P_S(Y = 1, Z = 0) = P_S(X = 1, Y = 1) = a
P_S(Y = 1, Z = 1) = P_S(X = 0, Y = 1) = c
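As a cross-check, the joint distributions and transition matrices of the two channels can be reproduced numerically. The sketch below uses illustrative values for a, b, c, d (assumptions, not from the patent) and verifies that each transition-matrix row is a proper probability distribution:

```python
from fractions import Fraction as F

# Illustrative joint distribution of (X, Y); the values are assumptions,
# chosen only so that a + b + c + d = 1.
a, b, c, d = F(1, 16), F(3, 16), F(9, 16), F(3, 16)
assert a + b + c + d == 1

p = a + b  # P(X = 1), probability of attack success
q = a + c  # P(Y = 1), probability of defense success

# Transition matrices A(x, z) = P(Z = z | X = x) and B(y, z) = P(Z = z | Y = y),
# derived from the joint distributions (X, Z) and (Y, Z) given in the text.
A = [[d / (1 - p), c / (1 - p)],   # row x = 0
     [a / p, b / p]]               # row x = 1
B = [[d / (1 - q), b / (1 - q)],   # row y = 0
     [a / q, c / q]]               # row y = 1

assert all(sum(row) == 1 for row in A + B)  # each row sums to exactly 1
```

Exact rational arithmetic (`fractions.Fraction`) keeps the row-sum checks free of floating-point error.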
Step 202: determine first mutual information between the first input variable and the output variable, and second mutual information between the second input variable and the output variable, according to the first joint probability distribution and the second joint probability distribution respectively.
For the attack side, the first mutual information between the first input variable and the output variable, obtained from the first joint probability distribution, is:

I(X; Z) = Σ_{x,z} P_S(x, z) · log₂[ P_S(x, z) / (P_S(x) P_S(z)) ]

For the defense side, the second mutual information between the second input variable and the output variable, obtained from the second joint probability distribution, is:

I(Y; Z) = Σ_{y,z} P_S(y, z) · log₂[ P_S(y, z) / (P_S(y) P_S(z)) ]
Step 203: determine the attack capability and the defense capability according to the first mutual information and the second mutual information.
Finally, optionally, the first channel capacity of the attack side is determined according to the first mutual information of the attack side, and then the attack capability of the attack side is determined according to the first channel capacity. And determining the second channel capacity of the defense side according to the second mutual information of the defense side, and further determining the defense capability of the defense side according to the second channel capacity.
For the attack side, the first channel capacity is the maximum value of the first mutual information, namely:

First channel capacity C = max I(X; Z)

The attack capability of the attack side can be determined from this formula.

For the defense side, the second channel capacity is the maximum value of the second mutual information, namely:

Second channel capacity F = max I(Y; Z)

The defense capability of the defense side is determined from this formula.
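The mutual-information quantities above can be computed directly from a 2 × 2 joint distribution. A minimal sketch (the helper name and the sample values of a, b, c, d are assumptions, not from the patent):

```python
import math

def mutual_information(joint):
    """I(U; Z) in bits for a 2x2 joint distribution joint[u][z]."""
    pu = [sum(row) for row in joint]              # marginal of the input
    pz = [sum(col) for col in zip(*joint)]        # marginal of the output
    return sum(v * math.log2(v / (pu[u] * pz[z]))
               for u, row in enumerate(joint)
               for z, v in enumerate(row) if v > 0)

# Joint (X, Z) of the attack channel and (Y, Z) of the defense channel,
# laid out as in the text; a, b, c, d are illustrative values summing to 1.
a, b, c, d = 0.1, 0.2, 0.3, 0.4
I_attack  = mutual_information([[d, c], [a, b]])   # first mutual information
I_defense = mutual_information([[d, b], [a, c]])   # second mutual information
```

As sanity checks, an independent joint (all entries 0.25) gives 0 bits, and a perfectly correlated joint ([[0.5, 0], [0, 0.5]]) gives 1 bit, the capacity of a noiseless binary channel.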
In this embodiment, when the attack capability and the defense capability are determined, the attack capability can alternatively be determined from the average value of the first mutual information of the attack side, and the defense capability from the average value of the second mutual information of the defense side.
In another embodiment of the present application, a specific method for determining selectable model parameters and determining target model parameters from the selectable model parameters is provided.
From the above embodiments, it can be seen that C is the first channel capacity of the attack channel, i.e., the attack capability of the attack side, and F is the second channel capacity of the defense channel, i.e., the defense capability of the defense side. When C < F, the attack capability is less than the defense capability; when C > F, the attack capability is greater than the defense capability; when C = F, the attack capability and the defense capability are comparable.
In a specific de-identification model, the values of p, q, a, b, c, and d in the above embodiments are determined by the model parameters of the de-identification model. Therefore, once the model parameters are fixed, the attack capability of the attack side and the defense capability of the defense side under those parameters can be calculated.
Therefore, any model parameter for which the defense capability exceeds the attack capability allows the de-identification model to achieve effective de-identification with higher security, and can serve as a selectable model parameter.
In consideration of protection of data authenticity in the de-identification model, the smallest one of the plurality of selectable model parameters may be used as the target model parameter of the embodiment. Therefore, when the data in the equivalence group is subjected to de-identification processing, such as generalization processing on the data in the equivalence group, the generalization range is correspondingly reduced, and the authenticity and the usability of the data are protected to the maximum extent.
Of course, when determining the target model parameter, the larger one of the selectable model parameters can be used as the target model parameter according to the actual requirement, so as to ensure that the defense capability is far greater than the attack capability, and protect the security of the data after the identification is removed to the maximum extent.
In the embodiment, the model parameter which enables the defense capability to be larger than the attack capability is determined to be the optional model parameter, so that the de-identification model can complete effective de-identification of the target data, and the capability of resisting the re-identification attack is improved. Meanwhile, the smallest one of the selectable model parameters is used as a target model parameter, so that the size of the equivalence group is as small as possible, and the authenticity of data can be ensured to the maximum extent when the data in the equivalence group is subjected to de-identification processing.
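The selection rule described above — keep the parameters for which defense exceeds attack, then take the smallest — can be sketched as a small helper. The capability values below are made-up numbers mirroring the k = 8 crossover of the k-anonymity example, not the patent's data:

```python
def select_target_parameter(attack_cap, defense_cap):
    """Smallest model parameter whose defense capability strictly exceeds
    its attack capability (hypothetical helper)."""
    selectable = [k for k in sorted(attack_cap)
                  if defense_cap[k] > attack_cap[k]]
    if not selectable:
        raise ValueError("no parameter makes defense exceed attack")
    return selectable[0]

# Made-up capability values: the curves cross at k = 8, so 9 is the target.
attack  = {7: 0.50, 8: 0.44, 9: 0.40, 10: 0.37}
defense = {7: 0.40, 8: 0.44, 9: 0.45, 10: 0.46}
target_k = select_target_parameter(attack, defense)  # -> 9
```

Taking the smallest selectable parameter keeps the equivalence groups as small as security allows, which is exactly the authenticity-preserving choice described above.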
In another embodiment of the present application, a specific implementation process for implementing information de-identification by using the information de-identification method of the present application is provided.
The embodiment takes a k-anonymous model as an example, and a specific implementation process is given.
For each record in an equivalence group, the probability of re-identification, i.e., the probability of a successful attack by the attack side, is 1 divided by the size of its equivalence group. Since in the k-anonymity model the model parameter k represents the size of the equivalence group, the attack success probability is 1/k. The larger the model parameter k, the larger the equivalence group and, correspondingly, the smaller the probability of a successful attack.
In this example, K1 is taken as the first model parameter of the attack side, and K2 as the second model parameter of the defense side. For the k-anonymity model, the individual event probabilities are as follows:
P_S(attack success) = P_S(X = 1) = 1/K1
P_S(attack failure) = P_S(X = 0) = 1 − 1/K1
P_S(defense success) = P_S(Y = 1) = 1 − 1/K2
P_S(defense failure) = P_S(Y = 0) = 1/K2
P_S(attack success, defense success) = P_S(X = 1, Y = 1) = (1/K1)(1 − 1/K2) = a
P_S(attack success, defense failure) = P_S(X = 1, Y = 0) = 1/(K1·K2) = b
P_S(attack failure, defense success) = P_S(X = 0, Y = 1) = (1 − 1/K1)(1 − 1/K2) = c
P_S(attack failure, defense failure) = P_S(X = 0, Y = 0) = (1/K2)(1 − 1/K1) = d
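Under the independence between attack and defense events that these formulas imply, the event probabilities can be generated programmatically (a sketch; the function name is an assumption):

```python
def k_anonymity_probs(k1, k2):
    """Event probabilities of the k-anonymity example, with K1 the attack
    side's parameter and K2 the defense side's parameter."""
    p = 1 / k1               # attack success: re-identify 1 record out of K1
    q = 1 - 1 / k2           # defense success
    a = p * q                # attack success, defense success
    b = p * (1 - q)          # attack success, defense failure = 1/(K1*K2)
    c = (1 - p) * q          # attack failure, defense success
    d = (1 - p) * (1 - q)    # attack failure, defense failure
    return p, q, a, b, c, d
```

The four joint probabilities always sum to 1, matching the constraint a + b + c + d = 1 noted earlier.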
And constructing a random variable Z as an output variable, taking an attack event at an attack side as a first input variable, and simulating the first input variable and the output variable to form an attack channel. And taking the defense event on the defense side as a second input variable, and simulating with the output variable to form a defense channel.
Wherein, when constructing the random variable Z, Z should satisfy the condition as shown in the following table 1:
TABLE 1
X | Y | Z
---|---|---
0 | 1 | 1
0 | 0 | 0
1 | 1 | 0
1 | 0 | 1
That is, when the event {Z = 0, X = 0} or the event {Z = 1, X = 1} occurs, the information transmission in the attack channel is considered successful; at this time the defense side fails to defend, and the second input variable Y = 0. When the event {Z = 0, Y = 0} or the event {Z = 1, Y = 1} occurs, the information transmission in the defense channel is considered successful; at this time the attack side's attack fails, and the first input variable X = 0.
In this embodiment, the random variable Z is constructed as Z = (X + Y) mod 2; that is, the value of the first input variable X and the value of the second input variable Y are added, and the sum is taken modulo 2 to obtain the random variable Z. The resulting Z satisfies the conditions in Table 1.
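The construction Z = (X + Y) mod 2 can be checked exhaustively against Table 1:

```python
# Rows of Table 1 as (X, Y) -> Z.
table_1 = {(0, 1): 1, (0, 0): 0, (1, 1): 0, (1, 0): 1}

for (x, y), z in table_1.items():
    assert (x + y) % 2 == z  # the construction reproduces every row
```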
For the attack channel, substituting the k-anonymity probabilities, the transition matrix (rows x = 0, 1; columns z = 0, 1) is:

[ 1/K2        1 − 1/K2 ]
[ 1 − 1/K2    1/K2     ]
a first joint probability distribution (X, Z) between a first input variable and an output variable of the attack channel is:
P_S(X = 0, Z = 0) = P_S(X = 0, Y = 0) = d
P_S(X = 0, Z = 1) = P_S(X = 0, Y = 1) = c
P_S(X = 1, Z = 0) = P_S(X = 1, Y = 1) = a
P_S(X = 1, Z = 1) = P_S(X = 1, Y = 0) = b
Therefore, the first mutual information between the first input variable and the output variable is I(X; Z) = Σ_{x,z} P_S(x, z) · log₂[ P_S(x, z) / (P_S(x) P_S(z)) ], evaluated with the joint distribution above.
For the defense channel, substituting the k-anonymity probabilities, the transition matrix (rows y = 0, 1; columns z = 0, 1) is:

[ 1 − 1/K1    1/K1     ]
[ 1/K1        1 − 1/K1 ]
a second joint probability distribution (Y, Z) between a second input variable and an output variable of the defensive channel is:
P_S(Y = 0, Z = 0) = P_S(X = 0, Y = 0) = d
P_S(Y = 0, Z = 1) = P_S(X = 1, Y = 0) = b
P_S(Y = 1, Z = 0) = P_S(X = 1, Y = 1) = a
P_S(Y = 1, Z = 1) = P_S(X = 0, Y = 1) = c
Therefore, the second mutual information between the second input variable and the output variable is I(Y; Z) = Σ_{y,z} P_S(y, z) · log₂[ P_S(y, z) / (P_S(y) P_S(z)) ], evaluated with the joint distribution above.
the first mutual information and the second mutual information respectively represent attack ability and defense ability, and in an actual process, the attack ability and the defense ability are determined by the value of a model parameter k of the k-anonymous model.
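Putting the pieces together for the k-anonymity example, the two capabilities can be evaluated as functions of K1 and K2. This is a sketch under the independence assumption implied by the event probabilities above; the patent's plotted curves may be computed differently:

```python
import math

def mi(joint):
    """Mutual information in bits of a 2x2 joint distribution."""
    pu = [sum(row) for row in joint]
    pz = [sum(col) for col in zip(*joint)]
    return sum(v * math.log2(v / (pu[u] * pz[z]))
               for u, row in enumerate(joint)
               for z, v in enumerate(row) if v > 0)

def capabilities(k1, k2):
    """(attack capability, defense capability) for parameters K1, K2."""
    p, q = 1 / k1, 1 - 1 / k2
    a, b = p * q, p * (1 - q)
    c, d = (1 - p) * q, (1 - p) * (1 - q)
    return mi([[d, c], [a, b]]), mi([[d, b], [a, c]])

attack_cap, defense_cap = capabilities(9, 50)
```

Both quantities are bounded by 1 bit, since each channel has binary input and output.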
Fig. 3 is a graph showing the average mutual information amount variation of an attack channel in the information de-identification method of the present application.
As shown in FIG. 3, for the attack side, the attack capability is determined by the value of the defense side's second model parameter K2; that is, with the attack side's first model parameter K1 fixed, the attack capability varies with the value of K2.
Fig. 4 is a graph illustrating an average mutual information amount variation of a defensive channel in the information de-identification method of the present application.
As shown in FIG. 4, for the defense side, the defense capability is determined by the value of the attack side's first model parameter K1; that is, with the defense side's second model parameter K2 fixed, the defense capability varies with the value of K1.
Therefore, when the attack side's first model parameter K1 and the defense side's second model parameter K2 take the same value, the variation curve of the attack capability and the variation curve of the defense capability can be combined, and the model parameters that make the defense capability greater than the attack capability can be determined as the selectable model parameters.
Fig. 5 is a combined graph of attack ability and defense ability in the information de-identification method of the present application.
As shown in FIG. 5, the two curves are, respectively, the variation curve of the first mutual information of the attack side when the second model parameter K2 is 50, i.e., the attack-capability curve; and the variation curve of the second mutual information of the defense side when the first model parameter K1 is 50, i.e., the defense-capability curve.
As can be seen from fig. 5, in this example, when the model parameter takes a value of 8, the attack capability and the defense capability are equivalent. Therefore, model parameters greater than 8 are considered as optional model parameters. At this time, the defensive ability is greater than the attack ability.
In this embodiment, in order to ensure the authenticity of the de-identified data to the maximum extent, the smallest one of the selectable model parameters is used as the target model parameter, that is, 9 is used as the target model parameter.
When the model parameter of the k-anonymous model takes a value of 9, the size of the equivalence group is 9, that is, for each data attribute, at least 9 records are included in one equivalence group. Therefore, at least 9 data that are the same or similar in each data attribute are grouped as a group according to the size of the equivalence group.
After grouping is completed, the data corresponding to each data attribute in each equivalence group is subjected to de-identification processing, and the specific processing mode may include: and shielding, generalizing, deleting and the like to obtain the de-identified target data.
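As a toy illustration of the grouping-and-generalization step, the sketch below buckets a single "age" quasi-identifier into fixed-width ranges and suppresses groups smaller than k. All names and the bucketing rule are assumptions; real de-identification handles multiple attributes and smarter generalization:

```python
from collections import defaultdict

def k_anonymize_ages(records, k, width=10):
    """Generalize exact ages into fixed-width ranges; suppress (delete) any
    equivalence group with fewer than k records."""
    groups = defaultdict(list)
    for rec in records:
        lo = (rec["age"] // width) * width
        groups[(lo, lo + width - 1)].append(rec)
    out = []
    for (lo, hi), members in groups.items():
        if len(members) < k:     # group too small: suppress it
            continue
        for rec in members:      # mask the exact age with its range
            out.append({**rec, "age": f"{lo}-{hi}"})
    return out
```

With k = 2 and ages [21, 23, 25, 41], the 20-29 bucket (three records) survives generalized to "20-29", while the lone 40-49 record is suppressed.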
Fig. 6 is a schematic structural diagram of an embodiment of an information de-identification apparatus according to the present application, where the information de-identification apparatus in the present embodiment may be used as an information de-identification device to implement the information de-identification method provided in the present application. As shown in fig. 6, the information de-identification device may include: a determination module 51, a model building module 52 and a de-identification module 53.
The determining module 51 is used for determining the probability of attack success on the attack side according to the first model parameter and determining the probability of defense success on the defense side according to the second model parameter; determining the attack capability of the attack side and the defense capability of the defense side according to the probability of attack success and the probability of defense success; and determining selectable model parameters that make the defense capability greater than the attack capability, and determining target model parameters from the selectable model parameters.
During specific implementation, an attack channel is simulated and formed between a first input variable and an output variable of an attack side, and a defense channel is simulated and formed between a second input variable and an output variable of a defense side.
The determining module 51 determines a first joint probability distribution between a first input variable and an output variable of the attack channel according to the attack success probability and the defense success probability, further determines first mutual information between the first input variable and the output variable, and determines a first channel capacity based on the first mutual information. And determining second joint probability distribution between a second input variable and an output variable of the defense channel according to the attack success probability and the defense success probability, further determining second mutual information between the second input variable and the output variable, and determining the capacity of the second channel based on the second mutual information. The first mutual information and the second mutual information represent the attack capability of the attack side and the defense capability of the defense side respectively.
The determining module 51 is further configured to determine the model parameters that make the defense capability greater than the attack capability as optional model parameters, and determine the target model parameters from the optional model parameters.
The model building module 52 is configured to build a de-identification model according to the target model parameters.

Specifically, the model building module 52 determines the size of the equivalence group in the de-identification model according to the target model parameters, and builds the de-identification model.
The de-identification module 53 is configured to de-identify the target data using the de-identification model. Specifically, it groups the target data according to the size of the equivalence group in the de-identification model and, according to the grouping result, performs de-identification processing on the data corresponding to each attribute in each group to obtain the de-identified target data.
In the information de-identification device, the determining module 51 determines mutual information between a first input variable and an output variable of an attack channel according to the attack success probability of the attack side and the defense success probability of the defense side, so as to obtain the attack capability of the attack side; and determining mutual information between a second input variable and an output variable of the defense channel according to the attack success probability of the attack side and the defense success probability of the defense side, and further obtaining the defense capability of the defense side. And taking the model parameters which enable the defense capacity to be larger than the attack capacity as selectable model parameters, and determining target model parameters from the selectable model parameters. The model building module 52 builds a de-identification model according to the target model parameters determined by the determining module, and the de-identification module 53 performs de-identification processing on the target data according to the obtained de-identification model. On one hand, the defense capability is greater than the attack capability, so that the safety of the data after being subjected to the de-identification can be ensured, and the risk of re-identification is reduced; on the other hand, on the basis of ensuring the safety, the equivalent group of the de-identification model is made as small as possible, and the authenticity and the usability of the data can be ensured to the maximum extent.
FIG. 7 is a schematic structural diagram of an embodiment of an electronic device according to the present application. As shown in FIG. 7, the electronic device may include at least one processor and at least one memory communicatively coupled to the processor, wherein the memory stores program instructions executable by the processor, and the processor calls the program instructions to execute the information de-identification method provided by the embodiments of the present application.
The electronic device may be an information de-identification device, and the embodiment does not limit the specific form of the electronic device.
FIG. 7 illustrates a block diagram of an exemplary electronic device suitable for use in implementing embodiments of the present application. The electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 7, the electronic device is in the form of a general purpose computing device. Components of the electronic device may include, but are not limited to: one or more processors 410, a memory 430, and a communication bus 440 that connects the various system components (including the memory 430 and the processing unit 410).
Electronic devices typically include a variety of computer system readable media. Such media may be any available media that is accessible by the electronic device and includes both volatile and nonvolatile media, removable and non-removable media.
A program/utility having a set (at least one) of program modules, including but not limited to an operating system, one or more application programs, other program modules, and program data, may be stored in memory 430, each of which examples or some combination may include an implementation of a network environment. The program modules generally perform the functions and/or methodologies of the embodiments described herein.
The electronic device may also communicate with one or more external devices (e.g., keyboard, pointing device, display, etc.), one or more devices that enable a user to interact with the electronic device, and/or any devices (e.g., network card, modem, etc.) that enable the electronic device to communicate with one or more other computing devices. Such communication may occur via communication interface 420. Furthermore, the electronic device may also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via a network adapter (not shown in FIG. 7), which may communicate with other modules of the electronic device via the communication bus 440. It should be appreciated that although not shown in FIG. 7, other hardware and/or software modules may be used in conjunction with the electronic device, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processor 410 executes programs stored in the memory 430 to perform various functional applications and data processing, such as implementing the information de-identification method provided by the embodiments of the present application.
The embodiment of the present application further provides a non-transitory computer-readable storage medium, where the non-transitory computer-readable storage medium stores computer instructions, and the computer instructions enable the computer to execute the information de-identification method provided in the embodiment of the present application.
The non-transitory computer readable storage medium described above may take any combination of one or more computer readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a flash Memory, an optical fiber, a portable compact disc Read Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
It should be noted that the terminal according to the embodiments of the present application may include, but is not limited to, a personal computer (PC), a personal digital assistant (PDA), a wireless handheld device, a tablet computer, a mobile phone, an MP3 player, an MP4 player, and the like.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.
Claims (10)
1. An information de-identification method, comprising:
determining the attack success probability of the attack side according to the first model parameter, and determining the defense success probability of the defense side according to the second model parameter;
determining the attack capability of an attack side and the defense capability of a defense side according to the attack success probability and the defense success probability;
determining optional model parameters which enable the defense capacity to be larger than the attack capacity, and determining target model parameters from the optional model parameters;
establishing a de-identification model according to the target model parameters;
and carrying out de-identification on the target data by utilizing the de-identification model.
2. The method of claim 1, wherein an attack channel is formed by simulation between a first input variable and an output variable of the attack side, and a defense channel is formed by simulation between a second input variable and an output variable of the defense side;
determining the attack capability of the attack side and the defense capability of the defense side according to the attack success probability and the defense success probability, wherein the determination comprises the following steps:
determining the first channel capacity of an attack channel according to the attack success probability and the defense success probability; determining the attack capability of an attack side according to the first channel capacity;
determining the second channel capacity of the defending channel according to the attack success probability and the defending success probability; and determining the defense capability of a defense side according to the second channel capacity.
3. The method of claim 2, wherein determining the first channel capacity of the attack channel based on the probability of attack success and the probability of defense success comprises:
determining a first joint probability distribution between the first input variable and the output variable of the attack channel according to the attack success probability and the defense success probability;
and determining first mutual information between a first input variable and an output variable according to the first joint probability distribution, and determining the first channel capacity based on the first mutual information.
4. The method of claim 2, wherein determining the second channel capacity of the defensive channel based on the probability of attack success and the probability of defensive success comprises:
determining a second joint probability distribution between the second input variable and the output variable of the defending channel according to the attack success probability and the defending success probability;
and determining second mutual information between a second input variable and an output variable according to the second joint probability distribution, and determining the second channel capacity based on the second mutual information.
5. The method according to claim 1, characterized in that the second model parameter is a variable and the first model parameter is a first fixed value in the representation of the attack capability; in the representation of the defense ability, the first model parameter is a variable, and the second model parameter is a second fixed value;
determining optional model parameters that make the defense capability greater than the attack capability, including:
and when the first fixed value and the second fixed value are the same, determining selectable model parameters which enable the defense capacity to be larger than the attack capacity.
6. The method of claim 1, wherein building a de-labeled model from the target model parameters comprises:
determining the size of an equivalence group in the de-identified model according to the target model parameters;
and establishing a de-identification model according to the determined size of the equivalence group.
7. The method of claim 6, wherein the de-identifying target data using the de-identification model comprises:
grouping the target data according to the size of the equivalence group in the de-identification model;
and according to the grouping result, carrying out de-identification on the data corresponding to the target attribute in each group of data.
8. An information de-identification apparatus, comprising:
a determining module configured to determine an attack success probability of an attack side according to a first model parameter, and a defense success probability of a defense side according to a second model parameter; determine the attack capability of the attack side and the defense capability of the defense side according to the attack success probability and the defense success probability; and determine optional model parameters that make the defense capability greater than the attack capability, and determine target model parameters from the optional model parameters;
a model establishing module configured to establish a de-identification model according to the target model parameters; and
a de-identification module configured to de-identify target data using the de-identification model.
9. An electronic device, comprising:
at least one processor; and
at least one memory communicatively coupled to the processor, wherein:
the memory stores program instructions executable by the processor, and the processor invokes the program instructions to perform the method of any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010677009.1A CN113938265B (en) | 2020-07-14 | 2020-07-14 | Information de-identification method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113938265A true CN113938265A (en) | 2022-01-14 |
CN113938265B CN113938265B (en) | 2024-04-12 |
Family
ID=79274108
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010677009.1A Active CN113938265B (en) | 2020-07-14 | 2020-07-14 | Information de-identification method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113938265B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2679800A1 (en) * | 2008-09-22 | 2010-03-22 | University Of Ottawa | Re-identification risk in de-identified databases containing personal information |
CN106940777A (en) * | 2017-02-16 | 2017-07-11 | 湖南宸瀚信息科技有限责任公司 | A kind of identity information method for secret protection measured based on sensitive information |
CN108040070A (en) * | 2017-12-29 | 2018-05-15 | 北京奇虎科技有限公司 | A kind of network security test platform and method |
CN109067750A (en) * | 2018-08-14 | 2018-12-21 | 中国科学院信息工程研究所 | A kind of location privacy protection method and device based on anonymity |
Non-Patent Citations (2)
Title |
---|
DELANEY, A.M.; BROPHY, E. and WARD, T.E.: "Synthesis of Realistic ECG using Generative Adversarial Networks", arXiv * |
XIE Anming; JIN Tao; ZHOU Tao: "Personal Information De-identification Framework and Standardization", Big Data (《大数据》) * |
Also Published As
Publication number | Publication date |
---|---|
CN113938265B (en) | 2024-04-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111539027B (en) | Information verification method and system based on privacy protection of two parties | |
US11520899B2 (en) | System and method for machine learning architecture with adversarial attack defense | |
CN108985066A (en) | Intelligent contract security vulnerability detection method, device, terminal and storage medium | |
CN112214775B (en) | Injection attack method, device, medium and electronic equipment for preventing third party from acquiring key diagram data information and diagram data | |
WO2022142001A1 (en) | Target object evaluation method based on multi-score card fusion, and related device therefor | |
CN114818000B (en) | Privacy protection set confusion intersection method, system and related equipment | |
KR20100094487A (en) | Pairing computation device, pairing computation method, and recording medium where pairing computation program is recorded | |
CN111209600A (en) | Block chain-based data processing method and related product | |
CN113938265A (en) | Information de-identification method and device and electronic equipment | |
KR20220107420A (en) | Pairing-based zero-knowledge proof protocol system for proves and verifies calculation results | |
CN112307477A (en) | Code detection method, device, storage medium and terminal | |
CN114584285B (en) | Secure multiparty processing method and related device | |
CN109324843B (en) | Fingerprint processing system and method and fingerprint equipment | |
CN112149834A (en) | Model training method, device, equipment and medium | |
CN111382831A (en) | Method and device for accelerating forward reasoning of convolutional neural network model | |
CN113961962A (en) | Model training method and system based on privacy protection and computer equipment | |
CN115270198A (en) | Signature method, device and storage medium of PDF document | |
CN113886881A (en) | Graph data privacy protection method and system based on genetic algorithm and electronic equipment | |
CN108512663A (en) | The dot product method, apparatus and computer readable storage medium of elliptic curve cryptography | |
CN113379038A (en) | Data processing method and electronic equipment | |
CN113849471A (en) | Data compression method, device, equipment and storage medium | |
CN111523681A (en) | Global feature importance representation method and device, electronic equipment and storage medium | |
CN112511361A (en) | Model training method and device and computing equipment | |
CN115334698B (en) | Construction method, device, terminal and medium of target 5G safety network of target range | |
CN115048676B (en) | Safe intelligent verification method in privacy computing application and related device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||