CN110852450B - Method and device for identifying adversarial samples to protect model security - Google Patents

Method and device for identifying adversarial samples to protect model security

Info

Publication number
CN110852450B
Authority
CN
China
Prior art keywords
sample
samples
privacy
control
model
Prior art date
Legal status
Active
Application number
CN202010040234.4A
Other languages
Chinese (zh)
Other versions
CN110852450A
Inventor
Shi Leilei (石磊磊)
Xiong Tao (熊涛)
Current Assignee
Ant Zhian Safety Technology (Shanghai) Co., Ltd.
Original Assignee
Alipay (Hangzhou) Information Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Alipay (Hangzhou) Information Technology Co., Ltd.
Priority to CN202010040234.4A
Publication of CN110852450A
Application granted
Publication of CN110852450B
Priority to PCT/CN2020/138824 (WO2021143478A1)
Status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 - Machine learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 - Protecting data
    • G06F21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218 - Protecting access to data via a platform, to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245 - Protecting personal data, e.g. for financial or medical purposes

Abstract

An embodiment of the present specification provides a method for identifying adversarial samples to protect privacy security. The method includes: first, sampling a plurality of non-adversarial samples relating to private data to obtain a first control sample set; second, adding a target sample to be detected to the first control sample set to obtain a first experiment sample set; then, training an initial machine learning model with the first control sample set and the first experiment sample set respectively, to obtain a trained first control model and a trained first experiment model; next, evaluating the performance of the first control model and the first experiment model on a test sample set, to obtain a first control value and a first experiment value for a preset evaluation index; finally, computing the difference between the first experiment value and the first control value as a first gain value of the target sample for model performance. Whether the target sample is an adversarial sample can then be determined from the first gain value, or from a plurality of gain values obtained by repeating the above procedure.

Description

Method and device for identifying adversarial samples to protect model security
Technical Field
One or more embodiments of the present disclosure relate to the field of data computing security, and more particularly, to a method and apparatus for identifying adversarial samples to protect model security.
Background
An adversarial sample is an input formed by deliberately adding subtle perturbations to data so that a machine learning model outputs an erroneous result with high confidence. For example, in an image recognition scenario, a picture originally recognized as a panda by an image processing model is misclassified as a gibbon after a slight modification that is barely noticeable to the human eye.
Adversarial samples may be used by an attacker to attack a machine learning model. For example, adversarial samples carrying wrong labels that are mixed into training data degrade training performance, so that the trained model produces less accurate predictions.
Therefore, a reasonable and reliable scheme is urgently needed that can accurately identify adversarial samples so as to protect model security and thereby improve the training and prediction performance of the model.
Disclosure of Invention
One or more embodiments of the present specification describe a method and apparatus for identifying adversarial samples to protect model security, which can be used to improve the training and prediction performance of the model.
According to a first aspect, there is provided a method of identifying adversarial samples to protect model security, the method comprising: sampling a plurality of non-adversarial samples several times to obtain several control sample sets; adding a target sample to be detected to each of the several control sample sets to obtain several experiment sample sets; for any first control sample set among the several control sample sets, training an initial machine learning model with the first control sample set to obtain a trained first control model; evaluating the performance of the first control model on a test sample set to obtain a first control value for a preset evaluation index, the test sample set being determined based on the plurality of non-adversarial samples; training the initial machine learning model with the first experiment sample set obtained by adding the target sample to the first control sample set, to obtain a trained first experiment model; evaluating the performance of the first experiment model on the test sample set to obtain a first experiment value for the preset evaluation index; determining the difference between the first experiment value and the first control value as a first gain value; and determining whether the target sample is an adversarial sample using the several gain values determined based on the several control sample sets and the several experiment sample sets.
In one embodiment, the plurality of non-adversarial samples and the target sample are image samples and the initial machine learning model is an image processing model; or, they are text samples and the initial machine learning model is a text processing model; or, they are speech samples and the initial machine learning model is a speech processing model.
In one embodiment, sampling the plurality of non-adversarial samples several times to obtain the several control sample sets includes: sampling the plurality of non-adversarial samples several times by enumeration; or, sampling them several times by stratified sampling; or, sampling them several times by bootstrap sampling.
In one embodiment, the preset evaluation index includes one or more of the following: error rate, accuracy, recall, and precision.
In one embodiment, determining whether the target sample is an adversarial sample using the gain values determined based on the control sample sets and the experiment sample sets includes: determining the mean of the gain values, and judging the target sample to be adversarial if the gain mean is smaller than a set threshold; or, determining the proportion of the gain values that exceed the set threshold, and judging the target sample to be adversarial if that proportion is smaller than a first preset proportion.
In a specific embodiment, determining whether the target sample is an adversarial sample further includes: averaging the control values of the several control sample sets for the preset evaluation index to obtain a control mean; and determining the product of the control mean and a second preset proportion as the set threshold.
According to a second aspect, there is provided an apparatus for identifying adversarial samples to protect model security, the apparatus comprising: a sampling unit configured to sample a plurality of non-adversarial samples several times to obtain several control sample sets; an adding unit configured to add a target sample to be detected to each of the several control sample sets to obtain several experiment sample sets; a first training unit configured to, for any first control sample set among the several control sample sets, train an initial machine learning model with the first control sample set to obtain a trained first control model; a first evaluation unit configured to evaluate the performance of the first control model on a test sample set, determined based on the plurality of non-adversarial samples, to obtain a first control value for a preset evaluation index; a second training unit configured to train the initial machine learning model with the first experiment sample set obtained by adding the target sample to the first control sample set, to obtain a trained first experiment model; a second evaluation unit configured to evaluate the performance of the first experiment model on the test sample set to obtain a first experiment value for the preset evaluation index; a gain determination unit configured to determine the difference between the first experiment value and the first control value as a first gain value; and a judging unit configured to determine whether the target sample is an adversarial sample using the several gain values determined based on the several control sample sets and the several experiment sample sets.
According to a third aspect, there is provided a method of identifying adversarial privacy samples to protect privacy security. The method comprises: sampling a plurality of non-adversarial privacy samples several times to obtain several control privacy sample sets; adding a target privacy sample to be detected to each of the several control privacy sample sets to obtain several experiment privacy sample sets; for any first control privacy sample set among the several control privacy sample sets, training an initial machine learning model with the first control privacy sample set to obtain a trained first control model; evaluating the performance of the first control model on a test privacy sample set to obtain a first control value for a preset evaluation index, the test privacy sample set being determined based on the non-adversarial privacy samples; training the initial machine learning model with the first experiment privacy sample set obtained by adding the target privacy sample to the first control privacy sample set, to obtain a trained first experiment model; evaluating the performance of the first experiment model on the test privacy sample set to obtain a first experiment value for the preset evaluation index; determining the difference between the first experiment value and the first control value as a first gain value; and determining whether the target privacy sample is an adversarial privacy sample using the several gain values determined based on the several control privacy sample sets and the several experiment privacy sample sets.
According to a fourth aspect, there is provided an apparatus for identifying adversarial privacy samples to protect privacy security. The apparatus includes: a sampling unit configured to sample the non-adversarial privacy samples several times to obtain several control privacy sample sets; an adding unit configured to add the target privacy sample to be detected to each of the control privacy sample sets to obtain several experiment privacy sample sets; a first training unit configured to, for any first control privacy sample set among the control privacy sample sets, train an initial machine learning model with the first control privacy sample set to obtain a trained first control model; a first evaluation unit configured to evaluate the performance of the first control model on a test privacy sample set, determined based on the plurality of non-adversarial privacy samples, to obtain a first control value for a preset evaluation index; a second training unit configured to train the initial machine learning model with the first experiment privacy sample set obtained by adding the target privacy sample to the first control privacy sample set, to obtain a trained first experiment model; a second evaluation unit configured to evaluate the performance of the first experiment model on the test privacy sample set to obtain a first experiment value for the preset evaluation index; a gain determination unit configured to determine the difference between the first experiment value and the first control value as a first gain value; and a judging unit configured to determine whether the target privacy sample is an adversarial privacy sample using the several gain values determined based on the several control privacy sample sets and the several experiment privacy sample sets.
According to a fifth aspect, there is provided a computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of the first or third aspect.
According to a sixth aspect, there is provided a computing device comprising a memory having stored therein executable code, and a processor which, when executing the executable code, implements the method of the first or third aspect.
In summary, with the identification method and apparatus disclosed in the embodiments of the present specification, the gain values of a target sample for model performance are first determined, and these gain values are then used to determine whether the target sample is an adversarial sample. Adversarial samples can thus be accurately identified, protecting the security of any model that would otherwise be trained on them and ensuring good training and prediction performance.
Drawings
To illustrate the technical solutions of the embodiments more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 illustrates a block diagram of an implementation of a method of identifying adversarial samples, according to one embodiment;
FIG. 2 illustrates a flow diagram of a method of identifying adversarial samples to protect model security, according to one embodiment;
FIG. 3 illustrates a timing diagram of the steps of identifying adversarial samples, according to one embodiment;
FIG. 4 illustrates a structural diagram of an apparatus for identifying adversarial samples to protect model security, according to one embodiment;
FIG. 5 illustrates a flow diagram of a method of identifying adversarial privacy samples to protect privacy security, according to one embodiment;
FIG. 6 illustrates a structural diagram of an apparatus for identifying adversarial privacy samples to protect privacy security, according to one embodiment.
Detailed Description
The scheme provided by the specification is described below with reference to the accompanying drawings.
Training samples currently used for model training may come from various sources, such as manual labeling or crawling from websites and web platforms, and adversarial samples are easily mixed in among them. As mentioned above, identifying adversarial samples is important for ensuring the training and prediction performance of the model, and thereby for protecting the model.
Furthermore, the inventors observe that, by definition, the labels of adversarial samples are wrong, so the performance gain they bring to a model is negative or very small. It is therefore possible to detect whether a sample is adversarial by calculating the gain the sample contributes to model performance.
Based on this, the inventors propose a method of identifying adversarial samples to protect model security. FIG. 1 shows a block diagram of an implementation of the method according to one embodiment. As shown in FIG. 1, first, a plurality of non-adversarial samples is sampled several times, obtaining N control sample sets, where N is a positive integer, as labeled in FIG. 1. Then, the target sample to be detected is added to each of the several control sample sets to obtain several experiment sample sets. Next, several gain values of the target sample on model performance are determined based on the control sample sets and the experiment sample sets, specifically as follows: on the one hand, for any first control sample set among the several control sample sets, the initial machine learning model is trained with the first control sample set, and the trained first control model is evaluated to obtain a first control value indicating model performance; on the other hand, for the first experiment sample set comprising the target sample and the samples in the first control sample set, the initial machine learning model is trained with the first experiment sample set, and the trained first experiment model is evaluated to obtain a first experiment value indicating model performance; further, the difference between the first experiment value and the first control value is determined as a first gain value, and the several gain values can be determined in this way. Finally, whether the target sample is an adversarial sample is judged according to the several gain values and a preset judgment rule. In this way, accurate identification of adversarial samples can be achieved.
The following describes specific implementation steps of the above identification method with reference to specific embodiments.
FIG. 2 shows a flowchart of a method for identifying adversarial samples to protect model security according to one embodiment. The method may be executed by any apparatus, device, platform, or device cluster having computing and processing capabilities. As shown in FIG. 2, the method comprises the following steps:
Step S210, sampling a plurality of non-adversarial samples several times to obtain several control sample sets.
Step S220, adding a target sample to be detected to each of the several control sample sets to obtain several experiment sample sets.
Step S230, for any first control sample set among the several control sample sets, training an initial machine learning model with the first control sample set to obtain a trained first control model.
Step S240, evaluating the performance of the first control model on a test sample set to obtain a first control value for a preset evaluation index, the test sample set being determined based on the plurality of non-adversarial samples.
Step S250, for the first experiment sample set obtained by adding the target sample to the first control sample set, training the initial machine learning model with the first experiment sample set to obtain a trained first experiment model.
Step S260, evaluating the performance of the first experiment model on the test sample set to obtain a first experiment value for the preset evaluation index.
Step S270, determining the difference between the first experiment value and the first control value as a first gain value.
Step S280, determining whether the target sample is an adversarial sample using the several gain values determined based on the several control sample sets and the several experiment sample sets.
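To make the flow concrete, the following is a minimal sketch of steps S210 through S270 in Python, assuming scikit-learn-style estimators, accuracy as the preset evaluation index, and bootstrap-style sampling; the function name gain_values and its parameters are illustrative assumptions, not part of the patent.

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

def gain_values(X, y, x_target, y_target, X_test, y_test,
                n_rounds=10, subset_size=50, seed=0):
    """Compute one gain value per sampled control set (steps S210 to S270)."""
    rng = np.random.default_rng(seed)
    base = LogisticRegression(max_iter=1000)  # stands in for the "initial machine learning model"
    gains = []
    for _ in range(n_rounds):
        # S210: sample a control set from the non-adversarial samples
        idx = rng.choice(len(X), size=subset_size, replace=True)
        X_ctrl, y_ctrl = X[idx], y[idx]
        # S230/S240: train on the control set, evaluate on the test set
        ctrl_value = clone(base).fit(X_ctrl, y_ctrl).score(X_test, y_test)
        # S220/S250/S260: add the target sample, retrain, re-evaluate
        X_exp = np.vstack([X_ctrl, x_target])
        y_exp = np.append(y_ctrl, y_target)
        exp_value = clone(base).fit(X_exp, y_exp).score(X_test, y_test)
        # S270: gain = experiment value - control value (accuracy is a positive index)
        gains.append(exp_value - ctrl_value)
    return gains
```

Step S280 then flags a target sample whose gains are consistently near zero or negative; the judgment rules are sketched later in this section.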
First, it should be noted that the term "first" in "first control sample set", "first experiment sample set", "first control model", "first experiment model", and the like is used only to distinguish items of the same kind and has no other limiting effect.
Furthermore, regarding the plurality of non-adversarial samples and the target sample involved in the steps shown in FIG. 2: on the one hand, in terms of the data content of the samples, in one embodiment these samples may be private data samples, i.e., samples involving user private data. In this case, identifying adversarial samples is especially important for protecting the model. For example, for a classification model used to identify user identity (e.g., a face recognition model), if adversarial samples included in its training data are not identified and removed, then once the model is put into use, identity information (e.g., a face) provided by one user may be misrecognized as belonging to another user, leading to identity misuse, wrong deductions from a user account, and the like, endangering user privacy and security. On the other hand, in terms of the data form of the samples, in one embodiment the samples may be image samples, and accordingly the initial machine learning model may be an image processing model. In a specific embodiment, the samples may include face images, iris images, fingerprint images, etc., and the initial machine learning model may be an identity recognition model. In another embodiment, the samples may be text samples and the initial machine learning model a text processing model. In yet another embodiment, the samples may be speech samples and the initial machine learning model a speech processing model.
The steps shown in FIG. 2 are described in detail as follows:
First, in step S210, the plurality of non-adversarial samples is sampled several times to obtain several control sample sets. In one embodiment, the plurality of non-adversarial samples may be normal samples that have been manually cross-checked to confirm that their labels are error-free.
In one embodiment, the several control sample sets may be obtained by enumeration, which lists all possible subsets. Assuming the plurality of non-adversarial samples comprises 3 samples, denoted A, B, and C, the control sample sets obtained by enumeration include Ø, {A}, {B}, {C}, {A, B}, {A, C}, {B, C}, and {A, B, C}.
In another embodiment, the several control sample sets may be obtained by stratified sampling, in which each sampling keeps the same or a similar proportion among the numbers of samples corresponding to each label. In one example, in a binary classification scenario where the plurality of non-adversarial samples includes positive and negative samples, the ratio of positive to negative samples may be kept at 3:1 across samplings; for example, one control sample set contains 30 positive and 10 negative samples, and another contains 45 positive and 15 negative samples.
In yet another embodiment, the several control sample sets may be obtained by bootstrap sampling. Specifically, for one sampling, assume the number of non-adversarial samples is M and the number of samples to be collected is m. Each time, one sample is randomly selected from the M non-adversarial samples, added to the collection, and then placed back among the M samples so that it may still be selected next time; after this process is repeated m times, a control sample set containing m samples is obtained.
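As a non-authoritative illustration, the three sampling strategies above can be sketched as follows; the function names and the use of NumPy are assumptions, not part of the patent.

```python
from itertools import combinations
import numpy as np

def enumerate_subsets(samples):
    """Enumeration: list every non-empty subset of the non-adversarial samples."""
    for r in range(1, len(samples) + 1):
        yield from (list(c) for c in combinations(samples, r))

def stratified_sample(X, y, n_per_label, rng):
    """Stratified sampling: keep fixed per-label counts (e.g. positive:negative = 3:1)."""
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == label), size=n, replace=False)
        for label, n in n_per_label.items()
    ])
    return X[idx], y[idx]

def bootstrap_sample(X, y, m, rng):
    """Bootstrap sampling: draw m samples with replacement from the M originals."""
    idx = rng.integers(0, len(X), size=m)
    return X[idx], y[idx]
```

For example, stratified_sample(X, y, {1: 30, 0: 10}, np.random.default_rng(0)) reproduces the 30-positive, 10-negative control set from the example above.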
Thus, by sampling several times, several control sample sets can be obtained. Next, in step S220, the target sample to be detected is added to each of the several control sample sets to obtain several experiment sample sets. That is, adding the target sample to each control sample set yields the experiment sample set corresponding to that control sample set, forming the several experiment sample sets.
Then, in step S230, for any first control sample set among the several control sample sets, an initial machine learning model is trained with the first control sample set to obtain a trained first control model. And, in step S240, the performance of the first control model is evaluated on a test sample set determined based on the plurality of non-adversarial samples, to obtain a first control value for a preset evaluation index.
It should be noted that after step S210 is executed, step S220 and step S230 may be executed simultaneously or one after the other; in short, their execution order is not limited.
In one embodiment, step S230 may include: inputting the first samples in the first control sample set into the initial machine learning model to obtain the corresponding first prediction results; and then adjusting the model parameters of the initial machine learning model according to the first prediction results, the sample labels of the first samples, and a preset loss function, to obtain a parameter-adjusted first control model. In this way, the initial machine learning model can be trained with each of the several control sample sets to obtain the corresponding several control models.
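A minimal sketch of this parameter-adjustment step, assuming a logistic-regression model trained by gradient descent on the log loss; the model family and hyperparameters are illustrative assumptions, as the patent does not fix them.

```python
import numpy as np

def train_control_model(X_ctrl, y_ctrl, lr=0.1, epochs=200):
    """Fit weights w and bias b on one control sample set (labels in {0, 1})."""
    w = np.zeros(X_ctrl.shape[1])
    b = 0.0
    for _ in range(epochs):
        # forward pass: first prediction results for the control samples
        p = 1.0 / (1.0 + np.exp(-(X_ctrl @ w + b)))
        # gradient of the log loss with respect to the logits
        grad = p - y_ctrl
        # adjust the model parameters according to predictions, labels, and loss
        w -= lr * (X_ctrl.T @ grad) / len(y_ctrl)
        b -= lr * grad.mean()
    return w, b
```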
For the test sample set, it may be determined based on the plurality of non-adversarial samples. It should be understood that the test sample set is usually disjoint from the training sample sets (such as the control sample sets above): samples in the test sample set are normally absent from the training sample sets and are not used during training. Moreover, the split between test and training sample sets generally preserves the consistency of the data distribution.
In one embodiment, there may be a single test sample set. In this case, when there are several control models, the same test sample set is used to evaluate the performance of the different control models. In a specific embodiment, step S210 may include: dividing the plurality of non-adversarial samples into two disjoint sets, one serving as the test sample set and the other used for sampling the several control sample sets.
In another embodiment, there may be multiple test sample sets, so that different control models are evaluated on different test sample sets. In a specific embodiment, step S210 may include: based on stratified sampling, dividing the plurality of (e.g., M) non-adversarial samples into a predetermined number (e.g., k, where k is a positive integer less than M) of disjoint sets, using the union of (k-1) of these sets as a control sample set and the remaining set as the corresponding test sample set; rotating the held-out set yields k control sample sets and k corresponding test sample sets. In this manner, the test sample set(s) for evaluating model performance can be determined.
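Both test-set arrangements can be sketched with scikit-learn's splitting utilities as follows; the helper names are illustrative.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, train_test_split

def single_test_set(X, y, test_fraction=0.2, seed=0):
    """One shared test set, disjoint from everything used to sample control sets."""
    return train_test_split(X, y, test_size=test_fraction,
                            stratify=y, random_state=seed)

def paired_sets(X, y, k=5, seed=0):
    """k (control set, test set) pairs: each fold serves once as the test set."""
    skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=seed)
    for ctrl_idx, test_idx in skf.split(X, y):
        yield (X[ctrl_idx], y[ctrl_idx]), (X[test_idx], y[test_idx])
```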
For the initial machine learning model, in one embodiment it may be an initialization model, that is, a model that has not yet been trained, whose parameters are those assigned at initialization. In another embodiment, the initial machine learning model may be a model already trained on some non-adversarial samples other than the plurality of non-adversarial samples described above. Regarding its type, the initial machine learning model may be a classification model, a regression model, a neural network model, or the like, without limitation.
The preset evaluation index may include: error rate, accuracy, recall, precision, and the like. It should be understood that the error rate is the ratio of the number of wrongly predicted test samples to the total number of test samples, and the accuracy is the ratio of the number of correctly predicted test samples to the total. For the binary classification problem, the precision is the proportion of true positive examples (i.e., samples whose labels mark them as positive) among the test samples predicted to be positive, and the recall is the proportion of the positive examples among the test samples that are predicted correctly. In one example, the preset evaluation index may include precision, and the first control value may be a precision of 0.88. In another example, the preset evaluation index may include the error rate, and the first control value may be an error rate of 0.16.
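Written directly from these definitions, a small sketch of the four indices for binary labels in {0, 1} (assuming 1 marks the positive examples) looks as follows.

```python
def evaluation_indices(y_true, y_pred):
    """Error rate, accuracy, precision, and recall from the definitions above."""
    n = len(y_true)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "error_rate": (n - correct) / n,                  # wrong predictions / total
        "accuracy": correct / n,                          # correct predictions / total
        "precision": tp / (tp + fp) if tp + fp else 0.0,  # true positives / predicted positives
        "recall": tp / (tp + fn) if tp + fn else 0.0,     # true positives / actual positives
    }
```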
Through steps S230 and S240, the first control value corresponding to any first control sample set can be obtained, and thus the several control values corresponding to the several control sample sets. On the other hand, in step S250, for the first experiment sample set obtained by adding the target sample to the first control sample set, the initial machine learning model is trained with the first experiment sample set to obtain a trained first experiment model. And in step S260, the performance of the first experiment model is evaluated on the test sample set to obtain a first experiment value for the preset evaluation index.
It should be noted that the initial machine learning model trained with the first control sample set is the same as the one trained with the first experiment sample set, and the test sample set used to evaluate the first experiment model is the same as the one used to evaluate the first control model. For the details of steps S250 and S260, refer to the descriptions of steps S230 and S240, which are not repeated here.
In one example, the preset evaluation index includes precision, and the first experiment value may be a precision of 0.80 or 0.90. In another example, the preset evaluation index includes the error rate, and the first experiment value may be an error rate of 0.10 or 0.20.
Through steps S250 and S260, the first experiment value corresponding to any first experiment sample set can be obtained, and thus the several experiment values corresponding to the several experiment sample sets. Regarding the execution order of steps S210 to S260, it is only required that step S210 be executed first, that steps S230 and S240 be executed in sequence, and that steps S220, S250, and S260 be executed in sequence; the order is otherwise unrestricted. Specifically, in one embodiment, steps S210, S230, S220, S250, S240, and S260 may be performed in that order. In another embodiment, steps S210, S220, S230, S240, S250, and S260 may be performed in that order.
Then, in step S270, the difference between the first experiment value and the first control value is determined as a first gain value.
It is to be understood that the gain value characterizes the optimization effect of the target sample on model performance. In one embodiment, when the preset evaluation index characterizes model performance positively (e.g., the index is accuracy, recall, or precision), the first gain value is the difference obtained by subtracting the first control value from the first experiment value. In one example, with precision as the preset evaluation index, if the first control value and the first experiment value are 0.88 and 0.80 respectively, the first gain value is -0.08; if they are 0.88 and 0.90 respectively, the first gain value is 0.02.
In another embodiment, when the preset evaluation index characterizes model performance negatively (e.g., the index is the error rate), the first gain value is the difference obtained by subtracting the first experiment value from the first control value. In one example, with the error rate as the preset evaluation index, if the first control value and the first experiment value are 0.16 and 0.10 respectively, the first gain value is 0.06; if they are 0.16 and 0.20 respectively, the first gain value is -0.04.
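The sign convention of the two cases can be captured in one small helper; the assertions reproduce the numeric examples above.

```python
def gain(control_value, experiment_value, higher_is_better=True):
    """Gain of the target sample, oriented so that larger always means better."""
    if higher_is_better:                     # accuracy, precision, recall
        return experiment_value - control_value
    return control_value - experiment_value  # error rate

assert abs(gain(0.88, 0.80) - (-0.08)) < 1e-9                       # precision example
assert abs(gain(0.88, 0.90) - 0.02) < 1e-9
assert abs(gain(0.16, 0.10, higher_is_better=False) - 0.06) < 1e-9  # error-rate example
assert abs(gain(0.16, 0.20, higher_is_better=False) - (-0.04)) < 1e-9
```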
Thus, the several gain values can be obtained based on the several control values and the several experiment values. Based on this, in step S280, whether the target sample is an adversarial sample is determined using the gain values determined from the control sample sets and the experiment sample sets.
In one embodiment, this step may include: determining the mean of the several gain values; then, if the gain mean is smaller than a set threshold, judging that the target sample is an adversarial sample, and if the gain mean is not smaller than the set threshold, judging that it is not.
In a specific embodiment, the set threshold may be a manually set value, such as 0 or 0.05. In another specific embodiment, the set threshold may be obtained as follows: first, average the control values of the several control sample sets for the preset evaluation index to obtain a control mean; then determine the product of the control mean and a second preset proportion as the set threshold. In a more specific embodiment, the second preset proportion may be set by service personnel according to expert experience or actual requirements, such as 0.05 or 0.02. In one example, if the control mean is 0.80 and the second preset proportion is 0.05, the set threshold is 0.04.
According to a specific example, with the set threshold at 0.04, a gain mean of 0.01 means the corresponding target sample is judged to be an adversarial sample, while a gain mean of 0.06 means it is not.
In another embodiment, this step may include: determining the proportion of the several gain values that exceed the set threshold, and judging that the target sample is an adversarial sample when that proportion is smaller than a first preset proportion. The set threshold can be determined as described in the embodiments above; in a specific embodiment, the first preset proportion may be set by service personnel according to expert experience or actual requirements, such as 0.80 or 0.90.
According to a specific example, with the first preset proportion at 0.80, a gain proportion of 0.20 means the corresponding target sample is judged to be an adversarial sample, while a gain proportion of 0.87 means it is not.
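The two judgment rules of step S280, including the optional derivation of the set threshold from the control mean and the second preset proportion, can be sketched as follows; all parameter defaults are illustrative.

```python
def is_adversarial(gains, control_values=None, threshold=None,
                   second_ratio=0.05, first_ratio=0.80, rule="mean"):
    """Judge a target sample from its gain values (step S280)."""
    if threshold is None:
        # set threshold = control mean x second preset proportion (e.g. 0.80 x 0.05 = 0.04)
        threshold = sum(control_values) / len(control_values) * second_ratio
    if rule == "mean":
        # rule 1: adversarial if the gain mean falls below the set threshold
        return sum(gains) / len(gains) < threshold
    # rule 2: adversarial if too few gain values exceed the set threshold
    proportion_above = sum(g > threshold for g in gains) / len(gains)
    return proportion_above < first_ratio
```

With the running example, a threshold of 0.04 flags a gain mean of 0.01 but not 0.06, and a first preset proportion of 0.80 flags a gain proportion of 0.20 but not 0.87.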
In this way, whether the target sample is an adversarial sample can be detected.
In summary, with the method for identifying adversarial samples disclosed in the embodiments of the present specification, the gain values of a target sample for model performance are first determined, and whether the target sample is an adversarial sample is then judged from those gain values. Adversarial samples can thus be accurately identified, protecting the security of any model that would otherwise be trained on them and ensuring good training and prediction performance. For example, when training a model for identifying user identity, this identification method may first be applied to the pre-collected training samples, and the identity recognition model is then trained on the training sample set with adversarial samples removed, ensuring model security. The trained model thus has good prediction performance and can effectively prevent misrecognition, guarding against high-risk consequences of misrecognition such as identity misuse, privacy leakage, and property loss.
The identification method is described below with reference to a specific embodiment. FIG. 3 illustrates a timing diagram of the steps of identifying adversarial samples according to one embodiment. As shown in FIG. 3, identifying an adversarial sample comprises the following steps:
Step S31, sampling the normal samples (i.e., non-adversarial samples) to obtain a control sample set.
Step S32, training the initial model with the control sample set, and evaluating the trained model on the test sample set to obtain a control evaluation result.
Step S33, adding the sample to be detected to the control sample set to obtain an experiment sample set.
Step S34, training the initial model with the experiment sample set, and evaluating the trained model on the test sample set to obtain an experiment evaluation result.
Step S35, determining the gain in model performance from the experiment evaluation result and the control evaluation result.
Step S36, repeating steps S31 to S35 to determine, for each sampling, the gain in model performance brought by the sample to be detected.
Step S37, calculating the mean of the model-performance gains brought by the sample to be detected.
Step S38, identifying the sample as an adversarial sample if the mean is below the threshold.
In this way, identification of adversarial samples is realized.
Corresponding to the identification method, an embodiment of the present specification further discloses an identification apparatus. FIG. 4 illustrates a structural diagram of an apparatus for identifying adversarial samples to protect model security according to one embodiment. As shown in FIG. 4, the apparatus 400 may include: a sampling unit 410 configured to sample the plurality of non-adversarial samples several times to obtain several control sample sets; an adding unit 420 configured to add the target sample to be detected to each of the several control sample sets to obtain several experiment sample sets; a first training unit 430 configured to, for any first control sample set among the several control sample sets, train an initial machine learning model with the first control sample set to obtain a trained first control model; a first evaluation unit 440 configured to evaluate the performance of the first control model on a test sample set determined based on the plurality of non-adversarial samples, obtaining a first control value for a preset evaluation index; a second training unit 450 configured to train the initial machine learning model with the first experiment sample set obtained by adding the target sample to the first control sample set, obtaining a trained first experiment model; a second evaluation unit 460 configured to evaluate the performance of the first experiment model on the test sample set, obtaining a first experiment value for the preset evaluation index; a gain determination unit 470 configured to determine the difference between the first experiment value and the first control value as a first gain value; and a judging unit 480 configured to determine whether the target sample is an adversarial sample using the several gain values determined based on the several control sample sets and the several experiment sample sets.
In one embodiment, the plurality of non-adversarial samples and the target sample are image samples and the initial machine learning model is an image processing model; or, they are text samples and the initial machine learning model is a text processing model; or, they are speech samples and the initial machine learning model is a speech processing model.
In one embodiment, the sampling unit 410 is configured to: sample the plurality of non-adversarial samples several times by enumeration to obtain the several control sample sets; or, sample them several times by stratified sampling; or, sample them several times by bootstrap sampling.
In one embodiment, the preset evaluation index includes one or more of the following: error rate, accuracy, recall, and precision.
In one embodiment, the judging unit 480 is configured to: determine the mean of the several gain values and judge that the target sample is an adversarial sample if the gain mean is smaller than a set threshold; or, determine the proportion of the several gain values that exceed the set threshold and judge that the target sample is an adversarial sample if that proportion is smaller than a first preset proportion.
In one embodiment, the judging unit 480 is further configured to: average the control values of the several control sample sets for the preset evaluation index to obtain a control mean; and determine the product of the control mean and a second preset proportion as the set threshold.
In summary, with the apparatus for identifying adversarial samples disclosed in the embodiments of the present specification, the gain values of a target sample for model performance are first determined, and whether the target sample is an adversarial sample is then judged from those gain values, so that adversarial samples can be accurately identified and the security of the model protected, ensuring good training and prediction performance.
According to another aspect of the embodiments, the present specification also discloses a method of identifying adversarial privacy samples to protect privacy security. FIG. 5 shows a flowchart of such a method according to one embodiment; the method may be executed by any apparatus, device, platform, or device cluster having computing and processing capabilities. As shown in FIG. 5, the method comprises the following steps:
Step S510, sampling a plurality of non-adversarial privacy samples several times to obtain several control privacy sample sets.
Step S520, adding a target privacy sample to be detected to each of the several control privacy sample sets to obtain several experiment privacy sample sets.
Step S530, for any first control privacy sample set among the several control privacy sample sets, training an initial machine learning model with the first control privacy sample set to obtain a trained first control model.
Step S540, evaluating the performance of the first control model on a test privacy sample set to obtain a first control value for a preset evaluation index, the test privacy sample set being determined based on the non-adversarial privacy samples.
Step S550, for the first experiment privacy sample set obtained by adding the target privacy sample to the first control privacy sample set, training the initial machine learning model with the first experiment privacy sample set to obtain a trained first experiment model.
Step S560, evaluating the performance of the first experiment model on the test privacy sample set to obtain a first experiment value for the preset evaluation index.
Step S570, determining the difference between the first experiment value and the first control value as a first gain value.
Step S580, determining whether the target privacy sample is an adversarial privacy sample using the several gain values determined based on the several control privacy sample sets and the several experiment privacy sample sets.
It should be noted that these steps differ from those shown in FIG. 2 mainly in that the non-adversarial samples and the target sample involved here relate to private data. In one embodiment, the private data may include personal user information, biometric information, and the like. For the rest, the description of the steps shown in FIG. 5 can refer to the description of the steps shown in FIG. 2 and is not repeated here.
Corresponding to the identification method shown in FIG. 5, an embodiment of the present specification also discloses an identification apparatus. Specifically, FIG. 6 illustrates a structural diagram of an apparatus for identifying adversarial privacy samples to protect privacy security according to one embodiment. As shown in FIG. 6, the apparatus 600 may include:
a sampling unit 610 configured to sample the plurality of non-adversarial privacy samples several times to obtain several control privacy sample sets; an adding unit 620 configured to add the target privacy sample to be detected to each of the control privacy sample sets to obtain several experiment privacy sample sets; a first training unit 630 configured to, for any first control privacy sample set among the several control privacy sample sets, train an initial machine learning model with the first control privacy sample set to obtain a trained first control model; a first evaluation unit 640 configured to evaluate the performance of the first control model on a test privacy sample set, determined based on the plurality of non-adversarial privacy samples, to obtain a first control value for a preset evaluation index; a second training unit 650 configured to train the initial machine learning model with the first experiment privacy sample set obtained by adding the target privacy sample to the first control privacy sample set, to obtain a trained first experiment model; a second evaluation unit 660 configured to evaluate the performance of the first experiment model on the test privacy sample set to obtain a first experiment value for the preset evaluation index; a gain determination unit 670 configured to determine the difference between the first experiment value and the first control value as a first gain value; and a judging unit 680 configured to determine whether the target privacy sample is an adversarial privacy sample using the several gain values determined based on the several control privacy sample sets and the several experiment privacy sample sets.
For the description of the apparatus shown in FIG. 6, reference may also be made to the foregoing description of the apparatus shown in FIG. 4, which is not repeated here.
According to an embodiment of a further aspect, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method described in connection with FIG. 1, FIG. 2, FIG. 3, or FIG. 5.
According to an embodiment of yet another aspect, there is also provided a computing device comprising a memory and a processor, the memory storing executable code, the processor implementing the method described in connection with FIG. 1, FIG. 2, FIG. 3, or FIG. 5 when executing the executable code.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in this invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on, or transmitted as one or more instructions or code over, a computer-readable medium.
The specific embodiments above further describe the objects, technical solutions, and advantages of the present invention in detail. It should be understood that the foregoing are only specific embodiments of the present invention and are not intended to limit its scope; any modification, equivalent substitution, or improvement made on the basis of the technical solutions of the present invention shall fall within its protection scope.

Claims (16)

1. A method of identifying adversarial samples to protect model security, comprising:
sampling a plurality of non-adversarial samples several times to obtain several control sample sets;
for any first control sample set among the several control sample sets, training an initial machine learning model with the first control sample set to obtain a trained first control model;
adding a target sample to be detected to each of the several control sample sets to obtain several experiment sample sets;
training the initial machine learning model with the first experiment sample set obtained by adding the target sample to the first control sample set, to obtain a trained first experiment model;
evaluating the performance of the first control model on a test sample set to obtain a first control value for a preset evaluation index, the test sample set being determined based on the plurality of non-adversarial samples;
evaluating the performance of the first experiment model on the test sample set to obtain a first experiment value for the preset evaluation index;
determining the difference between the first experiment value and the first control value as a first gain value; and
determining whether the target sample is an adversarial sample using the several gain values determined based on the several control sample sets and the several experiment sample sets.
2. The method of claim 1, wherein:
the plurality of non-adversarial samples and the target sample are image samples, and the initial machine learning model is an image processing model; or,
the plurality of non-adversarial samples and the target sample are text samples, and the initial machine learning model is a text processing model; or,
the plurality of non-adversarial samples and the target sample are speech samples, and the initial machine learning model is a speech processing model.
3. The method of claim 1, wherein sampling the plurality of non-adversarial samples several times to obtain several control sample sets comprises:
sampling the plurality of non-adversarial samples several times by enumeration to obtain the several control sample sets; or,
sampling the plurality of non-adversarial samples several times by stratified sampling to obtain the several control sample sets; or,
sampling the plurality of non-adversarial samples several times by bootstrap sampling to obtain the several control sample sets.
4. The method of claim 1, wherein the preset evaluation index comprises one or more of the following: error rate, accuracy, recall.
5. The method of claim 1, wherein determining whether the target sample is an adversarial sample using the gain values determined based on the control sample sets and the experiment sample sets comprises:
determining the mean of the gain values, and judging that the target sample is an adversarial sample if the gain mean is smaller than a set threshold; or,
determining the proportion of the gain values that exceed the set threshold, and judging that the target sample is an adversarial sample when that proportion is smaller than a first preset proportion.
6. The method of claim 5, wherein determining whether the target sample is an adversarial sample further comprises:
averaging the control values of the several control sample sets for the preset evaluation index to obtain a control mean; and
determining the product of the control mean and a second preset proportion as the set threshold.
7. An apparatus for identifying adversarial samples to protect model security, comprising:
a sampling unit configured to sample a plurality of non-adversarial samples several times to obtain several control sample sets;
an adding unit configured to add a target sample to be detected to each of the several control sample sets to obtain several experiment sample sets;
a first training unit configured to, for any first control sample set among the several control sample sets, train an initial machine learning model with the first control sample set to obtain a trained first control model;
a first evaluation unit configured to evaluate the performance of the first control model on a test sample set, determined based on the plurality of non-adversarial samples, to obtain a first control value for a preset evaluation index;
a second training unit configured to train the initial machine learning model with the first experiment sample set obtained by adding the target sample to the first control sample set, to obtain a trained first experiment model;
a second evaluation unit configured to evaluate the performance of the first experiment model on the test sample set to obtain a first experiment value for the preset evaluation index;
a gain determination unit configured to determine the difference between the first experiment value and the first control value as a first gain value; and
a judging unit configured to determine whether the target sample is an adversarial sample using the several gain values determined based on the several control sample sets and the several experiment sample sets.
8. The apparatus of claim 7, wherein,
the plurality of non-adversarial samples and the target sample are image samples, and the initial machine learning model is an image processing model; or,
the plurality of non-adversarial samples and the target sample are text samples, and the initial machine learning model is a text processing model; or,
the plurality of non-adversarial samples and the target sample are speech samples, and the initial machine learning model is a speech processing model.
9. The apparatus of claim 7, wherein the sampling unit is configured to:
sample the plurality of non-adversarial samples a plurality of times by enumeration to obtain the plurality of control sample sets; or,
sample the plurality of non-adversarial samples a plurality of times by stratified sampling to obtain the plurality of control sample sets; or,
sample the plurality of non-adversarial samples a plurality of times by bootstrap sampling to obtain the plurality of control sample sets.
10. The apparatus of claim 7, wherein the preset evaluation index comprises one or more of: error rate, accuracy, recall.
11. The apparatus of claim 7, wherein the determination unit is configured to:
determine a gain mean of the plurality of gain values, and determine that the target sample is an adversarial sample if the gain mean is smaller than a set threshold; or,
determine the proportion of the plurality of gain values that exceed a set threshold, and determine that the target sample is an adversarial sample when that proportion is smaller than a first preset proportion.
12. The apparatus of claim 11, wherein the determination unit is further configured to:
average a plurality of control values of the plurality of control sample sets for the preset evaluation index to obtain a control mean; and
determine the product of the control mean and a second preset proportion as the set threshold.
13. A method of identifying adversarial privacy samples to protect privacy security, comprising:
sampling a plurality of non-adversarial privacy samples a plurality of times to obtain a plurality of control privacy sample sets;
adding a target privacy sample to be detected to each of the plurality of control privacy sample sets to obtain a plurality of experiment privacy sample sets;
for any first control privacy sample set in the plurality of control privacy sample sets, training an initial machine learning model by using the first control privacy sample set to obtain a trained first control model;
performing performance evaluation on the first control model by using a test privacy sample set to obtain a first control value for a preset evaluation index, wherein the test privacy sample set is determined based on the plurality of non-adversarial privacy samples;
training the initial machine learning model by using a first experiment privacy sample set, obtained by adding the target privacy sample to the first control privacy sample set, to obtain a trained first experiment model;
performing performance evaluation on the first experiment model by using the test privacy sample set to obtain a first experiment value for the preset evaluation index;
determining the difference between the first experiment value and the first control value as a first gain value; and
determining whether the target privacy sample is an adversarial privacy sample by using a plurality of gain values determined based on the plurality of control privacy sample sets and the plurality of experiment privacy sample sets.
14. An apparatus for identifying adversarial privacy samples to protect privacy security, comprising:
a sampling unit configured to sample a plurality of non-adversarial privacy samples a plurality of times to obtain a plurality of control privacy sample sets;
an adding unit configured to add a target privacy sample to be detected to each of the plurality of control privacy sample sets to obtain a plurality of experiment privacy sample sets;
a first training unit configured to, for any first control privacy sample set in the plurality of control privacy sample sets, train an initial machine learning model by using the first control privacy sample set to obtain a trained first control model;
a first evaluation unit configured to perform performance evaluation on the first control model by using a test privacy sample set, which is determined based on the plurality of non-adversarial privacy samples, to obtain a first control value for a preset evaluation index;
a second training unit configured to train the initial machine learning model with a first experiment privacy sample set, obtained by adding the target privacy sample to the first control privacy sample set, to obtain a trained first experiment model;
a second evaluation unit configured to perform performance evaluation on the first experiment model by using the test privacy sample set to obtain a first experiment value for the preset evaluation index;
a gain determination unit configured to determine the difference between the first experiment value and the first control value as a first gain value; and
a determination unit configured to determine whether the target privacy sample is an adversarial privacy sample by using a plurality of gain values determined based on the plurality of control privacy sample sets and the plurality of experiment privacy sample sets.
15. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed in a computer, causes the computer to perform the method of any one of claims 1-6 and 13.
16. A computing device, comprising a memory and a processor, wherein the memory stores executable code that, when executed by the processor, implements the method of any one of claims 1-6 and 13.
CN202010040234.4A 2020-01-15 2020-01-15 Method and device for identifying countermeasure sample to protect model security Active CN110852450B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010040234.4A CN110852450B (en) 2020-01-15 2020-01-15 Method and device for identifying countermeasure sample to protect model security
PCT/CN2020/138824 WO2021143478A1 (en) 2020-01-15 2020-12-24 Method and apparatus for identifying adversarial sample to protect model security

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010040234.4A CN110852450B (en) 2020-01-15 2020-01-15 Method and device for identifying countermeasure sample to protect model security

Publications (2)

Publication Number Publication Date
CN110852450A CN110852450A (en) 2020-02-28
CN110852450B 2020-04-14

Family

ID=69610734

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010040234.4A Active CN110852450B (en) 2020-01-15 2020-01-15 Method and device for identifying countermeasure sample to protect model security

Country Status (2)

Country Link
CN (1) CN110852450B (en)
WO (1) WO2021143478A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110852450B (en) * 2020-01-15 2020-04-14 支付宝(杭州)信息技术有限公司 Method and device for identifying countermeasure sample to protect model security
CN113449097A (en) * 2020-03-24 2021-09-28 百度在线网络技术(北京)有限公司 Method and device for generating countermeasure sample, electronic equipment and storage medium
CN111340008B (en) * 2020-05-15 2021-02-19 支付宝(杭州)信息技术有限公司 Method and system for generation of counterpatch, training of detection model and defense of counterpatch
CN111860698B (en) * 2020-08-05 2023-08-11 中国工商银行股份有限公司 Method and device for determining stability of learning model
CN113012153A (en) * 2021-04-30 2021-06-22 武汉纺织大学 Aluminum profile flaw detection method
CN114140670A (en) * 2021-11-25 2022-03-04 支付宝(杭州)信息技术有限公司 Method and device for model ownership verification based on exogenous features

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108304858A (en) * 2017-12-28 2018-07-20 中国银联股份有限公司 Fight specimen discerning model generating method, verification method and its system
CN108710892A (en) * 2018-04-04 2018-10-26 浙江工业大学 Synergetic immunity defence method towards a variety of confrontation picture attacks
CN108932527A (en) * 2018-06-06 2018-12-04 上海交通大学 Using cross-training model inspection to the method for resisting sample
WO2019228358A1 (en) * 2018-05-31 2019-12-05 华为技术有限公司 Deep neural network training method and apparatus
CN110674856A (en) * 2019-09-12 2020-01-10 阿里巴巴集团控股有限公司 Method and device for machine learning

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3512415A4 (en) * 2016-09-13 2020-08-19 Ohio State Innovation Foundation Systems and methods for modeling neural architecture
CN109543760B (en) * 2018-11-28 2021-10-19 上海交通大学 Confrontation sample detection method based on image filter algorithm
CN110363243A (en) * 2019-07-12 2019-10-22 腾讯科技(深圳)有限公司 The appraisal procedure and device of disaggregated model
CN110852450B (en) * 2020-01-15 2020-04-14 支付宝(杭州)信息技术有限公司 Method and device for identifying countermeasure sample to protect model security

Also Published As

Publication number Publication date
WO2021143478A1 (en) 2021-07-22
CN110852450A (en) 2020-02-28

Similar Documents

Publication Publication Date Title
CN110852450B (en) Method and device for identifying countermeasure sample to protect model security
US11481684B2 (en) System and method for machine learning model determination and malware identification
CN107609493B (en) Method and device for optimizing human face image quality evaluation model
US6397200B1 (en) Data reduction system for improving classifier performance
US8242881B2 (en) Method of adjusting reference information for biometric authentication and apparatus
JP7130984B2 (en) Image judgment system, model update method and model update program
CN112927061B (en) User operation detection method and program product
CN110689048A (en) Training method and device of neural network model for sample classification
CN111626367A (en) Countermeasure sample detection method, apparatus, device and computer readable storage medium
CN110825969A (en) Data processing method, device, terminal and storage medium
CN111630521A (en) Image processing method and image processing system
CN111898129B (en) Malicious code sample screener and method based on Two-Head anomaly detection model
CN114817933A (en) Method and device for evaluating robustness of business prediction model and computing equipment
CN101299762B (en) Identification authentication method and apparatus
CN102243707A (en) Character recognition result verification apparatus and character recognition result verification method
CN112214402B (en) Code verification algorithm selection method, device and storage medium
CN112434651A (en) Information analysis method and device based on image recognition and computer equipment
CN116502705A (en) Knowledge distillation method and computer equipment for dual-purpose data set inside and outside domain
CN111488950B (en) Classification model information output method and device
CN114020905A (en) Text classification external distribution sample detection method, device, medium and equipment
CN114978616B (en) Construction method and device of risk assessment system, and risk assessment method and device
KR102325293B1 (en) Adaptive method, device, computer-readable storage medium and computer program for detecting malware based on machine learning
CN112801214B (en) Mouse quantity prediction method based on interaction of mouse recognition terminal and cloud computing platform
Ali et al. 6 Evaluation of AI model performance
JP2022088886A (en) Method for estimating correctness of label of labeled inspection data, information processing apparatus, and computer program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20200630

Address after: West Area, 2nd Floor, No. 707 Zhangyang Road, Pudong New Area Pilot Free Trade Zone, Shanghai, 200120

Patentee after: Shanghai Fengbao Information Technology Co., Ltd.

Address before: 801-11, Section B, 8th floor, No. 556, Xixi Road, Xihu District, Hangzhou City, Zhejiang Province

Patentee before: Alipay (Hangzhou) Information Technology Co.,Ltd.

CP03 Change of name, title or address
CP03 Change of name, title or address

Address after: Room 1607, 16th Floor, No. 447, Nanquan North Road, China (Shanghai) Pilot Free Trade Zone, 200120

Patentee after: Ant Zhian safety technology (Shanghai) Co.,Ltd.

Address before: West Area, 2nd Floor, No. 707 Zhangyang Road, Pudong New Area Pilot Free Trade Zone, Shanghai, 200120

Patentee before: Shanghai Fengbao Information Technology Co., Ltd.