US20230274003A1 - Identifying and correcting vulnerabilities in machine learning models - Google Patents
- Publication number
- US20230274003A1 (application US 17/652,822)
- Authority
- US
- United States
- Prior art keywords
- machine learning
- learning model
- assessment
- data
- attacks
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/57—Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
- G06F21/577—Assessing vulnerabilities and evaluating computer system security
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/094—Adversarial learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
- G06V10/7747—Organisation of the process, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/776—Validation; Performance evaluation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2221/00—Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F2221/03—Indexing scheme relating to G06F21/50, monitoring users, programs or devices to maintain the integrity of platforms
- G06F2221/033—Test or assess software
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
- G06N5/045—Explanation of inference; Explainable artificial intelligence [XAI]; Interpretable artificial intelligence
Abstract
A device may receive a machine learning model and training data utilized to train the machine learning model, and may perform a data veracity assessment of the training data to identify and remove poisoned data from the training data. The device may perform an adversarial assessment of the machine learning model to generate adversarial attacks and to provide defensive capabilities for the adversarial attacks, and may perform a membership inference assessment of the machine learning model to generate membership inference attacks and to provide secure training data as a defense for the membership inference attacks. The device may perform a model extraction assessment of the machine learning model to identify model extraction vulnerabilities and to provide a secure application programming interface as a defense to the model extraction vulnerabilities, and may perform actions based on results of one or more of the assessments.
Description
- Security is an essential part of any machine learning model, especially given the inherent vulnerabilities associated with machine learning models. Machine learning model vulnerabilities pose challenges because machine learning models are used in a wide range of computer vision, audio, and text processing tasks, and they process sensitive information that needs to be protected.
- Some implementations described herein relate to a method. The method may include receiving, from a user device, a machine learning model and training data utilized to train the machine learning model, and performing a data veracity assessment of the training data to identify and remove poisoned data from the training data. The method may include providing, to the user device, the training data without the identified poisoned data, and performing an adversarial example assessment of the machine learning model to generate adversarial example attacks and to provide defensive capabilities for the adversarial example attacks. The method may include performing a membership inference assessment of the machine learning model to generate membership inference attacks and to provide secure training data as a defense for the membership inference attacks, and performing a model extraction assessment of the machine learning model to identify model extraction vulnerabilities and to provide a secure application programming interface as a defense to the model extraction vulnerabilities. The method may include performing one or more actions based on results of the data veracity assessment, the adversarial example assessment, the membership inference assessment, and the model extraction assessment.
- Some implementations described herein relate to a device. The device may include one or more memories and one or more processors coupled to the one or more memories. The one or more processors may be configured to receive, from a user device, a machine learning model and training data utilized to train the machine learning model, and perform a data veracity assessment of the training data to identify and remove poisoned data from the training data. The one or more processors may be configured to provide, to the user device, the training data without the identified poisoned data, and perform an adversarial example assessment of the machine learning model to generate adversarial example attacks and to provide defensive capabilities for the adversarial example attacks. The one or more processors may be configured to perform a membership inference assessment of the machine learning model to generate membership inference attacks and to provide secure training data as a defense for the membership inference attacks, where the membership inference attacks include one or more of a baseline attack, a fixed noise attack, an additive noise attack, or an output probability vector attack. The one or more processors may be configured to perform a model extraction assessment of the machine learning model to identify model extraction vulnerabilities and to provide a secure application programming interface as a defense to the model extraction vulnerabilities, and perform one or more actions based on results of the data veracity assessment, the adversarial example assessment, the membership inference assessment, and the model extraction assessment.
- Some implementations described herein relate to a non-transitory computer-readable medium that stores a set of instructions for a device. The set of instructions, when executed by one or more processors of the device, may cause the device to receive, from a user device, a machine learning model and training data utilized to train the machine learning model, and perform a data veracity assessment of the training data to identify and remove poisoned data from the training data. The set of instructions, when executed by one or more processors of the device, may cause the device to provide, to the user device, the training data without the identified poisoned data, and perform an adversarial example assessment of the machine learning model to generate adversarial example attacks and to provide defensive capabilities for the adversarial example attacks, where the adversarial example attacks include one or more of a fast gradient sign method attack, a basic iterative method attack, a projected gradient descent attack, a Jacobian saliency map approach attack, a Carlini-Wagner attack, a boundary attack, or an adaptive attack. The set of instructions, when executed by one or more processors of the device, may cause the device to perform a membership inference assessment of the machine learning model to generate membership inference attacks and to provide secure training data as a defense for the membership inference attacks, and perform a model extraction assessment of the machine learning model to identify model extraction vulnerabilities and to provide a secure application programming interface as a defense to the model extraction vulnerabilities. The set of instructions, when executed by one or more processors of the device, may cause the device to perform one or more actions based on results of the data veracity assessment, the adversarial example assessment, the membership inference assessment, and the model extraction assessment.
-
FIGS. 1A-1G are diagrams of an example implementation described herein.
-
FIG. 2 is a diagram illustrating an example of training and using a machine learning model.
-
FIG. 3 is a diagram of an example environment in which systems and/or methods described herein may be implemented.
-
FIG. 4 is a diagram of example components of one or more devices of FIG. 3.
-
FIG. 5 is a flowchart of an example process for identifying and correcting vulnerabilities in machine learning models.
- The following detailed description of example implementations refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements.
- Examples of attacks on machine learning models may include model manipulation, data poisoning, model extraction, membership inference, and/or the like. Model manipulation (e.g., adversarial machine learning) includes modifying the input data (e.g., in ways undetectable to the human eye) to cause misclassifications by the machine learning model. Data poisoning includes modifying or injecting data into the training data to bias the machine learning model toward a future misclassification. Model extraction includes using predictions of a machine learning model to extract and replicate information about the machine learning model. Membership inference includes analyzing the behavior of a machine learning model to predict whether a data point is included in the training data. However, current techniques for detecting and defending against machine learning model attacks are not integrated into a platform for the machine learning model, and are unable to detect and correct all of the machine learning model attacks.
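Membership inference, for example, often exploits the fact that an overfit model is more confident on data it memorized during training. The following is a minimal sketch of that idea only; the toy "model", the threshold, and the data are illustrative assumptions, not part of the disclosed implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class setting: one half of the data was used for training
# ("members"), the other half was never seen ("non-members").
members = rng.normal(0, 1, size=(50, 2))
non_members = rng.normal(0, 1, size=(50, 2))

def model_confidence(x, train_set):
    """A deliberately overfit stand-in model: confidence decays with
    the distance to the nearest memorized training point."""
    d = np.linalg.norm(train_set - x, axis=1).min()
    return np.exp(-d)

def infer_membership(x, train_set, threshold=0.9):
    """Baseline attack: flag x as a training member when the model's
    confidence on x exceeds a fixed threshold."""
    return model_confidence(x, train_set) > threshold

hits = sum(infer_membership(x, members) for x in members)
false_alarms = sum(infer_membership(x, members) for x in non_members)
print(hits, false_alarms)  # members score far higher than non-members
```

The gap between the two counts is what an attacker exploits, and it is also what defenses such as securing the training data aim to shrink.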
- Therefore, current techniques for detecting and defending against machine learning model attacks consume computing resources (e.g., processing resources, memory resources, communication resources, and/or the like), networking resources, and/or the like associated with failing to detect and correct one or more machine learning model attacks, implementing a compromised machine learning model, utilizing incorrect results generated by the compromised machine learning model, handling theft of confidential information associated with the compromised machine learning model, and/or the like.
- Some implementations described herein relate to an assessment system that identifies and corrects vulnerabilities in machine learning models. For example, the assessment system may receive, from a user device, a machine learning model and training data utilized to train the machine learning model, and may perform a data veracity assessment of the training data to identify and remove poisoned data from the training data. The assessment system may provide, to the user device, the training data without the identified poisoned data, and may perform an adversarial example assessment of the machine learning model to generate adversarial example attacks and to provide defensive capabilities for the adversarial example attacks. The assessment system may perform a membership inference assessment of the machine learning model to generate membership inference attacks and to provide secure training data as a defense for the membership inference attacks, and may perform a model extraction assessment of the machine learning model to identify model extraction vulnerabilities and to provide a secure application programming interface as a defense to the model extraction vulnerabilities. The assessment system may perform one or more actions based on results of the data veracity assessment, the adversarial example assessment, the membership inference assessment, and the model extraction assessment.
- In this way, the assessment system identifies and corrects vulnerabilities in machine learning models. The assessment system may perform risk assessments of a machine learning model to identify vulnerabilities (e.g., for a variety of machine learning model attacks) associated with the machine learning model. The assessment system may perform the risk assessments based on the machine learning model, training data for the machine learning model, and a pipeline configuration associated with the machine learning model. The assessment system may generate defenses or corrections for the vulnerabilities, and may cause the corrected machine learning model to be implemented. This, in turn, conserves computing resources, networking resources, and/or the like that would otherwise have been consumed in failing to detect and correct one or more machine learning model attacks, implementing a compromised machine learning model, utilizing incorrect results generated by the compromised machine learning model, handling theft of confidential information associated with the compromised machine learning model, and/or the like.
-
FIGS. 1A-1G are diagrams of an example 100 associated with identifying and correcting vulnerabilities in machine learning models. As shown in FIGS. 1A-1G, example 100 includes a user device associated with an assessment system. The user device may include a wireless communication device, a mobile phone, a user equipment, a laptop computer, a tablet computer, a desktop computer, and/or the like. The assessment system may include a system that identifies and corrects vulnerabilities in machine learning models. Further details of the user device and the assessment system are provided elsewhere herein. - As shown in
FIG. 1A, the user device may store, have access to, and/or utilize a machine learning (ML) model. The user device may utilize training data to train the machine learning model and to generate a trained machine learning model. In some implementations, rather than training the machine learning model, the user device may obtain the trained machine learning model from another system or device that trained the machine learning model. In this case, the user device may provide the other system or device with the training data for use in training the machine learning model, and may provide the other system or device with updated training data to retrain the machine learning model in order to update the machine learning model. As shown in FIG. 1A, the user device may preprocess the training data so that the training data is in a format that may be understood by the machine learning model. The user device may train the machine learning model with the preprocessed training data to generate a trained machine learning model. - As further shown in
FIG. 1A, the user device may evaluate the trained machine learning model. For example, the user device may process validation data, with the trained machine learning model, to validate that the trained machine learning model is operating correctly. If the trained machine learning model is operating correctly, the user device may process test data, with the trained machine learning model, to further ensure that the trained machine learning model is operating correctly. If the trained machine learning model is operating incorrectly (e.g., not good), the user device may retrain the trained machine learning model and may revalidate and/or retest the modified machine learning model based on the validation data and/or the test data. For example, the user device may retrain the trained machine learning model by tuning hyperparameters associated with the trained machine learning model. If the trained machine learning model is operating correctly (e.g., good), the user device may provide, to the assessment system, the trained machine learning model, the training data, and/or pipeline data identifying a configuration pipeline for the machine learning model. - As further shown in
FIG. 1A, the assessment system may include a data veracity assessment component that receives the trained machine learning model, the training data, and the pipeline data from the user device. The data veracity assessment component may perform a data veracity assessment of the training data to determine whether the training data is poisoned or clean (e.g., not poisoned). If the training data is poisoned, the assessment system may utilize a data sanitization component to remove the poisoned data from the training data. If the training data is not poisoned, the data veracity assessment component may provide the trained machine learning model, the training data, and the pipeline data to an adversarial example assessment component. Further details of the data veracity assessment component and the data sanitization component are provided elsewhere herein. - As further shown in
FIG. 1A, the adversarial example assessment component may receive input data and the trained machine learning model, and may determine whether the input data includes adversarial examples. If adversarial examples are included in the input data, the adversarial example assessment component may utilize an adversarial example detection component and/or an input hardening component as defenses to the adversarial examples if the machine learning model cannot be retrained. If adversarial examples are included in the input data and the machine learning model can be retrained, the adversarial example assessment component may utilize an adversarial training component to retrain the machine learning model (e.g., to provide a more robust model). Further details of the adversarial example assessment component, the adversarial example detection component, the input hardening component, and the adversarial training component are provided elsewhere herein. - As further shown in
FIG. 1A, if adversarial examples are not included in the input data, a membership inference assessment component may receive the trained machine learning model and a distribution of the training data, and may determine whether the trained machine learning model is at risk of a membership inference attack. If the trained machine learning model is at risk of a membership inference attack, a secure data component of the assessment system may be utilized to secure the training data. Further details of the membership inference assessment component and the secure data component are provided elsewhere herein. - As further shown in
FIG. 1A, if the trained machine learning model is not at risk of a membership inference attack, a model extraction assessment component of the assessment system may receive the trained machine learning model and the pipeline data, and may determine whether the trained machine learning model is at risk of a model extraction attack. For example, the model extraction assessment component may determine a quantity of queries required to extract the machine learning model. If the trained machine learning model is at risk of a model extraction attack, a secure application programming interface (API) component of the assessment system may be utilized to secure the machine learning model. If the trained machine learning model is not at risk of a model extraction attack, the assessment system may cause the machine learning model to be deployed. Further details of the model extraction assessment component and the secure API component are provided elsewhere herein. - The assessment system may integrate several techniques and methods for analyzing security of a machine learning model to produce an all-in-one comprehensive analysis system. For example, the assessment system may test the machine learning model for current data poisoning attacks, adversarial example attacks, model memorization attacks, membership inference attacks, model extraction attacks, and/or the like. The assessment system may strategically perform attack assessments at a proper location of the machine learning model pipeline. The assessment system may provide remediations and/or suggestions to further protect machine learning models against attacks.
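The query-based nature of model extraction can be sketched with a toy linear model. The "secret" parameters and the prediction API below are illustrative assumptions, not the disclosed implementation; the point is that, for a linear model with d features, roughly d + 1 well-chosen queries suffice to recover the parameters exactly:

```python
import numpy as np

rng = np.random.default_rng(1)

# A "secret" linear regression model hidden behind a prediction API.
secret_w = np.array([2.0, -1.0, 0.5])
secret_b = 3.0

def predict_api(X):
    """The only access an attacker has: inputs in, predictions out."""
    return X @ secret_w + secret_b

# Extraction: with d features, d + 1 queries fully determine a linear
# model; ordinary least squares recovers weights and bias.
d = secret_w.size
queries = rng.normal(size=(d + 1, d))
answers = predict_api(queries)

A = np.hstack([queries, np.ones((d + 1, 1))])
stolen, *_ = np.linalg.lstsq(A, answers, rcond=None)
print(stolen)  # ~ [2.0, -1.0, 0.5, 3.0]
```

This is why a secure API (e.g., rate limiting or coarsening the returned predictions) is the natural defense: it raises the number of useful queries an attacker can make.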
- As shown in
FIG. 1B, and by reference number 105, the assessment system may receive, from the user device, a machine learning model, training data utilized to train the machine learning model, and a pipeline configuration (e.g., the pipeline data) for the machine learning model. The machine learning model may include any machine learning model, such as, for example, a machine learning model associated with a surveillance system (e.g., using facial recognition), a machine learning model associated with an X-ray reading system (e.g., for disease diagnosis), and/or the like. The training data may include the data utilized to train the machine learning model to perform a particular function. For example, the training data for a surveillance system machine learning model may include images of faces of different individuals, the training data for an X-ray reading system machine learning model may include X-rays associated with different diseases, and/or the like. The pipeline configuration may include data identifying how the machine learning model is trained, validated, and/or tested, parameters associated with the machine learning model, and/or the like. - The user device may provide the machine learning model, the training data, and the pipeline configuration to the assessment system with a request to evaluate the machine learning model, the training data, and the pipeline configuration. The assessment system may receive the request, the machine learning model, the training data, and the pipeline configuration from the user device. Alternatively, the assessment system may request the machine learning model, the training data, and the pipeline configuration from the user device, and may receive the machine learning model, the training data, and the pipeline configuration based on the request.
In some implementations, the assessment system may store the machine learning model, the training data, and the pipeline configuration in a data structure (e.g., a database, a table, a list, and/or the like) associated with the assessment system.
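One hypothetical way such a pipeline configuration might be represented and sanity-checked on receipt is sketched below; the field names and values are illustrative only and are not taken from the disclosure:

```python
# Hypothetical shape of the pipeline configuration ("pipeline data")
# the assessment system receives; all field names are illustrative.
pipeline_config = {
    "model_type": "neural_network",
    "training": {"epochs": 20, "batch_size": 64, "optimizer": "adam"},
    "validation_split": 0.2,
    "test_split": 0.1,
    "hyperparameters": {"learning_rate": 1e-3, "dropout": 0.5},
}

def validate_pipeline_config(config):
    """Minimal sanity check before any assessment runs."""
    required = {"model_type", "training", "validation_split", "test_split"}
    missing = required - config.keys()
    if missing:
        raise ValueError(f"pipeline configuration is missing: {missing}")
    return True

print(validate_pipeline_config(pipeline_config))  # True
```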
- As further shown in
FIG. 1B, and by reference number 110, the assessment system may perform a data veracity assessment of the training data to identify and remove poisoned data from the training data. For example, the assessment system may utilize the data veracity assessment component to perform the data veracity assessment of the training data to identify and remove poisoned data from the training data. Data poisoning is an attack that tries to manipulate the training data in order to control a prediction behavior of a trained machine learning model such that the machine learning model will include malicious examples in desired classes. A data poisoning attack can be launched by injecting new data (e.g., aimed at corrupting the machine learning model) in the training data, changing labels of existing real data in the training data, modifying the training data, submitting manipulated data through approved channels, and/or the like. A machine learning model trained using compromised training data can be induced to make incorrect predictions that cause detection failures, create financial losses for the organization, trigger unpredictable behavior, and/or the like. Data poisoning attacks may include a denial-of-service attack (e.g., where a predictive accuracy of the machine learning model will decrease for any input or most inputs), a backdoor/Trojan horse attack (e.g., where a predictive accuracy of the machine learning model will decrease only for inputs selected by an adversary), a data injection attack (e.g., where an attacker manipulates some aspects of the training data in order to cause the machine learning model to output a faulty classifier), a data modification attack (e.g., where an attacker can control labels assigned to a fraction of the training data), and/or the like. - The data veracity assessment component may perform the data veracity assessment of the training data before the machine learning model is implemented in production.
The data veracity assessment component may utilize different techniques, depending on the data type of the training data, to identify poisoned data in the training data. For example, when the data type is tabular or discrete data, the data veracity assessment component may process the tabular or discrete training data, with a counterfactual explanation model, to identify poisoned data in the tabular or discrete training data. A counterfactual explanation is a technique utilized to interpret a machine learning model and to detect trigger features in tabular or discrete datasets for different targets. A counterfactual explanation may provide "what if" feedback of the form "if an input datapoint were x′ instead of x, then a machine learning model's output would be y′ instead of y." When the data type is image, audio, or sensor data, the data veracity assessment component may process the image, audio, or sensor training data, with an activation clustering model, to identify poisoned data in the image, audio, or sensor training data. Activation clustering includes analyzing neural network activations of training data to determine whether the training data has been poisoned, and, if so, which datapoints are poisoned. Activation weights for poisoned training data may break up into distinguishable clusters, while activation weights for clean training data may not break up into clusters.
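The activation clustering idea can be sketched as follows, using stand-in activation vectors and a minimal 2-means implementation; all data, values, and the smaller-cluster heuristic are illustrative assumptions, not the disclosed implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in last-layer activations for one predicted class: clean
# samples cluster together; backdoored samples activate differently.
clean = rng.normal(loc=0.0, scale=0.5, size=(90, 8))
poisoned = rng.normal(loc=3.0, scale=0.5, size=(10, 8))
activations = np.vstack([clean, poisoned])

def two_means(X, iters=20):
    """Minimal 2-means clustering (the core of activation clustering)."""
    centers = X[[0, -1]].copy()  # crude initialization: first and last rows
    for _ in range(iters):
        labels = np.argmin(
            np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
        centers = np.array([X[labels == k].mean(axis=0) for k in (0, 1)])
    return labels

labels = two_means(activations)
# Heuristic: the markedly smaller cluster is the suspect (poisoned) one.
suspect = np.argmin(np.bincount(labels))
flagged = np.where(labels == suspect)[0]
print(flagged)  # indices 90..99, i.e. the poisoned samples
```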
- If the data veracity assessment component identifies poisoned data in the training data, the data sanitization component of the assessment system may remove the identified poisoned data from the training data. In some implementations, the assessment system may provide, to the user device and/or to users associated with the machine learning model, notifications (e.g., email messages, instant messages, and/or the like) indicating that the training data includes the poisoned data, may validate the training data without the poisoned data, may backup the training data without the poisoned data, may audit the training data, and/or the like.
- As further shown in
FIG. 1B, and by reference number 115, the assessment system may provide, to the user device, the training data without the identified poisoned data. For example, the data sanitization component of the assessment system may provide sanitized training data (e.g., the training data without the identified poisoned data) to the user device. The user device may utilize the sanitized training data to retrain the machine learning model so that the machine learning model is not susceptible to the data poisoning attack. - As shown in
FIG. 1C, and by reference number 120, the assessment system may perform an adversarial example assessment of the machine learning model to generate adversarial example attacks and to provide defensive capabilities for the adversarial example attacks. For example, the adversarial example assessment component of the assessment system may perform the adversarial example assessment of the machine learning model with the generated adversarial example attacks and may provide the defensive capabilities for the adversarial example attacks. An adversarial example attack may include providing, as inputs to a machine learning model, adversarial examples that an attacker has intentionally designed to cause the machine learning model to make a mistake. With an adversarial example attack, a machine learning model may make bad decisions (e.g., incorrect predictions) due to small changes (e.g., added perturbations) in inputs provided to the machine learning model. - The adversarial example attacks may include a white-box attack (e.g., where attackers have full knowledge about the machine learning model, including parameters and hyperparameters of the machine learning model), a black-box attack (e.g., where attackers know almost nothing about the machine learning model except for some features of the machine learning model), an untargeted attack (e.g., that changes a predicted class of a benign input to some other class without a specific target class), a targeted attack (e.g., that changes a predicted class of a benign input to a targeted class different from an original class), and/or the like. In some implementations, the adversarial example attacks may include a fast gradient sign method attack, a basic iterative method attack, a projected gradient descent attack, a Jacobian saliency map approach attack, a Carlini-Wagner attack, a boundary attack, an adaptive attack (e.g., designed to evade color depth reduction), and/or the like.
- In some implementations, when performing the adversarial example assessment of the machine learning model, the assessment system may generate a feature squeezed data point from a training data point, and may process the feature squeezed data point, with the machine learning model, to generate a first prediction. Feature squeezing is a general framework (e.g., that includes bit depth reduction, spatial smoothing, and/or the like) that reduces a search space available for an adversary and detects adversarial examples. The assessment system may process the training data point, with the machine learning model, to generate a second prediction, and may determine whether a difference between the first prediction and the second prediction exceeds a threshold. The assessment system may determine one of the adversarial example attacks when the difference between the first prediction and the second prediction exceeds the threshold.
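The feature squeezing comparison described above can be sketched as follows, using bit depth reduction as the squeezer. The `predict_fn` callable and `toy_predict` stand-in are hypothetical placeholders for the machine learning model, and the threshold value is illustrative only.

```python
import numpy as np

def squeeze_bit_depth(x, bits=3):
    """Reduce color depth: quantize pixel values in [0, 1] to 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def flags_adversarial(x, predict_fn, threshold=0.5, bits=3):
    """Compare the model's prediction on the raw input with its prediction
    on the squeezed input; a large difference suggests an adversarial example."""
    difference = np.abs(predict_fn(x) - predict_fn(squeeze_bit_depth(x, bits)))
    return float(difference.sum()) > threshold

# Toy stand-in for a model: predicts two class probabilities from the mean pixel.
def toy_predict(x):
    m = float(np.mean(x))
    return np.array([m, 1.0 - m])
```

A benign input changes little under squeezing, so its two predictions stay close; an input whose prediction depends on fine-grained perturbations shifts past the threshold.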
- In some implementations, when performing the adversarial example assessment of the machine learning model, the assessment system may utilize an adversarial example (xadv) and may perform random walks toward a boundary between an adversarial image and a non-adversarial image such that the distance between the adversarial example (xadv) and an original example (x) is minimized. The distance between a benign image (x) and an adversarial image (xadv) can be measured using L0, L1, L2, and L∞ norms, where L0 corresponds to a quantity of pixels of x that are changed, L1 corresponds to a sum of an absolute difference between x and xadv, L2 corresponds to a Euclidean distance between x and xadv, and L∞ corresponds to a maximum change to any pixel of the image input.
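Under the definitions above, the four distance measures can be computed directly; a small NumPy sketch with an illustrative 2×2 image pair:

```python
import numpy as np

def perturbation_norms(x, x_adv):
    """Distance metrics between a benign image x and an adversarial x_adv."""
    delta = (x_adv - x).ravel()
    return {
        "L0": int(np.count_nonzero(delta)),   # quantity of pixels changed
        "L1": float(np.abs(delta).sum()),     # sum of absolute differences
        "L2": float(np.linalg.norm(delta)),   # Euclidean distance
        "Linf": float(np.abs(delta).max()),   # maximum change to any pixel
    }

x = np.zeros((2, 2))
x_adv = np.array([[0.0, 0.3], [0.0, 0.4]])
norms = perturbation_norms(x, x_adv)
# e.g., L0 = 2 pixels changed, L1 ≈ 0.7, L2 = 0.5, Linf = 0.4
```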
- When performing the adversarial example assessment of the machine learning model to provide the defensive capabilities for the adversarial example attacks, the assessment system may perform adversarial robustness checks by generating multiple adversarial example attacks to show a robustness of the machine learning model to different types of adversarial example attacks. The assessment system may add the adversarial example attacks into the training data in order to increase the robustness of the machine learning model. For example, the assessment system may utilize Madry's adversarial training method to generate the adversarial examples, adjust labels of the adversarial examples, and add the adversarial examples to the training data. The machine learning model may be retrained based on the added adversarial examples to make the machine learning model more robust to the adversarial examples.
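The augmentation step described above can be sketched as follows. For brevity, this uses the single-step fast gradient sign method against a logistic-regression stand-in for the machine learning model (Madry's method instead uses multi-step projected gradient descent against the model being trained), and it keeps the original labels, a common adversarial training variant. All names and parameters are illustrative.

```python
import numpy as np

def fgsm_example(x, y, w, b, eps=0.1):
    """Fast gradient sign method against a logistic-regression classifier:
    perturb x by eps in the direction that increases the loss."""
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))   # predicted probability of class 1
    grad_x = (p - y) * w           # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)

def augment_with_adversarial(X, y, w, b, eps=0.1):
    """Adversarial training step: generate one adversarial example per
    datapoint, keep its label, and append it to the training set."""
    X_adv = np.array([fgsm_example(x_i, y_i, w, b, eps)
                      for x_i, y_i in zip(X, y)])
    return np.vstack([X, X_adv]), np.concatenate([y, y])

# Illustrative weights and a single training point.
w, b = np.array([1.0, -1.0]), 0.0
X = np.array([[0.5, 0.5]])
y = np.array([1.0])
X_aug, y_aug = augment_with_adversarial(X, y, w, b, eps=0.1)
```

Retraining on `X_aug`/`y_aug` exposes the model to worst-case perturbations of its own training data, which is what makes it more robust.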
- When performing the adversarial example assessment of the machine learning model to provide the defensive capabilities for the adversarial example attacks, the assessment system may perform input hardening to increase difficulty for attackers and to break weak adversarial example attacks. For example, the assessment system may utilize input transformations (e.g., randomly selected from a library of transformations) to smooth out perturbations in the training data, may utilize adversarial training to improve robustness of the machine learning model, may utilize adversarial example detection to detect newly submitted input data that appears to attempt to cause unstable performance for the machine learning model, and/or the like. With regard to input transformations, the assessment system may utilize ridge regression to identify interactions between input transformations. For example, the assessment system may generate a model that is fitted, provides meaningful information about a strength of each interaction between input transformations, and predicts an effectiveness of different input transformation ensembles. In some implementations, the assessment system may utilize computational cost to determine the defensive capabilities for the adversarial example attacks.
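The ridge regression approach to transformation interactions can be sketched as follows. The transformation flags and robustness scores below are fabricated for illustration; `PolynomialFeatures` supplies the pairwise interaction terms whose fitted coefficients indicate the strength of each interaction, and the fitted model predicts the effectiveness of a new transformation ensemble.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures

# Each row flags which of three hypothetical input transformations an
# ensemble uses; y is that ensemble's measured robustness (illustrative).
X = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
              [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]], dtype=float)
y = np.array([0.1, 0.4, 0.35, 0.3, 0.8, 0.5, 0.45, 0.85])

# Pairwise interaction terms let the ridge model expose the strength of
# each transformation-pair interaction through its coefficients.
interactions = PolynomialFeatures(degree=2, interaction_only=True,
                                  include_bias=False)
X_int = interactions.fit_transform(X)
model = Ridge(alpha=0.1).fit(X_int, y)

# Predict the effectiveness of a transformation ensemble.
pred = model.predict(interactions.transform(np.array([[1.0, 1.0, 0.0]])))
```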
- In some implementations, when performing the adversarial example assessment of the machine learning model to provide the defensive capabilities for the adversarial example attacks, the assessment system may utilize input transformation as one of the defensive capabilities, may utilize reconstruction as one of the defensive capabilities, may utilize a decision boundary as one of the defensive capabilities, and/or may utilize an ensemble model (e.g., that includes input transformation, reconstruction, and decision boundary) as one of the defensive capabilities. Input transformation is described above. Reconstruction may include determining a difference between an original image and a reconstruction of the original image. Reconstruction may utilize an image-to-image translation method (e.g., a pix2pix package method) to determine the difference. For example, if a landscape is to be converted from day to night, the assessment system may train a pix2pix model using images of same locations in day and night and may apply the pix2pix model to a target image. In another example, the assessment system may train the pix2pix model to reconstruct an original image, and may measure a distance from the original image or a reconstruction loss. A threshold may be utilized with the reconstruction loss to determine whether an image is an adversarial example. A decision boundary may include a boundary that partitions a vector space into two sets, one set for each class. A classifier may classify all points on one side of the decision boundary as belonging to one class and all points on another side of the decision boundary as belonging to another class. A decision boundary is the region of a problem space in which an output label of a classifier is ambiguous.
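The reconstruction defense can be sketched as follows. PCA reconstruction stands in for the pix2pix-style image-to-image model described above, and the flattened 16-pixel "images" are fabricated; the idea carries over unchanged: inputs whose reconstruction loss exceeds a threshold learned from clean data are flagged as candidate adversarial examples.

```python
import numpy as np
from sklearn.decomposition import PCA

# Fit a reconstruction model (PCA as a lightweight stand-in for a
# pix2pix-style reconstructor) on clean training "images".
rng = np.random.default_rng(1)
clean = rng.normal(0.0, 0.1, size=(200, 16)) + np.linspace(0, 1, 16)
pca = PCA(n_components=4).fit(clean)

def reconstruction_loss(x):
    """Mean squared error between an image and its reconstruction."""
    x = x.reshape(1, -1)
    recon = pca.inverse_transform(pca.transform(x))
    return float(np.mean((x - recon) ** 2))

# Choose the threshold from the clean data's loss distribution; inputs
# whose loss exceeds it are treated as candidate adversarial examples.
losses = np.array([reconstruction_loss(img) for img in clean])
threshold = losses.mean() + 3 * losses.std()

def looks_adversarial(x):
    return reconstruction_loss(x) > threshold
```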
- In some implementations, when performing the adversarial example assessment of the machine learning model to provide the defensive capabilities for the adversarial example attacks, the assessment system may utilize input preprocessing (e.g., utilizing the library of input transformations with no modifications to the machine learning model) as a defensive capability, may utilize model robustness (e.g., replacing weights of the machine learning model with more robust weights or changing/converting the machine learning model architecture to be more robust) as a defensive capability, may utilize post-processing analysis (e.g., detecting abnormal behavior occurring from a same user and machine learning model interaction) as a defensive capability, and/or the like.
- As shown in
FIG. 1D, and by reference number 125, the assessment system may perform a membership inference assessment of the machine learning model to generate membership inference attacks and to provide secure training data as a defense for the membership inference attacks. For example, the membership inference assessment component of the assessment system may perform the membership inference assessment of the machine learning model to generate the membership inference attacks and to provide the secure training data as the defense for the membership inference attacks. A membership inference attack may include determining whether data was used as training data for the machine learning model. For example, a membership inference attack may include inferring a health condition diagnosis. If an attacker knows certain information about a person, the attacker may input the information into a machine learning model identifying cancer patients. If the attacker can conclude that the person's data is a member of the machine learning model's training data, the attacker may infer that the person has cancer. In another example, an insurance company seeking customers may utilize a competitor's machine learning model to determine clients of the competitor. If a person is identified as a client of the competitor, the insurance company could target the person with more enticing promotions. In still another example, if a machine learning model reveals some subject and a separate database contains other information regarding the subject, combining the other information with the subject may lead to a privacy breach. - The membership inference attacks may include a baseline attack, a fixed noise attack, an additive noise attack, an output probability vector attack, and/or the like. A baseline attack is an unsophisticated attack that infers membership based on whether a prediction was made correctly. If the prediction is correct, an observation may be inferred to be a member.
If the prediction is incorrect, the observation may be inferred to be a non-member. A fixed noise attack includes adding a quantity of noise to an observation and monitoring how a prediction differs from an original unperturbed data prediction. If a difference between the predictions decreases significantly, an attacker may determine that an original data observation is a member and that the noise is a non-member. An additive noise attack includes adding incremental levels of noise to an observation and monitoring when a predicted class changes. The additive noise attack may require different quantities of noise to change a predicted class label between members and non-members. An output probability vector attack includes utilizing a probability vector output by the machine learning model to determine membership. Members and non-members may have differently distributed probability vectors.
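The additive noise attack described above can be sketched as a probe that measures how much noise an observation tolerates before its predicted class changes. The classifier and data points are toy stand-ins; a fixed probe direction is used here so the example is reproducible, whereas a real attack would average over many random noise draws.

```python
import numpy as np

def noise_to_flip(x, predict_class, max_steps=50, step=0.02):
    """Additive-noise membership probe: perturb an observation with
    incrementally larger noise and report how much noise it takes before
    the model's predicted class changes. Training-set members typically
    sit farther from the decision boundary, so they tolerate more noise."""
    original = predict_class(x)
    # Fixed probe direction for reproducibility; a real attack would
    # draw many random noise directions and aggregate the results.
    direction = np.ones_like(x) / np.sqrt(x.size)
    for i in range(1, max_steps + 1):
        for signed in (direction, -direction):
            if predict_class(x + i * step * signed) != original:
                return i * step
    return max_steps * step

# Toy classifier standing in for the target model.
predict = lambda v: int(v.sum() > 0)
member_noise = noise_to_flip(np.array([1.0, 1.0]), predict)      # far from boundary
non_member_noise = noise_to_flip(np.array([0.01, 0.0]), predict)  # near boundary
```

The gap between `member_noise` and `non_member_noise` is the signal the attacker thresholds on.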
- In some implementations, the assessment system may generate a membership inference model that generates combinations of the membership inference attacks based on a data type, a problem type, available information, and/or the like. The assessment system may generate an individual membership inference model for each output class. For example, if the machine learning model outputs ten classes, the assessment system may generate ten membership inference models. The assessment system may utilize shadow data to train a shadow model to emulate the machine learning model, and may generate a membership inference dataset based on the shadow model. The assessment system may generate the membership inference model based on the membership inference dataset, and may utilize the membership inference model to perform attacks on the machine learning model.
- In some implementations, the assessment system may process the shadow dataset, with the machine learning model, to generate hard labels, and may train the shadow model with the shadow data and the hard labels. The assessment system may process the shadow dataset and a holdout dataset, with the shadow model, to generate soft labels, and may train the membership inference model with the soft labels and member/non-member labels. In some implementations, the assessment system may process the shadow dataset, with the machine learning model, to generate soft labels, and may train the shadow model with the shadow data and the soft labels. The assessment system may process the shadow dataset and a holdout dataset, with the shadow model, to generate additional soft labels, and may train the membership inference model with the additional soft labels and member/non-member labels.
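The hard-label variant of the shadow training process can be sketched end to end as follows. All data and models are fabricated stand-ins (random forests for the target and shadow models, with the maximum predicted probability used as the membership inference feature), but the structure mirrors the steps above: label shadow data with the target model, train the shadow model, derive soft labels for shadow and holdout data, and train the membership inference model on member/non-member labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Target model: stands in for the confidential model under assessment.
X_private = rng.normal(size=(200, 5))
y_private = (X_private[:, 0] + X_private[:, 1] > 0).astype(int)
target = RandomForestClassifier(n_estimators=50, random_state=0).fit(
    X_private, y_private)

# Step 1: label a shadow dataset with the target model's hard labels.
X_shadow = rng.normal(size=(200, 5))
y_shadow = target.predict(X_shadow)

# Step 2: train a shadow model to emulate the target model.
shadow = RandomForestClassifier(n_estimators=50, random_state=1).fit(
    X_shadow, y_shadow)

# Step 3: build the membership inference dataset from the shadow model's
# soft labels: prediction confidence on its own members vs. a holdout set.
X_holdout = rng.normal(size=(200, 5))
conf_members = shadow.predict_proba(X_shadow).max(axis=1).reshape(-1, 1)
conf_holdout = shadow.predict_proba(X_holdout).max(axis=1).reshape(-1, 1)
X_mi = np.vstack([conf_members, conf_holdout])
y_mi = np.concatenate([np.ones(200), np.zeros(200)])

# Step 4: train the membership inference model on member/non-member labels.
mi_model = LogisticRegression().fit(X_mi, y_mi)
```

The fitted `mi_model` would then be applied to the target model's outputs to infer membership, one such model per output class as described above.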
- The assessment system may provide the secure training data as the defense for the membership inference attacks by generating training data that prevents the machine learning model from generating inferences about the training data, by modifying the training data to prevent the machine learning model from generating inferences about the training data, and/or the like. For example, the assessment system may reduce overfitting of the training data as a defense for the membership inference attacks, may utilize ensemble models as a defense for the membership inference attacks, may limit information returned by the machine learning model as a defense for the membership inference attacks, may prevent utilization of web applications as a defense for the membership inference attacks, and/or the like.
- As shown in
FIG. 1E, and by reference number 130, the assessment system may perform a model extraction assessment of the machine learning model to identify model extraction vulnerabilities and to provide a secure API as a defense to the model extraction vulnerabilities. For example, the model extraction assessment component may perform the model extraction assessment of the machine learning model to identify the model extraction vulnerabilities and the secure API component may provide the secure API as the defense to the model extraction vulnerabilities. A model extraction attack may include stealing a confidential machine learning model. An attacker may utilize queries to create a series of input-response pairs for the machine learning model, and may train a new machine learning model based on results of the queries. The new machine learning model may function similarly to the machine learning model. This enables the attacker to interact with and reverse engineer the machine learning model. Surrogate and shadow models reverse engineered from the machine learning model may be used to evade per-usage pricing models for the machine learning model. A competitor's reverse engineered model may reproduce functionality of the machine learning model, enabling the competitor to offer competing products. The surrogate and shadow models can be used to conduct experiments and optimize poisoning and manipulation attacks on the machine learning model. - When performing the model extraction assessment of the machine learning model to identify the model extraction vulnerabilities, the assessment system may determine model extraction vulnerabilities of the machine learning model at different interaction levels and threat scenarios, may determine reduced risk usage thresholds for individual users interacting with the machine learning model, may determine steps an attacker must take to extract confidential information, and/or the like.
For example, the assessment system may identify a query threshold for a minimum viable strong attack on accuracy and fidelity metrics at different risk tolerances. The assessment system may analyze artifacts produced during the training process to evaluate the query thresholds for the machine learning model based on a threat scenario for the machine learning model.
- The model extraction vulnerabilities may include open source datasets, associated with the machine learning model, that resemble a task, competitor data associated with the machine learning model, unsophisticated random inputs associated with the machine learning model, binary hard labels associated with the machine learning model (e.g., only one response of a highest likelihood classification), probability vectors associated with the machine learning model (e.g., likelihoods of each class), random query selection associated with the machine learning model, adaptive queries associated with the machine learning model, and/or the like.
- When performing the model extraction assessment of the machine learning model to provide the secure API as the defense to the model extraction vulnerabilities, the assessment system may utilize the secure API to provide access control for the machine learning model. The assessment system may provide access control by limiting access of the secure API to approved users (e.g., domains of subscribers, registered devices, and/or the like), implementing cryptographic mechanisms to protect the confidentiality and integrity of access to the secure API, requiring user authentication/authorization with identities and API security tokens/keys, requiring two-step authentication access to prevent account/credential attacks and abuse, and/or the like.
- The assessment system may provide account management for the secure API by prohibiting anonymous/unauthorized API queries, authorizing API least-privileges based on defined attributes for each account, monitoring and disabling accounts in violation of API use notification agreements, enforcing usage conditions of the API by account (e.g., usage duration, restricting usage to certain days/times, query types/endpoints, and/or the like), and/or the like. The assessment system may provide data mining protection for the secure API by employing analytical processes to identify and halt patterns of API access correlated with attempts to extract the machine learning model (e.g., based on query frequency, type, and/or work factor), safeguarding technical API reference information/documentation/architecture from unnecessary exposure, protecting or disabling API caching mechanisms from mining actions, and/or the like.
- The assessment system may provide information flow control for the secure API by blocking/sanitizing API queries that do not conform to expected API input (e.g., length, format, content type, and/or illegal content), blocking/stripping API output that contains errors or any other processing failure that discloses information, obfuscating/randomizing API output to minimize information disclosure while retaining fidelity (e.g., strip decimal points or round up/down), and/or the like.
- The assessment system may provide concurrent session control by limiting a quantity of concurrent sessions for the API system (e.g., globally, by account type, by account, or any combination thereof), and/or the like. The assessment system may provide penetration testing by conducting penetration testing to validate defenses and identify vulnerable components in the secure API that could be exploited. The assessment system may provide API configuration hardening by managing, applying and verifying settings to limit functionality of the secure API (e.g., security headers, endpoints, and/or return codes), rate limiting (throttling) a rate of requests coming into the secure API, planning for periodic configuration baseline validation and vulnerability scanning to mitigate flaws/vulnerabilities in the secure API system and its components, and/or the like.
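The rate limiting (throttling) control mentioned above can be sketched as a token bucket, a common way to cap each account's request rate while still permitting short bursts. The class and its parameters are illustrative, not part of the secure API itself.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter of the kind the secure API could apply
    per account: refills at `rate` tokens per second, allows bursts of
    up to `capacity` requests."""

    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity   # start with a full bucket
        self.clock = clock
        self.last = clock()

    def allow(self):
        """Return True if a request may proceed, consuming one token."""
        now = self.clock()
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Example: an account limited to 5 queries per second with bursts of 10.
bucket = TokenBucket(rate=5.0, capacity=10)
```

An injectable `clock` keeps the limiter testable; in production the default monotonic clock is used, and requests rejected by `allow()` receive a throttling response.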
- As shown in
FIG. 1F, and by reference number 135, the assessment system may perform one or more actions based on results of the data veracity assessment, the adversarial example assessment, the membership inference assessment, and the model extraction assessment. In some implementations, performing the one or more actions includes the assessment system providing the results for display. For example, the assessment system may provide the results of the data veracity assessment, the adversarial example assessment, the membership inference assessment, and the model extraction assessment to the user device. The user device may display the results to a user of the user device so that the user may take actions to mitigate one or more attacks on the machine learning model. In this way, the assessment system conserves computing resources, networking resources, and/or the like that would otherwise have been consumed in failing to detect and correct one or more machine learning model attacks, implementing a compromised machine learning model, utilizing incorrect results generated by the compromised machine learning model, handling theft of confidential information associated with the compromised machine learning model, and/or the like. - In some implementations, performing the one or more actions includes the assessment system causing the machine learning model to be implemented based on the results. For example, if the assessment system determines that the machine learning model is not susceptible to the attacks generated based on the assessments, the assessment system may determine that the machine learning model is ready to be implemented. The assessment system may cause the machine learning model to be implemented in a system, in the user device, and/or the like based on the determination.
In this way, the assessment system conserves computing resources, networking resources, and/or the like that would otherwise have been consumed in implementing a compromised machine learning model, utilizing incorrect results generated by the compromised machine learning model, handling theft of confidential information associated with the compromised machine learning model, and/or the like.
- In some implementations, performing the one or more actions includes the assessment system modifying the machine learning model based on the results and causing the modified machine learning model to be implemented. For example, the assessment system may modify hyperparameters of the machine learning model based on the results of the assessments, or may modify the training data for the machine learning model, to generate a modified model. The assessment system may cause the modified machine learning model to be implemented in a system, in the user device, and/or the like. In this way, the assessment system conserves computing resources, networking resources, and/or the like that would otherwise have been consumed in failing to detect and correct one or more machine learning model attacks, utilizing incorrect results generated by a compromised machine learning model, handling theft of confidential information associated with the compromised machine learning model, and/or the like.
- In some implementations, performing the one or more actions includes the assessment system retraining the machine learning model based on the results. For example, the assessment system may utilize the results as additional training data for retraining the machine learning model, thereby increasing the quantity of training data available for training the machine learning model. Accordingly, the assessment system may conserve computing resources associated with identifying, obtaining, and/or generating historical data for training the machine learning model relative to other systems for identifying, obtaining, and/or generating historical data for training machine learning models.
- In some implementations, performing the one or more actions includes the assessment system preventing the machine learning model from being implemented based on the results. For example, if the assessment system determines that the machine learning model is susceptible to one or more attacks generated based on the assessments, the assessment system may determine that the machine learning model is not ready to be implemented. The assessment system may prevent the machine learning model from being implemented until the machine learning model is not susceptible to the attacks. In this way, the assessment system conserves computing resources, networking resources, and/or the like that would otherwise have been consumed in failing to detect and correct one or more machine learning model attacks, implementing a compromised machine learning model, utilizing incorrect results generated by the compromised machine learning model, and/or the like.
-
FIG. 1G depicts an example user interface that may be generated by the assessment system. As shown, the user interface may include information associated with a risk assessment of the machine learning model. The risk assessment may include risk information associated with model security of the machine learning model (e.g., 44% risk), data security of the machine learning model (e.g., 36% risk), and/or platform security of the machine learning model (e.g., 20% risk). The risk assessment may also provide indications of potential attacks on the machine learning model (e.g., model extraction attacks, model manipulation attacks, model compromising attacks, data memorization attacks, membership inference attacks, and/or the like). The risk assessment provided by the user interface may enable a user of the assessment system to determine whether to implement the machine learning model in production. - In this way, the assessment system identifies and corrects vulnerabilities in machine learning models. The assessment system may perform risk assessments of a machine learning model to identify vulnerabilities (e.g., for a variety of machine learning model attacks) associated with the machine learning model. The assessment system may perform the risk assessments based on the machine learning model, training data for the machine learning model, and a pipeline configuration associated with the machine learning model. The assessment system may generate defenses or corrections for the vulnerabilities, and may cause the corrected machine learning model to be implemented. 
This, in turn, conserves computing resources, networking resources, and/or the like that would otherwise have been consumed in failing to detect and correct one or more machine learning model attacks, implementing a compromised machine learning model, utilizing incorrect results generated by the compromised machine learning model, handling theft of confidential information associated with the compromised machine learning model, and/or the like.
- As indicated above,
FIGS. 1A-1G are provided as an example. Other examples may differ from what is described with regard to FIGS. 1A-1G. The number and arrangement of devices shown in FIGS. 1A-1G are provided as an example. In practice, there may be additional devices, fewer devices, different devices, or differently arranged devices than those shown in FIGS. 1A-1G. Furthermore, two or more devices shown in FIGS. 1A-1G may be implemented within a single device, or a single device shown in FIGS. 1A-1G may be implemented as multiple, distributed devices. Additionally, or alternatively, a set of devices (e.g., one or more devices) shown in FIGS. 1A-1G may perform one or more functions described as being performed by another set of devices shown in FIGS. 1A-1G. -
FIG. 2 is a diagram illustrating an example 200 of training and using a machine learning model. The machine learning model training and usage described herein may be performed using a machine learning system. The machine learning system may include or may be included in a computing device, a server, a cloud computing environment, and/or the like, such as the assessment system described in more detail elsewhere herein. - As shown by
reference number 205, a machine learning model may be trained using a set of observations. The set of observations may be obtained from historical data, such as data gathered during one or more processes described herein. In some implementations, the machine learning system may receive the set of observations (e.g., as input) from the assessment system, as described elsewhere herein. - As shown by
reference number 210, the set of observations includes a feature set. The feature set may include a set of variables, and a variable may be referred to as a feature. A specific observation may include a set of variable values (or feature values) corresponding to the set of variables. In some implementations, the machine learning system may determine variables for a set of observations and/or variable values for a specific observation based on input received from the assessment system. For example, the machine learning system may identify a feature set (e.g., one or more features and/or feature values) by extracting the feature set from structured data, by performing natural language processing to extract the feature set from unstructured data, by receiving input from an operator, and/or the like. - As an example, a feature set for a set of observations may include a first feature of
feature 1 data, a second feature of feature 2 data, a third feature of feature 3 data, and so on. As shown, for a first observation, the first feature may have a value of feature 1 data 1, the second feature may have a value of feature 2 data 1, the third feature may have a value of feature 3 data 1, and so on. These features and feature values are provided as examples and may differ in other examples. - As shown by
reference number 215, the set of observations may be associated with a target variable. The target variable may represent a variable having a numeric value, may represent a variable having a numeric value that falls within a range of values or has some discrete possible values, may represent a variable that is selectable from one of multiple options (e.g., one of multiple classes, classifications, labels, and/or the like), may represent a variable having a Boolean value, and/or the like. A target variable may be associated with a target variable value, and a target variable value may be specific to an observation. In example 200, the target variable has a value of target variable 1 for the first observation. - The target variable may represent a value that a machine learning model is being trained to predict, and the feature set may represent the variables that are input to a trained machine learning model to predict a value for the target variable. The set of observations may include target variable values so that the machine learning model can be trained to recognize patterns in the feature set that lead to a target variable value. A machine learning model that is trained to predict a target variable value may be referred to as a supervised learning model.
- In some implementations, the machine learning model may be trained on a set of observations that do not include a target variable. This may be referred to as an unsupervised learning model. In this case, the machine learning model may learn patterns from the set of observations without labeling or supervision, and may provide output that indicates such patterns, such as by using clustering and/or association to identify related groups of items within the set of observations.
- As shown by
reference number 220, the machine learning system may train a machine learning model using the set of observations and using one or more machine learning algorithms, such as a regression algorithm, a decision tree algorithm, a neural network algorithm, a k-nearest neighbor algorithm, a support vector machine algorithm, and/or the like. After training, the machine learning system may store the machine learning model as a trained machine learning model 225 to be used to analyze new observations. - As shown by
reference number 230, the machine learning system may apply the trained machine learning model 225 to a new observation, such as by receiving a new observation and inputting the new observation to the trained machine learning model 225. As shown, the new observation may include a first feature of feature 1 data X, a second feature of feature 2 data Y, a third feature of feature 3 data Z, and so on, as an example. The machine learning system may apply the trained machine learning model 225 to the new observation to generate an output (e.g., a result). The type of output may depend on the type of machine learning model and/or the type of machine learning task being performed. For example, the output may include a predicted value of a target variable, such as when supervised learning is employed. Additionally, or alternatively, the output may include information that identifies a cluster to which the new observation belongs, information that indicates a degree of similarity between the new observation and one or more other observations, and/or the like, such as when unsupervised learning is employed. - As an example, the trained
machine learning model 225 may predict a value of target variable A for the target variable of the clustered events for the new observation, as shown by reference number 235. Based on this prediction, the machine learning system may provide a first recommendation, may provide output for determination of a first recommendation, may perform a first automated action, may cause a first automated action to be performed (e.g., by instructing another device to perform the automated action), and/or the like. - In some implementations, the trained
machine learning model 225 may classify (e.g., cluster) the new observation in a cluster, as shown by reference number 240. The observations within a cluster may have a threshold degree of similarity. As an example, if the machine learning system classifies the new observation in a first cluster (e.g., a feature 1 data cluster), then the machine learning system may provide a first recommendation. Additionally, or alternatively, the machine learning system may perform a first automated action and/or may cause a first automated action to be performed (e.g., by instructing another device to perform the automated action) based on classifying the new observation in the first cluster. - As another example, if the machine learning system were to classify the new observation in a second cluster (e.g., a
feature 2 data cluster), then the machine learning system may provide a second (e.g., different) recommendation and/or may perform or cause performance of a second (e.g., different) automated action. - In some implementations, the recommendation and/or the automated action associated with the new observation may be based on a target variable value having a particular label (e.g., classification, categorization, and/or the like), may be based on whether a target variable value satisfies one or more thresholds (e.g., whether the target variable value is greater than a threshold, is less than a threshold, is equal to a threshold, falls within a range of threshold values, and/or the like), may be based on a cluster in which the new observation is classified, and/or the like.
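The cluster-based branching described above can be sketched as assigning a new observation to the nearest cluster centroid and looking up the corresponding recommendation and automated action. The centroids and the recommendation and action tables below are hypothetical placeholders:

```python
import math

def nearest_cluster(centroids, observation):
    """Assign an observation to the cluster whose centroid is closest."""
    distances = [math.dist(centroid, observation) for centroid in centroids]
    return distances.index(min(distances))

# Hypothetical cluster centroids and per-cluster outcomes (illustrative only).
CENTROIDS = [[1.0, 2.0, 1.5], [5.0, 5.5, 6.0]]
RECOMMENDATIONS = {0: "first recommendation", 1: "second recommendation"}
AUTOMATED_ACTIONS = {0: "first automated action", 1: "second automated action"}

cluster = nearest_cluster(CENTROIDS, [1.1, 2.1, 1.6])
print(RECOMMENDATIONS[cluster], "/", AUTOMATED_ACTIONS[cluster])
```

A threshold on the distance to the winning centroid could additionally implement the "threshold degree of similarity" mentioned above, declining to act when the observation is far from every cluster.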
- In this way, the machine learning system may apply a rigorous and automated process to determine a target variable. The machine learning system enables recognition and/or identification of tens, hundreds, thousands, or millions of features and/or feature values for tens, hundreds, thousands, or millions of observations, thereby increasing accuracy and consistency and reducing delay in determining a target variable, relative to allocating computing resources for tens, hundreds, or thousands of operators to determine the target variable manually.
- As indicated above,
FIG. 2 is provided as an example. Other examples may differ from what is described in connection with FIG. 2. -
FIG. 3 is a diagram of an example environment 300 in which systems and/or methods described herein may be implemented. As shown in FIG. 3, the environment 300 may include an assessment system 301, which may include one or more elements of and/or may execute within a cloud computing system 302. The cloud computing system 302 may include one or more elements 303-313, as described in more detail below. As further shown in FIG. 3, the environment 300 may include a network 320 and/or a user device 330. Devices and/or elements of the environment 300 may interconnect via wired connections and/or wireless connections. - The
cloud computing system 302 includes computing hardware 303, a resource management component 304, a host operating system (OS) 305, and/or one or more virtual computing systems 306. The resource management component 304 may perform virtualization (e.g., abstraction) of the computing hardware 303 to create the one or more virtual computing systems 306. Using virtualization, the resource management component 304 enables a single computing device (e.g., a computer, a server, and/or the like) to operate like multiple computing devices, such as by creating multiple isolated virtual computing systems 306 from the computing hardware 303 of the single computing device. In this way, the computing hardware 303 can operate more efficiently, with lower power consumption, higher reliability, higher availability, higher utilization, greater flexibility, and lower cost than using separate computing devices. - The
computing hardware 303 includes hardware and corresponding resources from one or more computing devices. For example, the computing hardware 303 may include hardware from a single computing device (e.g., a single server) or from multiple computing devices (e.g., multiple servers), such as multiple computing devices in one or more data centers. As shown, the computing hardware 303 may include one or more processors 307, one or more memories 308, one or more storage components 309, and/or one or more networking components 310. Examples of a processor, a memory, a storage component, and a networking component (e.g., a communication component) are described elsewhere herein. - The
resource management component 304 includes a virtualization application (e.g., executing on hardware, such as the computing hardware 303) capable of virtualizing the computing hardware 303 to start, stop, and/or manage the one or more virtual computing systems 306. For example, the resource management component 304 may include a hypervisor (e.g., a bare-metal or Type 1 hypervisor, a hosted or Type 2 hypervisor, and/or the like) or a virtual machine monitor, such as when the virtual computing systems 306 are virtual machines 311. Additionally, or alternatively, the resource management component 304 may include a container manager, such as when the virtual computing systems 306 are containers 312. In some implementations, the resource management component 304 executes within and/or in coordination with a host operating system 305. - A
virtual computing system 306 includes a virtual environment that enables cloud-based execution of operations and/or processes described herein using computing hardware 303. As shown, a virtual computing system 306 may include a virtual machine 311, a container 312, a hybrid environment 313 that includes a virtual machine and a container, and/or the like. A virtual computing system 306 may execute one or more applications using a file system that includes binary files, software libraries, and/or other resources required to execute applications on a guest operating system (e.g., within the virtual computing system 306) or the host operating system 305. - Although the
assessment system 301 may include one or more elements 303-313 of the cloud computing system 302, may execute within the cloud computing system 302, and/or may be hosted within the cloud computing system 302, in some implementations, the assessment system 301 may not be cloud-based (e.g., may be implemented outside of a cloud computing system) or may be partially cloud-based. For example, the assessment system 301 may include one or more devices that are not part of the cloud computing system 302, such as device 400 of FIG. 4, which may include a standalone server or another type of computing device. The assessment system 301 may perform one or more operations and/or processes described in more detail elsewhere herein. - The
network 320 includes one or more wired and/or wireless networks. For example, the network 320 may include a cellular network, a public land mobile network (PLMN), a local area network (LAN), a wide area network (WAN), a private network, the Internet, and/or the like, and/or a combination of these or other types of networks. The network 320 enables communication among the devices of the environment 300. - The user device 330 includes one or more devices capable of receiving, generating, storing, processing, and/or providing information, as described elsewhere herein. The user device 330 may include a communication device and/or a computing device. For example, the user device 330 may include a wireless communication device, a mobile phone, a user equipment, a laptop computer, a tablet computer, a desktop computer, a gaming console, a set-top box, a wearable communication device (e.g., a smart wristwatch, a pair of smart eyeglasses, a head mounted display, or a virtual reality headset), or a similar type of device.
- The number and arrangement of devices and networks shown in
FIG. 3 are provided as an example. In practice, there may be additional devices and/or networks, fewer devices and/or networks, different devices and/or networks, or differently arranged devices and/or networks than those shown in FIG. 3. Furthermore, two or more devices shown in FIG. 3 may be implemented within a single device, or a single device shown in FIG. 3 may be implemented as multiple, distributed devices. Additionally, or alternatively, a set of devices (e.g., one or more devices) of the environment 300 may perform one or more functions described as being performed by another set of devices of the environment 300. -
FIG. 4 is a diagram of example components of a device 400, which may correspond to the assessment system 301 and/or the user device 330. In some implementations, the assessment system 301 and/or the user device 330 may include one or more devices 400 and/or one or more components of the device 400. As shown in FIG. 4, the device 400 may include a bus 410, a processor 420, a memory 430, an input component 440, an output component 450, and a communication component 460. - The
bus 410 includes a component that enables wired and/or wireless communication among the components of device 400. The processor 420 includes a central processing unit, a graphics processing unit, a microprocessor, a controller, a microcontroller, a digital signal processor, a field-programmable gate array, an application-specific integrated circuit, and/or another type of processing component. The processor 420 is implemented in hardware, firmware, or a combination of hardware and software. In some implementations, the processor 420 includes one or more processors capable of being programmed to perform a function. The memory 430 includes a random-access memory, a read only memory, and/or another type of memory (e.g., a flash memory, a magnetic memory, and/or an optical memory). - The
input component 440 enables the device 400 to receive input, such as user input and/or sensed inputs. For example, the input component 440 may include a touch screen, a keyboard, a keypad, a mouse, a button, a microphone, a switch, a sensor, a global positioning system component, an accelerometer, a gyroscope, an actuator, and/or the like. The output component 450 enables the device 400 to provide output, such as via a display, a speaker, and/or one or more light-emitting diodes. The communication component 460 enables the device 400 to communicate with other devices, such as via a wired connection and/or a wireless connection. For example, the communication component 460 may include a receiver, a transmitter, a transceiver, a modem, a network interface card, an antenna, and/or the like. - The
device 400 may perform one or more processes described herein. For example, a non-transitory computer-readable medium (e.g., the memory 430) may store a set of instructions (e.g., one or more instructions, code, software code, program code, and/or the like) for execution by the processor 420. The processor 420 may execute the set of instructions to perform one or more processes described herein. In some implementations, execution of the set of instructions, by one or more processors 420, causes the one or more processors 420 and/or the device 400 to perform one or more processes described herein. In some implementations, hardwired circuitry may be used instead of or in combination with the instructions to perform one or more processes described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software. - The number and arrangement of components shown in
FIG. 4 are provided as an example. The device 400 may include additional components, fewer components, different components, or differently arranged components than those shown in FIG. 4. Additionally, or alternatively, a set of components (e.g., one or more components) of the device 400 may perform one or more functions described as being performed by another set of components of the device 400. -
FIG. 5 is a flowchart of an example process 500 for identifying and correcting vulnerabilities in machine learning models. In some implementations, one or more process blocks of FIG. 5 may be performed by a device (e.g., the assessment system 301). In some implementations, one or more process blocks of FIG. 5 may be performed by another device or a group of devices separate from or including the device, such as a user device (e.g., the user device 330). Additionally, or alternatively, one or more process blocks of FIG. 5 may be performed by one or more components of the device 400, such as the processor 420, the memory 430, the input component 440, the output component 450, and/or the communication component 460. - As shown in
FIG. 5, process 500 may include receiving, from a user device, a machine learning model and training data utilized to train the machine learning model (block 510). For example, the device may receive, from a user device, a machine learning model and training data utilized to train the machine learning model, as described above. - As further shown in
FIG. 5, process 500 may include performing a data veracity assessment of the training data to identify and remove poisoned data from the training data (block 520). For example, the device may perform a data veracity assessment of the training data to identify and remove poisoned data from the training data, as described above. In some implementations, performing the data veracity assessment of the training data to identify and remove poisoned data from the training data includes processing the training data, with a counterfactual explanation model, to identify first poisoned data in the training data; processing the training data, with an activation clustering model, to identify second poisoned data in the training data; and removing the first poisoned data and the second poisoned data from the training data. - As further shown in
FIG. 5, process 500 may include providing, to the user device, the training data without the identified poisoned data (block 530). For example, the device may provide, to the user device, the training data without the identified poisoned data, as described above. - As further shown in
FIG. 5, process 500 may include performing an adversarial example assessment of the machine learning model with generated adversarial example attacks and to provide defensive capabilities for the adversarial example attacks (block 540). For example, the device may perform an adversarial example assessment of the machine learning model with generated adversarial example attacks and to provide defensive capabilities for the adversarial example attacks, as described above. In some implementations, performing the adversarial example assessment of the machine learning model with generated adversarial example attacks and to provide the defensive capabilities for the adversarial example attacks includes processing the machine learning model, with an adversarial robustness model, to generate the adversarial example attacks and to determine a robustness of the machine learning model to the adversarial example attacks; utilizing input hardening and adversarial example detection as the defensive capabilities for the adversarial example attacks; and adding the adversarial example attacks into the training data. - In some implementations, the adversarial example attacks include one or more of a fast gradient sign method attack, a basic iterative method attack, a projected gradient descent attack, a Jacobian saliency map approach attack, a Carlini-Wagner attack, a boundary attack, or an adaptive attack.
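Of the attacks just listed, the fast gradient sign method is the simplest to sketch: each input feature is stepped in the sign of the loss gradient with respect to the input. The example below applies it to a two-feature logistic-regression model; the weights, input point, and step size eps are invented, and a real assessment would compute the gradient of the model under test rather than this toy:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x, w, b):
    """Probability of class 1 under a logistic-regression model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(x, y, w, b, eps):
    """Fast gradient sign method: move each feature by eps in the sign of
    the gradient of the cross-entropy loss with respect to the input."""
    p = predict(x, w, b)
    grad = [(p - y) * wi for wi in w]  # dLoss/dx for a label y in {0, 1}
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

w, b = [2.0, -1.0], 0.0
x = [0.3, 0.2]                     # correctly classified as class 1 (p > 0.5)
x_adv = fgsm(x, 1, w, b, eps=0.5)  # perturbed copy of x
print(predict(x, w, b) > 0.5, predict(x_adv, w, b) > 0.5)  # True False
```

The basic iterative method and projected gradient descent named above repeat this step with a smaller eps, optionally projecting back into an allowed perturbation ball after each step.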
- In some implementations, performing the adversarial example assessment of the machine learning model with the generated adversarial example attacks and to provide the defensive capabilities for the adversarial example attacks includes generating a feature squeezed data point from a training data point; processing the feature squeezed data point, with the machine learning model, to generate a first prediction; processing the training data point, with the machine learning model, to generate a second prediction; determining whether a difference between the first prediction and the second prediction exceeds a threshold; and determining one of the adversarial example attacks based on the difference between the first prediction and the second prediction exceeding the threshold.
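The squeeze-and-compare steps just described can be sketched directly: generate a feature squeezed data point, run both points through the model, and flag the input when the two predictions differ by more than a threshold. The bit-depth squeeze and the deliberately steep toy classifier below are illustrative choices, not fixed by the disclosure:

```python
import math

def squeeze(x, bits=3):
    """Bit-depth reduction: quantize each feature (assumed to lie in [0, 1])."""
    levels = 2 ** bits - 1
    return [round(v * levels) / levels for v in x]

def is_suspected_adversarial(model, x, threshold=0.2):
    """Flag x when the prediction on the squeezed input moves by more than
    the threshold -- the comparison described in the process above."""
    return abs(model(x) - model(squeeze(x))) > threshold

# Toy classifier with a steep decision boundary (illustrative only).
model = lambda x: 1.0 / (1.0 + math.exp(-50.0 * (x[0] - x[1])))

print(is_suspected_adversarial(model, [0.51, 0.49]))  # near the boundary: True
print(is_suspected_adversarial(model, [0.90, 0.10]))  # confidently benign: False
```

The intuition is that adversarial perturbations tend to live in the low-order bits of the input, so squeezing them away moves the prediction sharply, while benign inputs are largely unaffected.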
- In some implementations, performing the adversarial example assessment of the machine learning model with the generated adversarial example attacks and to provide the defensive capabilities for the adversarial example attacks includes one or more of utilizing input transformation as one of the defensive capabilities, utilizing reconstruction as one of the defensive capabilities, utilizing a decision boundary as one of the defensive capabilities, or utilizing an ensemble model as one of the defensive capabilities.
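Input transformation, the first defense named above, can be sketched as a smoothing filter applied to the input before inference. This sliding-median version assumes signal- or image-like inputs where neighboring features are correlated; the window size is an arbitrary illustrative choice:

```python
def median_smooth(x, window=3):
    """Input transformation defense sketch: replace each feature with the
    median of its local window, washing out small isolated perturbations."""
    half = window // 2
    smoothed = []
    for i in range(len(x)):
        neighborhood = sorted(x[max(0, i - half): i + half + 1])
        smoothed.append(neighborhood[len(neighborhood) // 2])
    return smoothed

# The isolated spike at index 1 is removed before the input reaches the model.
print(median_smooth([0.1, 0.9, 0.1, 0.1]))
```

Reconstruction and ensemble defenses follow the same pattern at a higher level: the input is re-encoded (e.g., through an autoencoder) or the prediction is averaged over several models before the output is released.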
- As further shown in
FIG. 5, process 500 may include performing a membership inference assessment of the machine learning model to generate membership inference attacks and to provide secure training data as a defense for the membership inference attacks (block 550). For example, the device may perform a membership inference assessment of the machine learning model to generate membership inference attacks and to provide secure training data as a defense for the membership inference attacks, as described above. In some implementations, performing the membership inference assessment of the machine learning model to generate the membership inference attacks and to provide the secure training data as the defense for the membership inference attacks includes generating one or more of a baseline attack, a fixed noise attack, an additive noise attack, or an output probability vector attack as the membership inference attacks. - As further shown in
FIG. 5 ,process 500 may include performing a model extraction assessment of the machine learning model to identify model extraction vulnerabilities and to provide a secure application programming interface as a defense to the model extraction vulnerabilities (block 560). For example, the device may perform a model extraction assessment of the machine learning model to identify model extraction vulnerabilities and to provide a secure application programming interface as a defense to the model extraction vulnerabilities, as described above. In some implementations, performing the model extraction assessment of the machine learning model to identify the model extraction vulnerabilities and to provide the secure application programming interface as the defense to the model extraction vulnerabilities includes determining the model extraction vulnerabilities at different interaction levels and threat scenarios, and providing one or more of access control, account management, data mining protection, information flow control, concurrent session control, penetration testing, or configuration hardening for the secure application programming interface. - In some implementations, the model extraction vulnerabilities include one or more of open source datasets, associated with the machine learning model, that resemble a task; competitor data associated with the machine learning model; unsophisticated random inputs associated with the machine learning model; binary hard labels associated with the machine learning model; probability vectors associated with the machine learning model; random query selection associated with the machine learning model; or adaptive queries associated with the machine learning model.
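One of the API hardening measures listed above — data mining protection — can be sketched as a per-account query budget that caps how many predictions any single account can harvest for training a surrogate copy of the model. The quota and account names below are invented for illustration:

```python
from collections import defaultdict

class QueryBudget:
    """Per-account query quota: a minimal 'data mining protection' control
    that limits how many predictions one account can harvest, raising the
    cost of building a surrogate (extracted) copy of the model."""

    def __init__(self, max_queries):
        self.max_queries = max_queries
        self.counts = defaultdict(int)

    def allow(self, account):
        """Return True and count the query if the account is under quota."""
        if self.counts[account] >= self.max_queries:
            return False
        self.counts[account] += 1
        return True

budget = QueryBudget(max_queries=3)
print([budget.allow("suspicious-account") for _ in range(5)])  # three True, then False
```

In practice such a budget would be combined with the other listed controls (access control, concurrent session control, and returning hard labels rather than full probability vectors) since a determined attacker can spread queries across accounts.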
- As further shown in
FIG. 5 ,process 500 may include performing one or more actions based on results of the data veracity assessment, the adversarial example assessment, the membership inference assessment, and the model extraction assessment (block 570). For example, the device may perform one or more actions based on results of the data veracity assessment, the adversarial example assessment, the membership inference assessment, and the model extraction assessment, as described above. In some implementations, performing the one or more actions includes one or more of providing the results for display, or causing the machine learning model to be implemented based on the results. In some implementations, performing the one or more actions includes modifying the machine learning model based on the results and to generate a modified machine learning model, and causing the modified machine learning model to be implemented. - In some implementations, performing the one or more actions includes one or more of retraining the machine learning model based on the results, or preventing the machine learning model from being implemented based on the results. In some implementations, performing the one or more actions includes causing the user device to utilize the training data, without the identified poisoned data, to retrain the machine learning model.
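A dispatcher over the results of the four assessments might look like the following sketch. The result field names, thresholds, and action strings are hypothetical placeholders — the disclosure does not fix a particular results schema:

```python
def choose_actions(results):
    """Map assessment results to follow-up actions. The field names,
    thresholds, and action strings are hypothetical placeholders."""
    actions = []
    if results.get("poisoned_data_found"):
        actions.append("retrain the machine learning model on the cleaned training data")
    if results.get("adversarial_robustness", 1.0) < 0.8:
        actions.append("add adversarial examples to the training data and retrain")
    if results.get("membership_leakage") or results.get("extraction_risk"):
        actions.append("prevent the machine learning model from being implemented")
    if not actions:
        actions.append("cause the machine learning model to be implemented")
    return actions

print(choose_actions({"poisoned_data_found": True, "adversarial_robustness": 0.6}))
```

The same mapping could instead emit the results for display, or instruct the user device to perform the retraining, matching the alternatives enumerated above.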
- Although
FIG. 5 shows example blocks of process 500, in some implementations, process 500 may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted in FIG. 5. Additionally, or alternatively, two or more of the blocks of process 500 may be performed in parallel. - The foregoing disclosure provides illustration and description but is not intended to be exhaustive or to limit the implementations to the precise form disclosed. Modifications may be made in light of the above disclosure or may be acquired from practice of the implementations.
- As used herein, the term “component” is intended to be broadly construed as hardware, firmware, or a combination of hardware and software. It will be apparent that systems and/or methods described herein may be implemented in different forms of hardware, firmware, and/or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the implementations. Thus, the operation and behavior of the systems and/or methods are described herein without reference to specific software code—it being understood that software and hardware can be used to implement the systems and/or methods based on the description herein.
- As used herein, satisfying a threshold may, depending on the context, refer to a value being greater than the threshold, greater than or equal to the threshold, less than the threshold, less than or equal to the threshold, equal to the threshold, and/or the like.
- Although particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of various implementations. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of various implementations includes each dependent claim in combination with every other claim in the claim set.
- No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items and may be used interchangeably with “one or more.” Further, as used herein, the article “the” is intended to include one or more items referenced in connection with the article “the” and may be used interchangeably with “the one or more.” Furthermore, as used herein, the term “set” is intended to include one or more items (e.g., related items, unrelated items, a combination of related and unrelated items, and/or the like), and may be used interchangeably with “one or more.” Where only one item is intended, the phrase “only one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. Also, as used herein, the term “or” is intended to be inclusive when used in a series and may be used interchangeably with “and/or,” unless explicitly stated otherwise (e.g., if used in combination with “either” or “only one of”).
- In the preceding specification, various example embodiments have been described with reference to the accompanying drawings. It will, however, be evident that various modifications and changes may be made thereto, and additional embodiments may be implemented, without departing from the broader scope of the invention as set forth in the claims that follow. The specification and drawings are accordingly to be regarded in an illustrative rather than restrictive sense.
Claims (20)
1. A method, comprising:
receiving, by a device and from a user device, a machine learning model and training data utilized to train the machine learning model;
performing, by the device, a data veracity assessment of the training data to identify and remove poisoned data from the training data;
providing, by the device and to the user device, the training data without the identified poisoned data;
performing, by the device, an adversarial example assessment of the machine learning model with generated adversarial example attacks and to provide defensive capabilities for the adversarial example attacks;
performing, by the device, a membership inference assessment of the machine learning model to generate membership inference attacks and to provide secure training data as a defense for the membership inference attacks;
performing, by the device, a model extraction assessment of the machine learning model to identify model extraction vulnerabilities and to provide a secure application programming interface as a defense to the model extraction vulnerabilities; and
performing, by the device, one or more actions based on results of the data veracity assessment, the adversarial example assessment, the membership inference assessment, and the model extraction assessment.
2. The method of claim 1, wherein performing the data veracity assessment of the training data to identify and remove poisoned data from the training data comprises:
processing the training data, with a counterfactual explanation model, to identify first poisoned data in the training data;
processing the training data, with an activation clustering model, to identify second poisoned data in the training data; and
removing the first poisoned data and the second poisoned data from the training data.
3. The method of claim 1, wherein performing the adversarial example assessment of the machine learning model with the generated adversarial example attacks and to provide the defensive capabilities for the adversarial example attacks comprises:
processing the machine learning model, with an adversarial robustness model, to generate the adversarial example attacks and to determine a robustness of the machine learning model to the adversarial example attacks;
utilizing input hardening and adversarial example detection as the defensive capabilities for the adversarial example attacks; and
adding the adversarial example attacks into the training data.
4. The method of claim 1, wherein performing the membership inference assessment of the machine learning model to generate the membership inference attacks and to provide the secure training data as the defense for the membership inference attacks comprises:
generating one or more of a baseline attack, a fixed noise attack, an additive noise attack, or an output probability vector attack as the membership inference attacks.
5. The method of claim 1, wherein performing the model extraction assessment of the machine learning model to identify the model extraction vulnerabilities and to provide the secure application programming interface as the defense to the model extraction vulnerabilities comprises:
determining the model extraction vulnerabilities at different interaction levels and threat scenarios; and
providing one or more of access control, account management, data mining protection, information flow control, concurrent session control, penetration testing, or configuration hardening for the secure application programming interface.
6. The method of claim 1, wherein the adversarial example attacks include one or more of:
a fast gradient sign method attack,
a basic iterative method attack,
a projected gradient descent attack,
a Jacobian saliency map approach attack,
a Carlini-Wagner attack,
a boundary attack, or
an adaptive attack.
7. The method of claim 1, wherein performing the adversarial example assessment of the machine learning model with the generated adversarial example attacks and to provide the defensive capabilities for the adversarial example attacks comprises:
generating a feature squeezed data point from a training data point;
processing the feature squeezed data point, with the machine learning model, to generate a first prediction;
processing the training data point, with the machine learning model, to generate a second prediction;
determining whether a difference between the first prediction and the second prediction exceeds a threshold; and
determining one of the adversarial example attacks based on the difference between the first prediction and the second prediction exceeding the threshold.
8. A device, comprising:
one or more processors configured to:
receive, from a user device, a machine learning model and training data utilized to train the machine learning model;
perform a data veracity assessment of the training data to identify and remove poisoned data from the training data;
provide, to the user device, the training data without the identified poisoned data;
perform an adversarial example assessment of the machine learning model with generated adversarial example attacks and provide defensive capabilities for the adversarial example attacks;
perform a membership inference assessment of the machine learning model to generate membership inference attacks and to provide secure training data as a defense for the membership inference attacks,
wherein the membership inference attacks include one or more of:
a baseline attack,
a fixed noise attack,
an additive noise attack, or
an output probability vector attack;
perform a model extraction assessment of the machine learning model to identify model extraction vulnerabilities and to provide a secure application programming interface as a defense to the model extraction vulnerabilities; and
perform one or more actions based on results of the data veracity assessment, the adversarial example assessment, the membership inference assessment, and the model extraction assessment.
9. The device of claim 8, wherein the one or more processors, to perform the adversarial example assessment of the machine learning model with the generated adversarial example attacks and to provide the defensive capabilities for the adversarial example attacks, are configured to one or more of:
utilize input transformation as one of the defensive capabilities;
utilize reconstruction as one of the defensive capabilities;
utilize a decision boundary as one of the defensive capabilities; or
utilize an ensemble model as one of the defensive capabilities.
10. The device of claim 8, wherein the model extraction vulnerabilities include one or more of:
open source datasets, associated with the machine learning model, that resemble a task,
competitor data associated with the machine learning model,
unsophisticated random inputs associated with the machine learning model,
binary hard labels associated with the machine learning model,
probability vectors associated with the machine learning model,
random query selection associated with the machine learning model, or
adaptive queries associated with the machine learning model.
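Several of the vulnerabilities listed above (unsophisticated random inputs, binary hard labels) combine into the simplest extraction attack: query the victim at random points and fit a surrogate to its answers. A toy sketch with a nearest-centroid surrogate, all names hypothetical:

```python
import numpy as np

def extract_surrogate(victim_predict, n_queries=1000, dim=2, seed=0):
    """Model extraction via random queries and hard labels: collect the
    victim's answers, then fit a nearest-centroid surrogate to them."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, size=(n_queries, dim))
    y = victim_predict(X)                              # binary hard labels
    centroids = {c: X[y == c].mean(axis=0) for c in np.unique(y)}

    def surrogate(Xq):
        keys = np.array(sorted(centroids))
        d = np.stack([np.linalg.norm(Xq - centroids[k], axis=1) for k in keys])
        return keys[d.argmin(axis=0)]

    return surrogate

# Victim: a linear rule the attacker never sees directly.
victim = lambda X: (X[:, 0] + X[:, 1] > 0).astype(int)
stolen = extract_surrogate(victim)
X_test = np.array([[0.5, 0.5], [-0.5, -0.5]])
# stolen(X_test) agrees with victim(X_test) on these points
```

Adaptive queries, also listed in the claim, would instead pick each new query near the surrogate's current decision boundary, reaching the same fidelity with far fewer API calls.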
11. The device of claim 8, wherein the one or more processors, to perform the one or more actions, are configured to one or more of:
provide the results for display; or
cause the machine learning model to be implemented based on the results.
12. The device of claim 8, wherein the one or more processors, to perform the one or more actions, are configured to:
modify the machine learning model, based on the results, to generate a modified machine learning model; and
cause the modified machine learning model to be implemented.
13. The device of claim 8, wherein the one or more processors, to perform the one or more actions, are configured to one or more of:
retrain the machine learning model based on the results; or
prevent the machine learning model from being implemented based on the results.
14. The device of claim 8, wherein the one or more processors, to perform the one or more actions, are configured to:
cause the user device to utilize the training data, without the identified poisoned data, to retrain the machine learning model.
15. A non-transitory computer-readable medium storing a set of instructions, the set of instructions comprising:
one or more instructions that, when executed by one or more processors of a device, cause the device to:
receive, from a user device, a machine learning model and training data utilized to train the machine learning model;
perform a data veracity assessment of the training data to identify and remove poisoned data from the training data;
provide, to the user device, the training data without the identified poisoned data;
perform an adversarial example assessment of the machine learning model with generated adversarial example attacks and provide defensive capabilities for the adversarial example attacks,
wherein the adversarial example attacks include one or more of:
a fast gradient sign method attack,
a basic iterative method attack,
a projected gradient descent attack,
a Jacobian saliency map approach attack,
a Carlini-Wagner attack,
a boundary attack, or
an adaptive attack;
perform a membership inference assessment of the machine learning model to generate membership inference attacks and to provide secure training data as a defense for the membership inference attacks;
perform a model extraction assessment of the machine learning model to identify model extraction vulnerabilities and to provide a secure application programming interface as a defense to the model extraction vulnerabilities; and
perform one or more actions based on results of the data veracity assessment, the adversarial example assessment, the membership inference assessment, and the model extraction assessment.
16. The non-transitory computer-readable medium of claim 15, wherein the one or more instructions, that cause the device to perform the data veracity assessment of the training data to identify and remove poisoned data from the training data, cause the device to:
process the training data, with a counterfactual explanation model, to identify first poisoned data in the training data;
process the training data, with an activation clustering model, to identify second poisoned data in the training data; and
remove the first poisoned data and the second poisoned data from the training data.
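Activation clustering, named in the claim, rests on the observation that poisoned samples excite a model's internal activations differently from clean samples of the same class; 2-means over the activation vectors then isolates them in the smaller cluster. A self-contained sketch using synthetic activations and a tiny 2-means loop (all values illustrative):

```python
import numpy as np

def activation_cluster_poison_mask(acts, iters=20, seed=0):
    """Split activation vectors into two clusters with 2-means and flag
    the smaller cluster as suspected poison."""
    rng = np.random.default_rng(seed)
    centers = acts[rng.choice(len(acts), 2, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(acts[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in (0, 1):
            if (labels == k).any():
                centers[k] = acts[labels == k].mean(axis=0)
    small = 0 if (labels == 0).sum() < (labels == 1).sum() else 1
    return labels == small

# 90 clean activations near the origin, 10 poisoned ones far away.
rng = np.random.default_rng(1)
acts = np.vstack([rng.normal(0.0, 0.1, (90, 4)), rng.normal(5.0, 0.1, (10, 4))])
mask = activation_cluster_poison_mask(acts)
# mask flags the 10 outlying rows
```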
17. The non-transitory computer-readable medium of claim 15, wherein the one or more instructions, that cause the device to perform the adversarial example assessment of the machine learning model with the generated adversarial example attacks and to provide the defensive capabilities for the adversarial example attacks, cause the device to:
process the machine learning model, with an adversarial robustness model, to generate the adversarial example attacks and to determine a robustness of the machine learning model to the adversarial example attacks;
utilize input hardening and adversarial example detection as the defensive capabilities for the adversarial example attacks; and
add the adversarial example attacks into the training data.
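The last step, folding the generated attacks back into the training data, is the adversarial-training defense. A minimal sketch of the augmentation, where the attack generator is passed in as a function and all names are illustrative:

```python
import numpy as np

def augment_with_adversarial(X, y, make_adv):
    """Append adversarial counterparts of the training points, keeping
    their clean labels, so retraining sees both versions."""
    X_adv = make_adv(X, y)
    return np.vstack([X, X_adv]), np.concatenate([y, y])

X, y = np.zeros((3, 2)), np.array([0, 1, 0])
# Stand-in attack: a fixed perturbation instead of a real FGSM/PGD step.
X_aug, y_aug = augment_with_adversarial(X, y, lambda X, y: X + 0.1)
# X_aug has 6 rows; the 3 new rows reuse the clean labels
```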
18. The non-transitory computer-readable medium of claim 15, wherein the one or more instructions, that cause the device to perform the membership inference assessment of the machine learning model to generate the membership inference attacks and to provide the secure training data as the defense for the membership inference attacks, cause the device to:
generate one or more of a baseline attack, a fixed noise attack, an additive noise attack, or an output probability vector attack as the membership inference attacks.
19. The non-transitory computer-readable medium of claim 15, wherein the one or more instructions, that cause the device to perform the model extraction assessment of the machine learning model to identify the model extraction vulnerabilities and to provide the secure application programming interface as the defense to the model extraction vulnerabilities, cause the device to:
determine the model extraction vulnerabilities at different interaction levels and threat scenarios; and
provide one or more of access control, account management, data mining protection, information flow control, concurrent session control, penetration testing, or configuration hardening for the secure application programming interface.
20. The non-transitory computer-readable medium of claim 15, wherein the one or more instructions, that cause the device to perform the adversarial example assessment of the machine learning model with the generated adversarial example attacks and to provide the defensive capabilities for the adversarial example attacks, cause the device to:
generate a feature squeezed data point from a training data point;
process the feature squeezed data point, with the machine learning model, to generate a first prediction;
process the training data point, with the machine learning model, to generate a second prediction;
determine whether a difference between the first prediction and the second prediction exceeds a threshold; and
determine one of the adversarial example attacks based on the difference between the first prediction and the second prediction exceeding the threshold.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/652,822 US20230274003A1 (en) | 2022-02-28 | 2022-02-28 | Identifying and correcting vulnerabilities in machine learning models |
EP23154810.8A EP4235523A1 (en) | 2022-02-28 | 2023-02-03 | Identifying and correcting vulnerabilities in machine learning models |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/652,822 US20230274003A1 (en) | 2022-02-28 | 2022-02-28 | Identifying and correcting vulnerabilities in machine learning models |
Publications (1)
Publication Number | Publication Date |
---|---|
US20230274003A1 (en) | 2023-08-31 |
Family
ID=85174080
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/652,822 Pending US20230274003A1 (en) | 2022-02-28 | 2022-02-28 | Identifying and correcting vulnerabilities in machine learning models |
Country Status (2)
Country | Link |
---|---|
US (1) | US20230274003A1 (en) |
EP (1) | EP4235523A1 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210319098A1 (en) * | 2018-12-31 | 2021-10-14 | Intel Corporation | Securing systems employing artificial intelligence |
US20220129794A1 (en) * | 2020-10-27 | 2022-04-28 | Accenture Global Solutions Limited | Generation of counterfactual explanations using artificial intelligence and machine learning techniques |
US20230281281A1 (en) * | 2022-03-03 | 2023-09-07 | Oracle International Corporation | Cloud based machine learning model attack guard |
CN116881954A (en) * | 2023-09-07 | 2023-10-13 | 徐州医科大学 | Medical image data safety destroying method based on countermeasure generation coverage strategy |
US11899824B1 (en) * | 2023-08-09 | 2024-02-13 | Vive Concierge, Inc. | Systems and methods for the securing data while in transit between disparate systems and while at rest |
US11954199B1 (en) * | 2023-02-23 | 2024-04-09 | HiddenLayer, Inc. | Scanning and detecting threats in machine learning models |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11689566B2 (en) * | 2018-07-10 | 2023-06-27 | International Business Machines Corporation | Detecting and mitigating poison attacks using data provenance |
US11275841B2 (en) * | 2018-09-12 | 2022-03-15 | Adversa Ai Ltd | Combination of protection measures for artificial intelligence applications against artificial intelligence attacks |
- 2022-02-28: US US17/652,822 patent/US20230274003A1/en, status pending
- 2023-02-03: EP EP23154810.8A patent/EP4235523A1/en, status pending
Also Published As
Publication number | Publication date |
---|---|
EP4235523A1 (en) | 2023-08-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20230274003A1 (en) | Identifying and correcting vulnerabilities in machine learning models | |
Gibert et al. | The rise of machine learning for detection and classification of malware: Research developments, trends and challenges | |
Gao et al. | Backdoor attacks and countermeasures on deep learning: A comprehensive review | |
Ibitoye et al. | The Threat of Adversarial Attacks on Machine Learning in Network Security--A Survey | |
Chumachenko | Machine learning methods for malware detection and classification | |
Nahmias et al. | Deep feature transfer learning for trusted and automated malware signature generation in private cloud environments | |
Liu et al. | ATMPA: attacking machine learning-based malware visualization detection methods via adversarial examples | |
Rosenberg et al. | Query-efficient black-box attack against sequence-based malware classifiers | |
Garg et al. | A novel parallel classifier scheme for vulnerability detection in android | |
US11106801B1 (en) | Utilizing orchestration and augmented vulnerability triage for software security testing | |
Li et al. | Deep learning backdoors | |
Bala et al. | DroidEnemy: battling adversarial example attacks for Android malware detection | |
Huang et al. | Open source intelligence for malicious behavior discovery and interpretation | |
Tayyab et al. | A comprehensive review on deep learning algorithms: Security and privacy issues | |
Geetha et al. | A FKPCA-GWO WDBiLSTM classifier for intrusion detection system in cloud environments | |
Avci et al. | Analyzing the performance of long short‐term memory architectures for malware detection models | |
Asvija et al. | Bayesian attack graphs for platform virtualized infrastructures in clouds | |
US20220318387A1 (en) | Method and Computer for Learning Correspondence Between Malware and Execution Trace of the Malware | |
Wei et al. | Toward identifying APT malware through API system calls | |
Fascí et al. | Disarming visualization-based approaches in malware detection systems | |
Macas et al. | Adversarial examples: A survey of attacks and defenses in deep learning-enabled cybersecurity systems | |
Gulmez et al. | XRan: Explainable deep learning-based ransomware detection using dynamic analysis | |
Sallam et al. | Efficient implementation of image representation, visual geometry group with 19 layers and residual network with 152 layers for intrusion detection from UNSW‐NB15 dataset | |
Rozenberg et al. | A method for detecting unknown malicious executables | |
US20240160744A1 (en) | Identifying and assessing costs associated with correcting vulnerabilities in machine learning models |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: ACCENTURE GLOBAL SOLUTIONS LIMITED, IRELAND Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LIU, CHANGWEI;DIVALENTIN, LOUIS;LIBERMAN, NEIL HAYDEN;AND OTHERS;SIGNING DATES FROM 20220216 TO 20220226;REEL/FRAME:059124/0192 |
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
AS | Assignment |
Owner name: ACCENTURE GLOBAL SOLUTIONS LIMITED, IRELAND Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BEN SALEM, MALEK;REEL/FRAME:063065/0789 Effective date: 20230322 |