JPWO2021074770A5 - Google Patents
- Publication number
- JPWO2021074770A5 (application JP2022521116A)
- Authority
- JP
- Japan
- Prior art keywords
- machine learning
- learning models
- adversarial
- trained machine
- protection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000010801 machine learning Methods 0.000 claims 28
- 238000004590 computer program Methods 0.000 claims 2
- 230000001537 neural Effects 0.000 claims 2
- 238000007781 pre-processing Methods 0.000 claims 2
Claims (16)
1. A method for securing a trained machine learning model in a computing environment by one or more processors, the method comprising:
providing one or more hardened machine learning models secured against adversarial attacks by adding adversarial protection to one or more trained machine learning models.
2. The method of claim 1, further comprising:
receiving the one or more trained machine learning models; and
retraining the one or more trained machine learning models to include the adversarial protection based on one or more adversarial protection protocols.
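The retraining step recited above can be illustrated with a generic adversarial-training loop. The sketch below uses the well-known Fast Gradient Sign Method (FGSM) as a stand-in for the claimed "adversarial protection protocol" and a toy logistic-regression model; it is a minimal illustration of adversarial retraining in general, not the patented implementation, and all function names are hypothetical.

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """Fast Gradient Sign Method: shift each input in the sign
    direction of the loss gradient, producing adversarial examples
    (illustrative stand-in for a 'protection protocol')."""
    return x + eps * np.sign(grad)

def adversarial_retrain(w, X, y, eps=0.1, lr=0.5, epochs=50):
    """Retrain a logistic-regression weight vector on FGSM-perturbed
    inputs, yielding a 'hardened' model. Minimal sketch only."""
    for _ in range(epochs):
        # forward pass on clean data
        p = 1.0 / (1.0 + np.exp(-X @ w))
        # per-sample gradient of the loss w.r.t. the inputs drives the attack
        grad_x = np.outer(p - y, w)
        X_adv = fgsm_perturb(X, grad_x, eps)
        # gradient-descent step on the adversarial batch hardens the model
        p_adv = 1.0 / (1.0 + np.exp(-X_adv @ w))
        w = w - lr * X_adv.T @ (p_adv - y) / len(y)
    return w
```

In practice the same pattern applies to a received, already-trained deep model: generate adversarial examples against the current weights, then continue training on them.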
6. The method of any of claims 1-5, further comprising:
automatically implementing one or more adversarial protection protocols used to provide the one or more hardened machine learning models; or
receiving, from a user, one or more adversarial protection protocols used to provide the one or more hardened machine learning models.
7. The method of any of claims 1-6, further comprising:
monitoring and tracking states of the one or more trained machine learning models while being retrained;
detecting training collapse for the one or more trained machine learning models during the retraining; or
enabling one or more roll-back strategies for the one or more trained machine learning models during a retraining operation.
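The monitoring, collapse-detection, and roll-back elements recited above can be sketched as a simple checkpointing guard. This is an illustrative implementation of the general idea only, assuming a loss-blow-up definition of "training collapse"; the class name and threshold are hypothetical and do not come from the patent.

```python
import copy

class RetrainingGuard:
    """Tracks model states during adversarial retraining, detects
    training collapse (loss exploding past the best value seen),
    and supports rolling back to the last good checkpoint."""

    def __init__(self, collapse_factor=2.0):
        self.collapse_factor = collapse_factor  # illustrative threshold
        self.best_loss = float("inf")
        self.checkpoint = None
        self.history = []  # monitored/tracked states

    def observe(self, model_state, loss):
        """Record a retraining step; checkpoint improvements.
        Returns True when training collapse is detected."""
        self.history.append(loss)
        if loss < self.best_loss:
            self.best_loss = loss
            self.checkpoint = copy.deepcopy(model_state)
            return False
        # collapse: loss rose well past the best loss seen so far
        return loss > self.collapse_factor * self.best_loss

    def roll_back(self):
        """Roll-back strategy: restore the last good model state."""
        return copy.deepcopy(self.checkpoint)
```

On a detected collapse, a retraining loop would call `roll_back()` and resume from the restored state, possibly with a smaller learning rate or weaker perturbation budget.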
8. A system for securing a trained machine learning model in a computing environment, comprising:
one or more computers having executable instructions that, when executed, cause the system to:
provide one or more hardened machine learning models secured against adversarial attacks by adding adversarial protection to one or more trained machine learning models.
9. The system of claim 8, wherein the executable instructions further:
receive the one or more trained machine learning models; and
retrain the one or more trained machine learning models to include the adversarial protection based on one or more adversarial protection protocols.
13. The system of any of claims 8-12, wherein the executable instructions further:
automatically implement one or more adversarial protection protocols used to provide the one or more hardened machine learning models; or
receive, from a user, one or more adversarial protection protocols used to provide the one or more hardened machine learning models.
14. The system of any of claims 8-13, wherein the executable instructions further:
monitor and track each state of the one or more trained machine learning models while being retrained;
detect training collapse for the one or more trained machine learning models during the retraining; or
enable one or more roll-back strategies for the one or more trained machine learning models during a retraining operation.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/601,451 US11334671B2 (en) | 2019-10-14 | 2019-10-14 | Adding adversarial robustness to trained machine learning models |
US16/601,451 | 2019-10-14 | ||
PCT/IB2020/059559 WO2021074770A1 (en) | 2019-10-14 | 2020-10-12 | Adding adversarial robustness to trained machine learning models |
Publications (2)
Publication Number | Publication Date |
---|---|
JP2022552243A JP2022552243A (en) | 2022-12-15 |
JPWO2021074770A5 true JPWO2021074770A5 (en) | 2022-12-22 |
Family
ID=75383118
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP2022521116A Pending JP2022552243A (en) | 2019-10-14 | 2020-10-12 | Adding Adversarial Robustness to Trained Machine Learning Models |
Country Status (7)
Country | Link |
---|---|
US (1) | US11334671B2 (en) |
JP (1) | JP2022552243A (en) |
KR (1) | KR20220054812A (en) |
CN (1) | CN114503108A (en) |
AU (1) | AU2020368222B2 (en) |
GB (1) | GB2604791B (en) |
WO (1) | WO2021074770A1 (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
BR112021010468A2 (en) * | 2018-12-31 | 2021-08-24 | Intel Corporation | Security Systems That Employ Artificial Intelligence |
US11675896B2 (en) * | 2020-04-09 | 2023-06-13 | International Business Machines Corporation | Using multimodal model consistency to detect adversarial attacks |
US20220114259A1 (en) * | 2020-10-13 | 2022-04-14 | International Business Machines Corporation | Adversarial interpolation backdoor detection |
US11785024B2 (en) * | 2021-03-22 | 2023-10-10 | University Of South Florida | Deploying neural-trojan-resistant convolutional neural networks |
IL307781A (en) * | 2021-04-19 | 2023-12-01 | Deepkeep Ltd | Device, system, and method for protecting machine learning, artificial intelligence, and deep learning units |
EP4348508A1 (en) * | 2021-05-31 | 2024-04-10 | Microsoft Technology Licensing, LLC | Merging models on an edge server |
US20230134546A1 (en) * | 2021-10-29 | 2023-05-04 | Oracle International Corporation | Network threat analysis system |
CN114355936A (en) * | 2021-12-31 | 2022-04-15 | 深兰人工智能(深圳)有限公司 | Control method and device for intelligent agent, intelligent agent and computer readable storage medium |
CN114694222B (en) * | 2022-03-28 | 2023-08-18 | 马上消费金融股份有限公司 | Image processing method, device, computer equipment and storage medium |
GB2621838A (en) * | 2022-08-23 | 2024-02-28 | Mindgard Ltd | Method and system |
Family Cites Families (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150134966A1 (en) | 2013-11-10 | 2015-05-14 | Sypris Electronics, Llc | Authentication System |
US9619749B2 (en) | 2014-03-06 | 2017-04-11 | Progress, Inc. | Neural network and method of neural network training |
US20160321523A1 (en) | 2015-04-30 | 2016-11-03 | The Regents Of The University Of California | Using machine learning to filter monte carlo noise from images |
WO2017120336A2 (en) | 2016-01-05 | 2017-07-13 | Mobileye Vision Technologies Ltd. | Trained navigational system with imposed constraints |
US20180005136A1 (en) | 2016-07-01 | 2018-01-04 | Yi Gai | Machine learning in adversarial environments |
US11526601B2 (en) | 2017-07-12 | 2022-12-13 | The Regents Of The University Of California | Detection and prevention of adversarial deep learning |
CN107390949B (en) * | 2017-09-13 | 2020-08-07 | 广州视源电子科技股份有限公司 | Method and device for acquiring reference data of touch screen, storage medium and touch display system |
CN108304858B (en) | 2017-12-28 | 2022-01-04 | 中国银联股份有限公司 | Generation method, verification method and system of confrontation sample recognition model |
US11315012B2 (en) | 2018-01-12 | 2022-04-26 | Intel Corporation | Neural network training using generated random unit vector |
CN108099598A (en) | 2018-01-29 | 2018-06-01 | 三汽车起重机械有限公司 | Drive device for a crane and crane |
US11562244B2 (en) * | 2018-02-07 | 2023-01-24 | Royal Bank Of Canada | Robust pruned neural networks via adversarial training |
CN108322349B (en) | 2018-02-11 | 2021-04-06 | 浙江工业大学 | Deep learning adversity attack defense method based on adversity type generation network |
CN108537271B (en) | 2018-04-04 | 2021-02-05 | 重庆大学 | Method for defending against sample attack based on convolution denoising self-encoder |
CN108615048B (en) | 2018-04-04 | 2020-06-23 | 浙江工业大学 | Defense method for image classifier adversity attack based on disturbance evolution |
CA3043809A1 (en) * | 2018-05-17 | 2019-11-17 | Royal Bank Of Canada | System and method for machine learning architecture with adversarial attack defence |
US10861439B2 (en) * | 2018-10-22 | 2020-12-08 | Ca, Inc. | Machine learning model for identifying offensive, computer-generated natural-language text or speech |
US20200125928A1 (en) * | 2018-10-22 | 2020-04-23 | Ca, Inc. | Real-time supervised machine learning by models configured to classify offensiveness of computer-generated natural-language text |
US11526746B2 (en) * | 2018-11-20 | 2022-12-13 | Bank Of America Corporation | System and method for incremental learning through state-based real-time adaptations in neural networks |
US11481617B2 (en) * | 2019-01-22 | 2022-10-25 | Adobe Inc. | Generating trained neural networks with increased robustness against adversarial attacks |
EP3944159A1 (en) * | 2020-07-17 | 2022-01-26 | Tata Consultancy Services Limited | Method and system for defending universal adversarial attacks on time-series data |
2019
- 2019-10-14 US US16/601,451 patent/US11334671B2/en active Active
2020
- 2020-10-12 WO PCT/IB2020/059559 patent/WO2021074770A1/en active Application Filing
- 2020-10-12 AU AU2020368222A patent/AU2020368222B2/en active Active
- 2020-10-12 JP JP2022521116A patent/JP2022552243A/en active Pending
- 2020-10-12 CN CN202080070524.1A patent/CN114503108A/en active Pending
- 2020-10-12 GB GB2207000.7A patent/GB2604791B/en active Active
- 2020-10-12 KR KR1020227008142A patent/KR20220054812A/en not_active Application Discontinuation
Similar Documents
Publication | Publication Date | Title |
---|---|---|
GB2604791A (en) | Adding adversarial robustness to trained machine learning models | |
JP2020512639A5 (en) | ||
JP2010532055A5 (en) | ||
JPWO2021074770A5 (en) | ||
JP6824382B2 (en) | Training machine learning models for multiple machine learning tasks | |
JP2020149719A (en) | Batch normalization layers | |
JP2020535546A5 (en) | ||
JP2017513127A5 (en) | ||
JP2015506617A5 (en) | ||
WO2019241570A8 (en) | Quantum virtual machine for simulation of a quantum processing system | |
JP2018534651A5 (en) | ||
WO2008000499A3 (en) | Using multiple status models in a computer system | |
JP2013506199A5 (en) | ||
JP2018533138A5 (en) | ||
WO2008000497A3 (en) | Using status models in a computer system | |
JP2017509952A5 (en) | ||
JP2016502160A5 (en) | ||
WO2018112699A1 (en) | Artificial neural network reverse training device and method | |
WO2008000500A3 (en) | Using status models with preconditions in a computer system | |
WO2008000504A3 (en) | Using status models with status transitions in a computer system | |
JP2010170419A5 (en) | Behavior time ratio calculation device and behavior time ratio calculation method | |
JP2019185127A5 (en) | Neural network learning device and its control method | |
JP2019124582A5 (en) | Tactile information estimation device, tactile information estimation method, program, and non-transitory computer-readable medium | |
JP2016531335A5 (en) | ||
JP2012123782A5 (en) |